text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
Increasing the potential for malaria elimination by targeting zoophilic vectors
Countries in the Asia Pacific region aim to eliminate malaria by 2030. A cornerstone of malaria elimination is the effective management of Anopheles mosquito vectors. Current control tools such as insecticide treated nets or indoor residual sprays target mosquitoes in human dwellings. We find in a high transmission region in India, malaria vector populations show a high propensity to feed on livestock (cattle) and rest in outdoor structures such as cattle shelters. We also find evidence for a shift in vector species complex towards increased zoophilic behavior in recent years. Using a malaria transmission model we demonstrate that in such regions dominated by zoophilic vectors, existing vector control tactics will be insufficient to achieve elimination, even if maximized. However, by increasing mortality in the zoophilic cycle, the elimination threshold can be reached. Current national vector control policy in India restricts use of residual insecticide sprays to domestic dwellings. Our study suggests substantial benefits of extending the approach to treatment of cattle sheds, or deploying other tactics that target zoophilic behavior. Optimizing use of existing tools will be essential to achieving the ambitious 2030 elimination target.
S1 Model derivation
The model is based on a standard expression for R 0 2 2 2 0 2 ma bc R e (Anderson and May eq 14.11 adjusted per text page 400 to reflect low human mortality rate relative to latent period) m = total number of Plasmodium-susceptible mosquitoes per person. a = biting rate per mosquito per day. c = proportion of bites on infectious humans producing infection in mosquito. 2 = mosquito instantaneous background daily mortality rate. b = probability that a bite from an infectious mosquito on a human host will generate a human Plasmodium infection. γ = per day instantaneous recovery rate in human host. τ = period from acquisition of Plasmodium infection to infectiousness in mosquito / days.
Model Assumptions
The model considers as its baseline a vector population subject to lethal interventions applied via human dwellings and explores the effects of varying the probabilities of taking a human feed and the effects of adding cowshed-based interventions.
Individual vectors are assumed to bite both human and livestock hosts with a given probability per feeding cycle of selecting a human host.
Vectors are assumed to feed once per feeding cycle.
Vector mortality and host choice is assumed to be unaffected by vector age. Once infectious, vectors do not recover and become non-infectious. Juvenile density dependence effects mean that changes to the adult vector population size do not affect the number of newly-mature adults joining the population per day. The human population size is not changed as a result of the interventions being considered Parameter/variable definitions m 0 = total number of susceptible mosquitoes per person with human-related intervention in place. m z = total number of susceptible mosquitoes per person which will choose to feed on a human host Z = proportion of blood meals taken on humans. a = biting rate per mosquito per day. c = proportion of bites on infectious humans producing infection in mosquito. 2 = mosquito instantaneous background daily mortality rate in absence of interventions. b = probability that a bite from an infectious mosquito on a human host will generate a human Plasmodium infection. γ = per day instantaneous recovery rate in human host. τ = time in days from acquisition of Plasmodium infection to infectiousness in vector. Δ H = increase in the average instantaneous daily mortality rate during a feeding cycle for mosquitoes attempting to feed on a human, arising from human-related intervention, as a proportion of the rate in the absence of any intervention. Δ L = increase in the average instantaneous daily mortality rate during a feeding cycle for mosquitoes attempting to feed on a non-human host, arising from cowshed-related intervention, as a proportion of the rate in the absence of any intervention.
Given an adult vector population size in the presence of a given human-feeding related intervention, m 0 , the introduction of a new source of mortality applicable to livestock-feeding vectors will change the adult population size. With an assumed constant rate of recruitment to the adult population, the average age structured survival probabilities of an individual vector also represent the age structure of the adult population.
The average per day mortality for the vector population before introducing the livestock-related intervention is 2 | 1,029 | 2017-01-16T00:00:00.000 | [
"Biology"
] |
Symbol Alphabets from Tensor Diagrams
We propose to use tensor diagrams and the Fomin-Pylyavskyy conjectures to explore the connection between symbol alphabets of $n$-particle amplitudes in planar $\mathcal{N}=4$ Yang-Mills theory and certain polytopes associated to the Grassmannian G(4, $n$). We show how to assign a web (a planar tensor diagram) to each facet of these polytopes. Webs with no inner loops are associated to cluster variables (rational symbol letters). For webs with a single inner loop we propose and explicitly evaluate an associated web series that contains information about algebraic symbol letters. In this manner we reproduce the results of previous analyses of $n \le 8$, and find that the polytope $\mathcal{C}^\dagger(4,9)$ encodes all rational letters, and all square roots of the algebraic letters, of known nine-particle amplitudes.
Introduction
The computational complexity of existing algorithms for answering this question is reviewed in Fig. 2 and Sec. A. 3. In this section we very briefly review key features of these polytopes and the relations between the structures associated to their facets, enabling us to outline our new algorithm in Fig. 2. Figure 1: The polytope C (3,5), combinatorially equivalent to the exchange graph [3] of the G(3, 5) (or A 2 ) cluster algebra.
In order to illustrate some key terminology we begin by considering the 2-dimensional polytope called C (3,5) in [7]. It is equivalent to the associahedron [32,33] K 4 as constructed in [34] and can be realized in R 2 with coordinates (X 1 , X 2 ) by the inequalities where the c's are positive constants (see Fig. 1).
There are three natural structures we can use to label each facet F of this polytope: 1. the function in (2.1) that vanishes on F (and is positive inside the polytope), 2. the generator of the ray normal to F (we always choose the generator to be the first integer point along the outward pointing normal ray), 3. or the G(3, 5) cluster variable whose g-vector [35] is normal to F.
The correspondence between these structures for the five facets of C (3,5) is shown in Tab. 1.
Note that by convention we always fix the overall normalization of each kinematic function so that the coefficients of the X's match the components of (the negative of) the corresponding generator.
Cluster Series
The G(k, n) cluster algebra [3,36] is finite if and only if n+1 > d = (k−1)(n−k−1). For polytopes associated to these algebras, it has been found (by explicit computation in all cases studied so far) that the generator of each normal ray is a g-vector of the cluster algebra. It is this fact that allowed us to fill in the third column in Tab. 1. In contrast, [7][8][9] studied polytopes associated to the infinite algebra G (4,8) having a facet normal to (−1, 1, 0, 1, 0, −1, 0, −1, 1) , (2.2) which is known to not be a g-vector of G(4, 8) (moreover, it is known to not even be inside the cluster fan [22]). For this reason we call (2.2) an exceptional generator. All exceptional generators of the polytopes studied in [7][8][9] are related to (2.2) by an element of the cluster modular group 5 . 5 In fact, (2.2) may be essentially unique in that all rays in R 9 are either inside the G (4,8) cluster fan or related to the one generated by (2.2) by an element of the cluster modular group [37]. We thank C. Kalousios for providing some data that supports this hypothesis.
Since it is not possible to assign a cluster variable to exceptional facets, [7] suggested instead to assign to each ray R + y the (formal) cluster series (called a cluster algebraic function in [7]) defined by where B(y) is a cluster algebra basis element associated to the lattice point y ∈ Z d . In [7] it was conjectured that the cluster series associated to the ray generated by (2.2) takes the form 1 1 − A t + B t 2 (2.4) in the canonical basis [38], where This conjecture was checked through O(t 3 ) by explicit computation using the character formula of [22]. In general f g (t) may depend on the choice of basis for the cluster algebra, but we expect that certain important properties of f g (t) are the same in any suitably reasonable basis. In particular, we expect that it is a rational function of t and that the locations of its poles (in t) are basis-independent and located on the positive t axis when the series is evaluated at any point in the positive Grassmannian G >0 (k, n).
In Sec. 5 of this paper we introduce a closely related web series. We conjecture that a web series exists for every ray, but we have not found the specific form of the series in general. However, for certain rays (those corresponding to almost arborizable webs; see Sec. 5) we prove, to all orders in t, that the web series takes the form with A, B depending on the ray. In particular we prove that the web series associated to the ray generated by (2.2) takes the form (2.6) with A, B given by (2.5). Evidently our web series use a different basis than the one that gives the series (2.4), but is consistent with the abovementioned expectations (since it has the same poles, at t = (A ± √ A 2 − 4B)/(2B)). We also prove that the web series has the form (2.6), and evaluate the corresponding A's and B's, for 324 normal rays of the polytope C † (4,9) that might be relevant to the symbol alphabet of 9-particle scattering amplitudes in SYM theory.
Finally let us note that if g is a g-vector, then B(g) is the associated cluster variable and B(mg) = B(g) m for any choice of basis, so the cluster series is basis-independent and geometric: Figure 2: An outline of how our work fits into the literature. Recent studies have uncovered an apparent connection between data associated to the facets (with normal ray y) of certain G(4, n)-polytopes (indicated on the second column) and the symbol alphabet of n-particle amplitudes in SYM theory (left column). One route from the former to the latter [7] (the dotted line) uses the character formula of [22] and requires the series (2.3) to be computed term by term, with the m-th term having computational complexity O((mw)!), where w is an integer that depends on the facet. In this paper we employ web invariants and the Fomin-Pylyavskyy conjectures to provide an alternate route, indicated by the arrow labeled X −1 . Here X is a map from web invariants to kinematic functions that we define in Sec. 4. Its inverse X −1 can be computed in practice by scanning over a manifestly finite set of candidate preimages W for any given F y . Moreover, we provide a recursive (in m) all-order proof that for certain web invariants (those having a single closed loop), there is a web series that sums exactly to (2.6), with A, B depending on the facet. This proof applies to the exceptional G (4,8) generators encountered in [7][8][9], and here we show that it also applies to the generators of 324 facets of the C † (4,9) polytope, for which we compute the exact web series. Other alternate routes have been studied in [8,10].
In such cases the information content of knowing f g (t) is the same as that of knowing the cluster variable B(g). This provides a sense in which it is reasonable-for infinite algebras-to generalize the third column "cluster variable" of Tab. 1 to "cluster series" as [7] did, or to "web series", as we shall do.
Symbol Letters
Finally let us briefly review the observed connection between the symbol alphabet of nparticle amplitudes in SYM theory and G(4, n)-polytopes. All known symbol letters (see Sec. B for more details) fall into two classes: rational letters are cluster variables of G(4, n), and algebraic letters have the form 6 a− where a, b are polynomials in Plücker coordinates. Tab. 2 summarizes the number of each type of letter that is known to appear in various n-particle amplitudes, as well as the number of distinct square roots (each √ b appears in several algebraic letters, i.e. paired with various different a's).
Tab. 2 also summarizes data about the facets of the polytopes called C † (4, n) in [12]. There (and also in [8,9]) they were constructed and studied for n ≤ 8, and found to contain information about the n-particle symbol alphabet in the following sense. First, the cluster variables associated to the g-vector facets (second to last line) include all known rational symbol letters (top line). For n = 8 it was further observed that the square roots appearing in the poles of the (conjectured) series (2.4) associated to the two exceptional facets (bottom line), given by √ A 2 − 4B in terms of (2.5) and its image under a Z 8 cyclic shift, agree precisely with the two distinct square roots known to appear in algebraic symbol letters of 8-particle amplitudes (third line).
In this paper we extend this analysis to n = 9 using the algorithm summarized in Fig. (2). We find that the polytope C † (4, 9) has 3429 facets; 3078 are normal to g-vectors of G (4,9) while the other 351 are exceptional. The 3078 cluster variables associated to the former include the 522 (non-frozen) rational letters shown in Tab. 2. For 324 of the latter we prove that the web series has the form (2.6); the 324 distinct square roots of the form √ A 2 − 4B obtained in this way include the 9 counted in the third line of the table. The remaining 27 exceptional facets remain more mysterious. Although we expect that web series exist for them as well, we have not been able to find their explicit form.
As a further application of our technology we also define and study the polytopes C † (3, n). Interestingly we find that they do not have any exceptional facets for n ≤ 10, which is as far as we have computed (see Sec. 6).
Review of Tensor Diagrams and the Fomin-Pylyavskyy Conjectures
In this section we review some basic facts about tensor diagrams, which provide a graphical way to encode data about the cluster structure of the Grassmannian. The connection between tensor diagrams and G(k, n) cluster algebras was first studied by Fomin and Pylyavskyy in [23]. The main elements of this connection are given by the Fomin-Pylyavskyy (FP) conjectures, which have been partly proved in [37]. Our work relies on the FP conjectures in a manner discussed in Sec. 6.
An sl k tensor diagram is a finite graph drawn inside a disk with n marked points (labeled 1, . . . , n clockwise around its boundary) satisfying the requirements: 1. all boundary vertices are colored black, and may have arbitrary valence, 2. each internal vertex may be either black or white, but must have valence k, 3. and each edge of the graph must connect a black vertex to a white vertex.
A planar tensor diagram is called a web, and a tensor diagram with no closed loops (of internal vertices) is called a tree. If we glue all of the vertices and edges of two or more webs into the same disk, we get a combination of webs.
To each diagram D we can associate a tensor invariant [D] constructed as follows. First, we associate to each boundary vertex i a k-component vector Z a i . (For k = 4 these are the familiar momentum twistor variables that encode massless n-particle kinematic data.) Then to each white vertex we associate ǫ a 1 ···a k , to each internal black vertex we associate ǫ a 1 ···a k , and we contract all indices as indicated by the edges of the graph. The resulting invariant is always a homogeneous polynomial in the Plücker coordinates on G(k, n). This definition suffices for our purposes, but it is not precise because when k is even it leaves the overall sign of [D] undetermined thanks to ǫ a 2 ···a k a 1 = −ǫ a 1 a 2 ···a k . For a proper definition of tensor invariants, including a detailed discussion of how to fix this sign, we refer the reader to [31,39]. In practice we will determine the "correct" overall sign for any invariant by requiring that it evaluates to a positive number when the k × n matrix Z a i is an element of the positive Grassmannian G >0 (k, n). If W is a web satisfying certain additional conditions 7 we call [W ] a web invariant. If W is a combination of two or more webs W 1 , W 2 , . . . then [W ] is the product of the web 7 For k = 3 W must be non-elliptic, which means every pair of vertices is connected by at most one edge and each face formed by interior vertices has at least six sides [23]. For k = 4 W can have at most double edges and must have no 2-cycles [37].
is not a product of two or more web invariants then we say that [W ] is indecomposable.
As their name suggests, tensor invariants are invariant under certain graphical moves known as skein relations (see Fig. 5) [23,[39][40][41]. One important application of these relations is that they can sometimes be used to convert a web W with closed loops into a tree diagram D that is equivalent in the sense that [D] = [W ]; if this is possible then the web W is called arborizable (note that D may or may not be a web, i.e. it may be non-planar).
The Fomin-Pylyavskyy conjectures [23] comprise several interesting connections between tensor invariants and cluster variables. These have been proven up to G (3,9) and G (4,8) in [37]. For our purposes the key conjecture is: the set of cluster (and frozen) variables coincides with the set of indecomposable arborizable web invariants. Henceforth we only consider indecomposable diagrams.
In order to better familiarize the reader with tensor diagrams let us now introduce some tricks for quickly reading off the invariants associated to certain diagrams.
sl 2 Tensor Diagrams
This case is rather trivial in an instructive way. The only structures an sl 2 tensor diagram can have are strands that begin and end on boundary vertices, passing along the way through an odd number of internal vertices alternating between white and black. All internal vertices except for a single white vertex on each strand can be removed by a skein relation (ǫ ab ǫ bc = δ a c ). The web invariants have the form [W ij ] = i j , corresponding to the web W ij with a single strand connecting boundary vertices 1 ≤ i < j ≤ n.
sl 3 Tensor Diagrams
The invariant for any sl 3 tree diagram D can be read out in a simple way [23]. First we choose any internal vertex of D to be the central vertex v and assign a direction to each edge in such a way that it points from the boundary of the diagram towards v; this assignment is unambiguous if D is a tree. Then at each internal vertex except v, there must be two inward pointing edges and one outward pointing edge. Having already assigned a vector Z i to each boundary vertex i, we now assign to each internal vertex v ′ = v the cross-product of the two vectors or covectors associated to the two inward edges at v ′ ; this is a vector if v ′ is black and a covector if v ′ is white. Then the invariant of D is equal to the determinant of the three (co)vectors assigned to the three incoming edges at the central vertex v.
Some sl 3 webs and their corresponding invariants are shown in Fig. 3. For example, let D be the diagram shown in Fig. 3 This shortcut for computing a tensor invariant [D] can also be used if D is arborizable. For example, by choosing the white vertex adjacent to 3 as the center, the invariant associated to the diagram shown in Fig. 4 evaluates to (3.2)
sl 4 Tensor Diagrams
Again we emphasize that we have only explained how to compute tensor invariants mod sign when k is even. In order to formulate the skein relations for sl 4 tensor diagrams, it would be necessary to be careful about the detailed sign convention explained in [39,41]. In Fig. 5 we show the equivalence relations (mod sign) that we require for the calculations in this paper. Freed from having to worry about the sign, we can compute invariants for sl 4 trees in a similar manner to those of sl 3 . If two vertices are connected by a pair of edges then we call the pair a double edge. After choosing a central vertex v and assigning to each edge a direction pointing towards v, every other internal vertex v ′ = v has either a double or single outgoing edge, and the other two or three edges are incoming. Extending the k = 3 analysis in the obvious way, we now assign to v ′ the 1-or 2-index co-or contravariant tensor constructed by contracting ǫ abcd (if v ′ is white) or ǫ abcd (if v ′ is black) with the tensors associated to the incoming edges (multiplied by 1 2 if there is a double edge). Finally, at the central vertex v all Contracting indices at the central vertex computes the diagram's invariant, which can be expressed as 45(123) ∩ (678) using the notation
Non-Arborizable Web Invariants
According to the Fomin-Pylyavskyy conjectures, every cluster monomial (a product of compatible cluster variables) in G(k, n) is an n-point sl k web invariant. However, the converse is not true because of the existence of non-arborizable webs. Their invariants are multiplicatively independent of cluster variables and so indicate that bases for cluster algebras must (in general) have elements beyond cluster monomials (see Sec. III.A of [7] for an explicit example discussed in the physics literature). The simplest non-arborizable sl 3 webs appear at n = 9 and the simplest sl 4 webs appear at n = 8. These include for example [23, Figure 31] and [22, (8.2)], shown in Fig. 6.
From Tensor Diagrams to Kinematic Functions
In this section we study a map X that associates a kinematic function F = X([D]) to certain tensor invariants [D]. A key property we want the map to have is that if [D] is a cluster variable, then X([D]) should be the kinematic function naturally associated to [D] (in the same sense of association as between the first and third columns of Tab. 1).
More specifically, and more generally, X is defined as follows: if [D] is a tensor invariant whose g-vector (defined as reviewed in Sec. A.3) is y ∈ Z d , and if y is the first integer point along the ray R + g (in which case we say that y and [D] are primitive), then X([D]) is the kinematic function F y computed according to (A.9). These steps trace counterclockwise around Fig. 2, when applied to a diagram that is not necessarily a web. In the rest of this section we present conjectural formulas that compute X([D]) for k ≤ 4 directly, as opposed to tracing around the figure. We have confirmed that our formulas obey the defining property for all kinematic functions and web invariants that we have encountered in this work (summarized in Tab. 3), and conjecture them to be valid in general. In Sec. 5 we discuss web series, which extend this discussion to arbitrary integer points along R + y.
It is sufficient to focus our attention on indecomposable invariants; more generally we have In order to make our results easily accessible to users of two different sets of conventions we study two distinct versions of the map: X L and X R . The former is attached to the conventions reviewed in Secs. A.1-A.4 (and used in the example of Sec. 2.1), while the latter is attached to the "Langlands dual" convention where all arrows in Fig. 7 are reversed (see Sec. A.5).
For k = 2 it is possible to write down an explicit general formula for the X-map, exploiting the fact that every indecomposable sl 2 tensor invariant has the form i j for i < j (see Sec. 3.1). The associated kinematic function, for the two choices of convention, is simply (4.1)
sl 3 Arborizable Invariants
For k > 2 we conjecture a recursive formula for the X-map, first for tree diagrams. In order to apply the recursion, [D] must be written in the form obtained by reading it off from some diagram D as described in Sec. 3.2. If we are handed [D] in some random form (as a polynomial in Plücker coordinates), we would first need to draw some tree diagram D whose invariant is [D]. Next, D must be put into canonical order, which means that all lines and vertices are placed so as to minimize the number of crossed lines but without producing or annihilating any lines or vertices. Specifically, all crossing structures of the type shown in the fourth and fifth lines of the sl 3 skein relations (Fig. 5, left panel) should be cleared, but without using the moves shown on the first and second lines. The recursion is seeded by the simplest possible sl 3 tensor diagrams: those with only a single internal vertex (see for example the left panel of Fig. 3), for which we have using notation reviewed in Sec. C. For more general diagrams, our definition of the X-map is motivated by the G(3, 6) "bipyramid relation" of [11] (see for example (6.10) of [15] for more specificity) which in our notation reads Working for a moment with the "left" conventions, we can use (4.2) to rewrite all but the first term as X L images of cluster variables: We don't yet know what to do with the first term, but one can calculate that the g-vector of S (12)34 is the same as that of the cluster variable 5 × 6, 1 × 2, 3 × 4 , which motivates us to define Our recursive definition of X L (and similar for X R ) is motivated by the desire to extend (4.5) to more general cases without having to rely on computing g-vectors at intermediate stages.
Therefore we now consider more complicated tensor diagrams whose invariants involve cross-products of the form a × b, c × d, e × f . Here a, b, . . . , f are either all vectors or all covectors (the expression makes no sense otherwise), and each could itself be a string of nested cross-products such as a = (1 × 2) × (3 × 4). The generalization of (4.5) is for vectors and for covectors.
The formulas (4.2), (4.6) and (4.7) provide a recursive formula for X L and X R for all sl 3 tree diagrams, but it may not immediately be clear how to apply the recursion to a diagram like the one in the right panel of Fig. 3 because its invariant has no manifest cross-product in the middle entry. However, it is always possible to rewrite an invariant in an equivalent way that exposes a cross-product in each entry; in this case via the identity to which one can now apply (4.7). In this example one has to perform further rearrangements at the next step in the recursion; the important point is that it is always possible to do so. As emphasized above, although we have written the above recursion in a seemingly general way in terms of nested brackets, it will give inconsistent results unless [D] is expressed as the invariant read off from a canonically ordered diagram. One obvious manifestation of the inconsistency for improper ordering is the fact that we clearly have but (4.6) and (4.7) are not invariant under c ↔ d, e ↔ f ; the recursion must be applied only to the right-hand side of (4.9).
sl 3 Non-Arborizable Invariants
Now we propose recursion relations for computing X([D]) when [D] is a non-arborizable invariant. The basic step is to "unroll" each internal loop by appropriately cutting one of its edges. Consider first the case when [D] is the invariant of a diagram having a single internal loop. Then, for any choice of edge on the internal loop (shown in red) we define where we have removed the red edge of the loop and replaced it with the new red edges shown, the blue dashed lines denote an arbitrary (even) number of additional vertices on the inner loop, and m is an arbitrary reference point. One can check that all terms involving the reference point m cancel out when the recursion is applied and the right-hand sides are expanded out. We emphasize again that (4.10) is only valid when the diagram is drawn in canonical order, with minimal number of crossings.
To see an example of the recursion at work, let us compute the kinematic functions associated to Fig. 6(a). Applying (4.10) and then (4.6), (4.7) we have and similarly (4.12) A diagram with ℓ > 1 inner loops can be treated similarly, by recursively unrolling each loop via introduction of a new reference point. All ℓ reference points will disappear in the final result.
Of course, some diagrams with internal loops are equivalent to trees by the skein relations and it is important to check that the recursion relations we have given respect this equivalence. To see this we must look at the two types of arborization processes. Applying X L to the left-hand side and using (4.10) gives (4.14) The first term on the right has the form X L ( m × a, a × b, c × · · · ) where "· · · " stands for everything along the dotted blue semicircle and to its left in the figure. Then using (4.6) we have but this is the same as X L applied to the right-hand side of (4.13), as required.
Next consider applying which by (4.10) gives which again is the required answer: X L applied to the right-hand side of (4.16). We omit the proof for X R , which is essentially the same, and instead illustrate with the example shown in Fig On the other hand, the right diagram in (4.18) is a tree; according to (4.7) its image under the X R map is ( 1, 4, 6 ) , (4.20) in agreement with (4.19).
sl 4 Arborizable Invariants
For k = 4 the recursion for calculating the X-map is seeded by (4.21) using notation reviewed in Sec. C. There are two basic types of structures that can appear in place of simple vectors a, b, c, d in (4.21) when we look at more general tree invariants. We must describe separately how to recursively handle each type of structure.
The first type of structure that can appear in place of a vector is a tensor product involving 5 points: where all Greek subscripts and superscripts run from 1 to 4 and ǫ is the antisymmetric Levi-Civita symbol with ǫ 1234 = ǫ 1234 = 1; recall (3.4). For invariants involving one of these we have X L ( a, b, c, (d, e) ∩ (f, g, h) ) = X L ( a, b, c, e ) + X L ( d, e, g, h ) (4.23) We could also have a structure like (4.22) but with a lower µ index-as would be the case for example if all of a, . . . , e were not individual vectors but triple-products like a µ = ǫ µνρσ a ν 1 a ρ 2 a σ 3 for some a i . (Note that a µ represents the plane in P 3 containing the three a i .). In this case we would have (4.24) The second type of structure that can appear in tree invariants is a tensor product involving 9 points: which can appear in combinations of the form (again recall (3.4)) For this kind of invariant we have (4.27) Alternatively, in the contravariant case (that is, when each of the nine entries in (4.25) represents a plane) we have (4.28) All of the above relations are valid only when each invariant is read off from a tree diagram drawn in canonical order, as in the previous subsection.
sl 4 Non-Arborizable Invariants
Here we consider only inner loops which have no double edges. We have found this to be sufficient to analyze the C(4, 8) and C † (4, 9) polytopes (see Sec. 6); it would be interesting to formulate a recursive rule for more general diagrams. We find that single-edged inner loops of sl 4 diagrams can be unrolled with the rule Here we need two reference points m 1 , m 2 for each unrolled loop. Actually, since m 2 is associated to a white vertex, it is better thought of as a reference plane, i.e. a triple of reference points. All dependence on these reference points drops out of any invariant. Of course, as always, the recursion rule can only be applied to canonically ordered diagrams.
By way of example let us apply X L to the non-arborizable G(4, 8) web shown in Fig. 6(b): Like we discussed for the sl 3 case, it is of course important that the recursive relations we have given for the k = 4 X-map respect the skein equivalences shown in Fig. 5, in particular applied to the arborization of an inner loop. The proof is similar to the k = 3 case but there are more cases of the equivalence to check. We omit the details here and instead consider the illustrative example which agrees as required with the X L image of the web on the right computed from (4.23).
General Relations Between X L and X R
Here we point out a few general relations between X L and X R that can be derived from the above definitions. First, for k = 3, it follows from (4.6) and (4.7) that Apart from a trivial overall factor of the frozen variable b+1, b+2, b+3 we can effectively regard the above transformation as b × (b+1) → b+2 . Therefore we can phrase the inverse of (4.33) and (4.34) as where W ♭ 3 is obtained from W by taking a → (a−2) × (a−1) .
(4.40)
Another relation between X L and X R involves reflection. For an arbitrary web invariant W , either arborizable or non-arborizable, we have Analogous relations also exist for k = 4. First we have where W ♯ 4 and W ♭ 4 are defined respectively by a → (a+1, a+2, a+3)
Kinematic Length
It is intuitively clear that "more complicated" webs have "more complicated" invariants, and are assigned by the X-map to "more complicated" kinematic functions. In this section we formalize this notion of complexity in a way that will play an important role in Sec. 6. To that end we first define two bases of the generalized kinematic space K k,n (different from the ABHY basis constructed in Sec. A.2). The left kinematic basis is the set {X L (p) : p is a non-frozen Plücker coordinate}, and the right kinematic basis is defined analogously using X R . Note that each set contains n k − k elements, the same as the dimension of K k,n , and one can check that they are linearly independent in K k,n , so each is indeed a basis.
We define the left (right) kinematic length of a kinematic function F as the sum of the coefficients of F when expressed in the left (right) kinematic basis. Next, we define the cluster length of a web invariant [W ] to be equal to 1/k times the number of external legs of the associated web; this is the same as the degree of [W ] when expressed as a polynomial in G(k, n) Plücker coordinates. Now the notion that more complicated invariants are associated to more complicated kinematic functions is formalized in the statement that the left (or right) kinematic length of X L ([W ]) (or X R ([W ])) is equal to the cluster length of [W ]. It is easy to verify that this statement is true by recursion. If p is a non-frozen Plücker coordinate then by definition its cluster length is 1 and the left (or right) kinematic length of X L (p) (or X R (p)) is 1. For more complicated invariants, we note that in all of the recursive definitions (4.6), (4.7), (4.24), (4.27) and (4.28), each side is linear in both kinematic and cluster length; this establishes the equality.
Web Series
A web series W is a formal power series of webs W 1 , W 2 , . . .
to which we associate the invariant We are interested in web series whose invariants are cluster series of the type reviewed in Sec. 2.2. Once a cluster algebra basis is specified (for example, the one provided by the character formula of [22]), then each such series is (in principle) completely determined by its first nontrivial term W 1 . Therefore, we are interested to study natural ways to associate an entire web series W(W ) to a single web W , with W 1 = W and with the higher-order terms W 2 , W 3 , . . . being determined from W in some manner. One simple way to do this is via the "web thickening" procedure of [23,Definition 10.8]. If W is any web and we take W m to be the combination of m copies of W , then [W m ] = [W ] m and the web series invariant is geometric: However if W is a non-arborizable web we seek a different definition of the web series W(W ) because we want its invariant to not be geometric, but rather to evaluate to more complicated rational functions such as (2.4) or (2.6). In Sec. 5.3 we provide such a definition for a class of indecomposable webs that we call almost arborizable-these are non-arborizable webs that can be converted, via skein relations, to tensor diagrams (possibly non-planar) with a single closed inner loop. (For k > 3 we further require every edge of that closed inner loop to be a single line.) We leave for future work the study of web series associated to more complicated non-arborizable webs. In order to connect to the notation used in Sec. 2.2 let us define A(W ) = [W ], the usual web invariant, and now take a short diversion to define a new type of invariant B(W ) that we can associate to certain almost arborizable webs.
sl 3 Almost Arborizable Webs
Let W be an almost arborizable sl 3 web and let D be the equivalent tensor diagram with exactly one inner loop. We first define B 1 (W ) and B 2 (W ) as follows: • (a) Starting with D, delete all the edges on the loop that go in clockwise order from a white vertex to a black vertex.
• (b) Now all vertices originally on the loop are divalent. Delete those vertices, fusing the two edges at each vertex into a single edge.
• (c) Now we have a new tensor diagram where all inner vertices are trivalent, and all edges connect a black vertex with a white vertex. Define B 1 (W ) to be the invariant of this diagram.
• (d) Repeat steps (a)-(c) but delete edges that go from white vertices to black vertices in counter-clockwise order. Define B 2 (W ) to be the invariant of this diagram.
Then define B(W ) = B 1 (W )B 2 (W ). As an example let W (1) be our old friend, Fig. 6(a), whose invariant is To see a less trivial example, we move on to n = 10 and consider the web W (2) given by
sl 4 Almost Arborizable Webs
Let W be an almost arborizable sl 4 web and let D be the equivalent tensor diagram with exactly one inner loop. As mentioned above, require that all edges on the inner loop of D are single lines; this is sufficient to cover all webs we encounter in Sec. 6. In such cases we define B 1 (W ) and B 2 (W ) as follows: • (a) Starting with D, delete all the edges on the loop that go in clockwise order from a white vertex to a black vertex. • (c) Then use the skein relations to cancel each 2-cycle, which removes the factors of 1/2 introduced in the previous step.
• (d) At this stage we have a valid sl 4 tensor diagram (all inner vertices are quadrivalent, and all edges connect a black vertex to a white vertex). Define B 1 to be the invariant of this diagram.
• (e) Repeat steps (a)-(d) but delete edges that go from white vertices to black vertices in counter-clockwise order. Define B 2 (W ) to be the invariant obtained in this way.
A Web Series for Almost Arborizable Webs
We now define a web series W(W ) associated to every almost-arborizable web W via a slight modification of the thickening procedure of [23]. Let W be a given almost-arborizable web and let D be a tensor diagram with a single inner loop such that [D] = [W ]. To define the O(t 2 ) term W (2) in the web series we first draw a combination of two copies of D and then connect the two inner loops by twisting any pair of edges. For example, if the inner loop is a hexagon then we take =⇒ (5.17) where we suppress the rest of the diagram, showing only the internal loop. The generalization is clear: W (m) is defined by combining m copies of W , cutting one identical edge on each of the m inner loops, and gluing them back together after a (cyclic) "shift-by-one" permutation. This was called the bracelet operation in [42], where web series and invariants constructed in this way were studied for sl 3 .
Using skein relations and the definitions given in the previous two subsections, it is easy to see that where A(W ) = [W ] is the web invariant we start with and B(W ) = B 1 (W )B 2 (W ). Thanks to (5.18), the invariant of the web series W(W ) can be written in the form of (2.6): To summarize: we have shown that there is a natural web series W(W ) one can associate to any almost arborizable web W , and that the invariant of this series evaluates to (5.19) in terms of the usual web invariant A(W ) = [W ] and a second quantity B(W ) that admits a simple diagrammatic definition. (The definition of the web series can be considered to include arborizable webs as a special case for which B(W ) = 0.) We conjecture that for any almost arborizable W , A(W ) and B(W ) agree with the quantities A and B appearing in the series (2.4) associated (via the character formula of [22]) to the ray R + y, where y is the g-vector of [W ]. It is furthermore natural to speculate that more complicated webs (that are not almost arborizable) are associated to series with higher-order polynomials in their denominators. We leave such questions to future work.
Results and Discussion
Now that all the pieces are finally in place we detail the application of our algorithm to the polytopes C(3, n ≤ 10), C † (3, n ≤ 10), C(4, n ≤ 8) and C † (4, n ≤ 9), the definitions of which are reviewed in Sec. A.4. All of these have been constructed or analyzed in the literature, to varying degrees, and by various methods, although only the k = 4 polytopes are of direct relevance to SYM theory. In particular, the cluster variables associated to C(4, 8) and C † (4,8) were determined in [7][8][9] and the cluster series associated to them were determined in [7]. These prior results serve as checks on the correctness of our methods; however our results for C(3, 10), C † (3, 10) and C † (4, 9) are genuinely new 8 .
For each polytope P on the above list, our algorithm (summarized in Fig. 2) proceeds as follows. Let i = 1, 2, . . . index the facets of P, with y i being the generator of the ray normal to facet i 9 and F y i being the associated kinematic function. Our first goal is to assign a web W i to each facet such that X R (W i ) = F y i 10 . A priori it is not guaranteed that it always possible to find such a W i . In practice, searching for W i is feasible since for any given F y i , we only need to scan over a manifestly finite set of sufficiently simple candidate webs-specifically, those whose length (defined in Sec. 4.6) is at most that of F y i . Actually we can exploit the general relations derived in Sec. 4.5 for considerable simplification: for each facet i we only need to scan up to the length set by the shortest image of i under the D n dihedral group. In this manner we have found webs associated to all facets of C(3, 10), C † (3, 10) and C † (4, 9) by scanning webs of length up to 7, 5, and 6, respectively.
The webs W i we encounter fall into three types: 1. If W i is arborizable, then (according to the FP conjectures) y i is a g-vector of the G(k, n) cluster algebra and [W i ] is a cluster variable that we associate to facet i. (Equivalently, we associate to facet i the cluster series 1/(1 − t[W i ])).
2. If W i is almost arborizable, then we can compute A(W i ) and B(W i ) as described in Sec. 5 and the cluster series associated to facet i is 3. In other cases we don't yet know how to associate a web series to W i , although we conjecture that there exists a natural way to do so; the cluster series associated to these facets may have polynomials of degree higher than 2 in their denominators.
We summarize the number of facets of each type for various polytopes in Tab. 3. We also include ancillary files that list, for each of these polytopes, the kinematic function F y i and web invariant [W i ] associated to each facet. For each almost arborizable web we also include the A and B invariants appearing in the associated series (5.19).
A few important comments about our algorithm are in order. First of all, we cannot exclude in generality the possibility that there might exist two webs W 1 , W 2 with [W 1 ] = [W 2 ] that have the same image X R (W 1 ) = X R (W 2 ) = F . If this were to happen for a kinematic function F associated to some facet of a polytope of interest, then we would not know which web to assign to that facet. However, we have not encountered such a situation as far as we have computed: for given F , we have always found there is (up to skein relations, of course) precisely one web W such that X R (W ) = F (among all possible webs below the maximum lengths we have checked).
Second, we must of course mention the possibility that the FP conjectures could be wrong for (k, n) = (3, 10) or (4,9). Then we would have to worry that there could be some web W and some non-web D such that (1) X R (W ) = X R (D) = F and (2) [D] is a cluster variable but [W ] is not. In such a case our algorithm would suggest associating W to the facet F , when it might be more appropriate to associate D instead. The fact that we have not encountered any apparent inconsistency in our calculations for C † (4,9), which furthermore are corroborated by the independent work of [43], suggests that such worries may be postponed to higher (k, n), if not indefinitely.
(1) (2) (3) web type: arborizable almost arborizable neither cluster series type: (2.7) (i.e., cluster variable) (3,9) 327 0 0 C (3,9) 468 3 0 C † (3,10) 1060 0 0 C (3,10) 2860 280 0 C † (4, 6), C(4, 6) 9 0 0 C † (4, 7), C(4, 7) 42 (4,9) 3078 324 27 Table 3: The number of facets of types (1), (2) and (3) (defined in the text) for various polytopes. Note that the facets of C † (k, n) are always (by construction) a subset of those of C(k, n). The set of facets associated to each polytope is closed under the action of the Z n cyclic group, and for the C † polytopes they are closed under the full D n dihedral group as well as under parity (see Appendix A of [1] for a discussion of parity symmetry.) Next let us comment on a few interesting features of our results. First of all we note that while C(3, n) has facets associated to non-arborizable webs for n = 9, 10 (and, presumably, for all n ≥ 9), these are absent from the C † (3, n) polytopes that we have studied: all facets of C † (3, n ≤ 10) are associated to cluster variables. It would be interesting to see if this continues to hold for higher n.
The 3 non-arborizable webs associated to C (3,9) are the three cyclic images of Fig. 6(a) and the 4 non-arborizable webs associated to C (4,8) are the four cyclic images of Fig. 6(b). Out of the 324 almost arborizable webs associated to C † (4,9), 315 have an inner quadrilateral loop and 9 have an inner hexagon. The latter are the cyclic images of where we highlight the two loops in color. Each of these is skein-equivalent to a valid web (that means, with no 2-cycles or triple edges; see footnote 7). As already noted above, it would be very interesting to find a natural web series to associate to these more complicated webs; the corresponding invariants might evaluate to rational functions with higher (than quadratic) order polynomials in their denominators. It is interesting to note that the approaches of [8,10] also seem to encounter some difficulty when passing from G (4,8) to G (4,9), for essentially the same reason: Whereas the G (4,8) cluster algebra has finite mutation type [47], and all exceptional rays can be asymptotically approached by repeated mutation on some quiver containing an A 1,1 subalgebra, G (4,9) does not have finite mutation type and has arbitrarily complicated quivers. It would be interesting to more precisely understand how (if at all) this fact relates to webs of the type shown in (6.2).
where W is the k × (n−k) matrix constructed as follows. Draw a (k−1)× (n−k−1) array with faces labeled by web variables x 1 through x d (reading down each column, from left to right). Label the horizontal lines 1, . . . , k from top to bottom and the vertical lines k + 1, . . . , n from left to right. Give each horizontal edge a rightward orientation and each vertical edge an upward orientation. To each path p through the diagram we associate the product of all web variables above p, which we denote by p(x). Then the i, j element of W is given by For example, for k = 3 the array looks like The web matrix associated to G(k, n) provides a parameterization of G + (k, n)/T as the d web variables range over R d >0 , and (importantly for our purposes) the web variables are precisely the cluster X -coordinates associated to the initial seed of G(k, n) shown in Fig. 7. Specifically: when evaluated on the web matrix (A.1), the cluster X -coordinate (see Sec. A.3) attached to any mutable node of the initial quiver is equal to the web variable x that appears in the same position of the web array described above.
A.2 Kinematic Space and Kinematic Functions
Next we review the planar kinematic variables first employed for k = 2 in the construction of [34]. We introduce n k (generalized) Mandelstam variables [13] s i 1 ,i 2 ,...,i k , fully symmetric in all indices, subject to the "on-shell" condition s i,i,... = 0 and the "momentum conservation" condition The resulting n k − n-dimensional space spanned by these variables is called the kinematic space K k,n . Figure 7: Initial seed for the G(k, n) cluster algebra [48,49], where l = n−k and a x,y denotes the Plücker coordinate 1, . . . , k−x, k+y−x+1, . . . , k+y . The arrows here are reversed (and the figure is transposed) with respect to that used in [22]; see the discussion in Sec. A.5.
Since the Mandelstam variables are not linearly independent, our next step is to define a particular basis for K k,n [12]. To that end we consider where α ′ is a positive constant that is irrelevant for our purposes. Note that thanks to (A.4), R k,n is invariant under the torus action that rescales each Z a i independently, and therefore is well-defined on G(k, n)/T . Something remarkable happens when R k,n is evaluated on the G(k, n) web matrix. The n k minors fall into two categories. First, there are n + d trivial minors that evaluate to 1 or to monomials in web variables; these include the n frozen variables 1, 2, . . . , k , 2, 3, . . . , k+1 , . . . , 1, 2, . . . , k−1, n (A. 6) as well as the d non-frozen variables of the form 1, 2, . . . , j, l, l+1, .
Each of the remaining n k − n − d minors I factors into a monomial in web variables times a single polynomial P I (x) that is unique to each minor. Moreover, each of these polynomials is subtraction free and has constant term 1. By collecting all of the overall monomials from both the trivial and non-trivial minors, we can rewrite [12] R n,k (x 1 , . . . , x d ) = where the product runs over all non-trivial minors, the power of each overall x a is α ′ times some linear combination of Mandelstam variables that we denote X a , and we set s I = −c I . Altogether the total number of X a and c I variables is n k − n, and they provide the desired basis for K k,n .
Here we explain the kinematic functions that first appeared in Sec. 2.1. We associate to any point y = (y 1 , . . . , y d ) in the integer lattice Z d the function F y on kinematic space defined by The properties of R n,k ensure that F y is always an integer linear combination of the X a and c I . In fact the coefficient of X a is just y a , so we have Using (A.9) and (A.10) we can pass back and forth between y and F y at ease. The two vertical arrows on the right side of Fig. 2 apply this correspondence to the case when y is taken to be the generator y of an outward-pointing normal ray to a G(k, n)-polytope. For example, for G(3, 5) a simple calculation reveals that where X 1 = s 123 and X 2 = s 345 , and it is also easy to check that (A.9) computes the first column of Tab. 1 from the data given in the second column. Note it is manifest (by homogeneity) that F my = mF y for any non-negative integer m. This bears resemblance to the statement about cluster algebra bases that B(my) = B(y) m , but the former holds for any lattice point y while the latter holds only if y is inside the cluster fan.
A.3 g-Vectors
Next we review the horizontal arrows at the bottom of Fig. 2. We order the d + n cluster variables (A-coordinates) appearing in the initial quiver ( Fig. 7) a 1 , a 2 , . . . , a d+n , first reading the mutable variables down each column from left to right, and then the frozen variables counterclockwise starting from a 0,0 . Next recall that the associated exchange matrix is given by B ij = (#arrows i → j) − (#arrows j → i) where i, j run over the nodes, and the cluster X -coordinate associated to node i is related to the A-coordinates of its neighbors by To any monomial i a g i i we associate the vector of powers g = (g 1 , . . . , g d+n ). We introduce a partial order on such vectors by saying that g ′ g iff g ′ − g is a non-negative linear combination of the first d columns of B (the columns corresponding to mutable nodes). If a is a sum of monomials in the a i we define the g-vector of a to be that of the term whose g-vector is largest with respect to (if such a term exists). It is always sufficient to truncate g to its first d components. If a is a cluster variable of G(k, n), then the g-vector of a exists and it is said to be a g-vector of the cluster algebra.
We have therefore explained the upward pointing arrow in Fig. 2. For example, for G(3, 5) we have the initial cluster variables (given in the caption) are and the remaining three cluster variables are given in terms of these by 2 3 5 = a 3 a 5 a 2 + a 3 a 4 a 6 a 1 a 2 + a 4 a 7 a 1 , 2 4 5 = a 1 a 5 a 2 + a 4 a 6 a 2 , (A.14) Here we have written the terms in each sum in increasing order with respect to so it is easy to read off, from the last term in each line, the g-vectors (−1, 0), (0, −1) and (−1, 1), as shown in Tab. 1.
Going the other way, down the dotted arrow in Fig. 2 to compute the cluster variable (or more general basis element) associated to a given lattice vector x, is not so simple. In practice one often resorts to a computer search by repeatedly mutating away from the initial seed until one has the fortune to chance upon a cluster variable whose g-vector is x. Of course for infinite algebras this algorithm may take an indefinite amount of time. Even worse, x may lie outside the cluster fan in which case one will never find a match.
One alternative, suggested in [7], is to read Corollary 7.3 of [22] as providing an explicit formula for an element of the canonical basis [38] associated to every g that agrees with the usual cluster algebraic definition when g lies inside the cluster fan. Although the required computation is manifestly finite for any g, its enormous computational complexity makes it impractical in many cases of interest.
A.4 G(k, n)-Polytopes
Before ending this section, we are finally in a position to review the construction of the G(k, n)polytopes of interest, which generalize the well-known Stasheff polytope [32,33] building on a construction introduced in [34]. These polytopes lie in the d-dimensional subspace H k,n of K k,n obtained by setting all of the c I to positive constants. Here we see that the purpose of defining X 1 , . . . , X d in the previous subsection is that we can take these as coordinates on H k,n .
The polytope called C(k, n) in [7] (called P(k, n) or dual Trop G(k, n) in some other references) is defined by taking the Minkowski sum of the Newton polytopes (with respect to x 1 , . . . , x d ) associated to the polynomials P I appearing in (A.8). Other polytopes can be constructed by only including proper subsets of the P I in the Minkowski sum. For example, of particular interest is the polytope called C † (4, n) in [7] (also studied in [8,9]). It is defined as the polytope obtained by including only polynomials associated to I 's of the form i i+1 j j+1 or i−1 i i+1 j , and may be obtained from C(4, n) by setting to zero all c I except those corresponding to these I's. Here we define C † (3, n) to be the polytope obtained by keeping only I 's of the form i i+1 j .
More general polytopes of the same basic type can be constructed by including other proper subsets of the P I in the Minkowski sum, or by including polynomials obtained by evaluating more complicated G(k, n) cluster variables on the web matrix. An example of the latter was considered in [8].
A.5 Langlands Dual Conventions
Because our work touches on a wide range of previous work in the physics and math literature, we find it helpful to clearly connect to two different choices of convention that are related to each other by what could be called "Langlands duality" (see Remark 7.15 of [35]). By this we mean performing the following compatible set of changes: 1. inverting each x i → 1/x i in the web parameterization of Sec. A.1, 2. reversing each arrow in Fig. 7, 3. and, correspondingly, changing the sign of the B-matrix with respect to which g-vectors are computed as described in Sec. A.3.
The conventions outlined in Sec. A.1 through Sec. A.3 correspond to what we call the "left" convention starting in Sec. 4. To illustrate the different conventions we present in Tab. 4 the "right" convention version of the G(3, 5) data from Tab. 1. Note that the form of the equations (A.14) is the same for both choices, and while the "left" g-vectors shown in Tab. 1 can be read off from the last term in each line, we can similarly read off the "right" g-vectors (shown in Tab. 4) from the first term on each line. We explain a general relation between the two conventions, at the level of our X-map applied to general tensor diagrams, in Sec. 4.5.
B Summary of Known Symbol Letters
Here we summarize what is known about the symbol alphabet S n of n-particle amplitudes in SYM theory. In this discussion we of course restrict our attention to those amplitudes which are of polylogarithmic type, and so have conventionally-defined symbols.
Let us begin with the rational letters. All currently known rational letters are cluster coordinates of G(4, n), which (according to the FP conjectures) means that we can represent them as arborizable webs. It expected that the n-particle symbol alphabet is a strict subset of the n ′ -particle symbol alphabet for all n ′ > n, which corresponds to the fact that we can Table 4: The correspondence between kinematic functions, generators, and cluster variables for the C(3, 5) polytope according to the "right" conventions, shown in Fig. 8, in contrast to Tab. 1 which shows the correspondence for the "left" conventions.
always make a valid n ′ -particle web by adding n ′ −n boundary vertices, with no edges attached, to an n-particle web. Therefore it is convenient to categorize different types of symbol letters according to the smallest value of n at which they first appear; we also categorize them by Plücker degree. In this way we encounter five basic types of rational letters for n ≤ 9: (1) S n≥6 contains the Plücker coordinates of the form 1 2 a b for 3 ≤ a < b ≤ n, and their cyclic images. (For n < 8 all Plücker coordinates are of this type.) (2) S n≥7 contains letters that are quadratic in Plücker coordinates having the form a(b c)(d e)(f g) := a b d e a c f g − a b f g a c d e . Specifically, S 7 contains the 14 non-Plücker cluster variables of G(4, 7): 1(23)(45)(67) , 1(72)(34) (56) and their cyclic images. The letters of this type for n = 8, 9 are listed in [24,62].
(4) S n≥8 also contains certain cubic letters listed for n = 8, 9 in [24,62]. (5) For n ≥ 9 S n contains a second type of cubic letter; see [24]. In Tab. 5 we tabulate the number of cluster variables of each type that appear in the The two-loop NMHV amplitudes have 18, 99 multiplicatively independent algebraic letters respectively for n = 8, 9 [24,62]. As reviewed in Tab. 2, these respectively involve 2, 9 distinct square roots of Plücker polynomials; all are of four-mass box type, having the form √ A 2 − 4B in terms of (2.5), or cyclic images thereof. In our approach, as we found in Sec. 5, each of these arises from a web series associated to (a cyclic image of) the almost arborizable web shown in Fig. 6(b) (or, for n = 9, the same web but with a ninth boundary point added anywhere in the diagram).
C Some Notation for Kinematic Functions
In this appendix we collect some notation, originally introduced in [46], to efficiently encode certain kinematic functions. If A is a subset of {1, . . . , n}, we define (C.5) | 14,624.4 | 2021-06-02T00:00:00.000 | [
"Mathematics"
] |
Nilpotence and the generalized uncertainty principle(s)
We point out that some of the proposed generalized/modified uncertainty principles originate from solvable, or nilpotent at appropriate limits,"deformations"of Lie algebras. We briefly comment on formal aspects related to the well-posedness of one of these algebras. We point out a potential relation of such algebras with Classical Mechanics in the spirit of the symplectic non-squeezing theorem. We also point out their relation to a hierarchy of generalized measure theories emerging in a covariant formalism of quantum gravity.
Introduction
The possibility that the Heisenberg uncertainty principle is modified by quantum gravitational effects has been first proposed almost a half century ago [1]. More recently, this idea resurfaced [2]- [8], mostly motivated by a wish to naturally incorporate a minimal length [9], [10] in the various approaches to quantum gravity. Such a generalisation is warranted close to the Planck scale, around which the Compton wavelength of a particle becomes comparable to its Schwarzschild radius. There has been a veritable explosion of interest in this topic during the last two decades, during which formal variations [11], and their statistical mechanical and phenomenological implications [12]- [18] have been being explored. Implications of the generalised uncertainty principles for quantum field and gauge theories have also recently emerged [19]- [21].
It is probably not too surprising that there is no "unique", "natural" or even "best" generalisation of the Heisenberg uncertainty principle. Such a generalisation really depends on the goals that one wishes to attain, and is ultimately justified by its predictions or by a physical principle, already known or new, that may be uncovered lying at its foundations. As a result, one encounters several versions of the generalised uncertainty principle, which stem from different generalisations of the Heisenberg algebra. Ultimately, such generalised uncertainty principle should arise and be justified by a theory of quantum gravity. Since such a universally acceptable theory is currently lacking several paths, mostly phenomenologically motivated, have been taken toward the formulation of generalised uncertainty principles [2]- [18].
Nilpotence and the Generalized Uncertainty Principle(s)
2.1 The concepts of nilpotent and solvable groups and algebras are central in the structure and classification of both discrete and "continuous" groups and algebras [43]. Let G indicate a group with elements g I where I is a discrete or continuous index set. The (group) commutator is the subgroup indicated by [G, G] having elements In a similar manner, when one considers two subgroups H 1 , H 2 ≤ G, with elements H 1 = {h 1j , j ∈ J}, H 2 = {h 2k , k ∈ K} with J, K subsets of I, then their commutator subgroup [H 1 , H 2 ] is given by Consider the following commutator groups defined iteratively by The descending central series is A group G is called n-step nilpotent, if its lower central series terminates after n-steps, namely if there is n ∈ N such that G (n+1) = 1. Several other equivalent definitions exist for nilpotent groups. Examples of nilpotent groups: all Abelian groups are 1-step nilpotent. The Heisenberg group is 2-step nilpotent. By contrast, the quaternion and the rotation groups are not nilpotent.
Consider the following commutator groups defined iteratively by The derived series is A group G is solvable if its derived series terminates after n ′ steps, namely if there is n ′ ∈ N such that G (n ′ +1) = 1. Examples of solvable groups: all Abelian groups are solvable. More generally, all nilpotent groups are solvable, as can be readily seen. A solvable but non-nilpotent group is the symmetric group of 3 elements S 3 . For discrete groups, the Feit-Thompson theorem states that every finite group of odd order is solvable.
For a Lie algebra g similar definitions apply, by using the matrix instead of the group commutator. Then Engel's theorem states that a Lie algebra is nilpotent if the adjoint map ad x (y) = [x, y], x, y ∈ g is a nilpotent operator, namely if there is n ∈ N such that ad(x) n = 0, ∀ x ∈ g. Moreover, g is solvable if and only if [g, g] is nilpotent. The notation for Lie algebras g that we will use is analogous to that for groups, as given above. Analogous definitions can be used for associative algebras endowed with commutators as will be done in the sequel.
2.2
One can easily see that the Heisenberg algebra of Quantum Mechanics, giving rise to the "ordinary" uncertainty principle, is 2-step nilpotent, hence solvable, since and all other commutators are zero, where the dimension of the phase space M is 2n. Then with the other 2-step commutators trivially zero. Now consider the n-dimensional rotationally symmetric Kempf-Mangano-Mann (KMM) deformation [7], [8] of the Heisenberg algebra given [p i , p j ] = 0 (10) where β ∈ R + and We immediately observe that all elements of the second step of the derived series g (2) are zero, except the following ones that require some straightforward calculations and Using those, one can express the elements of the higher elements in the derived series in terms of those of g and g (2) . As can be immediately seen, the derived series does not terminate, in general.
We proceed by further simplifying matters, in order to get a firmer control of the algebra.
An obvious way to achieve this goal is to impose conditions that make g (2) Abelian. One way to attain this is to consider only the "semi-classical limits" → 0 or β → 0, or both, of the KMM deformation. Such a "Inonü-Wigner"-like contraction is implemented by ignoring all terms that are of quadratic or higher order in and of quartic or higher order in β. A second way to proceed is by foregoing altogether all traces of non-commutativity between the "spatial" variables x i by imposing Obviously, (15) is a significant simplification of the KMM deformation. It is adopted by the "modified uncertainty principle" as will be seen in the sequel. If either of these simplifications are made, then the corresponding subalgebra of the KMM deformation is 2-step solvable as can be seen from (14). We have to be somewhat careful though. If we assume (14), and we omit terms of quadratic and higher order in then what remains is the Heisenberg algebra, so we get nothing new. Hence, to get a nontrivial result, we are forced, in addition to (14) to omit only terms of quartic or higher order in β. By using this approximation, we go beyond the Heisenberg algebra, since which is obviously an element of the centre of g. Then all 4-step commutators, i.e. all elements of g (5) are trivial. In other words, under the above approximations, the KMM deformation (9)-(11) reduces to a 4-step nilpotent algebra.
Maggiore [3]-[5] proposes a generalization of the Heisenberg uncertainty relations that
can be derived from the Lie algebra having generators x i , p j , J k , i, j, k = 1, 2, 3 which obey where J i , i = 1, 2, 3 stand for the components of the total angular momentum operator, c ijk are the structure constants of the Lie algebra of SU(2) and κ is the "deformation" parameter which is identified with the Planck mass. The essential difference between this algebra and the Heisenberg algebra can be essentially traced back to (18). As κ → ∞, we recover the direct product of the Heisenberg algebra with the Lie algebra of SU (2). The latter however cannot become solvable, in any approximation in terms of κ. To justify this, consider the Killing form of a Lie algebra g, defined as the symmetric bilinear form on g given by Cartan's criterion states that a Lie algebra g is solvable if and only if its Killing form satisfies It is straightforward to check that every subalgebra of a solvable Lie algebra is also solvable. Hence if Maggiore's extension could become solvable in some non-trivial (namely, not resulting in the Heisenberg algebra) approximation in terms of κ then its SU(2) subalgebra should also have a degenerate Killing form. This is impossible however as the SU(2) commutation relations do not depend on the value of κ in Maggiore's deformation, as is obvious in (23). Hence Maggiore's deformation cannot give rise to a nilpotent algebra either, in some approprite limit in terms of κ. We conclude that our approach and subsequent conclusions do not apply to Maggiore's generalization of the Heisenberg algebra (18)-(23).
2.4
The Das-Vagenas (DV) generalised uncertainty relation is a result of the associative algebra endowed with a bracket given by with ζ = ζ 0 /(M P l c 2 ) where M P l denotes the Planck mass. This is an anisotropic variation, provided by the term 2ζp i p j of the KMM deformation (9)-(11) with the additional simplification that the spatial coordinates commute as in (15). Requiring (15) instead of (11) is a considerable simplification of the KMM proposal, conceptually more closely aligned to ordinary rather than to non-commutative geometry. As can be readily seen, this algebra is 3-step solvable as all elements of g (2) are zero. On the other hand, for nilpotency we have As was also observed in the case of the KMM algebra, the DV one is not nilpotent unless one resorts to some approximations. The most straightforward assumption is to consider only terms vanishing as quadratic or higher powers of in which case (29) becomes zero. In this approximation the DV algebra is 3-step nilpotent. On the other hand, someone may wish to keep only terms up to second order in ζ. Then (29) reduces to The only 3-step non-trivial commutator is, in the aforementioned approximation in terms of ζ, which being a central element of g implies that All the other commutators have been trivially zero from the previous step. We see that the presence of the anisotropic term in (26) does not even affect the step at which the DV algebra becomes nilpotent when compared to the KMM case.
2.5
The Ali-Das-Vagenas (ADV) "modified uncertainty principle" [14], [15] generalizes the spatial-momentum commutator of the KMM deformation and extends the DV generalized algebra to The generalisation to n dimensions is straightforward. Here α = α 0 l P l / where l P l is the Planck length. The ADV algebra assumes, as the DV case (26), (27) above that In this case the "deformation" parameter is indicated by α. As in the case of (26), (33) is also 3-step solvable since g (2) is also trivial, as can be seen by a straightforward computation. Notice that due to the fact that the canonical momenta commute (34), the two potentially "dangerous" issues being the exact operator ordering in the fraction of (33), as well as the exact way that p and 1 p are defined, can be temporarily ignored.
To check the nilpotency of the ADV algebra, we will work at a formal level, leaving potential justifications of these steps for Subsections 2.6, 2.7 in the sequel. Consider an operator of interest, let's say p i . Then define its inverse 1 p j by demanding There is no need to distinguish, naively at least, a left from a right multiplication, because to due to (34), it is expected that both one-sided multiplications will give the same results. We use repeatedly that the commutator is a derivation, as well as (34), and find from (12) [ With the definition (35) this can be rewritten as Taking into account (36)-(38), a calculation gives, up to terms of order α 2 , that We follow the same level of approximation as in the KMM and DV cases above where terms up to the square of the lowest term in the deformation parameter are retained. Next, we have Calculation of the next few terms such as [ ] results in a gradually increasing level of complexity of the resulting expressions, which does not seem to terminate even in the approximation up to α 2 . The reason behind this behaviour, which is totally different from that of the KMM and the DV algebras, is not hard to pinpoint: it is the existence of p rather than of p 2 and its appearance not only in the numerator, but also in the denominator of (33). As long as it is unclear at this point what is the physical principle, if any, dictating the form of (33), and since (33) is not the only expression resulting in an uncertainty relation with desirable phenomenological consequences, it may be prudent to avoid the use of p itself which introduces these problems for the subsequent formalism. One could use instead integer powers of p in any generalised algebra, starting its square as in the KMM (9) or DV (26) cases. Then this algebra will become nilpotent in the lowest non-trivial approximation in terms of the deformation parameter.
2.6
In this subsection, we would like to comment on the terms of (33) involving p . It seems that the meaning of this quantity and especially its possible vanishing in the denominator of (33) has not been adequately addressed in the literature of the generalised uncertainty principles, so far. For this reason, a comment or two may be in order about these issues. The notation and some pertinent definitions used in the rest of this Section, can be found in the Appendix and the references cited therein.
We will assume that someone works in the Schwartz space S(R n ) in which the Fourier transform F is well-defined. Incidentally, it is entirely possible to use another integral transform, such as the Mellin transform, to reach similar conclusions. We immediately see that the symbol corresponding to p is This is a classical symbol belonging to the (Hörmander) class S 1 1,0 . In more physical terms, it is a pure canonical momentum, being independent of the "configuration" variables x. The corresponding operator p(x, ∂ x ) is a first order pseudo-differential operator belonging to OP S 1 1,0 . The Laplacian ∇ 2 on R n is a second order elliptic operator, as it has a positive definite symbol, and we see that (41) can be re-expressed as As such, p is well-defined on R n . We can re-cast (33) in the slightly different form It should be understood that (33) and (43) are not necessarily equivalent as the domains of these operator expressions can be different, despite their functional equivalence. The interest in operator domains should not be dismissed out of hand as an exercise of purely mathematical interest. Indeed the potential physical importance of operator domains has been investigated for over two decades in the context of quantum gravity and non-commutative geometry, in particular as it pertains to issues of topology change ( [44], [45] and references therein). We will tacitly assume that the discussion takes place in the intersection of the domains of (33) and (43).
Going back to (43) one should notice that a potential problem lies is its "infra-red" behaviour, namely in the possibility that its denominator becomes zero. This is in stark contrast with typical issues in functional spaces, especially where the Fourier transforms involved, that are mostly concerned about "ultra-violet" divergences. A way to deal with this may be to compactify R 3 to the 3-torus T 3 and only consider functions that obey periodic boundary conditions as is frequently done in Quantum Physics. Another way is to work with principal values of potentially divergent integrals, which is a form of infra-red regularization, as is done in the case of Hilbert transforms and other singular integral operators. One can take this path by re-writing the fractional terms of (43) in terms of Riesz transforms as It is well-known [39]- [41] that the Riesz transforms are bounded in L p (R n ), 1 < p < ∞. Among them, the L 2 -integrable functions are of greatest interest in Physics, hence (43) therefore (33) are well-defined in such spaces, which is sufficient for our purposes.
2.7
One issue arising from the ADV algebra (33), (34), due to the presence of the inverse of p and its square, is related to smoothness. As can be seen from (12) and (42), in (33) we are dealing with an inverse power of the Laplacian, of the specific form For our purposes s = 1 but it does not hurt to be somewhat more general and allow s ∈ C, with Re s > 0 for convergence purposes. Such expressions are called Riesz potentials of order s and are defined via the Fourier transform and its inverse, for f ∈ S(R n ) by The effect of operators such as J s is to improve the integrability of functions, namely to map I s : L p → L q which for s ∈ R are related by the Sobolev duality In the particular case of the ADV algebra above p = 2, s = 1, n = 3 which gives q = 6. The Hardy-Littlewood-Sobolev fractional integration theorem [39]- [41] provides bounds for the corresponding norms. The Riesz potentials such as p −1 are essentially integrals, hence they act as smoothing operators. Therefore, by using inverse powers of differential operators, the corresponding expressions become more regular, a clearly desirable property especially for any theory having a classical limit.
On the other hand, because of the fact that integral operators are defined in domains of R n rather than points, an obvious question arises about the meaning of locality in models using the ADV algebra. This, does not only create the usual problems of interpretation as in ordinary Quantum Physics but also introduces considerably greater difficulties in the quantization of any such systems. It is not clear, to us at least, what exactly would be the meaning of an operator defined at a point in space, even in the distributional sense, when a non-local operation such as the convolution with a singular integral operator is involved in even defining the algebra expressing the dynamics. A similar issue is also raised and partially addressed in [6] for the KMM algebra by utilising a generalised Bargmann-Fock representation and by defining an approximate "quasi-position" representation. One could possibly address such issues by following a largely algebraic path, in the context of generalizations of C ⋆ algebras emulating the path followed in axiomatic quantum field theory [46] or non-commutative geometry [22], [23]. The issue of locality is of central importance in theories of quantum gravity, such as Loop Quantum Gravity [47] or Causal Sets [48], which aim to formulate theories that are background independent. Contrast this with the approach taken toward gravity quantization and interaction unification by the String/Brane/M theories [49], [50]. Since a generalised uncertainty principle should reflect, in some part, elements of these quantum gravitational theories, its treatment of locality is crucial, extending far beyond the mere technical level that we have alluded to here.
In [7],[8] an inner product was introduced in an attempt to develop a rudimentary
representation aspects of the KMM algebra. It was given, in n-dimensional momentum space, by This product can be seen as "natural" from the viewpoint of the Fourier transform of the inner product of the (Sobolev) Bessel potential space L 2 −1 (R n ). More generally, one can see that the algebras giving rise to the generalised uncertainty principles, can be approximately expressed as "quantizations" in Hilbert spaces endowed with generalized inner products. Modifying the inner product to bypass altogether the Stone -von Neumann theorem, which however does not hold in quantum field theory, and obtain distinct predictions from the usual operator quantisation of the Fourier modes of the phase space variables is one of the tenets of theories such as Loop Quantum Gravity [47]. Motivated by the the generalised uncertainty principles as well as by the approach implemented in Loop Quantum Gravity, it may be of some interest to check on whether the Weyl correspondence [39] can be modified/extended to apply to the above or any new algebras giving rise to the generalized uncertainty principles.
The "symplectic camel" and generalized measure theories.
We saw in the previous section that some of the proposed generalisations of the Heisenberg uncertainty principle lead to solvable, and in particular limits, to nilpotent Lie algebras. It may be worth wondering on whether this just a coincidence, whether it is part of a mathematical pattern, or even more importantly whether it is a manifestation of a physical principle. This section is an attempt to relate such questions to known facts about Classical Mechanics and a generalized measure hierarchy of potential use for Quantum Gravity, as first rudimentary comments that may contribute toward an answer.
The operator formalism of Quantum Mechanics has undeniable similarities with the
Hamiltonian formulation of Classical Mechanics. Then, it would be highly suggestive if an extension of Heisenberg's uncertainty principle could be traced back to the structure of Classical Mechanics. The fact that this is indeed possible for the Heisenberg uncertainty principle itself, is a relatively recently established fundamental result in Symplectic Topology called the "symplectic non-squeezing theorem" [24], [25]- [35] or the principle of the "symplectic camel" [28]. As this result has not yet received the visibility in Physics that it duly deserves, despite the extensive efforts of primarily M. de Gosson (and collaborators), who seems to be its biggest advocate in the Physics community [29]- [34], we will say a few words about it that are related to the present work.
The following applies to any symplectic manifold M but we may wish to think more physically as M being the phase space of a Hamiltonian system. Assume that dimM = 2n and let it be parametrized locally by (x, p) where x = (x 1 , . . . , x n ) and p = (p 1 , . . . , p n ) where the notation is borrowed from the Hamiltonian formulation of Mechanics. Consider the ball B 2n (R) = (x, p) ∈ M : |x| 2 + |p| 2 ≤ R (49) and the "cylinder" Z l (r), l = 1, . . . , n over the symplectic 2-plane (x l , p l ) given by Consider a (smooth) canonical transformation (symplectomorphism) f : M → M. The symplectic non-squeezing theorem states that it is impossible to fit B 2n (R) inside Z l (r) unless R ≤ r, namely that This shows that a phase-space volume is not only preserved by a Hamiltonian (more accurately: a divergence-free) flow, as given by Liouville's theorem, but it possess an additional rigidity associated with its projections along each 2-plane of canonical coordinates. Alternatively, the set of canonical transformations of M is quite different from the set of volume-preserving transformations of M [24], [26], [28]. This can be interpreted as a rigidity property of Hamiltonian Mechanics whose Quantum Physics "analogue" is the Schrödinger-Robertson inequality [31]- [34] (∆x l ) 2 (∆p l ) 2 ≥ (Cov(x l , p l )) 2 + 2 4 , l = 1, . . . , n where Cov(x l , p l ) stands for an element of the covariance matrix. If the covariance matrix is zero, this results in the usual Heisenberg uncertainty relation. So we see that inside Classical Mechanics itself, there are "elements" of Quantum Physics, when some terms are properly interpreted. Is it possible to use further rigidity results of Classical Mechanics (if any further rigidity exists at all) to guide us in formulating a generalised uncertainty principle, therefore going beyond Quantum Mechanics? This is unclear at present. Although a definitive answer is unknown, it appears that there may be additional rigidity properties in the behaviour of canonical transformations, appearing in the middle dimension n as the work of [35] seems to indicate. If such indications are affirmative and more rigidity constraints exist for phase-space volumes, nilpotence in this context would be the termination after a finite number of steps of a sequence of properly defined involutions of such rigidity constraints.
3.2
The present work is concerned with nilpotent/solvable associative algebras endowed with a bracket operation, that are non-linear generalisations of Lie algebras, and properties of related functional spaces which are the carrier spaces of their representations. It may be worthwhile to see how these ideas may carry over from the canonical to the covariant framework. Each of these two approaches has its own advantages and limitations, but both provide valuable techniques and insights on how to understand and work out the process of quantisation in particular models. As is clear from the generalised uncertainty principles and the corresponding algebras discussed above, our interest is in uncovering properties related to Quantum Gravity.
The most striking observation is that it is not really surprising how different is Quantum Mechanics from Classical Mechanics, but how actually close they are to each other [36], [37].
An indication for such a close relation was provided by the symplectic "non-squeezing" theorem discussed above. Another is found if one thinks about a triple-slit experiment extending Young's double-slit experiment [36], [37]. We start with all three slits open and then gradually start blocking off one, then two at a time and then all three. We record the corresponding interference patterns with an overall plus sign if three and one slits are open and with an overall negative sign if two or no slits are open. We superimpose these eight resulting patterns by adding them up algebraically. The result will always be zero. If a four, five etc slit extension of Young's experiment is set up and calculations are performed along similar lines, the result will always turn out to be zero. This is a direct consequence of the fact that the Heisenberg algebra is 2-step nilpotent. In Classical Mechanics no new information beyond the one provided by a "single-slit" experiment is obtained. In Quantum Mechanics, Young's double slit experiment contains all the non-trivial physical information and every multi-slit experiment beyond it gives nothing new. It is in this sense that Quantum Mechanics is as "close" to Classical Mechanics as "possible" [36], [37] although, of course, their structures are quite different from each other.
To generalize this nilpotentcy in the covariant framework, we have to think in terms of generalised measures of histories, expressing the evolution of a system. Consider a set of histories S 1 having a generalised measure indicated by |S 1 |. Consider a second set S 2 and form the disjoint union S 1 ∐S 2 . These two sets could be chosen to represent the histories of the electron going through slit one or only through slit two in Young's double slit experiment. The extension of the notation and the definitions to a multi-slit experiment involving the "histories" S l , l = 1, . . . , n is immediate. Consider a hierarchy of sum rules [36]- [38] S l (53) Here S indicates that the argument S should be omitted in the calculation. Evidently I 1 = 0 for any non-trivial statement to be feasible. Classical Mechanics corresponds to I 2 = 0. Quantum Mechanics is given by I 2 = 0, I 3 = 0. One can straightforwardly see that I l+1 = 0 implies that I l is additive in each of its arguments. This multi-additivity can be used to explain why imposing I 3 = 0 results in being able to express the real part of the decoherence functional as I 2 (S l , S l ) = 2|S l |, which in turn implies that the transition probabilities are proportional to the square of amplitudes, as is well-known in Quantum Physics. A generalised uncertainty principle would reflect in this framework that I l = 0, l ≥ 4. The generalisation of the Heisenberg algebra to an l-step nilpotent algebra would be expressed by demanding that I l+1 = 0, l ≥ 4.
In such theories, the transition probabilities would be functions of some integral power, but not the square, of the amplitudes of wave-functions. It appears that following the covariant approach would also imply that the carrier spaces of the representations of the generalised algebras would be L p (R n ), p = 2, if not more general Sobolev spaces. Such spaces of functions are in general Banach spaces, rather than Hilbert spaces like L 2 (R n ) which is the one used Quantum Physics. This poses an obvious problem, as the Banach spaces L p (R n ), p = 2 do not admit an inner product. Then one would have to explain how exactly the geometric structure of the Euclidean spaces stems from that of functions which are elements of L p (R n ), p = 2. This might be feasible by technically utilising a Littlewood-Paley type of treatment [39]- [41], but the physical principle that may justify such a "semi/classical" transition L p (R n ), p = 2 to L 2 (R n ), is not clear to us.
Outlook and speculations
In this work we attempted to check to what extent some of the, largely, phenomenologicallymotivated generalised uncertainty relations stem from algebras that are solvable, or nilpotent at least in some approximation. We found that if such proposed algebras do not contain a simple part that remains unaffected by the Inonü-Wigner type contraction of one of their deformation parameter(s), then they can be seen as being parts of a solvable algebraic structure. In appropriate limits of parameters depending on the Planck length and mass, such algebras can be seen to possess a nilpotent structure.
It may be worth noticing that the solvable algebras/groups are in a sense complementary to the simple ones that we use extensively in various parts in Classical and Quantum Physics. This complemetarity can be seen in two ways: the Killing-Cartan form on solvable Lie algebras is trivial but it is non-zero for simple algebras. Alternatively, any Lie algebra can be expressed as a semi-direct product of a solvable and a semi-simple Lie algebras, according to the Levi-Mal'tsev decomposition. We are cannot help but wonder on whether this complementarity persists at a more fundamental level and has any significance for Quantum Gravity or it is just a formal coincidence due to our treatment and approximations?
If such a solvability and nilpotency are accepted, then it may be worth examining the form of the generalised measure theories that may be appropriate for formulating the corresponding covariant formalism. In our opinion, this raises obvious questions about the central role that the Hilbert spaces play in Classical and Quantum Physics. We believe that it may be worth further exploring the physical and formal reasons as well as the corresponding implications that may be behind such a role.
The ADV algebra also raises some questions that may be of interest: Should we even allow for pseudo-differential and smoothing operators in fundamental algebras? If so, what may be implications on locality or on the Markovian character of the classical and quantum evolution? What techniques could someone use to explore further such ideas? We believe that some of these questions may merit some attention in future work.
Lastly, one cannot fail to see the resemblance of (53) to a simplicial structure. It may be of interest to explore consequences of such a simplical view, define appropriate boundary/coboundary operators and a (co-)homology theory [51], generalise valuation theory [52] etc. and the inverse Fourier transform is In the above equations x · ξ indicates the Euclidean inner product and |x| stands for the Euclidean norm of x ∈ R n . Both the Fourier and the inverse Fourier transforms are unitary operations (isometries), since according to Parseval's identity where * indicates the complex conjugation, and it immediately implies Plancherel's formula II. Consider the functionσ(x, y) : R n × R n → C. For our purposes, it is sufficient to assume thatσ ∈ C ∞ (R n × R n ). Consider m ∈ R, 0 < ρ, δ ≤ 1. Thenσ(x, y) is called a symbol in the (Hörmander) class S m ρ,δ , if for all multi-indices α, β ∈ N n there are constants c α,β such that Consider now the operator σ(x, ∂ x ) : S(R n ) → S(R n ) given by Ifσ(x, y) ∈ S m ρ,δ , then σ(x, ∂ x ) is a pseudo-differential operator belonging to the class OP S m ρ,δ . In the above definitions, m is called the order of the operator. Ifσ is polynomial, then the corresponding operator σ is differential. If the symbolsσ(x, y) can be decomposed asymptotically, as sums of homogeneous functions of degrees m − j, namely if σ(x, y) − whereσ (x, ty) = t jσ (x, y), t ∈ R, |y| ≥ 1 (67) then they are called classical symbols. The highest order term in the above classical symbol expansion is called the principal symbol. An element σ ∈ OP S m ρ,δ is called elliptic pseudodifferential operator, if for some R < ∞ there is a constant c > 0 such that |σ(x, y)| ≥ c y m , |y| ≥ R III. Sobolev spaces are spaces of functions aiming to quantify the "degree of the functions' smoothness". First, and as a reminder, one defines the Lebesgue spaces L p (R n ) = {f : R n → C : It turns out that these are Banach spaces when equipped with the L p norm For the triangle inequality to hold 1 ≤ p ≤ ∞ where L ∞ is equipped with the sup norm.
The classical Sobolev spaces W k,p (R n ), k, p ∈ N are defined as An alternative description of W k,p (R n ), which also allows for an extension to k ∈ R, is given via the Fourier transform and the Bessel potential spaces A theorem of Calderón states that for k ∈ N, indeed W k,p (R n ) = L p k (R n ). Among the above functional spaces, the most commonly used in Physics have, undoubtedly, been L 2 (R n ) and W k,2 (R n ) both of which are Hilbert spaces. The inner product (·, ·) k of W k,2 (R n ) is given in terms of the usual L 2 inner product (·, ·) by Due to the equivalence of the norms 1 + |y| and y of L 2 one can extend this to an inner product in W k,2 , k ∈ R by which gives rise to the norm It may be worth observing that if f ∈ S(R n ), then f ∈ L p k (R n ), k ∈ R. A pseudo-differential operator, such as σ(x, ∂ x ) ∈ OP S m ρ,δ , can be extended to an operator acting between the Sobolev spaces L p k+m (R n ) → L p k (R n ) or on the space of tempered distributions S ′ (R n ).
IV. Riesz transforms are multi-dimensional analogues of the Hilbert transforms. For R n the Riesz transforms R l , l = 1, . . . , n are defined to be singular integral operators of convolution type, as follows: Let f ∈ S(R n ). Then where p.v. indicates the principal value of the integral and Γ(x) is the Euler gamma function.
More explicitly, the Riesz transforms can be seen as the convolutions where φ l ∈ S ′ (R n ), l = 1, . . . , n are tempered distributions given by the pairing for h ∈ S(R n ). What is of particular interest for our purposes is that the Fourier transform of the Riesz transform is a Fourier multiplier, namely that for f ∈ S(R n ), we have | 8,372 | 2013-02-01T00:00:00.000 | [
"Mathematics"
] |
Robust adaptive control for a class of nonlinear switched systems using state-dependent switching
This paper presents a novel adaptive control for a class of nonlinear switched systems by introducing a sufficient condition for stabilization. Based on the possible instability of all sub-systems, a variable structure (VS) switching rule with an adaptive approach and sliding sector was offered. Moreover, the stability condition of the system can be determined by solving linear matrix inequalities (LMIs) to ensure asymptotic stability. The application of H∞ analysis of nonlinear switched systems was also investigated through the design of the mentioned adaptive control system and defining a VS switching rule. Finally, simulation results were presented to validate the novelty of the proposed method.
Introduction
Stability studies are among the main issues of any system. Since the 1990s, hybrid systems have been the subject of research by control theorists, computer scientists, and applied mathematical scientists, motivated mainly by the critical applications of the hybrid system theory in various fields [1,2]. Generally, hybrid systems are a category of dynamic systems with continuous and discrete variables in their structure.
It is generally accepted that switchable systems are one of the most commonly used subsets of hybrid systems as they can model an extensive range of physical systems, including electronic power systems, chemical processes, network control systems, and automobile industries [3][4][5]. Switching systems are classified into arbitrary and restricted categories according to the switching signal. Stability testing methods of these systems vary according to the type of switching system, but like other linear and nonlinear systems, they are usually based on the selection of the Lyapunov function [6,7]. Because of the switching property and creation of complex behavior in switchable systems, their stability assessment is essential to researchers, and thus, they have investigated various methods for switched systems including stability analysis [8], observability analysis [9,10], H∞ control [11][12][13], and optimal control [16,17].
Robust Control, H∞, is a strategy for designing control systems that emphasizes the robustness and stability of control systems against disturbances. The design of such systems aims to create a control system under which changes in system conditions have the least impact on the output. In other words, the main objectives of designing robust control systems include increased system reliability, improved performance or stability in the presence of uncertainties, and non-modeled dynamics or disturbing factors such as turbulence and unwanted inputs. Recently, the H∞ control problem for switched systems has gained traction among researchers because of the efficiency of the H∞ index in control synthesis of practical systems [16][17][18].
A hyperchaotic secure communication scheme for non-ideal communication channels is designed in [19]. The proposed approach employs the Takagi-Sugeno (TS) fuzzy model and linear matrix inequality (LMI) technique to design a controller that synchronizes the hyperchaotic transmitter and receiver systems. An H ∞ performance for a chaotic based secure communication scheme for a nonideal transmitting public channel is suggested in [20]. The presented approach employs the polynomial representation and numerical SOS convex optimization technique to design a novel polynomial synchronizer for hyper (chaotic) systems. Sadeghi and Vafamand in [21] present an optimal approach for more relaxed stability analysis conditions and controller design for Takagi-Sugeno fuzzy systems. The optimal selection of the upper bounds by employing the LMI approach leads to conservative conditions. A Takagi-Sugeno (TS) fuzzy model is also proposed in [22] to further decrease the conservativeness instability analysis condition and controller design. The stability condition proved by using a non-quadratic Lyapunov function in terms of LMI.
Indeed, choosing the right switching signal plays a crucial role in stabilizing the system as applying an appropriate switching signal to a switched system with unstable subsystems could lead to system stability. Similarly, a wrong switching signal may cause instability of a switched system with stable sub-systems. Designing suitable switching signals for the asymptotic stability of switchable systems has, thus, encouraged many researchers to invest and study-related fields [23][24][25]. On the other hand, given the fact that a switchable system consists of unstable sub-systems; thus, system states tend to diverge. Therefore, there is a need to design an efficient switching signal that can stabilize a switchable system with unstable subsystems. [26] designed a state dependent switching law that obeys a dwell time constraint and guarantees the stability of a switched linear systems.
In the past two decades, a significant class of statedependent switching signals known as variable structures (VSs), have been investigated extensively for application in switched systems [27][28][29]. The VS method modifies the dynamic of a nonlinear system by applying a high frequency switching control and switches from one smooth condition to another. Therefore, the structure of the control rule differs relative to the position of the state trajectory, in a sense that it switches from one smooth control rule to another, and perhaps at higher speeds. In [30], an output feedback variable structure control was suggested for a set of continuous-time switched linear systems in the presence of parametric uncertainties. Moreover, sliding surfaces were created using the LMI method. Moreover, Zhao et al. [28] proposed the adaptive control problem for a set of continuous-time switched linear systems by proposing a VS switching rule and sliding sector.
Adaptive control is considered an effective way of dealing with uncertain systems. Interestingly, adaptive control is utterly distinct from robust control. That is, contrary to robust control, and adaptive control does not entail prior knowledge of the boundaries of uncertainties or time-varying parameters, but rather, it is concerned with control rules modifying themselves to adapt to the parameters. Over the last decade, adaptive control has received much attention in many fields, primarily when undesired chattering exists while the control system is in the sliding mode. In fact, the chattering phenomenon can play an undesirable role in reducing system performance since it may increase high-frequency dynamics, leading to instability. The boundary layer technique is usually adopted to eliminate the chattering; meanwhile, many adaptation methods have been introduced and extended to tune the control gain [28,29]. A common sliding surface is constructed for the proposed nonlinear switched system, and an adaptive sliding mode controller is developed to adapt the unknown parameters and guarantee reachability to state trajectories [31]. Zhu and Khayati aimed to enhance accuracy and smooth the chattering phenomenon [32]. Also, the switching gain adaptation rule was analyzed, and an alternative design was proposed. A new adaptive robust control was proposed in [33] for uncertain switched EL systems which reduces the complexity of control design. Furthermore, the adaptive tracking control problem of uncertain switched linear systems was proposed in [34].
In contrast to previous research works [12,28,35,36], the present study takes the state-dependent switching signal as a sliding surface, ensuring the stability of the switched nonlinear system in the presence of disturbance by introducing an adaptive controller. In addition, the proposed approach can be generalized to most nonlinear systems. Indeed, the turning point of the present paper is to apply a convex combination of unstable subsystems in such a way to end up with stable states of the system.
In this paper, a controller with an adaptive approach and a variable structure switching rule were utilized to stabilize a nonlinear switched system in the presence of disturbances. This was achieved using the Lyapunov function theory and the sliding sector method. In this paper, the problem of stabilization and stability for nonlinear switched systems was investigated by introducing a new class of switching signals in a continuous-time setting. A new adaptive controller, a state feedback control, and a VS switching rule with a sliding sector were developed to ensure that the H∞ control problem for a class of nonlinear switched systems is solvable in the condition where all subsystems are unstable. This paper organized as follows: Sect. 2 states the problem of adaptive control of switched nonlinear systems. In Sects. 3 and 4, stabilization and H∞ control conditions for switched nonlinear systems with and an adaptive approach and a VS switching rule are proposed. The simulation results are presented in Sect. 5 to verify the effectiveness of the proposed controller and Sect. 7 concludes the paper.
Descriptions and Preliminaries
Here, we have a class of continuous-time nonlinear switched systems that can be represented by the following model: denotes a nonlinear term of the system, u(t) ∈ R m and y(t) ∈ R q stand for the control input of the ith subsystem and the control output, respectively. Signal (t) ∈ L 2 [0, ∞) represents the disturbance input. ∈ R l is a constant unknown parameter vector and A r , B r , C r , D r , E r , r ∈ M are known constant matrices with proper dimensions.
(t) denotes the matrices of known signals considered as being piecewise-differentiable and uniformly bounded in time. Although the system matrices may not satisfy the Hurwitz stability criterion, it is assumed that there exists a positive constant r , r ∈ M , which contains r ∈ [0, 1][0, 1] , ∑ m r=1 r = 1 and Hurwitz stable matrices A 0 ∈ R n×n , D 0 ∈ R n×m , E 0 ∈ R n×d such that ∀r ∈ m Remark 1 With respect to this point that all the system matrices A r , r ∈ M are probably unstable, the condition Lemma 1 (Young's Inequality) [37] For the matrices X and Y with proper dimensions, for every symmetric positive definite matrix S and scalar ε > 0, we have: (1) Lemma 2 (Schur's lemma) [30] Let Q 1 , Q 2 , and Q 3 be three matrices of proper dimensions to the extent that Q 1 = Q T 1 and Assumption 1 [38] The Lipschitz condition for all x ∈ R n and y ∈ R n is as: in which is the Lipschitz constant. Meanwhile, inequality (5) is exemplified as follows: Definition 1 [35] The H∞ control problem for System (1) can be specified because it gives a constant > 0 , and then designs an adaptive state feedback controller u r , r ∈ M for each subsystem and a switching rule r = (t) to the extent that: (1) If (t) = 0 , the closed-loop system will be asymptotically stable; (2) For all feasible uncertainties, System (1) has an H∞ performance index from (t) to y(t), i.e., the following is held: If it holds that for x 0 = 0.
The following discussion aims to assist in the design of an adaptive feedback controller (u(t)) for dealing with uncertainties.
Stabilization for a nonlinear switched system
Here, we focus on stabilization for the switched nonlinear system (1) when (t) = 0 . This paper emphasized designing an adaptive controller as its highest priority in achieving asymptotical stability in System (1). First, we must define a sliding sector for future derivations. Regarding the switched linear system, all of the system matrices A r , r ∈ M might be unstable as defined in the preceding section. Here, we have the following inequality: that might not appear to be true for any given positive matrix P and non-negative matrix Q. Nevertheless, for each subsystem, we can break the state space down to two sections in a way that one section satisfies the condition for some of the elements x ∈ ℝ n while another part satisfies the condition for other elements x ∈ ℝ n .
Definition 2 [28] For each subsystem of the nonlinear switched system.
, the corresponding sliding sector is defined in the state space ℝ n as: in which P is a positive matrix, Q is a negative matrix, and and are positive constants. The amount of energy for the corresponding Lyapunov function, V (x(t)) = x T (t)Px(t) +̃T̃ , reduces and satisfies the above condition.
Remark 2
For either stable and unstable systems, a sliding sector can be defined in the state space.
Theorem 1 Consider (t) = 0 ; in this case, all the subsystem matrices of the nonlinear switched system (1) might be unstable. For a positive constant r , r ∈ [0, 1] and ∑ m r=1 r = 1 , r ∈ M . If there exists a positive matrix P, a non-negative matrix Q, and Hurwitz matrices B 0 ∈ R n×n , D 0 ∈ R n×m , a Hurwitz stable matrix A 0 ∈ R n×n and B have appropriate dimensions, such that: By Applying the LMI method, we have where Considering the subsequent switched adaptive controller, VS control rule, and adaptive law, the switched system (1) would be asymptotically stable as follows: where D −1 r is the generalized right inverse of the matrix D r , ̂ is the estimate of θ, and S r is the sliding sector outlined in (11).
Proof For the r subsystem of the system (1), when (t) = 0 , the Lyapunov functional candidate is constructed as follows: in which P is a positive matrix and ̃=̂( t) − . Then, for every nonzero state, x(t), V(x(t)) is positive. V(x(t)) can be derived with reference to t along with the solution of the system (1). Thus, we have: from Lemma 1 and Assumption 1, we have V (x(t)) ≤ x T (t) (A r + D r K ) T P + P(A r + D r K ) x(t) V (x(t)) ≤ x T (t) (A r + D r K ) T P + P(A r + D r K ) As with all the system matrices A r + D r K ,r ∈ M that are likely to be unstable, the variable structure control method is applied to stabilize the corresponding nonlinear switched system. Therefore, the Lyapunov function is defined for the autonomous system ẋ(t) = (A 0 + D 0 K )x + Bf r (x(t)) as: From (12) and (23), for ∀x ∈ ℝ n , we can derive: From (14) and (24), it can be easily obtained: Thus, there must be a scalar r ∈ {1, 2, ..., m} where ∀x ∈ S r when (t) = 0 , we can stabilize the nonlinear switched system (1) using the proposed VS control rule in (19). Remark 3 In Theorem 1, the adaptive controller u(t) is designed to accommodate the uncertainties. On the other hand, to stabilize the underlying system, the VS control approach with the sliding sector is investigated using a designed switching signal. To summarize, the proposed control design method is not only flexible but may also be capable of reducing the complexity and cost of the combination.
An H ∞ control designed for nonlinear switched system
Here, the H ∞ problem is investigated for the nonlinear switched system (1), when (t) ≠ 0 . Therefore, a definition is offered for the sliding sector as follows: Definition 3 Consider the sliding sector for the nonlinear switched system (1) in the state space ℝ n . For every subsystem, we have: in which P is a positive matrix, Q is a negative matrix, and are positive constants. (1), when (t) ≠ 0 , in which all the subsystem matrices are probably unstable. For the given positive scalar r , in which r ∈ [0, 1] , ∑ m r=1 r = 1 , p ∈ M . If there exists a scalar > 0 , > 0 , a positive matrix P, a non-negative matrix Q, and Hurwitz matrices A 0 ∈ R n×n , D 0 ∈ R n×m , E 0 ∈ R n×d with appropriate dimensions, such that:
Theorem 2 Consider the nonlinear switched system
where Therefore, the nonlinear switched system (1) is considered asymptotically stable according to the switched adaptive controller, adaptive rule, and VSC rule with the H ∞ performance index γ, in which D −1 r is the generalized right inverse of the matrix D r , Sp is the sliding sector, and ̂ is an estimate of θ, as described in (27). and, from the definition of A 0 , D 0 , C, B, E , we can obtain the following relationship: Then, ∀x ∈ S r we have: Thus, the above inequality can be maintained provided the discrete switching signal (t) = r is selected observing x(t) ∈ S r . Furthermore, consider the Lyapunov function when (t) ≠ 0: in which P is a positive matrix, and ̃=̂( t) − . Taking the derivative of V(t) along the trajectory of System (1), we can obtain the following relationship: So, we have: Now, for the nonlinear switched system (1), consider: Therefore, from the time derivative of V(t) and (40), we have: where and According to Lemma 2: However, to find of 2 < 0 , the obtained condition is altered by some manipulations, pre-and post-multiplication 2 by diagonal matrix ∑ = diag � P −1 I I I � and introducing X = P −1 > 0,Y = KP −1 = KX results in (28). Then, the system (1) based on the sliding sector, the switched adaptive controller, adaptive law, and VSC rule is uniformly asymptotically stable with an L 2 -gain smaller than γ.
Numerical simulations
In this section, we present a numerical example that attempts to validate the proposed method. Considering the Rossler system [39] as a subsystem (1), it is described as follows: where a 1 = b 1 = 0.2 and c 1 = 5.7 are three real constants, and the Newton-Leipnik system [40] as subsystem (2) is described as follows: where a 2 = 0.4 and b 2 = 0.175 are the system parameters. The state-space model comprises two subsystems and is described as follows: where the system matrices are: Although the eigenvalue of A1 and A2 are not Hurwitz stable, the associated Hurwitz stable convex combination with 1 = 0.2 and 2 = 1 − 1 can be obtained as follows:
Fig. 4 Control inputs
As seen in Fig. 1, the specified signals are chosen as 1 (t) = sin t , 2 (t) = cos t , and Accordingly, the system parameters are set to 1 = 2 , 2 = 5 , and 3 = 9 . Moreover, to demonstrate the effectiveness of the presented control in Figs. 2, 3, and 4, the initial values of the estimated parameters are ̂( 0) = 1 6 11 T , and the initial condition of state trajectories are considered to x(0) = 0.349 0 −0.16 T . As seen in the figures, the H∞ problem is solved by the proposed controller with acceptable performance. Figures 2 and 3 depict the time response of adaptive rule ̂( t) , and the state variable of the nonlinear switched system (47) under the variable control rule as well as the control input u(t) are illustrated in Fig. 4. Evidently, the designed reliable controller with H∞ performance not only leads to a globally asymptotical stable closed-loop system but also consistently offsets the associated uncertainties.
Conclusion
This paper attempted to solve the adaptive control problem for a class of nonlinear switched systems. To do so, first, the sufficient condition of the system stability was satisfied using the adaptive controller to adapt to system uncertainties, a convex combination method, and the switching signal. Then, considering that the subsystems are possibly unstable, an adaptive controller and VS switching rule were applied to ensure the globally asymptotical stability. Thereby, the H∞ control problem of the nonlinear switched system was solved with regard to linear matrix inequalities. Lastly, a numerical example was offered to reveal the unique and practical approach developed through the designed controller and the proposed switching strategy. Qualities of high accuracy, the rapid convergence of states toward zero, attenuation of disturbance and reduction of its effect on the nonlinear system are the advantages of our proposed method. As a future work, the authors seek to develop a new adaptive controller along with a switching sliding sector in the presence of parametric uncertainty and disturbance to maintain the states at desired values.
Acknowledgements N Pariz, the corresponding author was supported by a Grant from Ferdowsi University of Mashhad (N0. 48216).
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creat iveco mmons .org/licen ses/by/4.0/. | 4,886 | 2021-02-08T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
New Types of Doubly Periodic Standing Wave Solutions for the Coupled Higgs Field Equation
and Applied Analysis 3 of different theta functions. To obtain new doubly periodic wave solutions, we make a new ansatz, g = η [θ 3 (ωt, τ 1 ) θ 2 (γx, τ) + iθ 1 (ωt, τ 1 ) θ 4 (γx, τ)] exp (ipt) , f = θ 4 (ωt, τ 1 ) θ 4 (γx, τ) + θ 2 (ωt, τ 1 ) θ 2 (γx, τ) , (11) where the parameters ω, γ, η, and p are constants to be determined and the period τ in the spatial direction and the period τ 1 in the temporal direction are purely imaginary constants. Inserting the ansatz (11) into (9), together with the theta function identities given in Section 2, we set the coefficients of the terms θ2 2 (γx, τ), θ2 4 (γx, τ), and θ 2 (γx, τ)θ 4 (γx, τ) to be zero and get (γ 2 b 2 + ω 2̃ b 2 − Λ − δη 2 ̃ b 4 ) θ 2 2 (ωt, τ 1 ) + (γ 2 b 1 − ω 2̃ b 1 − δη 2̃ b 5 ) θ 2
In this paper, we will focus on a coupled Higgs field equation with important physical interests [19], which describes a system of conserved scalar nucleons interacting with neutral scalar mesons in particle physics.
Here and are constants, and the function V = V(, ) represents a real scalar meson field and = (, ) a complex scalar nucleon field.Equation ( 1) is related to some nonlinear models with physical interests.Equation (1) is the coupled nonlinear Klein-Gordon equations for < 0 and < 0 and the Higgs equations for > 0 and > 0. Much attention has been paid to investigate exact explicit solutions and integrable properties of (1).The symmetry reductions, the homoclinic orbits, -soliton solutions, rogue wave solutions, Jacobi periodic solutions, and other types of travelling wave solutions have been presented [19][20][21][22][23][24].
The Hirota bilinear method is a powerful tool for constructing various exact solutions for NLEEs, which include soliton, negaton, rogue waves, rational solutions, and quasiperiodic solutions [25][26][27][28][29][30][31][32][33][34][35].Recently, by means of Hirota bilinear method and theta function identities [36][37][38], Fan et al. obtained a class of doubly periodic standing wave solutions of (1) [39], which was expressed as rational functions of elliptic/theta functions of different moduli.A significant portion of these solutions represents travelling wave, that is, those which will remain steady in an appropriate frame of reference.Physically, the envelope of these oscillations is bounded by a pattern periodic in both time and space.The focus of this work is to investigate new types of doubly periodic standing wave solutions for (1).
This paper is organized as follows.In Section 2, we briefly illustrate some properties of theta functions and Jacobi elliptic functions.In Section 3, we construct a new kind of doubly periodic wave solutions for the coupled Higgs field equation.In Section 4, for the obtained periodic solution, we derive its Jacobi elliptic function representation and analyze interaction properties by some figures.Some conclusions are given in Section 5.
The Theta and Jacobi Elliptic Functions
The main tools used in this paper are Hirota operators and theta function formulas, which will be discussed here, to fix the notations and make our presentation self-contained.More formulas for the theta functions can be found in [36,37].
The Riemann theta functions of genus one (, ) ( = 1-4), the parameter (the nome), and (pure imaginary) are defined by [40] Here, , are the complete elliptic integrals of the first kind: 1 is an odd function while the other three are even functions.There exists a large class of bilinear identities involving products of theta functions, some of which are listed here: where for simplicity we have used the notations and the formulas (4) can be derived from product identities of theta functions; the details can be found in [36,37].
A New Class of Doubly Periodic Wave Solutions
In this section, we construct a new class of doubly periodic wave solutions by Hirota bilinear method [2].For (1), substituting the following transformation into (1) and integrating with respect to yield the bilinear forms where V 0 is a constant, Λ = Λ() is an integration constant, and is the well-known Hirota bilinear operator.Equation ( 10) is slightly different from the results given in [39] by adding one integration constant term.The crucial step to derive doubly periodic wave solutions is to suppose and in ( 9) and ( 10) as suitable combination of different theta functions.To obtain new doubly periodic wave solutions, we make a new ansatz, = [ 3 (, 1 ) 2 (, ) + 1 (, 1 ) 4 (, )] exp () , = 4 (, 1 ) 4 (, ) + 2 (, 1 ) 2 (, ) , (11) where the parameters , , , and are constants to be determined and the period in the spatial direction and the period 1 in the temporal direction are purely imaginary constants.
Therefore, we obtain a new doubly periodic wave solution of (1): where , , and 1 are arbitrary constants, and the parameters , , , V 0 , and Λ are given by (17).To the author's knowledge, the solution ( 18) is firstly reported here.
In fact, the coupled Higgs field equation ( 1) admits abundant families of doubly periodic wave solutions.For example, we can suppose the solutions of the bilinear equations ( 9) and (10) as and so on.For the sake of simplicity, the tedious computations are omitted here.It is noted that these two types of periodic solutions have also two independent periods in the spatial and temporal directions.
Jacobi Elliptic Function Expressions and Long Wave Limit
In order to analyze the periodic property by some figures, we may first convert solution (18) into Jacobi elliptic function expressions.Together with ( 6) and ( 7), solution ( 18) can be expressed as rational forms of Jacobi elliptic functions: where is an arbitrary constant, and the parameters , , Λ, and V 0 are given by which indicates that the period in the spatial direction and the period 1 in the temporal direction are related by In ( 21) and ( 22), the complete elliptic integrals and 1 are defined by From (21), it is easy to check that which implies that the solution (||, V) is periodic in the -direction with a period 4/ and the -direction with a period 4 1 /.By selecting appropriate parameter values in (21), the interactions of doubly periodic waves are shown in Figures 1 and 2. It is clearly seen that (||, V) is periodic in the -direction and the -direction.For the solution in [39], the periodic waves are both bell shaped in the spatial and temporal directions.However, with regard to solution (21), the periodic waves are of different shapes in the spatial and temporal directions.
Abstract and Applied Analysis 7
When the modulus → 0 and 1 → 1, one obtains a new periodic-solitary wave solution as follows: where the parameters , , and are given by With proper selections of the values of , , , and , the interactions of periodic solitary waves (27) are shown in Figures 3 and 4. The solution (||, V) displays the feature of a dark soliton in the -direction; the cosine function causes periodic modulation and thus it is periodic in the -direction.
With the aid of the computer algebra software Maple, the validity of the new solutions ( 18) and ( 27) are verified by putting them back into the original systems (1).
Conclusions
The combination of the Hirota bilinear method and theta function identities is demonstrated to be a powerful tool in finding periodic waves for the coupled Higgs field equation.As a result, we have derived a new kind of doubly periodic standing wave solutions for the coupled Higgs field equation, which is different from those of the known solutions reported in the literature.The interaction properties of periodicperiodic waves and periodic-solitary waves are analyzed by some figures.
The key of the combination method is that the solutions are supposed as rational expressions of elliptic functions of different moduli, which should be applicable to other nonlinear evolution equations or systems with bilinear forms in mathematical physics.The doubly periodic solutions will prove to be beneficial and instructive in modeling and understanding nonlinear phenomenon. | 1,961.4 | 2014-03-19T00:00:00.000 | [
"Physics",
"Mathematics"
] |
Communication Problems: Advantages and Disadvantages of Teaching Autistic Children with Humanoid Robots
The relevance of the investigated problem is caused by the need of society education to be focused on a humanitarian approach to provide education not only to healthy children but also to patients with autism. This article is focused on the search for educational tools using robotic technologies for teaching children with autism a foreign language. The methods used in this article are based on a comparative analysis of two approaches, such as the traditional method of teaching English and using robotic systems. The article reveals that traditional system of education, organization of the educational process, is not suitable for increasing the number of children with autism. The main result of the research is the developed criteria for evaluating the effectiveness of strategies for teaching children with autism a foreign language which is based on the analysis of the effectiveness of existing educational tools. Recommendations are developed. Materials of the article can be useful for researchers in this field, teachers dealing with autistic children for enrichment foreign language teaching methodology and pedagogy.
Introduction
Currently, there are a number of serious diseases that prevent the effectiveness of educational processes and the assimilation of knowledge necessary for the realization of the individual and its socialization in society. The problem concerns children with autism. As a special disorder, autism was described in the 40s of the last century. Despite this, until the collapse of the USSR and even after, children were often diagnosed with "child schizophrenia", and spent their lives in psych neurological boarding schools. For a long time in the country, it was believed that children with autism need only care and treatment. It took half a century for autism to be officially recognized as a special type of health disorder in the Soviet Union.
This happened in 1989. However, it was not easy to bring this fact to the understanding of doctors, teachers, and society as a whole. Seven years ago, children with autism were considered uneducated in Russia. Only a Federal law «On education in the Russian Federation" adopted at the end of 2012 abolished this approach, recognizing all children as having the right to education (Special Federal State Educational Standard for primary school education of children with autism spectrum disorders).
A prototype of the Special Federal State Educational Standard (FSES) for primary school education of children with autism spectrum disorders (ASD) was developed by specialists of the Institute of Correctional Pedagogy in 2010. This project is currently being tested in pilot regions of Russia. This educational standard allows a child with an autistic component to exercise their right to education regardless of the place of residence, type of educational institution, or severity of the developmental disorder (Recommended BGEP (Basic General Educational Program) «Autism» adopted by the decision of the federal educational-methodical association for general education).
The Federal Educational and Methodological Association for General Education adopted the Prototype of ABGEP (Adapted Basic General Educational Program) Autism, which proposed a system for assessing the achievement of students with autism spectrum disorders with the planned results of mastering the adapted basic general educational program of primary general education (Karpekova, et al., 2016).
Purpose and objectives of the study
Purpose of the study is to work out criteria and evaluate the effectiveness of strategies for teaching children with autism a foreign language applying the analysis of the effectiveness of existing educational tools.
Literature review
According to researchers (Esterbrook & Esterbrook, 2013;Mukhina, 2017;Nemenchinskaya, 2014;Mandy et al., 2016 a), the difficulty of teaching a child suffering from autism spectrum disorder is caused by the following problems: lack or absence of communicative skills (lack of spoken speech, inability to initiate or maintain conversation (Mandy et al., 2016 b), limited and/or repeated actions andinterests (stereotype, auto aggression, limited behavior, etc. (Hornby, 2015). That is, a special educational approach is needed, in which classes will be held as productively as possible for such a child.
It should be noted that the problem of teaching children with autism spectrum disorder is relevant today. Foreign researchers also study various methods for the development of communication and learning skills of such children (Huijnen & De Witte, 2017;Vanderborght et al., 2012;Moghadam et al., 2015).
A completely new approach, available due to the rapid development of such field of science as robotics, was proposed by Huijnen and de Witte (2017). The researchers considered a method based on teaching a child with ASD (autism spectrum disorder) using a robotic system. Scientists indicate that robot assisted therapy (RAT) or robot mediated intervention (RMI) is considered the most promising method for teaching autistic children communication and social interaction because, communicating with a robot can be more comfortable, easier and more attractive than communicating with a person for such children. The researchers conclude that the most productive learning process is when a robot acts as a teacher (Taheri et al., 2015). At the same time, the area of teaching foreign language to children with autism remains poorly studied.
It is known that the basis of communication skills is the knowledge of the language, often not only native.
The above mentioned actualizes the need to consider the strategy of teaching children with autism, a foreign language using humanoid robots.
Methodology
During the study, both empirical and theoretical methods of scientific research were applied: description and comparative analysis of data to identify the advantages and disadvantages of traditional method or
Results
To assess the effectiveness of the criteria, we used genuine, real and concrete results of educational activities and performance of the set educational tasks for the child. At the same time, a special set of quantitative and qualitative indicators has been developed for each of these criteria, which contribute to an accurate and verified assessment. Both pedagogical and psychological parameters were used.
Considering the first criteria, namely the status-role characteristics, it was found that children with autism showed the greatest communication activity and feedback with the robot, the interface and communication of which coincided with its role. It should be noted that the students during such lessons more often talk about their fears, about their character, gave more information about their condition, which is very important when conducting classes. It is noted that children show more interest in the process of learning activities with the robot. Tactile contacts with the robot aroused interest among students, which is not always appropriate in the case of a human teacher. In other words, the children were convinced that the situation around them is safe and the teacher-robot is a friend and you can share your experiences with it or ask it for help.
The analyses of the results of training with a human teacher showed that projected expectations were not met in 45% of cases due to the fact that the teacher was in a higher status role. The teacher received monosyllabic answers, while the children were shackled, felt discomfort, more lost, less concentrated, were more thoughtful, perceived the educational process as discomfort and had long pauses in the answers. This behavior can be described as having weak feedback. When teaching a language, it has a negative effect, because the process of mastering the material becomes more complicated each next lesson.
Analyzing the correct form of problem statement, it should be noted that due to the imperfection of the technical part used in modern robots, a human teacher copes with this teaching problem much better than robots. Studies indicate that 85% of teachers can confidently cope with the exact setting of the educational task, taking into account the level of knowledge, the amount of knowledge acquired, and the psychoemotional state of the student.
Considering learning environment as creating favorable conditions for high communicative activity of children with autism as the third criterion, it was found that there are almost no serious differences between the two educational strategies, but in the case of a robot, we cannot change the teacher. One robot can be used for different subjects, respectively; there is no need to get used to the speech characteristics, appearance, behavior and the method of teaching. In the case of a human teacher, this is not always guaranteed. The results of the work confirm 75% of successful training activities using robots and 65% of effective training activity with a teacher. Natalya A. Sigacheva, Alfiya R. Baranova, Khanif F. Makaev / Proceedings IFTE-20202391 An important and necessary aspect of working with autistic children is the development of an individual educational plan. Since the student's intellectual, mental, physical, and emotional state changes daily, it is difficult or even impossible to reprogram the robot to solve an urgent problem. According to most researchers, in this case, the human teacher has an undeniable advantage. Pilot studies have shown that 99.5% of teachers are able to assess a child`s capabilities and create a correct, the most productive individual curriculum.
The last criterion, namely home education, allows evaluating the effectiveness of two compared technologies while teaching a child at home. It was found that the human teacher copes with the task of teaching at home better than a robot teacher, due to a number of reasons. Firstly, currently there are no such robots that could accurately determine in which areas the child does not have enough knowledge in contrast to the human teacher. Secondly, development of a personal robot for each child suffering from autism is a very costly and time-consuming process. Thirdly, to use a training robot at home, you need a trained, technical Troubleshooting specialist with pedagogical skills as an accompanying each lesson.
The results of the comparative analysis of the effectiveness of the developed criteria, depending on the choice of educational strategy, are listed in the table (Table 1).
Discussions
To study the problem, it is necessary to consider such a term as autism. The most acceptable by scientists is such a definition: autism is a disorder in which a person has a lack of social interaction and communication, and this disorder is characterized by limited interests and repetitive actions. According to the researchers Taheri et al. (2015), Pour et al. (2018), Chang et al. (2010, Zilberman et al. (2015), autistic children, despite their passivity to communicate with anyone, enjoy working with various technological devices, such as robots and computers. So it can be assumed that humanoid robots have great potential in helping children with ASD in learning foreign languages, considering that children must be provided with special learning strategies. It should be noted that in order to optimize learning a foreign language in the classroom, students must communicate with each other in English, which can be quite difficult for children with autism. We agree with Karpekova (2016) that, communication can make them worry, which will lead to low learning outcomes in the classroom. Therefore, it is necessary to develop a learning environment in which it is possible to reduce the level of anxiety of children with autism and increase their productivity. The detailed and thorough analysis of the research works of domestic and foreign experts and our own experience allows us to work out criteria in order to evaluate the effectiveness of standard and robotic educational strategies.
The first criterion under consideration is the status-role characteristics of the teacher. Here it is necessary to take into account such qualities of a teacher as his manner of communication, appearance and positioning.
Status is a local characteristic of a person, and the closely related concept of social role which refers to the behavior expected of people with a certain status in accordance with the norms accepted in this society.
Another criterion is the correct form of problem statement. Using special learning strategies, children with ASD are able to learn a foreign language. Such children have difficulty in maintaining social communication, which is extremely important when learning English, thus you need some way to reward them for their attempts to communicate. Otherwise, the child may have negative factors such as confusion, stress, discomfort. It should also be borne in mind that it is necessary to duplicate the current task either verbally or in writing every time the child begins to be distracted from classes (Voloshin, 2016).
Another criterion is learning environment. That is providing favorable conditions for high communicative activity of children with autism. Accordingly, the first session each day should start with some breaks.
Also, there should be no sharp changes in activities, if we talk about the transition from physical education to a subject that requires other activity(mental), and mental -in the transition from a humanitarian subject to technical, where it is necessary to apply different approaches to the processing of information received by them.
The next criterion under consideration is the features of the individual educational plan. In most schools it is impossible to organize the educational process so that children with different levels of socialization are engaged in the same class. That is, children with autism even if they are socialized usually have a lower level of perception in the standard class.
Finally, let`s consider the possibility of home schooling as another criterion. The modern educational system presupposes the existence of inclusive education, and children with autism can be special users of it. Mainly, classes during such type of training are held at home in a more familiar environment.
Conclusion
In today's world, in view of some physical and/or psychological diseases, not all people can obtain the necessary skills and knowledge for life on an equal basis with others. Obviously, that the problem of teaching children with ASD is relevant today. The solution of this problem is impossible without state support and further research by scientists, teachers and psychologists as well as generalization of the practical experience of specialists working in this field.
It is really important to pay attention to the problem of developing a scientifically based system of criteria and evaluating the results of their effectiveness in the educational process. Of particular importance is the prospect of research in the field of teaching foreign languages using robotics. Insufficiency in this research field leads to a decrease in the effectiveness of teachers. Undoubtedly, learning foreign languages is a vital necessity for autistic children to expand the boundaries of their intellectual abilities and increase the level of communication abilities. It is in this direction that scientists should make their efforts.
In the context of this work, the analysis of existing educational technologies using robots and without was held. To assess the level of effectiveness of teaching autistic children a foreign language, the following criteria can be used: the status and role characteristics of the teacher, problem statement, the environment, the development of the individual educational plan, and the use of home schooling. It should be noted that due to the imperfection of the technological part of robotic systems, the training of autistic children is more effective with a human-teacher who uses a robot as an assistant.
Both researchers and educators working with autistic children can benefit from this study. The transition from primary to secondary school in mainstream education for children with autism | 3,504.2 | 2020-11-25T00:00:00.000 | [
"Education",
"Computer Science"
] |
Identification and Characterization of MicroRNAs in the Leaf of Ma Bamboo (Dendrocalamus latiflorus) by Deep Sequencing
MicroRNAs (miRNAs), a class of non-coding small endogenous RNAs of approximately 22 nucleotides, regulate gene expression at the post-transcriptional levels by targeting mRNAs for degradation or by inhibiting protein translation. Thousands of miRNAs have been identified in many species. However, there is no information available concerning miRNAs in ma bamboo (Dendrocalamus latiflorus), one of the most important non-timber forest products, which has essential ecological roles in forests. To identify miRNAs in D. latiflorus, a small RNA library was constructed from leaf tissues. Using next generation high-throughput sequencing technology and bioinformatics analysis, we obtained 11,513,607 raw sequence reads and identified 84 conserved miRNAs (54 mature miRNAs and 30 star miRNAs) belonging to 17 families, and 81 novel miRNAs (76 mature miRNAs and five star miRNAs) in D. latiflorus. One hundred and sixty-two potential targets were identified for the 81 novel bamboo miRNAs. Several targets for the novel miRNAs are transcription factors that play important roles in plant development. Among the novel miRNAs, 30 were selected and their expression profiles in response to different light conditions were validated by qRT-PCR. This study provides the first large-scale cloning and characterization of miRNAs in D. latiflorus. Eighty-four conserved and 81 novel miRNAs were identified in D. latiflorus. Our results present a broad survey of bamboo miRNAs based on experimental and bioinformatics analysis. Although it will be necessary to validate the functions of miRNAs by further experimental research, these results represent a starting point for future research on D. latiflorus and related species.
Bamboo, one of the most important non-timber forest products among the world's plant and forest resources, is widely distributed in the tropical and subtropical areas under fluctuating light conditions. Most bamboos are fast growing, reaching their full height and diameter within a single growth season, which indicates that bamboos may possess unique carbon assimilation mechanisms in their leaves. D. latiflorus is an evergreen species locally known as 'tropical giant bamboo', which forms abundant forests in southern China and southeast Asia, and is a valuable natural resource used as food, building material and other human consumption [27]. Consequently, D. latiflorus is an obvious choice for an initial study of miRNAs in bamboo.
Using high-throughput sequencing and bioinformatics analysis, we identified 84 conserved miRNAs belonging to 17 families, and 81 novel miRNAs from more than 11 million raw sequence reads generated from a small RNA library of ma bamboo leaf. One hundred and sixty-two potential targets were identified for the 81 novel bamboo miRNAs. In addition, to confirm the novel predicted miRNAs in bamboo leaf tissues, the expression profiles of 30 novel miRNAs under different light conditions were validated by qRT-PCR. These results will lay the foundation for understanding miRNA-based regulation during D. latiflorus development.
Plant material, RNA isolation and small RNA highthroughput sequencing
Cutting seedlings of ma bamboo (D. latiflorus) were potted in our laboratory under a regime of 16 h light and 8 h darkness at 25°C, with a light intensity of 200 μmol·m -2 ·s -1 and a relative humidity of 75%. As experimental materials, we chose the third piece of new functional leaf (blade tissue only) from the top of the branch, which could be considered a juvenile leaf. The leaves were collected from 2-year-old cuttings and quickly frozen in liquid nitrogen. Total RNA was isolated from leaf tissues using the Trizol reagent (Invitrogen, Carlsbad, CA, USA), according to the manufacturer's instructions.
The small RNA library construction for ma bamboo and Solexa sequencing were carried out at BGI-Shenzhen (Shenzhen, China) using the standard Solexa protocol [28]. Briefly, small RNAs of 15-30 nt in length were first isolated from the total RNA through 15% TBE urea denaturing polyacrylamide gels. Subsequently, 5′ and 3′ RNA adaptors were ligated to these small RNAs, followed by reverse transcription into cDNAs. These cDNAs were amplified by PCR and subjected to Solexa sequencing. After removing low quality reads and trimming adapter sequences, small RNAs ranging from 18-30 nt were collected and used for further analyses. Finally, the selected clean reads were analyzed by BLAST against the Rfam database (http://rfam.sanger.ac.uk/) [29] and the GenBank non-coding RNA database (http:// www.ncbi.nlm.nih.gov/) to discard rRNA, tRNA, snRNA and other ncRNA sequences. In addition, sequencing data within this study were firstly uploaded to NIH Short Read Archive (accession number: SRX347876).
Data content
To obtain and analyze miRNAs from the leaf of ma bamboo, three types of data were used in this study (1). MiRNA data. The miRNA data from the leaf of ma bamboo was obtained by next generation sequencing, and all the reference sequences of mature miRNAs, star miRNAs and their precursors (pre-miRNAs) were downloaded from miRBase, Release 19.0 (http://www.mirbase.org/index.shtml) [13]. (2) Genome data of moso bamboo (Phyllostachys heterocycla var. pubescens) [30]. Although the whole genome sequence of ma bamboo is not available, phylogenomic analyses suggested that ma bamboo and moso bamboo had the closest relationship, with high sequence similarity [31]. For this reason, the sequenced genome of moso bamboo from our previous study was used as the genome to analyze ma bamboo miRNA data (3). Expressed sequence tag (EST) and mRNA data from ma bamboo. The approach outlined in (2) would inevitably produce some biases and some miRNAs would not be identified. Therefore, to fill the gap, ESTs and other mRNA data from ma bamboo were obtained from the NCBI database site and used to identify additional miRNAs.
Prediction of conserved miRNAs, novel miRNAs and potential miRNA targets in ma bamboo
As miRNA precursors have a characteristic fold-back structure, 150 nt of the sequence flanking the genomic sequences of small RNAs (sRNAs) was extracted and used to predict miRNAs. For analysis of conserved miRNAs in ma bamboo, unique sRNAs were aligned with plant mature miRNAs in miRBase Release 19.0. First, after rigorous screening, all retained sequences with three or more copies were considered as potential miRNAs. Second, sRNA sequences with no more than four mismatched bases were selected by BLAST searching against miRBase. Third, the remaining 15-26 nt reads were used to map the genome of moso bamboo and ESTs of ma bamboo using the BLASTN program. Sequences with a tolerance of two mismatches were retained for miRNA prediction. RNAfold (http:// www.tbi.univie.ac.at/RNA/) [32] was used for secondary structure prediction (hairpin prediction) of individual mapped miRNAs, using the default folding conditions to identify the known conserved miRNAs in ma bamboo. Finally, sequences that were not identical to the conserved miRNAs were termed novel miRNAs.
Potential target sequences for the newly identified miRNAs were predicted using the psRNATarget program (http:// plantgrn.noble.org/psRNATarget/) with default parameters. Newly identified miRNA sequences for ma bamboo were used as custom miRNA sequences, while coding sequences for moso bamboo, as well as EST and mRNA databases for ma bamboo, were used as custom plant databases, respectively. All predicted target genes were evaluated by the scoring system and criteria defined in a previous report [33]. Sequences with a total score less than 3.0 were identified as miRNA targets.
Expression analysis of novel miRNAs by qRT-PCR
The stem-loop qRT-PCR method [3] was used to detect the expression levels of novel miRNAs in D. latiflorus. Forward primers were specifically designed for each individual miRNA, as detailed in a previous method [3]: six nucleotides of the 3' end of the stem-loop RT primer were complementary to the 3' end of the mature miRNA, and the sequence 5' -CTCAACTGGTGTCGTGGAGTC -3' was used as the universal reverse primer [34]. U6 snRNA was used as an internal control [35]. More detailed information is supplied in File S1.
The qRT-PCR was conducted using a SYBR Green I Master Kit (Roche, Germany) on a LightCycler ® 480 Real-Time PCR System (Roche). The final volume was 20 μl, containing 7.5 μl 2×SYBR Premix Ex Taq, 0.3 μl of each primer (10 μM), 2 μl of cDNA and 7.2 μl of nuclease-free water. The amplification was carried out as follows: initial denaturation at 95°C for 10 min, followed by 41 cycles at 95°C for 10 s, 55°C for 20 s, and 72°C for 10 s. The melting curves were adjusted as 95°C for 5 s and 55°C for 1 min and then cooled to 40°C for 30 s [36]. All reactions were repeated three times.
For each condition, the qRT-PCR experiments were performed as biological triplicates and expression levels were normalized according to that of the internal control. The relative value of the gene expression was calculated using the 2 -ΔΔCt method [37]. Statistical tests were performed on the qRT-PCR data using SPSS (Statistical Product and Service Solutions) 18.0 software. Error bars representing the standard deviation were derived from the three experiments in triplicate.
Results and Discussion
Overview of small RNA library sequencing Deep sequencing of the small RNA library from D. latiflorus leaf tissues produced 11,513,607 raw sequence reads. Low quality sequences, adapters and small sequences shorter than 16 nucleotides (nt) were removed, leaving 10,593,305 clean reads and 6,320,379 unique sequences. After further removal of unannotated small RNAs and non-coding RNAs, such as tRNAs, rRNAs, siRNAs, snRNAs, snoRNAs and other noncoding RNAs, 910,151 miRNA sequences, accounting for 8.59% of the total sRNA, were identified ( Table 1).
The significant feature of the size profile permitted miRNAs to be distinguished from other small RNAs. The sRNAs length distribution (10-30 nt) of the original reads demonstrated that the most abundant reads were those of 20-24 nt in length (Figure 1), which were consistent with sRNAs with known function [19]. The most abundant class was those of 24 nt, which was also highly consistent with those small RNA of other Poaceae plants based on Solexa sequencing technology, such as rice [38], barley [39] and wheat [40].
Identification of conserved miRNAs in ma bamboo
Conserved miRNAs have been found in many plant species and play significant roles in plant development and stress responses [4]. To identify the conserved miRNAs in D. latiflorus, the sRNA library was searched using BLASTN for unique mature plant miRNA sequences in miRBase Version 19.0, in which miRNAs of bamboo had not been identified. Following a set of strict filtering criteria and further sequence analysis, 54 conserved miRNAs and 30 conserved star miRNAs were identified from D. latiflorus, which belonged to 17 families (Table 2).
Compared with the conserved miRNA families in plants reported by Cuperus et al [41], 15 conserved miRNA families, accounting for 88% of the conserved miRNA families in ma bamboo, were represented in ma bamboo. However, of the 29 conserved miRNA families in ehrhartoideae [41], 14 conserved miRNA families were not represented in ma bamboo. This may have resulted from the data in miRBase database being a different version, may represent the particular evolution and function of miRNAs in ma bamboo, or may have resulted from the inevitable biases in processes such as the preparation of samples, sequencing and analysis. Further experiments using increased amounts of data are required to verify the conserved miRNA families in ma bamboo. In addition, based on the moso bamboo genome, 13 conserved miRNA families were predicted and compared with those experimentally derived from ma bamboo. This analysis indicated that eight conserved miRNA families, accounting for 62% of all families predicted in moso bamboo, were represented in ma bamboo. This may be explained by the fact that they are two different bamboo species with distinct biological and evolutionary features. For example, the number of chromosomes is 68 and 48 in ma bamboo and moso bamboo, respectively.
MiRNAs with high sequencing frequencies have been shown to play fundamental and essential regulatory functions in maintaining biological processes. Therefore, the read counts for known miRNA families were analyzed ( Figure 2). The most abundant miRNA family (594,744 reads) was miR168, which was represented five to 41 times more frequently than other relatively high abundance miRNAs, including miR156/157, miR535, miR165/166 and miR167, whose total abundances ranged from 14,448 to 109,638 reads. Moreover, the top three The number of members in the differently conserved miRNA families was also analyzed. Four families (miR156/157, miR165, miR167 and miR535) contained multiple members, with nine, six, five and six members respectively. Five families, miR390, miR396, miR399, miR827 and miR1878, only had one member.
The bamboo flowering cycle can take up to 120 years and also involves infrequent and unpredictable flowering events, as well as peculiar monocarpic behavior, e.g., flowering once before culm death [26]; therefore, the mechanism of flowering, as a unique characteristic of bamboo, has received much research interest. Studies in other plants indicated that miRNAs play important roles in flower development [42,43]. The transition from juvenile to adult phase is controlled by miR156 and miR172. MiR172 also influences floral organ identity, as evidenced by failure of carpel abortion from the male inflorescence [42]. In addition, extensive analysis of certain monocotyledonous species, such as Brachypodium distachyon, Oryza sativa, Sorghum bicolor, and Zea mays, revealed more than three members in each miR172 family. Therefore, absence of miR172 may contribute to bamboo's special regulatory mechanism of flower development. Other miRNAs (miR164, miR167, miR169 and miR319) with functions during different stages of flower development were detected in D. latiflorus.
As seen in Figure 3, to explore the evolutionary roles of these conserved miRNAs, deep analyses focused on extensive comparisons against known conserved miRNAs in other plant species, including Picea abies, Pinus taeda, Physcomitrella patens, Selaginella moellendorffii, Arabidopsis thaliana, Brassica napus, Ricinus communis, Medicago truncatula, Citrus sinensis, Vitis vinifera, O. sativa, S. bicolor and Z. mays. Based on BLAST searches and sequences analysis, some miRNA families of D. latiflorus (miR156/157, miR160, miR165/166, miR171, miR319, miR390 and miR396) are highly conserved and ancient. For example, miR156/157 is a family of endogenous miRNAs with a relatively high expression level in the juvenile phase of many plants. The level of miR156/157 gradually decreases with plant age [44,45].
However, some miRNA families (miR528, miR535, miR827, miR1318 and miR1878) are less evolutionarily conserved and are involved in regulation of diverse physiological processes. For example, the miR528-target recognition site is only present within monocot genes, while all eudicot genes orthologous to SsCBP1 from Arabidopsis, poplar, grape, and soybean genomes completely lack the miR528-target recognition site, which supports the view that miR528 is a monocot-specific miRNA [46]. Another example is miR535, which was predicted to target a gene encoding a brassinosteroid signaling positive regulator protein in Physocmitrella patens [47]. It was reported that mature miR535 may be considered as a divergent member of the miR156/157 family, and targeting of SPL genes may be a vestigial function of miR535, which may have been performed more efficiently in the course of evolution by other members of the family [48].
The remaining five miRNA families, including miR164, miR167, miR168, miR169 and miR399, are homologous in eudicotyledons and monocotyledons, indicating these miRNA families may be recent and are more sensitive to certain stresses. For example, miR168 regulates loop ARGONAUTE1 (AGO1) homeostasis, which is crucial for the regulation of gene expression and plant development [49]. Another report demonstrated that NFYA5 contained a target site of miR169, which was downregulated by drought stress through an ABAdependent pathway [50,51]. MiR399 is upregulated in phosphate starvation and its target gene, encoding a ubiquitinconjugating E2 enzyme, is downregulated in A. thaliana [50,51].
Identification of novel miRNAs in ma bamboo
Previous studies have shown that each species has speciesspecific miRNAs [17,18,24,25,33,39,[52][53][54]; therefore, ma bamboo is also likely to have unique miRNAs. In addition, a distinct feature of miRNAs is the ability of their pre-miRNA sequences to adopt the canonical stem-loop hairpin structure. After excluding sRNA reads homologous to known miRNAs and other non-coding sRNAs, the remaining 18-24 nt sRNAs were subjected to secondary structure analysis of their precursors using RNA fold software. Seventy-six novel miRNAs and five novel star miRNAs were identified as possible miRNA candidates in ma bamboo ( Table 3). ESTs of ma bamboo were used to predict miRNAs; however, no novel miRNAs were Moreover, the length distributions of plant pre-miRNAs are more heterogeneous than animal pre-miRNAs. The shortest pre-miRNA is 53 nt in length (miRBase ID: ath-MIR5645b) and the longest in 938 nt in length (miRBase ID: aly-MIR858) in miRBASE Version 19. In ma bamboo, the distribution of pre-miRNAs is 54-96 nt, which is consistent with the pre-miRNAs distributions in other plants [17,25,39]. In addition, the length distribution of novel miRNA ranged from 18-24 nt in length. Forty-eight (45.3%) of the novel miRNAs belong to the 24nucleotide class, representing the most abundant novel miRNAs. Among the novel miRNAs, dla-miRC1 had the highest expression in our data, with 4,801 reads. Based on their frequencies and sequences in the small RNA library, although the expression levels of these candidates ranged from thousands of reads to single reads, in general, novel miRNA candidates of D. latiflorus showed lower expression compared with most of the conserved families. The low abundance of novel miRNAs might indicate that these miRNAs play a specific role in certain tissues or developmental stages, and may be considered young miRNAs in terms of evolution [41]. This sRNA library was only generated from leaf tissues, and future experiments will be carried out to determine whether these lowabundant miRNAs are expressed at higher levels in other organs, such as flowers and seeds, or whether they are regulated by certain stresses.
Prediction of miRNA targets in ma bamboo
The identification of miRNA targets using bioinformatics approaches is an essential step to further understand the regulatory function of miRNAs [25,33,55]. Given the lack of genomic data for ma bamboo, the CDS from the genome of the closely related species moso bamboo and ESTs of ma bamboo were used to predict new miRNA target genes, using the criteria described in the material and methods. We identified 176 potential targets, including 139 targets from moso bamboo in File S2 and 37 targets from ma bamboo (Table 4).
Among the novel miRNAs, dla-miRC5 has 26 putative target genes with different functions, which indicates that dla-miRC5 might be involved in regulating the expression of multiple genes in ma bamboo. Some of the novel miRNAs target transcription factor (TF) genes that have been confirmed to play key roles in plant development. The targets of dla-miRC1 are auxin response factor (ARF) TF genes. ARF can regulate auxin response genes by binding specifically to cis-elements of their promoters to affect the developmental process of plants [55]. To date, miR160 and miR167 have been demonstrated to inhibit ARF10, ARF16, ARF17 [8,56] and ARF6, ARF8 [57], respectively; however, dla-miRC1 is barely homologous to miR160 and miR167, indicating that it might be another specific suppressor of ARF genes in ma bamboo. In addition, an ERF gene (belonging to AP2/EREBP family) is predicted to be a target of dla-miRC37, a MYB TF gene is a putative target of dla-miRC56, and both dla-miRC9 and dla-miRC16 have one target gene belonging to the BHLH family of TFs. Scarecrow, of the GRAS TF gene family, is targeted by dla-miRC33. All the targets of these novel miRNAs were TF genes that function in regulating the development of plants [58][59][60][61], which indicated these miRNAs might play important roles in ma bamboo development and stress responses.
Some novel miRNAs, including dla-miRC2, dla-miRC5, dla-miRC13, dla-miRC35 and dla-miC45, are predicted to target genes related to chloroplast synthesis and photosynthesis in the leaf, which is the most important photosynthetic organ of plants. This result suggested that the miRNAs might be leafspecific and be involved in the process of regulating chloroplast synthesis and photosynthesis.
Expression profiles of novel miRNAs in response to light
The expression patterns of miRNAs could provide clues to their functions [22]. As an efficient and sensitive method for detecting gene expression [62], stem-loop qRT-PCR was developed based on the common qRT-PCR method, with advantages such as increased sensitivity and high accuracy. Therefore, stem-loop qRT-PCR has been widely employed to distinguish two miRNAs with small differences [3].
Among the novel predicted miRNAs, the top 30 novel miRNAs (according to the number of reads) were selected and primers were designed based on their highly specific stem-loop reverse sequences to experimentally identify their expression. As shown in File S3, the result of stem-loop qRT-PCR showed that each of the selected miRNAs had a highly specific dissolution curve, which indicated that the primers could be used for further analysis. Moreover, as shown in Figure 4, the in-depth analysis demonstrated the expression of the miRNAs under high light stress conditions; the number of miRNAs that were upregulated and downregulated was 10 and 16, respectively. The expressions of dla-miRC18, dla-miRC27-5p and dla-miRC27-3p were significantly increased under high light (P<0.01), among which dla-miRC18 was upregulated to 15 times the level of the control. However, the expression of dla-miRC1, dla-miRC19 and dla-miRC28 was significantly downregulated (P<0.01). Under dark condition, the expressions of dla-miRC1, dla-miRC22, dla-miRC25, dla-miRC27-5p and dla-miR27-3p were upregulated significantly (P<0.01), while the expressions of dla-miRC5, dla-miRC17 and dla-miRC29 were downregulated significantly (P<0.01).
This analysis also indicated that the expressions of dla-miRC1 and dla-miRC19 were downregulated significantly under high light and upregulated markedly under dark conditions (P<0.01), indicating that they were inhibited by light. However, dla-miRC29 was upregulated under high light and downregulated under dark, indicating it was light-induced expression. MiR27-5p and miR27-3p were cleaved from the same precursor and had similar expression profiles, being upregulated under both high light and dark, which indicated they might have the same promoter and cis-elements, and synergistic expression characteristics. The expression results indicated that these miRNAs were affected by light, and might play vital roles in the regulation of genes involved in light signal transduction and light stress.
Bamboo converts light energy into chemical energy through photosynthesis, which is one of the necessary processes supplying carbohydrates for the rapid expansion of cells. As the first comparative identification of miRNAs among leaves treated with distinct light conditions, this analysis showed that bamboo may possess a unique light regulation mechanism involving miRNAs, although its function and mechanism are currently unknown.
Conclusions
We identified 81 novel miRNAs and 84 conserved miRNAs belonging to 17 families in the leaf of ma bamboo using highthroughput sequencing and bioinformatics analysis. The results of qRT-PCR indicated the miRNAs might regulate the expression of genes involved in photosynthesis, which acts as a key metabolic pathway in the fast growth of bamboo. These miRNAs will add to the growing database of new miRNAs and lay the foundation for further understanding of miRNA function in the regulation of ma bamboo development and other biological characteristics. These miRNAs identified in the leaf of ma bamboo provide new opportunities for future functional genome research in bamboo and other related species.
Supporting Information
File S1. Primers used in novel miRNAs qRT-PCR. | 5,248.6 | 2013-10-21T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Is climate exacerbating the root causes of conflict in Mali? A climate security analysis through a structural equation modeling approach
Climate continues to pose significant challenges to human existence. Notably, in the past decade, the focus on the role of climate on conflict and social unrest has gained traction in academic, development, and policy communities. This article examines the link between climate variability and conflict in Mali. It advances the argument that climate is a threat multiplier, in other words, climate indirectly affects conflict occurrence through numerous pathways. We take the view that maize production and household food security status sequentially mediate the relationship between climate variability and the different conflict types. First, we provide a brief review of the climate conflict pathways in Mali. Second, we employ the path analysis within the structural equation modeling technique to test the hypothesized pathways and answer the research questions. We use the Living Standards Measurement Study-Integrated Surveys on Agriculture (LSMS-ISA), a nationally representative data from Mali merged with time and location-specific climate and the Armed Conflict Location and Event Data (ACLED) data. Results show that an increase in positive temperature anomalies when sequentially mediated by maize production and household food security status, increase the occurrence of the different conflict types. The results are robust to the use of negative precipitation anomalies (tendency toward less precipitation compared to the historical norm). Our findings highlight two key messages, first, the crucial role of climate change adaptation and mitigation strategies and interventions on influencing household food security status and thus reducing conflict occurrence. Second, that efforts to build peace and security should account for the role of climate in exacerbating the root causes of conflict.
Is climate exacerbating the root causes of conflict in Mali? A climate security analysis through a structural equation modeling approach Climate continues to pose significant challenges to human existence. Notably, in the past decade, the focus on the role of climate on conflict and social unrest has gained traction in academic, development, and policy communities. This article examines the link between climate variability and conflict in Mali. It advances the argument that climate is a threat multiplier, in other words, climate indirectly a ects conflict occurrence through numerous pathways. We take the view that maize production and household food security status sequentially mediate the relationship between climate variability and the di erent conflict types. First, we provide a brief review of the climate conflict pathways in Mali. Second, we employ the path analysis within the structural equation modeling technique to test the hypothesized pathways and answer the research questions. We use the Living Standards Measurement Study-Integrated Surveys on Agriculture (LSMS-ISA), a nationally representative data from Mali merged with time and location-specific climate and the Armed Conflict Location and Event Data (ACLED) data. Results show that an increase in positive temperature anomalies when sequentially mediated by maize production and household food security status, increase the occurrence of the di erent conflict types. The results are robust to the use of negative precipitation anomalies (tendency toward less precipitation compared to the historical norm). Our findings highlight two key messages, first, the crucial role of climate change adaptation and mitigation strategies and interventions on influencing household food security status and thus reducing conflict occurrence. Second, that e orts to build peace and security should account for the role of climate in exacerbating the root causes of conflict. KEYWORDS climate security, conflict, impact pathways, food insecurity, Mali, mediation analysis, structural equation modeling, climate variability
Introduction
The recent Intergovernmental Panel on Climate Change (IPCC) report identifies climate change and variability as one of the main challenges threatening human existence (IPCC, 2021). Together with other drivers, climate change and variability threaten human life in many ways including increasing the occurrence of natural disasters, undermining livelihoods security and peace. Concerning human security and peace, an increasing stream of research over the past decades has addressed the climate-conflict nexus (Burke et al., 2009;Fjelde, 2015;Froese and Schilling, 2019;Helman et al., 2020). An ongoing debate within this stream of research revolves around the arguments of causality and the mechanism or the contextual pathways through which climate may affect human security, peace and stability (Busby, 2018;Martin-Shields and Stojetz, 2019).
Existing empirical studies contributing to the climateconflict debate provided mixed findings. Some support the argument that climate change exacerbates conflict (Burke et al., 2009;Crost et al., 2018;van Weezel, 2020) while others find no effect of climate change on conflict (Bergholt and Lujala, 2012;Slettebak, 2012). Scholars that support the argument that climate change and variability exacerbate conflict can be categorized into two. The first category conceptualize that climate variability has a direct effect on conflict, the second category postulate that the relationship is mediated by economic, social or political factors (Sakaguchi et al., 2017). On the one hand, studies that hold the view of a direct relationship between climate variability and conflict are framed from the General Aggression Model which state that higher temperatures trigger human aggression (DeWall et al., 2011), and Routine Activity Theory which holds that higher temperatures force people to spend more time outdoors increasing chances of that may undermine peace (Groff, 2008). On the other hand, those that take the indirect effect stance argue that climate variability affects conflict trough some intervening factors such as food insecurity (Koren and Bagozzi, 2017;Anderson et al., 2021), crop production (Wischnath and Buhaug, 2014;Caruso et al., 2016;Jun, 2017), and poverty and inequality (Harris and Vermaak, 2015;Helman et al., 2020) and country's economic growth (Bergholt and Lujala, 2012).
Even with the growing consensus that there is an indirect relationship between climate and conflict, there are no generally agreed upon impact pathways, rather, the indirect relations are complex and dynamic with feedback mechanisms. In the studies supporting the hypothesis that climate is indirectly associated with the emergence and persistence of conflict, resource scarcity is the dominant discourse explaining the mechanism at play (Klomp and Bulte, 2013;Salehyan and Hendrix, 2014;Raleigh et al., 2015).
Resource scarcity discourse views climate as a driver that creates resource scarcity which in turn fuels conflict (Evans, 2011;Ide, 2017). Access to arable land and water are some of the resources that are often adversely affected by climate variability and when the access is limited, conflict may arise (Hendrix and Salehyan, 2012;Koubi et al., 2012). For instance, in Africa where a majority of the countries rely on agriculture for economic development, an adverse climate variability may result in reduced agricultural production leading to livelihood and food insecurities and this may in turn trigger emergence of conflict events (Couttenier and Soubeyran, 2014). Moreover, for economies that rely on agricultural sector, reduction in agricultural production due to climate variability may lead to reducing employment opportunities and incomes, and rising food prices which may substantially increase conflicts (Fjelde, 2015). Such indirect relationship between climate and conflict constitute a significant "threat multiplier" to the peace and stability of the communities that rely on agriculture (Hegre et al., 2016). Two shortcomings are evident in the studies that attempt to unravel indirect effects of climate change on conflict, firstly, they do not provide much insight into the contextspecific pathway linking climate and conflicts (van Weezel, 2020). Second, the studies on the effect of food production make an implicit assumption that food production is the main cause of food insecurity (Jun, 2017), while the studies on the effect of food security make an implicit assumption that food insecurity is as a result of decline in production due to climate change and variability. In other words, there are limited attempts to model the sequential association between food production and household food security status in influencing conflict.
Another unsettled issue within the climate-conflict research is the question of which climatic events influence conflict. Largely, existing studies consider precipitation and temperature anomalies-the deviation from the historical normal precipitation and temperature. Precipitation and temperature anomalies have been shown to have different effects on conflict depending also on the type of conflict in consideration (Hsiang et al., 2013). On the one hand, an increase in temperature anomalies has been shown to exacerbate conflict (Burke et al., 2009;Collard et al., 2021). On the other hand, rainfall anomalies show inconsistent results, for instance, some studies have found no effect of rainfall anomalies on conflict (Bergholt and Lujala, 2012), others have found rainfall abundance increases conflict (Theisen, 2012;Salehyan and Hendrix, 2014) yet others such as Hendrix and Salehyan (2012) have found a curvilinear relationship between rainfall and conflict.
This paper contributes to filling the knowledge gap and on the debate on the association between climate and conflict in at least four ways. First, we provide a contextualized impact pathways for Mali explaining the mechanisms through which climate variability may trigger to conflict. Second, .
we model maize production and household food security status as sequentially mediating the association between climate variability and conflict. Third, estimate path analysis (serial mediation) through the structural equation modeling (SEM) approach. Fourth, we provide a detailed analysis of the association between both temperature and precipitation variability and conflict. Overall, we advance the argument that the relationship between climate and conflict is complex and dynamic. Specifically, we hypothesize that climate variability negatively affects maize production and this, in turn, adversely affects the household food security status which consequently may trigger different types of conflicts.
The next sections of this article are organized as follows. In Section Climate security impact pathways we briefly provide the contextual climate security pathways in Mali and the theoretical framework of the mechanisms that explain the relationship between climate variability and conflict. Section Data and methods outlines the data and methods. Section Results presents the results and discussion, and in Section Conclusions we draw conclusions placing our findings in the growing debates on climate security, climate adaptation and mitigation in fragile contexts and climate finance.
Climate security impact pathways Pathway : Resource availability and livelihood insecurity
For the past three decades, Mali has experienced an increase in competitive pressures over the access to and use of natural resources by different livelihood groups. These groups are often associated with specific ethnic groups, leading to overlaps between conflict lines. For instance, in northeast Mali, there are considerable tensions between Tuareg and Fulani pastoralist communities over the control of pasture lands and sources of water for their livestock (Nagarajan, 2020) while in central Mali, Fulani herders have also had confrontations with Dogon and Bambara farmers over access to pastures (Benjaminsen and Ba, 2009;Nagarajan, 2020;Hegazi et al., 2021).
Climate change and variability in Mali continues to impact negatively climate-sensitive livelihoods, including agriculture, livestock, and fishing, reducing their production and productivity (Nagarajan, 2020). The combined effect of a rise in temperatures and rainfall variability is likely to result in reduced productivity of some staple crops such as millet, sorghum, maize, and rice as well as cash crops such as cotton (Ministry of Foreign Affairs of the Netherlands, 2018; USAID, 2019). National reports indicate that the climate change reduces animal weight, decrease forage yield, and increase the prevalence of animal diseases, reducing the overall livestock productivity (Ministry of Foreign Affairs of the Netherlands, 2018). The result of the impact of climate will likely translate into increasing food insecurity, malnutrition, poverty, and poor health, which have been considered as the root causes of conflict (USAID, 2018(USAID, , 2019Nagarajan, 2020).
In this context, the climate crisis has the potential to exacerbate the competition over the access to and use of available resources through its impact on natural resource availability and environmental conditions. In Mali's conflict-affected context, the increasing competition may continue to reduce levels of social cohesion, further increasing the risks that conflicts will be sustained or (re)emerge between and amongst different socio-professional and ethnic groups (Raineri, 2018;Ursu, 2018;Nagarajan, 2020).
Pathway : Farmer-herder conflict
Farmer-herder conflicts have increased in the last decade due to various factors, including the expansion of farming into livestock corridors and the mobility of herders induced by the violent conflict and droughts (Ibrahim and Zapata, 2018;Jourde et al., 2019). The increasing variability in climate and the rise in the number of extreme weather events have negatively affected pastoralist communities in different ways, including the reduction of pasture and water that will further diminish their ability to maintain their primary source of livelihood (Ministry of Foreign Affairs of the Netherlands, 2018; USAID, 2018; Nagarajan, 2020). Pastoralists are forced to change their routes in search of alternative resources while some farmers try to increase agricultural land, frequently at the expense of grazing areas (Ibrahim and Zapata, 2018). This often leads to disputes between farmers and herders, especially as these pressures push herding communities further south where there are fewer demarcated livestock corridors (Nagarajan, 2020).
Harsh climate conditions with more severe dry seasons force pastoralists to move toward the Niger Delta in search of pasture. This becomes a real problem when animals arrive before the crops have been harvested as they damage crops, impacting farmers' livelihoods and increasing the risk of food insecurity and conflict (Ibrahim and Zapata, 2018). If the coping and adaptive capacities are not addressed, the climate crisis will likely exacerbate the root causes of conflicts, increasing both the number and intensity of conflicts (Ibrahim and Zapata, 2018;Hegazi et al., 2021).
Theoretical background and hypotheses
The climate-conflict nexus national and international security issue as opposed to being understood as purely an environmental shock (Brzoska, 2012). Key to this debate is the argument that climate is a "threat multiplier" which amplify and compound the cascading effects of economic, social, and political risks that trigger conflict. This debate is active in fragile countries such as those in sub-Saharan Africa where conflict occurrence and climate effects are on the rise (Anderson et al., 2021). In the academic literature, the climate-conflict nexus has been conceptualized and theorized in a variety of ways leading to the application of different analytical methods and often yielding mixed findings. Broadly, two strands of conceptualizations exist: in the first strand, scholars test the hypothesis that climate variability may have a direct association with conflict (Hsiang et al., 2013), the second strand seeks to unravel the relative contribution of climate variability on conflict as mediated by other factors (indirect association) (Koubi et al., 2012). While these two strands of conceptualizations are interesting, the recent systematic review on climate-conflict nexus by Sakaguchi et al. (2017) identify the second strand as that provides the opportunity for policy makers and development community to design interventions that may reduce conflict. Whereas our research is rooted in the second strand of conceptualization, we nonetheless test the direct association between climate variability and conflict. Studies that have estimated the direct association between climate variability and conflict often stem from the intersection of psychology and economics disciplines. For instance, supporting this line of conceptualization (Anderson et al., 2000) and (Ranson, 2014) argue that high temperature increase the level of aggression and tension, in turn, this may increase the likelihood of violence and the probability that police officers use force (Vrij et al., 1994). There is, however, a caveat to this direct association between temperature and conflict, that is, to date, the physiological mechanism linking temperature to aggression or tension remains unknown (Hsiang et al., 2013). From the foregoing, we test the following hypothesis: H1: Climate variability is positively associated with conflict.
The mediating role of agricultural production and food insecurity
To advance the second strand of conceptualization that the association between climate variability and conflict is mediated by some factors, we reflect on the impact pathways. In general, the pathway through which climate variability may influence conflict are numerous, complex and context specific. According to Sakaguchi et al. (2017), the mediated association of climate and conflict emerge when climate variability interact with socio-economic factors, resource factors or processes of migration. Food (in)security has often been conceptualized as a mediator in the climate-conflict linkage (Koren and Bagozzi, 2016;Brück and d'Errico, 2019;Martin-Shields and Stojetz, 2019). Accordingly, Koren and Bagozzi (2016) identifies two pillars that are mostly likely to be contested through violent means, these are food availability and access pillars. They find that food scarcity is associated with an increased occurrence of armed conflict.
In another study, Martin-Shields and Stojetz (2019) found that at the household and individual levels, nutritional status and economic opportunities trigger participation in any form of anti-social behavior that undermine peace. Notably, there is an implicit assumption in the studies that have attempted to model food (in)security as a mediator between climate variability and conflict. The assumption is that climate variability affects food production which in turn affect household food security status. Indeed, the relationship between climate and food production is often not considered. Instead, another stream of studies has attempted to model agricultural production as a mediator assuming that reduced food production due to climate increases food insecurity and hence the emergence of conflict. In Indonesia, Caruso et al. (2016) studied the effect of climate on conflict as mediated by rice yields. They hypothesize that climate may negatively affect rice production, and eventually food availability and food prices and thus positively the emergence of violence. Their results indicate that increase in the minimum temperature during the core month of the growing season leads an increase in violence driven by the reduction in future rice production per capita.
In sub-Saharan Africa, Jun (2017), studied the effect of temperature on civil conflicts mediated by maize yield. They postulate that high temperatures during maize growing season reduced the maize yield, which in turn increased the incidence of civil conflict. The findings support the hypothesis suggesting that that temperature-induced maize yield positively influences the incidence of civil conflict.
Finally, to our knowledge, limited effort has been directed to unravel the association between climate and conflict through both food production and food (in)security "closing the loop".
In this study we attempt to close this loop by modeling both maize production and food security status as mediators. The choice of maize yield is based on the importance of maize production to household food and livelihood security in Mali. Maize was widely adopted by farmers in the late 1970s following the great droughts during that decade as a crop diversification strategy aimed at addressing national chronic food shortages as well as ensuring food security (Diallo, 2011). The relevance of maize in Mali's total cereal production has been rapidly increasing since the 1990s, representing now around 25% of the total cereal production (Diallo, 2011;FAO, 2014). This boost in production was followed simultaneously by an increase in maize consumption, which went from 250,000 tons in 1996 to 700,000 tons in 2007 (Diallo, 2011). At the household level, annual maize . /fclim. . consumption has increased from 5.9 kg per person in 1980 to 50.9 kg per person in 2011, becoming the fourth most consumed product in Mali after rice, millet and sorghum (FAO, 2014). Human consumption accounts for 90% of the total domestic maize consumption, becoming a crucial cereal in the nutrition of most Malians, providing 10.8% of the total caloric intake in Mali (Diallo, 2011;CIAT et al., 2021). Unlike other cereal crops such as millet and sorghum, maize production is mostly grown for consumption with only 10 to 25% of the production being marketed (FAO, 2014). Recent studies have concluded that maize yield is a determinant factor in the food security of farming households, suggesting that the higher the farming household maize yield, the less the likelihood of food insecurity (Diallo and Toah, 2019).
In this study, we present food production (maize) and food insecurity as the mechanism through which climate influence conflict. To put our impact pathway into perspective, we postulate that climate variability (as measured by both precipitation and temperature anomalies) has a direct effect on maize production, and this in turn has a direct effect on household food security status, consequently influencing conflict. Given the above, we test the following hypotheses: H2: Climate variability is negatively associated with maize production H3: Maize production is negatively associated with food insecurity H4: Food insecurity is positively associated with conflict H5: Maize production and household food insecurity sequentially mediate the association between climate variability and conflict.
We test these hypotheses through a process called serial/chain mediation analysis in structural equation modeling technique-where the influence of the independent vari-able flows through multiple mediators before impacting the outcome variable (Collier, 2020). The theoretical model guiding this research is illustrated in Figure 1.
Data and methods Data
The data used to answer the research questions is based on rich nationally representative household data from Mali which is administered by the Living Standards Measurement Study-Integrated Surveys on Agriculture (LSMS-ISA) of the World Bank. We use the pooled data of the two waves of Mali LSMS-ISA (2014/15 and 2017/18). We use the pooled data since it is documented that it was not possible to track households between the two waves, thus it is recommended that the data should be considered a cross-sectional survey . The LSMS-ISA surveys collect detailed data on household characteristics, agricultural production, food security, shocks, and household assets among others.
Maize yield is derived from the agricultural production section calculated by the sum of harvested maize production (kgs) in the two waves of data, this approach has been used previously (Caruso et al., 2016;Jun, 2017). Food security measures are taken from the food security section of the LSMS-ISA data, we use five items that measure the state of household food availability and access, these are: (a) whether or not a household member skipped meals because of lack of resources to buy food; (b) whether or not a household member reduced the quantities of food consumed because of lack of resources to buy food; (c) whether or not you or other household members spent a whole day without eating for lack of money or other resources?; (d) whether or not you or other household members did not eat a variety of food they desired because of lack of money or other resources; and (e) whether or not you or other household members depended on borrowed food, or relied on help from relatives, neighbors or friends.
The conflict variables were derived from the Armed Conflict Location and Event Data Project (ACLED). ACLED is geo-Referenced event dataset collected and coded to tract the conflict and violence occurrence globally. It aims to capture the modes, frequency and intensity of political violence and conflicts as they occurs (Raleigh et al., 2010). In this paper, we consider five forms of conflicts as grouped in ACLED, these are (a) violence against civilians, (b) riots, (c) protests, (d) remote violence, and (e) battles. Violence against civilians are deliberate violent acts perpetrated by an organized political group such as a rebel, militia or government force against unarmed noncombatants. These conflict events harm or kill civilians and are the sole act in which civilians are an actor. Protests are non-violent, public demonstrations against political entities, government institution, policy or group on the other hand riots are violent forms of demonstrations. Remote violence refers to events in which the tool for engaging in conflict does not require the physical presence of the perpetrator. These include bombings, IED attacks, mortar, and missile attacks, etc. Remote violence can be waged on both armed agents and civilians. Battles are violent interactions between two politically organized are groups at a particular time and location. For more details about the ACLED data see ACLED (2019). In addition to these forms of conflict, we also included a variable called total conflicts which is the sum of all the conflict types in a location of interest.
The climate data used were derived from the Climate Hazards Group InfraRed Precipitation with Station? data (CHIRPS) which contains information on maximum and minimum temperature and precipitation (Funk et al., 2015). The household data, climate data and conflict data we merged using the month and year of survey and at the lowest administrative location referred to as cercle in Mali. We calculated the temperature and precipitation anomalies by considering the lagged values 3 months before the month of the survey.
To calculate the climate anomalies, we applied the formula by Maystadt and Ecker (2014).
where TA denotes temperature anomalies and PA precipitation anomalies.
T i,m,y and P i,m,y denote the monthly average temperature and monthly total precipitation in location (cercle) unit i during the month-year (m, y) time period. The long-term monthly mean is µ i,m , and the standard deviation is σ i,m .
For the conflict variables, we consider the number of the different forms of conflicts that were reported 12 months after the survey period. Following our conceptual model logic and mediated hypothesis, if maize production affects household food security, then it is not logical that climate variability in time t will affect food security in time t and hence conflict in time t. Therefore, to test the mediated hypothesis, we believe that climate variability 3 months before the month of survey (t − 3) will affect household food security within 12 months after the survey (t + 12), and consequently the number of different forms of conflicts within t + 12. The choice of calculating both temperature and precipitation anomalies 3 months before the month of survey takes into consideration the fact that maize has been established to mature between 180 to 210 days in the Sahel (Beah et al., 2021).
Empirical analysis
In this study we investigate the empirical associations between climate variability (as measured by the temperature and precipitation anomalies) maize production, household food insecurity and conflict. Given the complexity of the associations, we employ the structural equation modeling (SEM) approach which has previously used to unravel complex relationships such as the association between climate and conflict through different pathways (Helman et al., 2020;Yue and Lee, 2020). The SEM continues to gain popularity for modeling and estimating pathspecific associations within a complex set of relationships. The SEM has the advantage of allowing for the estimation of direct and indirect (mediated) effects of climate change on conflict. Given this characteristic, SEM is preferred over the standard linear regression as it allows for the isolation of specific direct effects from indirect effects. SEM thus is suited for testing the direct and mediated effects based on a priori hypotheses. We therefore use SEM to test our conceptual model in Figure 1. We present the standardized effects, the magnitude, and the signs.
Structural model and hypotheses testing
To estimate the structural model and test the mediation effects James and Brett (1984) recommend the use of SEM approach adopting a maximum likelihood estimation. To do .
/fclim. . this, we follow MacKinnon et al. (2002) and Collier (2020) procedure of simultaneously estimating the path from the climate variability (temperature and precipitation anomalies) to conflict as measured by the number of the different types of conflicts through two mediators, maize production and household food security status -serial mediation as illustrated in Figure 1. Given the lack of solid theory and the existence of numerous pathways explaining the association between the climate variability and conflict, we constrain the direct effects to 0 when testing mediation effects (James et al., 2006). The control variables were included in the structural model and regressed on the dependent variables (types of conflict). Descriptive statistics of the variables used in the analyses can be found in Appendix A1. The results of the structural model indicate that a good model fit was achieved as shown by the following fit indices CFI = 0.977; TLI = 0.958; RMSEA = 0.071; and SRMR = 0.054. These measures of model goodness of fit are within the recommended cutoff criteria, that is, CFI >0.95; TLI >0.95; RMSEA <0.08; and SRMR <0.06 (Hu and Bentler, 1999). For the test of mediation hypothesis, we conducted bootstrapping with 5,000 samples and bias-corrected confidence intervals of 95% level to obtain efficient standard errors as recommended in Shrout and Bolger (2002). Table 1 presents the results of the direct effects both without and with controls. We interpret the panel of results with controls.
For brevity we present the full direct effects results only for the key variables that we hypothesized and provide the full results in Appendix A2. Overall, we find mixed results with some hypotheses supported while others are not supported. Whereas we hypothesized a positive association between climate variability (both temperature and precipitation anomalies) on the number of conflict types (H1), our results indicate that 3 months negative precipitation anomalies are negatively associated with total conflicts, violence against civilians and riots. We find positive association with protests and insignificant association with remote violence and battles. It is important to note here that negative precipitation anomaly denote tendency toward lower rainfall in relation to the long-term mean. Our mixed results are consistent with previous findings indicating that that negative deviations from historical mean are associated with higher risks of violence between communities (Fjelde and von Uexkull, 2012;Hendrix and Salehyan, 2012;Crost et al., 2018). Similarly, our findings are consistent with that of Raleigh and Kniveton (2012) who found that wet periods were associated with higher rates of communal conflicts in Kenya and Ethiopia. As expected, the hypothesis on the effect of 3 months negative precipitation anomalies on maize production is supported suggesting that decrease in precipitation compared to the historical long-term average reduces crop production (H2).
With respect to the direct effects of 3 months positive temperature anomalies on the number of different conflict types, our results are mixed. Some hypotheses are supported indicating that increase in 3 months positive temperature anomalies increases the number of the different conflict types (H1). Specifically, one standard deviation increase in 3 months positive temperature anomalies increases violence against civilian, riots and protests, however, it reduces number of remote violence and battles. The supported hypotheses are consistent with the findings within the General Aggression Model which state that higher temperatures trigger human aggression (DeWall et al., 2011), and Routine Activity Theory which holds that higher temperatures force people to spend more time outdoors, in resource constraint contexts, this may provide opportunities to engage in activities that may undermine peace (Groff, 2008). As hypothesized, increasing temperatures relative to the long-term average has negative relation with maize production (H2). Two studies closely related to ours, Caruso et al. (2016) in Indonesia and Jun (2017) in sub-Saharan Africa have found results similar to ours.
With respect to the hypothesis that maize production is negatively associated with food insecurity (H3), our results support this hypothesis. This implies that increase in maize production reduces household food insecurity status. This corroborates with the findings that maize yield is crucial for household food security in Mali (Diallo et al., 2020).
Our results also support the hypotheses that household food insecurity increase number of conflict types (H4), indicating that increase in food insecurity by one standard deviation result to an increase in total conflicts by 0.08 standard deviations; increase in violence against civilian by 0.068 standard deviations; increase in riots by 0.067 standard deviations; increase in protests by 0.050 standard deviations; increase in remote violence and battles by 0.058 and 0.047 standard deviations respectively. This is in line with previous studies that have found that household food security status is one of the mechanisms that triggers conflict (Koren and Bagozzi, 2016;Martin-Shields and Stojetz, 2019;Anderson et al., 2021).
In the next step, we performed serial mediation analysis (indirect effects) while accounting for the control variables. This tests the hypothesis that maize production and household food security status sequentially mediate the association between climate variability (both temperature and precipitation anomalies) and the conflict types (H5). We rely on the parameter estimates for the path from temperature and precipitation anomalies to the conflict types via maize production and household food security status sequentially (see Figure 1) while setting the direct path from temperature and precipitation anomalies to the number of conflict types to zero. The mediation hypothesis is supported if the mediation path jointly not equal to zero (MacKinnon et al., 2002). Table 2 presents the results of the serial mediation analysis both without and with controls.
Specifically, the results indicate that the mediated effect of 3 months positive temperature anomalies on total conflicts is 0.011, on violence against civilians is 0.003, on riots is . /fclim. . 0.001, on protests is 0.003, on remote violence is 0.001, and on battles is 0.002. These, imply that, an increase in 3 months positive temperature anomalies by 1 standard deviation increases the total conflicts by 0.011 standard deviations, increases violence against civilians by 0.003 standard deviations, increases riots by 0.001 standard deviations, increases protests by 0.003 standard deviations, increases remote violence by 0.001 standard deviations, and increases battles by 0.002 standard deviations. With respect to precipitation, the results indicate that overall, there is a positive association between the 3 months negative precipitation anomalies and the number of conflict types mediated by maize production and household food security status sequentially. Specifically, the mediated effect of 3 months negative precipitation anomalies on total conflicts is 0.015, on violence against civilians is 0.005, on riots is 0.002, on protests is 0.004, on remote violence is 0.002, and on battles is 0.003. These, imply that, increase 3 months negative precipitation anomalies increase the total conflicts by 0.015 standard deviations, increase violence against civilians by 0.005 standard deviations, increase riots by 0.002 standard deviations, increase protests by 0.004 standard deviations, increase remote . /fclim. . In general, all the hypotheses are supported, suggesting that maize production and the household food security status sequentially mediate the association between temperature and precipitation anomalies, and the conflict types. In other words, maize production and household food security status are some of the mechanisms through which climate variability exacerbate conflict. In terms of the type of mediation, we find partial mediation in all mediated paths except the mediated path from 3 months positive temperature anomalies to total conflicts, the mediated path from 3 months negative precipitation .
anomalies to remote violence and to battles which have a full mediation. Mediated paths showing partial mediation imply that the direct paths are significant. On the one hand, this suggests that the variations in the conflict variables are explained both by the mediated paths and the direct paths. On the other hand, full mediation is where the direct path is insignificant suggesting that the variation in conflict variable is fully explained by the mediated path. While these results have policy implications, we caution that they need to be interpreted with care, this is because the scope of this paper is on one pathway (climate variability to conflict via maize production and household food security status), thus before making policy recommendations or designing interventions to reduce conflicts there is need to take into account the complexity of other pathways at play.
Conclusion
The world is significantly less peaceful now than it was 15 years ago. The 2021 Global Peace Index report shows that the average level of global peacefulness deteriorated for the ninth time in 13 years in 2020. Climate variability and change also accelerate this negative trend by multiplying socioeconomic risks and insecurities, such as food insecurity, forced migration, displacement, and inequality, among others, which are ultimately the root causes of instability, tensions, and conflict. Recent estimates report that approximately 971 million people live in areas with high or very high climate exposure, and of this number, 41 per cent resides in countries marked by low levels of peacefulness.
Despite growing recognition of the potential of climate to amplify existing conflict dynamics or even create new ones, robust, scientific evidence that climate is a "threat multiplier" is lacking. This is reflected in the policy agenda of many fragile countries, where climate security is not acknowledged and therefore risks associated with the nexus are not accounted for in either peacebuilding efforts or climate resilience interventions. More policy relevant research is needed on how climate is exacerbating common drivers of conflict; where is the climate security nexus occurring; who is bearing the burden of these risks and, finally, what can be done to break the cycle between climate and conflict.
Our study contributes to fill this gap providing answers to the how question above. We do so by testing the hypothesis that climate variability reduces agricultural production, increases food insecurity which in turn increase the intensity of conflict in Mali. We use a rich nationally representative dataset managed by LSMS and merge these with high-resolution climate (CHIRP) and conflict (ACLED) data.
Our findings reveal that climate is a threat multiplier, this is consistent with previous studies that have found that climate indirectly leads to increased conflict occurrence (Fjelde, 2015;Crost et al., 2018;Mach et al., 2019). We have shown that maize production and food insecurity are important mediators of the impact of climate on conflict. In other words, climate indirectly exacerbates conflict by adversely affecting agricultural production and food security.
Acknowledging the role of climate as threat multiplier has important implications for both peace peacebuilding efforts. Current peace and security interventions do not adequately address the change, variability, and impact of climate on socioeconomic risks that can lead to conflict. There is, therefore, a need to correct this imbalance. And this is particularly important not only for those countries where climate and fragility already intersect but also for many supposedly peaceful countries across the developing world, which are regularly exposed to a set of diversified risks that can have a remarkably high destabilizing potential as the climate crisis intensifies.
This is even more important if we think that when it comes to climate action, existing strategies are unlikely to capture the wide range of context-dependent security risks that can arise from climate impacts. While an increasing number of climate interventions, investments, policies, and programmes target fragile and conflict-affected countries, these activities are often blind and less responsive to the context in which they operate. This can lead to the unintended consequences of reinforcing structural and contextual drivers of conflict. Indeed, several examples exist of conflict-insensitive adaptation measures that have increased conflict potential by damaging economic prospects, undermining political stability, and amplifying inequality and grievances. Therefore, to reduce the potentially harmful effect of climate action and ensure that it positively impacts people and communities, there is a need to design and implement climate investments, policy, and programmes in a climate security sensitive manner. Climate security sensitivity can indeed unveil the potential peace contributing impact of climate measures, thereby addressing the root causes of conflict, and fostering societal levels of peace.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Author contributions
GP: conceptualization, methodology, data analysis, writing, and supervision. DK: conceptualization, methodology, data analysis and curation, and writing. IM-L: conceptualization and writing-review and editing. VV and AB: methodology, data analysis, and curation. PL: review and editing.
All authors contributed to the article and approved the submitted version.
Funding
This work was implemented as part of the CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS), which is carried out with support from the CGIAR Trust Fund and through bilateral funding agreements. For details, please visit https://ccafs.cgiar.org/donors. This work was also carried out with support from the CGIAR Initiative on Climate Resilience, ClimBeR. We would like to thank all funders who supported this research through their contributions to the CGIAR Trust Fund-https://www.cgiar.org/funders/. | 9,123.8 | 2022-11-16T00:00:00.000 | [
"Environmental Science",
"Political Science",
"Economics"
] |
Silicon and Oxygen Isotope Evolution of the Inner Solar System
Enstatite chondrites have been regarded as major building blocks of the Earth and other differentiated inner planetary bodies due to the similarity of Δ17O (deviation of the δ 17O value from the terrestrial silicate fractionation line) and nucleosynthetic isotope anomalies. However, this hypothesis has been rebutted by the fact that the Earth and enstatite chondrites show distinct Si isotopic compositions. It has been debated whether the origin of this Si isotope difference is the result of nebular or planetary processes. Here we show that the δ 30Si (deviation of 30Si/28Si relative to NBS 28 standard) and the Δ17O values of chondrules in unequilibrated enstatite chondrites are between −0.20‰ and −0.54‰ and −0.36‰ and +0.26‰, respectively. Furthermore, the chondrules with higher Δ17O values tend to have lower δ 30Si. The data exhibit values consistent with most of the noncarbonaceous group differentiated planetary bodies. This consistency suggests that the Si and O isotopic compositions of enstatite chondrules record those of the major precursors that formed the differentiated planetary bodies in the inner solar system. Model calculations based on the results reveal that the Si and O isotope variations of the enstatite chondrite chondrules were generated by an interaction between the evaporation-driven SiO-rich gas and partially or fully melted forsterite-rich precursor chondrules. The Mg/Si of the evaporated dust-gas mixtures increased with increasing silicate/metal ratio in the evaporated dust, which may have increased the bulk Mg/Si and δ 30Si value of the inner planetary bodies.
Introduction
The isotopic compositions of meteorites and their components have provided essential information for understanding the origin and evolution of inner planetary bodies. Nucleosynthetic isotope anomalies of neutron-rich isotopes (e.g., 48 Ca, 50 Ti, 54 Cr, 62 Ni, and r-process Mo) distinguish classes of meteorites into noncarbonaceous (NC) and carbonaceous (CC) groups (Trinquier et al. 2007;Warren 2011;Budde et al. 2016). The NC-CC dichotomy indicates the presence of heterogeneity between presolar materials inside and outside the molecular cloud, which is thought to form the isotopic differences in the meteorite parent bodies (Kleine et al. 2020). The ordinary and enstatite chondrites, Earth, Moon, Mars, and most of the achondrites (e.g., ureilites, angrites, HEDs, and acapulcoites) are classified into the NC group. Variations in the nucleosynthetic isotopes of lithophile elements ( 48 Ca, 50 Ti, and 54 Cr), from the NC group differentiated planetary bodies, are positively correlated with their parent planetary body's mass, which can be used as a proxy for the planetary body's accretionary timescale (Schiller et al. 2018). This correlation suggests that the material that formed the inner planets was the result of mixing between the early (0.1Ma after CAI) inner solar system material depleted in 48 Ca, 50 Ti, and 54 Cr and that represented by ureilite and the CC group materials enriched in 48 Ca, 50 Ti, and 54 Cr (Schiller et al. 2018). Although the aforementioned nucleosynthetic isotope tracers have revealed the relationship between nebular heterogeneity and the source materials of the inner planetary bodies, the total mass fraction of Ca, Ti, and Cr in the inner planetary bodies is only a few weight percent (Trønnes et al. 2019). Therefore it is crucial to examine whether the isotopic variations of the major elements in rocky planetary materials are consistent with planet formation models predicted by nucleosynthetic anomalies.
Silicon and O account for about half of the mass of inner planetary bodies and represent significant components within the disk; both solid and gas-phase materials could have strongly influenced the Si and O isotopic compositions of the inner planetary bodies. The O isotopic composition in meteorites is mainly characterized by mass-independent isotopic variations, expressed as a Δ 17 O value that represents their deviation from the terrestrial silicate fractionation line (definition is given in the footnote of Table 1). Variations in the Δ 17 O values of planetary materials were broadly attributed to the mixing of 16 O-rich and 16 O-poor reservoirs that could have resulted from photochemical reactions (Clayton 2002). Although the timing and location of the photochemical reactions that formed the 16 O-rich and 16 O-poor reservoirs is still unknown, these reactions could have occurred in the parent molecular cloud from which the solar system formed (Krot et al. 2020). The nucleosynthetic isotope anomalies of 48 Ca, 50 Ti, 54 Cr, and 64 Ni correlate well with Δ 17 O values in CC meteorites, indicating a genetic relationship (Trinquier et al. 2007;Yin et al. 2009;Dauphas & Schauble 2016). On the other hand, the Δ 17 O values of NC meteorites do not correlate with 48 Ca, 50 Ti, 54 Cr, and 64 Ni. Furthermore, because the range in Δ 17 O values of NC meteorites, except for ordinary chondrites and R (Rumurti) chondrites, overlaps with that of CC meteorites, it has recently been argued that the Δ 17 O values should not be included in the definition of the NC-CC dichotomy (Kleine et al. 2020). Thus, it is likely that the Δ 17 O values of the source materials that formed the NC meteorite parent bodies may not be directly related to the nucleosynthetic isotope anomalies in their source materials. The Δ 17 O values in NC achondrites could be related to a kinetic process operating in the inner disk, such as a solid-gas reaction (Tanaka & Nakamura 2017).
For Si isotopes, no resolvable mass-independent variation of Si isotopes was found in bulk meteorites and their components, except for presolar grains, suggesting that Si isotopic homogenization was achieved in the solar nebula prior to planetary accretion (Pringle et al. 2013b). The δ 30 Si values, which express the Si isotope composition of the sample relative to that of the NBS 28 standard, δ 30 Si= [( 30 Si/ 28 Si) sample /( 30 Si/ 28 Si) NBS28 − 1], for carbonaceous and ordinary chondrites, show the same value (−0.47 ± 0.07‰, 2SD, N = 34), which is referred to as the chondritic value (Figure 1). In contrast, many NC group planetary materials, i.e., enstatite chondrites (ECs), HEDs, angrites, aubrites, bulk silicate Earth (BSE), and the Moon, show variable values that differ from the chondritic value ( Figure 1). The cause of the massdependent Si isotopic variations of NC group planetary materials have been discussed in terms of either nebular or planetary processes (e.g., Georg et al. 2007;Fitoussi et al. 2009;Savage et al. 2010Savage et al. , 2014Armytage et al. 2011Armytage et al. , 2012Fitoussi & Bourdon 2012;Pringle et al. 2013aPringle et al. , 2013bSavage & Moynier 2013;Zambardi et al. 2013;Dauphas et al. 2015;Young et al. 2019;Sikdar & Rai 2020). However, little research has been done to explore the causes of Si isotope variations of NC group planetary bodies in terms of the relationship with O isotopes (e.g., Hin et al. 2017).
Elucidating the isotope systematics of Si and O in the NC group can provide essential information for deciphering the evolution of the inner planetary body precursors during nebular evolution, including condensation, evaporation, and gas-solid interactions. The bulk isotopic composition of meteorites reflects the average composition of the precursor materials in the region where each meteorite parent body was accreted. Planetary processes could have partially or entirely homogenized the initial isotopic heterogeneity of the precursor materials. Therefore it is necessary to measure the isotopic variability of primitive components in chondrites in order to decipher the evolution of the precursors of inner planetary bodies in more detail. Chondrules are millimeter-sized, silicaterich spheres that formed from fully or partially molten droplets in protoplanetary disks. Chondrules, along with matrix phases, which are thought to form complementarily from the same reservoir (Palme et al. 2015), are the major components of chondrites. Furthermore, chondrules are regarded as the main building blocks of planetesimals and planetary embryos (Johansen et al. 2015). Thus, the chemical and isotopic composition of chondrules provides essential information for understanding the origin and evolution of the inner planets. Kadlag et al. 2019;Sikdar & Rai 2020). There is no evidence for diffusive re-equilibration of Si isotopes between silicate and metal phases in EH3 and EH4 chondrites (Kadlag et al. 2019). Therefore the metal and silicate phases of ECs were assumed to have condensed from or reacted with nebular gases with different δ 30 Si values (Sikdar & Rai 2020), but the cause of this Si isotope heterogeneity has not been well explained.
Ureilites are ultramafic achondrites predominantly composed of olivine and pyroxene and are generally interpreted as originating from the mantle of a partially differentiated parent body, the ureilite parent body (Scott et al. 1993). As mentioned previously, ureilites are found at one end of a nucleosynthetic isotope anomaly line (depleted in 48 Ca, 50 Ti, and 54 Cr) and thus represent the members of the NC group that are least affected by the CC component. The O isotopic compositions of ureilites are plotted on or near the carbonaceous chondritic anhydrous mineral (CCAM) line (Clayton & Mayeda 1988; Figure 2). This O isotope coincidence is considered crucial evidence that ureilites inherited the isotopic compositions of nebular precursors present in the early inner solar disk material that was not significantly modified by later planetary processes (Clayton & Mayeda 1988). Therefore the Si-O isotope systematics of ureilites likely record that of the early, that is, <0.1 Myr after the formation of CAIs, inner solar disk material. The δ 30 Si values of ureilites were previously measured for only four samples, −0.47±0.12‰; 2SD, and were found to be identical to the chondritic value (Armytage et al. 2011). However, the relationship of δ 30 Si with O isotopes has not been examined.
Here we report the Si and O isotopic compositions of the chondrules in EH chondrites. The EL chondrites were not studied here because their primary compositions may have been modified by impact process (Weisberg & Kimura 2012). As a supplemental data set, the Si and O isotopic compositions of ureilites that have an extensive range of Δ 17 O values are also reported. Based on the obtained results and previously reported data, we elucidate the process responsible for Si-O isotope evolution in the inner solar system and discuss its implications for the formation processes that yield the inner planetary bodies. analysis. Additional chondrule fractions were also separated from Y-791810 and Indarch using a method described elsewhere (Tanaka & Nakamura 2017). The analyzed chondrules show either porphyritic pyroxene or radial pyroxene textures. The main constituent mineral of the representative samples was enstatite (Mg# = 0.98-1.00, defined as mole fraction Mg/[Mg+Fe]) with and without anhedral relict forsterite (Mg# = 0.99-1.00) associated with a minor amount of albitic glass/plagioclase and mesostasis and with an accessory amount of Ca-rich pyroxene, troilite, niningerite, and daubreelite.
Samples and Experimental Techniques
For the ureilites, 15 monomict meteorites (DaG 340, DaG 868, Dho 132, Dho 836, El Gouanem, NWA 766, NWA 1241, NWA 2376, Y-791538, Y-980110, Y-981688, Y-981750, Y-982143, A-880784, and Nova 018) were selected for analysis. All of the ureilite samples are partially weathered. As the pristine O and Si isotopic compositions can be altered by weathering processes (Newton et al. 2000;Ziegler et al. 2005), workup procedures are essential to ensure that any measurements probe the pristine isotopic compositions of the samples. Furthermore, ureilites generally contain a few wt.% of carbon, which results in the production of C-O-F compounds during laser fluorination of silicates. The presence of C-O-F compounds results in a low recovery yield of O 2 . We found that the δ 17 O and δ 18 O values of olivine and pyroxene artificially mixed with carbon gave higher values than the original values even after removing the formed C-O-F compounds and CF 4 from O 2 by the gas chromatographic or cryogenic procedures. Thus, the altered fractions and C-bearing phases were removed before isotope analysis as follows. Chunks of ureilites were crushed using a silicon nitride pestle and mortar, then sieved into 73-200 μm size fractions.
After removing magnetic fractions using a ferrite magnet, the remaining fraction was leached in ethanolamine thioglycollate dissolved in isopropanol to remove weathering products, then rinsed with isopropanol (Martins et al. 2007). Magnetic fractions and carbon fractions were further removed using an Nd-magnet and hand-picking under a binocular microscope, respectively. Finally, the samples were washed in deionized water and dried.
The analytical method for determining the Si isotopic composition of the EC chondrules and ureilites from the current study is based on Georg et al. (2006). A mass of 0.1-1 mg of coarse-grained or powdered sample was mixed with ∼30 times that weight in NaOH pellets (Merck, EMSURE ® ) within a 99.9 % silver capsule, put in a cleaned silver crucible with a lid, and then fused at 730 C • for 10 minutes in a furnace. After fusing, the silver capsule containing the sample was transferred into a Teflon vial containing 5-20 mL of water and kept in a dark place for 24 hr. The sample solution was then transferred into a polypropylene bottle and rinsed three times with water to ensure maximum recovery. The solution was acidified by adding 2M HNO 3 and water to adjust the pH to between 2.2 and 2.4 and the Si concentration to ∼6 μg mL −1 . The Si in the sample solution was purified by 1.8 mL of cation exchanged resin (BioRad AG50W-X12, 200-400 mesh), which was packed in a polypropylene column (ID ∼7mm, Muromachi Chemical Inc.) and in the + H form. The resin was cleaned before sample preparation by passing 10 mL of 6 mol L −1 HCl, 10 mL of 8 mol L −1 HNO 3 , 5 mL of 6 mol L −1 HCl, 5 mL of 3 mol L −1 HCl, and 6 mL of water. After cleaning the resin, a collection beaker was placed beneath the column. Subsequently, 5 mL of sample solution was loaded onto the resin, followed by 3.8 mL of water to recover Si. The eluted solution was acidified by adding 70 % HNO 3 to 1 % v/v HNO 3 for the analysis. The recovery of Si at the end of the whole procedure was measured using reference materials (NBS-28, IRMM-018a, Big Batch, Diatomite, and BHVO-2) and was found to be >96%.
Silicon isotope measurements were performed on a MC-ICP-MS (Neptune-Plus, Thermo Fisher Scientific) in the highresolution mode under a wet plasma condition. A sapphire torch, Ni normal sample cone, and Ni-X skimmer cone were used. Typically, 3.0-3.5 μg mL −1 of Si dissolved in 1 % v/v HNO 3 were introduced into the plasma via a 50 μL/min self-aspirating PFA microflow nebulizer (Elemental Scientific) and a Peltiercooled double-pass silica glass cyclonic spray chamber. The isotopes of 28 Si, 29 Si, and 30 Si were measured using Faraday cups L3, C, and H3, respectively, equipped with 10 11 Ω resistors. The intensities of 28 Si for the peak and background signals were usually ∼5 V and ∼0.03 V, respectively. The Si masses were resolved from the interferences (e.g., 12 C 16 O + , 14 N 2 + , 28 Si 1 H + , and 14 N 16 O + ) on the low-mass side of the Si plateau peak. To correct for instrument mass bias, a standardsample bracketing method was performed using NBS28 as a bracketing standard. Each measurement consisted of 50 cycles of 4 s integration for the sample and 30 cycles of 4 s integration for the background, and the data outside of 2SD were rejected. More than three measurements were performed for each sample; the average and 2SE of the replicated data are shown in Table 1.
Oxygen isotope measurements were performed using the laser fluorination method. The detailed analytical method is described elsewhere (Tanaka & Nakamura 2013;Pack et al. 2016;Tanaka & Nakamura 2017). The O 2 from the sample was extracted using a CO 2 laser with BrF 5 as an oxidation agent. The extracted O 2 was purified in the extraction line, then trapped with a 13 Å molecular sieve at the temperature of liquid N 2 . The isotope ratios in the extracted O 2 gas were determined using a gas-source mass spectrometer (MAT253, Thermo Fisher Scientific) in dual inlet mode. For each sample, eight blocks of 11 cycles each were measured with a total measurement time of ∼90 minutes. For the ureilite samples, duplicate measurements were performed, and the average values are shown in
Results
Silicon and O isotopic data for EC chondrules and ureilites are shown in Table 1. Most of the O isotope data for EC chondrules presented here were reported elsewhere (Tanaka & Nakamura 2017), and all the newly analyzed data are within the range of the previously reported data set (Table 1 and Oxygen isotopic compositions of analyzed chondrules and enstatites in EH5 and EH6 have been partially and fully equilibrated, respectively, on the parent body (Tanaka & Nakamura 2017). The lower δ 30 Si values of chondrules in St. Marks, relative to EH3 and EH4 chondrules could have been caused by the partial equilibration between silicate and metal during metamorphism. Therefore data obtained from EH5 and EH6 are not discussed in this study.
The O isotopic composition of the 15 monomict ureilites measured in this study show a range of δ 18 O values from 5.6 to 8.6‰ and Δ 17 O values from −2.18‰ to −0.32‰, which all align along the CCAM (Figure 2). The measured Δ 17 O range accounts for 84% of the Δ 17 O range (−2.49‰ to −0.28‰) for all the ureilites reported to date (Figure 2). The measured δ 30 Si values of these monomict ureilites give a homogeneous value of −0.449± 0.055‰ (2SD, N = 15), i.e., the 2SD value is comparable with that of reference materials (±0.054‰ for BHVO-2; Figure 1). The previously reported Si isotope data (Armytage et al. 2011) are within the range obtained in this study.
Silicon and Oxygen Isotopic Characteristics of the EC Chondrule
The heterogeneous Δ 17 O values of chondrules from each EH3 and EH4 chondrite demonstrated that they were not equilibrated in a planetary environment (Tanaka & Nakamura 2017). As the diffusion coefficient of Si in pyroxene is more than one order of magnitude smaller than that of O for a given temperature (Béjina & Jaoul 1996), it is unlikely that the Si isotopic compositions of the measured chondrules in EH3 and EH4 were equilibrated under planetary conditions. Thus, the variation of Δ 17 O and δ 30 Si values for EH3 and EH4 chondrules (Figure 3) is attributed to the nebular processes. In Figure 3 we also plot the published Si isotopic compositions of silicate or nonmagnetic fractions from each EH3 and EH4 meteorite versus the bulk Δ 17 O values of the same meteorite (note that Si and O isotope data were not measured from the same sample batch). The mass fraction of O in the non-silicate fraction, i.e., sulfide and metal, is negligible relative to that in silicate and oxide phases. Thus the relationship between O and Si isotopic data for these compiled data can represent the silicate fraction of these ECs. These Enstatite chondrite chondrules were formed under a highly reduced nebular environment (Jacquet et al. 2018). The canonical model for EC chondrule formation requires the melting of precursor materials that were condensed from the reduced (e.g., high C/O) region of the solar nebular ). However, EC components experienced variable redox conditions during their formation (Weisberg et al. 1994). For instance, the presence of titanium valences in olivine and pyroxene from EH3 chondrules suggest that these precursors formed in an environment with an oxygen fugacity close to solar nebula conditions (Simon et al. 2016). The reduced mineralogical features are thought to have been formed by the reaction process of the precursor materials. The frequently preserved relict or poikilitic olivine in low-Ca pyroxene and the presence of silica-containing minerals, which are more common in EC chondrules than in other chondrule clans, is believed to have resulted from the reaction of chondrule precursors and reduced gases. As a reduced gas component, SiO is presumed to be an essential reactant during the crystallization of enstatite from olivine (Libourel et al. 2006). Reactions with S-rich gas, in addition to SiO, could have played an important role in the formation of EC chondrules that contain significant amounts of lithophile element-sulfides and silica minerals (Lehner et al. 2013;Piani et al. 2016). The most dominant mineral in the studied samples is enstatite, with a Mg# that ranges between 0.99 and 1.00, while sulfide minerals are rare and silica minerals are absent. Thus, a SiO-rich gas-melt interaction process should have played an important role in producing the variation of δ 30 Si values, as well as the O isotope variations (Tanaka & Nakamura 2017).
The Silicon and Oxygen Isotopic Characteristics of Ureilites
It is widely accepted that the large range in Δ 17 O values for ureilites (−2‰ to 0‰, Figure 2) was inherited from precursor materials that were formed by the mixing of nebular reservoirs, between 16 O-rich rocky components and 16 O-poor H 2 O components (Clayton & Mayeda 1988;Clayton 2002). Thus, the homogeneous Si isotope composition of ureilites, despite the heterogeneous Δ 17 O values, suggests that the δ 30 Si value of these precursor materials had already reached the homogeneous value of −0.45‰ by at least 0.1 My after the formation of the solar system (Schiller et al. 2018). On the other hand, a different hypothesis was proposed in which aqueous alteration by high Δ 17 O water/ice within the ureilite parent body was responsible for the heterogeneous Δ 17 O values (Sanders et al. 2017). The aqueous alteration hypothesis explained that the Δ 17 O values of ureilites should be proportional to the reacted H 2 O/silicate ratio (Sanders et al. 2017). Even if the variation of Δ 17 O values of the ureilite parent body were due to aqueous alteration, the homogeneous Si isotopic composition implies that the precursor of the ureilite parent body was already homogeneous in its Si isotopic ratio and the system was closed with respect to Si isotopes during lowtemperature aqueous alteration. However, the ureilite parent body eventually underwent partial melting, and the Si isotopic ratio should have been fractionated during the processes that occurred prior to this, such as high-temperature hydrothermal alteration and metamorphism. We expect that it is unlikely that all of these planetary process took place under a closed system with respect to Si.
Silicon and Oxygen Isotope Evolution by the Evaporationdriven Melt-gas Interaction Model
The evaporation-driven melt-gas interaction model was applied to explain the O isotope trend of carbonaceous and enstatite chondrite chondrules (Marrocchi & Chaussidon 2015;Tanaka & Nakamura 2017). In the current study, the evaporation-driven melt-gas interaction model was applied to EC chondrules, but with the inclusion of Si isotopes in addition to those of O. This model assumes that enstatite-rich chondrules were formed by open-system melt-gas interactions between a precursor forsterite-rich chondrule melt and an evolved SiO-enriched gas that could have occurred over part of the temperature range for chondrule formation. The evolved SiO is a mixture of the initial nebular gas and that from evaporated dust. The Si and O isotopic compositions of the modeled enstatite-rich chondrule were obtained by a mass balance calculation using given values of the precursor chondrule (δ 30 Si olivine and δ 17 or 18 O olivine ), initial gas (δ 30 Si initial gas and δ 17 or 18 O initial gas ), dust (δ 30 Si dust and δ 17 or 18 O dust ), the dust/gas density ratio (R), and the melt-gas reaction temperature (T).
The molar contents of Si and O and the isotopic compositions of these elements for the evolved gas can be written as is the equilibrium isotopic fractionation of δ O 17 or 18 between CO and SiO.
As SiO is the dominant Si-bearing gaseous species, the Si isotopic composition of the gas can be written as Si . 13 30 gas 30 SiO
( )
The reaction of SiO into the melt and the reaction between olivine and melt can be written as (Javoy et al. 2012). CI dust was used as the precursor dust composition (Marrocchi & Chaussidon 2015;Tanaka & Nakamura 2017). Thus, δ i O initial gas and δ i O olivine values estimated in Tanaka & Nakamura (2017) were recalculated using the solar nebular condensate (Fedkin & Grossman 2016) as the precursor dust composition using the same calculation method as described in Tanaka & Nakamura (2017), resulting in a δ 18 O initial gas =21‰, δ 17 O initial gas =20‰, δ 18 O olivine = 3.5‰, and δ 17 O olivine =0.1‰.
Silicon Isotopic Compositions of the Dust, Gas, and Precursor Chondrule Melt
Forming a chondrule requires an orders of magnitude higher dust-enrichment than for the canonical solar nebular condition (Alexander et al. 2008). Fedkin & Grossman (2016) calculated the pre-accretionary condensate composition relevant to the dust-enriched region of the inner protoplanetary disk from a nebular of a solar composition. The solar nebular condensates, which equilibrated with solar gas from 2000 to 1400 K at 10 −3 bar, are forsterite, Fe-Ni metal, enstatite, spinel, and liquid Fesulfide, and nearly all Fe existed as Fe-Ni metal and Fe-sulfide (Fedkin & Grossman 2016). The oxygen fugacity of dustenriched system from a nebular of a solar composition decreases with decreasing temperature, reaching ∼−4 relative to the iron-wüstite buffer (IW) at ∼1400 K (Fedkin & Grossman 2016). To crystallize Fe-Ni metal with Si >1 wt. % and lithophile element-sulfides, the oxygen fugacity has to be <−3∼−4 relative to IW (Berthet et al. 2009). The solar nebular condensate includes ∼21 wt.% of sulfide minerals (Fedkin & Grossman 2016). Because sulfide minerals do not contain nominal Si and O, the abundance of sulfide dust were not considered for the calculation. Thus the actual dust/gas ratio should be higher than the calculated R value. Although the abundance of sulfide minerals does not affect the calculation, the evaporation of sulfide dust plays an important role in reducing the system by forming S-rich gas, even for a high dust/gas ratio at 1000 in the solar nebular (Fedkin & Grossman 2016).
The major early phases, which condensed from the solar nebula gas in the innermost region of the protoplanetary disk, include amoeboid olivine aggregates (AOA), CAIs, forsterite, and Fe-Ni metal (Davis & Richter 2014;Scott & Krot 2014). CAIs, the earliest condensates from the cooling solar nebula, have an extensive range of mass-dependent heavy Si isotope enrichments (the δ 30 Si can be as high as value up to 14.3‰), revealing kinetic isotope fractionation caused by evaporation ). Thus, the δ 30 Si values of CAIs do not record equilibration with the nebular gas. On the contrary, AOAs were affected by only minor thermal processing after their formation (Scott & Krot 2014). Thus, the forsterites in AOAs preserve the earliest O and Si isotopic compositions of forsterite condensates from the solar nebula, giving Δ 17 O of ∼−25‰ to −20‰ (Krot et al. 2004) The δ 30 Si values of SiO gas, which equilibrated with forsterite in AOA and Si-bearing Fe-Ni metal calculated at 1600 K, demonstrate overlapping values between −6.8‰ and −4.5‰ and between −7.5‰ and −3.3‰, respectively (Figure 4). The δ 30 Si value of SiO, which was equilibrated with the early (i.e., <0.1 Ma after the birth of the solar system) stage bulk inner disk silicates presumed by ureilite, is ∼−2‰ to −3‰ (Figure 4). Thus, the δ 30 Si values of SiO in the nebula evolved from ∼−8‰ to ∼−2‰ during condensation of Fe-Ni metal and olivine ( Figure 4). The δ 30 Si initial gas value is fixed as −2.63 and was calculated for SiO gas of a solar composition equilibrated with δ 30 Si olivine =−0.45‰ at 1400 K (Javoy et al. 2012;Meheut & Schauble 2014; the temperature being just below the condensation temperatures of Fe-Ni metal and forsterite) and at a total pressure of 10 −3 atm (Davis & Richter 2014).
The δ 30 Si of the olivine-rich precursor chondrule melt, expressed by the δ 30 Si olivine value, was estimated by the δ 30 Si value of the inner planetary disk silicate components represented by ureilites and chondritic value. The O isotopic compositions of ureilites reveal that the inner disk materials had heterogeneous Δ 17 O values between −2‰ and 0‰ (Figure 2), which partly overlap with those of chondrules and isolated olivine grains in carbonaceous chondrites ranging between −6‰ and −1‰ (Clayton et al. 1983;Russell et al. 2010). It is a matter of debate whether the carbonaceous chondrite chondrules that formed in the inner solar system subsequently migrated beyond Jupiter's orbit or in the outer solar system (van Kooten et al. 2016 implies that the Si isotopic compositions of the major disk materials had not been modified by the thermal process in which the nucleosynthetic isotope compositions were modified by selective destruction of presolar components (Trinquier et al. 2009). Thus, the δ 30 Si olivine value is fixed at −0.45‰ for the calculation.
Result of the Model Calculations
The calculation was first performed using the fixed F sil values (=0.80) based on the chemical composition of solar condensates (Fedkin & Grossman 2016) and a variable dust/gas density ratio (R) and melt-gas reaction temperature (T) (Figures 5(a), (b)). Figure 5(a) indicates the formation of EC chondrules at R between 3 and 6 and T between 2000 and 2800 K. The Δ 17 O value of enstatite depends on R but not on T. On the other hand, the δ 30 Si value of enstatite depends both on T and R values. The Mg/Si atomic ratio of the dust-gas mixture depends on the Δ 17 O value, namely, the R value at constant F sil value. The silica-rich chondrule in EH4 experienced >1960 K during its cooling process (Tanaka & Nakamura 2017). Thus, T could have been reached at ∼2000 K in the chondrule-forming region. However, a T >∼2200 K, higher than the liquidus temperature of forsterite, is too high to preserve the relict olivine in the reacted enstatite. The estimated T at ∼2000 K is within the range of the chondrule peak temperature (1700-2100 K; Hewins & Connolly 1996). The similar mineralogy of the measured chondrules, mainly showing the porphyritic texture and the common presence of relict olivine, suggests no significant differences in melting temperature and cooling rate for each chondrule. Therefore it is unlikely that the Δ 17 O-δ 30 Si variation was mainly attributed to the significant variation of T.
Second, the O and Si isotope variation of pyroxene formed at constant T and variable F sil values were examined (Figures 5(c) and (d)). This condition assumes that the silicate-metal ratio in the condensed dust-gas system had variable proportions due to fractionation of these phases during or after solar gas condensation. The representative figure calculated at T=2000 K is shown in Figures 5(c) and (d). The variation of the δ 30 Si value is sensitive to the F sil value, the δ 30 Si value increases with increasing F sil value, while the Δ 17 O value is less sensitive to the F sil value at a given R value (Figures 5(c) and (d)). The Mg/Si atomic ratio of the dust-gas mixture depends on both the R and F sil values. Figure 5(c) shows that the melt-gas interaction explains why the variation in the Δ 17 O and δ 30 Si values of EC chondrules at different dust-gas abundances has variable R and F sil values. One cluster with Δ 17 O<−0.2‰ is accompanied by the higher R, relatively higher but variable F sil values, and the higher Mg/Si ratio of the dust-gas mixture relative to the other cluster with Δ 17 O > −0.1‰ (Figure 5(c)). The exception, Sahara 97103 Ch1, can be generated under relatively higher F sil that resemble the former, but a lower R that resembles the latter during meltgas reaction in the higher Mg/Si ratio of the dust-gas mixture. When the given T increases, the relationship between R and F sil values (red mesh) and Mg/Si (broken blue curves) moves parallel to the y-axis (δ 30 Si value) and toward higher values, as shown in Figure 5(c). As discussed in the previous paragraph, the thermal history was not significantly different among the measured chondrules. Thus, the preferred explanation, for the cause of the Δ 17 O and δ 30 Si variations of EC chondrules, is the variable R and F sil values in the reacted dust-gas environment. Thus, in the environment where the EC chondrules formed, there were regions with relatively high and low levels of dust/gas, silicate/metal in the dust, and Mg/Si in the dust-gas mixtures, which may have determined the variation in the Δ 17 O and δ 30 Si values of EC chondrules and the silicate fractions of EC. However, the presence of data outside of these clusters, one chondrule in Sahara 97103 and the silicate fraction of MIL 07028 (Figure 3), suggests that more variable compositional ranges may actually have prevailed.
The matrix of EH3 consists of fine-grained silicate and opaque (Fe-Ni metal and sulfides) minerals (Kimura 1988), and nearly half of the clastic matrix in the EH3 is inferred to be composed of primitive nebular components (Kimura 1988;Rubin et al. 2009). Although the detailed silicate-metal ratio in the primitive nebular components has not been measured, the fine-grained nebular components are composed of various silicate and metal mixtures characterized by heavier and lighter Si isotopic compositions, respectively (Rubin et al. 2009;Sikdar & Rai 2020). These primitive nebular components could be the remnants of the dust components in the EC chondruleforming region.
The supra-chondritic δ 30 Si values for silicate fraction in EC were explained by metal-silicate fractionation or vaporization (Sikdar & Rai 2020). However, these equilibrium or kinetic processes cannot fractionate the Δ 17 O values, as observed in a positive correlation between δ 18 O′ and δ 17 O′ with a steep slope of 1.27 for EC chondrules (Figures 5(b) and (d); Tanaka & Nakamura 2017). The subchondritic δ 30 Si values for silicate fractions reported from two EH4 (Indarch and Abee in Figure 3) cannot be explained by either metal-silicate fractionation or vaporization processes from the chondritic or ureilitic δ 30 Si source. The bulk Δ 17 O values of these two chondrites are relatively higher among the EC (Figure 4), indicating that these low δ 30 Si and high Δ 17 O values can consistently be explained by a melt-gas interaction process within a relatively low R and F sil environment.
Implications for the Si and O Isotope Systematics of Inner Planetary Bodies
The total mass of the terrestrial planets and asteroids is 1.19×10 25 kg, of which 51 wt.% is in the Earth-Moon system. Based on the estimated chemical compositions of the terrestrial planetary bodies (Trønnes et al. 2019), the Earth-Moon system accounts for 50% and 49% of the total O and Si contents, respectively, of the current inner planetary bodies. Due to the indistinguishable isotope systematics of O and many nucleosynthetic isotopes between ECs, BSE, and the Moon, it has been suggested that the major building blocks that formed the Earth-Moon and EC parent bodies originated from the same reservoir in the inner protoplanetary disk (Javoy et al. 2012;Dauphas 2017). However, the different Si isotopic compositions and Mg/Si ratios of ECs and BSE-Moon have made it difficult to explain the EC reservoir model for the Earth's formation (Javoy et al. 2012;Dauphas et al. 2015). The Δ 17 O and δ 30 Si values of the BSE-Moon are within the range of EC chondrules (Figures 1 and 3). Moreover, the Si and O isotopic compositions of most of the NC group differentiated planetary bodies (HEDs, angrites, Moon, Mars, and brachinite-like achondrites) are also identical with the range of these isotopes from EC chondrules. This result implies that the EC chondrules inherit the Si and O isotopic compositions of the precursor materials that formed the NC group differentiated planetary bodies.
The higher δ 30 Si value for BSE relative to the chondritic value and ECs was inferred by Si fractionation between the Earth's metallic core and silicate mantle (Georg et al. 2007;Fitoussi et al. 2009;Armytage et al. 2011). To explain the Si Table 1. Open circles in panels b and d are the O isotopic compositions of EH3 and EH4 chondrules and enstatite separates (Tanaka & Nakamura 2017) whose Si isotopic compositions were not measured. Blue broken lines are Mg/Si atomic ratios of the dust-gas mixture. Panels a and b: Solar condensates (Fedkin & Grossman 2016) were used as the fixed dust composition (F sil = 0.80) at variable T and R values. The proportion of silicate (including oxide), metal, and sulfide for solar condensates (Fedkin & Grossman 2016) is 0.63, 0.16, and 0.21 in weight by assuming that metal contains 3 wt.% of Si. Sulfide dust was not considered for the calculation because Si and O are absent in sulfide phases. Thus, the actual dust/gas ratio is 1.3 times higher than the R value. Panels c and d: The calculation was performed at T=2000 K at variable F sil values. The O and Si isotopic compositions of silicate and metal dust compositions were the same as were used for panels a and b. isotopic compositions of BSE by core-mantle fractionation from the EC reservoir, 20 to 30 wt.% of Si is necessary in the Earth's core, which is an unrealistic value (Fitoussi & Bourdon 2012;Sikdar & Rai 2020). From the carbonaceous or ordinary chondrite reservoirs, the Si isotopic composition of BSE can be explained by core-mantle fractionation with a reasonable Si mass fraction in the core (∼5 to 11 wt.%) ). However, the O and nucleosynthetic isotopic compositions of the BSE cannot be explained by any fractionation process from the carbonaceous or ordinary chondrite reservoirs. The cause of heavy Si isotopic compositions of angrites also cannot be explained by core-mantle fractionation from any chondrite reservoirs (Dauphas et al. 2015). These arguments inferred that core-mantle fractionation could not generate the heavy Si isotope enrichment observed in BSE and angrites.
Other planetary processes, such as impact-induced volatile loss during accretion of planetesimals (Pringle et al. 2014) or a giant impact (Zambardi et al. 2013), or vapor loss from the melting of planetary bodies (Hin et al. 2017;Young et al. 2019) may be able to explain the elevated δ 30 Si values of BSE, the Moon, and angrites. The depletion of the mass fraction of volatile elements, such as K, Rb, and Zn and the heavy isotope enrichment in these elements in the planetary bodies are more sensitive at tracing the volatile loss during planetary or nebular processes than Si (Paniello et al. 2012;Pringle & Moynier 2017;Tian et al. 2019). Depletion of K and Rb and their heavy isotope enrichments were observed in HEDs relative to ECs, Mars (analyzed only for K), and BSE, which could have been caused by extensive volatile loss during either planetary or nebular processes (Paniello et al. 2012;Pringle & Moynier 2017;Tian et al. 2019). Therefore the enrichment of heavy Si in the BSE cannot be explained by the higher degree of volatile loss relative to other NC group differentiated planetary bodies. These kinetic isotope fractionation processes cannot explain the cause of the variation in Δ 17 O values either.
Our proposed model suggests that the dust-gas mixture in the EC chondrule-forming region should have higher Mg/Si at a higher R and F sil condition ( Figure 5(c)). If the major precursor materials of the inner planetary bodies were formed in the same region as the EC chondrules, the order of bulk Mg/Si of the nebular dust-gas mixture was Earth-Moon ≈ angrite >HED ≈ brachinite ≈ Mars >aubrite (Figures 3 and 5(c)). Dauphas et al. (2015) explained that the bulk Mg/Si of planetary bodies was controlled by isotopic equilibration between SiO gas and forsterite in the solar nebular, resulting in the proportional relationship for planetary bulk Mg/Si and δ 30 Si. However, the model of Dauphas et al. (2015) did not consider the variation in O isotopic compositions of the examined planetary bodies. Our study demonstrates that the Mg/Si and Si and O isotopic compositions of the inner planetary bodies could be controlled by the degree of fractionation between the silicate and metallic dust of the solar condensate and the nebular reservoir's dustgas ratio where EC chondrules were formed.
Conclusions
In this study, the relationship between high-precision O and Si isotopic data was presented for EC chondrules and ureilite. As a result, the following conclusions were reached: ( | 9,436 | 2021-05-20T00:00:00.000 | [
"Physics",
"Geology",
"Environmental Science"
] |
Option Pricing using Quantum Computers
We present a methodology to price options and portfolios of options on a gate-based quantum computer using amplitude estimation, an algorithm which provides a quadratic speedup compared to classical Monte Carlo methods. The options that we cover include vanilla options, multi-asset options and path-dependent options such as barrier options. We put an emphasis on the implementation of the quantum circuits required to build the input states and operators needed by amplitude estimation to price the different option types. Additionally, we show simulation results to highlight how the circuits that we implement price the different option contracts. Finally, we examine the performance of option pricing circuits on quantum hardware using the IBM Q Tokyo quantum device. We employ a simple, yet effective, error mitigation scheme that allows us to significantly reduce the errors arising from noisy two-qubit gates.
INTRODUCTION
Options are financial derivative contracts that give the buyer the right, but not the obligation, to buy (call option) or sell (put option) an underlying asset at an agreed-upon price (strike) and timeframe (exercise window). In their simplest form, the strike price is a fixed value and the timeframe is a single point in time, but exotic variants may be defined on more than one underlying asset, the strike price can be a function of several market parameters and could allow for multiple exercise dates. As well as providing investors with a vehicle to profit by taking a view on the market or exploit arbitrage opportunities, options are core to various hedging strategies and as such, understanding their properties is a fundamental objective of financial engineering. For an overview of option types, features and uses, we refer the reader to Ref. [1].
Due to the stochastic nature of the parameters options are defined on, calculating their fair value can be an arduous task and while analytical models exist for the simplest types of options [2], the simplifying assumptions on the market dynamics required for the models to provide closed-form solutions often limit their applicability [3]. Hence, more often than not, numerical methods have to be employed for option pricing, with Monte Carlo being one of the most popular due to its flexibility and ability to generically handle stochastic parameters [4,5]. However, despite their attractive features in option pricing, classical Monte Carlo methods generally require extensive computational resources to provide accurate option price estimates, particularly for complex options. Because of the widespread use of options in the finance industry, accelerating their convergence can have a significant impact in the operations of a financial institution.
By leveraging the laws of quantum mechanics a quantum computer [6] may provide novel ways to solve computationally intensive problems such as quantum chemistry [7][8][9][10], solving linear systems of equations [11], and machine learning [12][13][14]. Quantitative finance, a field with many computationally hard problems, may benefit from quantum computing. Recently developed applications of gate-based quantum computing for use in finance [15] include portfolio optimization [16], the calculation of risk measures [17] and pricing derivatives [18][19][20]. Several of these applications are based on the Amplitude Estimation algorithm [21] which can estimate a parameter with a convergence rate of 1/M , where M is the number of quantum samples used. This represents a theoretical quadratic speed-up compared to classical Monte Carlo methods.
In this paper we extend the pricing methodology presented in [17,18] and place a strong emphasis on the implementation of the algorithms in a gate-based quantum computer. We first classify options according to their features and show how to take the different features into account in a quantum computing setting. In Section III, we review the quantum algorithms needed to price options and discuss how to represent relevant probability distributions in a quantum computer. In Section IV, we show a framework to price vanilla options and portfolios of vanilla options, options with path-dependent payoffs and options on several underlying assets. In Section V we show results from evaluating our option circuits on quantum hardware, and describe the error mitigation scheme we employ to increase the accuracy of the estimated option prices. In particular, we employ the maximum likelihood estimation method introduced in [22] to perform amplitude estimation without phase estimation in option pricing using three qubits of a real quantum device.
II. REVIEW OF OPTION TYPES AND THEIR CHALLENGES
We classify options according to two categories: pathindependent vs path-dependent and options on single assets or on multiple assets. Path-independent options have a payoff function that depends on an underlying asset at a single point in time. Therefore, the price of the asset arXiv:1905.02666v2 [quant-ph] 4 Jul 2019 up to the exercise date of the option is irrelevant for the option price. By contrast, the payoff of path-dependent options depends on the evolution of the price of the asset and its history up to the exercise date. Table I exemplifies this classification. Options that are path-independent and rely on a single asset are the easiest to price. This is done using Amplitude Estimation once a proper representation of the distribution of the underlying asset can be loaded to the quantum computer. Path-independent options on multiple assets are only slightly harder to price since more than one asset is now involved and the probability distribution loaded into the quantum computer must accout for correlations between the assets. Path-dependent options are harder to price than pathindependent options since they require a representation of the possible paths the underlying assets can take in the quantum computer.
III. IMPLEMENTATION ON A GATE BASED QUANTUM COMPUTER
Here we review some of the building blocks needed to price options on a gate-based quantum computer.
A. Distribution loading
The analytical formulas used to price options in the Black-Scholes-Merton (BSM) model [2,23] assume that the underlying stock prices at maturity follow a lognormal distribution with constant volatility. Such distributions can be efficiently loaded in a gate-based quantum computer [18,24]. However, to properly model the market prices of options, the volatility of the geometric brownian process describing the evolution of the assets must be changed for options with different strike prices [25]. This discrepancy between the BSM model and market prices is because stocks do not follow a geometric Browninan motion process with constant volatility. It is thus important to be able to efficiently represent arbitrary distributions of financial data in a quantum computer.
The loading of arbitrary states into quantum systems requires exponentially many gates [26], making it inefficient to model arbitrary distributions as quantum gates. Since the distributions of interest are often of a special form, the limitation may be overcome by using quantum Generative Adverserial Networks (qGAN). These networks allow us to load a distribution using a polynomial number of gates [19]. A qGAN can learn the random distribution X underlying the observed data samples { x 0 , . . . , x k−1 } and load it directly into a quantum state. This generative model employs the interplay of a classical discriminator, a neural network [27], and a quantum generator (a parametrized quantum circuit). More specifically, the qGAN training consists of alternating optimization steps of the discriminator's parameters φ and the generator's parameters θ. After the training, the output of the generator is a quantum state that represents the target distribution. The n-qubit state |i n = |i n−1 ...i 0 encodes the integer i = 2 n−1 i n−1 + ... + 2i 1 + i 0 ∈ {0, ..., 2 n − 1} with i k ∈ {0, 1} and k = 0, ..., n − 1. The probabilities p i (θ) approximate the random distribution underlying the training data. We note that the outcomes of a random variable X can be mapped to the integer set {0, ..., 2 n − 1} using an affine mapping. Furthermore, the approach can be easily extend to multivariate data, where we use a separate register of qubits for each dimension [19].
B. Amplitude Estimation
The advantage of pricing options on a quantum computer comes from the amplitude estimation (AE) algorithm [21] which provides a quadratic speed-up over classical Monte-Carlo simulations [28,29]. Suppose a unitary operator A acting on a register of (n+1) qubits such that for some normalized states |ψ 0 n and |ψ 1 n , where a ∈ [0, 1] is unknown. AE allows the efficient estimation of a, i.e., the probability of measuring |1 in the last qubit. This estimation is obtained with an operator Q, based on A, and Quantum Phase Estimation [30] to approximate FIG. 2. Quantum circuit that creates the state in Eq. (4). Here, the independent variable i = 4i2 + 2i1 + i0 ∈ {0, ..., 7} is encoded by three qubits in the state |i 3 = |i2i1i0 with i k ∈ {0, 1}. Therefore, the linear function f (i) = f1i + f0 is given by 4f1i2 + 2f1i1 + f1i0 + f0. After applying this circuit the quantum state is |i 3 The circuit on the right shows an abbreviated notation.
with probability of at least 8/π 2 . This represents a quadratic speedup compared to the O M −1/2 convergence rate of classical Monte Carlo methods [31].
C. Linearly controlled Y-rotations
We obtain the expectation value of a linear function f of a random variable X with AE by creating the operator A such that a = E[f (X)], see Eq. (2). Once A is implemented we can prepare the state in Eq. (2) and the Q operator. In this section, we show how to create a close relative of the operator in Eq. (2) and then, in Section III D, we show how to use AE.
Since the payoff function for option portfolios is piecewise linear we only need to consider linear functions f : We can efficiently create an operator that performs using controlled Y-rotations [17]. To implement the linear term of f (i) each qubit j (where j ∈ {0, . . . n − 1}) in the |i n register acts as a control for a Y-rotation with angle 2 j f 1 of the ancilla qubit. The constant term f 0 is implemented by a rotation of the ancilla qubit without any controls, see Fig. 2. The controlled Y-rotations can be implemented with CNOT and single-qubit gates [32].
D. Expectation value of functions using AE
We now describe how to obtain E[f (X)] for a linear function f of a random variable X which is mapped to integer values i ∈ {0, ..., 2 n − 1} that occur with probability p i . To do this we create the operator that maps i √ p i |i n |0 to using the procedure outlined in Sec. III C. The parameter c ∈ [0, 1] is a scaling parameter. The functionsf (i) and f (i) are related bỹ Here f min = min i f (i) and f max = max i f (i). The relation in Eq. (5) is chosen so thatf (i) ∈ [−1, 1]. Thus, sin 2 [cf (i) + π/4] is an anti-symmetric function around f (i) = 0. With these definitions, the probability to find the ancilla qubit in state |1 , namely is well approximated by To obtain this result we made use of the approximation which is valid for small values of cf (i). With this first order approximation the convergence rate of AE is O(M −2/3 ) when c is properly chosen which is already faster than classical Monte Carlo methods [17]. We can recover the O(M −1 ) convergence rate of AE by using higher orders implemented with quantum arithmetic. The resulting circuits, however, have more gates. This trade-off, discussed in Ref. [17], also gives a formula that specifies which value of c to use to minimize the estimation error made when using AE. From Eq. (6) we can recover E[f (X)] since AE allows us to efficiently retrieve P 1 and because we know the values of f min , f max and c.
IV. OPTION PRICING ON A QUANTUM COMPUTER
In this section we show how to price the different options shown in Tab. I. We put an emphasis on the implementation of the quantum circuits that prepare the states needed by AE. We use the different building blocks reviewed in Sec. III.
A. Path-independent options
The price of path-independent vanilla options (e.g. European call and put options) depend only on the distribution of the underlying asset price S T at the option maturity T and the payoff function f (S T ) of the option. To encode the distribution of S T in a quantum state we truncate it to the range [S T,min , S T,max ] and discretize this interval to {0, ..., 2 n − 1} using n qubits. In the quantum computer the distribution loading operator P X creates a state with i ∈ {0, ..., 2 n − 1} to represent S T . This state, exemplified in Fig. 3, may be created using the methods discussed in Sec. III A. We start by showing how to price vanilla call or put options and then generalize our method to capture the payoff structure of portfolios containing more than one vanilla option. FIG. 3. Example price distribution at maturity loaded in a three-qubit register. In this example we followed the Black-Scholes-Merton model which implies a lognormal distribution of the asset price at maturity T with probability density func- . σ is the volatility of the asset and µ = r − 0.5σ 2 T + ln(S0), with r the riskfree market rate and S0 the asset's spot at t = 0. In this figure we used S0 = 2, σ = 10%, r = 4% and T = 300/365.
Vanilla options
To price vanilla options with strike K, we implement a comparison between the values in state (8) with K. A quantum comparator circuit sets an ancilla qubit |c , initially in state |0 , to the state |1 if i ≥ K and |0 otherwise. The state |ψ n in the quantum computer therefore undergoes the transformation This operation can be implemented by a quantum comparator [33] based on CNOT and Toffoli gates. Since we know the value of the strike, we can implement a circuit tailored to the specific strike price. We use n ancilla qubits |a 1 , ..., a n and compute the two's complement of the strike price K in binary using n bits, storing the digits in a (classical) array t[n]. For each qubit |i k in the |i n register, with k ∈ {0, ..., n − 1}, we compute the possible carry bit of the bitwise addition of |i k and t[k] into |a k . If t[k] = 0, there is a carry qubit at position k only if there is a carry at position k − 1 and |i k = 1. If t[k] = 1, there is a carry qubit at position k if there is a carry at position k − 1 or |i k = 1. After going through all n qubits from least to most significant, |i n will be greater or equal to the strike price, only if there is a carry at the last (most significant) qubit. This procedure along with the necessary gate operations is illustrated in Fig. 4. An implementation for K = 1.9 and a three-qubit register is shown in Fig. 6.
To prepare the operator for use with AE we add to |φ 1 a second ancilla qubit initially in the state cos(g 0 ) |0 + sin(g 0 ) |1 . Here, g 0 is an angle with a value that we will carefully select. Next, we perform a rotation of the new ancilla qubit controlled by the comparator qubit |c and the qubits in |ψ n . The state This operation, implemented by the quantum circuit in Fig. 7, applies a rotation with an angle g(i) only if i ≥ K. The probability to find the second ancilla in state |1 , efficiently measurable using AE, is Now, we must carefully chose the angle g 0 and the function g(i) to recover the expected payoff E[f (X)] of the option from P 1 using the approximation in Eq. (6). The payoff function of vanilla options is piece-wise linear We now focus on a European call option with payoff Sec. III D, we must set Circuit that compares the value represented by an n-qubit register |i n , to a fixed value K. We use n ancilla qubits |a1, ..., an , a classical array t[n] holding the precomputed binary value of K's two's complement and a qubit |c which will hold the result of the comparison with |c = 1 if |i ≥ K. For each qubit |i k , with k ∈ {1, ..., n}, we use a Toffoli gate to compute the carry at position k if t[k] = 1 and a logical OR, see Circuit that computes the logical OR between qubits |a and |b into qubit |c . The circuit on the right shows the abbreviated notation used in Fig. 4. 6. Quantum circuit that sets a comparator qubit |c to |1 if the value represented by |i 3 is larger than a strike K = 1.9, for the spot distribution in Fig. 3. The unitary PX represents the set of gates that load the probability distribution in Eq. (8). An ancilla qubit |a is needed to perform the comparison. It is uncomputed at the end of the circuit.
where i max = 2 n − 1. This choice of g(i) forces us to Circuit that creates the state in Eq. (9). We apply this circuit directly after the comparator circuit shown in Fig. 6. The multi-controlled y-rotation is the gate shown in Fig. 2 controlled by the ancilla qubit |c that contains the result of the comparison between i and K. chose To see why, we substitute Eqs. (12) and (13) in Eq. (10) and use the approximation in Eq. (7). Therefore, This shows us that we needed g 0 = π/4 − c to used the identity up to a scaling factor and a constant. From this last equality we recover the expected payoff of the option given the probability distribution of the underlying asset. We should note that the fair value of the option requires appropriately discounting the expected payoff of the option to today, but as the discounting can be performed after the expectation value has been calculated we omit it from our discussion for simplicity. We demonstrate the performance of our approach by running amplitude estimation using Qiskit [34] on the overall circuit produced by the elements described in this section, and verifying the convergence to the analytically computed value or classical Monte Carlo estimate. An illustration of the convergence of a European call option with increasing evaluation qubits is shown in Fig. 8.
A straightforward extension of the analysis above yields a pricing model for a European put option, whose
Portfolios of options
Various popular trading and hedging strategies rely on entering multiple option contracts at the same time in-stead of individual call or put options and as such, these strategies allow an investor to effectively construct a payoff that is more complex than that of vanilla options. For example, an investor that wants to profit from a volatile asset without picking a direction of where the volatility may drive the asset's price, may choose to enter a straddle option strategy, by buying both a call and a put option on the asset with the same expiration date and strike. If the underlying asset moves sharply up to expiration date, the investor can make a profit regardless of whether it moves higher or lower in value. Alternatively, the investor may opt for a butterfly option strategy by entering four appropriately structured option contracts with different strikes simultaneously. Because these option strategies give rise to piecewise linear payoff functions, the methodology described in the previous section can be extended to calculate the fair values of these option portfolios.
In order to capture the structure of such option strategies, we can think of the individual options as defining one or more effective strike prices K j and add a linear function f j (S) = a j S + b j between each of these strikes. For example, to price an option strategy with the payoff function which corresponds to a call spread (the option holder has purchased a call with strike K 1 and sold a call with strike K 2 ), we use the functions f 0 , f 1 , and f 2 such that To match Eq. (15) with Eq. (16) we set f 0 (S) = 0, f 1 (S) = S − K 1 and f 2 (S) = −S + K 2 . In general, to price a portfolio of options with m effective-strike prices K 1 , ..., K m and m+1 functions f 0 (S), ..., f m (S) we need an ancilla qubit per strike to indicate if the underlying has reached the strike. This allows us to generalize the discussion from Sec. IV A 1. We apply a multi-controlled Y-rotation with angle g j (i) if i ≥ K j for each strike K j with j ∈ {1, ..., m}. The rotation g 0 (i) is always applied, see the circuit in Fig. 9. The functions g j (i) are determined using the same procedure as in Sec. IV A 1.
B. Multi-asset and path-dependent options
In this section we show how to price options with pathdependent payoffs as well as options on more than one underlying asset. In these cases, the payoff function depends on a multivariate distribution of random variables {S j } with j ∈ {1, ..., d}. The S j 's may represent one or several assets at discrete moments in time or a basket of assets at the option maturity. In both cases, the probability distribution of the random variables S j are truncated to the interval [S j,min , S j,max ] and discretized using Quantum circuit that implements the multicontrolled Y-rotations for a portfolio of options with m strike prices.
2 nj points so that they can be represented by d quantum registers where register j has n j qubits. Thus, the multivariate distribution is represented by the probabilities p i1,...,i d that the underlying has taken the values i 1 , ..., i d with i j ∈ {0, ..., 2 nj − 1}. The quantum state that represents this probability distribution, a generalization of Eq. (8), is with n = j n j . Various types of options, such as Asian options or basket options, require us to compute the sum of the random variables S j . The addition of the values in two quantum registers |a, b → |a, a + b may be calculated in quantum computers with adder circuits based on CNOT and Toffoli gates [35][36][37]. To this end we add an extra qubit register with n qubits to serve as an accumulator. By recursively applying adder circuits we perform the transformation |ψ n |0 n → |φ n+n where |φ n+n is given by Here circuit optimization may allow us to perform this computation in-place to minimize the number of qubit registers needed. Now, we use the methods discussed in the previous section to encode the option payoffs into the quantum circuit.
Basket Options
A European style basket option is an extension of the single asset European option discussed in Sec. IV A, only now the payoff depends on a weighted sum of d underlying assets. A call option on a basket has the payoff profile where S basket = w · S, for basket weights w = [w 1 , w 2 , . . . , w d ], w i ∈ [0, 1], underlying asset prices at option maturity S = [S 1 , S 2 , . . . S d ] and strike K. In the BSM model, the underlying asset prices are described by a multivariate lognormal distribution with probability density function [38] where ln S = (ln S 1 , ln S 2 . . . , ln S d ) T and µ = (µ 1 , µ 2 . . . µ d ) T , where each µ i is the lognormal distribution parameter for each asset defined in the caption of Fig. 3. Σ is the d × d positive-definite covariance matrix of the d underlyings with σ i the volatility of the ith asset, −1 ≤ ρ ij ≤ 1 the correlation between assets i and j and T the time to maturity. The quantum circuit for pricing a European style basket call option is analogous to the single asset case, with an additional unitary to compute the weighted sum of the uncertainty registers |i 1 n1 . . . |i d n d before applying the comparator and payoff circuits, controlled by the accumulator register |b n = |i 1 + ... + i d n . A schematic of these components is shown in Fig. 10. The implementation details of the circuit that performs the weighted sum operator is discussed in Appendix A.
We use a basket option to compare the estimation accuracy between AE and classical Monte Carlo. For M applications of the Q operator, see Fig. 1, the possible values returned by AE will be of the form sin 2 (yπ/M ) for y ∈ {0, ..., M − 1} and the maximum distance between two consecutive values is This quantity determines how close M operations of Q can get us to the amplitude which encodes the option price. Using sin 2 (π/4+x) = x+1/2+O( (3) and Eq. (14) we can determine that with probability of at least 8/π 2 , our estimated option price using AE will be within of the exact option price, where c, i max and K are the parameters used to encode the option payoff into our quantum circuit, discussed in Sec. IV A 1. To compare this estimation error to Monte Carlo, we use the same number of samples to price an option classically, and determine the approximation error at the same 8/π 2 ≈ 81% confidence interval by repeated simulations. The comparison for a basket option on three underlying assets shows that AE provides a quadratic speed-up, see Fig. 11.
Asian Options
We now examine arithmetic average Asian options which are single-asset, path-dependent options whose payoff depends on the price of the underlying asset at multiple time points before the option's expiration date. Specifically, the payoff of an Asian call option is given by where K is the strike price,S is the arithmetic average of the asset's value over a pre-defined number of points d between 0 and the option maturity T The probability distribution of asset prices at time t will again be lognormal with probability density function with µ t = r − 0.5σ 2 ∆t + ln(S t−1 ) and ∆t = T /d. We can then use the multivariate distribution in Eq. (20), with S now a d-dimensional vector of asset prices at time points [t 1 . . . t d ], instead of distinct underlying prices at maturity T . As we are not considering multiple underlying assets that could be correlated, the covariance matrix is diagonal Σ = ∆t[diag(σ 2 , ..., σ 2 )]. An illustration of the probability density function used for an asset defined on two time steps is shown in Fig. 12.
We now prepare the state |ψ n , see Eq. (17), where each register represents the asset price at each time step up to maturity. Using the weighted sum operator of Appendix A with equal weights 1/d, we then calculate the average value of the asset until maturity T , see Eq. (25), into a register |S Finally, we use the same comparator and rotation circuits that we employed for the basket option illustrated in Fig. 10 to load the payoff of an arithmetic average Asian option into the payoff qubit |p .
Barrier Options
Barrier options are another class of popular option types whose payoff is similar to vanilla European Op- • Knock-In: The option has no value unless the underlying asset crosses a certain price level before maturity.
If the required barrier event for the option to have value at maturity occurs, the payoff then depends only on the value of the underlying asset at maturity and not on the path of the asset until then. If we consider a Knock-In barrier option and label the barrier level B, we can write the option's payoff as where T is the time to maturity, S t the asset price at time t with 0 < t ≤ T and K the option strike.
To construct a quantum circuit to price a Knock-In barrier option, we use the same method as for the Asian option where T is divided into d equidistant time intervals with ∆t = T /d, and use registers |i 1 n1 |i 2 n2 . . . |i d n d to represent the discretized range of asset prices at time t ∈ {∆t, 2∆t, . . . , d·∆t = T }. The probability distribution of Eq. (26) is used again to create the state |ψ n in Eq. (17).
To capture the path dependence introduced by the barrier, we use an additional d-qubit register |b d to monitor if the barrier is crossed. Each qubit |b t in |b d is set to |1 if |i t nt ≥ B. An ancilla qubit |b | is set to |1 if the barrier has been crossed in at least one time step. This is done by computing the logical OR, see Fig. 5, of every qubit in |b d and storing the result in the ancilla This is computed with X (NOT) and Toffoli gates and d − 2 ancilla qubits. The ancilla qubit |b | is then used as a control for the payoff rotation into the payoff qubit, effectively knocking the option in. For Knock-Out barrier options, we can follow the same steps and apply an X gate to the ancilla barrier qubit before using it as control, in this manner knocking the option out if the barrier level has been crossed. A circuit displaying all the components required to price a Knock-In barrier option is shown in Fig. 13. Results from amplitude estimation on a barrier option circuit using a quantum simulator are shown in Fig. 14.
Even though we have focused our attention on barrier options where the barrier event is the underlying asset crossing a barrier from below, we can use the same method to price barrier options where barrier events are defined as the asset crossing the value from above. This only requires changing the comparator circuits to compute S t ≤ B in the barrier register |b d .
V. QUANTUM HARDWARE RESULTS
In this section we show results for a European call option evaluated on quantum hardware. We use three qubits, two of which represent the uncertainty and one encodes the payoff.
To examine the behavior of the circuit for different input probability distributions, we run eight experiments that differ by the initial spot price S 0 and all other parameters are kept constant. The spot price is varied from 1.8 to 2.5 in increments of 0.1. This way we can use the same circuit for all experiments and only vary the Y-rotation angles used to map the initial probability distribution onto the qubit register. This choice of input parameters allows us to evaluate our circuits for expected option prices in the range [0.0754, 0.7338].
Each experiment is evaluated on the IBM Q Tokyo 20qubit device with 8192 shots. We repeat each 8192-shot experiment 20 times and average over the 20 measured probabilities in order to limit the effect of any one-off issues with the device. The standard deviation of the measured probabilities across the 20 runs was always < 2%. The connectivity of IBM Q Tokyo allows to choose three fully connected qubits for the experiments, and thus, no swaps are required to implement any two-qubit gate in our circuits [34]. For all circuits described in the following sections, we used qubits 6, 10 and 11.
A. Algorithm and Operators
We now show how to construct the operator A that is required for AE. The log-normal distribution on two qubits can be loaded using a single CNOT gate and four single qubit rotations [39]. To encode the payoff function we also exploit the small number of qubits and apply a uniformly controlled Y-rotation instead of the generic construction using comparators introduced in Sec. IV. A uniformly controlled Y-rotation, i.e.
implements a different rotation angle θ i , i = 0, ..., 2 n − 1 for each state of the n-control qubits. For n = 2, this operation can be efficiently implemented using four CNOT gates and four single qubit Y-rotations [40,41]. The resulting circuit implementing A is shown in Fig. 15. Note that in our setup different initial distributions only lead to different angles of the first four Y-rotations and do not affect the rest of the circuit. Although we use a uniformly controlled rotation, the rotation angles are constructed in the same way described in Sec. III D. We use an approximation scaling of c = 0.25 and the resulting angles are [θ 0 , . . . , θ 3 ] = [1.1781, 1.1781, 1.5708, 1.9635], which shows the piecewise linear structure of the payoff function.
The total resulting circuit requires five CNOT gates and eight single-qubit Y-rotations, see Fig. 15. Since we use uniformly controlled rotations, we do not need any ancilla qubit. Note that if we want to evaluate the circuit for A alone, we can replace the last CNOT gate in Fig. 15 by classical post-processing of the measurement result: if q 1 is measured as |1 , we flip q 2 and otherwise we do nothing. This further reduces the overall CNOT gate count to four.
A quadratic speed-up can also be realized by performing AE without quantum phase estimation [22]. This is done by measuring Q k A |0 for k = 2 0 , ..., 2 m−1 for a given m and applying a maximum likelihood estimation. If we define M = 2 m − 1, i.e. the total number of Q-applications, and we consider N shots for each experi- 15. The A operator of the considered European call option: first, the 2-qubit approximation of a log-normal distribution is loaded, and second, the piecewise linear payoff function is applied to last qubit controlled by the first two. This operator can be used within amplitude estimation to evaluate the expected payoff of the corresponding option. ment, it has been shown that the resulting estimation error scales as O(1/(M √ N )), i.e., the algorithms achieves the quadratic speed-up in terms of M . This leads to shorter circuits than the original implementation of AE, see Appendix B for more details. In the remainder of this section, we focus on QA |0 3 , i.e., the outlined algorithm for m = 1, to demonstrate option pricing on real quantum hardware.
After optimzing the gate count, the resulting circuit for QA |0 3 consists of 18 CNOT gates and 33 single-qubit gates. The detailed circuit diagram and applied circuit optimization steps are provided in Appendix C.
B. Error mitigation and results
We run the circuits for A |0 3 and QA |0 3 on noisy quantum hardware. The results are affected by readout errors and errors that occur during the execution of the circuits.
To mitigate readout errors we run a calibration sequence in which we individually prepare and measure all eight basis states [34,42]. The result is a 8 × 8 readoutmatrix R that holds the probability of measuring each basis state as function of the basis state in which the system was prepared. We use R to correct all subsequent measurements. The error we measure on P 1 for A |0 3 was reduced from ∼ 6% to ∼ 4% using readout error mitigation.
Errors occuring during the quantum circuit can be mitigated using Richardson extrapolation [43]. First, the quantum circuit is run using a rescaled Hamiltonian to amplify the effect of the noise. Second, a Richardson extrapolation is used to extract the result of the quantum circuit at the zero noise limit. In hardware, error mitigation is done by stretching the duration of the gates. For each stretch factor the qubit gates need to be recalibrated [8]. Here, we use a simplified error mitigation protocol that circumvents the need to recalibrate the gates but still allows us to increase the accuracy of the quantum hardware [44]. Since the single-qubit and CNOT gates have an average randomized benchmarking fidelity of 99.7% and 97.8%, respectively, we restrict our error mitigation to the CNOT gates. Furthermore, because the optimized circuit for A |0 3 contains only 4 CNOT gates, we employ the error mitigation protocol only when evaluating QA |0 3 which consists of 18 CNOT gates.
We run the circuit for QA |0 3 three times. In each run we replace the CNOT gates of the original circuit by one, three and five CNOT gates for a total of 18, 54, and 90 CNOT gates, respectively. Since a pair of perfect CNOT gates simplifies to the identity these extra gates allow us to amplify the error of the CNOT gate without having to stretch the gate duration, thus, avoiding the need to recalibrate the gate parameters. As the number of CNOT gates is increased the probability of measuring |1 tends towards 0.5 for all initial spot prices, see Fig. 16(b). After applying the Richardson extrapolation we recover the same behavior as the option price obtained from classical simulations, see Fig. 16(c). Our simple error mitigation scheme dramatically increased the accuracy of the calculated option price: it reduced the error, averaged over the initial spot price, from 62% to 21%.
VI. CONCLUSION
We have presented a methodology and the quantum circuits to price options and option portfolios on a gatebased quantum computer. We showed how to account for some of the more complex features present in exotic options such as path-dependency with barriers and averages. The results that we show are available in Qiskit Finance [34]. Future work may involve calculating the price derivatives [45] with a quantum computer. Pricing options relies on AE. This quantum algorithm allows a quadratic speed-up compared to traditional Monte Carlo simulations but will most likely require a universal faulttolerant quantum computer [46]. However, research to improve the algorithms is ongoing [47][48][49]. Here we have used a new algorithm [22] that retains the AE speedup but that uses less gates to measure the price of an Fig. 15) (b) Probability of measuring |1 for the QA |0 3 circuit (see Fig. 19). We show the measured probabilities when replacing each CNOT by one, three and five CNOT gates (green, orange, red, respectively), the zero-noise limit calculated using a second-order Richardson extrapolation method (purple), and the probability measured using the simulator (blue). (c) Option price estimated with maximum likelihood estimation from measurements of QA |0 3 and A |0 3 with error mitigation (purple) and without (green). The exact option price for each initial spot price S0 is shown in blue.
option. Furthermore, we employed a simple error mitigation scheme that allowed us to greatly reduce the errors from the noisy quantum hardware. However, larger quantum hardware capable of running deeper quantum circuits with more qubits than the currently available quantum computers is needed to price the typical portfolios seen in the financial industry. Future work could focus on reducing the number of quantum registers in our implementation by performing some of the computation in-place.
VII. ACKNOWLEDGMENTS
The authors want to thank Abhinav Kandala for the very constructive discussions on error mitigation and real quantum hardware experiments.
Opinions and estimates constitute our judgment as of the date of this Material, are for informational purposes only and are subject to change without notice. This Material is not the product of J.P. Morgan's Research Department and therefore, has not been prepared in accordance with legal requirements to promote the independence of research, including but not limited to, the prohibition on the dealing ahead of the dissemination of investment research. This Material is not intended as research, a recommendation, advice, offer or solicitation for the purchase or sale of any financial product or service, or to be used in any way for evaluating the merits of participating in any transaction. It is not a research report and is not intended as such. Past performance is not indicative of future results. Please consult your own advisors regarding legal, tax, accounting or any other aspects including suitability implications for your particular circumstances. J.P. Morgan disclaims any responsibility or liability whatsoever for the quality, accuracy or completeness of the information herein, and for any reliance on, or use of this material in any way. Important disclosures at: www.jpmorgan.com/disclosures IBM, IBM Q, Qiskit are trademarks of International Business Machines Corporation, registered in many jurisdictions worldwide. Other product or service names may be trademarks or service marks of IBM or other companies.
Appendix A: Circuit implementation of weighted sum operator
Weighted sum of single qubits
In this appendix, we demonstrate an implementation of the weighted sum operator on a quantum circuit. The weighted sum operator S computes the arithmetic sum of the values of n qubits |a n = |a 1 . . . a n weighted by n classically defined non-negative integer weights ω = (ω 1 , ω 2 , . . . , ω n ), and stores the result into another mqubit register |s m = |s 1 · · · s m initialized to |0 m . In other words, where The choice of m ensures that the sum register |s m is large enough to hold the largest possible weighted sum, i.e. the sum of all weights. Alternatively, we can write the weights in the form of a binary matrix Ω = (Ω i,j ) ∈ {0, 1} n×n * , where the i-th row in Ω is the binary representation of weight ω i and n * = max d i=1 n i . We use the convention that less significant digits have smaller indices, so |s 1 and Ω i,1 are the least significant digits of the respective binary numbers. Using this binary matrix representation, S is to add the i-th qubit |a i of the state register to the j-th qubit |s j of the sum register if and only if Ω i,j = 1. Depending on the values of the weights, an additional quantum register may be necessary to temporarily store the carries during addition operations. We use |c j to denote the ancilla qubit used to store the carry from adding a digit to |s j . These ancilla qubits are initialized to |0 and will be reset to their initial states at the end of the computation. Based on the above setup, we build quantum circuits for the weighted sum operator using three elementary gates: X (NOT), CNOT, and the Toffoli gate (CCNOT). These three gates suffice to build any Boolean function [35]. Starting from the first column in Ω, for each column j, we find all elements with Ω i,j = 1 and add the corresponding state qubit |a i to |s j . The addition of two qubits involves three operations detailed in Fig. 17: (a) computation of the carry using a Toffoli gate (M), (b) computation of the current digit using a CNOT (D), (c) reset of the carry computation using two X gates and one Toffoli gate ( M). When adding |a i to the j-th qubit of the sum register, the computation starts by applying M and then D to |a i , |s j and |c j , which adds |a i to |s j and stores the carry into |c j . Then, using the same two operations, it adds the carry |c j to the next sum qubit |s j+1 with carry recorded in |c j+1 . The process is iterated until all carries are handled. Finally, it resets the carry qubits by applying M in reverse order of the carry computation. We reset the carry qubits in order to reuse them in later computations if necessary.
In general, we need max(k − 2, 0) carry qubits to com-pute the addition of |a i on |s j , where k ≥ 1 is the smallest integer satisfying k 1|ρ s j,j+k−1 |1 k = 0, where ρ s j,j+k−1 is the density matrix corresponding to |s j · · · s j+k−1 . In the k = 1 case, i.e. |s j = 0, the computation is reduced to "copying" |a i to |s j using the bit addition operator D, and no carries would be produced. For k ≥ 2, Eq. (A3) guarantees no carries from |s j+k−1 and beyond. Therefore we can directly compute the carry from |s j+k−2 into |s j+k−1 without worrying about additional carries. This eliminates the need for an ancilla qubit |c j+k−2 , and hence the number of carry qubits needed is k − 2. To further reduce the number of ancilla qubits, we can use any sum qubit |s j = |0 during the computation. In our case, since we are processing Ω column by column, all sum qubits more significant than |s j+k−1 would be |0 . In other words, we have the last m − (j + k − 1) sum qubits usable as carry qubits in the computation described above.
As the weights are known at the time of building the circuit, the possible states that |s m can have before each addition of a state qubit |a i are also computable. Since we are adding |a i to |s m starting from the least significant bit, k equals the bit length of the maximum possible sum on |s j . . . s m after adding |a i to |s j . In other words, Therefore, the number of carry operations and additional ancilla qubits required for each addition of |a i can be determined. The term in the · in Eq. (A4) is upperbounded by u≤i, or v≤j where n max = max m j=1 i=1 nΩ i,j is the maximum number of 1's in a column of Ω. It immediately follows that the number of non-trivial carry operations (i.e. carry operations that requires M) required to add |a i to |s j . . . s m is upper-bounded by k − 2 < log 2 n max ≤ log 2 n , and the number of ancilla qubits required for the entire implementation of S is at most the upper bound for k−2, since we may use some sum qubits as carries. In other words, the number of ancilla qubits required for S grows at most logarithmically with the number of state qubits n. |a 1 + b 1 → |s 1 s 2 |a 2 + b 2 → |s 2 s 3 |a 3 + b 3 → |s 3 s 4 a 1 : • a 2 : • • a 3 : other. This reduces the CNOT gate count for QA |0 3 to 18 and the resulting circuit is reported in Fig. 19. 19. The optimized circuit for QA |0 3 used for the experiments on real quantum hardware. It requires 18 CNOT gates and 33 single qubit gates. The initial spot price is assumed to be equal to 2. The dashed boxes indicate which parts are used for A, A † , S ψ 0 , and S0. Note that due to the circuit optimization, some boxes are slightly overlapping. | 11,218.2 | 2019-05-07T00:00:00.000 | [
"Computer Science",
"Physics"
] |
Astrotheology: The natural interface between hyperspace and the Trinity
Over the past few decades, physicists are seeking a unifying theory that could encapsulate the theories of general relativity and quantum mechanics, our understanding of the very big and the infinitesimal small, into one all-inclusive theory. This quest led to a renewed interest in the proposal of hyperspace, which states that the structure of space and time are folded into one another, creating multiple dimensions. The author believes that the biblical confession about the resurrected Christ could be beneficial to science and theology in this respect. Biblical testimony provides insight into the apparent natural and effortless movement of Christ between the different dimensions in nature. Astrotheology, as a nexus between the different disciplines, is well equipped to describe the meaning and implications of the resurrection with regard to the fabric of space-time. The author opines that it could facilitate as a natural interface between hyperspace and the Trinity. This proposal aims to accentuate, from a scriptural point of view, that reality indeed comprises more than four dimensions and that astrotheology could make a significant epistemological contribution in the dialogue about hyperspace and God’s agency in creation.
INTRODUCTION
We live in the age of hyperlinks, hyper cars, hypersonic rockets, and consequently, hypertension.The prefix "hyper-" describes an object or subject that is over and beyond our normal conception.Since the advent of the space race, there has been an exponential increase in our knowledge about space and our appreciation of its complexity.The advances concerning our understanding of space-time led to a new cosmology where the need to integrate different disciplines became apparent.Currently, the mystery regarding the possible existence and the nature of dark energy and matter confront the edges of reason.It would, therefore, not be unusual to connect the prefix "hyper-" in some way to our conception of space and time.
Over the past few decades, physicists are seeking a unifying theory that could encapsulate the theories of general relativity and quantum mechanics, our understanding of the very big and the infinitesimal small, into one all-inclusive theory.Kaku 1 (2022a) points out that the proposal of a multidimensional reality (where the known four dimensions in nature, if time is accepted as a fourth dimension, are increased to ten or more) could account for a better understanding of all the physical laws that we know of thus far.The realisation that the structures of space and time are folded into one another to form a hyperspace 2 consisting of multiple dimensions could also benefit this elusive search for the "god equation". 3A leading contender in the quest to solving this problem is string theory. 4This quest by theoretical physics to solve the nature of the fabric of creation inherently edges the boundaries of philosophy and theology.I believe that it is at this crossing that the biblical confession about the resurrected Christ could be beneficial for science and theology, as well as our appreciation of the intricacies of the created order.The incarnation, and specifically the resurrected body of Christ, retrospectively and proleptically, help us understand not only the relationship between mind and brain (Pieterse 2020), but also God's movement in the physical universe within time and space (Pieterse 2022).In the context of this article, it provides insight into the apparent natural and effortless movement of Christ between the different dimensions in nature.The aim of this top-down approach is not to identify another elusive equation.The objective is more imperious.The triune God endowed humanity with a creation where the core of the natural order exceeds our efforts to conquer all knowledge exclusively from a physical point of view.Ironically, although reductional physics often dismiss 1 Kaku is the author of The God equation (2021), and a leading authority on multidimensional realities.
2
The term "hyperspace" is functional in different contexts.In this study, it refers to proposed areas currently invisible to conventional physics.3 Physicists use this term to describe the elusive single theory that could encapsulate reality.However, from a scriptural viewpoint, one could argue that the complexity of the natural world is entwined with a spiritual dimension that is not empirically visible.
4
"The basic idea of string theory is not to take particles as fundamental objects but strings that are very small but extended in one dimension.This assumption has the pivotal consequence that strings interact on an extended distance and not at a point."(Kuhlmann 2020).It is important to note that the nature of this theory makes it extremely difficult to validate currently.
the authenticity of other spiritual dimensions 5 as a yet unresolved physical phenomenon or illusions, the proposed hyperspace hypotheses are deemed credible because they originate from a specific need within natural science.However, it is important to view the bigger picture.Van Huyssteen (1998:119) clarifies the character of evolutionary epistemology and its impact on the specific definitions of rationality within science and theology.He concludes that a scientific view of rationality is not necessarily superior to the nature of rationality applied in theological discourse.In addition, Pieterse (2021:170) reminds us that we live in a complex and relational cosmos, where a natural phenomenon is often clouded in mystery.Our inability to solve some problems is not a weakness, but it bears testament to certain underlying mysterious attributes within the created order and the limits of human knowledge.Natural science presents us with numerous examples such as, for instance, Euler number 6 and the Fibonacci sequence. 7I believe that a theistic theology borne from within the triune God could enhance the epistemological process and clarify some of these conundrums.How?Over the past few decades, various scholars from within the science/religion fraternity debated the likelihood of intelligent design (ID) when confronted with seemingly impossible scenarios in creation.Unfortunately, some political and social in-groups exploited ID in support of specific non-theological agendas.In addition, ID as a theory also poses certain theological challenges.This is not another ID proposal.To the contrary, the author has no intention to prove or disprove the existence of the triune God.I accept God and his agency in creation in faith.Conversely, creation provides natural science with numerous enigmas and anomalies that are ascribed to fate or temporarily credited to the god of the gaps.The author believes that God has made a commitment to his creation and, through his indwelling Spirit, he upholds and guides creation towards the eschaton.Within these processes, there are indeed embedded unknown forces, laws, and contingencies at work, all within the fabric of a creation embraced by the triune God.The enigma of hyperspace is but an example.
5 Kärkkäinen (2015:307) points out that among scientists studying human nature, for example, as well as other nonreligious philosophers, by far the most common notion of human nature is physicalist or materialist monism.6 According to Kenton (2022), "[t]he term Euler's number (e) refers to a mathematical expression for the base of the natural logarithm.This is represented by a non-repeating number that never ends.The first few digits of Euler's number are 2.71828.The number is usually represented by the letter 'e' and is commonly used in problems relating to exponential growth or decay."7 Sheldon (2022) explains that "[t]he Fibonacci sequence is a set of integers (the Fibonacci numbers) that starts with a zero, followed by a one, then by another one, and then by a series of steadily increasing numbers.The sequence follows the rule that each number is equal to the sum of the preceding two numbers."This sequence of numbers is frequently observed in natural objects and phenomena.
Astrotheology, as a nexus between the different disciplines, is well equipped to describe the meaning and implications of the incarnation and resurrection of Christ with regard to the fabric of space-time in a multidimensional reality.Due to its subject matter, it includes the very big and incredibly small.Some might argue that astrotheology, as a subset of theology, may well be excluded in our reflection about hyperspace and the Trinity.The reason being that the issue at stake transcends the traditional scope of astrotheology and the conclusion appeals to the broader dialogue between science and religion.Yet, the author believes that astrotheology, as the current porthole to hyperspace, might be a valuable ally in the bigger debate.Theology needs to embrace the dialogue with space sciences, as it is one of the principal leaders of innovation.Within this environment, theological novelty is essential to paint the bigger picture, a picture that eludes mere physicalism as an analysis of creation.Hyperspace and the Trinity are also cosmologically linked, due to the incarnation of the cosmic Christ (Col. 1) (Pieterse 2017:361).
Therefore, the author believes that astrotheology could facilitate as natural interface between hyperspace and the Trinity.The foundation of this proposal is embedded in two previous works 8 on astrotheology, namely space-time and incarnation.Astronomy and theology have a common denominator; both navigate between time and space on a macro and micro level.This proposal aims to accentuate, from a scriptural point of view, that reality indeed comprises more than four dimensions and that the resurrected Christ moved about spontaneously between different dimensions in the natural world.How could this testimony assist the discourse about hyperspace?It will verify that God created multiple dimensions in the natural world and that interchange between different dimensions is an inherent phenomenon.Christ committed himself to this world through the incarnation and embraced the very nature that he created, the same creation currently scrutinised by physics and other disciplines.
A valid question might be: Why should we think of the triune God in terms of hyperspace and how is an interface even possible?Edwards (2010:104) relates these questions to the eschatological transformation of creation: The God of the resurrection is the God of creation.God is present in the Spirit to every creature in the long history of the universe as the God of self-bestowing resurrection love.God creates a universe that is capable of being transformed from within.
The resurrection of Christ accentuated this delicate balance between God's transcendental might and his immanent presence in the visible and invisible spectrums of creation.Hyperspace provides a temporal-spatial reference point of God's motion in creation.It is important to relate Christ's agency to the perichoretic movement within the Trinity, as he himself testified in John 14.This article does not claim that the postulated ten dimensions envisioned by theoretical physicists are similar to the ones Christ used.However, significant questions need to be addressed about the fabric of nature and God's movement therein.Astrotheology could make a significant epistemological contribution in the dialogue about hyperspace and God's agency in creation.Rust (1987:31) explains theology's contribution as follows: The Christian revelation must be interpreted in a way that both shows rational coherence and also speaks to the contemporary field of knowledge.This is why every great systematic theology possesses a philosophical cement and thereby builds a bridge to its world, provided such cement is consonant with some important strain of contemporary thinking.
In the current context, theology is obliged to engage with scientific research regarding the hypothesis of hyperspace.That being said, from a convergent point of view, theology is not in any way inferior to scientific endeavour, as if theology should only validate or fall in line behind any hypothesis that natural science conjures up.Theology has a duty to enlighten and empower the natural sciences in all spheres that fall outside their mandate.In addition, theology should always be aware of the hazards of creationism 9 in its efforts to relate God's agency to the mysterious nature of hyperspace as postulated by science.
A superficial reading of the title might lead to valid questions.For example: Is it theologically correct to relate God to certain spaces in creation, albeit hidden or unknown at the moment?Is this concept not contrary to the traditional confession about an omnipotent creator?In his work about the attributes of God, Staniloae (1998:181) states that, in Christ, God accepted a kenosis in the realm of space.He chose to reveal himself and commit himself to creation in a specific space and time.However, the incarnation did not erase God's omnipotence and his continuous providence and care for all of creation, in all space.9 Creationism is an ideology that seeks to explain the methodology of creation.There are different models, but it usually relates to a very specific fundamentalist reading of the book Genesis.
When considering God's relation to space, it is important to briefly note Barth's view on God and spatiality.Venter (2006:208, 209) reveals the importance of Barth's novel proposal.In an attempt to reinforce the doctrine of God's omnipresence in creation, older theologies tended to view God as non-spatial.Within this noble attempt, danger lurked.Non-spatiality meant, no distance, only identity.God's omnipresence (space) and eternity (time) were relegated to aspects of God's infinity.Barth proposed an alternative notion.
God's omnipresence is primarily a determination of God's love.Without love there could be no other, no universe beside God, and no divine omnipresence in relation to it.Omnipresence implies presence, which is not identity, but togetherness at a distance (Venter 2006:208).
Barth linked omnipresence to the essence of God within the Trinity.Father, Son and Spirit exist distant and near in one being.He possesses space in himself as triune.Consequently, the triune God created space as presence and remoteness; it is relational in character and its reality is found in the truth of the intra-trinitarian relationship.God is spatial, but in a special way.
In order to understand the progression of the argument, it is necessary to clarify the meaning of two important terms, namely "interface" and "hyperspace".What does interface mean?Interface is "a situation, way, or place where two things come together and affect each other" (Cambridge Online Dictionary 2022).In the old Testament, the holy of holies in Solomon's temple acted as an interface, where God bestowed his grace on his people through the mediating role of the high priest.In the context of this article, astrotheology explores aspects of the created order that presents itself as an interface where science and theology find common ground.It might be an opportunity to acknowledge the agency and splendour of the triune God in an interdisciplinary manner.It is appropriate to briefly refer to the work of Gregersen (2016) and his concept of "deep incarnation", as it resonates with interface.Over the past two decades, he explored, in a series of papers, the gravity of Christ's incarnation.He argued conclusively that Jesus' life, death, and resurrection have a more profound influence on reality than what is generally accepted.The notion of "deep incarnation" is an attempt to fathom the depths of the triune God's solidarity with creation.
If God's own being was present in the life story of Jesus, as Christians believe, then Christ is present from the bottom of the universe and up, emerging from within the realm of creation no less than descending from above.The proposal of deep incarnation is thus both 'high' in Christology and 'low' in materiality (Gregersen 2016:2).2023:43(2) Deep incarnation refers to an incarnation into the very essence of the material world and the systems of nature.Do we seek only specific areas of God's agency in creation, as Polkinghorne and Russel proposed? 10No! The author believes that the triune God, who reveals himself as being actively engaged with his creation through creatio continua and providence, dwells within and encompasses space-time in all its dimensions.It is important though that this immanent presence of the Trinity should not be confused with pantheism or panentheism (Pieterse 2022:42), but it is an acknowledgment of the indwelling Spirit of God, as eloquently described in Psalm 104.God's agency reveals a natural movement between, and the upholding of the different dimensions of nature, an attribute that is deemed speculative only from a reductionist scientific perspective.
What is the meaning of hyperspace?The Collins English Dictionary (2022) identifies multiple meanings associated with the term.For example, (i) mathematics -space having more than three dimensions: often used to describe a multidimensional environment; (ii) science fiction -a theoretical dimension within which conventional space-time relationship does not apply.Conway Morris (2003:152) employs the term "hyperspace" to describe promising extraterrestrial habitats, where biological life might evolve.The term is also incorrectly applied as a synonym for cyberspace to describe the movement and interpretation of data, while psychologists explain cognitive processes in the brain with the same expression.Thus, in all examples, hyperspace is viewed as a special construct of space and time.It transcends philosophical speech, where space is often described as an unobserved transition through history, culture, and politics where certain spaces influence our perception of reality.In his critique of modernist spatial awareness, Allen (1999:253), for example, pleads for a renewed appreciation of the role that spatial language and areas have in the formation of ideas, cultures, and social degeneration.In the context of this article, hyperspace refers to an ontological, 11 yet naturally occurring spatiality that transcends human beings, but one that also constitutes our very existence.Although the current scientific hypothesis associates concealed dimensions with the very small, it is equally plausible to include macro dimensions.Scripture reveals a unique spatial appreciation within the perichoretic relationship of the Trinity that surpasses our grasp.Yet, through the incarnation, as a Trinitarian act of grace, Christ engages on multiple levels with created space and time.The argument will be constructed in the following manner.After this concise introduction, I will explore the multidimensional nature of the universe, in particular the existence and nature of hyperspace.This enquiry about the fabric of creation naturally leads to questions concerning God's Trinitarian agency in space (hyper-).In particular, could the Scriptural testimony about the nature and presence of the resurrected Christ enrich our understanding of a multidimensional cosmos?Finally, astrotheology, by nature of its content, locates itself at the interface of hyperspace and the Trinity.Therefore, it might serve as an interdisciplinary bridge between natural science and natural theology.12
A HYPERSPACE INFUSED UNIVERSE?
The standard model of particle physics is an attempt to bring order to a contingent universe and comprehend the intricacies of an entangled cosmos.The theory attempts to describe all the known elementary particles on a sub-atomic level, as well as three13 of the four fundamental forces in nature.On closer inspection, it becomes clear that this conventional paradigm is often tested with new and unusual phenomena.This waypoint of physics is continuously interrogated and revised, due to new discoveries on sub-atomic level (by the Large Hadron Collider) and on a cosmic scale (for example, Hubble and the James Webb telescopes).One thing is clear, we live in an entangled cosmos, where space-time is even more mysterious than was previously thought.One of the conundrums of space-time is the possibility and nature of hyperspace.
The seeds of possible higher invisible spaces could be traced to ancient Greece.Freeman (2018:175-177) narrates Plato's allegory of the cave as a first attempt to acknowledge dimensions that transcend our sensual awareness.2023:43(2) He points out that, although Aristotle and Euclid rejected the idea of higher dimensions, the idea did not disappear.During the 18 th century, the fourth dimension was a common theme in art and literature.Serious thought about the hypothesis of hyperspace could be traced to Riemann's lecture in 1854, when he revealed a geometric system for curved surfaces.Einstein applied this method to create his theory on general relativity.Abbott's celebrated work, Flatland14 (1884), demonstrated the metaphysical and theological implications of a cosmos with more than three dimensions.This metaphor is useful in our attempts to imagine a world infused with hyperspace.
Where did the current fascination with hyperspace begin?The precursor of modern string theories was Kaluza and Klein's hypothesis, early in the 20 th century, of a possible fifth dimension opposed to Einstein's four.Over the past few decades, scholars from various disciplines revisited this notion of a potential multidimensional universe, specifically in the quest for a single theory that describes all of the natural world.Kaku's recent book, The God equation (2021),15 underlines this interest.Kaku (2022a) points out that, although the theory of higher dimensional space has not been verified, nearly 5,000 papers in physics alone have been published on the subject.This includes the pioneering work of Kaluza and Klein, the supergravity theory of the 1970s, and the various superstring theories of the 1980s and 1990s.The amount of research communicates the significance of the subject, as well as the illusive nature of this enigma.We spend our lives in three spatial dimensions, ignorant of the possibility of an invisible ten-dimensional hyperspace, hovering above, or folded into conventional space.The only evidence of its existence may be found in the ripples of gravity and light interacting in space-time (Kaku 2022b).Page (2021:8) elaborates and draws attention to the causal relationship between various scientific theories and hyperspace.String theory, for example, requires a multidimensional universe, justifying the belief in hyperspace.That being said, the likelihood of a multilayered embedded cosmos is not dependent on the success or progress of any specific scientific theory.Over the past 30 years, the science and religion debate wrestled with various ideas that intersected science, religion, and philosophy.The mind/brain problem 16and John Polkinghorne's proposal of dual-aspect monism17 are two examples of the interrelated nature of creation and the need for a more balanced view between the sciences.
When contemplating about the nature of hyperspace, Allen's (1999) submission on spatial theory18 might be beneficial.Although his discourse is not directed specifically at the substance or meaning of hyperspace, his proposal is relevant.Allen (1999:258)
points out that
[m]odernist human geography struggles between the mind's ability to transcend matter or be determined by matter.Regardless of the side taken, the 'mind' was given a dialectical life to be struggled for, while matter, as the modernist signifier of space, was left as passive, nostalgic, and dead.Human geography needed to include a discourse on spatial literacy and the struggle for spatial consciousness in order for the socio-spatial structures of everyday life to be named, deconstructed, and transformed.
A critical question might be: Is this need for spatial consciousness also applicable to the interdisciplinary discourse about hyperspace?I believe it is.
In his proposal, Allen (1999:254-255) points to the modernist spatial binary, a dualistic perception of reality divided between the realistic illusion and imagined conceptual space.The former bestows supremacy to material objects since the modern world is primarily interpreted scientifically and mechanistically.Conversely, imagined conceptual space is viewed as less valuable, for it represents unseen and unmeasurable spatial abstractions.The stuff of the mind.This diminished view of the natural world is also commonplace among some scholars within the science and religion debate.In this article though, it will become clear that the substance of, and the movement between space (hyper?) and matter are a natural and interchangeable phenomenon if viewed from a biblical perspective.The various miracle narratives in Scripture that culminated in the resurrected body of Christ are proleptic references to the distinct nature of space and matter.Hudson (2005:184) validates this line of thought.His research points out that the acceptance of the hypothesis of hyperspace by way of inference to the best explanation could provide possible answers to some of the most perplexing questions of faith.2023:43( 2) What is the nature of God's agency in space and time, and could it be relayed to hyperspace?Rust (1987:32) delivers an introductory remark when he states that [o]ur historical time is a reality in the divine life and has its place in the divine purpose.At the centre of this historical unveiling is Jesus of Nazareth, the incarnate presence of God in history.Here is the final assurance that our creaturely time is within the divine activity and has an eternal significance.
TRINITARIAN AGENCY IN HYPERSPACE
If one views the world solely from a naturalistic point of view, the spiritual19 domain might lack importance, due to its innate qualities of verification.Then again, important scientific hypotheses (for example, the Big Bang theory) could at best be accepted on account of an inference to the best explanation.In addition, Pieterse (2021:170) reminds us that there are certain mysteries intrinsic to nature that we are not able to detect or comprehend from an exclusive scientific paradigm.Scripture testifies about God's agency in and through creation, culminating in the incarnation of Christ.If we consider hyperspace, it is only reasonable to ask if God's revelation could assist our quest.A word of warning though.The context, purpose, and focus of the biblical text are pre-scientific in origin and, therefore, limited scientifically.
Nonetheless, specific texts could be beneficial to our purpose.In Chapter 20 of the Gospel according to John, the appearance of the resurrected Christ is documented in verse 19: On the evening of that day, the first day of the week, the doors being locked where the disciples were for fear of the Jews, Jesus came and stood among them and said to them, 'Peace be with you'.(ESV 2016).
The text, along with verse 26, confirms that the doors were locked, yet Jesus appeared in their midst.Then again, in verse 27, we read the famous words: Then he said to Thomas, 'Put your finger here, and see my hands; and put out your hand, and place it in my side.Do not disbelieve, but believe'.(ESV 2016).
In this instance, according to tradition, Thomas extended his hand and he did not touch a void but a material being.It seems that Christ's transit from Spirit to matter, from an unknown space into the three-dimensional world, was effortless and in that context fairly normal.In Luke 24, the evangelist provides more information and testifies that Christ, who moments before transcended space, shared food with the disciples.It appears that the limits of space and matter are tested, and elegantly exceeded.Did Jesus use one or more of these hypothetical dimensions?
In his commentary, Hendriksen (1982:458) presents different explanations given through the ages to clarify the mystery of the closed doors.His conclusion is that the historicity of the moment is beyond doubt and that the entrance of the resurrected Christ should be understood in a literal sense.Although many questions remain, it is clear that the resurrected body of Jesus possessed multidimensional attributes that surpass the first level of rational thought.Beasley-Murray (1987:378) concurs and underlines the ability of the risen Christ to materialise himself at any given place in a manner that is beyond comprehension.In his explanation of Luke's testimony, Geldenhuys (1965:640) affirms that Jesus deliberately ate fish in the presence of the disciples to reassure them that it was he himself who appeared among them in spirit and in body.McClean (2012:102) interprets Paul's perspective on different spatial realities in a similar way.He states that, for Paul, the heavenly realm is part of the creation.It has a spatio-temporal relationship to the earthly realm, as well as a spatio-temporal dimension in itself.Christ is the archetype of the resurrection and the resurrected body of Christ participates in time and space, even though heavenly time and space should be thought of in a somewhat different manner to that of earthly existence.Del Colle (1996:108) elaborates and observes that the body of Christ is the mediating agent between the Trinity and temporality.It constitutes a distended experience of time in all of its varied dimensions, theologically considered, so that we find in Jesus Christ an unsurpassable and irrevocable temporalization of being-in-the-world which constitutes him as the mediating agent for the consummation of all creation into the eternal reign of God.
The resurrected Christ points the way to the embracing of a multidimensional reality, a reality where creation and eschatology meet.What does this mean?The resurrected body of Christ anticipates the parousia, where all of creation will be transformed and embedded in him.Although our inquest into hyperspace is limited to the constraints of creation, the impact of the deep incarnation of Christ and its eschatological fruits should never be excluded (Gregersen 2016).Christ's resurrected body and its movement in spacehyperspace becomes the bridge that ties creation to eschatology.In what manner?The transfigured body of Christ is the first fruit of a new creation.The effortless passage between space and hyperspace, between Spirit and matter, endorses the promise of a new heaven and a new earth.
If these testimonies are assessed from our current limited knowledge of creation, certain questions arise.For example: What or where is the space that Jesus used in his miraculous transition through matter?It is clear that, from the very beginning, God's revelation coincided with a transition from a specific dimension (where God is) to another (the recipient's spiritual and physical space).With the advent of Christ, these boundaries became more fluid, since the triune God now dwelled in our midst.Yet, due to its focus and specific etymology, Scripture is not interested in the physical specifications of these dimensions.In addition, Freeman (2018:183) points out that, until the advent of non-Euclidean geometry in the 19 th century, the concept of hyperspace was either unknown or not allowable.Theologians would have had no vocabulary to relate physical phenomena to spiritual revelation.Given that Christ incarnated into the physical realm and made use of natural substances throughout his earthly ministry, one may ask: Could a natural multidimensional reality be plausible?Freeman (2018:175) is confident in his analysis.His research of angelic bodies leads him to conclude that advances in the study of geometry and physics in the 20 th century provide us with a new way of conceiving angelic bodies.They are objects existing in higher spatial dimensions, what we might call hyperspace.The highly paradoxical idea of spiritual matter, a substance that possesses materiality but not (three-dimensional) bodiliness has become plausible again (as it was in the early church).Hyperspace physics could help us understand that angels are composed of material bodies but not three-dimensional material bodies.If one studies the Old and New Testaments, it becomes clear that God's revelation was intersected with miraculous deeds that transcended simple physicalist explanations. 20uis ( 2021) reflects on the nature of space according to modern physics.He states that one can distinguish between two possible views of space: According to the first type, space is a thing, something that exists independently from the things that are located in it, something that 'contains' those things.Such space is 'absolute'; it exists on its own; it can be 'empty'. 21According to the second type of answer, space is a 'relation', more precisely, a structure of relations between things that occupy different places in a coordinate system.Without things, there is no space because a relation cannot exist without relata (Muis 2021:5).
Although modern physics cannot prove the ontological view that space is relational, the reflections on hyperspace nudge us in that direction.From a theological point of view, the relational aspect of space is a familiar one.The incarnation of Christ suggests that God continuously creates space in a spiritual and physical manner, in order for creation (in all its configurations) to encounter its creator. 22Christ's use of physical space after the resurrection accentuates this point.
Whence does space originate?Staniloae (1998:171, 177) observes that the possibility of space arises in God, for it is in the distinction of the divine persons that the possibility of the otherness of finite persons arises.
Although God is above space, he is also present in all space.This "supraspatial" attribute of God prevents him from being caught up in physical space and time.It originates from the omnipresence of God within the Trinity and finds its physical reflection in the ontological unity of relational space.The nature of the triune God's connection to space (and time) is an important and contentious issue historically.Lett (2019:268) refers to Jenson's (1997Jenson's ( -1999:236:236) and Balthasar's (1988Balthasar's ( -1998) ) use of the phrase "divine roominess".He concurs with Jenson that a natural adoption of creaturely time and space emerges from within the perichoretic relationship of the Trinity.
God in Christ is the infinite space of creaturely space (Col 1:15-18, Eph 1:23, I Cor 15:18).The analogia entis enables us to speak of God's spatiality while remaining cognizant of the ever-greater dissimilarity between the spatiality of God and creatures (Lett 2019:274).
Lett employs the metaphor of sound to describe God's infusing of creation.Different sounds can interpenetrate one another without one displacing the other.God's agency in the world is not in conflict or in competition with human activity.His actions are the bass notes that sustain the melody.This association with space and time differs from Balthasar's spatial language about the Trinity that might lead to three ontologically distinct entities.These conceptions are not unique.Buitendag (2022:6) draws attention to the Russian theologian, Sergius Bulgakov's (1871Bulgakov's ( -1944) ) contribution to the Trinitarian story of creation which stated that creation can neither be identified with God nor separated from God, as the Holy Spirit ontologically grounds it.Likewise, Bergmann (2010:19) points to the old liturgical formula "in the Spirit through the Son to the Father" that expressed the perichoretic unity 23 that was conveyed to a broken creation and implanted through the resurrection of Christ.Rust (1987:40) also speaks of the triune God's dynamic presence inversely to his creation.Although God is hidden behind and agential within the created process, he is actively evoking new responses and persuasively urging the creation towards its goal and purpose.It becomes clear that one cannot speak about space without speaking about the triune God.God's creation and his use of space (and time) have temporal and teleological implications.Pieterse (2022:111) refers to Paul's revelation about the cosmic Christ in Colossians 1 to remind us that Jesus as God incarnated was from the very beginning the focal point of God's eschatological purpose with the whole of creation.In addition, the affirmation of a cosmic redemption realised through the efficacy of the triune God accentuates his preservation and the eschatological purpose of the cosmos.
This agency of God is realised in three-dimensional and hyperspace.
The concept of a multidimensional reality, where other dimensions influence three-dimensional space, was already present in the Old Testament.The prominent scholar, N.T. Wright (2013:97) explains that, "[w]hen you went up to the Temple, it was not as though you were 'in heaven'.You were actually there.That was the point."In his commentary, Page (2021:7, 12) remarks that the temple was unlike any other place on earth, since only in the temple, and perhaps more precisely the holy of holies, one could be simultaneously located in heaven and earth.Therefore, he asks, in view of hyperspace, could the concept of Heaven be located not spatially far off, but in another dimension close by, as prominent New Testament scholars propose?Thus, scriptural testimony is clear that the triune God purposely revealed himself through the incarnation to be immanently active in his creation.In addition, the resurrected body of Christ seems to defy traditional physical laws and boundaries.If we consider hyperspace, it might be possible to relate these features to current hypothetical scenarios concerning space and time.Humanity's continued pursuit of exploring boundaries on micro and macro scale and God's continued agency in creation compel us to seek an interface(s) between hyperspace and the Trinity.Our motivation is not to localise the triune God within space and time in a manner that diminishes his splendour.That would be impossible.Rather, his agential activity in hyperspace serves as a reminder that any exclusive physicalist proposal about the ontology of the cosmos is at best speculative and reductionist in nature.Thus, is there an interface between hyperspace and the trinity?always participated fully in every divine act.This is very evident in the biblical testimony."(Rust 1987:37).
AT THE INTERFACE OF HYPERSPACE
The answer to this question is, absolutely!If one considers the theological paradigm regarding God's agency in creation as a viable and complementary companion to the current hypothesis about hyperspace, an interface is a natural attribute of nature.However, if one accepts, for example, the identity version of monistic substantivalism24 as the essence of space-time, an interface would be impossible (Schaffer 2009:140).This model rejects the confession of a resurrected body that is able to transcend space and time in an instant, and effortlessly transform into matter in another.In the quest for an interface, important issues need to be addressed.For example, is hyperspace a theoretical oddity, or does it indeed belong to the essence of creation?Is linking God and hyperspace a mere epistemological reality, or is it also ontological in nature?I believe that the famous phrase of Polkinghorne (2004:79) could contribute to the argument: "Epistemology models Ontology, what we know is a reliable guide to what is the case."In this proposal about hyperspace and the Trinity, the author argues that hyperspace is indeed embedded within the ontological fabric of creation.The tools of natural science are well equipped to make us aware of this reality on an epistemological level.However, science lacks the capacity to fully comprehend the unique interplay and meaning of space, hyperspace, and time.Theological insight into the incarnation of Christ and the unique attributes of his resurrected body lead to a more comprehensive epistemological grasp of an ontological reality.The connection between God and hyperspace is both epistemologically accessible and ontological in nature.The resurrection has an ontological significance that engulfs every aspect and domain of creation.
In popular literature and in academic journals though, physicists seem to be the sole curators of hyperspace, which they developed to solve problems that transcend our three-dimensional world.Guillard and Marks (2021:1) identify several theories that rely on higher dimensional spaces to solve threedimensional problems.
Riemannian manifolds in general relativity [1], the potential for our universe to be a hologram [2], multidimensional spaces in string theories [3], and the projection of higher dimensional crystals to explain the structure of quasicrystals in chemistry [4].
If one considers the scriptural analysis of the previous paragraphs, I will argue to the contrary.Hyperspace is not an exotic phenomenon exclusive to theoretical physics and only accessible through laboratory experiments with sophisticated tools.From a confessional viewpoint, it is clear that hyperspace is a natural phenomenon, intersecting space-time, accessible through communion with the triune God, and consistent with God's agency in the natural world.That being said, any theological analysis of hyperspace should be vigilant not to seize it for its own objectives and, in the process, alienate other disciplines.Hyperspace is a complex phenomenon that confronts our intellectual capacity and should be respected in that manner.
If one reflects on the implications of a hyperspace-infused universe and the possibility of an interface between different dimensions, the work of Page (2021:13, 14) may be helpful.Revelation 21 presents a vision of the future, a realised eschatology, where God shall be in all.En route to the eschaton though, believers and the whole of creation are part of an inaugurated eschatology, where the new era has begun in Christ, but not everything has been realised yet.Page (2021:15) speculates that the causal connection between the heavenly realm (the already) and the believer/creation (the not yet) might be the prospect of hyperspace.The physical temple of the Old Testament presented the high priest with a route to enter the heavenly sphere.In the New Testament, Paul enlightens the hearts and minds of the brethren by reminding them that the incarnation has transformed every believer into the temple of God (1 Cor.3:6-17).Through communion with him in the Spirit, they are now transferred and changed to identify with, and occupy a fourth dimension or hyperspace.Creation itself has been transformed, due to the possibility of a natural transition between matter and spirit.Hyperspace became accessible.
Is this relationship between matter and Spirit not contrary to a law-abiding universe and, in essence, impossible?If one observes nature from an exclusive physicalist point of view, where natural laws25 (Barrow 2007) are immovable boundaries that predict and prescribe, in a deterministic manner, the outcome of all processes, the answer would regrettably be "Yes".Fortunately, different opinions are possible.Polkinghorne (1987:65), for instance, transforms Monod's (1972:110) proposal, which emphasises coincidence in creation: I would argue that the balance between chance and necessity that we observe in the workings of the world is consonant with that balance between the gift of freedom and the reliability of purpose which should characterize Love's act of creation.
Thus, embedded within the law-abiding structure of the physical world, one discovers contingency as an important created phenomenon that enables the possibility of novelty.The incarnation, as God's novel act of love, transformed nature with the potential interplay of matter and Spirit within the universe.Rust (1987:43) concurs and reminds us that creation has, from the very beginning, not come with a fixed and determined structure."Rather it has come as an open and unfinished order, that it may serve God's purpose."Embedded within the structure of creation, hyperspace became accessible in Christ, and its physical complexity, intelligible through scientific endeavour.
Any debate on the interface of hyperspace naturally leads to questions regarding the role and significance of the sciences in their assessment and monopoly of hyperspace, and the status of confessional theology.Mühling (2011:215, 216) gives a historical analysis of different solutions put forward to rationalise the relationship between space and omnipresence.According to the philosophy of science, space could be differentiated into a finite absolute being and an infinite absolute being.Variations of this definition could be traced back to the works of Newton, Clarck, and Einstein.The relational view of space is a third option underlying the ideas of Leibnitz and Mach.Mühling's concern is that our ideas about God's eternity and infinity hinged historically on our understanding of time and space.These perceptions were usually reached by means of the philosophy of nature and scientific proposals.He pleads for a more balanced approach, where physical science takes note of theological propositions.Jürgen Moltmann, for example, developed a theology of space (Bergmann 2010:26), where he wrestles with the nature of space within creation and the agency of God.This endeavour is important, since theology devotes much effort in its understanding and description of the things that God created in space but is ignorant of space itself (Van Kleeck 2021:169).Van Kleeck (2021:166) follows Moltmann (1991:109;2004) in his proposal that God the Father created space for all of creation by means of a voluntary kenotic act of divine hiddenness.The Father's first act may have been concealing rather than revealing.Van Kleeck's aim is to clarify the locality of creation in space, as nothing could exist outside God's ubiquity.These arguments may well lead to panentheism, but Van Kleeck (2021:179) argues that Moltmann's proposal is in accordance with the Confessions and Reformed Scholastics.Bergmann's (2010:26) interpretation of space also emerges from within the Trinity, "… one should understand [that] origin as a Trinitarian social space, whence the whole of creation emerged and still emerges".It is clear that the boundaries between theism and panentheism on this subject need further exploration.
Where is the interface between hyperspace and the Trinity?From a confessional point of view, the answer may be that it is everywhere, but also very specific.God's omnipresence through the providential agency of the Spirit within the fabric of creation is consistent with the revelations of Psalm 139:7-12; Colossians 1:15-20, and so on.Conversely, the appearance of angels in specific circumstances might simply be (the) moving between hyperspace and our space.This could be accomplished by altering their orientation to our plane.Perhaps, just as I might draw a human figure on my fingertip and press it to the flatlanders' page-world, thus appearing as a two-dimensional human, angels might do the same in three dimensions (Freeman 2018:181, 182).
Hyperspace might be far more than a theoretical construction that explains natural laws; it might also be a spatial dwelling of God.
CONCLUSION
The title of this article posed a significant challenge.Is it possible to relate a physio-spatial construction such as hyperspace to a confessional statement about the nature of the triune God?The concluding answer is "Definitely".In addition, scriptural analysis revealed that the resurrected body of Christ could serve as a vantage point that accentuates the natural movement between different dimensions within space.From a scientific point of view, hyperspace might have theoretical and physical applications, yet its real significance may be accentuating God's agency within creation.The author acknowledges that it is a complex issue that demands extensive research.Yet, this proposal may be beneficial in the progression of the science and religion debate.It presents the sciences as complementary partners that serve one God who created the cosmos through different processes and takes care through his providential guidance. | 10,088.4 | 2023-12-13T00:00:00.000 | [
"Philosophy",
"Physics"
] |
Investigation of the structure and mechanical properties of stainless steel alloyed with silver
The paper contains information of the development of technology for smelting stainless steel for application in the manufacture of medical devices. Smelting was carried out in electric arc vacuum furnaces. The technology has been developed for producing sheets with a thickness of 1 ÷ 1.25 mm for further research of mechanical properties. To study the structure, thin sections of the alloys were obtained by melting and further rolling. The mechanical properties of the samples and their structure have been studied. The results show that the addition of a small amount of silver decreases the mechanical properties of the steel. In the ingots, a dendritic structure is observed, and after warm rolling, the structure has a pronounced fine-grained austenitic structure.
Introduction
Today, stainless steel has taken one of the leading places among the most important materials in the world. Its composition has been repeatedly modified and improved to acquire novel and enhanced properties. One of the most popular and applied grades among stainless steels is 316L austenitic stainless steel. The range of its application is incredibly wide. It has high corrosion resistance, good mechanical strength, and ductility, which can guarantee long-term and high-quality operation of products from this steel. It is used in many industries such as medical, food, petrochemical, mining, automotive, aerospace and other fields. The scope of application of 316L stainless steel is constantly growing in new areas of the economy, such as biotechnology and others [1][2][3][4][5].
Due to its high performance properties, stainless austenitic steel has long been used for the manufacture of various medical devices, such as orthopedic prostheses, dental implants, cardiovascular stents / valves, and other medical devices [6]. Often, stainless steel implants are used to fix bone fractures [7]. However, the biological environment in the human body is very corrosive to metals and can lead to protein adsorption, biofilm formation (attachment of microorganisms / bacteria to the surface of the material), and corrosion. Steel itself is able to become a source of bacterial contamination [8]. Аlloying, coating and many other methods are applied to increase the biocompatibility of stainless steels. Recent studies have shown that the addition of Ag to stainless steels can impart antibacterial properties without the necessity for further surface modification [9].
In this work, 2 ingots of stainless steel with and without the addition of 0.2% Ag were smelted, plates were obtained from these ingots by the method of warm rolling, and the structure and mechanical properties were investigated.
The weighed portions were melted in an electric arc vacuum furnace with an LK200DI nonconsumable tungsten electrode from LEYBOLD-HERAEUS (Germany). Samples were placed in a water-cooled copper crystallizer, after which the working chamber was hermetically closed and evacuated to a pressure of 1 * 10-2 mm. rt. Art. After that, argon was poured into the chamber to a pressure of 0.4 atm.
In the first remelting process, a single ingot was obtained. The shape of the ingot was obtained in the form of a biconvex lens, 30-35 mm in diameter, 10-15 mm in height. The next 2 remelts are aimed at obtaining a uniform chemical composition throughout the ingot. The duration of each melting of one ingot is 1-1.5 minutes. The getter was melted before of the fusible ingot. An ingot of iodide titanium weighing 15-20 g was used as a getter. The mass of the ingots was 45 to 50 grams.
Further, under these conditions, the resulting ingots are melted into single ingots weighing 180-200 g for 2 remelts. The final ingot has a length of 90-100 mm, a width of 20-25 mm, and a height of 10-15 mm.
The primary deformation (rolling) of cast billets 10-15 mm thick was carried out by warm rolling on a DUO-300 twin-roll mill with partial absolute reductions per pass: 1-2 mm to a billet thickness of 4 mm, then 1.0 mm to a billet thickness of 2.0 mm, further 0.5 mm to the final thickness of the workpiece -1-1.25 mm. The blanks were heated before deformation in a muffle furnace for 25 minutes at a temperature of 1100 ° C. Heating was carried out in a KYLS 20.18.40 / 10 furnace from HANS BEIMLER with a maximum temperature of 1350 ° C.
The tensile strength study was carried out on an INSTRON 3382 universal testing machine with a tensile speed of 1 mm / min. Flat specimens with heads were made from plates and cut by EDM cutting along and across the plate (Figure 1). This shape is necessary to minimize the influence of gripping on the results of the study of samples. The structure study was carried out for samples in the form of ingots after smelting and plates after plastic deformation. The pressing occurred on an IPA 40 pneumohydraulic press at a temperature of 175C and holding for 10 minutes, at a pressure of 3 bar. The samples obtained were grinded and polished. Sample preparation was carried out on a Buehler Phoenix 4000 installation by sequential grinding after pressing on a P320 grinding wheel (15 min) and Aka-Alegran-3 with a 3 μm diamond suspension (10 min). The samples were polished using a rayon polishing wheel (Aka-Napal) using a diamond suspension (1 μm) for 5 min.
The surface of the samples was etched with a mixture of acids for high-alloy steels, consisting of 20% nitric acid (HNO3), 10% sulfuric acid (H2SO4), 5% hydrofluoric acid (HF), and 65% water (H2O). The etching time for ingots was 9 minutes, for samples of rolled plates 11 minutes.
Optical microscopy was carried out on a Carl Zeiss JENA Germany NEOPHOT 2 microscope, with which microimages were obtained and processed using the AmScope MU1403 camera and the AmScope 3.1 software.
Results and discussion
For mechanical tests, 5 samples were tested per one experimental point. Determined the values of the relative elongation, conventional yield stress and ultimate strength. The results can be found in Table 1. For convenience, a bar chart was constructed ( Figure 2). Based on the data obtained, it can be concluded that all samples have good plasticity (from 42% to 54%) and strength (from 675 to 690 MPa). The nominal yield stress also varies depending on the alloy composition and the direction of the rolled product in the range from 398 to 506 MPa.
The results show that the addition of a small (0.2% Ag) amount of silver slightly reduces the mechanical properties of the steel. The minimum strength and ductility were found for the sample across the rolled steel with the addition of 0.2% silver and amounted to 42% and 675 MPa, respectively. The maximum characteristics were for the specimens cut along the rolled stock without the addition of silver and amounted to 54% in terms of elongation and 690 MPa in terms of ultimate strength. It should also be noted that the direction of the rolling also affects the properties. The tensile strength and relative elongation are slightly higher for specimens cut along the direction of rolling, however, the yield stress is lower than for specimens cut across the direction of rolling.
Optical microscopy of melted ingots and plates was carried out on an optical microscope Carl Zeiss JENA Germany NEOPHOT 2. The structure can be found in Figure 3.
Ingot
Plate Composition №1 Composition №2 The obtained micrographs show that the structure of the ingots is a dendritic structure, and in the plates the structure has a pronounced fine-grained austenitic structure.
Сonclusion
Technology has been developed for smelting stainless steel for use in the production of medical products in electric arc vacuum furnaces and a technology for producing sheets with a thickness of 1 ÷ 1.25 mm for further research of mechanical properties. To study the structure, thin sections of the alloys were obtained after melting and after rolling. The mechanical properties of the samples and their structure have been studied. The results show that the addition of 0.2% Ag silver does not significantly reduce the mechanical properties of the steel. In the ingots, a dendritic structure is observed, and in the ingots after warm rolling, the structure has a pronounced fine-grained austenitic structure. | 1,895.6 | 2021-06-01T00:00:00.000 | [
"Materials Science",
"Engineering"
] |
Identification and characterization of protective CD8+ T‐epitopes in a malaria vaccine candidate SLTRiP
Abstract Introduction Efforts are required at developing an effective vaccine that can inhibit malaria prevalence and transmission. Identifying the critical immunogenic antigens and understanding their interactions with host proteins forms a major focus of subunit vaccine development. Previously, our laboratory showed that SLTRiP conferred protection to the liver stage of Plasmodium growth in rodents. In the follow‐up of earlier research, we demonstrate that SLTRiP‐mediated protection is majorly concentrated in specific regions of protein. Method To identify particular protective regions of protein, we synthesized multiple nonoverlapping fragments from SLTRiP protein. From this, we designed a panel of 8‐20mer synthetic peptides, which were predicted using T‐epitope‐based prediction algorithm. We utilized the IFN‐γ enzyme‐linked immunosorbent spot assay to identify immunodominant peptides. The latter were used to immunize mice, and these mice were challenged to assess protection. Results The protective polypeptide fragment SLTRiP C3 and SLTRiP C4 were identified, by expressing and testing multiple fragments of PbSLTRiP protein. The immune responses generated by these fragments were compared to identify the immunodominant fragment. The T‐epitopes were predicted from SLTRiP protein using computer‐based algorithms. The in vitro immune responses generated by these peptides were compared with each other to identify the immunodominant T‐epitope. Immunization using these peptides showed significant reduction in parasite numbers during liver stage. Conclusion Our findings show that the protective efficacy shown by SLTRiP is localized in particular protein fragments. The peptides designed from such regions showed protective efficacy equivalent to whole protein. The sequence conservation analysis with human Plasmodium species also showed that these peptides were conserved. In conclusion, these peptides or their equivalent from other Plasmodium species could impart protection against malaria in their respective hosts too. Our studies provide a basis for the inclusion of these peptides in clinical vaccine constructs against malaria.
studies provide a basis for the inclusion of these peptides in clinical vaccine constructs against malaria.
K E Y W O R D S
liver-stage malaria, T-cell epitopes, vaccine
| INTRODUCTION
Malaria is a mosquito-borne infectious disease affecting annually an estimated 212 million people worldwide, the causative agent being an Apicomplexan parasite of genus Plasmodium. 1 The disease manifests in a broad range of clinical symptoms varying from moderate symptoms like fever and diarrhea to life-threatening symptoms, which include severe anemia, respiratory distress, renal impairment, coma, and even death. Despite high mortality, no licensed vaccine that can provide 100% (sterile) protection against Plasmodium infection exists. The complicated genetic structure and high antigen diversity of Plasmodium make malaria vaccine generation a daunting task. The situation has become perilous with the increasing resistance of Plasmodium against common antimalarial drugs. 2,3 In fact, resistance against artemisinin has also been reported from various parts of Asia and Africa. 4,5 In addition, most known therapeutic drugs against Plasmodium restrict or kill parasites during its blood stage. The need for an effective vaccine against malaria that targets both blood as well as liver stage has become indispensable for the control and eradication of malaria. 6 This is necessary as some Plasmodium species persist as dormant hypnozoites in the liver, which are activated anytime from days to years after primary infection, causing relapse of blood-stage parasite.
The vaccines designed against microbes belong to one of the three categories-killed parasite, attenuated parasite, and subunit vaccines. Live radiation-attenuated Plasmodium berghei sporozoites (RAS) were the first vaccines against malaria that gave full sterile protection against the challenge of live sporozoites and is considered the "gold standard" for development of malaria pre-erythrocytic stage subunit vaccines. [7][8][9][10][11] Immunization using chemically and or genetically attenuated malaria parasites have been shown to provide immunity against multiple strains of Plasmodium parasite. [12][13][14] However, the approach faces the challenges of manufacture cost, storage, and distribution of parasite, thus limiting the use of this approach in endemic areas. Conversely, a subunit vaccine includes one or multiple protein antigen that may or may not be coupled to immunogenic and protective epitopes.
A very few subunit vaccines against different infectious diseases have been licensed and are being used. These include tetanus, diphtheria and pertussis (TDP) toxins, hepatitis B surface antigen, and vaccine against human papilloma virus. 15 Plasmodium proteins have been assessed in murine models, for the development of therapeutic vaccines against vector-or host-specific malarial stages. The synthesis of a peptide-based vaccine called SPf66, with apparent efficacy against monkeys generated enormous interest for field trials in Africa to demonstrate protection. 16 The studies with SPf66 also led to the development of field technologies to evaluate different vaccine candidates.
Malaria sporozoites express exoerythrocytic stage-specific virulent proteins important for productive hepatocyte invasion. These include CSP, EXP1, TRAP, SPECT1, SPECT2, CelTOS, UIS4, and PPLP1 and many other proteins. 17 These proteins have been studied for their protective efficacy, some of which, like circumsporozoite protein (CSP) and TRAP, are already in the advanced stages of vaccine development. 18 The major sporozoite coat protein, CSP, is well characterized and widely used as a model antigen. The central repeat (R) region and the T-cell epitopes (T) of Plasmodium falciparum CSP combined with hepatitis B surface antigen given along with the AS01 adjuvant system (RTS,S/AS01), provides partial protective immunity against malaria infection primarily through high levels of antibodies. 19 The protection is limited to a maximum percentage of 40 to 50 and the antigen needs to be improved for its efficacy, by combining it with new antigens and adjuvants. 20 The identification of Plasmodium surface protein-circumsporozoite protein led to an optimistic prediction of a possible subunit vaccine against malaria. However, validation of the abilities of vaccine antigen candidates for boosting immune responses and providing 100% sterile protection in humans is still in process. Multiple tests are done to increase the efficacy of partially effective antigens through assessments of new adjuvants, delivery platforms, and/or identifying new candidate antigens. Epitopes from MSP, LSA-1, and CSP have been tested alone and as part of multiepitope antigen. 21 Therefore, epitope-enhanced immunogens, expressing multiple copies of linear B and T-cell epitopes from candidate antigens could be an important strategy to increase the protective efficacy of these vaccines.
In a previous work, we reported the protective efficacy of a novel antigen SLTRiP. 22 SLTRiP immunization affected the growth of parasites within hepatocytes by delaying the prepatent period by 3 to 4 days. Immunized mice displayed protection subsequent to sporozoite challenge and exhibited 10 000-fold less parasites 18S ribosomal RNA (rRNA) copy numbers in liver thus emphasizing the vaccine potential of SLTRiP. 22 These data support the potential of SLTRiP as a target antigen for malaria vaccine development. In addition, this protection was majorly attributed to cell-mediated immune system. In this paper, we aim to identify immunodominant and subdominant T-cell epitopes, interferon γ (IFN-γ) secretion by T cells against those epitopes. In addition, we attempted to identify epitopes involved in protection. Studies have shown that the T-cell epitopes bind to peptide-binding groove on MHC molecules, which has hydrophobic regions. To fit this groove, T-cell epitopes need hydrophobic amino acids while hydrophilic regions are needed for interaction with T-cell receptor. Bioinformatics approach to classify epitopes using Parker hydrophilicity prediction was employed to identify hydrophobic regions, which are likely to contain high-scoring T-cell epitopes. 23 The study reports a protein antigen and its protective regions that can facilitate the development of a second-generation vaccine against malaria.
| Ethics statement, experimental animals, and parasites
Six-to eight-week-old male/female C57BL/6 mice (H2 b ) were used in all animal experiments. The animal work was conducted in accordance with National Institute of Immunology's (NII) Institutional Animal Ethics Committee (IAEC) rules. The IAEC approval number for the project is NII-312/13. Animals were injected with ketamine/xylazine intraperitoneally for short-term anesthesia. At the end of each experiment, the anesthetized mice were killed humanely by cervical dislocation.
| Parasite cycle
Six-to eight-week-old male/female C57BL/6 mice were used for growing parasites. P. berghei ANKA parasites were cycled between mice and Anopheles stephensi mosquitoes. Mosquitoes (3-5 days old, female) were starved overnight and fed on infected mice. These infected mosquitoes were kept at 19°C, 70% to 80% relative humidity, 12 hours light cycle and fed on cotton pads soaked in 20% sucrose solution for 18 days post infected blood meal. After 18 days, sporozoites were obtained from dissection of salivary glands from infected mosquitoes. For this, the infected mosquitoes were first washed with 50% ethanol, followed by PBS, and dissected in RPMI 1640 media containing 10% fetal bovine serum. To obtain sporozoites, salivary glands were ground gently and centrifuged at 800 rpm for 4 minutes to remove mosquito tissue. The number of sporozoites present in per unit volume (mL) was determined by counting in a hemocytometer.
| Bioinformatics analysis
Parker hydrophilicity prediction was used to distinguish between the hydrophilic and hydrophobic regions of the protein. The region above the threshold value are generally hydrophilic (shown in yellow) while the regions below the threshold are hydrophobic (shown in green). The epitopes of SLTRiP were predicted using Immune Epitope Database (IEDB) analysis resource, http://tools.immuneepitope.org/ mhci. This tool takes an amino acid sequence, or a set of sequences to determine possible MHC class I binding peptides. It establishes the probability of a particular amino acid sequence to form a T-cell epitope by assigning a score or percentile rank. The lower the assigned score to a particular amino acid sequence, the greater is the probability of that region to form T-cell epitopes. The prediction method allows choosing from a number of MHC class I binding prediction methods. Based on the availability of predictors and formerly observed predictive performance, this selection uses the best possible method for a given MHC molecule. Currently for peptide:MHC-I binding prediction, for a given MHC molecule, IEDB Recommended uses the Consensus method consisting of ANN, SMM, NetMHCpan, and CombLib methods. We employed IEDB recommended 2.19 for our epitope prediction. The epitope predictions were limited to peptides of H2 b allele and specific to MHC class I. Variablelength peptides were chosen based on position in protein and percentile rank.
| Primer designing
The amino acid sequence of SLTRiP gene of P. berghei was scanned for hydrophobic regions. Primers were designed from hydrophilic region such that four fragments incorporating the whole gene were generated with no overlapping regions. Primers with BamHI and XhoI sites were used to facilitate cloning of fragments in pGEX6P1 vector and express fragments of open reading frame of SLTRiP fused in-frame to the 3′ end of the glutathione S-transferase (GST) protein.
SLTRiP fragments
The individual polypeptide fragments (C2, C3, C4, C5) were induced by addition of isopropyl 1-thio-D-galactopyranoside (IPTG) to a final concentration of 1 mM when the bacterial culture reached 0.5-0.7 OD 600 , followed by incubation of 12 hours at 37°C for C2; 18 hours at 18°C for C3 and C4; and 18 hours at 25°C for C5. The cells were harvested at 8000 rpm for 10 minutes at 4°C and suspended in buffer A (100 mM Tris, 250 mM NaCl, 10% glycerol, 0.5 mM EDTA, 0.05% Triton X-100, pH 8.0) with 0.02 mg/mL lysozyme and protease inhibitor mixture to make complete lysis buffer. The suspension was sonicated at 4°C (ice-cold) for 10 minutes. The sample was cleared by centrifugation at 12 000 rpm for 20 minutes at 4°C. The supernatant obtained was loaded onto a prepacked 5-mL GST-FF column and washed with 10 column volumes of buffer A. The protein was eluted with buffer A containing 15% v/v of buffer B (50 mM reduced glutathione) using AKTA explorer chromatography system. The purity was observed to be more than 95% in case of C3, C4, and C5. An additional step of gel filtration chromatography was employed for purification of C2. Yields were typically in the range of 3 to 4 mg of purified protein/L of bacterial culture for C2 and C5 but 0.5-1 mg/L for C3 and C4.
| Immunization with purified
SLTRiP polypeptide fragments and SLTRiP peptides C57BL/6 mice, aged 6 to 8 weeks were immunized; priming was done with 50 µg of polypeptide in complete Freund's adjuvant (Sigma, India) per mouse. In the three subsequent boosters, the amount of polypeptide used was 25 µg per mouse mixed with incomplete Freund's adjuvant (Sigma). Boosts were given on days 15, 21 and 28 post-priming. The control group was immunized in an identical manner with GST protein.
| Peptide synthesis
The studies were initially carried out with a panel of long protein fragments spanning the complete sequence of the SLTRiP protein. Later 15 peptides consisting of 9 to 16 amino acids, spanning the protective protein fragments were used to define minimal T-cell epitopes. The peptides were synthesized commercially by Bio Basic (Canada) at more than 80% purity.
| Enzyme-linked immunosorbent assay
Culture supernatants from in vitro stimulated splenocytes were collected after 60 hours of incubation. Secreted cytokines were measured by enzyme-linked immunosorbent assay (ELISA) using an eBiosciences kit, following manufacturer's instructions. The purified anticytokine antibody was added to the wells of enhanced protein binding ELISA plate, sealed, and incubated at 4°C overnight. The next day, the antibody solution was removed and the plate was blocked using blocking buffer for 1 to 2 hours at room temperature (RT) to prevent nonspecific binding. Plate was washed three times with PBST (1× phosphate-buffered saline with Tween detergent). Biotinylated anticytokine detection antibody was added, sealed, and the plate was incubated at RT for 1 hour. It was washed again three times with 1× PBST. Secondary antibody conjugated with HRP was added to the wells, sealed, and incubated again at RT for 30 minutes. The plate was washed five times with PBST and developed using TMB (3,3′,5,5′-tetramethylbenzidine) substrate until color starts to appear. Optical density was measured at 450 nm in a microplate reader (Tecan M200, UK).
| Ex vivo IFN-γ enzyme-linked immunosorbent spot
Ex vivo enzyme-linked immunosorbent spot (ELISpot) assay was done for peptide-stimulated splenocytes following manufacturer's (BD Biosciences) protocol. Capture antibody diluted in coating buffer was added to each well of an ELISpot plate and stored at 4°C. Next day, antibody was discarded; plate washed and blocked with a blocking solution for 1 to 2 hours at RT. Splenocyte suspension was prepared and added at different dilutions (10 5 -10 6 cells/mL) to wells of ELISpot plate. The cells were activated using proper mitogen and antigen. ELI-Spot plate was incubated at 37°C, in a 5% CO 2 and humidified incubator for 24 hours. The cell suspension was aspirated and plate was washed three times with wash buffer. Detection antibody was prepared and added to ELISpot plate. The plate was incubated at RT for 2 hours, followed by washing three times again with wash buffer. Secondary Ab-HRP enzyme conjugate was added to the plate and incubated for 1 hour at RT. The plate was washed five times with wash buffer and finally, substrate solution was added to each well of ELISpot plate. Spot development was monitored for 5 to 60 minutes and reaction was stopped, by washing wells with deionized water to prevent overdevelopment of spots, else it may give high background. The plate was air-dried at RT for 2 hours or overnight until it was completely dry. The plate was stored in a sealed plastic bag in the dark, until analysis. Spots were enumerated using an ELISpot plate reader (AID iSpot ELHR04, Germany).
| Generation of polypeptide fragments
We aimed to identify immunodominant and subdominant T-cell epitopes involved in protection against sporozoite challenge. For this, gene fragment clones were designed as an approach to predict the minimal protective region in SLTRiP ( Figure 1A). Parker hydrophilicity prediction method was employed to distinguish hydrophobic regions from hydrophilic regions. We observed the regions designated as SLTRiP C1, SLTRiP C3, SLTRiP C4, and SLTRiP C5 had hydrophobic regions but SLTRiP C2 was majorly hydrophilic in nature ( Figure 1B). The protein SLTRiP, mentioned in the work or used for immunization experiments starts from N-terminus, amino acids 85 to 413. The exon 1, referred to as fragment SLTRiP C1 was added in later annotations to the protein and the peptides from this fragment have been used as negative control in peptide stimulation studies. The protein hydrophilic regions were used to design primers that corresponded to individual polypeptide fragment. This was done to ensure that none of the possible T-cell epitopes are fragmented. The individual gene fragments were amplified using, Escherichia coli codon-optimized SLTRiP gene as template and PCR primers ( Table 1). The gene fragments were cloned in the bacterial expression vector pGEX6p1, which contains a GST tag at its N-terminus ( Figure 1C). The clones were confirmed and the individual gene fragments were expressed as GST-tagged fusion protein or protein fragments. The fusion protein fragments were purified using the GSTbinding column. The purity of all fragments was observed to be above 95% from sodium dodecyl sulfate-polyacrylamide gel electrophoresis image ( Figure 1D
| Identification of immunodominant SLTRiP fragment
Studies have shown the role of cytokines in modulating immune responses. To demonstrate the most immunodominant fragment in terms of IFN-γ secretion, an in vitro IFN-γ ELISA was conducted. For this, the mice were immunized with SLTRiP protein (Figure 2A). The splenocytes from SLTRiP-immunized mice were cultured and stimulated with individual fragments/polypeptides in vitro. The supernatant of culture was collected and quantitated for IFN-γ concentration. The cultures stimulated with SLTRiP C2 and SLTRiP C5 showed IFN-γ secretion on stimulation, which was more than the control. However, significantly increased secretion of IFN-γ was observed in cultures stimulated with SLTRiP C3 and SLTRiP C4, which was comparable with that of SLTRiP protein stimulation ( Figure 2B), establishing these two as immunodominant polypeptides/fragments.
| SLTRiP fragments immunization and sporozoite challenge assay
In the next set of experiments, we compared the protective efficacy of SLTRiP fragments by mice immunization experiment. Groups of 12, females 6-week-old C57BL6 mice were immunized with SLTRiP fragments subcutaneously as shown in Figure 2A. Two weeks after the last boost, the mice were challenged with 5000 P. berghei ANKA sporozoites given intravenously. One set of mice was analyzed for parasite load in mice liver while the other set of mice (six mice) was monitored for emergence and growth of blood-stage parasite. The parasitemia levels were observed by microscopic examination of Giemsastained thin blood smears prepared from day 3 post challenge and followed until the mice died. The parasitemia count for SLTRiP C3 and C4 showed a delay in | 55 prepatent period of 4 and 3 days while SLTRiP C2 and C5 showed a delay of 0 and 2 days, respectively. The prepatent delay shown by SLTRiP C3 and SLTRiP C4 was comparable with full-length SLTRiP protein immunized mice ( Figure 3A). Furthermore, a 3 log reduction in parasite 18S rRNA copy numbers was observed in mice immunized with SLTRiP C3 and SLTRiP C4; which is close to the reduction observed with SLTRiP protein.
SLTRiP C5 showed nearly 2 log reduction, while SLTRiP C2 showed less than 0.5 log reduction in mice liver burden ( Figure 3B). The survival assay showed an increased survival of 4 days in mice immunized with SLTRiP C3 and SLTRiP C4 while an increased survival of only 1 day was observed in SLTRiP C2-and C5-immunized mice compared with control ( Figure 3C). Overall, the sporozoite challenge assay showed a decrease in parasite load, delay in prepatent period, and increased survival in mice immunized with SLTRiP C3 and SLTRiP C4, which was comparable with SLTRiP protein. These results indicate that the protection contributed by SLTRiP is located mostly in these fragments (C3 and C4) of SLTRiP.
| Immunogenicity and protective efficacy of putative T-cell epitopes
Bioinformatics approaches to identify T-cell epitopes, have been used in many infectious diseases for their inclusion in vaccines with success. Peptides joined as a string of beads were synthesized as a recombinant protein to immunize against P. falciparum epitopes. In this study, bioinformatics approach was used to screen for potential T-cell epitopes in SLTRiP and to identify T-epitopes with the potential to provide protection against P. berghei sporozoite challenge in mice. In this regard, 8-20mer consecutive peptides that encompass the entire protective gene fragments were synthesized using T cell-based algorithm. Table 2 shows the list of all peptides chosen from SLTRiP for use in in vitro stimulation and protection assay. IEDB tool was used for T-epitope prediction. Epitopes from SLTRiP fragments namely C1, C3, C4, and C5 were predicted using IEDB analysis and chemically synthesized. Their comparative location in protein is shown in Figure 4A. Splenocytes were collected from SLTRiP-immunized mice and incubated with peptide at concentration 10 ng/well in a 96-well plate for 72 hours. Epitope-induced IFN-γ secretion was monitored by IFN-γ ELISA. An increase in the release of IFN-γ was observed in cells stimulated with peptides 302, 303, 304, and 401 ( Figure 4B). A suboptimal increase was also observed in cells stimulated with peptides 305, 402, and 502. Similarly, splenocytes collected from SLTRiPimmunized mice were incubated with peptides at concentration 10 ng/well for 24 hours in an ELISpot plate. Epitopeinduced IFN-γ secretion was monitored by the formation of spots on membrane. An increase in number of IFN-γ spots was observed in wells stimulated with peptides 302, 303, 304, 401, and 502 ( Figure 4C), while comparatively moderate spots were also observed in wells stimulated with peptides 305 and 402. These results determined peptides 302, 303, 304, 401, and 502 as immunodominant peptides, and 305 and 402 as subdominant peptides. As fragment analysis had shown that most of SLTRiP-related protection is concentrated in fragment SLTRiP C3 and SLTRiP C4; peptides 302, 303, 304, 305, and 401 were further used for immunization to analyze them for their protective efficacy.
Groups of C57BL/6 mice (5-6 mice/group) aged 6 to 8 weeks were immunized with above mentioned immunodominant peptides. The immunization schedule was same as shown in Figure 2A and 50 or 25 μg of peptides were used for priming and boosts, respectively. A week after the final boost, mice were challenged with 10 000 P. berghei ANKA sporozoites given via intravenous route. The burden of parasite in liver was quantified by measuring parasite 18S rRNA using real-time PCR analysis. PEP 302 showed 2 log reduction and PEP 303 showed 2.5 log reduction in parasite Note: Peptide length and percentile rank is given in the rightmost columns. IEDB recommended 2.19 was used for our epitope prediction. The percentile rank given corresponds to the sequences in red. The lower the assigned score to a particular amino acid sequence, the greater is the probability of that region to form T-cell epitopes.
18S rRNA copy numbers while PEP 304 and PEP 401 showed only 1.5 log reduction. PEP 305 showed no reduction in mice liver burden as compared with control immunized mice. (Figure 4D). The significant reduction in parasite burden in PEP 302-and PEP 303-immunized mice confirmed these peptides as dominant protective epitopes ( Figure 5).
| T-epitope conservation among the human parasites
To identify the conservation of T-epitopes, the sequences of these immunodominant peptides were aligned with homologous proteins in human Plasmodium species using ClustalW tool. The aligned sequences showing identical amino acids with no gaps were considered conserved across species. PEP 302 showed nearly 90% conservation in amino acids across species followed by PEP 303 which also showed considerable amino acid conservation. Furthermore, the
F I G U R E 4
SLTRiP T-cell epitope peptides characterization: splenocytes were collected from the immunized mice, cultured in presence of IL-2, and stimulated with peptides at concentration 10 ng/well for 3 days. A, The comparative location of peptides in protein is shown. B, Epitope-induced interferon-γ (IFN-γ) secretion was monitored by IFN-γ enzyme-linked immunosorbent assay. C, Epitope-induced IFN-γ spot formation was monitored by enzyme-linked immunosorbent spot assay by calculating the number of spots formed in each stimulated well. The decrease in pre-erythrocytic parasite burden (parasite 18S rRNA copy number) in the liver of mice immunized with peptides and challenged with wild-type sporozoites.was quantified. D, The 18S rRNA copy numbers were not normalized with mice glyceraldehyde-3phosphate dehydrogenase (GAPDH) control as the values for all GAPDH copy numbers were equal in all the samples. The data represented are means and standard error of the means based on six mice per group. *P < .05; **P < .01; by one-way analysis of variance Kruskal-Wallis test. Control, mice immunized with adjuvant; PEP, peptide there exists a general notion that multiple antigens, either through whole-parasite inclusion of many proteins or polypeptides are required for an effective malaria vaccine. 24 However, protein fragments are unlikely to be effective unless they include critical epitopes recognized by protective immune cells. An approach for development of effective vaccine against malaria includes identification of protein regions that can generate protective immune responses. 25 Epitope mapping has multiple advantages for vaccine development as they represent antigenic regions of the protein and consequently nonprotective parts can be removed. 26,27 Epitopes from different alleles can also be collected for the formation of a peptide library that will be recognized by majority of immune populations. In addition, using mouse model, the peptide-specific protective efficacy can be characterized by mice immunization.
F I G U R E 5 T-epitope conservation among human parasites. Multiple sequence alignment of SLTRiP immunodominant peptides (302, 303, 304, 305, and 401) with SLTRiP orthologs in human Plasmodium species: P. falciparum (PF3D7_0830500), P. vivax (PVP01_0504200), P. ovale (PocGH01_00158900), P. malariae (PmUG01_05015900), P. knowlesi (PKNH_1324300). The conservation of amino acid tryptophan (W), along with other hydrophobic amino acids (V, I, L, F) and charged amino acids (K, R, E, D) is majorly observed in protective peptides 302 and 303. "*" fully conserved residue; ":" conservation between strongly similar residues; "." conservation between weakly similar residues SLTRiP protein was of particular interest as it was able to demonstrate protective efficacy in mice model. 22 The in silico analysis showed that protein has B-and T-cell epitopes. An epitope is frequently used for an immunodominant peptide. Our previous results demonstrate that the high titer antibodies generated were nonprotective. Therefore, we proceeded to identify T-cell epitopes in SLTRiP. For this, we synthesized multiple nonoverlapping fragments from SLTRiP protein to identify particular protein fragments that are protective. Identification of the T-cell epitopes of the 413 amino acid long SLTRiP protein would have been a laborious process by conventional methods, as it involves synthesizing short overlapping oligopeptides of the full-length protein.
We, therefore, synthesized multiple fragments of our gene to identify protective regions. The in silico studies have shown that the epitopes for T-cells are generally present in hydrophobic regions of protein while hydrophilic regions score best for B-cell epitopes. 27 Using this information, bioinformatics approach was employed to identify hydrophobic regions of the protein using Parker hydrophilicity prediction to identify hydrophobic regions in protein. By generating recombinant subfragments of the protein SLTRiP, we observed that the immunodominant T-cell epitopes of the protein are located between amino acids 155 and 355, which forms the protective SLTRiP C3 and SLTRiP C4 fragments of protein SLTRiP. These two fragments demonstrated major protection in C57BL/6 mice, which are conventionally difficult to protect than other mice strains. 28 Mice immunized with SLTRiP C5 showed partial protection while SLTRiP C2 showed no protection. SLTRiP C2 contains mostly hydrophilic regions of protein as seen by Parker hydrophilicity prediction. The nonprotective results of C2 correspond with our earlier hypothesis stating that most of protection relies on T-cell epitopes.
To study the protective epitopes, first of all T-epitopes from the protective fragment were predicted using computer algorithm-based predictions. These programs predict the potential of a peptide to bind to a particular MHC class I molecule using MHC peptide binding. Number of peptides required to be synthesized could be significantly reduced by employing such methods. The program prediction is based on the affinity of the peptides and T-cell receptor but it cannot predict the processing, proteolysis, expression, or availability of the peptide on the cell surface. Some of the peptides predicted do not induce a T-cell proliferation response and the peptides need to be checked in vitro for their ability to induce immune responses. Previous research has shown that protective immune responses after immunization are dependent on T cells secreting IFN-γ. 29 Therefore, we identified and validated T-cell epitopes of the SLTRiP protein immunodominant for IFN-γ secretion; however, these may be only a subset of the total T-cell repertoire that exists in vivo.
Recently, the whole-parasite vaccination approach has also been revived, despite challenges in sporozoite production. A number of blood-stage and transmission-blocking candidates are also being tested with different adjuvant formulation and delivery routes for malaria vaccine development; however, many groups believe that evaluation and identification of subunit vaccine candidates, acting synergistically to induce protective responses, can add to the efforts targeting multiple stages of parasite's life cycle. The subunit vaccines undergoing clinical assessment currently are PfCSP and PfTRAP. Studies in mice have shown that immunization using viral vectors expressing ME-TRAP (multiepitope TRAP) induces protective immune responses in the liver. 30 Although SLTRiP peptides do not provide sterile protection preclinically, we demonstrate the significantly high level of protective efficacy of these peptides by immunization and challenge experiments. In addition, sequence conservation analysis with human Plasmodium species revealed that these peptides were conserved, in fact, some amino acid residues, particularly positionally constrained tryptophan showed 100% identity in most of these strains. Therefore, these peptides or its equivalent from other Plasmodium species could impart protection against malaria in other hosts too. While antigens like CSP provide greater levels of protection clinically, we demonstrate the value of including these peptides in a multicomponent subunit second-generation vaccines for the development of a subunit vaccine with improved protective efficacy. | 6,869.8 | 2020-01-22T00:00:00.000 | [
"Biology"
] |
Gravitational lensing by wave dark matter halos
Wave Dark Matter (WaveDM) has recently gained attention as a viable candidate to account for the dark matter content of the Universe. In this paper we explore the extent to which dark matter halos in this model, and under what conditions, are able to reproduce strong lensing systems. First, we analytically explore the lensing properties of the model -- finding that a pure WaveDM density profile, a soliton profile, produces a weaker lensing effect than other similar cored profiles. Then we analyze models with a soliton embedded in an NFW profile, as has been found in numerical simulations of structure formation. We use a benchmark model with a boson mass of $m_a=10^{-22}{\rm eV}$, for which we see that there is a bi-modality in the contribution of the external NFW part of the profile, and actually some of the free parameters associated with it are not well constrained. We find that for configurations with boson masses $10^{-23}$ -- $10^{-22}{\rm eV}$, a range of masses preferred by dwarf galaxy kinematics, the soliton profile alone can fit the data but its size is incompatible with the luminous extent of the lens galaxies. Likewise, boson masses of the order of $10^{-21}{\rm eV}$, which would be consistent with Lyman-$\alpha$ constraints and consist of more compact soliton configurations, necessarily require the NFW part in order to reproduce the observed Einstein radii. We then conclude that lens systems impose a conservative lower bound $m_a>10^{-24}$ and that the NFW envelope around the soliton must be present to satisfy the observational requirements.
I. INTRODUCTION
The ΛCDM model is the most successful theoretical framework in modern cosmology to explain the process of structure formation in the Universe on large scales. This model requires the existence of a cold dark matter (CDM) component that comprises 26% of the total energy budget, which is best described by a non-relativistic (cold) and non-interacting fluid [1].
One of the main predictions from only CDM simulations of structure formation is the appearance of universal cuspy density profiles for the DM halos, with the Navarro, Frenk and White (NFW) profile the one most used to describe CDM [2]. Despite the successes of CDM at large scales, there are some discrepancies with observations on galactic scales, such as: the "missing satellite problem", the "cusp core problem", and the "too-big-tofail problem" [3][4][5][6][7][8][9], see also Ref. [10] for a recent review. Solutions to these problems may come from taking into account the effects of baryons in the formation of galaxies, but it is doubtful that this is the final answer. Another possibility to solve the above mentioned issues is to change the paradigm of the nature of dark matter itself, as has been proposed and explored widely for different candidates such as Self-Interacting Dark Matter [11], Warm Dark Matter [12,13], Axion/Scalar or Wave Dark Matter [14][15][16][17][18][19], and other specifications of the nature of In this paper, our approach is to describe the dark matter as an axion/scalar field that we will refer to here as a Wave Dark Matter model (WaveDM, also referred sometimes to as scalar field DM, ultralight axion-like DM, fuzzy DM, etc.). This type of model has been worked out by several other authors [14][15][16][17][18], and has been found to be able to reproduce the success of the ΛCDM model on cosmological scales, but it predicts a natural cut-off on the mass power spectrum of linear perturbations that helps to alleviate most of the low-scale issues of CDM [15,17,21,22]. Interestingly enough, all cosmological effects are directly related to a single parameter, which is the boson mass of the scalar field particle m a (although extra observational effects may arise from quartic self-interactions [23][24][25][26]). Based on considering the cut-off of the mass power spectrum, the halo mass function, the reionization time or the Lyman-α forest, the most up-to-date constraints suggest that the boson mass must satisfy m a > 1 × 10 −21 eV [27,28].
However, the non-linear process of structure formation under the SFDM hypothesis does not depend on a single parameter only, but rather one requires to take into account at least one second parameter. This fact is indeed considered in many recent studies that try to put constraints on the SFDM parameters with data coming from, for instance, satellite galaxies in the Milky Way [19,[29][30][31][32]. The aforementioned studies consider that galaxies are described by a solitonic core with a negligible self-interaction, known as ψDM, or WaveDM. The soliton solution is just the ground state of the so-called Schrodinger-Poisson system of equations [33,34], and its wave-like properties provide stability against gravitational collapse -opening the possibility of naturallysupported, cored halos. The full prescription of the WaveDM profile requires specification of the boson mass m a together with one of its structure parameters, which can be taken to be either the central density or the scale radius, while the other is determined by the relation, The boson mass m a is expected to be a fundamental parameter with a single value for all galaxies, while the other two parameters may take values that differ from galaxy to galaxy. All of the above strongly indicates that we require to think more carefully if we are to obtain meaningful constraints on the boson mass. More specifically, if we consider the boson mass as an universal parameter, on the same footing as any other cosmological parameter, we should certainly be able to use statistical analysis of galaxy data to constrain which values are permitted, as has been proposed to do in [35,36] and more recently done in [30,31]. However, in general, we may be unable to assert whether there is one single value of m a that is suitable to satisfy all the possible constraints.
It has been shown that the NFW profile correctly describes the observed lensing signal in a large sample of systems, in particular in the SLACS survey [37]. However, since the wave dark matter is considered a feasible candidate for DM, in this work we study the behaviour of, and constraints upon, a WaveDM type of profile acting as a gravitational lens, and we obtain the conditions under which the profile will be able to produce strong lensing.
A brief description of the paper is as follows. The basic lensing equations for any given density profile are described in Sec. II, where we also introduce the explicit lensing expressions for the particular case of the WaveDM profile. In Sec. III we describe our statistical analysis and present the results arising from the comparison of the WaveDM model predictions with selected data from the SLACS catalog. Finally, the general conclusions are presented in Sec. IV. Some analytical solutions of the lens equations used in the text are presented in the appendix.
A. General lensing equations
One of the main predictions from Einstein's General Relativity (GR) is the bending of light as it passes close to a massive body. The deflection angle produced by this effect depends on the mass of the deflector, acting like a lens. This deflector may be approximated by a point-like mass, such as a star, but for more massive objects like galaxies it is better to represent them as extended masses which are described by their density profiles.
The simplest type of lens is a system with a point mass M located close to the line of sight to a luminous source S. Due to the gravitational field of the point mass, a light ray is deflected in its path to the observer; this is described by the Lens Equation in the thin lens approximation. The same approximation also holds for a mass distribution, in which case the lens equation is [38], that relates the (unobservable) angle between, the line of sight and the path from the observer to actual position of the source, β, and to the apparent position of the of the source (the image), θ, to the mass distribution that is causing the lensing m(ξ). Here we have assumed that m(ξ) is the projected mass enclosed in a circle of radius ξ ≡ D OL θ; more explicitly, we can write The projected surface mass density Σ(ξ) can be calculated directly from the (spherically symmetric) density profile ρ(r) of the lensing object as: where z ≡ r 2 − ξ 2 is a coordinate in the direction orthogonal to the line of sight, so that 0 ≤ ξ ≤ r. If the lens system has a finite radius r max , then z max = r 2 max − ξ; otherwise, we can put z max → ∞ in the integral (3b).
Let us consider the case in which the density profile ρ has a characteristic density ρ s , and a characteristic radius r s , such that ρ(r) = ρ s f (r/r s ) where f is the function that accounts for the shape of the profile. We can then write Eq. (2) in the dimensionless form where the different distances are normalized in terms of r s : β * = D OL β/r s , θ * = D OL θ/r s , and then ξ * = ξ/r s = θ * . The latter equation means that the normalized variables ξ * and θ * can be used interchangeably, and then hereafter we will use θ * as our distance variable. Likewise, the total mass, as given in Eq. (3a), is normalized as where the normalized projected surface mass density, from Eq. (3b), is with z = r 2 * − θ 2 * and r * = r/r s . The new parameter λ in Eq. (4) is then given by Equation (6) contains information about the lensing properties of any given model, together with that of the different distances involved in the lens system. 1 One particular case of interest is that of perfect alignment between the luminous source and the lens system for which β * (θ * E ) = 0. This in turn defines an Einstein ring with radius R E = D OL θ E with an associated angular radius θ E . In terms of our normalized variables, we see that that the observed Einstein radius is just R E /r s = θ * E . In other words, the normalized angular Einstein radius θ * E directly is the ratio of the Einstein radius to the scale radius of the density profile. Moreover, the angular radius θ * E must also be a solution of the equation [see Eq. (4)] Interestingly enough, Eq. (7) shows that the lensing properties of a system with a density profile of the form ρ(r) = ρ s f (r/r s ) are independent of the density and distance scales, and are mostly sensitive to the particular shape of the density profile. The physical parameters of the system are then concentrated in the dimensionless parameter λ in Eq. (6), and the latter can be calculated from Eq. (7) without any prior knowledge of the given physical scales in the system, namely ρ s and r s , under the only assumption of perfect alignment (see Fig. 1 below for an example).
There is a critical value λ cr that is the smallest value of λ for which an Einstein ring appears, which must correspond to the limit θ * E → 0 in Eq. (7). As we shall show now, such a critical value can be calculated analytically in the general case. To avoid the divergence at θ * E = 0 (where m * (0) = 0), we make use of the L'Hôpital rule in Eq. (7), and from Eq. (3a) we finally obtain where Σ(0) is the central value of the projected surface mass density given by Eq. (3b). Eq. (8) is quite a simple formula for the calculation of λ cr for any given density profile ρ(r). 2 As said before, Eq. (8) suggests that the critical value λ cr just depends on the particular shape of the given Table I. density profile and no information is necessary about its other physical parameters. The values of λ crit , calculated from Eq. (8) for density profiles that are well-known in the literature, are shown in Table I. For these profiles we also show in Fig. 1 the Einstein angle θ * E as calculated from Eq. (7). As expected, the Einstein angle is the smallest for the WaveDM profile (10) alone, which also means that it is the one with the weakest lensing signal.
We should mention here an additional use of the lens equation (7) to constrain the free parameters of a given density profile. It relates to the fact that any DM halo characterized by a particular density profile needs to satisfy the constraint λ ≥ λ cr if it is to produce a lensing signal. Using Eqs. (6) and (8), the latter statement can be re-written as Equation (9) establishes a minimum value for the (structural) surface density ρ s r s of any given DM profile in terms of the measured quantities of a lens system. Although the constraint Eq. (9) is satisfied automatically by the NFW profile, for which λ crit = 0, this is not the case for the other profiles listed in Table I.
B. Combined density profile of WaveDM
For the density profile of WaveDM halos we will consider the model described in Refs. [19,32], which arises from the study of extensive N-body simulations. The profile consists basically of two parts: one part describing a core sustained by the quantum pressure of the boson particles, also known as the soliton profile, and another part that resembles a NFW-like profile in the outer parts of the halo. As argued in Ref. [41], the transition at some radius to a NFW profile must be expected from the change of behavior to CDM on scales larger than the natural length of coherence, which should be proportional to the associated Compton length of the boson particles.
The soliton profile is where r sol and ρ sol are its characteristic radius and central density contrast, respectively. This profile was first studied in detail in Ref. [32], although here we are following the nomenclature adopted in Ref. [41], where it is also shown that the profile fits well the ground-state solution of the so-called Schrödinger-Poisson (SP) system of equations [33,34]. In this respect, the soliton profile is strongly related to the wave properties (via the Schrödinger equation) of the boson particles. One important property of the profile given in Eq. (10) is that it must also obey the intrinsic scaling symmetry of the SP system [34]. If 0 <λ ≪ 1 is a constant parameter, it can be shown that the central density and radius in the soliton profile are given by This equation suggests that the intrinsic, physical, quantities of the soliton profile in Eq. (10) are related as defined in Eq. (1), which we rewrite here just for clarity: This relation, Eq. (1) will be important later when we discuss the constraints on the boson mass m a .
For the NFW profile at the outskirts of the galaxy halo we adopt the following parametrisation Notice that in writing Eq. (12) we are assuming the following implicit definitions for the scale radius and density, respectively, of the NFW profile: r NFW = r s /α NFW and ρ NFW = ρ sol ρ NFW * , where both α NFW and ρ NFW * are dimensionless numbers. Unfortunately, there is not precise information in Ref. [19] about the transition in a galaxy halo from the soliton profile of Eq. (10) to the NFW profile of Eq. (12) in the general case. Hence, for the present work we adopt the convention for a combined profile as suggested in Ref. [41] where Θ(r ǫ − r) is the Heavisides step function. Here, r ǫ is the matching radius where the transition between the individual profiles occurs, and which satisfies the condition ρ(r ǫ ) = ǫρ s . Notice that 0 < ǫ < 1 if the transition between the profiles is to occur at the outskirts of the galaxy halo.
In general terms, and under our parameterization, there are six free parameters in the combined profile (13): (ρ s , r s , ρ NFW * , ǫ, r ǫ , α NFW ). We will now derive two new constraints that arise from the continuity of the combined density profile at the matching radius which will help us to reduce the number of free parameters.
For a continuous density function, we must impose the condition When Eq. (14) is applied to the soliton profile of Eq. (10), we obtain which basically establishes the interchangeability of the (dimensionless) matching radius r ǫ * and ǫ. In the case of the NFW profile (12), the continuity condition (14) establishes that which, taking into account Eq. (15a), can be written as Equation (15c) indicates the (normalized) density ρ NFW * that is required for a correct matching between the soliton and NFW profiles, for given values of α NFW and r ǫ * . However, one can see that the continuity constraint (15c) actually shows a hidden degeneracy: once the values of α NFW and ρ NFW * are fixed, there can be up to two solutions for the matching radius r ǫ * . This is a direct consequence of the fact that the crossing of the density profiles (10) and (12) can occur at most at two different points, as illustrated in the left-hand panel of Fig. 2, which shows normalized density profiles for α N F W = 0.1 and different values of the normalized density ρ NFW * . Fig. 2 also shows that there exists a maximum value of ρ NFW * beyond which the profiles do not cross each other. To avoid the hidden degeneracy, and to select a combined profile with an interior soliton shape, we will choose those cases for which r ǫ * ≥ r ǫ * ,max , where r ǫ * ,max is the matching radius corresponding to the maximum value of ρ NFW * . A straightforward calculation from Eq. (15c) shows that r ǫ * ,max is a root of the cubic equation Although there is a general solution to this equation, it can be shown that the limits for small and large values of lim αNFW→∞ r ǫ * ,max = ( 3/13) .
This means that in absolute terms the maximum value of ρ NFW * must be located in the range 0.25 < r ǫ * ,max < 0.48, which is agreement with the values observed in the right-hand panel of Fig. 2.
In the end, it is possible to reduce the number of free parameters that describe the combined profile (13) to only four: ρ sol , r sol , r ǫ and α NFW . By means of these parameters and the constraints discussed above, the other parameters are fully specified.
One last comment is appropriate. Notice that our chosen normalization is such that the physical parameters in the NFW profile (12) are given in terms of those in the soliton profile (10). This means, for instance, that ρ NFW * > 1 (ρ NFW * < 1) is equivalent to ρ NFW > ρ s (ρ NFW < ρ s ), whatever the physical value of ρ s is. Likewise, we find that α NFW < 1 ( α NFW > 1) corresponds to r NFW > r s (r NFW < r s ), even if the physical value of r s is not known beforehand. The same will apply for the matching radius, since r ǫ * > 1 (r ǫ * < 1) means that matching occurs beyond the soliton radius and then r ǫ > r s (before the soliton radius, and then r ǫ < r s ).
C. Gravitational Lensing
To obtain the lensing properties of the combined profile given by Eq. (13), we follow the recipe described in Sec. II A. We first need to compute the projected surface mass density (3b). Because of the presence of the step functions in Eq. (13), the integral in Eq. (3b) naturally separates as .
It should be understood that the integrals in Eq. (18) are done along the line of sight. Notice that in Eq. (18) we are following our convention in Sec. II for normalized quantities, namely Σ * = Σ/(ρ s r s ), θ * = ξ/r s and z = r 2 * − θ 2 * . The analytical expression for the integrals in Eq. (18) can be found in appendix A.
Interestingly enough, Eq. (18) shows that the projected surface mass density only depends upon the characteristic radii of the combined density profile (13). Actually, it is the (normalized) matching radius r ǫ * which determines the general behaviour of Σ * . For instance, it can be shown that lim rǫ * →∞ Σ * (θ * , r ǫ * , α NFW ) = 0.658 1 + θ 2 * −15/2 , (19) a result that is obtained from the first branch in Eq. (18). Notice that Eq. (19) is exactly the result for the soliton profile (10) alone. Also, we cannot recover the result of the NFW profile if r ǫ * → 0, as the second branch in Eq. (19) indicates that Σ * → 0 in such a case. In addition, it must be remembered that the operation r ǫ * → 0 is not permitted by the constraint r ǫ * ≥ r ǫ * ,max , see Eq. (16).
Going back to the complete profile (13), we start with the calculation of the critical value λ crit from the analytical formula (8). The (total) projected surface mass density for the special value θ * = 0 is obtained from the first branch of the solution (18) as which indicates, together with Eq. (7), that the critical value λ cr of the combined profile (13) is a function of r ǫ * and α NFW , and its behavior for different combinations of these parameters is shown in the left panel of Fig. 3. Not surprisingly, the addition of the NFW outer part helps the soliton profile to achieve small values of λ crit , which in turn eases the accomplishment of the inequality (9).
In particular, Fig. 3 shows that λ crit → 0 as α NFW → 0, which means that the combined profile (13) will be able to produce a lensing signal for any non-trivial combination of its parameters ρ s and r s . In the case of the combined profile (13) the total mass M inside a sphere of any given radius r > r ǫ is simply given by the integral In the general case the total mass diverges as r → ∞, whereas for the soliton profile only (which requires r ǫ * → ∞) we simply obtain that its total mass M s is [31,34,41] In general, we expect from Eq. (21) the total mass in the combined profile to be larger than the soliton alone, that is M (r) ≥ M s . However, the value of the total mass M will depend on the upper limit of integration r * , and the largest values for any given r * will be obtained for the case where α NFW → 0, similar to the case of the critical value λ crit . The aforementioned general behaviour of the total mass M as a function of the free parameters r ǫ * and 8) and (20). Notice that the critical value corresponding to the soliton case, λcr ≃ 0.48, is obtained asymptotically in the limit rǫ * → ∞. Notice that in plotting the curves we have taken into account the constraint rǫ * ≥ rǫ * ,max, see Eq. (16). Moreover, it can be seen that the lowest value of λcr, for any given value of αNFW, is indeed attained at rǫ * ,max. (Right) The same as before but now for the total mass M in Eq. (21), with its value normalized in terms of the soliton mass Ms given in Eq. (22). Notice that the latter is the asymptotic value at large rǫ * , whereas for small values of the latter the total mass M can be three orders of magnitude larger than the soliton mass Ms. Here the upper limit of integration in Eq. α NFW is shown in the right-hand panel of Fig. 3. For the numerical examples we considered the upper limit of integration r * = 20, for which we then see that that the difference between M and M s can be as large as three orders of magnitude in the case α NFW = 0.
III. DATA ANALYSIS
In this section we will use our theoretical results to infer information about the WaveDM profile from observations of specific lens systems. For this we will use data from the SLAC survey [42]. To do so, we recall from Sec. II B that there are four free parameters that are needed to describe the lensing properties of the combined density profile, Eq. (13). However, the lens equation, (4), discussed in Sec. II C, is not explicitly dependent of two of them, namely ρ s and r s , but only to the free parameters of the NFW outer profile r ǫ * and α N F W . Therefore we could use the right-hand side of the lens equation (4) to put constraints on the surface density through the combination of parameters ρ s r s -see the discussion in Sec. II A.
However, the special properties of the WaveDM profile, as represented by Eq. (1), suggest that the lens equation could be written in a more convenient form. Using that the (normalized) angular Einstein radius is θ * E = R E /r s , Eq. (6) can be re-cast in the form where we have set m a22 ≡ m a /10 −22 eV. Equation (23) then defines a different observable, which results solely from the combination of the distances involved in the measurement of the lens system, so that we can put constraints directly on the boson mass m a rather than on the energy density ρ s , but in any case in combination with the rest of parameters, namely θ * E , α NFW , and r ǫ * . In general, we expect that, given the data from a single galaxy, there will always be a region in the parameter space that will satisfy Eq. (23). Thus, for a given sample of galaxies, we could in principle determine the range of possible values of m a that is consistent with the observed data. However, we must recall that the boson mass m a is a fundamental physical parameter of the model which in principle should have a unique value. This means that the boson mass should be treated differently from other parameters in the model and should not be given the freedom to vary from galaxy to galaxy.
Our proposal, therefore, is to study the lensing properties of the WaveDM profile by fixing the value of the boson mass and finding, via statistical analysis, the bestfit values of the remaining free parameters θ * E , α NFW and r ǫ * . As we are interested in the properties of the WaveDM profile alone, we will select a sample of galaxies from the SLAC catalog for which the DM component is the dominant contribution -that is, with a fraction of luminous matter of 50% or less. The selected galaxies are shown in Table II, together with the values of their lens parameters.
A. Soliton core profile
As a first case of study, let us consider the soliton core profile without the external NFW part. There are in this case only two free parameters: m a22 and θ * E . The projected mass surface density given by Eq. (5a), with the help of Eq. (19), has in this case an analytical expression, where λ crit ≃ 0.484 is the critical value calculated from Eq. (8); see also Table I. Notice that m * (0) = 0, whereas its asymptotic limit is m * (∞) = 2/(13λ crit ).
To obtain a basic understanding of the solutions that will be found for the physical parameters, we show in the top panel of Fig. 4 the expected behavior of the lefthand side of Eq. (23) as a function of the Einstein angle θ * E . We also show, as the series of horizontal lines, the values of the right-hand side of Eq. (23) obtained from the observed data for the galaxies listed in Table II. Figure 4 shows that it will always be possible to identify a value of the Einstein angle θ * E for which the lefthand and right-hand sides of Eq. (23) are in agreement, irrespective of the value of the boson mass -although as the boson mass increases the agreement occurs at increasingly large values of θ * E . For the examples shown in Fig. 4, a boson mass of order m a22 ≃ 0.02 seems to fit well the SLACS galaxies listed in Table II -corresponding to an allowed range for the angular Einstein radius of 5 < θ * E < 10. This latter range can also be translated into an allowed range for the soliton radius, and suggests that r sol ∼ kpc for the given example galaxies.
To summarise, given that we have only one observable constraint, the most we can do is first to fix the value of the boson mass m a and from this to obtain constraints on the remaining free parameters that are consistent with Table III for fixed values of the boson mass ma. Because the main constraint imposed by the lensing system is for the total mass inside the Einstein radius RE, the obtained data points lie along the line of constant soliton mass Ms ≃ 2 × 10 11 M⊙. The data points also lie below the line representing the inequality (9) for the surface density ρs rs = 9×10 3 M⊙ pc −2 . See the text for more details.
that boson mass. Specifically, by adopting a proposed value for the boson mass m a in Eqs. (23) and (24), we can obtain for each galaxy the corresponding best-fit value for θ * E , and from that the best-fit value for r s . The results obtained for our selected sample of galaxies are shown in Table III, and also plotted in the bottom panel of Fig. 4. The latter figure speaks for itself, and it shows that the data points for all galaxies lie along the line with a constant soliton mass M s ≃ 10 11 M ⊙ (see Eq. (22)), and (as required) all lie below the line that represents the inequality (9) for the galaxy in Table II (J0935-0003) with the most extreme value for the ratio of distances on the right-hand side of Eq. (23). The different values obtained for the characteristic radius r s give an enclosed mass which corresponds closely to the values reported in [42]. Nevertheless, these models are found to be considerably too compact when the characteristic radius and corresponding enclosed mass are considered together. For example, galaxy J0008-0004 has a value for M Eins = 3.1 × 10 11 M ⊙ which is comparable with the value of M s = 3.4 × 10 11 M ⊙ obtained using the best fit parameters of the soliton model. Notwithstanding that the soliton model gives an enclosed mass that is adequate and realistic, we think that the characteristic radius is most definitely not so. The mean effective radius for this galaxy is observed to be r e ≈ 9.6 kpc, which is several orders of magnitude larger than the characteristic radius r s obtained for any of the different boson masses presented in Table III. Therefore we think the soliton profile alone is actually not helping to explain the distribution of dark matter around the selected galaxies in a consistent way. Fig. 4, where we see that they all indicate that the total mass contained within the Einstein radius is Ms ≃ 10 11 M⊙.
There are two valuable lessons from the above exercise. The first one is that the soliton core profile alone will always be able to fulfill the lensing constraints even without the consideration of the NFW contribution. This is not surprising, as the lensing equations can be solved even if we consider a point particle with the required total mass (which formally corresponds to the soliton core profile with m a → ∞). The second lesson is that even though the soliton profile may be adequate, formally speaking, to explain the lensing properties of the galaxies in Table II, we will, in any case, have to consider the NFW outskirt in the complete profile (13) in order to satisfy other constraints that suggest that the boson mass should be in the range m a22 = 1 − 10 [43].
B. Complete profile
Taking into account the above experience gained with the soliton profile alone, we will now consider the following procedure for the complete WaveDM profile.
Since the total mass inside the Einstein radius is the only constraint provided by the lens systems, we will fix the values of the boson mass m a and soliton mass M s . For this, we take following values of the boson mass m a22 = 0.1, 1, 10, and for the soliton mass log 10 (M s /M ⊙ ) = 11.5, 10.5, 9.5, 8.5, 7.5, from which we will calculate the values of r s by means of Eq. (22).
We will adopt a uniform prior for the other parameters over the following ranges: α NFW = [0 : 10], and r ǫ⋆ = [r ǫ⋆,max : 10]. Here r ǫ * ,max is found from the cubic equation (16) for a given value of α NFW , and the extreme values α NFW = 10 and r ǫ * = 10 are suggested by Figs. 3 and 4.
We will obtain the values of θ * E by sampling from a Gaussian distribution, using the relation The value for θ * Em = R E /r s is the mean of the distribution using the observed value for the Einstein radius, and σ = 0.05 * χ the error assigned. p is a random number sampled from a uniform distribution on the interval [0,1]. The inverse error function is approximated as described in [44]. In this way, the variable θ * E will not otherwise enter into the fitting analysis. Once the soliton mass is fixed, the rest of the mass that is included within the Einstein radius must be completed by the NFW profile. Because this requires a huge contribution, up to three orders of magnitude more, one sensible consideration is to include a simple approximation of a partial contribution of the luminous matter inside the Einstein radius. In a first approximation, the mass corresponding to the barionic matter is simply a constant value modeled as a point particle. This is done from Eq. (2), and then the projected mass for the lens is composed of two parts, where m(θ) is the mass from the dark matter component given by the profile in Eq. (13), and M ′ = f * , Ein M Ein is the stellar mass contribution as described in Table II. These values are normalized accordingly and then the dimensionless total mass m ′ is where Eq. (27a) is combined with Eq. (23) to produce a modified observable which uses the soliton mass directly, (28)
C. General results
Using the Sloan Lens ACS Survey(SLACS) data for several lens candidates with strong lensing [37,42], we will try to constrain the free parameters that will satisfy Eq. (28). As said before, the information available from the data is the Einstein radius, R E , the lens distances (d OL , d LS , d OS ), and the redshift, z, of the lens. This information is used in the Multinest code [45] to carry out a parameter search for each individual galaxy.
Typical results are shown in Fig. 5 for the individual cases of galaxies J0008-0003 and J0008-0004; both cases include the contribution of the luminous matter to the total mass of the lens as in Eq. (28). For the purposes of clarity, in each figure we indicate the radius r s and total mass M s of the soliton profile. Some general features of the results are as follows. First, we note that the free parameters r ǫ * and α NFW appear well constrained if the soliton mass cannot provide the total mass required by the lens system; in the examples shown, this happens if M s < 10 11.5 M ⊙ . The credible regions for the parameters in Fig. 5 are in agreement with the theoretical expectations discussed in Sec. II B: that there is a minimum value for r ǫ * due to the constraint imposed by Eq. (16), and a maximum value of α NFW appears due to the maximal contribution of the NFW part of the profile to the total mass in the lens, see also the right panel in Fig. 3. Likewise, notice that as α NFW → 0 the value of the matching radius r ǫ * is very well constrained, and this is easily understood from Eq. (21): it is r ǫ * which determines alone the contribution of the NFW part of the profile to the total mass. Finally, observe that the value log 10 (M s /M ⊙ ) = 7.5 is excluded because the code is not able to find any suitable values of the variables that could fit the data. That is, the soliton mass M s is so small that the NFW part cannot compensate the required mass for the lens.
In summary, if the soliton is allowed to provide enough mass to fulfill the matter contribution in the lens, say M s ∼ 10 11.5 M ⊙ , the analysis will select large values for r ǫ * so that the NFW tail contribution to the total matter is minimal, see Eq. (21). In contrast, if the soliton mass is not large enough, M s < 10 11.5 M ⊙ , it is then possible to find appropriate pairs (α NFW , r ǫ * ) for the NFW part of the profile to provide the needed mass for the lens. In this respect, the stripped credible regions in Fig. 5 represent the degeneracy regions in the plane (α NFW , r ǫ * ) for the same mass contribution of the NFW tail to the lens system.
Another quantity of interest is the resultant density profile of DM in the lens system. Fig. 6 shows examples of the density profiles inferred from the posteriors of galaxy J0008-0004 in Fig. 5 for a boson mass m a22 = 1. The soliton core is clearly seen in all curves, and so too is the transition to the NFW part of the profile. Not surprisingly, the largest core corresponds to the configuration with the lowest soliton mass for which the matching radius is close to the lower bound suggested in Eq. (17a).
We also report in Fig. 7 the results obtained for the lens system J0008-0004, for larger or smaller values of the boson mass. For a mass of m a22 = 10, the soliton is much more compact, and it is not by itself adequate to describe a galaxy. But given the fact that the parameters α NFW and r ǫ * are also well constrained we conclude that the lensing effect must be mostly attributed to the NFW part. This is not surprising, as we had already indicated in Sec. II C that strong lensing could be achieved if α NFW ≪ 1. Moreover, a larger boson mass is also in better agreement with recent cosmological constraints [27] and with estimations based upon satellite galaxies of the Milky Way and Andromeda [46].
In contrast, we can see that the constraints become more diffuse if we consider a smaller boson mass of m a22 = 0.1, although there seems to be some preference for the case in which M s = 10 10.5 , that also corresponds to a larger soliton radius. This time the resultant configuration would be in agreement with those found in the statistical analysis made in Ref. [30], which suggests that satellite galaxies put an upper bound on the boson mass that takes the form m a22 < 0.4.
IV. CONCLUSIONS
We have studied the properties of the so-called WaveDM density profile assuming that it comprises the total DM contribution in galaxies for which a gravitational lens has been detected and measured. In doing so we have adapted the standard lens equations to the particular features of the WaveDM, in that we took into account its soliton core together with its NFW envelope, which is the complete form suggested by numerical simulations of cosmological structure under the WaveDM hypothesis.
We then used the lens equations to make a comparison with actual observations of some lens systems that seem to be DM dominated, although we took into account their baryonic components in a simplified manner. In doing the statistical analysis we considered carefully the role of the different free parameters of the WaveDM profile, in particular the boson mass m a which has to be regarded as a fundamental parameter that should not vary from one galaxy to another.
The overall procedure was then to fix the value of the Triangle plot for the parameter posteriors fitted to galaxies J0008-0003 (left) and J0008-0004 (right). The contribution of the luminous matter is 35% (50%) of the total reduced mass inside the Einstein radius for J0008-0003 (J0008-0004), see also Eq. (28). The colors indicate different choices for the soliton mass Ms, and the values of the corresponding rs, calculated from a fixed (normalized) boson mass ma22 = 1, are also shown for comparison. In general, we can see that distinct credible regions can be found for the NFW parameters if the soliton mass is 10 8.5 < Ms/M⊙ < 10 11.5 , and that the constraints are in agreement with the semi-analytic analysis in Sec. II. boson mass and the total mass within the soliton core in the configuration. In consequence, the soliton radius was fixed and the only free parameters were those of the NFW part of the density profile. In general terms, for large or small values of the boson mass, our results indicate that the soliton structure, if it is as massive as 10 11.5 M ⊙ , is able to fit the measured Einstein radius in the lens systems, although this also requires the soliton structure to be extremely small when compared to the measured scales of the lensing galaxies. This result then indicates that galaxies in general cannot be explained by the soliton structure alone. Because of the above, we had to consider the complete WaveDM density profile and constrain the NFW free parameters. Generically, and so far for the cases we explored, our analyses suggest that the matching radius for the soliton and NFW parts of the profile is of the same order of magnitude as the soliton radius, r ǫ ∼ r s , which is in agreement with the expectation from numerical simulations [47][48][49]. In addition, the second free parameter is in general bounded from above as α NFW < 1, which just means that the characteristic NFW radius is larger than the soliton radius, r NFW > r s . Moreover, our results also suggest that the case α NFW → 0 is also possible, which in turn means that the density profile decays as ρ ∼ r −1 at large radii.
On the other hand, for any given value of the boson mass, it was not possible to constrain the NFW param- eters in the case where the soliton radius was larger than the Einstein radius, as in such cases the soliton mass is insufficient to produce the required lensing signal. Together with the aforementioned difficulty that the soliton should not provide the whole mass of the lens, we can summarize our results as M s /M ⊙ < 10 11.5 and r s < 6 kpc. By means of Eq. (22), the above inequalities can be combined in the following lower bound on the boson mass m a > 10 −24 eV. Notice that this lower bound is in agreement with previous constraints from cosmological and galactic scales, see for instance [22,30,31,46]. Although the lens systems we considered are not able to put strong bounds on the boson mass, they certainly indicate that most likely a complete WaveDM profile (i.e. comprising a soliton core + NFW tail) is necessary to account for all the diverse observations at galaxy scales.
As a final note, the lens systems studied here have a subdominant, although non-negligible, baryonic contribution. We expect to extend our analysis to a larger sample with a more detailed, specific, inclusion of the baryonic matter that could give us better constraints on the soliton features. This is ongoing work that will be presented elsewhere. | 10,635.8 | 2017-07-31T00:00:00.000 | [
"Physics"
] |
Energy paybacks of six-sigma: A case study of manufacturing industry in India
© 2016 Growing Science Ltd. All rights reserved.
Introduction
In today's world, everybody is talking about energy saving, effective energy utilization, pollution control, green manufacturing, health problems and its remedies, easy and effective ways of exercise, traffic control, etc.One stop solution to all above concerns is riding a bicycle and it is therefore, more and more people nowadays are turning towards bicycle.Cycling results in huge amount of health benefits including cardiovascular health, improved bone density, muscular fitness, etc. (Oja et al., 2011).Apart from the health benefits, increasing traffic congestion and air pollution in most of the cities in world is also growing as a huge problem.That is why; the government of most of the developed and developing countries are motivating people to adopt the safe, secure and environmental friendly mode of transport: 'Bicycle'.As a result, sales of bicycle across the globe are in inclining mode.Many reports suggest that this affordable mode of transportation if properly absorbed in people's lifestyle can reduce the total Co2 emission of the world by 11%.
Apart from the direct benefits of cycling, the industries involved in bicycle manufacturing can do their part in global energy saving by reducing the waste/rejection and implementing green manufacturing practices.One such case study of an Indian bicycle manufacturer is described in this paper.The industry was suffering from high rejection rate of one of the components of bicycle.It used the trusted Six-Sigma DMAIC methodology to reduce the high rejection rate up to acceptable level.The literature review of six-sigma has been accomplished by various authors in the past.Some have categorised the papers in order of usage of methodology in different sectors (Srinivasan et al., 2016), type of manufacturing industries (Biswas & Chowdhury, 2016), journal wise distribution and classification (Sreedharan & Raju, 2012), classification of tools (Uluskan, 2012) etc. Apart from these there have been some case studies showing the energy conservation in various forms using six-sigma (Falcon et al., 2012;Kaushik et al., 2008;Kaushik & Khanduja, 2008).By applying Six-Sigma, not only the industry was able to save the energy used in production of rejected items by decreasing the rejection level, but moving forward, the annual amount saved by the project has been used for the work related to implementing green manufacturing practices and reducing the energy consumption by industry.This adds to the total energy saving claimed by the industry.
Case Study
Bicycle consists of limited number of parts as compared with automobiles.Transmission of power in a bicycle is performed with the help of a Chain sprocket assembly.Hence chain can be considered as an important part of bicycle.The main parts of the chain are bush, pin and outer covering.The case study described in this paper was executed in a bicycle chain industry in India.It produces all parts to be assembled in a chain.Pin is the key element of chain and is initially in the form of a rod and is cut in lengths.The tolerance limit of pin length was 9.65±0.5 mm (Fig. 1).
The rejection rate of pin was 8.9 per cent, so there was a huge scope of increasing productivity by eliminating faults innate in manufacturing process.Six Sigma DMAIC approach was used to resolve pin rejection problem to attain the acceptable quality level.At first, project was presented to the management and after their approval official registration was performed.This activity is necessary to win consent from the higher authorities because unless they approve it can never be possible to devolve the available resources.The pin manufacturing process was looked over minutely and Six Sigma DMAIC approach has been effectively applied to improve the standing Cpk from 0.47 to 1.90.These phases are explained as follows:
a) Define
In define phase, Process Flow Diagram was drawn for pin manufacturing process as shown in Fig. 2.This diagram elaborates the different manufacturing steps during the production of pin.By drawing such diagram, it is easier to focus the attention of the project team on the process that is responsible for the faulty parts.In this phase a measure of extent of problem is generally made.There are various tools available for that.First of all, a measurement system analysis (MSA) was performed which includes the Gauge R&R (gauge repeatability and reproducibility) study.The experiment can be performed with the help of at least two people.An operator from the production line and one from the inspection line were chosen.
Ten pieces of known measurement (pin) were given to them and they were asked to make a measurement using micrometre which was being used to measure the dia of pin during production.
Following are the readings that were recorded (Table 1). 1 was entered in the Minitab software for performing the gauge r & r study using ANOVA.Table 2 shows the results of study in which repeatability and reproducibility was noted out to be 21.27 % and 0.00 %.This value is certainly less than 30 %, showing that Micrometer in use was accurate.
Histogram
The histogram (Fig. 3) was also drawn for checking the trend of the rejected parts (100 samples size).It clearly showed that data was not around the mean line and was away from target (9.65mm)value.Also, most of the parts being produced were undersized.
Process Capability Analysis (Cpk)
Minitab software was again used to draw a Cpk curve for pin length as shown in figure 4. Cpk is a measure of the capability of the process to produce acceptable parts.Value less than 1 show that there is great need to rectify the process and increase the Cpk value.Also, Z-bench σ value was found to be 1.35 (Fig. 4) and present PPM was found at 89095.91, which was bizarre.The main contribution of this phase is to find out the root cause of the rejection.In this phase, suspected causes of rejection were listed by thoughtful study of gathered data.Different statistical tools were tried for the analysis.Their explanation is as follows-
Fish-bone Diagram
Various brainstorming sessions were performed which includes different members from various sections of the industry.Thorough study of the possible causes resulted in a list of causes related to different aspects of 4 M's.These are depicted in Fig. 5 as fishbone diagram.
Hypotheses Testing
After detailed discussions three suspected source of variations were shortlisted for further investigation.
Hypotheses were set and its testing was performed with all three suspected source of variations using 2 sample T test.The sample size was kept at 50 for all assessments.In first case, assessment was performed for operator skill (unskilled and skilled).In second case, assessment was performed for regrinding of cutter (36 hrs & 24 hrs).In third case, assessment was performed for pin feeder device (existing & improved).
First Case: Assessment for Operator Skill
Sample 1: Unskilled operator Sample 2: Skilled operator Operator skill assessment displayed that p-value for pin length is greater than the 0.05 (confidence level= 95%).Hence this cannot be a prime cause of rejection.
Second Case: Assessment for Re-Grinding of Cutter
Sample 1: Re-grinding after 24 hrs Sample 2: Re-grinding after 36 hrs Re-grinding assessment displayed that p-value for pin length is less than the 0.05 (confidence level= 95%).Hence this cannot be a prime cause of rejection.Pin feeder device assessment displayed that p-value for pin length is less than the 0.05 (confidence level= 95%).Hence this cannot be a prime cause of rejection
d) Improve
After finding out the root causes associated with the process now it is the time to find out the optimum working parameters.For this, a tool 'Design of Experiments' was chosen.This tool helps us to design the nature and combinations of different parameters during experimentation.There were two factors and two levels available so 2×2=4 combinations could be tried to optimize the value of the parameters; regrinding of cutter and pin feed mechanism.Table 3 displays the existing and proposed working parameters of root causes for pin length variation and table 4 depicts readings of pin length at available combinations of working parameters.
Fig. 6. Main Effect Graph for Pin Length
To concrete our findings, Main effect graph and interaction graph were.The Main Effect graph (figure 6) suggested that both regrinding and pin feeder mechanism were the prime factors for high rejection rate of pin.Interactions plot (Fig. 7) shows that there is no interaction present between the factors which mean the value of one factor does not affect the value of another or there is no conflict of interest between the two.In this last phase of six-sigma, feasibility and monitoring of the implemented measures is checked.For this, X bar/R Control Charts (Figure 8) were plotted (sample size-100) to envisage the occurrence of different causes of variation and for making sure that the process endures in an established optimized path.
Results and Discussions
When different similar industries were consulted, Pin length variation rejection is found to be a prevailing problem.The causes of high rejection rate of pin were found to be pin feed mechanism and regrinding of cutter.After implementing, documenting and freezing all the proposed measure, a great amount of improvement in terms of rejection PPM was observed.The PPM which earlier was recorded to be at 89095.91 now has been improved to 0.01 PPM which is a great achievement.Also, the z bench sigma level is improved to 5.58 (figure 9), corresponding to a monetary gain of INR 2.67 lakhs (Appendix -A) and this is a substantial amount for any organization.The money saved by the project was used for bringing more energy conservation in the industry.Following measures were taken some of which have been implemented and some are in process.By implementing above measures industry was able to reduce its monthly electricity usage by 11%, which is a great achievement.It not only adds to the global energy conservation but also the profitability of the industry increases.This act of the industry also brought the energy awareness among employees and motivated them to save energy at their homes also.Study demonstrates that organizations using similar quality management approaches perform better in almost every parameter including yield on sales, investment, improved organizational culture, personnel development of employee, brand value, employee satisfaction, effective utilization etc.These can be treated as intangible benefits of Six-Sigma implementation.This case study also reveals and encourage small industries to implement similar quality management techniques for productivity improvement because results amply put to rest all the fears that management techniques like Six Sigma, MRP, ERP, JIT etc. are the domination of only the large industries that can spend plentifully.
Conclusion
In this paper a case study of an Indian bicycle chain manufacturing unit is explained with a prime motive of energy conservation.The case study started with the rejection reduction of one of the part of cycle chain.Six-Sigma DMAIC methodology was used for finding out the root causes of the rejection and finally a rejection PPM of 0.1 was achieved after the complete implementation of methodology.
The monetary savings made after the project were used for energy conservation purposes in the industry.Various measures were proposed by the team many of which are implemented and some are in process.Successful implementation of the measures brought down the monthly electricity usage of industry by 11% which is a great achievement.This can be treated as a contribution of industry in global energy conservation.In author perspective, such projects should be started more frequently in industries.This not only improves the organizational culture and profitability but also bring the energy saving awareness among the employees.
Fig. 1 .
Fig. 1.Main parts of a Bicycle Chain Assembly
Table 1
Measurement System Analysis
Table 2
Result of Gauge R & R (Pin Length) for Micrometer
Table 5
Description and Status of Various Measures for Energy Conservation | 2,683.8 | 2016-11-01T00:00:00.000 | [
"Business",
"Environmental Science",
"Engineering"
] |
An Intelligent Handover Management System for Future Generation Wireless Networks
Future generation wireless networks should provide to mobile users the best connectivity to services anywhere at anytime. The most challenging problem is the seamless intersystem/vertical mobility across heterogeneous wireless networks. In order to answer it, a vertical handover management system is needed. In our paper, we propose an intelligent solution answering user requirements and ensuring service continuity. We focus on a vertical handover decision strategy based on the context-awareness concept. The given strategy chooses the appropriate time and the most suitable access network among those available to perform a handover. It uses advanced decision algorithms (for more e ffi ciency and intelligence) and it is governed by handover policies as decision rules (for more flexibility and optimization). To maintain a seamless service continuity, handover execution is based on mobile IP functionalities. We study our decision system in a case of a 3G/UMTS-WLAN scenario and we discuss all the handover decision issues in our solution.
INTRODUCTION
The existence of various wireless technologies (3G/UMTS, WLAN, WMAN, etc.), with the evolution of multi-interface mobile terminals (MTs) and IP-based applications, has allowed a mobile user to have access to IP services anywhere at anytime from any network.This universal wireless access is driven by the future generation of wireless networks (FGWNs) (i.e., the 4th generation (4G) of wireless communications [1]).To ensure ubiquity and seamlessness challenges in FGWN, intersystem handover management is the essential issue that supports the moving of users from one wireless system to another during active communication.
In FGWN, the need for vertical handovers can be initiated for convenience (e.g., according to user choice for a particular service) rather than connectivity reasons (such as in horizontal handover).Vertical handover challenges are performance optimization (e.g., reducing overhead signaling, handover latency) and user requirements satisfaction.These particular requirements can refer to the always best connected (ABC) concept, of being connected in the best possible way in an environment of heterogeneous wireless networks [2].For that, decision parameters have to be considered such as network conditions and user preferences.Thus, a vertical handover management solution can mostly concern the handover decision phase: the decision for the appropriate time to initiate the handover and for the most suitable access network from those available.
In this paper, we propose an intelligent handover management system controlled by the mobile.It applies the ABC concept that answers "if a handover is needed or not" (i.e., handover initiation) and "over which access network to handover" (i.e., network selection) while maintaining service continuity.The first choice can minimize, for instance, the signalling overhead and avoid unnecessary handovers.The second choice can satisfy network and user requirements.More precisely, we consider a context-aware vertical handover decision: multiple criteria are considered as contextual information gathered from terminal and network sides and advanced decision algorithms (for handover initiation and network selection) are needed.Moreover, we use vertical handover policies expressing rules that shape the handover decision process.The handover execution is based on mobile IP (MIP) functionalities for service continuity.
The handover decision scheme is studied under a 3G/UMTS-WLAN environment.
In our system, we provide a combination of interesting decision strategies [3][4][5]: a context-aware strategy for multiple criteria use and precision, advanced decision algorithms for efficiency and intelligence, and policies for flexibility and optimization.Thus, our approach should be conscious of all the contexts (access network availability, MT's movement, QoS parameters, etc.), takes the right decision at the right time (according to user objectives and handover policies), and ensures service continuity for the demanding service.This combination of a context-aware approach using policies can provide an efficient and optimized vertical handover decision solution.This latter can facilitate MIPbased procedures necessary for handover execution phase.With a mobile-controlled model, our approach can be a flexible handover management system for a 3G/UMTS-WLAN environment.
The paper is organized as follows.Section 2 presents the related work.Section 3 introduces the architecture of our handover management system.Section 4 describes the handover decision strategy.Section 5 gives the handover execution procedure.Section 6 studies a 3G/UMTS-WLAN scenario and discusses the proposed system features.Finally, Section 7 concludes our work.
RELATED WORK
The handover management remains a widely studied issue in the case of a heterogeneous environment.In FGWN, mobile users should be able to move among these heterogeneous networks in a seamless manner.Various activities of working groups are currently under way such as IEEE 802.21 [6], IETF MIP [7], or 3GPP standards [8].IEEE 802.21 specifies media-independent handover (MIH) services and aims at providing link layer intelligence and other related network information to upper layers to optimize handovers between heterogeneous link layer technologies.IEEE 802.21 supports a mobile-controlled handover (MCHO) scheme and MIP as mobility management protocol.It is the MIH function that provides intelligence to the network selection entity or the mobility management entity responsible for handover decision based on L1, L2, and L2.5 triggers.The details of network selection entity and the specification of handover policies that control handovers are outside the scope of the 802.21.
The first vertical handover decision scheme, that considered multiple criteria user intervention and policies, was proposed by [3].It introduced a cost function to select the best available access network based on three policy parameters (bandwidth, power consumption, and cost).Reference [9] proposed also a multiservice vertical handover decision algorithm cost function.However, the solution is based on a policy-based networking architecture (i.e., IETF framework).For more efficiency and taking into account more criteria, context-aware decision solution has inspired the authors in [5,10,11].In [10], the authors designed a cross-layer architecture providing context-awareness, smart handover, and mobility control in a WWAN-WLAN envi-ronment.They proposed a vertical handover decision, with a cost function-based solution, taking into account network characteristics and higher level parameters from transport and application layers.References [5,11] are based on a multiple criteria decision-making algorithm, analytic hierarchy process (AHP) [12].Nevertheless, some information coming from the context (network or terminal) can present uncertainty or imprecision.Thus, more advanced multiple criteria decision algorithms are necessary to cope with this kind of information.To meet this requirement, in their work [4,13], Chan et al. applied the concept of fuzzy logic (FL).They employ decision criteria such as user preferences, link quality, cost, or QoS.We compared in detail the different vertical handover decision strategies in [14].
In this paper, we design our decision strategy while taking advantage of the most interesting solutions and particularly the best aspect of each one.Our solution was introduced in [15].It is based on context information as proposed in [5] and tools such as AHP and FL [4].To deal with the decision problem complexity, our scheme is based on vertical handover policies that express rules to help managing the whole decision process.This combination can prepare also MIP procedures in handover execution for service continuity such as in [16].
THE HANDOVER MANAGEMENT SYSTEM
Figure 1 gives our proposed MT functional architecture containing the following given modules.
The network interfaces module contains the protocol stack of each network.These interfaces are monitored periodically and one of them will be intelligently selected and activated in the handover process.
The handover management module is responsible for providing transparent switching between networks.So, it encloses the main phases of a handover process.
(i) Handover information gathering (HoIG).Collecting all the contextual information, through monitoring and measurements, required to identify the need for handover and to apply handover decision policies.
(ii) Handover decision (HoD).Determining whether a handover is needed (i.e., handover initiation) and how to perform it by selecting the most suitable network (i.e., network selection) based on decision criteria.
(iii) Handover execution (HoE).Establishing the IP connectivity through the target access network.This will implement protocols such as MIP.
The upper layers enable functionalities such as session management services to the application and provide additional information to the HoIG module.
In our paper, the handover criteria are the qualities measured to give an indication for a context-aware handover decision.It is required to be context-aware in the sense that it should be conscious of possibilities offered by each access network, MT's movements and QoS requirements for the demanding service.In traditional handover decision, only one criteria is used, the received signal strength (RSS).For a vertical handover decision, it is not sufficient.Context information is relevant in a way that they are useful enough to avoid false decisions, therefore, bad performances.They can be relative to the network, the terminal, the service, and the user.Here, we group it into two parts as in [5]: all the information related to the network on one side and all the information that may exist at the terminal on the other.There are the following contexts.
(i) Network context.QoS parameters (bandwidth, delay, jitter, packet loss), coverage, monetary cost, link quality as RSS, and bit error rate (BER) of the current access network and its neighbors.
(ii) Terminal context.User preferences, service capabilities (real-time and non-real-time), terminal status (battery and network interfaces), priority given to interfaces, location, and velocity.
These criteria can be classified into static and dynamic.Typically, static criteria are user preferences and the monetary cost, whereas the MT's velocity, RSS, and access network availability are dynamic criteria.These contextual information is provided by the HoIG module.It is responsible to keep the handover policies repository (HoPR) entries up to date.These entries (static or dynamic) are needed as policy parameters to govern the choices in the whole decision process.HoPR stores a set of policies expressing decision rules based on different parameters.A policy rule is a group of if-then rules (if condition then action).Examples of rules are given in the description of the decision process (see Section 4.1).
This combination of a context-aware approach using policies can provide an efficient and flexible vertical handover decision solution.We give more flexibility in a way that the whole handover process is completely controlled by the mobile (MCHO).It reduces more the overall complexity in the network, the signaling overhead, and the handover latency than a mobile-assisted handover (MAHO).Most conducted experiments and publications in vertical handovers [4,5], even regarding policies, promote an MCHO decision model in which the MT is responsible for making decisions and to put all the intelligence at the MT.Therefore, we prefer an MCHO solution with respect to transfer of handover decision criteria and more precisely regrouping context information.Thus, MT conducts the initiation (at the decision phase) and the control of the handover (at the execution phase).Otherwise, MCHO does not exclude the assistance from the network in a way that it needs information, such as capabilities or bandwidth, to choose the most optimal network among those available.Moreover, this proves the distribution of computation between MTs compared to a centralized approach (a network-controlled handover, NCHO).
THE HANDOVER DECISION STRATEGY
In heterogeneous environment, the handover decision process is very complex: decision criteria, coming from different sources, should be compared and combined to select the appropriate moment to handover and the target access network according to user preferences.Moreover, the gathered contextual information can be imprecise: unavailable or incomplete [17].This complexity can be solved by using advanced decision algorithms applicable on multiple criteria and reasonable handover policies.In this section, we describe our intelligent vertical handover decision process based on two main phases: handover initiation and network selection.
It is performed as a context-aware decision making problem, so a typical multiple criteria decision making (MCDM) problem.In the study of decision making, terms such as multiple objective, multiple attribute, and multiple criteria are often used interchangeably [17].MCDM is sometimes applied to decisions involving multiple objectives or multiple attributes, but generally they both apply.Multiple objective decision making (MODM) consists of a set of conflicting goals that cannot be achieved simultaneously.Multiple attribute decision making (MADM) deals with the problem of choosing an alternative from a set of alternatives which are characterized in terms of their attributes.
In our process, we use FL and AHP as decision support tools.The use of fuzzy logic (FL) does not only combine and evaluate multiple criteria simultaneously, but also copes with imprecision and nonstatistical uncertainty.Hence, fuzzy logic (FL) concept provides a robust mathematical framework.It can be used to model nonlinear functions with arbitrary complexity.AHP is able to identify the decision problem as a multilevel hierarchical structure of primary objectives (i.e., according to user preferences) and decision criteria (i.e., context information).In the following subsections, our decision problem can be identified as a fuzzy or a classical MCDM problem.
The decision process
As previously mentioned, the handover management process, more detailed in Figure 2, starts with the HoIG phase.This latter gets context information through monitoring, measurements, or probing and updates HoPR permanently.The information gathered is needed to perform handover initiation described in Section 4.2 and network selection described in Section 4.3.At the terminal context level, the interfaces are monitored (L2 and L3 monitoring) to reach access networks, and the user preferences are defined to get objectives.At the network context level, QoS parameters or cost can be advertised by the available access networks.The HoIG module provides policy parameters to the HoPR such as network availability or user preferences.These parameters are retrieved by handover decision components when necessary and used to apply policy rules.The decision policy rules translate scenarios related to connectivity, network availability, user, or even corporate preferences.Our policy rules are as follows.-Subcondition 1: "handover is needed?"= NO (for each network), -Subaction 1: enabling the network selection module.
Policy Rule 1
According to the flowchart in Figure 2, handover initiation evaluates current network conditions in order to decide if a handover is necessary.If it does not, there is no need to search for new available access networks.When MT is under an overlapping coverage, available network conditions must be satisfactory in order to enable network selection.Criteria scoring is a preconfiguration phase performed once HoIG gets user defined preferences.Network scoring is invoked for each service-type currently running in the terminal.Thanks to criteria scoring and network scoring results, decision making selects the most appropriate access network according to user preferences.Once the target access network chosen, a HoE can be performed (Section 5).We illustrate handover decision functioning in a 3G/UMTS-WLAN case of study in Section 6.
Handover initiation
The handover initiation phase is performed by a fuzzy logic system (FLS)with a Mamdani fuzzy inference system (FIS) as described in [4] (see the appendix).This phase is considered as a fuzzy MADM [17].The information gathered (RSS, bandwidth, network coverage, velocity) depending on their availability are fed into a fuzzifier in which are converted into fuzzy sets.A fuzzy set contains varying degree of membership in a set.The membership values are obtained by mapping the values retrieved for a particular variable into a membership function.Figure 3 gives membership functions of the input fuzzy variables.
(i) The input fuzzy variable "RSS" has three fuzzy sets: weak, normal, and strong (Figure 3(a)).
(iii) The input fuzzy variable "network coverage" has three fuzzy sets: bad, normal, and good (Figure 3(c)).
These inputs are chosen answering specific needs related to different scenarios.RSS indicates the current radio link quality and acts as a pretreatment that helps to decide whether to trigger the handover.The bandwidth is different from a network to another (e.g., 3G/UMTS has lower bandwidth compared to WLAN).The velocity is also a very important criterion since when the coverage is bad, a highspeed MT would quickly pass through it.This can avoid excessive unnecessary handovers.After fuzzification, fuzzy sets are fed into an inference engine, where a set of fuzzy rules are applied to determine whether handover is necessary (see Table 1).Fuzzy rules utilize a set of IF-THEN rules and the result is YES, Probably YES, Probably NO, or NO.As an example from Table 1, the rule 81 represents the case of an MT under 3G/UMTS coverage and should not handover to WLAN because of its velocity, in a 3G/UMTS-WLAN scenario.At the final step, the resultant decision sets have to be converted into a precise quantity.For that, centroid defuzzification method [4] is used to obtain a handover initiation factor (see the appendix).If this quantity is below a certain threshold (e.g., 0.85), a handover is needed.
Network selection
In this phase, we need more decision criteria from the terminal side (i.e., user preferences, service capabilities, battery status, and network interfaces) as well as from the network side (i.e., QoS parameters, cost).The most appropriate access network, from those available, has to be selected satisfying a number of objectives.So, we consider an MODM in which all alternatives available (access networks) are evaluated according to these objectives: low cost, the preferred interface, the good battery status, and to the good quality (maximizing bandwidth, minimizing delay, Jitter, and BER).It is pointed out that contextual data can be crisp or fuzzy.Fuzzy data have to be converted to crisp numbers using conversion scales.Thus, a classical MODM such as AHP (see the appendix) is used to assign scores to the available networks.
As mentioned previously, before using AHP method directly, two steps have to be performed: the criteria scoring, a preconfiguration step, in which the importance of each objective is evaluated according to user preferences; and the network scoring, in which the available networks are evaluated and compared according to each objective.
(a) Criteria scoring is in charge of mapping priorities given by the user into scores.In our decision process, we consider two categories of services: real-time (voice, video conferencing or streaming, etc.) and non real-time (file transfer, email, web browsing, etc.).For each type of service, priorities are considered among the available interfaces in the MT (WLAN, UMTS, Bluetooth, etc.) and among the user preferences previously fixed.For example, the priority can be set to provide the fastest network connection to the mobile user, or the cheapest.WLAN interface can be set as high priority or alternatively chosen when a video application is active.Whereas, 3G interface can be set as high priority especially for voice application due to the almost 3G ubiquitous coverage.We obtain the following scores: interface scores and objective scores.Based on the priorities given by the user, scores between 1 and 9 are assigned automatically, where 1 is the most preferred one and 9 the least preferred one [5].The scores are equal-spaced integers whose space gap is defined by (1), where N p is the number of parameters, S h and S l are the highest and the lowest possible scores (i.e., 9 and 1), respectively, and I is the numeric space gap between two subsequent scores, which is rounded off to the nearest integer: For example, for objective scores, the user sets this order to the objectives: preferred interface (obj1), low cost (obj2), good quality (obj3), and finally good battery status (obj4).Here, (1) results in I = 2, while S h = 9, S l = 1, and N p = 4. Obj1, obj2, obj3, and obj4 get scores of 1, 3, 5, and 7, respectively.The same measure is made for interface scores.(b) Network scoring performs real-time calculations for each type of running application.Here, scores have to be assigned to each of the available networks based on user preferences.It is simple to get the network scores related to the interface and the cost.The same interface score, defined in the previous step, is assigned to the available network.For the cost and battery status objective, all the available networks are compared with each other.Cost scores and power scores are assigned using the equal-spaced scores between 1 and 9 based on (1) in a descending order, where the cheapest network has a score of 1.In the case of the quality objective, network QoS parameters are very dynamic and each application type has its own QoS requirements.So, we have to express QoS preferences as limits in order to compare them easily with the network QoS parameters.For that, we use a technique of mapping the four quality parameters (bandwidth, delay, jitter, and BER) into limits values (upper and lower), described in [5].It is an easy and fast solution for comparing dynamic parameters for each demanding service.Now, we can compare the QoS parameters of all available networks with these values.Quality scores are calculated as follows: where u i and l i are, respectively, upper and lower limits for a particular QoS parameter, and n i is the value offered by a network for that parameter.However, (2) is specific to the bandwidth parameter, where the result is preferred to be as high as possible.Whereas, (3) is specific to delay, jitter, and BER parameters, where the result is preferred to be as low as possible.
(c) Decision making is the final step of the network selection phase and calculates the final decision once every parameter is available.The analytic hierarchy process (AHP) method is employed [12].Our decision problem is structured as a hierarchy in which decision factors are identified and inserted.Figure 4 presents our decision concept with ABC as the overall objective (topmost node of the hierarchy), objectives as subsequent nodes, and solution alternatives as bottom nodes.AHP method is chosen due to its ability to vary its weighting between each objective.The AHP calculation is a three step process as follows.
(1) Calculating the objective priorities or weights from the objective pairwise comparison matrix A (4) based on objective scores: where RS i j values are the relative scores involved in each objective, indicating how much more important objective i is than objective j ((5), [5]): ( A norm (see the appendix) is the normalized matrix of A (6), where the values a i j of each row for objective i are calculated to give priorities for each objective: wo 1 for obj1, wo 2 for obj2, wo 3 for obj3, and wo 4 for obj4 (7): (2) Calculating the network weights with respect to each objective through a network pairwise comparison matrix (8): where RS i j values are the relative scores among the scores of the available networks obtained at the network scoring step in terms of individual objective.After normalizing (8), we can calculate network weights wn i j , where i and j represent, respectively, the available network and the specific objective, using (8).
(3) Determining the sum of products of objective weights (from step (1)) and network weights for each network (from step (2)) and selects the network with the highest sum.For i number of available networks and j number of objectives, the overall score is obtained from (9):
THE HANDOVER EXECUTION PROCEDURE
In order to maintain user sessions while moving between two networks, intersystem mobility solution is needed.For that, MIP [7] is an efficient IP layer mobility management presented in [14].It requires to implement MIP functionalities with these components: home agents (HAs) and foreign agents (FAs) in both networks, and MIP support in the MT.MIP agents, installed in gateway routers, can help them to tunnel and forward the data packets.HA may be local to any network (i.e., depending on which network the user is subscribed to), it must be accessible by both networks to maintain current MT's location.
In order to maintain seamless service continuity, we focus on handover decision and execution strategies.In our scheme, the handover decision process plays an important role in handover execution process preparation.It is particularly useful under any overlapped coverage (e.g., 3G/UMTS and WLAN).When MT is moving out of a network coverage area, the proposed scheme can predict disconnections and, thus, saves the MIP movement detection and triggers preregistration.A generic MIP signaling for the handover execution procedure is depicted at Figure 5 (foreign network can be 3G or WLAN).Once handover decision is taken, IP connectivity has to be maintained.In MIP procedure, each MT is assigned to a pair of addresses: a home address and a temporary address called care-of-address (CoA) when away from its home network.The CoA in our solution is the address of the FA.So, we opt for MIPv4 because of its wide support by network operators today compared to MIPv6.Here, the important thing is that the use of standard MIP can drive to a nonseamless handover (significant handover latency for realtime services).Nevertheless, in order to remedy this problem, we use preregistration process to reduce the handover latency and packet buffering and forwarding functions thus reducing the packet loss [16].
Once handover decision taken, MT will request IP connectivity in order to obtain a CoA.The latter will be configured upon receiving the FA advertisements.After that, the MT can send an MIP preregistration request to the FA.This latter forwards the request to the HA.Right after, the HA creates a mobility binding between the MN home address and its CoA and sends a preregistration reply.Once received, FA forwards the reply to the MT.This preprocedure is finished when MT received a preregistration reply before L2 handover.Thus, a tunneling is established between HA and FA encapsulating packets received at the user home network, then forwarding them to its CoA.Moreover, to prevent packet loss, HA buffers packets destined to MT when receive the preregistration reply.It allocates extra space to store the MTs next CoA in the address table.Thus, after L2 handover completion, it updates this table by replacing the current CoA with the next one and forwards the buffered packets.
CASE OF STUDY: 3G/UMTS-WLAN ENVIRONMENT
In order to compare to other proposed solutions [9,10], we choose a 3G/UMTS-WLAN environment to evaluate our handover decision strategy.3G/UMTS offers wide area coverage with lower data rates and a higher cost than WLAN which offers higher data rates for less expensive cost, but in localized areas.
3G/UMTS-WLAN scenario
In Figure 6, we give our 3G/UMTS-WLAN scenario.Our scenario can be divided into different phases according to MT movement (shown in the figure).We consider that a user moves with a certain velocity and he stops for a predefined pause time and then moves again (e.g., random waypoint mobility model).For RSS values, we assume that a coverage area is divided into three different regions: the darker one has the strongest RSS, the second one has lower RSS than the first one, and the third one has the weakest RSS.The last one is potentially the vertical handover area.It is pointed out that in real WLAN environment, RSS can highly vary over time even at a fixed location depending on parameters such as interference, user number in the area.We enumerate the different phases that could characterize the scenario as follows.(1) MT is under 3G/UMTS coverage area, it is associated to BS.
(2) MT is entering a WLAN coverage area.A 3G/UMTS-WLAN handover can be performed according to user objectives and the running application thanks to the network selection module that is performed when more than one access network is available.Thus MT is associated to AP.Otherwise, such a handover could not happen depending on the MT velocity (really fast) according to the handover initiation (FLS) result.
(3) MT is under a overlapping coverage.MT is associated to the most suitable access network answering its requirements.A handover could not be performed if the MT remains motionless for the same running application.After a certain period, network conditions can change (bandwidth is low).In this case, a handover could be performed.
(4) MT is leaving the WLAN coverage area (RSS is weak).This step is time critical, since the active connection would break if the WLAN coverage ended before performing the handover to 3G/UMTS.Thus the handover initiation module can predict MT disconnection and prepares the WLAN-3G/UMTS handover.
(5) MT is associated to BS of 3G/UMTS network.
Handover decision strategy evaluation
To study this environment of two access networks, we choose voice and data applications.The mobile user has MT with two interfaces: 3G/UMTS and WLAN.He enters his preferences for both applications.As mentioned in Section 4.3, the objectives are: low cost (cost), the preferred interface (interface), the good battery status (power), and the good quality (maximizing bandwidth, minimizing delay, jitter, and BER).To simplify, the objective good quality After criteria scoring, we assume that the two networks are available and their current conditions are good (a handover is not needed).Thus we proceed to network scoring.We obtain cost scores, power scores, and quality scores thanks to (1) and (2).
In the decision making step with AHP method, we have to establish the objective pairwise comparison matrix.Following (4) with values calculated in (5), we obtain the normalized matrix in Table 3 from (6).The weights for each objective based on (7) are calculated: wo Interface = 0.0611, wo Cost = 0.453, wo Power = 0.2198, wo Quality = 0.8276 for voice application and wo Interface = 0.0679, wo Cost = 0.9123, wo Power = 0.1340, wo Quality = 0.3177 for data application.In the next step of the AHP method, we calculate the network weights with respect to each objective.We obtain the network pairwise comparison matrix (8) given in Table 3 in the normalized form for voice application.The network pairwise comparison matrix for data application is similar to voice application one.The only difference is the interface matrix.Here, we can have the values for the interface objective, for example, wn WLAN,Interface = 0.124 and wn UMTS,Interface = 0.9923 for voice application.
At the final step, we calculate the sum of products of objective weights and network weights for each network from (9).The results for the two available networks are Score WLAN = 0.3056, Score UMTS = 0.4375 for voice application and Score WLAN = 0.522, Score UMTS = 0.3529 for data application.The network with the highest score, UMTS, is finally selected for voice application and WLAN for data application.
According to the different phases of the scenario enumerated previously, we give the results for both applications of our solution compared to an RSS-based algorithm [15] in Table 4.
Discussion
As mentioned in Section 2, various handover decision strategies were proposed in FGWN.Compared to [9] that use a formula-based solution with optimizations, we use an inference-based one for handover initiation and a classical MCDM method for network selection.However, both answer user requirements (i.e., network selection) as well as network efficiency (i.e., handover initiation).Deciding for the correct time to initiate a vertical handover can reduce the subsequent handovers (i.e., ping-pong effect) and limit the signalling messages and can also predict disconnections during MT's movement.Thus handover latency can be reduced.Selecting the best access network can satisfy user requirements anywhere and anytime in a flexible (using policies) and efficient (AHP method) manner.
Otherwise, we have to discuss some relevant aspects that characterize or not our system.
(i) In our handover decision mechanism, the chosen decision techniques use complex calculations (fuzzy logic) on one hand, and simple calculations (AHP method), on the other hand.Thus we bring more intelligence and precision instead of ease of calculation (cost function) in the whole process for practical mobile terminals.
(ii) When MT tries to search for available access networks, it must activate the interfaces.The simplest way to discover these networks is always keeping all the interfaces on.However, activating the interface consumes battery power.A faster system discovery time is also desired because the MT can benefit faster from the new wireless network.Since our system based on handover decision policies is flexible, it is possible to add some specific rules as defined in [18] for a power saving interface management solution.
(iii) Periodically, the system reevaluates handover initiation when the mobile user is using a current access network.In a case where a handover is needed and there is no better access network available for the ongoing application, we are facing a problem of subsequent unnecessary handovers.To solve it, we can use a waiting period in which the stability aspect is maintained.Moreover, the handover synchronization problem, as mentioned in [3], considers that several MTs could discover the same better network and switch to it simultaneously.In this case, it causes instability for all these MTs and poor performance.For that, a randomized stability period is used.
(iv) It is pointed out that the MIP protocol is not relevant for delay-sensitive applications.With a handover decision mechanism that enables a preregistration, MIP movement detection time is saved.Thus it prepares the handover execution phase and participate to a seamless handover (minimum handover latency and packet loss).However, we are evaluating the handover execution module in an ongoing work.
CONCLUSION
In our paper, we propose a handover management system for future generation wireless networks.Our solution focuses on the handover decision process providing flexibility and efficiency thanks to advanced multiple criteria decision algorithms (fuzzy logic and AHP) and policies governing it.We give more flexibility in a way that the scheme is controlled by the mobile (MCHO).Thus it can provide performance optimization and prepare the handover execution phase.In a near future, we compare our handover management system with other existing techniques and we study the multiservice aspect such as in [9], since the proposed handover decision mechanism performs for one service at a time.Moreover, our vertical handover decision scheme can be applied to other environments such as 3G-WMAN and WMAN-WLAN, while proving seamless mobility.
Mamdani fuzzy inference system
This method is the most commonly seen fuzzy methodology.The system essentially defines a nonlinear mapping of the input data into an output, using fuzzy rules.The mapping process involves input/output membership functions, FL operators, fuzzy if then rules, aggregation of output sets, and defuzzification.The fuzzy inference system contains four components: the fuzzifier, the inference engine, the fuzzy rule base, and the defuzzifier.The most popular defuzzification method is the centroid calculation.
In centroid defuzzification method
The defuzzifier determines the center of area (centroid y ) under the curve given by where y i is the center of the fuzzy set B (membership function μ B ).
AHP method
The concept of AHP was developed, among other theories, by Thomas Saaty, an American mathematician working at the University of Pittsburgh.It is an approach to decision making that involves structuring multiple choice criteria into a hierarchy, assessing the relative importance of these criteria, comparing alternatives for each criterion, and determining an overall ranking of the alternatives.
Matrix normalization
Normalized matrix of A(x i j ) is given by A norm (a i j )
Figure 1 :
Figure 1: Our handover management system architecture.
Figure 2 :
Figure 2: Handover decision process in our handover management system.
Table 1 :
Examples of fuzzy rules.
⎝ a 11 a 12 a 13 a 14 a 21 a 22 a 23 a 24 a 31 a 32 a 33 a 34 a 41 a 42 a 43 a 44
Table 2 :
Criteria and network scoring.
Table 3 :
Objective and network pairwise comparison matrix in AHP method.
Table 4 :
Network selection results. | 7,894.6 | 2008-01-01T00:00:00.000 | [
"Computer Science"
] |
Characterization of Unique Small RNA Populations from Rice Grain
Small RNAs (∼20 to 24 nucleotides) function as naturally occurring molecules critical in developmental pathways in plants and animals [1], [2]. Here we analyze small RNA populations from mature rice grain and seedlings by pyrosequencing. Using a clustering algorithm to locate regions producing small RNAs, we classified hotspots of small RNA generation within the genome. Hotspots here are defined as 1 kb regions within which small RNAs are significantly overproduced relative to the rest of the genome. Hotspots were identified to facilitate characterization of different categories of small RNA regulatory elements. Included in the hotspots, we found known members of 23 miRNA families representing 92 genes, one trans acting siRNA (ta-siRNA) gene, novel siRNA-generating coding genes and phased siRNA generating genes. Interestingly, over 20% of the small RNA population in grain came from a single foldback structure, which generated eight phased 21-nt siRNAs. This is reminiscent of a newly arising miRNA derived from duplication of progenitor genes [3], [4]. Our results provide data identifying distinct populations of small RNAs, including phased small RNAs, in mature grain to facilitate characterization of small regulatory RNA expression in monocot species.
Introduction
Rice is one of the world's most important food crops, as it is produced in over 100 countries and is a staple food for half of the world's population (OECD, 2004).In addition to the importance of rice as a food staple, its genetic synteny, ease of transformation and an assembled genome [5] make it a model system for the study of cereal grasses.The importance of small interfering RNAs (siRNAs) and microRNAs (miRNAs) in plant developmental regulation have made the investigation of small RNA populations an important aspect in understanding the regulation of higher plant genomes and processes.Classes of small RNAs including miRNAs, ta-siRNAs, and nat-siRNAs regulate important developmental and physiological processes in land plants and most regulatory small RNAs and target genes are conserved among higher plants [1,2,6].In general, these classes of regulatory RNAs suppress gene expression by inhibition of translation or destabilization of target mRNAs in trans.miRNAs and ta-siRNAs are derived from distinct transcriptional units that either form internal foldback structures or recruit a specific RNA-dependent RNA polymerase (RdRP) RDR6.Additional endogenous siRNAs often associated with repetitive elements have also been characterized that are processed from long double stranded RNA (dsRNA), and function to silence transcription in cis through modification of the chromatin state.
As in-depth annotation and functional gene network relationships are developed for rice, the analysis of small RNA expression will facilitate a deeper understanding of these relationships.An extensive and diverse population of small RNAs have been identified in Arabidopsis and rice utilizing high throughput sequencing methods [3,[7][8][9][10][11][12][13][14], however only developing or stressed tissues were evaluated [15][16][17].In an attempt to gain a broader understanding of gene expression in rice grain, we characterized small RNA populations from Oryza sativa spp.japonica cv.Nipponbare utilizing a deep sequencing approach.As approximately 70% of all human food consumption is derived from seeds [18] it is important to understand the role of small RNAs in seed development given the known roles of miRNAs in organ identity, morphogenesis, and polarity in actively growing tissues.Seed development is a highly regulated and coordinated process involving deposition of seed storage reserves, novel morphogenic events to define the embryo and endosperm, and maturation drying and quiescence.Grain in the dormant state is typically characterized by little or no active translation, given this and the known roles of miRNAs in developing tissues only actively growing tissues have previously been analyzed in depth.The regulatory role of small RNAs in repression of gene expression could potentially provide an important mechanism in establishing or maintaining the dormant state in grain or altering storage reserves.In this study, we have initiated the sequencing of the small RNA population from rice grain and seedlings to determine the abundance and role of miRNAs and other small RNAs in both dormant and growing tissues.
Deep sequencing of rice grain and seedling small RNA populations
Small RNA libraries were constructed from three pools of mature, dormant rice grain and three-week post germination seedlings utilizing RNA-adapter mediated ligation [19,20].Each of the four libraries were independently sequenced using highthroughput pyrosequencing [21].We obtained a total of 679,146 sequences from the three rice grain libraries and 257,394 from one rice seedling library (Table 1).Greater than 41% of these sequences represent unique reads.Grain and seedling small RNAs were similar in size distributions with distinct peaks at 21-and 24nt (Figure S1).Consistent with previous reports, the 21-nt small RNAs comprised many redundant reads, whereas the 24-nt class comprised primarily unique or low abundance reads.In seedling, greater than 30% of the ,21-nt redundant reads matched to known miRNAs.Despite a similar distribution of ,21-nt redundant sequences in grain, less than 5% could be accounted for by previously characterized miRNAs.The majority of these small RNAs were from non-repetitive elements, suggesting either many novel grain specific miRNAs or alternative siRNA generating loci such as nat-or ta-siRNAs.These data represent the first large scale sequencing of small RNAs from mature rice grain and an opportunity to assess the role of small RNAs in grain.
Conservation of small RNAs in plants and animals
To contribute to our understanding of small RNA function and conservation, we compared rice small RNA sequences to genomes of species representing important lineages throughout evolution.We found the highest conservation among plant species with a lesser percentage similarity among distantly related species.For example, there are 13,288 matches between the rice grain small RNA and Arabidopsis thaliana genome and 621 between the rice grain small RNA and Drosophila melanogaster genome.As previously reported [22] there is little conservation of small RNAs between rice and any animal species tested (Table 1).Small RNAs conserved among distantly related non-animal species were generally low abundance sequences, whereas the highly abundant miRNAs were conserved among plants.While more small RNAs, primarily miRNAs, are conserved among plants, many small RNAs were also found with perfect homology to sequences in human and other animal transcriptomes (Table 1).The majority of rice small RNAs matched to unannotated intergenic regions, with only 29% matching to the rice transcriptome, including 12% to the complement.Sequences unmatched to the rice genome were likely a result of sequencing errors or from regions of the rice genome that remain unmapped.Here, we have not removed small RNAs matching to rRNAs, tRNAs, or sn/snoRNAs from the analysis as these are unlikely to be mRNA degradation products due to the requirement for a 59 phosphate group in the cloning protocol.However, because these small RNAs could have multiple genomic origins, we have normalized to genome copy number to help remove this potential error.We have also chosen to keep these small RNAs in the analysis group, as interesting new classes of small RNAs from similar repeat regions have been reported (piwi-RNAs) and from alternative size classes [23].
Categorization of small RNA populations through hotspot determination The three replicates of rice grain libraries allow us to estimate expression and assess the quality and coverage of each sequencing reaction.Amongst the three replicates, we obtained 285,873 unique sequences, however little overlap was observed among these sequences in independent libraries.We found only ,1.4% of unique sequences were shared among all three replicates, and only ,5.9% between at least two replicates (Figure 1a).This illustrates that despite the high number of sequences obtained; the endogenous rice grain small RNA population is far greater than captured by our sequencing effort.
While miRNAs represent a significant portion of the total number of sequences obtained, in comparison they represent a minor component of the unique small RNA population.In contrast, small RNAs from repetitive loci are disperse and may account for the low overlap among unique sequences from replicate libraries [10].To determine if clustering small RNAs from specific loci would result in greater overlap of replicates, we calculated hotspots individually for each grain library replicate.Hotspots were identified by dividing the genome into 1-kb bins and small RNA abundance was calculated for each strand of the genome separately.We calculated a P value for each bin and defined hotspots as P,E-50 (see Methods S1).Of the 551,274 sequences from rice grain that mapped to the genome, 53.2% were captured in 680 clusters representing less than 0.1% of the rice genome.In contrast to unique sequences, 67% of hotspots were represented in more than one replicate (Figure 1).These results are similar to reports in Arabidopsis, in which little similarity was observed among unique small RNA sequences yet a greater overlap was observed among clustered small RNAs [10].For further categorization of small RNA populations, we utilized hotspot cluster determination to facilitate identification of abundant, unique miRNA loci, and loci from which many low abundance sequences are derived.
We examined the distribution of small RNAs by plotting abundance across the twelve rice chromosomes, calculated as transcripts per quarter million sequences (tpq) (Figure 2 and Figure S2).Regions of high small RNA expression around centromeres have been reported for Arabidopsis leaf tissue [10,13].In contrast, we generally did not observe a concentration of small RNAs in centromeric and pericentromeric regions using repeat-normalized expression.Rather, expression was localized to specific regions across the rice chromosomes, indicating the majority of the rice small RNA population derives from a small number of highly expressed loci.In many cases, the regions of highest localized expression mapped to known miRNA loci.Grain and seedling libraries showed distinct, tissue-specific expression patterns, with more miRNA-associated hotspots in seedlings.The higher number of miRNA hotspots is consistent with known roles for miRNAs in regulating development in plants [1].
Small RNAs from hotspots of seedling and grain libraries were categorized into repetitive elements, miRNAs, miRNA targets, sparse hotspots, phased hotspots such as ta-siRNAs, and disperse hotspots (Figure 3a).Phased hotspots were defined as those regions from which small RNAs were generated in a 21-nt register, while disperse hotspots were those regions of the genome from which many small RNAs were generated with no particular phasing or clustering.The reasons for evaluating hotspots were twofold; first they represent the population of small RNAs most likely involved in modulating gene expression in these tissues, and second the potential for incorrectly characterizing small RNAs decreases when low abundance sequences are not considered [1].Furthermore, in the absence of a panel of RNAi pathway gene knockouts, such as is available in Arabidopsis, hotspot characterization helps to elucidate the functional classification of a given small RNA because the neighboring small RNAs are included in the analysis.For example, miRNA genes show two characteristic peaks of abundance, often located within a short distance of each other, with the miRNA sequence several times more abundant than the partially complementary miRNA* sequence.In this way, important regulatory elements such as miRNA and trans-acting siRNA loci can be found, although potentially at the expense of a number of false negatives due to the stringent requirements on abundance.
The unique small RNA sequences from grain and seedling were significantly different, which could reflect either lack of sequence coverage to fully capture the complexity of the small RNA population, or regions of small RNA expression unique to each tissue.To subvert this obstacle we evaluated genomic regions by categorizing 1 Kb regions (as described above), rather than individual small RNA sequences.This changed the overlap of the three rice grain replicates to 67% for 1 Kb bins over 1.4% overlap that was found when unique sequences were compared (Figure 1).For analysis of the three rice grain library replicates, we used the average normalized tpq.Using a cutoff of P,E-50 for hotspots resulted in a combined total of 498 small RNA clusters in grain and seedling (Table S2).The largest class of hotspots contained repetitive elements, such as transposons, ribosomal DNA, and tRNA genes.The second largest class was from MIRNA genes for which we captured 93 loci [24]; thirteen miRNA targets were also captured by similarity to truncated miRNAs.Consistent with the role of miRNAs in development and morphology, we detected more miRNA loci hotspots specific to seedlings (39) when compared to grain (4).The remaining hotspots were separated into sparse clusters associated with one or two specific peaks, phased, and disperse hotspots.Each of these categories was then analyzed for novel miRNAs, ta-siRNAs, and siRNA regulated genes, respectively.
Identification of novel miRNAs by hotspot determination
Discovery efforts in Arabidopsis and rice have used abundant clusters containing only a small number of unique siRNAs as a starting point for prediction of novel miRNAs [3,10,12].Following this rationale, we utilized sparse hotspots to search for new miRNAs.We propose that with extensive sequencing the propensity for false positive miRNA identification increases dramatically if low abundance siRNAs are considered.For example, using criteria derived from known miRNAs [1,25] we predicted 840 candidate miRNAs from 1072 loci that form miRNA-like precursors.Potentially, many of these miRNA candidates represent species specific miRNA-like genes arising from recent duplication of progenitor sequences [3,4].In contrast, among sparse hotspots from grain and seedlings only five loci were found to contain miRNA-like foldback structures.While many of the 1072 miRNA-like loci are likely to represent bona fide miRNAs, we chose to only analyze the five corresponding to sparse small RNA hotspots.Expression of putative miRNAs from sparse hotspots was confirmed by Northern blot analysis (Figure 3b).All miRNAs except miRMON18, also detected in maize, were specific to rice.Similarly, the vast majority of recently identified miRNAs in Arabidopsis were non-conserved [3,12].Based on criteria for prediction of known miRNAs [1,25] from which we predicted 840 potential miRNAs, rice also appears to have a diverse set of non-conserved miRNAs, which appear as low abundance sequences.
Prediction of target genes for rice miRNAs
We predicted targets of the five new miRNAs using a scoring system that penalizes weak pairing to the 59 end of the miRNA to reduce false positive predictions [26].The results of this analysis were compared to small RNA hotspot clusters within predicted protein coding genes, including those flagged for short 18-nt matches.We expect to find miRNA targets in small RNA hotspots because many miRNAs are highly similar to their target sequences [27] and due to the possibility of transitivity [28].We found thirteen previously predicted miRNA targets among clusters, supporting this hypothesis (Table S2).In addition, we were able to identify the PPR repeat target (Os10g35436) of miRMON13 utilizing hotspot classification, which was later validated by 59 RACE assay.
Predicted targets of new miRNAs included an SPX-domain (miRMON18), PPR repeats (miRMON13), and CACTA transposon (miRMON22) gene families (Table S4).A non-conserved miRNA, miR827, predicted to target SPX genes in Arabidopsis [3,12] varies by two nucleotides from miRMON18.Targets Os02g45520 and Os04g48390 are most similar to a subclade of Arabidopsis SPX genes predicted to be targets of miR827 [29].Targets, Os02g45520 and Os02g48390 (SPX genes) for miR-MON18 and Os10g35436 (PPR) for miRMON13 were tested using a standard 59 RACE analysis.Cleavage events at the predicted site, 10 nucleotides from the 59 end of the small RNA, were detected for all three predicted targets (Figure 3c).Three week old seedling tissue was used for validation of targets.Due to the similarity between the mature miRNA and predicted targets, miRMON18 is most likely a related family member of miR827 found in monocot accessions.miRMON13 was predicted to target seven PPR genes from an orthologous clade of PPR genes that have spurned miRNAs and ta-siRNAs in Arabidopsis [3,26,27,30].Target predictions for the remaining three novel miRNAs (miRMON24, 25, and 27) were performed but consisted largely of unannotated regions of the genome.This could be due to a lack of transcribed data for rice grain or due to the rapid evolution of miRNA genes [3,4].There is also no public microarray data for rice grain and low coverage with EST data for this tissue.This makes evaluating the possible functions of these genes difficult.Genes predicted to be regulated by abundant or grain-specific small RNAs are, however good candidates for further functional characterization and evaluation of their involvement in grain development.
Small RNA generating hotspots from rice grain include phased siRNAs and protein coding genes
In our small RNA libraries, miRNAs accounted for 33.0% of all seedling small RNAs, in comparison to only 2.4% from grain (Figure S1).Despite the difference in miRNA abundance, 21-nt small RNAs were essentially equivalent in abundance from both tissues (Figure S1).We hypothesize that alternative regulatory siRNAs, including ta-siRNAs and naturally occurring antisense siRNAs, (nat-siRNA) might account for the discrepancy in miRNA abundance between grain and seedling 21-nt siRNAs [31,32].To test this hypothesis, we determined phased hotspots by calculating phase uniqueness (p-score) based on ratio of abundance of inphase siRNAs to out-of-phase siRNAs and fullness of each phase (see Methods S1).A single TAS3-like gene on Chromosome 3 was identified among phased hotspots (Table S2) [26].Additional phased siRNAs from three hotspots on chromosomes 6 and 12 together accounted for ,22% of the 21-nt small RNAs from rice grain.Unlike ta-siRNAs and nat-siRNAs, phased siRNAs from both loci were exclusively from one strand, suggesting an alternative origin.
We chose to examine the locus on Chromosome 6 in detail, due to it generating siRNAs with the highest abundance, which were specific to rice grain.The phased siRNAs were distributed between two tightly clustered regions (Figure 4a).We cloned an 880-nt precursor that mapped to the Os6g21900 locus (see Data S1).Exons 2 and 3 form a long, imperfect foldback structure containing eight 21-nt phased siRNA duplexes, separated by an ,1.2 kB intron (Figure 4b).No small RNAs were found that match exon I, nor were we able to identify a miRNA target sequence that could initiate ta-siRNA phasing [26,33,34].RNA gel blot analysis of the most abundant phased siRNA confirmed expression specific to rice grain (Figure 4c).We were unable to detect either of the two most abundant phased siRNAs (P7 and P4) in seedling or other plant species tested.To determine whether the phased siRNAs are expressed in other tissues, we also compared the precursor to public EST collections and the rice MPSS database.All matched ESTs and MPSS signatures were from rice grain or six days post germination libraries.In addition, the passenger strand was also cloned for all phased 21-nt siRNAs (except P2), confirming siRNA biogenesis from the foldback structure.
We predicted that phased siRNAs would be processed by DCL4 due to the phased nature of 21-nt siRNAs, and would not require miRNA pathways because of the presence of a foldback structure in the Os06g21900 phased siRNA precursor RNA.To test this hypothesis, the full length cDNA from the Os06g21900 locus was transformed into Arabidopsis thaliana Col-0 ecotype, and mutants dcl1-7 and dcl4-1 diagnostic for miRNAs and ta-siRNAs, respectively.We detected P5 and P7 phased siRNAs in Col-0 and dcl1-7 (Figure 4d).The abundant 21-nt phased siRNAs were absent in dcl4-1, replaced by faint 24-nt siRNAs, similar to what was observed for ta-siR255.Foldback structures with DCL4dependent phased processing were observed for ASRP1729/ miR822 and miR839, although in both cases the phasing was less precise than that seen for Os06g21900 [4,12].Furthermore recent reports [35] of DCL4-dependent generation of 21-nt small RNAs in rice suggest DCL4 plays a broader role than in Arabidopsis development and support our hypothesis that DCL4 can process phased small RNAs independent of an RdRP.Analysis of our data revealed an inverted repeat on chromosome 12 as a hotspot for generation of phased siRNAs.Small RNAs generated from this locus were identified in two previous reports [12,35] as being dependent on DCL4.Together, these results confirm that phased siRNAs are processed through a distinctly different pathway than canonical miRNAs.
In Arabidopsis, miRNA-like genes have been characterized in which multiple phased ,21-nt siRNAs are processed from a single foldback structure derived from duplication of progenitor genes [4,12].To test the possibility that this locus is a recently emergent miRNA-like gene, we compared the halves of the foldback sequences to protein coding genes in rice.Consistent with this hypothesis, we found significant similarity (E,10 27 ) to protondependent oligopeptide transporter (POT) genes (Figure S3).
Phased siRNAs are reminiscent of miR163 in Arabidopsis in which two phases of siRNAs were sequenced, although only miR163 accumulates significantly [4,36].Unlike TAS loci in Arabidopsis that require RDR6 to generate dsRNA, phased siRNAs derive from an extended imperfect foldback structure.Given the high abundance of multiple, phased siRNAs from a single foldback precursor transcript and processing, we believe that these phased siRNAs represent a novel regulatory class present in rice grain.
In addition to the phased siRNAs found on Chromosome 6, we inspected three abundant disperse hotspots on Chromosome 1 from the grain libraries that map to HAP5 transcripts (Table S2).In contrast to phased siRNAs, individual siRNAs were in low abundance, confined to the transcribed region, and randomly distributed across the transcript (Figure 5).Based on similarity comparisons and secondary structure predictions, we found no evidence of inverted repeats at any of the three HAP5 loci on Chr 1 ((Os01g01290, Os01g24460, Os01g39850) (Table S2)).Upstream of each HAP5 gene there is an associated siRNA region, with weak phasing (p-score 0.691).Greater than 82% of the siRNAs from the three HAP5 hotspots were 21 to 22-nt, whereas siRNAs typically associated with disperse clusters are typically 24nt [37][38][39].A likely explanation for siRNA production would be bi-directional transcription forming nat-siRNAs [31].We searched the MPSS database for signatures, and found signatures for expression only in the sense orientation in 6-day old seedlings [40].There is little data available for mature rice grain, so it is possible that antisense transcription is responsible, although we favor a model in which an RdRP is recruited to the three HAP5 transcripts derived from Chromosome 1.
Discussion
While our efforts in deep sequencing of rice grain and seedling tissue have revealed a number of unique elements in the regulatory small RNA family of rice, the functions of many of these remain to be elucidated.We have developed a method of hotspot determination to uniquely validate the class of small RNA in the absence of readily available mutants, which would lend themselves to characterization of small RNAs.Characterization in this manner allows for more rigorous analysis of small RNA data from sequenced genomes, despite the fact that it may produce false negatives, as indicated by the predicted 1072 pre-miRNAs based solely on foldback data.Using sparse hotspots as the defining criteria for miRNAs limited this set to five novel miRNAs, which were validated by Northern blot and predicted targets for two of the five novel miRNAs were validated by 59 RACE analysis.Furthermore, we compared miRNAs from our large prediction set to those reported recently from wheat [41] and found 3 possible homologs with 4 to 5 nucleotide differences, but did not find any identical small RNAs.We verified 101 MIRNA genes in 27 families with high expression in mature grain and 3 week old seedlings by their presence in hotspot clusters.
Target prediction for the 5 novel miRNAs, revealed that miRMON13 targets a clade of PPR genes.The PPR family has expanded considerably in plants, comprising .450genes in Arabidopsis and .600 in rice [42,43].Although it is not surprising to see miRNA genes derived from a rapidly expanding gene family, the preference to maintain suppression elements specific to a specific clade of PPR genes both in Arabidopsis and rice is curious.One possible model is that maintenance of proper gene dosage from this PPR clade is critical to plant fertility.Fertility restoration genes (Rf) have been found to contain PPR motifs, and mutations lead to cytoplasmic male sterility.Rapid evolution of miRNA and ta-siRNA genes from PPR progenitor genes may offer a mechanism to suppress aberrant or excess transcripts to prevent reduced fertility [3,12].
In Arabidopsis, small RNAs principally of the 24-nt class, accumulated in pericentromeric regions [10,13] whereas we observed a very different distribution in rice.Small RNA hotspots were more randomly distributed across the chromosomes, with 21nt loci including miRNA and phased siRNA genes more readily apparent.This may be reflected in the tissues used for analysis, as epigenetic states should be established in mature grain and therefore accumulation of heterochromatin associated siRNAs is not required to the extent it would be in growing floral or vegetative tissues.Small RNA hotspot clusters accounted for over 53% and 49% of the total small RNA abundance in grain and seedling, respectively.In both tissues, clusters were confined to a very small proportion of the rice genome indicative of unique small RNA generating genes, many with unknown function.Many of these hotspots were not attributable to novel miRNAs or other characterized classes of small RNA generating loci; characterization of these genes is likely to reveal interesting and new mechanisms for gene regulation in plants.
A striking finding was the abundance of 21-nt siRNAs specific to rice grain that were not attributable to miRNAs.We identified hotspots expressing extraordinarily abundant 21-nt phased siRNAs (Os06g21900) or disperse ,21-nt siRNAs (Chr. 1 HAP5 genes, Os01g01290) specific to dormant grain.Similar miRNAlike genes have been described in Arabidopsis [4,12], from which multiple, phased 21-nt siRNAs dependent primarily on DCL4 are expressed.In contrast, the phased siRNA genes in rice showed a far stricter phasing (out-of-phase siRNAs did not accumulate), and expression of nearly all phases were equivalent to expression of conserved miRNAs.The phased siRNA precursor at Os06g21900 displays similarity to POT family nitrate transporters, consistent with a model in which miRNA genes derive from inverted duplication of progenitor genes [3,4,12].
In addition to abundant grain specific phased siRNAs, three HAP5 loci were found that generate an extraordinary number of 21-nt siRNAs across the length of the transcript.These transcripts could represent nat-siRNA loci, although we were unable to identify any antisense cDNAs or MPSS data indicating antisense PolII transcription.This could be due to the lack of available transcript/cDNA from mature rice grain.There is no evidence for inverted repeats from the transcribed regions at these loci, indicating that these (siRNAs) are unlikely to be DCL4 products Together with HAP2 and HAP3 proteins, HAP5 forms a ternary CCAAT transcription factor complex.This complex has known roles in controlling expression of seed storage protein genes in Arabidopsis [44,45].Our finding that, a subset of HAP5 genes generate such a large number of siRNAs in mature seed suggest post-transcriptional regulation of the HAP complex, potentially involved in modulating seed storage proteins.From the scope of all small RNA space, we observed that the reverse complementary match to the rice transcriptome is 40% over-represented for the unique grain small RNAs versus those from the seedling (Table 1. 13% vs. 9%).This over-representation of potential transcription repressors in grain might facilitate maintaining a general expression suppression state in the dormant grain stage.Further investigation into the roles of these siRNAs could lend insight into biology of seeds and the roles of these distinct populations of siRNAs.
All data will be deposited into NCBI GEO upon publication of this manuscript.
Plant materials and RNA isolation
Oryza sativa spp.japonica cv.Nipponbare was used for small RNA library construction and Northern blot analysis.Plant material for RNA isolation was obtained from dehulled mature grain and 23 day old seedlings planted from the same seed lot and grown in a greenhouse under non-stress conditions.RNA was isolated for Northern analysis from Zea mays var.LH244 leaves, roots (both at stage V6), and 32-39 day after pollination kernels; Glycine max var.A3525 trifoliate leaves, roots, and S3 to S5 seed; and Arabidopsis thaliana ecotype Columbia-0.Seed for dcl1-7 (CS3089) and dcl4-1 (GK_106G05) were obtained from the Arabidopsis Biological Resource Center.A. thaliana was transformed with a CaMV 35S binary expression vector using the floral dip method, and events containing a single T-DNA insertion selected.Total RNA was isolated from plant tissues using TRIzol reagent (Invitrogen).
Construction and computational analysis of small RNA libraries
Small RNA library construction was performed as described previously [20].Three micrograms of each small RNA cDNA library was sent to 454 Life Sciences for sequencing.Small RNA inserts in the raw sequences were parsed by locating the 59 and 39 adaptors to obtain strand and position information by Perl scripts.Small RNAs of 18-26 nt were compared with genomes using BLAST (Table S1 for databases used).To compare small RNA expression from different libraries, abundance was calculated as transcripts per quarter million sequences (tpq) [8,10].Normalized abundance was determined as tpq divided by the number of perfect matches to the rice genome.To determine small RNA expression hotspots in the rice genome, each chromosome was divided into 1-kb bins and small RNAs were assigned to specific bins.In cases where small RNAs were divided between bins, the 59 end of the small RNA was used to determine in which bin it should be placed.Normalized small RNA abundances with each bin were summed to give the total abundance of the bin.
Statistical analyses
See Methods S1 for detailed description of statistical methods used in analysis.
Small RNA blotting and hybridization
Five micrograms of total RNA was separated on a 17% PAGEurea gel and blotted as previously described [4].Complementary oligonucleotide probes specific to small RNA sequences were end labeled with c 32 P-ATP using OptiKinase (USB Corporation) [46].Probe sequences are listed in Table S3.LNA probes were ordered for miRMON13 and miRMON27.Oligo probes for ta-siR255, miR173, and phased siRNAs in Arabidopsis were end-labeled with digoxygenin according to the manufacturer's recommendation (Roche).Probe sequences are listed in Table S3.
Target Validation
Target Validation using a 59 RACE assay was done with the GeneRacer Kit (Invitrogen).Poly(A) mRNA was isolated from 3 week old rice seedling tissue, ligated to the adaptor, converted to cDNA and PCR amplified, using gene-specific and adaptor-specific primers.PCR products were gel purified, cloned and sequenced.Successfully validated miRNA target gene-specific primer sequences are shown in Table S3.
Supporting Information
Figure S1 (a) The majority of small RNAs matched to miRNAs, rRNA repeats, and transposable elements.Annotated miRNAs from miRBase accounted for 33.0% of seedling small RNAs and 2.4% from grain.(b) The size distribution of small RNA populations from grain and seedling were similar, with 21-nt small RNAs as the most abundant class, followed by the 24-nt class.Found at: doi:10.1371/journal.pone.0002871.s001(0.08 MB AI) Figure S2 Small RNA expression from grain and seedling libraries, calculated as transcripts per quarter million (tpq) across the twelve rice chromosomes.Abundance was normalized by dividing small RNA tpq by the number of perfect matches in the rice genome.For rice grain, the average tpq of the three libraries was used.Centromere position is indicated by a circle and bins containing small RNA hotspots are indicated in red.Each bar represents a 100 kb bin.A ceiling of 500 tpq was used.Location of phased siRNAs are indicated by an asterisk (*).The majority of rice or monocot specific miRNAs are represented by a single locus.Found at: doi:10.1371/journal.pone.0002871.s002(0.55 MB PDF) Figure S3 Alignment of the foldback structure regions of exon 3 and the reverse complement of exon 2 from Os06g21900 with putative progenitor genes.Conserved positions are highlighted as follows: 7/7 red, 5/7 or 6/7 orange, 4/7 yellow, 3/7 green (including at least one conserved base in Os06g21900 foldback).Found at: doi:10.1371/journal.pone.0002871.s003(0.09 MB PDF)
Figure 1 .
Figure 1.Overlap among small RNA sequences and hot spots.Venn diagrams illustrate overlap among rice grain replicate libraries (a) unique small RNAs or (b) 1-kb clustered hot spots (P,1E-50).The total number of unique sequences or hotspots from each replicate is in parentheses.doi:10.1371/journal.pone.0002871.g001
Figure 2 .
Figure 2. Small RNA expression from rice grain and seedling libraries across Chromosome 1 and Chromosome 6. Abundance was normalized by dividing the small RNA tpq by the number of perfect matches in the rice genome.For rice grain, the average tpq of the three libraries was used.Centromere position is indicated by a circle and bins containing small RNA hotspots are indicated in red.Each bar represents a 100 kb bin.A ceiling of 500 transcripts per quarter million (tpq) was used.doi:10.1371/journal.pone.0002871.g002
Figure 3 .
Figure 3. Characterization of rice hotspot small RNAs.(a) Classification of small RNA hotspots in rice grain and seedling small RNA libraries.Sparse hotspots contain one or two major peaks; disperse hotspots have more than two siRNA peaks; phased hotspots had a p-score .0.87.(b) Low molecular weight RNA blot analysis of rice miRNAs identified from hotspots in rice, maize, and soy.(c) Validation of predicted targets for 3 novel miRNAs.Positions of the dominant 59 RACE products (sequences with 59 ends at position/total sequences for 59 ends) are indicated.Bolded nucleotide indicates predicted cleavage site.doi:10.1371/journal.pone.0002871.g003
Figure 4 .
Figure 4. Phased siRNAs from the Os6g21900 locus.(a) Graph of 59 position of siRNAs from the plus strand precursor RNA.Expression was capped at 6000 tpq.The abundant phases (P1-P8) and passenger strand siRNAs (P1*-P8*) are indicated.The locations of exons and introns are shown along the rice genomic sequence.(b) Predicted foldback structure formed by exons 2 and 3. Phasing is indicated by brackets and the tpq for each 21-nt phase is shown.(c) Low molecular weight RNA blot analysis of P7 siRNA expression.(d) Low molecular weight RNA blot of phased siRNAs from positions P5 and P7 in A. thaliana Col-0, dcl1-7, and dcl4-1.Expression of miR173 and ta-siRNA255 are shown as controls for canonical miRNAs and ta-siRNAs.doi:10.1371/journal.pone.0002871.g004
Figure 5 .
Figure 5. Disperse siRNAs associated with HAP5 genes.(a) Small RNA tpq is plotted as in Figure 4, with plus strand siRNAs above the x-axis, and minus strand below.Expression was capped at 25 tpq.(b) Low molecular weight RNA blot analysis of single siRNA.doi:10.1371/journal.pone.0002871.g005
Table 1 .
Genome and transcriptome matches to rice small RNA sequences.
a Defined as perfect match to the entire small RNA.b Repeats were masked by RepeatMasker and Tandem Repeats Finder (with period of 12 or less).c Only complementary matches were considered except noted.doi:10.1371/journal.pone.0002871.t001 | 7,645.4 | 2008-08-06T00:00:00.000 | [
"Biology"
] |
Search for the radiative transition χ c 1 (3872) → γψ 2 (3823)
Using 9.0 fb − 1 of e + e − collision data collected at center-of-mass energies from 4.178 to 4.278
Although tremendous effort has been made from both the experimental and theoretical sides, the interpretation of the χ c1 (3872) remains inconclusive.Due to the proximity of its mass to the D * 0 D0 + c.c. mass threshold, it is conjectured to have a large D * 0 D0 + c.c. molecular component [14,15].Indeed, some theoretical models consider it to be a mixture of a conventional 2 3 P 1 charmonium state χ c1 (2P ) and a D * 0 D0 + c.c. molecule [16,17].
Measurements of new χ c1 (3872) decay modes can help to improve our understanding of its internal structure.Ref. [18] extracted the absolute branching fractions of the known χ c1 (3872) decays by performing a global fit of the absolute branching fraction of the B + → χ c1 (3872)K + channel measured by BaBar [19] together with information from other experiments.The fraction of χ c1 (3872) decays not observed in experiments is estimated to be 31.9+18.1 −31.5 %.The work is carried out by assuming the χ c1 (3872) has universal properties in different production and decay mechanisms.Meanwhile, Ref. [20] also reported the branching fractions with consideration of the threshold effect of D * 0 D0 + c.c. and a possible bound state below the threshold or a virtual state in B + → χ c1 (3872)K + decay.If the χ c1 (3872) contains a component of the excited spin-triplet state χ c1 (2P ), then the radiative decay χ c1 (3872) → γψ 2 (3823) could happen naturally via a E1 transition [21], where the ψ 2 (3823) is considered as the 1 3 D 2 charmonium state.The BESIII experiment has reported the observation of e + e − → γχ c1 (3872) at center-of-mass energies √ s = 4.178 − 4.278 GeV [22,23].Using the χ c1 (3872) signal produced in these data samples, we search for the radiative transition χ c1 (3872) → γψ 2 (3823), where the ψ 2 (3823) is reconstructed with the cascade decay ψ 2 (3823) → γχ c1 , χ c1 → γJ/ψ, J/ψ → ℓ + ℓ − (ℓ = e, µ).The branching fraction ratio of this decay relative to the well-established Many theoretical models predict the partial widths of the radiative transitions between different conventional charmonium states.The partial widths of χ c1 (2P ) → γψ(1 3 D 2 ) and ψ(1 3 D 2 ) → γχ c1 (1P ) are calculated with the non-relativistic (NR) potential model and the Godfrey-Isgur (GI) relativistic potential model [21].Recently, the partial width of ψ(1 3 D 2 ) → γχ c1 (1P ) was calculated with lattice QCD (LQCD) [24], and the total width of the ψ(1 3 D 2 ) was estimated according to the BESIII measurements and some phenomenological results.Combining these predictions with the total width of the χ c1 (3872), Γ χc1(3872) = 1.19 ± 0.21 MeV, we calculated the theoretical branching factions and then proceed to the ratio of branching fractions, [2], as listed in Table I.It is worth pointing out that the total width of the χ c1 (3872) measured in experiments is highly dependent on the parameterization of its lineshape.The value (1.19 ± 0.21 MeV) used here is from a global fit to the experimental measurements of the decay mode χ c1 (3872) → π + π − J/ψ which describe the χ c1 (3872) lineshape with a Breit-Wigner (BW) function.The decay χ c1 (3872) → D * 0 D0 + c.c., however, will distort the lineshape due to the proximity of its mass to the D * 0 D0 + c.c. threshold.LHCb studied the χ c1 (3872) lineshape with a Flatté model instead [25], and determined the full width at half maximum (FWHM) of the lineshape to be 0.22 +0.07+0.11−0.06−0.13MeV, which is much smaller than that obtained from the BW model.Recently, BESIII performed a coupledchannel analysis of the χ c1 (3872) lineshape and reported a FWHM of 0.44 +0.13+0.38 −0.35−0.25 MeV [26], consistent with the LHCb result.If the FWHM values provided by LHCb and BESIII are used to calculate R χc1(2P ) , the ratios shown in Table I will increase significantly.The experimental measurement of this ratio will help to determine whether the χ c1 (3872) is the conventional charmonium state, χ c1 (2P ).
II. BESIII DETECTOR AND DATA SETS
The BESIII detector [27] has an effective geometrical acceptance of 93% of 4π.A helium-based main drift chamber (MDC) immersed in a 1 T solenoidal magnetic field measures the momentum of charged particles with a resolution of 0.5% at 1 GeV/c as well as the specific energy loss (dE/dx) with a resolution better than 6%., by including as input values the partial decay widths Γ χ c1 (2P )→γψ(1 3 D 2 ) and Γ ψ(1 3 D 2 )→γχ c1 (1P ) predicted by the NR and GI models and LQCD, the total widths, Γ χ c1 (3872) and Γ ψ 2 (3823) , and the branching fraction B(χc1(3872) → π + π − J/ψ).The " − " means unavailable.The two values of the ratio for the LQCD case correspond to the results by taking the Γ χ c1 (2P )→γψ(1 3 D 2 ) width from the NR and GI models as input, respectively.
NR [21] GI [21] LQCD [24] Γ χ c1 (2P )→γψ( 1 A CsI(Tl) crystal electromagnetic calorimeter (EMC) is used to measure energies and positions of photons, where the energy resolution for a 1.0 GeV photon is about 2.5% in the barrel and 5.0% in the end caps.A plastic scintillator time-of-flight system (TOF), with a time resolution of 80 ps (110 ps) in the barrel (end cap), is used to identify the particles combined with the dE/dx information measured in the MDC.In addition, a multi-gap resistive plate chamber technology is used in the TOF end cap starting from 2015 to improve the time resolution to 60 ps [28]; the data sets in this work benefit from this improvement except for the data taken at √ s = 4.226 and 4.258 GeV.A muon system interleaved in the steel flux return of the magnet based on resistive plate chambers with 2 cm position resolution provides powerful information to separate muons from pions.
III. EVENT SELECTION AND RESULT
According to the decay chain of the signal process, e + e − → γχ c1 (3872), χ c1 (3872) → γψ 2 (3823), ψ 2 (3823) → γχ c1 , χ c1 → γJ/ψ, J/ψ → ℓ + ℓ − (ℓ = e, µ), the final state contains a lepton pair from the J/ψ decay and four radiative photons.For the leptons, each corresponding charged track is required to have its point of closest approach to the beam axis within 1 cm in the radial direction and within 10 cm along the beam direction, and to lie within the polar-angle coverage of the MDC, | cos θ| < 0.93, in the laboratory frame.We require exactly two good charged tracks in the candidate events.EMC information discriminates between the electrons and muons: electrons are required to deposit at least 0.8 GeV in the EMC, and the muons less than 0.4 GeV.Photons are reconstructed from isolated showers in the EMC, at least 10 degrees away from any charged track, with an energy deposit of at least 25 MeV in both the barrel (| cos θ| < 0.80) and the endcap (0.86 < | cos θ| < 0.92) regions.In order to suppress electronic noise unrelated to the event, the EMC time, t, of the photon candidate must be in the range 0 ≤ t ≤ 700 ns, consistent with collision events.We require at least four photons for each candidate event.
A four-constraint (4C) kinematic fit is applied to constrain the total four-momentum of the lepton pair and the four photons to that of the colliding beams, to suppress backgrounds and improve the resolution.For events with more than four photons, the combination with the best fit quality corresponding to the minimum fit chi-square, χ 2 4C , is retained.The J/ψ is reconstructed by requiring the invariant mass, M (ℓℓ), of the lepton pair to satisfy |M (ℓℓ) − m(J/ψ)| < 30 MeV/c 2 , where m(J/ψ) is the nominal J/ψ mass.The selection criteria are optimized by maximizing the punzi figure-of-merit S/( a 2 + √ B) [33], where the number of signal events (S) is determined with the signal MC sample, the background (B) is estimated with the inclusive MC, and the expected statistical significance (a) is set to be 3.The dominant background is from the process e + e − → π 0 π 0 J/ψ.After the J/ψ selection, we veto π 0 candidates by requiring that the invariant mass of all photon pairs is more than 15 MeV/c 2 away from the nominal π 0 mass.After these requirements, a seven-constraint (7C) kinematic fit with an additional three constraints on the masses of M (ℓℓ), M (γℓℓ), and M (γγℓℓ) to the nominal masses of J/ψ, χ c1 , and ψ 2 (3823), respectively, is applied to distinguish the radiative photon in each cascade decay.The best-fit combination with the minimum chi-square, χ 2 7C , is retained; χ 2 7C < 100 is also required to further suppress the combinatorial backgrounds.One possible peaking background is ψ 2 (3823) → γχ c2 , χ c2 → γJ/ψ, the contribution of which is estimated according to the measurement of the branching fraction ratio of ψ 2 (3823) → γχ c2 to ψ 2 (3823) → γχ c1 in Ref. [34].The ratio of the yields of ψ 2 (3823) → γχ c2 to ψ 2 (3823) → γχ c1,2 , is about 1.5%, which is taken into account as a source of systematic uncertainty.
Figure 1 shows the distribution of the invariant mass of the radiative photon and the ψ 2 (3823), M (γψ 2 (3823)) for the selected candidates, summed over all the energy points.No signal is observed in the χ c1 (3872) signal region in data.The three events around 3.93 GeV are very unlikely to be from the χ c2 (2P ) decays since no χ c2 (2P ) signal was observed in its more favourable radiative transition to ψ(2S) [9].After normalizing the MC samples according to the luminosity and cross section in data, the contributions of the e + e − → π 0 π 0 J/ψ process and of the other backgrounds, estimated with the inclusive MC sample, are also shown in Fig. 1.
The branching ratio R χc1(3872) is calculated as where N obs = 0 is the number of observed events from all data in the χ c1 (3872) signal region [3.855, 3.885] GeV/c 2 which covers around ±3σ of the signal shape according to the signal MC distributions, N sdb obs = 4 is the number of the observed events in the χ c1 (3872) sideband region [3.840, 3.855] and [3.885, 3.940] GeV/c 2 ; r, the background scaling factor from the sideband regions to the signal region, is 0.474 based on the inclusive MC sample (taking into account its systematic uncertainty; see Sec.IV); N π + π − J/ψ = 80.7 ± 9.0 is taken from the BESIII measurement [10]; the branching fraction B(χ c1 → γJ/ψ) = 0.343 ± 0.010 is quoted from the PDG [2]; ϵ γψ2(3823) is the efficiency for the signal process reconstruction, determined with the signal MC sample; and ϵ π + π − J/ψ is the efficiency of the process χ c1 (3872) → π + π − J/ψ [10].The efficiency ratio ϵ γψ2(3823) /ϵ π + π − J/ψ at each energy point is shown in Fig. 2, which is almost independent on the center-of-mass energy.The mean value with the standard deviation, ϵ γψ2(3823) /ϵ π + π − J/ψ = 0.433 ± 0.004, is used to calculate the R χc1(3872) value.The upper limit of R χc1(3872) at the 90% confidence level (C.L.) is computed with the TRolke program implemented in the ROOT framework [35] by assuming the background N sdb obs and the denominator of R χc1(3872) follow Poisson and Gaussian distributions, respectively, where the systematic uncertainties discussed in the following section is taken as the standard deviation of the Gaussian function to be considered in the upper limit.We obtain an upper limit of R χc1(3872) < 0.075 at the 90% C.L.
IV. SYSTEMATIC UNCERTAINTIES
Systematic uncertainties on R χc1(3872) arise mainly from the estimations of r, the possible peaking background of ψ 2 (3823) → γχ c2 → γγJ/ψ, N π + π − J/ψ , ϵ γψ2(3823) /ϵ π + π − J/ψ , and B(χ c1 → γJ/ψ).The background scaling factor r is determined from the inclusive MC samples including the process e + e − → π 0 π 0 J/ψ.We use a 1st or 2nd order polynomial function to fit the M (γψ 2 (3823)) distribution from the inclusive MC samples; the r value is calculated several times using the parameters from the fit and varying them within 1σ.The value r = 0.474 is chosen from the obtained val- ues since it provides the most conservative upper limit.The contribution of the potential peaking background of ψ 2 (3823) → γχ c2 → γγJ/ψ is estimated with the related measurements mentioned previously within one errors, and the result providing the most conservative R χc1(3872) upper limit is retained.Both statistical and systematic uncertainties of N π + π − J/ψ contribute as sources of systematic uncertainty, where the statistical part (11.2%) is obtained by assuming that N π + π − J/ψ follows a Poisson distribution, and the systematic part (4.1%) is obtained from Ref. [10] where the dominant contribution is from the parametrization of the χ c1 (3872) signal shape.The systematic uncertainty (2.9%) due to B(χ c1 → γJ/ψ) is taken from the PDG [2].The systematic uncertainty of ϵ γψ2(3823) /ϵ π + π − J/ψ comes mainly from the tracking (2.0%), the photon selection (3.0%), and the kinematic fit (2.2%) uncertainties, estimated with the control sample e + e − → π 0 π 0 J/ψ.The systematic uncertainty due to the π 0 veto is mainly caused by potential differences in the angular distributions of the radiative photon between the data and the signal MC sample, and it is estimated by changing the angular distribution of the radiative γ in χ c1 (3872) → γψ 2 (3823) to 1 ± cos 2 θ (from flat) in the generator model.The relative difference of 5.3% between the efficiencies obtained with the photon angular distributions of 1 − cos 2 θ and 1 + cos 2 θ is taken as the systematic uncertainty.
The systematic uncertainties are listed in Table III.The total systematic uncertainty is obtained by summing all systematic uncertainties in quadrature, assuming they are uncorrelated.
V. SUMMARY
In summary, we search for the radiative decay χ c1 (3872) → γψ 2 (3823) for the first time by using the e + e − collision data accumulated at √ s = 4.178 − 4.278 GeV with the BESIII detector.No signal is observed, and the upper limit on the branching fraction ratio R χc1(3872) is determined to be 0.075 at the 90% C.L.This upper limit is more than 1σ below the theoretical calculations of R χc1(3872) under the assumption that the χ c1 (3872) is the pure charmonium state χ c1 (2P ), listed in Table I, and much smaller than the predictions based on the FWHMs measured by LHCb and BESIII [25,26].Our result therefore indicates that the χ c1 (3872) is not a pure χ c1 (2P ) charmonium state.
a
Deceased b Also at the Moscow Institute of Physics and Technology, Moscow 141700, Russia c Also at the Novosibirsk State University, Novosibirsk, 630090, Russia d Also at the NRC "Kurchatov Institute", PNPI, 188300, Gatchina, Russia e Also at Goethe University Frankfurt, 60323 Frankfurt am Main, Germany f Also at Key Laboratory for Particle Physics, Astrophysics and Cosmology, Ministry of Education; Shanghai Key Laboratory for Particle Physics and Cosmology; Institute of Nuclear and Particle Physics, Shanghai 200240, People's Republic of China g Also at Key Laboratory of Nuclear Physics and Ion-beam Application (MOE) and Institute of Modern Physics, Fudan University, Shanghai 200443, People's Republic of China h Also at State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, People's Republic of China i Also at School of Physics and Electronics, Hunan University, Changsha 410082, China j Also at Guangdong Provincial Key Laboratory of Nuclear Science, Institute of Quantum Matter, South China Normal University, Guangzhou 510006, China k Also at MOE Frontiers Science Center for Rare Isotopes, Lanzhou University, Lanzhou 730000, People's Republic of China l Also at Lanzhou Center for Theoretical Physics, Lanzhou University, Lanzhou 730000, People's Republic of China m Also at the Department of Mathematical Sciences, IBA, Karachi 75270, Pakistan n Also at Ecole Polytechnique Federale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland o Also at Helmholtz Institute Mainz, Staudinger Weg 18, D-55099 Mainz, Germany Using 9.0 fb −1 of e + e − collision data collected at center-of-mass energies from 4.178 to 4.278
FIG. 1 .
FIG.1.Distribution of M (γψ2(3823)).The dots with error bars are data, the red histogram is the signal MC sample with arbitrary scale, the filled blue histogram is the inclusive MC sample without the process e + e − → π 0 π 0 J/ψ, and the green stacked histogram is the contribution from e + e − → π 0 π 0 J/ψ.
TABLE I .
The calculated values for R χ c1 (2P )
TABLE II .
The data sets and their integrated luminosity at each energy point.
TABLE III .
The relative systematic uncertainties on R χ c1 (3872) .Systematics on the sideband scaling ratio, r, are treated separately (see text). | 4,183.2 | 2024-05-13T00:00:00.000 | [
"Physics"
] |
Comparison of Supervised and Unsupervised Deep Learning Methods for Medical Image Synthesis between Computed Tomography and Magnetic Resonance Images
Cross-modality medical image synthesis between magnetic resonance (MR) images and computed tomography (CT) images has attracted increasing attention in many medical imaging area. Many deep learning methods have been used to generate pseudo-MR/CT images from counterpart modality images. In this study, we used U-Net and Cycle-Consistent Adversarial Networks (CycleGAN), which were typical networks of supervised and unsupervised deep learning methods, respectively, to transform MR/CT images to their counterpart modality. Experimental results show that synthetic images predicted by the proposed U-Net method got lower mean absolute error (MAE), higher structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) in both directions of CT/MR synthesis, especially in synthetic CT image generation. Though synthetic images by the U-Net method has less contrast information than those by the CycleGAN method, the pixel value profile tendency of the synthetic images by the U-Net method is closer to the ground truth images. This work demonstrated that supervised deep learning method outperforms unsupervised deep learning method in accuracy for medical tasks of MR/CT synthesis.
Introduction
Cross-modality medical image synthesis between magnetic resonance (MR) images and computed tomography (CT) images could benefit medical procedures in many ways. As a multiparameter imaging modality, magnetic resonance imaging (MRI) provides a wide range of image contrast mechanisms without ionizing radiation exposure, while CT images outperform MR images in acquisition time and resolution of bone structure. CT is also related with electron density which is critical for PET-CT attenuation correction and radiotherapy treatment planning [1]. Generating synthetic CT (sCT) images from MR images makes it possible to do MR-based attenuation correction in PET-MR system [2][3][4][5][6] and radiation dose calculation in MRI-guided radiotherapy planning [7][8][9]. Synthesizing MR images from CT images can enlarge the datasets for MR segmentation task and thus improve the accuracy of segmentation [10].
In recent years, there have been many efforts to work on medical image synthesis between MR and CT images. Among all these methods, deep learning method exhibited superior ability of learning a nonlinear mapping from one image domain to another image domain. It can be classified into two categories: supervised and unsupervised deep learning methods. Supervised deep learning methods required paired images for model training. In the MR/CT synthesis task, MR and CT images have to be wellregistered at first and then used as inputs and corresponding labels for the neural network model to learn an end-to-end mapping. Nie et al. [11] used three-dimensional paired MR/CT image patches to train a three-layer fully convolutional network for estimating CT images from MR images.
Other researchers [4,5,[12][13][14][15] have trained deeper network for MR-based CT image prediction. However, as for medical image dataset, it is not that easy to get paired MR and CT images. It may take a long time span to collect patients who are scanned by both MR and CT scanners. Registration of certain accuracy between MR and CT images are also necessary to make paired MR-CT dataset.
Unsupervised deep learning methods enabled the possibility of using unpaired images for image-to-image translation [16][17][18][19][20]. It was first proposed for natural image synthesis and now has been implemented by many researchers for medical image synthesis [10,[21][22][23][24]. Chartsias et al. [10] demonstrate the application of CycleGAN in synthesizing cardiac MR images from CT images, using MR and CT images of different patients. Nie et al. [21] synthesized MR images from CT images with a deep convolutional adversarial network. Since there are plenty of unpaired medical images, the available datasets could be easily enlarged.
Unlike natural images, accuracy is highly emphasized in medical images. In this paper, we aim to compare the accuracy of supervised and unsupervised learning-based image synthesis methods for pseudo-MR/CT generation tasks. Two typical networks of U-Net [25] and CycleGAN [17] were introduced as representatives of supervised and unsupervised learning methods, respectively. Mean absolute error (MAE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) of the synthetic results were calculated to evaluate their performance quantitatively. More detailed comparisons and discussions about the advantage and disadvantage of these methods are included in Results and Discussion.
Neural Network Models.
In our experiments of pseudo-MR/CT generation tasks, U-Net and CycleGAN were used as the typical representative network of supervised and unsupervised deep learning methods, respectively.
U-Net has made a great achievement in segmentation tasks [25][26][27][28][29]. The advantage of U-Net is that it could use very few images to make a good performance. In this study, we adapted U-Net to an end-to-end image synthesis task.
The basic architecture of U-Net consists of a contracting part to capture features and a symmetric expanding part to enable precise localization. As shown in Figure 1, we added LeakyReLU [30,31] as activation operation before convolution operation in the contracting part of the network. Activation operation of LeakyReLU was replaced with ReLU [32] in the expanding part. Batch normalization [33] was introduced to U-Net to enable faster and more stable training. In Figure 1, the number of channels is denoted on top of each of the convolution operation, and the size of feature maps is signed in the parentheses.
In the medical image synthesis task, input image and its corresponding label were fed to the proposed U-Net to train and learn an end-to-end nonlinear mapping between them. Figure 1 illustrated the MR-to-CT synthesis using U-Net architecture, which takes MR images as input and CT images as label to train a synthetic CT generating model. On the contrary, when we use CT images as input and MR images as labels, U-Net could be trained as a synthetic MR-predicting model. The loss function used in the proposed U-Net is CycleGAN [17] which is proposed by Zhu et al. could be seen as an updated version of generative adversarial networks (GAN) [16]. GAN methods can learn a nonlinear mapping from input image domain to target image domain by adversarial training. CycleGAN introduced the idea of cycle consistency to general GAN methods. Cycle consistency adds restriction that the generated pseudoimage in target domain should be able to be transformed back to the original input image.
We used the CycleGAN architecture from Zhu et al. [17] for our medical image synthesis task. It takes unpaired MR and CT images as inputs to learn nonlinear mappings between these two image modalities. As illustrated in Figure 2, the CycleGAN architecture has two cycles, forward cycle and backward cycle. The forward cycle consists of three networks: two generative networks of G and F and one discriminator of D CT . The backward cycle uses the same generative networks of F and G and a counterpart discriminator of D MR .
In the forward cycle, network G was used to generate synthetic CT (sCT) from input MR images, while network F generated synthetic MR (sMR) from network G-generated sCT images. Network D CT discriminates whether the generated sCT image is real CT or fake. The backward cycle works just the opposite way. Network F took CT images as input images and generated sMR; then, network G synthesized sCT from the F-generated sMR images. Network D MR was used to distinguish whether the sMR image is real MR or fake.
The adversarial losses of CycleGAN are as follows: The cycle-consistency loss consists of forward cycle loss L forward_cyc and the backward cycle loss L backward_cyc . It is represented as follows: BioMed Research International Then, we have the full objective as the below equation: where λ is the weight of the objectives of cycle consistency.
Cross-Modality MR/CT Image Synthesis and Evaluation.
We used PyTorch to implement the proposed U-Net and CycleGAN. Both the networks were trained for bidirectional image synthesis, which includes learning a MR-to-CT model for generating synthetic CT images from MR images and a CT-to-MR model for generating synthetic MR images from CT images. U-Net and CycleGAN used similar parameters for training nonlinear mapping models between MRI/CT images. Adam optimizer was adopted for both the networks. The batch size was set to 1. Both networks were trained for 200 epochs, with fixed learning rate for the first 100 epochs.
The learning rate decreased linearly to 0 for the following 100 epochs.
Whole 2D slices of axial medical images with size of 256 * 256 pixels were used as inputs. During the training process, the images would be padded to 286 * 286 pixels and then random cropped to 256 * 256 for data augmentation. While U-Net should utilize paired MR and CT datasets for training nonlinear mapping, CycleGAN can take use of unpaired MR and CT images as inputs for both the forward and backward cycles in training procedure. As for the Cycle-GAN method, we randomly shuffled the MR image input sequences and CT image input sequences in the paired datasets to make the input MR and CT slices unpaired. The MRI input sequence in unpaired datasets were not the same as that in paired datasets.
Three metrics were used to quantitatively characterize the accuracy of the prediction of synthetic images compared with the ground truth images. The mean absolute error (MAE) measures the discrepancies by voxels. Structural similarity index (SSIM) [34] quantifies the similarities in a whole These evaluation metrics are expressed as follows: where H and W are the height and width of the images, respectively. X is the ground truth images, and Y is the predicted synthetic images. μ x and μ y are the average values of ground truth images and synthetic images, respectively. σ 2 x and σ 2 y are the variance of ground truth images and synthetic images, respectively. σ xy represents the covariance of ground truth images and synthetic images. L denotes the dynamic range of the voxel values. c 1 and c 2 are two variables to stabilize the division with a weak denominator. Here, we take k 1 = 0:01 and k 2 = 0:03 by default.
In this experiment, CT images were resampled to a size of 256 * 256 (1 * 1 mm 2 ) by bicubic interpolation [35] to match the voxel size of MR images. Binary head masks were generated by the Otsu threshold method [36] for MR and CT images to remove unnecessary background information around the head region.
Since the head region is mainly a rigid construction of bone structure, we applied rigid registration to the MR and CT images to make paired MR/CT images for the proposed U-Net. CT images were set as a fixed volume. MR images were set as a moving volume to register with CT images by Elastix toolbox [37]. The paired datasets were randomly shuffled to make an unpaired dataset for CycleGAN. Figure 2: CycleGAN architecture for bidirection synthesis of MR and CT images. The forward cycle generated synthetic CT from input MR by G while F translate the synthetic CT back to the MR image domain. D CT discriminate whether the generated images is real or fake CT. The backward cycle generated synthetic MR from input CT by F while G translate the synthetic MR back to the CT image domain. D MR discriminate whether the generated images is real or fake MR. Two cycle-consistency loss was introduced to capture the intuition that the synthetic image should be translated back to the original image modality. 4 BioMed Research International In our medical image synthesis task, 28 patients with 4063 image pairs were randomly selected for model training. The remaining 6 patients with 846 image pairs were used for evaluation procedure.
Results and Discussion
The results of synthetic MR and synthetic CT images generated by U-Net and CycleGAN and their ground truth are showed in Figure 3. The first column is the input images, and the second column is ground truth images. The third column showed the generated synthetic images predicted from input images by the two networks. The difference map between synthetic images and ground truth images was calculated and showed in the fourth column.
The first two rows in Figure 3 are sCT images synthesized by U-Net and CycleGAN, respectively. For the task of synthesizing CT images from MR images, the soft tissue area is translated from high contrast to low contrast. It could be seen from the difference map images that the soft tissue area of synthetic CT images by both networks is well-translated with little error. The translation error mainly occurred in the bone area. Their difference map demonstrates that the sCT by CycleGAN synthesized more error than sCT by U-Net in the bone areas.
The third and fourth rows in Figure 3 are sMR images generated by U-Net and CycleGAN, respectively. It could be seen that sMR by CycleGAN seems more realistic for it has more complex contrast information than sMR by U-Net. However, the difference map images illustrated that the CycleGAN method generated much more error than U-Net does. The abundant image contrast information in sMR by CycleGAN may be false and unnecessary.
In synthesizing CT tasks, the difference between synthetic images and ground truth mainly occurs in the bone area. But in synthesizing MR tasks, the error is evenly distributed in the whole head region. It means synthesizing high contrast images of MR from low contrast image domain of CT is tougher than its reverse synthesizing direction.
To compare the image details, 1D profiles of pixel intensity were also plotted. Figure 4 shows the 1D profiles passing through the short red lines and long blue lines as indicated in In the profiles, the red curve indicates pixel intensities of ground truth CT or MR. The blue curve represented for U-Net and the green curve for CycleGAN. It could be clearly seen in Figure 4(a) that the blue curve is close to the red curve, while some of the peaks of the green curve deviated from the red curve to an opposite direction. It means that the tendency of 1D profiles in sCT by U-Net was closer to the ground truth CT, while the CycleGAN method tends to generate fake contrast information in sCT images.
The profile in Figure 4(b) shows that the blue curve vibrated less from the red curve. Some peaks of the green curve deviated more from the red curve. It could be seen in the close-up 1D profile that some peaks of the green curve are biased to the opposite from the red curve, while the tendency of the blue cure seems like a smoothened or flattened red curve. It means that the pixel value of sMR by U-Net was closer to the ground truth but may lack contrast details. The pixel value of sMR by CycleGAN exhibits more deviation from the ground truth along the profile whereas the tendency may be false or exaggerated.
The quantitative metrics have been calculated for comparison. Figure 5 shows the MAE of sCT and sMR for each of the 6 patients in the evaluation datasets and the average result. It is obvious that the U-Net method generated lower MAE either in sCT image generation or sMR image generation for all the patients. This also demonstrates the robust performance of the U-Net method in bidirection MR/CT image translation tasks. Figures 5(a) and 5(b) show that the deviations of the MAE between the U-Net and CycleGAN method for sMR images of all the 6 patients are not as significant as those for sCT images. In Figure 3, the difference map of sMR indicated that the main predicted errors are evenly distributed in the whole head region, while the main error of sCT mainly occurs mainly in the bone structure. This could be interpreted that generating MR images of high soft tissue contrast BioMed Research International from CT images of low soft tissue contrast is much complex than the inverse direction synthesis of generating CT from MR images. Table 1 shows the overall statistics of three quantitative metrics for sCT by both the U-Net and CycleGAN methods.
The SSIM values indicate that the sCT images by both methods have fairly high similarity with the ground truth CT images. The U-Net method outperformed the CycleGAN method with a much lower MAE of 65.36 HU, a higher SSIM of 0.972, and a higher PSNR of 28.84 dB. The average sCT MAE deviation between the two methods is nearly 30 HU. Table 2 shows the overall statistics of three quantitative metrics for sMR images by the U-Net method and Cycle-GAN method. The U-Net method outperformed the Cycle-GAN method with a lower MAE of 73.43 HU, a higher SSIM of 0.946, and a higher PSNR of 32.35 dB.
The qualitative and quantitative results demonstrate that the proposed U-Net, a typical supervised learning method, outperforms CycleGAN, a representative advanced unsupervised learning method, in synthesis accuracy of medical image translation task. Since medical images highly value accuracy for the purpose of disease diagnosing, clinical treatment, and therapeutic effect evaluation, the supervised learning method is more recommended in medical practice.
Nevertheless, the success of supervised learning cannot do without well-registered image pairs. The performance of the trained model also depends on the registration accuracy of the paired images. Unlike natural images, paired medical images are not that easy to get. It would take a long time span to collect enough patients who need to be scanned for both MR and CT images at the same time. It is well-known that big amount of datasets could greatly improve the performance of the deep learning method. Though it outperforms the unsupervised learning method, the limit of dataset vol-ume may constrain the further improvement of the supervised learning method in medical image synthesis tasks.
From the experiments discussed above, the image synthesis by using unsupervised learning methods still has a long way to go for practical application in clinic due to their relatively low accuracy. But still, the unsupervised learning method could benefit when there is lack of paired medical image datasets. The good news is that there are abundant easy-to-obtain retrospective unpaired MR and CT images for the unsupervised learning method to take advantage of. No registration is needed.
Our experiments show that when the same datasets were taken as inputs, the unsupervised learning method got inferior quality in the synthesis accuracy for medical image translation. But nonetheless, if the dataset is large enough, it could be expected that the performance of the unsupervised learning method would be improved to a certain acceptable extent in clinical practice.
Conclusions
Cross-modality medical image synthesis between MR and CT images could benefit a lot from the fast growing of deep learning methods. In this paper, we compared different deep learning-based image synthesis methods for pseudo-MR/CT generation, including the unsupervised learning method of CycleGAN and supervised learning methods of the proposed U-Net. Synthetic images produced by the CycleGAN method contain more but fake contrast information in the whole image scale. Though the proposed U-Net method blurred the generated pseudoimages, its pixel value profile tendency is basically close to the ground truth images. The quantitative results also indicate that the U-Net method outperformed the CycleGAN method, especially in synthesizing CT image task.
BioMed Research International
As accuracy is highly demanded in medical procedures, we recommend the supervised method such as the proposed U-Net in cross-modality medical image synthesis at present clinical practice.
Data Availability
The datasets of MR and CT images used to support the findings in this study are restricted by the Medical Ethics Committee of Shenzhen Second People's Hospital in order to protect patient privacy.
Conflicts of Interest
The authors declare that there is no conflict of interest. | 4,531.8 | 2020-11-05T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Monodromy inflation and an emergent mechanism for stabilising the cosmological constant
We show that a pair of field theory monodromies in which the shift symmetry is broken by small, well motivated deformations, naturally incorporates a mechanism for cancelling off radiative corrections to the cosmological constant. The lighter monodromy sector plays the role of inflation as well as providing a rigid degree of freedom that acts as a dynamical counterterm for the cosmological constant. The heavier monodromy sector includes a rigid dilaton that forces a global constraint on the system and the cancellation of vacuum energy loops occurs at low energies via the sequestering mechanism. This suggests that monodromy constructions in string theory could be adapted to incorporate mechanisms to stabilise the cosmological constant in their low energy descriptions.
INTRODUCTION
Perhaps the simplest, most calculable, models of early universe inflation are those with superplanckian field excursions [1,2]. Typically these give rise to large primordial tensor fluctuations that could be detected by forthcoming polarization maps of the cosmic microwave background (CMB). However, pushing the inflaton field to such large values is a challenge for model builders keen to protect slow roll from ultra-violet corrections to the theory. Within string theory, monodromy inflation offers a promising solution to this problem [3,4]. A field theory version of this has been developed in a series of recent papers [5][6][7][8][9] (see also [10,11]), whereby a four-form field strength has a bilinear mixing with a pseudo-scalar. Control of the effective inflaton potential stems from a U (1) gauge symmetry in the four-form sector, as well as a (discrete) shift symmetry for the axion.
A seemingly unrelated question is that of the cosmological constant, or equivalently, vacuum energy, which is radiatively unstable [12][13][14][15][16]. Indeed, applying standard quantum field theory methods, radiative corrections to vacuum energy scale like the cut-off of the effective field theory (EFT) to the fourth power rendering it extremely sensitive to ultra-violet physics. This is problematic because the scale of the observed cosmological constant lies at least sixty orders of magnitude below the scale of current collider experiments. Within a standard semiclassical framework in which quantum matter is minimally coupled to classical General Relativity, this represents a startling failure of the so-called naturalness paradigm [17]. One mechanism for alleviating this problem and restoring naturalness has been dubbed vacuum energy sequestering [18][19][20][21][22][23][24]. The mechanism includes new rigid degrees of freedom that force a cancellation, or better, a decapitation [25,26], of radiative corrections to the vacuum energy. Consistent with the notion that relevant operators cannot be predicted in effective field theory, existing models of sequestering make no prediction for the renormalised value of the cosmological constant, but they do render it radiatively stable. This places it on the same footing as, say, the electron mass whose mass is protected from radiative corrections by chiral symmetry in the massless limit [27].
The purpose of this paper is to demonstrate that field theory models of monodromy such as [5][6][7][8][9] naturally incorporate the sequestering mechanism at low energies. To achieve this we need at least two independent monodromies, operating at hierarchically different scales and mixing only through gravity. The lightest of these will also give rise to monodromy inflation and the heavier to a rigid dilaton whose local fluctuations are suppressed on the scales of interest. All of this suggests that string theory, with its capacity for generating monodromy, could also have a built in mechanism for stabilising the cosmological constant.
ναβ] is a four-form field strength. We can readily rewrite this theory in terms of pseudoscalars only, integrating out the four-form field strength and replacing it with its magnetic dual. The result is where Q = 2πN q is the magnetic dual of F , quantised in units of the membrane charge q. The action (2) is manifestly invariant under a discrete gauge symmetry where f = q/m measures the periodicity of the pseudoscalar. This gauge symmetry protects the low energy theory from large corrections, both perturbative and nonperturbative, allowing us to reliably realise chaotic inflation at super-Planckian field values. We can ease the tension with observational bounds on the tensor-scalar ratio for primordial fluctuations by exploiting EFT corrections to this theory. As shown in [9], by careful application of naive dimensional analysis (NDA) [31,32], the appropriate factors of 4π allow one to probe the higher derivative operators in the EFT expansion without going beyond the cut-off. In particular, taking care to include the correct symmetry factors [9], these corrections take on the following generic form where M is the cut-off, and c n1n2 ∼ O(1). The nonnegative integers n 1 , n 2 satisfy 2n 1 + n 2 ≥ 3 with the gauge symmetry (3) dictating the precise form of these interactions. Strongly coupled dynamics now yields a theory of kinflation [28] with flattening of the effective potential. We refer the reader to [9] for further details and predictions for the tensor-scalar ratio and non-Gaussianity that can be compatible with CMB bounds, yet on the brink of being observed. We emphasize that the EFT description remains valid at strong coupling in a window µ 2 < mφ + Q < M , thanks to the extra powers of 4π. This is in contrast to many other applications of higher derivative operators in cosmological models (see [29] for a critique).
FROM MONODROMY TO SEQUESTERING
In monodromy inflation, reheating can occur by coupling the axion to a gauge sector in the usual way, φ f TrG ∧ G. Non-perturbative corrections now generate a periodic potential for φ. This scenario has been exploited in [30] to develop a sequestering set-up with a landscape of radiatively stable vacua. Here we explore a different scenario, in which reheating occurs via interactions that break the discrete shift symmetry (3). For example, consider a coupling gφ 2 h 2 to some massive scalar h that itself couples to the Standard Model (in principle h could even be the Higgs). g is a technically natural parameter. Coupling axions to an external Higgs-like sector, breaking the shift symmetry along the way, is reminiscent of so-called relaxion models [33]. There the technically natural coupling is taken to be extremely small which is problematic for stringy realisations (see e.g. [34]). That will not be the case for us. What is important for us is that loops of h now generate a symmetry breaking potential, and in particular a mass term that goes asm 2 φ 2 , wherem ∼ gm h and m h is the scalar mass. Provided m ≪ m, we do not expect a significant deformation of the inflationary dynamics. It follows that as long as the scalar mass lies below the scale of inflation (as it must anyway for efficient reheating) we can happily tolerate any g O (1).
To explore what happens when the gauge symmetry (3) is broken explicitly in this way, let us simply deform the original Kaloper Sorbo model by the mass term described above, specifically, wherem ≪ m now encodes the small symmetry-breaking parameter. This will be sufficient for elucidating the emergent mechanism for stabilising vacuum energy, so we will not include any explicit couplings between the inflaton and Standard Model fields in our subsequent analysis. To determine the structure of the EFT corrections to this deformed theory, whilst retaining control of our power counting, it is convenient to think ofm as a spurion, transforming under the gauge symmetry (3) as δm = −2πfm/φ. Applying NDA as before, only now including the spurion, we find that the EFT corrections take the generic form 1 where c n1n2n3 ∼ O(1) and the non-negative integers n 1 , n 2 , n 3 satisfy 2n 1 + n 2 + n 3 ≥ 3. As long asm ≪ m, the inflationary dynamics is essentially the same as in [9], with small corrections.
Let us now imagine that we have a second monodromy sector, with a four-formF and a pseudo scalarφ, only this time we deform it gravitationally.
where R is the Ricci scalar andĝ 1. Such a deformation introduces a dynamical dilaton, prevalent in string theory and is consistent with the notion that quantum gravity should ultimately break any of the remaining global shift symmetries. Integrating out the four form so that we trade it for its magnetic dual,Q, we find and multiply the whole interaction by a factor of M 4 (4π) 2 [9,31,32]. For the spurion we apply the rulem →m M , as with the other mass parameter. Now if we assume thatm lies the above the cut-off, M , we can decouple the local fluctuations inφ 2 . This forceŝ φ to lie at the minimum of its effective potential, or in other wordsφ = −Q/m + O(R/m 2 ) ≈ −Q/m.
Bringing everything together, including the Einstein Hilbert term and a Lagrangian for the Standard Model matter fields, we arrive at the following low energy effective theory valid below some cut-off scale, M , Here we have rewritten our inflationary monodromy sector in terms of the gauge invariant scalar ϕ = φ + Q m , with F containing the leading order symmetry breaking deformation and all the EFT corrections. Indeed, the first two lines of this action correspond to the model of flux-monodromy inflation proposed in [5][6][7], with a gauge symmetry-breaking deformation and EFT corrections of the form of (6). We identify the strong coupling scale µ = M/ √ 4π lying below the cut-off. Strongly coupled inflationary dynamics along the lines proposed in [9] occurs when µ 2 < mφ + Q < M . To ensure that the inflationary behaviour is not destabilised by the symmetry breaking parameters we further assume that ν ≫ µ 2 /m. The last line of (9) includes the Einstein-Hilbert action along with dilaton couplings, with the heavy dilaton held rigid below the cut-off. The gravitational coupling is assumed to be M g ∼ M P l , and we have introduced the ultra-violet scalê ν = M g /g M g . L m = L m (g µν , Φ) corresponds to the Lagrangian for Standard Model matter fields minimally coupled to the metric g µν .
We shall now demonstrate that this effective theory contains a mechanism for stabilising radiative corrections to vacuum energy. As we will see, that mechanism is essentially sequestering [18][19][20][21][22][23][24]. To proceed, we compute the corresponding field equations Here F i denotes the partial derivative of F with respect to its ith argument, T µν is the energy momentum tensor for the minimally coupled Standard Model fields and From (11) and (12) we obtain the integral constraints where Vol = √ −gd 4 x is the spacetime volume and angled brackets denote the spacetime average, X √ −gd 4 x = X Vol. Together these yield the following constraint on the spacetime average of the Ricci scalar This constraint is crucial and nothing more than the sequestering mechanism in action [18][19][20][21][22][23][24]. The crucial point is that R is constrained by the fluxes which correspond to geometric boundary data that can be chosen independently of the UV sector of the theory. Indeed, the flux F can be taken to be as small as we like without any violation of naturalness. Taking traces and spacetime averages of the metric equations of motion (14) we can easily show that (19) where T tot µν = T µν + T ϕ µν + T F µν . After applying the global constraint (18), we arrive at the following effective gravity equations where the effective Planck mass is given by κ 2 (σ) = M 2 g 1 +σ 2 for constantσ =Q mν , and the local fluctuations 3 in the cosmological constant term are given by (21) If we decompose the energy-momentum tensor for matter into its vacuum energy part, V vac , and local excitations, τ µν , as in T µν = −V vac g µν + τ µν we see that the vacuum energy drops out and we obtain a residual cosmological constant This quantity is stable against radiative corrections to vacuum energy. In other words, for a theory cut-off at the scale M , although we expect such corrections to go as V vac → V vac + O(1) M 4 (4π) 2 [36], we claim that this will not alter the scale of the residual cosmological constant, Λ eff → O(1)Λ eff . To see this let us examine each of the contributions in (22). The first term, τ , is the spacetime average of (the trace of) local matter excitations. By its very definition it receives no corrections from vacuum energy and is small 4 in a universe that grows large and old, provided matter satisfies the weak energy condition [19]. The fluxes mνF , mvF in the second term are essentially the same as in previous versions of vacuum energy sequester [21], rescaled by mν andmν. These are purely geometric quantities given entirely by boundary data and not renormalised by radiative corrections to vacuum energy. They can be taken arbitrarily small.
The contribution from F 1 , or equivalently F is also radiatively stable. To see this, note that µ 4 F (or a global subset thereof) plays the role of the cosmological counter term, the bare cosmological constant, whose value is ultimately determined by the geometric global constraint (18). It therefore scales with the cut-off as 3 Since one can easily show that δλ = 0, it is easy to see that δλ contains no global contribution to the cosmological constant. 4 In an infinite universe where all localised matter is ultimately diluted away, this will become infinitesimally small. (4π) 2 ∼ µ 4 , receiving order one radiative corrections in these units. This means that F takes on values of order one, and is corrected to the same order when we include additional loop contributions. Similarly, we have that But κ 2 is the effective gravitational coupling, whose radiative corrections go as µ 2 [37], well below the measured value of κ 2 ∼ M 2 P l . The final term in (22) is also immune to large radiative corrections in essentially the same way as the first term, corresponding to the average of the localised fluctuations in the scalar.
In this set-up, the magnetic duals, Q andQ play a crucial role, akin to the rigid degrees of freedom of the original sequestering proposals [18][19][20][21][22][23][24]. The former essentially plays the role of the cosmological counterterm, whilst the latter gives rise to the global geometric constraint that forces the desired cancellation of vacuum energy loops. Actually, this cannot be the full story because Q is quantised and cannot adjust continuously to compensate for a continuous change in the vacuum energy. However, small changes in the global value of the gauge invariant field ϕ can provide the extra flexibility required.
We might also be concerned that the mechanism for cancelling vacuum energy also does away with inflation. It was already shown that this was not the case in generic sequestering proposals [19], and we see the same here.
The key point, of course, is that the value of the inflationary potential during slow roll represents a local excitation at early times. Indeed, our effective gravity equation (20) contains an explicit contribution from local fluctuations in the cosmological constant, δλ. given by (21). Further, the spacetime average in (21) is negligible in a universe that grows old and large. Thus, alongside (13), we see that the dynamics of inflation goes through essentially as in [9]. There will be small corrections of orderm Finally, let us remark on the corrections coming from inflaton couplings to the Standard Model which we neglected in our analysis. These do not alter our qualitative results. To see this, note that such corrections endow the matter Lagrangian with dependence on φ = ϕ − Q m , or equivalently, L m → L m (g µν , φ, Φ). This only impacts equations (11) and (13), and in each case amounts to trading F 1 → F 1 − ν µ 4 ∂Lm ∂φ . When we take the spacetime average to evaluate the residual cosmological constant, any corrections coming from localised matter excitations contained in ∂Lm ∂φ will be negligible.
DISCUSSION
In this paper we have shown how a pair of field theory monodromies, with deformations motivated by inflation and quantum gravity, naturally incorporates a mechanism for stabilising the observed cosmological constant, protecting it from large radiative corrections to vacuum energy. This cancellation goes through the mechanics of sequestering [18][19][20][21][22][23][24] and suggests that monodromy constructions within string theory could be adapted to allow for a radiatively stable cosmological constant at low energies. It is important that the two monodromies operate at hierarchically different scales: for the low scale inflationary monodromy, the inflaton moves according to slow roll, while its magnetic dual plays the leading role in the cosmological counterterm required to cancel radiative corrections to vacuum energy. The high scale dilaton monodromy, in contrast, is held rigid. Only the magnetic dual plays any role forcing the desired global constraint on the geometry. The rigidity of the dilaton sector also avoids issues with experimental tests of General Relativity [35].
We can think of monodromy as the natural way in which we would extend low energy sequestering models into the UV, such that the cancellation mechanism described above might have been anticipated. To see this explicitly consider for definiteness and simplicity the local formulation of sequestering introduced in [21] (although we note that much of what we say here can easily be adapted to the improved model [23] designed to sequester vacuum energy contributions from graviton loops). The action introduced in [21] is given by where F = 1 4! F µναβ dx µ dx ν dx α dx β and its hatted counterpart correspond to four-form field strengths. This theory contains a cosmological potential Λ and a dilaton κ each of whom are held rigid by the dynamics of the three form fields but whose global variation ensures the cancellation of vacuum energy contributions coming from matter loops. Typically we assume that M g ∼ M P l and that µ is around the cut-off. We now reparametrise the theory (23), introducing the fields φ = νσ,φ =νσ then defining the potentials Λ = µ 4 θ(σ), κ 2 = M 2 gθ (σ). After rescaling F → mνF ,F →mνF , we obtain, The rigidity of the scalars φ andφ can be relaxed by adding canonical kinetic terms for the scalars and the four-forms, On the first two lines above, we see two copies of the original Kaloper-Sorbo theory, coupled through gravity, with deformations that break the corresponding gauge symmetries on the last line. When this theory is written in terms of scalars only, after integrating out the four-form field strengths, we see that we have two massive scalars, of mass m andm. The original Lagrangian for vacuum energy sequestering (24) is recovered at low energies below these mass scales, where we decouple the local fluctuations in the scalars whilst retaining their rigid deformations. One can explicitly show that this generalised form of vacuum energy sequestering does indeed sequester the vacuum energy successfully. This is because the vacuum energy source, being at infinite wavelength, only sees the low energy effective theory below the two mass scales, which is, of course, the original theory proposed in [21]. | 4,527.6 | 2018-06-12T00:00:00.000 | [
"Physics"
] |
An evaluation of a recombinant multiepitope based antigen for detection of Toxoplasma gondii specific antibodies
Background The inefficiency of the current tachyzoite antigen-based serological assays for the serodiagnosis of Toxoplasma gondii infection mandates the need for acquirement of reliable and standard diagnostic reagents. Recently, epitope-based antigens have emerged as an alternative diagnostic marker for the achievement of highly sensitive and specific capture antigens. In this study, the diagnostic utility of a recombinant multiepitope antigen (USM.TOXO1) for the serodiagnosis of human toxoplasmosis was evaluated. Methods An indirect enzyme-linked immunosorbent assay (ELISA) was developed to evaluate the usefulness of USM.TOXO1 antigen for the detection of IgG antibodies against Toxoplasma gondii in human sera. Whereas the reactivity of the developed antigen against IgM antibody was evaluated by western blot and Dot enzyme immunoassay (dot-EIA) analysis. Results The diagnostic performance of the new antigens in IgG ELISA was achieved at the maximum values of 85.43% and 81.25% for diagnostic sensitivity and specificity respectively. The USM.TOXO1 was also proven to be reactive with anti- T. gondii IgM antibody. Conclusions This finding makes the USM.TOXO1 antigen an attractive candidate for improving the toxoplasmosis serodiagnosis and demonstrates that multiepitope antigens could be a potential and promising diagnostic marker for the development of high sensitive and accurate assays.
Background
Toxoplasma gondii (T. gondii) is a widely distributed intercellular parasite with a relatively wide host range including human and almost all warm-blooded animals [1]. The clinical complications of the disease, especially in immunocompromised patients emphasize the importance of accurately identifying the infection. In particular, early diagnosis is critical for the effective therapy of the disease [2]. The important role of accurate diagnosis for the clinical management of toxoplamososis is a public health concern [3]. To date, various diagnostic techniques have been established [4]. However, the routine diagnostic strategy is mainly based on the detection of T. gondii-specific antibodies by various serological tests [5]. The serological tests play a vital role in the diagnosis of both human and animal toxoplasmosis [6].
Despite the satisfactory results obtained from the serodiagnosis specifically ELISA, development of standard and reliable reagents remains laborious and expensive [7,8]. Furthermore, the insufficient accuracy of several serodiagnostic tests necessitates the exploration of alternative reagents to be used for diagnostic purposes in the progress of toxoplasmosis control [8,9]. On the basis of this, suggestions were put forward to identify possible future directions of research on the development of accurate diagnostic tests. The scientific response to this scenario was based on paying particular attention to the recombinant multiepitope antigens that express different immunoreactive regions of various T. gondii antigens [10].
Recently, epitope based antigen has emerged as alternative tools for achievement of highly sensitive and specific capture antigens, that can be used as an alternative source of antigens with the potential to successfully replace the native antigen [10,11]. The rationale behind using of epitope based antigen for improvement of toxoplasmosis serodiagnosis would prove highly beneficial to increase the sensitivity and specificity; thus improve the standardization of the tests [12]. Furthermore, the more advantages of using such kind of antigens, is the capture antigens composition are precisely known and therefore mixture of different antigens can be used, as well as the cost of antigens production can be significantly reduced [12]. Such reasons justify why the studies on the T. gondii epitope antigens are receiving increasing attention from researchers.
The use of epitope-based antigen for the development of new diagnostic tests of various infections has shown encouraging results against various diseases. These diseases include hepatitis C virus [13], leishmaniasis [14], trypanosomiasis [15], leprosy [16], leptospirosis and Mycobacterium tuberculosis [17,18], as well as toxoplasmosis [8,19,20]. The advancement in bioinformatics and synthetic biology provides alternative strategies toward novel design and production of such kind of antigens [21]. These approaches are allowing the design and the subsequent synthesis of recombinant protein with improved or novel antigenic characteristics and reduced production costs [22]. Thus, studies on T. gondii multiepitope antigens are presently gaining increasing attention. This approach was adopted in the present study to generate a single multiepitope-based antigen expressing nine potential immunodominant epitopes of T. gondii. Consequently, the accuracy of the entire protein as a diagnostic marker for toxoplasmosis in humans was investigated.
Serum samples
Hospital Universiti Sains Malaysia (HUSM) of Kelantan is situated at the north east of peninsular Malaysia. It is a 700-beded tertiary teaching hospital for undergraduate medical program and postgraduate master of medicine. The hospital was equipped with accredited laboratories for testing all clinical samples from patients, including the Microbiology laboratory. A total of 247 human serum samples were collected from patients requested for routine serological investigation for toxoplasmosis at the laboratory, in HUSM. The positive or negative status of the samples was first determined by Elecsys® Toxoplasma IgG and IgM Immunoassays (Roche, Germany). Based on the serological profiles, serum samples were divided into four groups: Group I consisted of 151 anti-Toxoplasma IgG positive serum samples. Group II consisted of 96 IgG negative sera. Group III consisted of 17 sera from patient infected with diseases other than toxoplasmosis. Group IV consisted of 6 anti-Toxoplasma IgM positive sera. Additionally, 30 human serum samples from apparently healthy blood donors were collected and used as negative controls for the determination of the assay cut-off value.
Samples size calculation
The sample size was calculated using PS software for single proportion formula and confirmed with sample size calculation for sensitivity & specificity studies designed by Dr. Mohd Ayub (Universiti Sains Malaysia) with the parameters indicated in Table 1. The desired sample number for Group I (151 IgG positive) and Group II (96 IgG negative) were successfully collected. Unfortunately only 6 anti-Toxoplasma IgM samples and 17 sera from patient infected with diseases other than toxoplasmosis were achieved during the study period. Due to the time limit the study was conducted with the collected serumsamples.
Design, construction and expression of the recombinant multiepitope antigen
A single recombinant multiepitope antigen (USM.-TOXO1) consisting of nine linear and conserved immunodominant within the SAG1, GRA2 and GRA7 antigens of T. gondii was designed as described previously [9]. Consequently, the corresponding gene encoding this antigen with final length of 435 bp was constructed by assembly PCR as described by Stemmer (1995) [23]. Two steps were involved in this assay: Gene assembly (1st PCR) and gene amplification (2nd PCR). For gene assembly, equal volume of 19 overlapping oligonucleotide was mixed to prepare the assembly mix (250 μM). The mixture was subsequently diluted 100 fold in 20 μl PCR mix containing 4 μl of 5X Phusion HF buffer, 0.4 μl of 10 mM dNTPs, and 0.2 Phusion Hot Start II DNA Polymerase (2 U/μl) (Thermo Scientific, USA). The mixture was then subjected to 98°C for 30 s as initial denaturation, followed by 55 cycles of amplification at 95°C for 1 min, 64°C for 1 min, 72°C for 1 min, and a final extension cycle at 72°C for 10 min. In the gene amplification, two outside primers were designed to allow specific amplification of a desired gene from the collection of DNA fragment generated by The reaction was performed in final volume of 25 μl containing 5 μl of the first PCR product and 4 μl of 5× Phusion HF buffer, 1 μM each forward and reverse primers, 0.5 μl of 10 mM dNTPs, 0.25 μl Phusion Hot Start II DNA polymerase (2 U/μl) and sterile ddH 2 O were added to make the final volume of 25 μl. The PCR amplification was carried out under the following conditions: initial denaturation 98°C for 30 s, followed by 23 cycles of amplification at 95°C for 1 min, 64.5°C for 1 min, 72°C for 1 min and final extension 72°C for 10 min. Subsequently, the USM.TOXO1 synthetic gene was cloned into pET-32a(+) expression vector (Novagen, U.S.A). Afterward, the protein expression was induced in E. coli expression system and the synthetic protein was successfully purified using Ni-NTA spin column as described previously [9].
Development of in-house indirect-ELISA using USM.-TOXO1 as capture antigen
Indirect ELISA was developed to detect the anti-IgG antibodies against recombinant USM.TOXO1 antigen. The optimal concentration of the coating antigen and the serum, conjugate dilution were determined by checkerboard titration assay using known positive and negative human sera. As the result, the concentration show highest discrimination value between positive and negative sera was considered to be optimal. After optimization, the ELISA was carried out using standard conditions. Briefly, a 96-well Microplates was coated with 100 μl of USM.TOXO1 recombinant antigen to the final concentration of 2.5 μg/ml in 0.05 M carbonate buffer (pH 9.6) and incubated overnight at 4°C. The following day the wells were washed (3X) with PBS-T for 5 min each time and blocked with 200 μl of blocking buffer for 1 h at 37°C. After another rounds of washing, 100 μl of human sera diluted at 1: 400 was added to the wells and incubated at 37°C for 1 h. At the end of the incubation time the wells were washed again and 100 μl of HRP conjugated anti-human IgG antibody (diluted 1:4000) was added for 1 h at 37°C, followed by final 3X wash.
The immunoenzymatic color reaction was developed by adding 100 μl of TMB substrate and the plate was further incubated for 15 min. Finally the reaction was stopped by adding 100 μl of 2 M H 2 SO 4 and the optical density (OD) at 450 nm was then measured by using SpectraMax M Series Multi-Mode Microplate Readers (USA). The cut-off value was established as the average OD value of 30 serum samples from healthy negative control blood donor plus 3 standard deviations. Therefore, serum were considered negative or positive when its optical density less or more than adjusted cut off value respectively [24,25].
Reactivity of the USM.TOXO1 with T. gondii IgM antibodies
Due to the small number of positive IgM samples (6 serum samples) obtained in this study. The immunoreactivity of USM.TOXO1 antigen against anti-T.gondii IgM antibodies was confirmed by western blot and Dot enzyme immunoassay (dot-EIA) analysis. The western blot analysis was performed similarly as described previously [9] with exception of using anti-human IgM conjugated with alkaline phosphatase. The color reaction was developed using alkaline phosphatase conjugate substrate. For the dot-EIA, a concentration of 0.6 mg/ml of USM.TOXO1 was dotted onto PVDF membrane. The membrane was allowed to dry at room temperature for 1 h. The blocking step, incubation with primary and secondary antibodies was performed as described for western blot.
Statistics
The sensitivity, specificity, negative, and positive pre-
Result
Production of the USM.TOXO1 multiepitope antigen In this study, a single synthetic gene (456 bp) encoding nine immunodominat epitopes of T. gondii antigens was designed as previously described [9]. Subsequently the gene was successfully constructed and amplified by experimental methods using assembly PCR (Fig. 1). The corresponding recombinant multi-epitope protein was successfully expressed as and purified.
Evaluation of diagnostic potential of the purified USM.TOXO1 recombinant proteins by indirect ELISA To evaluate the potential of USM.TOXO1 antigen for the detection of anti T. gondii IgG antibodies in human sera, an in-house ELISA was developed using USM.-TOXO1 fusion proteins as capture antigen. As shown in Table 2, 129 out of 151 positive sera (group I) were reacted with USM.TOXO1 antigen with ODs above the cut-off value, whereas, the generated ELISA failed to detect T. gondii specific antibodies in 22 positive sera, resulting in a sensitivity of 85.43%. The USM.-TOXO1 ELISA was negative in 78 out of 96 samples from group II (negative serum samples), while 18 samples showed false positive results with OD 450 values higher than the cut-off value yielding a specificity of 81.25%. The positive and negative predictive values of the generated ELISA were 87.76% and 78% respectively (Table 3).
Determination the cross-reactivity of the USM.TOXO1 IgG ELISA
The cross-reactivity of the USM.TOXO1 IgG ELISA indicated in Table (2) showed that 14 out of 17 sera were negative, while 3 samples generated false positive results.
Reactivity of T. gondii IgM antibody in immunoblots with USM.TOXO1
The immune reactivity of USM.TOXO1 antigen against anti-toxoplasma IgM antibody was evaluated by western blot and dot enzyme assay only. The results indicated in Fig. 2, demonstrate that USM.TOXO1 has the potential to detect toxoplasmosis-specific IgM antibody.
Discussion
The serological tests play a vital role in the diagnosis of both human and animal toxoplasmosis [6]. Thus, researchers continue to strive in perfecting and improving the serodiagnostics of T. gondii infections. In this regard, acquiring effective diagnostic antigens would be highly beneficial. The current immunoassays are mainly based on the T. gondii lysate antigens (TLAs), which are characterized as high sensitive and specific diagnostic tools [26].
However, the insufficient accuracy of some diagnostic tests are correlated with significant variation in the procedure of producing such kind of antigens, resulting in a major drawback which is lack of the standardization. The real challenge for researchers is to identify novel antigens that possess high immunoreactivity [27]. Thus, exploration of effective diagnostic reagents is the best strategy for the development of accurate diagnostic assays, which would considerably improve the management of the disease [16]. Accordingly, significant efforts [8]. The peptide-based antigens appears as attractive and promising antigenic candidates for the achievement of standard diagnostic marker [28]. At present, bioinformatics tools play a significant role in the identification of immunodominant epitopes [29]. Meanwhile, the advancement of molecular techniques allows the production of recombinant multiepitope antigen [30]. Interestingly, the uses of epitope-based antigens could allow better standardization of the diagnostic tests [21]. Furthermore, the diagnostic value of a particular epitope can be studied; thus, the sensitivity of the immunoassays may be enhanced by combining several epitope antigens [8,21]. Compared with the lysate antigens, epitope-based antigens exhibit several advantages in the serological investigation of toxoplasmosis. These benefits include the low cost of the production and purification protocol, the precise knowledge on the composition of the diagnostic antigen, and the ability to use multiple epitopes that represent different stages of the infection [28].
Until now, only a few studies have demonstrated the usefulness of the recombinant multiepitope antigens in the detection of anti-T. gondii antibodies in human sera [8,18,21,28]. In the present study, this concept was tested. This study speculated that developing a novel recombinant antigen expressing the potential immunodominant epitopes of three T. gondii antigens would be an effective strategy to improve the sensitivity and specificity of the diagnostic assays. Accordingly, USM.TOXO1 gene was designed and successfully constructed by assembly PCR. Compared with the previous studies various methods have been developed to produce multi epitope-based antigens [8,18,21,28], however, assembly PCR is inexpensive as well as more practical strategy for constructing synthetic genes encoding different epitopes or more than one copies of the same epitope. Following the production, the potential uses of the USM.TOXO1 as diagnostic marker was examined. An indirect IgG (ELISA) was developed to detect anti-T. gondii antibodies in human sera.
The results indicated that USM.TOXO1 represents a valid and promising diagnostic marker for screening of anti-T. gondii in human sera. The USM.TOXO1 ELISA specifically identified 129 out of 151 serum samples from the sero-positive T. gondii patients. Meanwhile, 18 serum samples from the sero-negative patients showed false positive results. The diagnostic performance observed for the new antigens developed in this work was achieved at the maximum values of 85.43%, 81.25%, 87.76%, and 78% for diagnostic sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), respectively. These findings are compatible with the diagnostic performance of several recombinant antigens developed recently for the diagnosis of toxoplasmosis [3,16]. However, in terms of cost and efficiency, the protocol performed in this study is less expensive and can rapidly produce large quantities of the recombinant protein.
The results emphasized the usefulness of the USM.-TOXO1 ELISA in the serological screening of toxoplasmosis. This notion is supported by the high sensitivities and specificities, which exceeded 85% and 80% respectively. However, the values did not exceed 90% even though USM.TOXO1 containing the most antigenic epitopes of SAG1, GRA2, and GRA7. This might be due to the loss of antigenicity due to the incorrect folding of the recombinant proteins expressed in the E. coli expression systems [6]. Thus, some of the epitopes featured in the native antigen may not have been presented in the recombinant protein and therefore, cannot be recognized by T. gondii or cross-react with other antibodies. Additionally, the epitope's diagnostic value could also be affected by immune diversity, which is a major hurdle that prevents the achievement of the high epitope predicted diagnostic value [31].
The sensitivity of the developed ELISA was similar to that found by Dai et al. (2012), in which a recombinant multiepitope peptide (rMEP) was developed to express three antigenic determinants of SAG1, SAG2, and SAG3 antigens. However, the specificity was much lower in the current study than the 100% obtained by Dai et al. (2012). The data obtained from a study conducted by Faria et al. (2015) also showed promising results, in which recombinant multiepitope proteins reacted with 88.8% of the positive sera and provided a specificity of 80%. Such result is consistent with our findings. However, multiepitope antigens were highly sensitive and specific in detecting anti-Trypanosoma cruzi antibody and specifically differentiating P. vivax from P. falciparum infection, suggesting powerful tools for developing accurate diagnostic assays [31,32]. Furthermore, a sensitivity and specificity of 100% was also reported [33].
The current paradigm strongly supports the further development of peptide-based assays. Such assays would benefit the diagnosis because of the variation in the host humoral immune response from stage-specific immunity. The response variation would produce specific IgG antibodies associated with one stage of infection but not with the other stages. Thus, multiepitope antigens, which express various antibody-binding sites from different antigens in different stages of infection must be used to develop diagnostic assays that can detect a wide range of antibodies produced throughout the disease process [6].
The future prospective in the establishment of an effective serodiagnostic assay for the detection of T. gondii infection should focus on identifying novel antigenic determinants along with examining various cocktails of distinct epitope-based antigens. The goal is to attain an appropriate level of sensitivity and specificity that would be unaffected by antigenic variation and give accurate results.
The rational selection of the T. gondii antigens that possess conserved T and B cell epitopes is crucial for the successful application of this epitope-based strategy [34]. Thus, SAG1, GRA2, and GRA7 have been selected as the candidate antigens to be assessed in the current project. All of these antigens have been the subject of various fundamental studies. The findings of most of these studies demonstrated the potential of these antigens to become more successful diagnostic reagents or/and effective vaccines. SAG1 is of particular interest because it represents around 5% of the tachyzoite antigen [35]. Investigations on the immunogenicity and immunoreactivity of SAG1 repeatedly yielded significant results [36,37]. These reasons explain the selection of SAG1 as an antigen candidate in this study. Previous studies indicated that GRA7 is a promising vaccine candidate and novel diagnostic reagent [38]. Direct contact of GRA7 with the host immune system enhances the induction of strong antibody and cell-mediated responses in both acute and chronic infection [5].
Similar to SAG1 and GRA7, GRA2 is also characterized as a highly immunogenic antigen during T. gondii infections; it has the potential to induce protective immune response in both human and experimental models [39]. GRA2 allowed the differential identification of antitoxoplasma antibody in acute and chronic human infections [40]. These data suggest that SAG1, GRA7, and GRA2 antigens could advance the development of effective diagnostic reagents for T. gondii.
Conclusion
In conclusion, the diagnostic performance of synthetic protein expressing nine epitope of T. gondii was evaluated. The results indicate that this antigen might be a promising epitope based antigen for seodiagnosis of T. gondii infection which can be modified later to improve the sensitivity and the specificity by either increase the number or the types of epitopes or manipulate the protein structure. There are two main limitations in this study; where only B cell prediction software was applied, in an ideal situation, the T cell prediction epitope should also be considered. Another limitation is the performance of USM.TOXO1 in IgM ELISA was not tested, due to the limited IgM positive sera collected during the study period. | 4,708.2 | 2017-12-29T00:00:00.000 | [
"Biology",
"Medicine"
] |
Defining and Assessing Quality in IoT Environments: A Survey
: With the proliferation of multimedia services, Quality of Experience (QoE) has gained a lot of attention. QoE ties together the users’ needs and expectations to multimedia application and network performance. However, in various Internet of Things (IoT) applications such as healthcare, surveillance systems, traffic monitoring, etc., human feedback can be limited or infeasible. Moreover, for immersive augmented and virtual reality, as well as other mulsemedia applications, the evaluation in terms of quality cannot only focus on the sight and hearing senses. Therefore, the traditional QoE definition and approaches for evaluating multimedia services might not be suitable for the IoT paradigm, and more quality metrics are required in order to evaluate the quality in IoT. In this paper, we review existing quality definitions, quality influence factors (IFs) and assessment approaches for IoT. This paper also introduces challenges in the area of quality assessment for the IoT paradigm.
Introduction
Quality of Service (QoS), according to the International Telecommunications Union (ITU), is "the totality of characteristics of a telecommunications service that bear on its ability to satisfy stated and implied needs of the user of the service" [1]. It comprises both network-related performance (e.g., bit error rate, latency) and non-network related performance (e.g., service provisioning time, different tariffs) [2]. Thus, in order to satisfy the users' needs, for more than 30 years the telecommunications industry, as well as academia, investigated several mechanisms in order to guarantee QoS in the provided telecommunication services.
However, with the exponential growth of the video-based services, the telecom operators realized that catering the quality expectation of the end users in multimedia services is the most important parameter [3]. Humans are considered quality meters, and their expectations, perceptions, and needs with respect to a particular product, service, or application carry great value [4,5].
The user experience of multimedia applications is inevitably bounded up with the notion of Quality of Experience (QoE) [6]. Lagjhari et al. described QoE as "the blueprint of all human quality needs and expectations" [4].
However, with the introduction of the Internet of Things (IoT), traditional terms and approaches used for defined or evaluating services may not be suitable/sufficient for the IoT context [7], where consumers may no longer be users but machines. Moreover, the QoE requirements in such a heterogeneous environment can vary with respect to the considered IoT application domain; even QoE requirements among IoT applications of the same IoT domain may vary [8]. Furthermore, due to the fact that, in IoT, decisions are taken based on data infusion from multiple sensors in case of failures, the effects are often multidimensional [9].
IoT 2022, 3 Currently, there is no standardization or set of best practices as to how the subjective tests can be conducted and even if there was, it would be practically infeasible to carry out subjective tests for every existing as well as new applications [7]. Moreover, existing subjective methodologies do not consider QoE influence factors (IFs) of the IoT environment, such as the usefulness of the application [7].
Thus, in such heterogeneous environments, existing quality related issues that were initially tailored for humans, such as QoE definitions, evaluation approaches and provision mechanisms, should be re-examined in order to check their validity in a machine-to machine environment. In addition, in cases where human interaction is not required, the traditional definition of QoE is not valid, and new metrics to evaluate the quality in IoT environments are required.
Fizza et al. [15] reviewed existing QoE definitions and QoE models for autonomic IoT. However, the authors suggested only one definition for QoE in IoT. In addition, regarding the QoE modeling, the authors provided limited information concerning the role of data in the QoE evaluation of the IoT applications. Moreover, the authors in [7] focused on the QoE IFs and presented a QoE taxononomy for IoT, while Bures et al. [16] consolidated the IoT quality characteristics into a unified view. A survey concerning QoE evaluation for autonomic IoT applications can be found in [17].
The contributions of the current paper are as follows: • Review existing definitions of QoE that are suitable for IoT environments, since nowadays new terms have been introduced to define and evaluate the quality of IoT applications. • Identify and categorize the quality IFs for IoT. More specifically, we have collected and classified IFs that may found into the literature and are necessary for the creation of a successful quality model for IoT. • Review existing quality assessment approaches for IoT applications.
The rest of this paper is organized as follows: Section 1 presents an overview of IoT and fog computing architectures. Section 2 reviews existing quality definitions that are suitable for IoT, while Section 3 overviews and categorizes quality indicators for IoT. Section 4 reviews existing QoE models and frameworks for IoT. Section 5 discusses challenges in the area of QoE assessment in the IoT context, while Section 6 concludes the paper.
Internet of Things (IoT) and Fog Computing
The term "Internet of Things" (IoT) was coined by Kevin Ashton in 1999 to describe the ability of network objects (connected to the Internet) to bring new services [18]. Since then, the IoT paradigm has gained a great momentum. More specifically, according to the statistics portal Statista (www.statista.com (accessed on 1 June 2022)), the number of IoT connected devices is expected to rise more than 75 billion in 2025 [19]. Figure 1 illustrates several fields of IoT applications including transportation, healthcare, home automation and smart cities [20]. However, these diverse IoT applications and devices from various manufacturers create such network heterogeneity that a unified and inter-operable standard is very difficult to achieve [10].
Currently, there is no global consensus on the architecture of IoT, thus, many different IoT architectures may be found in the literature [11]. The basic architecture has three layers [10]: • A things layer (also known as perception, device or sensor layer) that consists of the sensing hardware, and its main objective is to interconnect things in the IoT network. • A middle layer (also known as transport layer) that processes the received data from the things layer and determines the optimum data transmission path to the IoT servers.
• An application layer (also known as the business layer) that provides information management, data mining, data analytics, decision-making services, as well as the required services to end-user or machines. However, according to Kassab et al. [11] the superior model with respect to the elements is the architecture proposed by Yousefpour et al. (2018) [21], a fog computing architecture. Fog computing is a computing paradigm (introduced by Cisco) that deals with the requirements of time-sensitive IoT applications [22]. The idea is that instead of processing the sensor data on the cloud, to address this issue at the edge [23]. By doing so, the following advantages are accomplished [22]: i) applications are executed closer to end-user and IoT devices, (ii) performance metrics for real-time applications such as latency, response time, and cloud workload are enhanced, (iii) network scalability is increased, and (iv) device mobility is supported. Figure 2 depicts the three layers of the fog computing architecture. The lowest layer consists of the IoT devices that produce massive data and potentially are heterogeneous, geographically distributed and have mobility features [24]. The fog computing layer is composed by the fog nodes, intelligent intermediate devices from different networking components [25] and retransmits the workload to the cloud servers at given time intervals.
Quality in a IoT Environment
In telecommunications, the most suitable metric to assess end-to-end quality is QoE. The most frequently used definition for QoE is the one given by the ITU [26], where it is defined as "The overall acceptability of an application or service, as perceived subjectively by the end-user". Moreover, since many researchers pointed out that the inclusion of the term "acceptability" as the basis for a QoE definition is not ideal, during the Dagstuhl Seminar in 2009, the term acceptability was newly defined as "the outcome of a decision [yes/no] which is partially based on the Quality of Experience" [27]. However, even with this modification, the definition still follows a user-centric approach; thus, it does not reflect the machines' perspective.
Another popular definition of QoE is the one described in the Qualinet White paper [28] in which QoE is defined as "the degree of delight or annoyance of the user of an application or service. It results from the fulfillment of his or her expectations with respect to the utility and/or enjoyment of the application or service in the light of the user's personality and current state". Raake and Eggger [27] extended the definition of the Qualinet White paper in order to also include the term system. Thus, according to the new definition QoE is "the degree of delight or annoyance of a person whose experiencing involves an application, service, or system. It results from the person's evaluation of the fulfillment of his or her expectations and needs with respect to the utility and/or enjoyment in the light of the person's context, personality and current state".
The Qualinet's definition according to Floris and Atzori [20] is valid for general multimedia applications/services and, thus, it can be used for cases where humans are the recipients of the content provided by multimedia IoT applications.
However, in the IoT context, where exist applications that do not require any human intervention, such as smart parking, connected vehicles, etc., the term QoE cannot be used to describe quality. To this end, several researchers introduced new terms to define quality in the IoT domain. Mivoski et al. [29] introduced the term Quality of IoT-experience (QoIoT), which aggregates the delivered quality of an IoT service from the perspective of both humans and machines within the context of autonomous vehicles. More specifically, the QoIoT metric compromises the traditional user-centric QoE metric and the Quality of Machine Experience (QoME), an objective metric that "measures the quality and performance of intelligent machines and their decisions".
Karaadi et al. [30] defined the term "Quality of Things" (QoT) for multimedia communications in IoT to express the quality of fulfilling an IoT task in a Multimedia IoT (M-IoT) [31]. However, the authors do not provide any measurement methodology.
Rahman et al. [9] defined the term Quality of Systems (QoSys), an objective metric like QoE, that measures "the quality and performance of the Systems of Systems (SoS), and the decisions made by those". Thus, the metric QoE IOT is introduced in order to evaluate the quality in an IoT scenario from the perspective of both humans and machines.
Wang et al. [32] introduced the term quality of X (QoX), as a comprehensive evaluation metric that combines QoS, QoE, Quality of Data (QoD) and Quality of Information (QoI).
In this end, Fizza et al. [17] introduce the term Quality of autonomic IoT applications as "an aggregate quantitative value of various IoT quality metrics measured at each stage of the autonomic IoT application life cycle". Table 1 overviews and highlights the specific drawbacks of each definition. As can be seen, there is no definition that can generally express the end-to-end quality in IoT environments.
Paper
Recipient Term Shortcoming User Machine [9] x QoT Too generic definition. It is not clear how it can be measured [17] x x QoE AIoT Autonomic IoT systems [26] x QoE It does not reflect the machine's focused quality [27] x QoE It does not reflect the machine's focused quality [28] x QoE It does not reflect the machine's focused quality [29] x x QoIoT It cannot be applied to Autonomic IoT systems [30] x x QoE IoT It cannot be applied to Autonomic IoT systems
Key Quality Indicators for IoTs
As stated in [23], the first step in creating a successful quality model is to create a taxonomy of its influence factors (IFs). However, identifying these factors is not an easy task to accomplish.
Human IFs that present any variant or invariant property or characteristic of a human user (e.g., motivation, gender, age, education, etc.); 2.
System IFs that refer to properties and characteristics that determine the technically produced quality of an application or service (e.g., QoS, display size, resolution); 3.
Context IFs that embrace any situational property to describe the user's environment in terms of physical, temporal, social, economic, task, and technical characteristics (e.g., day of time, cost, etc.).
However, since in IoT the data acquired by devices (objects), as well as the information acquired and processed are important parameters, two more categories/dimensions may be found in the literature: the Quality of Data (QoD) that is used for data quality evaluation, and the Quality of Information (QoI) that is used for information quality evaluation. However, in several papers, the term QoI is used to determine the quality of information or data [34,35]. Table 2 overviews the most common QoD metrics, while Table 3 shows the most common QoI metrics. Table 2. QoD metrics.
Completeness
The extent to which data are of sufficient breadth, depth and scope for the task at hand [18] Precision The extent to which the collected data are precise Truthfulness The extent to which the collected data are from reliable resource [19] Accuracy The extent to which data are correct and accepted Usefulness The extent to which the sensed data are for the application [15] Consistency The extent to which data are presented in the same format and compatible with previous data [18] Timeliness The extent to which data are valid for decision making [15] Rahman et al. [9] also considered the Quality of Cost (QoC) due to the fact that the machines use some resources in terms of computation, storage, or energy, and such consumptions should be optimized.
Ikeda et al. [36] considered two sets of metrics: physical metrics emerging in the IoT architecture, such as network QoS, sensing quality, and computation quality, and metaphysical metrics demanded by users, such as accuracy, context and timeliness.
Pal et al. [7] classified the QoE IFs for IoT environments into three distinct categories:
1.
Technical, which represent the various QoS factors, which are popular in the multimedia context and also relevant with the IoT examined scenario.
2.
User, which represent the subjective characteristics of the users of the IoT applications.
3.
Context, which are related to the data and information quality along with specific application requirements that can vary depending upon the usage scenario. Nashaat et al. [23] consider 3 dimensions: the Environment runtime context, the Application, and the User expectations. These factors, in addition to QoS feedback, influence the total QoE of the user by a valuable weight, as Figure 5 depicts. Besides the QoE taxonomies for IoT applications, researchers have proposed various QoE taxonomies for specific IoT verticals. For example, Damaj et al. [37] in their taxonomy for the context of Connected and Autonomous Electric Vehicles (CAEVs) have identified several performance indicators that were grouped into categories. These categories were then mapped to 4 QoE IFs. Table 4 presents the categories and the corresponding QoE IF. An overview of the different quality metrics are presented in Table 5.
Quality of Experience (QoE)/ Human feedback
Evaluates the overall acceptability of an application or service or system as perceived subjectively by users [7,15,30,38] Quality of Context (QoC) Evaluates the context of the environment or the application [7,17,23] Quality of Cost (QoCo) Evaluates the cost in terms of of computation, storage, or energy of an IoT application [9] Quality of Information (QoI) Evaluates the quality of information [9,15,17,19,34] Quality of Data (QoD) Evaluates the quality of data [15,[17][18][19] Quality of Service (QoS) Evaluates the network's capability to provide satisfied service levels [15,17,23,33,38] Quality of Device (QoDe) Evaluates the quality of the physical IoT devices [17] Quality of Actuation (QoA) Evaluates the correctness of the decision making/ actuation performed by an IoT application [17] Quality of Security and Privacy (QoSe & P) Evaluates the security and privacy of an IoT application [17]
Quality Models for IoTs
Traditionally, qualitative methods that focus on voice perceptibility for applications usability are used for the QoE evaluation [39]. The QoE for multimedia services is evaluated by subjective, objective and hybrid assessment (a combination of both the subjective and objective approaches) [40].
In this context, a few studies that focus on modeling the relationship between human experience and quality perception in relation to the smart-wearable segment may be found in the literature [6,41,42]. QoE is considered a very important aspect of multiple sensorial media (mulsemedia) [43].
Shin et al. [41] examined the relation of users' experience and the quality perception in IoT. To achieve this goal the authors utilized a combination of qualitative and quantitative methods. Figure 6 shows the proposed QoE model in which, besides the user's behavior, coolness, satisfaction and affordance are considered as QoE factors in the IoT context. Pal et al. [44] proposed a QoE model that maps QoD and QoI to QoE. More specifically, the authors, in order to create the model, collect data from 5 wearable devices. Half of the data set is used to build the model, while the other half is used to test accuracy. The stepcounts and heart-rate measurement readings by the wearables are used as QoD parameters, whereas the perceived ease of use, perceived usefulness, and richness in information are used as QoI parameters. The accuracy of their model is evaluated by comparing the QoE obtained from the mathematical model and a subjective test with 40 participants. The authors adopted the Mean Opinion Score (MOS) to quantify the user experience.
Saleme et al. [6] studied the impact that human factors such as gender, age, prior computing experience, airflow intensity and smell sensitivity have on 360 • mulsemedia QoE. A total of 48 participants (27 male, 21 female) participated in this study with ages between 16 and 65 years old. Results showed that all these factors influence the users' QoE. Guidelines for evaluating wearables' quality of experience in a mulsemedia context can be found in [43].
In addition to the QoE evaluation for wearables, several attempts have been made in order to create QoE models for IoT. One of the first attempts was made by Wu et al. [45] that calculated the overall QoE by combining two parameters: profit (expressed in terms of QoD, QoI, QoE) and cost (expressed in terms of resource efficiency, i.e., device utilization efficiency, computational efficiency, energy efficiency, storage efficiency). The same approach was also followed in several other studies illustrated in Table 6.
Another way of quantifying QoE is the layer-based approach [23], in which each layer focuses on a specific QoE IF (domain), so that the overall quality can be computed as a combination of all IFs (domains). Several layered-QoE models may be found in the related literature.
For example, Floris and Atzori [20] proposed a layered QoE model that aims to evaluate the contributions of each IF to estimate the overall QoE in Multimedia IoT (MIoT) applications [23]. More specifically, the proposed model consists of five layers: physical devices, network, combination, application, and context. The authors, in order to demonstrate the generalization of the their framework, have applied it in two use cases: a) vehicles remote monitoring and b) smart surveillance.
A similar approach is also presented in [36]. More specifically, in this framework, the physical metrics are organized into four layers (device, network, computing, and user interface) while the metaphysical metrics are organized in two layers (information and comfort). However, no evaluation of the proposed framework is provided in this work. [46] proposed a different approach to measure the QoE of IoT services. More specifically, their proposed framework is based on the following steps: (1) Setting up the focus of the IoT services to formulate the QoE parameters, (2) Judging the institutional users who are the users of IoT services, (3) Conducting a Mean Opinion Score (MOS) survey of IoT service users, (4) Calculating the differential MOS as the Absolute Category Rating with Hidden References (ACR-HR) quantitative scale, and (5) Providing the strategic implications to those responsible for the implementation. The authors, in order to validate the proposed framework, conducted a subjective test in Jakarta where 6 institutional users expressed their experience of utilizing IoT technology in their relevant services, i.e., managing public transportation, garbage trucks, ambulances, the fire and rescue brigade, street lighting, and water level measurement.
Finally, [17] proposed a framework to measure the quality of autonomic IoT by mapping five IoT quality metrics to the IoT application life cycle stages: (1) Data Sensing, (2) Sensed Data Transmission, (3) Data Analytics, (4) Analyzed Information Transmission, and (5) Actuation. However, approaches on how to model and measure these IoT IFs are still an open issue.
Discussion
Defining quality in an IoT environment it is not an easy task. Although several terms are proposed in the literature, with the heterogeneity of the IoT components, it is difficult to have a generic definition for quality in IoT. A specific domain definition seems to be a more appropriate solution as in [17,29]; however, a classification based on the different characteristics of the IoT is required.
Additionally, the diversity of IoT applications makes the identification of the appropriate IFs a very challenging task. In Section 3, we have collected all the Quality IFs that can be found in the literature, as Figure 7 depicts. However, the answer to the question "which IFs should be considered for this IoT application" cannot be easily provided. Machine Learning (ML) techniques can be beneficial to address this challenge, since they can be applied to predict the type of IoT application and, as a sequence, the appropriate IFs. Saovapakhiran et al. [47] in their proposed QoE-driven IoT architecture propose the use of ML techniques to tailor QoE at the User level from User Engagement metrics. However, they do not provide a quantitative solution.
In addition, the fact that existing IoT architectures are multi-tier systems increases the complexity of the measurements in IoT. Since each tier has different aspects for the quality IFs, it is not often clear where the IFs should be collected. For example, in the M-QoE framework [38], the IFs are measured for: (1) the IoT device, (2) the Radio Access Network (RAN), (3) the edge network, (4) the core network/services network and (5) the vertical slices and service layers. However, acquiring data from different tiers can result in deterioration of communication delays [47]. To deal with this issue, Saovapakhiran et al. [47] suggested the creation of different QoE domains and the local estimation of QoE in each domain. However, no implementation details are given. In addition, the fact there is a standardized architecture the QoE domains may differ according to the proposed architecture.
Furthermore, security and privacy are crucial challenges to be addressed in IoT architectures. For example, in wearable environments, as more data are collected for the QoE evaluation, the more users' personalized data are revealed. In addition, the multi-tier IoT architectures make security provision difficult. Especially, for vehicular environments in which the topology of the computing network frequent changes due to mobility, security provision is harder to be achieved compared to other networks.
Additionally, quality assessment in IoT requires further research study. Subjective tests are considered the core part of QoE evaluation for multimedia environments. However, existing subjective approaches that are used to measure QoS, may not be suitable for IoT environments, since (1) it is not feasible to carry out subjective tests for every existing or new IoT application due to their big diversity; (2) they require user feedback after every specific interval, resulting in high network delay and relatively low application response time [23]. In particular, for real time monitoring IoT applications, this can lead to malfunction or in other cases can be dangerous even for humans' safety; (3) they cannot easy determine the cause-roots of the performance, e.g., as stated in [48], subjective results for autonomous vehicles cannot be very helpful for policymakers to define the cause of a car accident; and (4) subjective assessment requires human participation and is usually performed in a (rather isolated) lab environment. Even if we build objective models from subjective tests, their validity will be limited only to the application scenarios for which they are tested [44]. Thus, further study of QoE assessment in IoT is required.
Finally, the conducted research showed that although mulsemedia content provides a new content experience that goes beyond traditional media, QoE evaluation for such types of content is an under-researched area. More QoE IFs should be determined in order to reflect the human-to-machine interaction and, thus, create accurate QoE models. However, the complexity of this task is further increased due to the fact that the majority of the existing olfactory information based systems and methods is only available in specialized laboratories [49]. In addition, there is guidelines on how to create a multisensory content [50], as well as there are not many mulsemedia datasets available. Table 7 overviews the challenges concerning quality in IoT.
Conclusions
Quality is an important factor in an IoT environment. Quality provisioning in such environments is not only limited to life-threatening situations, but also needs to consider the risk of causing significant business losses and environmental damage [48]. QoE is the most popular metric that has been used to evaluate quality. However, due to fact that initially QoE was introduced to assess the end-user satisfaction, the concepts of traditional QoE should be extended in order to include contextual factors that are important in the IoT domain. In addition, more quality metrics are required in order to evaluate quality in IoT. To this extend, this paper has surveyed the actual necessity of evaluating the quality in IoT. We identified the quality metrics that impact the quality in an IoT environment. However, even the collection of the quality metrics measurements is not an easy process. For one point, data should be collected from multiple IoT nodes locating at different tiers depending on the IoT architecture that is used, while the storage and transfer of these obtained large-scale data is a very challenging task. Even existing assessing methods should be re-examined in the context of IoT. Especially for mulsemedia applications, traditional QoE assessment methods are not adequate. Thus, research is needed in order to deal with these challenges.
Conflicts of Interest:
The author declares no conflict of interest. | 6,112.4 | 2022-12-07T00:00:00.000 | [
"Computer Science"
] |
Microstructures in shocked quartz: linking nuclear airbursts and meteorite impacts
.
Introduction
In this study, we investigated quartz grains exposed to near-surface nuclear airbursts, in which the blast wave and fireball intersected the ground surface.We also compared shocked quartz grains from Arizona's Meteor Crater, a relatively small (1.2-km-wide) impact structure.Our objective was to compare quartz grains exposed to pressures and temperatures associated with these two types of high-temperature, high-pressure events.We explored the hypothesis that low-altitude nuclear airbursts marked by relatively low pressures can produce shock fractures in quartz grains filled with amorphous silica.We also investigated whether these characteristic shock fractures in quartz grains formed similarly to those during crater-forming impacts, such as at Meteor Crater.Studies of crater-forming impacts and airbursts are crucial because they potentially have sudden, radical effects on the Earth's environmental and biotic systems.However, most current studies have focused on ancient, large cratering events, such as that which occurred at the Cretaceous-Tertiary (KPg) boundary [1].Relatively little is known about smaller, younger events, especially those caused by comets that may produce airbursts rather than large impact craters.
Previously, Eby et al. [2] and Lussier et al. [3] explored shocked quartz grains' characteristics and formation mechanism resulting from the 1945 Trinity nuclear detonation at the Alamogordo Bombing Range, New Mexico.These studies revealed the presence of linear fractures that result from the high shock pressures of these detonations, leading Lussier et al. [3] to conclude that they may represent the initial deformational feature of quartz formed in a progression of increasing shock pressures.In another investigation related to the 1945 Hiroshima nuclear detonation, Wannier et al. [4] investigated glassy spherules but found that any shocked quartz grains that may have been present in the melt had been fully amorphized due to the extremely high temperatures.
Several laboratory experiments have investigated the shock-related transformation of quartz to amorphous silica.For example, in quartz grains experimentally shocked at 5 to 17.5 GPa, Fazio et al. [5] observed glass veins composed of amorphous silica extending across several microns in length and generally thicker than 50 nm.Wilk et al. [6] found amorphous silica in experimentally shocked rocks called shatter cones that formed at low shock pressures of 0.5-5 GPa.Shatter cones are considered to be a classic impact indicator.In addition, Carl et al. [7] conducted experiments demonstrating that extensive amorphization of quartz begins at ~10 GPa.Regarding the importance of amorphous silica in studies of shock metamorphism, French and Koeberl [8] wrote, "amorphous or 'glassy' phases ... constitute another set of unique and distinctive criteria for the recognition of shock-metamorphosed rocks...." Similarly, Bohor et al. [9] wrote, "the formation of quartz glass within fractures ... allows a definitive distinction ... between these shock PDFs and the glass-free dislocation trails marking slow tectonic deformation." Even with these pioneering investigations, numerous questions remain about the formation of shock fractures and amorphous silica associated with nuclear airbursts.Is the formation process similar to that for planar fractures (PFs) and planar deformation features (PDFs) found in shocked quartz grains associated with cosmic impact craters?Are these features similar or different from tectonic lamellae in some deformed metamorphic rocks?In this contribution, we explore these and other questions.
Here, the term "lamellae" typically denotes parallel and planar stress features that form at high shock pressures in quartz.In contrast, the term "fractures" denotes typically open or glass-filled stress features that are sub-planar and sub-parallel.Table 1 compares some of the commonalities and differences among the types of shock features.Our analysis of previous studies (primarily French and Koeberl [8]) shows that shock fractures share 2 of 10 characteristics with PDFs, 4 of 10 with PFs, and 2 of 10 with DLs.Thus, shock fractures differ substantially from the other shock metamorphic features: PDFs, PFs, and DLs.The most important reported differences are that shock fractures are typically sub-planar, non-parallel, not crystallographically oriented, and form at lower shock pressures.
Key analytical studies of shock fractures
Kieffer [32] performed analyses of shocked sandstone from Meteor Crater and concluded that impact-related microfractures began to form at 5.5 GPa (Table 2, adapted from Table 2 of Kieffer [32]).Later, Kieffer et al. [33] described quartz grains within sandstone from Meteor Crater that were weakly shocked at <10 GPa and displayed fractures with quartz that was transformed into amorphous silica.For moderately and strongly shocked rocks, they proposed a process called "jetting," in which molten quartz was injected under pressure into shock-formed fractures in the grains.
Christie et al. [18] performed laboratory experiments on milled quartz cylinders by generating slow-strain conditions to produce glassy lamellae using a confining pressure of 1.5 GPa and a stress differential of up to 3.6 GPa.Their experiment attempted to replicate the features known to form in quartz grains during tectonic motion along fault planes.They reported the presence of deformation lamellae closely associated with amorphous silica at low pressures under laboratory conditions.Their experiment suggests that glass-filled lamellae may form in quartz at pressures as low as 1.5 GPa.
Importantly, Christie et al. [18] did not report amorphous silica associated with naturally-formed tectonic deformation lamellae in quartz [19], suggesting that their laboratory experiments did not replicate the processes that form natural tectonic lamellae.Co-author H.-R.W. has performed multiple analyses of tectonic lamellae and, notably, never observed amorphous silica associated with tectonic lamellae in quartz grains [40][41][42][43][44].In addition, Houser et al. [45] described finding tectonically-formed, nano-to micro-scale amorphous silica particles and nanofilms along active fault planes, but they reported no quartz grains with fractures containing amorphous silica.Multiple studies have observed amorphous silica within fractures, but only in impact-related shocked quartz and not in tectonic deformation lamellae [9,14,19].
Laboratory experiments by Kowitz et al. [11,15,46] investigated the shock alteration of quartz grains when a Note.Based on a study of quartz-rich sandstone from Meteor Crater [32,33].The scale ranges from unshocked quartz at shock stage 0 to highly shocked quartz at shock stage 4 and melted quartz glass at shock stage 5. Shock-generated fractures with amorphous silica (glass) first appear at ~5.0 to 5.5 GPa, as green highlighting indicates.This classification is from Kowitz et al. [11], based on Table 2 of Kieffer [32,33] and modified by others [38,39].
The Trinity nuclear airburst has been recently estimated at 24.8 ± 2 kiloton (kt), up from previous estimates of 20-22 kt.The presence of various key minerals is indicative of the extreme pressures generated: ~8 to < 10 GPa for shocked quartz [2,3]; ~7-10 GPa for shocked zircon [47]; <25-60 GPa for vesiculated feldspar [48]; >8 GPa based on the fractionation of zinc [49]; and 5-8 GPa based on quasi-crystalline minerals in trinitite [50].These studies of the Trinity airburst are critical because they establish the high pressures typically necessary for producing shock metamorphism.
Sample locations Meteor Crater, Arizona
This site, also known as the Barringer Crater, is a 1.2-km-wide hypervelocity impact feature located east of Flagstaff, Arizona [51].The 180-m-deep crater is surrounded by an ejecta blanket that is elevated ~30 to 60 m above the local surface (Figure 2).The 50,000-year-old impact crater is estimated to have been produced by an approximately 50-m-wide bolide, now known as the Canyon Diablo meteorite [51].The bedrock inside Meteor Crater contains shocked quartz with high-pressure planar deformation features (PDFs) [32,51], but we limited our study to shock-fractured quartz grains embedded in samples of meltglass that had been ejected from the crater; we did not examine quartz grains embedded in sandstone or limestone (Appendix, Figure S1).The samples were collected in 1966 by Bunch [51] on the rim ~500 m north of the crater's center at ~35.032206° N, 111.023988°W.
Russia, Joe-1 and Joe-4 nuclear tests, near-surface airbursts
The first Soviet nuclear bomb test, nicknamed "Joe 1" by the Americans, was conducted in 1949 in Kazakhstan (~50.590664°[11] showing (A) original unshocked quartz grains in porous sandstone; (B) grains with non-planar, intra-granular microfractures initially produced at 5 GPa; (C) grains shocked at 7.5 GPa.Red arrows mark the direction of the applied shock from the top of the images down; yellow arrows mark selected representative fractures.Adapted and cropped from Kowitz et al. [11]; used with permission.
N, 77.847319° E).The ~20-kt nuclear test was detonated aerially on a 30-m-tall tower (Figure 3)."Joe 4" is the American nickname for a 400-kt Russian test that was detonated on a 30-m-tall tower at the same location in 1953.This study analyzed only fractured quartz grains in loose sediment and embedded in multi-mm-sized fragments of meltglass.A surface sediment sample was collected by Byron Ristvet on 9/1/2012 at ~100 meters from ground zero for both tests (Appendix, Figure S2).It could not be determined which nuclear test produced the sample that was collected and investigated.
U.S., Trinity nuclear test, near-surface airburst
The Trinity nuclear bomb was detonated aerially in 1945 at the Alamogordo Bombing Range, New Mexico, on a tower at an altitude of 30 m [2] with an estimated energy of 24.8 kilotons (kt) of TNT equivalent [52].The fireball was ~300 m wide at ~25 ms after detonation (Figure 4A).A blast zone of the ejected materials extended more than 400 m radially from ground zero [2].The airburst formed a crater that was ~80 m in diameter [53] and ~1.4 m deep [54] (Figure 4B).This study analyzed only fractured quartz grains embedded in meltglass, called trinitite, which was collected by co-author R.E.H. on 9/30/2011 from the ground surface ~400 m north of ground zero (33.68100°N, 106.4756°W) (Appendix, Figure S3).R.E.H. also studied another sample (JIE) of loose quartz grains found on an anthill near ground zero, collected by Jim Eckles in 2003.
Sampling and methodology
Samples were collected as described in the Appendix, Methods-Samples.Candidate grains were processed as described in the Appendix, Methods-Processing Steps.The Appendix also lists the locations of laboratories where analyses were performed.Selected grains were investigated using multiple standard analytical techniques and preparation methods, as described in Methods below and the Appendix, Methods-Analytical Techniques.
Results and discussion
We employed ten analytical techniques to investigate shock fractures containing amorphous silica, as follows:
Optical transmission microscopy (OPT)
Using this technique, we observed that >50% of the grains examined for each of the three sites displayed shock fractures.Representative optical and SEM-BSE images of quartz grains are shown in Figure 5.These images are comparable to those from the experimental study shown in Figure 1.Most displayed a single set of shock fractures, meaning all are oriented in approximately the same direction.However, a few grains display multiple sets oriented along different axes.
Some grains with shock fractures display undulose extinction (Figure 5), in which waves of extinction are typically oriented perpendicular to the trend of the grain's lamellae.Kowitz et al. [15] reported that the extinction of quartz grains is sharp in unshocked sandstone.In contrast, they noted that undulose extinction becomes apparent in sandstone shocked to 5 GPa, transitioning to weak but still prominent mosaicism (i.e., irregular patchwork extinction) (Figure 5).
Epi-illumination microscopy (EPI)
This analytical technique is particularly useful in viewing HF-etched quartz grains (Figure 5) that display previously hidden glass-filled fractures.Multiple studies [9,14,19,21,55,56] have demonstrated the usefulness of performing analyses after etching quartz grains with HF.According to Gratz et al. [19], the HF-etching removes some amorphous silica filling the shock features, allowing for the "unambiguous visual distinction between glass-filled PDFs and glassfree tectonic deformation arrays in quartz."Other techniques are necessary to identify and characterize the filled material as amorphous silica, a key indicator of shock metamorphism [9,19].
In contrast, lamellae in tectonically-deformed grains are not visible in EPI as open fractures but may appear as shallow, closed depressions without filling material.Our investigations of six unshocked natural quartz grains and six tectonically-deformed quartz grains from non-impact layers reveal that none contain amorphous silica.See Figure 15 and 16.
Scanning electron microscopy (SEM-BSE)
Analyses using SEM-BSE revealed filled fractures in quartz grains that appeared mostly as linear features, although some were curvilinear.Other analyses are also necessary to identify and characterize the material filling the fractures.(Figure 6).We confirmed the amorphous state of the fill material using high-resolution bright-field TEM that can image individual atoms (Figure 7).
Numerous inclusions, also known as decorations or vesicles, are filled with glass or gases and are closely associated with shock fractures (Figure 6).Madden et al. [57] reported that multi-phase inclusions of glass, gases, and fluids are typical at Meteor Crater in sandstone lightly shocked at ≥5.5 to 13 GPa.In contrast, that study observed no multi-phase inclusions in samples formed at >13 GPa in shock stages 3 or 4, suggesting that the high shock pressures collapsed the inclusions [57].Thus, the evidence suggests that these grains with shock fractures formed at low pressure of 5 to 13 GPa at shock stages 1 to 2. In contrast, unshocked tectonically-deformed quartz grains may display lines of bubbles, known as decorations, that form by the dissolution of quartz by water rather than by shock-related processes.
Fast-Fourier transform (FFT)
The areas of the grains from which the foils were extracted are shown in Figure 5.In this study, the FFT analyses commonly displayed crystalline structure in the quartz matrix away from the shock fractures, but most shock fractures displayed a diffuse halo or ring indicative of amorphous material [33,58,59], especially in the thin bands of glass along the shock fractures (Figures 6 and 7).
FFTs of the filling along these thin fractures display the diffuse halo-like patterns characteristic of amorphous material [33,58,59].The halos have average d-spacings of ~3.72 Å for Meteor Crater, ~3.90 Å for Joe-1/4, and ~3.95 Å for Trinity (Figure 6).Other average halo d-spacings are shown in Figure 7.The mean value of 10 grains for the three sites is 3.60 Å with a range of 3.34 to 3.95 Å. Plots show typical halo d-spacings for each of the three sites that are somewhat lower than the reported halo d-spacing of 4.2 Å for quartz glass [60] (Figure 8).
Gleason et al. [61] conducted experiments on amorphous silica and noted that unshocked amorphous silica had a d-spacing of about 4.20 Å.In contrast, shock pressures ranging from 4.7 to 33.6 GPa transformed the quartz into amorphous silica that was permanently densified, causing the standard glass d-spacing to decrease within a range of 3.36 to 4.00 Å.Thus, in our study, the lower d-spacing values (mean = 3.62 Å) support an interpretation that amorphous silica from the three sites was shocked and densified at as low as 4.7 GPa.
TEM energy dispersive spectroscopy (TEM-EDS)
Energy-dispersive X-ray spectroscopy (EDS) is an analytical technique used to determine the elemental composition of materials.EDS analyses of multiple grains demonstrated that most of the material filling fractures is predominantly composed of silicon and oxygen (range: 98-99 wt%).Together with the diffuse rings exhibited in the FFT results (Figures 6-8), this finding confirms that the material filling
Scanning transmission electron microscopy (STEM)
FIB locations of analyzed grains are shown in Figure 5.Using dark-field STEM, the 8-to 15-µm-wide foils display inter-fracture spacings ranging from ~250 nm to 3 µm (Figure 6).Nearly all shock fractures were observed to contain material that was shown to be amorphous silica discontinuously filling the fractures.
Transmission electron microscopy (TEM)
Images acquired using bright-field TEM show sub-planar shock fractures containing thin bands of amorphous silica the fractures is amorphous silica.On the other hand, this Si-rich material is inconsistent with being hydrated silica (opal, hyalite) that can precipitate into fractures because the filling lacks spherical micro-structures typically present in opal [62].Furthermore, TEM-EDS analyses reveal insufficient levels of oxygen to account for the hydration of silica (opal, hyalite) [62].Concentrations typically total ~66 wt% oxygen in opal and hyalite [62], compared with ~28 to 48 wt% for the glass in our samples.For EDS spectra and other details, see Appendix, Figures S4-S7.
Most material that fills the fractures is amorphous silica, but some fractures are intermittently filled with C, Al, Mg, Fe, or Ca.These represent secondary materials possibly injected into the fractures during their formation, precipitated later into the fractures, or introduced during the preparation and polishing of samples.
Cathodoluminescence (CL)
The areas of grains analyzed for CL are shown in Figure 5. Representative CL images are shown in Figures 9-11.Under CL, fractures filled with amorphous silica have been reported to be commonly non-luminescent, i.e., black [21,59,63], although some defect structures in amorphous silica have been reported to luminesce red [65].Alternately, open fractures also appear black; therefore, TEM and TEM-EDS must be used to confirm the possible presence or absence of amorphous silica.According to previous studies [21,59,63,64], if quartz luminesces red, it has been heated or melted and then recrystallized but does not contain amorphous silica.In addition, tectonic deformation lamellae may appear red but not black [21,59,63,64].Non-shocked quartz lattice often luminesces blue under CL [21,59,63,64].
SEM energy dispersive spectroscopy (SEM-EDS)
For these analyses, we selected multiple areas that displayed fractures filled with material (Figures 7 and 8).In most cases, EDS analyses indicated the quartz matrix and filling material were predominantly silica and oxygen (range: 89-98 wt%).The balance was made up of carbon, presumably from the mounting epoxy or the carbon coating.For EDS spectra and other details, see Appendix, Figures S8-S12.
Electron backscatter diffraction (EBSD)
Analyses performed using EBSD rely on varying comparisons of the Kikuchi patterns in a given grain, as shown in Appendix, Figures S13-S16.Multiple EBSD routines reveal an extensive network of oriented shock fractures for all three sites (Figure 12).Optical microscopy revealed that most of the hundreds of quartz grains in each sample from the three sites display these fractures.These images closely match those from shock experiments at ≥5.5 GPa by Kowitz et al. [11] (Figure 1).Each grain's crystallographic orientation is indicated for each image in the left-hand column by the crystal representation in the lower right-hand corner (Figures 12A, 12C, and 12E).The red-colored plane represents (0001), the basal plane, with the c-axis perpendicular.Although the shock fractures are non-planar, their general orientations correspond well with the crystallographic planes depicted on the crystal representation in the lower right-hand corner.This correspondence suggests that the shock fractures form similarly to high-shock planar deformation features (PDFs) and planar fractures (PFs) but are unlike tectonically-deformed lamellae [8,66].
EBSD "local orientation spread" (LOS)
The high pressures during shock metamorphism damage and distort the crystalline lattice of quartz grains.To identify and quantify any potential grain damage, we used an EBSD routine called "local orientation spread" that generates Kikuchi patterns of the quartz lattice.The EDAX EBSD software compares these short-range patterns to reveal possible rotations or misorientations of the crystalline lattice, after which the average misorientation of any given point is calculated relative to neighboring points.For the three sites, we observed values ranging from 0° to ~5° of misorientation, and this misoriented lattice tends to be concentrated along the shock fractures (Figure 12).We found that such misorientations are common in quartz with shock fractures, but are atypical in unshocked quartz grains (e.g., Figure 16).
Trinity grain 32x08 was scanned using SEM (Figure 13A) that recorded EBSD data with a beam width of ~20 nm and indexed the crystallographic patterns automatically (Figure 13B).This provides information about the orientation of the crystal at that spot relative to sample coordinates, generally defined by three Euler angles that relate sample The gray-to-black color indicates that the filling material is non-luminescent, consistent with amorphous silica [21,59,63,64].The SEM-EDS panels (right-hand column) are of approximately the same field of view as in the left-hand column and confirm that the material is predominantly composed of silicon and oxygen (see EDS spectra for panels in Appendix, Figures S8-S12).Thus, the evidence indicates that the filling in the fractures is amorphous silica.and crystal coordinate systems.Figure 13C is a map over the same Trinity quartz grain with colors indicative of Euler angle f2; the Kikuchi pattern is shown in Figure 13D.The pole figure in Figure 13B shows that two main orientations are present across the selected area.The quartz grain has a c-axis roughly perpendicular to the sample surface (001 pole figure) and two orientations of rhombohedral planes (101 and 011) related by a 60° (180°) rotation around the c-axis.This orientation relationship is known as Dauphiné twinning, which can form in multiple ways: during growth, during the phase transition from hexagonal high quartz to trigonal low quartz, during mechanical deformation, or during recrystallization after thermal shock.Several studies have observed Dauphiné twins in quartz subjected to stress (e.g., Schubnikow and Zinserling [67]; Tullis [68]; and Wenk et al. [41]).From the Euler angle relationships, twin boundaries can be defined, and the Dauphiné twin boundaries are plotted with black outlines in Figure 13C.
EBSD "grain reference orientation deviation" values superimposed on EBSD "image quality" values
Orientation deviation maps (Figure 14) assist with visualizing the distribution of local lattice angular misorientations by color-coding the variations.EDAX's EBSD software analyzes and colorizes individual points to illustrate any rotation of the crystalline lattice around an arbitrary common point on the grain with a wide range of colors that each represents areas with short-range misorientations relative to the common point.
Several of the grains in Figure 14 exhibit shock fractures that are curved.As the shock fractures formed, the lattice may have become distorted at high ambient temperatures or by shock melting, as suggested by Buchanan and Reimold [16] and Reimold and Koeberl [13].
EBSD "inverse pole figure" values superimposed on EBSD "image quality" values
The inverse pole figures (right-hand column of Figure 14) reveal variations in the lattice axes of quartz relative to a frame of reference, which, in these examples, is the (0001) basal plane.The EBSD results indicate that these are monocrystalline grains.In each case, measurements show that areas of quartz grains known as Dauphiné twins are rotated 60° relative to the c-axis.Dauphiné twinning is undetectable by standard optical microscopy and SEM but can easily be seen using EBSD.
For the shocked quartz analyzed in our study, Dauphiné twins typically align with the trend of the shock fractures, suggesting that they crystallized as the fractures formed under high stress or formed after the grain fractured as it cooled from the high shock temperatures.It has long been recognized that Dauphiné twins form when quartz is subjected to mechanical stress [67].Later, Wenk et al. [41] further concluded that Dauphiné twinning occurs under high thermal and mechanical stress.Subsequently, Wenk et al. [42] reported that Dauphiné twinning provides evidence for an impact-related origin of shocked quartzite collected from the Vredefort crater in South Africa.
Natural and tectonically deformed quartz grains
This study used optical microscopy and SEM-EDS to investigate hundreds of HF-etched natural, unshocked quartz grains and tectonically deformed grains.These grains commonly displayed fractures, but SEM-EDS observed none to contain silica.In addition, although Dauphiné twins are nearly ubiquitous in all quartz grains, including unshocked or tectonically deformed grains, they are typically distributed randomly (Figures 15 and 16).This random distribution is unlike shock-fractured quartz grains in which Dauphiné twins are nearly always oriented with the fractures.
Additional imagery showing variations in shockfractured quartz
Given the importance of imagery for this investigation, we provide additional examples from Trinity, Joe, and Meteor Crater (Figures 17-19).These illustrate the wide variation in shock fracture characteristics that we documented.These images were acquired using the same analytical techniques presented above.
Potential formation mechanisms of shock fractures
This investigation supports the hypothesis that glass-filled shock fracturing can occur in nuclear detonations and crater-forming impact events.Although the characteristics of these two events are mostly dissimilar, there are essential similarities in the shock effects.Both events produce enormous temperatures and pressures capable of melting quartz and producing shock metamorphism.The most important similarity is that, in both events, the fireball's shockwave is coupled with Earth's surface.This situation is unlike high-altitude nuclear detonations in which the fireball does not intersect the Earth's surface.This coupling appears essential for providing the following mechanisms to produce amorphous silica in shocked fractures.
Shock fracturing by compression
Evidence indicates that shock fractures, as well as shock PDFs and PFs, form when quartz grains are subjected to shock pressures above their Hugoniot elastic limit (HEL), which, for quartz, ranges from ~3-15 GPa [27].This pressure range corresponds with that estimated for the nuclear tests of Trinity and Joe-1/4.A pressure database [69] reveals that in quartz, velocities of the pressure wave range from 6.3 to 6.9 km/sec, depending on a quartz grain's orientation.
High shock pressures commonly produce quartz phases called coesite or stishovite.However, we found no evidence of these phases previously observed at Meteor Crater [2,3,32,48,49,[70][71][72][73].The absence of these phases supports the hypothesis that the shocked grains investigated in this study from Trinity, Joe, and Meteor Crater formed at the lower range of shock pressures, estimated to be ≤8 GPa.
Shock fracturing by tension
In both airbursts and crater-forming events, the fracturing of quartz grains may also occur from tensile forces and spallation [26,35,37,[74][75][76].This shock occurs when a compressive shockwave enters a material, such as a quartz grain, and then reflects off the opposite grain boundary, producing a rarefaction wave that fractures the grain in the opposite direction.The shockwave may occur at < 1 GPa and does not need to exceed quartz's Hugoniot elastic limit (HEL) to cause tensile damage.This process frequently produces the most mechanical damage because the tensile strength of quartz is typically lower than its compressive strength.In this study, tensile fracturing is considered to be the most common formation process.
Thermal shock-metamorphism
For shock fractures to form in quartz, the crystalline lattice must experience high stress and strain, not just from high pressures but also typically from high-temperature gradients.Nuclear tests like Trinity generate fireballs with extreme temperatures that may rise to ~200,000 °C within 10 −4 sec but then, after 3 sec, drop to below the melting point of quartz [77].Such extreme, short-lived temperatures followed by rapid quenching can fracture quartz grains due to sudden thermal expansion followed by rapid cooling.In addition, the intense thermal and gamma radiation may heat the quartz grains to near-melting and, thus, reduce the pressures needed to form shock fractures.These thermal processes appear responsible for forming Dauphiné twinning in alignment with the shock fractures.To our knowledge, this is the first report of such a connection.
Most importantly, we concur with the jetting hypothesis by Kieffer [32] that high temperatures appear to vaporize quartz grains and sediment, after which high pressures inject molten silica or vapor into the fractures and any other zones of weakness in exposed quartz grains [33,37].We infer that molten silica might enter quartz grains along multiple possible zones of weakness: (i) fractures produced by the shockwave; (ii) fractures produced by high temperatures; (iii) pre-existing quartz fractures; (iv) new fractures that form along pre-existing PDFs and PFs; (v) new fractures along pre-existing tectonic lamellae; and (vi) new fractures along pre-existing subgrain boundaries.In the cases of the pre-existing features, the shock fracturing process overprints and modifies the existing features.Even though these types of fractures may form under substantially different shock and non-shock conditions, all have one common characteristic: they became filled with amorphous silica, as described next.
Previous studies of amorphous silica in quartz grains
There have been many studies that identified amorphous silica in quartz.Kieffer [32] analyzed shocked sandstone from Meteor Crater and concluded that impact-related glass-filled fractures began to form at 5.5 GPa but not at lower pressures (Table 2).Christie and Ardell [18] performed shock compression experiments on large quartz cylindrical crystals and noted amorphous silica that filled the fractures at a confining pressure of 1.5 GPa.Kenkmann et al. [78] performed shock experiments on 1.5-mm-wide cylindrical samples of quartz, and using moderate shock pressures of 6-34 GPa, they could generate veins of amorphous silica that were 1-6 µm wide.Kowitz et al. [11,15,46] conducted detailed laboratory experiments to determine the lower pressure limit for forming shock features called "sub-planar, intra-granular fractures."[11] In their experiments, a steel plate was explosively driven into cylinders of quartz-rich sandstone at pressures of 5, 7.5, 10, and 12.5 GPa.Visible shock fractures and amorphous silica (~1.6 wt%) first appeared at 5 GPa [11], similar to the results of Kieffer [32].Carl et al. [7] conducted experiments demonstrating that extensive amorphization of quartz begins at ~10 GPa.In quartz grains experimentally shocked at 5 to 17.5 GPa, Fazio et al. [5] observed glass veins composed of amorphous silica generally thicker than 50 nm, extending several microns in length.Wilk et al. [6] found amorphous silica in experimentally shocked rocks called shatter cones that formed at low shock pressures of 0.5-5 GPa.Laboratory shock experiments by Martinelli et al. [79] used quartz crystals with a minimum diameter of 3400 µm, larger than we tested.The reported compression applied was as low as 0.2 GPa; the maximum compression applied is unclear but appears to have been <1 GPa.
In summary, these studies report the formation of amorphous silica in fractures produced by minimum shock pressures averaging 4.2 GPa (range 0.2 to 10 GPa), with 5 of the 8 studies reporting ~5 GPa as the minimum observed pressure.No experimental study has ever reported glassfilled fractures in natural quartz grains, nor have they been reported in natural quartz grains exposed to non-impact processes, such as volcanism and tectonism [19,80].The existing evidence supports their formation during nuclear detonations and hypervelocity impact events.In addition, Ernstson et al. [26,34,36,37,76,81,82], Moore et al. [83], Demitroff et al. [84], and Mahaney et al. [85] have reported shock-metamorphosed quartz in multiple proposed airbursts during the Cenozoic.
Proposed model for producing shock fractures
To summarize, we propose that shock fractures form in the following sequence.(i) Fractures in quartz grains either preexist or are produced by the high-pressure shockwave and thermal pulse both by compression and tensioning; (ii) the blast vaporizes some quartz grains, and this vapor is transported away from ground zero in the expanding fireball; (iii) the outer surfaces of some quartz grains melt at >1720 °C, the melting point of quartz; (iv) the extreme pressures inject molten silica or silica vapor into the fractures; and (v) both thermal and pressure shock may cause further random melting on the exteriors and in the interiors of some grains.
Future studies
Several studies [34,36,37,75,86] have reported evidence that shock fractures are produced in cosmic airbursts when a high-pressure, high-temperature fireball intersects the surface, similar to the nuclear airbursts described here.These cosmic airbursts may produce shallow craters rather than classic hard-impact craters.We suggest that future studies investigate the hypothesis that low-shock, glass-filled shock fractures are produced in quartz grains during near-surface cosmic airbursts.Similarly, we suggest further research to improve our understanding of glass-filled fractures in hard-impact craters of all sizes.
Conclusions
Glass-filled shock lamellae and fractures are considered to be definitive indicators of a crater-forming impact event and are widely accepted to form at extreme pressures of ~5 to >30 GPa.However, most previous studies of shocked quartz were conducted on large craters and on easily recognizable quartz grains that had been shocked at the higher end of that pressure range.Consequently, there is limited knowledge about the characteristics of quartz grains minimally shocked at lower shock pressures.
This study confirmed previously reported low-shock fractures in quartz at Meteor Crater, a relatively small 1.2-km-wide impact event.Most importantly, we confirmed that similar low-shock fractures also form in near-surface nuclear airbursts, where the fireball and the blast wave reach the surface and no hard-impact crater forms.Despite being static instead of moving at high velocity, these nuclear airbursts create ambient conditions of high pressures and temperatures that are proposed to be similar to those near-surface cosmic airbursts in which the shockwave couples to Earth's surface.
We observed that these low-grade shock fractures: (i) are either void or filled with glass; (ii) range from near-planar to curvilinear; (iii) are commonly sub-parallel in orientation; (iv) are commonly spaced microns apart; (v) are typically less than one micron thick; (vii) are typically closely aligned with Dauphiné twins; and (viii) appear to form at <5 GPa.Notably, Dauphiné twinning occurs during exposure to high pressures or high temperatures, after which portions of the grains recrystallize in alignment with the fracture patterns.Multiple studies have concluded that when amorphous silica is present within fractures, it allows for the unequivocal differentiation between impact-related shock fractures and the glass-free lamellae that mark slowstrain tectonic deformation.The same principle applies to shock fractures formed in nuclear detonations.Thus, we conclude that these shock fractures cannot be of tectonic origin.
The discovery of shock fractures in quartz exposed to nuclear airbursts has important implications.It suggests that shock metamorphism may also occur during a near-surface airburst of an asteroid or comet if the bolide disintegrates close enough to the Earth's surface to generate large shock pressures.The protocol reported here may help identify low-shock fractures in quartz from previously unknown, near-surface cosmic airbursts and small crater-forming impact events in the past.amount of information from the image, color information in the images is non-quantitative.(ix) EBSD analyses were performed using multiple routines.(x) FIB foils were extracted from selected quartz grains.(xi) TEM analyses were performed on individual foils.(xii) Elemental compositions of the grains were determined using TEM-based EDS.FFTs and bi-plots of d-spacing and intensity were produced with Digital Micrograph, version 3.32.2403.0.Because electron microscopy is capable of causing irradiation-induced amorphization [23], quartz grains were examined at low magnification using low voltages and short image-acquisition times.
Appendix: Analytical details HF etching Following Bunch et al. [87], Spectrum Petrographics, Vancouver, WA, etched thin-sectioned slides by exposure to HF vapor for 2 min to dissolve amorphous quartz and make any lamellae more visible.After treatment with HF vapor, we performed another dH 2 O rinse.Alternately, we treated some slides with liquid HF for 2 min, after which we performed a dH 2 O rinse; neutralized them with 5% sodium carbonate solution; rinsed them with dH 2 O again; and then treated them with 5% HCl to remove carbonates.The HF vapor produced more consistent results than liquid HF.Multiple studies [9,14,19,21,55,56] have demonstrated the utility of etching quartz grains with HF to differentiate between glass-filled shock features and glass-free tectonic deformation lamellae.In our study, we observed that HF sometimes lightly etches tectonic deformation lamellae to reveal broad, shallow depressions, as others reported [9,55].However, unlike shock fractures, these depressions in the damaged lattice did not extend more than a few microns into the grain and were not observed to contain amorphous silica.
Optical transmission microscopy (OPT)
For this study, we made polished thin sections of quartz grains and meltglass to search for potentially shocked quartz grains at three sites.For Meteor Crater, 36 quartz grains were analyzed at concentrations of 600 grains/cm 2 (Appendix, Figure S1); for the Joe-1/4 site, 24 grains at 150/cm 2 (Appendix, Figure S2) and for Trinity, 42 grains at 700/cm 2 (Appendix, Figure S3).Epi-illumination microscopy (EPI).This optical technique uses reflected light to image the surfaces of the grains investigated.
SEM and SEM-EDS
Dark-field STEM images were acquired on focused ion beam foils.Standard practices were used for STEM analyses.At Elizabeth City State University, North Carolina, analyses were conducted in low-vacuum mode using a JEOL-6000 SEM system.At the University of Oregon, we used a ThermoFisher Apreo 2 SEM with a CL detector.Using SEM-EDS, we manually selected for detection of major elements with uncertainties of approximately ±10%.At the University of Utah, secondary and backscattered electron images were collected using a Teneo SEM system (Thermofisher FEI, Hillsboro, OR).
TEM, STEM, and TEM-EDS
At the CAMCOR facility at the University of Oregon, Transmission/Scanning transmission electron microscopy, or (S)TEM, was performed on an FEI 80-300 Titan scanning/transmission electron microscope (STEM) equipped with an image corrector, High-Angle Annular Dark Field (HAADF) detector, Energy Dispersive X-ray Spectroscopy (EDS) detector, Gatan Imaging Filter (GIF), and a 4-megapixel Charge-Coupled Device (CCD) camera.Microscope magnification was calibrated using a standard cross-grating carbon replica (2,160 lines/mm) evaporated with Au-Pd (Ted Pella #607).All images, diffraction patterns, and EDX maps were collected at 300 Kv and processed using Digital Micrograph, version 3.32.2403.0.
STEM/TEM was performed on a JEOL 2800 operated at 200 kV at the University of Utah.EDS data was collected and processed using ThermoFisher Noran System 7 software.Spectral maps were processed as net-counts (background subtracted) using a 5×5 kernel size.Quantitative results were obtained using the Cliff-Lorimer method with absorption correction.
Fast-Fourier transform (FFT)
The diffraction characteristics of the FIB foils were investigated using FFT, an image processing technique for analyzing high-resolution TEM (HRTEM) images in reciprocal space.The FFT algorithm calculates the frequency distribution of pixel intensities in an HRTEM image, and then, any periodicity is displayed as spots in an output image, thus revealing the crystal's structure.HRTEM and FFT allow the measurement of interatomic spacings, known as d-spacings, measured in nm or angstroms (Å).
Focused ion beam
This technique creates a thin specimen (avg: ~175 nanometers (nm) thick) by milling a quartz grain with focused gallium (Ga) ions.The resulting specimen, called a foil, is then analyzed using TEM.At the CAMCOR facility of the University of Oregon, TEM samples of quartz foils were prepared using a Helios Dual Beam SEM FIB.At the Surface Analysis Laboratory at the University of Utah, TEM sample preparation of quartz foils from bulk specimens was performed on an FEI/Thermo Helios Nanolab 650.The lift-out procedure followed standard sample preparation techniques.An electron beam deposited platinum layer as first locally deposited.Next, an ion-beam platinum layer was deposited.Trenches were milled on each side of the protective layer.Cuts were then made to the underside, and a micromanipulator probe was placed in contact with the surface (Omniprobe 200).The probe was attached by depositing platinum, and then the sample was cut free from the bulk.After using the micromanipulator probe, the lift-out was attached to a copper support grid.The sample was then thinned using the ion beam at progressively decreasing accelerating voltages, 30 kV, 16 kV, 8 kV, and 2 kV.
Cathodoluminescence At the University of Oregon, cathodoluminescent (CL) images were synchronously captured at red (R), green (G), and blue (B) wavelengths on coated thin sections in low-vacuum mode on a Thermo Apreo2 S FE-SEM at 10 kV using 3.2 nA of beam current at ~10 mm working distance with 50Pa of chamber pressure to balance charge.Individual images using red, blue, and green wavelength filters on the CL detector were acquired and composited to create a 24-bit color image.Wavelength ranges: red: 595-813 nm; green: 495-615; and blue: 291-509 nm.Backscatter (BSE) and secondary (SE) electron images were captured with similar beam settings.
Electron backscatter diffraction (EBSD)
EBSD is an SEM-based analytical technique in which an electron beam scans across a crystalline sample tilted at 70°.The diffracted electrons produce what are called Kikuchi patterns that reveal the microstructural properties of the sample.
At the University of California, Berkeley, SEM analyses were performed with a Zeiss EVO for imaging operated at 20 kV and EDS analyses used an EDAX-AMETEK spectrometer with corresponding Genesis software.EBSD mapping used a Digiview detector and TSL-OIM software.At the University of Utah, a Velocity Super EBSD camera (EDAX, Pleasanton, CA) was used to collect diffracted electrons for crystal structure analysis.
At the University of Utah, secondary and backscattered electron SEM images were collected using a Teneo system (Thermofisher FEI; Hillsboro, OR).EDS, EBSD, and CL analyses were similarly conducted with the same SEM system installed with the following detectors.An Octane Elite EDS system (EDAX, Pleasanton, CA) was used to collect elemental spectra.A Monarc CL Detector (Gatan; Pleasanton, CA) was used for cathodoluminescence studies.SEM beam energy and current were optimized to meet the requirements of each analysis mode.Before imaging, sample slides were polished to 0.20 µm roughness with colloidal silica suspension and washed with water to remove residues.The slides were then coated with 5-nm-thick carbon using a Leica EM ACE600 coater (Leica Microsystems, Inc., Deerfield, IL) to prevent charging during the imaging process.
Micro-Raman
We investigated the shock fractures using micro-Raman with poor results.Even after highly polishing the quartz grains, their extensive fractures and amorphization made it challenging to acquire Raman spectra.
Universal stage
We also investigated the shock fractures using the universal stage.However, we could not determine Miller indices because the observed shock fractures are non-planar and, thus, cannot be accurately measured and compared to planar features.
Image processing Most images were globally adjusted for balance, brightness, contrast, and sharpness, and some images were cropped to fit the space.A few images were rotated for clarity, and the legends and scale bars were repositioned at the bottom of the figures.Legends sometimes became unreadable for RGB images and some resized images, so they were replaced with the original, legible legend.EDS figures were composited from multiple printouts.No data within the figures were changed or obscured in making any adjustments.
Figure 1 :
Figure 1: Low-shock fractures in quartz.SEM backscatter electron (BSE) images of polished, thin-sectioned grains from shock experiments by Kowitz et al. [11] showing (A) original unshocked quartz grains in porous sandstone; (B) grains with non-planar, intra-granular microfractures initially produced at 5 GPa; (C) grains shocked at 7.5 GPa.Red arrows mark the direction of the applied shock from the top of the images down; yellow arrows mark selected representative fractures.Adapted and cropped from Kowitz et al. [11]; used with permission.
Figure 5 :
Figure 5: Images of fractures in quartz grains.Optical microscopy (OPT), left-hand panels A, D, G. Epi-illumination (EPI), middle panels B, E, H. Scanning electron microscopy (SEM-BSE), right-hand panels C, F, I. (A-C) Grains from Meteor Crater, Arizona.(D-F) Grains from the Russian Joe-1/4 nuclear test.(G-I) Grains from the Trinity nuclear test site.Optical images (left-hand column) were acquired under crossed polarizers rotated ~20° off maximum for better visibility.Yellow arrows indicate random representative shock fractures.Panels A and D show dark bands of undulose extinction between orange lines labeled "u."The Trinity grain in panels G and H displays oriented pairs of shock fractures between blue arrows.Red arrows in panels C, F, and I (right-hand column) mark sites from which micron-sized slices of the quartz grain were removed using the focused ion beam (FIB) and then analyzed using bright-field TEM and TEM-EDS.The red asterisks in the right-hand column mark the locations of CL and SEM-EDS analyses.
Figure 6 :
Figure 6: Images using STEM, TEM, and fast-Fourier transform (FFT).(A-D) Grain #10x-12 from Meteor Crater, Arizona.(E-H) Grain #14x-04 from the Russian Joe-1/4 nuclear test.(I-L) Grain #09x11 from Trinity meltglass.The blue arrows mark shock fractures (left-hand column) in these dark-field STEM images, in which the dark lines represent fractures, and the black areas represent voids.For bright-field TEM analyses (middle and right-hand columns), arrows labeled "f" mark material that discontinuously fills the shock fractures.Green arrows labeled "v" indicate voids that appear white in bright-field TEM mode rather than black as in dark-field STEM.Panels D, H, and L are FFTs.The diffuse halo and the d-spacings of its outer edge indicate that the filling of the fractures is amorphous silica.Halo d-spacings were measured along dashed yellow lines and averaged 3.72 Å in panel D, 3.90 Å in panel H, and 3.95 Å in panel L. The diameter of the bright-field TEM beam spot was ~0.5 µm.Insets of diffraction spectra were acquired at "f" in each corresponding bright-field TEM image.
Figure 7 :
Figure 7: TEM images of quartz shock fractures filled with amorphous silica.(A-F) is from Meteor Crater (grain #09x-11); (G-L) is from Trinity (grain #09x11).(A) Bright-field TEM image of the region of interest.(B) A close-up bright-field TEM image exhibits the crystalline lattice below the dotted line and the amorphous silica above; the image was acquired at the asterisk in panel A. (C) Fast-Fourier transform (FFT) of the top part of panel B exhibits a diffuse halo indicative of amorphous silica with a d-spacing of 3.42 Å. (D) FFT of the bottom part of panel B exhibits diffraction spots with a halo indicative of a mix of crystalline lattice with amorphous silica.The halo measures 3.34 Å. (E-F) EDS panels show a composition of 98 wt% silica; the EDS spectra were acquired at the location of the asterisk in panel A. (G) Bright-field TEM image of the region of interest.(H) A close-up bright-field image exhibits the crystalline lattice above the dotted line and the amorphous silica below the line; the high-resolution TEM (HRTEM) image was acquired at the location of the asterisk in panel G. (I) FFT of the top part of panel H shows diffraction spots with a halo that measures 3.45 Å. (J) FFT of the bottom part of panel H displays a diffuse halo indicative of amorphous silica.The d-spacing of the amorphous halo is 3.79 Å. (K-L) EDS panels show a composition of 100 wt% silica; analyses were acquired at the location of the asterisk in panel G.
Figure 8 :
Figure 8: TEM images; FFT patterns and plots; EDS elemental maps.All images were acquired from FIB foils.(A-D) Grain #10x-12 from Meteor Crater, Arizona.(E-H) Grain #14x-04 from the Russian Joe-1/4 nuclear test.(I-L) Grain #30x08 from the Trinity JIE sediment sample.Bright-field TEM images (left-hand column) show the micron-sized areas analyzed; asterisks mark the locations used to generate the FFTs (middle column insets) and the EDS analyses (right-hand column).Panel I (Trinity) shows a glass-filled shock fracture intersecting a glass-filled vesicle.In the middle column, the graphs show intensities plotted against d-spacings generated from FFTs using the Profile function of Digital Micrograph, version 3.32.2403.0.Each grain in this study shows a decrease in slope at d-spacings ranging from 3.50 to 3.70 Å (black line), marking the edges of the diffuse halos shown in the FFT insets.The yellow dashed lines plot a reference profile of non-shocked amorphous silica (melted quartz) [60] with a slope change at 4.20 Å.The slopes of the yellow and black lines are similar, consistent with the presence of amorphous silica in the grains in this study.EDS analyses in right-hand panels confirm that the areas centered on the asterisks in the left-hand panels are predominantly silica and oxygen (range: 98-99 wt%).
Figure 9 :
Figure 9: SEM (A-C) images and cathodoluminescence (CL) images (D-F) of shock fractures in quartz grains.(A) SEM-BSE image of quartz from Meteor Crater, grain 11x08.Shock fractures at arrows.(B) CL image of a different Meteor Crater grain 13x11 showing small, feather-like fractures angling away from the large irregular shock fracture.(C) SEM-BSE image from the Joe-1/4 site, grain 03x16.Most shock fractures contain darker-contrast glass (g) along the shock fractures.The web-like structure is consistent with the high-pressure injection of molten silica or in situ melting.(D) CL image of a different grain from the Joe-1/4 site, grain 14x-04b.(E) SEM-BSE image of quartz from the Trinity site, grain 09x11.The arrow at "g" marks non-luminescent glass.(F) CL image of a different grain, 06x14, from Trinity meltglass.Note that shock fractures are filled with bluish-gray-to-black, non-luminescent glass.
Figure 10 :
Figure 10: SEM and cathodoluminescence (CL) images of shock fractures in quartz.(A, B) Grain 14x-04a from Meteor Crater, Arizona.(C, D) Grain 09x14 from the Russian Joe-1/4 nuclear test.(E, F) Grain 32x08 from Trinity meltglass.The red arrows point to sub-parallel pairs of shock fractures in the SEM-BSE images (left-hand column) and CL images (right-hand column).In SEM-BSE images (left-hand column), yellow arrows point to thin, dark-gray bands of amorphous silica labeled "g."In the CL images (right-hand column), the bluish-gray-to-black bands at arrows labeled "g" indicate non-luminescent, glass-filled shock fractures.As confirmed by EDS, the material is amorphous silica (glass).
Figure 11 :
Figure 11: Images acquired using SEM, grayscale panchromatic cathodoluminescence (CL), and energy dispersive spectroscopy (EDS).(A-D) Grain #10x-12 from Meteor Crater, Arizona.(E-H)Grain #14x-04B from the Russian Joe-1/4 nuclear test.(I-L) Grain #32x08 from Trinity meltglass.In the SEM-BSE images (left-hand column), the yellow arrows point to shock fractures filled with gray material.In the grayscale panchromatic CL images (spectrum: 185-850 nm; middle column), the yellow arrows point to the corresponding region, marked as glass.The gray-to-black color indicates that the filling material is non-luminescent, consistent with amorphous silica[21,59,63,64].The SEM-EDS panels (right-hand column) are of approximately the same field of view as in the left-hand column and confirm that the material is predominantly composed of silicon and oxygen (see EDS spectra for panels in Appendix, FiguresS8-S12).Thus, the evidence indicates that the filling in the fractures is amorphous silica.
Figure 12 :
Figure 12: Images of fractures using EBSD.(A, B) Grain #10x-12 from Meteor Crater, Arizona.(C, D) Grain #14x-04B from the Russian Joe-1/4 nuclear test.(E, F) Grain #09x11 from Trinity meltglass.Images in the left-hand column show numerous oriented shock fractures, with arrows marking a few representative examples among the many fractures present.For a close-up view of the smaller fractures, see SEM-BSE image Figure9E.For reference, the crystal representation at the lower right-hand of each image (left-hand column) represents the crystallographic orientation of that grain in which the c-axis is perpendicular to the red basal plane.A multi-colored misorientation scale is inset into the lower right-hand of panel B and applies to all images in the right-hand column.The colors represent the degrees of misorientation of the crystalline structure, ranging from 0 degrees (blue) to ~5 degrees (red).Note that the largest misorientation (i.e., damage) is concentrated along shock fractures.Some apparent disorientation might be an artifact of weaker quality diffraction patterns in the amorphous material or is due to surface irregularities near fractures, causing locally noisier orientation data.
Figure 13 :
Figure 13: Images of selected portions of shock-fractured quartz grain 32x08 from Trinity meltglass.(A) EBSD "image quality" scan in red is superimposed on an SEM-BSE image; arrows mark a pair of oriented, sub-parallel shock fractures with damaged lattice, as indicated by the lack of the red EBSD signal.(B) Pole figures across the grain with the c-axis (0001) nearly perpendicular to the surface but with Dauphiné twins that share two orientations rotated 60 degrees (101 and 011).(C) EBSD map of Euler angle gamma displays mainly two orientations (green and red).They are related by Dauphiné twinning (180 -30 deg rotation around the c-axis, black outlines).Equal area projection.(D) Kikuchi patterns corresponding to EBSD scan in panel C. (E) Image quality and local orientation spread (LOS) image of lattice misorientations (yellow to red) that correspond to the sub-parallel shock fractures at arrows.(F) Close-up SEM-BSE image of oriented shock fractures, marked by gold-dotted box in panel E. Medium gray areas represent amorphous silica, as separately confirmed by SAD, FFT, and TEM-EDS.
Figure 14 :
Figure 14: EBSD images using "orientation deviation" superimposed on "image quality" and EBSD "inverse pole figure" superimposed on "image quality."(A, B) Grain #10x-12 from Meteor Crater, Arizona.(C, D) Grain #19x-12C from the Russian Joe-1/4 nuclear test.(E-G) Grain #09x11 from Trinity meltglass.(A, C, E in the left-hand column) Orientation deviation analyses show the crystalline misorientation of the grain relative to an average value.Note that the misorientations tend to align with shock fractures (gray-to-black colored) at the white arrows.(F) Epi-illumination image showing open fractures corresponding to arrow in panel C. (B, D, G in right-hand column) Inverse pole figure analyses illustrate the axes of rotation of areas around the c-axis.In each figure, the white arrows mark black-outlined Dauphiné twins that are rotated 60° around the c-axis of these monocrystalline quartz grains.This twinning is represented by the magenta color in panel B, yellow color in D, and red in G.Note that most Dauphiné twins are oriented along shock fractures (gray-to-black colored), suggesting that the twinning formed synchronously with the shock fractures and is common in all quartz grains from the three sites investigated here.The inset legend in panel D shows the color-coded Miller-Bravais crystalline axes for all six panels.
Figure 15 :
Figure 15: Tectonically-deformed quartz from a non-impact site in Syria.(A) Optical microscopy image shows tectonically-deformed lamellae, marked by yellow arrows.(B) SEM-BSE image: tectonic lamellae are not visible on the surface.(C) Cathodoluminescence (CL).The tectonic lamellae are faintly visible as blue streaks in the grain.Blue luminescence indicates that the quartz is natural and unshocked [21, 59, 63, 64].The red arrow marks the extraction location of the ion beam (FIB) foil for use with TEM.(D) Bright-field TEM image with no parallel lamellae; yellow arrows mark irregular areas characteristic of dislocations in the quartz.(E) EBSD image quality (IQ) and local orientation spread (LOS) image shows no significantly aligned misorientations.(F) EBSD IQ superimposed on inverse pole figure (IPF); the single Dauphiné twin (yellow arrow) is not oriented with any features in the grain, except the single fracture to the right.In this grain, the tectonic lamellae are only visible in the optical and CL images and not in other analyses, as they are in shock fractures.These multiple techniques enable differentiation between non-shock tectonic lamellae and impact-related shock fractures.
Figure 16 :
Figure 16: Natural, fractured, and unshocked quartz from the Russian Joe-1/4 site.(A) The EBSD image shows a few fractures, but they are not glass-filled.(B) EBSD image quality superimposed on local orientation spread shows no shock fractures aligned with locally misoriented lattice.(C) EBSD image quality (IQ) and grain reference orientation deviation (GROD) show no pattern of misoriented lattice compared to the grain's average orientation.(D) EBSD image quality (IQ) and inverse pole figures (IPF) illustrate variations in the lattice axes of quartz relative to a chosen crystal reference frame, which for these grains is the (0001) basal plane.These color variations represent Dauphiné twinning (blue and green) but are not oriented along the fractures.(E) Close-up SEM-BSE image of quartz grain.(F) SEM-BSE and EBSD inverse pole figures.This grain is fractured, but the fractures are not oriented as in shock fractures.In addition, no amorphous silica was found associated with the lamellae.No well-oriented lamellae are visible in any of these images.
Figure 17 :
Figure 17: Images of selected portions of shock-fractured quartz grain 09x-11 from Meteor Crater.(A) Optical photomicrograph.Arrows mark selected shock fractures.(B) SEM-BSE image.(C) EBSD image quality (IQ) and grain average image quality (GAIQ).Green areas at the arrows represent areas that correspond with shock fractures.(D) Cathodoluminescence (CL) image of non-luminescent gray-to-black areas at arrows indicating amorphous silica in areas corresponding to oriented shock fractures.(E) Bright-field TEM image of open, glass-filled shock fractures.(F) Close-up TEM image of glass-filled shock fractures.
Figure 18 :
Figure 18: Images of selected portions of shock-fractured quartz grain 12x12 from the Joe-1/4 atomic test site.(A) Optical photomicrograph with arrows pointing to selected shock fractures.Yellow arrows mark fractures.(B) EPI photomicrograph of the same view as panel A. (C) SEM-BSE image.(D) Cathodoluminescence (CL) image shows non-luminescent black lines at arrows indicative of amorphous silica, as confirmed by TEM in panel F. Approximately the same view as in panel C. (E) Cathodoluminescence (CL) image shows blue-colored, unshocked quartz matrix containing non-luminescent black lines at arrows indicative of amorphous silica along shock fractures.(F) TEM image of oriented and unoriented shock fractures.The notation "glass" marks a darker gray subrounded area composed of amorphous silica.
Figure 19 :
Figure 19: Images of selected portions of shock-fractured quartz grain 30x08 from Trinity JIE grains sample.(A) Optical photomicrograph of selected shock fractures at arrows.(B) Close-up optical photomicrograph.(C) Cathodoluminescence (CL) image of non-luminescent, black lines at arrows indicative of amorphous silica associated with shock fractures, as confirmed by TEM.(D) SEM-BSE image of approximately the same grain region as shown in panel C. (E) Another CL image of non-luminescent, black lines indicates the presence of amorphous silica.(F) TEM image with arrows marking three directions of shock fractures.
Figure S2 :
Figure S2: Shock-fractured quartz grains from the Joe-1/4 atomic test site.Epi-photomicrograph of a thin-sectioned slide.We analyzed ~24 loose grains (9 shown at arrows) with shock fractures.Extracted from test site sediment at a concentration of ~150 quartz grains per cm 2 .
Figure S1 :
Figure S1: Meltglass containing shock-fractured quartz from Meteor Crater.Epi-photomicrograph of a thin-sectioned slide.We analyzed ~36 quartz grains (arrows) displaying shock fractures in a fragment of ejected meltglass.Shock-fractured grains were concentrated at ~600 quartz grains per cm 2 .
Figure S3 :
Figure S3: Shock-fractured quartz grains in Trinity meltglass.Epi-photomicrograph of a thin-sectioned slide.We analyzed 42 grains (arrows) with shock fractures from ejected meltglass at a concentration of ~700 quartz grains per cm 2 .
Figure S5 :
Figure S5: TEM-EDS data for Meteor Crater grain 10x-12.96.8 wt% SiO 2 , 3.2 wt% C, 0.02 wt% Fe, and 0.01 wt% Al.Note that the C, Fe, and Al appear to be contaminants introduced during the processing of the sample.For descriptions of panels, see the caption for Figure S6.
Figure S6 :
Figure S6: TEM-EDS data for Trinity meltglass grain 09x11.~100 wt% SiO 2 with negligible amounts of Al and C, most likely contamination from processing the sample.For descriptions of panels, see the caption for Figure S6.
Figure S7 :
Figure S7: Additional TEM-EDS data for Trinity meltglass grain 09x11.~100 wt% SiO 2 with insignificant amounts of Al and C, most likely contamination from processing the sample.For descriptions of panels, see the caption for Figure S6.
Figure S8 :
Figure S8: SEM-based EDS spectrum for Meteor crater grain 10x-12.(A) Energy spectrum for various elements of EDS analysis.EDS analyses were made on the entire field of view.(B) Panels showing concentrations of selected elements.(C) Composite image showing silicon panel overlying the SEM field of view.(D) Elemental concentrations were measured for the entire field of view.These descriptions also apply to captions for Figures S11-S14 below.
Figure S12 :
Figure S12: SEM-based EDS spectrum for Trinity meltglass grain 32x08.For descriptions of panels (A)-(D), see the caption for Figure S10.
Figure S11 :
Figure S11: SEM-based EDS spectrum for Trinity meltglass grain 09x11.For descriptions of panels (A)-(D), see the caption for Figure S10.
Figure S13 :
Figure S13: EBSD Kikuchi patterns of shock-fractured quartz.(A) Meteor Crater grain 10x-12.EBSD image of virtual backscatter results (similar to SEM-BSE image) overlain by the grain average image quality.Blue/green/yellow/red colors denote decreasing image quality.Gray color represents areas where no Kikuchi patterns were detected, suggesting the area is amorphous or has short-range ordering of crystals.The gray area along the dashed yellow line is interpreted as a region of amorphous silica that intruded into the grain or melted in situ.(B) For EBSD analyses, the diffracted electrons produce what are called Kikuchi patterns that reveal the microstructural properties of the sample.The panel shows an EBSD Kikuchi pattern from a spot in the yellow circle in panel A. The lattice diagram at the lower right represents the grain's crystalline structure in which the hexagonal surface is the basal plane (0001), with the c-axis perpendicular to it.
Figure S14 :
Figure S14: EBSD Kikuchi patterns of shock-fractured quartz.(A) Joe-1/4 grain 14x-04B.EBSD image of virtual backscatter results (similar to SEM-BSE image) overlain by the grain average image quality.Blue/green/yellow/red colors denote decreasing image quality.Gray color represents areas where no Kikuchi patterns were detected, suggesting the area is amorphous or has short-range ordering of crystals.(B) EBSD Kikuchi pattern from a spot in the yellow circle in panel A. The lattice diagram at the lower right represents the grain's crystalline structure in which the hexagonal surface is the basal plane (0001), with the c-axis perpendicular to it.
Figure S15 :
Figure S15: EBSD Kikuchi patterns of shock-fractured quartz.(A) Joe-1/4 grain 19x-12C.EBSD image of virtual backscatter results (similar to SEM-BSE image) overlain by the grain average image quality.Blue/green/yellow/red colors denote decreasing image quality.Gray color represents areas where no Kikuchi patterns were detected, suggesting the area is amorphous or has short-range ordering of crystals.(B) EBSD Kikuchi pattern from a spot in the yellow circle in panel A. The lattice diagram at the lower right represents the grain's crystalline structure in which the hexagonal surface is the basal plane (0001), with the c-axis perpendicular to it.
Figure S16 :
Figure S16: EBSD Kikuchi patterns of shock-fractured quartz.(A) Trinity meltglass grain 32x08.EBSD image of virtual backscatter results (similar to SEM-BSE image) overlain by the grain average image quality.Blue/green/yellow/red colors denote decreasing image quality.Gray color represents areas where no Kikuchi patterns were detected, suggesting the area is amorphous or has short-range ordering of crystals.(B) EBSD Kikuchi pattern from a spot in the yellow circle in panel A. The lattice diagram at the lower right represents the grain's crystalline structure in which the hexagonal surface is the basal plane (0001), with the c-axis perpendicular to it.
Table 1 :
[8]racteristics of metamorphism of quartz.Note.Shock micro-fractures investigated in this study share 2 of 10 characteristics with planar deformation features (PDFs), 4 of 10 characteristics with planar fractures (PFs), and 2 of 10 with tectonic deformation lamellae (DLs).The green shading represents features in common with shock fractures in our study.Data are primarily derived from French and Koeberl[8].
Table 2 :
Classification of shock stages for quartz. | 13,812.4 | 2023-01-01T00:00:00.000 | [
"Geology",
"Physics"
] |
Ultrasound assisted-phytofabricated Fe3O4 NPs with antioxidant properties and antibacterial effects on growth, biofilm formation, and spreading ability of multidrug resistant bacteria
Abstract Complicated issue in infectious illnesses therapy is increasing of multidrug resistant (MDR) bacteria and biofilms in bacterial infections. In this way, emerging of nanotechnology as a new weapon specifically in the cases of metal nanoparticle (MNPs) synthesis and MNPs surface modification has obtained more attention. In this study, ultrasound-assisted green synthesis method was utilized for the preparation of Fe3O4 NPs with novel shape (dendrimer) through leaf aqueous extract of Artemisia haussknechtii Boiss. Ultraviolet–visible spectroscopy, energy dispersive X-ray spectroscopy (EDX), Fourier transform infrared spectroscopy (FT-IR), scanning electron microscopy (SEM), atomic force microscopic (AFM), X-ray diffraction (XRD) techniques were applied for MNPs physicochemical characterization. Also, disc diffusion assay, minimum inhibitory concentration (MIC), minimum bactericidal concentration (MBC), planktonic and biofilm morphology of three pathogenic bacteria involving Serratia marcescens ATCC 13880, Escherichia coli ATCC 25922, and methicillin-resistant Staphylococcus aureus (MRSA) were evaluated upon treatment of Fe3O4 NPs as antiplanktonic and antibiofilm analysis. Results showed efficient antiplanktonic and antibiofilm activities of biosynthesized Fe3O4 NPs with average diameter size of 83.4 nm. Reduction in biofilm formation of S. aureus ATCC under Fe3O4 NPs stress was significant (66%) in higher MNPs concentration (100 μg/mL). In addition, as first report, spreading ability of S. aureus as important factor in colony expansion on culture medium was reduced by increasing of Fe3O4 NPs. Present study demonstrates striking antiplanktonic, antibiofilm, antispreading mobility and antioxidant aspects of one-pot biosynthesized Fe3O4 NPs with novel shape.
Introduction
Every year, approximately 100,000 tons of antibiotics against infectious diseases are produced worldwide [1]. Misusage of this amount of antibiotics has lead to multidrug resistance emerging among pathogenic strains, specifically in bacteria [2]. In addition to high costs, MDR has resulted in high-mortality rates through the inefficiency of conventional antimicrobial agents [3]. According to natural selection phenomenon, spreading of resistant pathogens such as bacteria, fungi, and viruses are caused extremely by using new mechanisms of MDR in these microorganisms [4]. For the case of bacteria species, these mechanisms include mainly the application of multidrug efflux pumps and resistance plasmids [5]. Another problem associated with resistance to antibiotic chemotherapy is the formation of chronic biofilm [6]. As a definition, bacterial biofilms are a slimy layer of bacteria that adhere to biotic and abiotic surfaces [7]. With regard to the medicinal aspect, biofilm formation can be contributed to chronic infections such as cystic fibrosis and periodontitis [8]. In order to remove biofilm, using an efficient strategy to disrupt the multicellular structure of the biofilm is necessary [9]. Recently, nanotechnology was used as a novel and powerful tool in medicinal offers [10]. In this technology, NPs application, specifically metal NPs (Ag, Cu, TiO 2 , ZnO, MgO, and Fe 3 O 4 NPs) illustrated antimicrobial and antibiofilm activities against MDR pathogens [11,12]. Among these effects, antibacterial activities of magnetic iron oxide (Fe 3 O 4 ) NPs have been reported by many investigators [13,14].
Based on bottom-up and top-down approaches, there are many ways for preparation of Fe 3 O 4 NPs including hydrothermal synthesis, thermal decomposition, ultrasound-assisted reduction, co-precipitation, electrochemical synthesis, and laser pyrolysis techniques as chemical and physical methods [15]. These methods have an advantage by the uniformity of MNPs distribution and disadvantages by consumption of toxic and expensive materials in MNPs preparation [16]. In recent years, green synthesis was introduced and applied by many scientists as a novel and effective process [17,18]. Several types of living organisms such as microorganisms (specifically magnetotactic bacteria), plants, and fungi were used for MNPs synthesis [19]. Among these organisms, plants have the advantages of more biocompatibility and availability than microbes and fungi [20]. These advantages are caused by this fact that plants have various secondary metabolites like flavonoids, flavonols, and terpenoids which they can contribute in the reduction and stabilizing of metal ions and MNPs structure, respectively [21]. Major disadvantages of green synthesis method are agglomeration and ununiform size and shape of NPs. For reducing these unsuitable results, we used ultrasonic wave as ultrasound-assisted reduction method. Advantages of this strategy were reported for the biosynthesis of Pd/Fe 3 O 4 nanocatalyst by green tea leaves [22].
Physicochemical properties of MNPs such as surface plasmon resonance (SPR) and local field enhancement (LFE) can be changed by the alteration of diameter size and surface composition. For instance, LFE of MNPs with bipyramids shape is higher than other shapes by the sharp tips [23]. In this way, there are spherical, semi-spherical, cubic, triangular, rod, wire, flower-like, and dendrimer shapes. It is worth to note that biological activities including antimicrobial, anticancer, cytotoxic, antifungal, biocompatibility, and bioavailability of MNPs are also affected by the size and morphology alterations [24]. These properties of MNPs can be tunable by the selection of synthesis methods and reducing and stabilizing agents [25]. In this regard, the present study illustrated novel shape (dendrimer) of green synthesized Fe 3 O 4 NPs with enhanced antibacterial and antibiofilm abilities.
Different medicinal aspects of Artemisia L. genus were approved by many investigators [26,27]. Among this genus, Artemisia haussknechtii Boiss. is one of the local species in Iran [28]. Therefore, based on the above information, in this study, we utilized A. haussknechtii aqueous extract to synthesize Fe 3 O 4 NPs with a new shape of dendrimer. Assays of disc diffusion, MIC/MBC, and growth kinetics were used to measure antibacterial effects of green synthesized Fe 3 O 4 NPs on three sensitive and MDR bacteria species of S. aureus ATCC 43300, E. coli ATCC 25922, and S. marcescens ATCC13880. In addition, antibacterial mechanisms of these MNPs were determined through analyzing changing in biofilm and bacterial morphology properties.
Leaves extract preparation and biosynthesis of Fe 3 O 4 NPs
Plant species of A. haussknechtii were sampled based on the previous study [29]. Aqueous leaf extract of A. haussknechtii was prepared by 20 g of freshly amassed leaves. The leaves surface were cleaned with running tap water, followed by washing with distilled water and boiling in 250 ml volume of distilled water at 90 C temperature for half hour. Suspensions were filtered two times with Whatman No. 40 filter papers. The filtered sample was collected and stored at 4 C for next stage. This extract was utilized as reducing as well as stabilization/capping agents.
Duing the preparation Fe 3 O 4 NPs, the conical flask containing 50 ml of ferric chloride hexahydrate (0.2 M) was added to 0.001, 0.01, and 0.1 M concentrations of FeCl 2 .4H 2 O (50 ml) under stirring on a magnetic stirrer for 2 h. Afterward, 10 mL of the aqueous extract of A. haussknechtii leaves was mixed with 90 ml of resulted solution and pH is adjusted to 8, by the addition of 0.1 M NaOH solution. The above reaction was under a stirring condition at room temperature followed by the application of ultrasonic irradiation for 2 h with a frequency of 40 kHz and the total acoustic power of 50 W [30]. In comparison analysis, the effect of different temperatures (4,25,35,45,55, and 65 C) on MNPs synthesis was measured in the range of 1-7 h. In order to purify MNPs, the resulting solution was centrifuged at 4000 rpm for 5 Â 10 min and washed several times with 1:1 mixture of absolute methanol and distilled water. Powder MNPs were prepared by incubation of solution at 50 C for 48 h and then, they were stored in an airtight condition for further characterization by XRD, FTIR, and SEM analysis. The chemical reaction of Fe 3 O 4 precipitation is given in the below Equations 1 and 2 Total amounts of phenol, flavonoid, flavonol, and tannin Folin-Ciocalteu assay was applied to assess the total amount of phenolic compounds as one of the important group of secondary metabolites [31]. To prepare 3 ml volume of solution sample, distilled water was added to leaf extract and MNPs samples until we get an amount of 200 lL (1 mg/mL), blended completely by 0.5 ml of Folin-Ciocalteu reagent for 3 min followed by the addition of 2 ml of 20% (w/v) sodium carbonate (Na 2 CO 3 ). The mixture was heated at 45 C for 15 min and then absorbance was indicated at OD 765nm . The total phenolic amount was calculated via calibration curve. Measurements were carried out as triplicate repeat for each sample and demonstrated as mg of gallic acid equivalent (GAE) per g dry weight (/gDW). Flavones, flavonols, flavanones, isoflavonoides, neoflavonoides, flavanols, flavan-3-ols, anthocyanins, and chalcones are important subgroups of flavonoides secondary metabolites in the plant kingdom [32]. For indication of the total flavonoids amount of each sample, AlCl 3 colorimetric assay was used with slight modification [33]. Standard and treatment solutions (0.5 ml) were separately mixed by distilled water (2 ml) and 5% sodium nitrite (150 lL). Mixtures were combined with aluminum chloride 10% solution (150 lL) and 4% sodium hydroxide (2 ml) followed by standing for a period of 6 min. Then, distilled water was added to make a volume of 5 ml in a 5 ml volumetric flask. The mixture was allowed again to stand for 15 min and absorbance of the solutions was evaluated at OD 510 nm against blank and the total flavonoid content was also expressed as rutin equivalents in mg per g of dried extract (mg/g DW).
In order to determine total flavonol content, 250 ml of 2% AlCl 3 and 250 ml of 5% CH 3 COONa solution were added to 200 ml of each sample (1 mg/mL) [34]. Samples were sealed and incubated for 2-3 h at 25 C. The absorbance was indicated at OD 440 nm and results were expressed as mean ± standard deviation by mg of (þ)-catechin equivalents per g of dried extract (mg catechin/g DW).
Moreover, measurement of total tannin content was performed based on the method of Sun et al [35]. About 1.5 ml of concentrated hydrochloric acid and 3 mL methanol solution of 4% vanillin were added to 50 lL of diluted samples. After 15 min incubation of mixture in room temperature, absorption was determined at OD 500nm against methanol as a blank. Total tannin contents were presented as mg of (þ)-catechin equivalent (CE)/g DW and all samples were also analyzed in triplicate repeat.
Total antioxidant capacity (TAC) and DPPH assays
Total antioxidant capacity (TAC) of the biosynthesized Fe 3 O 4 NPs was measured in accordance with phosphomolybdenum assay [36]. About 100 mg of dried leaf extract and the biosynthesized NPs were separately taken into a reaction vial and mixed with 0.05% dimethyl sulfoxide (DMSO). About 0.1 ml aliquot solution of the samples was mixed with 1 ml of the reagent solution (0.6 M H 2 SO 4 , 0.028 M Na 3 PO 4 and 0.004 M [(NH 4 ) 6 Mo 7 O 24 4 H 2 O]). Samples were incubated at 95 C for 90 min followed by cooling at 25 C. Then, absorbance of the resulted samples were indicated at OD 695 nm against a reagent solution (without the annealed samples) as blank. Ascorbic acid was utilized as a positive control. The absorbance of the samples were reported as the total antioxidant activity which the higher antioxidant activity was illustrated by the higher absorbance.
DPPH scavenging activity of MNPs and leaf extract were determined by the 1,1-diphenyl-2-picryl-hydrazyl (DPPH) free radical scavenging assay in 96 well microtiter plate [36]. Briefly, 100 lL of each concentration (100-500 lg/mL in methanol) of the green synthesized Fe 3 O 4 NPs and aqueous leaf extract was mixed with 100 lL of DDPH (100 lM) solution and incubated in the dark condition and at room temperature for 1 h. After color change from violet to pale yellow, the absorbance of the mixtures was indicated at OD 517 nm . Ascorbic acid was used for comparison assay. The capacity of the samples to scavenge DPPH radical was determined by: Percentage of DPPH scavenging activity
MDR and sensitive bacteria
Reprehensive MDR and sensitive bacteria of gram negative (E. coli ATCC 25922 and S. marcescens ATCC13880) and grampositive (S. aureus ATCC 43300) were used to determine the antimicrobial activity of Fe 3 O 4 NPs. These strains were obtained from bacterial archive of microbiology laboratory, Razi University of Kermanshah. Following evaluation, bacterial strains were maintained on nutrient agar slants at 4 C.
Disc diffusion assay
Antibacterial activity was determined by using disc diffusion assay [37]. Overnight MHB cultures of pathogenic bacteria of E. coli ATCC 25922, S. marcescens ATCC13880, and S. aureus ATCC 43300 were prepared freshly for each assay. These cultures were mixed with sterile physiological saline and turbidity was indicated by adding physiological saline until obtaining 0.5 McFarland turbidity standard (1.5 Â 10 8 CFU/ mL). Petri plates were prepared by 20 ml of sterile MHA and prepared bacterial inoculations were swabbed on the surface of the solidified media. After drying of media for 10 min, biosynthesized Fe 3 O 4 NPs were impregnated on discs at different concentrations of Fe 3 O 4 NPs (0.1, 0.01, and 0.001 M of FeCl 2 .4H 2 O) and were compared to plant leaf extract [38].
Determination of MIC/MBC
The bacteriostatic and bactericidal activities of Fe 3 O 4 NPs were measured by MIC/MBC assays [39]. An appropriate volume of bacteria (2 lL) in MHB was added to suspensions of Fe 3 O 4 NPs whose concentration varied using serial two-fold dilution from 100, 50, 25, 12.5, 6.25, and 3.12 lg/mL, respectively. These concentrations were taken from 0.1 M concentration of MNPs which had a higher antibacterial effect in agar diffusion assay. After incubation of medium for 24 h at 37 C, the tubes monitored for turbidity as growth and non-turbidity as no growth. The MIC values were interpreted as the lowest concentration of the sample, which illustrated clear fluid with no development of turbidity. Ten lL of the samples from each tube with no growth of bacteria were subcultured onto an MHA. The minimum bactericidal concentration (MBC) was determined as the highest dilution of the Fe 3 O 4 NPs that did not produce a single bacterial colony on the MHA after a 24 h of incubation period [40].
Effect of Fe 3 O 4 NPs on bacterial growth kinetic
Bacterial growth kinetics of E. coli ATCC 25922, S. marcescens ATCC13880, and S. aureus ATCC 43300 were evaluated under Fe 2 O 3 NPs effect at different concentrations (1000, 500, 250, 100, 50, 25, 12.5, 6.25, and 0 lg/mL as control). These bacteria were grown in liquid LB medium until they reached the log phase [41]. In order to obtain the first point of optical density (OD 600nm ), two different concentrations of 0.1 and 0.2 were determined by dilution of cell culture medium with fresh LB liquid medium. Different concentrations of Fe 3 O 4 NPs were added into the cell culture medium. Then, the culture medium was incubated at 37 C and 250 rpm. Bacterial growth kinetics was evaluated by measuring OD at 600 nm (bacterial concentration) at interval each hour.
Bacterial morphology analysis upon Fe 3 O 4 NPs treatment
Morphology of bacteria upon Fe 3 O 4 NPs treatment was visualized firstly by phase contrast microscopy (OLYMPUS-BX51, Shinjuko, Tokyo, Japan) using OLYMPUS-DP12 digital live camera and Q-capture pro7 software, Shinjuko, Tokyo, Japan, taking samples directly upon the cover-slide from stationary phase of growth kinetics. (20,40,60,80, and 100%). These samples were coated with gold or platinum for FE-SEM scanning analysis [42].
Biofilm formation measurement
Ninety-six-well polystyrene plates was for evaluating biofilm formation. Initially, overnight cultures of bacteria were adjusted to an OD 600 nm of 0.5 in LB medium and co-cultured by different concentrations (100, 50, 25, 12.5, 6.25, 3.12, and 0 lg/mL as control) of green synthesized Fe 3 O 4 NPs as treatments and without MNPs as control for 24 h at 37 C without shaking. Bacterial growth was determined by the assessment of absorbance at OD 600 nm by UV-Vis spectroscopy. In order to remove planktonic bacteria and, plates were rinsed with water several times. Biofilms were stained with 350 lL of crystal violet (0.1%, v/v) for 30 min at 25 C. Then, plates were emptied, washed with water, blotted onto tissue paper towels. Dried crystal violet was extracted with ethanol (95%, v/v), and total biofilm formation was then assayed at OD 570 nm [43]. All experiments were carried out as three replicate tests independently. In addition, results were presented as the averages plus standard deviations (SD) of three replicate cultures. The meaningful inhibition of biofilm was indicated by Tukey's test (p .05).
Morphology analyses of biofilm by light microscopic and AFM
Light microscopic and AFM were utilized to observe changes of biofilm morphology of S. aureus ATCC 43300 under treatment by Fe 3 O 4 NPs. In the case of observation by light microscopic; MRSA strains were cultured on small glass slides in 24-well microtitre plates with different concentrations of MNPs (100, 50, 25, 12.5, 6.25, 3.12, and 0 lg/ mL as control) for 48 h. After this time, planktonic cells were eliminated and the biofilm was stained with crystal violet dye for 10 min followed by washing gently and then drying for 10 min. Afterward, the biofilm morphology was observed under light microscopy (40Â) (OLYMPUS CX31 with camera model KECAM CMOS 10000 KPA) [44]. After approval of morphology changes of biofilm under various amounts of biosynthesized NPs stress via light microscopic, AFM analysis was applied to assess obviously these changes. In this way, the topography of biofilm structure was evaluated at higher concentration of Fe 3 O 4 NPs (lg/mL) compared with control sample. Preparation of biofilm by S. aureus was carried out on silicone slides for 24 h incubation at 37 C followed by washing with PBS and air drying. Images were obtained at a resolution of 4.42 Â 4.42 lm by non-contact AFM (Nanosurf Mobile S).
FTIR analysis of biofilm
Biological macromolecules including polysaccharides, proteins, and DNA are framework of biofilm structure which can be affected by MNPs stress [45]. Biofilm surfaces on the glass slides were analyzed from the point of macromolecular composition in two conditions of high concentration (100 lg/mL) of Fe 3 O 4 NPs stress and without MNPs treatment as control for a period of 24 h. For this analysis, FTIR spectrometer was used with the reflectance mode of wavenumber over the range of 400-4000 cm À1 (Bruker, Germany, model: ALPHA).
Spreading assay of S. aureus
This assay was carried out via the method of Kaito et al. with slight modification [46]. In this case, 3 g of Muller Hinton agar was used and 0.8 g of nutrient broth in 100 ml distilled water followed by autoclaving of solution. Afterward, filter sterilized 10% (w/v) D-glucose in distilled water was added to the final solution. Different concentrations of MNPs (3.12, 6.25, 12.5, 25, 50, and 100 lg/mL as treatments) were coincubated with 5 lL of S. aureus ATCC 43300 sample onto the center of plates for 48 h at 37 C. Finally, the spreading of bacteria was indicated on the culture medium.
Statistical analysis
SPSS version 16 software (SPSS Inc., Chicago, IL, USA) was utilized to obtain one-way ANOVA (Tukey's test) results of experiments. Furthermore, all tests were carried out in triplicates, means with standard deviation were measured and p .05 was determined statistically as the meaningful difference between samples and control. In order to determine the dose-response relation between the plant extracts and Fe 3 O 4 NPs, regression analysis was applied. Furthermore, linear regression analysis was used to measure the correlation coefficient.
Physicochemical properties of NPs
Ultraviolet-Visible (UV-Vis) spectroscopy UV-visible absorbance spectroscopy is used to survey the NPs characterization such as concentration, size, aggregation, and bioconjugation [47]. haussknechtii leaf aqueous extract medium was indicated by color change from pale green to blackish (Figure 1(a)). Phytochemicals such as flavonoids and polyphenols of A. haussknechtii leaf extract can be firmed by color change. This concentration (0.1 M) was selected to evaluate explicitly the effect of temperature parameter (4,25,35,45,55, and 65 C) on the rate of MNPs growth in a period of 7 h ( Figure 1(b)). Similar to the previous study about Ag and Au NPs synthesis, there was a direct relationship between temperature and absorbance values [49]. XRD patterns of leaf extract were reported by previous investigations [48,50]. Figure 2( [51]. Debye-Scherrer equation is the relationship between crystallite size and X-ray diffraction peak broadening. This equation is shown as follows:
XRD analysis
where d is the crystallite size of green synthesized Fe 3 O 4 NPs for (hkl) phase, k is Scherrer constant (0.9), k is the X-ray wavelength of radiation for Cu Ka (0.154 nm), b hkl is the fullwidth at half maximum at (hkl) peak in radian, and h hkl is the diffraction angle for (hkl) phase. Dependent upon atom density, each crystallographic facet contains energetically distinct sites. High atom density at (311) may be associated with high reactivity of these crystalline facets [52]. By the equation, the estimated crystallite size of synthesized Fe 3 O 4 NPs was 83.4 nm, which figured out to be high purity crystalline [53].
FT-IR analysis
FT-IR analysis was used due to the indication of possible molecules for capping and reducing of MNPs [54]. As seen in Figure 3, the absorbance peaks at 604.46 cm À1 and 1043.86 cm À1 are attributed to C-Cl and C-F stretching strong bands for alkyl halide compounds [55]. The peak at 1427.06 cm À1 corresponds to C¼C stretching multiple bands of aromatic groups [56]. The intensity of the absorption bands at 1607.39 cm À1 and 3411.81 cm À1 correspond respectively to C¼O (carbonyl) and N-H (amine) stretching bands [57]. Results of FT-IR analysis illustrated the attribution of different functional groups in the leaf extract including amine, carbonyl, polyphenols, and alkyl halide. Therefore, some polyphenols may be stabilizing of Fe 3 O 4 NPs by attaching to the MNPs surfaces through interacting with free amine or carbonyl groups. Also, the absorption band at 1607. 39
SEM images, EDAX, and AFM analysis
As shown in Figure 4(a), the results of SEM images demonstrated the denderimer shape of Fe 3 O 4 NPs with branched surfaces at nanometer magnifications. This structure may have resulted from chemical interactions such as hydrogen and electrostatic bonds between the organic capping agents of plant secondary metabolites and core of Fe 3 O 4 NPs [59]. Diameter sizes of NPs were in the range of 1-150 nm by the maximum size between 120 and 130 nm (Figure 4(b)). Compared to similar reports, green synthesized Fe 3 O 4 NPs by Solanum trilobatum and Kappaphycus alvarezii extract had 18 and 14.7 nm size with spherical shape [48,60].
Elemental composition analysis of biosynthesized Fe 3 O 4 NPs was performed by energy-disperse X-ray spectroscopy (EDAX) method (Figure 4(c)). EDAX graph indicated elemental signal for iron with an intensity of 111.8, which is specific for the absorption of metallic iron nanocrystallites resulted from surface plasmon resonance. Also, signals were observed for Cl, O, Ca, K, and S elements respectively with intensity of 138.4, 53.1, 12.6, 6.5, and 3.2 with also elemental distribution (Figure 4(d)) which may be related to protein/enzymes presence in A. haussknechtii leaf extract [61].
Surface topography, size, structure, agglomeration, and height of Fe 3 O 4 NPs were surveyed by AFM analyses ( Figure 5). Various dimensions of images ( Figure 5(a-f)) were utilized to detect clearly NPs. As illustrated in Figure 5(d), lower height of 2.73 nm and uniformity of size distribution were observed for Fe 3 O 4 NPs. In this case, the previous study showed size distribution range 7-77 nm for green synthesized magnetic Fe 3 O 4 NPs by fruit extract of Couroupita guianensis Aubl [62]. Therefore, as a comparative approach, our results showed reduction and uniformity of Fe 3 O 4 NPs size.
Antioxidant activities
Total contents of phenolic, flavonoid, flavonol, and tannin Standard curve of gallic acid was used for quantitative evaluation of total phenols. As shown in Figure 6(a), there is linearity curve of the calibration from 20 to 100 lg/mL concentration for gallic acid (R 2 ¼ 0.9987). Table 1 illustrates the total phenolic content of the methanolic leaf extract and Fe 3 O 4 NPs with 15.98 ± 1.30 and 3.22 ± 0.77 mg gallic acid equivalent (GAE)/g DW, respectively. In accordance with standard curve of rutin, measurement of total flavonoids was quantitatively performed and linearity of the calibration curve was obtained from 20 to 100 lg/mL amount for rutin (R 2 ¼ 0.9652; Figure 6(b)). The leaf extract and Fe 3 O 4, respectively showed total flavonoid content (Table 1). There was a significant difference (p .05) between total phenol and flavonoid content of leaf extract and Fe 3 O 4 NPs. Total flavonol content (TFC) around Fe 3 O 4 NPs with a value of 0.76 ± 0.27 was less than leaf extract (p .05). Also, as shown in Figure 6(c), total tannin amount was assessed as a standard curve of (þ)-catechin and linearity of the calibration curve (R 2 ¼ 0.967) was gained from 0 to 120 lg/mL concentration for (þ)-catechin. Fe 3 O 4 NPs and leaf extract, respectively showed total tannin content with 0.29 ± 0.19 and 2.36 ± 0.47 mg (þ)-catechin equivalent (CE)/g DW (Table 1). Also, as observed in this table, flavonoid/phenol ratio did not demonstrate a meaningful difference between Fe 3 O 4 NPs and leaf extract.
Total antioxidant capacity (TAC) and DPPH assays
Total antioxidant capacity (TAC) of plant leaf extract, Fe 3 O 4 NPs, and ascorbic acid (control) was compared together by phosphomolybdenum method. Antioxidant activity of Fe 3 O 4 NPs was increased by the augmentation of samples concentration. As presented in Figure 7(a), amounts of absorbance were 0.788 ± 0.033, 0.599 ± 0.041, and 0.382 ± 0.055, respectively for ascorbic acid, plant leaf extract, and Fe 3 O 4 NPs at higher concentrations (500 lg/mL). Therefore, the total antioxidant ability of Fe 3 O 4 NPs was lower than ascorbic acid and plant leaf extract. In the case of DPPH assay, there were similar results (Figure 7(b)
Disk diffusion assay
The antibacterial capacity of Fe3O4 NPs was evaluated firstly against three bacteria E. coli ATCC 25922, S. aureus ATCC 43300, and S. marcescens ATCC13880 by disk diffusion assay. Inhibition zone diameter (IZD) was observed as confirmation of antibacterial ability after incubation of the plates (Figure 8). This assay illustrated that higher concentration ( of green synthesized Fe 3 O 4 NPs [64]. In this regard, the effective amount of previous antibacterial test (0.1 M of FeCl 2 .4H 2 O) was used as a basic concentration to measure MIC assay. As shown in Figure 9, the MIC values of Fe 3 O 4 NPs against MDR and sensitive bacteria were in the range of 12.5-50 lg/mL. The highest value of this assay was for E. coli and S. marcescens with 50 lg/mL. In contrast, S. aureus had the lowest amount of 12.5 lg/mL. In case of MBC, values ranges 50-100 lg/mL. MBC result for E. coli and S. marcescens was 100 lg/mL. Staphylococcus. aureus showed lower concentration of MBC (50 lg/mL) than E. coli and S. marcescens. Therefore, it can be concluded based on these assays that E. coli and S. marcescens as gram-negative bacteria had more resistant than S. aureus as gram-positive bacteria [65]. In gram-negative bacteria, multidrug efflux pumps as membrane-located transporters make a major contribution to this intrinsic resistance [66]. Multidrug transporters in gram-negative bacteria protect bacterial cells from the function of antibiotic agents on both sides of the cytoplasmic and outer membranes with the broad specific substrate [67]. NPs amount [70]. Higher reduction in growth kinetics of E. coli ATCC 25922 and S. marcescens ATCC13880 was observed at Fe 3 O 4 NPs concentrations with !50 lg/mL. Comparatively, in the case of S. aureus ATCC 43300, lower amounts of MNPs (!12.5) had a striking effect on bacterial growth reduction. Also, there was an obvious relationship between initial concentrations of bacteria (0.1 and 0.2 OD) with the antibacterial activity of MNPs. At higher initial OD (0.2), needed amounts of the Fe 3 O 4 NPs to inhibit bacterial growth were more than lower initial OD (0.1) [41]. As demonstrated in Figure 11, specific growth rate, l, (OD 600 nm /h) of three sensitive and MDR bacteria: (a) E. coli ATCC 25922, (b) S. marcescens ATCC13880, and (c) and S. aureus ATCC 43300S in the presence of MIC concentrations (50 lg/mL for E. coli and S. marcescens and 12.5 lg/mL for S. aureus) had negative rate after 3, 3, and 1 h, respectively. In contrast, there was no negative growth rate in control samples. Escherichia coli and S. marcescens had more resistance to Fe 3 O 4 NPs than S. aureus. In this way, response difference of these bacteria can be caused bythe difference in cell wall stability and growth rate of gram-negative (-) and grampositive (þ) bacteria under Fe 3 O 4 NPs stress [71].
Bacterial morphology analysis upon Fe 3 O 4 NPs treatment
Morphology changes of the sensitive strain of S. aureus ATCC 43300 (MRSA) in the presence of 12.5 lg/mL concentration of Fe 3 O 4 NPs was observed by SEM images (Figure 12). As illustrated in Figure 12 illustration, Figure 10(d) shows a schematic image of several antibacterial mechanisms of Fe 3 O 4 NPs against S. aureus.
Biofilm formation assay
As shown in Figure 13
AFM and morphology analyses of biofilm
Biofilm morphology changes of S. aureus stain under Fe 3 O 4 NPs stress was evaluated by a light microscope at a magnification of 40Â ( Figure 14). As illustrated in these images, the reduction of biofilm structure was enhanced by the rising concentration of Fe 3 O 4 NPs. Similar to biofilm formation assay, higher antibiofilm activity (66% reduction) was observed at 100 lg/mL of Fe 3 O 4 NPs. Therefore, the results of these two methods showed obviously antibiofilm effects of Fe 3 O 4 NPs at the higher concentrations. Changes in biofilm architecture were illustrated at 10 lL phosphatidylcholinedecorated Au NPs at 0.116 mg/mL concentration for 24 h incubation against Pseudomonas aeruginosa (PAO1) [72]. Also, biofilm formation of E. coli and S. aureus was assessed at the presence of biosynthesized silver NPs in 5, 10, and 15 lg/mL concentrations for incubation of 48 h. In this regard, more reduction was viewed at 15 lg/mL amount of Ag NPs [73]. AFM images of biofilm formation by S. aureus under high concentrations of biological synthesized Fe 3 O 4 NPs (as treatment) and free-NPs (as control) are presented in Figure 15. As obvious from Figure 15(a-c), biofilm roughness is lower in treatment (10.666 nm) compared to control (Figure 15(d-f) by a value of 45.955 nm). In addition, there were pores in treatment biofilm resulted from biofilm damage by Fe 3 O 4 NPs stress. The inhibition and disturbing of biofilm structure of S. aureus under Fe 3 O 4 NPs treatments were approximately similar to the results of light microscopic and AFM analysis. Similarly, reduction in roughness values as 12-36% and 40-60% has been reported, respectively for S. aureus and E. coli in presence of biosynthesized Ag and Au NPs [49].
FT-IR analysis of biofilm
Polymers involving polysaccharides, protein, and nucleic acids are essential macromolecules in biofilm formation. FT-IR spectra of biofilm formation by S. aureus on a glass slide in two conditions of Fe 3 O 4 NPs stress and NPs-free as control were compared due to analyzing chemical composition changes of biofilms after 24 h period ( Figure 16). Results showed peaks of 1114. 46 cm À1 and 1112.57 cm À1 , respectively for control and treatment which can be the presence of sign of nucleic acids and polysaccharides macromolecules. Peaks at 1655.16 cm À1 and 1655.36 cm À1 illustrate the presence of proteins [74]. In comparison, the treatment showed reduction and increasing of peak intensity at 621.14 cm À1 (C-Cl stretching bond) and 2363.91 cm À1 (CC bond), respectively.
Spreading assay of Staphylococcus aureus
Colony expansion of S. aureus on soft agar was investigated previously in particular conditions as finger-like dendrites [46]. As illustrated in Figure 17, motility ability of S. aureus was determined by special assay of spreading upon Fe 3 O 4 NPs stress in various concentrations (3.12, 6.25, 12.5, 25, 50, and 100 lg/mL). Compared to a clear pattern of bacterial colonies as finger-like dendrites in control sample and colonies under lower concentrations of Fe 3 O 4 NPs (Figure 17(a-g)), there was a decreasing pattern of colonies expansion by increasing MNPs amounts. These results indicate the sensitivity of S. aureus colony under NPs stress, which is the antibacterial advantage of these MNPs. In fact, due tothe dependence of virulence, tissue colonization, and biofilm formation of S. aureus on colony spreading in initial stages of bacteria growth, infections related to this strain can be blocked by Fe 3 O 4 NPs [75].
Surface topography, size, structure, agglomeration, and height of Fe 3 O 4 NPs were surveyed by AFM analyses ( Figure 5). Various dimensions of images ( Figure 5(a-f)) were used to detect clearly NPs. As illustrated in Figure 5( [76]. Phenolic compounds act as antioxidants via their redox properties [77]. The total phenolic concentration could be applied as rapid screening of antioxidant activities. Plant secondary metabolites such as flavonoids, including flavones, flavanols, and condensed tannins have the antioxidant abilities on the basis of the presence of free hydroxyl functional groups, which they can reduce metal ions to NPs [78]. In this way, the metal ions including iron and copper ions bind to the various reducing/stabilizing flavonoids [79]. Several studies have reported antibacterial effects of MNPs and metal oxide NPs on gram-positive and gram-negative bacteria [80,81]. The efficiency of these antibacterial agents may be related to the cell wall and membrane difference of bacteria. Gram-positive bacteria have a thick cell wall (about 20-80 nm) compared to gram-negative with a thin layer of peptidoglycan (about 7-8 nm) and two cell membranes (outer and plasma membrane) [82]. It is worth mentioning that MNPs with a size range of 8-80 nm can penetrate the cell wall [83]. Antibacterial activities of Fe 3 O 4 NPs were demonstrated by several studies [68,84]. It was reported that these MNPs can cause damaging of E. coli membranes by diffusion of the tiny particles ranging from 10 to 80 nm [85]. Zero-valent iron NPs interact with intracellular oxygen and cause disturbing the cell membrane bythe production of oxidative stress [86]. Green synthesized Fe 3 O 4 NPs by fruit extract of Couroupita guianensis Aubl. had more antibacterial activity on gram-negative bacteria K. penumoniae MTCC 530, E. coli MTCC 2939, and S. typhi MTCC 3917 than gram-positive bacterium S. aureus MTCC 96 [62]. In this case, the generation of reactive oxygen species (ROS) as an antibacterial factor may result from MNPs unique properties. Also, CuO and Ag NPs have demonstrated that antibacterial activities can be raised by diameter size reduction of MNPs [80,87]. Based on cell wall characteristics including the thickness of peptidoglycan wall and number of the membrane, grampositive and gram-negative bacteria are different. In this case, gram-negative bacteria with the cell wall including a thin layer of peptidoglycan, an outer bilayer membrane (lipopolysaccharides and proteins) and inner membrane are more complex than gram-positive bacteria by having only a thick layer of peptidoglycan cell wall and plasma membrane [88]. In this regard, growth kinetic comparison of E. coli and B. subtilis (gram-positive) under different concentrations of iron oxide NPs showed higher growth inhibition for B. subtilis than E. coli [68]. Also, the core-shell Fe 3 O 4 @C-PVPS:PEDOT NPs (iron oxide NPs coated by catechol-conjugated poly (vinylpyrrolidone) sulfobetaines and encapsulated with poly (3,4-ethylenedioxythiophene)) illustrated a high antibacterial impact on S. aureus and E. coli [89].
Changes of bacterial morphology were proved by the impact of MgO NPs on S. enteritidis, E. coli O157:H7, and C. jejuni bacteria in the late-log phase of growth [81]. These alterations in morphology were involved in changing shape from spiral to coccoid and producing deep craters in the bacterial membrane. Cell wall clumping, membrane blebs, and rupture were observed in E. coli MTCC 443 at the stationary phase of growth kinetics upon treatment by ZnO NPs with a diameter range of 25-45 nm [90]. Toxicity of MNPs against bacteria can be related to several parameters involving the type of bacteria, physicochemical properties of MNPs such as the large surface area to volume (SA:V ratio), and chemical and biological functionalization of MNPs surface [91]. In this study, secondary metabolites of A. haussknechtii leaf aqueous extract such as phenol and flavonoids can influence on this property [92].
Antibiofilm properties of green synthesized MNPs were reported by other studies [93,94]. Synergism effect of chemical synthesized Fe 3 O 4 NPs with antibiotics of streptomycin, vancomycin, and penicillin was estimated against biofilm formation of Enterococcus faecalis pathogen. Result of this study did not approve great antibiofilm activity of magnetic NPs with high concentration (1688-16988 lg/mL) [95]. Pseudomonas aureuginosa and S. aureus exhibited biofilm reduction from 125 and 250 lg/mL to 1000 lg/mL amounts of Fe 3 O 4 NPs, respectively [96]. Incubation of E. coli ATCC 15224 and S. aureus ATCC 25923 for 72 h at 37 C on glass and silicon surface with Fe 3 O 4 NPs coating showed a meaningful reduction in these bacteria [97]. Unique properties of MNPs including large surface to volume ratio augment reactivity of MNPs with surrounding materials. In this way, the generation of reactive oxygen species (O 2 , OH , and H 2 O 2 ) resulted from MNPs which can disrupt the biofilm structure of bacteria [98]. Also, the shape of MNPs can be an efficient factor in biofilms and bacteria damage [72].
Conclusions
In summary, antioxidant, antiplanktonic, antibiofilm, and antimotility capacities of biosynthesized Fe 3 O 4 NPs by leaf aqueous extract of medicinal plant A. haussknechtii on three MSR bacteria E. coli ATCC 25922, S. marcescens ATCC13880 and S. aureus ATCC 43300 were surveyed as ecofriendly and efficient approach (Figure 18). There are several reports in the case of antibacterial activities of MNPs, but it has been paid less attention about antibiofilm and antimotility aspects of MNPs, specifically biosynthesized Fe 3 O 4 NPs. This study introduced a novel shape of | 8,944.6 | 2019-06-12T00:00:00.000 | [
"Environmental Science",
"Medicine",
"Materials Science",
"Chemistry",
"Biology"
] |
Merging MCMC Subposteriors through Gaussian-Process Approximations
Markov chain Monte Carlo (MCMC) algorithms have become powerful tools for Bayesian inference. However, they do not scale well to large-data problems. Divide-and-conquer strategies, which split the data into batches and, for each batch, run independent MCMC algorithms targeting the corresponding subposterior, can spread the computational burden across a number of separate workers. The challenge with such strategies is in recombining the subposteriors to approximate the full posterior. By creating a Gaussian-process approximation for each log-subposterior density we create a tractable approximation for the full posterior. This approximation is exploited through three methodologies: firstly a Hamiltonian Monte Carlo algorithm targeting the expectation of the posterior density provides a sample from an approximation to the posterior; secondly, evaluating the true posterior at the sampled points leads to an importance sampler that, asymptotically, targets the true posterior expectations; finally, an alternative importance sampler uses the full Gaussian-process distribution of the approximation to the log-posterior density to re-weight any initial sample and provide both an estimate of the posterior expectation and a measure of the uncertainty in it.
Introduction
Markov chain Monte Carlo (MCMC) algorithms are popular tools for sampling from Bayesian posterior distributions in order to estimate posterior expectations. They benefit from theoretical guarantees of asymptotic convergence of the estimators as the number of MCMC samples grows. However, whilst asymptotically exact, they can be computationally expensive when applied to datasets with a large number of observations n. Indeed, the cost of generating one sample from the MCMC algorithm is at best O(n) as the posterior distribution of the model parameters, conditional on the entire data set, must be evaluated at each MCMC iteration. For very large n, therefore, MCMC algorithms can become computationally impractical.
Research in the area of MCMC for big data can be broadly split into two streams: those which utilise one core of the central processing unit (CPU) and those that distribute the work load across multiple cores, or machines. For the single processor case, the computational cost of running MCMC on the full data set may be reduced by using a random subsample of the data at each iteration (Quiroz et al., 2014;Maclaurin and Adams, 2014;Bardenet et al., 2014); however, the mixing of the MCMC chain can suffer as a result. Alternatively, the Metropolis-Hastings acceptance step can be avoided completely by using a stochastic gradient algorithm (Welling and Teh, 2011;Chen et al., 2014), where subsamples of the data are used to calculate unbiased estimates of the gradient of the log-posterior. Consistent estimates of posterior expectations are obtained as the gradient step-sizes decrease to zero (Teh et al., 2014). While popular, subsampling methods do have the drawback that the data must be independent and the whole data set must be readily available at all times, and therefore data cannot be stored across multiple machines.
Modern computer architectures readily utilise multiple cores of the CPU for computation, but MCMC algorithms are inherently serial in implementation. Parallel MCMC, where multiple MCMC chains, each targeting the full posterior, are run on separate cores, or machines, can be easily executed (Wilkinson, 2005), however this does not address the big-data problem as each machine still needs to store and evaluate the whole data set. In order to generate a significant computational speed-up the data set must be partitioned into disjoint batches, where independent MCMC algorithms are executed on separate batches on independent processors (Huang and Gelman, 2005). Using only a subset of the entire data means that the MCMC algorithm is targeting a partial posterior, herein referred to as a subposterior. This type of parallelisation is highly efficient as there is no communication between the parallel MCMC chains. The main challenge is to then reintegrate the samples from the separate MCMC chains to approximate the full posterior distribution. Scott et al. (2013) create a Gaussian approximation for the full posterior by taking weighted averages of the means and variances of the MCMC samples from each batch; this procedure is exact when each subposterior is Gaussian, and can work well approximately in non-Gaussian scenarios. Neiswanger et al. (2013) avoid the Gaussian assumption by approximating the subposteriors using kernel density estimation, however, kernel density approximations scale poorly in high dimensions (Liu et al., 2007). Also, the upper bounds on the mean squared error given in Neiswanger et al. (2013) grow exponentially with the number of batches, which is problematic in big data scenarios where the computational benefit of parallelisation is proportional to the number of available processors.
Previous approaches used to merge the product of subposterior densities have solely relied on the parameter samples outputted from each MCMC algorithm, but have neglected to utilise the subposterior densities which are calculated when evaluating the Metropolis-Hastings ratio. We place Gaussian-process (GP) priors on the log-density of each subposterior. The resulting approximation to the log of the full posterior density is a sum of Gaussian-processes, which is itself a Gaussian-process. From this we may obtain not only a point estimate of any expectation of interest, but also a measure of uncertainty in this estimate.
Starting from this Gaussian-process approximation to the full log-posterior density, we investigate three approaches to approximating the posterior. Firstly, an efficient Hamiltonian Monte Carlo (HMC) algorithm (Neal, 2010) which targets the expectation of the posterior density (the exponential of the combined GP); samples from this provide our first means of estimating expectations of interest. Secondly, the HMC sample values may be sent to each of the cores, with each core returning the true log-subposterior at each of the sample points. Combining these coincident log-subposterior values provides the true posterior at the sampled points, which in turn provides importance weights for the HMC sample, leading to asymptotically consistent estimates of posterior expectations. The practitioner may wish to avoid the complexities and computational expense of running HMC on the expectation of the exponential of the GP and of calculating the true sub-posteriors at a sample of points. We, therefore, also consider an importance proposal based upon any approximation to the true posterior and obtain repeated samples of importance weights by repeatedly sampling realisations of the GP approximation to the log-posterior. This provides both an estimate of any expectation of interest and a measure of its uncertainty. This paper is structured as follows. Section 2 reviews the parallel MCMC approach for sampling from the posterior, the HMC algorithm and importance sampling. Section 3 then outlines the creation of our Gaussian-process approximation for each of the individual subposteriors, and for combining these. In Section 4 we detail three methods for approximating posterior expectations, each utilising the combined Gaussian-process approximation. Section 5 highlights through two toy models, and two large scale logistic regression problems, that our method offers significant improvements over competing methods when approximating non-Gaussian posteriors. We conclude with a discussion.
Bayesian inference and MCMC
Consider a data set Y = {y 1 , y 2 , . . . , y n } where we assume that the data are conditionally independent with a likelihood n i=1 p(y i |ϑ), where ϑ ∈ Θ ⊆ R d are model parameters. Assuming a prior p(ϑ) for the parameters, the posterior distribution for ϑ given Y is Alternatively, the data set Y can be partitioned into C batches {Y 1 , Y 2 , . . . , Y C } where we define a subposterior operating on a subset of the data Y c as where p(ϑ) is chosen so that p(ϑ) 1/C is proper. The full posterior is given as the product of the subposteriors π(ϑ) ∝ C c=1 π c (ϑ). In this setting we no longer require conditional independence of the data, but rather independence between the batches {Y c } C c=1 , where now the data in each batch can exhibit an arbitrary dependence structure.
Creating an approximation to the posterior, π(ϑ), commences with sampling from each of the subposteriors π c (ϑ) independently in parallel, where, given the independence between data subsets, there is no communication exchange between the MCMC algorithms operating on the subposteriors. This type of parallelisation is often referred to as embarrassingly parallel (Neiswanger et al., 2013). The challenge then lies in combining the subposteriors, for which we propose using Gaussian-process approximations.
In this paper we introduce the Hamiltonian Monte Carlo (HMC) algorithm as one possible MCMC algorithm that can be used to sample from π c (ϑ). Moreover, we use HMC in Section 4 to sample from an approximation to the full posterior, π(ϑ). Other MCMC algorithms, including the random walk Metropolis (Roberts et al., 1997), Metropolis adjusted Langevin algorithm (Roberts and Rosenthal, 1998) and adaptive versions of these (e.g. Andrieu and Thoms, 2008) can also be used.
Hamiltonian Monte Carlo
We now provide a brief overview of Hamiltonian Monte Carlo and its application in this paper; the interested reader is referred to Neal (2010) for a full and detailed review. The HMC algorithm considers the sampling problem as the exploration of a physical system with − log π(ϑ) corresponding to the potential energy at the position ϑ. We then introduce artificial momentum variables ϕ ∈ R D , with ϕ ∼ N (0, M ) being independent of ϑ. Here M is a mass matrix that can be set to the identity matrix when there is no information about the target distribution. This scheme now augments our target distribution so that we are now sampling (ϑ, ϕ) from their joint distribution the logarithm of which equates to minus the total energy of the system. Samples from the marginal distribution of interest, π(ϑ), are obtained by discarding the ϕ samples. We can sample from the target distribution by simulating ϑ and ϕ through fictitious time τ using Hamilton's equations (see Neal (2010) for details) The differential equations in (4) are intractable and must be solved numerically. Several numerical integrators are available which preserve the volume and reversibility of the Hamiltonian system (Girolami and Calderhead, 2011), the most popular being the leapfrog, or Stormer-Verlet integrator. The leapfrog integrator takes L steps, each of size , on the Hamiltonian dynamics (4), with one step given as follows: Using a discretisation introduces a small loss or gain in the total energy, which is corrected through a Metropolis-Hastings accept/reject step. The full HMC algorithm is given in Algorithm 3 in Appendix A.
The HMC algorithm has a step-size parameter and number of leap frog steps L which need to be tuned. The performance of the algorithm is highly dependent on the tuning of the parameters. One way to tune the algorithm is to optimise the parameters such that the acceptance rate is approximately 65% (Beskos et al., 2013). Alternatively, the parameters could be adaptively tuned ; for this paper we use the popular NUTS sampler Hoffman and Gelman (2014), which tunes the trajectory length L to avoid the sampler doubling back on itself. The HMC algorithm can be efficiently implemented using the popular STAN 1 software package. The STAN modelling language automatically tunes the HMC algorithm, and by using efficient automatic differentiaion, the user need only express their posterior model.
Importance sampling
A popular alternative to MCMC for estimating posterior expectations is the importance sampler (Robert and Casella, 1999). Given a proposal density, q(θ), and an unnormalised posterior density, π(θ), importance sampling (e.g. Geweke, 1989) aims to estimate expectations of some measurable function of interest, h(θ) by sampling from q. The starting point is where w(θ) := π(θ)/q(θ) and Z := π(θ)dθ is the normalisation constant. Consider a sequence, {θ i } ∞ i=1 with marginal density q. Provided that a strong law of large numbers (SLLN) applies, setting h(θ) = 1 in the above equation implies thatẐ N := 1 N N i=1 w(θ i ) → Z, almost surely, and hence, almost surely, where w N (θ) := w(θ)/Ẑ N . In Section 4 we will use importance sampling to estimate expectations with respect to the combined posterior distribution.
3 A Gaussian-process approximation to posterior 3.1 Gaussian-process approximations to the subposteriors Parallelising the MCMC procedure over C computing nodes results in C subposteriors {π c (ϑ)} C c=1 . From each subposterior, c, where the MCMC algorithm has been iterated J times, we have where each pair consists of a sample from the Markov chain with its associated log-subposterior, up to some fixed additive constant. We wish to convert this limited information on a finite set of points to information about log π c over the whole support of ϑ. We therefore treat the whole log-subposterior (up to the same additive constant), L c (ϑ), as random with a Gaussian-process prior distribution: where m : ϑ → R and K : ϑ × ϑ → R are, respectively, the mean and covariance functions. For computational convenience, we further assume that the log-subposteriors are independent of each other. We model log π c (ϑ) rather than π c (ϑ), so that our approximation to the overall log-posterior will be a sum of Gaussian-process (Section 3.2); modelling the log-posterior also avoids the need for non-negativity constraints when fitting the GP 2 .
The mean function and covariance function are chosen by the user. A mean function of zero, m(ϑ) = 0, would be inappropriate in this setting as our prior must be the logarithm of a probability density function up to a finite additive constant. We ensure that exp {L c (ϑ)} dϑ < ∞ almost surely through a quadratic mean function of the form Here V is the empirical covariance of ϑ obtained from the MCMC sample and β i , (i = 0, 1, 2) are unknown constants.
The covariance function K(·, ·), determines the smoothness of the log-subposterior, which we shall assume is continuous with respect to ϑ. A popular choice is the squared-exponential function (e.g. Rasmussen and Williams, 2006) where Λ is a diagonal matrix and ω are hyperparameters. In this paper we analytically marginalise β 0 and β 1 (O' Hagan, 1978) and estimate β 2 and the kernel hyperparameters through maximum likelihood (details given in Chapter 5 of Rasmussen and Williams (2006)). We have found that our choice of mean and covariance function work well in practice, however, alternative functions can be applied and may be more appropriate depending on the characteristics of the log-subposterior. Given the choice of prior, D c are observations of this Gaussian-process, leading to a posterior distribution, Define L c (ϑ 1:J ) := {L c (ϑ 1 ), . . . , L c (ϑ J )} and, for some parameter, or parameter vector, θ := θ 1: with, and whereK = K(ϑ 1:J , ϑ 1:J ), K * , * = K(θ 1:N , θ 1:N ) and K * = K(ϑ 1:J , θ 1:N ). The posterior distribution for the GP, L c (θ 1:N )|D c , is a random approximation of the logsubposterior surface.
Merging the subposteriors
Our next goal is to approximate the full posterior π(θ) ∝ C c=1 π c (θ) by merging the subposteriors together. The approximation of each of the C subposteriors as independent Gaussianprocesses, L c (θ) ∼ GP(·, ·) (c = 1, . . . , C) leads directly to the approximation of the full log-posterior (up to an additive constant) as the sum of C Gaussian-processes, Our assumption that the Gaussian-processes representing the log-subposteriors {L c } C c=1 are independent a priori was made for computational convenience. This may not be true in practice since gross deviations from the quadratic prior mean, m c (ϑ), such as any skewness, may be repeated across subposteriors. However, a posteriori these gross deviations should be accounted for through the posterior mean µ c (ϑ). Variability in the original partitioning of the data into batches, and variability in the sample points, ϑ 1:J , across batches will both contribute to the more subtle variations of the GPs about their individual posterior means, so that the posterior correlation should be much smaller than the prior correlation.
Illustration
The Gaussian-process subposterior, provides estimates of the uncertainty in the log-subposterior at points, θ, where the log-subposterior has not been evaluated. This contrasts with current approaches (e.g. Scott et al., 2013;Neiswanger et al., 2013;Wang and Dunson, 2013) to approximate the subposterior which give no measure of uncertainty. This is illustrated below and used in Sections 4.3 and 5.3 to gauge the uncertainty in our estimates of posterior expectations.
To illustrate how the sum of Gaussian-processes is used to approximate L(θ) we sample n = 20, 000 data points from a Normal(θ,1.5 2 ) distribution with θ = 2 and evenly split the data across C = 2 processors. Independent HMC algorithms are run on each subposterior targeting the posterior for θ. A Gaussian-process approximation, as shown in Figure 1, is fitted to each of the subposteriors, where the blue line is the GP mean and the blue band gives a 95% confidence interval of the uncertainty in the approximation at unobserved regions of the parameter space. Using (11), the approximation to the full posterior is given by summing the means and covariances of the Gaussian-process approximations to the subposteriors.
Approximating the full posterior
We now detail three methods for approximating posterior expectations, all of which utilise our Gaussian-process approximation to the full posterior density.
The expected posterior density
Here we approximate the full posterior density (up to an unknown normalising constant) by its expectation under the Gaussian-process approximation: using the properties of the log-Normal distribution. If the individual GPs provide a good approximation to the individual log-subposteriors, then E [L(θ)] will be a good approximation to the full log-posterior. The HMC algorithm then provides an efficient mechanism for obtaining an approximate sample, Evaluating the GP approximation at each iteration of this MCMC algorithm is significantly faster than evaluating the true full posterior, π(θ), directly. As is apparent from the leapfrog dynamics, HMC requires the gradient of log π E , and here the tractability of our approximation is invaluable, since Given a sufficiently large sample fromπ E , approximations of posterior expectations can be highly accurate if the individual GPs provide a good approximation to the log-subposteriors. Moreover, the approximationπ E (θ) to the full posterior can be further improved by using importance sampling on the true posterior.
Distributed importance sampling
Unlike the proposal, q, in Section 2.2, samples generated from the HMC algorithm represent an approximate, correlated sample from an approximation to the true posterior, instead of exact, independent samples from an approximation. Nonetheless, we may still correct for inaccuracies inπ E using importance sampling while spreading the computational burden across all C cores. The full sample from the HMC algorithm targetingπ E , {θ i } N i=1 , is sent to each of the C cores. Each of the C cores then evaluates the true subposterior at each θ i . A single core then combines the subposterior densities for each θ i to provide the full true posterior density:
To be clear, each sub-posterior is evaluated at the same set of θ values, allowing them to be combined exactly. In contrast, the original HMC runs, performed on each individual subposterior, created a different set of θ values for each subposterior so that a straightforward combination was not possible.
Since the unknown normalising constants for both π andπ E appear in both the numerator and the denominator of this expression, they are not needed. Almost sure convergence ofÊ N (h) to E π [h(θ)] as the HMC sample size, N → ∞ relies on the strong law of large numbers (SLLN) for Markov chains (e.g. Tierney, 1996, Theorem 4.3). In addition, if desired, an unweighted approximate sample from π may be obtained by resampling θ i with a probability proportional w i .
We expect our HMC importance proposal to be especially efficient, since it mimics the true posterior. However, other proposal distributions based on competing algorithms for merging subposteriors (e.g. Scott et al., 2013;Neiswanger et al., 2013;Wang and Dunson, 2013) can be used instead; these are compared in Section 5. Algorithm 1 describes this general distributed importance sampler.
Gaussian-process importance sampler (GP-IS)
Finally, we present an importance sampler that uses the full posterior distribution of L, the GP approximation to the full unnormalised log-posterior conditional on {ϑ c,j , π c (ϑ c,j )} C,J c=1,j=1 . Compared with the importance sampler in Section 4.2, the set of points {θ i } N i=1 is generated from a simple proposal distribution, rather than the HMC algorithm applied toπ E . Moreover, given the set of points {θ i } N i=1 the computationally-expensive evaluation of each subposterior at this set of values is replaced with repeated, but relatively cheap sampling of realisations of L at these points. For a fixed number of GP training points, J, estimates of posterior expectations are no-longer asymptotically exact in N , however estimates of the uncertainty in these estimates are also supplied.
As in Sections 4.2 and 2.2 we are interested in I h := E π [h(θ)] = 1 Z π(θ)h(θ)dθ. Here we consider approximating this with where is a realisation of L from the distribution in (11) As an alternative, robust, point estimate, the median of {I h ( m )} M m=1 would target the posterior median. Other posterior summaries for I h , such as a 95% credible interval, could also be estimated from the sample.
Unfortunately it is not possible to store the infinite-dimensional object, ; and even if it were, for moderate dimensions, numerical evaluation of I h ( ) would be computationally infeasible. Instead we use importance sampling. Consider a proposal distribution q(θ) that approximately mimics the true posterior distribution, π(θ) and sample N independent points from it: θ 1:N := (θ 1 , . . . , θ N ). For each m ∈ {1, . . . , M } we then sample the finite-dimensional object ( m (θ 1 ), . . . , m (θ N )) from the joint distribution of the GP in (11). For each such realisation we then construct an approximation to the normalisation constant and to I h ( ): for posterior inference on I h . For the specific case of I E h a simplified expression for the approximation may be derived: Algorithm 2 creates point estimates based upon this. The proposal density q(θ i ) should be a good approximation to the posterior density. To create a computationally cheap proposal, and with a similar motivation to the consensus Monte Carlo approximation (Scott et al., 2013), we make q(θ i ) a multivariate Student-t distribution on 5 degrees of freedom with mean and variance matching those of the Gaussian posterior that would arise given the mean and variance of each subposterior and if each sub-posterior were Algorithm 2 GP Importance Sampler Input: GP approximation L(θ) and proposal distribution q(θ).
-Weight the samples according to (13).
Gaussian, Alternatively, it would be possible to use the output from the HMC algorithm of Section 4.1 in an analogous manner to the way it is used in Section 4.2. Many aspects of our importance sampler can, if necessary, be parallelised: in particular, calculating µ c (θ 1:N ) and Σ c (θ 1:N ), and then sampling 1 , . . . , m and obtaining the sample
Computational cost
We briefly review some of the notation in the paper as a point of reference for this section.
• N := # samples drawn from the approximation to the merged posteriorπ E (θ), or, for GP-IS, from the Student-t proposal.
The overall computational cost of applying the methods in Sections 4.1 and 4.2 to create an approximate (weighted) sample from the full posterior can be summarised in three (four) steps: -Run MCMC on each subposterior (see Section 2). This step is common to all parallel MCMC algorithms (e.g. Scott et al., 2013;Neiswanger et al., 2013;Wang et al., 2015) and has a cost of O(Jn/C).
-Fit GP to each subposterior (see Section 3). Fitting a Gaussian-process to each subposterior has a cost of O(J 3 ) due to the inversion of the J × J matrixK. One of the drawbacks of Gaussian-processes is the computational cost. Faster, approximate Gaussian-processes, referred to as sparse GPs (e.g Csató and Opper, 2002;Seeger et al., 2003;Quiñonero-candela et al., 2005;Snelson and Ghahramani, 2006) can be used to reduce the computational cost. 3 In this paper we apply the simpler speed-up technique of first thinning the subposterior Markov chain; for example, using only every twentieth sample. The thinned Markov chain has the same stationary distribution as the full chain, but the autocorrelation is reduced and, more importantly for us, the sample contains fewer points. Secondly we remove the duplicate samples from the subposterior as they provide no extra information for the GP approximation, and cause the kernel matrixK to become singular. Fitting C independent GPs to each of the subposteriors is embarrassingly parallel as the MCMC output from each subposterior is stored on a separate core.
-Perform HMC onπ E (see Section 4.1). Each iteration of the HMC algorithm requires an evaluation of µ c and Σ c from (10) with N = 1, and multiple evaluations of the gradient terms given in Section 4.1. SinceK −1 has already been calculated, the total cost over all N iterations of the HMC algorithm is O(N J 2 ). The cost of this step is equivalent to competing algorithms including (Neiswanger et al., 2013;Wang and Dunson, 2013), which also use an MCMC-type step to sample from the approximation to the posterior.
Experiments
In this section we compare our Gaussian-process algorithms for aggregating the subposteriors against several competing algorithms: • Consensus Monte Carlo (Scott et al., 2013), where samples are weighted and aggregated.
• Weierstrass rejection sampler 5 (Wang and Dunson, 2013), where the nonparametric density estimates are passed through a Weierstrass transform to give the merged posterior.
We consider four interesting examples: a Bernoulli model with rare events which leads to a skewed posterior, a mixture of Laplace distributions which only becomes identifiable with a large amount of data, and two logistic regression models for large data sets. These examples highlight some of the challenges faced by merging non-Gaussian subposteriors and the computational efficiency of large-scale Bayesian inference.
Our Gaussian-process approximation method is implemented using J = 100 samples from the thinned chain for each subposterior to fit the GPs for the Bernoulli and multimodal examples; for the logistic regression examples J = 500. Both for our methods and for comparator methods, N = 5000 samples from each merged posterior are created. To ensure a fair comparison, the sample from eachπ E that is used both directly and in our DIS algorithm is the unthinned output from the HMC run. The Student-t proposals for the Gaussian-process importance sampler are iid.
Weighted samples from DIS and GP-IS are converted to unweighted samples by resampling with replacement, where the probability of choosing a given θ is proportional to its weight.
For each of the models studied in this section we denote the true parameter values by θ * (when known). We obtain an accurate estimate of the true posterior from a long MCMC run, thinned to a size of N , with samples denoted θ f and the true posterior mean and variance m f and V f , respectively. Samples from the approximation are denoted θ a , and their mean and variance are m a and V a . We use the following metrics to compare the competing methods: • Kullback-Leibler divergence for the Bernoulli and mixture example is calculated using a nearest neighbour search 6 and for the logistic regression example, approximate multivariate Gaussian Kullback-Leibler divergence (see Wang and Dunson (2013) for details) between the true posterior π and aggregated posteriorπ is calculated as Wang et al., 2015), which gives a measure for the posterior spread around the true value θ * (ρ=1 being ideal). 4 Implemented using the parallelMCMCcombine R package 5 Implemented using the authors R package https://github.com/wwrechard/weierstrass 6 Implemented using the FNN R package • Mean absolute skew deviation, η = 1 is the third standardised moment, and the superscripts f and a denote empirical approximations obtained from the samples obtained using the true posterior and the approximation, respectively.
Rare Bernoulli events
The consensus Monte Carlo approach of Scott et al. (2013) is the optimal algorithm for merging the subposteriors when each subposterior is Gaussian. A popular example where this algorithm struggles is the Bernoulli model with rare events, where the subposterior distributions are skewed (e.g. Wang et al., 2015;Wang and Dunson, 2013;Scott et al., 2013). We sample n = 10, 000 Bernoulli random variables y i ∼ Bern(ϑ) and split the data across C = 10 processors. We set ϑ = C/n so that the probability of observing an event is rare. In fact, each subset only contains 1 success on average. A Beta(2, 2) prior distribution is assumed for ϑ. Figure 2 and Table 1 give the results of merging the subposteriors for the various algorithms. Both GP-HMC and GP-IS samplers produce good approximations to the posterior. All of the competing algorithms can reasonably identify the mode of the posterior, but do not adequately fit the tail of the distribution. The consensus algorithm gives a reasonable approximation to the body of the density but struggles to capture the posterior skew. The nonparametric method appears to perform the worst in this setting; this could be improved by hand-tuning the bandwidth parameter. However, doing so assumes that the full posterior is known, which is not the case in practice.
We can generate samples from the full posterior using the distributed importance sampler (Alg. 1), where samples from each of the aggregation methods can be used as a proposal. Figure 2 (right panel) shows that using the DIS improves the accuracy of all of the competing methods. This improvement is most noticeable for the consensus and nonparametric approximations. Ultimately, the overall accuracy of the approximation to the full posterior will dependent on the quality of the proposal distribution.
Multimodal subposteriors
We create a concrete data scenario that could lead to a set of multimodal sub-posteriors similar to the artificial, perturbed multimodal sub-posteriors used in Wang et al. (2015). The example is a toy representation of a general situation where one or more parameters of interest are poorly identified, but as the size of the dataset increases towards 'large' n, the parameters start to become identifiable. The subposteriors are multimodal but the full posterior is unimodal (see Figure 3, left panel). We sample n = 1, 000, 000 observations from a mixture of two Laplace distributions If β 1 = β 2 then the mixture components are non-identifiable, however, by setting β 1 = 1.01 and β 2 = 0.99 the parameter of interest, θ can be identified from a sufficiently large dataset. For this experiment θ = 0.05 and the data are split equally over C = 25 processors. The scale parameters, β 1 and β 2 are fixed at their true values and a N (0, 1) prior is assumed for θ. The left panel of Figure terior reveals, approximately, the true value for θ, whereas θ is poorly identified by each of the subposteriors. The multimodality of the subposteriors results in a poor posterior approximation from the consensus Monte Carlo algorithm (right panel) as each subposterior is assumed to be approximately Gaussian. On the other hand, most of the nonparametric methods are able to capture the approximate shape of the posterior, but fail to correctly detect the posterior mode. Table 2 shows that the DIS step can slightly degrade the quality of the approximation if the proposal (e.g. semiparametric) under-represents the tail behaviour of the true posterior. As shown in Figure 3 (right panel), the GP-HMC and GP-IS samplers produce good approximations to the full posterior and unlike the nonparametric methods, the GP approximations concentrate around the posterior mode (see ρ in Table 2).
Logistic regression
Synthetic data set We use a synthetic data set on internet click rate behaviour where one of the covariates is highly predictive, but rarely observed. The data set has n = 10, 000, with 5 covariates and is identical to that in Section 4.3 of Scott et al. (2013). Partitioning the data across 10 machines means that only a few machines carry an observation for the highly predictive covariate, leading to some skewed subposterior distributions. The consensus Monte Carlo algorithm struggles in this scenario because, even though the marginal of the full posterior for the coefficient of the rarely observed covariate is approximately Gaussian, the subposteriors are not. We also compare the standard set-up of the merging algorithms against their DIS equivalents and find that DIS improves the accuracy of the approximation. The improvement brought about by DIS applied to our GP-HMC sampler is relatively small becauseπ E is already an accurate approximation to the posterior. Nonetheless, as can be seen with the consensus approximation, the DIS step can lead to a significant improvement if the approximation produced by the merging algorithm alone is poor. The nonparametric methods begin to struggle on this example. This is partly due to the difficulty of tuning these methods, and as discussed in Wang and Dunson (2013), these methods begin to struggle as the dimension of the parameter space increases.
Real data set We conduct parallel MCMC experiments on the Hepmass 7 data set. The challenge here is to accurately classify the collisions of exotic particles by separating the particleproducing collisions from the background source. The full data set contains 10.5 millions instances with 28 attributes representing particle features. In our experiments we use the first million instances and discard the mass attribute in our model fit. The data is divided equally across C = 20 machines. The results from Table 4 show that all methods approximate the full posterior with more or less the same level of accuracy, with the exception of the nonparametric method. As discussed in Neiswanger et al. (2013), nonparametric methods scale poorly with dimension with the Weierstrass and semiparametric algorithms performing better than the simple nonparametric method. The posterior concentration ratio is not reported as this is close to one for all methods. The subposteriors are approximately Gaussian and as a result all methods, including the consensus Monte Carlo algorithm, produce accurate approximations to the full posterior. As a result, and with the exception of the nonparametric method, applying the DIS step does Table 5: Expectation and variance of θ 1 and θ 17 from the logistic regression model with the HEPMASS dataset. Mean, Median and quantile estimates of the quantities are calculated from 500 samples from the GP-IS sampler (i.e. M=500).
not lead to a significant improvement in the approximation. As described in Section 4.3, the GP-IS sampler draws multiple realisations from the posterior distribution of the GP approximation to the posterior. Each of these realisations provides an estimate of the expectation of interest, their centre (mean or median) provides a point estimate and their spread (2.5% and 97.5% quantiles) provide a measure of the uncertainty. In Table 5 we estimate the posterior mean and variance of two parameters and compare these estimates against the truth, as calculated from an MCMC run on the full posterior. Sampling M = 500 realisations from GP we report the mean, median and 95% CI for estimates of the mean and variance and find that these results are consistent with the truth.
Discussion
Aggregating subposteriors generated through parallel, independent MCMC simulations, to form the full posterior distribution is challenging. Currently, available methods either produce a Gaussian approximation to the posterior, or utilise nonparametric estimators which are difficult to tune and do not scale well to high-dimensional settings. In this paper we have presented an alternative approach to this problem by directly modelling the log-density of the subposteriors. Using Gaussian-process priors we were able to employ a fully Bayesian strategy towards approximating the full posterior, and unlike competing methods, we were able to account for the uncertainty in the approximation.
Compared to the nonparametric methods, fitting the Gaussian-processes is straightforward using a mixture of marginalisation and maximum likelihood techniques for the hyperparameters. The main drawback of using Gaussian-process approximations is the computational cost. We have reduced the computational cost by, for each subposterior sample, thinning the Markov chain and removing duplicate MCMC samples prior to fitting the GP. We have shown that using only a small number of samples from the subposterior we can accurately approximate the full posterior. Furthermore, the computationally intensive step of fitting the individual GPs to the subposteriors is automatically parallelised, as the subposteriors are independent by definition and the GPs are independent by design.
The algorithms we propose scale well with the number of data points n, but fitting a GP when the dimension, d, of θ is high can be computationally expensive as the number of input points required to produce an accurate approximation grows exponentially with d. An extension to this work would be to employ sparse GP approximations to reduce the computational expense for high-dimensional problems. | 8,649.2 | 2016-05-27T00:00:00.000 | [
"Computer Science"
] |
Comparison of skin biopsy sample processing and storage methods on high dimensional immune gene expression using the Nanostring nCounter system.
Background Digital multiplex gene expression profiling is overcoming the limitations of many tissue-processing and RNA extraction techniques for the reproducible and quantitative molecular classification of disease. We assessed the effect of different skin biopsy collection/storage conditions on mRNA quality and quantity and the NanoString nCounter™ System’s ability to reproducibly quantify the expression of 730 immune genes from skin biopsies. Methods Healthy human skin punch biopsies (n = 6) obtained from skin sections from four patients undergoing routine abdominoplasty were subject to one of several collection/storage protocols, including: i) snap freezing in liquid nitrogen and transportation on dry ice; ii) RNAlater (ThermoFisher) for 24 h at room temperature then stored at − 80 °C; iii) formalin fixation with further processing for FFPE blocks; iv) DNA/RNA shield (Zymo) stored and shipped at room temperature; v) placed in TRIzol then stored at − 80 °C; vi) saline without RNAse for 24 h at room temperature then stored at − 80 °C. RNA yield and integrity was assessed following extraction via NanoDrop, QuantiFluor with RNA specific dye and a Bioanalyser (LabChip24, PerkinElmer). Immune gene expression was analysed using the NanoString Pancancer Immune Profiling Panel containing 730 genes. Results Except for saline, all protocols yielded total RNA in quantities/qualities that could be analysed by NanoString nCounter technology, although the quality of the extracted RNA varied widely. Mean RNA integrity was highest from samples that were placed in RNALater (RQS 8.2 ± 1.15), with integrity lowest from the saline stored sample (RQS < 2). There was a high degree of reproducibility in the expression of immune genes between all samples with the exception of saline, with the number of detected genes at counts < 100, between 100 and 1000 and > 10,000 similar across extraction protocols. Conclusions A variety of processing methods can be used for digital immune gene expression profiling in mRNA extracted from skin that are comparable to snap frozen skin specimens, providing skin cancer clinicians greater opportunity to supply skin specimens to tissue banks. NanoString nCounter technology can determine gene expression in skin biopsy specimens with a high degree of sensitivity despite lower RNA yields and processing methods that may generate poorer quality RNA. The increased sensitivity of digital gene expression profiling continues to expand molecular pathology profiling of disease.
Introduction
Molecular profiling of tissue for insight into mechanisms of disease, stratification of individuals for disease risk and to monitor therapeutic responses is rapidly increasing due to advances in technology. Driven by high-throughput molecular technology, such as digital sequencing, there is a growing body of molecular biomarker data across cancer phenotypes that aim to allow for personalised medical approaches that minimise unnecessary treatment.
A key consideration in molecular biomarker analysis is the need to extract high quality RNA from tissue samples [1]. The cross-linking of nucleic acids to proteins and other cellular components, such as in formalin fixation, makes the extraction of high-quality RNA difficult [2] . In recent years, the development of the NanoString nCounter platform, which utilises direct, digital quantitation of mRNA transcripts via hybridisation to colourcoded sequence specific probes, has overcome the limitations associated with detecting nucleic acid targets at all levels of biological expression [3]. The ability to multiplex targets reproducibly from RNA extracted from formalin fixed paraffin embedded (FFPE) samples has provided greater avenues for molecular research, particularly for clinicians located at sites not located near pathology or research facilities.
Various methods are also available for RNA protection, such as with TRIzol [4] or RNAlater, to overcome challenges with low quantity or low quality mRNA derived from FFPE samples. Given that mRNA quality and concentration impacts data quality, it is necessary to optimise collection/storage techniques for the sample processing [5]. Reliable and reproducible methods of obtaining sufficient amounts of high-quality RNA from tissue remain a challenge for biomarker studies, in particular studies involving skin samples. Skin biopsies are recognised to be difficult samples to achieve consistently high-quality RNA [6]. Investigations with the nCounter technology indicate the ability to measure mRNA with low yield and sub-optimal RNA quality. In this study we compared the impact of six tissue-processing methods on skin biopsies total RNA yield/integrity and the multiplex gene expression using the NanoString nCounter analysis system.
Methods
This was a comparison of immune gene expression from six skin tissue biopsy RNA extraction methods collected from three healthy patients undergoing abdominoplasty, with biopsies 3 and 4 collected from the same patient. All six methods were performed on abdominoplasty tissue collected from each person. Following excision of tissue, six 4 mm biopsies were collected with standard techniques. The study was conducted under approval from the Griffith University Human Research Ethics Committee and the United HealthCare Human Research Ethics Committee (HMR/05/15/HREC).
Tissue processing and storage
Following collection of the six skin biopsies from tissue from each patient the following storage and transport procedures were used: i) snap freezing in liquid nitrogen and transportation on dry ice; ii) RNAlater (Thermo-Fisher Scientific, Waltham, MA, USA) for 24 h at room temperature then stored at − 80°C; iii) formalin fixation and storage of FFPE blocks at room temperature; iv) DNA/RNA Shield (Zymo, Irvine, CA, USA) stored and shipped at room temperature; v) placed in TRIzol (ThermoFisher Scientific, Waltham, MA, USA) then stored at − 80°C; vi) 0.15 ml saline without RNAse for 24 h at room temperature then stored at − 80°C. First homogenization of skin biopsies using ZR BashingBead Lysis Tubes (Zymo) and Tissue Lyser II (Qiagen) was unsuccessful, therefore it was re-done using gentle-MACS octo and M tubes (Miltenyi Biotec). For the samples processed with liquid nitrogen, saline and RNAlater, RNA was extracted using the Maxwell® RSC simplyRNA Tissue Kit (Promega, Madison, USA). For the FFPE samples the RNeasy® mini kit (QIAGEN, Hilden, Germany) and ReliaPrep™ FFPE Total RNA Miniprep System (data is not shown) were used for RNA extraction. From samples in TRIzol RNA was extracted using the Direct-Zol™ RNA kit (Zymo, Irvine, CA, USA) while the Quick RNA™ Miniprep Kit (Zymo, Irvine, CA, USA) was used for extraction of RNA from DNA/RNA shield (Zymo, Irvine, CA, USA) stored biopsies. After isolation RNA samples were aliquoted and stored at − 80°C until further analysis.
RNA yield and integrity
RNA extraction was performed in an RNAse-free environment following the manufacturer's protocol for each kit. The concentration of extracted RNA (ng/μL) was assessed using three different methods: i) UVspectrophotometry (NanoDrop, ThermoScientific); ii) LabChip24 with Standard and Pico sensitivity RNA reagents (PerkinElmer); iii) Quantifluor direct RNA dye (Promega). A 260 / A 280 ratio was measured with the NanoDrop 1000 UV-Vis spectrophotometer (Thermo-Scientific, Massachusettes, United States) with an A 260 / A 280 ratio > 1.9 considered an indicator of pure RNA. RNA quality score (RQS) was calculated by a LabChip 24 bioanalyzer (PerkinElmer). Based on data using RNA Pico Sensitivity Reagent Kit, all RNA samples except LN1, LN2, RL1 and RL2 were concentrated using the Zymo RNA Concentrator kit (Zymo). After concentration, RNA was assessed using Quantifluor direct RNA dye (Promega) and LabChip 24 RNA Pico Sensitivity Reagent Kit (PerkinElmer).
NanoString gene expression analysis
Immune gene expression analysis was undertaken using the NanoString nCounter analysis system (NanoString Technologies, Seattle, WA) using the commercially available nCounter PanCancer Immune Profiling panel kit. The PanCancer Immune profiling panel contains n = 730 genes of key inflammatory pathways and n = 40 reference/housekeeping genes. The manufacturer's protocol was followed with small modification in that 300 ng of total RNA extracted from skin biopsies was hybridised with probes at 65°C for 24 h. Samples were processed on the NanoString Prep Station and the target-probe complex was immobilised onto the analysis cartridge. Cartridges were scanned by the nCounter Digital Analyser for digital counting of molecular barcodes corresponding to each target at 280 fields of view.
Data approach
Gene expression data was analysed using the Advanced Analysis Module in the nSolver™ Analysis Software version 4.0 from NanoString Technologies (NanoString Technologies, WA, USA) and TIGR Multi-Experiment Viewer (http://mev.tm4.org). The Advanced Analysis Module enables quality control (QC), normalisation, cluster analysis, differential gene expression (DGE), Pathview Plots and immune cell profiling. Raw data was normalised by subtracting the mean plus one standard deviation of eight negative controls while technical variation was normalised through internal positive controls. Data was corrected for input volume via internal housekeeping genes using the geNorm algorithm. Immune cell scores were determined using cell specific gene expression from The Cancer Genome Atlas (TCGA) as detailed in [7,8]. A Pearson correlation was used to determine degree of similarity of gene expression counts with significance accepted at p < 0.001.
Yield and integrity of extracted RNA
The average concentrations of extracted RNA for each processing method is shown in Table 1. RNA could be extracted from all samples, although the concentration and quality varied widely between and within processing methods. We found that in samples from one patient (set 3 and 4) stored in liquid nitrogen, RNAlater and saline RNA extraction did not yield enough RNA for nCounter Nanostring assay. RNA extracted from FFPE samples exhibited the most consistent concentrations and RQ scores while RNA / DNA shield resulted in consistent RQ scores but variable concentrations. RNA yield from biopsies stored in Liquid nitrogen, RNAlater and TRIzol of same participant was very low (set 3 and 4). We considered RNA concentration data assessed by UV-spectrophotometry (NanoDrop 1000, ThermoScientific) as unreliable for use with nCounter Nanostring system.
Immune gene expression
Counts for genes above background threshold, below 100, between 101 and 1000 and above 1000 by sample are shown in Table 2. Total RNA extracted from FFPE, LN and RNAlater returned the highest gene expression counts above background threshold levels (the geometric mean of the negative control samples). All samples showed similar counts at expression levels > 1000. The similarity across samples is depicted in Fig. 1, which is a heatmap from an unsupervised clustering of the 730 genes included in the PanCancer Immune Profiling panel. On average the FFPE samples had higher gene expression counts than total RNA extracted from samples using other protocols. There was a high correlation coefficient in immune gene expression counts between the tissue processing and RNA extraction methods (r =0 .88-0.97; p < 0.001).
Discussion
The role of molecular profiling in pathology to classify disease was recognised in 2014 through the formalisation of an informatics subdivision within the Association for Molecular Pathology given the growing use of high throughput quantitative data to deliver health care [9]. A recognised limitation to the generation of high-quality omics data is RNA yield and quality [6]. This study compared total RNA yield and quality on immune gene expression from healthy skin biopsies across six tissue processing/storage protocols. All protocols yielded RNA quantities with wide ranges of quality and concentration metrics. Skin tissue is recognised to be difficult to reliably extract high quality mRNA as a result of suboptimal biopsy procedures not yielding sufficient quantity of tissue, RNase activity and the nature of the collagen matrix [6]. Recent studies highlight the difficulties of obtaining sufficient RNA from skin even with the latest extraction techniques [6]. In our investigation, formalin fixation and storage in RNA / DNA shield yielded the most consistent quality scores across all samples. Importantly, all processing methods except saline storage (RNA degradation) were compatible with Nano-String nCounter analysis, highlighting the versatility of this hybridisation-based application to overcome the limitations of extraction protocols for undertaking molecular profiling. This versatility provides researchers and pathologists with simpler options to collect and store biological samples for more comprehensive classification of disease.
Variation in RNA quality results in inaccurate and misleading changes in molecular profiling, underpinning the need for reliable and reproducible protocols for the processing of tissue and extraction of RNA [10]. Numerous studies have compared extraction kits for the isolation of nucleic acids from FFPE tissue, with key factors to consider listed for researchers prior to undertaking experimental processes [1,11]. While DNA/RNA Shield yielded similar quality scores to RNA extracted from FFPE tissue, there was substantial variation in the total yield of RNA. The highest quality RNA was obtained from the samples stored in RNAlater, although there was a high degree of variation in the quality scores and RNA concentration from samples utilising this protocol. Overall, FFPE samples appear to provide the most consistent RNA quality scores and yields.
The NanoString nCounter Analysis system has been one of the latest advances in genomic technology for molecular profiling. As a hybridisation-based system, the technology eliminates the need for amplification bias common to PCR for direct counting of molecular transcripts. Research has demonstrated that the NanoString System is able to quantify transcripts from total RNA of lower quality and quantity, potentially providing researchers with additional options for the collection of tissue for molecular profiling. We utilised the PanCancer Immune Profiling kit to undertake broad-based molecular profiling of mRNA extracted from tissue using the various tissue processing techniques. The technology had high sensitivity of target detection across the sample set even at lower quality scores and yields, which is consistent with previous research [3,12]. Absolute gene expression counts were similar across the various skin tissue processing and RNA extraction protocols. Our data highlight the utility of the system for use with a range of tissue processing and RNA extraction protocols. This gives primary care physicians, researchers and pathologists, particularly in locations without access to liquid nitrogen facilities, greater flexibility to collect skin samples for the molecular classification of disease, particularly in oncology, aging, the endotypes of atopic dermatitis and other hypersensitivity reactions [13]. Provided consistency in the use of these methods by protocol, this gives researchers and primary care skin clinicians a wide variety of options to undertake molecular profiling of biological samples.
In conclusion, our study shows that several tissue processing and extraction techniques successfully isolate RNA for analysis using high throughput digital counting. We observed substantial variation in the quality and yield of these techniques, with tissue stored in FFPE blocks providing the most consistent yield and quality scores in all participants. We note a number of limitations, in particular the small number of samples per protocol, that each processing method utilised a different RNA extraction method, that the results relate to skin samples only and that these samples were fresh tissue not older samples so caution should be taken in extrapolating these results. Many of these limitations are consistent with clinical research and increases the ecological validity of the results for research and pathology purposes. Despite the variation and quality of mRNA, the NanoString nCounter analysis system was able to quantify 730 genes across protocols with a high degree of similarity, highlighting the benefits of hybridisationbased technology for molecular profiling. Fig. 1 A hierarchical cluster heatmap of the 730 immune genes by group. With the exception of FFPE which shows higher immune gene expression, the groups show similar gene expression counts. Each row is a gene and each column a group. Green is low expression and red is high expression. Immune gene expression from samples stored in saline are not included. LN-liquid nitrogen; RL -RNAlater; FFPEformalin fixed paraffin embedded; TRtrizol; RS -RNA / DNA shield | 3,530.8 | 2020-05-15T00:00:00.000 | [
"Medicine",
"Biology"
] |
High-density diffuse optical tomography for imaging human brain function
This review describes the unique opportunities and challenges for noninvasive optical mapping of human brain function. Diffuse optical methods offer safe, portable, and radiation free alternatives to traditional technologies like positron emission tomography or functional magnetic resonance imaging (fMRI). Recent developments in high-density diffuse optical tomography (HD-DOT) have demonstrated capabilities for mapping human cortical brain function over an extended field of view with image quality approaching that of fMRI. In this review, we cover fundamental principles of the diffusion of near infrared light in biological tissue. We discuss the challenges involved in the HD-DOT system design and implementation that must be overcome to acquire the signal-to-noise necessary to measure and locate brain function at the depth of the cortex. We discuss strategies for validation of the sensitivity, specificity, and reliability of HD-DOT acquired maps of cortical brain function. We then provide a brief overview of some clinical applications of HD-DOT. Though diffuse optical measurements of neurophysiology have existed for several decades, tremendous opportunity remains to advance optical imaging of brain function to address a crucial niche in basic and clinical neuroscience: that of bedside and minimally constrained high fidelity imaging of brain function.
Imaging spatially and temporally distributed brain activity has revolutionized our understanding of the brain. [1][2][3][4] The interacting brain systems supporting our thoughts and actions-from sensing the visual world, to communicating, to maintaining attention and control, to daydreaming or sleeping-are accessible to quantitative investigation through functional imaging techniques. [4][5][6] Additionally, functional brain imaging has provided insight into neurological and psychiatric disorders such as Alzheimer's disease, 7 autism spectrum disorder (ASD), [8][9][10] and stroke. 11,12 However, optimizing neuroimaging technologies as tools for understanding these disorders and tracking their progression presents significant challenges. Optical neuroimaging techniques offer a unique opportunity for safe, wearable, and portable methods for measuring brain function at the clinical bedside and in naturalistic settings. This review will discuss recent advancements in high-density diffuse optical tomography (HD-DOT) methods that have led to improved image quality and reliability in noninvasive optical mapping of human brain function.
The diverse set of physiological dynamics encompassing neurological processing engenders multiple opportunities for measurements of human brain function across a remarkably wide range of spatial and temporal scales (Fig. 1). When a part of the brain is active, the local firing of neurons gives rise to varying electrical field potentials that can be measured at the millisecond scale invasively with electrocorticography (ECoG) or noninvasively with electro/magneto encephalography (EEG/MEG). This local firing of neurons triggers a complex neurovascular cascade [13][14][15] that produces a dramatic increase in glucose use and local blood flow resulting in a large increase in oxygen availability. 16,17 The dynamic changes in FIG. 1. Spatial, temporal, and mobility domains of the leading methods available for measuring human brain function. Each colored region represents a rough estimate of the spatial and temporal capabilities for each modality. EEG, fNIRS, and DOT systems can be deployed at the bedside, in the laboratory, or in the hospital. MEG, fMRI, and PET machines require dedicated facilities, are immobile, and patients/participants must be transported to the facilities for imaging. EEG, electroencephalography; fNIRS, functional near infrared spectroscopy; DOT, Diffuse Optical Tomography; MEG, magnetoencephalography; PET, positron emission tomography; and fMRI, functional magnetic resonance imaging. glucose metabolism and blood flow can be measured by positron emission tomography (PET). The resulting relative changes in local concentrations of oxygenated (HbO 2 ), deoxygenated (HbR), and total hemoglobin (HbT) give rise to a blood oxygenation level dependent (BOLD) signal as measured by functional magnetic resonance imaging (fMRI) [18][19][20][21][22][23] and, differently, by functional near infrared spectroscopy (fNIRS) 24 -the basis for HD-DOT. Each of these measurement methods differs in their practical strengths and limitations (Table I). For example, PET utilizes ionizing radiation that is generally prohibited for research use in children. The strong electromagnetic fields required for fMRI are unsafe for participants with implanted active electronic devices (e.g., pacemakers, deep brain stimulators, and cochlear implants). The wearable, portable nature of optical technologies opens the door to bedside and minimally constrained imaging of functional brain health, [25][26][27][28] in settings more ecologically natural than MRI. 24,[29][30][31][32][33] Given these strengths, fNIRS technologies are uniquely suited to studies involving infants and toddlers, 25,[34][35][36][37][38][39] and they are ideal for use in clinical settings in which standards of clinical care lead to complex or untenable logistics for moving the patient to an MRI machine (e.g., if the patient is on a ventilator).
Though fNIRS methods are deployable at the bedside, anatomical specificity is less precise and spatial resolution of the acquired images is lower than what is obtainable with fMRI ( Fig. 1). Each single fNIRS measurement obtained from a given source-detector (SD) measurement pair recovers information about the underlying hemodynamics along a broad spatial path-including brain and superficial tissues-traversed by photons traveling from the source to the detector 24 [ Fig. 2(a)]. Acquiring data from multiple SD measurement pairs provides access to more hemodynamics even without utilizing imaging techniques [ Fig. 2(b)]. [40][41][42] Diffuse optical topography techniques can reconstruct sparse multichannel fNIRS data into spatial maps with moderate spatial resolution but no depth information 43,230,231 [Fig. 2(c)]. To improve the image quality of sparse fNIRS, spatially overlapping fNIRS measurements can be tomographically reconstructed to produce three-dimensional maps of brain function [ Fig. 2(d)], a technique known as diffuse optical tomography (DOT). 31,44,57 To further improve image quality, HD-DOT systems use a dense regular array of sources and detectors to obtain overlapping measurements at multiple distances. Herein, high-density is defined as a regular array, typically an interlaced lattice of sources and detectors, with a closest (a.k.a., nearest neighbor) SD distance of at most 15 mm 45 [Fig. 2(e)]. This maximum distance of 15 mm for the nearest neighbor SD separation makes possible access to multiple SD distances, including out to 40 mm and beyond, that together provide measurements crucial for obtaining spatial maps of brain function comparable to fMRI. Indeed, advances in image quality obtained with HD-DOT, including a spatial resolution approaching that of fMRI, 25,46 have been demonstrated in recovered maps of brain function using both task-based 25,33,[45][46][47][48][49][50][51][52][53][54][55][56][57][58][59] and resting state functional connectivity techniques. 25,26,46,60 In this review, to contextualize challenges in HD-DOT system design, we will briefly describe the physical mechanisms underlying fNIRS measurements, and the theory underlying modeling of light propagation in tissue. We will then focus on optical-electronic instrumentation and cap design utilized in HD-DOT systems. We additionally highlight several validation studies of HD-DOT mapping of cortical activity and connectivity in response to tasks and during a resting state. We then discuss the use of HD-DOT in clinically oriented applications. Finally, we will briefly consider opportunities to further improve image quality, anatomical specificity, and reliability so that HD-DOT methods can realize their true potential in unconstrained and noninvasive assessment of human brain function in the clinic, in naturalistic and even remote settings, and in sensitive populations. propagation, it is possible to accurately reconstruct brain function within the tissue volume from a set of these measurements collected on the surface (Fig. 3). The fundamental unit of an fNIRS measurement is a paired source and detector of near-infrared (NIR) light. In the late 1970's, Jöbsis observed a range of wavelengths (∼700-1300 nm) in the electromagnetic spectrum wherein photons penetrate multiple centimeters through biological tissue 61 and can provide direct measurements of hemodynamic physiology deep (>1 cm) in living intact tissue. The deeper penetration occurs within this "optical window" due to relatively weak absorbance of photons by the primary chromophores in biological tissue (water, lipids, and hemoglobin) 62 (Fig. 4). Importantly, as will be discussed below, though the photon absorption is low, the scattering of photons is high in biological tissue and can be well approximated as a diffusive process. [63][64][65][66][67] The local transient changes in local concentrations in HbO 2, HbR, and HbT brought about by varying brain activity are reflected in variance in the light levels of a given fNIRS SD measurement pair. Sections II B-II E discuss how to localize the changes within the volume from an HD set of measurements on the surface.
B. Forward light modeling
In optical functional neuroimaging, the goal is to model how variations in light level measurements on the surface correspond to transient changes in optical properties within the volume. This relationship can be concisely described by where y is a vector of measurements from the set of source-detector pairs (what we have), x represents the change in absorption and/or scattering at each point in the volume (what we want to know), and A is called the sensitivity matrix (also called the Jacobian) that relates differential changes in light measurements to differential changes in internal optical properties. This sensitivity matrix is constructed from a model, termed the forward light model, derived fundamentally from the Boltzmann Transport Equation (BTE), or, equivalently in this context, the Radiative Transport Equation (RTE). The BTE is a conservation equation that can be utilized to describe the flow of light energy E through a scattering medium (e.g., a head). This formalism is equivalent to a description of the flow of photons as E = nhc λ , where n is the number of photons, h is Planck's constant, c is the speed of light in a vacuum, and λ is the photon wavelength (see Table II for a list of the primary quantities discussed in this review with their units and typical values).
To construct the model, let us start by defining the energy radiance I( ⇀ r, t,ŝ) (i.e., the energy flowing per unit time through an area per solid angle, in units of W cm 2 sr ) 68 such that the differential energy dE flowing in a unit solid angle d 2ŝ through an elemental area da with associated normaln, at position where v is the speed of light in the medium (v = c n ≈ 21.4 cm ns , where n = 1.4 is the index of refraction in the medium); µs is the scattering coefficient (in units of 1 cm ); f (ŝ,ŝ ′ ) is the scattering phase function, which is essentially the probability density of a photon scattering from directionŝ ′ into directionŝ; q( ⇀ r, t,ŝ) is a source term (with units of W cm 3 sr ) representing power per volume emitted by sources at position ⇀ r in time dt in directionŝ; and µa is the absorption coefficient of the medium (in units of 1 cm ). Conceptually, Eq. (3) states that the change in radiance (i.e., the change in optical power through a differential area and unit solid angle) at time t in directionŝ at position ⇀ r is due to four possible quantities: (i) gains and losses in energy due to photons being scattered into direction s and position ⇀ r, (ii) gains in energy due to local sources of photons, (iii) changes in net energy flow into the differential volume, and (iv) losses in energy due to absorption and scattering, respectively. The absorption and scattering coefficients of the medium (e.g., scalp, skull, brain tissue, etc.) are wavelength dependent [µa(λ), µs(λ), respectively] and correspond to the reciprocal of the mean distance traveled by a photon before it is absorbed or scattered, respectively. More exactly, these coefficients represent the reciprocal of the mean distance traveled before a photon is absorbed/scattered in the absence of scattering/absorption. These distances are distinct from (and much smaller than) the transport mean-free-path (a.k.a., the random walk step) l = 1 µ a +µ ′ s , which represents the typical distance a collection of photons travels in a given medium before their directions effectively become randomized and uniformly distributed (i.e., isotropic). The reduced scattering coefficient µ ′ s includes information about the anisotropic scattering characteristic of the medium and will be mathematically derived below. To simplify, here we are treating the medium as if the index of refraction and the coefficients of absorption and scattering are constant throughout. We will deal with spatially and temporally variant optical properties below.
If we make the assumption that the radiance I( ⇀ r, t,ŝ) is nearly isotropic to first order (i.e., uniform in all directions), then Eq. (3) can be simplified by expanding I( ⇀ r, t,ŝ) into spherical harmonics and truncating after the first term (this is also referred to as the P 1 approximation) [70][71][72][73]113 where Φ( ⇀ r, t) is the fluence rate (a scalar intensity in units of W cm 2 ), defined as the total power per area radiating radially outward from a volume element at position and J( Substituting Eq. (4) into Eq. (3) and integrating over all solid angles (using the assumption of isotropic radiance) yields a scalar term, and a vector term, where Q( ⇀ r, t) is the total power per volume radiating radially isotropically outward from the volume element at position where f ŝ,ŝ ′ is the scattering phase function which is the (wavelength-dependent) angular distribution of photons scattered from directionŝ ′ to directionŝ, θ is the angle between the incident and outgoing scattering wave vectors. This anisotropy factor g reflects the probability that a photon is scattered in the forward direction and in soft mammalian tissue typically has a value around 0.9. Though a full discussion of measurement and derivation of human tissue baseline optical properties is beyond the scope of this review, this is a fascinating topic of ongoing study, especially with regard to changes during early development (prenatal and postnatal) and atrophy with aging and disease. 47,[74][75][76][77][78][79][80][81][82][83][84][85][86][87][88] The reduced scattering coefficient can now be defined as µ ′ s = (1 − g) ⋅ µs and, as described above, when combined with the absorption coefficient, is equal to the inverse of the transport mean free path (the random walk step). We can further simplify by assuming any sources are effectively isotropic 4π q( ⇀ r, t,ŝ)ŝd 2ŝ = 0.
If we now enforce a second key assumption (2) that variations in the photon current are slow relative to the time it takes the photons to travel a random walk step then Eq. (8) simplifies to a form similar to Fick's law of diffusion The constant of proportionality in Eq. (12), equal to one third of the transport mean free path, has units of length, whereas in Fick's first law of diffusion, the constant of proportionality-the diffusion coefficient-has units of area per time. To maintain conceptual
Review of Scientific Instruments
REVIEW scitation.org/journal/rsi simplicity, we can define the photon diffusion coefficient (in units of cm 2 s ) as where we are now explicitly noting that the index of refraction and the coefficient of absorption and reduced coefficient of scattering may vary in the tissue. Using this definition, we then substitute Eq. (12) into Eq. (7) to arrive at the diffusion approximation of the radiative transport equation for the photon fluence rate In some cases, we can further simplify by assuming that the optical properties within the medium are spatially homogeneous. The diffusion equation in (14) then becomes Equation (15) states that the temporal changes in the fluence rate are related to divergence due to diffusion (i.e., scattering), gains due to sources, and losses due to absorption. In practice, we often model the head using multiple tissue types (i.e., scalp/soft tissue, bone, gray matter, white matter, and cerebral spinal fluid), each with some set of estimated baseline optical properties.
To recap, the validity of the diffusion approximation for photon propagation in biological tissue is appropriate as long as (1) the radiance can be considered isotropic, which will generally be true in regions deeper than a mean free path, l = 1 µ a +µ ′ s ≈ 1.4 mm; (2) the time scale of variations in fluence are much greater than the time it takes a photon to travel a mean free path t l = 1 v(µ a +µ ′ s ) ≈ 7 ps [Eq. (11)]; (3) the tissue properties are in the strong scattering regime (µ ′ s ≫ µa, or, more concretely, µ ′ s > 10µa); and (4) the source term Q( ⇀ r, t) is isotropic. 89 In the application of focus here, i.e., optical imaging of human brain function, these assumptions generally hold true at the depths of brain tissue. Though these assumptions break down in transparent or "void" regions of the head (e.g., within cerebral spinal fluid, CSF), 90,91 the surface roughness of the boundaries between the CSF and surrounding layers enables these regions to be modeled using the diffusion approximation with effective optical properties to recover accurate reconstructions. 47,75,82,92 The specific characteristics of the fluence rate response in Eq. (15) depend on the source term Q( ⇀ r, t). Broadly, source terms utilized in human optical functional neuroimaging fall into three regimes: picosecond to nanosecond pulses (∼1 THz-1 GHz), intensity modulated light with frequencies in the ∼100 MHz-1 GHz range, and constant sources (essentially modulation below ∼1 MHz). The measurement types corresponding to these source modulation strategies are time domain (TD), frequency domain (FD), and continuous wave (CW), respectively. For the TD case, it can be shown 66,93 that for a source term defined as a short isotropic pulse Q(r, t) = δ(0, 0) in an infinite and homogeneous medium, the solution to Eq. (15) becomes This equation states that the distribution of the fluence rate around a point source is a spherically decaying Gaussian in space and exponential in time at large r [Figs. 5(b) and 5(c)]. The absorption relaxation time constant of this equation, τ = 1 vµ a ≈ 0.24 ns (corresponding to ∼4 GHz) for λ = 850 nm in gray matter tissue highlights the very short time scales required to adequately sample this fall-off distribution [ Fig. 5(c)], also called the distribution of times of flight (DTOF) or the temporal point spread function (TPSF). TD systems use picosecond wide pulses of light sources and ultrafast optoelectronics to measure this temporal broadening of the detected light at some distance or set of distances from the source. The measured DTOF can then be fit to the expected distribution [ Fig. 5(c)] to estimate underlying absorption and scattering properties of the tissue. As TD methods have yet to be fully realized in an HD-DOT array, a full discussion of TD methods and the exciting and rapidly advancing optoelectronics that enable these measurements [94][95][96][97][98][99][100][101][102][103][104][105][106][107][108][109] are beyond the scope of this review.
For the case of intensity modulated light, the source term is written in the general form, [110][111][112] with both a DC and an AC component [Fig. 5(d)] where ω = 2πf is the angular frequency of the intensity modulation. In this case, Eq. (15) can be written in a simpler form by taking the Fourier transform of each term to get the frequency domain photon diffusion equation into the general form of the inhomogeneous Helmholtz equation . The solution to this equation in an infinite homogeneous medium with a modulated point source at the origin is given by the following overdamped solution to the wave equation, where r is the distance from the modulated source and the wave vectors k Re and k Im are defined such that 72,110-112 Writing the wave vectors in this way highlights the length scale determined by the leading term of both wave vectors,
REVIEW scitation.org/journal/rsi
the source is too small to reliably measure. At very high modulation frequencies where ω ≫ vµa, the phase becomes insensitive to the underlying optical properties; in fact, Eq. (11) can now be rewritten as ω ≪ v 2 3D → f ≪ 25 GHz to provide an upper limit for modulation frequencies applicable to the diffusion approximation of the RTE.
Comparing the responses to a pulse [Eq. (16)] with the Fourier equivalent in the frequency domain [Eqs. (19)- (21)], we see that while the fluence rate of intensity modulated light propagates with a constant phase velocity V ph = ω k Im , the response to a pulse undergoes dispersion (pulse broadening in the time domain) due to the different phase velocity of each frequency component in the pulse [Figs. 5(c) and 5(d)]. Due to the significant cost of optoelectronics that maintain high fidelity in source modulation and photon detection at the required bandwidths for precise measurement of both light intensity and phase delay for FD methods, this strategy has yet to be implemented in HD-DOT arrays that require a high channel count of source-detector channels.
The CW regime can be modeled as the FD case with a modulation frequency of zero. This simplifies Eq. (18) to give the steady state diffusion equation which leads to the solution for the fluence rate in an infinite and homogeneous media In CW mode, only the magnitude of the light intensity is measured at the detector. In this case, because only one parameter is measured, relative changes in absorption in the modeled optical properties are all that can be accessed. By contrast, TD and FD systems, which measure the DTOF and both light intensity and phase relative to the source signal, respectively, provide access to relative (and, potentially, absolute) measures of absorption as well as scattering within the tissue. However, the current technology that supports these measurements is significantly more expensive and complex and has yet to be fully realized in an HD-DOT configuration. 101,115-124 Therefore, Secs. II C-II E will focus primarily on the CW case.
C. Perturbation methods: The Born and Rytov approximations
The solutions [Eqs. (16), (19), and (23)] to the diffusion approximation of the RTE [Eqs. (15), (18), and (22)] describe how photons propagate through turbid media given a constant background of steady-state optical properties. To address the goal of measuring changes in brain function within the volume (as manifested through changes in optical properties x) via changes in the light signals y measured at the surface [as formalized as y = Ax in Eq. (1)], we must see how these solutions are altered given a small perturbation in optical properties. These perturbations are modeled as spatially varying deviations from their baseline values: µa( 73,118 However, it has been shown that an exponential expansion in the fluence rate performs far better in practice and presents a much less ill-posed inverse problem, especially in cases of imaging deep perturbations (greater than ∼5 mm from the boundary) 73,118,125 where the ⇀ rs term denotes that these scattered fields are due to a spatially localized source, and δΦ( which then leads to a simple relationship for the perturbed fluence rate This exponential expansion is referred to as the Rytov approximation. Experimentally, the Rytov approximation provides a means of normalizing such that small errors in assumed background optical properties divide out, thereby providing a more robust approach to imaging than the Born approximation. 73,118 For the general case of a complex fluence rate, Φ = Ae iθ , the Rytov approximation provides a relationship for the perturbed fluence rate that automatically separates attenuation in light amplitude from phase shifts between the incident Φ 0 = A 0 e iθ 0 and measured Φ = Ae iθ signals Again, we see that the CW case is simply a special case of the FD case (i.e., there is no phase term to consider). The baseline fluence rate Φ 0 ( ⇀ r, ⇀ rs) is assumed to arise from some baseline spatial distribution of optical properties µ 0 a ( ⇀ r) and D 0 ( ⇀ r) within the volume [ Fig. 3(b)]. When imaging functional brain activity, the baseline fluence rate for a given source-detector measurement pair is typically estimated using the temporal mean of the time course of that measurement during an experiment [i.e., Φ 0 ( Alternatively, the baseline can be estimated from a time period immediately preceding an experimental induction of a perturbation via some task (e.g., a Valsalva maneuver).
D. Numerical methods
The above solutions to the diffusion approximation of the RTE are all analytically derived given the infinite model and homogeneous optical properties. For calculating solutions to the diffusion equation for tissues with an arbitrary and complex geometry (i.e., a head) and spatially varying optical properties (index of refraction, absorption, and scattering coefficients), a number of publicly available packages such as NIRFAST 126 or TOAST++ 127 are available. These packages utilize powerful, flexible, and fast finite element modeling (FEM) routines. [128][129][130] Alternatively, Monte Carlo methods can be employed. [131][132][133][134] A strength of Monte Carlo methods is that they do not rely on assumptions of isotropic radiance or slow changes in photon currents. Additionally, Monte Carlo methods provide solutions with greater accuracy within the top millimeter surface of the tissue-where the assumptions required for the diffusion approximation break down and can lead to numerical errors.
REVIEW scitation.org/journal/rsi
However, Monte Carlo methods are comparatively slow relative to FEM methods that rely on the diffusion approximation. At the depth of the human brain the diffusion approximation works quite well and is computationally far more efficient, allowing for solutions to be obtained for thousands of source-detector measurement pairs in a complex head geometry in just a few minutes. 55,135 The solutions to these problems can be derived using the methods of Green's functions where G( . The Green's function represents the spatial sensitivity of a given source or detector. Importantly, though the exact functional form of a Green's function depends upon the geometry of the problem, the Green's function for a source or a detector are of the same functional form, as derivable from the reciprocity theorem of electromagnetic radiation. 136 This fact directly gives rise to the adjoint formulation of the sensitivity relations. 73 Reciprocity essentially states that transmitters and receivers of electromagnetic radiation can be equivalently modeled. Thus, G( Solving the simpler CW case [Eq. (22)] using the Rytov approximation with Green's function methods, neglecting terms beyond the first order, and relating to changes in detected intensity at the surface y (also referred to changes in optical density) leads to the following solution: This equation states that ratiometric (i.e., differential) measurements of fluence at the boundary are related to the spatial distribution of internal changes in absorption multiplied by the spatial wavelengthdependent sensitivity distributions for the source and detector and summed over all points in the tissue. The normalization term within the integral (sometimes referred to as G sd ) is the Green's function of the source evaluated at the position of the detector. Here, y is a vector with each element corresponding to a specific source-detector pair at a given wavelength. The next step is to discretize Eq. (27) for some finite set of Nm source-detector pair measurements over a set of Nv voxels or nodes within a finite element mesh [ Fig. 3(b)]. Using small-volume voxels (tetrahedral elements for the mesh) will facilitate more accurate solutions but will also add to the computational time required. 135 While it is true that DOT is a relatively low-resolution imaging modality, it is important that the forward model be accurate enough that image quality of the data is not compromised due to discretization errors in the model. 46,55,135 To achieve fMRI-comparable image quality, it is recommended that the tetrahedral elements have a volume of ∼1-1.5 mm 3 each (which typically requires 800 000-1 000 000 nodes total in a head mesh). 46,47 Equation (27) can be rewritten as where V is the volume of the discretization element. This equation can be rewritten in a format where is the vector of optical density changes for each of the source-detector measurement pairs (each as a function of time), A is the sensitivity matrix derived from the full light model, and x = ∆µa(rj, t) is a vector representing the change in absorption in each voxel (also a function of time) For simplicity, it is assumed here that the sensitivity is itself not a function of time. In practice, the measurements, the absorbance, and the sensitivity matrix are each a function of the wavelength of light emanating from the sources.
In the FD case, as shown in Eq. (26), the Rytov approximation naturally separates the amplitude and phase components of the measurements (written here as vectors to account for multiple sources, detectors, wavelengths, and modulation frequencies) into real and imaginary components of . The full FD sensitivity matrix contains four separable components corresponding to the real and imaginary components of sensitivity of measurements at the surface to internal changes within the volume of absorption and scattering , where the complex sensitivity relations for a given source-detector pair measurement in relation to absorption ∆µa and scattering ∆D are given by 125,137,138 The FD measurements and Green's functions all depend on both the wavelength λ and modulation frequency ω of the incident light.
To calculate the light model A, many labs use NIRFAST 126 to model the Green's functions [Figs. 3(c), 5(d), and 5(e)], which are primarily dependent upon three things: (1) the tissue boundary shape, (2) the internal distribution of baseline optical properties, and (3) the locations of the sources and detectors on the surface [ Fig. 3(b)], as well as the wavelength and (in the FD case) the modulation frequency. The tissue shape and optical property distributions are ideally generated from a subject-specific segmentation of the head, 47,139,140 though atlas-based models can work quite well when subject-specific anatomy is not available. 44,52,87 E. Image reconstruction As described above, the sensitivity matrix relates relative ratiometric changes in light-level measurements taken at the surface to relative changes in absorption within the volume. The sensitivity matrix can be directly inverted for image reconstruction using Tikhonov regularization along with spatially variant regularization to minimize the objective function [ Fig. 3
REVIEW scitation.org/journal/rsi
The penalty term for image variance, λ 1 Lx 2 2 , incorporates a spatially variant regularization term λ 2 where 45,143 The specific values of these parameters will directly influence the DOT imaging domain characteristics [as visualized in Fig. 3(c-iv)] and should be considered aspects of the system design along with the hardware. A solution, can thus be directly obtained using a Moore-Penrose generalized inverse with where,Ã =ÃL −1 .
The optimal values of regularization parameters λ 1 and λ 2 depend upon the source-detector grid geometry, the underlying noise characteristics of the imaging system, and the geometry of the anatomical model. The Tikhonov regularization term λ 1 tunes the balance between amplifying high spatial frequency information (including noise) at small values (typically below 0.01, though the exact number depends on the number of measurements in the imaging system) and strongly weighting low-spatial-frequency modes which effectively spatially smooth the image domain at large values. 141 The spatially variant regularization term λ 2 has been shown to improve localization error and to provide a more uniform resolution and contrast within the imaging domain and thereby an improvement in image quality of DOT reconstructions. 117,[142][143][144][145] As the sensitivity of HD-DOT drops off with depth from the surface [Figs. 3(c), 5(b)-5(e)], spatially variant regularization provides a way to tune the reconstruction to an appropriate depth; too small of a λ 2 will push the reconstruction too deep below the surface and too large a value will pull the reconstruction too shallow. Optimal settings for these parameters are found through simulation and empirical studies to provide uniform imaging across the field of view as judged by evaluating point spread functions (in simulation) and, ideally, subject-matched comparisons to an alternate modality, such as functional MRI. An estimate of the spatial extent of the imaging domain can be found by calculating and visualizing a flat field reconstruction. This done by generating a test image ∂x of a global unit change in absorption throughout the imaging volume (the 'flat field' perturbation) to generate simulated data ysim via The flat field reconstruction of the imaging domain x ff is then found as in Eq. (34) The spatial profile of this flat field reconstruction provides a visual readout of the smoothness and extent of the imaging domain throughout the volume. Where the flat field lies below 1%-10% informs where should not be considered valid in volumetric reconstructions. 33,46,55,126,146 Relative changes in hemoglobin concentrations ∆C can then be obtained from the absorption coefficients used in spectral decomposition where E is a matrix containing the extinction coefficients of HbR and HbO 2 , and ∆C = [∆[HbO 2 ], ∆[HbR]] is the matrix of concentration changes by time.
III. HIGH-DENSITY DIFFUSE OPTICAL TOMOGRAPHY SYSTEM DESIGN
Accurate reconstruction of relative changes in hemodynamics fundamentally depends upon obtaining high fidelity signal quality of light levels from multiple overlapping measurements that are separated by multiple distances (Fig. 6). This key requirement directly leads to challenges in the optoelectronics and challenges in maintaining good optical coupling throughout the system-from the source to the scalp and from the scalp to the detector. The large number of independent source-detector measurements also presents significant challenges in real-time data quality assurance. Each of these sets of challenges will be discussed below.
A. Challenges in optoelectric designs
Source-detector measurement pairs at multiple separations provide additional depth information, and the overlapping measurements at a given separation support improved lateral resolution. 45,[147][148][149] For example, the array shown in Fig. 6(a) [equivalent to a subset of that in Fig. 3(a)] utilizes measurements separated by distances of 1.3, 3.0, 3.9, 4.7, and 5.1 cm for the first five nearest neighbor separations, which leads directly to significant challenges in obtaining an adequate dynamic range of response for each detector while minimizing crosstalk between detection channels. Maximizing the dynamic range while minimizing crosstalk involves multiple system design considerations: the light budget, the detection and amplification strategy, and encoding/decoding strategies.
The light budget
The source type, be they light emitting diodes (LEDs) or laser diodes (LDs), will significantly impact system design. First, the choice of wavelengths for the sources may be motivated by spectral width considerations, as LEDs emit photons over a relatively broad band around their characteristic center wavelength relative to LDs. Additionally, though LDs can be modulated faster than LEDs, LDs are typically not available at as many wavelengths as LEDs. The optimal choice of wavelengths, and optical bandwidths of sources will depend on the required spectroscopy for the specific goals of the application, 24,150-152 be it imaging hemoglobin, cytochrome c oxidase, [153][154][155][156] or other functional chromophores. Each source position of the system highlighted in Fig. 3 uses LEDs emitting 750 nm and 850 nm photons with an optical power at the head of 3.2 ± 0.3 mW and 4.3 ± 0.3 mW for each LED. 46 Other systems have used different wavelength combinations including 760 and 830 nm (the DYNOT 232 optical tomography imager of NIRx 59 ), 690 and 830 nm (the ISS Imagent™ 157 and the CW4 TechEn, Inc. system 44 ), 660, 780 and 850, 158 and even a larger set of wavelengths including 778, 808, 814, 841, 847, 879, 888, and 898 nm. 159,160 The system in Figure 3 used three 750 nm LEDs per channel to compensate for the strong attenuation in biological tissue at that wavelength. Though one can increase the intensity of the source, the American National Standards Institute (ANSI) limits the amount of NIR light deposited on human tissue at a maximum intensity of 4 mW/cm 2 at these wavelengths. This specification of source intensity at the scalp sets an upper limit on the light signal intensity to be collected from the head some distance away. The moment light leaves the source, losses occur due to poor coupling between the LED/LD and the fiber optic, loss along the fiber optic (if the fiber is made with a lossy material like plastic or if the fiber has been broken), and poor coupling at the scalp. Coupling between the source and a fiber optic depends on not just the optical alignment, but also the etendue of both the source and the fiber. The etendue of an optical element is equal to the area of emission (or collection) times the solid angle of emitted (or collected) light. When comparing coupling designs, and when the optics are axially symmetric, the solid angle can be roughly approximated by the square of the numerical aperture (NA) of the fiber optics. Also important with fiber optics is whether or not the fiber is a single-core fiber or a fiber bundle. Larger fibers provide easier optical coupling, however, they are stiffer than smaller fibers that can be challenging to align reliably. Fiber bundles that pack many small optical fibers into a single larger conduit provide a reasonable middle ground for many designs. Because the individual glass fibers are smaller, fiber bundles tend to be more forgiving to breaking (i.e., smaller fibers have a smaller critical bend radius). However, the price one pays for using a fiber bundle is found in the packing fraction: one can expect to lose a significant fraction of the light impingent on the fiber (typically up to 50%) because there are gaps in between the small glass fibers that will not transmit the light. Similar concerns are present on the return path of the photons into the detector. Exciting new advances in on-the-head optoelectronic components remove the fibers from the design, which simplifies some system design considerations. 159,160 However, challenges remain in reliable maintenance of power consumption, data streaming fidelity, and participant comfort.
Detection and amplification
Over the NIR wavelength range, light levels at source-detector distances from 1-5 cm vary over at least six orders of magnitude in optical power [Figs. 6(a) and 6(b)]. To ensure a linear output over such a range of optical power inputs, many HD-DOT systems use avalanche photodiodes (APD) that can be sourced from various distributors including, e.g., Hamamatsu. 25,45,46 The APD design is generally preferred over a photon multiplier tube (PMT) design due to the strong demands on dynamic range, though some systems successfully implement PMTs. 157 The APDs provide a dynamic range of up to >10 7 [ Fig. 6(c)], which allows for a signal to noise level (SNR) > 100 over 4-5 orders of magnitude in light level. 46 This high level of SNR is crucial because changes in hemodynamic-measured brain function due to task activations is of order a few percent and variance in the resting state is of order 1% or less. 47,161,162
REVIEW scitation.org/journal/rsi
In addition to dynamic range, additional key specifications when optimizing the detection strategy include the sensitivity, noise equivalent power (NEP), and crosstalk. The sensitivity, the ratio of output voltage for a given input optical power, should typically be at least 1 × 10 6 V/W. The NEP, the optical input-referred power of the noise floor output of a detector, should be as small as possible (e.g., less than 20 fW/ √ Hz). Crosstalk is a measure of how much interference a signal in one source-detector channel has on a separate source-detector channel. Constraints on the levels of allowable crosstalk are driven by the requirements in dynamic range: to ensure data is uncorrupted over a dynamic range of 120 dB, the crosstalk must be kept below −120 dB. Electronic crosstalk between detection channels can occur through common power supplies or within a multichannel analog to digital converter (ADC). To maintain these specifications, systems typically use avalanche photodiodes coupled into 24-bit dedicated ADCs. 27,45,46 Many companies provide commercially available high fidelity ADCs including MOTU, RME, and Focusrite. Additional strategies for increasing the effective dynamic range can be employed that use dynamic gain adjustments before the signal reaches the ADC. These strategies can be complex and can lead to poor crosstalk performance compared to the strategies described above. 57,163,164
Encoding/decoding
To maintain low crosstalk between source-detector measurement channels, also requires encoding and decoding strategies. Time encoding [ Fig. 6(d)] along with frequency and spatial encoding [ Fig. 6(e)] may be employed. 46 In time encoding, only the source light at a given position is turned on at a given time, here called a time step. This minimizes potential crosstalk between different source-detector pair measurement channels because it is straight forward to assign the signal for every detector to the exact source that is on. However, this strategy can be slow and lead to under sampling physiology if there is a larger number of sources to encode. To obtain a faster frame rate (i.e., the time is takes to sample the entire field of view), frequency encoding may be employed whereby multiple sources are modulated at the same time, but at different frequencies [ Fig. 6(e)]. Then the signal for a given source-detector pair is obtained via a Fourier decomposition of a given detector's data within a time step where the magnitude of the signal from a source is proportional to the magnitude at its modulation frequency. With frequency encoding multiple sources can be on at once as long as the Fourier peaks are far enough apart that they do not overlap, otherwise crosstalk between those respective channels will go up significantly. Additionally, with multiple sources on, broad band shot noise will contaminate the Fourier spectrum [see raised noise floor in trace with peaks in Fig. 6(e)]. A higher level of overall light will effectively lower the dynamic range for the source-detector measurements. One can also spatially encode the sources such that spatially separated sources on the HD array are on at the same time. One must be careful that the shot noise from very bright sources is not swamping out the desired light from more distant sources in a given encoding strategy. The system highlighted in Fig. 3 uses a combination of time, frequency, and spatial encoding. 45,47 With each of these encoding strategies, it is important to note that background light levels from the room or immediate imaging environment may lead to significant crosstalk and a loss of dynamic range. The Fourier decomposition strategy of decoding provides a robust strategy to minimize the effects of background light level. These strategies should be implemented with care for the desired frame rate: sampling too slowly can lead to aliasing of physiology variance into the data stream. A minimal frame rate of 3 Hz (optimal if 10 Hz or faster) is recommended to allow for adequate sampling of systemic physiology which includes both respiration (generally around 0.3 Hz) and pulse (generally around 1 Hz for a quietly resting healthy adult).
B. Challenges in optode-scalp coupling and cap design
Beyond challenges in optical and electrical components, reliable and consistent coupling of the optical elements to the scalp of the participant presents multiple significant challenges. Sources and detectors may be placed directly on the head 165 or coupled via optical fibers. 45,166,230,231 A general principle in ensuring reliable and comfortable imaging arrays is to provide a lightweight but rigid structure that maintains the optical fiber positions while minimizing torque on the fibers that can lead to coupling inconsistency over the course of an imaging session. For example, several adult DOT systems have used a rigid outer shell to manage fibers and bear fiber weight [Figs. 6(f) and 7]. Other DOT systems image the participants (mostly infants in the current literature) in the supine position so that the bed bears the weight of the fibers [ Fig. 7(d)]. A combination of foam and elastic pieces can help maintain a force perpendicular to the head surface to hold the optodes directly coupled against the scalp while allowing for moderate translation normal to the head such that the imaging cap can conform to local variations in head shape [ Fig. 6(f) used in cap design Fig. 7(b)]. 26,46 Alternatively, a spring loaded fiber tip can couple fibers to the scalp. 59 Furthermore, rigid outer structures aid in fiber management and suspend the weight of the fibers. 158 Recent work has designed more wearable caps with lightweight fibers. 167 Finally, recent developments in wireless systems have minimized the need for fiber management and weight bearing designs [ Fig. 7
(e)].
A further consideration is the choice of source-detector layout. Sparse DOT grids (i.e., source detector separation distance >15 mm) will give rise to systematic data quality perturbations based upon the respective point spread function (Fig. 8). The size, shape, and severity of artifacts in the point spread function of the observed data will depend upon the cap design (e.g., square, triangular, rectangular, HD), and metrics of image quality such as localization error and effective resolution will spatially vary in a systematic fashion based upon source-detector distance and location. For example, Chance and colleagues note that their chosen source-detector layout resulted in elongated activations where fMRI demonstrated localized activity. 168 Sparse square and triangular grids will have greater localization error and worse effective resolution than HD-DOT cap designs 149 (Fig. 8).
A crucial design detail, regardless of whether fibers or fiberless designs are used, is to comb through the participant's hair to gain unimpeded access to the scalp [e.g., as in Fig. 6(c)]. Hair (and product in hair like conditioner or hair gel) scatters light away from the optic-head system and lowers raw data quality. With this infrastructure in place, the cap maintaining the imaging array may be attached to the participant with hook-and-loop straps positioned to provide rigid yet comfortable stability on the head and to conform the curvature of the cap to a wide variety of head shapes and sizes. To ensure consistent placement of the imaging array on a given participant and across participants, measurements of the distance between specific fiducials on the imaging array and the head of the participant (e.g., the nasion, left and right tragus, and eyes) should be recorded.
C. Challenges in data quality assurance
To provide adequate coupling across the imaging array, a few simple metrics of data fidelity can help ensure a high quality cap fit. First, the average light level for each source and detector can be displayed in a two-dimensional representation of the imaging array [ Fig. 3(d-ii)]. If the light level is low, or if there is significant spatial variance in mean light level (more than 2 orders of magnitude), then the associated optical element or fiber optic should be adjusted at the head to improve coupling. The adjustment typically involves improving either the combining through the hair and/or ensuring the fiber/element is coupled to the scalp at a right angle. Second, adequately coupled elements will reflect a set of mean light levels that are logarithmically distributed as a function of distance, reflecting diffusion of photons through tissue [ Fig. 6(b) and Eq. (23)]. Third, if the spread in light level at a given source-detector separation is more than 1-2 orders of magnitude, or if the slope of the fall-off is not approximately one order of magnitude in light level for every centimeter of additional Rsd, then the cap fit may not be optimal. Third, assuming the data are acquired at a frame rate of at least 3 Hz, the time course of individual source-detector pair measurements with a good signal-to-noise ratio will clearly exhibit characteristics consistent with the pulse (∼1 Hz) frequency [ Fig. 3(d-iii)]. The relative magnitude of the pulse peak in a power spectral density plot is an excellent indicator of data quality: the more noise contamination, the lower the relative pulse peak power.
IV. VALIDATION
As is true with any technology development, external validation of the acquired signals provides essential corroboration required for establishing that the technology delivers meaningful information that complements existing measurement strategies. Validation of optically-measured neurophysiological signals and the anatomical specificity of the reconstructed maps is a necessary and crucial step toward adoption of the method beyond the optical community. For HD-DOT, validation studies have used simulation and in vivo direct and indirect comparisons against fMRI as a gold-standard of functional neuroimaging. Well-understood task-based paradigms that elicit reliable responses in sensory and motor areas provide solid footing for cross-modal validation because the brain responses from these tasks are more predictable and less variable across a population than tasks designed to elicit responses in cognitive brain areas. Task-free paradigms that leverage the spatial structure of temporal correlations of very low frequency activity within the brain (i.e., functional connectivity) provide a more stringent bar for validation due to increased demands of the instantaneous signal-to-noise. Sections IV A-IV D will discuss some key validation-focused studies that have established HD-DOT as an effective and reliable neuroimaging tool producing cortical brain maps with comparable precision to fMRI in both adults and infant participants.
A. Validation of HD-DOT with retinotopy paradigms
Retinotopy, so named because visual stimuli incident on the retina map onto visual cortex in a regular and characteristic pattern, 169 provides a compelling strategy for optical imaging validation for multiple reasons. First, the spatial organization of retinotopic maps at multiple spatial scales is well known via investigation with PET, 170 fMRI, [171][172][173][174][175][176][177][178] and other methods. [179][180][181][182][183] Second, retinotopic organization can be reliably measured within an individual. 171,174,184 Third, the detailed spatial structure of the retinotopic maps vary between individuals, providing opportunity for demonstration of image quality at ever finer spatial scales. 185 Fourth, because retinotopic maps are in a primary sensory region of the brain, interpretation of the measured responses is simpler than in regions of cortex that support higher cognitive functions. Indeed, mapping visual fields in occipital cortex was one of the earliest forms of fMRI methods validation against PET imaging. 171,172,177,[186][187][188][189][190][191] Following in this tradition, several DOT studies have utilized retinotopy to establish proof of principle, validity, and reliability of the technique for imaging human brain function in adults 45,47,48,192,193 and even in infants. 166 In a seminal study in 2005, Zhang and colleagues acquired data using simultaneously collected MRI and DOT. In response to five blocks with alternating fixation followed by a flashing black and white checkerboard pattern, results demonstrated bilateral visual cortex activations in both modalities. 192 In 2007, an HD-DOT system using a closest source-detector separation distance of 13 mm was first reported. This HD-DOT system was shown to recover visual cortex activations in response to flickering checkerboard wedges displayed in each quadrant of the visual field in seven adult participants [ Fig. 9(a)]. 45 The resultant images clearly delineated four quadrants of visual cortex activation intensities corresponding to visual stimuli in the upper and lower left and right visual fields. Several years later, light models utilizing realistic anatomy were used to reconstruct HD-DOT retinotopy data using subject-specific MRI-derived anatomy [ Fig. 9(b)]. This study assessed visual cortex activity during separate HD-DOT and fMRI sessions in five healthy adults using a phase-encoded paradigm 174 of rotating and flickering checkerboard wedges. Single-subject and group average data demonstrated that fMRI and HD-DOT retinotopic mapping boasted a high degree of correspondence in visual cortex: 47 these methods recovered activations with an average localization error of 4.4 mm relative to subject-matched fMRI. Additional studies used retinotopy based paradigms to demonstrate atlasbased light models yield similar results to subject-specific anatomical models as compared with fMRI-based retinotopy mapping (with an average localization error of 6.6 mm relative to subject-matched fMRI) [ Fig. 9(c)]. 52 Using atlas-based light models (such as with the MNI152 and Colin27 atlases) is advantageous for DOT imaging as it alleviates the necessity of acquiring a structural MRI image for each subject when there is adequate spatial agreement between the atlas and the subject/population. 87
B. Validation of HD-DOT with motor paradigms
The sensorimotor cortex provides an additional compelling location for simple and reliable validation of neuroimaging technology. The spatial organization of anatomical and functional areas along the motor cortex correspond to specific areas of the body, much like areas of visual cortex map to areas of the retina. Habermehl and colleagues examined HD-DOT activations within motor and somatomotor cortex in eight adults using vibrotactile stimulation of the thumb and pinky finger. 58 Subject-specific modeling was used to tomographically map HbR motor activity onto the surface for each subject (Fig. 10). Distinct activations to thumb and pinky fingers were observed in five out of eight subjects using both HD-DOT and nonconcurrent fMRI. The localization error between HD-DOT and fMRI motor activations was estimated at approximately 10 mm. More recently, a fiberless HD-DOT imaging system demonstrated feasibility by reporting motor cortex activity observed in five adult subjects. 160 Subjects were asked to touch the thumb to pointer finger of the dominant hand in 20 runs of 15-second blocks. Individual and group averaged oxygenated and deoxygenated activations were tomographically reconstructed on the surface (Fig. 11). Fiberless systems provide an exciting and compelling alternative to fiber-based HD-DOT. On-head optoelectronics provide a significant set of challenges and are discussed in more detail elsewhere. 165
C. Validation of HD-DOT with language paradigms
Brain areas associated with perception and generation of language are distributed throughout the cortex, including regions of temporal, parietal, and prefrontal cortex. [194][195][196] Due to this extended and differentiated organization, studies validating the efficacy of HD-DOT for investigating language-based task paradigms have required a field of view that extended beyond that of primary sensory and motor regions. In order to validate the largest-to-date field of view HD-DOT system, 46 the authors selected a hierarchical language paradigm first established in a seminal PET study that mapped the spatial topology of single word processing in the brain. 5 During the hierarchical language paradigm, several different experimental probes of language function were utilized. In the first task, participants listened to a prerecorded list of single nouns. Each run consisted of six blocks within which nouns were presented one/s for 15 s followed by 15 s of silence (hearing words). Next, participants silently read a series of simple nouns displayed one at a time in the same block design on a screen (reading words). The third experimental run required participants to imagine speaking each word out loud (imagined speaking). Finally, participants were asked to silently generate associated verbs in response to each noun presented on screen (covert verb generation). Activation maps corresponding to each of these aspects of language were recorded during a HD-DOT imaging session followed by an fMRI session on a separate day. Strong agreement between HD-DOT and fMRI was apparent with robust contrast-to-noise activations in auditory cortex, visual cortex, superior temporal lobe, and dorsolateral prefrontal cortex apparent in both modalities [ Fig. 12(a)]. This study highlighted the spatial correspondence of HD-DOT and fMRI throughout a spatially extended field of view that encompassed both sensory areas, known for exhibiting large signal to noise activations, as well as cognitive and association areas, known in the fMRI literature for exhibiting relatively smaller activation volumes and contrast levels. Group maps of activations detected with HD-DOT and fMRI showed strong concordance with responses to the simple perceptual tasks of hearing words and reading words in auditory and visual regions, respectively. The more cognitive tasks of imagined speaking the presented word or generating a novel verb revealed subtle differences beyond the agreement within motor and prefrontal areas, respectively, in the group maps between HD-DOT and fMRI. These differences are seen in cognitive regions within temporal, extrastriate, and parietal areas known to be associated with aspects of language processing that generally present with low SNR relative to sensory regions in response to these tasks. These differences are expected given the low number of subjects (N = 5) in this study and the level of intersession variability of cognitive language tasks for a given participant. Additional studies of language processing with the HD-DOT large field of view system have investigated brain function underlying processing of syntactically complex and simple sentences. While syntactically complex and simple sentences both activated similar regions of cortex, including dorsolateral prefrontal cortex and auditory cortex, complex sentences elicited greater activations in primary auditory, ventrolateral prefrontal cortex, and temporal cortex than syntactically simple sentences [ Fig. 12(b)]. 33 These results were largely consistent with prior fMRI and PET linguistic research and are suggestive of the validity and spatial specificity of task-based HD-DOT measured activations.
D. Resting state functional connectivity HD-DOT
While task-based studies were particularly useful for validating event-elicited brain activations between HD-DOT and fMRI, taskfree neuroimaging experimental methods have also been used to validate HD-DOT against fMRI. An increasingly common method in the fMRI literature is the use of resting state-functional MRI (rs-fMRI), a technique that can be used to assess functional connectivity within the brain in the absence of a stimulus. These resting state methods are ideal for situations in which a participant may be unable to engage in a traditional task-based block-design or event-related neuroimaging paradigm, such as infants or those who are asleep, anesthetized, or cognitively impaired. Functional connectivity can be inferred by assessing temporal correlations in low frequency fluctuations of the BOLD signal (in the range of 0.008-0.09 Hz). 161,197,198 Importantly, rs-fMRI data can be used to identify spatially-distributed brain networks comprising regions of the brain known to be activated by task, including primary cortical regions such as visual and motor cortex, as well as higher order cortical areas supporting cognitive control, attention, and executive functions. [199][200][201][202] The composition of these resting state networks has been well-characterized using fMRI in healthy adults and older pediatric populations [1][2][3]6,7,13,200,[203][204][205][206][207][208][209] and has also been increasingly utilized in infants. [210][211][212][213][214][215][216][217] Validation of resting state functional connectivity methods for HD-DOT, (i.e., functional connectivity DOT; fcDOT) were first demonstrated in healthy adults. 60 This seminal paper established that fcDOT maps were reproducible in participants across days and that bilateral maps of strong correlations within (and not between) visual and motor regions were replicated in fMRI in those same participants. More recently, subject-specific light modeling and an expanded field of view broadened the reach of fcDOT methods to map not just sensory or motor networks, but also spatially distributed cognitive networks including cortical aspects of the dorsal attention network, fronto-parietal control network, and the default mode network [ Fig. 13(a)]. 46 Group level analyses demonstrated similarities in the topology of these brain networks between fcDOT and subject-matched rs-fMRI. This type of analysis has also been extended to imaging functional connectivity in neonates [ Fig. 13(b)]. Patterns of bilateral visual, middle temporal, and auditory cortex connectivity were observed using both HD-DOT and fMRI. 25 Taken together, these validation studies suggest HD-DOT is capable of measuring both task-based activations and functional connectivity
REVIEW
scitation.org/journal/rsi in human participants with comparable spatial specificity to that observed with fMRI.
V. DOT APPLICATIONS IN HUMAN CLINICAL POPULATIONS
We will finish this review with a brief overview highlighting studies that have applied DOT and HD-DOT technology to clinical populations in environments beyond the reach of traditional methods such as fMRI. Imaging infants in the neonatal intensive care unit (NICU) with fMRI presents significant challenges for monitoring brain health. Neonates hospitalized for extended periods in the neonatal intensive care unit (NICU) may not be stable enough to move to an fMRI machine. For example, the most profoundly infirmed neonates may need mechanical ventilators, continuous positive airway pressure, or extracorporeal membrane oxygenation. Moving these infants for fMRI neuroimaging presents significant challenges for the health and safety of the patient. Thus, HD-DOT methods provide a compelling surrogate to fMRI and afford an opportunity to image cortical brain activity at the bedside. To-date, several studies have used DOT to assess brain activity and functional connectivity within preterm infants at a range of gestational ages recovering in the NICU. In this section, we highlight the use of optical methods with a focus on DOT in several case reports of infants with brain injuries including stroke, intraventricular hemorrhage (IVH), and hypoxic ischemic encephalopathy (HIE). Finally, we summarize combined EEG-DOT systems used to assess neonatal seizure activity and task-based activations in adult patients with epilepsy.
In the late 1990s, a groundbreaking paper reported the first DOT activations in an infant born extremely preterm (<27 weeks). 168 These authors were able to measure motor activations while manually stimulating the left and right fingers. More recently, Hintz and colleagues similarly demonstrated motor cortex activity to passive arm movements in infants born moderately preterm (32-33 weeks gestational age). 218 These early studies demonstrated the initial feasibility of DOT within the NICU.
While some studies rely on passive arm movements and tactile stimulation, 219 other studies make use of passive brain activity while the infant is resting. Specifically, a single high-quality dataset can be acquired in minutes, and data can be collected from subjects swaddled, resting quietly, sleeping, and under anesthesia or morphine without requirement of task performance or attention to stimulus. White and colleagues acquired fcDOT on three term-born infants and four preterm-born infants using a HD-DOT cap covering left and right occipital lobes. 26 Infants were imaged lying on their backs, with the weight of the fiber optics rested on the bed. One of the preterm infants exhibited a large left occipital hemorrhage, apparent on a T2-weighted MRI. Using fcDOT, bilateral functional connectivity maps were apparent within visual cortex in the healthy term-born Elsevier.] (b) Whole brain reconstruction of time domain optical data acquired from a preterm neonate with left-sided intraventricular hemorrhage (IVH). Regional blood volume (i), regional oxygen saturation (ii), and cranial ultrasound demonstrate disruptions in oxygenation and hemoglobin concentration in the region near the hemorrhagic parenchymal infarct (iii Fig. 14(a)]. This same pattern of bilateral visual cortex connectivity, although weaker, was also present in preterm-born subjects. However, bilateral visual cortex connectivity was absent in the preterm infant with left occipital hemorrhagic stroke. 26 Similarly, researchers have recently demonstrated reduced interhemispheric connectivity in four infants following perinatal stroke as compared to four healthy infants. 220 Neonatal brain injury, including the study of acute injury, such as IVH, is a particularly interesting application of DOT within the NICU. IVH typically occurs within the first 72 h following birth and is one of the leading forms of preterm brain injury. 221 Austin and colleagues used a time-resolved DOT system known as MONSTIR (Multichannel Optoelectronic Near-infrared System for Time-resolved Image Reconstruction) 222 to scan 14 preterm infants in the NICU, several of whom were diagnosed with IVH. DOT caps were constructed to cover the entire cortex, and each cap was custom-built to the head shape of the infant. Due to the long data acquisition time of this system, fcDOT was not performed. Instead, the authors generated mean photon flight times compared to a phantom reference volume. Using this analysis method, one preterm infant demonstrated increased regional blood volume and oxygen saturation [ Fig. 14(b)] corresponding to IVH in the left hemisphere, visible on ultrasound. 223 More recently, using a frequency domain DOT system (ISS Imagent™, Champaign, Illinois) recording at a 38.5 Hz frame rate with an irregular but high density cap configuration, researchers demonstrated decreased pulse rise time in ten
REVIEW
scitation.org/journal/rsi preterm infants with IVH as compared to 20 preterm infants without IVH at various stages of recovery in the NICU up through term equivalent age. 157 These results suggest imaging infants with DOT either during this acute period of brain injury or later during recovery in the NICU may provide insights into neural disruptions leading to subsequent neurodevelopmental impairment later in life. Neonatal HIE has also been investigated using DOT. HIE occurs as a result of oxygen deprivation from fetal trauma either during gestation or during birth, and can result in long-term developmental complications including cerebral palsy, epilepsy, and sensory impairments. [224][225][226][227] Infants with HIE provide a particularly compelling case for the use of DOT imaging, as the standard of care for infants with HIE is therapeutic hypothermia treatment for 72 h following birth. This therapeutic hypothermia treatment cools the body temperature of the infants, mitigating further brain damage resulting from the hypoxic event. However, the equipment used to cool the infant's body temperature is not MRI compatible. Therefore, portable brain monitoring and imaging modalities such as EEG and DOT provide crucial clinical information about brain function during this treatment period. Chalia and colleagues used DOT to study hemodynamics associated with high-frequency bursting EEG activity, typically signifying pathological activity, in a group of term-born infants with HIE during the warming period following therapeutic hypothermia treatment in the NICU. Infants presented with seizures in the first 48 h of life and were scanned with combined EEG-DOT within seven days of birth. Across infants, oxygenated hemoglobin initially declined during the EEG bursts, and peaked 10-12 s after the bust onset. 27 Though this study did not use a high density arrangement of measurements, this study provides a powerful example of the clinical application of concurrent EEG and DOT methods and illustrates the opportunity for advanced optical methods to inform clinical care [ Fig. 14(c)].
While seizure activity is typically measured using EEG/MEG, modalities sensitive to changes in electrical/magnetic fields, recently researchers have utilized DOT to investigate BOLD correlates of seizure activity. Seizures represent a major medical challenge for treating neonatal infants with HIE, and seizures are associated with poorer neurodevelopmental outcome. Singh and colleagues examined an infant with severe HIE during a 60 min period of passive rest following the warming period after 72 h of therapeutic hypothermia. 28 The authors observed seven discrete periods of generalized whole-scalp EEG hyperactivity indicating seizure events [ Fig. 15(a)]. Concurrent DOT imaging revealed HBT amplitude increases following each seizure event [ Fig. 15(a)]. Averaging DOT activity across all channels following one of the seizure events revealed HbO, HbR, and HBT peak amplitude 15 s following seizure events [ Fig. 15(b)]. The authors also observed spatial variation in the localization of activity prior to, during, and following the seizure events [ Fig. 15(c)]. This study suggests the use of DOT in the clinic is a useful tool in addition to standard EEG bedside monitoring.
Limited prior work has imaged human adult clinical populations using DOT or HD-DOT. One prior study imaged three healthy adults and three adults diagnosed with temporal lobe epilepsy as a proof of concept. 158 While seizure activity was not recorded, in this study, the authors demonstrate DOT activation differences in adults with and without epilepsy during a finger tapping task. Changes in HbT amplitude were observed in motor cortex of healthy adults [ Fig. 15(d)], while adults with epilepsy showed no signs of hemodynamic response [ Fig. 15(e)]. The authors suggest this lack of hemodynamic response in the motor cortex of epilepsy patients results from "the epileptic lesion existing in the brain of patients." The authors note that the patients with epilepsy did not suffer from any clinical motor impairments which might otherwise explain their apparent lack of motor activity. 158 Cumulatively, these papers illustrate some unique opportunities for applying advanced optical methods in the clinic and potentially in basic neuroscience. Future adult and infant studies would benefit from improved reliability and image quality afforded by utilizing whole-head HD-DOT imaging. For example, the silent and minimally constraining environment of HD-DOT may open the doors to neuroimaging studies on neural correlates of meditation and pharmacologically altered consciousness such as brought about via sedation, anesthesia, or psychoactive medications increasingly used to treat depression, post-traumatic stress disorder, or other conditions. Additionally, due to the lack of contraindications for implanted metal, HD-DOT may be used on studies involving participants with neural prosthetics such as deep brain stimulators and cochlear implants. 46
VI. CONCLUSIONS
In this review, we have focused on the physical principles underlying optical neuroimaging in humans, and the challenges of design and implementation of high density arrays. We have highlighted several studies that have demonstrated strong validation of the anatomical specificity and reliability of the technology. Finally, we summarized papers highlighting the unique potential for HD-DOT methods to profoundly impact clinical care.
Some limitations of HD-DOT should be discussed. The sensitivity of HD-DOT degrades with depth, as with all optical methods and validated imaging with HD-DOT beyond ∼15-20 mm from the surface has yet to be presented. The most robust strategy for overcoming that degradation is to increase the number of measurements at longer distances. However, as discussed above, as the sourcedetector separation increases linearly, the light level at the detector falls off exponentially. As such, significant advances in deeper imaging will most likely come about due to advances in detection and ADC technology with lower noise floors and wider dynamic ranges potentially along with advances in longer wavelength sources and detectors. Even with those potential advances, HD-DOT, unlike methods like fMRI, is limited to imaging the superficial cortex. Therefore, HD-DOT cannot access deep cortical structures such as the insula and operculum or deep subcortical brain structures such as the striatum, amygdala, hippocampus, or the thalamus. 46,47,146 While this limitation is potentially a problem in mapping of functional connectivity networks, known functional connectivity networks have nodes in the superficial cortex. 1,46,228 Furthermore, the functional connectivity structure of the brain has been shown to exhibit network-like properties, where a lesion within one area of the brain will often have effects that can be measured at spatially separated loci, often including those in superficial cortex. 229 Even though HD-DOT is constrained to imaging brain activity in superficial cortical areas, disruptions throughout the functional brain network may be within reach.
Recent advances in HD-DOT systems make this modality a viable alternative to fMRI that can provide comparable spatial REVIEW scitation.org/journal/rsi information about cerebral cortex activity and connectivity, with the added advantage of being portable (e.g., bedside data collection in populations that cannot be taken to a scanner). Multiple opportunities remain in the ongoing development of HD-DOT strategies for mapping human brain function. As discussed above, increasing the density of overlapping measurements has a direct positive impact on reconstructed image quality. One approach to increasing the density of overlapping measurements beyond current designs would be to lower the source-detector separation distances while maintaining the regular grid spacing. Though straight forward in design, this strategy leads to nontrivial challenges in cap design, cap fitting, source-detector encoding and decoding strategies, and the management of the hair of participants. An alternative approach to increasing the number of measurements afforded by a given arrangement of sources and detectors would be to use frequency domain or time domain strategies in a high density arrangement. The added phase-(or time-gate) based measurements compliment the intensity-based measurements and could provide further improvements in image resolution, localization accuracy, and quantification. Fiberless designs that place source, detector, and digitization components on the head have the potential to dramatically increase wearability and portability. All of these designs will also require further advancements in reliable and efficient anatomical co-registration methods, such as electromagnetic localization of the array and head fiducials or surface capture based on photometric strategies. Additionally, the increasing and large number of measurements required by HD-DOT arrays will require further developments in data fidelity assurance and motion-artifact detection and processing. | 16,382.8 | 2019-05-01T00:00:00.000 | [
"Biology",
"Physics"
] |
Can South America form an optimal monetary area? A structural vector autoregression analysis
This research analyzes the feasibility of adopting a common currency in South America using the Optimal Monetary Areas theory. Taking into account that the relative dominance of regional shocks in local output is considered a key indicator to adopt a regional currency, we use a structural vector autoregression (SVAR) model to determine what type of shock —among global, regional or country specific— prevails in South American economies. The results of variance decomposition demonstrate that the output trajectory of South American countries is mainly explained by country-specific shocks; therefore, South America as a whole is not considered not an optimal monetary area. However, we identified a group of countries —named Sud-5 (comprised of Chile, Peru, Ecuador, Brazil and Argentina)— for which the costs of a hypothetical monetary union would be relatively lower.
Introduction
Recent changes in the International Monetary System (IMS) 1 have led several economies to adopt regional currencies, as is the case for the euro area and the recent Monetary Area in the West African (ECOWAS) project. This scenario may have an impact on the debate about regional currencies in other economic blocs. In the case of South America (SA), the debate started with the work of Bayoumi and Eichengreen (1993), who found little support for the idea of a common currency area. Further contributions corroborate this result, with the last relevant work being that of Larrain and Tavares (2003) and the recent paper of Hafner and Kampe (2018). However, after more than twenty-five years of an unprecedented process of globalization, the case for a monetary union in SA should be revisited.
One of the main obstacles to economic integration is the reluctance of the majority of countries to forgo their sovereignty in order to achieve more regional cohesion (Dutta et al. 2020). In a global context, most researchers agree that Latin American (LA) economies maintain a low level of integration. Using a set of indicators of economic integration -suggested by the optimum currency area theory (OCA)-, Dorrucci et al. (2004) showed that LA was less economically integrated not only than the European Union (EU) after the adoption of the euro, but in some cases even less than the EU at the beginning of its regional integration process in the 1960s. East Asia, even with their relative lack of formal regional trade treaties, is more integrated among itself than the countries within LA (Aminian et al. 2009). Reyes et al. (2010) explained that the lower degree of integration of LA could be related to the lack of economic development of the region. Márquez-Ramos et al. (2017) proved that institutional and political factors influence economic integration in LA. These researchers showed that the terrorist attack on September 11th, 2001 and the region's policy affinity with the Revolución Bolivariana affected the economic integration process in LA. Despite this relatively low level of integration however, Basnet and Sharma (2013) determine that the economic fluctuations in the seven largest economies in LA -Argentina, Brazil, Chile, Colombia, Mexico, Peru and Venezuela-follow a similar pattern in terms of duration, intensity, response, and timing both in the long run and in the short run. Therefore, their findings suggest that these economies in LA could benefit strongly from regional cohesion and can lead the path of economic integration in the region.
In the context of LA, most relevant literature has focused on assessing the potential gains to the creation of an optimal monetary area across all LA countries or within economic blocks that maintain regional trade agreements such as MERCOSUR member countries or the Andean Community (CAN) (Eichengreen 1998;Hochreiter and Siklos 2002;Bresser-Pereira and Holland 2009;Numa 2011;Basnet and Pradhan 2017;Hafner and Kampe 2018). While Bayoumi and Eichengreen (1993) found no evidence to support the benefits to the adoption of a common currency in LA, their model has been criticized because it cannot distinguish whether the shocks are regional, global, or simply correlated local shocks (Chow and Kim 2003). Nevertheless, according to Chow and Kim (2003) the prevalence of regional shocks may justify common monetary policy within the region independently of their nature. In other words, the relative importance of regional shocks in the trajectory of the local output is considered the key indicator of the suitability of an economy to adopt a regional currency (Zhao and Kim 2009).
Consequently, in this paper we use a regional model to identify what kind of structural shock -country-specific, regional or global shocks-prevails in SA economies. Once identified, it is possible to establish candidates who would face lower costs to integrate a currency area in SA. These results are compared with a similar analysis of eleven members of the euro area states, taking into account that eurozone is a benchmark with which to compare these kinds of monetary agreements. Additionally, contrary to previous studies that include all LA countries, our research only includes SA countries. This is because a monetary union is much more likely in this group of countries due to their geographical proximity, similar production patterns, existing trade agreements, historical aspects and a greater degree of political integration. The structure of the paper is as follows. In the second section, the OCA literature and the research related to LA are revised. The third section details the methodology and the model used. The fourth section outlines the most important findings. The fifth section assesses the costs and benefits of adopting a single currency in SA. Finally, the main conclusions of the study are presented.
Literature survey
The workhorse model of the monetary unions is still that of the optimum currency areas (OCA), developed in the seminal work by Mundell (1961), McKinnon (1963) and Kenen (1969). The theory established that the adjustment mechanisms that replace the monetary policy are factor mobility and labor market flexibility. Furthermore, the OCA theory also identified benefits and cost in order to determine the adequacy of adopting a single currency. Among the benefits, they found an increase in intra-regional trade caused by the suppression of exchange rate risk and the reduction of transaction costs; improved conditions for investment, production, and consumption; the transparency of prices; and enhanced credibility due to the adoption of an international currency and price stability. Among the costs, they identified the loss of autonomy in monetary policy, the loss of the possibility of financing fiscal deficits through monetary issuance, and the reduction of sovereignty by giving up the national currency (Obstfeld and Rogoff 1996;Visser 2004).
However, Alesina et al. (2002) argued that the higher the correlation of shocks among a new potential member of a monetary area and the member countries is, the lower the cost of losing monetary policy independence. As Frankel and Rose (1997) argue, "Countries with idiosyncratic business cycles give up a potentially important stabilizing tool if they join a currency union. Another criterion for EMU entry is therefore the cross-country correlation of business cycles. Countries with "symmetric" cycles are more likely to be members of an OCA." Consequently, if the business cycles of the members of a monetary area are synchronized, the cost of losing the monetary policy to deal with imbalances should be lower. Subsequently, Frankel (1999) determined that even when the candidate countries to join a monetary area face higher costs than benefits (therefore, they do not belong to an optimal monetary area), once integrated into a monetary area, the increase in both trade integration and output correlation would lead to the benefits being higher than the costs. This means that countries could meet the ex post optimality criterion, even though they did not do so ex ante.
The literature on the suitability of LA, particularly SA, to form a monetary area is limited compared to that carried out on other economic blocs. Most research has focused on certain groups of countries that maintain trade agreements, such as MERCOSUR or the Andean Community. Bayoumi and Eichengreen (1993), through the application of vector autoregression (VAR) in SA, found low correlations in supply shocks, while the correlations of the demand shocks were seven times lower than in Europe and three times lower than in Asia. In later work, Eichengreen (1998) evaluated whether a monetary union could decrease the volatility of the exchange rates of the member countries of MERCOSUR (Argentina, Brazil, Paraguay and Uruguay). Research has shown that a regional currency is not an effective option for reducing exchange rate volatility. Similarly, they also argued that a deeper integration requires the harmonization of national regulations at several levels (such as the EU). Licandro (2000) examined the degree of similarity of the supply shocks that affect the countries that make up MERCOSUR, the NAFTA and the EU. His results demonstrate that the correlations of the supply shocks of the countries of MERCOSUR have a low level of correlation compared to those of other blocs such as the EU and NAFTA. In a study conducted on countries of South and Central America, Larrain and Tavares (2003) evaluated some criteria for the creation of a monetary union while making a distinction between two types -dollarization and a regional currency-and concluded that dollarization may be an option for the countries of Central America. However, they believe that neither the dollarization nor a regional currency would be a good option for SA.
In addition to the literature related to the synchronization of supply and demand shocks, there some other works building upon the euro area experience. Hochreiter and Siklos (2002) took the criteria set out in the Maastricht Treaty as a reference for determining the level of economic convergence. His research findings showed that in the LA region, there was a low level of convergence between Brazil (the main economic referent) and the rest of the countries. Only positive convergence results were obtained with Paraguay and to a lesser extent with Chile. The authors concluded that the creation of a common currency would be costly given the low level of synchronization in economic cycles. studied the changes in LA's monetary systems at the beginning of the twenty-first century. They argued that the LA region has a high level of heterogeneity, where countries differ in size, structure and economic policies. With respect to trade, the authors noted that although trade has significantly increased in most of the countries of the region, brought about by the regional common market agreements (such as MERCOSUR or the Andean Pact), trade integration is still deficient. In the same vein, Numa (2011) determined that both MERCOSUR and CAN require a higher level of economic and political integration to form an optimal monetary area. Kopits (2002) carried out a comparative analysis, using the criteria set out in the Maastricht Treaty, between the countries of Central Europe (which at the time were in the process of joining the euro area) and of LA (especially those of SA). According to the above author, the then candidates from Central Europe appeared to be better placed to join a monetary union (euro area) than the LA countries, given the latter's less homogeneous economic structure, limited trade and low labor mobility within the LA region. Edwards (2006) analyzed empirical evidence on the economic performance of countries that form monetary unions -those that do not have their own currency-and interpreted the results with respect to LA. The analysis focused primarily on (1) sudden stops in capital flows and (2) the current account reversals. The results suggest that belonging to a monetary union has not reduced the likelihood of sudden stops in capital flow or sudden changes in the current account. In summary, the literature regarding LA countries agrees that they are not good candidates for the constitution of a monetary area.
In a more recent study, Bresser-Pereira and Holland (2009) discovered that a regional currency could improve the integration process in LA by reducing the nominal exchange rate volatility, particularly for MERCOSUR. These findings coincide with the results published by Basnet and Pradhan (2017). These authors demonstrated that MERCOSUR countries share common trends in their main macroeconomic indicators. Finally, Hafner and Kampe (2018) demonstrated that LA and its RTAs are far from being considered an optimal monetary area because these countries have marked heterogeneities in terms of income, growth and economic structure. However, the most important conclusion of his research is that the countries belonging to the CAN present better homogeneity (in terms of the openness and mobility of factors) compared to the countries belonging to MERCOSUR.
Methodology
As mentioned above, one of the most relevant aspects for the constitution of a common currency is the degree of synchronization of business cycles among the economies. The first study that analyzed the synchronization of business cycles was presented by Bayoumi and Eichengreen (1993) . These authors used an autoregressive vector model to estimate both the supply and demand shocks for the various economic blocs. The researchers integrated a restriction in which the supply shocks had permanent effects on the output level, while demand shocks only had temporary effects on the output level. According to Bayoumi and Eichengreen, the presence of highly correlated or symmetrical supply shocks within a region is an indicator that a group of countries are good candidates for the constitution of a monetary union. However, an important criticism made about the approach proposed by Bayoumi and Eichengreen is that such a methodology does not allow for the distinction among different types of shocks according to their geographical origin; in other words, among country-specific, regional and global shocks. Chow and Kim (2003) established that the prevalence of each type of shock determines the monetary system that the country should adopt (single currency, monetary union, or global agreement). In other words, if country-specific shocks prevail in an economy, the country should opt for a national currency. If regional shocks predominate in a set of economies and there is also a correlation among regional shocks, a common monetary policy or a regional exchange rate arrangement can be justified. If global shocks prevail in one region and if they similarly affect all economies within and outside the region, a global monetary system -or pegging to a global currency (for example, the U.S. dollar or euro)-is justified. Consequently, the strategy in this paper is to identify what kind of structural shock prevails in the countries of SA -country-specific, regional or global shocks. Once identified, it will be possible to establish candidates who could constitute a currency area.
Model
To identify the underlying global, regional and country-specific structural shocks, we follow a similar strategy to that of Chow and Kim (2003), Zhao and Kim (2009) and Regmi et al. (2015), and it is also based on the methodology proposed by Blanchard and Quah (1988) and King et al. (1987). In this strategy, the domestic output, y d , faces three types of shocks: global, regional and country-specific (u g , u r and u d ): where β i (L) = β i0 + β i1 L + β i2 L 2 + … is a polynomial function of the lag operator (L). Considering Eq.
(1), the model is determined by three variables: the global (y g ), regional (y r ) and domestic (y d ) output. The relation of the three structural shocks to each output variable, in matrix form, is defined as follows: where A ij L ð Þ ¼ a 0 ij þ a 1 ij L þ a 2 ij L 2 þ …, and the matrix representation is Δy t = A(L)u t . It is assumed that the structural shocks of each type (global, regional and countryspecific) are not correlated and that the variance is unitary-that is, Var(u t ) = I. Considering that the different types of shocks are not observed, Chow and Kim (2003) propose the following restrictions to identify the innovations: (i) countryspecific shocks have no impact on the regional or global output in the long term, and (ii) regional shocks do not have an impact on the global output in the long term. 2 Those restrictions are the standard ones in characterizing small open economies. Specifically, global shocks (GS) affect all economies worldwide, including at the regional and domestic levels. An example of such a global shock would be the 2008 global financial crisis. Regional shocks (RS) affect within regional and local levels. However, such shocks do not expand to other regions. An example of regional shock is the commodity price boom observed in SA between 2004 and 2014. 3 Country-specific shocks (CS) only affect one particular country, and the effects of this event do not spread to other economies. Natural disasters or economic crises, such as the 2001 Argentinian economic crisis, are considered country-specific shocks. In matrix terms, the restrictions imply that certain matrix coefficients A(L), which are represented in Eq. (2), are equal to zero. That is, A 12 (1) = A 13 (1) = A 23 (1) = 0. In other words, these estimates are omitted. Consequently, through global, regional and domestic output, it is possible to identify global, regional, and country-specific shocks for a given country. 4
Data
Our analysis included nine countries from SA (Argentina, Bolivia, Brazil, Colombia, Ecuador, Paraguay, Peru, Uruguay and Venezuela) and eleven countries from the euro area (Austria, Belgium, Finland, France, Greece, Ireland, Italy, Luxembourg, the Netherlands, Portugal and Spain), known as EMU-12. 5 The inclusion of European economies is explained by the possibility of comparing the results obtained for SA with those of an established monetary union. The panels were divided into two periods to compare different patterns over time. The first time period corresponds to annual data between 1970 and 2001. The second time period includes quarterly data, from the first quarter of 2001 to the fourth quarter of 2015 (except for Argentina, which started in the first quarter of 2004). 6 Following the economic crises of the 1980s and 1990s, SA countries have shown greater macroeconomic stability since the beginning of the twenty-first century, with the exception of Venezuela and Argentina. In this sense, even when SA has not carried out a monetary integration process -and therefore it is not possible to test the ex post endogeneity approach stressed by Frankel andRose (1997, 2001) -, the purpose of splitting the database into two periods is to verify if there were changes in the influence of regional shocks over time. For SA, the annual data were obtained from the IMF Outlook report and the quarterly reports from official sources in each country; for European countries, the source was the European Commission database.
The variable used for the SVAR model and for the identification of the different shocks is the real output of each country, in addition to the proxy variables of the regional and global real outputs. For the purpose of representing regional output, previous studies used countries with significant economic and political weights within certain regions (Chow and Kim 2003;Zhao and Kim 2009;Regmi et al. 2015). This study used the output of Chile to represent the regional output -or a "center of gravity"-from SA. This choice is justified by the fact that Chile has the best macroeconomic performance in the region (low levels of inflation, fiscal deficit, debt, and fluctuation in its nominal exchange rate). However, in the robustness check section, we use different combinations. Particularly, estimates were also calculated considering Brazil as another regional output option due to its significant representation in the total output of SA (approximately 50% of the regional output). In regard to European countries, the output of Germany was used. The real output of the United States (US) was used as the proxy variable of the global output for both blocs.
The first difference of the logarithm of the country-specific, regional and global output is included for each specification, while for an independent variable, it is the first lag of each variable, except in cases in which the adoption of another strategy is indicated. 7 The augmented Dickey-Fuller (ADF) test found that the variables do not have a unit root. Furthermore, the Johansen test revealed the absence of cointegration among the variables. All models complied with the stability conditions; in other words, their own values were within the unit circle and did not show autocorrelation in the remainder.
Model results
The results of the impulse-response function show that between 1970 and 2001, the EMU-12 countries have less sensitivity to regional shocks compared to the SA region, as shown the impulse response functions in Fig. 1. The most significant case is that of Finland, which has a much more asymmetric pattern than do the rest of the European countries. These preliminary results do not justify the creation of a monetary area in SA Subsequently, the variance decomposition was estimated for the prediction error made for a time horizon of two and ten years. This process allows us to establish the percentage of volatility that a variable experiences when faced with disturbances of other variables. 8 The variance decomposition for the prediction error indicates the degree of the prediction error of the real output of each country produced by each type of shock: global, regional and country-specific. For the annual data between 1970 and 2001, predictions were made for the short (two years) and medium (ten years) term. In the euro area, regional shocks account for 24.9%, on average, of real output in the short term (2 years) and 26% in the medium term (10 years), between 1970 and 2001. The countries with the least impact on their output due to regional shocks were Finland, Ireland and Italy (in the short term), while Greece and Austria have the greatest 7 The optimal number of lags was determined by the Akaike information criterion (AIC) and the Schwarz information criterion (SIC). 8 In other words, it allows for the separation of the variance from the prediction error of each endogenous variable. dominance. Nevertheless, despite showing a high incidence of regional shocks, it has also been proven that the real output variation in euro area countries mainly depends on country-specific shocks. In regard to the SA group, the regional shocks characterized by Chile's real output explain on average 8.9% in the short term and 10% in the medium term. 9 The variance of the real GDP of each country is largely explained by country-specific shocks in both the short and medium term, at 81.6% and 80.5%, respectively. These data suggest that between 1970 and 2000, the best option for SA countries was to maintain their domestic currencies. Two striking data are that for both Brazil and Ecuador, the shocks induced by the US explain approximately 20% (in the short and medium term) of the variance of their outputs (Tables 1 and 2). was replicated using quarterly data between 2001 and 2015. 10 For this dataset, predictions were made for two quarters (short term) and twenty-four quarters (medium term). Figure 2 shows that the degree of symmetry of regional shocks in the euro area declined considerably, even when controlling for the estimate with the dummy variable (crisis). The countries showing the greatest asymmetry in the regional shock trend are Luxembourg, Greece and Ireland. In contrast, SA had a greater symmetry in regional shocks compared to the previous period , although it could be considered deficient.
According to variance decomposition, the regional shocks in the European countries remain at similar levels to those found in the annual data, averaging 24% in the short and medium term. These results are striking; according to Frankel andRose (1997, 2001), after forming a currency zone, the countries should increase their synchronization of business cycles. This concept is known as the ex post optimality. However, the findings show that European economies face similar levels of regional shocks, even after the entry of the euro. Ireland, which had a low level of regional shocks in the previous panel, has reduced the effect of common disturbances on the trajectory of its output. Greece has also reduced the effect of common shocks.
In contrast, Italy has significantly increased its common or regional shocks. In the case of SA, the values show that, on average, the real GDP deviations are largely explained by domestic shocks, and the regional shock levels are maintained. In general, the dominance of regional shocks in the SA economies has slightly increased compared to the period 1970-2001. However, this improvement may be considered deficient to pursue a monetary integration process throughout the region. The most relevant results are those of Ecuador (16.2%) and Peru (15.2%), which show a greater presence of regional shocks (represented by Chile) compared with the rest of the countries. Finally, the level of explanation of regional shocks in Brazil reaches 41.9% in both the short and medium term.
Robustness checks
Considering that the denomination of the regional output and the global output is arbitrary (especially in the case of the representation of the regional output by Chile, which is a relatively small economy in SA), in this section we present alternative models with different representations of both variables with the purpose of ensuring robustness in our results and we compare it with the benchmark model (BM). Table 3 reports the main results.
In the first alternative model, M1, the regional output is represented by the output of Brazil and the global output is US. We use this configuration because of the economic weight (about half of the output in SA) and international relevance. In M2 the regional output is represented by the sum of the outputs of Chile, Peru and Ecuador -countries with higher level of explanation of regional shocks in the BM and greater macroeconomic stability-and the global output by the sum of the outputs of the US and the EU.
The M3 model uses the same M2 specification for the global output, while the regional output is represented by the sum of Brazil, Colombia and Chile -the main economies of the region. In M4 and M5 for the global output we use the sum of the outputs of the US, the EU and Japan. In M4, the regional output is the sum of Brazil, Chile, Peru and Ecuador. In M5 the regional output is the sum of all SA countries (with the exception of Argentina due to data availability issues).
Although changes in the specifications of the model produce slightly different outcomes -especially when Brazil is integrated to represent the regional outputthey show consistency with those obtained in the BM. Specifically, regional shocks maintain a greater relevance in Brazil, Chile, Peru and Ecuador. Note that in the M3, M4 and M5 models, the dominance of regional shocks in Brazil exceeds 80% (causing the average of regional shocks for SA to increase). This is due to the strong influence of this country on regional output. Furthermore, it is important to note that Chile's output also presents a high level of explanation by the regional shocks in the different configurations. Consequently, Chile, Brazil, Peru and Ecuador would be better positioned to form a monetary area. Finally, it should be noted that in M1, M3, M4 and M5 (when Brazil integrates the regional output) demonstrates that Argentina's output is also influenced by the regional shocks. Thus, the costs of adopting a regional currency would be reduced for Argentina as long as Brazil becomes a member of the integrating group.
Discussion and final considerations
Following Mongelli (2002) and De Grauwe and Mongelli (2005), we use graphical representations to illustrate the relative position of EMU-12 and SA countries with respect to the that which reflects optimal conditions for adopting a common currency -i.e., where the benefits are greater than the costs. Taking into account that the loss of a national monetary policy instrument is more costly as the degree of business cycles asymmetry increases (Frankel and Rose 1997;Alesina et al. 2002), the costs of adopting a common currency are illustrated by variance decomposition for the prediction error of the regional shocks obtained previously. The higher the degree of explanation of the variance of the output by the regional shocks, the lower the costs of adopting a regional currency. Takin into consideration that intra-regional trade is a source of benefits of a monetary union (De Grauwe and Mongelli 2005; De Grauwe 2016), we use the intra-regional trade in the EMU-12 andSA (between 2001 and to represent the benefits. Therefore, the OCA line (downward sloping) shows the possible combinations between asymmetry (costs) and integration (benefits). Consequently, points lying to the right of the OCA line represent countries for which the benefits of a monetary union exceed its costs. As Fig. 3 shows, the countries of the EMU-12 for which the benefits exceed the costs are Luxembourg, the Netherlands, Belgium, Austria and Italy. For SA, we use the results of the MB, M1 and M5 models to represent the possible costs. According to Figs. 4, 5 and 6, none of SA countries falls to the right of the OCA line -i.e., the costs outweigh the benefits. However, based on the results of the MB models (with Chile as core) and M1 (with Brazil as core) which are illustrated in Figs. 4 and 5, it is possible to determine that the countries with the lowest costs in a hypothetical monetary unification are: Peru, Ecuador, Chile, Brazil and Argentina. We categorize this group of countries as Sud-5. Although these economies would obtain less gains from adopting a regional currency because of their poor level of intra-regional trade, Sud-5 countries share borders that would facilitate intra-regional trade that could reduce the costs and make integration attractive. Moreover, a common currency in Sud-5 countries would allow a boost in intra-regional trade taking into account that monetary unions increase the trade between their members (Rose 2000;de Nardis and Vicarelli 2003;Bun and Klaassen 2007;Berger and Nitsch 2008). An important finding is that according to the M5 model, in which regional shocks are represented by the sum of regional output, costs increase for all countries due to, in average, the regional shocks influence is lower -only Brazil and Chile exceed the average. These outcomes raise important questions which are: what would be the best path SA countries could take to achieve monetary integration? Moreover, which country should lead this process? In relation to the first question, the most feasible path would be a partial monetary integration in which the economies that have the best conditions to adopt a common currency form a core and then other economies that meet the basic requirements can gradually integrate. Let's remember that a monetary integration in SA as a whole would increase costs for all countries. The second question is much more difficult to answer. Several authors agree that Germany played a decisive role in the creation of the euro area because this country had key characteristics to promote a monetary unification process: macroeconomic stability, institutional credibility and a solid international standing (Hadjimichalis 2011;Eichengreen 2012;Crum 2013;De Grauwe and Ji 2015). Certainly, there are two possible economies in SA that could play a central (or core) role: Brazil and Chile. On the one hand, Brazil is the region's most representative economy (approximately 50% of SA's total GDP) and maintains a great influence on Chile's business cycle. However, the downside of this proposal is Brazil's deep macroeconomic imbalances and institutional weaknesses. Specifically, this country maintains persistent fiscal imbalances, inflationary pressures, rising unemployment rate, weaknesses in the business environment, exchange rate volatility, the highest general government gross debt in the region (87.9% of GDP in 2018) and lack of "facilitating features" at the political and social level to carry out the institutional reforms that would strengthen the Brazilian economy (Coelho 2020). In addition, as several authors stress (Rivarola Puntigliano and Briceño Ruiz 2017; Scholvin and Malamud 2020), there are social, political, and structural constraints to Brazil's regional hegemony -such as location and physical barriers, the distribution of the population and economic activity, infrastructure for energy and transportation, and public policies-resulting in a disconnection of the economy from its neighbors. On the other hand, contrary to Brazil, Chile has achieved an extended period of macroeconomic stability and solid institutional credibility. The Chilean economy is characterized by moderate levels of unemployment, price stability, better macro-fiscal performance with low levels of primary fiscal deficits and the lowest general government debt in SA (25.6% of GDP in 2018). At the institutional level, the Chilean Central Bank demonstrates one of the highest levels of Central Bank independence globally and maintains a successful inflation-targeting framework (Venter 2020). Moreover, among the LA countries that have adopted floating regimes and inflation targets, Chile has one of the lowest rates of intervention in the exchange market (Pérez Caldentey and Vernengo 2020). In the international context, the foreign policy in Chile has also been able to consolidate long-term coherence (Minke Contreras 2020). Therefore, from a technical point of view, we think that Chile would be the most suitable core country for a possible monetary union in SA because this country better fits the basic criteria of a core player in the monetary integration process (macroeconomic stability, institutional credibility and an appropriate international reputation). However, one limitation is the low relevance of Chile in the regional economy (barely 7% of SA's total GDP). Finally, although Chile has not risked its economic sovereignty in regional integration schemes (Fermandois 2011), in recent years Chile has shown greater regional commitment with the auspices of the Union of South American Nations (UNASUR of 2008) and the launching of regional institutions (Pacific Alliance, PA of 2012; PROSUR of 2019) (Wehner 2020).
Note: Intra-regional trade to EMU-12 is calculated as a fraction of the average exports to EMU-12 from 2001 to 2015 divided by the total average exports within the same period. The decomposition of the variance is taken from the BM model.
Note: Intra-regional trade to SA is calculated as a fraction of the average exports to SA from 2001 to 2015 divided by the total average exports within the same period. The decomposition of the variance is taken from the BM model.
Note: Intra-regional trade to SA is calculated as a fraction of the average exports to SA from 2001 to 2015 divided by the total average exports within the same period. The decomposition of the variance is taken from the M1 model.
Note: Intra-regional trade to SA is calculated as a fraction of the average exports to SA from 2001 to 2015 divided by the total average exports within the same period. The decomposition of the variance is taken from the M5 model.
Conclusions
The main findings in this paper are that we find EMU-12 countries' regional shocks have been similar before and after adoption of the euro (with the exception of Ireland and Greece, which have experienced a largely diminished influence of regional shocks in their economic fluctuations). Furthermore, the countries in which the benefits of adopting a common currency (measured as intra-regional trade) are greater than the costs (measured as the influence of regional shocks) are Italy, Belgium, Austria, the Netherlands and Luxembourg. A parallel analysis for SA countries reveals that economic disturbances are dominated by country-specific shocks and these have not changed greatly compared to the period of macroeconomic instability in the region (between 1970 and 2000). In other words, the costs of adopting a common currency would be higher for the SA economies than the EMU-12 countries. Nevertheless, although changes in the model specifications produce slightly differences outcomes -especially when Brazil is integrated to represent the regional output-the most important result of this research is the identification of a group of countries named Sud-5 (comprised of Chile, Peru, Ecuador, Brazil and Argentina), for which the costs of a hypothetical monetary union would be relatively lower. Furthermore, taking into account that monetary unions increase trade between their members, a common currency in Sud-5 would booster intra-regional trade in countries that share borders, effectively increasing the efficiency gains of forming a union. The majority of these countries belong to CAN: Peru and Ecuador are current members, and Chile, Argentina and Brazil are associate members. These results are similar to those of Hafner and Kampe (2018), who determined that CAN countries are in a better position to form a monetary area compared to MERCOSUR countries. Therefore, Sud-5 could be considered the most appropriate core for the creation of a single currency in SA, even though SA as a whole cannot be considered as an optimal monetary area.
A feasible path for SA countries would be a partial monetary integration in which the economies that have the best conditions to adopt a common currency form a core and then other economies that meet the basic requirements can gradually integrate. Certainly, the euro area showed that the creation of a monetary area can be a long and complex process (which includes trade liberalization, market integration, the creation of institutional and legal structures, and a long period of policy harmonization); however, the inclusion of new members is feasible. Nonetheless, the European experience has also shown that in order to establish a solid monetary union, it is imperative that members meet the technical criteria to avoid internal imbalances.
Finally, there are two possible economies in SA that could play a central (or core) role in monetary integration: Brazil and Chile. On the one hand, Brazil is the region's most representative economy (approximately 50% of SA's total GDP) and maintains a great influence on Chile's business cycle. On the other hand, Chile's economy shows macroeconomic stability, institutional credibility and an appropriate international reputation. From a technical point of view, we think that Chile would be the most suitable core country for a possible monetary union in SA because this country better fits the basic criteria to be the core member of a monetary integration process. However, despite Sud-5 countries facing lower costs of adopting a common currency, it is unlikely that they will form a monetary union in the short to medium term given the low level of regional integration and the tendency to resist loss of sovereignty in SA.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 9,006.2 | 2020-11-14T00:00:00.000 | [
"Economics"
] |
A New Remote Hyperspectral Imaging System Embedded on an Unmanned Aquatic Drone for the Detection and Identification of Floating Plastic Litter Using Machine Learning
: This paper presents a new Remote Hyperspectral Imaging System (RHIS) embedded on an Unmanned Aquatic Drone (UAD) for plastic detection and identification in coastal and freshwater environments. This original system, namely the Remotely Operated Vehicle of the University of Littoral C ô te d’Opale (ROV-ULCO), works in a near-field of view, where the distance between the hyperspectral camera and the water surface is about 45 cm. In this paper, the new ROV-ULCO system with all its components is firstly presented. Then, a hyperspectral image database of plastic litter acquired with this system is described. This database contains hyperspectral data cubes of different plastic types and polymers corresponding to the most-common plastic litter items found in aquatic environments. An in situ spectral analysis was conducted from this benchmark database to characterize the hyperspectral reflectance of these items in order to identify the absorption feature wavelengths for each type of plastic. Finally, the ability of our original system RHIS to automatically recognize different types of plastic litter was assessed by applying different supervised machine learning methods on a set of representative image patches of marine litter. The obtained results highlighted the plastic litter classification capability with an overall accuracy close to 90%. This paper showed that the newly presented RHIS coupled with the UAD is a promising approach to identify plastic waste in aquatic environments.
Introduction
The high and rapidly increasing levels of plastic litter in aquatic environments represent a serious environmental problem at a global scale, negatively affecting aquatic life and biodiversity, ecosystems, livelihoods, fisheries, maritime transport, recreation, tourism, and economies.To address this problem, the research community is always looking for novel devices, tools, and methods to detect, identify, and quantify plastic litter more rapidly and efficiently [1][2][3][4].Monitoring methods such as visual counting or sampling using nets are labor-intensive, whereas current remote observation (from spaceborne or airborne platforms) has some limitations in detecting and identifying plastic litter.Therefore, it is necessary to develop an innovative remote sensing system able to automatically detect and identify plastic litter in order to study pollution sources properly, to improve the survey assessments, and to support the implementation of mitigation measures [1][2][3][4][5].
resolution [8,10,16].Moreover, satellites and aerial images need to apply atmosph rection methods on the data to extract the hyperspectral reflectance.Conceptual framework for remote marine litter detection from [22] with differe devices (satellite, airborne, drone, etc.) and with the new system (ROV-ULCO) propose paper.
One way to overcome such limitations is to develop a hyperspectral sys works in the near-field and acquires hyperspectral images with a high spatial re and a high number of spectral bands.In this context, our contribution, propose paper, was the development of a new Remote Hyperspectral Imaging System (RH bedded on an Unmanned Aquatic Drone (UAD), namely the Remotely Operated of the University of Littoral Côte d'Opale (ROV-ULCO), as shown in Figure 1 knowledge, such a system is a real technological innovation that has never been p in the literature and, therefore, constitutes the novelty of our work.Our main o were (1) to develop new technologies for the perception and detection of plastic w to reduce time consumption during a study, and (3) to generalize an accurate tool tify, qualify, and identify polymers of floating plastic litter.This study, thus, con to the ongoing research efforts to develop new tools and methodologies for plas detection.
Although the use of hyperspectral imaging provides a large amount of info the problem of marine litter detection and identification is still complex due to the of different types of marine waste, especially for plastic materials present in aqua ronments, and the difficulty to recognize their nature by image analysis because high shape, size, opacity, and polymer variabilities [10,11,24].Indeed, floating pla can be perceived differently depending on its position, its orientation, or its speed of the camera and the lighting device, which can generate shadows and specula tion, depending on the opacity of its material or depending on whether its surface wet or dry and mixed with other materials.In order to reproduce these different sc it is important to first carry out experiments under laboratory-controlled conditio thermore, dealing with hyperspectral data is computationally expensive, and it [22] with different remote devices (satellite, airborne, drone, etc.) and with the new system (ROV-ULCO) proposed in this paper.
One way to overcome such limitations is to develop a hyperspectral system that works in the near-field and acquires hyperspectral images with a high spatial resolution and a high number of spectral bands.In this context, our contribution, proposed in this paper, was the development of a new Remote Hyperspectral Imaging System (RHIS) embedded on an Unmanned Aquatic Drone (UAD), namely the Remotely Operated Vehicle of the University of Littoral Côte d'Opale (ROV-ULCO), as shown in Figure 1.To our knowledge, such a system is a real technological innovation that has never been presented in the literature and, therefore, constitutes the novelty of our work.Our main objectives were (1) to develop new technologies for the perception and detection of plastic waste, (2) to reduce time consumption during a study, and (3) to generalize an accurate tool to quantify, qualify, and identify polymers of floating plastic litter.This study, thus, contributes to the ongoing research efforts to develop new tools and methodologies for plastic litter detection.
Although the use of hyperspectral imaging provides a large amount of information, the problem of marine litter detection and identification is still complex due to the number of different types of marine waste, especially for plastic materials present in aquatic environments, and the difficulty to recognize their nature by image analysis because of their high shape, size, opacity, and polymer variabilities [10,11,24].Indeed, floating plastic litter can be perceived differently depending on its position, its orientation, or its speed in front of the camera and the lighting device, which can generate shadows and specular reflection, depending on the opacity of its material or depending on whether its surface is either wet or dry and mixed with other materials.In order to reproduce these different scenarios, it is important to first carry out experiments under laboratory-controlled conditions.Furthermore, dealing with hyperspectral data is computationally expensive, and it is quite challenging to collect and manually label data for all types of existing plastic marine litter [3,5,10,11].
In this study, our general focus aimed to assess the capability of the proposed ROV-ULCO system to automatically recognize different types of plastic waste in aquatic environments with classical machine learning methods.For this purpose, waste samples representative of the most-common marine litter items found in the coastal environment were collected.This collection contained different plastic types (HDPE, LDPE, PET, PP, PVC, PS, etc.) and other materials such as wood, paper, rubber, and vegetation.Hyperspectral images of these waste items were then acquired by the RHIS system in laboratory-controlled conditions to build a benchmark database.RHIS provides high-spatial-resolution images and hyperspectral data cubes that cover the NIR (900 nm) to SWIR (1700 nm) range of the electromagnetic spectrum.From this database, an in situ spectral analysis was carried out to check the compliance of the spectra with the literature.Standard machine learning methods were then applied to evaluate the plastic waste recognition performance of our system.
The second section of the paper first presents the ROV-ULCO system with all its components, as well as the collected waste samples used in the experiments.This section describes how the waste samples are scanned by the ROV-ULCO to provide hyperspectral images of the proposed benchmark database.Two kinds of datasets were then derived from this database.The first dataset was constituted by the mean spectral reflectance computed over various parts of each marine litter sample observed under different conditions.This dataset was used to conduct an in situ spectral analysis in order to characterize each type of marine litter by a reference hyperspectral reflectance, as described in Section 3.This analysis aimed to compare the absorption features of each reference hyperspectral reflectance with the literature and thus confirmed which wavelengths were the most-efficient to discriminate the different plastic types.The second dataset was a set of same-sized image patches manually selected from the whole hyperspectral images and labeled with the ground-truth of each available marine litter category.Section 4 presents the experiments conducted with this dataset in order to assess the ability of our original RHIS to recognize different categories of marine litter.In this section, several supervised machine learning models, such as K-Nearest Neighbors (KNNs), Support Vector Machines (SVMs), and Artificial Neural Networks (ANNs), were trained on a set of training image patches.Then, the best-trained models were used to evaluate their performance on the testing image patches of the marine litter so that the testing patches were independent of the training ones to reproduce realistic conditions.Finally, the conclusion highlights that the ROV-ULCO is a promising approach to detect and identify plastic litter in aquatic environments.
Materials
This section describes the materials used in this study.The new remote hyperspectral imaging ROV-ULCO system is first presented with all its components in Section 2.1.The experimental setup to conduct the acquisitions with this system is then described in Section 2.2.This setup aimed to create a benchmark hyperspectral image database of plastic litter in a controlled laboratory environment, which reproduces different real situations.This database is presented in Section 2.3.
The ROV-ULCO System
The ROV-ULCO system, illustrated in Figure 2, is constituted of two subsystems: a new Remote Hyperspectral Imaging System (RHIS) and an Unmanned Aquatic Drone (UAD).
The UAD is an aquatic surface drone, named Jellyfishbot, specifically designed for removing floating debris [25].It is equipped with two propulsions, which are located under the two floating parts, and a remote control and communication system.The aquatic drone can reach a top speed of 2 knots and can have autonomy in terms of power that is greater than 2 h.The UAD was tailored at the University of Littoral (ULCO) to enable plastic material sampling in different water bodies, even in confined and hard-to-reach areas such as small waterways, estuaries, or rivers [25].The UAD is an aquatic surface drone, named Jellyfishbot, specifically designed for removing floating debris [25].It is equipped with two propulsions, which are located under the two floating parts, and a remote control and communication system.The aquatic drone can reach a top speed of 2 knots and can have autonomy in terms of power that is greater than 2 h.The UAD was tailored at the University of Littoral (ULCO) to enable plastic material sampling in different water bodies, even in confined and hard-to-reach areas such as small waterways, estuaries, or rivers [25].
The RHIS is connected at the front of the UAD in order to push it at the water surface.This original imaging system was structured around the following main outside elements (Figure 3): The RHIS is connected at the front of the UAD in order to push it at the water surface.This original imaging system was structured around the following main outside elements (Figure 3): The system enclosures are certified waterproof up to a 1.5 m depth at atmospheric pressure and for 30 min with an Ingress Protection (IP) rating of IP67.
The usage of the ROV-ULCO is limited to the observation of marine litter floating on the surface of the water.The current prototype is not designed to detect occluded objects The system enclosures are certified waterproof up to a 1.5 m depth at atmospheric pressure and for 30 min with an Ingress Protection (IP) rating of IP67.
The usage of the ROV-ULCO is limited to the observation of marine litter floating on the surface of the water.The current prototype is not designed to detect occluded objects that are not visible at the first observation layer or that are completely submerged under the water, which absorbs SWIR light depending on the depth.
The ROV-ULCO can travel more than 2 km with 1 h of autonomy at a maximum speed of 2 knots, before returning to replace the interchangeable batteries and continue the observation.
Hyperspectral images are stored only when marine litter is present under the RHIS.The recorded images are then processed offline to recognize the type of waste observed by the camera.
The Resonon PIKA-NIR-320 is a line-scan (also called push-broom) hyperspectral camera that covers the NIR to SWIR spectral range (900-1700 nm) with 164 spectral bands.The total number of spectral channels delivered by this camera is actually 168, with bands extending beyond both edges of the spectral range.Its resolution is 320 spatial pixels per line with a pixel size of 30 µm, and its line rate reaches up to 520 Hz.The main characteristics of the PIKA-NIR-320 hyperspectral camera are described in the Supplementary Materials (Datasheet S1) (now referred to as Pika IR hyperspectral camera: https://resonon.com/Pika-IR, accessed on 28 March 2023).This line-scan imager collects data one line at a time, and a two-dimensional image is completed by assembling line-by-line the multiple line-images acquired successively as the object is translated.To obtain hyperspectral data, signals from each pixel of a line-image enter at the same time into a spectrometer, which provides the spectrum of incoming light intensity as a function of wavelength for every pixel of the image.The two-dimensional image thus-acquired can be interpreted as a stack of single-band grayscale images, called a data cube, where each image of the stack corresponds to a different wavelength.This hyperspectral camera is provided with the SpectrononPro software, version 3.4.4(Spectronon software, Hyperspectral Software: https://resonon.com/software,accessed on 28 March 2023) to acquire data.
In order to ensure the stability of the system, the RHIS was designed in such a way that the center of gravity of the camera is as close as possible to the surface of the water.This is the reason why the camera is positioned horizontally with its optical axis parallel to the water surface.The ROV-ULCO is remotely controlled so that marine litter floating on the water is scanned by the RHIS.The latter operates in a very near-field to detect floating marine litter, where the distance between the hyperspectral camera and the water surface is about 45 cm (see the 3D views in Figure S1 of the Supplementary Materials for more details).The waste scrolling under the RHIS is illuminated by a waterproof lighting device protected from ambient light by a plate system.This device consists of a ramp of three halogen lamps, whose light spectrum covers the sensitivity range of the camera (900-1700 nm), in front of a diffuser.The light reflected by the illuminated surface hits an optical mirror oriented at 45 • , to move towards the objective lens and then on to the camera sensor parallel to the surface water.In order to cover a field of view corresponding to the distance between the two floats of the ROV-ULCO, the focal length of the objective lens is equal to 12 mm.The f-number of the objective lens was set to f/2 to let in a sufficient quantity of light for image acquisition without causing too much optical distortion.The length of the field of view is about 30 cm with this setting.
Although the scanning area covered by the proposed system is less than the area observed by other platforms equipped with hyperspectral imaging systems such as satellites, airborne vehicles, and drones, it overcomes their limitations in the spatial and spectral resolutions and enables observations of areas not visible by these other platforms.Moreover, no atmospheric correction of the hyperspectral data is needed.Another advantage of our system is that it can work night and day because it is completely independent of solar illumination and isolated from light noise.Finally, it can be easily used as a portable laboratory imaging system.
Experimental Setup
The two main objectives of the experimental setup were, first, to calibrate in situ the RHIS before being used in aquatic environments and, secondly, to characterize the main types of marine litter so that they can be recognized automatically.For this purpose, different real situations encountered on the water surface were reproduced in a laboratory environment.
To simulate an aquatic environment, a black PVC plastic container (dimensions of 45 × 32 × 10 cm 3 ) was filled with clear seawater.Such a black container was used to hold the seawater and the objects because it has negligible reflectance values compared to the reflectance values of the observed objects over the NIR-SWIR spectrum, while water absorbs infrared light.The hyperspectral camera was positioned at 45 cm of the water surface level to reproduce similar conditions to the real situation where the RHIS works on the water surface to detect the plastic type of the floating objects.For the linear motion simulation, a linear translation stage (linear scanner) was used to move the black container (Figure 4).An embedded computer with the Spectronon Pro software was used to calibrate the camera, focus the objective lens, capture the hyperspectral data cubes, control the integration time and the frame rate of the camera, and drive the motor for the linear scanner.Although hyperspectral cameras are spectrally calibrated, they usually provide raw data, which need to be calibrated to obtain the absolute reflectance of the scanned objects.For this purpose, both the instrument sensor response and illumination functions were considered to correct the acquired images.This calibration, also named flat field correction, was, thus, performed by a dark correction followed by a response correction.For the dark correction, the dark reference was captured by completely closing the aperture of the camera, leading to no light striking the sensor, resulting in a true dark reflectance.For the response correction, a Spectralon ® white diffuse reflectance standard was used as a white reference with a reflectivity of 1 in all the wavelengths.The integration time was adjusted to maximize the apparent reflectance of the Spectralon calibration panel.For all acquisitions, the camera parameters were fixed so that the frame rate supported the integration time required for the illumination and sample brightness, as follows:
The speed of the linear translation stage was then adjusted to maintain a unity aspect ratio so that the observed objects were not distorted in the acquired images.We obtained hyperspectral images with the spatial resolution given by the ratio between the line of An embedded computer with the Spectronon Pro software was used to calibrate the camera, focus the objective lens, capture the hyperspectral data cubes, control the integration time and the frame rate of the camera, and drive the motor for the linear scanner.Although hyperspectral cameras are spectrally calibrated, they usually provide raw data, which need to be calibrated to obtain the absolute reflectance of the scanned objects.For this purpose, both the instrument sensor response and illumination functions were considered to correct the acquired images.This calibration, also named flat field correction, was, thus, performed by a dark correction followed by a response correction.For the dark correction, the dark reference was captured by completely closing the aperture of the camera, leading to no light striking the sensor, resulting in a true dark reflectance.For the response correction, a Spectralon ® white diffuse reflectance standard was used as a white reference with a reflectivity of 1 in all the wavelengths.The integration time was adjusted to maximize the apparent reflectance of the Spectralon calibration panel.For all acquisitions, the camera parameters were fixed so that the frame rate supported the integration time required for the illumination and sample brightness, as follows:
The speed of the linear translation stage was then adjusted to maintain a unity aspect ratio so that the observed objects were not distorted in the acquired images.We obtained hyperspectral images with the spatial resolution given by the ratio between the line of view length and the camera resolution.
Although the hyperspectral camera Resonon PIKA-NIR320 covers the spectral range from 900 to 1700 nm with 168 spectral bands, the 13 first spectral bands and the 12 last spectral bands provide too noisy and distorted information to be exploited.For this reason, these 25 spectral bands were neglected by reducing the number of bands from 168 to 143, with an interval of the wavelength from 949.2 nm to 1650.8 nm instead of the real interval of the wavelength from 886.3 nm to 1711.4 nm.
This setup was used to acquire hyperspectral images of different waste samples in order to spectrally characterize them, compare their spectrum with the state-of-the-art to validate the proposed RHIS, on the one hand, and prove the RHIS's ability to recognize the different plastic litter, on the other hand.The database built for this purpose is presented in the next subsection.
Benchmark Image Database
In our study, plastic waste samples were collected from estuarine and coastal beaches along the Eastern English Channel French coast.In addition, some virgin plastics were used to expand the plastic library, which led to a set of plastic objects that contained and represented all plastic types.A categorized overview of these plastics objects and their types is shown in Table 1.They were divided into the following categories: (1) HDPE, (2) LDPE, (3) PET, (4) PP, (5) PVC, (6) PS, ( 7) PolyURethane (PUR), ( 8) PolyOxyMethylene (POM), (9) and ABS.A tenth category of non-plastic materials found in the aquatic environment (wood, vegetation, cardboard, clear seawater, etc.) and named "Other" was also added.We can notice in Table 1 that the number of objects was different depending on the plastic type.This variation was representative of the diversity of products made with each type of plastic.
The polymer that constituted each type of plastic object presented in Table 1 was then identified by a Macro-Raman Spectrometer (MacroRAM, Horiba Scientific, France, Palaiseau) using a laser of a 785 nm wavelength with a power of 7-450 mW and a fixed grating of 685 gr.mm −1 with a spectral range from 100-3400 cm −1 [26,27].This spectrometer was equipped with a CCD detector for a spectral resolution of 8 cm −1 at 914 nm.The signal acquisition and processing were realized with Labspec software and its identification using the KnowItAll software (KnowItAll, BioRad ® ) and the free-access spectra libraries of Horiba (Raman-Forensic-Horiba) and SLoPP/SLoPP-E.These identifications served as a ground-truth to label the data of our benchmark waste hyperspectral image database.
To create a hyperspectral image database of real marine litter, we performed 39 acquisitions with all the plastic objects presented in Table 1 under several positions using the RHIS in a controlled environment.Similar to [28], two cases were studied: dry and wet objects.The same object was scanned three times under several positions (face up or face down, side up or side down, etc.) and different views of its presence as an object floating (dry and wet) on seawater.We, thus, obtained a database of hyperspectral images which contained hyperspectral data cubes of the nine plastic types (HDPE, LDPE, PET, PP, PVC, PS, PUR, POM, ABS) and the "Other" category.
Two different datasets were then derived from this benchmark image database in order to carry out the two experiments presented in Sections 3 and 4, respectively.The first dataset consisted of reference mean hyperspectral reflectance spectra, and it was used to perform a spectral data analysis (Section 3).The second dataset was a labeled waste image patch dataset, which was used to assess the classification performance of the proposed system (Section 4).
Spectral Data Analysis
This section presents an in situ spectral analysis conducted with the proposed RHIS in order to characterize the hyperspectral reflectance of plastic litter samples (Section 3.1).This analysis aimed to demonstrate that the obtained spectral reflectance of each plastic type was similar to those existing in the literature and, thus, confirm which wavelengths were most-efficient in discriminating between plastic types (Section 3.2).
Spectral Reflectance Dataset
Using the new RHIS and Spectronon Pro software, the reflectance spectra were defined as references to characterize and verify the spectral reflectance of each object category.For this purpose, each hyperspectral data cube image was visualized (Figure 5) and different Regions Of Interest (ROIs) were then selected with various sizes and numbers depending on the size of each object.Large-area objects allowed selecting a large size and/or a large number of ROIs, while small-area objects limited the size and/or the number of ROIs.Each ROI was labeled by its type of plastic or Other.For each waste sample presented in Table 1, the following steps were applied: 1. Manual selection of a Region Of Interest (ROI) with a random size; 2. Computation and plotting of the hyperspectral reflectance spectrum of the selected ROI as the mean value over all pixels in the ROI; 3. Choice of another ROI of the same object present in another acquisition; 4. Return to Step 2, and repeat this process until a significative number of spectra are computed depending on the size of the object.
All selected ROIs represented two possible cases (dry and wet) of the plastic object in seawater.
For example, Figure 5 shows each of the hyperspectral reflectance spectra computed from five ROI extracted from the LDPE plastic object "Blue toothpaste tube".
The so-computed spectral data were quantized by an integer whose maximum value depended on the bit depth.Each hyperspectral reflectance spectrum was then normalized by using the reflectance factor given for each acquisition so that all the reflectance values belonged to the interval [0, 1].Finally, a reference mean hyperspectral reflectance spectrum of each plastic object was calculated as the average of the normalized mean hyperspectral reflectance spectra of the selected ROIs from this object.
For example, Figure 6 displays the reference mean hyperspectral reflectance spectrum (called mean spectra) of the LDPE plastic object "Blue Toothpaste Tube" in the case where it was dry.This figure shows that the different spectra of the same object were close to each other and its reference mean hyperspectral reflectance spectrum can be used as a spectral signature of the type of plastic.Figure 7 displays the reference mean hyperspectral reflectance spectra of different dry objects (green net rope, yellow rope, etc.) of the same plastic type HDPE.This figure shows that the shape of the different spectra was For each waste sample presented in Table 1, the following steps were applied: 1.
Manual selection of a Region Of Interest (ROI) with a random size; 2.
Computation and plotting of the hyperspectral reflectance spectrum of the selected ROI as the mean value over all pixels in the ROI; 3.
Choice of another ROI of the same object present in another acquisition; 4.
Return to Step 2, and repeat this process until a significative number of spectra are computed depending on the size of the object.
All selected ROIs represented two possible cases (dry and wet) of the plastic object in seawater.
For example, Figure 5 shows each of the hyperspectral reflectance spectra computed from five ROI extracted from the LDPE plastic object "Blue toothpaste tube".
The so-computed spectral data were quantized by an integer whose maximum value depended on the bit depth.Each hyperspectral reflectance spectrum was then normalized by using the reflectance factor given for each acquisition so that all the reflectance values belonged to the interval [0, 1].Finally, a reference mean hyperspectral reflectance spectrum of each plastic object was calculated as the average of the normalized mean hyperspectral reflectance spectra of the selected ROIs from this object.
For example, Figure 6 displays the reference mean hyperspectral reflectance spectrum (called mean spectra) of the LDPE plastic object "Blue Toothpaste Tube" in the case where it was dry.This figure shows that the different spectra of the same object were close to each other and its reference mean hyperspectral reflectance spectrum can be used as a spectral signature of the type of plastic.Figure 7 displays the reference mean hyperspectral reflectance spectra of different dry objects (green net rope, yellow rope, etc.) of the same plastic type HDPE.This figure shows that the shape of the different spectra was similar with the absorption features located at the same wavelengths.The level of each spectrum along the reflectance axis can vary depending on the color and opacity of the plastic under consideration.
Remote Sens. 2023, 15, x FOR PEER REVIEW 13 of 26 similar with the absorption features located at the same wavelengths.The level of each spectrum along the reflectance axis can vary depending on the color and opacity of the plastic under consideration.Finally, a dataset of 382 reference mean hyperspectral reflectance spectra was computed (104 for HDPE; 26 for LDPE, 34 for PET, 129 for PP, 18 for PVC, 22 for PS, 13 for PUR, 4 for POM, 9 for ABS, 23 for Other).An overview of this dataset can be found in the Supplementary Materials (Spreadsheet S1).Finally, a dataset of 382 reference mean hyperspectral reflectance spectra was computed (104 for HDPE; 26 for LDPE, 34 for PET, 129 for PP, 18 for PVC, 22 for PS, 13 for PUR, 4 for POM, 9 for ABS, 23 for Other).An overview of this dataset can be found in the Supplementary Materials (Spreadsheet S1).
Remote Sens. 2023, 15, x FOR PEER REVIEW 13 of 26 similar with the absorption features located at the same wavelengths.The level of each spectrum along the reflectance axis can vary depending on the color and opacity of the plastic under consideration.Finally, a dataset of 382 reference mean hyperspectral reflectance spectra was computed (104 for HDPE; 26 for LDPE, 34 for PET, 129 for PP, 18 for PVC, 22 for PS, 13 for PUR, 4 for POM, 9 for ABS, 23 for Other).An overview of this dataset can be found in the Supplementary Materials (Spreadsheet S1).
Comparison with the State-of-the-Art
In this section, the reference mean hyperspectral reflectance spectra presented in Section 3.1 are analyzed to identify the absorption feature wavelengths for each type of plastic and are compared to those of the literature.
Figures 8 and 9 present the reflectance spectra of six objects whose plastic type was HDPE, LDPE, PET, PP, PVC, and PS, respectively.Two spectra are presented for each object depending on whether it was dry or wet.In each case, the absorption feature wavelengths identified in the literature are highlighted in blue to be compared to our reflectance spectra.The objects analyzed in this study are listed below: 1.
HDPE plastic type-green net rope: The dry HDPE had five visible absorption features at wavelengths of 1222 nm, 1400 nm, 1425 nm, 1445 nm, and 1550 nm [3,8].The most-important absorption feature is at a wavelength of 1222 nm, which is in close correspondence with the work of Tasseron et al. [3].Similar results also appear in Figure 7 for dry objects.The wet green net rope of the HDPE plastic type had an attenuated spectral reflectance.However, the main absorption feature (1222 nm) remained visible.
2.
LDPE plastic type-blue toothpaste tube: The dry LDPE plastic had two absorption features at wavelengths of 1222 nm and 1400 nm, which is in close correspondence with Tasseron et al. [3].Similar results also appear in Figure 6 for dry objects.The reflectance spectrum of the wet blue toothpaste tube of LDPE plastic was also attenuated.The two main absorption features of polyethene plastics (HDPE and LDPE) found in this study were centered on wavelengths of 1222 nm and 1400 nm, which is very similar to the absorption features described by Tasseron et al. [3].
3.
PET plastic type-transparent tomato packaging: The dry PET spectral reflectance decreased with increasing wavelengths.The wet semi-transparent packaging of PET plastic type had an attenuated reflectance spectrum.The spectral shape of the transparent PET type found by Tasseron et al. [3] was similar to the spectral shape found in this study.No absorption feature can be highlighted for this type of plastic, whose reflectance was further reduced due to the transparency of the object, which reflected a small amount of light.4.
PP plastic type-red rope: The dry PP had four visible absorption features at wavelengths of 1200 nm, 1222 nm, 1405 nm, and 1650 nm.The most-important absorption features were usually at wavelengths of 1222 nm, 1405 nm, and 1650 nm.The absorption features of PPplastics found in this study were centered on wavelengths of 1222 nm and 1405 nm, which is in close correspondence with Tasseron et al. [3] and Moshtaghi et al. [6].The wet red rope plastic of the PP plastic type had an attenuated reflectance spectrum.
5.
PVC plastic type-semi-transparent packaging: The dry PVC had two small absorption features at wavelengths of 1200-1202 nm and 1400-1405 nm.The wet semitransparent packaging of the PVC plastic type had an attenuated reflectance spectra, but it was similar to the dry spectra.The transparency of this object generated lowlevel spectra along the reflectance axis since the light rays were transmitted through the material to be partly absorbed by the water.Although this specific type of plastic packaging tends to float on water, other PVC objects are rarely found floating due to the high density of this polymer relative to water and was, therefore, not considered by Tasseron et al. [3].6.
PS plastic type-pink parfum cap: The dry PS had three important absorption features at wavelengths of 1148 nm, 1212 nm, and 1420 nm.The wet pink parfum cap of the PS plastic type had an attenuated reflectance spectrum.Polystyrene was characterized by two distinct absorption features at 1150 and 1450 nm by Tasseron et al. [3].This study using the new RHIS revealed the presence of absorption features in the reference mean hyperspectral reflectance spectra of different plastic types in the NIR- This study using the new RHIS revealed the presence of absorption features in the reference mean hyperspectral reflectance spectra of different plastic types in the NIR-SWIR range centered on wavelengths of: 1148 nm, 1200 nm, 1212 nm, 1222 nm, 1400 nm, 1405 nm, 1420 nm, 1425 nm, 1445 nm, 1550 nm, and 1650 nm.These results, which were in correspondence with the results obtained by Tasseron et al. [3] and Moshtaghi et al. [6], confirmed that the RHIS was able to characterize each plastic type by a spectral signature.
Plastic Litter Recognition Using Machine Learning
This section aims to show that the new RHIS is able to automatically recognize the plastic type of the observed objects by hyperspectral image analysis.To evaluate the recognition performances, it was necessary to dispose of a ground-truth where the category of each analyzed data is known.From the benchmark database presented in Section 2.3, a dataset of image patches was, thus, built, where each patch was labeled by a class of plastic or other (Section 4.1).This dataset was then used to apply classical supervised machine learning methods in order to classify the images of waste samples (Section 4.2).
All calculations were performed with Matlab ® R2021b and a Windows 10TM computer with an Intel(R) Core(TM) i9-9880H CPU with 2.30 GHz, 32 GB RAM, and an Nvidia ® Quadro RTX 3000 graphics card with 16 GB GDDR5X memory.
Waste Image Patch Dataset
To build the waste image patch dataset, hyperspectral data cubes were extracted from the benchmark image database presented in Section 2.3.Manually labelling all the pixels of these images was a laborious task, which was not easily feasible.Each hyperspectral data cube was, therefore, divided into patches of size 16 × 16 × 143.The small size of certain plastic objects (Figure 10) led us to choose this patch size.Representative patches of each type of waste were then manually selected from different objects present in the images.Each selected patch was finally manually labeled according to its class, namely its type of plastic or other.SWIR range centered on wavelengths of: 1148 nm, 1200 nm, 1212 nm, 1222 nm, 1400 nm, 1405 nm, 1420 nm, 1425 nm, 1445 nm, 1550 nm, and 1650 nm.These results, which were in correspondence with the results obtained by Tasseron et al. [3] and Moshtaghi et al. [6], confirmed that the RHIS was able to characterize each plastic type by a spectral signature.
Plastic Litter Recognition using Machine Learning
This section aims to show that the new RHIS is able to automatically recognize the plastic type of the observed objects by hyperspectral image analysis.To evaluate the recognition performances, it was necessary to dispose of a ground-truth where the category of each analyzed data is known.From the benchmark database presented in Section 2.3, a dataset of image patches was, thus, built, where each patch was labeled by a class of plastic or other (Section 4.1).This dataset was then used to apply classical supervised machine learning methods in order to classify the images of waste samples (Section 4.2).
All calculations were performed with Matlab ® R2021b and a Windows 10TM computer with an Intel(R) Core(TM) i9-9880H CPU with 2.30 GHz, 32 GB RAM, and an Nvidia ® Quadro RTX 3000 graphics card with 16 GB GDDR5X memory.
Waste Image Patch Dataset
To build the waste image patch dataset, hyperspectral data cubes were extracted from the benchmark image database presented in Section 2.3.Manually labelling all the pixels of these images was a laborious task, which was not easily feasible.Each hyperspectral data cube was, therefore, divided into patches of size 16 × 16 × 143.The small size of certain plastic objects (Figure 10) led us to choose this patch size.Representative patches of each type of waste were then manually selected from different objects present in the images.Each selected patch was finally manually labeled according to its class, namely its type of plastic or other.In order to assess the machine learning model performances, two subsets of image patches representative of the ten classes were created: a training subset for the model learning and a testing subset for the model evaluation.The training and testing image patches were extracted from different original images so that they were as independent as possible and represented a realistic situation in seawater with all its challenges.
For the training and the testing subsets, a total of 788 and 312 image patches of size 16 × 16 × 143 were, respectively, selected from the images of the benchmark database.Table 2 shows, in the first column, the different considered classes (nine plastic types and one class "other").The number of patches that were used for training and testing are shown in the second and third column, respectively.This number depended on the number and size of the available objects for each type of plastic.An overview of the representative patches counted per plastic object can be found in the Supplementary Materials (File S1).
Waste Image Patch Classification
In this section, three well-known supervised machine learning methods are applied to classify the image patches of the dataset presented in the previous section.
As can be observed in Table 2, the number of examples for each class varied.Some classes were represented with a small number of samples for classification, and the difference between the number of patches for the plastic types PUR, POM, PS, and ABS, and the Other types was significant.A class imbalance usually makes it harder to identify (and, hence, classify) a minority class.In our case, the plastic types of PUR, POM, PS, and ABS were minority classes.Imbalanced classification is a challenge for predictive modeling because most machine learning algorithms used for classification are designed around the assumption of an equal number of examples for each class.To take these limitations into account, three classical supervised machine learning methods were chosen: 1.
For each method, there are parameters to be optimized to determine the best tuning of the classification model (classifier) by using the training image patches (learning stage) and then to evaluate its performance with the testing image patches (prediction stage).To fine-tune the classifier parameters with the challenge of class imbalance, Hyper-Parameter (HP) optimization methods (Bayesian [34,35] and random research [36]) offer the possibility to automatically select a classification model with an optimized tuning.
For these experiments, the mean hyperspectral reflectance associated with each patch of size (16 × 16 × 143) was calculated as the mean value over all its 256 (16 × 16) pixels, leading to a vector of size 1 × 143 (number of spectral bands).
In this study, high-definition hyperspectral images were used to classify patches according to different plastic types.However, the high dimension of hyperspectral images often causes computational complexity and the curse of dimensionality.In many cases, it is not necessary to process the hyperspectral information of all spectral bands since many spectral bands are highly correlated.Thus, it is required to remove redundant spectral bands in order to decrease the computational complexity and improve classification performance.Among the many dimensionality reduction methods used for this purpose, Principal Component Analysis (PCA) is a well-known preprocessing step in hyperspectral image analysis [37,38].PCA linearly transforms the initial feature space, whose axes correspond to the input spectral bands, and generates a new feature subspace, where the axes are called principal components, in order to remove redundant dimensions.
The main following stages are, therefore, proposed for the plastic classification: 1. Dimensionality reduction by feature extraction [38]: PCA was applied on the training subset, and different dimensions of the resulting feature subspace were considered for the next stage.
2.
Learning stage: The KNN, SVM, and ANN classification models were applied with optimization methods (Bayesian and random research) of hyper-parameter tuning to determine the best validation accuracy.In order to protect against overfitting, a five-fold cross-validation was considered.This scheme partitions the training subset into five disjoint folds.Each fold was used once as a validation fold, and the others formed a set of training folds.For each validation fold, the classification model was trained using the training folds, and the classification accuracy was assessed using the validation fold.The average accuracy was then calculated over all the folds and was used to optimize the tunning of the classification model parameters.These hyper-parameters, which are presented in Table 3, were determined by an automatic hyper-parameter optimization using two methods: Bayesian [34,35] and random research [36] optimization.The final validation accuracy gave a good estimate of the predictive accuracy of the classifier, which was used in the next stage with the full training subset, excluding any data reserved for the testing subset.
3.
Prediction stage: The trained models obtained during the previous stage were then applied to the testing image patches, and the overall test accuracy of the classifier was determined.The testing subset here was independent of the training subset.
Based on these stages, Table 4 presents the top ten classifiers that were tested with different dimensions of the feature subspace obtained by PCA (96, 64, and 48) and with the two HP optimization methods.The first column of this table gives the name of the tested classifier; the second one indicates the dimension of the feature subspace; the third column gives the name of the used HP optimization method.The goal of the optimization algorithm is to find a combination of HP values that minimizes an objective function, here the classification error rate.To find this combination, the iteration number of the used algorithm was fixed to 120.Table 4 also describes the determined optimized hyperparameters in the fourth column.This column is divided into several cells, whose number depends on the classification method.For each tested classifier, the validation accuracy computed with the training subset and the test accuracy computed with the testing subset appear in the fifth and sixth columns, respectively.Accuracy is given as the percentage of patches (training or testing) that were correctly classified.Although the validation accuracy was not the highest for this classifier (89.7 %), it was very close to the test accuracy despite the imbalanced classification problem.This result showed that the validation accuracy provided a good estimate for the model performance on new data compared to the training data.The top validation accuracy was obtained with Model5-PCA64-KNN, which uses the 64 principal components of PCA, but this classifier achieved a test accuracy of 87.2% and, therefore, gave a lower performance.
Figure 11 gives the test confusion matrix, which details the performances per class obtained for Model2-PCA48-KNN on the testing subset.Its rows correspond to the predicted class and its columns to the true class.The diagonal cells correspond to patches that were correctly classified, and the off-diagonal cells correspond to incorrectly classified patches.Both the percentage of patches and the number of patches (in brackets) are shown in each cell.This matrix shows that the testing patches corresponding to the four minority classes (PUR, POM, PS, and ABS) were well classified (100% of test accuracy) despite the imbalanced classification issue.The class of other materials that represents non-plastic objects corresponded also to 100% accuracy.This result proved that the classifier was able to predict whether an object was a plastic or not.The testing patches with the lowest accuracy belonged to the PP and PVC classes.There were 16.7% of PP samples assigned to the PET class and 8.3% to the HDPE class.There were 14.5% of PVC samples assigned to the class other.These misclassification rates could be explained by the presence of wet samples with distorted spectra, by the presence of transparent and black plastics with poor reflectance, and by the diversity of samples in the class Other.The HDPE and LDPE classes can be confused since 13.5% of the LDPE samples were assigned to the HDPE class.These two types of plastic are based on the same polymer.Finally, the accuracy obtained for each of the other class was greater than 90%, which is a very good performance with a classical classifier.These experimental results highlighted that the RHIS provided a very satisfactory wet and dry plastic recognition performance by using classical supervised machine learning methods such as the Model2-PCA48-KNN classifier.With more training data and more sophisticated classification approaches such as deep learning approaches, this performance can obviously be further improved for the detection and identification of plastic litter [22,30].
The RHIS is the first embedded hyperspectral imaging system that observes the aquatic environment in the near-field and automatically quantifies and qualifies polymers of floating plastic litter with accuracy.
Conclusions
This paper addressed the problem of plastic litter pollution in the aquatic environment resulting from human activity.The observation and quantification of this waste by remote sensing at different scales is crucial to determine its exact nature and fight against this pollution.Hyperspectral imaging is emerging as an appropriate technology to characterize, detect, and identify floated plastic waste in terms of shape and type of polymers.In this paper, a new remote hyperspectral imaging system, embedded on an unmanned aquatic drone for plastic detection and identification in coastal environment, was presented.
This new hyperspectral imaging system, named the ROV-ULCO, was designed around a hyperspectral camera that captures reflectance spectra in the NIR to SWIR range to discriminate different types of plastic.It works in the near-field for the observation of floating litter (plastic and non-plastic type).The first obtained results were very encouraging and proved the marine litter automatic recognition capability using a simple supervised machine learning method.Indeed, these results reached an overall accuracy close to 90% with a K-nearest neighbors classifier associated with a principal component analysis These experimental results highlighted that the RHIS provided a very satisfactory wet and dry plastic recognition performance by using classical supervised machine learning methods such as the Model2-PCA48-KNN classifier.With more training data and more sophisticated classification approaches such as deep learning approaches, this performance can obviously be further improved for the detection and identification of plastic litter [22,30].
The RHIS is the first embedded hyperspectral imaging system that observes the aquatic environment in the near-field and automatically quantifies and qualifies polymers of floating plastic litter with accuracy.
Conclusions
This paper addressed the problem of plastic litter pollution in the aquatic environment resulting from human activity.The observation and quantification of this waste by remote sensing at different scales is crucial to determine its exact nature and fight against this pollution.Hyperspectral imaging is emerging as an appropriate technology to characterize, detect, and identify floated plastic waste in terms of shape and type of polymers.In this paper, a new remote hyperspectral imaging system, embedded on an unmanned aquatic drone for plastic detection and identification in coastal environment, was presented.
This new hyperspectral imaging system, named the ROV-ULCO, was designed around a hyperspectral camera that captures reflectance spectra in the NIR to SWIR range to discriminate different types of plastic.It works in the near-field for the observation of floating litter (plastic and non-plastic type).The first obtained results were very encouraging and proved the marine litter automatic recognition capability using a simple supervised machine learning method.Indeed, these results reached an overall accuracy close to 90% with a K-nearest neighbors classifier associated with a principal component analysis for the classification of nine plastic types and their distinction with a tenth class of non-plastic objects.
This study showed that the new hyperspectral imaging system, the ROV-ULCO, is a promising approach to detect and identify plastic waste in aquatic environments.It can be improved by focusing on challenges such as transparent and black plastic waste or wet and submerged plastic waste, which are more difficult to recognize [39].From our perspective, the databases will be enlarged to add these plastic types with more representative samples, and classification approaches based on artificial intelligence will be applied in order to improve the performance of this original system.In addition, our prototype can be equipped with other optical or radar sensors to meet these challenges, but also to make it autonomous so that it can automatically navigate to areas where plastic waste is present.
Figure 1 .
Figure 1.Conceptual framework for remote marine litter detection from[22] with different remote devices (satellite, airborne, drone, etc.) and with the new system (ROV-ULCO) proposed in this paper.
Figure 2 .
Figure 2. ROV-ULCO system: RHIS embedded on a UAD, namely the Jellyfishbot, which can be connected with a net to collect macroplastics or microplastics in different water surfaces.
•
Two inflatable boat floaters; • Two batteries (each inside a removable waterproof case); • An illumination device of halogen lamps; • Protections against solar illumination; • A long-range WiFi antenna to communicate remotely; • A waterproof box that contains the following inside components (Figure 3b): o A line-scan hyperspectral camera (Resonon PIKA-NIR-320) with a 12 mm focal length objective lens; o An optical mirror system; o An industrial Central Processing Unit (CPU) as the on-board computer; o An Arduino unit that controls two temperature sensors and a water velocity sensor via an integrated board.
Figure 2 .
Figure 2. ROV-ULCO system: RHIS embedded on a UAD, namely the Jellyfishbot, which can be connected with a net to collect macroplastics or microplastics in different water surfaces.
Figure 4 .
Figure 4. Experimental setup of the new remote hyperspectral imaging system.(a) Hyperspectral camera setup in controlled laboratory environment with WiFi antenna; (b) black container under the RHIS on the linear translation stage.
Figure 4 .
Figure 4. Experimental setup of the new remote hyperspectral imaging system.(a) Hyperspectral camera setup in controlled laboratory environment with WiFi antenna; (b) black container under the RHIS on the linear translation stage.
26 Figure 5 .
Figure 5. Examples of hyperspectral reflectance computed from five ROIs (Spec. 1 to 5) of the LDPE plastic object "Blue Toothpaste Tube".(a) Extraction of a region of interest from a hyperspectral image represented in false color; (b) non-normalized mean hyperspectral reflectance of different ROIs (in orange, the mean spectral reflectance over all pixels of the selected ROI).
Figure 5 .
Figure 5. Examples of hyperspectral reflectance computed from five ROIs (Spec. 1 to 5) of the LDPE plastic object "Blue Toothpaste Tube".(a) Extraction of a region of interest from a hyperspectral image represented in false color; (b) non-normalized mean hyperspectral reflectance of different ROIs (in orange, the mean spectral reflectance over all pixels of the selected ROI).
Figure 6 .
Figure 6.Examples of the reference mean hyperspectral reflectance of the tube and the cap of the LDPE plastic object "Blue Toothpaste Tube".
Figure 7 .
Figure 7. Examples of the reference mean hyperspectral reflectance of different HDPE plastic objects.
Figure 6 .
Figure 6.Examples of the reference mean hyperspectral reflectance of the tube and the cap of the LDPE plastic object "Blue Toothpaste Tube".
Figure 6 .
Figure 6.Examples of the reference mean hyperspectral reflectance of the tube and the cap of the LDPE plastic object "Blue Toothpaste Tube".
Figure 7 .
Figure 7. Examples of the reference mean hyperspectral reflectance of different HDPE plastic objects.
Figure 7 .
Figure 7. Examples of the reference mean hyperspectral reflectance of different HDPE plastic objects.
1 -Figure 8 .
Figure 8. Mean hyperspectral reflectance of plastic objects (HDPE, LDPE, and PET) depending on whether they were dry or wet and their corresponding absorption features.Figure 8. Mean hyperspectral reflectance of plastic objects (HDPE, LDPE, and PET) depending on whether they were dry or wet and their corresponding absorption features.
Figure 8 .
Figure 8. Mean hyperspectral reflectance of plastic objects (HDPE, LDPE, and PET) depending on whether they were dry or wet and their corresponding absorption features.Figure 8. Mean hyperspectral reflectance of plastic objects (HDPE, LDPE, and PET) depending on whether they were dry or wet and their corresponding absorption features.
4 -Figure 9 .
Figure 9. Mean hyperspectral reflectance of plastic objects (PP, PVC, and PS) depending on whether they are dry or wet and their corresponding absorption features.
Figure 9 .
Figure 9. Mean hyperspectral reflectance of plastic objects (PP, PVC, and PS) depending on whether they are dry or wet and their corresponding absorption features.
Figure 10 .
Figure 10.Examples of RGB and hyperspectral images of plastic objects.(a) RGB image of plastic objects; (b) false color image using three spectral bands of wavelength values of 1575.4 nm, 1257.1 nm, and 1100.0 nm; (c) selection of the image patches.
Figure 10 .
Figure 10.Examples of RGB and hyperspectral images of plastic objects.(a) RGB image of plastic objects; (b) false color image using three spectral bands of wavelength values of 1575.4 nm, 1257.1 nm, and 1100.0 nm; (c) selection of the image patches.
Table 1 .
Examples of waste samples for each plastic type.
Table 2 .
Plastic type and "other" classes with the number of used patches for the training and testing image patch subsets.
Table 4
first shows that the KNN classification model outperformed the SVM and ANN ones in terms of validation accuracy and test accuracy.This table also shows that, for the KNN model, PCA drastically increased the test accuracy.The KNN model with the highest test accuracy (89.1 %) was Model2-PCA48-KNN, which uses the 48 principal components of PCA.The optimized HP were determined with a random search method. | 12,511.8 | 2023-07-08T00:00:00.000 | [
"Environmental Science",
"Engineering",
"Computer Science"
] |
End-to-end NLP Pipelines in Rust
The recent progress in natural language processing research has been supported by the development of a rich open source ecosystem in Python. Libraries allowing NLP practitioners but also non-specialists to leverage state-of-the-art models have been instrumental in the democratization of this technology. The maturity of the open-source NLP ecosystem however varies between languages. This work proposes a new open-source library aimed at bringing state-of-the-art NLP to Rust. Rust is a systems programming language for which the foundations required to build machine learning applications are available but still lacks ready-to-use, end-to-end NLP libraries. The proposed library, rust-bert, implements modern language models and ready-to-use pipelines (for example translation or summarization). This allows further development by the Rust community from both NLP experts and non-specialists. It is hoped that this library will accelerate the development of the NLP ecosystem in Rust. The library is under active development and available at https://github.com/guillaume-be/rust-bert.
Introduction
Natural language processing (NLP) has undergone a rapid transformation over the last few years. Modern architectures based on the Transformers (Vaswani et al., 2017), leveraging efficiently the large amount of data available for unsupervised pretraining, have enabled significant progress for a variety of tasks including sentiment analysis, question answering, summarization or translation. These research efforts have been accompanied by the development of a rich Python ecosystem enabling a democratization of these technologies for both practitioners and users, from tokenization to deep learning architectures. The Transformers library is an example of a library propos-ing APIs at various levels to either promote further development of NLP or their integration in higher level applications.
The adoption of these technologies in other programming languages has unfortunately not been as fast, for example in Rust. Rust (Klabnik and Nichols, 2018) is a promising modern static, strongly typed language that offers execution speeds similar to C. Its built-in memory safety design makes it an attractive alternative to C++ for the development of productive machine learning systems. Rust does not include a garbage collector but instead relies on strict ownership rules for the variables, dropping them when going out of scope. Its modern implementation of the strings data model that complies with UTF-8 standards is especially relevant to NLP applications. Finally, Rust includes a powerful utility called cargo to manage external dependencies. This allows the development of open-source ecosystems, similar to Python's PyPI (Python Packaging Authority, 2000) or Java's Maven (Miller et al., 2010).
Rust is a modern programming language for which the foundations of a machine learning ecosystem are still being built. A number of initiatives including array manipulation (rust-ndarray Team, 2011), low-level CUDA libraries and deep learning framework bindings for Tensorflow (Tensorflow Project, 2016) or Torch (Mazare, 2019) are now maturing. However, there is still a lack of end-to-end, ready to use libraries leveraging stateof-the-art NLP models. The proposed library aims at filling this gap and exposes both Transformersbased architectures to NLP practitioners in Rust and pipelines that are ready for integration in Rust-based back-ends. The proposed library, rust-bert, is available at https://github.com/ guillaume-be/rust-bert or https://crates. io/crates/rust-bert and is shared under Apache 2.0 license.
Related Work
This work leverages the rich open-source resources available in Python. Especially relevant is the Transformers library , of which large sections of the proposed Rust library were ported from. The model architectures and layers naming have been aligned with the Transformers implementation, and Rust-compatible pre-trained weights are available in Hugging Face's Model Hub (Hugging Face, 2019). The general API for the high-level and ready-to-use pipelines has been strongly inspired by the SpaCy library (Honnibal and Montani, 2017).
Architecture Design
The library exposes three main features: • Language models implementation, covering state-of-the-art architectures including for example BERT (Devlin et al., 2019) or GPT2 (Radford et al., 2019).
• Ready-to-use pipelines, combining these models with pre-and post-processing routines.
• Utilities to load external resources, including a converter from PyTorch (Paszke et al., 2019) pickled model files to a C-array format. The language models and the pipelines are separated in different modules. Within the models, a sub-module is defined for each model (for example, BERT) with individual files for the major model components (for example, its attention mechanism). This promotes readability and modularity of the code base ( Figure 1).
An important design aspect of the library is related to the choice of abstractions. Rust does not implement the concept of classes and inheritance in a similar way to Python. Rather, data is arranged in structs that may implement associated methods in an impl block or shared behaviour via traits. As opposed to Python, layers do not inherit from a shared nn.Module because Rust requires a strict definition of the names and types of the inputs and outputs (those may differ significantly from model to model). As a consequence the registration of the model parameters in the variable store is done manually: While the model architectures have been generally ported from the Python Transformers' library, the proposed work is innovative in its handling of shared behavior. Models and configurations share capabilities using Traits. This includes for example the possibility for a model to be used as a conditional text generator by implementing the Lan-guageGenerator trait. A given model implements the trait by providing model-specific methods (e.g. prepare inputs or reorder cache). The complex text generation post-processing steps (beam search, sampling, non-repetition rules...) and the generation routine can then be readily leveraged by this model. Shared behavior is also required for the ready-touse pipelines that implement logic valid for a wide range of language models. Here the mechanism instead relies on Enums wrapping specific models in a shared abstraction. A given pipeline takes a Model Enum, a Tokenizer Enum and a Configuration Enum as inputs. The pipeline calls generic functions that are implemented by the enum (for example a forward pass). Each variant of the enum defines how the forward method is implemented. Note that this allows defining a common interface to models expecting a different set of inputs. This pattern is similar to dependency injection (while the traits are closer to inheritance) and has benefits of a greater flexibility in the interface for model loading and forward methods and reduced coupling between the model and the pipelines.
Capabilities Overview
The library exposes an API at two different levels: the language models themselves, allowing to build NLP pipelines from scratch, and end-to-end pipelines that can readily be integrated in higher level applications.
A rust implementation for a wide range of language models has been implemented, including BERT ( A large user base of NLP technologies also benefits from the availability of state of the art, end-toend pipelines requiring little to no familiarity with NLP to be integrated in higher level applications. To answer these needs of the Rust community, the following capabilities have been implemented: • Translation between 8 language pairs using either Marian (Junczys-Dowmunt et al., 2018) or T5 (Raffel et al., 2019) models.
• Summarization using a BART (Lewis et al., 2020) model trained on the CNN / Daily Mail summarization dataset (See et al., 2017).
• Question Answering using a DistilBERT model trained on the SQuAD dataset (Rajpurkar et al., 2016).
• Sentiment Analysis using a DistilBERT model trained on the SST-2 dataset (Socher et al., 2013) • Named Entity Recognition for English, German, Spanish and Dutch trained on CoNLL03 (Tjong Kim Sang and De Meulder, 2003) and CoNLL02 (Tjong Kim Sang, 2002) datasets These pipelines can be created and used in a few lines of code without prior knowledge in NLP. While the implementation of the language models is a prerequisite, the availability of powerful end-to-end pipelines is key to a broader adoption of NLP technology in Rust. These pipelines can easily be integrated with server back-ends running Rust with queuing and batching of incoming requests (Walsh, 2020).
Benchmarks
This library was developed with the primary goal of making state of the art NLP capabilities available to the Rust community rather than speeding up inference. Nevertheless, Rust is a high performance language with execution speeds matching C or C++. Efficient predictions using NLP systems has become a key subject of research and engineering development over the past few months. Several methods have been investigated to improve the model predictions performance, including for example pruning, quantization and Huffman Coding ( (Han et al., 2016), (Shen et al., 2020)), distillation , graph optimizations and layer fusing (Nvidia, 2020) or optimized runtimes such as ONNX . The high performance of the state of the art models usually comes with a significant computational cost.
It should be noted that the proposed library is based on bindings (Mazare, 2019) to LibTorch (Paszke et al., 2019), and therefore limited benefits can be expected from the tensor operations. These are executed in the CUDA layer that is effectively shared with the Python-based models. The following investigates if these high performance features of the language translate into benefits for the proposed NLP pipelines.
Benchmarks between Python and Rust are shown in Figure 2 using a Turing RTX2070 GPU with a AMD 2700X CPU. For all experiments the average time relative to Python is reported with the standard deviation. For all prediction tasks, the Transformers library (v3.2.0) is used as a reference. All experiments are run for 10 iterations, with various number of samples (provided in brackets). For reference, the Python absolute execution time per iteration is provided.
The loading benchmarks represent the average time required to load models into the GPU buffer. Significant benefits can be observed for Rust. This is probably caused by the simpler serialization format based on C-arrays for Rust, and may be advantageous for event-driven applications loading models on a per-request basis (short warm-up time).
The forward pass results vary between applications. As expected, pipelines with very simple pre-and post-processing steps offer virtually identical performance (for example sentiment analysis).
Significant benefits can be observed for question answering, coming entirely from the tokenization process (At the time this document was prepared, the Transformers' question answering pipeline did not leverage Rust-based tokenizers yet). The performance of pipelines involving complex post-processing steps (text generation with sampling and beam search) can show significant benefits. Marian-based translation models (Tiedemann and Thottingal, 2020) exhibit a 40% speedup (in line with the native C++ implementation (Junczys-Dowmunt et al., 2018)). The T5 implementation is faster for small effective batch sizes (with a beam size of 6) but slower for larger batches, indicating optimization potential remains. In general it was observed that the actual model forward pass (tensor operations) is comparable albeit slightly slower in Rust than in Python. A last experiment (large matrix multiplication) shows the Rust LibTorch bindings seem to be 1 to 2% slower than the PyTorch equivalent.
Conclusion
Rust is a promising language for the development of NLP systems. Its concurrency capabilities, memory safety features and modern strings data model make it a good alternative to C++ for production systems. While evolving quickly, the Rust NLP open-source ecosystem still lags behind Python rich set of libraries. Complementing the availability of high performance tokenizers, rust-bert makes state-of-the-art language models and end-to-end NLP pipelines available to the Rust community.
Acknowledgments
The list of contributors to the rust-bert project is available on the project repository. | 2,601.2 | 2020-11-01T00:00:00.000 | [
"Computer Science"
] |
Systematic design of flat band slow light in photonic crystal waveguides
We present a systematic procedure for designing “flat bands” of photonic crystal waveguides for slow light propagation. The procedure aims to maximize the group index bandwidth product by changing the position of the first two rows of holes of W1 line defect photonic crystal waveguides. A nearly constant group index bandwidth product is achieved for group indices of 30-90 and as an example, we experimentally demonstrate flat band slow light with nearly constant group indices of 32.5, 44 and 49 over 14 nm, 11 nm and 9.5 nm bandwidth around 1550 nm, respectively. ©2008 Optical Society of America OCIS codes: (130.5296) Photonic crystal waveguides; (999.9999) Slow light; (260.2030) Dispersion References and Links 1. R. W. Boyd, D. J. Gauthier, and A. L. Gaeta, “Applications of slow light in telecommunications,” Opt. Photon. News 17, 19-23 (2006). 2. T. F. Krauss, “Slow light in photonic crystal waveguides,” J. Phys. D. 40, 2666-2670 (2007). 3. Y. A. Vlasov, M. O’Boyle, H. F. Hamann, and S. J. McNab, “Active control of slow light on a chip with photonic crystal waveguides,” Nature 438, 65-69 (2005). 4. M. Soljacic and J. D. Joannopoulos, “Enhancement of nonlinear effects using photonic crystals ,” Nat. Mater. 3, 211-219 (2004) . 5. J. T. Li and J. Y. Zhou, “Nonlinear optical frequency conversion with stopped short light pulses,” Opt. Express 14, 2811-2816 (2006). 6. S. Hughes, L. Ramunno, J. F. Young, and J. E. Sipe, “Extrinsic optical scattering loss in photonic crystal waveguides: role of fabrication disorder and photon group velocity,” Phys. Rev. Lett. 94, 033903 (2005). 7. R. J. P. Engelen, Y. Sugimoto, Y. Watanabe, J. P. Korterik, N. Ikeda, V. Hulst, K. Asakawa, and L. Kuipers, “The effect of higher-order dispersion on slow light propagation in photonic crystal waveguides,” Opt. Express 14, 1658-1672 (2006). 8. D. Mori, S. Kubo, H. Sasaki, and T. Baba, “Wideband and low dispersion slow light by chirped photonic crystal coupled waveguide,” Opt. Lett. 15, 5264-5270 (2007). 9. A. Yu. Petrov and M. Eich, “Zero dispersion at small group velocities in photonic crystal waveguides,” Appl. Phys. Lett. 85, 4866-4868 (2004). 10. M. D. Settle, R. J. P. Engelen, M. Salib, A. Michaeli, L. Kuipers, and T. F. Krauss, “Flatband slow light in photonic crystals featuring spatial pulse compression and terahertz bandwidth,” Opt. Express 15, 219-226 (2007). 11. J. M. Brosi, J. Leuthold, and W. Freude, “Microwave-frequency experiments validate optical simulation tools and demonstrate novel dispersion-tailored photonic crystal waveguides,” J. Lightwave Technol. 25, 2502-2510 (2007). 12. L. H. Frandsen, A. V. Lavrinenko, J. Fage-Pedersen, and P. I. Borel, “Photonic crystal waveguides with semi-slow light and tailored dispersion properties,” Opt. Express 14, 9444-9450 (2006). 13. S. Kubo. D. Mori, and T. Baba, “Low-group-velocity and low-dispersion slow light in photonic crystal waveguides,” Opt. Lett. 32, 2981-2983 (2007). 14. M. Notomi, K. Yamada, A. Shinya, J. Takahashi, C. Takahashi, and I. Yokohama, “Extremely large groupvelocity dispersion of line-defect waveguides in photonic crystal slabs,” Phys. Rev. Lett. 87, 253902 (2001). 15. S. G. Johnson and J. D. Joannopoulos, “Block-iterative frequency-domain methods for Maxwell's equations in a planewave basis,” Opt. Express 8, 173-190 (2001). 16. K. L. Lee, J. Bucchignano, J. Gelorme, and R. Viswanathan, “Ultrasonic and dip resist development processes for 50 nm device fabrication,” J. Vac. Sci. Technol. B, 15, 2621-2626 (1997). 17. See http://www.nanophotonics.eu. #92587 $15.00 USD Received 8 Feb 2008; revised 21 Mar 2008; accepted 21 Mar 2008; published 18 Apr 2008 (C) 2008 OSA 28 April 2008 / Vol. 16, No. 9 / OPTICS EXPRESS 6227 18. L. O’Faolain, X. Yuan, D. McIntyre, S. Thoms, H. Chong, R. M. De La Rue, and T. F. Krauss, “Low-loss propagation in photonic crystal waveguides,” Electron. Lett. 42, 1454-1455 (2006). 19. J. P. Hugonin, P. Lalanne, T. P. White, and T. F. Krauss, “Coupling into slow-mode photonic crystal waveguides,” Opt. Lett. 32, 2638-2640 (2007). 20. A. Gomez-lglesias, D. O’Brien, L. O’Faolain, A. Miller, and T. F. Krauss, “Direct measurement of the group index of photonic crystal waveguides via Fourier transform spectral interferometry,” Appl. Phys. Lett. 90, 261107 (2007). 21. L. O'Faolain, T. P. White, D. O'Brien, X. Yuan, M. D. Settle, and T. F. Krauss, “Dependence of extrinsic loss on group velocity in photonic crystal waveguides,” Opt. Express 15, 13129-13138 (2007).
Introduction
Slow light in photonic crystal (PhC) waveguides can be exploited for a broad range of applications, such as optical delay lines or buffers [1] and enhanced light-matter interaction, both in the linear and nonlinear [2][3][4][5] regime.Two of the key concerns are propagation loss and dispersion, as any benefit arising from slow light may be compromised by excessive loss or pulse broadening [2,3,6,7].This paper focuses on reducing the unwanted dispersion by engineering the dispersion curve with the aim of achieving a constant group index over a broad wavelength range, which we refer to as "flat band slow light".
Previously, flat band slow light has been achieved by chirping the waveguide properties [8], by changing the waveguide width [9][10][11], or by changing the hole size of the first two rows of the W1 PhC waveguides [12,13].Some of these methods lead to multimode operation, others are difficult to control.Here, we study the properties of a PhC waveguide as a function of the position of the first two rows of holes adjacent to the line defect.Using this approach, we show that a continuous range of group indices from 30 to more 90 can be obtained that exhibits the desired flat band behavior and the same group index -bandwidth product.By plotting a map of group index -bandwidth product against the design parameters, we obtain a systematic picture of the relevant waveguide properties.To demonstrate the method, we fabricated W1 type waveguides that exhibit nearly constant group indices of 32.5, 44 and 49 over 14 nm, 11 nm and 9.5 nm bandwidth, respectively.Fig. 1.Geometry of the modified W1 PhC waveguides: the first and second rows of holes are displaced symmetrically about the waveguide axis.The displacements relative to the unmodified lattice (red lines) are given by s1 and s2, where shifts toward the waveguide centre are defined to be positive.Here, s1 < 0 and s2 > 0, as used throughout this paper.
Design
Line defect PhC waveguides support modes that can be categorized as either index guided or gap guided [14], or a combination of both.As explained in Refs.[9] and [14], an anticrossing between these two types of modes determines the local shape of the waveguide mode dispersion curves, the slope of which determines the group velocity of the mode.Frandsen et al. [12] showed that changing the hole size of the first two rows of holes adjacent to the line defect waveguide can change the intrinsic interaction of the index guided and gap guided modes.Controlling this interaction can be used to modify the dispersion curve and thus to obtain a flat band slow light region.It is difficult however, to control the hole size of a photonic lattice accurately and reproducibly.Instead, we change here the position of the first two rows of holes in order to modify the dispersion curve -an approach that is technologically preferred to controlling variations in hole size.Figure 1 illustrates the displacement of the inner rows of holes that is used to modify the dispersion.Parameters s1 and s2 describe the deviation of each row from the ideal lattice.
To enable a comparison between waveguides, we define the figure of merit as the group index -bandwidth product, n g (∆ω/ω), which is proportional to the delay-bandwidth product per unit length.This value is then mapped as a function of parameters s1 and s2 as shown in Fig. 2. The group index n g is considered as constant within a ±10% range, which is similar to previous work [10,12].In the calculation, the lattice constant was a = 414 nm, the normalised hole size r/a = 0.286, the thickness of the Si layer h = 220 nm and we considered TE polarisation.In Fig. 2(a), we used a two-dimensional (2D) version of the plane-wave expansion method [15] with an effective index of 2.87.The parameter scan was performed in steps of s1/a = 0.01 (s1 = 4.14 nm) and s2/a = 0.01 (s2 = 4.14 nm).For a more precise estimation, a three-dimensional (3D) calculation was used in the most promising range of s1 and s2 which is shown in Fig. 2(b) with steps of s1/a = 0.0025 (s1 = 1.04 nm) and s2/a = 0.005 (s2 = 2.07 nm).The result of 2D and 3D calculation in good agreement, except that the group index and bandwidth are larger and narrower, respectively, for the 3D calculation.Different regimes of slow light operation can be recognized in the map.where the group index varies between n g = 30 and n g = 90.For comparison, n g (∆ω/ω) ≈ 0.01 for an unmodified W1 waveguide.When the group index is relatively low, the slow light mode is well confined within the first row of holes of the waveguide.Hence moderately slow light (up to n g = 35) can be achieved without changing s2.When the light becomes slower, however, the mode penetrates deeper into the cladding and s2 becomes significant.Therefore, achieving higher n g values requires both s1 and s2 to be varied.In the experimental realization, we changed s1 and s2 in 2 nm steps, which corresponds to changing s1/a or s2/a by approximately 0.005.These tolerances limit the flat band slow light regime with the same n g (∆ω/ω) value to a group index of around n g = 50, i.e. higher group indices (up to 200) would require control of s1, s2 on a smaller scale.
Figure 3 illustrates three operating points taken from Fig. 2(b) to highlight the evolution of the dispersion curves and group indices with increasing group index, i.e. n g =32, 50 and 93.The fundamental mode of an unmodified W1 PhC waveguide is shown for comparison.Note that the optimized group index curves in Fig. 3(b) have local minima and maxima where the group velocity dispersion (GVD) is zero, while the third order dispersion (TOD) passes through zero between these points.Alternative designs with simultaneous zero GVD and TOD can also be chosen from Fig. 2 with a slight bandwidth penalty.There are some advantages of shifting rows of holes compared to changing the hole size [11].First, variations in the hole position are easier to control technologically.Second, higher group indices (up to 200) can be achieved for a given group index -bandwidth product, especially if s1, s2 can be controlled more accurately than the 2 nm precision used here.In contrast, calculations show that it is difficult to achieve group indices higher than 100 by changing the hole size alone.Third, the maximum n g (∆ω/ω) in previous work [8,12,13] tended to decrease when the group index is increased, whereas we demonstrate that it is possible to change the group indices continuously while maintaining an almost constant maximum n g (∆ω/ω) by changing the hole positions according to Fig. 2.
Fabrication and experiment
The devices were fabricated on a SOITEC Silicon on Insulator wafer comprising a 220 nm thick Silicon layer on 2μm of silica.The pattern was exposed in ZEP520A electron beam resist using a hybrid ZEISS GEMINI 1530/RAITH ELPHY electron beam writer at 30keVwith a pixel size of 2 nm and a writing field of 100 μm.The resist was developed using xylene with ultrasonic agitation [16].Pattern transfer was carried out using Reactive Ion Etching with CHF3 and SF6 gases.The silica beneath the photonic crystal was removed using Hydrofluoric acid (the rest of the pattern was protected with photoresist) The fabrication of these devices was carried out in the framework of the ePIXnet Nanostructuring Platform for Photonic Integration [17] and was very similar to that used in [18].A propagation loss of 12 db/cm was measured for benchmark W1 waveguides.To enhance coupling into the slow light regime, an intermediate region consisting of ten periods of photonic crystal waveguide with a lattice constant of 444 nm was added at either end of the device, following the principles discussed in [19].
A typical SEM picture of our PhC waveguide design is shown in Fig. 4. 80 μm long Si membrane W1 type PhC waveguides were made with a lattice constant a = 414 nm and hole diameter d = 236 nm (r/a = 0.286).Figure 2 shows that for any value of s2, we can find an s1 value to maximise n g (∆ω/ω).A range of group indices can be accessed in this way by choosing appropriate s2 values.Hence, to demonstrate our design, we fabricated and characterized three sets of PhC waveguides with s2= 0 nm, 12 nm and 16 nm, and s1 values spanning the optimized region of Fig. 2 Figure 5 shows the experimental transmission spectra and the experimental and calculated group indices for each value of s2 and the corresponding s1 that gave a maximum n g (∆ω/ω).The transmission was measured with a tuneable laser and the group indices were evaluated experimentally via Fourier transform spectral interferometry [20] using the same laser.The theoretical group index curves are calculated numerically via a 3D band structure calculation using the designed parameters [15].The curves are red-shifted approximately 1.5% in wavelength to match the experiment results.
The change in group index with increasing s2 is clearly illustrated in Fig. 5.The group indices are measured to be n g = 32.5 (14 nm bandwidth), n g = 44 (11.0 nm bandwidth) and n g = 49 (9.5 nm bandwidth) resulting in a nearly constant group index -bandwidth product, which compares favourably with previous work [9][10][11][12][13].The corresponding group velocity dispersion (calculated from the 3D simulation data) is one order of magnitude smaller than a W1 PhC waveguide at the same group index.Note also that the transmission spectra in Fig. 5(a) and (b) show no significant drop as the group index increases into the engineered slow light region, while in Fig. 5(c) the transmission decreases by only a factor of two for an almost ten-fold increase in group index.While additional measurements are required to quantify the propagation loss in the slow light region, these observations are consistent with our previous results for W1 waveguides [21] that show a much weaker dependence of losses on group velocity than initially assumed [6].
Conclusion
A systematic design for flat band slow light operation in PhC waveguides was carried out numerically; changing the position of the first two rows adjacent to the line defect waveguide gives access to a range of group indices, typically between n g = 30 and 90, for an almost constant group index -bandwidth product.This approach has the technological advantage of holding the hole size constant across the device-it is generally observed during etching that features such as sidewall angle can vary with hole size (strongly so in our particular case).This effect may be small but is an important factor in minimizing propagation loss in slow light PhCs.Changing hole position may also be implemented with better control than changing the hole size, which is the method previously employed by others.Our method is experimentally demonstrated by three flat band slow light structures.These structures have a nearly constant group index -bandwidth product with group indices of 32.5, 44 and 49 over 14 nm, 11 nm and 9.5 nm bandwidth.Our new design approach shows the powerful possibilities of using PhC waveguides in the slow light regime for practical applications especially in the enhancement of linear and nonlinear effects.
Fig. 2 .
Fig. 2. Systematic maps of (a) 2D and (b) 3D calculations of the group index -bandwidth product as a function of s1 and s2.The color plot and the contours represent n g (∆ω/ω) and n g respectively.The rectangle in Fig. 2(a) indicates the calculation region of Fig. 2(b).The three blue circles and green triangles in Fig. 2(b) indicate the calculation points in Fig. 3 and the experimental points in Fig. 5 respectively.In Fig. 2, the red region indicates high n g (∆ω/ω) values.The figure allows us to trace a flat band slow light region with an almost constant n g (∆ω/ω) value of approximately 0.3 but
Fig. 3 .
Fig. 3. (a) Calculated dispersion curves and (b) group indices, for the fundamental mode of the modified W1 PhC waveguides with s1 and s2 values indicated by the blue circles in Fig. 2(b).The thick solid red line represents the flat band slow light region.The group index -bandwidth product was around 0.3 in all cases.The result of W1 waveguide is also presented for comparison
Fig. 5 .
Figure2shows that for any value of s2, we can find an s1 value to maximise n g (∆ω/ω).A range of group indices can be accessed in this way by choosing appropriate s2 values.Hence, to demonstrate our design, we fabricated and characterized three sets of PhC waveguides with s2= 0 nm, 12 nm and 16 nm, and s1 values spanning the optimized region of Fig.2(b) in 2 nm steps. | 3,920 | 2008-04-28T00:00:00.000 | [
"Engineering",
"Physics"
] |
Tunable Multi Wavelength by Pulse Signal Modulation in Laser Pumping of EDFA
This report is aimed to present the result of experimental setup of the erbium-doped fiber amplifier (EDFA) with modulated pulse signal by laser pumping at wavelength 980 nm. The amplified spontaneous emission (ASE) from EDFA has multi wavelength and the spacing of wavelength can be controlled by controlling the pulse width of laser pumping. The result in this experiment shows that the feasibility of using EDFA system can generate the multi wavelength of all of C-Band spectrum. The pulse signal, for modulated laser pumping, is observed ranging from 10 to 100 Hz and the wavelength spacing can be tuned from 14.7 nm to 14.9 nm.
Introduction
Erbium-doped fiber amplifier (EDFA) is a widely applied component, e.g., for wavelength division multiplexing (WDM) in modern optical communication systems, optical fiber sensing system, optical device testing system and optical instrumentation [1] [2].In addition, the application of EDFA is a main device for generation of multi wavelength laser [3] [4].The EDFA has gained spectrum operation range from 1525 nm to 1565 nm in the range of C-Band [5].The EDFA is used successfully due to a high gain, low insertion loss, high output power and polarization-independent gain.There are many techniques of multi wavelength generation such as Fabry-Perot etalon inside cavity [6], fiber bragg grating [7], Sagnac interferometer [8], ring resonator [9], high nonlinear fiber [10], semi-conductor optical amplifier [11], LiNbO 3 [12].Most of these techniques are of high cost and require many equipments.
In this report, we carried out a novel method for experimental demonstration tunable multi wavelength and frequency spacing by modulate pulse signal into laser pumping.The purpose of this experiment is for a simpler setup for generation a multi wavelength laser by using EDFA.
Experiments and Results
The experimental setup is illustrated in Figure 1.The multi wavelength are formed by the wavelength division multiplexed (WDM) with 980/1550 multiplexing for the coupling light pumping source into the EDFA and for using in the future.The LD driver at the input port, with 980 nm, is connected to a pumping laser and the output port is connected to the optical isolator for improving the noise figure performance.A 20 meters EDFA length is connected between the optical isolators and the optical spectrum analyzer (OSA).The LD driver is driving the current of laser diode pumping.And the modulator (Mod.) is modulated pulse signal by the pulse generator to control the current in the laser diode with the pulse frequency varying from 10 Hz to 100 Hz by increasing a step by step of 5 Hz frequency.The maximum power of the laser pumping is 50 mW while the pulse generator is setup for duty cycle at 50%.
In this proposition, we will discuss and analyze the result of the EDFA with modulated pulse signal in the laser diode pumping.Figure 2 illustrates the results of an experimental setup of the proposing tunable multi wavelength of all of the C-Band Erbium-doped fiber amplifier with modulated laser pumping.
The resultant of the output port is showing the spectrum by OSA as in Figure 2. The output spectrum of the multi wavelength laser with spanning of wavelength 100 nm.The resolution bandwidth of the optical spectrum analyzer is 0.06 nm.The wavelength line spacing with modulated pumping laser in Figures 2(a)-(c) are ~14.70 nm, ~2.94 nm and ~1.49nm respectively.From this experiment we describe the multi wavelength laser source with control spectral range by control frequency for modulation pumping laser.The peak power of multi wavelengths generate output are equal line spectral of amplified spontaneous emission (ASE) spectrum with continuous wave (CW) pumping laser at 50 mW.The result of output spectrum with modulation pulse signal into the pumping laser is the same as by using CW pumping when the pulse signal is increasing too high frequency.
Figure 3 shows the result of the wavelength spacing with modulated pulse signal into the laser pumping at 10 Hz to 100 Hz by increasing step by step of the frequency by 5 Hz.The experimental result, at 100 Hz, have multi wavelength laser with multiple wavelength up to 67 wavelengths at all of C-Band.The spectral width is decreasing in exponential trend because of the nonlinear optical effectiveness and transient response of EDFA.
Conclusion
We have experimentally demonstrated an investigation of the multi wavelength generated in the EDFA by modulation of the laser pumping scheme.The output power of the EDFA is about 3.16 μW and the spectral width is about 1.49 nm to 14.2 nm.A simple analysis of the proposed system has also been reviewed.To optimize the experiment, the prototype parameters such as the optical spectral width, pulse frequency for modulation and optical power, the experimental procedures suffering from the environment are also taken into consideration.We can use this technique in the future for investigating all optical band pass active filter by EDFA.
Figure 1 .Figure 2 .
Figure 1.The experimental setup of an investigation of multi wavelength generating in EDFA by modulate pumping laser scheme.
Figure 3 .
Figure 3.The experimental result showing the wavelength spacing versus pulse signal with modulation in laser pumping, which is significant exponentially decreasing. | 1,188.6 | 2015-03-31T00:00:00.000 | [
"Physics"
] |
Analysis of Extended Hollo-Bolt Connections: Combined Failure in Tension
Abstract This paper investigates the combined failure mode of the Extended Hollo-Bolt (EHB) and the effect of the column thickness on the tensile behaviour of the blind fastener. A three-dimensional Finite Element (FE) model was developed, validated against experimental data and used in a parametric study. The non-linear numerical model, which simulates a single row of two EHB in tension, presents reliable results of the column and bolt failure modes in agreement with experimental data. It is concluded that the failure mode is first controlled by the plastic resistance of the component limited by concrete crushing accompained with hollow section yielding; it is then controlled by the strength of the bolt. An analytical model which predicts the global force-displacement relationships when varying the column thickness is proposed. Therefore, the stiffness and the strength behaviours of this combined mode of failure for the studied blind fastener can be estimated.
Introduction
Blind-bolted systems are a relatively new approach to connect open and hollow steel structural members, which are structurally more efficient compression members than open sections [1,2]. These systems only require access to one side of the hollow section to tighten the bolt [3]. According to Mirza & Uy [4], when blind-bolted systems are combined with concrete filled sections, beneficial behaviour is achieved due the bond and bearing action produced in the interaction and also because the infill concrete reduces the column face flexibility and deformations while the strength and stiffness of the tube walls are increased [5,6,7]. From the available blind-bolts, including Hollo-bolt (Lindapter International, UK), Molabolt (Advanced Bolting Solutions, UK), Huck Bolt (Huck International, USA), Flowdrill (Flowdrill B.V., The Netherlands), and Ajax Oneside (Ajax Engineered Fasteners, Australia), modifications have been made to improve their momentresisting capacity in steel connections [8]. The Extended Hollo-Bolt (EHB) (Fig. 1) is a modification of the commercial Lindapter Hollo-Bolt (HB) developed at The University of Nottingham UK by Tizani & Ridley-Ellis [9]. An additional anchor nut is attached to the end of an extended bolt shank to benefit from the concrete infill, which significantly increases the stiffness of the blind bolt system [10].
The use of this blind-bolted connector in joints constitutes an attractive construction technique due to the EHB potential performance in moment-resisting joints [11,12]. Pitrakkos et al. [13] and Pitrakkos & Tizani [6] identified three potential failure modes for EHB connections which are bolt failure, column face failure and combined failure. Independent experimental and numerical studies have been carried out to investigate the first two failure modes separately.
For the bolt component, Pitrakkos et al. [13] evaluated bond and anchorage mechanisms by means of an experimental programme where different bolt diameters, concrete strengths, bonded lengths, shank lengths, shank grades and embedment depths were considered. The tensile behaviour of the EHB bolt component was evaluated by Tizani & Pitrakkos [14] where the type of fastener, addition of concrete to the tube, strength of the concrete, spacing between bolts, and bolt class were the main test variables. Pitrakkos & Tizani [6] investigated the strength, stiffness and ductility of single EHB bolt component by conducting monotonic tensile pull-out, bolt pre-load and relevant material property testing. The cyclic behaviour of the EHB was evaluated by Tizani et al. [11] by means of quasi-static cyclic loading tests.
For the column face component, Mahmood et al. [15] studied the column face thickness effect on the bending behaviour of a single row EHB connections by using experimental and numerical methods. The bolt gauge distance effect on the bending behaviour of the column face component was evaluated by Mahmood et al. [12] who carried out experimental and numerical studies on EHB connections.
In general, previous research has demonstrated that the tensile stiffness of the EHB exceeds that of the HB, and that the joint when using this blind bolt can develop moment resistance sufficient for it to be classified as rigid, depending on the geometry of the connection and the connecting structural members and its behaviour is adequate due to its energy dissipation capacity and ductility. Besides, analytical models based on the component method were proposed for both components (bolt in tension and column face in bending) achieving good accuracy compared with experimental data. However, the combined failure has not been investigated yet.
This work devises a Finite Element (FE) model to simulate the behaviour of the EHB under tension when a combined failure can occur. It validates the model against experimental data reported from independent research done for the bolt and column face components. Parametric studies are carried out by varying the column face thickness. The analyses are performed for a row of two EHB with bolt diameter 16mm, bolt shank length 150mm, bolt grade 8.8, concrete strength 40 MPa and variable column thickness. An analytical model is formulated using the output from the parametric studies and cross-checked for conformance with the experimental data and the numerical model. This paper will first introduce the experimental programme followed by the numerical model assumptions and validation and finally how the analytical model was arrived at. It concludes with the analytical model validation.
Experimental programme review
The experimental programme includes a review of previous monotonic tensile pull-out, bolt pre-load, and material property tests in order to evaluate the load transfer mechanisms of the EHB components, determine the full force-displacement response and investigate the effect of different parameters on the behaviour of the connection.
Monotonic tensile pull-out tests
Pitrakkos [16] carried out 16 EHB pull-out tests varying the bolt diameter, db (16 & 20mm); the grade of the bolts (8.8 & 10.9); the concrete infill strength (C40 & C60); and the embedded depth, demb (4.0 -6.5db). The setup involves a reusable steel box assembly with a rigid top plate (20mm thick) which simulates a relatively rigid rectangular hollow section, two hollow section frames which act as the reaction forces, a circular loading plate (25mm thick), a concrete infill and the EHB specimen. The monotonic tensile pull-out test setup is shown in Fig. 2.
bolt pre-load tests
20 pre-load measurements were performed. Readings 3 were taken during and after tightening of the bolts allowing for the relaxation effect. The initial pre-load reading was taken once the tightening torque was a) Test rig for bolt pull-out b) Illustration of installed EHB before concrete infill achieved and the residual pre-load reading was taken after 5 days of tightening.
Column Face Component
Mahmood et al. [15] carried out 39 EHB pull-out tests varying the hollow section plate thickness or slenderness ratio µ = b/t (25, 31.75 & 40 µ); the concrete grade (C20, C40 & C90); the bolt gauge g (80, 140 & 180g); the bolt pitch p (120, 200, 280 p); the anchorage length Lan (80, 103 & 112 mm) ; and the concrete type. The setup involves reusable dummy bolts (DB) which have a simplified geometry for the sleeves (Fig. 3) and are manufactured from high strength steel (EN24 steel) which ensures pure face bending behaviour and eliminates the bolt failure mechanism. The test rig provides support for the specimens against the applied load. The test arrangement is illustrated in Fig. 4. An Imetrum's Video Gauge (VG) system and a Digital Image Correlation Q-400 (DIC) system (Dantec) were used to record the column face displacement, the EHB slip, the sample movement and the strain distribution at the column face.
Measured material properties
For both components, a series of pull-out tests were performed in accordance with ISO 898-1:2009 (BSI 2009) on the bolt batches used throughout the experimental programmes. They were performed on machined and full-size bolts where the stress-strain relationships were obtained. The concrete mixes used nominal maximum aggregate size of 10mm. The concrete compressive strength of the specimens was tested using 100mm cubes on the day of the testing and after 28-day of casting. The steel hollow section reaction frame and the 20mm thick top plate are grade S355 and standard steel dog-bone tests were performed on the test pieces to determine the full force-displacement response. The test pieces were designed and tested according to Annex D of BS EN 100021:2001 (BSI 2001).
Numerical model
Three full-scale 3D models were built using the nonlinear FE software package Abaqus (version 6.15) which has high non-linear capabilities to accurately evaluate the behaviour of the component and provide stress magnitudes for the full loading range. The bolt failure is simulated according to Pitrakkos & Tizani [6] and bending behaviour of the column face component according to Mahmood et al. [15] experimental data. The two validated models are assembled to evaluate the behaviour of the combined failure mode and the stiffness of the EHB connection in tension with a rigid plate when different column thicknesses are used.
Geometry
The geometrical model for complex elements was built using AutoCAD 3D 2018 and exported to Abaqus as ".sat" files; graphical tools in Abaqus/CAE were used for simpler geometries. For the column component, the components of the bolt were modelled according to the real bolt instead of the dimensions of the dummy bolt used by Mahmood et al. [15] in order to allow comparisons between the two failure modes and consistency when combining the two models.
The dimensions of the deformed EHB after tightening were input in the model with exact dimensions as reported by Pitrakkos et al. [13] experimental tests. For the steel box, the plate thickness at the corners (tc) is slightly larger than the thickness of hollow section wall (t) (see Fig. 5). Therefore, Ri is taken as t and Re equal to 1.65t in the model. This is so to model the actual dimensions of the manufactured tubes, which tend to have such dimensions due to hot-rolling. Only a quarter of the connection was modelled taking advantage of the symmetry in geometry, loads and constraints along the longitudinal and transverse axes.
Meshing
The discretization of the domain of each element is done using Abaqus cell partitioning tool which divides each element into pieces of simpler geometry which are less complex to be analysed by the software. The accuracy of the results and the processing time depend on the element size and discretization method. In order to optimise the model and obtain accurate results in the areas of interest, fine mesh was assigned to sections close to the EHB while coarse mesh to other regions which require less attention. To model the complex nonlinear behaviour, involving contact and geometrical nonlinearities of the connection, first order interpolation elements (C3D8) with full integration were used to model the hollow section and concrete. The circular geometry of the EHB inner part was meshed using a linear continuum 3D element with 6 nodes (C3D6). A mesh convergence was performed by simulating the same model with different element sizes. The mesh is considered as converged if the reduction in the element size causes a negligible difference in the resultant displacements and stresses. Since the precision of the plastic load results in the model is increased by 0.1% when very fine mesh (less than 10mm) is used for the bolt and the concrete and column elements around it, it is concluded that there is no need to use very fine mesh in the model. The thickness of the column was modelled using one to three mesh elements with no significant difference (less than 1%) in the stress and displacement results. The model with one element for thickness was therefore adopted for computing efficiency.
Contacts
Contact simulation in Abaqus prevents elements merging or penetrating and generates contact forces between them. Interaction constraints demarcate the limits of two regions in contact by normal or tangential load transfer between elements. While the elements are not in contact, no load transfer occurs. Surface-based contacts were defined in the model using the contact pair algorithm, in which the user needs to define the contact properties and link the related surfaces manually by specifying the "master" and the "slave" surfaces. The master surface is chosen to be the stiffest or the surface of the moving element in cases of similar stiffness. Normal and tangential behaviour models were used to define the interaction between two surfaces. In the first one, only the surfaces need to be defined and the software creates the link between them automatically. In the second model, it is required to define a friction modulus as a penalty friction behaviour for sliding. The friction modulus between concrete-steel contact is defined as 0.25, after Elremaily & Azizinamini [18], Hu et al. [19] and Ellobody et al. [20]; and 0.45 after Wang [21] for steel-steel interaction.
Concrete
Concrete behaviour is defined in Abaqus by introducing its elastic and plastic properties. The concrete compression behaviour was simulated assuming a linear elastic behaviour up to 40% of the ultimate concrete compressive strength. This part of the curve is defined by the concrete Young's modulus ( ) and the Poisson's ratio (υ). c was calculated using the Eurocode model and υ was taken equal to 0.2. After the elastic range, a non-linear ascending curve starts until the ultimate concrete strength ( ) is reached, followed by a reduction in the concrete resistance.
The plastic behaviour is more complex to simulate due to the brittle nature of the material and since irreversible strains cannot be captured in elastic damage models. Concrete Damage Plasticity (CDP) can model concrete by assuming two main failure mechanisms, tensile cracking and compressive crushing. The nonlinear stress-strain curve is defined in the software by the plastic stresses and inelastic strain, plasticity parameters and the damage parameters.
The non-linear stress-strain curve was predicted by the model of BS EN1992-1-1. The plasticity parameters are ψ, dilation angle; ε, flow potential eccentricity; σbo⁄σco, ratio of initial equibiaxial compressive yield stress to initial uniaxial compressive yield stress; kc, yield shape parameter; and μo, viscosity parameter.
A sensitivity check was performed to investigate the effect of ψ on the behaviour of the concrete. It was found that ψ has a significant impact on the component behaviour, especially after the plastic load. A dilation angle of 55° was found to be suitable for the model. According to Genikomsou & Polak [22], Kc should take values between 2/3, corresponding to the Rankine formulation, and 1, which corresponds to the Drucker-Prager criterion. Referring to Abaqus [23], the value of should be in the range of 0.5 to 1. Larger values of correspond to a stiffer behaviour as more elastic energy can be dissipated. After a sensitivity check, the value used for the models corresponds to 0.8. The remaining plasticity parameters were taken as the default values specified in the Abaqus manual [23] and these values are 1.16 for σbo⁄σco and 0 for μo. The compression damage parameter (dc) defines the softening branch of the stress-strain curve of the material characterized by the degradation in the elastic stiffness of the concrete. This parameter can take values between 0 and 1 where zero corresponds to undamaged material while one represents the total loss of strength [23]. The compression damage parameters are found using Eq. (1) [23] and it is dependent on the compression plastic strain 1 23 which is found from the laboratory.
An iterative method was used to find the maximum damage parameter. For the studied case, a maximum value of 0.55 was found as a suitable value and used in the FE models.
In tension, concrete behaviour was simulated using a bilinear model according to equations defined in CEN [24]. The results from the model were strongly influenced by the mesh size when large mass of concrete without reinforcement is modelled. In order to avoid this issue, an energy approach, introduced by Hillerborg [25], is used where the concrete brittle behaviour is defined by introducing the amount of energy required to open a unit area of crack calculated using Eq. (2) [22].
Steel
All the elements of the EHB were modelled using the elastic and the default plastic models in Abaqus The stress-strain experimental results obtained by Pitrakkos & Tizani [6] for the bolt shank were used for all the components of the EHB as no experiments were carried out for the remaining components. The EHB steel properties were defined using the default elastic and plastic models in Abaqus which require the definition of the Young's modulus ( F ), Poisson's ratio ( ) and plastic strain and stress values.
Bolt preload
The bolt preload is applied in the model using Abaqus bolt-load which simulates tightening forces or length adjustments in bolts or fasteners [23] following these steps: 6 1. Before the application of the preload, a very small displacement which has a negligible effect on the behaviour of the model was assigned as a boundary condition to the bolt. This guarantees that all the contacts between the elements are defined and activated. 2. The preload is a function of tightening torque and bolt diameter and was assigned using the specified torque for a M16 bolt. The preload is applied in Abaqus as a "bolt load" with magnitude of 11.5 kN, which is the preload measured after the specified torque was applied and relaxation has taken place, as reported by Pitrakkos [16]. It was assigned on two parallel surfaces of a partition located at the middle distance between the bolt head and the threaded cone. 3. The bolt was fixed at its current length to allow the bolt length to remain unchanged so that the force in the bolt can change according to the response of the model. This step is required as Abaqus cannot deactivate the bolt load.
Pull-out load
A displacement control pull-out model of 20 mm was applied at the bolt head to simulate the tensile load. This corresponds to be the maximum global displacement reported by Mahmood et al. [15] and it is bigger than the 7mm reported by Pitrakkos & Tizani [6]. This way, both mechanisms can be covered. The displacement was assigned as a boundary condition where the movements in all directions except the direction of the load were restrained.
Numerical model verification and validation
The reliability of the FE models is validated by comparing the component behaviour during the analysis, stiffness and strain distribution against experimental results. The FE models must display specific phenomena depending on the studied failure mode to be able to represent the experimental behaviour of the connection. For the bolt failure, concrete crushing above the anchor nut, bolt necking and concrete cone formation in tension must be displayed; for the column face failure, the yielding of the steel plate around the bolt holes and crushing of the concrete above the anchor nut must be presented. The validation involves verification of the general behaviour of the column face and bolt components, plastic load, initial stiffness and agreement between the force-displacement curves from experimental data and the FE models.
Single EHB Component
The single EHB component was simulated here as a quarter of the bolt placed at the centre of the specimen taking advantage of symmetry.
General behaviour
The general behaviour of the connection is well described by the model. Some important characteristics are that there were no penetrations between the model elements, the stress distribution is similar to the experimental results and high concertation of stresses was observed at the interacting surfaces between the elements. The failure mode corresponds to the complete bolt necking at the location where the pre-tightening force is applied. Similar to the experimental results reported by Pitrakkos [16], this occurs when the global displacement is 7mm approximately. Stress distribution in the bolt is presented in Fig. 6. The model also displays concrete crushing above the anchor nut as the maximum concrete compressive strength is exceeded. In agreement with experimental data, the failure mode involves a concrete cone breakout of diameter 175mm, which forms at an approximate angle of 45° to the horizontal as shown in Fig. 7. The top plate was assumed to be rigid and only elastic properties were assigned. Therefore, stresses resulted from the pull-out would not exceed its yield stress. This 7 assumption was validated in the FE model by monitoring the stress variation along the thickness of the column plate. Since the variation is small enough to be neglected, it is proved that the component is rigid.
Force vs displacement curves
Comparison between the FE model and experimental results are displayed in Fig. 8. The model can represent the component stiffness, strength and the ductility within a 90% prediction band. The connection performance is controlled by the bolt ultimate tensile strength in agreement with the literature. Table 1 shows the experimental and numerical stiffness for the specimen EHB16-150-8.8D-C40*.
EHB Bolt row Component
In this section the behaviour of single row of two EHBs is investigated to evaluate the effect of group action on the failure mechanisms. It is assumed that the strength of the connection of two bolts is double of that obtained from one bolt. This assumption is true only if the failure mechanism is bolt necking, whereas when overlapping in the concrete cones exists a reduction in the component strength is expected. The whole specimen was modelled here to obtain clear understanding of the overlapping between the two bolts. Load -displacement curve obtained from the FE model is presented in Fig. 9 and compared to twice of the experimental resistance of single bolt.
The connection behaviour is described by the model within a 90% prediction band until around 6mm of displacement. The failure mechanism corresponds to the necking of the bolts and these are the only components displaying plastic strains. Hence, the strength assumption is considered valid and the model verified.
General behaviour
The column face deformation corresponds to the widening and forming the volcano shape around the bolt hole with the increase of the applied load, see Fig. 10. This is captured by the FE model as well as higher deformation in the interior half of the hole. This differential deformation is caused by the constraint imposed by the column wall.
The model captures the cone size at the concrete surface (1.4 times the bolts anchored length Lan) for all concrete grades [26] and the outer perimeter of the crushing area (Fig. 11).
Force vs displacement curves
The column component model is divided into two force intervals delimited by the change in stiffness (see Table 2). Load vs displacement curves from experiments reported by Mahmood [26] and the FE results are presented in Fig. 12. (b) Experimental (b) FE analysis Fig. 11. Cone concrete crushing. applied load with the column face plate. However, after the plastic strength is reached, the anchored contribution drops due to concrete crushing Mahmood [26]. Fig. 12 shows that the FE model captures the initial stiffness and the strength of the component well. However, there is a difference in capturing the upper stiffness. This is attributed to the difference in the geometry of the bolts used in each case. In the model, the exact geometry of the EHB was introduced while in the tests, dummy bolts were used. The general behaviour of the component is considered to be described well by the model.
Stress distribution from bolt pull-out
The global displacement against the applied load is monitored in the combined FE model and plotted in Fig. 13Error! Reference source not found. to evaluate the general behaviour of the component. The pull-out force is transferred from the EHB to the surrounding concrete through the mechanical interlock between the bolt components (sleeves, bolt shank, threaded cone and anchor nut) and the concrete. The anchor nut distributes the stresses from the tensile load over a large region in the concrete infill. Hence, bolt shank, bolt hole, flaring sleeves and the concrete undergo continuous deformation.
The plastic load of the component (Fp) is defined as the peak load (244kN) before the it starts falling. The load -displacement curve (Fig. 13) is divided into three sections. The first region goes from 0 to 0.2Fp where all of the components behave in their elastic range. When the pull-out force reaches 20% of the plastic load, the first signs of concrete crushing around the anchored nut appear as the concrete yield stress is exceeded. The second region corresponds to load values from 0.2Fp up to 0.64Fp, value at which the strain in the column face reaches its plastic value and stresses in the bolt increase greatly, almost reaching 90% of its ultimate tensile capacity at Fp. As the pull-out force is further increased above Fp, in the third region, the sleeves showed high concentration of stresses, exceeding the material ultimate strength which represents cracking of the sleeves and finally the bolt shank started necking when the bolt shank ultimate tensile capacity is reached. The initial and the second stiffness of the component are influenced by the concrete compressive strength and column face behaviour. The concrete and steel reach their yield stress in the first region of the curve and exceed their ultimate strength before Fp. After this point, the stiffness of the component can be assumed to be fully dependent on the bolt and its components' properties.
The model shows a change in stiffness in the first region of the curve as it is presented in the bolt and column face components. The initial stiffness, between 0 and 0.2Fp, is followed by a decrease of stiffness up to Fp. The component initial and second stiffness are tabulated in Table 3.
Parametric Study on Column Plate Thickness
The thickness of the column face is defined in terms of its slenderness ratio µ which is the ratio of the column face width to its thickness (b/t). Three column thicknesses were used to investigate the effect of the slenderness ratio on the behaviour of the connection. The commercial thicknesses of 5, 6.3 and 8mm correspond to µ of 40, 31.75 and 25 respectively. For all models, the bolt gauge distance (80mm), the bolt anchorage length (80mm) and the concrete grade (C40) are used.
Column face slenderness ratio influences the stress distribution resulting from the EHB when subjected to a pull-out force. The stress distribution on the column face is presented in Fig. 14. There is formation of a volcano shape on the column top face with high stress concentration in the concrete crushing outer perimeter. Stresses are distributed in a bigger area in the column top face for µ40 with quick dissipation along the side faces and small influence on the column bottom face. For µ25, more even distribution of stresses is observed in all column faces and smaller affected area on the top face when compared to µ40. The distribution of stresses for slenderness ratio of 31.75 is in between the characteristics described for the thickest and thinnest columns.
The concrete failure around the anchor nut was monitored by identifying the load at which the concrete maximum strength is reached in the model. The relation between the load at which the concrete strength is exceeded (Fcu) normalised by the plastic load (Fp) for each model and the column face slenderness ratio is plotted in Fig. 15. The concrete ultimate strength is reached at 60% of the plastic load for the model with µ equal to 25. The percent increased to 65% for the model of 31.75 slenderness ratio and further increased to 73% in the µ40 model. The results suggest that there is a linear relationship between the load at which the concrete ultimate strength is exceeded and column face thickness, where thinner column thicknesses delay the concrete failure as both materials can deform more freely and there is less concentration of stresses in the concrete. Although concrete confinement does improve the concrete performance, this drop is mainly attributed to the drop in the plastic load, Fp, due to the higher slenderness rather than the concrete performance.
Stiffness of the Extended Hollo-bolt
The effect caused by varying the column thickness is displayed in Fig. 16. There is a similar trend pattern for the global force-displacement curves between all the slenderness ratios and a general increase in both initial and second stiffness when increasing the column thickness. The slenderness ratio has a clear effect on the component global displacement curve, all the FE models display a change in stiffness by an approximate tetralinear curve up to bolt failure. The component initial and second stiffness are reported in Table 4.
As reported by Mahmood et al. [15] for the column face component, the connection strength and stiffness are larger for thicker column sections. However, the amount of improvement in the component stiffness by changing µ from 40 to 31.75 is higher than that when varying it from 31.75 to 25. This can be explained by the reduced contribution of the face bending stiffness compared with that contributed to by the concrete strength as the thickness decreases.
Analytical model
The EHB connection behaviour in tension when combined failure can occur is dependent on many parameters such as bolt diameter, embedment depth, bolt gauge, concrete grade and column slenderness ratio. Spring model theory has been used by different authors to represent the connection behaviour. -----------------
Equivalent Spring Model
The overall behaviour of the component in tension is approximated with the use of an equivalent massless spring model where the most important property is the stiffness of the spring. A helical spring methodology has been used independently to characterise the tensile behaviour of both bolt and column failure mechanisms of the EHB component. These models were found to provide reliable predictions and satisfy the componentbased approach for the design of the EHB connection.
The bolt component behaviour modelled by Pitrakkos [16] using the spring theory states that since the failure mode of the bolt component was due to bolt shank fracture, the ultimate strength was imposed as the ultimate strength of the internal bolt model. The column face component represented by Mahmood [26] states that the column face plastic load is equal to the resistance provided by the hollow section plate and the anchorage action.
The assembly of these spring models requires to define the arrangement of the springs based on observations of the FE pull-out behaviour. The non-linear behaviour of both components was approximated by tetra-linear curves using the results from the proposed FE models in the same way as Pitrakkos [16] and Mahmood [26], see Fig. 17. The pull-out of the bolt is produced near the plastic load of the combined component and therefore both components exhibit similar displacement level. After this point, the force levels are similar up to the bolt failure. Therefore, it is proposed a model with parallel spring configuration up to the plastic load and series arrangement from this point forward.
The following equations describe the resulting properties of the model assembly based on spring theory. Using the spring theory for series configuration, the plastic load of the EHB combined component is calculated using Eq. 6. The results are compared against the FE results in Table 5 for a single row of two 16EHB, fy=406N/mm 2 , fcu=40N/mm 2 , b=200mm, g=80mm and Lan =80mm with variable slenderness ratio. The analytical model presents a good prediction to the component plastic load presenting a maximum error margin of 8%.
Overall Behaviour of the Component
The combined failure component can be represented by a tetra-linear model similar to the one proposed by Mahmood [26] where the clear difference is observed in the first region of the curve. The model is composed by an initial stage between 0 and 20% of the plastic load; a secondary stage from the first stage up to Fp; a drop stage characterized by a decrease in the component resistance after the plastic load to the lowest load before the component strength starts picking up; and a final stage in which the component strength starts increasing again up to the bolt failure, see Fig. 19.
As described before, the slenderness ratio influences the initial stiffness of the connection. Therefore, the initial stiffness is derived from the linear relationship between the FE initial stiffness results and the column plate thickness described by Eq. 9: `= 94.5 + 262.7 The stiffness derived from the FE models for each stage of the proposed tetra-linear curve can be calculated using the following expressions: The values of ks and ku are adopted as a percentage of the component initial stiffness as adopted by Ghobarah et al. [27] and Málaga-Chuquitaype & Elghazouli [28] who expressed the post-yield stiffness as a percentage from the initial stiffness. Besides, the mean ratio between the drop displacement and the displacement at the plastic load is used to calculate the drop displacement. The following equations are proposed: In order to calculate the drop stiffness (kd) using Eq. 11, the value of the drop load (Fd) must be known. Therefore, it is proposed a linear equation to calculate the drop load as a function of the plastic load (Fp). ^= 1.1614 2 − 71.742 (16) The proposed tetra-linear global force-displacement curve for EHB16-150-8.8-C40 when varying the column plate thickness can be assembled by defining five points. The points are P1(0,0), P2(0.20Fp, Δi), P3(Fp, Δp), P4(Fd, Δd) and P5 (Fu, Δu). The displacement Δ is defined using the following equations: The proposed tetra-linear model results versus the numerical data are plotted in Fig. 20. Reasonable agreement between the models is observed within an error band of 15%.
Conclusion
This paper presented the steps taken for the development of a validated finite element model that simulates the EHB component behaviour in tension when combined failure can occur and the effect of varying the column plate thickness on the connection behaviour. The model predicts the global force-displacement curve of the EHB connection when varying the steel column thickness or slenderness ratio with 90% accuracy. An analytical model was proposed. The model provides a good fit for the behaviour of the EHB component when compared with both the numerical analyses and the experimental data. Other findings of this work include: • The first failure sign is caused by concrete crushing followed by hollow section yielding. After this, the component strength is dependent mainly on the bolt properties in tension (bolts necking and rupture). • Components with larger slenderness ratio resists higher load before concrete failure. An optimal combination between concrete strength and column slenderness ratio requires further investigation. Applied load (kN)
Global displacement (mm)
Analytical model Finite Element model 15% Error band | 7,973 | 2020-02-01T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
Temporal and Spatial Properties of Arterial Pulsation Measurement Using Pressure Sensor Array
Conventionally, a pulse taking platform is based on a single sensor, which initiates a feasible method of quantitative pulse diagnosis. The aim of this paper is to implement a pulse taking platform with a tactile array sensor. Three-dimensional wrist pulse signals are constructed, and the length, width, ascending slope, and descending slope are defined following the surface of the wrist pulse. And the pressure waveform of the wrist pulse obtained through proposed pulse-taking platform has the same performance as the single sensor. Finally, the results of a paired samples t-test reveal that the repeatability of the proposal platform is consistent with clinical experience. On the other hand, the results of ANOVA indicate that differences exist among different pulse taking depths, and this result is consistent with clinical experience in traditional Chinese medicine pulse diagnosis (TCMPD). Hence, the proposed pulse taking platform with an array sensor is feasible for quantification in TCMPD.
Introduction
There are four diagnostic methods, namely, inspection, listening and smelling, inquiry, and palpation, in traditional Chinese medicine (TCM) to diagnose causes of disease, locations of disease, the nature of disease, and predictions of a cure [1]. This means that palpation reflects the health condition of patients. If TCM physicians are good at pulse diagnosis, then they will have a good command of detailed changes in diseases. Following clinical evidence, pulse diagnosis builds an unshakable position in TCM. However, it requires long-term experience and a high level of skill for doctors to master pulse diagnosis. Researchers have done many reports to shorten the training duration and to enhance effective diagnosis using pulse diagnosis by means of modern technology [2][3][4][5]. The first step is to acquire the wrist pulse by using a sensor.
From the viewpoint of hemodynamical theories, Wang et al. pointed out the pulse wave should be governed by both a longitudinal and a transverse wave model [6,7]. Hence, the investigation of a pulse wave has to include both temporal and spatial dimensions. TCM physicians use a finger to take physiological information from a wrist radial artery; this technique is called pulse diagnosis or palpation, since the feeling of a pulse wave is acquired from the surface of the finger. Hence, the analysis of a wrist radial artery also has to include both temporal and spatial dimensions. From the above discussion, it can be seen that a pulse taking platform with an array sensor is needed.
Therefore, sensor research has shifted a single sensor to an array sensor or multiple combined sensors to obtain more information about wrist pulse signals [8][9][10][11][12][13][14]. Strain gauge, piezoresistor, and polyvinylidene fluoride (PVDF) are common choices for obtaining wrist pulse signals. The sensitivity, spatial resolution, and sensing area of a sensor are critical issues in the modernization of TCM. The optimal sensing area of a single sensor is about 30 mm 2 [5]. However, the sensing area of the sensing element is too large to obtain a good spatial resolution for a sensor array. To increase the spatial resolution, the sensing element has to decrease the sensing area but at the same time guarantee that the sensitivity is great enough (about 25 mmHg) [15].
Hence, we design the pulse taking platform with a tactile capacitive array sensor to provide quantifiable TCMPD research. The pre-experiments reveal that the wrist radial artery waveform from one sensing element of the tactile capacitive array sensor was the same as the proposed pulse taking platform with a single sensor. Based on this result, we assume that a pulse taking platform with a tactile capacitive array sensor is feasible for quantifiable TCMPD research and will obtain more information than the proposed pulse taking platform with a single sensor. The temporal and spatial properties of the wrist radial artery are illustrated, including strength, rate, length, width, and trends of pulse conditions, as shown in Figure 1. In Figure 1, pulse length represents the sensing length of physician's feeling during the pulse taking procedure. Similarly, pulse width represents the sensing width of this procedure. In addition, the hold-down pressure is provided by the proposed array sensor.
The results of the experiments match that of clinical experiences. Therefore, a pulse taking platform with a tactile capacitive array sensor is feasible for quantifiable TCMPD research.
Sensor Principle.
The main specifications of the sensor in our proposal are a sensitivity of about 25 mmHg (or 0.48 psi) and a sensing element area of about 10 mm 2 . Under these conditions, the technology of the pressure profile system company (PPS) meets our requirements. A brief description about the sensor technology is addressed as below. The capacitance can be calculated if the geometry of the conductors and the dielectric properties of the insulator between the conductors are known. For example, the capacitance of a parallel-plate capacitor is composed of two parallel plates with area A separated by a distance d that is approximately equal to the following: where C is the capacitance, A is the area of the overlapping of the two plates, ε is the dielectric constant, and d is the separation between the plates as shown in Figure 2(a). If the separation distance decreases, the capacitance C goes up as shown in Figure 2(b). When building tactile array sensors, the electrodes can be arranged as orthogonal, overlapping strips. A distinct capacitor is formed at each point where the electrodes overlap, as shown in Figure 3. By selectively scanning a single row and column, the capacitance at that location, as well as the local pressure is measured, as shown in Figure 4. Therefore, the wrist pulse signals can be detected based on capacitive tactile sensors.
The specifications of the tactile array sensor for this proposed pulse taking platform are an array size of 10 mm × 7.5 mm, a sensing element of 2.5 mm × 2.5 mm, a thickness of approximately 0.5 mm, a full-scale range of 300 mmHg, sensitivity of 0.5 mmHg, a scan rate of 100 Hz, a temperature range of −20 to 100 • C. The capacitive tactile sensor is custom designed by PPS, USA.
Pulse Taking Platform.
Conventionally, the pulse taking platform has used a single sensor, but the information obtained could not be compared with the real feeling of the physician's fingertip. Hence, a modified pulse taking platform with the sensor array is implemented to acquire detailed information regarding wrist pulse signals. It can simultaneously detect twelve channel signals at one sensing position, such as Cun, Guan, or Chi. The sensor block is flexible, and its structure is displayed in Figure 5.
In addition, the pulse taking platform, which is employed to determine the best measurement points on the X and Z axes, is adjustable with regard to the X and Z axes. Three screws and sleeves are used in the Z-axis to produce movement to hold down the tactile array sensor block. To obtain an acceptable pulse-taking position, a rotating device is housed in the X-axis for adjustment. The proposed twoaxis pulse taking platform is shown in Figure 6. Additionally, the acquisition of wrist pulse signals uses an analog-to-digital card (D600, PPS, USA). Its sampling frequency is 100 Hz; each sensing element is calibrated by software.
Data Collection.
This study attempts to minimize variations resulting from gender and health conditions. The subjects are all males (average age of 20.64 ± 6.84) who have no diseases, as confirmed by a TCM physician. The procedures for the experiment were approved by the Air Force Academy and National Cheng Kung University. The experimental sample consists of five R.O.C Air Force Academy students and one lab student. The goal of this study is to investigate the feasibility of a pulse taking platform with a sensor array. Based on TCM clinical experiences with pulse diagnosis, the pulse conditions are not expected to change during a 10-minute period of time. For the sampling facility, firstly, this study chooses one volunteer and records his wrist pulse signals twice in 10 minutes and checks whether the pulse signals are same or not. At the time of each sampling, the volunteer was asked to stop exercising and to rest for 5 minutes before the pulse conditions were sampled. During the pulse condition sampling, the subject was asked to sit on an adjustable chair, and he was forbidden to move his arterial wrist. Each sampled pulse was taken by pressing with only one finger, that is, only one robotic finger taking the pulse at Guan. In this study, only Guan data is analyzed. One of the reasons for this is that the strength of the wrist pulse at Guan is generally stronger than other pulse taking positions. First, the physician carries out the pulse taking and marks the Cun, Guan, and Chi positions. Then, the participant places the marked position of the arterial wrist under the sensor block, adjusts the screw about 5 times (from lightly touching skin to the bone), the depth of each time is 0.5 mm, and then samples the wrist pulse at this depth. The number of sampling data totaled 20. Each acquisition time was about 15 seconds. The second experiment is using the proposed pulse taking platform to evaluate the differences of pulse conditions among pulse taking depths, such as Fu, Zhong, and Chen, the wrist pulse signals are similarly sampled as in the above method. The participants place the marked position of the arterial wrist under the sensor block, and the operator adjusts the screw about 12-18 times (from lightly touching skin to the bone), the depth of each time is 0.5 mm, and samples the wrist pulse at each depth. According to these data, the pulse taking platform performance suggested in this proposal is derived for later evaluation.
Signal Analysis.
A wavelet algorithm is adopted to remove baseline wander and high-frequency noise [16], and then, a polynomial surface fitting is adopted to fit these processed signals. The detailed information for the signal processing is displayed in Figure 7. The surface of the wrist pulse is constructed based on fitting the equation, as shown in Figure 8(a). The surface of wrist pulse is projected to an X-Y plane. X indicates the width of wrist pulse, and Y is the length of the wrist pulse. Some core characteristics such as PEAK, FREQ, LENGTH, WIDTH AS, DS, and STATIC are defined as parameters for analysis later. PEAK is the peak of the surface wrist pulse. Once the PEAK is known, the area of interest is also determined through our experimental process. FREQ is the frequency of a wrist pulse defined in the maximum peak to peak for the sampling channel at each pulse taking depth. LENGTH, WIDTH, AS, and DS are defined at the peak of the surface wrist pulse at each pulse taking depth shown in Figure 8(b): LENGTH, the length of the wrist pulse according to the area of interest along the Y -axis; WIDTH, the width of the wrist pulse along the X-axis; AS, the ascending slope (AS) around the peak point of the surface wrist pulse; DS, the descending slope (DS). STATIC is the direct current component of the wrist pulse signals, which represents the hold-down pressure or static pressure, shown in Figure 9. ΔSTATIC, such as S 21 and S 32 , represents the differences in STATIC between pulse taking depths.
Statistical
Method. According to the above defined parameters, this study intends to check the repeatability of the proposed pulse taking platform and the differences of pulse conditions among Fu, Zhong, and Chen pulse-taking depths. To acquire a statistical analysis, we use the SPSS 17.0 program. To guarantee the repeatability of the proposed pulse taking platform in order to satisfy the requirements of pulse diagnosis, a paired samples t-test was carried out.
Additionally, a one-way analysis of variance (ANOVA), STATIC, PEAK, FREQ, LENGTH, WIDTH, AS, and DS of the wrist pulse at each pulse taking depth of Fu, Zhong and Chen are examined. Their mean differences between pulse taking depths are also verified, and Scheffe's test and Tamhane's test are carried out for multiple comparisons.
Results of the Proposed Pulse Taking Platform.
The proposed pulse taking platform detected the wrist pulse signals as shown in Figures 9 and 10. Figure 10(a) shows the original sampled signal; each channel presents dynamic wrist pulse signals on the static pressure or the holddown pressure. Figures 10(b) and 10(c) indicate the original signals ( Figure 10(b)) at one of 12 channels and the signals (Figure 10(c)) processed by the wavelet algorithm to remove the static pressure and baseline drift. It can be seen that the detection of the wrist pulse through the proposed pulse taking platform has the same performance as that of the single sensor platform.
Additionally, the proposed pulse taking platform can measure twelve channels of wrist pulse simultaneously at Guan pulse-taking position. The waveform of each channel is displayed in Figure 11; it indicates 12 channels work well for the detection of radial artery signals. Since twelve-channel signals are acquired at Guan, the surface of the wrist pulse can be analyzed by using a surface fitting equation with core parameters of the pulse conditions, including PEAK, FERQ, LENGTH, WIDTH, AS, and DS as shown in Figure 8.
Results of the Proposed Pulse-Taking Platform in Clinic.
The repeatability of the proposed pulse taking platform is listed in Table 1. The t-test results reveal that the P value is larger than .05, which indicates that the means does not show significant differences between pre-and post-sampling during the 10-minute period.
To testify the differences among Fu, Zhong, and Chen pulse taking depths, an ANOVA test is implemented. After the ANOVA test, among all the parameters except FREQ, we observe significant differences (P < 0.05) in the mean values among the different pulse taking depths. The ANOVA results are tabulated in Table 2, in the form of mean ± SD. From the viewpoint of "means", the differences of the parameters between Zhong and Chen pulse taking depths are smaller than those between the Fu and Zhong depths, as well as for the Fu and Chen pulse-taking depths.
Advantages of the Proposed Pulse Taking Platform.
The wrist pulse waveforms are roughly divided into either a triple-humped wave, which has three peaks, or a doublehumped wave, which has two peaks [17]. The proposed pulse taking platform can obtain the same type of pressure pulse waveform as shown in Figure 11. The strength, frequency, length, width, and trend of pulse conditions are simultaneously obtained from our designed pulse-taking platform. That means that the proposed pulse taking platform with the tactile sensor array satisfies the requirements of pulse diagnosis. Although the artery wrist is not a flat surface, the optimal pulse taking position focuses on the peak of the wrist pulse and the proposed pulse taking platform can be adjusted to the peak's occurrence at the nearby center of the tactile sensor array. In this way, the unique characteristics of wrist pulse signals similar to the single sensor can be obtained, and three-dimensional wrist pulse signals around the peak of wrist pulse can also be obtained. This pulse-taking method is consistent with the clinical method.
The main analytical methods for the single sensor are time domain and frequency domain. In the time domain, researchers have found unique characteristics of pulse waveforms such as a percussion wave, a tidal wave, and a dicrotic wave [16,17]. Based on these characteristics, the pattern of pulse conditions can be classified. In addition, in the frequency domain, the distribution of spectrum and the resonance of wrist pulse signals have recently been investigated [18,19]. Since the proposed platform can obtain the same pulse waveform as a single sensor, these analytical methods can equally apply to our recorded signals using the proposed pulse taking platform.
It is valuable to investigate the length, width, and trend of pulse conditions. Up to now, these characteristics have not been easily detected by a single sensor [5]. To measure the width of the wrist pulses and the pressure pulse waveform, a combined detecting probe has been implemented by, for instance, Tyan et al. who proposed a pressure sensor for recording the pressure pulse waveform and a strain gauge for recording the width of wrist pulses [8]. However, it is easier to find these characteristics through our proposed platform with the sensor array, as shown in Figure 8. The length, width, and trend of wrist pulses obtained through three-dimensional wrist pulses are more compatible with clinical data, and the algorithm is easier than that of the single sensor.
Additionally, a pressure sensor is the best choice for imitating the pulse-taking feeling of a TCM doctor. A pressure sensor can be made by PVDF, PZT, Piezoresistor, and so on; while they can only detect dynamic characteristics of wrist pulses, they have limitations with regard to static characteristics of wrist pulses, especially in static pressure, which represents hold-down pressure [5]. The dynamic characteristics which represent the pulse waveform are compared at different pulse taking depth, such as Fu, Zhong, and Chen, and at different pulse taking positions, such as Cun, Guan, and Chi. The static characteristics represent the pulse waveform at specific pulse taking depth and static pressure at each pulse taking depth. According to the above definitions, the proposed platform can simultaneously detect dynamic characteristics and static characteristics during a single pulse-taking procedure, as shown in Figures 9 and 10, in which we can observe the change of the wrist pulse waveform and the change of hold-down pressure during the pulse taking procedure. Differential static pressure (namely, the change of hold-down pressure), such as S 21 or S 32 in Figure 9, may be useful to recognize the tension of wrist pulses in the future. Conventionally, the optimal pulse taking position is at the maximum of the wrist pulse. Tyan et al. proposed a method to detect the optimal site for recording the pressure pulse waveform [8]. Since its detecting probe contained only single sensor, it made the pulse-taking procedure more complex. The sensing area of the sensor array is bigger than that of a single sensor. Hence, it is easy to find the peak location at the sensing area as shown in Figure 8. This characteristic should be beneficial with regard to developing an automatic pulse taking platform to find the optimal pulse taking position. Table 3 lists the apparatus that have been reported in detecting wrist arterial pulse. Basically, single sensor cannot provide information enough to construct surface fitting for investigating the pulse length and pulse width. Although there are 9 probe-sensing elements in Tang's report, its arrangement, which is cruciform, limits the feasibility of surface fitting. On the other hand, static pressure represents the pulse taking pressure; it is an important index during pulse taking procedure. From the above comparison, the 8 Evidence-Based Complementary and Alternative Medicine proposed pulse taking platform is suitable for imitating the procedure of taking pulses.
To sum up, the proposed pulse-taking platform can provide not only dynamic characteristics as single sensor platform but also static pressure of taking pulse and shape parameters (pulse length and pulse width). Based on these characteristics, the trend parameters can be analyzed. And it is easier to find the optimal pulse taking position by the array sensor platform.
Verify Feasibility of the Proposed Pulse Taking Platform in Clinic.
Two common acceptable principles of TCMPD are used to verify our assumption regarding the feasibility of quantifiable TCMPD researches with a tactile capacitive array sensor. One is that the pulse condition is almost not changed as long as no physical or psychological intervention exists in 10 minutes. The other is that the pulse conditions of the wrist radial artery are different among different pulse taking depths. These different pulses-taking depths in the terminology of TCMPD are called Fu, Zhong, and Chen.
One of the controversial issues of TCM pulse diagnosis is the repeatability of pulse conditions. This means that different doctors taking the pulse of the same subject may obtain different pulse conditions. According to the basic rule of pulse diagnosis, the pulse condition should not be changed in 10 minutes without any intervention. The result of the paired samples' t-test listed in Table 1 is consistent with clinic experiences. It shows that the repeatability of the proposed platform is feasible for quantification of TCMPD.
We evaluate the differences among Fu, Zhong, and Chen pulse taking depths with these defined core parameters. After an ANOVA analysis, the P value of STATIC, PEAK, FREQ, LENGTH, WIDTH, AS, and DS are 0.000, 0.000, 0.132, 0.000, 0.002, 0.000, and 0.000, respectively. More information is listed in Table 2. The results indicate that the frequency of the wrist pulse had no significant differences at the different pulse-taking depths; this means that once the wrist pulse is detected, the frequency of the wrist pulse is decided. This phenomenon corresponds with the clinic finding. On the other hand, other results, such as those for STATIC, PEAK, LENGTH, WIDTH, AS, and DS, have significant differences among different pulse-taking depths. Different combined parameters of pulse conditions at different pulse taking depths represent different health conditions. This quantified result indicates that the dynamic vertical characteristics of pulse conditions are different at different pulse taking depths. Furthermore, Jeon et al. proposed that the dynamic horizontal characteristics of pulse conditions are also different at different pulse taking positions, namely, Cun, Guan, and Chi [20]. This analysis methodology may be meaningful according to the dynamic vertical characteristics and dynamic horizontal characteristics of pulse conditions.
The simple application was also proposed to explain the usefulness of proposed platform. The fingertip's feeling of replete pulse represents a general term for a pulse felt forceful at all the three sections, Cun, Guan, and Chi, also called forceful pulse. That means the response area of replete pulse is higher and larger at tactile array sensor and depicted in Figure 12(a). On the other hand, the response area of vacuous pulse is lower and smaller at tactile array sensor and depicted in Figure 12(b). According to this application, the strength, length, and width of pulse conditions are easily obtained from 3D map, as in Figure 12.
To sum up, the proposed pulse-taking platform is feasible for the quantification of TCMPD according both experiments, including repeated sampling wrist radial artery signals within 10 minutes and verification of difference of pulse conditions among different pulse-taking depths. This provides sufficient evidence to verify the basic theory of pulse diagnosis: the mapping relationship is meaningful between organs and pulse conditions. A more detailed mapping relationship will be checked in the future. Based on our results; therefore, we infer that the proposed pulse-taking platform with an array sensor is feasible in pulse diagnosis.
Conclusions
The aim of this proposal is to provide an innovative method to obtain full information for wrist pulse signals, such as temporal and spatial properties. A pulse taking platform with an array sensor is implemented to carry out this purpose. The length, width, and trend of pulse conditions can be easily detected by our proposed platform, and the results reveal that the performance of the pulse-taking platform with an array sensor is better than that of a single sensor, since the proposed platform obtains not only the unique characteristic waveform but also the surface of the wrist pressure waveform. The paired samples' t-test shows that this proposed platform is feasible to repeat the pulse taking procedure, and the results of the ANOVA test for the pulse taking depths proves the array sensor pulse taking platform is practicable for quantified research of TCMPD. In the future, using this platform to evaluate the basic principle of TCM will open a new quantitative method for TCM. | 5,511.8 | 2011-06-30T00:00:00.000 | [
"Engineering",
"Medicine"
] |
Sheaving—a universal construction for semantic compositionality
Semantic compositionality—the way that meanings of complex entities obtain from meanings of constituent entities and their structural relations—is supposed to explain certain concomitant cognitive capacities, such as systematicity. Yet, cognitive scientists are divided on mechanisms for compositionality: e.g. a language of thought on one side versus a geometry of thought on the other. Category theory is a field of (meta)mathematics invented to bridge formal divides. We focus on sheaving—a construction at the nexus of algebra and geometry/topology, alluding to an integrative view, to sketch out a category theory perspective on the semantics of compositionality. Sheaving is a universal construction for making inferences from local knowledge, where meaning is grounded by the underlying topological space. Three examples illustrate how topology conveys meaning, in terms of the inclusion relations between the open sets that constitute the space, though the topology is not regarded as the only source of semantic information. In this sense, category (sheaf) theory provides a general framework for semantic compositionality. This article is part of the theme issue ‘Towards mechanistic models of meaning composition’.
Introduction
The way that representations and their meanings for complex entities obtain from the representations and meanings for the constituent entities and their structural relations is called semantic compositionality. Some form of compositionality is supposed to explain concomitant cognitive capacities, such as the systematicity of language [1] and thought [2], i.e. where possessing certain cognitive capacities implies possessing certain other (structurally related) cognitive capacities-an equivalence relation on cognitive abilities [3]-such as under-mechanisms-a language of thought [2] on one side versus a geometry of thought [7] on the other-and their explanatory import [6]. The challenge is not just to explain how some form of compositionality accounts for properties such as systematicity, but why cognition is compositional in the first place [8].
Explaining the why versus how of systematicity was posed as a challenging problem for connectionist theories [4], and later shown to be also problematic for classical theory [6]. Problematically, while there are instances of compositionality that support a requisite systematicity property, there are also instances that do not support the same property. So, systematicity does not necessarily follow from core principles and assumptions of classical or connectionist theories. Auxiliary assumptions added to pick out just those instances of compositionality that support systematicity are ad hoc when they are unconnected to the theory's core principles and assumptions, cannot be confirmed independently of confirming the theory, and are motivated only by the need to fit the data, in which case, the theory fails to fully explain systematicity [6]. One recourse is to claim that the supposed counterexamples are not the 'canonical' forms of compositionality that classical theory takes as a core assumption [3]. Yet, its unclear what characterizes canonicity, or why cognition is canonically compositional [9].
A category theory [10] approach to compositionality was introduced to address the why of systematicity [11]. Category theory is a field of (meta)mathematics invented to formally compare mathematical structures [12]. The core explanatory concept is universal construction, formalized as universal morphism, which is a way of comparing cognitive capacities modelled as compositions of maps-such constructions are characterized by a universal mapping property [13]: in regard to a collection of systematically related cognitive capacities, each map modelling a member capacity is composed of the map shared by all members and a map that is unique to that capacity. Hence, a universal morphism identifies an equivalence class of systematically related cognitive capacities. Such constructions are the 'best' one can do within a certain (categorical) context-every construction in that context 'leads to' a universal construction, so necessarily obtains via a recursive process [9].
An explanation for semantic compositionality must ultimately connect to the physical (neural) system that supports cognition. Classical theory assumes that symbols are supported by a neural system that implements the equivalent of memory registers, i.e. the physical symbol system hypothesis [14]. Connectionist theory makes this link more directly as the representations that supposedly support semantic compositionality are instantiated as neural activity for a network of (abstract) neurons. A categorical approach must also make this kind of connection. To this end, the current work focuses on another universal morphism, called sheaving [15] or sheafification [16], to sketch out a category theory perspective on the semantics of compositionality. Sheaving is a construction at the nexus of algebra and geometry/topology, which alludes to an integrative view. This view starts with a ( pre)sheaf to model cognitive representations as data attached to a topological space [17]. As we shall see, the underlying topological space gives meaning to the data in terms of the relations between the open sets that constitute the topology.
The presentation of this work is primarily informal to facilitate an intuitive understanding of the approach. Connections to formal details appear elsewhere [17], and deeper introductions to categories and sheaves appear in many textbooks on these topics [10,16,18,19]. We proceed with an example of a universal morphism that serves to illustrate the basic category theory concepts ( §2) underlying the examples of sheaving given in the context of cognition ( §3). This approach is discussed by comparison and contrast with classical notions of compositionality and possible neural mechanisms ( §4). For convenience and to help ground concepts, some formal details appear in the appendix.
Categories and (universal) compositionality
We use playing cards as a running example of compositionality to bootstrap the needed category theory from the more familiar concepts of sets and functions. Each card has a rank (i.e. two, three, … , ten, jack, queen, king, ace) and a suit (i.e. spade, club, diamond, heart). For example, queen and heart constitute the queen of hearts. The ranks can be represented by the set of symbols Rank ¼ {2, 3,4,5,6,7,8,9,10, J, Q, K, A}, the suits by the set of symbols Suit ¼ {;, ', V,~} and the cards by the Cartesian product of those sets: For instance, the pair of symbols (Q,~) represents the queen of hearts. This product also comes with two functions that retrieve the rank and suit of each card: e.g. rk : (Q,~) 7 ! Q and st : (Q,~) 7 !~. Accordingly, sets and functions provide a basic set-theoretic model of playing cards.
Category theory starts with the formal concept of a category (definition A.1), which consists of a collection of entities, called objects, a collection of relations between objects, called morphisms, and an operation that takes two morphisms and returns a morphism, called composition. The archetypal category is Set (example A.2), the category of sets (objects) and functions (morphisms), with function composition as the composition operation (remark A.3). Hence, sets Rank, Suit and Card are objects and functions rk and st are morphisms in Set, constituting a categorical product (definition A.6), which is the Cartesian product for this category (example A.7). A deck of cards is modelled as a mapping of each face, signifying a playing card, to the corresponding symbol, e.g. a function card : Face ! Card; Q~7 ! (Q,~). The mappings from faces to ranks and from faces to suits are given by compositions faceRank ¼ rk card and faceSuit ¼ st card, respectively: e.g. faceRank : Q~7 ! Q, which says that the rank of the card signified by the face Q~is Q (remark A.8). Thus, we have a category-theoretic model of the same playing cards concept.
Having introduced categories, we can now look at basic constructions and their relations. A functor (definition A.12) is a way of constructing, indexing, or identifying objects and morphisms. For example, the product functor (example A.14) constructs the set of cards from the sets of ranks and suits, i.e. P : (Rank, Suit) 7 ! Rank  Suit, and a constant functor identifies the set of cards (i.e. the functor that sends every set and function, in Set, to the set of cards, Card, and its identity function, 1 Card ). Two functors are related by a natural transformation (definition A.15), and the optimal (or most efficient) transformation pertains to a universal morphism (definition A.17). For example, the transformation from the set of cards to their ranks and suits is the universal morphism (Card, rs), where rs ¼ {rk, st}. The transformation is efficient in that there are no more and no fewer mappings than needed to retrieve the rank and suit of every card.
royalsocietypublishing.org/journal/rstb Phil. Trans. R. Soc. B 375: 20190303 Note that universal morphisms are unique up to unique isomorphism (remark A.19). So, constituents need not be 'tokened' in the classical sense. A characteristic of classical compositionality is that the symbols representing constituents are tokened (inscribed, or written out) whenever the representation of their complex host is tokened [4]. The symbol pair representation of cards is an example of tokening: for instance, the symbols for queen, Q, and heart,~, are tokened whenever the symbol for queen of hearts, (Q,~), is tokened. In category theory, the product of two sets is conventionally given as the Cartesian product, but other products exist. For example, the cards can be represented as numbers, say from 1 to 52, provided the accompanying functions retrieve the requisite components. Being an isomorphic set is not sufficient, because one still needs the appropriate functions to recover the constituents-such isomorphisms are generally not unique (remark A.19).
Sheaving: bridging gaps in knowledge
Our categorical approach to semantic compositionality involves presheaves/sheaves (functors) and sheaving (natural transformation). A presheaf/sheaf (definitions A.20/A.21) models data attached to a topological space (definition A.4). A sheaf is a presheaf where the attached data are globally coherent, i.e. agree on overlapping regions. Pullbacks (definition A.9) express global coherency conditions (remark A.22). For Set, a pullback of f and g (example A.10) is a constrained product (remark A.11), which consists of only those pairs, (a, b), whose components map to a common value ( property): f (a) = g(b). Hence, pullbacks pertain to non-local (global) properties. Sheaving is a universal morphism that constructs the 'nearest' sheaf from a given presheaf (remark A.23). This construction is likened to the natural join operation (example A.24) that extracts information from data stored locally in different tables of a relational database-say, the addresses of all people prescribed a particular medication, where contact and medical data are stored in separate tables. In this way, sheaving is a kind of relational inference: a way of bridging gaps in knowledge via meaning grounded in the underlying topological space.
We give three examples of sheaving that pertain to cognition. The first example continues the introduction to category (sheaf ) theory constructions via the familiar concept of playing cards. The second example involves visual feature binding [17] extended for triple conjunction search [20]. The third example involves a simple version of depth perception. Each example illustrates the different ways that meaning is conveyed by the relations between the open sets that constitute the topology.
(a) Playing cards
The playing cards example, introduced earlier, can be considered as a presheaf or sheaf on a topological space constituted by elements identifying the (feature) dimensions of rank and suit. For example, suppose the rank and suit dimensions are labelled as R and S, respectively. The set of dimension labels D = {R, S} together with the topology {;, {R}, {S}, {R, S}} constitute a discrete topological space, which consists of all subsets of labels and their inclusion relations (example A.5). And, the values of each card constitute the data attached to that space. For example, the queen of hearts and two of spades are represented by the presheaf, F Q2 : D op ! Set. In database terms, this presheaf can be regarded as a collection of tables whose attributes (headings) correspond to the open sets and rows correspond to the attached data, e.g. there is a two-column table whose attributes correspond to the open set {R, S} that has two rows: one row for the queen of hearts and one row for the two of spades (example A.26). In sheaf theory terms, F Q2 sends each open set to the set of functions on that set-each function maps the elements of the open set to the attached data-e.g. F Q2 : {R, S} 7 ! {c QH , c 2S }, where c QH : R 7 ! Q, S 7 !ã nd c 2S : R 7 ! 2, S 7 ! ;. The inclusions given by the topology are preserved as restrictions on functions, e.g. {R} ⊆ {R, S} maps to the restriction fj R : c QH 7 ! c Q , c 2S 7 ! c 2 . Restriction corresponds to (database) projection of a table along the specified attribute(s).
Sheaving affords the systematic capacity to represent all cards (example A.27), but this capacity depends on the topology. To illustrate, suppose one knows the ranks and suits, i.e. there is a one-column table of 13 rows for ranks and a one-column table of four rows for suits. In this situation, sheaving simply constructs all pairwise combinations of ranks and suits, which is the sheaf F þ card . Thus, we have a systematic capacity to represent all 52 cards. One can think of sheaving as a kind of completion, or limit process-adding just enough rows to make a sheaf.
A contrasting scenario is where one knows some of the cards without knowing about constituents rank and suit: cards are understood as non-compositional entities. This situation is captured by the indiscrete topology (example A.5), i.e. {;, D}. Sheaving, in this case, does not add any rows to the table containing just the known (non-compositional) cards. Hence, one does not necessarily have a systematic capacity to represent all cards. Completion is trivial-the presheaf is a sheaf-because the topology does not consist of any other (non-empty) open sets.
This difference between sheaving with respect to a discrete versus indiscrete topological space was used to model the difference between generalization and lack of generalization observed with participants trained on cue-target maps [17]. The participants who failed to generalize were regarded as having learned the mappings from cues to targets-pairs of letters to coloured shapes-as mappings of non-compositional entities.
(b) Visual feature binding
Visual feature binding concerns the capacity to identify, say, a red square and a blue triangle, as opposed to a red triangle and a blue square based on globally coherent spatial information (location). This process is modelled as the sheaving of colour and shape location maps to obtain a colour-shape conjunction map that corresponds to objects observed in the visual field as needed to perform visual search [17]. Here, we show how this example of sheaving extends straightforwardly to triple conjunction search [20], i.e. where the target of search is identifiable by a triple of features, such as colour, orientation and (spatial) frequency.
In terms of universal morphisms, sheaving involves pullbacks (remark A.22). For instance, the colour-orientation map obtains from the pullback of the projections of the colour-location (CL) and orientation (OL) maps onto location: p 2 : CL ! L and p 2 : OL ! L to obtain the colour-orientation map, denoted C × L O, and its projections. Thus, triple conjunction obtains from two pullbacks: The topology in this example conveys a different (relational) meaning from the meanings conveyed by the discrete and indiscrete topologies. Each topology induces a corresponding order over the elements of the underlying space, called the specialization ( pre)order (remark A.28): C ≤ L, O ≤ L, F ≤ L for the current example, which says that colour, orientation and frequency specialize location; conversely, location is a general (global) property of the data (object features) attached to the topological space. By contrast, the discrete topology in the cards example has the corresponding order R ≤ R, S ≤ S, which says that neither dimension is a specialization of the other. In other words, the dimensions are independent; sheaving is effectively a Cartesian product of the sets of values on those dimensions (example A.27).
The preorder corresponding to the indiscete topology in the cards example has R ≤ S and S ≤ R, which says that the dimensions are specializations of each other, i.e. effectively the same dimension (remark A.28). Thus, topology plays a significant role in our approach to semantic compositionality.
(c) Depth perception
Binocular vision can be used to infer (triangulate) location of a target object using lines of sight and relative eye positions. This computation can be achieved as an instance of sheaving, using simple geometry. Suppose the position of the target object is (x, y) ∈ P and the angles of the eyes (lines of sight) to the target are λ and ρ for the left and right eyes, respectively. Left and right lines of sight specify position as functions of distance from the eyes, l ∈ L and r ∈ R, parameterized by angle: left l : l 7 ! l(cos l, sin l), and right r : r 7 ! r(cos r, sin r).
The position of the target is the intersection of the two lines of sight, which is the pullback of left l and right r . This pullback is equivalent to the pullback of projections p 2 : LP ! P and
Discussion
Semantic compositionality concerns the way that representations and the entities they stand in for correspond in some systematic, structurally consistent manner. Our sheaf theory approach regards this correspondence as data attached to a topological space (presheaf/sheaf ), where the shape (topology) of the underlying space conveys meaning to the representations. Shape is determined by the open sets and its structure is preserved by restrictions of the data, either locally ( presheaf ), or in a systematic, globally coherent manner (sheaf ). Systematicity is afforded by a universal construction (sheaving). Sheaving infers non-local information from locally sourced knowledge to construct the nearest sheaf by gluing together data that agree on the overlapping regions (global coherency). Three examples were given: (1) inferring the ranks and suits of every card, given ranks and suits of some cards, (2) inferring the binding of features to objects given the binding of features to locations and (3) inferring object location given binocular line of sight. In each case, local knowledge is extended (composed) to infer non-local information, and this form of compositionality depends on the topology.
Note that there are two senses in which sheaving spans a formal divide. There is a 'vertical' sense in that presheaves are maps that preserve spatial relations (inclusions) as algebraic relations (restrictions). We limited ourselves to the simplest case where attached data were sets. In general, other categories can be used, such as categories of partially ordered sets, or groups. And there is a 'horizontal' sense in that data attached to open sets are glued together to construct data attached to a larger open set. These two senses arise because functors are maps between categories, whereas natural transformations (sheavings) are maps between functors.
This sheaf theory approach can be compared/contrasted with classical approaches to compositionality. Classical compositionality, in comparison, says that representations of complex entities are given by representations of their constituent entities so that the semantic relations between constituents are preserved by syntactic relations between corresponding symbolic representations. Functors preserve structure. So, classical and categorical approaches are similar to the extent that classical structures are category-like. Classical theory assumes symbolic representations are instantiated on some physical system, e.g. memory registers (or, slots), hence classical systems are sometimes called physical symbol systems [14]. Given a set of registers, one can impose the discrete topological space, in which the instantiated symbols are data attached to that space, thus realizing a presheaf. In this way, classical compositionality can be seen as an instance of categorical compositionality. By contrast, however, functoriality is only one part of the categorical approach to compositionality presented here. Presheaves and sheaves are functors, but only presheaves that are sheaves satisfy the global coherency conditions. As noted elsewhere [17], pullbacks are reminiscent of symbolic connectionist models, LISA [21] and DORA [22]. The idea is that (relational) entities are represented via connections to corresponding neurons representing the constituent entities (fillers) and their roles in the relation based on shared semantic information represented by a common pool of neurons. Neurons representing related entities that have shared semantic features tend to bind together. Similarly, the pullback of morphisms f : A → C and g : B → C is a generalized intersection royalsocietypublishing.org/journal/rstb Phil. Trans. R. Soc. B 375: 20190303 of A and B constrained by C. In terms of those models, objects A and B pertain to roles and fillers, C to semantic features, and the pullback object to relational binding. This correspondence is suggestive of a way to connect sheaving to neural network models. Neurons are topologically organized and their activities are the attached data.
The nature of sheaves depends on the nature of the data and the underlying topology. The examples of sheaves presented here are relatively simple. Sheaf theory has applications in other areas that may be adaptable to cognition. For example, a sheaf theory approach to sensor fusion [23] suggests applications to the psychology of perception. Human probability judgments that violate classical probability laws motivate quantum probability theory for cognition [24]. The close connection between sheaf theory and contextuality effects in quantum physics [25] suggests that our sheaving approach to semantic compositionality may also be applicable to quantum-like compositionality effects [17]. In these applications, the data are measurements, or probabilities [23,25].
One important direction for further work is modelling the development of the underlying topological space. Our examples illustrate how different topologies ground relational information differently. However, we have not considered how these topological spaces are obtained. Sheaf theory methods in applied topology [26] may be useful here, where the underlying topological space is inferred from data.
The importance of the underlying topology is another way that the sheaving approach goes beyond classical and artificial neural network approaches to compositionality. In this paper, we focused on the universal morphism aspect of sheaves and sheaving, because universal morphisms were argued to play a crucial role in explaining systematicity [9,11], which is a cognitive property motivating compositionality principles [8]. Yet, the topological aspect of sheaving is also crucial. Any set of registers or neurons can be given a topology. The deeper question is why one topology arises over another. Discrete and indiscrete topologies were asserted for an application of sheaving [17] because they are two extremes obtained from universal morphisms. So, their determination accords with the general universal construction principle [9,11]. Determination of other topologies will depend on other constraints. For instance, the physical (geometrical) relations between sensors ground triangulation of object location. This view of semantics differs from the classical view, which regards the computational ( psychological) level as supported by, but independent of the specific physical (implementational) level-just as a programming language is supported by, but independent of a specific computer.
Topology captures order, and order is implicit even in the productive (recursive) aspects of cognition, e.g. level within a tree hierarchy. We have not dealt with productivity, as it purportedly implies recursion in language [27]. Category theory also provides general constructions for recursion [28], and these methods have been applied to some aspects of cognition [9]. Topology is not regarded as the only source of semantic information. So, in this sense, category (sheaf ) theory provides a general framework for semantic compositionality.
Data accessibility. This article does not contain any additional data. Competing interests. I declare I have no competing interests. Funding. This work was supported by a Japanese Society for the Pro-
Appendix A. Basic theory
Conceptual introductions to the formal concepts provided in this appendix can be found in [23,29,30], see also in [17]. Deeper introductions to the category theory concepts can be found in [13,16,19] and sheaf theory concepts in [16,18]. Specific results are referenced where they appear in the appendix.
Example A.5 (Topological space). A topological space is a category of open sets (objects) and inclusions (morphisms)there is just one morphism
The discrete topology on X is the set of all subsets of X; the indiscrete topology on X is {;, X}.
Definition A.6 (Product). A product of objects A and B, in a category C, is an object P (also written A × B) together with a pair of morphisms π 1 : P → A and π 2 : P → B such that for every object Z and morphisms f : Z → A and g : Z → B there exists a unique morphism u : Z → P such that f = π 1 • u and g = π 2 • u. Morphism u is also denoted 〈f, g〉, as it is uniquely given by f and g. royalsocietypublishing.org/journal/rstb Phil. Trans. R. Soc. B 375: 20190303 Remark A.8. The function u : Z → P need not be a one-toone correspondence (bijection). For instance, the rules of a game may stipulate that certain cards are duplicated or withheld, so a deck may contain more or less than 52 cards, i.e. the map from faces to cards, card : Face ! Card, is onto (surjection) or into (injection).
Definition A.9 (Pullback). A pullback of morphisms f : A → C and g : B → C, in a category C, is an object P (also written A × C B) together with a pair of morphisms π 1 : P → A and π 2 : P → B such that for every object Z and morphisms z 1 : Z → A and z 2 : Z → B there exists a unique morphism u : Z → P such that diagram Remark A.11. A pullback is a (generalized) product constrained by f and g. A product of A and B is equivalently a pullback of f : A → 1 and g : B → 1, where 1 is terminal: an object such that for every object X, in C, there exists a unique morphism from X to 1. In Set, a terminal is any singleton set, thence f (a) = g(b) for all a ∈ A and b ∈ B. Thus, a product is effectively an 'unconstrained' pullback.
Definition A.17 (Universal morphism). A universal morphism from functor F : C → D to object Y in D is a pair (B, ψ) consisting of an object B in C and a morphism ψ : F(B) → Y in D such that for every object X in C and every morphism g : F(X ) → Y in D there exists a unique morphism u : X → B in C such that g = ψ • F(u).
Example A.18 (Products, pullbacks). A product of A and B is a universal morphism (A × B, π) from the diagonal functor, Δ, to the pair of objects (A, B), where π = (π 1 , π 2 ). A pullback of morphisms f : A → C and g : B → C is a universal morphism (A × C B, π) from the (generalized) diagonal functor [10] to the pair of morphisms ( f, g).
where the right diamond indicates the pullback of fj U>V and gj U>V ; or equivalently, by the equalizer of pairs of morphisms Remark A.28. A topological space, (X, T ), induces a specialization preorder on the elements of the underlying set, X. Two elements x, y ∈ X are comparable, x ≤ y, if x is an element of the closure of y, i.e. the intersection of all closed sets containing y-if U is an open set of T, then the complement of U (i.e. the set of elements in X that are not in U) is a closed set. In the cards example, the indiscrete topology has closed sets ; and {R, S}. The closure of R and the closure of S are the same set, {R, S}. Hence, the preorder has R ≤ S and S ≤ R. Open sets specify closeness. Accordingly, the open set {R, S} says that R and S are close to each other, but not preferentially so, since there are no other open sets. The open sets of a discrete topology are also the closed sets. So, in the discrete case, R and S are not comparable, since R is not in the closure of S, i.e. {S}, and S is not in the closure of R, i.e. {R}. Note that an element is always comparable to itself, x ≤ x, because any topology T on X must contain X as an open set of T (by definition). Figure 1. Relational tables for a presheaf (a-e) and its nearest sheaf (a-d,f ). | 6,671 | 2019-12-16T00:00:00.000 | [
"Mathematics",
"Philosophy"
] |
A naturally occurring variant of endothelial lipase associated with elevated HDL exhibits impaired synthesis.
Human endothelial lipase (EL) is a member of a family of lipases and phospholipases that are involved in the metabolism of plasma lipoproteins. EL displays a preference to hydrolyze lipids in HDL. We report here that a naturally occurring low frequency coding variant in the EL gene (LIPG), glycine-26 to serine (G26S), is significantly more common in African-American individuals with elevated HDL cholesterol (HDL-C) levels. To test the hypothesis that this variant results in reduced EL function, we extensively characterized and compared the catalytic and noncatalytic functions of the G26S variant and wild-type (WT) EL. While the catalytic-specific activity of G26S EL is similar to WT EL, its secretion is markedly reduced. Consistent with this observation, we found that carriers of the G26S variant had significantly reduced plasma levels of EL protein. Thus, this N-terminal variant results in reduced secretion of EL protein, plausibly leading to increased HDL-C levels.
cysteine-and serum-free DMEM. Media were replaced with 2 ml serum-free media containing 100 U/ml heparin, and cells were chased for up to 2 h. The media and cells were collected at the end of the pulse and at times 15,30,60,90, and 120 min of chase period. An aliquot (1 ml) of media was mixed with 10 µl of an anti-human EL polyclonal antibody, generated as previously described ( 21 ), overnight at 4°C. The cells were lysed with RIPA buffer (50 mM Tris-HCL, pH 8.0, 1 mM EDTA, 1% Triton X-100, 1% deoxycholic acid, 1 mM dithiothreitol, 150 mM NaCl, 0.015% phenylmethylsulfonyl fl uoride, 0.1% SDS), and cellassociated EL was likewise immunoprecipitated from the cell lysates. The antibody-EL complexes were adsorbed to Protein A and washed six times with PBS (for medium samples) or RIPA buffer (for cell samples). The EL was eluted from Protein A with 200 µl of a lysis/gel-loading buffer (38.5 mM Tris-HCl, 0.1% EDTA, 2% SDS, 6 M urea, 0.1% dithiothreitol, 0.05% reduced glutathione, 0.001% bromophenol blue) at 100°C for 10 min and separated by electrophoresis on 10% polyacrylamide gels containing 0.1% SDS. Gels were exposed to fi lm and bands were excised from the gel and counted for radioactivity. Data from four pulse-chase experiments were normalized based on percentage of cell 35 S-EL after pulse.
Analyses of EL expression
Proteins in conditioned media samples from transfected cells were separated on Nupage TM 10% Bis-Tris gels (Invitrogen), and gels were transferred to nitrocellulose membranes. Nitrocellulose membranes were subjected to chemiluminescent immuno blot analyses for EL (using a 1:5,000 dilution of the anti-human EL polyclonal antibody and a 1:5,000 dilution of horseradish pero xidase-conjugated anti-rabbit IgG). Total RNA from cells transfected with EL was subjected to real-time PCR analyses for human EL and  -actin using commercially available primers (Applied Biosystems). The mass of all EL proteins used in lipase activity assays and lipoprotein binding assays, semiquantifi ed as arbitrary units, was determined using an ELISA in the same assay ( 21,22 ). The mass of EL in preheparin plasma from human subjects was quantifi ed by ELISA as ng/ml using a human EL protein standard (kindly provided by Dr. Karen Badellino, University of Pennsylvania).
Lipase assays
Triglyceride lipase and phospholipase assays using glycerolstabilized substrates of triolein and dipalmitoylphosphatidyl choline (DPPC), respectively, were performed as described previously ( 23 ). LDL and HDL 3 were isolated by potassium bromide density gradient ultracentrifugation ( 24 ). Assays of the kinetics of lipoprotein lipid hydrolysis by EL were performed as described previously ( 25 ). The free fatty acids generated by the hydrolysis of lipoproteins were measured using a commercial kit (Waco Pure Chemical Industries) according to the manufacturer's instructions. All activity data were corrected for protein mass (determined as described above) and were normalized to the percentage of WT EL.
Lipoprotein bridging assays
HEK293 cells in 12-well plates were transfected with EL as described above. At 32 h posttransfection, media were changed to 0.5 ml of serum-free medium containing 0.2% BSA. At 48 h, the serum-free media with BSA were replaced with fresh serum-free media with BSA containing either 5 g/ml [ 125 I]LDL or 5 g/ml [ 125 I]HDL 3 ± 100 U/ml heparin. LDL and HDL 3 were radiolabeled using the iodine monochloride method ( 26 ). Cells were incubated at 4°C for 1 h, and cell-associated lipoproteins were measured. Additional wells were transfected to assess cell surface-bound
Human subjects
Subjects from the University of Pennsylvania High HDL Cholesterol Study (HHDL; n = 854) and the Study of Inherited Risk of Coronary Atherosclerosis (SIRCA; n = 885) were assessed for the presence of either wild-type (WT) EL or the G26S variant of EL by Taqman custom genotyping (Applied Biosystems). The study designs and initial fi ndings of subjects were previously reported from HHDL ( 14 ) and SIRCA ( 16 ). Subjects identifi ed with the G26S variant were compared with age-and sex-matched control subjects from both SIRCA and HHDL. Subjects from the University of Pennsylvania Coronary Artery Calcifi cation Study (PennCAC; n = 2,616) were assessed for the presence of either WT EL or the G26S variant of EL using the Illumina IBC Candidate Gene array, version 2 ( 17 ). The PennCAC cohort is composed of subjects from SIRCA, the Penn Diabetes Heart Study ( 18,19 ), and the Philadelphia Area Metabolic Syndrome Network, which is an ongoing cross-sectional study of individuals with a varying number of the metabolic syndrome criteria. Age, height, mass, and histories of smoking, drinking, cardiovascular disease, type-2 diabetes, and metabolic syndrome were recorded by referring physicians. Total cholesterol, HDL-C, LDL cholesterol, and triglycerides were assessed in clinical laboratories. All studies were approved by the University of Pennsylvania Institutional Review Board and informed consent was obtained from all participants.
Preparation of EL expression plasmids
The cDNA for human EL (NM006033) was inserted into the pcDNA3 expression vector (Invitrogen). Mutagenesis of Gly-26 into Ser was performed using the QuikChange TM mutagenesis kit (Stratagene). The sense oligonucleotide (toward nucleotides 312-345) to generate the G26S variant is 5 ′ -GAGCCCCGTACCT TTTAGTCCAGAGGGACGGCTG -3 ′ ; a complementary antisense oligonucleotide was also used.
Cell culture
HEK293 cells were cultured in DMEM (Invitrogen) containing 10% fetal bovine serum (Sigma) and 1% antibiotic/antimycotic (Invitrogen). Cells were grown to 90% confl uency (in 12-well plates), and 0.5 g of EL expression plasmid was transfected per well using Lipofectamine TM (Invitrogen) according to the manufacturer's instructions. For analysis of EL expression and catalytic activity, media were replaced at 32 h posttransfection with serum-free media without or with 100 U/ml heparin. At 48 h posttransfection, media were collected and centrifuged at 1,200 rpm for 10 min to remove any cell debris. The supernatant was divided into aliquots and stored at Ϫ 80°C. The total extracellular EL released from transfected cells over 16 h in the absence versus presence of heparin was determined as described previously for HL ( 20 ). Cells were lysed to extract total RNA and protein, and samples were stored at Ϫ 80°C. For inhibition of degradation pathways, media were replaced at 48 h posttransfection with serum-free media containing 100 U/ml heparin and either 75 M chloroquine or 100 M N -acetylleucinyl-leucinyl-norleucinal (ALLN; Sigma). After a 6 h incubation with chloroquine or ALLN, cells and media were collected as described above.
Pulse-chase analyses
Cells (in 60 mm dishes) were transiently transfected as described above. At 48 h posttransfection, cells were washed three times with PBS and pulse-labeled with 1 ml of 100 µCi/ml [ 35 S] methionine/cysteine (Perkin-Elmer) for 2 h in methionine/ but signifi cant increase of HDL-C versus noncarriers within the same cohort ( Table 2 ).
Analysis of G26S EL catalytic function
We suspected that the G26S variant of EL may have an impaired function leading to elevated HDL-C levels. In transient transfections of the G26S EL and WT cDNAs, we consistently observed a profoundly reduced level of both cell-associated and secreted G26S EL protein (full-length 68 kDa protein, plus the 40 kDa and 28 kDa cleavage products of full-length EL) versus WT EL, despite identical levels of mRNA ( Fig. 1 ). We confi rmed that an epitope recognized by our antibody was not disrupted with the G26S variant of EL by comparing in vitro translated G26S EL and WT EL through immunoblot analyses (Supplemental Fig. I ). The specifi c hydrolytic activity of recombinant G26S EL toward synthetic substrates triolein and DPPC was comparable to WT EL ( Fig. 2 ). We also tested the kinetics of catalytic activity by the G26S EL variant using HDL 3 as substrate, and we found that both the apparent K M and V max values between WT and G26S EL were similar (app K M : WT, 464 ± 51 M HDL 3 phospholipid vs. G26S, 363 ± 119 M HDL 3 phosoholipid; app V max : WT, 272 ± 18 nmol free fatty acid/EL mass/h vs. G26S, 349 ± 61 nmol free fatty acid/EL mass/h).
EL within experiments. To assess cell surface-bound EL, at 48 h, the serum-free media with BSA was replaced with serum-free media containing only 100 U/ml heparin. Cells were incubated at 4°C for 1 h, and conditioned media were assessed for EL by immunoblot analyses as described above. EL-mediated binding of lipoproteins was calculated as the amount of lipoprotein bound per cell protein above mock-transfected background. Multiple experiments were normalized based on a percentage of WT.
In vitro translation
The expression plasmids for EL and empty vector (which also contain the T7 promoter) were used to express EL by in vitro transcription/translation using a rabbit reticulocyte system (Promega) in the presence of [ 35 S]methionine according to manufacturer's instructions. Reactions were halted at various time points for up to 60 min, and proteins were separated on Nupage TM 10% Bis-Tris gels. Gels were exposed to fi lm, and protein bands were excised from the gel and counted for radioactivity.
Statistical analyses and equations used
Error bars indicate ±SD. A nonparametric version of the t -test (Wilcoxon's Rank-Sum) was used for comparisons of plasma lipid levels among African-American probands in the HHDL cohort. Plasma lipid levels among subjects from the PennCAC cohort were analyzed using multivariable linear regression after adjustment for age, gender, diabetes, body mass index, and alcohol use. Plasma EL levels were compared using a two-tailed t -test for unequal variance. Rate constants for pulse-chase analyses and in vitro translation were calculated using GraphPad Prism software [assuming one-phased kinetics with the formula where Y represents amount of radiolabeled protein at time t , Y 0 represents amount of radiolabeled protein at time zero, Y ∞ represents maximal or minimal amount of radiolabeled protein at infi nite time, and k represents the rate constant in reciprocal units of time]. All biochemical studies were analyzed using a two-tailed paired t -test.
Carriers of G26S EL were exclusively identifi ed in African-Americans
We genotyped 854 unrelated subjects from the HHDL cohort for the G26S variant of EL. Of the 68 African-Ame ricans in the cohort who were genotyped, 8 (11.8%) were identifi ed as carriers for the G26S variant. In contrast, of the 767 Caucasians in the HHDL cohort who were genotyped, none were found to be carriers of the G26S variant. The G26S probands within the HHDL cohort had significantly higher levels of HDL-C versus noncarriers within the same cohort ( Table 1 ). Genotyping of family members from 3 of the G26S probands within the HHDL cohort has revealed 6 additional subjects with the G26S variant.
We also genotyped 2,616 unrelated subjects from the PennCAC cohort for the G26S variant of EL. Of the 521 African-Americans who were genotyped, we identifi ed 55 (10.6%) subjects as carriers for the variant. None of the 2,095 Caucasians were found to be carriers of the G26S variant. Furthermore, we failed to identify any carriers for the G26S variant in Caucasians ( n =851/885) from the SIRCA cohort, thus strengthening the likelihood that the G26S variant is specifi c to African-Americans. The G26S probands within the PennCAC cohort exhibited a small was 22 ± 11% (calculated from densitometry data of immunoblots), which was comparable to the 14 ± 7% release of uncleaved WT EL ( Fig. 3A ). Having ascertained that the cell surface association of G26S EL and WT EL are comparable, we determined the ability of cells expressing each EL to bridge 125 I-labeled LDL and HDL 3 to the cell surface at 4°C. We show that transfected cells expressing WT and G26S EL can equally bind LDL ( Fig. 3B ) and HDL 3 ( Fig. 3C ) to the cell surface. However, the amount of uncleaved fulllength (68 kDa) G26S EL on the cell surface in our bridging assays is 50% lower (calculated from densitometry data of immunoblot) than WT EL ( Fig. 3D ); thus, normalizing the bridging data to EL expression would suggest that G26S EL has a 2-fold greater ability to bind lipoproteins to the cell surface.
Analysis of G26S EL bridging function
To assess the bridging function of G26S EL, we fi rst addressed the cell surface association of the variant. As shown in Fig. 1B , immunoblot analyses of the media from transfected cells in the presence of heparin show that the protein mass of both WT and G26S EL was greater than the WT and G26S EL protein mass of the media from transfected cells in the absence of heparin during a 16 h incubation period. We determined that the release of uncleaved full-length (68 kDa) G26S EL into heparin-free media 0.020 ± 0.017 min Ϫ 1 , errors represent ±SD) and the appearance of EL into media (WT, 0.031 ± 0.015 min Ϫ 1 ; G26S, 0.019 ± 0.010 min Ϫ 1 ) were not signifi cantly different. These data show that there was no difference in traffi cking between G26S EL and WT EL, but it suggests that a defect exists in the translation of G26S EL. Using an in vitro transcription/translation rabbit reticulocyte system in the presence of [ 35 S]methionine, we compared the rates of translation between G26S EL and WT EL. Under these conditions, we failed to observe any difference in the rate of protein production between G26S EL (with a rate constant of 0.035 ± 0.005 min Ϫ 1 ) and WT EL (with a rate constant of 0.040 ± 0.008 min Ϫ 1 ) ( Fig. 6 ).
If our in vitro observation that less G26S EL protein is made and secreted from cells is physiologically relevant, carriers of the variant should have reduced levels of EL protein in vivo. Using an ELISA developed in our laboratory, we measured preheparin plasma EL mass by ELISA from eight G26S probands from the HHDL cohort plus six family members with G26S EL and compared them to both Caucasian and African-American noncarrier controls. Carriers of the G26S EL variant had a signifi cant 40% reduction of EL mass compared with controls ( Table 3 ).
G26S EL expression in vitro and in vivo
We next focused our attention on the markedly reduced G26S EL protein mass in transfected cells by addressing the possibility that G26S EL may be subjected to intracellular degradation. The lysosomal degradation inhibitor chloroquine failed to raise the cell-associated ( Fig. 4A ) or media ( Fig. 4B ) G26S EL mass to levels comparable to WT EL. We also assessed whether G26S EL was degraded via the ubiquitin-proteosomal pathway by incubating cells in the presence of ALLN. Like the lysosomal inhibition, ubiquitin-proteosomal inhibition failed to raise both the cell-associated G26S EL ( Fig. 4C ) and media G26S EL ( Fig. 4D ) to levels comparable to WT EL. We confi rmed the effectiveness of our chloroquine and ALLN treatments by assessing the lysosomal degradation of LDL apolipoprotein B and the accumulation of polyubiquitinated proteins, respectively (Supplemental Fig. II).
To address whether newly synthesized G26S EL was being degraded through an alternate mechanism, we assessed the traffi cking of newly synthesized EL using pulse-chase analyses. From quadruplicate experiments with cells transiently transfected with G26S or WT EL, following a 2 h pulse with [ 35 S]methionine/cysteine, we consistently observed a ف 20% reduction of total (cell and media) immunoprecipitated newly synthesized 35 S-G26S EL versus newly synthesized 35 S-WT EL at all time points throughout a 2 h chase (Supplemental Fig. III). Despite the reduced mass of 35 S-G26S EL versus 35 S-WT EL throughout the chase, the rate of disappearance from cells ( Fig. 5A ), the rate of appearance into media ( Fig. 5B ), and the stability throughout the chase ( Fig. 5C ) of both 35 S-G26S EL and 35 S-WT EL were comparable. The rate constants for the disappear- members of the protein disulfi de isomerase family and they exhibit a cysteine protease activity that is unaffected by lysosomal or proteosomal inhibition toward the ER proteins protein disulfi de-isomerase and calreticulin (27)(28)(29). ER-60 has been shown to interact with the secretory protein lysozyme when only in a misfolded form ( 30 ). In addition to proteosomal and lysosomal degradation, apolipoprotein B100 is directly degraded by ER-60 ( 31 ). It is possible that during translocation into the ER, a significant proportion of G26S EL is misfolded, perhaps due to a poor interaction with chaperones that may normally interact with EL; thus, the G26S EL peptide may be degraded by ER-60 and/or ERp72 prior to complete translation and translocation.
Some naturally occurring coding variants of HL and LPL have been shown to have impaired protein secretion. Unlike the G26S variant of EL, which has a reduction of newly synthesized protein but normal secretion of the protein that is synthesized, cell culture studies of the serine-267 to phenylalanine and threonine-383 to methionine variants of human HL showed that these variants had impaired activity and secretion, but intracellular HL protein was comparable to WT ( 32 ). The glycine-142 to glutamate variant of LPL also has impaired secretion, but newly synthesized protein is rapidly degraded due to targeting to lysosomes ( 33 ), which we ruled out for the G26S variant of EL. It is clear that these lipase variants undergo different fates that lead to impaired secretion, and it is likely due to changes in protein structure.
The G26S variant of EL has an allele frequency of about 5% in persons of African descent, but it is rare in persons of European descent. It is well established that persons of African descent have signifi cantly higher HDL-C levels than those of European descent ( 34 ). It was previously suggested that a variant in HL that is more common in Africans might contribute to the higher HDL-C levels ( 35 ). Our studies suggest that this G26S variant of EL might also help to explain the higher HDL-C levels in persons of African descent.
In summary, very little is known about structural variation in EL and how this might affect EL function and the clinically important phenotype of HDL-C. Our studies here indicate that the G26S variant found in persons of African descent is associated with elevated HDL-C, and the cellular expression of the variant results in markedly reduced protein production, which is associated with reduced plasma levels of EL in vivo. Our results emphasize that genetic variation of EL is a contributor to variation in HDL-C.
Of note, no difference was observed in noncarriers between Caucasian and African-Americans.
DISCUSSION
This study demonstrates that a G26S substitution in the N-terminal region of the EL protein results in markedly reduced synthesis and secretion of EL, leading to reduced levels of EL in plasma and plausibly explaining the association of this variant with elevated HDL-C levels. Our data point toward a very different mechanism for raising plasma HDL-C versus our recent study demonstrating that the asparagine-396 to serine variant of EL has normal protein secretion but impaired enzymatic activity ( 14 ). The G26S variant of EL also appears to display an enhanced bridging of lipoproteins, but the enhanced association does not translate into improved catalytic activity, as the hydrolysis of lipoprotein lipids and lipid emulsions is comparable to WT EL.
In attempting to defi ne the mechanism behind why we observe reduced G26S EL protein both in vitro and in vivo, we tested and ruled out intracellular degradation via a lysosomal or ubiquitin-proteosomal pathway, as well as any other unknown degradation mechanism of newly synthesized EL protein through pulse-chase analyses. We were unable to determine an impairment in translation of the G26S transcript through in vitro translation. We are currently limited by our lack of knowledge about what intracellular interactions EL may have; however, our data suggest that this N-terminal substitution of a serine for a glycine results in a cotranslational/translocational disruption of EL protein production. Two candidate proteases that may be responsible for reducing the amount of newly synthesized G26S EL are the endoplasmic reticulum (ER) chaperones ER-60 and ERp72. Both ER-60 and ERp72 are Fig. 6. In vitro translation of WT and G26S EL. WT and the G26S variant of EL were expressed in a rabbit reticulocyte in vitro transcription/translation system through the T7 promoter. Completely and incompletelysynthesized EL proteins from three separate experiments were separated by SDS-PAGE and counted for radioactivity. Null , empty vector. Error bars indicate ±SD. Subjects were matched for age and sex. G26S carriers include eight probands and six family members. Data represent the mean ±SD. *, P < 0.02 versus Caucasian noncarriers and African-American noncarriers. **, P = 0.001 versus Caucasian noncarriers, and P = 0.04 versus African-American noncarriers. | 5,034.4 | 2009-09-01T00:00:00.000 | [
"Biology"
] |
Real-time Pedestrian Detection Algorithm Based on Improved YOLOv3
As a research hotspot in the field of current computer vision, pedestrian detection is widely applied to many fields, such as video surveillance and autonomous driving. However, the accuracy of pedestrian detection under video surveillance is poor, and the miss rate of small target pedestrians is high. In this paper, an improves the YOLOv3 algorithm and a YOLOv3-Multi pedestrian detection model had been proposed. First, referring to the residual structure of DarkNet, the shallow features and deep features had been up-sampled and connected to obtain a multi-scale detection layer. Then, according to different special detection categories, the spatial pyramid pool (SPP) is introduced to strengthen the detection of small targets. The experimental results show that our method improves the average accuracy by 2.54%, 6.43% and 8.99%compared with YOLOv3, SSD and YOLOv2 on the VOC dataset.
Introduction
In the research of target detection, pedestrian detection has become a difficult and hot topic, because the image of pedestrian is affected by background occlusion, attitude and different shooting angle. For the problem of pedestrian detection, the current research methods of pedestrian detection are mainly divided into the traditional machine learning method and the deep learning methods. In terms of classic machine learning method. Gong, et al. proposed an algorithm based on mixed Gaussian background modeling combined with directional gradient histogram and SVM for classification [1]. Through the following three steps, such as, foreground segmentation, feature reduction and information updating, the finally false detection rate was reduced to 4%, at the same time, it showed good real-time performance and accuracy in complex scenes. Although researchers have made a lot of improvements in detection accuracy for target detection, there is still much room for improvement in detection speed and environmental impact by pedestrians with different postures.
With the development of artificial intelligence, the mainstream pedestrian detection method is based on deep learning algorithm. It is mainly divided into two categories: one is the two-stage target detection algorithm based on classification, that is represented by R-CNN, Faster R-CNN [2], Hypernet [3] and Mask R-CNN [4], and the other is the one-stage target detection algorithm that using regression algorithm by YOLO, SSD [5], G-CNN [6] and RON [7]. In recent years, breakthroughs have been made in the application of target detection algorithms based on deep learning in pedestrian detection. Han, et al. a video proposed a model based double stream network, which improves the detection accuracy and reduces the false detection rate in a small seale [8]. However, when dealing with complex background and occlusion for objects detection, the accuracy of detecting small objects still needs to be improved. This paper focuses on the problem of the low accuracy of small object detection [9] in traffic camera and the high rate of missing detecting the small target pedestrians in real-time pedestrian detection. To meet the demand for higher requirement of real-time and detection speed, in here, an improve YOLOv3 algorithm which using DarkNet residual structure idea, combined the characteristics of shallow and deep on the characteristics of sampling to get multi-scale detection layer [10]. In this way, a fusion layer of different sizes of target location information and semantic information can be extracted. Finally, the prediction accuracy of the different scales of target can be improved by increasing the multi-scale fusion layer [11]. At the same time, the spatial pyramid pooling (SPP) module is used to achieve feature fusion at different scales, and improve the detection accuracy.
YOLOv3 Algorithm
The model of YOLOv3 is composed of two parts: the backbone network DarkNet-53 and the other is detection network. The backbone network DarkNet-53 draws on the residual idea of Resnet to improved the problem of gradient disappearance under training of convolution neural network and makes the model more easily convergent. There are 5 residuals in the residual structure. The residuals block consists of multiple residuals, mainly including the convolution layer, the batch normalization layer (BN), and the activation function (Leaky ReLU). Among them, the convolution layer is mainly employed for feature extraction, and the extracted features have been normalized, and the activation function is a nonlinear processing, which can effectively fit the nonlinear model. In the forward propagation of image convolution, the size transformation of the tensor is realized by changing the step size of the convolution kernel. There are 53 convolution layers and 23 jump connections. The detection network is composed of three YOLO layers, up-sampling, several Concat layers and convolutional layers, and the whole network level is summed to 107 layers. YOLOv3 outputs three feature graphs with the size of 13x13, 26x26 and 52x52, corresponding to the features of deep layer, middle layer and shallow layer respectively. Deep feature maps have small size and large receptive field, which is conducive to the detection of large scale objects. On the contrary, shallow feature maps are more convenient for the detection of small scale objects.
Spatial Pyramid Pooling
The improved YOLOv3-Multi algorithm uses SPPnet network structure like reference [12]. The idea is that SPP can generate output of fixed size at arbitrary input size, and can pool the features extracted from various scales. In addition, SPP uses multi-layer space box, while sliding window pooling only uses one window size. And multi-level pooling has strong robustness to the deformation of objects [13]. The SPP module is referenced in the original YOLOv3 network structure. The specific structure of the SPP is shown in figure 1. In the input convolution layer, there are four branches. One branch is direct output, and the other three branches are sampling under the maximum pooling of 5x5, 9x9, 13x13. Finally, the feature layer of the same scale can be obtained. The feature fusion of different scales can be realized to obtain richer feature expressions and improve detection speed and accuracy.
Improvement of Multi -Scale Prediction Layer
In view of the problem that small targets are easily missed due to occlusion and long distance in the process of pedestrian detection, the YOLOv3-Multi algorithm have been proposed that integrates shallow features and middle-level features with the help of multi-scale prediction idea of YOLOv3 algorithm. On the basis of the original YOLOv3 network structure, the deep feature map is enlarged to the same size as the shallow feature map through the up-sampling operation. Then the new scale target detection layer is constructed through the connection operation. On the basis of the original network, a 104x104 scale detection layer be added. Compared with other scale detection layers, the image is divided into finer units, which can detect smaller objects and hence improve the detection effect of small targets. The detail of YOLOv3-Multi network structure used in this paper has shown in figure 2. On the basis of adding a 104x104 scale detection layer, the candidate box sizes obtained by clustering analysis of the real box in the data set are applied to the 13x13, 26x26, 52x52, 104x104 scale detection layer for target detection. Because a large feature image has a small receptive field and strong ability to detect small targets, the candidate frame with small size is suitable for a large feature image. Because a small feature image has large receptive field and is relatively sensitive to large targets, the candidate frame with large size is suitable for a small image.
Experimental Environment and Training Parameter Setting
In the experimental platform built in this paper, the server CPU is Intel Xeon Gold 6240R, and the GPU are Nvidia Tasla M40*2. The operating system is Ubuntu20.04, and the memory of the graphics card is 12GB*2. We set the number of iterations is 40000. The batch size is 64, and the learning rate is 0.001.
Evaluation Index and Result Analysis
In here, pedestrian detection is carried out based on intersection cameras to judge whether the target is pedestrian or not. In order to evaluate the improved model more accurately and in real time, Average Percision (AP) is selected as the evaluation index [14]. AP is the result from the sum of all the precision rates of this class in the verification set divided by the number of images containing the target of this class. Its evaluation indexes comprehensively consider P(Precision) and R(Recall) to solve the single point value limitation of P and R. Then the definition criteria of AP evaluation indexes are shown in equation: Where C Precision is the sum of all precision rates of the class in the verification set while C images is the number of images containing the target of the cases. The YOLOv3-Multi network proposed in this paper was compared with YOLOv3, SSD and YOLOv2 networks to analyze the variation trend of the average loss function of the improved model. And the P-R curve and AP valuc will be compared. The variation trend of the average loss function is shown in figure 3.
As can be seen from figure 3, the loss function value is relatively large at the beginning. But as the number of training iterations increases, the loss value decreases rapidly and gradually converges. When the training reaches 40,000 steps, the loss value is always stable at about 0.1, and the convergence degree of the model achieves the ideal effect and the model training is stable. In order to evaluate the model more accurately, the YOLOv3-Mulit network proposed in this paper is compared with YOLOv3, SSD and YOLOv2 networks to calculate the recall rate and accuracy of various algorithms respectively. P-R curve is drawn in figure 4. The experiment demonstrates that the improved algorithm improves both recall rate and accuracy. The plane area formed by P-R curve and coordinate axis is the calculation of AP. The larger the area is, the larger the AP value is. Table 1 below summarizes the AP value of the proposed YOLOv3-Multi network, YOLOv3, SSD and YOLOv2 network on VOC dataset. As can be seen from table 1, the YOLOv3-Multi algorithm proposed that used our experiments could get much improvement in pedestrian detection. It had been compared with YOLOv3, SSD and YOLOv2, the AP value increases by 2.54%, 6.43% and 8.99%, respectively.
In this paper, a trained YOLOv3-Multi model and YOLOv3 model are used to collect images at road intersections for testing and comparison. As shown in figure 5, the YOLOv3-Multi model can detect pedestrians at intersections well and mark the position of detected objects, which is more accurate in the detection of small targets. Compared with the YOLOv3 network, it greatly reduces the rate of missed detection.
Conclusion
The YOLOv3-Multi pedestrian detection network model proposed in this paper is based on making changes in YOLOv3 network model. By add SPP module layer and increase the multi-scale prediction, this model can make accurate pedestrian detection for complex road conditions, and improve the efficiency of the pedestrian detection. The result show that YOLOv3-Multi network model in the VOC 2007 test set improves the average accuracy by 2.54%, 6.43% and 8.99% compared with YOLOv3, SSD and YOLOv2 network model. The test achieves good results. However, since the pedestrian detection accuracy of YOLOv3-Multi algorithm in this paper is insufficient, its detection performance still needs to be optimized. Therefore, the next step of research will focus on optimizing the loss function and optimizing the setting of Anchor class to improve the detection accuracy. | 2,616.6 | 2021-08-01T00:00:00.000 | [
"Computer Science"
] |
A method of perspective normalization for video images based on map data
ABSTRACT Objective perspective distortion is a problem that needs to be solved by video surveillance analysis. Compared with the street scene method, which depends on prior knowledge of the scene or 3D scene of the dedicated hardware recovery scene, the commonly used perspective distortion correction method is based on the linear relationship to monitor a video image in perspective normalization. However, the distortion caused by perspective imaging is nonlinear, and the linear perspective normalization model cannot guarantee the accuracy of the correction in the scene where the perspective phenomenon is evident. An image normalization method based on map data is proposed to solve this problem. A nonlinear perspective correction model is introduced by establishing a single relation between video image space and map space. With selected control points between image and map, we can calculate homography matrix in order to build the perspective correction model, which is computed to know the single pixel real size in map. The proposed perspective correction model is applied to the moving target detection. The results of the linear correction model and the proposed nonlinear correction model prove the validity and practicability of the method.
Introduction
In recent years, video surveillance, which is an important application technology in the field of public security, attracted the attention of scholars in computer vision and video GIS (Milosavljević, Dimitrijević, and Rančić 2010). The main contents of video surveillance include human detection and track of population density estimation and other analytical methods (Ianăşi et al. 2005). These studies aim to extract and analyse various types of feature information from video images (Dalal, Triggs, and Schmid 2006). However, the objective deformation perspective has a serious impact on the detection accuracy in all kinds of video monitoring analysis. In the image, when the same object is near the camera, the visual expression of Angle distortion is that it occupies a large pixel area, and away from the camera position, which accounted for a smaller area of the pixels. Evidently, this position has brought the interference to the characteristic information extraction and analysis based on the video image; thus, it affected the precision of the video monitoring analysis.
Through perspective normalization, the interference of perspective distortion is eliminated and the accuracy of monitoring video analysis is improved. Perspective normalization means that video images can eliminate the same object caused by distance measurement difference in perspective imaging through various types of transformation. Existing research can be divided into three categories. The image normalization method by scene linear relation, the target normalization method, and the 3D reconstruction method.
Some researchers use the linear relation to normalize the image. For example, Chan et al. calculated the linear change weight map of near and far pixels, and extracted the image features of the dense population based on the obtained weight map to improve the accuracy of population density detection (Chan, Liang, and Vasconcelos 2008). Panlong et al. corrected the optical flow field in the image according to the normalized compensation method of near and far distance scaling, and applied it to the pedestrian detection in infrared image scene with small range and low angle of view (Panlong and Yuming 2008). Qinglong et al. used the image vertical coordinate and the pedestrian size linear fitting method for the video image normalization after high-density population estimation. This method is simple, feasible, and has been widely used (Qinglong, Hongsha, and Ning 2014). However, the experiment shows the same object in the monitor video image in different locations of the scale changes, and the change in its distance to the camera is not linear (Figure 1). Therefore, this method reduces the non-linear problem in the linear problem processing; a large error exists in the large-scale monitoring with the evident perspective phenomenon.
Unlike the method that global normalizes the video surveillance image, several researchers attempted to normalize the local image feature vector of the target when recognizing and classifying the monitoring target. D. Hoiem performed segmentation of video surveillance images according to the ground, buildings, and the spatial relationship between the sky to obtain the level of viewpoint position as a basis for local window normalization and to improve the accuracy of pedestrian detection (Hoiem, Efros, and Hebert 2008). Similar studies include that of B. Leibe, Z. Lin, and others (Ess et al. 2010;Lin, Lin, and Weng et al. 2011). Compared with global normalization, the local normalization method can effectively improve the accuracy of image feature detection and analysis in the video; it is useful for population density estimation, pedestrian detection, and tracking applications. However, these methods must rely on a priori knowledge and the inherent clues of the scene to restore the three-dimensional structure of the scene, which has no structural information does not apply to more blocks and irregular structure ( Figure 2).
Given certain conditions, some researchers use the depth camera or camera parameters in 3D reconstruction scene according to pedestrian depth information or 3D posture of the image-normalized operation. Nevatia et al. obtained the internal and external parameters of the camera directly through the PTZ camera interface, thereby recovering the 3D information of pedestrians in the image and eliminating the perspective distortion (Yuan, Bo, and Nevatia 2008). Wang et al. used a depthbased camera to obtain the depth information to achieve the perspective of the normalization of the scene. However, this method must obtain relevant and real-time parameters of the camera through the relevant interface of the video monitoring device; thus, it is difficult to be widely used in practical application.
Obviously, compared with the practical application, the linear transformation based on the normalization method has more restrictions, simple operation and strong practical value. The normalization method based on linear transformation can eliminate the perspective deformation of the monitored video image in a small range of the scene, but it will produce a large error in the evident scene of the fluoroscopic phenomenon. As a result, normalization is difficult to achieve.
One possible solution is to fully utilize the 2D map data that corresponds to the monitoring scene and normalize the video image accurately. Map is a 2D representation of the geographical space and the orthographic expression of geographical scenes; the moving objects on the map have the same projection area characteristics which can be represented by 2D area both in map and image.
If the map projection area is pixel-by-pixel, the corresponding actual area must be obtained. Only in this way can the non-linear relation between the precise normalization be reflected.
This paper presents a perspective normalization method based on map data. First, we obtain the homography matrix from the same point of the video image space and the 2D map space. Then, we establish the mapping relationship between the video image space and the map space and obtain the accurate non-linear perspective normalized weight graph. Finally, we use the perspective normalized weight map and the linear perspective normalized weight graph of the existing literature, which are used in the post-processing of the moving target detection, to verify the validity of the method. Then, the effectiveness of the two methods is evaluated.
The following sections of the paper are organized as follows: Section 2 describes the basic idea of this method. In Section 3, the concrete steps of this method are introduced in detail. In Section 4, we introduce the verification scheme of this method and verify the environment, data, and the results. Finally, the methods of this paper are discussed in Section 5 and the conclusion is summarized in Section 6.
The basic idea
The basic idea of this paper is to establish the relationship between the video image space and the 2D map space, calculate the map area that corresponds to each pixel in the image, and obtain the weight map that corresponds to the video image. In this method, the neighbouring pixels have smaller weight, whereas the distant pixels have a larger weight. The value of the distance pixel weight is used to achieve the normalized perspective and to eliminate the deformation perspective of the subsequent video feature extraction and interference analysis.
The following basic flow of this method is shown in Figure 3.
Step 1: We created a set of video image pairs with the same name in the map space using a manual marker.
Step 2: The homography matrix of the video image space and the map space is calculated by the point with the same name to establish the mapping relationship between the video image space and the map space.
Step 3: The corresponding geographic area of each pixel in the video image is calculated based on the homography matrix, and the corresponding weight map is obtained to realize the perspective normalization of the video image.
Video image space and map space definition of a single relationship
The core of this method is to construct a one-to-one mapping relationship between the video image space and the 2D map space by using the homography matrix. (Figure 4). This mapping is called homography (Criminisi 2002). This relationship can convert each point in one plane to another plane. The map area that corresponds to the midpoint of the video image can be calculated by mapping each point in a video image. Then, a weight map is created to eliminate the perspective distortion.
If the p point in the image plane transitions to the p' point in the map plane, then we have the following definition: The homography relationship between the image plane and the map plane can be simply expressed as: H is a homogeneous matrix, which can be expressed as a2D matrix of 3 × 3: The coordinate relationship between p and p' is further derived by: x ¼ h 11 x 0 þh 12 y 0 þh 13 h 31 x 0 þh 32 y 0 þh 33 Finally, it is converted to the following form: x 0 y 0 1 0 0 0 Àxx 0 Àxy 0 Àx 0 0 0 x 0 y 0 1 Àyx 0 Àyy 0 Ày In Formula 5, to normalize h_33 to 1, eight unknown equations are required to solve eight unknowns. Four groups of points or more points must be obtained to solve the homography matrix (Criminisi 2002).
The same point based on interactive mode selection
The derivation shows that it is necessary to select four or more homonymic points to establish the mapping relationship between video space and image space by using the homography matrix. We use artificial interaction to select the same point in the video image and 2D map ( Figure 5). The same name should be evenly distributed on the image surface, covering most of the monitoring area to avoid the distortion caused by the camera lens. We also need to acquire distinctive features, such as buildings, roads, etc. Therefore, the mapping is accurate. The location on a 2D map is identified.
Calculation of perspective weights based on the map area
After determining the mapping relationship between the image plane and the 2D map plane, the map area that corresponds to each pixel in the video image can be calculated to construct the perspective weight map.
Weight map, which is the same size of the video image, and each pixel assigned by the corners size of area in the map. In the camera imaging model, the pixel expresses information of a rectangular area with a certain length and width, and each pixel corresponds to a quadrilateral area on the 2D map space. The image coordinates of the pixel corners are converted to cartesian coordinates in the map space. Then, we can solve the map space under the corresponding quadrilateral map area. In order to simplify the calculation and ignore the anisotropy of the pixels, we assume that the corresponding quadrilateral pixel region is a square. (X + 0.5, x + 0.5), (x + 0.5, x + 0.5), (x + 0.5, x + 0.5), (x + 0.5), (x + 0.5) (X + 0.5, x + 0.5), as shown in Figure 6. The coordinates of the corners of the pixel converter correspond to the map coordinates based on the homography matrix, and the map area is calculated by using the map coordinates to obtain the perspective map.
Moving target detection base on a perspective weighted graph
The validity of the perspective correction effect must be applied to specific video analysis methods. The obtained nonlinear perspective weights and the linearized perspective weights in the existing literature are applied to the post-processing stage of the moving target detection, and the results are compared and verified. Moving object detection is a method that distinguishes moving objects from background information in video sequence images; it is the basis of various video analysis algorithms and video compression algorithms (Kim 2003;Kim and Hang 2003). In video surveillance applications, usually within a certain period, the background of the video does not change; thus, more backgrounds are subtracted based on the moving target detection. A variety of background subtraction algorithms have been incorporated into the open-source BGS Library. We select five background subtraction algorithms that are widely used to separate the front/back scene of video images. For the extracted foreground image, two different perspective weights were used to denoise, and the actual motion pedestrian was retained by using the digital morphological method. The process is shown in Figure 7.
To evaluate the effectiveness of the method, further calculations are performed on TP (actual class), TN (actual negative class), FP (false positive class), and FN (false negative class) to obtain Precision (precision), Recall (recall), and F1-measure (Lipton, Elkan, and Naryanaswamy 2014).
TP represents the actual statistic value, that is, the number of detected actual foreground pixels; TN represents the statistic value of the actual negative sample, that is, the pixel number of the actual background detected; FP represents the statistical value of false positive. FN is the statistical value of the false negative sample, that is, the number of pixels mistakenly recognized as the foreground of the background. Background subtraction method is commonly used in the two metrics of recall rate (Recall) and precision (Precision); their corresponding formula is as follows: When the recall rate and accuracy are high, the performance of the algorithm is better. Moreover, obtaining one-sided results is easy if the accuracy and performance of the background subtraction method are evaluated only by the recall rate and accuracy. For example, when all the pixels in the image are detected as foreground, the recall rate is 100%, which is evidently incorrect. F1-measure is used as an integrated measure of the experiment, which represents the average of recall and precision: If the F1-measure of the background subtraction method is closer to 1, then the performance of the algorithm is better. When the value is closer to 0, the performance of the algorithm is worse.
We calculate the F1-measure index of the moving target detection to evaluate the effectiveness of the perspective correction in three cases (non-linear perspective weighting chart is adopted without correcting the perspective, which is corrected by the linear perspective weight map).
Verification scheme design
The video images which are captured in square and corridor, meanwhile the map is covered by the same place. This system is based on VC ++ 2013. It uses OpencCV 3.0 to load and display video images, ArcEngine 10.0 to load and display 2D maps, and BGS Library (Sobral and Vacavant 2014) Background separation. The system operating environment is Windows 8.1 operating system. The processor is Intel Core i7 clocked at 3.5 GHz with a memory of 8GB. The Verification system interface is shown in Figure 8, including the following functions: 1. The video images which are captured in square and corridor, meanwhile the map is covered by the same place.
The mapping relation between the video image space and the map space is established based on the homonymy point pair.
(1) The map area value that corresponds to each pixel in the video image is calculated according to the homography matrix, and the perspective weight map is obtained; (2) Separation of the front/back scene of the video image; (3) The linearized perspective weights obtained by the method proposed by Chan (Chan, Liang, and Vasconcelos 2008) and the non-linear perspective weight map obtained in this paper are used to post-process the separated foreground and background images, respectively. The result of the treatment is calculated by the evaluation index designed in the previous section.
To verify the data, two sections of the 640 * 480 resolution outdoor monitoring video (Figure 9) are selected. The first video is a shooting scene for the high downstairs channel, and the second video is a shooting scene for a bus station next to the road; the moving targets detected are pedestrians. Usually, the pedestrian height is 1 to 2 metres, and the shoulder width is 0.3 to 1 metre; thus, the maximum pedestrian footprint area is T_max, which is set as two square metres, and the minimum map area is T_min 0.3 square metres.
Verify the results
For the two scenes of Videos 1 and 2, we first calculate the perspective weighting map based on the previous method (Figure 10), and then the background subtraction algorithm is used to separate the foreground and background. The perspective weighted map is used for the foreground image processing operation. The experimental results are shown in Tables 1 and 2. In the test video 1, the shooting scene is in the building between the channels. The building glass reflection light changes the ground of the light and shadow constantly; thus, the movement of the target detection caused interference. The experimental results indicate that the traditional method is based on the linear hypothesis for normalization and cannot effectively suppress the interference information that is caused by the change of light and shadow. Therefore, various kinds of moving target detection algorithms that are closer to the camera will retain more pseudo-moving targets. After adopting this method, the detection results of various motion target detection algorithms have been improved evidently by the nonlinear perspective normalization based on the map data.
In the test video 2, the shooting scene is located along the roadside. Therefore, various kinds of moving target detection algorithms that are closer to the camera will retain more pseudo-moving targets. The experimental results show that the traditional method, which is based on the linear hypothesis to normalize the processing, cannot effectively interfere with the tree shake caused by interference information. The detection results retained more targets that are pseudo-moving. After adopting this method, the detection results of various motion target detection algorithms have been improved evidently by the nonlinear perspective normalization based on the map data.
Accuracy analysis
We obtain the TP (actual class), TN (actual negative class), FP (false positive class), and FN (false (false positive class)) of the foreground and background, which are generated and normalized by the moving object extraction algorithm to further describe the accuracy of the detection and obtain Precision, Recall, and F1-measure (Lipton, Elkan, and Naryanaswamy 2014). The statistical results are presented in Tables 3 and 4.
The accuracy of the statistical detection shows that when the same post-processing method is used to Figure 9. Verification data.
process the foreground/background images that are separated by the BGS algorithm library, compared with the linear perspective weight map, the proposed nonlinear perspective weighting method has an evident improvement in precision.
Compared with the F1-measure, the accuracy of the inter-frame difference method is improved, and the accuracy of the improved multi-layer background modelling method is less. The result is due to the low complexity of the inter-frame difference algorithm. The extraction of foreground pixels contains more noise; thus, the effect of this algorithm is more evident. The multi-layer background modelling algorithm uses a complex algorithm with high accuracy. The extracted foreground contains less noise; thus, the precision is improved.
The accuracy of the two video experiments are different. Improvement comparison of the two algorithms before and after using f1-measure is shown in Figures 11 & 12. In the second video scene, the precision of the algorithm is improved. Analysis of the surveillance video that corresponds to the scene structure shows that the depth of the scene greatly affected the detection accuracy. The perspective distortion of the video that corresponds to a scene with less depth is not evident, whereas that of the video two is otherwise. Therefore, the proposed algorithm is more effective for videos. In the case of large-scale outdoor real-time video surveillance, the distortion caused by perspective interference can be effectively eliminated by using this method to obtain a non-linear perspective weight matrix.
Multiplane processing strategy analysis
The key step of this algorithm is to create the relationship between the image and the map obtain the normalized weighting map; thus, it can realize the high precision normalization when the image scene is a single plane. However, when many planes are in the scene, the mapping relationship of the single plane cannot guarantee the accuracy of the normalized weight map calculation. It is necessary to establish a mapping relationship between different images and map regions to obtain normalized weights of different plane regions. (Figure 13).
Application scalability analysis
This method can not only realize the improvement on moving target detection algorithm, but also can be extended to the population statistical process through simple modification. The method is as follows: We create a mapping relationship between the image and the mapping plane., obtain the normalized weight map, select the image area for statistics, and normalize the image area according to the normalized weight map. Taking the number of texture population as an example, the normalized population is obtained by performing an algorithmic analysis on the normalized corrected image.
Conclusions
Eliminating the interference of perspective distortion on video feature extraction and analysis is an important problem that needs to be addressed in many video analysis methods. This paper presents a perspective normalization method based on map data. First, we select the pair of same name points in the video image space and the twodimensional map space, establish the first-order relationship between the two, and then calculate the corresponding relation of each pixel in the video image. The map area that is a non-linear perspective weighting graphics obtained, and the perspective of the video image is finally normalized. The obtained non-linear perspective weights are compared with the linear perspective weights of the existing literature in post-processing of moving object detection. The results show that the method can eliminate the influence of perspective distortion on the accuracy of video analysis more effectively. Especially if the scene depth is large and the perspective deformation of the scene is evident, then the effectiveness of this method is particularly evident. Therefore, the perspective normalization method proposed in this paper can effectively to eliminate the image caused by perspective deformation, and can be better applied to video surveillance and analysis under large-scale scenes. | 5,332.6 | 2019-12-18T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Two Novel Donepezil-Lipoic Acid Hybrids : Synthesis , Anticholinesterase and Antioxidant Activities and Theoretical Studies
Alzheimer disease (AD) is a complex disease related to multiple pathogenic mechanisms. A strategy to develop effective drugs is based on the so-called multi-target directed ligands (MTDL) by using hybrid compounds. So, in the present study, we have designed and synthesized two hybrids, containing the indanone-piperidine moiety of donepezil, a drug approved for the treatment of AD, and the lipoic acid scaffold, an antioxidant compound endowed with neuroprotective effects. One hybrid was synthesized in four steps with 42% global yield, and the other hybrid in six steps with 19% global yield. The latter hybrid displayed moderate inhibitory activity against human acetylcholinesterase (hAChE) and greater activity against human butyrylcholinesterases (hBuChE). The selectivity for hBuChE was further rationalized by theoretical study. Importantly, the second hybrid showed a good antioxidant activity, exhibiting better ability in scavenging 2,2-diphenyl1-picrylhydrazyl (DPPH) radicals than lipoic acid.
Introduction
Alzheimer disease (AD) is the most common cause of dementia in aging population.It was estimated that, in 2010, about 35.6 millions of people suffered from dementia worldwide, and it is expected that this number might triplicate in the next 40 years. 1 Patients affected by AD experience progressive cognitive impairment, such as a decline in short-term memory, loss of speech, language and motor coordination. 2,3D is pathologically characterized by an extracellular deposition of β-amyloid (Aβ) peptide into senile plaques, intracellular formation of neurofibrillary tangles (NFTs) containing a hyperphosphorylated form of Tau protein, oxidative stress, mitochondrial abnormality, neuroinflammatory processes and neuronal loss, mainly affecting the frontal cortex and hippocampus. 4,5AD is also characterized by a reduction of acetylcholine (ACh) levels, which is correlated with the cognitive symptoms. 6he "cholinergic hypothesis", proposed in 1982 by Bartus et al., 7 postulated that the cognitive decline experienced by patients with AD resulted from a deficiency of acetylcholine or cholinergic neurotransmission.In humans, acetylcholine is degraded in the synaptic cleft by two main classes of cholinesterase enzymes: acetyl-(AChE) and butyryl-(BuChE) 8 cholinesterases.
Near the amyloid plaques and neurofibrillary tangles, an extensive oxidative stress has been observed 9 which is a result of an altered balance of formation of reactive oxygen species (ROS) versus scavenging activity. 5,10The production of ROS is also related to calcium homeostasis; the misbalance of calcium influx affects the mitochondrial enzymes and ROS production is a normal part of the electron transport chain.However, excessive levels of these species damage proteins, lipids and nucleic acids. 9D is a complex disease related to multiple pathogenic mechanisms involving different molecular targets.All the drugs approved so far are palliative and not curative.A strategy to develop effective drugs is based on the so-called multi-target directed ligands (MTDLs) 11 approach.This strategy builds on the development of a single drug that can simultaneously interact with different targets.The advantages of this polypharmacological strategy, when compared with the administration of a combination of multiple drugs, are the reduction of the risk of drug-drug interactions and a simplification of the pharmacokinetic and pharmacodynamic studies.Moreover, the success rate of the treatment of a complex disease of the elderly, as AD, should be higher. 12onepezil (Figure 1), a palliative drug approved in 1996, is indicated for the treatment of mild and moderate forms of AD. 13 Its structure represents an attractive starting point for the rational design of new MTDLs that can inhibit AChE and, at the same time, interact with other targets involved in AD onset and progression. 13,14ny prototypes for new drugs based on the hybridization strategy have been developed starting from donepezil fragments, i.e., indanone-piperidine moiety or piperidine-benzyl fragment. 13Furthermore, donepezil hybrids with tacrine, 15,16 diaminobenzyl group, 17 ferulic acid, 18 coumarin, 19 among others 13 have been prepared.
Hybrids containing the piperidine-benzyl moiety of donepezil and lipoic acid (LA) (Figure 2) were described by the groups of Kim et al., 20 Lee et al., 21 Prezzavento et al., 22 and Estrada et al. 23 The hybrids showed activity against cholinesterase (ChE) enzymes, antagonism toward σ1 receptors, β-secretase inhibition and antioxidant activity.
LA is a natural disulfide compound present in almost all foods from animal and vegetable sources.LA and its reduced form, dihydrolipoic acid (DHLA) (Figure 2), play an important role in pathological conditions characterized by oxidative stress, 24,25 such as: (i) scavenger of ROS, (ii) capacity to increase the level of reduced glutathione and other antioxidant enzymes, (iii) downregulation of the inflammatory processes, (iv) scavenging of lipid peroxidation products, (v) redox active transition metal chelation, (vi) increase of ACh production by activation of choline acetyltransferase. 25On the basis of such activities, LA can exert beneficial effects in AD, possibly stabilizing cognitive functions. 26us, LA is a good prototype to design new hybrids to combat AD, and previously developed LA hybrids maintained the antioxidant activity and showed other beneficial activities such as inhibition of AChE and BuChE as well as neuroprotective and anti-inflammatory activity. 27,28n 2005, Rosini et al. 29 reported the synthesis of lipocrine, an LA-tacrine hybrid, which further inspired the development of other hybrids featuring an LA fragment connected with N 1 -ethyl-N 1 -(2-methoxy-benzyl)-hexane-1,6-diamine moiety or with rivastigmine. 26lthough there are works involving the hybridization of the benzyl-piperidine moiety of donepezil with LA, to our knowledge, there is no report on the hybridization of the indanone-piperidine moiety with LA.Therefore, in the present study, following a simple synthetic route, we have designed and synthesized two hybrids containing the indanone-piperidine moiety of donepezil and the LA scaffold with the aim of achieving new MTDLs for the treatment of AD. 11 Here, we report their biological assessment on human AChE (hAChE), human BuChE (hBuChE), as well as the evaluation of their antioxidant activity using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) assay.Finally, docking studies provided further insights of the binding mode of these novel donepezil-lipoic acid hybrids with AChE and BuChE.
Synthesis of hybrid compounds
Two novel donepezil-LA hybrids, differing only by the linkage between the two units, were designed and synthesized.In the final hybrid structures, the indanone and piperidine fragments were preserved and the benzyl group was replaced by the LA portion (Figure 3).In the synthesis of hybrid 1, the first step was to obtain the indanone 4, in high yield, by cyclization of 3-(3,4-dimethoxyphenyl)propanoic acid 3 in the presence of p-toluenesulfonic acid and phosphorus pentoxide (P 2 O 5 ). 30he Boc-piperidine-4-carboxaldehyde, obtained via Swern oxidation of N-Boc-4-piperidinemethanol 31,32 was directly used in aldol condensation with 4 furnishing 5 with 84% yield 33 (Scheme 1).
In the next step, the key intermediate 6 was prepared in 91% yield by hydrogenation of the aldol adduct 5 with palladium-carbon (Pd-C) as catalyst, 34 followed by removal of the Boc group under acidic aqueous media. 35Noteworthy, when the hydrogenation reaction lasted more than 20 min, the deoxygenation product was observed.The final product 1 was obtained by coupling the amine 6 and LA using benzotriazol-1-yloxytripyrrolidinophosphonium hexafluorophosphate (PyBOP) and N,N-diisopropylethylamine (DIPEA) (Scheme 1). 36he synthesis of hybrid 2 featured, as first step, the protection of 2-bromoethylamine hydrobromide to afford the tert-butyl (2-bromoethyl)carbamate, 37 which was reacted with 6 under basic conditions to afford the diamine 7 in 39% yield 38 (Scheme 2).Removal of the protection group with trifluoroacetic acid (TFA) furnished 8 as trifluoroacetate salt in quantitative yield. 39Specifically, compound 8 is in the form of mono trifluoroacetate salt as confirmed by HRMS analysis, which showed the mass-to-charge ratio (m/z) signals of the mono-protonated amine (m/z C 19 H 28 N 2 O 3 [M + H] + observed: 333.2178; required: 333.2166).Finally, hybrid 2 was obtained by condensation of LA and 8 in the same conditions used for the synthesis of 1 (Scheme 2). 36
hAChE, hBChE and antioxidant assay
Initially, to determine the potential interest of the new donepezil-LA hybrids for the treatment of AD, the inhibitory potency toward hAChE and BuChE from human serum was assessed by Ellman's method. 40Results, expressed as half maximal inhibitory concentration (IC 50 ) values, i.e., the IC 50 that reduces the cholinesterase activity by 50%, are listed in Table 1.In particular, anti-BuChE activity has recently raised interest because it was shown that with AD progression, BuChE activity in specific brain regions increases while AChE activity is greatly reduced. 41Conversely to donepezil which is an AChE selective inhibitor, hybrid 2 showed to be a selective BuChE inhibitor.Hybrid 1 was scarcely soluble in the assay conditions.At the highest tested concentration (50 mM) hybrid 1 did not significantly inhibit ChE enzymes.
The antioxidant activity was estimated using the DPPH antioxidant assay. 42For that purpose, different concentrations (20-640 µM) of the test compounds were incubated for 30 min in a solution containing the stable free radical.Figure 4 shows the DPPH radical scavenging activity of new hybrids and reference compounds, expressed as percentage of scavenged DPPH radicals.All tested compounds showed a decrease in the concentration of DPPH radicals confirming their scavenging ability.Hybrid 2 was able to scavenge DPPH radicals, showing a higher activity than LA.The half maximal effective concentration (EC 50 ), i.e., concentration that causes 50% decrease in the DPPH radical content, was 300 µM.The hybrid 1 exhibited lower scavenging activity than hybrid 2, however, it was similar to that of LA.Scavenging activity of hybrid 1 and LA was not concentration dependent; similar results were obtained at all concentrations tested.The scavenging activity of LA toward DPPH radicals was 27% at 100 µM, in agreement with data reported in literature. 43
Molecular modeling
To get insights on the binding mode, compounds 1 and 2 were docked in AChE and BuChE enzymes.The potential binding sites of AChE and BuChE were calculated using the built-in cavity detection algorithm from Molegro program. 44,45The BuChE enzyme has a larger cavity of 482.3 Å 3 ; meanwhile, AChE has a cavity of 363.0 Å 3 .Compound 2 formed a more stable protein-ligand complex with both ChEs than hybrid 1.It should be kept in mind that the interaction modes of the ligand with the active sites were determined as the lowest energy scored protein-ligand complex used during docking and the conformers of each compound were mostly associated to each other.Thus, from the theoretical findings, compound 2 revealed the lowest score energy value.For instance, differences between compound 2 and hybrid 1 of up to 10.5 and 31.8 kcal mol -1 were obtained for AChE and BuChE, respectively.
Regarding the interaction with AChE, donepezil, 1 and 2 interacted with phenylalanine Phe295A through hydrogen bond formation.In particular, donepezil interacted with tryptophan Trp86A (amino acid residue of catalytic site, CAS), tryptophan Trp286A and tyrosine Tyr341A (amino acid residue of peripheral anion site, PAS) through π-π interactions; 1 and 2 interacted with tryptophan Trp286A and tyrosine Tyr341 through π-π interaction.It is observed that the interaction with the amino acids of CAS is lost, which explains the lower activity observed of the hybrid 2 in the biological assay.Concerning BuChE, donepezil, 1 and 2 formed hydrogen bond interactions with serotonine Ser198A; furthermore, donepezil established π-π interaction with tryptophan Trp231 and the methoxy group of both 1 and 2 interacted with tryptophan Trp231A (Figure 5).It is important to note that the strength of molecular interactions was lower for compound 1 when compared to 2, i.e., the addition of a methyl group indicated an increase of hydrophobic interactions with the residues of the hydrophobic pocket, resulting in higher binding affinity with both enzymes.This feature suggests that the inclusion of a larger linkage group between the two units can be favorable for the biological activity.Exploring the fact that in BuChE there is a larger accessible cavity to the solvent, bulky substituents as well as larger linker chains will have a greater beneficial impact on selectivity and interaction with BuChE.
The activity of hybrid 2, compared with those of benzylpiperidine hybrids reported in the literature, [20][21][22][23] suggest that both interaction with the CAS and PAS of donepezil are important for the activity of this drug.When was removed from the moiety of donepezil, the inhibition of AChE enzymes was lower.
Conclusions
In this work, two donezepil-LA hybrids containing the indanone-piperidine moiety of donepezil were synthesized.Hybrid 1, in which the two fragments were connected directly, was obtained in four steps with 42% global yield, while hybrid 2, which features a linker between the two units, was synthesized in six steps, with 19% global yield.Hybrid 2 proved to be a selective BuChE inhibitor even if less potent than donepezil and a good antioxidant agent.In particular, the lower activity can be ascribed to the loss of the interaction with Trp86A, an amino acid of the AChE CAS, when the benzyl moiety of donepezil is replaced by LA.The selectivity of 2 toward hBuChE is explained by the larger gorge of this enzyme, which can better accommodate hybrid 2. Finally, and quite interestingly, hybrid 2 showed better scavenging ability toward DPPH radicals than LA.The combined anti-ChE and antioxidant properties exhibited by the hybrid 2 confirm their potential as anti-AD agents.
General techniques
All starting materials were obtained from commercially available sources with high-grade purity and used without further purification.Proton nuclear magnetic resonance H NMR data are presented in the following order: chemical shift in ppm (multiplicity, coupling constant (J) in hertz (Hz), integration).Melting points (mp) (uncorrected) were obtained on a Mettler FP 80 HT apparatus.Infrared spectra were recorded on a PerkinElmer Spectrum One spectrometer.The high resolution mass spectra were obtained using a mass spectrometer with an electrospray ionization source (ESI-MS) model Shimadzu LC-ITTOF.In a round bottom flask, P 2 O 5 (6.85 g, 36 mmol) and toluenesulfonic acid (5.11 g, 36 mmol) was warmed to 120 °C and stirred for 30 min.To the clear homogeneous solution was added the 3-(3,4-dimethoxyphenyl)propanoic acid (0.630 g, 3.0 mmol) in one portion and the solution was stirred at 120 °C for 5 min.Then, ice water was added to the deep purple solution formed and the resulting mixture was extracted three times with dichloromethane (CH 2 Cl 2 ).The combined organic layers were washed with saturated solution of sodium bicarbonate (NaHCO 3 ), dried with magnesium sulfate (MgSO 4 ), filtered and concentrated under reduced pressure.The resulting brown solid was purified by chromatography column eluted with ethylacetate/hexane (1:1, v:v) yielding 4 as a yellow solid (0.51 g) in 87% yield; mp 118-119 °C (Lit. 46 Under argon atmosphere, a solution of oxalylchloride (0.2 mL, 2.2 mmol) in anhydrous CH 2 Cl 2 (5 mL) was cooled to -78 °C and a solution of dimethyl sulfoxide (DMSO) (0.31 mL, 4.4 mmol) in anhydrous CH 2 Cl 2 (1 mL) was dropwise added.After stirring for 10 min, a solution of N-Boc-4-piperidinemethanol (0.43 g, 2 mmol) in anhydrous CH 2 Cl 2 (1 mL) was added.Triethylamine (1.4 mL, 10.2 mmol) was added after 15 min and the reaction was allowed to warm up to room temperature.The reaction was quenched by the addition of water (10 mL) after 2 h and the mixture extracted four times with CH 2 Cl 2 .The combined organic layers were dried with MgSO 4 , filtered and concentrated under reduced pressure.The crude aldehyde was used in the next step without further purification.
To a solution of 5,6-dimethoxy-indanone (4) (0.19 g, 1 mmol) in anhydrous THF (5 mL) was added sodium hydride (NaH) (0.05 g, 1.2 mmol, 60% dispersion in mineral oil).After stirring for 30 min, a solution of crude aldehyde in THF (1 mL) was added dropwise, and the resulting mixture was stirred at room temperature for 2 h.The solvent was concentrated under reduced pressure, and to the residual product was added water and extracted three times with CH 2 Cl 2 .The combined organic layers were dried with MgSO 4 , filtered and concentrated under reduced pressure.The crude reaction was purified by chromatography column eluted ethyl acetate/hexane (1:1, v:v) yielding 5 as oil (0.32 g) in 84% yield; IR (ATR) n / cm - To a solution of 5 (0.22 g, 0.57 mmol) in THF (5 mL) the Pd-C (0.01 g, 10% Pd-C) was added.The reaction mixture was purged with hydrogen and stirred at room temperature under hydrogen atmosphere for 20 min.Then, the reaction was filtered through celite washing with methanol and the solvent was removed under reduced pressure.The crude product (0.20 g) was solubilized in ethyl acetate (10 mL) and HCl 3 M (8 mL) was added.The reaction mixture was stirred at room temperature for 3 h.After this time, the solvent was concentrated under reduced pressure and the residue was dissolved with saturated NaHCO 3 solution.The resulting solution was extracted three times with CH 2 Cl 2 .The organic phases were dried over anhydrous MgSO 4 and concentrated under reduced pressure yielding the desired product (6) A solution of lipoic acid (0.06 g, 0.25 mmol) and PyBOP (0.13 g, 0.25 mmol), in anhydrous CH 2 Cl 2 (4.5 mL), at 0 °C, was stirred for 30 min and then, cannulated to a flask containing a solution of (6) (0.1 g, 0.27 mmol), N,N-diisopropylethylamine (0.25 g, 1.97 mmol), in anhydrous CH 2 Cl 2 (4.5 mL).The resulting mixture was stirred at room temperature for 20 h.Then, the reaction was quenched by the addition of water and the mixture was extracted four times with CH 2 Cl 2 .The combined organic layers were dried with MgSO 4 , filtered and concentrated under reduced pressure.The crude reaction was purified by chromatography column eluted with ethyl acetate/methanol (5%) yielding 1 as a beige solid (0.07 g) in 63% yield; IR (ATR) n / cm - A solution of compound 6 (0.20 g, 0.61 mmol), NaI (0.09 g, 0.61 mmol) and N,N-diisopropylethylamine (0.16 g, 1.22 mmol) in acetonitrile (12 mL) was dropwise added to a solution of tert-butyl (2-bromoethyl)carbamate (0.18 g, 0.79 mmol) in acetonitrile (1 mL).Then, the reaction was heated to reflux temperature and kept under stirring for 24 h.After that time, a reaction mixture was concentrated under reduced pressure and the residue was solubilized in ethyl acetate and washed with solution of potassium carbonate (K 2 CO 3 ) 1 M.The organic phase was extracted with two portions of ethyl acetate.The organic phase was then dried over anhydrous MgSO 4 and concentrated under reduced pressure on a rotary evaporator.The crude reaction was purified by chromatography column eluted with ethyl acetate/methanol (4:1, v:v) and the desired product 7 was obtained in 39% (0.10 g); IR (ATR) n / cm -1 3426, 2920, 2852, 1692, 1626, 1468, 1364, 1316, 1256, 1170, 1118,
N-(2-(4-((5,6-Dimethoxy
A solution of lipoic acid (0.05 g, 0.22 mmol) and PyBOP (0.11 g, 0.22 mmol), in anhydrous CH 2 Cl 2 (3.5 mL), at 0 °C, was stirred for 30 min and then cannulated to a flask containing a solution of 8 (0.11 g, 0.24 mmol), N,N-diisopropylethylamine (0.23 g, 1.76 mmol), in anhydrous CH 2 Cl 2 (4 mL) and the resulting mixture was stirred at room temperature for 20 h.Then, the reaction was quenched by the addition of water and the mixture was extracted four times with CH 2 Cl 2 .The combined organic layers were dried with MgSO 4 , filtered and concentrated under reduced pressure.The crude reaction was purified by chromatography column eluted with ethyl acetate/methanol (10%) yielding 2 as oil (0.08 g) in 74% yield; IR (ATR) n / cm -
Determination of inhibitory effect on AChE and BuChE activity
The capacity of compound 2 and donepezil to inhibit AChE activity was assessed using the Ellman method. 40Initial rate assays were performed at 37 °C with a Jasco V-530 double beam spectrophotometer by following the rate of increase in the absorbance at 412 nm for 3 min.AChE stock solution was prepared by dissolving human recombinant AChE (E.C.3.1.1.7)lyophilized powder (Sigma, Italy) in 0.1 M phosphate buffer (pH = 8.0) containing Triton X-100 (0.1% v:v).Stock solution of BuChE (E.C. 3.1.1.8)from human serum (Sigma, Italy) was prepared by dissolving the lyophilized powder in an aqueous solution of gelatine (0.1% m:v).The final assay solution consisted of a 0.1 M phosphate buffer pH 8.0, with the addition of 340 mM 5,5'-dithio-bis(2-nitrobenzoic acid), 0.02 unit mL -1 of human recombinant AChE, or BuChE from human serum and 550 mM of substrate (acetylthiocholine iodide, ATCh or butyrylthiocholine iodide, BTCh, respectively).Stock solutions of 2 were prepared in methanol and diluted in methanol, while donepezil was solubilized in water and dilutions were prepared in water.Five different concentrations of inhibitor were selected in order to obtain inhibition of the enzymatic activity comprised between 20 and 80%.50 mL aliquots of increasing concentration of inhibitor were added to the assay solution and pre incubated for 20 min at 37 °C with the enzyme before the addition of the substrate.Assays were carried out with a blank containing all components except AChE or BuChE in order to account for the non-enzymatic reaction.The reaction rates were compared and the percent inhibition due to the presence of inhibitor was calculated.Each concentration was analysed in duplicate, and IC 50 values were determined graphically from log concentration versus % inhibition curves (GraphPad Prism 4.03 software, GraphPad Software Inc.). 47
Scavenging of DPPH radicals
The ability of hybrids to scavenge DPPH radical, a reactive nitrogen species (RNS), was determined according to Gülçin 48 with modifications.The screening was done by incubating 50 µL of each compound in an ethanolic medium containing 50 µL of 200 µM DPPH.Final concentrations of test compounds were between 20-640 µM and the DPPH was 100 µM.The systems were maintained under stirring in the dark for 30 min.The absorbance at 517 nm was recorded.Each concentration was tested in triplicate.
Theoretical calculations
Crystal coordinates of the human AChE and BuChE enzymes were downloaded from Protein Data Bank (PDB code: 4BDT 49 and 5LKR, 50 respectively).Donepezil, 1 and 2 (Figure 3) were docked into both binding sites using the Molegro Virtual Docker (MVD), 44,45 a program for predicting the most likely conformation of how a ligand will bind to a macromolecule.MolDock scoring funcion (MolDock Score) employed by the MVD program is regulated on a new hybrid search algorithm, called guided differential evolution.This algorithm combines the differential evolution optimization technique with a cavity prediction algorithm during the searching procedure, which allows a fast and accurate recognition of binding modes.It is derived from the piecewise linear potential (PLP), a simplified potential whose parameters are fit to proteinligand structures and binding data scoring functions 44 and further extended in GEMDOCK program 51 (generic evolutionary method for molecular DOCK) with a new hydrogen bonding term and new charge schemes.Only ligand molecules are considered flexible during the docking simulation.Thus, a candidate solution is encoded by an array of real-valued numbers representing ligand position, orientation, and conformation as Cartesian coordinates for the ligand translation, four variables specifying the ligand orientation (encoded as a rotation vector and a rotation angle), and one angle for each flexible torsion angle in the ligand.
( 1 H NMR) and carbon-13( 13 C) NMR spectra at 200 and 400 MHz were obtained on a Bruker AVANCE DPX 200 and a Bruker AVANCE DRX 400 spectrometer, respectively.The chemical shifts (d) are expressed in parts per million (ppm) and are referenced to signals from tetramethylsilane (TMS) or residual solvent signal.
Table 1 .
hAChE and hBuChE activities of hybrids 1, 2 and the reference compound donepezil a IC 50 : Inhibitory concentration; b SEM: standard error of the mean; c n.a.: not active (% inhibition < 10%) at the highest concentration achievable (50 mM) in the assay conditions. | 5,395.6 | 2017-01-01T00:00:00.000 | [
"Chemistry",
"Medicine"
] |
Thermodynamic and topological properties of copolymer rings with a segregation/mixing transition
Two ring polymers close to each other in space may be either in a segregated phase if there is a strong repulsion between monomers in the polymers, or intermingle in a mixed phase if there is a strong attractive force between the monomers. These phases are separated by a critical point which has a $\theta$-point character. The metric and topological properties of the ring polymers depend on the phase, and may change abruptly at the critical point. In this paper we examine the thermodynamics and linking of two ring polymers close in space in both the segregated and mixed phases using a cubic lattice model of two polygons interacting with each other. Our results show that the probability of linking is low in the segregated phase, but that it increases through the critical point as the model is taken into the mixed phase. We also examine the metric and thermodynamic properties of the model, with focus on how the averaged measures of topological complexity are related to these properties.
Introduction
Mutually attracting pairs of circular or ring polymers in solution undergo a transition from a segregated to a mixed phase at a critical temperature. The metric and topological properties of the polymers are different in the segregated and mixed phases, changing at the critical temperature from expanded spatially segregated conformations to more compact conformations in the mixed phase where the two polymers interpenetrate. In [1] the segregation-mixing transition of a polymerpolymer-solvent mixture is discussed in chapter IV.4. There it is argued that in a good solvent the polymer coils behave like hard spheres which cannot interpenetrate (they are segregated). In a poor solvent, however, the coils tend to exclude the solvent and are driven together increasing the local concentration which, if high enough, should drive the system through a θ-transition into a collapsed phase. This should also occur if there is a strong attractive interaction between the polymer coils, where they exclude solvent molecules by mixing in close proximity to one another.
In this paper we aim to model the segregated-mixed phases in a model of a system composed of a pair of proximate ring polymers which may be linked, especially when they are in the mixed phase. We use a cubic lattice closed self-avoiding walk model (lattice polygons) where the proximity is modelled by forcing the polygons to have at least one pair of vertices (one vertex in each polygon) a unit distance apart. See figure 1(a) for an example. The two polygons are both self-avoiding and mutually avoiding in the lattice.
Our model is also useful as a model for a particular class of diblock copolymers with figure eight connectivity composed of two polygons joined together by sharing a single step (see figure 1b). Each polygon is a block in the copolymer, and both polygons are in a good solvent but there is a short range interaction between vertices located in each of the two polygons. If the interaction is a strong attractive force, then the two polygons in the figure eight will tend to interpenetrate, otherwise they will segregate due to an entropic repulsion between them. One may also consider ring formation in a uniform 4-star polymer with two A-arms and two B-arms. The star polymer can be cyclized to form an A ring and a B ring, in conditions with different interaction strengths between the two rings, resulting in different extents of linking of the two rings.
We shall focus (in particular) on the metric and topological properties of our model in the segregated and mixed phases. Our aim is to address the following questions: • How do configurational properties (such as mean extension, shape and entanglement) depend on the strength of the interaction between the component polygons (or blocks)?
• How do topological properties of the system (such as the complexity of linking between the component polygons) differ in the segregated and mixed phases?
• How does topological complexity, as measured in terms of the link spectrum, depend on the degree of mixing?
The configurational and topological properties of the model could be different in the segregated and mixed phases. In the segregated phase the component polygons tend to be separated in space, and so the degree of linking between them will be low, while in the mixed phase they may be strongly intermingled with a high degree of linking. It is not clear a priori that the transition will be strongly signalled in the configurational properties of the model. While the segregated phase will have the scaling properties of a self-avoiding walk, as the model crosses the critical point into the mixed phase the two polygons might form ribbon-like structures [2,3]. If this is all that happens then the critical exponents of the mixed phase will still be given by self-avoiding walk exponents. The interactions between different parts of the ribbon boundaries could lead to further collapse, similar to that of a self-avoiding walk having gone through a θ-transition into its dense phase. This transition should be signalled in the linking (and in the mean topological invariants such as linking number) of the two components, and the critical properties (such as the radius of gyration exponent) might be different on the two sides of the transition. The plan of the paper is as follows. In section 2 we describe the model and its partition function and free energy. Section 3 contains some theorems establishing the existence of a transition from a segregated phase to a mixed phase. In section 4 we discuss the Monte Carlo methods used to sample configurations and consider the results of the simulations (and in particular the metric and shape properties of the system as a function of the attractive interaction). Results on the topological mutual entanglement are presented and discussed in section 5. We close with a short Discussion in section 6.
The model
We model circular polymers as lattice self-avoiding polygons of length n. These are embeddings of simple closed curves in the simple cubic lattice Z 3 . The embeddings are also simple closed curves in R 3 with well defined topological properties (knots and links). In this paper the lattice polygons are mutually avoiding and placed in Z 3 such that a pair of vertices (one vertex in each polygon) are a unit distance apart (see figure 1). We label the two polygons by A and B respectively so that this is a model of two adjacent rings A and B which may be linked, or parts of a copolymer with a figure eight connectivity and two blocks, each a ring of the figure eight.
A mutual contact between polygons A and B is a pair of vertices (v A , v B ), such that v A ∈ A and v B ∈ B and the distance between v A and v B is equal to one: d(v A , v B ) = 1. An interaction between the two polygons (or blocks in the copolymer) is introduced by introducing an energy m associated with each mutual contact, and then defining the parameter β m = − m /k B T where k B is Boltzmann's constant and T is the absolute temperature. Denoting the number of conformations of the lattice model of total length 2n (each polygon component of length n) with k m mutual contacts by p (2) 2n (k m ), the equilibrium properties of the system are given by the partition function Entropy dominates the model when β m ≤ 0 and the two component polygons tend to stay separated (this is a segregated phase due to the mutual avoidance between the polygons inducing a shortranged repulsion between them). When β m > 0 is large enough, then the mutual contacts induce an attraction between the component polygons, and one expects this to increase the number of mutual contacts. In this case one expects the most likely conformations to be those where the two polygons interpenetrate strongly in a mixed phase. The free energy per unit length is given by f 2n (β m ) = limiting free energy of the model is obtained: It is not known that this limit exists for all values of β m ∈ R but we shall show that it exists for −∞ < β m ≤ 0 and is equal to κ 3 = log µ 3 where κ 3 is the connective constant and µ 3 is the growth constant of cubic lattice self-avoiding walks [4] (see section 3). When β m > 0 the situation is more complicated and we shall rely on Monte Carlo simulations to explore the model in this regime.
Some rigorous results
In this section we obtain some bounds on the partition function Z 2n (β m ) and use these to prove the existence of the thermodynamic limit when β m ≤ 0 and the existence of a phase transition in the system. We also obtain some similar results for fixed link type. Attach the coordinate system (x 1 , x 2 , x 3 ) to Z 3 so that the coordinates of the vertices are all integers. Write p n for the number of n-edge polygons (modulo translation) so that p 2m+1 = 0, p 4 = 3, p 6 = 22, etc. Hammersley [4] has shown that lim n→∞ 1 n log p n = log µ 3 where µ 3 is the growth constant of self-avoiding walks on this lattice. Similarly, if we write q n for the corresponding number of polygons on the square lattice Z 2 then lim n→∞ 1 n log q n = log µ 2 where µ 2 is the growth constant of self-avoiding walks on the square lattice.
Proof: To obtain an upper bound consider β m = 0. Embed a polygon with n edges in p n ways and embed a second n-edge polygon in a box of side 2n centred on the first polygon, This implies that Z 2n (0) ≤ p 2 n e o(n) so that lim sup To get a lower bound construct two polygons, one (σ 1 ) with n−2 edges and the other (σ 2 ) with n edges. For σ 1 translate the right-most top-most edge a unit distance to the right and add two edges to reconnect the polygon to form σ 3 . Translate σ 3 , and rotate it if necessary, so that its right-most edge is unit distance from an edge of σ 2 and there are exactly two vertices of σ 3 that are unit distance from vertices of σ 2 . This gives the bound Since α ∈ (0, 1) is arbitrary, this shows that Proof: Construct a polygon σ 1 in the plane x 1 = 0 with αn edges, 0 < α < 1. Let σ 2 be a translate of σ 1 in the plane x 1 = 1. Let σ 3 be a polygon in Z 3 with n − αn edges and with no vertices with x 1 > −1. Similarly, let σ 4 be a polygon in Z 3 with n − αn edges and with no vertices with x 1 < 2. Concatenate σ 1 and σ 3 to obtain a polygon with n edges and, similarly, concatenate σ 2 and σ 4 . The two resulting polygons have αn pairs of vertices unit distance apart so that they contribute a Boltzmann factor exp[β m αn ] to the partition function Z 2n (β m ). σ 1 can be chosen in µ αn +o(n) 2 ways, σ 2 can be chosen in only one way, while σ 3 and σ 4 can each be chosen in µ n− αn +o(n) 3 ways. This construction gives a lower bound on Z 2n (β m ) Taking logarithms of the above, dividing by 2n, and letting n → ∞, the claimed lower bound is obtained. 2 Proof: We can rewrite the result of theorem 2 as lim inf and this is equal to log µ 3 when 1 2 (log µ 2 + β m ) − log µ 3 = 0, for 0 < α < 1.
Bounds on the free energies of linked conformations
The free energies of linked conformations of specified links can also be bounded using arguments similar to the above. We proceed by recalling that the growth constant of unknotted polygons is defined by the limit [5] lim It is known that µ ∅ < µ 3 [5,6]. One may similarly define the growth constant µ K of knotted polygons of knot type K by lim sup n→∞ 1 n log p n (K) = log µ K . Figure 2: A linked conformation of two polygons with one mutual contact shown. The total number of mutual contacts between the two polygons is km = 13, while each polygon component has length n = 14. In this case the link type is L = 2 2 1 , the Hopf link.
It is known that µ ∅ ≤ µ K < µ 3 [7] Denote the number of linked conformations of link type L, with polygon components of length n each, and with k m mutual contacts, by p (2) 2n (k m , L). The partition function is and it is a sum of weighted conformations of fixed linked type L and total length 2n. An example of a linked conformation in our model is shown in figure 2. This conformation is a Hopf-link and it has weight e 13βm (since k m = 13) in the partition function. More generally, the two components of a lattice link of link type L are lattice knots of knot types K 1 and K 2 respectively. In many cases K 1 = K 2 = ∅ (for example, if L is the Hopf link, as shown in figure 2), but K 1 and K 2 could be (necessarily) non-trivial knots for certain link types, or could be chosen to be given knot types.
We generalize theorem 1 as follows.
In the event that Proof: An upper bound is obtained by first noting that Z 2n (0, L), and then counting the number of conformations of the component polygons independently, times the number of ways they may be placed so that the link may be recovered. Each component polygon is a placement of a simple closed polygon of fixed knot type, say K 1 for the first polygon, and K 2 for the second polygon. This shows that Z In the event that the two component polygons of the link are both the unknot, then K 1 = K 2 = ∅ and lim sup n→∞ 1 2n log Z (2) 2n (β m , L) ≤ log µ ∅ . Proceed by concatenating the top edge of an unknotted polygon σ 1 onto the bottom edge A, and the bottom edge of a second unknotted polygon σ 2 onto the top edge B, as illustrated. Assume that the length of the first component in the embedded tangle T is m 1 , and of the second component, m 2 . Fix the length of σ 1 to be n−m 1 , and of σ 2 to be n−m 2 . This creates a link L of link type determined by T , and since the overpasses in T are accommodated by overstepping into the x 3 = 1 plane, there is at least one mutual contact between the components of T , as required. There are j m ≥ 1 mutual contacts in L, and j m ≤ 4(m 1 +m 2 ) since a polygon of length m 1 has at most 4 mutual contacts for each vertex, and since σ 1 and σ 2 do not contribute any such mutual contacts.
Since the orientation of the top edge of σ 1 has to match that of A, there are p n−m 1 (∅)/2 choices for σ 1 . Similarly, there are p n−m 2 (∅)/2 choices for σ 2 . This shows that Take logarithms, divide by 2n, and let n → ∞. Since m 1 and m 2 are fixed, this shows that This completes the proof. 2 Lower bounds on the free energies of linked conformations of the model in figure 1(a) are determined using a construction similar to that in the proof of theorem 2. A schematic diagram is shown in figure 4. The intersection of the x 3 = 0 and x 3 = 1 planes and Z 3 is a slab S of height 1. (That is, S consists of two square lattice planes a distance one apart in the x 3 -direction). Let T be a tangle diagram of a link of type L. Then T can be realised as two self-avoiding walks in S such that the endpoints of the self-avoiding walks are in a plane x 1 = k, and with the selfavoiding walks confined to the lattice points in S with x 1 ≤ k. We next translate the tangle, and extend the endpoints of its component self-avoiding walks by adding steps, if necessary, into the x 1 > k sublattice such that the two component self-avoiding walks have the same lengths , and the the four endpoints have coordinates (m, 0, 0) (m, 0, 1), (m, 1, 0) and (m, 1, 1) where (m, 0, 0) and (m, 1, 0) are the endpoints of one self-avoiding walk, and (m, 0, 1) and (m, 1, 1) are the endpoints of the second self-avoiding walk. By adding two edges to close off the tangle into a linked pair of polygons, the link L is realised as a lattice link of type L. We assume that there are c 0 mutual contacts between the component self-avoiding walks. Figure 4: Schematic of a tangle T embedded in a slab S and two polygons σ1 and σ2, each of length n, concatenated onto the components of the tangle. Since σ2 is a translation of σ1 one step along the x3-direction, the number of mutual contacts between them is n.
As shown in figure 4, let σ 1 be a square lattice self-avoiding polygon in the x 3 = 0 plane, and σ 2 be the translate of σ 1 one step in the x 3 -direction. If the length of σ 1 is n, then there are n mutual contacts between the pair of polygons (σ 1 , σ 2 ). This pair can be translated together and concatenated on the link L by placing the left-most and nearest edge (the bottom edge) of σ 1 one step in the x 1 -direction to the edge joining the endpoints (m, 0, 0) and (m, 1, 0). Then σ 2 has its bottom edge one step in the x 1 -direction from the edge joining the endpoints (m, 0, 1) and (m, 1, 1). The concatenation gives a lattice link of type L, with total length 2 +2 n+2. The total number of mutual contacts is c 0 +n. This shows that since the number of choices for σ 1 is q n , the number of square lattice polygons of length n. By taking logarithms of equation (6), diving by 2n and then taking n → ∞, the following theorem is proven. The corollary of theorems 2 and 5 is that there is, for some link types, a critical point β log Z (2) 2n (β m , L) as claimed. 2 Since µ ∅ < µ 3 the upper bound in corollary 1 is strictly smaller than the upper bound given in theorem 3. In addition, note that the probability of seeing a link of type L (and with both components the unknot), is By theorems 1 and 4, Since |µ 3 − µ ∅ | ≈ 10 −6 [8,9], the convergence of P 2n (L) to zero is numerically very slow and not significant until walks have lengths of O(10 6 ), this effect will not be visible in our data, and the probability of linking as a particular fixed link type L will be a function of the local geometry of the polygons in the lattice and the value of β m .
4 Results: Thermodynamic and metric properties
Monte Carlo method
Conformations of the lattice model were sampled from the Boltzmann distribution using a Markov chain Monte Carlo algorithm. The elementary moves were a combination of pivot moves for selfavoiding polygons [10] and local Verdier-Stockmayer style moves [11]. The Verdier-Stockmayer moves were introduced to increase the mobility of the Markov Chain when the algorithm samples at large positive values of β m [12,13] where there is a strong interaction between the component polygons which reduces the success rates of the pivot moves. Sampling was also improved by implementing the elementary moves using a Multiple Markov Chain algorithm with chains distributed along a sequence of parameters (β (j) m ) for j = 1, 2, . . . , M . Along each parallel chain Metropolis sampling was implemented to sample from the Boltzmann distribution at a fixed β (j) m , and chains were swapped using the protocols of Multiple Markov Chain sampling [14,12,13]. The collection of parallel multiple Markov chains is itself a Markov Chain with stationary distribution the product of the Boltzmann distributions along each chain (see reference [12,13]).
In this paper we sampled along M ≈ 50 parallel chains, and we were able to obtain sufficiently uncorrelated samples for systems of total size 2n ∈ {96, 200, 296, 400, 600, 800} and for the β
Thermodynamic properties
The results in section 2 show that there should be two regimes in the model, namely a segregated regime for negative or small positive values of β m , and a mixed regime of interpenetrating components when β m is large and positive. The segregated regime is characterized by states where the two polygons are separated from one another with a low density of mutual contacts between them, while the mixed phase has the two components close together in the same local space so that the number of mutual contacts is increased.
A sharp change in the average energy per monomer k m /(2n) is consistent with the two regimes being separated by a phase boundary (see section 3). This is also seen in the variance of k m defined by Estimates of k m /(2n) and the normalized variance V ar(k m )/(2n) are plotted as functions of β m for various values of n in figure 5(a) and in figure 5(b), respectively. For β m < β * m , k m /(2n), tends to zero with increasing n. In this phase the small and decreasing number of mutual contacts per monomer is consistent with the two component polygons being largely segregated in space. This is the segregated phase, as explained in section 3. For β m > β * m the curves of k m /(2n) are increasing as n increases. In this mixed phase the two polygons have large non-zero energy per monomer (that is, a high incidence of mutual contacts), consistent with the two component polygons having sections near to each other in the lattice as they share the same volume in space. Our data are consistent with k m → A n as n increases with 2 < A ≤ 3. This shows that the mixed phase is not an extended ribbon with the two polygons forming its boundary, but is a denser phase where strands in each polygon have a high number of contacts per monomer (between 2 and 3) with the other component. The segregated-mixed transition as β m is taken through its critical value is also seen in the peak forming in V ar(k m )/2n when plotted as a function of β m , see figure 5(b). With increasing n the peaks move to smaller values of β (p) m . This behaviour is consistent with theorems 1 and 2, since in the infinite n limit the variance is equal to zero in the segregated phase, has a jump discontinuity at the critical point, and then decreases with increasing β m .
The data in figure 5(b) strongly suggest that the transition at β * m is asymmetric (that is, in the limit n → ∞ the variance is zero if β < β * m but it characteristically increases as β β * m ). In these circumstances the intersections of the variances for different values of n in figure 5(b) are a good estimator of the location of the critical point as n → ∞. This gives the estimate of β * m = 0.31±0.01. The asymmetry of the transition is consistent with the results of section 3.
Metric and shape properties
The size and metric scaling of lattice links can be examined by calculating metric quantities such as the mean square radius of gyration R 2 j for j ∈ {1, 2} for components j = 1 or j = 2. As expected the behaviour of R 2 j does not depend on j and we improve the estimate of this metric observable by averaging over the two components: R 2 g = (R 2 1 + R 2 2 )/2. This quantity, scaled by the power n 2ν is reported, as a function of β m , in figure 7(a) and (b), and for lengths n ∈ {48, 100, 148, 200, 300, 400}. Since the metric scaling of the self-avoiding walk has exponent ν SAW = 0.587297(7) [15] it should be the case that the ratio R 2 g /n 2ν SAW is a constant for β m < 0 (in the segregated regime). This is seen in figure 7(a) where the data for β ≤ β * m ≈ 0.3 collapse to a constant close to 0.1, with little dependence on n. This is evidence that for these values of β m the system is in a segregated phase where the self-avoidance between the two polygons separates them in space, and each polygon has the properties of a ring polymer in a good solvent, with associated metric exponent ν.
For values of β m > β * m the model is instead in a mixed phase. Here the ratio R 2 g /n 2ν with ν = ν SAW is dependent on both n and β m , decreasing either with increasing β m or with increasing n, as seen in figure 7(a). The collapsed nature of the model is exposed by plotting R 2 g /n 2/3 against β m , showing collapse of the data for different values of n to an underlying curve for large values of β m , see figure 7(b). These observations are consistent with the model passing through a phase transition into a mixed and collapsed phase where the interpenetrating components explore states with a high (local) density of monomers. Note that in this figure the critical point β * m separating the segregated and mixed phases has value approximately 0.3, that is, consistent with the one estimated using the variance of the mutual number of contact (see figure 5(b)).
The collapse of the data for large β m > β * m is consistent with the two polygons interpenetrating each other in a phase with high mutual contacts. This indicates that the mixed phase may be characterized by compact conformations in a collapsed phase and that the lattice link transitions through a θ-point at β * m from an expanded and segregated phase into a collapsed and mixed phase. The transition between a segregated (and expanded or free) phase for negative β m , and a mixed (and collapsed) phase for large positive β m , is also suggested by other metric observables. For instance, in figure 7(c) and (d) the mean separations between the centres of mass d cm of the two polygon components are examined. The β m dependence of this measure is reported in figure 7(c) while in figure 7(d) the scaled version d cm /n ν is plotted. In both cases the data decrease with increasing β m , consistent with the model entering a compact phase where the centres of mass of the two components are close to each other. Note that in (c) d cm increases with n in the segregated phase, and this growth is shown in (d) to be at the expected rate of O(n ν ), the typical length scale of the model. The curves in (c) intersect pairwise close to a critical value β * m ≈ 0.4, slightly larger than the estimate suggested by the data of figure 7(a), but not inconsistent with the expected segregated-mixed transition.
The configurational properties of the model change as it crosses over from the segregated phase to the mixed phase. The interaction between the two polygon components, due to both the selfavoidance repulsion, and the short ranged interaction induced by weighted mutual contacts, deform the components in the two phases, and this may be seen by measuring the asphericity and prolateness of components. In the segregated phase the conformations may be similar to that shown in figure 6 (left), while the mixed phase has interpenetrated components as shown in figure 6 (middle and right). In the segregated phase the polygon components are aspherical when the components are segregated, but transitioning into the mixed phase reduces the degree of asphericity as the two components collapse by forming mutual contacts and interpenetrate into a locally dense conformation. This is seen in figure 8(a) where the average asphericity ∆ = (∆ 1 + ∆ 2 )/2 is highest in the segregated phase but decreases once the model transitions through a critical value of β m into the mixed phase. In the segregated phase the data are collapsed into a horizontal line (independent of both β m and n) with the asphericity ∆ ≈ 0.08 over the entire range of the segregated phase). In the mixed phase ∆ decreases with increasing β m and increasing n.
The degree of prolateness of the model (plotted in figure 8(b)) presents a more nuanced picture, and our data are more noisy. They suggest a relatively constant (in β m ) value in the segregated phase that increases and appears to peak in the mixed phase at a location that moves to smaller values of β m with increasing n approaching the expected value β * m . When β m ≤ 0 the main effect is entropic repulsion where the two curves are mutually repelling (for entropic reasons), leading to the two components being prolate. As β m increases towards its critical value the two components will start to intermingle but there will still be parts of each component that are not intermingled (so that the components are not completely mixed). These intermingled parts might feel a stronger entropic repulsion resulting in a more prolate shape. For large β m the mutual attraction dominates, and this will overcome the entropic repulsion and give a more spherical shape when the two components are mixed. Clearly, from the figure, this effect is small. A natural observable for the segregated/mixed transition is based on the estimate of the overlap volume fraction, V o /V , namely the volume of the box shared by the two polygons scaled by the total volume of the box containing the full system [16]. This is shown as a function of β m in figure 9. If β m < β * m the overlap is relatively small (although not zero) and its value increases very mildly with n. When β m > β * m the overlap volume fraction V o /V steadily increases approaching an asymptotic value that for n = 400 approaches the 80% of the system volume. This indicates a very strong interpenetration of the two rings when the system is well inside the mixed phase (β m = 1).
Linking probability and average linking number
A first characterization of the topological mutual entanglement that forms in the system is provided by the estimate of the probability that the two polygons are topologically linked. In general two disjoint simple closed curves C 1 and C 2 are topologically unlinked (or splittable) if there exists a homeomorphism H of R 3 onto itself, H : R 3 → R 3 , such that the images H(C 1 ) and H(C 2 ) can be separated by a plane [17]. This definition is not convenient computationally and we relied instead on the notion of linking based on the computation of the 2-variable Alexander polynomial ∆(t, s) of the link diagram. This is done by encoding crossings (overpasses and underpasses in a planar projection of the polygon pair) and calculating ∆(t, s) from the encoding. For details see reference [17].
∆(t, s) is not a perfect invariant able to distinguish all link types, but if we restrict ourselves to the identification of link types with minimal crossing number at most 7 its resolution will be sufficient for the analysis of the data. The calculation of ∆(t, s) could be prohibitively costly if the number of crossings n c after a planar projection is very large. This occurs in particular when the two polygons are strongly overlapping in the mixed phase (that is, for β m 's sufficiently large). The number of crossings was decreased by simplifying the polygons while keeping the topology unaltered using BFACF moves [18,19] at low temperature [20,21]. This reduces the system to components of close to minimal length compatible with the linked state. See figure 6 for some examples of simplified configurations. This implementation almost always reduced the number of crossings in the projections to well below 50, reducing the CPU time devoted to calculating ∆(t, s). Notice that if a component is reduced to length n ≤ 6, then the pair cannot be linked for geometric reasons, so that a calculation of ∆(t, s) is not necessary.
The calculation of ∆(t, s) proceeded by performing 100 independent projections onto randomly oriented planes of the simplified configurations, then choose amongst these the projection P with the least number of crossings. ∆(t, s) is then calculated using P for t, s ∈ {2, 3}. This gives four values which are compared to the values computed from the explicit expression of ∆(t, s) for link types up to 7 crossings (see for instance [22]). Those cases where the Alexander polynomial is not trivial but does not correspond to a link with 7 or fewer mutual crossings in its minimal projection, are classified as complex links.
In figure 10(a) we show the probability P link of topologically linked pairs of polygons (i.e. ∆(t, s) = 0) as a function of β m and for different values of n. There are clear qualitative trends seen in this graph and, in particular, P link increases with β m . For β m 0.4 P link increases rapidly with β m as the model transitions from the segregated into the mixed phase. At large values of β m (well inside the mixed phase), P link appears to settle on a value close to 0.9 at the larger values of n. We do not give data for n = 300 or n = 400 when β m 0.4 because of a possibility of false positives for the unlink. There are link types with ∆(s, t) = 0 but which are topologically linked. These link types start to appear in the standard knot table as having minimal crossing numbers 10. In our model as β m increases the link types are of increasing complexity. Computing ∆(s, t) for some of these link types, however, gives ∆(s, t) = 0, and they are classified as being the unlink. This causes overcounting of unlinks in our data at large n and high β m , as well as undercounting of non-trivial links as a consequence. The result is that P link would be systematically underestimated in figure 10(a) at large values of n and β m .
A simpler way to measure the complexity of the linked states of the polygon pairs is by computing their linking number Lk. The linking number Lk(C 1 , C 2 ) of a pair of closed curves (C 1 , C 2 ) is calculated by summing positive and negative mutual crossings in a simple projection of (C 1 , C 2 ) [23]. The linking number defines homological linking of (C 1 , C 2 ), namely, two curves are homologically linked if and only if Lk(C1, C2) = 0 [17]. In figure 10(b) we report the average absolute value of Lk(C 1 , C 2 ), as a function of β m . These graphs of |Lk| increase for all values of β m with n and with increasing β m at fixed n. The increase is large in the mixed phase when β m > β * m . It shows that both increasing n, and increasing β m , increase the complexity of the links in the mixed phase, an effect which is much less pronounced in the segregated phase where the probability of linking is low.
Link spectrum
In figure 11 we examine aspects of the link spectrum measured as a function of β m by plotting the percentage of the most popular links (with minimal crossing number up to 7 and with percentage at least 1%) as detected in our simulations. As expected, the population of unlinks is very large for β m < 0 (approaching 100%), and decreases as β m is increased and the mixed phase is approached.
For sufficiently large values of β m and sufficiently large values of n, the proportion of unlinks stabilizes at very low levels. Concomitantly with this, the simplest link type (the Hopf link, 2 2 1 ) has a non-monotonic behaviour reaching a maximum at values of β m that decrease as n increases. This non-monotonic behaviour is common to all link types with n c ≤ 6 and for the 7 2 1 and 7 2 2 links. The other 7 crossings links also show this behaviour but their populations are too small (below 1%) for this to be significant.
The fact that for large n and well inside the mixed phase the complexity of the linked pairs is rapidly increasing is also suggested by the rapid increase of the populations of topologically linked pairs (∆(t, s) = 0) having n c > 7. This is reported in figure 12 together with two examples of Figure 11: Percentage of the population of link types with nc ≤ 6 as a function of βm for different values of n. The two link types 7 2 1 and 7 2 2 are the only two links with nc = 7 whose relative populations are larger than 1%.
linked pairs of polygons with n c = 8. Again, the fact that for n ≥ 300 the curves seem to approach a constant value could be due to the failure of the two-variable Alexander polynomial in detecting more complex links at large β m .
Finally, in figure 13 we report the link spectrum as a function of β m where each panel presents data for a different value of n. In all cases the unlink dominates the segregated phase, but its incidence decreases sharply when β m ≈ 0.3 while the incidence of linked conformations increases into the mixed phase. The Hopf link (2 2 1 ) dominates the linked conformations in the mixed phase but it also peaks close to β m ≈ 0.3. More complex links also appear, albeit at smaller proportions, as β m increases, and the simplest of these similarly peak at β m ≈ 0.3. It appears from our data that the number of link types multiplies in the mixed phase with increasing values of n, and while the incidence of specific link types decreases with increasing n and β m , we know from figure 10 that the sum over all these link types increases with β m into the mixed phase. This may indicate that the increase in the number of link types compensates for the reduction in the incidence of any specific link type, so that the proportion of linked conformations dominate state space.
Metric and energy properties at fixed link type
In figure 14 the number of mutual contacts in pairs of unlinked polygons is plotted as a fraction of the total number of mutual contacts. This fraction is (expectedly) close to one if β m is negative, showing that almost all conformations are unlinked. Increasing β m towards its critical value reduces this fraction, and this is consistent with both an increase in the total number of contacts due the components starting to approach one another, and with an increase in the proportion of linked conformations where the polygons are closer together and so contain larger numbers of mutual contacts. This observation is supported by noting that the ratio of contacts between links of type 2 2 1 and all polygons in the first instance, and between 2 2 1 and unlinked states are high in the segregated phase, showing that linked states of type 2 2 1 contain, on average, a higher density of mutual contacts.
Data on the mean square radius of gyration paint an interesting picture. Deep in the segregated phase the unlink dominates state space and so the ratio of R 2 g 0 2 1 to R 2 g is approximately equal to 1. This is also the case deep in the mixed phase -unlinked states have about the same size as the average conformation (since the polygons are mixed and together collapsed into a dense conformation minimizing the R 2 g ). Near the critical point the situation is more interesting. As the proportion of linked states increases as β m approaches its critical value from below, the ratio R 2 g 0 2 1 / R 2 g increases because linked states are smaller than unlinked states. This ratio should exceed 1 (which it does). Passing through the transition causes collapse of both unlinked and linked states, and so the ratio should settle down to 1 again, as it does. This picture is reaffirmed in the figures plotting the ratios R 2 g 2 2 1 / R 2 g and R 2 showing that the link 2 2 1 is larger than the unlink and the average of all states in the segregated phase, but are about the same size in the mixed (collapsed) phase.
Discussion
To investigate the thermodynamics, metric and topological properties of a pair of polymer rings undergoing a segregated to mixed phase transition, we have considered a pair of polygons on the simple cubic lattice constrained to have a pair of vertices (one from each polygon) unit distance apart. The polygons are self-and mutually avoiding and, in addition, there is a short range potential between pairs of vertices in the two polygons. When this potential is repulsive or weakly attractive the two polygons are largely separated in space but when the potential is sufficiently attractive the polygons interpenetrate and form a more compact object in a mixed phase.
In section 3 we prove that the limiting free energy exists when the potential is repulsive, and we establish bounds when it is attractive that establish the existence of a phase transition from a segregated phase to one where there are many inter-polygon contacts. We use a Monte Carlo approach to investigate configurational properties such as the expected number of inter-polygon contacts, and the radius of gyration of a polygon as a function of the strength of the potential. The mean number of contacts increases as the potential becomes more attractive and increases rapidly in the region of the transition. The radius of gyration scales differently (with size) in the segregated and compact phases and there are changes in the asphericity and prolateness.
In section 5 we looked at the extent of linking of the two polygons as a function of the strength of the potential, both by computing the 2-variable Alexander polynomial (as a detector of topological linking) and the linking number (as a detector of homological linking). As the potential becomes more attractive the linking probability and the link complexity both increase, with a relatively sharp increase around the transition region. It is clear that, in the compact phase where there is considerable interpenetration of the polygons, the linking probability is high.
A related experimental situation is as follows. Consider a uniform 4-star polymer with two Aarms and two B-arms with the ends of the arms functionalized so that the system can be cyclized to form an A ring and a B ring, forming a figure eight. The A and B arms carry opposite charges so that they are attracted to one another and the strength of the attraction can be modified by changing the pH or the ionic strength. Prepare the system (ie the 4-star) at some fixed pH and ionic strength and, after equilibration, carry out a cyclization reaction. At high ionic strength or where charges are suppressed by varying the pH, the A and B arms will repel or weakly attract and there should be little interpenetration so that, after cyclization, there should be little linking. Conversely, with large charge densities and low ionic strength there should be considerable interpenetration and linking. | 10,648.4 | 2022-06-30T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Laser-Induced Breakdown Spectroscopy in Africa
Laser-induced breakdown spectroscopy (LIBS), known also as laser-induced plasma spectroscopy (LIPS), is a well-known spectrochemical elemental analysis technique. The field of LIBS has been rapidly matured as a consequence of growing interest in real-time analysis across a broad spectrum of applied sciences and recent development of commercial LIBS analytical systems. In this brief review, we introduce the contributions of the research groups in the African continent in the field of the fundamentals and applications of LIBS. As it will be shown, the fast development of LIBS in Africa during the last decade was mainly due to the broad environmental, industrial, archaeological, and biomedical applications of this technique.
Introduction
Laser-induced breakdown spectroscopy (LIBS) is an analytical technique with a wide variety of applications for the qualitative and quantitative elemental studies.LIBS has been developed since the invention of the laser in 1960 [1].The idea emerged to develop an analytical method based on laserinduced plasma.The first paper of laser sampling technique with spark excitation was introduced by Brech and Cross at the international conference on spectroscopy (1962) [2].The technique was called "Microemission Spectroscopy" and marketed as a source unit which could be coupled to any spectrograph.In 1981, Loree and Radziemski [3] introduced the acronym LIBS for the first time by referring to the breakdown of air by laser pulses during the plasma creation.LIBS is a developing and promising technology that has the advantages of simplicity and robustness and the possibility of detecting both low and high atomic number elements [4].The technique has a far-reaching capability to provide rapid, in situ multielement detection of any material, solid, liquid, or gas [4][5][6][7][8].The numbers of LIBS papers published by groups all over the world are increasing steeply along the last four decades [9].In fact, as shown in Figure 1, the situation on the African level is the same, although the numbers are not comparable with the international scale.
Detailed description of LIBS and its applications has been given in a number of published review papers and text books [4][5][6][7][8][9].Basically, in LIBS, laser pulses from a Qswitched laser source are focused via a suitable focusing lens onto the surface of the sample.Adopting laser pulses of few tens of millijoules and pulse duration of a few nanoseconds leads to an irradiance in the order of some megawatts.Focusing of such huge amount of laser power in a tiny volume results in the evaporation, dissociation, atomization, and ionization of some nanograms to micrograms of the sample surface material.At the end of the laser pulse, we are left with the so-called plasma plume which consists of a collection of positive ions and swirling electrons at very high temperature in the range 6000-10,000 K that depends on the laser pulse energy and the physical properties of the target material (melting point, heat of vaporization, thermal conductivity, surface reflectivity, etc.).As the plasma cools down, recombination and deexcitation of ions and atoms take place in the form of emission of light which is collected and fed to a suitable spectrometer-detector system to obtain the LIBS spectrum.Qualitatively, the spectral lines are the finger print of the atomic species in the plasma and consequently in the target material.Quantitatively, there is a direct proportionality between the intensity of the spectral lines and the concentration of the relevant elements in the target material.Laser phase velocity changes are due to a decrease in the real part of the refractive index, which in turn is caused by radial electron density gradients.The laser-induced air plasma has a concave parabolic-shaped electron density and a convex parabolic-shaped refractive index near the laser axis, that is, around a diameter of 0.5 mm [14].
Factors Affecting LIBS
Factors affecting the nature and the characteristics of the laser produced plasma were studied.These factors could be grouped in (i) laser parameters including (energy, wavelength, and pulse duration), (ii) optical parameters or focusing properties, (iii) ambient conditions of surrounding atmosphere in composition, pressure, electric field, and temperature, and (iv) physical properties of the material under investigations including reflectivity of the surface, density, specific heat, and boiling point of the target.Abdellatif and Imam [15] studied laser wavelength effect on the produced aluminum plasma.Measurements were done using Q-switched Nd:YAG laser at wavelengths 1064, 532, and 355 nm.The plasma electron temperature was calculated using the Boltzmann plot for the Al II lines.The spatial profile of the electron density using Stark broadening formula was estimated.Assuming LTE (local thermal equilibrium) conditions, they found that the maximum attainable value of the spatial electron temperature depends on the laser wavelength and the electron density reaches its highest value near the target surface [15].Later on, Galmed and Harith [16] studies showed that the emission line intensities increase by increasing the laser pulse energy until they level off due to self-absorption.The line intensities decrease exponentially as the delay time increases due to plasma expansion [16].Calculated plasma parameters show that electron density ( ) at different laser energies and for different samples has the same values, while plasma excitation temperatures ( exc ) increase on increasing the laser pulse energy to stabilize at higher laser energy due to spectral line self-absorption in the plasma plume.They deduce that the LTE conditions are not fulfilled for lower laser fluences but only at higher laser fluences.This may be as a result of the effect of the initial plasma conditions that depend on the incident laser pulse energy [16].Another study for the same group showed that, for high energies (150-750 mJ), electron density increases with increasing the laser energy while they decrease with increasing the delay time [17].From the previous results, it can be concluded that plasma parameters depend strongly on the laser wavelength, laser energy, and delay time [15][16][17].The effect of laser irradiance in depth profile measurements was studied by Abdelhamid et al. [18].They studied the effect of irradiance on intersection point (the number of laser pulses required to reach the interface between two layers), average ablation rate, the crater depth, and depth resolution.Lowering the irradiance, the average ablation rate decreases and the depth of the crater becomes less.They found that, for all the layered specimens of Au and Ag grown onto Cu substrates, as the working distance (difference between the lens-to-sample distance and the focal length of the lens) increases, the intersection point value between the two layered materials increases, while both the average ablation rate and crater depth are decreasing [18].Elhassan et al. [19] demonstrated the effect of applied electric field on plasma parameters and LIBS signal emitted.They used pure aluminum impeded in one of two copper electrodes.They show that electric field had a pronounced effect on the emission intensities of the ionic lines under forward biasing (negative target) where the emission of the ionic lines grew exponentially.In case of reversed biasing, the line intensity deteriorated with respect to the zero field value.Plasma temperature was slightly affected by increasing the electric field in both directions.On the other hand, the electron number density was found to decrease slightly in the case of forward biasing, with a much stronger decrease (about one order of magnitude) in the case of reversed biasing.They found that signal-to-noise (S/N) ratio and the limit of detection (LOD) were improved in case of forward biasing electric field [19].Electric field was found to have no effect on laser-induced shock wave (SW) velocity, which depends mainly on the laser parameters, such as pulse energy and spot size.Different research groups from Egypt and Algeria investigated the dynamics of plasma expansion in vacuum [20][21][22].Imam et al. investigated spatially the dynamics of plasma expansion velocity, as well as its composition in vacuum [20].Moreover, modeling and theoretical analysis of the experimental data allowed the study of nonequilibrium processes in the laser-induced plasmas.Investigation of the plasma expansion in a vacuum revealed a departure from equilibrium which has been explained in terms of the three-body recombination effect.The corresponding rate constant of such effect was measured and the obtained results were in good agreement with the corresponding theoretical estimates.Finally, deviations from the Saha balance were found.An explanation of the phenomenon was given in terms of radiative effects and three-body recombination [20].The excitation temperature of the core of the plasma plume is measured in vacuum using Boltzmann plots.It is found that, in the core of the plume, the excitation temperature exc is 9900 K in vacuum (at the distance 0.6 mm from the target).The density of electrons is determined from the Stark broadened line width.It is found that the electrons in the core of the plume have a density of 1.8∼10 16 cm −3 in vacuum at a distance 0.6 mm from the target [21].The temperature increases with distance from the target until a distance of 0.6 mm; after that, it decreases.This is due to the enhancement of the cooling rate in the outer part of the plasma.The decrease of the electron density after 0.6 mm may be due to the shielding of the target by the plasma, which prevents further interaction of the laser radiation with the target.Moreover, it might be due to the enhancement of the recombination processes [21].Plasma diagnostics in vacuum for different oxygen pressures were accomplished by both fast imaging and optical emission spectroscopy [22].The former approach showed a splitting of the plasma under different oxygen pressures that were in the range from 0.02 to 1 mbar and started for time delays varying from 550 to 190 ns, respectively.The plasma appeared to have, at the early stage, a monodimensional expansion, followed by a tridimensional expansion into vacuum and under oxygen atmosphere.The drag model was found to describe well the spatial-temporal behavior of the plume for 0.02, 0.1, 0.5, 1, and 5 mbar of oxygen pressure.The estimation of the stopping distance of the plasma plume by the drag model was necessary when choosing the substrate target distance.The spectroscopic analysis of the emission spectrum of the alumina plasma, recorded between 200 and 600 nm into vacuum and under different O 2 pressures, suggested that the plume of alumina was composed of emitting species as Al I, Al II, Al III, and AlO where no oxygen emission line was observed.The band head of AlO emission at 484.21 nm appears only in oxygen ambiance.As oxygen pressure increased, the AlO molecular band emission became distinct.In vacuum, the conversion of plasma plume from the thermal energy into the kinetic one was shown as a consequence of the decrease in the electron temperature along the distance from 1.69 to 0.52 eV [22].Laser emitted plasma at nitrogen ambient gas was studied [23,24].For different pressures, results show that, at lower ambient pressure values of N 2 , the intensities of Ti spectral lines last only for a few hundreds of nanoseconds even not more than 500 ns at 15 Torr.As the ambient pressure is increased, the intensities of the lines can last for up to several microseconds and even reach about 25 s at 760 Torr [23].However, it is observed that the intensities of the spectral lines increase with increasing ambient N 2 pressures, especially at the higher values of the ambient N 2 pressure where the intensities of the lines increase rapidly.Continuum radiation results from collisions of electrons with heavy particles, neutrals, and ions and is also due to recombination of the electrons with ions.Thus, it can be inferred that, at the initial stage of the plasma near the ablated surface, there are a large number of electrons, ions, and neutrals in the excited states [23].Laser ablated carbon plasma under nitrogen ambience at different laser fluence (12,25, and 32 J/cm 2 ) shows that CN and C 2 emission intensity did not depend on the laser fluence, while CII and NII emission intensity increases continuously with the rise of the fluence.The spatiotemporal evolution of CN follows the C 2 one at the vicinity of the target surface, whereas, for greater distances, it follows the CII one.These investigations also demonstrated that there are different chemical reactions leading to the CN formation, by stating that, at the neighborhood of the target surface, CN molecules come directly from this surface or from the bimolecular reaction between C 2 and N 2 in the gas phase.However, at greater distances, CN molecules are mainly produced by a three-body reaction between the atomic species C and N [24].
LIBS in Aqueous Medium
LIBS in aqueous medium did not develop rapidly like in solid and gas.The reason was primarily attributable to the technical difficulties encountered in performing LIBS experiments in liquids and the short lifetime of in-bulk generated laser-induced plasma making the interpretation of the obtained spectra not significant and consequently preventing the extraction of plasma parameters.Liquids, in addition, must be transparent at the laser wavelength and the emitting wavelengths of the monitored species; another experimental difficulty arises when laser-induced plasma is produced on the surface of the liquid.The splashing of the liquid and shock waves which produced ripples on its surface represent obstacles in this case.The first normally leads to the opacity of the light collection optics in the vicinity, while the second defocuses the laser beam on the liquid surface.In the year 2002, a detailed experimental study of laser-induced breakdown spectroscopy in water was performed by Charfi and Harith [25] where the aqueous plasma has been studied temporally and spatially.Aqueous solutions of different Na and Mg concentrations were used to construct calibration curves and estimate the limit of detection (LOD) in pure solution and mixed solutions of different matrices.The lowest detection limits were 1 and 2 g mL −1 , respectively, in pure solutions while they were slightly higher (1.2 and 2.5 g mL −1 ) in mixed solution.The differences in the LIBS limit of detection of the same element in different matrices could be correlated with the compatibility of the physical properties of the elements existing in the same matrix.Approximately, similar electronic structures may facilitate better conditions for energy transfer within the matrix consequently raising the technique sensitivity.The target physical properties play an important role in the obtained values of the laser-induced plasma temperature and electron density .These, in turn, affect the spectral characteristics of each element in the same matrix [26].Another study shows that the detection limits are a function of the element studied [27].Ben Ahmed et al. [28] studied the kinetics of plasma produced in aqueous solution and they proposed a model based on electron-ion recombination that was compared with the experimental results obtained from plasma on the surface of water solutions of MgCl 2 .They proposed that the recombination of the electrons created at the beginning of the interaction with the laser pulse, with ions ejected from the solution, could be the origin of the observed excited species.Further experimental results reported on the temporal characteristics of laser generated plasma in Na and Cu aqueous solutions that exhibit fluorescence signal on the decaying edge of plasma emission at their respective characteristic resonance lines.The potential of the laser plasma spectroscopy for in situ pollution monitoring in natural and waste water was discussed [29].Spatial and temporal evolution of the plasma produced on the distilled water surface was discussed.The temporal evolution from 200 ns after the plasma creation to 2200 ns of the H and H lines is reported.Supposing LTE, electron density and temperature were determined, including the influence of the self-absorption on its measurements [30].
LIBS Applications
5.1.Surface Hardness Measurement.Laser-induced plasma spectroscopy can be exploited not only as elemental analysis techniques, but also as the estimation of the surface hardness of solid targets; it has been found that there is a remarkable correlation between the ionic to atomic spectral lines emission ratio and the surface hardness of solid targets.This phenomenon is related to the repulsive force of the laserinduced shock waves.LIBS used to measure the hardness for different objects from metal alloys to calcified biological samples.The measured shock wave front speeds in case of the three investigated calcified tissues confirm that the harder the target the higher the SW fronts speed and the higher ionic to the atomic line ratio of Mg [31].Kasem et al. [32] found it possible to discriminate between bones from different dynasties from the results of the surface hardness measurement by evaluating calcium ionic to atomic spectral line intensity ratios in the relevant LIBS spectra.LIBS used to estimate the age of broiler breeders by measuring the surface hardness of their eggshell on two different strains, Arbor Acres Plus (AAP) and Hubbard Classic (HC) [33].In case of steel, alloys are treated thermally to have different surface hardness.ZrII/ZrI line ratios used to investigate hardness [34].Aberkane et al. [35] from Algeria showed the correlation between plasma temperature and surface hardness for Fe-V 18% -C 1% alloys.Samples have the same ferrite composition but different surface hardness measured by Vicker method.The differences in hardness values were attributed to the crystallite size changes due to different heat treatment.The results showed a linear relationship between the Vickers surface hardness and the plasma temperature.The relation between ionic and atomic line ratio for vanadium (VII/VI) provided good linear results too [35].
Depth Profiling. LIBS is a relatively novel technique
that is being applied to the characterization of interfaces in layered materials.LIBS technique with relatively high laser pulse energy (50 mJ/pulse) is reliable to investigate layered specimens of different metallic elements via depth profiling procedure at fixed experimental conditions [18].Kiros et al. [36] studied rock of hewn churches from Lalibela, Ethiopia.The elemental composition of both the bulk rock materials and their external layers, exposed to the environmental factors, was analyzed.Depth profile shows a lower content of potassium on the surface together with increasing oxygen intensity in depth which was observed.Variations in depth of these two elements, which are clearly anticorrelated, may reflect changes in abundance of clay minerals and feldspar due to alteration of the basalt.They established relationship between loss of cations and the high presence of hydrogen in the samples collected from external wall of the churches and in-depth profile of weathered basalt.Since cations are lost from the constituent, primary minerals are replaced by H + ; this process disrupts the lattice structure and causes a marked loss of strength.Khedr et al. [37] from Egypt studied ancient Egyptian glazed ceramic samples.Depth profiling allowed differentiation between the dirty layer, the glaze surface, and the ceramic body.Galmed et al. [38] studied Ti thin film using femtosecond LIBS.Titanium of thickness 213 nm was deposited onto a silicon substrate before and after thermal annealing.Femtosecond laser was unable to differentiate between the annealed and nonannealed samples because of the lack of energy homogeneity throughout the laser pulse cross-section.Studies by the same group also showed that spectral line choice was not significant as long as the spectral lines are fulfilling the LIBS spectral lines conditions.The normalization of the lines was able to improve the LIBS results to be more reproducible [39,40].
Cultural Heritage and Archaeology
LIBS has been applied for the analysis of Egyptian Islamic glaze ceramic sample from Fatimid period Fatimid period.The analysis of contaminated pottery sample has been performed to draw mapping for the elemental compositions.Results show that one of the most important constituents in the glaze was copper, which suggests that the green glaze pigment was made from copper compounds.However, the presence of tin in the samples revealed the assumption of using bronze (copper-tin alloy) in the green pigment preparation [37].LIBS used to evaluate cleaning of corroded Egyptian copper embroidery threads on archaeological textiles using laser cleaning method and two modified recipes of wet cleaning methods.This was done by following up the copper signal before and after cleaning.It was found that laser cleaning is the most effective cleaning method without any damage to both metal strips and fibrous core [41].Ahmed and Nassef [42] studied mummy's linen wrapping textile dated back to the Ptolemaic period (305 BC-30 AD).LIBS qualitative results were comparable to those of SEM-EDX results.Roberts et al. from South Africa studied the 2 million-year-old fossils and rocks in surrounding recovered from the Cradle of Mankind site at Malapa.They found that the phosphorus content is significant enough to discriminate fossil bones with relative ease from the surrounding rock which had no significant phosphorus content.The rock lines in the same spectral region were shown to be mainly from silicon, iron, and manganese.They quantify the damage to the fossils during laser removal of rock; the depth of fossil removal was measured as a function of laser fluence.The threshold fluence for maximum rock removal of depth = 40 m was 600 Jcm −2 [43].Kasem et al. [32] used LIBS for interpretation of archeological bone samples from different ancient Egyptian dynasties.They found that buried bones are susceptible to minerals diffusion from the surrounding soil that has undergone careful analysis (Figure 2).Diagenesis or postmortem effects must be taken into consideration on studying dietary habits and/or toxicity levels via analysis of ancient bones.
Environmental and Chemical Studies
Environmental studies for tropical forest in Ethiopia were done by Dilbetigle Assefa et al. [44].They employed calibration-free LIBS to determine the concentration of elements in the rock samples.Area under study showed high concentrations of iron, neodymium, Zn, and Pt which means greater potential for mining of these elements.The concentration of Cr, Mn, and Fe in sediment samples collected from Tinishu Akaki River (TAR), Addis Ababa, Ethiopia, was determined using LIBS.Areas with less number of industries (such as Biheretsigie and Gefersa) had the lowest concentrations while those with large number of industries (such as AA TAR Kolfe and AA Melkaqurani) had the highest concentrations of the selected metals which indicated an increase in anthropogenic effects around the investigated areas.Results showed that LIBS can be applied as an alternative technique to other existing methods, like flame-atomic absorption spectroscopy (F-AAS) and does not require a sample decomposition step which is time consuming and expensive and may result in contamination of samples and the environment itself [45].Mukhono et al. [46] used Multivariate chemometrics in spectroanalysis and characterization of environmental matrices.Multivariate calibration strategies were applied for prediction of the trace elements in the geothermal field samples.It was found that geothermal areas were characterized by elevated content of arsenic while at the same time its concentrations were normally distributed in the field samples.Exploratory data analysis using principal components analysis (PCA) and soft independent modeling of class analogy (SIMCA) were successfully applied to classify and distinguish the origin of the geothermal field matrices (HBRA or NBRA) based on LIBS atomic signatures in a manner applicable to geothermal resource characterization and environmental impact modeling.Mukhono and coworkers [46] concluded that LIBS spectra provide vital information, for example, spectral signatures of Ca, Mg, Fe and Si which can be used in routine monitoring analysis for variations in soils from three sources: (i) high background radiation area-(HBRA-) geothermal, (ii) HBRA-nongeothermal, or (iii) normal background radiation area-(NBRA-) geothermal field.Femtosecond LIBS has been used by Roberts et al. [47] in the detection of metallic silver on chemical vapor deposited (CVD) grown silicon carbide (SiC) and in pebble bed modular reactor (PBMR).Samples used were tristructural isotropic (TRISO) coated with 500 m diameter zirconium oxide surrogate kernel.The SiC layer of the TRISO coated particle is the main barrier to metallic and gaseous fission products.They concluded that the LIBS technique is a good alternative for a remote analytical technique and that femto-LIBS can achieve good surface spatial resolution and good depth resolution in experimental coated particles [47].LIBS was used by Elnasharty et al. in Egypt [48] for the estimation of consumption and/or combustion of motor oil during routine engine operation.This has been performed by following up the intensities of molecular emission lines cyanide (CN) Figure 2: LIBS spectra of ancient bones from different historical eras and recent bones compared to spectra of soil samples [32].and carbon (C 2 ) relevant to the main compounds of oil in its LIBS spectra, while the oil undergoes a range of chemical and physical transformations during consumption.The results showed that the trend of integral intensity values of CN and C 2 emission lines versus the mileage at all selected wavelengths is similar and can be described by an exponentially decaying curve (Figure 3).The rates of dissociation for CN and C 2 contents in oil samples were calculated to be taken as indicator for consequent depletion of engine oil.Additionally, the ratios for the integrated emission intensity of CN to C 2 have been calculated and found to be proportional to the corresponding mileage.Furthermore, they concluded that the obtained trend can be used as prognostic approach for normal degradation of engine oil [48].
Another Egyptian group [49] studied the feasibility of LIBS technique in a turbulent combustion environment and signal enhancement by applying an orthogonal dual-pulse arrangement for air-fuel mixture.The data showed that the signal is slightly higher in the double pulse mode as compared to the same application in a solid material [49].LIBS has been used to identify the constituents of Sudanese crude oil from Adaril oilfield.Almuslet and Mohamed [50] showed organic compounds specific spectral features including sequences of the CN violet system and the C 2 Swan system and H, C, N, and O atomic and ionic lines.The principle for identification of organic compounds was based on their spectral features and on the integrated intensity ratios of the molecular bands (CN and C 2 ) and atomic lines (H and C) [50].Calibrationfree LIBS at 2nd harmonic laser excitation (532 nm) has been used for semiquantitative analysis of different species of eff seeds (Red, White, and Sirgegna) of Ethiopia [51].The differences in relative concentrations were demonstrated.Red species shows the highest Ca content but the lowest in Mg, while, for the other two species, it was the contrary.Spectrochemical analysis of organic liquid media such as vegetable oils and sweetened water characterized by two types of molecules, saccharose (cyclic) and linear chain fatty acids, were performed with the use of LIBS by a Tunisian research group [52].The absence of C 2 emission in plasma of sweetened water was observed.This present work suggested that the C 2 emission depends on the form of the molecule constituting the pulverized sample to create plasma.It seems that there is emission of C 2 if the molecule contains at least one carbon-carbon linear band.It was also shown that oil containing more saturated fatty acids emits more C 2 compared to CI but shows no correlation with the number of double bonds.A statistical analysis was performed based on the ANOVA test on the single parameter C 2 /CI which was used for classification of vegetable oils according to their saturated fatty acid content [52].El Sherbini et al. [53] observed LIBS signal enhancement from the nanostructured ZnO compared to bulk material signal.They suggested that the surface plasmon resonance dependence on the electron density is the major effect acting to enhance the radiation field.In order to get the highest signal enhancement from the nanostructured samples, the lowest possible fluence at the largest delay time with shortest laser wavelength was the possible choice [53].
Biomedical and Biological Applications
LIBS is probably the most versatile method of elemental analysis currently in use for many biomedical applications.
Studies about the possible correlation between some elements and disease are often among the medicine experts and biologists interesting.El-Hussein et al. [54] from Egypt have used LIBS to identify and characterize human malignancies.
The study depended on in vitro relative abundance of calcium and magnesium in malignant tissues with respect to the nonneoplastic tissues.Measurements have been performed under vacuum (10 −2 Torr) and the samples were frozen down to −196 ∘ C (liquid nitrogen temperature) to improve signal from the soft biological samples.They found significant discriminating results in case of breast and colorectal cancers.Another Egyptian group [55] had also used the Ca and Mg levels to monitor tumor photodynamic therapy (PDT) in malignant tissues.Tissues were injected with methylene blue photosensitizer with concentrations 0.5%, 1%, and 2%.The results showed a decrease in tissue elements content after PDT application for both calcium and magnesium compared to before PDT [55].Hamzaoui et al. [56] from Tunisia had used LIBS for the first time as a potential method for analysis of pathological nails.They found a distinct difference in LIBS spectra of normal and pathological nails in the spectral intensity distribution of calcium, sodium, and potassium of normal and pathological nails.The CN band emission spectrum was used for the estimation of the transient temperature of the plasma plume and consequently of the sample surface.
The elemental content of the superficial and inner enamel as well as that of dentin was analyzed using LIBS and X-ray photoelectron spectroscopy (XPS) of bleached and unbleached tooth specimens [57].LIBS revealed a slight reduction in the calcium levels in the bleached specimens compared to the control ones in all the different bleaching groups and in both enamel and dentin.Good correlation has been found between the LIBS and XPS results which demonstrated the possibility of using LIBS technique for detection of minor loss in calcium and phosphorus in enamel and dentin [57].LIBS multielemental analysis of horse hair was found to be potential in revealing retrospective information about nutritional status using hair as a biomarker [58].Longitudinal segments of the hair may reflect the body burden during growth.In the field of poultry science, Abdel-Salam et al. [59] investigated elemental composition of egg shell before and after hatching; depth profile measurements were carried out to follow up different elements throughout the shell.They found that calcium distribution is not homogenous throughout the shell thickness while Mg and Na concentrations in the internal layers of the egg shell before hatching were higher than those after hatching.The results were interpreted due to consumption of inner layer contents by the embryo during its development.Increase in magnesium content is directly related to an increase in shell hardness.In the field of animal production, characterization of semen samples from buffalo bulls (Bubalus bubalis) was studied by Abdel-Salam and Harith [60].LIBS provided information about the elemental seasonal variation in the seminal plasma.The obtained results demonstrated that buffalo seminal plasma contents of Ca, Mg, Zn, and Fe are higher in winter (high season) than in summer (low season).Such elements have direct relation to the sperm parameters, that is, sperm count and motility, and, consequently, LIBS can be used to assess parameters indirectly [60].Interesting results were found from the evaluation of the nutrients in maternal milk and commercially available infant formulas using LIBS technique by the same Egyptian research group [61].They found high elemental and protein content of the maternal milk compared with the commercial formulas samples (Figure 4).
Conclusion
LIBS is a rapidly developing spectrochemical analytical technique.It is an attractive and promising technology for a large number of applications.LIBS has the advantages of simplicity and robustness and the possibility of detecting both low and high atomic number elements in different types of materials.Besides, portable LIBS systems can be used to perform real-time and in situ measurements.In this paper, LIBS fundamental and applications in African countries were reviewed.Through the review, the growing interest in LIBS during the last decade was shown.LIBS has been applied in Africa extensively in the environmental field, archaeology, and cultural heritage studies and in the biomedical and biological field studies.Some of the African research groups are now worldwide well known by their pioneering research works in these fields.The number of papers published is remarkably increasing each year.The 1st Euro-Mediterranean Symposium on LIBS (EMSLIBS 2001) and the 7th international conference on LIBS (LIBS 2012) have been hosted in Africa, namely, in Egypt in the years 2001 and 2012, respectively.
Figure 1 :
Figure 1: Number of LIBS publications in Africa since the year 2000.
Figure 3 :
Figure 3: Trends of summation of integrated line intensities for (a) CN and C 2 peaks and (b) CN/C 2 ratio at different mileage [48].
Figure 4 :
Figure 4: Trends of integrated intensity values for different violet CN emission bands for maternal milk and six types of commercial infant formulas [61]. | 7,354.2 | 2015-03-04T00:00:00.000 | [
"Physics"
] |
Intracerebral Hemorrhage Prognosis Classification via Joint-Attention Cross-Modal Network
Intracerebral hemorrhage (ICH) is a critical condition characterized by a high prevalence, substantial mortality rates, and unpredictable clinical outcomes, which results in a serious threat to human health. Improving the timeliness and accuracy of prognosis assessment is crucial to minimizing mortality and long-term disability associated with ICH. Due to the complexity of ICH, the diagnosis of ICH in clinical practice heavily relies on the professional expertise and clinical experience of physicians. Traditional prognostic methods largely depend on the specialized knowledge and subjective judgment of healthcare professionals. Meanwhile, existing artificial intelligence (AI) methodologies, which predominantly utilize features derived from computed tomography (CT) scans, fall short of capturing the multifaceted nature of ICH. Although existing methods are capable of integrating clinical information and CT images for prognosis, the effectiveness of this fusion process still requires improvement. To surmount these limitations, the present study introduces a novel AI framework, termed the ICH Network (ICH-Net), which employs a joint-attention cross-modal network to synergize clinical textual data with CT imaging features. The architecture of ICH-Net consists of three integral components: the Feature Extraction Module, which processes and abstracts salient characteristics from the clinical and imaging data, the Feature Fusion Module, which amalgamates the diverse data streams, and the Classification Module, which interprets the fused features to deliver prognostic predictions. Our evaluation, conducted through a rigorous five-fold cross-validation process, demonstrates that ICH-Net achieves a commendable accuracy of up to 87.77%, outperforming other state-of-the-art methods detailed within our research. This evidence underscores the potential of ICH-Net as a formidable tool in prognosticating ICH, promising a significant advancement in clinical decision-making and patient care.
Introduction
Intracerebral hemorrhage (ICH) constitutes a severe threat to human health, accounting for 20% to 30% of all stroke cases.As a critical cerebrovascular condition, ICH is characterized by its complex etiologies and heterogeneous clinical presentations.Within the first 30 days post-onset, the mortality rates for ICH patients remain alarmingly high, ranging from 35% to 52% [1].Additionally, a prospective observational cohort study demonstrated a cumulative recurrence rate of 6.1% within the first year, increasing to 7.9% by the fifth year following a lobar hemorrhage [2].Furthermore, survivors of ICH often face the prospect of enduring long-term disabilities, epilepsy, blood clotting, vision, or vascular issues [3].Considering the notable incidence, disability, and mortality rates associated with ICH, the urgency of timely and precise diagnostic processes cannot be overstated [4].
Historically, the diagnosis of ICH has relied upon the professional understanding and empirical knowledge of physicians, who interpret computed tomography (CT) scans by examining parameters such as the location, volume, and distinctive texture characteristics of the hemorrhagic site, in conjunction with the Glasgow Coma Scale (GCS) score [5].This conventional method is inherently subjective, heavily dependent on the clinician's expertise, and can be resource-intensive.
To mitigate these issues, earlier research adopted machine learning techniques with promising outcomes.However, the potential for further enhancement remains.The burgeoning field of artificial intelligence (AI) has heralded significant advancements in medical imaging technology, thereby enhancing the comprehensiveness of imaging data available for clinical use.This innovation plays an increasingly pivotal role in facilitating disease screening, informing treatment planning, and assessing prognostic outcomes.Biomedical images are particularly informative, as they encapsulate crucial information reflecting underlying pathophysiological changes.CT, especially the widely utilized and straightforward non-contrast-enhanced CT, is instrumental in the diagnosis and management of ICH and its potential complication, hematoma expansion.Diagnostic signs detectable on non-contrast CT, such as the black hole sign [6], mixed-density sign, low-density areas, and the island sign, hold clinical significance for predicting hematoma growth.Nevertheless, the interpretation of these imaging features hinges on the expertise of well-trained clinicians and is subject to the limitations of an individual reader's experience and subjective judgment, which often results in low sensitivity.AI models offer a solution to surmount these challenges.By mitigating the impact of subjective biases during the analysis, AI can provide more precise and reproducible assessments.In addition, the integration of AI into medical imaging analysis has the potential to augment the objective evaluation of signs, such as those indicative of hematoma expansion in ICH, to improve the quality of patient care and outcomes.
To refine diagnostic accuracy, contemporary AI methods have embraced more sophisticated algorithms.In the medical domain, AI has demonstrated notable successes in diagnosing conditions, such as breast cancer [7], prostate cancer [8], intracranial hematoma [9], and pleural effusion [10].In the specific context of ICH, cross-modal methods that leverage comprehensive datasets have become increasingly relevant.Recent advancements in deep learning models have been adopted to effectively boost ICH diagnosis.For instance, Wang et al. [11] proposed a data fusion framework based on convolutional neural networks (CNNs) for early prediction of hematoma expansion.Likewise, del Barrio et al. [12] presented a deep learning model based on CNN for prognosis prediction after ICH.Also, the current methods include variational autoencoder (VAE) [13][14][15][16] and generative adversarial networks (GANs) [17].In the current landscape, image-based methodologies [18] and multi-task strategies [19] have been employed in this domain, producing commendable outcomes.Specifically, the Res-Inc-LGBM model [20], a cross-modal technique that extracts information from two distinct modalities within CT imagery, has demonstrated promising results.Nevertheless, this model did not utilize clinical data to further enhance its efficacy.In addition, the UniMiSS framework [21] represents an innovative approach by incorporating an extensive array of 2D medical images into a 3D self-supervised learning paradigm, thereby addressing the limitation imposed by the paucity of 3D datasets, such as those obtained from CT scans.Additionally, GCS-ICH-Net [22] has improved performance by employing a self-attention mechanism to integrate imaging data with domain knowledge.However, existing methodologies have yet to implement effective fusion mechanisms.To rectify these deficiencies, this article puts forth a novel approach with the following advantages: (1) We introduce a cross-modal loss function that accounts for the intrinsic correlation between the disparate data modalities.This innovative strategy promises to enhance the precision of ICH diagnosis, thereby facilitating more effective patient management and improving clinical outcomes.In our study, we incorporated spontaneous ICH into our dataset, explicitly excluding cases with causes such as arteriovenous malformations, cerebral aneurysms, traumatic brain injury, brain tumors, and cerebral infarctions.
Problem Formalization
Within the specified dataset comprising patients diagnosed with ICH, each patient's record encompasses clinical data alongside one or more CT slices.The objective of this study is to develop a predictive model that, when trained on the designated training set, can process the provided inputs and yield outputs that closely align with the target la-bels.Upon evaluation using the test set, the model demonstrated commendable performance.
Patient Population
A retrospective study was conducted on a cohort of 294 patients who were admitted to our hospital with spontaneous ICH from August 2013 to May 2021 and completed the prescribed treatment regimen.The study received approval from the hospital's ethics committee and informed verbal consent was obtained from all participants.The inclusion criteria for the data were as follows: (1) diagnosis of spontaneous ICH confirmed, (2) completion of plain CT scans within 24 h after the cessation of bleeding, (3) availability of complete GCS scores at admission, (4) prognostic data based on the Glasgow Outcome Scale (GOS) at discharge, and (5) comprehensive clinical information, including age, gender, and location of hemorrhage, among other variables.Patients presenting with secondary ICH resulting from arteriovenous malformation, cerebral aneurysm, traumatic brain injury, brain tumor, or cerebral infarction were excluded from the study.Regarding imaging equipment, the study utilized a Philips Brilliance 16-slice CT scanner and a Toshiba Aquilion ONE 320-slice CT scanner.The scans were conducted with a slice thickness of 6 mm and a matrix size of 512 × 512, which has voxel sizes in millimeters (0.488, 0.488, and 6).
Data Acquisition
Our proprietary dataset comprised 294 clinical cases obtained from our partner hospital, representing a balanced collection, with 149 cases classified as having positive outcomes and 145 cases with negative outcomes.The prognostic labels for these datasets were determined by three neurosurgeons, one with a senior professional title, one with an intermediate professional title, and one with a junior professional title.Prognosis was predicted using a double-blind method based on two approaches: utilizing image features alone and combining image features with GCS scoring information to assess the prognosis of enrolled ICH cases.Patient demographics and clinical characteristics were extracted from the electronic medical record system, encompassing variables such as gender, age, CT scan acquisition time, length of hospital stay, GCS score, treatment methodology, and the location and volume of the hemorrhage.If the patient has a good prognosis, they are unlikely to experience any other concurrent symptoms following treatment.Conversely, a patient with a poor prognosis may develop sequelae in the later stages of treatment, such as hemiplegia, language disorders, and decreased muscle strength.Prognosis was stratified based on the GOS, with a GOS score of ≥4 indicating a good prognosis, and a GOS score of ≤3 indicating a poor prognosis.Additionally, a GOS of 1-5 corresponds to outcomes ranging from death, vegetative state, severe disability, mild disability, to return to normal life.
In our dataset, the good prognosis group included 149 patients, 109 males and 40 females, with an age range of 29-88 years and a mean age of 53.85 years.Their hospitalization period varied from 3 to 104 days, with an average stay of 19.40 days.Treatment approaches varied, with 119 patients receiving conservative management in the internal medicine department and 25 undergoing surgical interventions.Hemorrhage locations were distributed as follows: 48 cases in the basal ganglia, 29 in the thalamus, 5 in the external capsule, 19 in the cerebral lobes, 16 in the brainstem, 14 in the cerebellum, 6 in the ventricles, 5 in multiple regions, and 32 with secondary ventricular involvement.Conversely, in the poor prognosis group, there were 145 individuals, comprising 108 males and 37 females, with an age range of 29 to 90 years and a mean age of 54.16 years.The duration of hospital stays ranged from 1 to 388 days, with an average of 39.63 days.In terms of treatment, 67 patients received conservative care within internal medicine, while 75 underwent surgical procedures.The distribution of hemorrhage locations included 76 cases in the basal ganglia, 18 in the thalamus, 6 in the external capsule, 12 in the cerebral lobes, 13 in the brainstem, 2 in the cerebellum, 1 in the ventricles, 10 spanning multiple regions, and 49 associated with secondary ventricular bleeding.Although no universally accepted GCS threshold exists for predicting the prognosis in ICH patients, it is generally held by clinicians that a GCS score of 9 or above is predictive of a more favorable outcome.
ICH-Net Architecture
Enhancing the accuracy of prognosis classification for ICH can significantly reduce mortality and disability risks.However, current methods fail to efficiently integrate clinical and imaging data for prognosis classification, thereby limiting their effectiveness in aiding accurate diagnosis by medical professionals.To address the challenge of integrating clinical and imaging data and to enhance prognosis classification accuracy, we proposed ICH-Net, a novel network framework.
The architecture of our ICH-Net is depicted in Figure 1 and is composed of three sequential components: the Feature Extraction Module, the Feature Fusion Module, and the Classification Module.Notably, the Feature Fusion Module employs a cross-modal attention mechanism to fully account for the intrinsic relationships between the different modalities, thereby enabling targeted and pertinent feature integration within the network.
fied based on the GOS, with a GOS score of ≥4 indicating a good prognosis, and a GOS score of ≤3 indicating a poor prognosis.Additionally, a GOS of 1-5 corresponds to outcomes ranging from death, vegetative state, severe disability, mild disability, to return to normal life.
In our dataset, the good prognosis group included 149 patients, 109 males and 40 females, with an age range of 29-88 years and a mean age of 53.85 years.Their hospitalization period varied from 3 to 104 days, with an average stay of 19.40 days.Treatment approaches varied, with 119 patients receiving conservative management in the internal medicine department and 25 undergoing surgical interventions.Hemorrhage locations were distributed as follows: 48 cases in the basal ganglia, 29 in the thalamus, 5 in the external capsule, 19 in the cerebral lobes, 16 in the brainstem, 14 in the cerebellum, 6 in the ventricles, 5 in multiple regions, and 32 with secondary ventricular involvement.Conversely, in the poor prognosis group, there were 145 individuals, comprising 108 males and 37 females, with an age range of 29 to 90 years and a mean age of 54.16 years.The duration of hospital stays ranged from 1 to 388 days, with an average of 39.63 days.In terms of treatment, 67 patients received conservative care within internal medicine, while 75 underwent surgical procedures.The distribution of hemorrhage locations included 76 cases in the basal ganglia, 18 in the thalamus, 6 in the external capsule, 12 in the cerebral lobes, 13 in the brainstem, 2 in the cerebellum, 1 in the ventricles, 10 spanning multiple regions, and 49 associated with secondary ventricular bleeding.Although no universally accepted GCS threshold exists for predicting the prognosis in ICH patients, it is generally held by clinicians that a GCS score of 9 or above is predictive of a more favorable outcome.
ICH-Net Architecture
Enhancing the accuracy of prognosis classification for ICH can significantly reduce mortality and disability risks.However, current methods fail to efficiently integrate clinical and imaging data for prognosis classification, thereby limiting their effectiveness in aiding accurate diagnosis by medical professionals.To address the challenge of integrating clinical and imaging data and to enhance prognosis classification accuracy, we proposed ICH-Net, a novel network framework.
The architecture of our ICH-Net is depicted in Figure 1 and is composed of three sequential components: the Feature Extraction Module, the Feature Fusion Module, and the Classification Module.Notably, the Feature Fusion Module employs a cross-modal attention mechanism to fully account for the intrinsic relationships between the different modalities, thereby enabling targeted and pertinent feature integration within the network.Clinicians utilize imaging data and clinical texts to comprehensively assess prognosis.Clinical text information, such as GCS, offers domain-specific knowledge that aids in prognostic evaluation.Additionally, factors such as age, gender, and other demographic Brain Sci.2024, 14, 618 5 of 13 information have distinct impacts on ICH outcomes.Consequently, we designed various modules to extract both image and text features.In the Feature Extraction Module, we utilized two distinct encoders: f t and f v , to obtain the textual representation and the visual representation, respectively.For text encoding, we leveraged the pre-trained BioClinical-BERT model [23], while for visual encoding, we employed the pre-trained 2D ResNet50 architecture [24].
Within the Feature Fusion Module, the textual representation, f t , and the visual representation, f v , are processed through a Text Conversion (TC) sub-module and a Vision Conversion (VC) sub-module, respectively.The outputs from these sub-modules, denoted as f t and f v , were subsequently input into the Cross-Modal Attention Fusion (CMAF) and the Multi-Head Self-Attention Fusion (MHSAF).This step facilitated the effective integration of textual and visual representations.Finally, in the Classification Module, the fused representation was passed through our neural network, culminating in a classification task that discerns the prognostic outcomes.
The Detail Blocks
TC block and VC block.In our architecture, the TC and VC blocks are integral to the efficacious fusion of textual and visual data.As depicted in Figure 2, the TC block began by calculating the product of the text representation, f t , and its transpose.This operation facilitated the modeling of associations between each word in the text representation and its counterparts, thereby capturing the semantic relationships and contextual nuances inherent within the text sequence.Leveraging an attention-based mechanism, the TC block enhanced the model's comprehension of the textual input and fostered a more nuanced and information-rich representation during text data processing.Following this, the resulting matrix was subjected to a transformation via a Fully Connected (FC) layer.The FC layer was designed to discern nonlinear relationships within the input data, ultimately yielding output representations that are tailored to meet the specific demands of our task.The sequence concluded with the reshaping of this output to produce f t , which represents the refined final form of the text data, as processed by the TC blocks.Clinicians utilize imaging data and clinical texts to comprehensively assess prognosis.Clinical text information, such as GCS, offers domain-specific knowledge that aids in prognostic evaluation.Additionally, factors such as age, gender, and other demographic information have distinct impacts on ICH outcomes.Consequently, we designed various modules to extract both image and text features.In the Feature Extraction Module, we utilized two distinct encoders: and , to obtain the textual representation and the visual representation, respectively.For text encoding, we leveraged the pre-trained Bio-ClinicalBERT model [23], while for visual encoding, we employed the pre-trained 2D Res-Net50 architecture [24].
Within the Feature Fusion Module, the textual representation, , and the visual representation, , are processed through a Text Conversion (TC) sub-module and a Vision Conversion (VC) sub-module, respectively.The outputs from these sub-modules, denoted as and , were subsequently input into the Cross-Modal Attention Fusion (CMAF) and the Multi-Head Self-Attention Fusion (MHSAF).This step facilitated the effective integration of textual and visual representations.Finally, in the Classification Module, the fused representation was passed through our neural network, culminating in a classification task that discerns the prognostic outcomes.
The Detail Blocks
TC block and VC block.In our architecture, the TC and VC blocks are integral to the efficacious fusion of textual and visual data.As depicted in Figure 2, the TC block began by calculating the product of the text representation, , and its transpose.This operation facilitated the modeling of associations between each word in the text representation and its counterparts, thereby capturing the semantic relationships and contextual nuances inherent within the text sequence.Leveraging an attention-based mechanism, the TC block enhanced the model's comprehension of the textual input and fostered a more nuanced and information-rich representation during text data processing.Following this, the resulting matrix was subjected to a transformation via a Fully Connected (FC) layer.The FC layer was designed to discern nonlinear relationships within the input data, ultimately yielding output representations that are tailored to meet the specific demands of our task.The sequence concluded with the reshaping of this output to produce , which represents the refined final form of the text data, as processed by the TC blocks.In contrast, within the VC block of our architecture, we handled the visual representation, , derived from the input visual data.To refine the integration of these visual In contrast, within the VC block of our architecture, we handled the visual representation, f v , derived from the input visual data.To refine the integration of these visual features, the initial step involved passing them through a FC layer.This FC layer executed linear transformations on the visual representations, thereby remapping them into a representation space that was more apt for the ensuing analytical steps.Subsequent to the FC layer's processing, the visual representations underwent a sequence of four up-sampling operations.These operations incrementally enhanced the resolution of the feature map, enabling a more detailed and precise capture of the structural nuances present in the input image.The amalgamation of these transformative components culminated in the output visual representation, f v , which embodied an effectively processed and integrated depiction of the visual feature set.
Brain Sci.2024, 14, 618 6 of 13 CMAF block.Inspired by the methodological framework of CMAFGAN [25], for simple notation, we denoted f t and f v as x and y, respectively.As shown in Figure 3, the CMAF block began with six 1 × 1 convolution layers applied to x and y.These layers transformed each input into three matrices-V1, K1, and Q1 for x, and V2, K2, and Q2 for y-with their associated weights represented by ω.The transformation can be formalized to yield the attention matrices, as follows: where ω Q1 x ⊤ i signifies the linear transformation of the input x i via the weight matrix ω Q1 followed by transposition.Subsequently, as depicted in Figure 3, we obtained the final output representation, f cm f , through these operations.Here, f cm f is the representation of f cm f after passing through an up-sampling layer.
operations.These operations incrementally enhanced the resolution of the feature map, a more detailed and precise capture of the structural nuances present in the input image.The amalgamation of these transformative components culminated in the output visual representation, , which embodied an effectively processed and integrated depiction of the visual feature set.
CMAF block.Inspired by the methodological framework of CMAFGAN [25], for simple notation, we denoted and as x and y, respectively.As shown in Figure 3, the CMAF block began with six 1 × 1 convolution layers applied to x and y.These layers transformed each input into three matrices-V1, K1, and Q1 for x, and V2, K2, and Q2 for ywith their associated weights represented by ω.The transformation can be formalized to yield the attention matrices, as follows: where signifies the linear transformation of the input via the weight matrix ω followed by transposition.Subsequently, as depicted in Figure 3, we obtained the final output representation, , through these operations.Here, is the representation of after passing through an up-sampling layer.MHSAF block.As depicted in Figure 4, to enhance feature representation, our approach projected features onto three distinct subspaces, utilizing a trio of independent linear transformations, each governed by a unique weight matrix.Following this projection, self-attention computations were executed within each subspace to yield a set of output vectors.These vectors were subsequently concatenated, resulting in a comprehensive final output that captured the diversified interactions within the data.This method allowed for a more nuanced and multi-faceted analysis by capitalizing on the strengths of multiple representational spaces.MHSAF block.As depicted in Figure 4, to enhance feature representation, our approach projected features onto three distinct subspaces, utilizing a trio of independent linear transformations, each governed by a unique weight matrix.Following this projection, self-attention computations were executed within each subspace to yield a set of output vectors.These vectors were subsequently concatenated, resulting in a comprehensive final output that captured the diversified interactions within the data.This method allowed for a more nuanced and multi-faceted analysis by capitalizing on the strengths of multiple representational spaces.The merit of this methodology lies in its capacity to process features with enhanced granularity across disparate subspaces while concurrently conducting self-attention computations within each individual subspace.This approach enables the model to more adeptly discern the inter-feature relationships from varied perspectives.By carrying out attention-based calculations within each designated subspace, the model gains a more The merit of this methodology lies in its capacity to process features with enhanced granularity across disparate subspaces while concurrently conducting self-attention computations within each individual subspace.This approach enables the model to more adeptly discern the inter-feature relationships from varied perspectives.By carrying out attentionbased calculations within each designated subspace, the model gains a more profound comprehension of the dependencies among features, which allows for the extraction of richer and more precise information.Ultimately, this refined understanding contributes to the improvement of the model's final predictive accuracy.
Furthermore, by concatenating multiple output vectors from within each subspace to construct the final output, we enable the effective integration of information across each subspace.This process yields a more holistic and nuanced representation, thereby augmenting the model's expressive capabilities and enhancing its predictive accuracy.In essence, by distributing features across various subspaces and executing self-attention computations within these discrete domains, we capitalize on the inter-feature relationships.This strategy not only facilitates the extraction of richer and more precise information but also substantially elevates predictive efficacy of the model.
Loss Function
We utilized a composite loss function termed Cross-Modal Fusion (CMF) Loss, which comprises three components: Intra-Modality and Inter-Modality Alignment (IMIMA) loss, Similarity Distribution Matching (SDM) loss, and Masked Language Modeling (MLM) loss.
IMIMA Loss.We deployed four principal loss terms, namely, Text-to-Text (t2t), Visionto-Vision (v2v), Vision-to-Text (v2t), and Text-to-Vision (t2v).We designated the negative sample set for a given sample as N .The formulation for each loss component is as follows: where the pairwise similarity measure δ(a, b) = exp a ⊤ b .
To encapsulate the total loss, we aggregated these terms to define the IMIMA loss: SDM Loss.We incorporated the SDM loss, as delineated by Jiang et al. [26], to quantify the discrepancy between the predicted similarity distribution and the ground-truth similarity distribution produced by the model.The computation of this loss leverages the Kullback-Leibler (KL) divergence, encompassing bi-directional components: Visual-to-Text (v2t) and Text-to-Visual (t2v).The formulation of this loss function was meticulously designed to steer the model toward a more accurate alignment of similarity distributions between visual and textual representations of the input data.Such alignment is instrumental in amplifying the model's performance across cross-modal tasks.The v2t loss is defined as follows: where q i represents the true probability distribution of matches for the i-th sample, and p i denotes the SoftMax-normalized cosine similarity scores.In this context, p i,j is the predicted probability that the i-th sample corresponds to the j-th category, and q i,j is the ground-truth probability.Consequently, the cumulative SDM loss is the sum of the v2t and t2v losses, calculated by: MLM Loss.Drawing on the architectural principles of the BERT model, we devised a novel method.Initially, we obscured select words within the input sequence using a specialized masking token.Subsequently, the model's predicted probability distribution for these masked positions was contrasted with the actual labels to assess the model's predictive bias.This bias was quantified and integrated into the loss function as a facet of the training process.The intent behind this methodology was to aid the model in attaining a more profound comprehension of the context enveloping the input sequence.By diminishing the disparity between the predicted and true labels, we aimed to bolster the model's overall performance.
Ultimately, our CMF loss function is a weighted sum of the individual loss components.Each loss term was assigned a corresponding weight, reflected in the final formulation.The comprehensive loss function is expressed as follows: where α and β are hyperparameters that balance the contributions of the SDM and MLM losses, respectively, alongside the IMIMA loss.
Data Pretreatment
In the preprocessing of the imaging data, we employed multi-threshold segmentation and connected component analysis to delineate the regions of interest, thereby mitigating interference from non-brain-tissue areas.Threshold segmentation was applied to discriminate bone structures from other tissues, effectively isolating the hemorrhagic zones and preserving normal brain parenchyma.To further enhance precision and eliminate extraneous noise, connected component analysis was performed in a batch-processing manner.The objective was to ensure that the final image comprised solely of pertinent brain tissue regions; specifically, the hemorrhagic sites, gray matter, and intact brain tissue.This meticulous approach to image preprocessing is critical for the accuracy of subsequent analyses.
Regarding the text data preprocessing, we extracted key variables, such as age, gender, time from onset to CT, hospital stay, GCS score, treatment method, and physician's diagnosis, from the clinical dataset.Then, each variable was individually processed through a word segmentation tool integrated within the pre-trained Bio-ClinicalBERT model.This process facilitated the conversion of the textual data into a tensor representation to be suitable for subsequent computational analysis.
Experiments
To demonstrate the superiority and validate the robustness of our methodology, we executed a comprehensive suite of experiments, encompassing both comparative and ablation studies.Throughout the training phase, we meticulously optimized hyperparameters.Specifically, we set the learning rate at 0.0001, determined the number of training epochs to be 300, and established the batch size at 128.These configurations were carefully chosen to balance computational efficiency with model performance.The libraries used in our experiment included but were not limited to torch = 1.21.1 + cu116 and torch vision = 0.13.1 + cu116.And the code is available at https://github.com/YU-deep/ICH-2D,accessed on 18 June 2024.
Comparative Experiments
As detailed in Table 1, our comparative study benchmarked our algorithm against the leading contemporary methodologies, which span purely 3D, purely 2D, and hybrid 2D + 3D approaches.The results underscored our method's superiority across all evaluated metrics, thereby confirming its efficacy.Specifically, when juxtaposed with the best-performing metrics of alternative methods, our approach exhibited an increment of 2.35% in accuracy (ACC), a 0.13% rise in recall, and a 0.0027 enhancement in the area under the receiver operating characteristic curve (AUC).Despite employing a 2D-based framework, these results clearly demonstrated that our method's overall performance surpassed that of both existing 3D and 2D techniques.
ACC (%) Recall (%) Precision (%) AUC
DL-Based Method (3D) [12] 81.02 78.52 83.31 0.9141 Image-Based Method (2D) [18] 74.23 67.11 75.98 0.6933 Multi-Task Method (3D) [19] 85.42 79.86 89.80 0.8998 GCS-ICH-Net (2D) [22] 85.08 81.88 87.25 0.8590 UniMiSS (2D + 3D) [21] 82 Based on the outcomes of our analysis, we posit that the observed superiority of our method can be attributed to the following factors: (1) Multimodal Information Fusion: Our model incorporated a CMF loss function, which effectively harnessed the intrinsic correlations between various modalities.By synergistically integrating CT images with clinical data, our model achieved a more holistic understanding of the tasks at hand, consequently enhancing its overall performance.(2) Feature Fusion Mechanism: Our CMAF module employed a cross-modal attention mechanism designed to extract salient and comprehensive fusion features.This method facilitated a more discerning aggregation of information from multiple sources, enhancing the representational power of the fused features.(3) Utilization of Advanced Pre-trained Models: Our framework incorporated two distinct modules for feature extraction-a visual feature extraction module utilizing the ResNet50 model and a text feature extraction module employing the BioClinicalBERT model.These pre-trained models were instrumental in enhancing the capability of our system to extract more robust and nuanced features.By leveraging the extensive knowledge encoded within these pre-trained models, our approach achieved superior feature extraction performance.
Ablation Experiment
As presented in Table 2, the Vision-Only approach exclusively processed visual data within the model, whereas the Text-Only approach was limited to textual information.The tabulated results elucidated that the cross-modal input strategy implemented in our ICH-Net significantly enhanced the performance, yielding an 11.18% increase in accuracy and a 0.0934 improvement in AUC compared to the Vision-Only method.These findings affirm the value of integrating clinical information to enhance diagnostic accuracy.Our perspective posits that patient information extends beyond the confines of isolated modalities, such as CT images or clinical data; rather, there is an intrinsic interrelation between these two forms of information.A concurrent comprehension of both modalities can significantly augment the model's proficiency in executing predictive tasks.By harnessing this synergistic understanding, our model was tailored to leverage the compounded insights gained from the integrated analysis of multimodal data, thereby enhancing the accuracy and efficacy of its predictions.
Visualization Analysis
Through meticulous examination of the images within the test set and the delineation of regions of interest, we gained a refined understanding of the network's recognition accuracy of areas afflicted by ICH.As depicted in Figure 5, the employed visualization technique served a dual purpose: it corroborated the model's remarkable precision in localizing hemorrhagic sites and underscored its capacity for accurate hemorrhage localization predictions-both of which are pivotal for assessing the network's predictive capabilities.Moreover, these visualization outcomes provided a more profound insight into the network's cognitive mechanisms, specifically its process for identifying hemorrhagic areas within images.Such clarity presents an opportunity to refine and enhance network performance.By deciphering the pivotal features and patterns that the network relies on for decision-making, we can implement targeted adjustments and advancements, thereby augmenting its efficacy in recognizing ICH.Consequently, this thorough inspection and visualization approach yielded indispensable insights for model refinement and the optimization of predictive performance.It equipped us with a deeper comprehension of the complexity inherent in the tasks at hand and propelled the enhancement of our models' predictive acumen.
Visualization Analysis
Through meticulous examination of the images within the test set and the delineation of regions of interest, we gained a refined understanding of the network's recognition accuracy of areas afflicted by ICH.As depicted in Figure 5, the employed visualization technique served a dual purpose: it corroborated the model's remarkable precision in localizing hemorrhagic sites and underscored its capacity for accurate hemorrhage localization predictions-both of which are pivotal for assessing the network's predictive capabilities.Moreover, these visualization outcomes provided a more profound insight into the network's cognitive mechanisms, specifically its process for identifying hemorrhagic areas within images.Such clarity presents an opportunity to refine and enhance network performance.By deciphering the pivotal features and patterns that the network relies on for decision-making, we can implement targeted adjustments and advancements, thereby augmenting its efficacy in recognizing ICH.Consequently, this thorough inspection and visualization approach yielded indispensable insights for model refinement and the optimization of predictive performance.It equipped us with a deeper comprehension of the complexity inherent in the tasks at hand and propelled the enhancement of our models' predictive acumen.
Discussion
Our investigation spanned numerous datasets, yet we found that none offered concurrent public access to both imaging and clinical information.Our private dataset stood out as both comprehensive and reflective of real-world conditions.However, future applications to external datasets may influence the performance of our model.Thus, enhancing the model's generalization capability remains a primary objective.
Within our methodological framework, segmentation techniques were not utilized.We posit that approaches eschewing segmentation might offer a more holistic consideration of a patient's condition, encompassing clinical presentation, imaging findings, medical history, and other pertinent factors.Such an approach could potentially aid physicians in conducting a more thorough prognostic assessment of patients and formulating optimal treatment strategies.Nevertheless, it is acknowledged that both segmentation and nonsegmentation methods have their respective merits and limitations, contingent on the clinical scenario.
Our study identified several pathways for further enhancement and outlined key areas for future research.Firstly, the dataset used was sourced from collaborating hospitals, which may limit its size and diversity, potentially reducing the effectiveness of our model when applied to datasets from different institutions or that include varied data types.Secondly, during feature extraction, we utilized pre-trained text and visual encoders.Should these models perform sub-optimally on certain tasks, the efficacy of our methodology could be compromised.Lastly, our model integrated CT imaging data and text, which has been demonstrated to influence prognosis classification outcomes, such as gender [27][28][29], early cognitive status [30], and the location and volume of the hemorrhage [31,32].Therefore, any loss of information between modalities might impair the model's performance.These identified limitations guide a roadmap for future improvements, including expanding the dataset's scope and diversity, enhancing feature extraction techniques, and mitigating information loss across different modalities.
Conclusions
In current methodologies, the lack of effective fusion mechanisms has resulted in the suboptimal amalgamation of clinical data with CT imaging for the prognostic classification of ICH across different modalities.To address this shortfall, we proposed a pioneering framework named ICH-Net, which stands for joint-attention cross-modal network.ICH-Net comprises a Feature Extraction Module, a Feature Fusion Module, and a Classification Module.Additionally, ICH-Net incorporates a CMF loss function, which includes IMAMA loss, SDM loss, and MLM loss, to enhance modality alignment and improve the model's interpretability with respect to the task at hand.Moreover, in the CMAF block, a crossmodal attention mechanism was employed to strategically focus on significant regions within the data.
Our empirical assessments, encompassing both comparative and ablation studies, have substantiated the efficacy of our proposed approach.In our future work, we plan to collect a more comprehensive dataset of clinical information to strengthen the model's generalizability.Furthermore, we aim to extend the application of ICH-Net to a broader spectrum of tasks by embracing multi-modal and multi-task learning paradigms.
( 2 )
We incorporate clinical data to enrich the model's comprehension and enhance ICH prognosis accuracy.(3)Our fusion model incorporates a joint-attention mechanism, effectively facilitating the extraction of more salient and comprehensive fusion features.
Figure 1 .
Figure 1.The diagram illustrates the architecture of ICH-Net, which comprises a Feature Extraction Module, a Feature Fusion Module, and a Classification Module, in sequential order.Ultimately, it outputs the final results.
Figure 2 .
Figure 2. Architecture of the TC block and VC block.These blocks were designed to specifically process text and visual information, respectively.The symbol ⊗ stands for matrix multiplication.
Figure 2 .
Figure 2. Architecture of the TC block and VC block.These blocks were designed to specifically process text and visual information, respectively.The symbol ⊗ stands for matrix multiplication.
Figure 3 .
Figure 3. Architecture of the proposed CMAF block.Here, ⊗ represents matrix multiplication, ⊘ stands for SoftPool, ⊙ symbolizes matrix addition, and ⊕ signifies concatenation.In this block, and are inputs, then is obtained as the output.
Figure 3 .
Figure 3. Architecture of the proposed CMAF block.Here, ⊗ represents matrix multiplication, ⊘ stands for SoftPool, ⊙ symbolizes matrix addition, and ⊕ signifies concatenation.In this block, f v and f t are inputs, then f cm f is obtained as the output.
13 Figure 4 .
Figure 4. Detailed architecture of the proposed MHSAF block.The symbol ⊗ denotes matrix multiplication.
Figure 4 .
Figure 4. Detailed architecture of the proposed MHSAF block.The symbol ⊗ denotes matrix multiplication.
Figure 5 .
Figure 5. Visual representation of diverse prediction outcomes, emphasizing the activation zone of class association in the predicted network with a prominent red hue.
The diagram illustrates the architecture of ICH-Net, which comprises a Feature Extraction Module, a Feature Fusion Module, and a Classification Module, in sequential order.Ultimately, it outputs the final results.
Table 1 .
The comparative experiments comparing our method with other methods.
Bold indicates the best, and underline is the second best.
Table 2 .
Ablation experiment on loss function.Bold indicates the best, and underline is the second best.
Bold indicates the best, and underline is the second best. | 8,580.6 | 2024-06-01T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Investigation of Dual-Vortical-Flow Hybrid Rocket Engine without Flame Holding Mechanism
A 250 kgf thrust hybrid rocket engine was designed, tested, and verified in this work. Due to the injection and flow pattern of this engine, this engine was named dual-vortical-flow engine. This propulsion system uses N2O as oxidizer and HDPE as fuel. This engine was numerically investigated using a CFD tool that can handle reacting flow with finite-rate chemistry and coupled with the real-fluid model. The engine was further verified via a hot-fire test for 12 s. The ground Isp of the engine was 232 s and 221 s for numerical and hot-fire tests, respectively. An oscillation frequency with an order of 100Hz was observed in both numerical and hot-fire tests with less than 5% of pressure oscillation. Swirling pattern on the fuel surface was also observed in both numerical and hot-fire test, which proves that this swirling dual-vortical-flow engine works exactly as designed. The averaged regression rate of the fuel surface was found to be 0.6~0.8mm/s at the surface of disk walls and 1.5~1.7mm/s at the surface of central core of the fuel grain.
Introduction
Hybrid rocket propulsion has attracted tremendous attention in the past decade due to its distinct characteristics and performance as compared to liquid and/or solid rocket propulsion.The advantages of hybrid rocket propulsion include the following [1,2]: (1) extremely high safety because of separation of oxidizer and fuel storage and solid form of fuel minimizing fuel explosion; (2) good cost-effectiveness attributing to reduced complex plumbing and valve system; (3) good throttle capability similar to liquid rocket engines; and (4) highly green and environmental friendly combustion technology which allows various choices of fuel and oxidizer.Despite the benefits mentioned above, there are some issues that a hybrid rocket engine design must take into account before it becomes a useful rocket propulsion technology.These well-known issues are mostly due to the inherent characteristics of hybrid rocket engine (HRE), which are described next.
These issues include (1) O/F ratio shift during combustion due to varying burning area and regression rate [3]; (2) limited total operating duration, which affects the size (diameter or length) of the hybrid rocket engine; (3) low combustion efficiency due to the nature of diffusion flame as compared to premixed flame; and (4) various choices of fuel and oxidizer [2,4].When designing a new type of hybrid rocket engine, one needs to take the above issues into account based on the specific mission requirements.
There are many different types of designs of HRE nowadays.For example, Nagata et al. [5] proposed an HRE called CAMUI (cascaded multistage impinging-jet).Fuel grain of this HRE has small ports along the engine axis, similar to the multiport design that increases the fuel burning area.The innovative part of this engine was the cascading feature of the ports.This creates large turbulence intensity as the flow impinges at the surface of the next fuel grain which boosts mixing efficiency while increasing the reaction area.
A second example was proposed by Knuth et al. [6].This HRE came up with the idea of coaxial, corotating vortex flow field engine called Orbitec Hybrid.This HRE injects the oxidizer at the rear end of the engine in the tangential direction.After injection, the oxidizer flows to the front part of the engine along the outer surface then back to the nozzle with swirling motion.A similar engine was also studied by Wall [7].This engine increases the fuel regression rate by applying swirl injection [8], in which this design also enhances mixing by introducing bidirectional axial flow.More information on alternative designs on HRE could be found in Haag's study [9].All these studies are designed to meet specific requirement missions, such as large thrust for short period of time (boosters) and small thrust for long operation (cruising).Lai et al. [10] recently proposed a highly efficient dualvortical-flow (DVF) engine.The overall performance of the proposed configuration may give a very stable and high efficiency combustion and thrust.That specific configuration includes a flame holding mechanism implemented inside the high-temperature combustion chamber, which is very technically challenging or even impractical for application purposes.In this study, we would like to present a similar but easy to fabricate HRE without the flame holding mechanism for sounding rocket flight mission based on numerical and experimental investigation.
This engine consists of two counter-rotating flow reacting zones perpendicular to the engine axis with four tangential oxidizer injectors each.The fuel grain disks are connected by a central port to the nozzle, as shown in Figures 1(a) and 1(b).The main objective of this design is to provide a relatively stable thrust throughout the entire operating period.With this design, the burning surface could maintain a nearly constant combustion area throughout the operation.The counter-rotating flow maximizes the mixing and combustion efficiency with possible roll control of the rocket.Furthermore, this DVF HRE has a very small aspect ratio (L/D~1), which is favorable for gimbal-based thrust vector control (TVC) if needed.
Research Methods
This study initially analyzed the described engine numerically using the well-known computational fluid dynamics (CFD) technique [11] and was verified via hot-fire tests.The numerical tool used in this work was UNIC-UNS [12], which is to be described later.Figure 1(a) shows the schematic diagram of the design, in which the engine was designed to fit into a casing of 266 mm in diameter and 148 mm in length (not including the convergent-divergent nozzle).In this design, the reacting zone (blue) is surrounded by fuel grain (yellow) which regresses while combustion These equations and models were discretized using the cellcentered finite-volume method, parallelized using MPI protocol and had been applied successfully to similar problems [10,12].For the reacting species, the finite rate reaction model (a.k.a.Arrhenius reaction model) was implemented.Together with a simplified set of species and reaction path [10], the UNIC-UNS code handles the species conservation equation.In general, the simulations were performed using a time step of 5 × 10 −6 s with boundary conditions summarized in Table 2.These computations are performed using 64 cores of processors on the IBM 1350 PC cluster at the National Center for High-Performance Computing (NCHC) of Taiwan, in which a single node consists of 4 cores with 3.0 GHz of CPU for each core and 16 GB of RAM for each node.
For the CFD model, we performed a series of grid convergence tests using 5.03, 2.34, and 1.63 million cells which were labeled as Case A 1 , A 2 , and A 3 , respectively.We have compared three simulated quasi-steady-state data which include thrust, mass flow rate (ṁ), and I sp .Figure 2 illustrates the corresponding convergence history, in which all the cases were calculated long enough to reach a quasi-steady-state where the abovementioned values are nearly constant.The thrust oscillation caused by the pressure oscillation will be explained later in Results and Discussion.For all three cases, the simulated mass flow rates are essentially the same as 1.05 kg/s.But the thrust (or I sp ) of Case A 3 (coarse mesh) is slightly higher than the A 1 (fine mesh) and A 2 (medium mesh) which are ~254 kgf (or 234 s) and ~246 kgf (or 232 s), respectively.With these results, one can summarize that the resolution of Case A 2 is good enough for engine design purpose, further analysis, and discussion.The corresponding hot-fire tests are described next.
2.2.Hot-Fire Test Setup and Analysis.Figure 3 shows the schematic sketch of the DVF engine design in detail.For hot-fire test purpose, the chamber body, external plumbing, and both bulkheads were made of 304 stainless steel.The nozzle was fabricated using high-density antioxidation graphite.The insulators are made of silicone rubber, EPDM, or ceramic depending on the location.The fuel grains were manufactured using off-the-shelf high-density polyethylene (HDPE) [4] with a density of 0.945 g/cm 3 .Photographs of all manufactured components are shown in Figure 4(a) and are assembled into a DVF engine as shown in Figure 4(b).
Figure 5 shows the test setup of this hot-fire test.For this system, the N 2 cylinder was filled to exceed 120 atm and was connected to the top of the specially prepared oxidizer (N 2 O) running tank with a regulator set as 57 atm.A pressure transducer was also mounted to the top of the running tank to monitor the tank pressure during operation.The bottom opening of the running tank was connected to each of the injectors of the engine with a plumbing system including a main valve followed by a flow distributor and some pressure transducers.The DVF engine was then mounted on a horizontal thrust stand.The pipe size used before the flow distribution system was 3/4 inch (~19 mm) and was split into eight 1/4-inch (~6.35 mm) tube before injection into the combustion chamber.The valve used in this system was a traditional ball valve with nominal diameter 19 mm controlled by a pneumatic valve.
A reliable electrical ground support equipment (EGSE) is also required to accomplish the task.To acquire the experimental data, we mounted a series of sensors on the engine system: these sensors were managed on a data acquisition system (DAQ) based on National Instruments Corporation's (NI) products.The sensors used in this test were pressure transducers and load cells.These sensors output 0~10 VDC according to the physical quantity measured.The pressure transducers were JPT-131 series provided by Jetek Electronics Co. Ltd. [14].The pressure transducer can measure 0~100 bar gauge pressure with the accuracy of +/−0.5%.For the thrust of the engine and mass difference of the supply tank, S-type load cell with the desired load range provided by Sensolink [15] was used.These sensors were wired to the cRIO 9074 of NI using ordinary signal cables.Then the cRIO 9074 is connected to a PC using Ethernet cable, which is suitable for long-distance monitoring and controlling as shown in Figure 6.The programming was done using the 3 International Journal of Aerospace Engineering software called LabVIEW, which is also provided by NI.This platform makes the programming process simple and easy.The obtained data is stored in the PC at the rate of 1 kHz.These data are then postprocessed and analyzed after the hot-fire test.
Results and Discussion
3.1.Simulation Results.We have performed the calculation of thermodynamic equilibrium reactions for N 2 O [16] (oxidizer) and C 2 H 4 (fuel) using NASA CEA online [17][18][19].The chamber pressure was set to 38 atm with an O/F ratio of 4.2, and the area expansion ratio of the nozzle was 2.56.For this case, the resulting outlet pressure 2.85 atm was underexpanded for sea level operation.The optimal (theoretical) I sp , C * , and C f value for this case was 236 s, 1759 m/s, and 1.32, respectively.Figure 7(a) shows the summary of the simulated equilibrium mass fractions of all species.The major species include N 2 , CO, H 2 O, CO 2 , and H 2 with the mass fraction of 0.5136, 0.3572, 0.0664, 0.0417, and 0.0200, respectively.These five species sum up to 0.9989 of the composition.Figure 7(b) shows the simulated mass fractions by UNIC-UNS CFD code.The major species are identical to those of CEA, but the mass fractions are slightly different.The mass fraction of N 2 , CO, H 2 O, CO 2 , and H 2 is 0.4715, 0.2924, 0.0713, 0.0554, and 0.0121, respectively.In addition to the above species, there are other species that sum up to 0.0973, which are mostly radical species of combustion.The main reason for this difference was that for CEA the species were calculated assuming thermodynamic equilibrium while, for UNIC, they were obtained using the finite-rate chemistry.The majority of the mixture is N 2 , which comes from the direct decomposition of International Journal of Aerospace Engineering which leads to incomplete combustion, the second dominant species is CO.The formation and distribution of the species will be further discussed later in this paper.Figure 8(a) shows the instantaneous sliced pressure distribution in the engine in the quasi-steady-state after ignition of combustion.The pressure inside the engine ranges from 30 to 38 atm, in which the maximum pressure is located near to the injectors (circumference of the disks) and the lowest pressure is distributed near the nozzle as expected.Noticeably, the pressure in the major central core (between disk 1 and disk 2) is low with a value in the range 32-33 atm due to strong swirling motion.This strong swirling central core region disappears as the counter flows meet somewhere downstream of disk 2. As the flow continues to flow towards the nozzle, the flow accelerates and the pressure drops quickly.In addition, Figure 8(b) shows the same instantaneous sliced distribution of Mach number in the engine.Gas flow inside the engine is subsonic, which is accelerated to sonic speed at the throat of the nozzle and is further accelerated in the diverging part.The performance of HRE relies heavily on the combustion efficiency, which temperature of combustion can be considered as a good indicator.Figure 9(a) shows the corresponding instantaneous sliced temperature distribution.Near the injection regions where N 2 O are injected, the temperature is at room temperature (300 K), which serves as a natural cooling mechanism for preventing the injectors from melting by high combustion temperature.As the N 2 O stream is injected into the engine, the high combustion temperature causes N 2 O to decompose directly.This decomposition reaction is an exothermic reaction that helps to sustain the combustion.The main products of N 2 O decomposition are N 2 and O 2 with some related species in radical form.Distributions of some critical species such as O 2 and OH radical are shown in Figures 9(b) and 9(c), respectively.Therefore, we can observe that in Figure 9(b), massive O 2 are formed and quickly disappear just before the high-temperature region.In hydrocarbon combustion, flame location or highly reacting region generally consists of abundant OH radical generally.This shows that the abundant OH radical distribution corresponds to the high-temperature region very well.
Figure 10 shows the distributions of mass fraction of the five other major species (N 2 , CO, H 2 O, CO 2 , and H 2 ) in the engine.In Figure 10( We have found that the thrust (or pressure, not shown) oscillates as a function of time in Figure 2. A detailed analysis on this oscillating phenomena was performed in this study.Figure 11 illustrates five instantaneous temperature distribution in the middle sliced sections of both disk combustion chambers at five different times of simulation.The "spikes" of the contour rotate clockwise and counterclockwise at disk 9 International Journal of Aerospace Engineering considered to be fairly stable with the maximal oscillation amplitude of 2 kg over 245 kg, which is less than 1%.
Figure 12 shows the stream traces of the flow patterns at the surface of the fuel grain (0.1 mm above the surface) of Case A 2 (Figure 12).The injected flow streams revolve in the disk combustion chambers around the central port before entering the central port.This provides a relatively longer flow path for the flow (relative to straight radial injection) which greatly increases the residence time for the combustion reactions to take place.For a specific case, the stream trace marked with the blue arrow in Figure 12 was being observed.We define the stream trace entering the swirl pattern when it reaches the radial position where R is at 95% of R max (about 5 mm from the circumference, indicated by the long red arrow) and exit when R is at 25% of R max (about 5 mm larger than the central port, indicated by the short red arrow).This figure shows that the flow revolved about 180 degrees starting from injection until it entered the central port.The path that the flow takes in this situation is about twice the length of the radius of the disk.As shown in Figure 12, the gap between disk chamber walls is small and the averaged flow speed is larger.This also indicates that the tangential momentum is larger; therefore, the flow revolves almost 180 degrees before reaching the central port.As the gap grows wider, the cross-sectional area increases and the tangential momentum becomes smaller.This will be further compared and discussed with the hot-fire test results.
3.2.Hot-Fire Test Results.After the rocket engine is set up following Figure 5 with all connections carefully checked, we have followed the following procedure to perform the hot-fire tests.A snapshot of the hot-fire test during combustion is shown in Figure 13, in which the exhausting plume is slightly underexpanded because of clear further expansion leaving the lip of the nozzle.Figure 14 shows the measured thrust and pressures at many locations for a typical run.A pyrograin in the engine is used to ignite the engine at t = −4 s.After the pyrograin burns out, main valve opens at t = 0 s to allow the N 2 O oxidizer to flow into the chamber.A delay of 0.5 s could be observed based on the rise of measured thrust and pressure data, which is caused by the speed of valve opening.The liquid N 2 O flow depletes at t = 9.5 s, and finally, the valve totally closes at t = 12 s to shut down the engine, during which both thrust and pressures decrease rapidly.Note that the N 2 O becomes a gaseous state and flows into chamber between t = 9.5 and t = 12 s.The thrust is relatively stable in the range of 240~245 kgf in the period of 1.25~9.5 s.The thrust decreases almost exponentially from t = 9.5 s to the end.The running tank pressure starts at 57 atm and decays slowly to about 50 atm at t = 9.5 s due to cooling effect caused by thermal expansion of N 2 O flow during operation.Both disk chamber pressures show clear oscillations (even different) probably due to the lack of flame holding mechanism in the engine which coincides with the simulation results.However, the oscillation frequency is only 4 Hz based on the measured pressure data in Figure 14, which is much lower than the experimentally observed 100 Hz of oscillation using 600 fps high-speed camera most probably caused by the limitation of the setup of pressure sensor that may damp out the high-frequency component.In addition, the simulated oscillation frequency is ~200 Hz which is roughly consistent with the measured ~100 Hz.The difference, however, requires further investigation.
The swirl pattern is also observed on the surface of the fuel grain after hot-fire test.Figure 15 shows the photo of cavity wall of chamber disk 1 after hot-fire test.The flow pattern was indicated by the red arrows.Similar to the simulated case, the two red arrows in Figure 15 indicate the entering and exiting of the flow into the chamber disk.The revolved angle of this case is about 80 degrees which is a lot smaller than the simulated Case A 2 .The main reason was that the gap after firing (~26 mm) is a lot larger than the initial case (10 mm).As the gap increases, the ratio of tangential momentum and radial momentum decreases.Therefore, the angle revolved by the flow is expected to decrease before entering the central port.
The fuel surface contour was measured by a bridge-type 3D Coordinate Measuring Machine (CMM) (Model PIO-NEER, Hexagon Manufacturing Intelligence).We have scanned three fuel grain surfaces using an automatic mode.Figure 16 shows the averaged regression rates at different radial positions by taking average from 6 to 12 scans per radius along different radial directions, considering minimal measured positions (0, 45, and 48 mm for fuel grain 1, 2, and 3, resp.) to the outer radial position of 100 mm.The averaged regression rate at the disk surface is 0.6 to 0.8 mm/s.The regression rate at the center region of grain 1 (central port) is two times the value of those at the disk surface.This highly regressed region may be attributed to the long light of sight of radiation from the combustion flame to the exhausting plume along the axis.In addition, very high regression rates are also observed at the wall of the central port which is about 1.5 to 1.7 mm/s, which should be caused by the very strong swirling in this region that promotes the pyrolysis of the fuel grain.For the CFD simulated cases, the simulated Case A 2 represents the initial state of this work.The simulated Case A 2 has a 98% of I sp efficiency (232 s) and more than 100% of the C * value of 1814 m/s which exceeds that (1759 m/s) of CEA.For this work, the location P c used is somewhere near the injector where the pressure sensor of the experimental model could be mounted during tests.Due to the use of this pressure, the C * value was higher than the one from CEA.As a result, the C f value of Case A 2 is not as high as the one from CEA.This can be easily observed by I sp × g 0 = C * × C f .For the hot-fire test results, the pressure and thrust are obtained as a function of time but the mass difference (especially the fuel part) could only be measured after the test due to the limitation of the facilities.Therefore, the mass flow rate of the hot-fire test was only available in terms of time averaged values.Due to the reasons above, this work could only obtain the averaged value of I sp and C * of the hot-fire test.But to obtain the value of C f , the instantaneous value can be calculated properly by C f = F instant /(P instant × A throat ).The averaged I sp was calculated by integrating the measured force and divide it by total mass difference.The C * value was then calculated with P c taken as time averaged value.As for the C f , it was calculated using the instantaneous force and chamber pressure.The averaged I sp obtained by the hot-fire test in this work was 221, about 93.6% of the theoretical value.The averaged C * and C f are 1565 m/s and 1.39, respectively.Since the value of C f was available as a function of time, we observed an increase of C f from 1.21 to 1.51, which is mainly due to the decrease of chamber pressure as the test proceeds.As the chamber pressure decreases, the underexpanded flow shifts towards optimal criteria, and the C f increases.Though the main objective of this DVF HRE was to provide a constant and stable thrust for a specific flight mission, it was surprising, that to us, that the thrust remains almost the same throughout the test period.This was probably due to O/F ratio shift, mixing efficiency, and a lot more reasons that compensate with each other.However, this definitely requires further investigation in the near future.
Conclusion
This work proposes a DVF HRE without flame holding mechanism for possible sounding rocket application.The length-to-diameter ratio of the engine is only about 1, which is different from conventional lengthy HREs.This design was simulated considering geometrical configurations at the initial state.The resulting simulated I sp at initial state is 232 s with very high combustion efficiency as compared to that calculated by NASA CEA (236 s).A hot-fire test based on proposed design was performed for 12 s by measuring the thrust and pressures at many locations.The maximal thrust of 245 kgf was measured to be relatively constant with a value of ~240 kgf and an averaged ground I sp of 221 s.Measured regression rates are in the range of 0.6-0.8mm/s at the walls of the two disk chambers and 1.5-1.7 mm/s at the walls of the central port region due to strong vortex motion of the hot gases.In addition, the central end wall, located furthest from the nozzle, also possesses very high regression rate, probably due to high thermal radiation and highly turbulent flow field.The O/F ratio of this specific test was 4.2 which is relatively low compared with the value for optimal I sp from various references [2].An oscillation frequency of ~100 Hz was observed in numerical simulation, while ~200 Hz in hot-fire test, which definitely requires further investigation.These observed or measured instabilities may be caused by the fact that no physical flame holders were used in the chamber.Despite with the presence of these instabilities, this DVF engine design still shows fairly stable thrust, which should satisfy the use in sounding rocket application.
Figure 1 :
Figure 1: Schematic diagram of the 250 kgf class DVF engine.(a) XY plane cross section.(b) YZ plane cross section.
Figure 5 :
Figure 5: Schematic diagram of the test setup of 250 kgf class DVF engine.
Figure 7 :
Figure 7: Composition of species product at exhaust outlet by (a) NASA CEA and (b) UNIC CFD.
Figure 6 :
Figure 6: Schematic diagram of the EGSE setup.
Figure 12 :
Figure 12: Surface stream trace of the flow pattern of Case A 2 at X = −122.6mm.
Figure 13 :Figure 14 :
Figure 13: Snapshot of the 250 class DVF engine during operation.
Figure 15 :
Figure 15: Swirl pattern on fuel grains of disk chamber 1 after hot-fire test.
Figure 16 :
Figure 16: Measured grain regressed thickness for different grain fuel surfaces of the DVF engine after hot-fire test.
Table 1 :
Dimensions of the 250 kgf class DVF engine.Due to the pyrolysis of the fuel grain during combustion, some of the dimensions of the engine change over time, namely, P d , D d , and Gap in Figure1(a).The dimensions for the numerical investigation are summarized in column "Cases A 1 ~A3 " of Table1.The pyrolysis rate of the fuel grain is a function of many physical properties such as temperature, flow pattern, turbulence intensity, and oxidizer mass flux, to name a few.
Table 2 :
Boundary settings for numerical analysis.
Table3summarizes the numerically simulated cases with hot-fire test data of the 250 kgf class DVF HRE.For the numerical results, there are two specific cases being discussed.The NASA CEA case assumed 0-D chemical equilibrium condition which we can be considered as the theoretical results, and the UNIC-UNS simulated Case A 2 using CFD finite-rate model.For CEA test case, the initial state of the engine is calculated.The resulting O/F ratio, ground I sp , and C * are 4.2, 236 s, and 1759 m/s, respectively.The corresponding C f is calculated to be 1.32 using the nozzle area expansion ratio of 2.56.
Table 3 :
Comparison of numerical and test data of 250 kgf class DVF engine. | 6,092.8 | 2018-03-11T00:00:00.000 | [
"Engineering"
] |
ACHIEVING CLOSE RANGE PHOTOGRAMMETRY WITH NON-METRIC MOBILE PHONE CAMERAS
. Close range photogrammetry (CRP) has gained increasing relevance over the years with its principles and theories being applied in diverse applications. Further supporting this trend, the current increase in the wide spread usage of mobile phones with high resolution cameras is expected to further popularize positioning by CRP. This paper presents the results of an experimental study wherein two (2) non-metric mobile phone cameras have been used to determine the 3-D coordinates of points on a building by using the collinearity condition equation in an iterative least square bundle adjustment process in MATLAB software environment. The two (2) mobile phones used were Tecno W3 and Infinix X509 phones with focal lengths of 5.432 mm and 8.391 mm respectively. Statistical tests on the results obtained shows that there is no significant difference between the 3-D coordinates obtained by ground survey and those obtained from both cameras at 99% confidence level. Furthermore, the study confirmed the capability of non-metric mobile phone cameras to determine 3D point positions to centimeter level accuracy (with maximum residuals of 11.8 cm, 31.0 cm, and 5.9 cm for the Tecno W3 camera and 14.6 cm, 16.1 cm and 1.8 cm for the Infinix X509 camera in the Eastings, Northings and Heights respectively).
Introduction
Close-range photogrammetry (CRP) has found many diverse applications in the fields of industry, biomechanics, chemistry, biology, archaeology, architecture, automotive and aerospace, construction as well as accident reconstruction (Jiang et al., 2008). Furthermore, the capability of CRP to produce dense point clouds similar to the output from terrestrial laser scanning (TLS) makes it a cheaper alternative to be considered in applications that require 3D position of points (Ruther et al., 2012;Mokroš et al., 2013Mokroš et al., , 2018. Consequent upon its many applications, CRP has witnessed a wide range of developments in the past 4 decades many of which are results of automation and digital techniques which occurred on the sidelines of mainstream photogrammetry (Fraser, 2015). Many of these developments have been especially concerned with models and automation of the procedure for the rigorous determination of the geometric relationship that exist between image and object as at the time of image capture which is the fundamental task of photogrammetry (Mikhail et al., 2001;Luhmann As digital photogrammetric techniques began to gain relevance above analytical photogrammetry, Jechev (2004), worked on the use of amateur cameras for determination of 3D coordinates of buildings using CRP approach. The results obtained in the study showed root mean square error (RMSE) of 1 2 cm ± . and 6 1 cm ± . in planimetry and altimetry respectively when compared with Total station observations at same points. The data was processed using PHOTO MOD Lite software. Later, Abbaszadeh and Rastiveis (2017) explored the ability of CRP for volume estimation using non-metric cameras and found that the use of non-metric cameras produced results with a relative error of 0.2% in comparison with ground survey techniques. The study further established the possibility of using non-metric cameras for CRP applications. However, the images used for the study were also processed using the Agisoft software hence, the study did not explicitly discuss the procedure utilized in converting the image coordinates to object coordinates which is fundamentally known as space resection (exterior orientation) and space intersection.
Exterior orientation involves the process of determining the 3D spatial position and the three orientation parameters of the camera, as at the time of exposure (Jacobsen, 2001). There are three major fundamental condition equations used in photogrammetry in-order to achieve exterior orientation and all equations rely on the point coordinates as input data (Elnima, 2013). Several approaches have been developed over years in the field of photogrammetry for solving the problem of exterior orientation. Some of such methods include the Direct Linear Transformation (DLT) method which gives the exterior orientation parameters without initial approximation (Elnima, 2013) and the matrix factorization method which uses matrix factorization and a homogenous coordinate representation to recover the exterior orientation parameters in a planar object space (Seedahmed & Habib, 2015). All these methods are modifications of the collinearity equation which is conventional approach for solving exterior orientation problem.
This paper explicitly discusses the procedures (space resection and space intersection) for determination of 3D object space coordinates from 2D images taken with mobile phone (non-metric) cameras using the collinearity equation; and implements same using the MATLAB software.
Data
The basic data/equipment used for this study are: -Ground coordinates of two exposure stations.
-Two (2) non-metric cameras. This is to determine if there is any relationship between positioning accuracy and calibration parameters of the non-metric camera used. -Calibration parameters of cameras (determined with the MATLAB software)
Methodology
Although, the basic rational of this study is to illustrate and develop a simple (easy to replicate) MATLAB procedure for determination of accurate 3D point coordinates of object points from CRP using non-metric cameras; ground survey methods was still conducted to determine: -Co-ordinates of exposure stations, -Co-ordinates of photo control points (PCP) and -Co-ordinates of check points that were used to validate the model. Sequentially, the procedure adopted in this study is as shown in Figure 1.
Determination of coordinates of exposure stations, PCP and check points was done using the ZTR 320 Hi-Target Total Station by conventional survey technique. Two exposure stations (A001 and A002) were established within 70 m distance away from the building and coordinated accordingly.
Thereafter, five (5) photo control points (P1-P5) used in obtaining the exterior orientation parameters and nine other check points (C1 -C9) used to check the accuracy of the determined 3D coordinates from CRP were also coordinated by taking observations to the designated points using the total station in reflectorless mode. Figure 2 shows the location of the PCPs and the check points on the building whose 3D coordinates are determined in this study.
Camera calibration was done in order to determine the intrinsic parameters of the camera (Zhang, 2000). Camera calibration for the two non-metric cameras used for the image acquisition was performed by taking ten (10) shots to a mounted checker board which has five rows and seven columns with 11.3 cm dimension of each squares. The acquired images were then processed using MATLAB 2014a software with a camera calibration add-in tool. The obtained results are presented in Table 1.
Photo shots were taken to the building whose 3D coordinates are to be determined from the two established exposure stations. The camera shots are taken such that 100% overlap is obtained from both exposure stations for each of the cameras.
Pixel extraction was done using the MATLAB software as the comparator. The pixel coordinates of PCPs and check points were extracted accordingly as illustrated in Figure 3. Since the MATLAB comparator environment has its origin at the top right corner, transformation from comparator coordinates to camera coordinates (with origin at the perspective point) was carried out by subtracting the x pixel coordinate from the x principal point coordinate (obtained from camera calibration) and subtracting the y principal point coordinate (obtained also from camera calibration process) from y pixel coordinate. The result of each camera coordinates was then multiplied by the pixel to millimeter conversion constant "0.2645833333".
The collinearity condition equation was used for transforming the camera coordinates to object coordinates in this study. The transformation was achieved in a two staged solution approach as follows: -Space resection stage (Determination of exterior orientation parameters): The exterior orientation parameters of the camera positions were determined using the collinearity equation given in Eqs (1) and (2). MATLAB codes used were modified after the works of Alsadik (2010). The code written executes the collinearity equation iteratively in a least squares adjustment until convergence is reached. The condition for convergence was defined such that the difference between final solution and previous solution does not exceed 0.001. The condition for convergence was modified by the authors in this study. ; where: dω , dφ and dκ are the corrections to be applied to omega, Phi and Kappa respectively; XL, YL and ZL are the 3D exposure station coordinates; a x and a y are the camera coordinate of the control points.
11
33 32 13 32 Figure 4 shows a graph of the iterations during the least squares determination of the orientation parameters for the right photo taken with the Infinix X509 camera. The figure shows that the MATLAB codes continue to iterate until the difference between the final value obtained for each of the parameters and the previous value does not exceed the specified range of 0.001 mm.
-Space intersection stage (Determination of 3D object coordinates from camera coordinates): Transformation from the 3D camera coordinates to 3D object coordinate system was again carried out with the MATLAB software by evaluating the collinearity condition equation given by Eqs (3) and (4). Eqs (3) and (4)
Results and discussion of results
Tables 2 and 3 present the exterior orientation parameters and also the 2D comparator and camera coordinates respectively obtained from both cameras. Since exterior orientation parameters are based in general on geometric and topologic characteristics of imaged objects, the computed orientation parameters (ω, φ and κ) as presented in Table 2 reveal that the photographs were taken in a near horizontal direction. Furthermore, the adjusted coordinates of the two exposure stations from where the left and right photos (respectively) were taken indicate the instability of the position of the camera at the various times of exposure. With an observed maximum difference of 21 cm and 32 cm in the computed horizontal position of the exposure station, it is obvious that the camera positions varied for each exposure. This could have been minimized if the camera was mounted on a tripod that is properly centered during exposure.
Furthermore, Table 4 presents the residuals of the coordinates at the check stations obtained by space intersection with those obtained by ground survey method.
From Table 4, it can be observed that the highest residuals are 11.8 cm (Eastings), 31.0 cm (Northings), and 5.9 cm (height) when the Tecno W3 camera was used. Similarly, the maximum residuals were -14.6 cm, 16.1 cm and 1.8 cm for the Eastings, Northings and Height coordinates respectively. The residuals obtained suggests that the Tecno W3 camera performed better in determination of the object coordinates than the Infinix X509 camera despite that the latter has a more refined focal length. Similar residual pattern is observed during the determination of the exterior orientation parameters for images obtained from both cameras. This is because the final adjusted exposure station coordinates obtained from the Tecno W3 camera is closer to known coordinates of stations. The result as obtained therefore confirms that while the focal length of the camera plays a significant role in image magnification, it does not necessarily depict that the relative image to object geometry is properly preserved. Notwithstanding, it is evident that centimeter level 3-D positional accuracy can still be achieved from CRP by using non-metric cameras. Consequent upon the centimeter level residual obtained from the space intersection results in comparison with ground survey coordinates, a statistical test (students -t test for equality of means and variances) was conducted on the obtained coordinates from space intersection at 99% confidence interval. The comparison of the results obtained from space intersection (Tecno W3 and Infinix X509) and those obtained from survey technique was done to ascertain the level of reliability of the use of non-metric cameras in low order (3rd order) accuracy position determination. Tables 5 and 6 present the results of the statistical tests of equality of means and variances performed on the coordinates obtained by space intersection from the Tecno W3 and Infinix X509 cameras respectively.
From Table 5, the corresponding p-value for the test statistic of the Levene's test is very large (0.99, 0.97 and 0.98 for the Northings, Eastings and Height respectively) therefore we accept the null hypothesis that there is no significant difference in the variances of the results obtained by ground survey technique and CRP using the Tecno W3 camera (Snedecor & Cochran, 1989). Similarly, we observe that there is no significant difference in the means of both results in the Northings, Eastings and Height. Similar result is observed in Table 6 on comparison of the results obtained from ground survey method with that from CRP using the Infinix X509 camera. This is again because all the obtained values are greater than the chosen significant level (0.01).
Conclusions
This study has ascertained the statistical reliability of using non-metric cameras for determination of low order accuracy position via CRP. This was achieved by using Collinearity condition equation in an iterative least square bundle adjustment process in the MATLAB software environment. Therefore, the study concludes that by a careful implementation of the conventional collinearity equation, third order accuracy positions can be obtained with the use of non-metric mobile phone cameras.
Furthermore, the study concludes that mobile phone cameras (non-metric) with a minimum of 5 megapixel and 5.40 mm focal length are suitable devices for CRP applications requiring 3rd order positional accuracy.
Finally, the study discovered that camera capacity for preservation of image to object geometric / topologic relations does not necessarily improve with increasing focal length.
Funding
There was no funding for this research. | 3,081.4 | 2021-07-15T00:00:00.000 | [
"Physics"
] |
The Development of Experimental Absorption Based on Arduino-Uno and Labview on Light Radiation by Colourful Surface
This development research aims to make a tool and test its ability to absorb the heat in light radiation by colored surfaces (black, dark green, blue, white). The benefit of this research is to present a simple laboratory that can be used in physics education class as a test tool for heat absorption. The research method that applied is Microcomputer Based Laboratory (MBL). The software that used is LabVIEW which connected to Arduino Uno hardware and heat source. The appropriate set of tools and materials is then tested for eligibility. The feasibility test of the device is reviewed from the aspect of content, linguistic, presentation and graphic by an expert validator as well as the compatibility of the findings with the theory. The results found that the darker the color of the object, the greater the ability of heat absorption and vice versa. Changes in power at a light source can affect changes in maximum temperature in the color absorption of heat. The greater the power provided by the light source, the greater the increase in maximum temperature. This development research concludes that the tools are feasible to use and appropriate based on existing theories.
her. His respondents also fail to apply it right in their daily life. He found that students consider bright colors like yellow and orange present warmer temperatures, while black and green present cooler ones.
The misunderstood of this concept is of concern to researchers so that it can be corrected. This case can be answered with the solution based on existing physical concepts. Therefore, some lessons that support students in mastering physics completely is needed.
Physics is essentially a collection of knowledge, ways of thinking and inquiry. Physics is not only about a collection of principles, concepts, or facts but also discoveries and prospects for further development. Furthermore, these findings can be applied or utilized in daily life.
Physics emphasizes concepts in learning or understanding. Good and right concepts help students to understand physics easier. Suseno (2014) states that physics has abstract concepts that need to be tested theoretically. The material that contains abstract concepts becomes a significant difficulty for participants.
INTRODUCTION
Heat is a process of energy transfer due to temperature differences. Daily life is often shrouded in heat from the sun. The sun as the biggest source of heat has many benefits for humans. Humans really enjoy the benefits of solar heat, as well as in various fields of agriculture, even as a source of electricity generation.
In daily activities, everyone must have felt hot. Most people don't know the cause of this phenomenon including students. Most students consider it happens because of the direct sunray or the type of clothes they wear. Even though other causes can make this happen such as the color of the clothes.
A study states that there are big differences between teachers' and students' understanding of the connection between color, temperature, and heat. Canlas (2016) states that his respondents fail to properly track the flow of heat energy from one system to anot-existing materials. One of the abstract materials is radiation on heat transfer.
The concept of radiation needs to be emphasized in the practical aspect to prove the material. In fact, in schools, it is still hard to find test tools that can be used to prove the truth about the concept of radiation in physics (Wulandari, et al. 2016). As a result, practical activities as a learning investigation medium cannot be carried out even though it can improve students' understanding of the concepts (Maghfiroh & Sugianto, 2011).
Puspasari (2017); Saepuzaman & Yustiandi, (2017) states that practicum tools are needed by educational institutions (schools). Practical tools can support the learning process in the classroom. Teachers can help students' learning performance by using tools such as technology.
Learning in 21st Century emphasizes technology-based activities. The presence of technology is convenience for teachers and students. Moreover, practicum activities combined with media/technology will provide more effectiveness and efficiency in learning. It's work and has been proved by some previous studies that combine computer devices with hardware (Arduino).
The research conducted by Zachariadou, et al., (2012) states that the educational laboratories based on computer technology with the help of Arduino provide convenience at a low cost. The advantage of Arduino-Uno is it doesn't need a programmer chip device. This can happen because there is a bootloader inside of it that will handle program uploads from the computer. Arduino-Uno has a USB communication tool, so laptop users who don't have serial / RS323 ports can use it. The programming language is relatively easy because the Arduino-Uno software is equipped with a fairly complete collection of libraries, it has a readymade module (shield) that can be plugged into the Arduino board. For example, shield GPS, Ethernet, SD Card, and others (Magdalena, et al. 2013).
The software that can be combined with Arduino-Uno in conducting heat absorption tests is LabView. LabVIEW can be used as a human-machine interface (HMI), it is because LabVIEW has program functions that can form an interface that represents several criteria as HMI. These functions include monitoring realtime conditions in the plant, visualizing events or processes that are happening, and it can carry out measurement logging data (Wardoyo, et al. 2013).
Consider the facts above, development research was carried out in order to make an experimental tool to support the physics learning process about the heat transfer material. The tool that was developed based on Arduino-Uno and LabVIEW.
METHOD
This research is an experimental research development. The research method used in the form of Microcomputer Based Laboratory (MBL). The design of the developed instrument can be seen as shown in Figure 1. Research procedures go through several processes, namely the process of making tools and the testing process. The manufacturing process is carried out in several stages including: (a) the stage of making a radiation test box (containing a bulb, colored plate and LM35 temperature sensor), (b) the wiring stage with an LM35 temperature sensor on Arduino-Uno, (c) the stage of making a virtual instrument in the LabVIEW application. This study uses four units LM35 temperature sensor and is affixed to colored surface. The LabVIEW application used is La-bVIEW 2014 (64bit). The completed manufacturing steps are then continued towards the integration process so that it becomes a series of heat absorption test equipment.
The series of research schemes that have been developed are then passed the feasibility test and theory-based tests. The feasibility test of the device is reviewed from the aspect of content, linguistic, presentation and graphic by an expert validator. After meeting the eligibility criteria, then proceed to testing in theory in the form of data collection.
Indicators of the success of the device are made such as a good feasibility test based on expert judgment and the ability of the tool to test the effect of color surfaces on heat absorption in light radiation. In addition, this study also took data in the form of the relationship of power variations with the maximum temperature achieved and the difference in time required to reach the maximum temperature.
The instrument feasibility observation data is based on several indicators. After the assessment is obtained then it is converted into the following categories (Sahidu, 2013).
The dependent variable is a different color and different lamp power used. The colors used in the test are black, dark green, blue and white. The dependent variable observed was the ability of heat absorption and the maximum temperature that could be achieved by each color. The control variables in this study are room temperature, and the distance of each color to the lamp.
RESULTS AND DISCUSSION
This research produced a tool that can test the heat absorption of several colors through the help of the LabVIEW application. In addition, the tool developed can also test the relationship of power variations (P) to the maximum temperature (T max ) on a colored surface.
The way this tool works is very simple. When the application is run and then, the lights are turned on, the temperature sensor will capture radiation by each color used simultaneously. The distance between the lamps and all plates is the same. So, the data obtained is valid. The LabVIEW application will read changes automatically with digital notes and virtual thermometers. The use of LabVIEW and LM35 Sensors are often used in several studies one of which is the Sandeep & Prakasam research (2019).
Making this tool does not take a long time and how to make it is very simple. The developed instrument is said to be good or can be used because it has passed several series of test processes. Tests carried out such as temperature sensor capability tests, LabVIEW tests, validation of experimental devices, feasibility tests of experimental modules and tests by taking data on the colors used.
LM35 Temperature Sensor Test
This test is carried out to assess the performance of the temperature sensor used. Arduino-Uno which is connected to LabView and temperature sensors is then run. Based on the temperature data contained in the LabVIEW front panel, it can be seen that the room temperature value is around 30 0 C. Subsequent tests using body temperature by touching hands on the LM35 sensor. The result shows that there is a change in temperature to 37 0 C which is visible at the thermometer in the front panel. According to Tansey and Johnson (2014), the normal temperature of the human body is at 37 0 C. This means that the LM35 temperature sensor is suitable for use as research material. The excess use of the LM35 temperature sensor is because it has a linear scale of +10 mV/°C and has a temperature range that can be measured from -55°C to + 150°C (Malvika, et al. 2015). When the temperature exceeds the threshold, the output will produce a buzzer.
LabVIEW test
LabView is useful in facilitating data control and instrumentation developed . To be able to communicate with Arduino, LabVIEW requires additional software called VISA (The Virtual Instrument Software Architecture) which is software for configuration, programming, and troubleshooting of instrumentation systems, PXI serial, Ethernet and USB interfaces. VISA provides a programming interface between LabVIEW and Arduino.
LabVIEW testing is carried out in the block diagram section. The block diagram created as shown in Figure 2 consists of four main parts of the program, namely temperature, serial port, channel sensor, and stop button. In each of these sections also contains programs that are representations of the results of measurements made by sensors and have been processed by Arduino. After running, the temperature identification display will appear on the LabVIEW front panel like Figure 3.
Test the validation of experimental tools
The design and development of tools that have been carried out are then tested for validity by experts. A Validity test uses a questionnaire assessment with several aspects of assessment. The questionnaire was given to the validator consisting of a tool expert lecturer, a media expert lecturer, a laboratory assistant, and two students. The results of these tests are experimental tools in the good category with a value of 85.
According to experts, the design of the tool has been well made. The safety and conditioning of the variables that may affect the measurement have been minimized properly. So, this experimental tool is suitable to be used to test the absorption of light on the radiation of light on a colored surface.
Test the feasibility of the experimental module
Every practicum activity requires a practicum instruction module. According to Waluyo & Parmin (2014), practical instructions play a role the scientific performance and the development of student attitudes. This role requires modules with good criteria and are suitable for use. The module must pass the feasibility test stage to then be used as well as possible. The feasibility test uses several indicators in a questionnaire. The aim is to get an assessment, suggestions, and criticism from experts in order to know the level of feasibility of the modules made. From the results of processing questionnaire, it obtained a value of 82 with a good category and is suitable for use.
According to expert validators, the design of the display modules, fonts, layouts is well arranged and attractive. The grammar used is in accordance with Indonesia dictionary (KBBI). The module content reflects structured practicum guidelines. So that the experimental module is feasible to be used as a guideline for the practical absorption of heat on colored surface light radiation.
Test Data
Tests carried out directly lead to data retrieval. LM35 sensor that has been tested for its capacity in measuring temperature, is then affixed to a colored plastic plate in a test box. The testing process begins with the initial temperature measurements on each sensor so that they are at the same temperature. Based on the initial test results obtained a room temperature of 30 0 C. After uniform temperature in all colors, the 25W yellow lamp is turned on and the results in the LabView application are immediately observed. In a simple measurement result every second can be extracted into Microsoft Excel for later analysis and graphs ( Figure 4).
Based on observations it was found that each color has a different absorption ability. The black color has a very large absorption compared to other colors (Giancoli, 2001). While the color white has a very small absorption compared to other colors. Dark green has higher absorption than blue and white. The following Table 2 shows the observations seen from the maximum temperature obtained each color. Hardiyanto, et al. (2017) stated that young colors such as white have fewer heat absorption rates, ranging from 10% -15%, whereas in old colors like black can absorb heat up to 95%. Research conducted by Stuart-Fox, et al. The maximum temperature reached by each color on the plat indicates that each color has a different heat absorption ability. Previous research suggests that the black surface absorbs more heat than the white surface at the same power (radiation source). Each color has a different emissivity value based on the level of darkness of the color. The darker the color, the greater the emissivity (Iannacone et al, 2012;Levandovski et al, 2013).
Color heat absorption ability has a time difference. Black is faster in reaching maximum temperatures with a time of 285s. Meanwhile, the color white becomes the longest color reaching a maximum temperature of 310s. Dark green takes 290s and blue takes 300s.
A large increase in temperature of heat absorption by plastic colors can be seen in Figure 4. The graph shows that there is direct heat absorption by colored plastic. This is evidenced by changes in temperature that occur. Very rapid and extreme temperature increases occur in black. While other colors increase slowly until the maximum value. Temperature changes with time inverse exponentially in all colors. This temperature change over time. The longer the time the slower the increase in temperature to the point where the temperature cannot increase anymore which is called the maximum temperature (T max ).
When related to existing theories, these results are following form of equation: P r =∆Q/∆t=eσAT 4 (1) where is the radiated power, is the surface area, is a universal constant called the Stefan constant, is temperature and is the object's emissivity, its value varies between 0 and 1 depending on the surface composition of the object (Tipler, 2001). The next data collection aims to determine the relationship of the addition of power (P) to the maximum temperature of each color. The type of incandescent lamp used is the same. But the power used is different, namely 25watt, 40watt and 60watt. Adding power will also increase the intensity of the light provided. This means that the light intensity is given more, the change in temperature will be increase. The use of incandescent lamps is due to several reasons including cheap and not much influence on electric voltage compared to fluorescent lamps and the effect on health is better than incandescent fluorescent lamps (Ogrutan, et al. 2016;Monroe, 1999).
The observations found can be seen in Table 3. Based on the table, the temperature change increases when the light source power is increased. This is consistent with the existing theory that the greater the power applied to a colored surface, the maximum temperature will be even greater (Tipler, 2001).
CONCLUSION
Based on observations and discussions, it can be concluded that the developed instrument can be used well to test the absorption of several colors of heat on plastic and can test the relationship of variations in power to the Table 3. The Relationship between Power Changes and the Maximum Temperature of Each Color maximum temperature produced. Black absorbs heat faster than dark green, blue, and white. White has the weakest absorption capacity of the heat compared to other colors. The older a color is, the stronger its heat absorption ability, and brighter the color, the smaller its heat absorption ability. The greater variation in power causes the maximum temperature in the heat absorption of each color to increase. The researcher suggests that the education element can make or use the development of this experiment for the benefit of students in order to achieve learning objectives. Further research is expected to get more information about some of the findings in the development of this experiment. | 4,060 | 2020-01-01T00:00:00.000 | [
"Physics"
] |
In silico structural homology modelling of EST073 motif coding protein of tea Camellia sinensis (L)
Background Tea (Camellia sinensis (L). O. Kuntze) is known as the oldest, mild stimulating caffeine containing non-alcoholic beverage. One of the major threats in south Asian tea industry is the blister blight leaf disease (BB), caused by the fungus Exobasidium vexans Masse. SSR DNA marker EST SSR 073 is used as a molecular marker to tag blister blight disease resistance trait of tea. The amino acid sequences were derived from cDNA sequences related to EST SSR 073 of BB susceptible (TRI 2023) and BB resistant (TRI 2043) cultivars. An attempt has been made to understand the structural characteristics and variations of EST SSR 073 locus that may reveal the factors influencing the BB resistance of tea with multiple bioinformatics tools such as ORF finder, ExPasy ProtParam tools, modeler V 9.17, Rampage server, UCSF-Chimera, and HADDOCK docking server. Results The primary, secondary, and tertiary structures of EST SSR 073 coding protein were analyzed using the amino acid sequences of both BB resistant TRI 2043 and BB susceptible TRI 2023 tea cultivars. The coding amino acid sequences of both the cultivars were homologous to photosystem I subunit protein (PsaD I) of Pisum sativum. The predicted 3D structures of proteins were validated and considered as an acceptable overall stereochemical quality. The BB resistant protein showed CT repeat extension and did not involve in topology of the PsaD I subunit. The C terminal truncation of BB resistance caused the formation of hydrogen bonds interacting with PsaD I and other subunits of photosystem I in the modeled three-dimensional protein structure. Conclusions Camellia sinensis EST 073 SSR motif coding protein was identified as the PsaD I subunit of photosystem I. The exact mechanism of PsaD I conferring the resistance for blister blight in tea needs to be further investigated.
Background
Tea, Camellia sinensis (L.) O. Kuntze, is the second most popular, healthy non-alcoholic beverage in the world. It is an economically important tree crop, grown in several countries in Asia and Africa. Globally, Sri Lanka is the third-largest producer and the 2 nd largest exporter of tea [1] with its popular brand "Ceylon Tea", playing a key role in the international tea trade.
Blister blight leaf disease (BB), caused by the obligatory fungal pathogen, Exobasidium vexans Masse (Basidiomycetes) is one of the most devastating biotic constraints, commonly found in a majority of tea plantations in south Asia including Sri Lanka, India, Indonesia, Bangladesh, Thailand, Nepal, Vietnam, Cambodia, and Japan [2]. The BB leaf disease causes approximately 25 to 30% crop loss annually depending on the agro-ecological region (AER) of Sri Lanka [3]. The disease infection also causes a reduction of the quality of black tea by changing the composition of leaf biochemical components such as polyphenols, catechins, and enzymes which highly influence the quality of black tea [4].
Presently, the control of this disease is solely based on chemical means, where spraying of Cu-based fungicides directly on to the foliage, before infection, being the recommended practice. The disease is very common in major tea growing areas of Sri Lanka throughout the year, and the repeated application of Cu-based fungicides may lead to chemical residues in the end product "Black tea." Though tea, is a popular healthy beverage, exceeding maximum residual levels (MRLs) of pesticides, heavy metals, and other chemical impurities, leads to a non-tariff trade barrier in exporting and consumption of tea [5]. Therefore, to overcome the said constraints and also to maintain the quality of the symbol "Ceylon Tea", the development of resistant cultivars to BB disease would be the most effective and sustainable approach to control the disease.
Tea is a perennial crop, which requires 20-25 years to develop a new improved cultivar and therefore, the application of marker-assisted selection (MAS) techniques would be highly desirable to increase the efficiency and effectiveness of the breeding program. Bulk segregant analysis (BSA) approach has successfully been applied to identify a SSR DNA marker EST SSR 073 to tag blister blight disease resistance trait using a segregating population derived from the two parents: TRI 2043 (resistant cultivar) × TRI 2023 (susceptible cultivar) [6]. EST SSR 073 motif correlates with the photosystem I subunit D (PsaD I) and identifying the structural model of a protein of the motif is one of the key points for understanding the underlying biological mechanism at a molecular level. The available knowledge on the structure and the role of PsaD I protein is scarce. The experimental elucidation of the tertiary structure of a protein is a huge and a difficult endeavor [7]. The X-ray crystallography or nuclear magnetic resonance techniques (NMR), which are applied to identify the tertiary structure of a protein, are time consuming and expensive [8,9]. However, the "In silico homology modeling" provides an alternative application to predict the 3D structure of proteins with better validation. Homology modeling is known to be one of the best and extensively used computational methods to generate three-dimensional structures when there is more than 35% sequence identity between the known protein structure (template) and the unknown protein structure [10][11][12][13].
In silico homology modeling has been successfully applied to predict the structure of Matrix metalloproteinase 25 (MMP 25) and it can be used as a target for the inhibition of airway remodeling in asthma disease by using in silico drug designing methods [14]. Furthermore, an acceptable protein structure of nif A which is involved in nitrogen fixation of rhizobial strains, has been identified and validated by using in silico structure homology modeling [15]. In silico characterization of ChiLCV coat proteins of Begomovirus in chilli aided in the development of strategies to control Begomovirus disease of crops [16]. Vascular wilt disease of tomato caused by Fusarium oxysporum f. sp. lycopersici is controlled by targeting a novel candidate protein FOXG_04696 which has been developed by homology modeling [17].
With the above background, molecular modeling of EST SSR 073 motif coding protein was the objective of the current study to provide a topology for revealing protein folding and functional structure which would help in understanding the blister blight fungal infection for combating the disease.
DNA sequence of EST SSR 073 motif
The EST SSR 073 motif containing cDNA sequence of blister blight disease resistant tea cultivar TRI 2043 (BBR) (GenBank accession no: MT303817) [18] and the DNA sequence of EST SSR 073 motif of blister blight disease susceptible tea cultivar TRI 2023 (BBS) (GenBank accession no: MT303818) [6] were retrieved. The sequences of BBR and BBS were aligned with BLASTn program [19].
Amino acid sequence analysis and template retrieval
All possible open reading frames (ORFs) for both the nucleotide sequences were identified by ORF finder (NCBI) [20]. Amino acid sequences derived by conceptual translation of each of the ORFs were used as the query for searching homologous sequences using BLASTp [21] against uniprotKB/swissprot database to identify potential orthologs [22]. The search was repeated against Protein Data Bank (PDB) and the amino acid sequence which contained the putative conserved domain and showed the highest sequence similarity and the lowest E value, was selected for structure modeling.
Homology modeling and energy minimization
PsaD subunits have been reported to possess N (1Met to GLY 90) and C (Asp171 to Gly 193) terminal unstructured domains which are involved in the assembly of photosystem I super complex [23,24]. Accordingly, the three-dimensional (3D) structure of the identified proteins were built using modeler V 9.17 [25] using the crystal structure of PsaD subunit of Pisum sativum photosystem I super-complex (PDB ID: 5l8r_D) [26] as the template and viewed by UCSF Chimera [27]. The generated model of C. sinensis PsaD-like protein was superimposed on the PasD subunit of 5l8r in energy minimized state while keeping the rest of the complex fixed. Superimposition was carried out using Matchmaker function of UCSF-Chimera [28,29]. Energy minimization was carried out using AMBER force field [30][31][32] in chimera with 100 steps of steepest descents followed by 10 steps of conjugate gradients.
Protein model validation
The quality of generated models was validated with respect to backbone and side chain geometry. To validate protein backbone quality, Ramachandran plot [33] was generated using Rampage server (http://mordred.bioc. cam.ac.uk/~rapper/rampage.php) and the backbone quality was validated by analyzing φ and ψ angles using Ramachandran plot. Further; VERIFY3D, ERRAT, PROVE, PROCHECK, AND WHATCHECK [34] servers were used to analyze the overall quality of the model.
Structural comparison of modeled proteins
Optimized energy minimized protein models generated for the sequence derived from BB susceptible TRI 2023 and the sequence derived from BB resistant TRI 2043 were superimposed using Matchmaker function of UCSF Chimera and RMSD (root mean square deviation) value was obtained. Further, structural comparison was carried out by superimposing and RMSD evaluation against the template protein that was used to generate the protein models.
Validation of physiological parameters
Structure-function relationship of the derived protein models was further validated using ProtParam tool of ExPASy Proteomics Server [35] for various parameters such as estimated half-life, theoretical pI, instability index, aliphatic index, and grand average of hydropathicity (GRAVY). The values were compared with the protein sequence used as the template as well as the BB susceptible and resistant genotypes.
Molecular docking
Crystal structure of PSI-I complex of Pisum sativum (PDB ID: 5l8r) was retrieved from RCSB-PDB. Subunits that interact with PsaD were retained and other subunits were removed using UCSF-Chimera. Modeled PsaD was docked against the binding site of the complex by HAD-DOCK docking server [36]. Docking results were viewed using UCSF Chimera. Default parameters were used for docking process and energy (E) values of each docking event were obtained. For comparative analysis, docked complexes were compared with the interactions of the PSI complex of Pisum sativum.
Nucleotide sequence analysis and comparison
The best aligning nucleotide sequence of the Pairwise alignment of BBS and BBR sequences indicated 20 microsatellite CT repeat extension in 5′UTR of BBR and a single nucleotide deletion at 552 bp (deletion of C) (Fig. 1). Amino acid sequence analysis The amino acid sequence obtained from the longest ORF of BBS sequence was 193 amino acid in length and had 97% similarity and 95.2% identity over the full length of PsaD subunit of Pisum sativum photosynthesis complex I (Fig. 2). The BBR protein sequence had a 86.2% identity and 89.1% similarity with the PsaD subunit of P. sativum. Both the nucleotide and protein sequence comparison clearly showed a frame shift mutation (single nucleotide deletion) leading to truncated C terminal caused by the single nucleotide deletion of BBR at 552 bp as shown in the sequence alignments in Fig. 3. The search was repeated against PDB database and the crystal structure of P. sativum photosynthesis complex I was retrieved for homology modeling.
Homology modeling and structure comparison
After homology modeling and energy minimization superimposed model of the BBS sequence retained the same fold and domain structures as the PsaD subunit of P. sativum with a RMSD value of 0.530 Å. More importantly, all amino acids that form H bonds with other subunits of the complex were conserved in P. sativum and C. sinensis (Fig. 4a). The model generated from BBR sequence clearly showed truncation of the C terminus, completely eliminating the anti-parallel beta strands and the C terminal unstructured domain (Fig. 4b, c). Rest of the structure was the same as the BBS and had an RMSD value of 0.616 Å when compared to aligning region of PsaD of P. sativum.
Structure validation
After energy minimization, the models generated for BBS and BBR were showed good overall stereochemical quality as expected for modeled with high sequence identity with the template [37][38][39]. The BBS model had no residues in the outlier region, while 92.7% of residues lied in the favored region (Fig. 5), and BBR had 90% of its residues in the favored region, while no residues were found in the outlier region (Fig. 6). Further quality analysis using VERIFY3D, ERRAT, PROVE, PROCHECK, AND WHATCHECK servers indicated a good overall quality of both BBS (Fig. 7) and BBR (Fig. 8) homology models.
Generated structures and the PsaD subunit of P. sativum were further compared with physicochemical parameters such as theoretical isoelectric point (pi), estimated half-life, instability index, aliphatic index, and the grand average hydropathicity (GRAVY) of BBR, BBS, and PsaD subunit of P. sativum as given in Table 1.
Molecular docking and interaction analysis
For comparative analysis, docked complexes were compared with the interactions of the PSI complex of Pisum sativum (Fig. 9). Interaction analysis showed that in both P. sativam and BBS PsaD s, all residues that involve in H bonding with PsaA, Psa C, and Psa L to be conserved and showing similar interaction patterns. However, BBR proteins possessed a C terminal truncation which prevents PsaD from interacting with PsaC.
Discussion
The EST SSR 073 motif containing cDNA sequences of both TRI 2043 and TRI 2023 display significant similarity with PsaD I subunit nucleotide sequence of Chinese tea cultivar shuchazao by confirming the reliability of DNA sequence data were used in the study. Furthermore, the DNA sequences displayed high similarity with the Diospyros kaki photosystem I subunit D-I. Accordingly, both of the sequences were identified as putative PsaD subunit of photosystem I of Camellia sinensis. In almost all plants, the Psad I gene exists as a single-copy gene [40].
The CT extension of BBR does not change the sequence and the structure of protein. However, it may be involved in the post-transcriptional regulation of PsaD I expression. In fact, 5′UTR of most of the mRNA sequences contains regulatory structures such as hairpins [41]. Furthermore, the BBR sequence possesses a single nucleotide deletion at 552 bp (deletion of C) which leads to a truncation of ORF and the formation of a shorter protein product.
Homology modeling of the resulted amino acid sequences from BBS sequences using PsaD I of P. sativum as a template produced a three-dimensional structure of the protein which is very similar to the template used. PsaD I subunit is reported to be expressed as unfolded protein having a leader sequence and later undergoing proper folding when complexed with the rest of the subunit of photosystem I super complex. Also, PsaD subunits possess N terminal and C terminal domains having no rigid structure. However, these unstructured domains of PsaD I, particularly the C terminal unstructured domain has reported to be involved in forming H bonds with PsaC subunit of the complex. Therefore, energy minimization of the models generated for PsaD subunits of BBS and BBR sequences were carried out within the binding surface of PsaD I complex with the PsaD subunit.
After energy minimization, the 3D model obtained for BBS was almost identical to the secondary and tertiary structure of the PsaD I of P. sativum. Both of the generated models for BBS protein sequence and the PsaD I of P. sativum showed identical hydrogen bonding pattern with the other subunits of photosystem I super complex. When both models of BBS were docked against PsaD binding site of the photosynthesis complex, interestingly both protein models retained the same H bonding pattern within the complex by further confirming the adoption of the Psad I topology by them. Photosystem subunit D of photosystem I complex is hydrophobic and is exposed on stromal face of the thylakoid. The subunit interacts with ferredoxin in both cyanobacteria and eukaryotes [42].
The model generated for BBR had a C terminal truncation that eliminates the entire unstructured C terminal domain along with the C terminal anti-parallel beta sheets. It is unlikely that the BBR sequence would produce the functional protein because the interactions with PSaC is critical to maintain the stability of photosystem complex I. However, the N terminal extension of PsaD I in higher plants stabilizes the interactions with PsaC and rest of the photosystem I complex. Cross-linking study of barely, suggested that PsaD is stabilized with interaction between photosystem I H subunit [43] and Psa D is not tightly bound with photosystem I core [44]. Therefore, the C terminal truncation of the BBR may not involve in changing the stability of photosystem I complex and its main functions. Further, stability study of PsaD in Synechocystis has shown the reduced flavodoxin in, photosystem complex I without the PsaD subunit [45]. The mutation of BBR may be involved in so far unreported function and the predicted model may lead to discover the functions of mutated PsaD subunit.
The half-life of protein is the time it takes for half of the amount of protein in a cell to disappear after its synthesis in the cell. In this study, the half-life of all the proteins was 30 h. The instability index provides an estimate of the stability of the protein in a test tube. A protein whose instability index is smaller than 40 is predicted as stable, while a value above 40 predicts that the protein will be unstable [46]. The results from this study recorded instability index higher than 40 in all the three proteins (P. sativum, BBR, and BBS) indicating unstable properties. The aliphatic index of a protein is defined as the relative volume occupied by aliphatic side chains (alanine, valine, isoleucine, and leucine). If the aliphatic index is higher, the thermostability increases; therefore, the predicted proteins are thermostable. Isoelectric point is the condition of a solution where the amino acid produces the same amount of positive and negative charges and the ultimate charge will be zero. Isoelectric point (pI) of the three proteins was 9.4 to 10.1 and it seemed to be basic protein. The value of GRAVY spread between − 0.310 and − 0.514 and lower values are suggested to have good interactions between water and protein [47,48].
In silico computational approach has been applied to predict protein structures of leaf rust disease resistance, and Lr 10 coding protein was identified as more resistant against the leaf rust disease of wheat [49]. However, the Psa D subunit has not been reported as associated with disease resistance up to now. Epicatechin (EC) and epigallocatachingallate (EGCG) are involved in BB disease resistance in tea [50]. Flavonoids biosynthesis pathway which synthesizes EC and EGCG is light sensitive and therefore, the allele may indirectly involve in the BB disease resistance.
Conclusions
The EST SSR 073 motif flanking sequences of Camellia sinensis is conserved in the PsaD I subunit of photosystem I complex, and the developed in silico structures of homology proteins are reliable with their physicochemical parameters. When compared with BBS, CT repeat extension of BBR did not change the topology of PsaD I subunit but the single nucleotide deletion leads to C terminal truncation of BBR coding PsaD I subunit by preventing hydrogen bond interaction with other complexes of photosystem I. It can be recommended that more sequence data of EST SSR 073 motif flanking sequences in different tea cultivars and analyzing the protein model would lead to unravel the mechanism of BB resistance. | 4,387.6 | 2020-07-19T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Use of Plasmodium falciparum culture-adapted field isolates for in vitro exflagellation-blocking assay
Background A major requirement for malaria elimination is the development of transmission-blocking interventions. In vitro transmission-blocking bioassays currently mostly rely on the use of very few Plasmodium falciparum reference laboratory strains isolated decades ago. To fill a piece of the gap between laboratory experimental models and natural systems, the purpose of this work was to determine if culture-adapted field isolates of P. falciparum are suitable for in vitro transmission-blocking bioassays targeting functional maturity of male gametocytes: exflagellation. Methods Plasmodium falciparum isolates were adapted to in vitro culture before being used for in vitro gametocyte production. Maturation was assessed by microscopic observation of gametocyte morphology over time of culture and the functional viability of male gametocytes was assessed by microscopic counting of exflagellating gametocytes. Suitability for in vitro exflagellation-blocking bioassays was determined using dihydroartemisinin and methylene blue. Results In vitro gametocyte production was achieved using two isolates from French Guiana and two isolates from Cambodia. Functional maturity of male gametocytes was assessed by exflagellation observations and all four isolates could be used in exflagellation-blocking bioassays with adequate response to methylene blue and dihydroartemisinin. Conclusion This work shows that in vitro culture-adapted P. falciparum field isolates of different genetic background, from South America and Southeast Asia, can successfully be used for bioassays targeting the male gametocyte to gamete transition, exflagellation. Electronic supplementary material The online version of this article (doi:10.1186/s12936-015-0752-x) contains supplementary material, which is available to authorized users.
Background
Among the actions required for malaria elimination, blocking the transmission of Plasmodium parasites from human to mosquitoes is critical [1]. Passage through the vector is an obligatory step for the parasite to continue its life cycle and it relies exclusively on the most mature forms of the sexual stages, stage V gametocytes. Mosquito feeding assays remain the gold standard to evaluate transmission-blocking strategies but require resourceintensive techniques. To circumvent these technical difficulties, several in vitro transmission-blocking bioassays targeting the sexual stages of the parasite have been described with different endpoints and various interpretative values [2,3]. Although the outputs of these in vitro assays are difficult to transpose in vivo, the most clinically relevant assay should be able to evaluate the ability of a compound to either kill stage V gametocytes (viability assay) or inhibit their ability to differentiate into later mosquito stages (functional maturity assay). It is possible to induce in vitro gamete formation from Plasmodium falciparum male and female gametocytes where male gametocytes undergo major differentiation leading to the production of eight mobile gametes, by a process called exflagellation, which is easily observed visually.
Currently, the majority of P. falciparum in vitro transmission-blocking studies use a handful of reference laboratory strains, isolated decades ago. While these strains are useful to normalize high throughput screenings, results should be verified on natural parasites that have been selected after years of multiple drug exposures and hence are likely to display differential drug responses compared to reference strains. In addition, it is well known that gametocyte production capacity is lost over time of in vitro culture [2,4]. Laboratories must therefore rely on precious stocks of cryopreserved isolates with minimal passage since in culture. Adaptation of P. falciparum isolates from patients to in vitro blood-stage culture is routinely performed. Studies describing the use of field isolates for in vitro gametocyte cultures have been published in the early 1980s and have led to the selection of the current reference strains [4][5][6][7][8]. Nowadays using circulating parasites after culture adaptation for transmissionblocking assay is rarely performed. Some studies have shown that they can be used in gametocyte viability assay [9] or for experimental infections of mosquitoes [10], however their use for exflagellation-blocking bioassays, reporting in vitro functional maturity of gametocytes has not been reported yet. A requirement for their use in such assay is to be able to produce functional gametocytes in high enough number to be meaningful in a bioassay.
The objective of the work presented here was therefore to determine if culture-adapted field isolates could be used for recently developed in vitro exflagellationblocking bioassays [11].
Reference strain and field isolates
The P. falciparum South American chloroquine-resistant strain 7G8 (MRA-926) has been obtained from the MR4. Plasmodium falciparum isolates were collected from monoinfected patients seeking treatment in 2013 in French Guiana (Q206 and Q188) and in 2014 in Cambodia (6831 and 6836).
In vitro culture adaptation
Culture adaptation of isolates was performed using standard protocols [12,13]. Briefly, after removal of plasma, the red blood cell (RBC) pellet was washed three times in RPMI 1640 supplemented with gentamicin (Gibco-Life Technologies SAS, France) and placed in culture medium (RPMI 1640, 0.5 % AlbuMAX II (Gibco-Life Technologies SAS, France), 2 % decomplemented human plasma) at 4 % haematocrit at 37°C in 5 or 10 % O 2 , 5 % CO 2 , rest of N 2 atmosphere. Parasitaemia was checked daily and kept below 2 % by dilutions with fresh RBC and medium. Field isolates were considered adapted to in vitro conditions after three weeks of uninterrupted culture. After culture adaptation, asexual blood-stage sensitivity to anti-malarials was determined using the [ 3 H]-hypoxanthine incorporation assay [14].
In vitro gametocyte production and maturation
Gametocyte cultures were performed following published protocols [2,11]. Briefly, asexual cultures with a parasitaemia of~5 % were used to seed gametocyte cultures at 0.5-1 % parasitaemia and 4 % haematocrit under 5 or 10 % O 2 , 5 % CO 2 , rest of N 2 atmosphere. Culture medium (RPMI medium with 25 mM HEPES, 50 mg/L hypoxanthine, 2 g/L sodium bicarbonate, 10 % human serum) was replaced daily but without any further addition of RBCs and critically, temperature was maintained at all time at 37°C.
Gametocytogenesis was evaluated morphologically using Giemsa-stained blood films. Stage V male gametocyte functional maturity was assessed by observation of exflagellation in wet preparation under bright-field microscope. A 30-μL aliquot of gametocyte culture was briefly centrifuged, the cell pellet was resuspended in 15 μL of ookinete medium (RPMI medium with 25 mM HEPES, 50 mg/L hypoxanthine, 2 g/L sodium bicarbonate, 100 μM xanthurenic acid, 10 % human serum) and then introduced into a chamber of a FastRead disposable haemocytometer slide (Immune Systems). Exflagellation centres were observed at 10 × or 20 × magnification.
Exflagellation-blocking bioassay
Exflagellation-blocking assays were performed according to published protocols [11]. Assays were performed using gametocyte cultures providing high enough exflagellation centres for meaningful measures (>30 in five 10 × microscopy fields).
To show the suitability of field isolates for exflagellationblocking assays, the activity of 1 μM of dihydroartemisinin (DHA), methylene blue (MeBlue) and chloroquine (CQ) was evaluated. Assays were carried out in 1.5-mL tubes containing 170 μL of gametocyte culture medium with 1 μM of drugs dissolved in either DMSO or methanol. Thirty μL of stage V mature gametocyte was dispensed into each assay tube. Tubes were then placed into a 37°C incubator. After 24 h, exflagellation was induced by temperature drop to 25°C and replacing culture medium with ookinete medium. The number of exflagellation centres were recorded and compared to controls (DMSO or methanol). Activity was expressed as percentage of exflagellation inhibition compared to controls.
Results
The protocols were first optimized by using the South American isolates prior to being verified with the Southeast Asian isolates. Gametocyte development was straightforward at the first attempt. Initially however, although stage V gametocytes were clearly identified in the cultures, no exflagellation was observed. This suggests that bioassays with a read-out based only on morphological development of gametocytes do not report on their onward functional viability, therefore they may be underpowered. As already observed by others [2,4], the age of uninfected RBC when initiating the gametocyte culture was a critical factor for successful maturation: RBC stored at 4°C more than~two to 3 days rapidly compromise maturation. Additionally, variability in human serum was found also to be a critical point for gametocyte maturation, so pools from different donors of serum were used when possible. Using fresh uninfected RBC, fully mature gametocytes capable of exflagellation were generated for all isolates. For all isolates and 7G8, gametocyte development time was similar and maturity peaked at~16-18 days of culture, slightly longer than what is usually reported for 3D7,~12-14 days (Fig. 1) [2,11].
It is important to note that all the experiments using 7G8 and the South American isolates were performed in a different laboratory (Institut Pasteur de la Guyane, Cayenne) than those using the Cambodian isolates (Institut Pasteur du Cambodge, Phnom Penh) showing good reproducibility in the protocols that can be easily implemented in laboratories doing P. falciparum in vitro culture adaptation.
Exflagellation is a time-dependent process [15]. Within 20 min after induction, for all field isolates and the reference strain 7G8, a plateau is reached allowing consistent measurements for~10 min (Fig. 2). This is similar to what has previously been observed with 3D7 [11].
Although the patterns of gametocyte development and time for exflagellation are similar for all isolates and laboratory strains, there are variations in the amount of exflagellation centres from one isolate to another (see Additional file 1) as already observed by others [4]. For example, in French Guiana, the isolate Q206 consistently gave higher number of exflagellation than Q188. Nevertheless, for all the isolates, using gametocyte cultures at maturity, the numbers of exflagellation centres obtained were high enough to allow significant measures in exflagellation-blocking bioassay (from~35 to >100, see Additional file 1).
Based on those observations, a previously described exflagellation-blocking protocol was adapted to the four field isolates and 7G8 [11]. Exflagellation centres were recorded between 20 and 30 min after induction of 18 days-old gametocyte cultures. As a proof-of-concept, the activity of DHA and MeBlue was evaluated as both compounds have been previously reported to block exflagellation of 3D7 [11]. As negative control, CQ was used on a sub-set of the strains [16]. Mature gametocytes were incubated 24 h in presence of 1 μM of each drug before exflagellation was induced. All isolates responded similarly to the compounds with 1 μM DHA giving 78-99 % inhibition and consistently near total inhibition with 1 μM of MeBlue (Table 1). As expected, CQ had no effect on exflagellation in any isolate tested. For each independent culture, exflagellation centres values are normalized to the highest value of the culture (data are presented in Additional file 1). The mean and SEM of two independent culture is shown for the two French Guiana strains (Q188 and Q206) while a single culture for the Cambodian isolate (6836) and the reference strain 7G8 are represented Expressed as a percentage of inhibition compared to the drug-free control (mean ± SEM, n = 3) | 2,473.8 | 2015-06-04T00:00:00.000 | [
"Medicine",
"Biology"
] |
FoldX as Protein Engineering Tool: Better Than Random Based Approaches?
Improving protein stability is an important goal for basic research as well as for clinical and industrial applications but no commonly accepted and widely used strategy for efficient engineering is known. Beside random approaches like error prone PCR or physical techniques to stabilize proteins, e.g. by immobilization, in silico approaches are gaining more attention to apply target-oriented mutagenesis. In this review different algorithms for the prediction of beneficial mutation sites to enhance protein stability are summarized and the advantages and disadvantages of FoldX are highlighted. The question whether the prediction of mutation sites by the algorithm FoldX is more accurate than random based approaches is addressed.
Introduction
Increasing protein stability is a desirable goal for different life science purposes, this includes design of therapeutic proteins like antibodies, human cell biology and biotechnology. It is expected that such improvements result in lower process costs and in enhanced long-term stability of the applied proteins. Enhanced protein stability in general can be achieved due to various factors, e.g. by increasing thermostability, salt tolerance or tolerance towards organic solvents, and consequently, involves different bioinformatics approaches. The emphasis for application of proteins for medical and chemical purposes is focused on the fields of biosensors (e.g. blood sugar test strips [1]), biomedical drugs (e.g. antibodies against cancer cells [2]) or on the synthesis of complex as well as chiral substances for food (e.g. high fructose corn syrup [3]) and pharmaceutical industry (e.g. sitagliptin [4]) [5]. Obviously biosensors for medical use assisting to diagnose several diseases like breast cancer [6], diabetes [7] or infectious diseases [8] have to be functional and reliable for a defined period of time. It seems, for example, to be beneficial to gain more thermostable antibodies for treatment of cancer diseases [9]. Furthermore, for the synthesis processes of drugs and pharmaceutically relevant intermediates, applied enzymes have to be active and functional for long batch times to prevent drastic increases in costs per unit of product [10][11][12] [13]. For industrial enzymes improved stability against heat, solvents and other relevant process parameters, e.g. acidic or basic pH, often becomes crucial [14]. In addition, improved thermostability of enzymes might prevent thermal inactivation and conformational changes at higher reaction temperature, which could in turn be beneficial to raise turnover rates and substrate concentrations [15][16][17][18][19]. According to the Q 10 -rule of thumb, biological systems and enzymes tend to have a Q-factor of 2, i.e. a temperature increase of about 10 K results in doubling the reaction rate [20,21]. Contrary, stabilization also can lead to more rigid enzymes, which are less active at the same temperature, but show the same activity at elevated temperatures. This can be observed when enzymes from hyperthermophilic and mesophilic sources are compared with respect to their reaction rates [22]. A thermostabilized enzyme might be less active at a certain temperature, but longer active at higher temperatures, which allows applying the Q 10 -rule on the condition that the activity can be maintained for longer time periods at elevated temperatures [23,24]. However, it is also possible that a thermostabilized enzyme is not impaired in activity at moderate temperatures and is even more active at higher temperatures [25][26][27][28][29]. Arnold et al. demonstrated that an enzyme can be simultaneously developed towards higher stability and activity [24,30,31,45].
Protein denaturation and degradation due to both heat and solvents are based on the same protein unfolding processes. The most important forces for protein stability, which are relevant targets for the improvement of protein stability, are intramolecular interactions, i.e. disulfide bridges, ion interactions, hydrogen bonds, hydrophobic interactions and core packings [22]. The rigidity and flexibility of proteins seem to be the key parameters [33] and both can be influenced by using Computational and Structural Biotechnology Journal 16 (2018) [25][26][27][28][29][30][31][32][33] immobilization techniques or enzymatic engineering in order to expand the durability of protein applications.
The well-known technique to immobilize proteins to gain stabilized proteins is applied for antibody to increase the thermostability [34,35]. Beside the improvement of thermostability using immobilization techniques, directed evolution is an alternative approach, but the existence of a robust high-throughput screening assay for the selected protein is an important prerequisite [11,[36][37][38][39][40][41]. For enzymes, activity can be used as parameter for functionality at elevated temperatures, but for non-catalyzing proteins a more sophisticated assay or even protein purification is necessary. Furthermore, the number of necessary protein variants, created by using e.g. error prone PCR or other techniques, is mostly about 10 3 to 10 5 and even higher. However, in case of enzymes, selection might easily be performed by heating up unpurified crude extracts from cells [42]. Using this technique, protein melting temperature T m can be improved in the best screenings by far N10°C [42][43][44][45][46][47]. The artificial evolution approach can result in a 140-fold increase in long-term enzymatic activity like demonstrated for the alkaline pectate lyase [48]. Also for antibodies or antibody fragments evolutionary approaches can be used [49]. For example, the protein melting temperature of a human antibody domain was improved by N10°C [50].
Directed evolution can be a successful strategy but might not be applicable at any time, especially when missing a high-throughput screening or protein purification for stability measurements is necessary. Therefore, this mini-review focusses on protein/enzyme engineering for thermostabilization using structure guided site-directed mutagenesis. This strategy helps to reduce screening effort and also costs, which is an issue in large screenings. Furthermore, we selected the popular FoldX algorithm and would like to answer the question: how powerful is FoldX for common protein stability improvements? FoldX is a frequently used algorithm and many studies about protein stabilization experiments are described in literature. A second reason is the user-friendliness of FoldX, because it can easily be used as plugin in the protein structure visualizer YASARA [51]. In contrast to other, command-line based in silico approaches, which are without graphical interface, scientists not familiar with programming languages like python, Java, R-script and so on and hide a larger workload for these kind of approaches.
Computational Approaches for Stability Engineering
Besides FoldX, several other algorithms used for site-directed mutagenesis are also known aiming at different inter-and intra-protein interactions. One target is the introduction of artificial disulfide bridges into proteins. As a covalent bond, a disulfide bridge is a strong physical force which helps to stabilize the 3D-structure within a protein chain or between monomers raising the protein melting temperature (T m ) up to 30°C, and can achieve an increase of thermal stability by N40% at distinct temperature levels [52][53][54]. However, introduction of disulfide bridges can also lower the T m up to −2.4°C [55]. Starting with the protein structure as basis for molecular dynamic simulations and energy calculations, amino acid positions can be selected which are potentially suitable for engineering of disulfide bridges. However, these approaches need profound understanding of different prediction and calculation software, often without graphical interfaces [56]. Two examples are the algorithms for fast recognition of disulfide mutation sites FRESCO and the open access webtool "Disulfide by Design 2" (DD2), but only DD2 can be easily used with graphical interface [57][58][59]. Using FRESCO, a temperature improvement of 35°C was achieved due to the combination of single disulfide bridges. Jo et al. increased the T m of the α-type carbonic anhydrase by 7.8°C due to an introduction of a disulfide bond efficiently predicted by DD2 [60]. Albeit the promising examples, it has to be mentioned, that the extensive FRESCO strategy cannot be understood as an end-user script, but more or less as a blue script for improving thermostability. Wijma et al. further improved FRESCO by integrating FoldX and Rosetta as additional energy improvement tools and combined these results with Dynamic Disulfide Discovery algorithm based on molecular dynamic simulations [57,145]. After in silico elimination of less stable variants, they expressed, tested and combined beneficial point mutation sites and disulfide bonds to gain two variants with drastically increased T m of 34.6 and 35.5°C, respectively. However, this strategy is very extensive and many point mutations have to be tested and combined.
Beside the possible de novo design of disulfide bridges, further computational methods like helix dipole stabilization or core repacking exist. Core repacking aims only at the core region of proteins to increase hydrophobic interactions. Vlassi et al. showed that a reduction of hydrophobic interaction decreases the protein stability [61] and computational tools like RosettaDesign and Monte Carlo simulations are used for the optimization process [62][63][64]. Adapted and automated RosettaDesign framework for repacking are available, but profound programming capabilities are needed for applying [64]. In contrast, helix dipole stabilization methods lead to improvements of molecular interactions at the end of helices, which can also result in drastically increased T m by N30°C [65,66]. However, for this strategy elaborate electrostatic calculations and molecular simulations are needed to select mutation sites. Beside these strategies, consensus sequences can also help to improve protein stability using multiple sequence alignments. In so-called consensus guided mutagenesis, sequences are compared according to their amino acid frequencies to elucidate consensus sequences. Replacing amino acid residues at certain positions with the most prevalent ones often result in highly beneficial energy improvements stabilizing proteins [67][68][69][70]. Huang et al. demonstrated that by using consensus approach it was possible to improve the stability of the reductase CgKR1 T 50 15 (temperature at which the enzyme activity is halved within 15 min) by N10°C [71].
Un/folding Energy Algorithms
At least 22 standalone calculation tools are described for the prediction of beneficial single and multiple point mutation sites to reduce the Gibbs free energy of proteins. The broad diversity of these standalone software was reviewed by Modarres et al. and beside the mentioned FoldX algorithm, other tools like PoPMuSiC, CUPSAT, ZEMu, iRDP web server or SDM were mentioned [72][73][74][75]. These calculation tools are structure or sequence dependent and use energy calculation functions or machine learning algorithms. Also databases collecting changes in protein stability (e.g. for Gibbs free energy changes and melting temperatures) are available like ProTherm (others are e.g. MODEL, DSBASE), but it should be mentioned that 70% of the logged mutations are destabilizing which leads to unintended biases [73,76]. Beside the more popular algorithms others are published like mCSM, BeAtMuSiC and ENCoM using different calculation approaches [77][78][79]. Moreover, it is also possible to use crystallographic data gained by X-ray analysis of protein structure. The B-factor is an indicator for the flexibility of positions within the protein. Reetz et al. used this factor for increasing protein stability [80].
FoldX
Considering the diversity of available algorithms, it seems to be very difficult to choose an efficient tool for protein stabilization. In this review we concentrated on the force field algorithm FoldX, which we have used by ourselves to create a more stable ω-transaminase [81]. The force field algorithm, which was originally created by Guerois et al. became popular as webtool in 2005 by Schymkowitz et al. and was refined to the currently last version FoldX 4.0 [82][83][84].
The software package FoldX includes different subroutines e.g. RepairPDB, BuildModel, PrintNetworks, AnalyseComplex, stability and so on. For example the repair function of FoldX reduces the energy content of a protein-structure model to a minimum by rearranging side chains and the function BuildModel introduces mutations and optimizes the structure of the new protein variant. The energy function of FoldX is only able to calculate the energy difference in accurate manner between the wildtype and a variant of the protein [83].
FoldX is also able to calculate total energies of objects, but this function is only valid to predict, whether a problem with the structure is given or not. The total energy results are not able to predict experimental results [51,83]. The core function of FoldX, the empirical force field algorithm, is based on free energy (ΔG) terms aiming to calculate the change of ΔG in kcal mol −1 (Eq. (1)). This equation includes terms for polar and hydrophobic desolvation or hydrogen bond energy ΔG wb of a protein interacting with solvent and within the protein chain. Increased protein rigidity works against entropy and consequently, results in entropy costs.
Furthermore the energy algorithm also addresses the free energy change at protein interfaces of oligomeric proteins. This term is mainly ΔG kon which calculates the electrostatic contribution of interactions at interfaces [83]. The parameters which are important for the energy calculation were determined in laboratory experiments, e.g. for amino acid residues and explored on protein chains. Beside this distinct parameters the letters of the total energy equation, a to l, represent the weights of separate terms [83]. The algorithm works with optimal accuracy when the hypothetical unfolding energy difference of the hypothetical energy from a wild-type variant is determined in comparison to a mutated protein. For this purpose, FoldX uses the 3D structure to calculate the hypothetical unfolding energy. The algorithm was first implemented as free available web server tool and is now a commercially available software, which can be used free of charge for academic purposes. As a prerequisite, a highly resolved crystal structure is necessary to calculate the energy changes for site-directed mutagenesis experiments. Users can also automate the calculations e.g. by using the programming code Python to calculate whole protein amino acid exchanges at every distinct position [85,86]. Furthermore, FoldX shows very good performance with respect to calculation time even on single core computers. Compared to e.g. ZEMu, FoldX needs only half the time for calculating single site mutations (calculated on one single processor) and is faster than RosettaDDG [75,87]. As mentioned earlier, it can be used with a graphical user interface as plugin tool in YASARA, which opens FoldX towards a broad community of researchers.
FoldX-applications
FoldX was applied for different stability tests, especially when protein design was performed to predict whether distinct mutations are destabilizing. Therefore FoldX shows to be beneficial for different approaches and is not strictly limited to a distinct function. Moreover the peptides, individual domains and multi-domain proteins can be addressed for experiments [88,89]. The algorithm has been used to explain and predict stability improvements when designing solvent stable enzymes. The group of U. Schwaneberg designed a laccase with improved resistance in ionic liquids for using hardly soluble lignin lysates and increased tolerance towards high molarity of salts [90]. Beside its suitability for protein energy calculations, it is also possible to calculate the energy changes of DNA-protein interactions [91]. Furthermore, FoldX is implemented in a lot of approaches like Fireprot, FRESCO, TANGO or in combination with Voronoia 1.0. Voronoia helps to engineer protein core packing and is based on energy calculations using FoldX as force field algorithm [92,93]. The program FRESCO (Framework for Rapid Enzyme Stabilization by Computational libraries) joins Rosetta with FoldX energy calculations and combines single point mutations with disulfide predictions for drastic energy improvements of enzymes [57]. The direct alternative to FoldX is the Rosetta energy algorithm. It was shown, that Rosetta predicts other possible mutation sites than FoldX for energy improvements, but only 25% of all mutations were predicted by both algorithms for the same protein [57]. Additionally, the authors of this work excluded 52% of the selected mutations manually, e.g. excluding hydrophobic mutations on surface exposed sites and mutations to a proline residue or a proline residue to a non-proline residue. At the end around 65% of the predicted mutation sites were calculated by FoldX and thereby 35% of all predicted sites were discarded. Voronoia in combination with FoldX helps to predict and to explain why hydrophobic interactions in the core region can have a huge impact on protein stability, as it was demonstrated for the thermophilic lipase T1 [93]. Another approach is TANGO, which helps to predict the aggregation of proteins and, in combination with FoldX, is a powerful tool for the investigation of predicted mutations regarding solubility, e.g. protective site-directed mutations for the Alzheimer's αβ peptide [83,94,95]. Furthermore, FoldX can also support protein design. For engineering the zinc-finger nuclease, FoldX was used as prediction algorithm to detect if the binding energy of a distinct DNA-sequence was increased or decreased [96]. Also, FoldX can help to estimate protein-protein binding energy and resulting stabilities of protein complexes. Szczepek et al. redesigned the interface between dimeric zinc finger nucleases using FoldX as prediction tool [97]. After deeper in silico calculations, only 9.3% of predicted variants were expressed and proved to be beneficial for stability [97]. Considering these and other experiments the performance for FoldX should be critically evaluated.
Therefore, we gathered FoldX experiments and analyzed available publications if FoldX was helpful for increasing protein stability (Table 1). In general, the amount of standalone FoldX calculations for protein stability improvement in literature is relatively low compared to approaches, which are using FoldX as an additional tool for stability calculation. Furthermore, FoldX is often only used as algorithm for explanations of the impact of mutagenesis in proteins with respect to stability or towards predictions of protein-protein or protein-DNA binding. Therefore, in Table 1 only mutations with effects based on FoldX predictions are pointed out, even when authors used additional calculation tools. When no pre-selection of distinct protein sites are indicated, a complete calculation of every position in the protein was performed. In this case, every amino acid was exchanged with the 19 standard amino acid residues. This calculation setup results very fast in high numbers of predicted variants. One criterion for excluding many variants is to set an energy barrier for ΔΔG between −0.75 and −5 kcal mol −1 for stabilizing mutations and for destabilizing mutations of N+1 kcal mol −1 in accordance to the Gaussian distribution of FoldX predictions (SD for FoldX 1.78 kcal mol −1 [95]) [98]. After this preselection a large number of variants can be excluded. Furthermore, mutations nearby active sites, proline residue mutations or variants which seem to be critical for protein structure can be also excluded. In addition to manual exclusion of variants, also MD-simulations can be performed to exclude more variants. Aiming to indicate the grade of improvement, protein melting temperature T m or half-life activity is frequently used. The largest positive changes in stability were reached for the T1 lipase, phosphotriesterase, Flavin-mononucleotide-based-fluorescent-protein and for the haloalcohol dehalogenase ranging from 8 up to 13°C using single site mutations [99]. However, FoldX also allows prediction of destabilizing mutations, which were performed very accurately for the thermoalkalophilic lipase with a negative ΔT m of 10°C. Noticeably, stabilizing predictions are useful for biotechnology and are therefore mentioned in studies with biotechnological background, whereas destabilizing predictions seem to be more applicable for human disease studies [95]. Beside mere stability studies, also protein design was performed towards specific enzyme-DNA binding or antibody-antigen binding, which can reduce the size of antibody libraries for distinct antigen targets. Moreover, FoldX can also be used to adapt or to select Table 1 Summary of different FoldX applications for single point mutations regarding stability and ligand binding. The changes achieved i.e. T m is listed for changes in protein melting temperature. ΔΔG displays the change in free energy by mutation/design of proteins. "Criteria" describes the settings for experiments. "Cut-off" means, that the authors excluded those indicated FoldX predictions (with a higher or lower ΔΔG) from further experiments. ΔΔG is defined as: ΔΔG = ΔG fold (mutation) − ΔG fold (wild type).
Accuracy of FoldX
From the FoldX studies summarized in Table 1 it can be deduced that the crystal-structure quality is crucial for accurate calculations. From a benchmark test on myoglobin mutants Kepp concluded that some protein stability predicting algorithms are extremely sensitive towards crystal structure quality and some are very robust [121]. It seems plausible that interactions are in the order of atomic resolutions and therefore the crystal structure quality has an important influence on energy calculations [107,[121][122][123]. However, for the prediction algorithms PoPmuSic, I-Mutant 3.0 and other tools the influence of the crystal structure quality was only in the order of 0.2 kcal mol −1 (standard deviation using different structure data of superoxide dismutase 1) [123]. According to Christensen et al. FoldX belongs to the more structure sensitive methods and Kepp suggested to use only structures solved in scales of near-atomic-resolutions [107,121]. With reference to Table 1, all cited studies were based on crystal structures with an resolution better than 3.3 Å and an average resolution of 1.87 Å which is nearby atomic resolution (1 Å is approximately the diameter of an atom plus the surrounding cloud of electrons). Furthermore, also protein-protein interactions might have an influence on the prediction power, which are not addressed in some performance studies like from Tokuriki et al., because only monomeric proteins were selected [124], but e.g. Pey et al. and Dourado and Flores showed that also oligomers can be utilized for calculations (using extra terms: ΔG kon electrostatic interaction, ΔS tr translational and rotational entropy) [125,75]. The rootmean-square deviation (RMSD) in a dataset of protein complexes, with known energy impacts, was determined to be 1.55 kcal mol −1 (for single mutants) [75]. In contrast the algorithm ZEMu addresses such mutations on interfaces better than FoldX [75].
Based on experimental results, it can be concluded that the prediction of destabilizing mutations is more accurate than prediction of stabilizing mutations. After pre-selection of experiments with the aim to increase stability, it can be concluded that the approximate success rate for mutations predicted as stabilizing (according to their negative ΔΔG-values) is only 29.4% (focusing on 13 single mutation experiments). For experiments with focus on detection of destabilizing mutations or for simple proof of destabilizing events, sample size is only five but the average success rate is 69%. However, with regard to the small sample size a valid statement about success rates cannot be made. It is likely that many unsuccessful experiments were not published and therefore, the real success rate might be much lower. Khan et al. evaluated the performance of 11 protein stability predictors by using a dataset containing N1700 mutations in 80 proteins which were taken from ProTherm database. It was shown that FoldX was among the three most reliable algorithms, predicting 86 true positives and 133 false positives for stabilization from 776 variants, which is a success rate of 64%. Only Dmutant and MultiMutate were comparably successful in predicting stabilization events [102].
Compared to other results, this success rate might be higher than expected. As an example for an investigation of the performance of an adapted FoldX algorithm, laccase isoenzymes were used. The large calculation setup included 9424 FoldX predictions per isoenzyme using an adapted algorithm. These calculations were evaluated by using molecular dynamic simulations and additional different settings within FoldX were tested. Like mentioned before, the authors remarked that FoldX needs high-resolution crystal structures of proteins and that FoldX performs well in predicting stability trends, but not in a quantitative accuracy [75]. Using the deciphering protein (DPP) as an example, Kumar et al. showed on the basis of 54 DPP mutants how accurate the prediction power of FoldX is compared to other tools. The study focused on destabilizing mutation events, which were described in medical data sets of DPP and concluded that the R-value (correlation coefficient) was only 0.45 to 0.53. The quality of the crystal structures in this study ranged between 1.07 and 1.93 Å [77]. Potapov et al. utilized for performance investigation a protein database set regarding 2156 variants in 59 proteins. The crystal structure qualities were not reported. However, they concluded that 81.4% of T m changes were qualitatively predicted correctly [127]. Furthermore Potapov et al. headlined their work for analyzing different protein stability tools "Assessing computational methods for predicting protein stability upon mutation: good on average but not in the details", and proved that FoldX has potential to predict if a certain mutation is stabilizing or destabilizing, but its prediction power decreases, when ΔΔG is correlated with ΔΔG experimental or with stability parameters like T m [127]. The correlation coefficient R, plotting ΔΔG theoretical against ΔΔG experimental values from databases was only 0.5 (for negative and positive ΔΔG), but it also depends on the crystal structure and on the nature of the protein [127].
For better comparison, we summarized statistical parameters given for the different algorithms derived from Kumar et al., and other studies (as indicated) in Table 2, but not for every algorithm we were able to find a full set of data. For example, Kumar human superoxide dismutase1 [73] which is involved in the motor neuron disease [131]. In this benchmark test FoldX and PoPMuSiC performed best by far. FoldX showed in this test a correlation coefficient R of 0.53 and a standard error of 1.1 kcal mol −1 , which was only slightly surpassed by PoPMuSiC [77]. In conclusion, the authors described FoldX as more sensitive and accurate towards difficult mutation sites but PoPMuSiC as more accurate to all kinds of mutations. Also, they demonstrated that FoldX can interpret patient data for dismutase diseases quite well with an R of 0. 45 [130]. In contrast, by investigating 582 mutants of seven proteins, R was 0.73 with a standard deviation of 1.02 kcal mol −1 [134]. The best result was a correlation coefficient of 0.73 for a lysozyme structure [127] and was increased to 0.74, when only hotspot areas were chosen for prediction. The standard deviation (1.37 kcal mol −1 ) was in the same range of Broom et al. (1.78 kcal mol −1 ) [95]. However, Tokuriki et al. calculated that the average ΔΔG for any protein is +0.9 kcal mol −1 ΔΔG, which clearly shows, that the probability of destabilization events is much higher, which concludes that the number of stabilizing theoretical mutants is much lower [135]. Not only the number of theoretical stabilizing mutations seems to be lower, also the correlation for predicted stabilizing mutations towards real stabilization is weaker than for destabilizing mutations [57,111]. In contrast, Khan et al. showed for human proteins that FoldX predicts more stability increasing variants than destabilizing variants, which might be a hint that human proteins are relatively nonrigid and less thermostable compared to other protein sources or that distribution of ΔΔG calculated against the frequency of stabilizing and destabilizing mutations is only protein depending [102]. Furthermore, the calculated ΔΔG Foldx energies deviate from real ΔΔG measurements. The values can be recalculated using an experimental factor ΔΔG experimental = (ΔΔG calculated + 0.078) 1.14 −1 [135,136]. Depending on the method used to evaluate FoldX, the accuracy will be in the range from 0.38 to 0.80 [102,129]. Obviously, FoldX can predict positions which are important for stability, but the discrimination between different amino acid residues at one site is not really powerful, e.g. an exchange of lysine to glutamate did not result in any change of ΔG, but experimentally a stabilization was observed [120,128]. The summarized results in Table 2 demonstrate that actually all algorithms are not able to design or predict single mutation events towards trustworthy one mutation protein designs. However, FoldX shows a good performance in most of the studies compared to other algorithms, but it is necessary to increase the number of experimental mutations above 3 to achieve probable true positive results for protein engineering experiments. A general disadvantage of FoldX and other algorithms seems to be that FoldX often predicts hydrophobic interactions but at the expense of protein solubility [95].
The Next Generation of FoldX Based Predictions
Due to the low accuracy of all algorithms for stabilization mutations, algorithms often are combined to find coincident predictions or to prove predictions with a second algorithm. A popular combination is FoldX and Rosetta-ddG to gain more stabilizing mutation predictions. It was shown that FoldX and Rosetta-ddG predictions overlapped only in 12%, 15% or 25%, respectively, which means that a good coverage of beneficial mutations can only be achieved when more than one tool is used [57,87,105]. As a consequence of low prediction accuracy, popular algorithms are continuously improved. Recently a refinement of the Rosetta energy algorithm was reported with increased accuracy and faster calculation times. This demonstrates also the continuing importance of stability prediction in the field of protein engineering, but the authors stated that it is still far away from a final gold standard in the field of energy content prediction [137].
A sophisticated approach is the freeware webtool FireProt [138]. The FireProt algorithm uses FoldX as a pre-filter to select beneficial mutations which are subsequently proved in a second round using Rosetta-ddG. Only if Rosetta-ddG also predicts these mutations as putatively stabilizing they will be used for the experimental realization of these amino acid exchanges. Furthermore, the algorithm uses a consensus analysis of the protein-sequences to predict evolutionary beneficial mutation sites towards stability. These selected sites are then evaluated for their suitability using FoldX. The algorithm is divided into three stages using different methods for crosschecking the accuracy of the calculations and combines putative beneficial mutations to gain further improvement of stability. The free webtool of FireProt allows even inexperienced users to perform protein energy calculations. Bednar et al. demonstrated at two examples the utility of this algorithm using the example of two enzymes and combined many mutation sites with overall improvements of ΔT m of 21°C and 24°C for the combination of all sites [87]. However, to verify if FireProt is useful or not, more studies are necessary. Furthermore, the core function of FoldX algorithm does not simulate backbone movements of the protein, which might be a potential factor to improve FoldX [75]. The stability prediction tool of Goldenzweig et al. might be an alternative to the mentioned Fireprot -algorithm. Similar to Fireprot, it combines information gained in sequence homology alignments and of energy calculations using crystal structure data and Rosetta-ddG. Using the human acetylcholinesterase, an improvement in stability (ΔT m = 20°C) was demonstrated and simultaneously, the expression level in E. coli BL21 was increased. They hypothesized, that putatively destabilizing mutations can be excluded from mutation libraries using homologous Table 2 Summary of different algorithms evaluated in performance tests considering prediction accuracy in comparison to experimentally investigated mutations and calculated statistical parameters. This table displays reported standard deviations of predicted true positives and true negatives. Accuracy is defined as ratio of true positives/true negatives to the total number of predictions. R-values (correlation coefficients) describe how precisely the predicted energies fit to database values.
Conclusion
The performance of FoldX depends drastically on the quality of the crystal structure and it is unclear if the protein source might have an influence on the accuracy of such algorithms. Nevertheless, FoldX seems to be more accurate for the prediction of destabilizing mutations and less accurate for the prediction of stabilizing mutations, but in both cases it was shown that FoldX is clearly better than random approaches: e.g. Christensen et al. described FoldX as one of the most accurate single site stability predictors and Potapov et al. even described the accuracy of FoldX as impressive compared to other algorithms [122,127]. The natural success rate for random mutagenesis is only~2%, which was surpassed by most experiments [95,140]. Therefore, FoldX seems to be a promising tool for protein design, but as mentioned by Thiltgen et al. we agree that FoldX cannot serve as a gold-standard for generally improving stability of proteins. Moreover, using FoldX together with other algorithms for reciprocal control of calculation results, Rosetta-ddG or PoPmuSiC as filter for true positive results will most probably increase the accuracy and the success rate of thermostability engineering [141,87,95]. In general the accuracy can be improved additionally, when mutation outliers are eliminated or additional MD-simulations are performed [83]. FoldX was used successfully in different approaches (Table 1) aiming from enzyme stabilization towards predictions of protein-protein interactions (especially for drug design) or for the prediction of disease-associated mutant proteins, making FoldX a versatile tool for life science [81,[142][143][144]. The progress in protein stability prediction is striking, however up to now no in silico calculation can fully spare experimental procedures, although the existing tools can reduce the amount of lab experiments significantly. | 7,406 | 2018-02-03T00:00:00.000 | [
"Biology",
"Computer Science"
] |
A Variant of D’alembert’s Matrix Functional Equation
<jats:p>The aim of this paper is to characterize the solutions Φ : <jats:italic>G → M</jats:italic>
<jats:sub>2</jats:sub>(ℂ) of the following matrix functional equations
<jats:disp-formula>
<jats:alternatives>
<jats:graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphic/j_amsil-2020-0025_eq_001.png" />
<m:math xmlns:m="http://www.w3.org/1998/Math/MathML" display="block">
<m:mrow>
<m:mfrac>
<m:mrow>
<m:mi mathvariant="normal">Φ</m:mi>
<m:mrow>
<m:mo>(</m:mo>
<m:mrow>
<m:mi>x</m:mi>
<m:mi>y</m:mi>
</m:mrow>
<m:mo>)</m:mo>
</m:mrow>
<m:mo>+</m:mo>
<m:mi mathvariant="normal">Φ</m:mi>
<m:mrow>
<m:mo>(</m:mo>
<m:mrow>
<m:mi>σ</m:mi>
<m:mrow>
<m:mo>(</m:mo>
<m:mi>y</m:mi>
<m:mo>)</m:mo>
</m:mrow>
<m:mi>x</m:mi>
</m:mrow>
<m:mo>)</m:mo>
</m:mrow>
</m:mrow>
<m:mn>2</m:mn>
</m:mfrac>
<m:mo>=</m:mo>
<m:mi mathvariant="normal">Φ</m:mi>
<m:mrow>
<m:mo>(</m:mo>
<m:mi>x</m:mi>
<m:mo>)</m:mo>
</m:mrow>
<m:mi mathvariant="normal">Φ</m:mi>
<m:mrow>
<m:mo>(</m:mo>
<m:mi>y</m:mi>
<m:mo>)</m:mo>
</m:mrow>
<m:mo>,</m:mo>
<m:mi> </m:mi>
<m:mi> </m:mi>
<m:mi> </m:mi>
<m:mi> </m:mi>
<m:mi> </m:mi>
<m:mi> </m:mi>
<m:mi>x</m:mi>
<m:mo>,</m:mo>
<m:mi>y</m:mi>
<m:mo>,</m:mo>
<m:mo>∈</m:mo>
<m:mi>G</m:mi>
<m:mo>,</m:mo>
</m:mrow>
</m:math>
<jats:tex-math>{{\Phi \left( {xy} \right) + \Phi \left( {\sigma \left( y \right)x} \right)} \over 2} = \Phi \left( x \right)\Phi \left( y \right),\,\,\,\,\,\,x,y, \in G,</jats:tex-math>
</jats:alternatives>
</jats:disp-formula>
and
<jats:disp-formula>
<jats:alternatives>
<jats:graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="graphic/j_amsil-2020-0025_eq_002.png" />
<m:math xmlns:m="http://www.w3.org/1998/Math/MathML" display="block">
<m:mrow>
<m:mfrac>
<m:mrow>
<m:mi mathvariant="normal">Φ</m:mi>
<m:mrow>
<m:mo>(</m:mo>
<m:mrow>
<m:mi>x</m:mi>
<m:mi>y</m:mi>
</m:mrow>
<m:mo>)</m:mo>
</m:mrow>
<m:mo>−</m:mo>
<m:mi mathvariant="normal">Φ</m:mi>
<m:mrow>
<m:mo>(</m:mo>
<m:mrow>
<m:mi>σ</m:mi>
<m:mrow>
<m:mo>(</m:mo>
<m:mi>y</m:mi>
<m:mo>)</m:mo>
</m:mrow>
<m:mi>x</m:mi>
</m:mrow>
<m:mo>)</m:mo>
</m:mrow>
</m:mrow>
<m:mn>2</m:mn>
</m:mfrac>
<m:mo>=</m:mo>
<m:mi mathvariant="normal">Φ</m:mi>
<m:mrow>
<m:mo>(</m:mo>
<m:mi>x</m:mi>
<m:mo>)</m:mo>
</m:mrow>
<m:mi mathvariant="normal">Φ</m:mi>
<m:mrow>
<m:mo>(</m:mo>
<m:mi>y</m:mi>
<m:mo>)</m:mo>
</m:mrow>
<m:mo>,</m:mo>
<m:mi> </m:mi>
<m:mi> </m:mi>
<m:mi> </m:mi>
<m:mi> </m:mi>
<m:mi> </m:mi>
<m:mi> </m:mi>
<m:mi>x</m:mi>
<m:mo>,</m:mo>
<m:mi>y</m:mi>
<m:mo>,</m:mo>
<m:mo>∈</m:mo>
<m:mi>G</m:mi>
<m:mo>,</m:mo>
</m:mrow>
</m:math>
<jats:tex-math>{{\Phi \left( {xy} \right) - \Phi \left( {\sigma \left( y \right)x} \right)} \over 2} = \Phi \left( x \right)\Phi \left( y \right),\,\,\,\,\,\,x,y, \in G,</jats:tex-math>
</jats:alternatives>
</jats:disp-formula>
where <jats:italic>G</jats:italic> is a group that need not be abelian, and <jats:italic>σ</jats:italic> : <jats:italic>G → G</jats:italic> is an involutive automorphism of <jats:italic>G</jats:italic>. Our considerations are inspired by the papers [13, 14] in which the continuous solutions of the first equation on abelian topological groups were determined.</jats:p>
Introduction
Throughout this paper, let G be a group with neutral element e, and σ : G → G be a homomorphism such that σ • σ = id. Let M 2 (C) denote the algebra of complex 2 × 2 matrices. It will represent the range space of the solutions in this paper. The purpose of this paper is to solve the following matrix functional equation where Φ : G → M 2 (C) is the unknown function. The contribution of the present paper to the theory of matrix d'Alembert's functional equations lies in the study of (1.1) on groups that need not be abelian. On abelian groups the solutions of Eq. (1.1) are known: the matrix or even operator version (1.1) of d'Alembert's functional equation with σ = −id has for Φ(e) = I been treated by Fattorini ([7]), Kurepa ([9]), Baker and Davidson ( [2]), Kisyński ([8]), Székelyhidi ([17]), Chojnacki ([3]), Sinopoulos ([12]) under various conditions like G = 2G or the solution being bounded on G. For a general involutive automorphism σ not just σ = −id, Stetkaer ( [14]) determined the general solution Φ : G → M 2 (C) of (1.1). He did not need extra assumptions on the abelian topological group G and also found the solutions of (1.1) when Φ(e) = I.
The 2 × 2 matrix valued solutions of (1.2) and (1.3) are given in Corollaries 6.1 and 6.2, respectively. Example 5.5 shows that solutions of (1.2) are not generally abelian (see Notation). This is in contrast to the complex valued solutions of (1.2) which are multiplicative ( [15,Theorem 3.21]). We also show that any continuous solution of (1.1) on a compact group is abelian. Another main result of this paper is the solution of the functional equation the sine addition law and the symmetrized additive Cauchy equation play important roles in finding the solutions of the functional equation (1.1). The complex-valued solutions, where G is a semigroup, of (1.5), (1.8), and (1.9) were studied by Stetkaer in [16], [15,Chapter 4], and [15,Chapter 2], respectively, while the complex-valued solutions, where G is a possibly nonabelian group or monoid, of (1.6) and (1.7) were obtained by Fadli, Zeglami and Kabbaj in [5] and [6], respectively. General results about similar scalar functional equations on abelian groups are summarized in the monograph by Aczél and Dhombres [1] that contains many references. Notation. Throughout this paper we work in the following framework and with the following notation and terminology. We use it without explicit mentioning. G is a group that need not be abelian with neutral element e. Let id : G → G denote the identity map, and σ : G → G a homomorphism of G such that σ • σ = id. We let M 2 (C) the algebra of all complex 2 × 2 matrices, I its identity matrix and GL 2 (C) the group of its invertible matrices. We use the notation A(G) for the vector space of all additive maps from G to C, and put A ± (G) := {a ∈ A(G) : a • σ = ±a}.
By N (G, σ) we mean the set of the solutions θ : G → C of the homogeneous equation, namely Let S be a semigroup and X be a groupoid. A function f : S → X is multiplicative on S if f (xy) = f (x)f (y) for all x, y ∈ S. A character of G is a multiplicative function from G into C * . A function f : S → X is abelian, if for all x 1 , x 2 , · · · , x k ∈ S, all permutations π of k elements and all k = 2, 3, · · · . Any abelian function f is central, meaning f (xy) = f (yx) for all x, y ∈ S.
Auxiliary results
The following lemma presents some results that are essential for the proof of our first main result (Theorem 5.1).
Lemma 2.1. If the pair X, Z : G → C satisfies the functional equation where γ : G → C is a multiplicative function such that γ = 1, then X and Z are abelian functions.
Proof. For all x, y, z ∈ G we have Subtracting the two previous identities we get for all x, y, z ∈ G. Putting x = z in (2.2) we obtain Let z 0 ∈ G first satisfy that γ(z 0 ) = 1, then we get by (2.3) that Z(z 0 y) = Z(yz 0 ) for all y ∈ G.
Next, we show that X is abelian. Indeed, making the substitutions (x, yz) and (x, zy) in (2.1), we get respectively Subtracting the two previous identities, we get .
Changing x and y in (2.1) we see that the function X is central. Since X and Z are central functions, then X is abelian. From the equation (2.1) and since γ = 1 we can prove that Z is also abelian. Hence we get the claimed result.
A connection to the sine addition law
The following lemma lists pertinent basic properties of any solution Φ : G → M 2 (C) of (1.1) satisfying Φ(e) = I.
x ∈ G is also a solution of (1.1). Proof.
(2) Interchanging x and y in (1.1) we get and then replacing y by σ(y) in the last equation, we obtain by using (1) that (3) can be trivially shown.
Lemma 3.2 below derives an interesting connection between (1.1) and the sine addition matrix functional equation, viz.
Proof. Making the substitutions (ax, y), (σ(y)a, x) and (a, xy) in (1.1) we get respectively Subtracting the middle identity from the sum of the two others we get after some simplifications that shows that the functional equation (1.1) is connected with the sine addition matrix functional equation as follows:
Simultaneous triangularization
To set the stage let Φ : G → M 2 (C) be a solution of the functional equation (1.1), namely Suppose that Φ(e) = I. In view of Lemma 3.1 (2) the elements of the set {Φ(x), x ∈ G} commute pairwise. Then it is easy to verify after some computations that the elements of the following bigger set E = {Φ(x), Φ a (x) | x, a ∈ G} also commute pairwise, so by linear algebra all elements Φ(x), Φ a (x) of E can be brought into upper triangular form. Therefore there exist six functions φ 1 , φ 2 , ψ 1 , l 1,a , l 2,a , l 3,a : G → C, and a matrix P ∈ GL 2 (C) such that According to Lemma 3.1 the function x → C(x) = P −1 Φ(x)P, x ∈ G is also a solution of (1.1), so its components satisfy the following system of functional equations Likewise, the component functions of Φ a , a ∈ G satisfy the following system of equations By the definition of Φ a , the functions l 1,a , l 2,a and l 3,a can be expressed in terms of φ 1 , φ 2 and ψ 1 as follows: for all x ∈ G then the elements of the set {Φ(x) | x ∈ G} can be simultaneously diagonalized and so we may assume that ψ 1 = 0. Thus the system (4.2) becomes as follows: Otherwise we have φ := φ 1 = φ 2 and l 0,a := l 1,a = l 2,a where a ∈ G. Then by (4.2) combined with (4.3) we get: for all a, x, y ∈ G.
Main results
Putting x = y = e in (1.1) we get Φ(e) 2 = Φ(e), from which we see that Φ(e) : C 2 → C 2 is a projection, so there are only the following three cases: Φ(e) = 0, Φ(e) = I or Φ(e) is a 1-dimensional projection.
The first case implies that So from now on we are going to focus only on the other two cases. The first main theorem of the present paper concerns the second case: it highlights the form of the solutions Φ of the matrix functional equation (1.1) for which Φ(e) = I. It reads as follows: are the matrix valued functions of the three forms below in which P ranges over GL 2 (C): where χ 1 and χ 2 are characters of G. (2) where χ is a character of G such that χ = χ • σ and a ± ∈ A ± (G).
where χ is a character of G such that χ = χ • σ, ψ is a solution of the symmetrized additive Cauchy equation (1.9) such that ψ ∈ N (G, σ) and Proof. It is easy to verify with simple computations that all formulas above for Φ define solutions of (1.1). So it remains to show the other direction. So we assume that Φ : G → M 2 (C) is a solution of (1.1) such that Φ(e) = I. With the notation from Section 4, we have two cases: So we are in case (1) of our statement. Case 2: Suppose that φ 1 = φ 2 = φ, then for every a ∈ G we have l 1,a = l 2,a =: l 0,a . Since φ is a solution of (1.5), then from [16, Theorem 2.1] there exists a character χ of G such that Now, we are going to distinguish between two subcases: Subcase 2.1: If χ = χ • σ, then we get φ = χ. From (4.1) ψ 1 is a solution of the following equation: ψ 1 (xy) + ψ 1 (σ(y)x) = 2χ(x)ψ 1 (y) + 2ψ 1 (x)χ(y), x, y ∈ G.
Dividing (5.4) by χ(x)χ(y) and putting Γ := ψ 1 /χ, then we see that Γ is a solution of the variant of the quadratic functional equation which shows, according to [6,Theorem 5.4], that where B : G × G → C is a bi-additive function of G such that B(x, σ(y)) = −B(y, x) for all x, y ∈ G, and ψ is a solution of the symmetrized additive Cauchy equation (1.9) such that ψ ∈ N (G, σ). Hence we get So we are in case (3) of our statement. Subcase 2.2: Here χ = χ • σ. We will start by showing that Φ is abelian. According to (4.4) (l 0,a , φ), a ∈ G is a solution of the sine addition law, then from [15, Theorem 4.1] there exist α a ∈ C * such that Replacing φ and l 0,a into (4.4), then we get l 3,a (xy) = χ(x)H(y) + H(x)χ(y) + χ • σ(x)L(y) (5.5) Dividing (5.5) by χ(x)χ(y) gives us Similarly we can deduce easily that Z(e) = 0. Putting y = e in (5.6), we obtain So the functional equation (5.6) becomes where γ = 1, because χ = χ • σ. From Lemma 2.1 we get that X and Z are abelian. Then so are l 3,a = χX and L = χZ. From we infer that ψ 1 is also abelian. Therefore then the matrix Ω := C(x 2 0 ) − C(σ(x 0 )x 0 ) is invertible. Since the matrix 1 2 (C(x 2 0 ) − C(σ(x 0 )x 0 )) −1 is invertible, it has a square root K, which is a polynomial in Ω (see, e.g., [4, Chapter VII, Section 1]). Now C(x) commutes with Ω, so C(x) commutes with any polynomial in Ω, and in particular it commutes with K. Since C(x) for any x ∈ G is an upper triangular matrix, so is Ω. It follows that K, being a polynomial in Ω, is also upper triangular.
In a similar fashion as in the case of an abelian topological group ( [13]), we introduce another function, this time N , as which means that the function M is multiplicative. Moreover, using Lemma 3.1, we get Since the matrix-valued functions C(x), K and N (x), x ∈ G are upper triangular, where the diagonal elements of each function are equal, then by using the definition of M we may put M = m m 12 0 m . From (5.7) we get m + m • σ = χ + χ • σ, which implies by the linear independence of group homomorphisms from G into C * that m = χ or m = χ • σ.
As it is possible to exchange χ and χ • σ then we may assume that m = χ.
Since M is a multiplicative function, then we get Hence, a := m 12 /χ is an additive function. By using (5.7) we obtain which is equivalent to where a ± := a±a•σ 2 ∈ A ± (G). So we are in case (2) of our statement, which completes the proof.
Remark 5.2. If we assume that G is a topological group and that the function Φ : G → M 2 (C) is a continuous solution of (1.1) then the functions χ, χ 1 , χ 2 , a + , a − , S and ψ in Theorem 5.1 are continuous. Indeed, using [15,Theorem 3.18 (d)], it is easy to see that the characters in Theorem 5.1 are continuous. For the case (3) of Theorem 5.1, we have that g 1 := S + ψ by assumption is continuous. Hence so is g 2 (x) := g 1 (x 2 ), x ∈ G. But g 2 − 2g 1 = 2S, so S is continuous. ψ is also continuous, because ψ = g 1 − S. If we are in case (2) of Theorem 5.1, we can prove that a + and a − are continuous. In fact, we have x → N (x) = C(x 0 x) − C(σ(x 0 )x) and N • σ = −N are continuous. These yield that M = C + KN and M • σ = C • σ + N • σ = C − KN are continuous. Since a = m 12 /χ, we can deduce easily that a + and a − are continuous.
The second main theorem of the present paper concerns the third case: It describes the complete solutions Φ of (1.1) when Φ(e) is a 1-dimensional projection. It reads as follows: is a 1-dimensional projection, are the matrix valued functions of the two forms below in which P ∈ GL 2 (C): where χ is a character, β ∈ C and a − ∈ A − (G).
Proof. Let Φ : G → M 2 (C) be a solution of (1.1) such that Φ(e) is a 1dimensional projection. Then there exists P ∈ GL 2 (C) such that P −1 Φ(e)P = 1 0 0 0 . We write φ 1 φ 3 φ 2 φ 4 := P −1 ΦP. If we put y = e in (1.1), then we get that (5.10) From (5.10) it is easy to show that φ 3 = φ 4 = 0, so that we have Φ = Then simple computations show that φ 1 and φ 2 satisfy the following system of functional equations Thus from [5,Theorem 3.6] there exists a character χ of G such that where α, β ∈ C and a − ∈ A − (G). Since φ 2 (e) = 0, then we get And so we get the desired result. Conversely, it is easy to verify that any function Φ of the form (5.8) or (5.9) is a solution of (1.1) such that Φ(e) is a 1-dimensional projection. where a and b range over C (see e.g., [15,Example 3.14]). We consider the functions of the form where a, b, c ∈ C, c = 0, and P ∈ GL 2 (C). It is elementary to check that these functions are non-abelian solutions of (1.1) on H 3 (R) in which σ = id because the complex-valued function is a solution of the symmetrized additive Cauchy equation (1.9) on H 3 (R) and is not even central (see [15,Example 12.4]).
Applications
By applying Theorems 5.1 and 5.3 we describe the matrix valued solutions of the symmetrized multiplicative Cauchy equation on groups.
Corollary 6.1. The non-zero solutions Φ : G → M 2 (C) of the matrix functional equation are the matrix valued functions of the three forms below in which P ranges over GL 2 (C): where χ 1 , χ 2 , χ are characters of G, and ψ is a solution of the symmetrized additive Cauchy equation (1.9).
Proof. The proof follows from Theorems 5.1 and 5.3.
As another application of our results we give, in the following corollary, a complete description of the solutions of the equation (1.3), that is where the unknown function takes its values in the complex 2 × 2 matrices. Setting x = y = e in (1.3), we get Φ 2 (e) = −Φ(e), which means that −Φ(e) (or equivalently I + Φ(e)) is a projection. (2) If Φ(e) = 0, then Φ has one of the following three forms below in which P ranges over GL 2 (C): where χ 1 and χ 2 are characters of G.
where χ is a character of G such that χ = χ • σ and a ± ∈ A ± (G).
where χ is a character of G such that χ = χ • σ, ψ is a solution of the symmetrized additive Cauchy equation (1.9)such that ψ ∈ N (G, σ) and S : G → C is a map of the form S(x) = B(x, x), x ∈ G, where B : G×G → C is a bi-additive function of G such that B(x, σ(y)) = −B(y, x).
(3) If I +Φ(e) is a 1-dimensional projection, then Φ has one of the two forms: where χ is a character of G, P ∈ GL 2 (C), β ∈ C and a − ∈ A − (G).
Proof. Let Φ : G → M 2 (C) be a solution of (1.3). If we add the identity matrix in the two sides of (1.3), we get that where Ψ := Φ+I. So, by applying Theorems 5.1 and 5.3 we obtain the claimed result.
Conversely, simple computations show that the above forms of Φ are solutions of (1.3). Now, we derive formulas for the continuous solutions of (1.1) on compact groups. Corollary 6.3. The non-zero continuous solutions Φ : G → M 2 (C) of (1.1), on a compact group, are the functions of the following two forms: where P ∈ GL 2 (C), χ, χ 1 , χ 2 are continuous characters of G and β ∈ C.
Proof. Let Φ : G → M 2 (C) be a non-zero continuous solution of (1.1) on a compact group. It is easy to see that the functions a − , χ in Theorem 5.3 are continuous and in view of Remark 5.2 the functions a + , a − , S, ψ and the characters in Theorem 5.1 are also continuous. Hence a + , a − and S are bounded because G is compact. So by [15,Exercise 2.5] we deduce that a ± ≡ 0.
We may use the same argument as in [15,Exercise 2.5] to show that S ≡ 0. From [15,Proposition 2.17] and [15,Corollary 12.6] we can prove that any continuous solution of (1.9) on a compact group will vanish. So the first direction deduces easily from Theorems 5.1 and 5.3.
Conversely, it is elementary to show that the above forms of Φ are solutions of (1.1).
Remark 6.4. Corollary 6.3 above implies that any continuous solution Φ : G → M 2 (C) of (1.1) on a compact group is abelian. Remark 6.5. On a compact group if Φ : G → M 2 (C) is a continuous solution of (6.1), then it is a multiplicative function. Example 5.5 shows that this result is not generally true in any group.
Solution of Eq. (1.4)
As another main result of this paper, we solve the matrix functional equation where M is a monoid, the function Φ to be determined takes its values in M 2 (C), and σ : M → M is a homomorphism such that σ • σ = id. Putting x = y = e in (7.1), we get that Φ(e) is nilpotent with index less than 2, then we have only the two possibilities: Φ(e) = 0 or Φ(e) is a nilpotent matrix with index 2.
In the following theorem we express the solutions of (7.1) in terms of the complex-valued solutions of the variant of the homogeneous equation, namely (7.2) θ(xy) − θ(σ(y)x) = 0, x, y ∈ M. where P ranges over GL 2 (C) and θ is a solution of (7.2).
Proof. It is easy to prove with simple computations that the above formula for Φ defines solutions of (7.1). So it remains to show the other direction. For that we are going to distinguish between two cases: Case 1: If Φ(e) = 0, then we can prove that each commutator of the form Φ(x)Φ(y) − Φ(y)Φ(x), x, y ∈ M is nilpotent. Indeed, by using Lemma 7.1 (2) and (3), we get (Φ(x)Φ(y) − Φ(y)Φ(x)) 2 = 0 for all x, y ∈ M.
From [11, Theorem 3.1] we get that φ 4 = 0 and so φ 3 is a solution of (7.2). Finally we have the desired form.
By the same procedure as in the proof of Theorem 7.2 we can prove the following result | 5,446 | 2020-12-14T00:00:00.000 | [
"Mathematics"
] |
Inflation from a No-scale supersymmetric $SU(4)_{c}\times{SU(2)_{L}\times{SU(2)_{R}}}$ model
We study inflation in a supersymmetric Pati-Salam model driven by a potential generated in the context of no-scale supergravity. The Pati-Salam gauge group $SU(4)_{c}\times{SU(2)_{L}\times{SU(2)_{R}}}$ , is supplemented with a $Z_{2} $ symmetry. Spontaneous breaking via the $SU(4)$ adjoint leads to the left-right symmetric group. Then the $SU(2)_{R}$ breaks at an intermediate scale and the inflaton is a combination of the neutral components of the $SU(2)_{R}$ doublets. We discuss various limits of the parameter space and we show that consistent solutions with the cosmological data for the a spectral index $n_{s}$ and the tensor-to-scalar ratio r are found for a wide range of the parameter space of the model. Regarding the latter, which is a canonical measure of primordial gravity waves, we find that $r\sim{10^{-3}-10^{-2}}$. An alternative possibility where the adjoint scalar field $S$ has the role of the inflaton is also discussed.
INTRODUCTION
In cosmological models inflation is realized by a slowly rolling scalar field, the so called inflaton, whose energy density dominates the early history Universe [1,2,3,4]. Among several suggestions regarding its origin, the economical scenario that this field can be identified with the Standard Model (SM) Higgs state h, has received considerable attention [5]. In this approach, the Higgs field drives inflation through its strong coupling, ξ h 2 R, where R is the Ricci scalar and ξ is a dimensionless parameter that acquires a large value, ξ 10 4 .
In modern particle physics theories, cosmological inflation is usually described within the framework of supergravity or superstring grand unified theories (GUTs). In these theories the SM is embedded in a higher gauge symmetry and the field content including the Higgses are incorporated in representations of the higher symmetry which includes the SM gauge group. In this context, several new facts and constraints should be taken into account. For instance, since new symmetry breaking stages are involved, the Higgs sector is usually extented and alternative possibilities for identifying the inflaton emerge. In addition, the effective potential has a specific structure constrained from fundamental principles of the theory. In string theory effective models, for example, in a wide class of compactifications the scalar potential appears with a no-scale structure as in standard supergravity theories [6,7]. In general, the scalar potential is a function of the various fields which enter in a complicated manner through the superpotential W and the Kähler potential K. Thus, a rather detailed investigation is required to determine the conditions for slow roll inflation and ensure a stable inflationary trajectory in such models. Modifications of the basic no-scale Kähler potential and various choices for the superpotential have been studied leading to a number of different inflationary cases [8]- [14], while studies of inflation within supergravity in a model independent way can be found in [15,16].
In the present work we implement the scenario of Higgs inflation in a model based on the Pati-Salam gauge symmetry SU (4) C × SU (2) L × SU (2) R [17] (denoted for brevity with 4-2-2). This model has well known attractive features (see for example the recent review [18]) and has been successfully rederived in superstring and D-brane theories [19,20,21,22]. Early universe cosmology and inflationary predictions of the model (or its extensions) have been discussed previously in several works [23,24,25]. Here we consider a supersymmetric version of the 4-2-2 model where the breaking down to the SM gauge group takes place in two steps. First SU (4) breaks spontaneously at the usual supesymmetric GUT scale M GU T 10 16 GeV, down to the left-right group 1 via the adjoint representation. Then, depending on the specific structure of the Higgs sector, the SU (2) R scale can break either at the GUT scale, i.e., simultaneously with SU (4), or at some lower, intermediate energy scale. The variety of possibilities are reflected back to the effective field theory model implying various interesting phenomenological consequences. Regarding the Higgs inflation scenario, in particular, the inflaton field can be identified with the neutral components of the SU (2) R doublet fields associated with the intermediate scale symmetry breaking. In this work we will explore alternative possibilities to realise inflation where the inflaton is identified with the SU (2) R doublets. We also examine the case of inflation in the presence of the adjoint representation.
The layout of the paper is as follows. In section 2, we present a brief description of the 4-2-2 model, focusing in its particle content and the symmetry breaking pattern. In sections 3 we present the superpotential and the emergent no-scale supergavity Kähler potential of the effective model. We derive the effective potential and analyse the predictions on inflation when either the SU (2) R doublets or the adjoint play the rôle of the inflaton. We present our conclusions in section 4.
DESCRIPTION OF THE MODEL
In this section we highlight the basic ingredients of the model with gauge symmetry, This model unifies each family of quarks and leptons into two irreducible representations, F i andF i transforming as [28] F i = (4, 2, 1) i andF i = (4, 1, 2) i , under the corresponding factors of the gauge group (2.1). Here the subscript i (i = 1, 2, 3) denotes family index. Note that F +F comprise the 16 of SO(10), 16 → (4, 2, 1) + (4, 1, 2). The explicit embedding of the SM matter fields, including the right-handed neutrino is as follows: where the subscript (r, g, b) are color indices.
Collectively we have the following SM assignments: (2.9) Fermions receive Dirac type masses from a common tree-level invariant term, FF h, whilst right-handed (RH) neutrinos receive heavy Majorana contributions from nonrenormalisable terms, to be discussed in the next sections. In addition, the colour triplets d c H and d c H are combined with the D 3 and D 3 states via the trilinear operators HHD 6 +HHD 6 and get masses near the GUT scale.
After the short description of the basic features of the model, in the following sections we investigate various inflationary scenarios in the context of no-scale supergravity, by applying the techniques presented in [29,30].
INFLATION IN NO SCALE SUPERGRAVITY
In this section we consider the 4-2-2 model as an effective string theory model and study the implications of Higgs inflation. The 'light' spectrum in these constructions contains the MSSM states in representations transforming non-trivially under the gauge group and a number of moduli fields associated with the particular compactification. We will focus on the superpotential and the Kähler potential which are essential for the study of inflation.
The superpotential is a holomorphic function of the fields. Ignoring Yukawa interaction terms, the most general superpotential up to dimension four which is relevant to our discussion is where from now on we set the reduced Planck mass to unity, M P l = 1. We focus on the dynamics of inflation during the first symmetry breaking stages at high energy scales. For this reason we ignore all the terms involving the bi-doubled since this state mostly contribute in low energies by ginving mass to the MSSM particles and do not play an important rôle during inflation. In addition we impose a Z 2 symmetry, under which Σ is odd and all the other fields are even. As a result the trilinear termsHΣH and tr (Σ 3 ) are eliminated from the superpotential in (3.1). The elimination of these trilinear terms of the superpotential is important, since if we useHΣH and tr (Σ 3 ) instead ofH tr(Σ 2 )H and tr (Σ 4 ), the shape of the resulting potential is not appropriate and it leads to inconsistent results with respect to the cosmological bounds while at the same time returns a low scale value for the parameter M in the superpotential, which usually expected to be close to the GUT scale. Then, using (2.5) and (2.6) the superpotential takes the following form: whereλ = 3λ 4 andκ = 7κ 12 . From the phenomenological point of view we expect S = v to be at the GUT scale. By assuming v 3 × 10 16 GeV and using the minimization condition ∂W/∂S = 0, we estimate that m 2κv 2 which, forκ = 1/2, gives m ∼ 10 14 GeV.
In the two step breaking pattern that we consider here, L H and L H must remain massless at this scale in order to break the SU (2) R symmetry at a lower scale. The SU (2) R breaking scale should not be much lower than the GUT scale in order to have a realistic heavy Majorana neutrino scenario. In addition we have to ensure that the coloured triplets Q H and Q H will be heavy. In order to keep the L H , L H doublets at a lower scale, and at the same time the coloured fields Q H and Q H to be heavy, we assume that M ≈λ S 2 =λυ 2 . In this case Q H , Q H acquire GUT scale masses M Q H ≈ 8λ 9 S 2 . During inflation the colored triplets Q H , Q H and the charged components of the RH doublets, L H and L H , do not play an important rôle. The SU (2) R symmetry breaks via the neutral components 2 ν H and ν H . In terms of these states the superpotential reads: where we have made use of the relation M λ υ 2 .
The Kähler potential has a no-scale structure and is a hermitian function of the fields and their conjugates. For the present analysis, we will consider the dependence of the Higgs fields of the 4-2-2 gauge group and the 'volume' modulus T . Therefore, assuming the fields φ i = (S, T, H, h) and their complex conjugates, we write where ξ is a dimensionless parameter. In the expression (3.4), we can ignore the last term which involves the bidoublet and in terms of ν H , ν H and S, the Kähler potential reads: In order to determine the effective potential we define the function Then the effective potential is given by where G i (G j * ) is the derivative with respect to the field φ i (φ * j ) and the indices i, j run over the various fields. V D stands for the D-term contribution.
Computing the derivatives and substituting in (3.6) the potential takes the form where we have ignored the D-term contribution and we have assumed that the value of the T modulus field is stabilized at T = T * = 1/2, see [31,32]. Notice that in the absence of the Higgs contributions in the Kähler potential, the effective potential is exactly zero, V = 0 due to the well known property of the no-scale structure.
We are going now to investigate two different inflationary cases: firstly, along Hdirection and secondly along S-direction.
INFLATION ALONG H-DIRECTION
We proceed by parametrizing the neutral components of the L H and L H fields as ν H = 1 2 (X + Y ) e iθ and ν H = 1 2 (X − Y ) e iϕ , respectively. These yield Assuming θ = 0 and ϕ = 0, along the D-flat direction, Y = 0, and the combination X is identified with the inflaton. The shape of the potential, as a function of the fields S and X, is presented in Figure 1. In order to avoid singularities from the denominator we have assume a condition which is described in the following.
The potential along the S = 0 direction is: (3.9) The shape of the V (X, S) scalar potential presented in Figure 1 along with the inflaton trajectory description and the simplified form in (3.9) is similar with the one presented in [29,30]. As it is usually the case in no-scale supergravity, the effective potential displays a singularity when the denominator vanishes. The presence of these singularities lead to an exponentially steep potential which can cause violation of the basic slow-roll conditions (i.e. ε 1, |η| 1). Consequently, these singularities must be removed. In our specific model described by the potential (3.9), we first notice that for the special value ξ = 1 the potential is free from singularities. For generic values of ξ however, i.e. ξ = 1, the potential displays a singularity for X = 6 1−ξ . In order to remove the zeros of the denominator in (3.9), we assume the following condition [29], This is a strong assumption which relates parameters with different origins. Indeed, α is a superpotential parameter while ξ descents from the Kahler potential. Since in our specific model the condition (3.10) lacks an explanation from first principles, it will be reasonable in the subsequent analysis to study the effects of a slightly relaxed version of (3.10). This can be achieved by introducing a small parameter δ (with δ 1) and modifying the condition as follows, In the remaining of this section, we are going to study the potential for special ξ values using the conditions (3.10) and (3.11).
We will start by analysing some special cases first. By imposing (3.10), which means δ = 0 the scalar potential simplifies to a quadratic monomial, something that can be also seen from the plots in Figure 1, where for small values of S (along the S = 0 direction) the potential receives a quadratic shape form. The equation (3.12) shows the potential of a chaotic inflation scenario. However, at this stage, the inflaton field X is not canonically normalized since its kinetic energy terms take the following form We introduce a canonically normalized field χ satisfying (3.14) After integrating, we obtain the canonically normalized field χ as a function of X Next, we investigate the implications of equation (3.15) by considering two different cases, for ξ = 0 and ξ = 0.
• For ξ = 0 we have X = √ 6 tanh χ √ 6 and the potential becomes, which is analogous to the conformal chaotic inflation model (or T-Model) [33]. In these particular type of models the potential has the general form: As we can see, for n = 1 we receive our result in (3.16) with λ = 3λ 2 υ 4 . This potential can be further reduced to subcases depending upon the value of χ. For χ 1 the potential in equation (3.16) reduces to Starobinsky model [34]. In this case the inflationary observables have values (n s , r) ≈ (0.967, 0.003) and the tree level prediction for ξ = 0 is consistent with the latest Planck bounds [35]. This type of models will be further analysed in the next section where inflation along the S-direction is discussed.
• The particular case of ξ = 1 implies a quadratic chaotic inflation and the tree-level inflationary prediction (n s , r) ≈ (0.967, 0.130) is ruled out according to the latest Planck 2015 results. For 0 < ξ < 1 , the prediction for (n s , r), can be worked out numerically.
After this analysis we turn our attention to a numerical calculation. In our numerical analysis we imply the modified condition (3.11) were as mentioned previously a small varying parameter δ has been introduced in order to soften the strict assumption (3.10). By substitute the relaxed condition (3.11) in (3.9) and neglecting O(δ 2 ), the potential receives the following form: (3.18) As we observe the first term in the above relation is the quadratic potential (3.12), while the second term encodes the effects of the small parameter δ. In addition, we note that the order of the singularity enhancement have been improved in comparison with the initial potential (3.9). Next we present our numerical results where the rôle of the parameter δ is also discussed.
NUMERICAL ANALYSIS
Before presenting numerical predictions of the model it is useful to briefly review here the basic results of the slow roll assumption. The inflationary slow roll parameters are given by [36,37]: The third slow-roll parameter is, 4 (3.20) where a prime denotes a derivative with respect to X. The slow-roll approximation is valid as long as the conditions 1,| η | 1 and ς 2 1 hold true. In this scenario the tensor-to-scalar ratio r, the scalar spectral index n s and the running of the spectral index The number of e-folds is given by, where l is the comoving scale after crossing the horizon, X l is the field value at the comoving scale and X e is the field when inflation ends, i.e max ( (X e ) , η (X e ) , ς (X e )) = 1. Finally, the amplitude of the curvature perturbation ∆ R is given by: .
Focusing now on the numerical analysis, we see that we have to deal with three parameters: ξ, δ andλ. We took the number of e-folds (N ) to be 60, and in Figure 2 we present two different cases in the n s − r plane, along with the Planck measurements (Planck TT,TE,EE+lowP) [35]. Specifically, in Figure 1(a), we fixed ξ and varyλ and δ. The various colored (dashed) lines corresponds to different fixed ξ-values. The green line corresponds to the limiting case with ξ = 1 and as we observe the results are more consistent with the Plank bounds (black solid contours) as the value of ξ decreases. Similar, in Figure 1(b) we treat δ as a fixed parameter while we vary ξ andλ. Also, in this case, we observe that for a significant region of the parameter space the solutions are in good agreement with the observed cosmological bounds. The green curve here corresponds to δ = 10 −6 . The special case with δ = 10 −6 ∼ 0 and ξ = 1 is represented by the black dot and as we discussed earlier is ruled out from the recent cosmological bounds. We observe from the plot that, as ξ approaches to unity the splitting between the curves due to different values of δ is small and the solution converges to δ ∼ 0 case. However, as we decrease the values of ξ we have splitting of the curves and better agreement with the cosmological bounds. Finally in plots 1(c) and 1(d) we present values of the running of the spectral index with respect to n s . We observe that the running of the spectral index, approximately receives values in the range −5 × 10 −4 < dn S d ln k < 5 × 10 −4 . Next we present additional plots to better clarify the rôle of the various parameters involved in the analysis.
Firstly, we study the spectral index n s as a function of the various parameters. The results are presented in Figure 3. In plots (a) and (b) we consider the cases with fixed values for ξ and δ respectively, and we take variations forλ. We vary the parameter ξ in the range ξ ∼ [0.92, 1] with the most preferable solutions for ξ [0.96, 1]. In addition the two plots suggest that acceptable solutions are found in the rangeλ ∼ [10 −2 , 10 −1 ]. In plots (c) and (d) n s is depicted in terms of δ and ξ respectively. As we expected the dependence on δ is negligible when it receives very small values, since we observe from plot 3(c) that the various curves are almost constant for very small δ values. The results are become more sensitive on δ as we decrease the value of ξ. This behaviour can also be confirmed from the potential (3.18). As we can see for ξ ∼ 1 the second term is simplified and the potential receives a chaotic like form. In this case the effects of small δ in the observables are almost negligible (green line). However as we decrease the value of ξ and we increase the values of δ the second term becomes important and contributes to the results.
Next, in Figure 4 we consider various cases for the tensor to scalar ratio, r. The description of the plots follows the spirit of those presented in Figure 3 for the spectral index n S . In particular, by comparing the plots 4(c) and 3(c) we notice that the dependence of r on δ is weaker in comparison with n S . Thus the relaxation parameter δ strongly affects the spectral index n S while for δ < 10 −4 and fixed ξ the tensor-scalar ratio r remains almost constant. In summary from the various figures presented so far we observe that consistent solutions can be found in a wide range of the parameter space. We also note that the model predicts solutions with r ≤ 0.02, which is a prediction that can be tested with the discovery of primordial gravity waves and with bounds of future experiments.
Regarding the superpotential parameterλ, we can see from the various plots that and we evaluate it at the pivot scale. In Figure 5 we show the values of the Hubble parameter in the (H inf − n s ) plane. We observe that the values of the Hubble parameter with respect to n s bounds are of order 10 13 GeV.
REHEATING
As Majorana mass for the RH-neutrinos can be realized from the following non-renormalisable term where we have suppressed generation indices for simplicity, γ is a coupling constant and M * represents a high cut-off scale (for example the compactification scale in a string model or the Planck scale M P l ). In terms of SO(10) GUTs this operator descent from the following invariant operator and as described in [38] can be used to explain the reheating process of the universe after the end of inflation. In our case the 4-2-2 symmetry breaking occur in two steps: We can see that a heavy Majorana scale scenario implies that the SU (2) R breaking scale should not be much lower than the SU (4) scale and also γ should not be too small. Another important role of the higher dimensional operators is that after inflation the inflaton X decays into RH neutrinos through them to reheat the Universe. In addition the subsequent decay of these neutrinos can explain the baryon asymmetry via leptogenesis [39,40] . For the reheating temperature, we estimate [38] (see also [41]) : where the total decay width of the inflaton is given by with M ν c = γ ν H 2 M P l the mass of the RH neutrinos and M X the mass of the inflaton. The later is calculated from the effective mass matrix at the local minimum and approximately is M X = 2M 2λυ 2 . Since M 10 13 GeV, the decay condition M X > M ν c it is always satisfied for appropriate choices of the parameters ν H and γ. In Figures 6 we present solutions in n s − T RH and r − T RH plane with respect to the various parameters of the model. For the computation of T RH we assume that ν H = M λ v 2 and we present the results for γ = 0.1 (solid), γ = 0.5 (dashed) and γ = 1 (dotted). In this range of γ values we have a Majorana mass, M ν c ∼ 10 6 − 10 7 GeV, which decreases as we decrease the value of γ. In addition, gravitino constraints implies a bound for the reheating temperature with T RH < 10 6 − 10 9 GeV and as we observe from the plots there are acceptable solutions in this range of values. More precisely, from plots (a) and (c) we see that for ξ > 0.97 and γ > 0.5 most of the results predict T RH > 10 9 GeV. However, it is clear that the consistency with the gravitino constraints strongly improves as we decrease γ, since all the curves with γ = 0.1 (solid lines) predicts T RH 10 9 GeV. Similar conclusions can be derived from plots (b) and (d). In addition, from the r − T RH plots (c) and (d) we observe that for T RH < 10 6 − 10 9 there are regions in the parameter space with r ∼ 10 −2 − 10 −3 . Furthermore, we observe from plot 6(c) that the tensor-scalar ratio and the reheating temperature are decreased as we decrease the value of ξ since the curves are shift to the left and down regions of the plot.
A sample of the results have been discussed so far is presented in Table 1. The table is organized in horizontal blocks and each block contains three sets of values. For each set in a block we change only the coupling constant γ (γ = 1, 0.5, 0.1) while we keepλ, ξ and δ constant. We observe that as we decrease the values ofλ and ξ the values of the tensor to scalar ratio (r) and the reheating temperature (T RH ) also decreased.
INFLATION ALONG S DIRECTION
Here we briefly discussed the case where the S field has the rôle of the inflaton. In the potential (3.7) we put ν H = 0 and ν H = 0 so we have: In order to remove the singularity of the denominator, we take m = 6κ. In this case we get the following simple form which is of the form of a chaotic-potential.
Now the kinetic energy is defined as, then the potential in (3.30) becomes, V = 72κ 2 X 2 , and from the coefficient of the kinetic energy term we can find X in terms of a canonical normalized field χ: The potential in terms of the canonical normalized field reads as which is analogous to the conformal chaotic inflation model or T-Model inflation already mentioned before. Potentials for the T-Model inflation are given in Equation (3.17). For n = 1 the potential become, V (χ) = λ tanh 2 χ √ 6 , which is similar to our potential in (3.33) for λ = 432κ 2 . We can understand the inflationary behaviour in these type of models, by considering two cases.
First for χ 1, by writing the potential in exponential form we have Using the relation above we have for the slow-roll parameter that, (3.38) Similarly the second slow-roll parameter η is found to be, Finally, the predictions for the tensor-to-scalar ratio r and the natural-spectral index n s are, r = 12 N 2 , n s = 1 + 2η − 6 = 1 − 2 N − 9 4N 2 (3.40) and for N = 60 e-foldings we get n s 0.9673 and r 0.0032.
Regarding the case with χ 1, we can see from the expression (3.33) that the potential reduces to a quadratic chaotic form. The tree-level inflationary predictions in this case are (n s , r) ≈ (0.967, 0.130), which are ruled out with the latest Planck 2015 results.
The discussion above strongly depends on the assumption m = 6κ that we imposed on the potential in order to simplify it. If we consider small variations of this assumption similar to (3.11) and modify the condition as, m = 6κ + δ, we will see that the parameter δ contributes only to n S while the tensor-to-scalar ratio r remains constant.
CONCLUSIONS
In the present work we have studied ways to realise the inflationary scenario in a no-scale supersymmetric model based on the Pati-Salam gauge group SU (4) × SU (2) L × SU (2) R , supplemented with a Z 2 discrete symmetry. The spontaneous breaking of the group factor SU (4) → SU (3) × U (1) B−L is realised via the SU (4) adjoint Σ = (15, 1, 1) and the breaking of the SU (2) R symmetry is achieved by non-zero vevs of the neutral components ν H , ν H of the Higgs fields (4, 1, 2) H and (4, 1, 2)H.
We have considered a no-scale structure Kähler potential and assumed that the Inflaton field is a combination of ν H , ν H and find that the resulting potential is similar with the one presented in [29,30] but our parameter space differs substantially. Consequently, there are qualitatively different solutions which are presented and analysed in the present work. The results strongly depend on the parameter ξ and for various characteristic values of the latter we obtain different types of inflation models. In particular, for ξ = 0 and canonical normalized field χ ≥ 1, the potential reduces to Starobinsky model and for ξ = 1 the model receives a chaotic inflation profile. The results for 0 < ξ < 1 have been analysed in detail while reheating via the decay of the inflaton in right-handed neutrinos is discussed.
We also briefly discussed the alternative possibility where the S field has the rôle of the inflaton. In this case, the potential is exponentially flat for χ ≥ 1. Similar conclusions can be drawn for the Starobinsky model. On the other hand for small χ it reduces to a quadratic potential.
In conclusion, the SU (4)×SU (2) L ×SU (2) R model described in this paper can provide inflationary predictions consistent with the observations. Performing a detailed analysis we have shown that consistent solutions with the Planck data are found for a wide range of the parameter space of the model. In addition the inflaton can provide masses to the right-handed neutrinos and depending on the value of reheating temperature and the right-handed neutrino mass spectrum thermal or non-thermal leptogenesis is a natural outcome. Finally we mention that, in several cases the tensor-to-scalar ratio r, a canonical measure of primordial gravity waves, is close to∼ 10 −2 − 10 −3 and can be tested in future experiments. | 6,986 | 2018-04-13T00:00:00.000 | [
"Physics"
] |
An Improved Model for Kernel Density Estimation Based on Quadtree and Quasi-Interpolation
: There are three main problems for classical kernel density estimation in its application: boundary problem, over-smoothing problem of high (low)-density region and low-efficiency problem of large samples. A new improved model of multivariate adaptive binned quasi-interpolation density estimation based on a quadtree algorithm and quasi-interpolation is proposed, which can avoid the deficiency in the classical kernel density estimation model and improve the precision of the model. The model is constructed in three steps. Firstly, the binned threshold is set from the three dimensions of sample number, width of bins and kurtosis, and the bounded domain is adaptively partitioned into several non-intersecting bins (intervals) by using the iteration idea from the quadtree algorithm. Then, based on the good properties of the quasi-interpolation, the kernel functions of the density estimation model are constructed by introducing the theory of quasi-interpolation. Finally, the binned coefficients of the density estimation model are constructed by using the idea of frequency replacing probability. Simulation of the Monte Carlo method shows that the proposed non-parametric model can effectively solve the three shortcomings of the classical kernel density estimation model and significantly improve the prediction accuracy and calculation efficiency of the density function for large samples.
Introduction
Density estimates are a common technique in modern data analysis. They are usually used to analyze statistical characteristics, such as skewness and multimodality of samples, and quantify uncertainties. They have been widely used in engineering, economics, medicine, geography and other fields. The methods of density estimation contain the parametric method and nonparametric method. The parametric method requires strong assumptions for the prior model to restrict the probability density function drawn from a given parametric family of distribution, and then calculates the corresponding parameter estimates from the samples. The main problem of the parametric method is that inaccurate setting of the prior parameter model may lead to wrong conclusions. Moreover, in the process of testing the posterior model, there is a common situation that multiple assumptions of prior models can pass a posterior test, which greatly affects the accuracy and efficiency of data analysis. Therefore, to avoid the defects in the parametric method, Fix and Hodges [1] first eliminate the strong assumptions of the parametric method by introducing the idea of discriminant analysis, which is also the fundamental thought source of the nonparametric method. The simplest histogram method is an intuitive embodiment of this idea. Nonparametric methods do not require any prior assumptions and can predict density estimator method based on the resampling strategy of a multi-point grid. Harel [17] discussed the asymptotic normality of a binned kernel density estimator for non-stationary random variables. Peherstorfer [18] proposed a density estimation based on sparse grid, which can be viewed as improved binned rules. It used a sparse grid instead of full grid to reduce the bins. Although the binned kernel density estimator improves the processing efficiency of large sample data through the binned strategy, it still faces the boundary problem of the kernel density estimator in essence. In addition, there are some other methodologies to apply kernel density estimation to large datasets. Cheng [19] proposed a quick multivariate kernel density estimation for massive datasets by viewing the estimator as a two-step procedure: first, kernel density estimator in sub-interval and then function approximation based on pseudo data via the Nedaraya-Watson estimator. However, the research of Gao [20] demonstrated that the generalized rational form estimators provide a low convergence rate. Moreover, the computation of pseudo data using a kernel density estimator brings more computation than the above binned rule and does not consider the boundary problem of the kernel density estimator. Zheng [21] focused on the choice method of samples from large data to produce a proxy for the true data with a prescribed accuracy, which is more complex than the direct binned rule. Moreover, the research does not pay much attention to the discussion of the kernel density estimator. Therefore, the binned method is very simple and clear. Recently, we proposed a kernel density estimator based on quasi-interpolation and proved its theoretical statistical properties, but the research does not provide a solution for the over-smoothing phenomenon [22].
Another problem (over-smoothing phenomenon) for kernel density estimators is caused by the improper selection of bandwidth, and different scholars have adopted different methods to reduce the occurrence of this phenomenon. The most classical method to choose the bandwidth is the thumb rule, which calculates the optimal bandwidth by the standard deviation and dimension of the samples. Due to the simplicity of this method, it is regarded as a common tool in most application studies of kernel density estimators. However, the actual samples are usually random and uneven, and the optimal bandwidth obtained by the thumb rule is fixed. It only provides a calculation criterion of an optimal bandwidth in a sense and has a very limited improvement effect on the over-smoothing phenomenon. An adaptive bandwidth approach is used to ameliorate this phenomenon viewed as a correction to the thumb rule, which consists of two steps. Firstly, the evaluated function is calculated with a fixed bandwidth and the quantitative relationship between the pointwise function value of samples and the geometric mean value of the samples is established. Then, according to the quantitative relationship, the pointwise correction coefficient is determined to modify bandwidth. The final kernel density estimator can be obtained based on these modified bandwidth. The adaptive bandwidth method improves the accuracy of kernel density estimators for a fixed bandwidth, but it is difficult to apply to large samples because each sample will affect the determination of the correction coefficient and the computational efficiency is low. Barreiro Ures [23] proposed a bandwidth selection method for large samples via using subagging. The subagging can be viewed as an improvement on the cross-validation method. Therefore, it is difficult to capture local changes in samples. Moreover, the research does not consider the boundary problem.
In conclusion, a classical kernel density estimator is a convenient vehicle and it is widely used in many branches of science and technology. However, the majority of research usually did not consider the constraints of the kernel density estimator model itself. These limitations and deficiencies for the kernel density estimator need to be further considered. In addition, previous methods of kernel density estimators are not synthetically considered among the boundary problem, smooth problem and large sample computation efficiency. Therefore, in view of the insufficiency of the classical kernel density estimator, this paper proposes a new modeling process of multivariate adaptive binned kernel density estimators based on the quadtree algorithm and quasi-interpolation, which significantly improves the prediction accuracy of the estimation density function. Research works in this paper are summarized as follows: (1) Aiming at the boundary problem of the classical kernel density estimator defined over bounded region a new set of asymmetric kernel functions is introduced based on the quasi-interpolation theory to avoid the boundary problem.
(2) To improve the computational efficiency of the classical kernel density estimator for large samples, the idea of binned kernel density estimation is introduced. The coefficient explicit expression of the density estimator under the binned rule of data is derived, which greatly reduces the computation and improves the computational efficiency of the model.
(3) To alleviate the over-smoothing phenomenon of classical kernel density estimators, this paper proposes an adaptive strategy based on the segmentation thought of quadtree algorithm. We set the segmentation thresholds from sample size, bin width and kurtosis to achieve adaptive computation for the amount of bin and bin width. It can effectively avoid the over-smoothing phenomenon in the high (low)-density area and increase local adaptability in the model for samples and further improve the accuracy in the model. (4) We extend the univariate model based on the quadtree algorithm to the multivariate model. The numerical simulation based on Monte Carlo shows that the constructed models in this paper perform well in the boundary problem, large samples and over-smoothing phenomenon, which are significantly better than the current widespread use of kernel density estimation methods.
Univariate Quasi-Interpolation Density Estimator
Let X 1 , X 2 , · · · , X n be a set of random samples subject to the probability distribution of an unknown probability density function f (x). The classical non-parametric kernel density estimator is defined as: where h denotes bandwidth and K(x) denotes kernel function or weight function. There are some common symmetric kernel functions shown in Table 1: Table 1. Common kernel functions.
Type of Kernel Function Expression of Kernel Function
Gaussian kernel According to Equation (1), the classical kernel density estimator requires one to calculate the distance between the predicted point and each sampling point to allot weight function. It means that the computation increases rapidly with the increase in sample size. We can note that the prediction points are mainly influenced by the samples in the limited bandwidth domain, while the samples outside the bandwidth domain have very little influence. The pointwise calculation of large samples outside the bandwidth domain greatly reduces the computational efficiency. Therefore, the binned kernel density estimator was proposed: where t j denotes the centers of the j-th bin and n j denotes the number of samples dropped in the j-th bin, satisfying ∑ n j = n. For clarity, we remind readers: X i denotes random sample and t j represents center of bin. According to Equation (2), it can be found that the binned kernel density estimator transforms the pointwise calculation of the classical kernel density estimator into the calculation for bin centers. Its essential idea is to treat the samples in a small region as a whole and the central points of each region as the core samples. Therefore, it can ignore the bandwidth difference between each individual sample and the central point of the region. In this way, unnecessary detailed calculation in the classical kernel density estimator can be reduced and the computational efficiency can be improved on the premise of ensuring accuracy. However, since actual samples are usually sampled from the bounded domain, the above two classes of the kernel density estimator face the same problem; that is, the boundary problem will occur when a fixed symmetric kernel function is used to predict the true density function defined over a bounded domain. The main reason for the boundary problem is that the weight is allotted outside the density support when smoothing near the boundary point by using a fixed symmetric kernel function. A natural strategy is to use a kernel that has no weight allotted outside the support. Therefore, under the framework of numerical approximation, combining with the theory of quasi-interpolation and the binned idea to improve the above models, this paper proposes a new binned quasi-interpolation density estimator, which can not only improve the computation efficiency of large samples, but also eliminate the boundary problem.
Let us start with some definitions and lemmas: ] be a bounded interval, and a, b be known, a = t 0 < t 1 < · · · < t n = b be a set of scattered centers on the interval [a, b], f t j n j=0 be the discrete function values corresponding to scattered centers. Let c be the positive shape parameter, φ j (x) = c 2 + x − t j 2 be the MQ function first constructed by Hardy [24], then we have a quasi-interpolation (L D operator).
where ψ j n j=0 are the asymmetric MQ kernels Here, these kernels satisfy 0 < ψ j (x) < 1 and In addition, we obtain the following error estimates, which can be found in Wu and Schaback [25].
j=0 , c be a positive shape parameter, there exists some constant K 1 , K 2 , K 3 independent of h and c, such that the following inequality According to lemma 1, for any shape parameter c satisfying 0 ≤ c ≤ O h/ |log h| , the convergence rate O h 2 for the whole bounded interval can be provided by quasiinterpolation L D . Furthermore, the research of Ling [26] shows that the multivariate L D operator by the tensor product technique(dimension-splitting) can provide the same convergence rate as the univariate case. Inspired by the convergence characteristics of quasiinterpolation and the idea of the binned kernel density estimator, we construct a univariate adaptive quasi-interpolation density estimator based on the quadtree algorithm, which consists of three steps. Suppose that X is a random variable, {X k } n k=1 are the n independent samples in the random variable X. There is an unknown density function f (x) on the bounded interval. The first step is to divide the interval [a, b] into N bins t j , t j+1 N−1 j=0 . Let n j denote the number of samples {X k } n k=1 dropping into the corresponding bin t j , t j+1 . In the second step, we construct a new univariate binned density estimator as follows: Here, ψ j N j=0 denote the asymmetric MQ kernels defined by Equation (3), and the coefficients α j ( f ) N j=0 are defined as According to Equation (4) and lemma 1, we can note that the introduction of asymmetric MQ kernels can avoid the boundary problem caused by the weight allotted outside the support when the traditional kernel function smooths near the boundary points. Moreover, Equation (5) shows that n j /n represents the frequency of samples falling into the corresponding bin t j , t j+1 . Through the linear combination of frequencies between adjacent bins, the explicit expression of coefficients of the estimator under the binned rule is given, which can effectively improve the calculation efficiency in the model. Thirdly, the over-smoothing phenomenon in the kernel density estimator is considered. In the above two steps, we built a univariate binned quasi-interpolation density estimator. Based on the known samples and interval, the interval was divided into a certain number of bins, and then the estimated density function could be calculated by the endpoint position of the bin and the number of samples in the bins. If the number of bins is too few, the predicted result is over-smoothing, which differs greatly from the actual scenario. If the number of bins is too great, the calculation efficiency will be greatly reduced. How to determine the number and width of bins is the key to both model accuracy and calculation efficiency. The most common method is the thumb rule, which takes the idea of a fixed bandwidth and calculates the bandwidth as follows: Here, d denotes the dimension and σ denotes standard deviation of samples. In particular, to maintain notational clarity, we remind readers: d denotes dimension and D is a mark of L D operator. The number of bins is calculated by ceiling (b − a)/h. This method uses equal bandwidth, and similar equal bandwidth methods include the unbiased cross-validation method and insertion method, etc. However, due to the strong randomness and uneven distribution of actual samples, the equal bandwidth method generally has the problem of insufficient description of details for the high-density area, which causes the over-smoothing phenomenon. Therefore, it is expected that the bandwidth can be adjusted adaptively with the density of samples. The bandwidth should be smaller in high-density areas to enhance local characterization and improve accuracy. In addition, the bandwidth should be larger in the gentle area to avoid excessive calculation and improve calculation efficiency. A common adaptive method determines the number of bins according to the thumb rule and obtains the estimated value of bin centers. Then, the ratio between each estimated value and the geometric mean of each estimated value is taken as the correction coefficient of bandwidth, so as to achieve the purpose of taking smaller bandwidth in the intensive area and larger bandwidth in the sparse area. This adaptive method is simple and easy to operate, but it also has three disadvantages: First, this method is based on the estimation of thumb rule, and the adaptive process does not change the number of bins, which can be regarded as the optimal configuration of bandwidth in essence. Second, the degree of adaptive refinement is insufficient and the determination of bandwidth correction coefficient is too rough, which is susceptible to extreme values. Moreover, it is difficult to distinguish sharp peaks from wide peaks. Third, the adaptive effect of multi-peak distribution is poor. In addition, the density near the boundary is usually small, and increasing the width of the bin easily aggravates the boundary problem. Therefore, this paper proposes a new adaptive binned method.
Adaptive Binned Method Based on Quadtree Algorithm
The quadtree algorithm, as a space partition index technology, is widely used in the image processing field [27]. The key idea is an iterated segmentation of data space. The number of iterations depends on the number of samples in bins and bin-width threshold. Therefore, the density of samples can be characterized by the number and width of bins. The area with dense samples has more iterated segmentation and the area with sparse samples has less iterated segmentation. Therefore, according to the idea of quadtree segmentation, we can adaptively adjust the bin number and bin width in the quasi-interpolation density estimator via a data-driven method. The high-density area is divided into more bins to obtain a smaller bin width, which can more keenly capture the distribution details of the area, while the gentle area is divided into fewer bins to save the cost of calculation, so as to achieve a reasonable distribution of bins and improve the accuracy in the model. The adaptive binned method based on the quadtree algorithm is shown in Figure 1: First of all, the sample space is divided into four bins and the number of samples in each bin and the bin widths {L i } 4 i=1 are recorded. Secondly, we set the threshold of sample number n max and bin width L max . The setting of sample number threshold n max captures distribution details in the high-density area with more bins and improves computing efficiency in the gentle area with less bins. It not only solves the over-smoothing problem but also takes into account computing efficiency. The setting of the bin-width threshold ensures the segmentation level of the whole domain and avoids an insufficient number of bins, which leads to the large estimation error or boundary problem. Following the thumb rule, we set the bin-width threshold to 1.06σn −1/5 . In addition, we set a kurtosis threshold to identify the peak distribution of samples and improve the accuracy. Finally, the number of samples and bin width in each bin are compared with the number of sample number threshold n max , bin-width threshold L max and kurtosis threshold. The segmentation is finished when all of these conditions are met. are recorded. Secondly, we set the threshold of sample number and bin width . The setting of sample number threshold captures distribution details in the high-density area with more bins and improves computing efficiency in the gentle area with less bins. It not only solves the over-smoothing problem but also takes into account computing efficiency. The setting of the bin-width threshold ensures the segmentation level of the whole domain and avoids an insufficient number of bins, which leads to the large estimation error or boundary problem. Following the thumb rule, we set the bin-width threshold to 1.06 −1/5 . In addition, we set a kurtosis threshold to identify the peak distribution of samples and improve the accuracy. Finally, the number of samples and bin width in each bin are compared with the number of sample number threshold , bin-width threshold and kurtosis threshold. The segmentation is finished when all of these conditions are met.
Multivariate Adaptive Binned Quasi-Interpolation Density Estimator
Based on the idea of the above univariate adaptive binned quasi-interpolation density estimator, we extend it to the multivariate model. Following the above process, we first construct the multivariate binned density estimator. The classical multivariate kernel density estimator and multivariate binned density estimator are extended from the univariate model via the tensor product technique. They are defined as follows: Figure 1. Univariate adaptive binned method based on quadtree algorithm.
Multivariate Adaptive Binned Quasi-Interpolation Density Estimator
Based on the idea of the above univariate adaptive binned quasi-interpolation density estimator, we extend it to the multivariate model. Following the above process, we first construct the multivariate binned density estimator. The classical multivariate kernel density estimator and multivariate binned density estimator are extended from the univariate model via the tensor product technique. They are defined as follows: where ∑ · · · ∑ n j 1 , j 2 , ··· , j d = n. Based on the above univariate binned quasi-interpolation density estimator, we also extended it to the multivariate binned quasi-interpolation density estimator via tensor product technique: Let X be a d-dimension random variable with an unknown density function f defined on a bounded hyperrectangle quasi-interpolation density estimator via the tensor product technique is as follows: where In Equation (9), for i = 1, 2, · · · , d, there are N i + 1 := N i − 1 and N i + 2 := N i − 2. To avoid the over-smoothing phenomenon, we use the advantage of the tensor product to transform the multivariate adaptive binned problem into a univariate problem, and the adaptive process is shown in Figure 2.
adaptive process is shown in Figure 2.
First, we divide the domain into two bins for each dimension and record the number of samples and the bin width in each bin from the univariate dimension. Secondly, they are compared with the threshold of sample number, bin width and kurtosis to achieve iterative segmentation. Finally, these bins in each dimension are spanned into some twodimensional bins via the tensor product technique, and the number of samples falling in each two-dimensional bin is recorded. First, we divide the domain into two bins for each dimension and record the number of samples and the bin width in each bin from the univariate dimension. Secondly, they are compared with the threshold of sample number, bin width and kurtosis to achieve iterative segmentation. Finally, these bins in each dimension are spanned into some twodimensional bins via the tensor product technique, and the number of samples falling in each two-dimensional bin is recorded.
Numerical Simulation
In order to verify the performance of the model proposed in this paper, the Monte Carlo method is used for numerical simulation in this section. Maximal Mean Squared Error (MMSE) and Mean Integrated Squared Error (MISE) are used to quantify the difference between the estimated density function and the true density function. Here, E denotes the expectation value, Q ( * ) f (x) denotes estimated density function and f (x) denotes true density function. MMSE and MISE error are used to measure the local and overall accuracy in the model, respectively.
Univariate Test
As the first example, we test the prediction accuracy of the univariate model by using the following test function: Here, N(µ, σ) denotes a normal distribution with an expectation µ and variance σ. The test function (called asymmetric claw distribution) is a combination of the five different parameters' normal distribution, which has five peaks and troughs of different heights on the considered interval [0, 1]. Next, the comparison of the quasi-interpolation density estimator (QIDE), univariate adaptive binned quasi-interpolation density estimator (AQIDE) based on the quadtree algorithm, classical kernel density estimator (KDE) and binned kernel density estimator (BKDE) are shown in Figure 3.
Numerical Simulation
In order to verify the performance of the model proposed in this paper, the Monte Carlo method is used for numerical simulation in this section. Maximal Mean Squared Error (MMSE) are used to quantify the difference between the estimated density function and the true density function. Here, denotes the expectation value, ( * ) ( ) denotes estimated density function and ( ) denotes true density function. MMSE and MISE error are used to measure the local and overall accuracy in the model, respectively.
Univariate Test
As the first example, we test the prediction accuracy of the univariate model by using the following test function: Here, ℕ( , ) denotes a normal distribution with an expectation and variance . The test function (called asymmetric claw distribution) is a combination of the five different parameters' normal distribution, which has five peaks and troughs of different heights on the considered interval [0, 1]. Next, the comparison of the quasi-interpolation density estimator (QIDE), univariate adaptive binned quasi-interpolation density estimator (AQIDE) based on the quadtree algorithm, classical kernel density estimator (KDE) and binned kernel density estimator (BKDE) are shown in Figure 3. Figure 3 shows the sketches of different density estimators when the sample number is n = 12, 400 and the number of simulation experiments is 100. Furthermore, we provide a comparison sketch of KDE under the larger sample number, n = 12, 400 × 50 = 620, 000. In these simulation experiments, the bandwidth selection for the KDE model and bin-width selection of the BKDE model both adopt the thumb rule of Equation (6). The bin number and bin width in the AQIDE model proposed in this paper are adaptively obtained by the univariate quadtree segmentation algorithm designed in Section 3. The shape parameter is selected as c = min(L i ). The threshold of bin width L max is determined by using the thumb rule L max = 1.06σn −0.2 from Equation (6). The threshold of bin number n max is determined by n max = nL max based on the thumb rule, and the kurtosis threshold is determined as 3.
In addition, to compare the performance of the quasi-interpolation model after an adaptive processing proposed in this paper, the same bin number and shape parameters are selected for the QIDE model and AQIDE model.
In Figure 3, the blue dashed line denotes the true density function, while the turquoise, black, red and green lines represent the results by the KDE, BKDE, QIDE and AQIDE models, respectively. The black dashed line denotes the result of KDE for larger samples. We can note that the ability of classical KDE to catch the last two high peaks is poor. It performs nearly as well as our QIDE only when the sample number is increased to 620,000. The MMSE error and MISE error corresponding to each model are shown in Table 2. According to Figure 3 and Table 2, the binned technique does not affect the fitting accuracy. Moreover, the KDE and BKDE models both have a serious over-smoothing phenomenon, and the prediction effect of peaks and troughs is poor. The QIDE and AQIDE models in this paper can alleviate the problem. The fitting effect of peaks and troughs performs significantly better than the KDE and BKDE models. In addition, according to the adaptive algorithm proposed in this paper, we calculate the bin number, and then we provide the results of the equidistant QIDE and AQIDE model under the same bin number. These results show that the AQIDE model performs better than the QIDE model when the bin number is the same. It means that the proposed adaptive method based on the quadtree algorithm can better capture the distribution details than the case of equidistance bin width and improve the fitting accuracy of the model by increasing or reducing adaptive bins in the high-density or gentle area.
Bivariate Test
In order to further test the performance of the multivariate model proposed in this paper, we choose the following modified bivariate density function as the test function: e −((9x 1 +1) 2 /49−(9x 2 +1)/10) The function originates from the classic Franke function, which is difficult to approximate due to two Gaussian peaks of different heights and a small dip. Therefore, it is widely used as a test function in numerical analysis. In the test function, a constant G is introduced to ensure that the final test function f is the density function defined over the domain [0, 1] 2 . A comparison of the adaptive multivariate binned quasi-interpolation density estimator (AMQIDE), multivariate binned quasi-interpolation density estimator (MQIDE), classical multivariate kernel density estimator (MKDE) and multivariate binned kernel density estimator (MBKDE) is shown in Figure 4. Figure 4 shows the sketches of different multivariate density estimators under the samples N = 300, 000 and the number of simulation experiments is 50. In these simulation experiments, the bandwidth of the MKDE model and the bin width of the MBKDE model both adopt the thumb rule from Equation (6). The bin number and bin width in the AQIDE model are calculated by the multivariate adaptive quadtree algorithm. The shape parameter is chosen as c = h and the threshold of the bin width L max is given by the thumb rule L max = σn −1/6 from Equation (6). The threshold of the sample number n max is determined by n max = nL max based on the thumb rule, and the kurtosis threshold is determined as 3.
In addition, the QIDE model chooses the same bin number and shape parameter as the AQIDE model. Figure 4 shows the sketches of different multivariate density estimators under the samples = 300,000 and the number of simulation experiments is 50. In these simulation experiments, the bandwidth of the MKDE model and the bin width of the MBKDE model both adopt the thumb rule from Equation (6). The bin number and bin width in the AQIDE model are calculated by the multivariate adaptive quadtree algorithm. The shape parameter is chosen as = ℎ and the threshold of the bin width is given by the thumb rule = −1/6 from Equation (6). The threshold of the sample number is determined by = based on the thumb rule, and the kurtosis threshold is determined as 3. In addition, the QIDE model chooses the same bin number and shape parameter as the AQIDE model.
The Figure 4a is the real density function. Figure 4b,c are the estimated density functions obtained by the AMQIDE model and MQIDE model, while Figure 4d,e are the estimated density functions obtained by the MKDE model and MBKDE model. In addition, corresponding MMSE errors and MISE errors in the four models are provided in Table 3. From Figure 4 and Table 3, it can be noted that the kurtosis in the Franke density function is small, and the estimated results of the MQIDE model and AMQIDE model are consistent, meaning that our adaptive method can effectively identify high-density areas. The results of the MKDE model and MBKDE model are similar to the univariate situation, which perform poorly with a serious boundary problem. The performance is much lower than the MQIDE and AMQIDE models proposed in this paper. Table 3. From Figure 4 and Table 3, it can be noted that the kurtosis in the Franke density function is small, and the estimated results of the MQIDE model and AMQIDE model are consistent, meaning that our adaptive method can effectively identify high-density areas. The results of the MKDE model and MBKDE model are similar to the univariate situation, which perform poorly with a serious boundary problem. The performance is much lower than the MQIDE and AMQIDE models proposed in this paper.
Conclusions
This paper proposes a multivariate adaptive quasi-interpolation density estimation model based on the quadtree algorithm. The key goal to achieve the adaptive segmentation for samples via the quadtree algorithm and obtain the proper binned number and bin width. The method can be adjusted adaptively according to the distribution of the samples. It not only identifies details of distribution in the high-density area, but also avoids the inefficiency of large bins, which can effectively avoid the over-smoothing phenomenon. Moreover, based on the good properties of quasi-interpolation, the theory of quasi-interpolation is introduced to construct the kernel function for the density estimator, which can avoid the boundary problem of the classical kernel density estimator. Finally, the idea of frequency approximation probability is used to construct the coefficient of the binned density estimator, which can handle large samples and improve computational efficiency. The simulation of Monte Carlo shows that the proposed nonparametric model has strong robustness and can estimate the density function with high performance. | 7,519.4 | 2022-07-08T00:00:00.000 | [
"Computer Science"
] |
A Novel Sequence-Based Feature for the Identification of DNA-Binding Sites in Proteins Using Jensen–Shannon Divergence
: The knowledge of protein-DNA interactions is essential to fully understand the molecular activities of life. Many research groups have developed various tools which are either structure-or sequence-based approaches to predict the DNA-binding residues in proteins. The structure-based methods usually achieve good results, but require the knowledge of the 3D structure of protein; while sequence-based methods can be applied to high-throughput of proteins, but require good features. In this study, we present a new information theoretic feature derived from Jensen–Shannon Divergence (JSD) between amino acid distribution of a site and the background distribution of non-binding sites. Our new feature indicates the difference of a certain site from a non-binding site, thus it is informative for detecting binding sites in proteins. We conduct the study with a five-fold cross validation of 263 proteins utilizing the Random Forest classifier. We evaluate the functionality of our new features by combining them with other popular existing features such as position-specific scoring matrix (PSSM), orthogonal binary vector (OBV), and secondary structure (SS). We notice that by adding our features, we can significantly boost the performance of Random Forest classifier, with a clear increment of sensitivity and Matthews correlation coefficient (MCC).
Introduction
Interactions between proteins and DNA play essential roles for controlling of several biological processes such as transcription, translation, DNA replication, and gene regulation [1][2][3].An important step to understand the underlying molecular mechanisms of these interactions is the identification of DNA-binding residues in proteins.These residues can provide a great insight into the protein function which leads to gene expression and could also facilitate the generation of new drugs [4,5].
Until now, several groups have published different studies based on either experimental or computational identification of DNA-binding proteins [1,[6][7][8][9][10][11] as well as residues in these proteins [12][13][14][15][16][17][18][19][20][21][22][23].However, the usage of experimental approaches for the determination of binding sites is still challenging since they are often demanding, relatively expensive, and time-consuming.To overcome the difficulty of experimental approaches, it is highly desired to develop fast and reliable computational methods for the prediction of DNA-binding residues.For this purpose, several state-of-the-art prediction methods have been developed for the automated identification of those residues.Such methods can be assigned into two main categories: (i) based on the information observed from structure and sequence in a collective manner; (ii) based on the features derived directly from the amino acid sequence alone (for more detail see reviews [24] and [25]).Although the first type of approaches provides promising information about DNA-binding residues in proteins, their application is difficult due to the limited number of experimentally determined protein structures.In contrast to structure-based approaches, sequence-based methods have been developed by extracting different sequence information features, like amino acid frequency, position-specific scoring matrix (PSSM), BLOSUM62 matrix, sequence conservation, etc. [3,4,18,19,26,27].Using these features, several machine learning techniques have been applied to construct the classifiers for the prediction of binding residues in proteins.To this end, a variety of support vector machine (SVM) classifiers have been developed in recent studies [2,[17][18][19]23,26,28].For example, Westhof et al. have recently used an SVM classifier approach in their study, named RBscore (http://ahsoka.u-strasbg.fr/rbscore/), by using the physicochemical and evolutionary features that are linearly combined with a residue neighboring network [2].Further, SVM algorithms were also applied for the models proposed in BindN [18], DISIS [19], BindN+ [23], DP-Bind [27] using different sequence information features including the biochemical property of amino acids, sequence conservation, evolutionary information in terms of PSSM, the side chain pKa value, hydrophobicity index, molecular mass and BLOSUM62 matrix.In addition, other machine learning classifiers such as neural network models [13,15], naive Bayes classifier [26], Random Forest classifiers (RF) [4,29,30] have been developed based on the features derived from protein sequences.For example, Wong et al. [29] have recently developed a successful method using RF classifier with both DNA and protein derived features to predict the specific residue-nucleotide interactions for different DNA-binding domain families.
Despite the rich literature on the sequence-based methods as mentioned above, to date there is still a need to find suitable feature extraction approaches that can enhance the characteristics of DNA-binding residues and thus help to improve the performance of existing methods for identification of DNA-binding residues in proteins.For this aim, we introduce and evaluate a new information theory-based method for the prediction of these residues using Jensen-Shannon divergence (JSD).As a divergence measure based on the Shannon entropy, JSD is a symmetrized and smoothed version of the Kullback-Leibler divergence and is often used for different problems in the field of bioinformatics [31][32][33][34][35].In this study, following the line of Capra et al. [34] we first quantify the divergence between the observed amino acid distribution of a site in a protein and the background distribution of non-binding sites by using JSD.After that, in analogy to our previous studies QCMF [32] and CMF [36], we incorporate biochemical signals of binding residues in the calculation of JSD that results in the intensification of the DNA-binding residue signals from the non-binding signals.
To demonstrate the performance and functionality of our proposed approach, we apply Random Forest (RF) classifier using our new JSD based features together with three widely used machine learning features, namely position-specific scoring matrix (PSSM), secondary structure (SS) information, and orthogonal binary vector (OBV) information (see review [24]).Our results show that using JSD based features, RF classifier reaches an improved performance in identifying DNA-binding residues with a significantly higher Matthews correlation coefficient (MCC) value in comparison to using previous features alone.Although we only applied RF classifier in this study, both of our sequence-based features could be used in other classifiers such as SVM, neural networks, or decision trees.
Results
In this study, we introduce new sequence-based features using JSD to improve the performance of previous machine learning approaches in identification of DNA-binding residues in proteins.For this purpose, we propose new sequence-based features (f JSD and f JSD-t ) using JSD in two different ways.First, using JSD, we calculate the divergences between observed amino acid distributions in multiple sequence alignments (MSAs) of proteins under study and the background distribution which is calculated according to amino acid counts at non-binding residue positions in MSAs.In the second step, we transform the observed amino acid distributions with a doubly stochastic matrix (DSM) to enhance the weak signal of binding sites in proteins which could not be predicted in the first step.Finally, we calculate for each residue in proteins JSD-based scores and use them for the improvement of the performance of machine learning approaches.
To evaluate our new features, we use two frequently considered cut-off distances of 3.5 Å and 5 Å and thus define a residue in a protein as DNA-binding if the distance between at least one atom on its backbone or side chain and the DNA molecule is smaller than the considered cut-off.
The Results section of this study comprises of two parts.First, we investigate the functionality of our new features combining them in Random Forest (RF) classifier with three previous features.The RF classifier is constructed from 4298 positive and 44,805 negative instances extracted from 263 proteins.The performance of the classifier is evaluated using a five-fold cross validation procedure in which we randomly divided the samples into five parts.The assessment is performed by choosing each of these parts as a test set and the remaining four parts as a training set for model selection.Second, to illustrate the usefulness of our new approach for the prediction of DNA-binding residues, we analyzed the proto-oncogenic transcription factor MYC-MAX (PDB-ID: 1NKP) which is a heterodimer protein complex of two proteins.It is important to note that this protein complex is not included in the training dataset.
Random Forest Classifier
To apply the Random Forest (RF) classifier, we combine our new features (f JSD and f JSD-t ) with the features f PSSM , f OBV , and f SS which are widely used for the prediction of DNA-binding residues.Our results show that using our features RF classifier reaches an improved performance in identifying DNA-binding sites with clearly higher statistical values (see Tables 1 and 2).Moreover, we individually evaluated the combination of our features with existing features.The results suggest that the classifier with f JSD-t feature has provided better sensitivity and comparable Matthews correlation coefficient (MCC) values in comparison to f JSD feature.However, its specificity is moderately decreased.A further comparison reveals that the usage of our both features together with other features does not affect the performance of the classifier.The details are presented for 3.5 Å in Table 1 and for 5 Å in Table 2 and in Appendix A with the standard error of each of the performance measures over the values obtained in the five iterations (see Tables A1 and A2).To further investigate the performance of JSD-based features proposed in this study, we analyzed two additional datasets, namely RBscore [2] and PreDNA datasets [37].Although the RBscore and PreDNA datasets initially contain 381 and 224 DNA-binding proteins, respectively, we have eliminated a few proteins since they are either included in our training dataset or ineligible due to their MSAs.Consequently, we constructed RF classifier using 263 proteins (which were also used for cross-validation) and randomly selecting 60 proteins from each dataset for testing, respectively.The results of these analyses consistently suggest that our new features show great complementary effect to the previous features which often leads to clear improvement of the classification performance (see Tables 3 and 4).The detailed performance of classifier on different features using different cut-offs for each dataset can be found in Appendix A (see Tables A3-A6).
Considering the AUC-ROC and AUC-PR as the only evaluation factor, results indicate that the RF classifier often achieved its best performance based on both cut-off distances if we combine our new f JSD-t feature together with the existing three features (see Tables 1-3).Interestingly, by analyzing the PreDNA dataset we observed that RF classifier with f JSD or f JSD-t features for the cut-off of 3.5 Å showed similar performance.However, regarding to the distance cut-off of 5 Å, the classifier with f JSD feature reached slightly better performance than those with f JSD-t feature (see Table 4).After looking at the overall performances, it is inferred that adding our new features can boost the performance of the RF classifier in terms of AUC-ROC and AUC-PR.
Position Analysis of the MYC-MAX Protein
The proto-oncobenic transcription factor MYC-MAX (PDB-Entry 1NKP) is a heterodimer protein complex that is active in cell proliferation and is over-expressed in many different cancer types [38].MYC-MAX transcription factors bind to Enhancer boxes (a core element of the promoter that consists of six nucleotides) and activate transcription of the underlying genes [39].
The amino acid chain of MYC protein consists of 88 residues, ten of which are known DNA-binding sites indicating that their distances to DNA are less than 3.5 Å. Applying RF classifier, which takes a majority vote among the random tree classifiers, with our first feature (f JSD ) combined with existing features, we predicted in total 17 residue positions to be DNA-binding in MYC protein.
Seven out of these positions (H906, N907, E910, R913, R914, P938, K939) correspond to the true DNA-binding sites of this protein.While the sites R913, R914, P938, and K939 could also be identified by RF classifier without using our new JSD-based features, the remaining three binding sites could only be detected using our features (for details see Table 5 and Figure 1).Interestingly, using f JSD-t together with f PSSM , f OBV , and f SS , the RF classifier correctly predicted these seven positions again as binding sites.
The second protein in the proto-oncobenic transcription factor complex is the MAX protein which consists of 83 residues including nine DNA-binding sites.Using f JSD or f JSD-t together with existing features individually, we observed 14 and 13 residue positions to be DNA-binding in MAX protein, respectively.Eight of the predicted positions (H207, N208, E211, R212, R214, R215, S238, R239) found by using either of our both features are true DNA-binding sites in MAX protein.However, without using our new features the RF classifier could only identify two (S238, R239) out of nine true DNA-binding sites in MAX protein (for details see Table 5 and Figure 1).Further, we observed that, the usage of f JSD-t leads to the reduction of false positive predictions in identifying DNA-binding sites in MAX protein.Moreover, when statistically evaluating both of our features, we observed that using our sequence-based features RF classifier reaches a significantly improved performance in identifying DNA-binding sites of both proteins with significantly higher sensitivity and MCC values whereas the specificity is moderately decreased.The simultaneous usage of both of our features together with f PSSM , f OBV , and f SS could result in the decrement of specificity or MCC values.The details are presented in Table 5.
Materials and Methods
In this section, we describe in particular the data we have used and our new residue-wise features designed to predict DNA-binding sites in proteins.
Materials
To compile our data needed for training and test, we started with the DBP-374 data set of representative protein-DNA complexes from the Protein Data Bank (PDB) [40] published by Wu et al. [5].Having performed a comparison with the new PDB version, we calculate for every remaining protein a multiple sequence alignment (MSA) using HHblits and the UniProt20 database (version from June 2015) [41].We eliminated all proteins, the MSA of which has less than 125 rows, so that we finally ended up with a dataset of 263 protein-DNA complexes and associated MSAs.To obtain our results we perform a five-fold cross validation.
As in [5], an amino acid residue is regarded as a binding site, if it contains at least one atom at distance of less than or equal to 3.5 Å or 5 Å from any atom of DNA molecule in the DNA-protein complex.Otherwise it is treated as non-binding site.For the distance cut-off of 3.5 Å, our set contains 4298 binding sites and 44, 805 non-binding sites.For the distance cut-off of 5 Å, however, our data set contains 7211 binding sites and 41, 892 non-binding sites.
Methods
Let M be a multiple sequence alignment, where its first row represents the protein under study.Every residue of that protein is then uniquely determined by its column.In what follows, we identify the residues of the protein with their columns of the MSA.
Grosse et al. [35] pointed out that the Jensen-Shannon divergence (JSD) is extremely useful when it comes to discriminate between two (or more) sources.Capra and Singh [34] carefully discussed several information theoretic measures like Shannon entropy, von Neumann entropy, relative entropy, and sum-of-pair measures to assess sequence conservation.They were the first using JSD in this context and stated its superiority.Gültas et al. [32] showed that the Jensen-Shannon divergence in the context of quantum information theory is of remarkable power.These three articles encouraged us to use JSD in this study.Our first idea is to design a new feature for the prediction of DNA-binding sites in proteins which leverages the Jensen-Shannon divergence (1) Therein, p k is the empirical amino acid distribution of the k-th column of the query MSA M, and p nd is the null distribution taken over all non-binding sites of our training data.
More precisely, we represent every column k of every MSA M considered by a 20 × 20 counting matrix C M k .The matrix C is symmetric and its rows as well as columns are indexed by the 20 amino acids.For every ordered pair of amino acids a, a , the matrix coefficient C M k aa is equal to the number of ordered pairs (i, j) (i = j) of row indices of M such that M ik = a and M jk = a .
To compute the null distribution p nd , we first set up the 20 × 20 counting matrix C nd using our training data.C nd is the sum over all matrices C M k , where M ranges over all training MSAs and k ranges over all non-binding site columns of M. Next, the rows of C nd are added up.Finally, the resulting row vector is normalized to obtain p nd .
There is nothing wrong with the idea that a large value JSD (p k p nd ) indicates that k is a DNA-binding residue.However, no information on binding sites is integrated.Only the non-binding sites of our training data are used to compute p nd .As we have seen in [32] and [36], transforming empirical amino acid distributions of MSA columns by a carefully designed doubly stochastic matrix is an effective way to integrate the binding site signals.To this end, we first set up a counting matrix C bind in a way similar to that of calculating the matrix C nd .The difference is that the variable column index k now ranges over all binding site columns of the training MSAs.Taking the counting matrix C bind as input, the doubly stochastic matrix D is computed by means of the canonical row-column normalization procedure [42].
Let M be the query MSA having columns.Compared with [32] and [36], we enhance the effect of transforming M's empirical column distributions by means of the doubly stochastic matrix D just defined.Let k be a column index of M. First, we compute the matrix product Second, we add up all of C (t) M k 's rows.Finally, we normalize the resulting row to obtain the transformed empirical row distribution p (t) k .We define two window scores score JSD,M (k) and score JSD-t,M (k) of residue k w.r.t.query MSA M, where the window w(k) surrounding k formally equals {k − Recapitulate that for any real x the binomial coefficient ( x 2 ) equals x(x − 1)/2.We define the scores as follows.
score JSD,M (k The preceding two score definitions are motivated as follows.Bartlett et al. [43] and Panchenko et al. [44] pointed out that exploiting conservation properties of spatial neighbors is useful to predict a residue as functionally important.Since the 3D structures are often unavailable, Capra and Singh [34] developed a window score for such predictions.The concrete shape of our scores takes pattern form Janda et al. [45], who in turn refer to Fischer et al. [33].Our scores are convex combinations of the Jensen-Shannon terms associated with the residues belonging to the surrounding window w(k).The weights fall linearly in the distance from k.
In a last step, we transform two window scores according to Equations ( 2) and ( 3) with respect to the query MSA M into final scores using the Equations ( 4) and ( 5), respectively.To this end, for every column index k ∈ {1, 2, . . ., } of M we define: The Equations ( 4) and ( 5) are basically the determination of the percentage of scores below the current one at index k.This transformation procedure is essential because it converts MSA-dependent window scores to MSA-independent scores.
To demonstrate the benefit of our new features, we adopt the features f PSSM , f OBV and f SS devised in [5].Together with our two new features f JSD and f JSD-t , we plugged them into the Random Forest (RF) classifier [46] (see Tables 1 and 2 for the combinations we used).For the RF implementation we used the WEKA data mining software [47].
To deal with the imbalanced data problem, we applied bagging techniques suggested in [48].Since we make use of five-fold cross validation, we randomly split the dataset into 5 roughly equal-sized parts.Every training phase performed on 4 parts consists of 11 sub-phases.In each such sub-phase we randomly draw twice as many non-binding sites as there are binding sites.We then construct a Random Forest (RF) taking those non-binding sites and all binding sites of the 4 parts as input.Finally, for each instance of the validation part the majority vote of above 11 RF classifiers was taken.
Discussion
Our results show that combining either feature f JSD-t or feature f JSD with the three features f PSSM , f OBV and f SS we have adopted from [5] clearly boosts the performance of the RF-based classifier in identifying the DNA-binding sites in proteins, where feature f JSD-t generally reaches a slightly better performance than feature f JSD .
Although our two new features and PSSMs are derived from MSAs, Tables 1 and 2 clearly demonstrate that these approaches carry distinct information.Thus they capture different kinds of evolutionary information.The reason for this essential difference can be explained based on the underlying algorithms.While the PSSM approach consists of statistic which indicates how likely a certain amino acids occurs at a certain position, our JSD-based approach measures the divergence of a certain distribution to a known non-binding site distribution.
The superiority of feature f JSD-t to feature f JSD deserves an explanation attempt.Feature f JSD does not integrate any information on DNA-binding sites.Only training non-binding sites are used.In contrast, feature f JSD-t additionally uses a doubly stochastic matrix gained from the training binding sites.The effect on empirical amino acid column distributions of the transformation we have devised using that matrix is the following.The empirical column probabilities of amino acids are merged, if it is very likely to co-observe them in a binding site column.Since the amino acid content of binding site columns and non-binding site columns differ, the distance between f JSD-t,M (k) and f JSD-t,M (k ) is larger and more significant than the distance between f JSD,M (k) and f JSD,M (k ), where k is a binding site column of MSA M, and k is a non-binding site column.
At first glance it is surprising that adding both feature f JSD-t and feature f JSD to the feature triplet f PSSM , f OBV , f SS is worse than adding feature f JSD-t alone.Taking into account what we have mentioned in the preceding paragraph, it turns out that if feature f JSD-t is already there, feature f JSD may increase the noise.
Conclusions
In this work, we report a new sequence-based feature extraction method for the identification of DNA binding sites in proteins.For this purpose, we adopt the ideas from Capra et al. [34] and our previous studies CMF [36] and QCMF [32].Our approach is an information theoretic method that applies the Jensen-Shannon divergence (JSD) for amino acid distributions of each site in a protein in two different ways.First, the JSD is applied to quantify the differences between observed amino acid distributions of sites and the background distribution of non-binding sites.Second, we transform the observed distributions of sites through a doubly stochastic matrix to incorporate biochemical signals of binding residues in the calculation of JSD that results in the intensification of the DNA-binding residue signals from the non-binding signals.The results of our study show that the additional usage of our new features (f JSD-t or feature f JSD ) in combination with existing features is significantly boosts the performance of RF classifier in identifying DNA binding sites in proteins.Our results further indicate the importance of our second feature (f JSD-t ) since taking into account the binding site signals in the calculation of JSD metric, the characteristics of DNA binding residues are enhanced.As a consequence, an intensification of the signal caused by DNA binding sites from non-binding sites occurs and thus the classifier achieves its improved performance.
Figure 1 .
Figure1.DNA-binding sites in proto-oncobenic transcription factor MYC-MAX protein complex (PDB-Entry 1NKP).Green spheres denote positions of the DNA-binding sites in both proteins which are detected by RF classifier either using the existing features (f PSSM , f OBV , and f SS ) alone or combining our new features with these existing features together.Purple spheres show the localization of additional binding sites which were only found by RF classifier using our new features with existing features.Moreover, there are further three binding sites in MYC protein and one binding site in MAX protein, shown with yellow spheres, that could not be identified by the classifier.
Appendix A. 1 .
Performance Measures with Standard Error
Table 1 . Prediction performance of Random Forest (RF) classifier on different features using a cut-off of 3.5 Å. The prediction system was evaluated by five-fold cross validation. Feature Sensitivity Specificity MCC AUC-ROC AUC-PR
MCC: Matthews correlation coefficient; AUC-ROC: area under the receiver operating characteristics (ROC) curve; AUC-PR: area under the precision-recall curve.
Table 2 .
Prediction performance of Random Forest (RF) classifier on different features using a cut-off of 5.0 Å.The prediction system was evaluated by five-fold cross validation.
MCC: Matthews correlation coefficient; AUC-ROC: area under the receiver operating characteristics (ROC) curve; AUC-PR: area under the precision-recall curve.
Table 3 .
Prediction performance of Random Forest (RF) classifier on RBscore dataset using different distance cut-offs.
MCC: Matthews correlation coefficient; AUC-ROC: area under the receiver operating characteristics (ROC) curve; AUC-PR: area under the precision-recall curve.
Table 4 .
Prediction performance of RF classifier on PreDNA dataset using different distance cut-offs.
Table 5 .
Prediction performance of RF classifier on different features using a cut-off of 3.5 Å for MYC-MAX protein complex (Protein Data Bank (PDB)-Entry 1NKP).
Table A1 .
Prediction performance of Random Forest (RF) classifier on different features using a cut-off of 3.5 Å.The prediction system was evaluated by five-fold cross validation.
Table A2 .
Prediction performance of Random Forest (RF) classifier on different features using a cut-off of 5.0 Å.The prediction system was evaluated by five-folds cross validation.
Table A3 .
The detailed prediction performance of Random Forest (RF) classifier on different features using a cut-off of 3.5 Å.
MCC: Matthews correlation coefficient; AUC-ROC: area under the receiver operating characteristics (ROC) curve; AUC-PR: area under the precision-recall curve.
Table A4 .
The detailed prediction performance of Random Forest (RF) classifier on different features using a cut-off of 5.0 Å. : Matthews correlation coefficient; AUC-ROC: area under the receiver operating characteristics (ROC) curve; AUC-PR: area under the precision-recall curve. MCC
Table A5 .
The detailed prediction performance of Random Forest (RF) classifier on different features using a cut-off of 3.5 Å.
MCC: Matthews correlation coefficient; AUC-ROC: area under the receiver operating characteristics (ROC) curve; AUC-PR: area under the precision-recall curve.
Table A6 .
The detailed prediction performance of Random Forest (RF) classifier on different features using a cut-off of 5.0 Å.
MCC: Matthews correlation coefficient; AUC-ROC: area under the receiver operating characteristics (ROC) curve; AUC-PR: area under the precision-recall curve. | 6,118 | 2016-10-24T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Malignant Evaluation and Clinical Prognostic Values of m6A RNA Methylation Regulators in Glioblastoma
N6-methyladenosine (m6A) RNA methylation, the most common form of mRNA modification and regulated by the m6A RNA methylation regulators (“writers,” “erasers,” and “readers”), has been reported to be associated with the progression of the malignant tumor. However, its role in glioblastoma (GBM) has been poorly known. This study aimed to identify the expression, potential functions, and prognostic values of m6A RNA methylation regulators in GBM. Here, we revealed that the 13 central m6A RNA methylation regulators were firmly related to the clinical and molecular phenotype of GBM. Taking advantage of consensus cluster analysis, we obtained two categories of GBM samples and found malignancy-related processes of m6A methylation regulators and compounds that specifically targeted the malignant processes. Besides, we also obtained a list of genes with poor prognosis in GBM. Finally, we derived a risk-gene signature with three selected m6A RNA methylation regulators, which allowed us to extend the in-depth study and dichotomized the OS of patients with GBM into high- and low-risk subgroups. Notably, this risk-gene signature could be used as independent prognostic markers and accurate clinicopathological parameter predictors. In conclusion, m6A RNA methylation regulators are a type of vital participant in the malignant progression of GBM, with a critical potential in the prognostic stratification and treatment strategies of GBM.
The vital functions of RNA modification in processes of life have caught people's eyes in recent years. Substantial progress in regulating RNA transcription (18,19), the event of processing (20,21), splicing (5,22), RNA stabilities (23,24), and translation (25,26) was witnessed in m6A posttranscriptional modifications. However, to date, the functions of the majority of RNA modifications found in mRNAs need further exploration. Notably, the functional roles of m6A methylation in tumorigenesis, tumor differentiation (27), proliferation (28), and invasion (27) remain elusive.
GBM is the most common and devastating primary tumor in the brain. Even the combined surgical resection, radiation therapy, chemotherapy, and other therapies were broadly used, the recurrence of the patients with GBM is inevitable. Besides, the median survival of GBM patients is <15 months after a definite diagnosis (29)(30)(31). The m6A RNA methylation regulators were also reported to be associated with self-renewal, radio resistance, and tumorigenesis of GBM stem cells (32). However, there is no comprehensive investigation of the expression of m6A RNA methylation regulators in GBM.
In the current study, the 13 m6A RNA regulators, which have been widely reported, were systematically analyzed using the GBM RNA sequencing data from The Cancer Genome Atlas (TCGA) (n = 174) and Chinese Glioma Genome Atlas (CGGA) (n = 249) databases. Taking advantage of m6A RNA methylation regulator-based consensus clustering analysis, we demonstrated the malignant process and obtained a list of genes with poor prognosis in patients with GBM. Importantly, we further validated these genes in the CGGA database and identified potential drugs targeting the malignant process of GBM using the Connectivity Map (CMap) (33). Besides, the risk-gene signaturederived from m6A RNA methylation regulators might be used as a novel biomarker that could identify GBM patients' prognosis and predict the clinicopathological parameters of GBM.
Data Acquisition
The RNA-seq transcriptome data and corresponding clinicopathological parameters of GBM patients were obtained from the TCGA database (http://cancergenome.nih.gov/) and the CGGA database (http://www.cgga.org.cn). The RNA-seq transcriptome data of healthy human tissue was obtained from the Genotype-Tissue Expression (GTEx) database (http://commonfund.nih.gov/GTEx/). We combined GTEx and CGGA data, and then harmonized them using quantile normalization and svaseq-based batch effect removal (34). The clinicopathological parameters for the CGGA and TCGA datasets are summarized in Table S1.
Selection of m6A RNA Methylation Regulators
Thirteen widely recognized m6A RNA methylation regulators were retrieved from published literature. We then systematically compared the correlation between the expression of these m6A RNA methylation regulators and clinicopathological parameters in GBM patients.
Bioinformatic Analysis
To further explore the role of m6A RNA methylation regulators in GBM patients, we clustered the GBM patients into two clusters by using the R package ConsensusClusterPlus (35). Heatmaps were drawn based on the average linkage method and the Pearson distance measurement method. Principal Component Analysis (PCA) was carried out by an R package called PCA to observe the distribution of gene expression in two clusters. Differential analyses for each gene in the pre-classified samples performed using the limma package in R (36). Fold change (FC) > 2 and adjusted p-value (q-value) < 0.01 were set as the cutoff values to screen for differentially expressed genes (DEGs). Gene Ontology (GO) functional analyses and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses were performed to analyze the upregulated DEGs. The relationship between DEGs gene expression levels and overall patient survival time was illustrated by generating Kaplan-Meier plots. The correlation was tested using a log-rank test. Gene Set Enrichment Analysis (GSEA) was used to investigate the functions correlated with different clusters of GBM. |NES| > 1, adjusted p < 0.05, and FDR q < 0.25 were considered as statistically significant as described in a previous study (37).
Construction of Protein-Protein Interactions (PPI) Network
PPI among selected genes was analyzed using the STRING database (38) and reprocessing via Cytoscape software (39). For better visualization, the color of the node in the PPI network was applied to reflect the logFC value, and the size of the node was applied to indicate the number of source proteins with the target protein. Molecular COmplex Detection (MCODE) (version 1.4.3), which is clustered based on given network topology, was used to discover densely connected regions. Then, the most significant module was filtered out by MCODE from the PPI networks. The criteria for selection were as follows: MCODE scores > 5, degree cut-off = 2, node score cut-off = 0.2, Max depth = 100, and k-score = 2.
Construction of Gene-Signature
Univariate Cox regression analysis of the expression of 13 m6A RNA methylation regulators was conducted to determine the candidate genes associated with overall survival (OS). After that, an L1-penalized (LASSO) was performed to further identify the selected genes with independent prognostic value (40,41).
Finally, their regression coefficients were determined by the minimum criteria. The risk score for the signature was calculated accurately by the formula: where Coefi is the regression coefficient and xi is the expression of each selected gene. GBM patients were divided into lowand high-risk subgroups according to the median risk score. Kaplan-Meier plot was performed to compare the OS between two risk subgroups.
Identification of Potential Compounds Targeting the Malignancy-Related Pathways
CMap (updated in September 2017) (https://clue.io/), as the world's largest perturbation-driven gene expression dataset, was employed to search for candidate chemical compounds that might target GBM stemness related pathways (33). The compounds were discovered by interrogating the CMap database of signatures with a query (a list of DEGs relevant to biological features of interest). The final results involved a CMap connectivity score (from −1 to 1) that indicated the degree of specificity associated with our particular query. 300 DEGs (150 downregulated and 150 upregulated) were selected for query methodology. Noteworthily, the closer the connectivity score of a compound was to −1, the more likely it was to reverse the genetic pattern we are querying. Finally, the compounds with the absolute value of CMap connectivity score of 90 or higher were considered to be potential therapeutic agents for functional validation.
Statistical Analysis
Chi-square tests were used to compare the expression levels in GBM for age, gender, healthy samples, primary GBM and recurrence GBM, isocitrate dehydrogenase (IDH) status, and cytosine-phosphate-guanine island methylator phenotype (G-CIMP) status. One-way ANOVA was used to compare the distribution of the subtype of GBM (Classical, Mesenchymal, Neural, Proneural) (42). To evaluate the prediction accuracy of the risk score model, we performed a receiver operating characteristic (ROC) curve and calculated the area under the curve (AUC). Potential prognostic factors such as age (≤ 65 vs. > 65), gender (female vs. male), GBM subtype, and risk score (lowrisk vs. high-risk) were analyzed by Univariate and multivariate Cox hazard regression.
Expression Patterns of m6A RNA Methylation Regulators in GBM
According to the essential biological functions of methylation regulators in the development of GBM, we first analyzed the relationship between each m6A RNA methylation regulator and the clinical molecular phenotype of GBM. The expression level of individual m6A RNA methylation regulator and different types of samples was presented as a heatmap. The result strongly indicates that the expression of the majority of m6A RNA methylation regulators was associated with the occurrence of GBM ( Figure 1A). Importantly, the significant correlation between the occurrence of GBM and the expression levels of ALKBH5, METTL3, KIAA1429, HNRNPC, WATP, YTHDC2, YTHDF1, YTHDF2, and FTO were confirmed by the quantitative analysis of CGGA ( Figure 1B). Compared with the healthy samples, the expression of METTL3, HNRNPC, WTAP, KIAA1429, YTHDF2, and YTHDF1 was upregulated, while the expression of ALKBH5, YTHDC2, and FTO was downregulated in the GBM samples (Figures 1C,D). Correlation analysis was also employed to investigate the relationship between the expression level of m6A RNA methylation regulators and the different stages (primary tumor stage and recurrent tumor stage) of GBM. Among the 13 m6A RNA methylation regulators, only HNRNPC was significantly related to the cancer recurrence ( Figure 1E). Considering the dramatically imbalanced numbers of primary GBM (n = 156) and recurrent GBM (n = 13) in the TCGA database, the results of TCGA database analysis were not necessarily as accurate as those of CGGA, while the numbers of primary GBM (n = 140) and recurrent GBM (n = 109) were relatively balanced. Therefore, we analyzed the expression profile of the primary and recurrent GBM samples from the CGGA database and found that HNRNPC exhibited no correlation. Interestingly, as shown in Figure 1F, the expression of WTAP, ALKBH5, and METTL14 was significantly associated with the recurrence of GBM. These results suggested that WTAP, ALKBH5, and METTL14 were firmly related to the recurrence process of GBM (Figures 1G,H). We further explored the relationship between the expression of m6A RNA methylation regulators and GBM molecular subtypes. Notably, every expression of the majority of m6A RNA methylation regulators was associated with the subtype of GBM, except YTDHF2 and HNRNPC ( Figure S1A). We also investigated the relationship between IDH status, G-CIMP status, and expression levels of each m6A RNA methylation regulator in GBM. The results revealed that the expression levels of METTL14, KIAA1429, YTHDC1, ZC3H13, and FTO were significantly dysregulated between different IDH status in the TCGA dataset ( Figure S1B). As for the G-CIMP status, the expression of RBM15, YTHDF2, KIAA1429, and YTHDC1 exhibited a significant difference between G-CIMP+ and G-CIMP-( Figure S1C). We speculate that the change in the correlation of m6A RNA methylation regulators may be an internal characteristic that can reflect the external differences. From Figures 1I,J, different degrees of the relationship were observed between different m6A RNA methylation regulators. Most of the relationships between the regulators were positive correlations, especially YTHDC1, which contained the most active correlation with other regulators (Figures 1I,J).
Identification of Two Clusters of GBM Samples With Different Clinical Characteristics
Next, GBM samples with complete clinical parameters were selected for the subsequent consensus clustering analysis. From the view of the number of samples per group, an unbalanced distribution was observed in the three groups when k = 3 ( Figure S2A). Hence, based on the expression similarity of the 13 regulators, k = 2 was the most optimum with clustering stability datasets increasing from k = 2-10 (Figures 2A-C and Figure S2). Then, GBM samples from the TCGA dataset were pre-classified into two groups (52 samples in one group labeled as RM1 and 106 samples in another group labeled as RM2) through consensus cluster analysis. The clinical features of the two groups are summarized in Table S2. The heatmap of cluster analysis showed that the 13 regulators could distinguish different samples, and the samples in the same cluster possessed a high correlation ( Figure 2C). Principal component analysis was performed to elucidate the difference in transcriptional profiles between the RM1 and RM2 subgroups. The results investigated that there was a clear distinction between these two subgroups ( Figure 2D). The survival curve according to Kaplan-Meier survival analysis for the clustered samples revealed a noticeable decrease in the OS in the RM2 subgroup compared with the RM1 subgroup, suggesting that the 13 methylation regulators could classify the GBM samples in prognostic level ( Figure 2E). We further found that the median survival of the RM1 group was 1.4 years, while the RM2 was only 1 year. In addition, the clinicopathological features of these two subgroups were compared. The RM1 subgroup was markedly correlated with younger age at diagnosis (P < 0.05), Neural or proneural subtypes (P < 0.001), and G-CIMP-status ( Figure 2F). The RM2 subgroup mainly contained GBM with older age at diagnosis, classical or mesenchymal subtypes, and G-CIMP+ status. Consistent with the report that classical and mesenchymal were more malignant compared to neural and proneural (42).
Functional Annotation of Classification Determined by Consensus Clustering Analysis
The above results indicate that the consensus clustering results were closely related to the degree of malignancy of GBM.
To better understand the mechanisms between the malignancy of GBM and the 13 m6A RNA methylation regulators, a total of 2,299 genes (599 genes were upregulated, and 1,700 genes were downregulated) were identified as DEGs by using differential analysis ( Figure 3A). To summarize the potential function of DEGs, we performed an annotation of the 599 significantly upregulated genes (onco role, Table S3) in the RM2 subgroup through GO function analysis and KEGG pathway analysis. The top 10 GO terms indicated that upregulated genes were enriched in malignancy-related processes, including neutrophil-mediated immunity, cell proliferation, cell junction, phagocytosis, and cell-substrate adhesion ( Figure 3B). KEGG pathway analysis top 10 terms testified that upregulated genes were related to the regulation of actin cytoskeleton pathway, focal adhesion pathway, proteoglycans in cancer pathway, and Fc gamma R-mediated phagocytosis pathway ( Figure 3C). Furthermore, GSEA suggested that the malignant hallmarks of tumors, including KRAS signaling (NES = 1.59, normalized P = 0.013), inflammatory response (NES = 1.68, normalized P = 0.052), myogenesis (NES = 2.02, normalized P < 0.001), and IL-6/JAK/ STAT3 signaling (NES = 2.0, normalized P < 0.001), were significantly associated with the RM2 subgroup (Figures 3D-G). All these results proved that the two categories derived from consensus clustering analysis were closely related to the malignancy of GBM.
Novel Candidate Compounds Targeting the Malignancy-Related Pathways and Biological Functions in GBM
Next, we sought to determine the potential compounds that target malignancy-related pathways and biological functions; DEGs based on consensus clustering were submitted to retrieve the CMap databases (33). The top 89 compounds capable of repressing the above gene expression of GBM were summarized in Table S4 and
Identification and Analysis of m6A-Related Genes With Poor Prognosis in GBM
To explore the significance of each upregulated gene for the survival time of GBM patients from the TCGA database, Kaplan-Meier survival curves were generated. Among the 599 upregulated DEGs in the RM2 subgroup, a total of 79 DEGs (Table S3) were able to predict the poor OS in the log-rank test (P < 0.05, representative figures were shown in Figure 5).
To better understand the interactions between the 79 genes with prognostic value and the 13 m6A RNA methylation regulators, we also analyzed the PPI among them using the STRING database. The network consists of four modules with a total of 88 nodes and 527 edges, which indicates that close interaction exists in this PPI network (Figure 6A). Among the 88 nodes, 54 central node genes (bold in Table S5) were selected with the filtering of degree > 10. The most significant 10 node degree genes were ITGAM, STAT3, SPI1, TNFRSF1B, MYO1F, SLC11A1, TCIRG1, RAP2B, FERMT3, and LCP1. The top two significant modules were selected by using the MCODE application for further analysis. To make the description more convenient, we named these modules "ITGAM module" and "RAP2B module, " respectively. Twenty-eight nodes and 165 edges were involved in ITGAM modules, with STAT3, SPI1, TNFRSF1B, SLC11A1, and FERMT3 being the remarkable nodes, as they had the most connections with other nodes in this module ( Figure 6B). In the RAP2B module, 8 edges involving 6 nodes were formed in this network ( Figure 6C). In addition, we predicted the function of the ITGAM module through GO analysis. It was related to the biological process of mRNA splicing via spliceosome, mRNA methylation, and oxidative single-stranded RNA demethylation. For instance, ITGAM was reported to play a critical role in invasive growth and angiogenesis in malignant gliomas (43). STAT3 methylation via STAT3 signaling could also promote tumorigenicity of GBM stem-like cells (44). The above results clearly demonstrate that m6A regulators participate in the critical malignant related biological regulatory network.
To find out whether the 79 OS-related DEGs found in the TCGA database were meaningful in the additional database, we further analyzed the expression profiles of 249 GBM cases from the CGGA database. Importantly, a total of 64 genes were validated to be significantly related to the poor prognosis, of Table S3) were of particular interest, as they have not been previously reported for their prognostic value in GBM patients ( Figure S3).
Prognostic Value of m6A RNA Methylation Regulators
For the purpose of investigating the prognostic value of m6A RNA methylation regulators, univariate Cox regression analysis was performed on the expression profile data. Based on the information contained in these results, 4 of 13 genes exhibited a significant correlation with the prognosis. Among these four selected genes, HNRNPC, ALKBH5, and ZC3H13 were risky genes, with HR > 1, while FTO was a protective gene, with HR < 1 (Figure 7A).
Robust likelihood-based survival modeling and LASSO regression are widely used to screen prognostic genes in the context of high dimensional data and were therefore applied in our study. Compared with a single biomarker, integrating multiple biomarkers into one risk model could present a better prediction performance of the model. To remove the prediction errors and maintain the stability of the predictive prognosis, we specifically selected three genes (P < 0.05 and HR > 1) to develop the gene signature. Afterward, the above-selected genes with independent prognostic value, including NRNPC, ALKBH5, and ZC3H13, were screened as the candidate genes using LASSO regression. The regression coefficients based on the minimum criteria were used to assess the risk score for the GBM patients, and the coefficients of selected gene signatures were −0.014623, 0.017905, and −0.08661, respectively (Figures 7B,C).
Gene-Signature Showed Strong Associations With Clinical Features in GBM
To investigate the prognostic value of the risk gene signature in the TCGA database, GBM patients were dichotomized into lowand high-risk subgroups, based on the median risk score. We next sought to detect the correlation between the two risk subgroups and clinical features-a heatmap was designed and showed the expression of the three selected m6A RNA methylation regulators ( Figure 7D). Significant differences were observed in this heatmap between the high-and low-risk subgroups with respect to IDH1 status (P < 0.001), age (P < 0.001), molecular subtypes (P < 0.001), and RM1/2 subgroups (P < 0.001). To evaluate the predictive accuracy of the risk score model, we performed the ROC curve and calculated the AUC. The AUC was 0.701 in the 2-year ROC curve for the prognostic model ( Figure 7E). For molecular phenotypes, such as IDH1 and G-CIMP status, the risk model also performed relatively well, with AUC was 0.821 and 0.733, respectively (Figures 7F,G). A similar trend was observed in subgroup analyses for mesenchymal and proneural, with AUCs of 0.703 and 0.764, respectively (Figures 7H,I). Moreover, the predicting power of the risk score model was significantly increased in RM1/2 subgroup analysis, with an AUC of 0.887 ( Figure 7J). Notably, patients in the highrisk group exhibited significantly shorter survival time than those in the low-risk group (P < 0.05) (Figure 7H). Consistent with these findings, the patients with a high-risk score were also more sensitive to temozolomide chemotherapy, radiation therapy, and chemoradiation than low-risk score patients ( Figure S4).
We further performed univariate and multivariate Cox proportional hazard regression analyses for the TCGA dataset to determine whether the risk signature was an independent prognostic factor. By univariate Cox analysis, age (HR = 1.033, P < 0.001) and risk score (HR = 11.899, P < 0.001) were all correlated with the OS, while GBM subtype and gender were not (Figure 7L). A similar trend of risk score was also observed when including these factors in the multivariate Cox proportional hazard regression (Figure 7M). The results demonstrated that age and risk score were independent prognostic factors in the TCGA GBM dataset. According to our results, the independent prognostic value and excellent prediction accuracy of the gene signature derived from the 13 m6A RNA methylation regulators were identified.
Low Expression in Normal Brain Tissues of METTL3 and METTL14
Based on our results and the evidence in literature, METTL3 was overexpressed specifically in GBM and was significantly related to the occurrence of GBM (45)(46)(47). To comprehensively understand the function of METTL3, we retrieved the expression levels of healthy tissue and tumor tissue in different parts from the GTEx and GEPIA databases (48), respectively. We found that the expression values of METTL3 in the brain were lower than other tissues in the organism (Figure 8A). Notably, the expression level of METTL3 in most tumors was smaller than the corresponding healthy tissue, except for GBM ( Figure 8B). These results indicated that high expression of METTL3 might act as a driver of GBM and play a crucial role in GBM. Considering that METTL3 and METTL14 are both the most common and abundant mRNA modifications in eukaryotes, we also searched the expression profiles of METTL14 and found the same trend ( Figure S5). The above results provided evidence for METTL3 and METTL14 as proto-oncogenes of GBM.
DISCUSSION
In the current study, we systematically analyzed the expression of m6A RNA regulators with different clinicopathological parameters and revealed the potential values. In particular, by comparing the expression of 13 regulators in a large number of healthy tissues and primary and recurrent tumor tissues, we found that they were related to the occurrence and recurrence of GBM. Furthermore, the expression of m6A RNA methylation regulators was also associated with the GBM subtype, G-CIMP status, and IDH status. Besides, GBM samples were classified into two subgroups, RM1/2, through consensus cluster analysis based on the expression of the 13 regulators. The RM1/2 subgroup not only affected OS and clinical characteristics, but also closely related to malignancy-related processes, key signaling pathways, and GBM hallmarks. Taking advantage of CMap, we also identified potential compounds targeting RNA methylation regulators in GBM. In addition, we obtained 79 genes with poor prognosis based on the RM1/2 subgroup by Kaplan-Meier analysis. Importantly, 64 genes with poor prognosis were validated in CGGA, a separate GBM database. Finally, we derived a prognostic gene signature, which dichotomized the OS of GBM patients into low-and high-risk subgroups and allowed us to extend the analysis. This risk gene-signature could be used as an independent prognostic marker and accurate clinicopathological parameters predictor. Glioma was divided into GBM and low-grade glioma (LGG). GBM, as the most destructive glioma (WHO: IV), possesses significantly different genomics, treatment methods, clinical manifestations, characteristics, and prognosis from LGG (WHO: I-III) (49)(50)(51)(52). Prior to this, the value of m6A methylation regulators in gliomas has been explored (53). Considering the comprehensive differences between LGG and GBM, we believed that this analysis was not sufficiently detailed and specific. Therefore, we specifically analyzed the specific value of these regulators in GBM. Similarly, we have all identified different hallmarks and pathways associated with malignancy. In particular, we further analyzed and obtained regulator-related specific targeted drugs and genes with poor prognosis, of which 37 genes have not been previously reported at the prognostic level. Especially, the PPI network between regulators and related genes was explored. Furthermore, we derived a prediction model that can predict the specific clinical characteristics and molecular phenotypes of GBM. Finally, we also provided evidence for the large transcriptome levels of METTL3 and METTL14 as cancer driver genes.
Among the m6A RNA methylation regulators, METTL3 or METTL14 is one of the most common and abundant mRNA modifications in eukaryotes. It has been reported that METTL3 or METTL14 inhibits the growth and selfrenewal of the GBM stem cells (32). ALKBH5 was reported to maintain tumorigenicity of GBM stem cells by sustaining FOXM1 expression and cell proliferation program (54), suggesting a crucial tumorigenic role. Most recently, it has been reported that FTO plays a carcinogenic role through the FTO/m6A/MYC/CEBPA signaling pathway in IDH mutant cancers, such as glioma and leukemia (19,55). The differences of involved genes among different tumor types give us a clue that altered the expression of key genes, which are sensitive to the function of m6A methylation regulators, can cause significant phenotype changes.
In this section, we rounded analyzed the expression of all m6A RNA methylation regulators in GBM at the occurrence and recurrence stages. Unlike our study, a previous trial showed that ALKBH5 was an oncogene to maintain tumorigenicity, while our study showed a significantly decreased trend in the GBM group compared with normal. However, unlike the previous trial, our study included a large number of clinical samples and was validated in two databases. This difference in the number of samples may account for the different results. Interestingly, the upward trend in ALKBH5 was significantly associated with tumor recurrence when we compared primary and recurrent tumors. It's worth mentioning that ALKBH5 belongs to the AlkB family of non-heme Fe(II)/a-ketoglutaratedependent dioxygenases, and the activity is iron-dependent (17). Given our results, we further speculate that iron metabolism is involved in GBM recurrence (56,57). However, this hypothesis needs to be more tested. Nevertheless, a tendency toward a lower expression of FTO was observed in GBM compared with normal tissues. Unlike ALKBH5, FTO was found to mediate the demethylation of m6Am instead of m6A preferentially. It seems that FTO and ALKBH5 mediate the demethylation of different methylation targets in GBM, which is worthy of future research. Based on this difference in demethylation targets and tendencies, we speculate that ALKBH5 and FTO have different functions and mechanisms in GBM and are worthy of further study.
Nearly all IDH-mutant GBMs harbored G-CIMP and patients carrying G-CIMP (G-CIMP+) have been confirmed to confer a better clinical outcome than those not carrying (G-CIMP-) (58). Collectively, we conclude that the expression of m6A RNA methylation regulators is closely associated with the occurrence, recurrence, IDH status, G-CIMP status, and molecular subtype of GBM. Moreover, these findings of the expression of each individual m6A methylation regulator can contribute to the development of new cancer therapies, as chemotherapy targeting m6A methylation is now at the forefront of cancer therapy.
We demonstrated that m6A RNA methylation regulators were also related to the biological processes, cellular component, and signaling pathways of GBM malignant progression. RNA m6A methylation is a nascent field as of yet, the significance of the above epigenetics marker in human cancer is just beginning to be appreciated. Although the m6A modification showed tissue-specific regulation and increased significantly throughout the brain development process (3), studies (59, 60) on the role of m6A modification in either brain lesions or brain cancers have only been reported sporadically (61,62). Several biological processes and signaling pathways have already been identified: tumor stem-like cell regulation, including maintenance, radio-resistance, and tumorigenesis; post-transcriptional modification, including in RNA transcript, RNA processing, RNA processing, RNA degradation, and RNA translation; FTO/m6A/MYC/CEBPA signaling pathways (19); JAK1/STAT5/C/EBP β pathways (63); and the IL-7/STAT5/SOCS pathways (64). This report provided the potential biological process and pathway between RNA m6A methylation and GBM-malignant progression, which represent a significant step toward developing therapeutic strategies to treat GBM by targeting m6A modification.
CMap can identify biomarkers for predicting specific drug reactions, mechanisms of treatment, and ways to overcome them (65)(66)(67). CMap analysis, which is based on a limited number of treated cell lines, accurately identified a number of compounds that have been shown to have an effect on m6A of other tumor types with specificity (33,(68)(69)(70). METTL3 has been reported to promote gastric cancer angiogenesis by secreting HDGF (71). This result verified the accuracy of our CMap-based drug prediction from the side. Based on these results, hence, we speculated that PDGFR tyrosine kinase receptor inhibitor, KIT inhibitor, and tubulin inhibitor could all be used as potential agents that specifically target m6A-related biological functions and pathways for subsequent research.
METTL3, served as a methyltransferase, has been reported to be essential for glioma stem-like cell maintenance and radioresistance (45). Our findings further confirm that METTL3 was a potential therapeutic target, and future research is expected to focus on studies that specifically target METTL3. Since the smallmolecule inhibitors of METTL3 have not yet been invented, future research should focus on this area (72).
This study identified and validated that 64 genes were associated with poor outcomes in GBM patients. Moreover, we were able to construct four PPI modules, all of which were related to critical GBM biological processes. Highly relevant nodes in the modules, including STAT3, SLC11A1, and ITGAM, have been reported to promote tumor proliferation, angiogenesis, migration, and invasiveness (73)(74)(75)(76)(77). Among the 64 genes validated, 27 (such as ALOX5, CAST, HS6ST1, ITGAM, PTPN6, SLC11A1, and SLC12A7) have been reported to be involved in the pathogenesis of GBM or critical in predicting OS. This suggests that our big data-based analyses using TCGA and CGGA cohorts harbor predictive value. Although the remaining 37 genes have not been previously reported to be associated with GBM prognosis, they can be used as a potential clinical prognostic indicator for GBM patients, which can facilitate clinicians to make more accurate diagnosis easily.
In this study, we attempted to introduce some concepts associated with the theory of the prognosis value of m6A RNA methylation regulators based on uncovered sets. METTL3 has been reported as a potential biomarker panel for prognostic prediction in colorectal carcinoma (78). The prognostic model of multiple m6A RNA methylation regulators for patients with GBM has not been developed. The GBM prognostic gene signature based on three selected m6A RNA methylation regulators was designed for the first time. As we observed, the risk score calculated by the correlation coefficient conferred the ability in prognosis and clinicopathological parameter prediction. Excitingly, Cox analysis results further confirmed the independent prognostic value of the risk score. Meanwhile, GBM patients with a high-risk score showed more sensitivity to temozolomide chemotherapy, radiation therapy, and chemoradiation than low-risk-score patients. These findings may deepen our understanding of m6A methylation regulators in prognosis level and tolerance to chemoradiotherapy.
To sum up, we attempted to identify the expressions, potential functions, and prognostic values of m6A RNA methylation regulators in GBM. Our study provides strategies for comprehensive analysis of cancer genomics based on consensus clustering analysis for systematic identification of specific m6Arelated targets and specific targeted drugs based on m6A RNA methylation regulators. The prognostic gene signature and genes with poor prognosis might contribute to the personalized prediction of GBM prognosis and serve as a potential biomarker reflecting GBM patients' response to therapies that specifically target m6A. Finally, further investigation of these genes could lead to novel insights into the potential association of m6A methylation regulators with GBM prognosis in a comprehensive manner.
AUTHOR CONTRIBUTIONS
JD, RX, LC, and SH conceived and designed the study and drafted the manuscript. JD and KH collected, analyzed, and interpreted the data. HJ, SMi, YB, and SMa participated in revising the manuscript. All authors have read and approved the final manuscript. | 7,334.6 | 2020-03-09T00:00:00.000 | [
"Medicine",
"Biology"
] |
Evaluation of the ex vivo liver viability using a nuclear magnetic resonance relaxation time-based assay in a porcine machine perfusion model
There is a dearth of effective parameters for selecting potentially transplantable liver grafts from expanded-criteria donors. In this study, we used a nuclear magnetic resonance (NMR) relaxation analyzer-based assay to assess the viability of ex vivo livers obtained via porcine donation after circulatory death (DCD). Ex situ normothermic machine perfusion (NMP) was utilized as a platform for viability test of porcine DCD donor livers. A liver-targeted contrast agent, gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid (Gd-EOB-DTPA), was injected into the perfusate during NMP, and the dynamic biliary excretion of the Gd-EOB-DTPA was monitored by measuring the longitudinal relaxation time (T1). The longitudinal relaxation rate (R1) of the bile was served as a parameter. The delay of increase in biliary R1 during early stage of NMP indicated the impaired function of liver grafts in both warm and cold ischemia injury, which was correlated with the change of alanine aminotransferase. The preservative superiority in cold ischemia of dual hypothermic oxygenated machine perfusion could also be verified by assessing biliary R1 and other biochemical parameters. This study allows for the dynamic assessment of the viability of porcine DCD donor livers by combined usage of ex situ NMP and NMR relaxation time based assay, which lays a foundation for further clinical application.
www.nature.com/scientificreports/ approximately 50% of the Gd-EOB-DTPA was incorporated by hepatocytes and finally excreted into the bile 20 min after the administration of one bolus [12][13][14] . In this way, quantitative comparison of the signal intensity measured by T1 relaxation time (T1) before and after contrast enhancement within a certain time could indicate the functional reserve. Decreased liver function may delay the development of bile duct images using Gd-EOB-DTPA. Previous studies have shown the feasibility of using Gd-EOB-DTPA-enhanced MRI for predicting liver function by measuring liver parenchymal or biliary tract enhancement on hepatobiliary-phase (HBP) MRI for patients 13,15 . However, these methods are now used in liver imaging in situ, but not ex situ. Furthermore, the device for NMP cannot be placed in the MRI apparatus for imaging because it contains iron-based material.
The main objective of this study was to assess the ex vivo liver viability by using a nuclear magnetic resonance (NMR) relaxation time based assay in a porcine machine perfusion model.
Material and methods
Study design and animal model. Bama miniature pigs weighed 45.2-53.7 kg (Liver weight = 1.184 ± 0.243 kg) were used for this study. The study was approved by the Clinic's Institutional Animal Care and Use Committee in General hospital of south theatre command in China (authorized number: 2019022501). All procedures performed in accordance with its guidelines.
Donor procedure. Zoletil 50 (2-3.5 mg/kg) was injected intramuscularly to induce anesthesia. Endotracheal intubation was provided after intramuscular injection of atropine (0.02 mg/kg). Tramadol Hydrochloride (2 mg/kg, i.v.) was administered for analgesia. General anesthesia was then induced with propofol (mg/kg*h, i.v.) by means of a 21-gauge butterfly cannula inserted into an external marginal ear vein. Maintenance of anesthesia was induced with propofol (4-6 mg/kg), sevoflurane 2-3% and cis-atracurium (0.05 mg/kg). Ventilation mode was set as follows: VT: 8 ml/kg; RR: 16/min. Donor animals underwent a midline laparotomy. The cystic duct was ligated. The bile duct was dissected and the bile was allowed to drain freely. The major vessels in hepatic hilus were dissected. The aorta and inferior vena cava were cannulated with a large cannula. Pigs received heparin at 500 IU/kg of body weight 5mins prior to cross-clamping, then blood was quickly drawn from the aorta and inferior vena cava. The blood was collected in acid citrate dextrose bags for later NMP (approximate 1200-1500 ml). After blood collection, cardiac arrest by intra-cardiac infusion of potassium chloride (20 mEq) was provided. The donor would remain untouched for a period to simulate the procedure of obtaining DCD donor. 0, 30 or 60 min after the cardiac arrest were recorded as the warm ischemia time (WIT). In order to study the effect of different degrees of warm ischemia on liver function, the grafts were divided into three groups according to the warm ischemia time (WIT: 0′, n = 5; 30′, n = 6; 60′, n = 6). The donor liver was perfused with 2 L of University of Wisconsin cold storage solution (UW-CS) both portally and arterially. Liver grafts were quickly removed with standard technique and preserved in UW-CS with ices. The grafts were prepared for cannulating. Then the donor hepatic artery, portal vein, and bile duct were cannulated. The liver was connected to a prototype of ex situ normothermic machine perfusion device.
NMP.
In brief, the device consisted of a container for the liver, two centrifugal pump revolutions (Sorin centrifugantion pump revolution, Germany) delivering continuous flow to the portal vein and pulsatile flow to the hepatic artery; Two membrane oxygenator (WEGO, China) and a measurement and control device connected to an interface. The catheter is made by medical PVC and silica gel and could detect the blood pressure and blood flow by sensors. The pulsatile pump delivered perfusate from the container, through the oxygenator, and into the hepatic artery; the continuous pump perfused the portal vein also with oxygenation ( Supplementary Fig. 1A,B). Oxygen flow was constantly to both portal vein and hepatic artery and the FiO 2 was 60%. Before placing the liver, the machine was primed with 2 L of whole blood mixed with a machine perfusion solution. The composition of machine perfusate solution was list in Supplementary Table 1. Arterial perfusion pressures were maintained at 80/60 mmHg (systolic pressure/diastolic pressure) (Supplementary Fig. 1D-E) and the portal vein was perfused with a constant flow with 0.5 ml/min/g (liver weight) at the first hour, and elevate to 0.75 ml/min/g (liver weight). The portal vein related hemodynamic data were shown in supplementary Fig. 1A-C.
SCS and DHOPE.
Donor grafts were preserved at 4 °C with UW-CS. Cold ischemia time was recorded from grafts preparing period to SCS period. Then to study the advantage of DHOPE in preservation at 4 °C, the liver grafts were connected to the machine with continuous perfusion at 4 °C. Oxygen flow was provided 1000 mL/ min constantly. Arterial perfusion pressure was maintained at 25 mmHg and the portal vein was perfused with a constant flow with 200 ml/min. Before placing the liver, the machine was primed with 2 L of UW-MP. The temperature in the perfusate was detected and controlled at 4 °C by a semiconductor system. Flows and pressure in artery and portal vein were recorded.
Dynamic test for biliary longitudinal relaxation time. The commercial name of the contrast agent of Gd-EOB-DTPA we used in the experiment was Primovist. And the dosage of the agent was 3 ml/kg (liver weight). After 2 h of NMP, Gd-EOB-DTPA was injected into the perfusate. Firstly, bile samples were collected at 30 min intervals after the injection, and the longitudinal relaxation time was detected by a 1.0 T MRI. 2 mL of the secreted bile was collected at each time point and centrifuged at 2000 rpm for 2 min. The supernatant was placed in a 0.2 mL centrifuge tube for T1 relaxation time measurement. A 0.5 T NMR analyzer (MINIPQ001, Shanghai Niumag Co., China) was used for the measurement. T1 was acquired via an inversion-recovery (IR) pulse sequence with a TR of 9000 ms, TE of 14 ms, and 20 inversion recovery points ranging between 10 and 5000 ms. R1 (R1 = 1/T1) was used as the parameter for the viability test. All measurements were carried out in triplicate and data were expressed as mean ± SD. www.nature.com/scientificreports/ Following relaxation time measurement, T1-weighted MR images of the samples were acquired on a 1.0 T MRI (NM-G1, Shanghai Niumag Co., China) using a routine spin echo (SE) sequence. The parameters were as follow: 32 °C, TR/TE = 40 ms/16.5 ms, NS = 4, field of view (FOV) = 40 × 40 mm 2 , slice thickness = 1.5 mm, matrix = 192 × 256.
Blood and biliary viability test.
At the beginning of laparotomy, we isolated the bile duct and connect with a catheter. Blood and bile were sampled in the donor pigs as a baseline and during NMP hourly. Serum levels of transaminase, lactate, blood glucose and biliary pH were measured.
Histological analysis. Biopsies of the liver were collected at 6 h after NMP. All biopsies were fixed in formalin and paraffin-embedded. Slides were stained with hematoxylin and eosin (H&E) and assessed by light microscopy. Common bile duct was collected at 2 h after NMP. Evaluation of BDI was assessed by using an established, clinically relevant, histological BDI grading system 17 as described by Supplementary Table 2. The presence of necrosis in bile duct epithelium, the peribiliary glands loss, blood congestion and infiltration of inflammatory cells in bile duct biopsies have been observed and recorded.
Transmission electron microscopy (TEM). The fixed procedure was as described 16 . Samples were immersed with 2% paraformaldehyde and was further fixed with sequential incubation with 1 and 2% glutaraldehyde in PBS at 4 °C for 24 h. Post-fixed procedure was provided with 1% osmium tetroxide, electron-stained with 3% uranyl acetate, and embedded in an Epon-Araldite mixture. Then ultrathin sections were cut with approximately 60 nm by a ultrathin microtome (Leica EM UC7) and detected by a Hitachi H-7700 electron microscope (Hitachi, Tokyo, Japan).
Data and statistical analysis. Data was analyzed with the SPSS 22 statistical package (IBM, Chicago, IL, USA). Two-way ANOVA was used for the analysis of differences among three groups (WIT: 0′, 30′, 60′). Bonferroni post-test was used for analysis the differences between two groups at every time-points during NMP. Twoway ANOVA plus Bonferroni post-test was also used for analyzing the DHOPE group and SCS group at every time-point during NMP. Correlations were calculated using the Spearman correlation test. Data was presented as mean ± SD (standard deviation) and was considered significant at the level of p < 0.05.
Detection of biliary longitudinal relaxation time after Gd-EOB-DTPA injection during NMP.
The NMP device was utilized for the maintaining of liver grafts at 37℃ (Fig. 1A). After 2 h of NMP, Gd-EOB-DTPA was injected into the perfusate. Bile samples were collected at 30 min intervals after the injection, and the longitudinal relaxation time was detected by a 1.0 T MRI. longitudinal relaxation rate R1 = 1/T1 [s −1 ] of the samples was calculated. Before the Gd-EOB-DTPA injection, the bile showed a small background T1. Once Gd-EOB-DTPA was injected into the perfusate, the change in biliary T1 appeared 30 min after Gd-EOB-DTPA injection (Fig. 1C). To visually depict the change in T1 of the samples, T1-weighted MR images of the crosssection of six samples were acquired following the measurement of T1 (Fig. 1C). Next, to verify the relationship between T1 and Gd excretion in bile, six bile samples from different animals were collected and the contained Gd concentration was analyzed using inductively coupled plasma-atomic emission spectrometry (ICP-AES). An excellent linear correlation between Gd concentration and biliary 1/T1 (R1) was observed (Fig. 1B). Therefore, we used biliary R1 as a new indicator of viability, as its change directly reflected the excretion of Gd-EOB-DTPA.
Change of biliary R1 reflects the injury of hepatocyte caused by warm ischemia. In order to study the effect of warm ischemia on liver function, the grafts were divided into three groups according to the warm ischemia time (WIT: 0′, n = 5; 30′, n = 6; 60′, n = 6). (Fig. 1D). Liver biopsy was provided after 6 h of NMP. The H&E staining of liver grafts that endured 0′, 30′ and 60′ of warm ischemia revealed that little difference in hepatocyte was found among three groups ( Fig. 2A). However, transmission electron microscopy (TEM) images of these samples indicated that mitochondrial swelling and disappearance of cristae in hepatocytes were observed in grafts endured 60′ of warm ischemia, but were not observed in hepatocytes that endured 0′ and 30′ of warm ischemia ( Fig. 2A).
Next, biliary R1 after injection of Gd-EOB-DTPA was detected at 30 min of intervals during NMP. Bile was collected before and after Gd-EOB-DTPA injection at indicated time points for measurement of R1. A rise in R1 means that a strong contrast signal could be detected in the bile. Then, the biliary R1 increased in the groups with WIT for 0′ and 30′ after 1 h after injection of Gd-EOB-DTPA. However, the biliary R1 was significantly lower in the group of WIT for 60′ than that in the other two groups at 1hour (60′) post-injection (biliary R1 at 60′ postinjection: 0′ group vs. 30′ group vs. 60′ group: 22.4 ± 8.06 s −1 vs.12.9 ± 4.96 s −1 vs.1.3 ± 0.37 s −1 , P < 0.01) (Fig. 2B).
Other conventional parameters for viability test, such as alanine aminotransferase (ALT), perfusate lactate, and perfusate glucose levels were also assessed hourly during NMP (Fig. 2C-E). ALT level could vary significantly among the three groups after reperfusion (p < 0.01). ALT level was significantly higher in the group of WIT for 60′ after NMP (Fig. 2C). Perfusate lactate level were decreased in all three groups. Perfusate lactate level started reducing in group of WIT for 60′ at 4 h after NMP , but still higher than groups of WIT for 0′ and 30′ group (perfusate lactate at 4 h after NMP: WIT 0′ group vs. 30′ group vs. 60′ group: 2.12 ± 1.04 mmol/L vs. 2.31 ± 0.56 vs. 3.63 ± 0.50 mmol/L, P < 0.01) (Fig. 2D). There was no significant difference in perfusate glucose level among three groups (Fig. 2E). Correlation analysis was conducted between biliary R1 (60′) and conventional parameters at 4 h after NMP. As expected, biliary R1 (60′) and ALT level at 4 h after NMP were strongly correlated www.nature.com/scientificreports/ www.nature.com/scientificreports/ (Fig. 4A). But significant correlation was not found in biliary R1 (60′) and perfusate lactate at 4 h after NMP (Fig. 4B). These results suggested that the delay of increasement of biliary R1 at early stage of NMP may reflect the impaired function of liver grafts.
Detection of biliary duct injury (BDI) in warm ischemia injury. Biliary duct injury (BDI) is also an
important factor of functional evaluation for donor livers. Recently, Porte reported that the degree of BDI can be assessed via histological scoring 17 . Herein, we employed these histological items to evaluate BDI from the histological samples of the bile duct 2 h after initiation of NMP. The representative H & E staining of bile ducts in three groups were shown in Fig. 3A. The mean BDI score of three groups were shown in Fig. 3B. Severe change after ischemia and reperfusion injury were observed in liver grafts with WIT for 60′. As expected, the BDI score in group with WIT for 60′ was significantly higher than the other two groups. (BDI score: WIT 0′ group vs. 30′ group vs. 60′ group: 2.8 ± 0.84 vs. 3.50 ± 0.55 vs. 5.67 ± 0.82, P < 0.01).
Parameters assessed from bile could also reflect the cholangiocyte function, such as biliary pH and bicarbonate. Here, biliary pH and bicarbonate were assessed at different time points for livers with different WIT. The results showed that biliary pH increased faster in livers with WIT for 0′. Biliary pH was significantly lower since 4 h after NMP in livers with WIT for 60′ (Biliary pH at 4 h of NMP: WIT 0′ group vs. 30′ group vs. 60′ group: 7.58 ± 0.11 vs. 7.50 ± 0.05 vs. 7.36 ± 0.07, P < 0.01) (Fig. 3C).The fast release of bicarbonate in bile was observed in livers with WIT for 0′. Biliary bicarbonate was significantly lower in livers with WIT for 60′ even after 6 h of NMP (Fig. 3D). (Biliary bicarbonate at 6 h of NMP: WIT 0′ group vs. 30′ group vs. 60′ group: 28.42 ± 6.51 vs. 16.52 ± 1.90 vs. 10.88 ± 2.63, P < 0.01) Correlation analysis was also conducted between biliary R1 at 1 h after Gd-EOB-DTPA injection (60′) and biliary pH level at 4 h after NMP. However, there was a moderate correlation between values of R1 (60′) and Biliary pH at 4 h after NMP (Fig. 4C).
Detection of biliary R1 in liver grafts treated with DHOPE after circulatory death donation.
In order to study the effect of combined ischemia injury (warm ischemia and cold ischemia) on grafts quality, long term of static cold storage (SCS) was added in our model. As shown in Fig. 5A, after 30 min of warm ischemia, liver graft was preserved in SCS for 6 h before cannulating to the NMP device. The biliary R1 was tested at indicated time points. As expected, the level of biliary R1 was reduced in the SCS group during NMP and failed to increase even after 4 h of NMP (Fig. 6A).
DHOPE has been reported to be beneficial for donor preservation under cold ischemia condition. To confirm that, we compared the preservative effect of DHOPE (n = 6) with SCS (n = 6) by detecting parameters, including lactate, ALT, biliary pH, and biliary R1 in following NMP. The procedure was shown in Fig. 5A. Representative images of the liver treated with warm ischemia in situ (a), liver graft connected to the ex situ DHOPE in 4 °C (b), and liver graft connected to the ex situ NMP in 37 °C were showed in Fig. 5B.
A significant difference in dynamic biliary R1 was observed between the DHOPE and SCS groups (Fig. 6A). As shown above, a delayed of R1 increase was observed in the SCS group. In contrast, biliary R1 increased obviously 1 h after the injection of Gd-EOB-DTPA and peaked at 2 h in the DHOPE group, while it remained below 1 s −1 within 2 h after the injection of Gd-EOB-DTPA in the SCS group (R1 at 2 h post-injection: DHOPE vs. SCS: 32.89 ± 19.88 s −1 vs. 2.36 ± 0.67 s −1 , P < 0.01). (Fig. 6A).
Dynamic changes in other parameters were also studied. The pattern of ALT alteration varied in the two groups. In the SCS group, ALT level increased obviously within the 3 h after reperfusion and still increased with time during NMP (ALT: 3 h after NMP: DHOPE vs. SCS:88.00 ± 44.85 U/L vs. 187.3 ± 90.52 U/L, P < 0.05), while in the DHOPE group, ALT increased moderately and remained below the average ALT value in the SCS group at the same time point during NMP (Fig. 6B). The perfusate lactate levels decreased in both groups 2 h after NMP. However, a significantly lower lactate level was observed in the DHOPE group during the 3 h in NMP (lactate value at 3 h: DHOPE vs. SCS: 1.06 ± 0.71 mmol/L vs. 3.31 ± 1.91 mmol/L, P < 0.05) (Fig. 6C). Biliary pH fluctuated in the two groups in the first two hours of NMP. In the DHOPE group, biliary pH increased faster and peaked at 4 h, while in the SCS group, biliary pH remained low during NMP (biliary pH at 4 h after NMP: DHOPE vs. SCS: 7.59 ± 0.25 vs. 7.26 ± 0.24, P < 0.05) (Fig. 6D).
Discussion
Machine perfusion was developed to improve the quality of preservation for donor grafts 8,18-21 . Two major perfusion strategies, hypothermic and normothermic techniques, are currently used in clinical settings 1,6,[22][23][24] . Normothermic perfusion allows the liver grafts to remain alive with normal metabolism, which provides a platform to determine the viability prior to transplantation 10,25 . Currently, detection of multiple biochemical parameters during NMP to predict different modes of potential graft failure were considered as the best means of assessing the viability of liver grafts 8,9 . However, conventional biochemical parameters as bilirubin, albumin, transaminase, perfusate lactate, perfusate glucose, and biliary pH could not directly or dynamically indicate the real-time metabolic state of the liver graft 15,26 . The test for functional reserve of liver grafts including uptake, metabolism, and excretion, should be considered in the viability test for ECD livers.
The indocyanine green (ICG) clearance test has been widely used as a quantitative indicator for evaluation of the liver functional reserve in vivo before hepatic surgery 27 . A similar mechanism (i.e., the organic anion transporter) is considered responsible for uptake of gadoxetate disodium and ICG in the liver. Thus, the excretion speed of Gd-EOB-DTPA could indicate the functional reserve in liver grafts with similar degrees of injury like ICG clearance 14,28 . However, conducting the ICG clearance test during NMP is complex and unstable owing to the method of assessing ICG fluorescence during ex situ NMP 29 . In contrast, the NMR-based assay reported in this study is suitable for complex samples such as bile because it is based on the magnetic properties www.nature.com/scientificreports/ of Gd-EOB-DTPA. The signal reflected by longitudinal relaxation time is closely associated with the dynamic excretion of the contrast agent. In this way, the entire test process for biliary R1 using NMR based assay is stable and fast. www.nature.com/scientificreports/ The speed of Gd-EOB-DTPA excretion in bile, which is closely associated with the biliary R1 could reflect the real-time function of liver grafts. For porcine liver grafts with warm ischemia time of 0′ and 30′, a strong magnetic signal of Gd-EOB-DTPA could be detected in the bile sample at early stage after injection. Thus the presence of signal of contrast agent in bile after 30 min of injection of Gd-EOB-DTPA could be considered as an indicator of a good working state in the liver graft. Then, long periods of warm ischemia (60′) caused a visible delay in excretion of Gd-EOB-DTPA into the bile which was reflected by lower biliary R1. Besides, the correlation between ALT value at 4 h after NMP and Biliary R1 at 1 h after Gd-EOB-DTPA injection was observed (Fig. 4A). In the morphological study, swollen mitochondria without cristae were observed in the hepatocytes with WIT of 60′ in transmission electron microscopy (TEM). These results may indicate that the delay in Gd-EOB-DTPA excretion could predict impaired liver function in DCD liver grafts.
Many experts have confirmed that long term of cold ischemia could aggravate warm ischemia injury in DCD donor 30 . Biliary epithelial cells in the liver are sensitive to long term cold ischemia, which leads to dysfunctional regulation of ions and irreversible disruption of the plasma membrane 1,31 . So in this study, we hope to simulate some clinic scenarios through a combined ischemic injury model (30 min of warm ischemia plus 6 h of cold www.nature.com/scientificreports/ ischemia) in order to observe changes of biliary R1 and other traditional biochemical indicators. The preservation advantage of hypothermic oxygenated machine perfusion has already been confirmed 1,31,32 . Fondevila et al. showed that hypothermic oxygenated machine perfusion was more advantageous to preservation than cold storage for warm ischemia damaged livers 30 . Recently, a clinical study showed that the 5-year outcomes of DHOPE-treated DCD liver transplants were similar to those of DBD primary transplants and superior to those of untreated DCD liver transplants 33 . In our study, the preservative advantage of HOPE in 4 °C could also be reflected by the changes of biliary R1 as well as other conventional parameters. The increase in biliary R1 in the DHOPE group at early stage may indicate that the impaired liver function had been rescued by oxygenated perfusion. Application of biliary R1 as an indicator for donor selection still needs further experimental verification. In a porcine model of DCD, grafts with 70 min of warm ischemia were successfully transplanted and had 100% survived 34 . This may indicate that the ECD livers with long term of ischemia time still have the potential for usage. Then, any single biochemical parameter could not generally reflect the viability of these ECD livers. Therefore, it is currently considered that the evaluation of viability of liver grafts could only be achieved by combining multiple index. Change of biliary R1 could reflect the ability of uptake, metabolism and excretion in liver graft, which could be a reliable complement to the proposed viability criteria. Transplant experiments to verify the correlation between biliary R1 and graft survival is indispensable, which will be carried out in future experiments. As Gd-EOB-DTPA and the NMP are currently utilized as clinical approaches, the clinic verification tests and the criteria set will be performed soon.
In conclusion, we demonstrated the feasibility of detecting ex vivo liver viability with combination of gadolinium-enhanced MRI and NMP device, as it is a fast technique based on already existing devices developed for the Figure 6. Evaluation of biliary R1, ALT and lactate levels, and biliary pH during NMP after 6 h of DHOPE or SCS treatments. (A-E) Analysis of biliary R1, ALT, perfusate lactate, and biliary pH during NMP following 6 h of DHOPE or SCS treatment at the indicated time points. Data represent mean ± SD. Two-way ANOVA was used for analysis the differences between two groups. DHOPE group vs. SCS group: *P < 0.05, **P < 0.01). www.nature.com/scientificreports/ | 5,828.2 | 2021-02-18T00:00:00.000 | [
"Medicine",
"Biology"
] |
Evolution of intermediate latency strategies in seasonal parasites
Abstract Traditional mechanistic trade-offs between transmission and parasite latency period length are foundational for nearly all theories on the evolution of parasite life-history strategies. Prior theoretical studies demonstrate that seasonal host activity can generate a trade-off for obligate-host killer parasites that selects for intermediate latency periods in the absence of a mechanistic trade-off between transmission and latency period lengths. Extensions of these studies predict that host seasonal patterns can lead to evolutionary bistability for obligate-host killer parasites in which two evolutionarily stable strategies, a shorter and longer latency period, are possible. Here we demonstrate that these conclusions from previously published studies hold for non-obligate host killer parasites. That is, seasonal host activity can select for intermediate parasite latency periods for non-obligate killer parasites in the absence of a trade-off between transmission and latency period length and can maintain multiple evolutionarily stable parasite life-history strategies. These results reinforce the hypothesis that host seasonal activity can act as a major selective force on parasite life-history evolution by extending the narrower prior theory to encompass a greater range of disease systems.
Introduction
The timing of seasonal activity, or phenology, is an environmental condition affecting all aspects of life cycles including reproduction, migration, and diapause (Elzinga et al., 2007;Forrest & Miller-Rushing, 2010;Park, 2019;Pau et al., 2011;Lustenhouwer et al., 2018;Novy et al., 2013).The timing and prevalence of transmission opportunities for parasites, which could alter parasite life-history strategies, are also impacted by the phenology of host species (Altizer et al., 2006;Biere & Honders, 1996;Gethings et al., 2015;Hamer et al., 2012;MacDonald et al., 2020;Martinez, 2018;McDevitt-Galles et al., 2020;Ogden et al., 2018).For example, phenological patterns that extend the time period between when hosts are infected and when transmission occurs are expected to select for longer parasite latency periods (time between infection and new parasite release), as observed in some malaria species (Plasmodium vivax).In these systems, shorter latency period strains persist in regions where mosquitoes are present yearround, while longer latency period strains are more common in regions where mosquitoes are nearly absent during the dry season (White, 2011).
Recent theoretical work predicts that seasonal host activity can select for intermediate latency periods in monocyclic (one infectious cycle per season), obligate-killer parasites even when traditional mechanistic trade-offs between transmission and latency are omitted (MacDonald et al., 2022).In these systems, the optimal latency strategy is determined by host phenological patterns: Longer seasons select for longer periods between infection and the release of new parasites.While these results suggest that seasonal host activity patterns can serve as a selective driver of intermediate latency periods, they were only investigated in monocyclic, obligate-killer parasites.An extension of this work demonstrated that seasonal host activity can select for both a monocyclic parasite strategy (one round of infection per season, thus the same optimum from MacDonald et al. 2022) and a polycyclic parasite strategy (multiple rounds of infection per season) (MacDonald & Brisson, 2023) where host phenology dictates the optimal strategy.The theory developed thus far for the impact of host phenology on parasite evolution applies only to obligatekiller parasites, which, while numerous, only represent a small proportion of the vast diversity of parasite strategies in nature.
There is reason to expect that some of the main conclusions from prior studies on the impact of host phenology on parasite latency period evolution will apply to parasites that are not obligate killers.For example, all parasites must complete a latency period between infection and the release of parasite progeny, regardless of whether progeny release requires host death.Thus, selection on latency periods may operate similarly for all parasites in seasonal environments as releasing parasite progeny too early or too late is maladaptive if it mistimes interactions with seasonally available hosts.These studies suggest that host phenology could create important selective pressures affecting parasite latency period evolution in many seasonal disease systems.
Here we investigate the impact of seasonal host activity on latency period evolution for parasites not constrained to the obligate-killer lifestyle.We examine how two components of host phenology, the timing and duration of host emergence, impact parasite latency period evolution in non-obligate killer parasites.We demonstrate that the conclusions from previously published theory investigating the impact of host seasonal patterns on obligate-killer parasite evolution hold for non-obligate killer parasites.That is, seasonal host activity can select for both an intermediate latency period in the absence of a mechanistic trade-off between transmission and virulence and generate evolutionary bistability.These results demonstrate that host seasonal activity could serve as a major driver of parasite evolution in a wide range of parasite species.
Materials and Methods
We modify a published model that studies how host phenology impacts the evolution of the time between infection and host death in an obligate-killer parasite (MacDonald et al., 2022) to study how host phenology impacts the evolution of non-obligate killer parasite latency periods (time between infection and the beginning of new parasite release, τ).The main modification between the previous model and this study is that the parasite does not kill its host to release progeny.Instead, hosts experience infection-induced virulence either as a reduction in fecundity or an increased mortality rate following infection.A second model in the present article also relaxes the assumption that new parasite release is synchronous after a set latency period (τ).Instead, infected hosts move to the infectious class (i) where they release new parasite progeny at a constant rate until they recover.
The models describe the transmission dynamics of a free-living parasite that infects a seasonally available host (Figure 1).The exact disease system is left general, so it can be adapted to any system.Hosts, s, have non-overlapping generations and are alive for one season.The susceptible host cohort, ŝ(n), enters the system at the beginning of the season.ŝ(n) is a function of the number of uninfected hosts at t = T in season n-1.The parasite, v, must infect and release new infectious progeny before the end of the season to leave progeny in the environment to infect the next season's host cohort.In the first model, parasites are semelparous; thus, infected hosts release all new parasite progeny synchronously.Parasite release occurs after a set latency period (τ) after which infected hosts move to the recovered class (r).In the second model, parasites are iteroparous; thus, new parasite progeny transmission is distributed over time.In this case, infected hosts move to the infectious class (i) after a set latency period at which point they release new parasite progeny at a constant rate until they recover to r.The number of rounds of infection the parasite completes within a season depends on τ.If there is a long period between infection and progeny release, the parasite completes one round of infection per season and is therefore monocyclic.If there is a short period between infection and progeny release, the parasite can complete multiple rounds of infection per season and is therefore polycyclic.
The duration of each season extends from t = 0 to t = T. Time units are not specified in order to maintain the generality of the model across disease systems.It is expected that the relevant time unit will be in months for many disease systems, corresponding to spring and summer (Baltensweiler et al., 1977;Danks, 2006;Donovan, 1991;Grant & Shepard, 1984;Takasuka & Tanaka, 2013) and weeks for other disease systems (Cummins et al., 2011;Dalen, 2013;Danks, 2006).The initial conditions at the beginning of the season are sn(0) = 0, vn(0) = v(n) = v n-1 (T) where v(n) is the size of the starting parasite population introduced at the beginning of season n determined by the number of parasite progeny remaining at the end of season (t = T) in season n -1.In the model with semelparous parasite release, the transmission dynamics in season n are given by the following system of delay differential equations (all parameters are described in Table 1): where μ s is the susceptible host death rate, μ r is the recovered host death rate, δ is the decay rate of parasites in the environment, β is the transmission rate and τ is the latency period.α is the total number of parasites released.In most cases, we assume α is a function of τ and the scaling parameter b, but also investigate the impact of a constant, trade-off free α.
When there is a trade-off between latency period (τ) and the number of parasite progeny released (α), we assume that the number of new progeny released increases as the latency period increases: α(τ) = b(τ + 0.5) 0.8 .Note that when there is no trade-off between α and τ, the parasite growth rate in the host is essentially the trait under selection.That is, α is constant regardless of τ; thus, the trait that is effectively evolving is the rate that new parasites are assembled in between infection and host death (e.g., longer τ corresponds to slow assembly of new parasites).
In the model with iteroparous new parasite release, the transmission dynamics in season n are given by the following system of delay differential equations (again all parameters are described in Table 1): where γ is the rate at which infected hosts recover from the infection.
The emergence phenology of hosts is captured by the function g(t, t l ), which is a probability density that describes the percapita host emergence rate through the timing and length of host emergence.We use a uniform distribution for simplicity, although other distributions are expected to have qualitatively similar results (MacDonald et al., 2020).Note that the uniform distribution used here translates to hosts emerging at a constant rate (equal to 1/t l ) t l denotes the length of the host emergence period and T denotes the season length.The season begins (t 0 = 0) with the emergence of the susceptible host cohort (ŝ(n)) over the duration of 0 ≤ t ≤ t l .v(T) parasites remaining in the system at the end of the season give rise to the next season's initial parasite population (v(n) = v(0)).Parasites only release progeny during the season (no further progeny release after t = T).Background mortality arises from predation or some other natural cause.We assume that infected hosts that die from background mortality do not release parasites because the parasites are either consumed or the latency period corresponds to the time necessary to develop viable progeny (Wang, 2006;White, 2011).
Between-season dynamics
To study the impact of the feedback between host demography and parasite fitness on parasite evolution, we let the size of the emerging host cohort be a function of the number of uninfected hosts remaining at the end of the prior season.For the semelparous model: For the iteroparous model: both of which correspond to Beverton-Holt growth, which is the discrete-time analogue of logistic growth in continuous time (Beverton & Holt, 1957).sn(T) is the density of susceptible hosts at t = T in season n, σ is host reproduction, ϕ is the reduction in fecundity experienced by hosts who are or have been infected, and ρ is the density-dependent parameter.We modelled host reproduction with negative density dependence as we assumed higher population density would reduce host fecundity due to, e.g., competition for resources.
We have shown previously that host carryover generates a feedback between parasite fitness and host demography that can select for quasiperiodic dynamics for some parameter ranges (MacDonald & Brisson, 2022).We explore the impact of parasite-induced increases in host mortality and decreases in host fecundity on the discrete-time dynamics in Appendix B.
Parasite evolution
We use evolutionary invasion analysis (Geritz et al., 1998;Metz et al., 1992) to study how parasite latency periods adapt.We first extend system (1) to follow the invasion dynamics of a rare mutant parasite (vn,m) in a population of resident parasites (vn,r) in season n when parasite progeny transmission is synchronous following a latency period (τ): We also extend system (2) to follow parasite mutant invasion dynamics when parasite progeny transmission is distributed over time following a latency period (τ): where r and m subscripts refer to the resident and invading mutant parasites, respectively, and their corresponding traits.
In previous work on similar models that only considered parasites that complete one round of infection per season (monocyclic parasites), we were able to derive an analytical expression for mutant invasion fitness (MacDonald et al., 2022;MacDonald & Brisson, 2022).We are unable to solve the current models with parasites that complete multiple rounds of infection per season (polycyclic parasites) analytically due to the nonlinear αsn(t)vn(t) terms and instead determine parasite evolutionary end points numerically.Thus, for both models, we estimate the invasion fitness of rare mutants numerically.As in previous analyses (MacDonald & Brisson, 2022, 2023;MacDonald et al., 2022), the invasion fitness of a rare mutant parasite depends on the density of vn,m produced by the end of the season (vn,m(T)) in the environment set by the resident parasite at equilibrium density v * .The mutant parasite invades in a given host phenological scenario if the density of vn,m produced by time T is greater than or equal to the initial vn,m(0) = 1 introduced at the start of the season (vn,m(T) ≥ 1).
The simulation analysis was done by first numerically simulating system (1) with a monomorphic parasite population with respect to the latency period (τ).A single mutant parasite is introduced at the beginning of the season after 100 seasons have passed.The mutant's latency period strategy is drawn from a normal distribution whose mean is the value of τ from the resident strain.System (2) is then numerically simulated with the resident and mutant parasite.New mutants arise randomly after 1,000 seasons have passed since the last mutant was introduced, at which point system (2) expands to follow the dynamics of the new parasite strain.This new mutant has a latency period strategy drawn from a normal distribution whose mean is the value of τ from whichever parasite strain has the highest density.System (2) continues to expand for each new mutant randomly introduced after at least 1,000 seasons have passed.Any parasite whose density falls below 1 is considered extinct and is eliminated.The latency period evolves as the population of parasites with the adaptive strategy eventually invades and rises in density.Note that our simulations deviate from the adaptive dynamics literature in that new mutants can be introduced before earlier mutants have replaced the previous resident.Previous studies have shown that this approach is well suited to predicting evolutionary outcomes (Kisdi, 1999;MacDonald & Brisson, 2022;White & Bowers, 2005;White et al., 2006).
Results
Intermediate times between infection and parasite progeny release are adaptive for both obligate killer (MacDonald et al., 2022) and non-lethal, semelparous parasites (Figure 2).Similar to results for obligate killer parasites (MacDonald & Brisson, 2023), seasonal host activity also generates two evolutionarily stable strategies (ESSs) for non-obligate killer parasites: A shorter latency period strategy that allows multiple parasite generations within one season (polycylic transmission) and a longer latency period strategy that results in a single parasite generation each season (monocyclic transmission) (Figure 2).Furthermore, the model predicts that a semelparous life-history strategy, where hosts synchronously release parasite progeny, is not required for these results to hold (Figures 3 and 4).That is, intermediate latency periods are adaptive without a mechanistic trade-off, and evolutionary bistability is generated regardless of whether infected hosts synchronously release parasite progeny (semelparous) or release progeny over a longer period of time (iteroparous).
Lethal vs. non-lethal parasites
Seasonal host activity generates both a shorter latency period ESS and a longer latency period ESS for semelparous parasites, regardless of whether they are obligate killers (MacDonald et al., 2022, Figure 2).Shorter and longer latency period ESSs are separated by an evolutionary repellor such that the two strategies cannot coexist in the same environment.The ESS that a semelparous parasite population evolves towards is determined by the latency period of the initial resident population.Host phenological patterns determine both ESSs: shorter activity periods and longer host emergence periods select for shorter latency times (Figure 5A and C), as seen previously for obligate killer parasites (MacDonald et al., ) shows the outcome of invasion by mutant parasite strains into resident parasite populations that have latency period trait τ.Mutants possess an adaptive latency period trait and invade (black regions) or possess a maladaptive latency period trait and go extinct (white regions).The PIP shows two evolutionarily stable strategies (ESS) at τ = 2.8 and τ = 1.31 that are attractive and uninvasible for the parameter values used here.An evolutionary repellor lies between the two ESS at τ = 1.9.(B) Top: Parasites with the shorter latency period phenotype (density shown by solid line, τ = 1.31) complete two generations of infections during the season for the parameter values shown here and is thus polycyclic.Bottom: Parasites with the longer latency period phenotype (solid line, τ = 2.8) release new parasites just prior to the end of the season and are thus monocyclic.Dashed line shows new host infections over time.T = 4, t l = 1, α(τ) = b(τ + 0.5) 0.8 .All other parameters are the same as in Table 1.
Figure 3. seasonality selects for parasite latency period bistability when the transmission of semelparous parasite progeny is (left) or is not (right) constrained by a trade-off between the number of progeny released and the length of the latency period.Left: α(τ) = b(τ + 0.5) 0.8 , right: α = 200, T = 4, t l = 1,.All other parameters are the same as in Table 1. 1.
2022
).The longer latency period ESS reaches an intermediate trait value in the absence of a mechanistic trade-off between transmission and the time between infection and new parasite release, analogous to previous results for obligate-killer parasites (Figure 2 in MacDonald et al. 2022).In the absence of a trade-off, however, the shorter latency period ESS always corresponds to the minimum possible latency period (Figures 3 and 4).
Semelparous vs. iteroparous for non-lethal parasites
Host phenology selects for qualitatively similar ESS virulence strategies for non-lethal semelparous and iteroparous parasites.That is, host phenology selects for both a shorter latency period ESS and a longer latency period ESS separated by an evolutionary repellor, regardless of whether parasites are semelparous or iteroparous (Figure 5).However, semelparous and iteroparous parasites have quantitatively different ESS latency periods.Semelparity selects for longer latency periods such that the release of all parasites occurs just before the end of the season (Figure 5A and C).Conversely, iteroparity selects for shorter latency periods in order to ensure infected hosts have time to release parasites before the end of the season (Figure 5B and D).Furthermore, parasites with the longer latency period ESS generally outcompete parasites with the shorter latency period ESS for semelparous parasites (Figure 5A and C), while this trend is reversed for iteroparous parasites (Figure 5B and D).The results of the iteroparous model more closely match the results of the semelparous model as the transmission rate and recovery rate increase.Figure 6 shows that the long latency period strategy dominates in the iteroparous model when the emergence period is short and the short latency period strategy dominates when the emergence period is long.These results are qualitatively similar to what is presented in Figure 5C for semelparous parasites.
The rate of parasite-induced host mortality has only a small impact on the optimal latency periods of semelparous and iteroparous parasites (Figure 7A and B).Shorter latency periods are adaptive for non-obligate killer parasites when parasite-induced host mortality is low; however, the impact varies depending on whether parasites are semelparous or iteroparous.Low parasite-induced host mortality rates select for slightly shorter latency periods for the shorter latency period ESS but have no impact on the longer latency period ESS for semelparous parasites (Figure 7A).Conversely, low infected host mortality rates select for slightly shorter latency periods for the monocyclic ESS but have no impact on the polycyclic ESS for iteroparous parasites (Figure 7B).
The impact of parasite infection on host fecundity has a small impact on optimal latency periods for semelparous and iteroparous parasites (Figure 7C and D).Shorter latency periods are adaptive for parasites that strongly decrease host fecundity, as opposed to killing the host; however, the effect is small.Decreased host fecundity decreases equilibrium host densities, which shifts the timing of infections later in the season because transmission is density dependent (Equations 1a, 2a, 3a, and 4a).Infections that occur later in the season select for shorter latency periods to ensure that parasite progeny are released before the end of the season.The impact of infectioninduced reductions in host fecundity has qualitatively different impacts on semelparous and iteroparous parasites.That is, the longer latency period ESS is the semelparous parasite global attractor for a greater range of infected host fecundities compared to iteroparous parasites.
Discussion
The assumption that parasites must be obligate-host killers is not necessary for seasonal host activity to select for intermediate latency periods in the absence of a trade-off between latency period length and number of parasite progeny (Figure 2).Furthermore, seasonal host activity can select for two evolutionary stable strategies when parasites are obligate-killers (MacDonald & Brisson, 2023) and are not obligate-killers: a shorter latency period ESS and a longer latency period ESS (Figure 5).Finally, seasonal host activity patterns select for intermediate latency periods in both Figure 5. Host seasonality selects for latency period bistability for parasites when the transmission of parasite progeny is semelparous (A and C) or iteroparous (B and D).Furthermore, semelparous parasite populations at the longer latency period ESS generally outcompete parasite populations at the shorter latency period ESS.However, this trend is reversed for iteroparous parasites.Black points indicate the evolutionary attractor that outcompetes the other ESS (i.e., global attractor); gray points indicate local attractors; hollow points indicate repellors.T = 4, t l = 1, α(τ) = b(τ + 0.5) 0.8 .All other parameters are the same as in Table 1.
Figure 6.The left plot shows that short latency period strategies dominate for iteroparous parasites when the host emergence period in short (t l = 0.5), while the right plot shows that long latency period strategies dominate when the host emergence period is long (t l = 2).These results demonstrate that the results of the iteroparous model approach those of the semelparous model as the transmission and recovery rates increase.β = 10 -5 .All other parameters are the same as in Table 1.
semelparous parasites that release all progeny simultaneously and iteroparous parasites that release progeny over a longer period of time.These results suggest that seasonal host activity can be an important driver of parasite life-history strategies in a wider range of parasites than previously recognized (Mac-Donald & Brisson, 2023;MacDonald et al., 2022).While the model would need to be altered to fit any specific system, this general model can serve as the foundation to study obligate-killer parasites (Baltensweiler et al., 1977;Donovan, 1991;Grant & Shepard, 1984;Takasuka & Tanaka, 2013), non-lethal parasites (Crowell, 1934;Gaulin et al., 2007;Li et al., 2007;Zhang & Fernando, 2017;Zehr, 1982), 1.
The results presented here are qualitatively similar to theory developed for latency period evolution of obligate-killer parasites in seasonal environments (MacDonald et al., 2022).Seasonal host activity sets up an alternative trade-off between releasing new parasites too early or too late regardless of whether the parasite must kill its host to release progeny.For both obligate-killer and non-obligate killer parasites, longer latency periods are maladaptive for parasites as they do not release progeny before the end of the season when all adult hosts die regardless of their infection status.Shorter latency periods are also maladaptive in both cases as progeny released early are more likely to die due to environmental exposure.Thus, the conflicting costs of not releasing progeny before the end of the season (which results in zero new progeny) and releasing progeny too early in the season (which results in many progeny dying from environmental exposure) selects for intermediate latency periods.Taken together, these results suggest that parasites need not kill their host for seasonality to make intermediate latency periods adaptive.
Parasite transmission strategies impact the evolution of latency period quantitatively, but not qualitatively.
Iteroparous parasites that release progeny over time have shorter optimal latency periods than semelparous parasites that release progeny all at once (Figure 5).Iteroparous parasites require shorter latency periods to increase the number of progeny released before the end of the season.Conversely, semelparous parasites require longer latency periods to decrease the number of progeny that decay in the environment before the end of the season.The longer latency period optimum tends to outcompete the shorter latency period optimum for semelparous parasites, while the converse is true for iteroparous parasites.The model thus predicts that semelparous parasites found in nature are likely to have longer latency periods, while iteroparous parasites are likely to have shorter latency periods.
Several features of the current model can be altered to investigate more complex impacts of host phenology on parasite latency period evolution.For example, the model presented here could be extended to study the impact of different host reproductive strategies on parasite latency period evolution by allowing hosts to reproduce more than once per season.Hosts that reproduce throughout the season would likely favour shorter latency period strategies that rely on hosts being available mid-season for later parasite generations (van den Berg et al., 2011).The model could also be used Semelparous parasites are more likely to drive host-parasite demographic cycling than iteroparous parasites.The top panel shows the semelparous parasite discrete-time dynamics, and the bottom panel shows iteroparous parasite discrete-time dynamics for seasons 400-500 (i.e., when the system is at its ecological attractor).The panels on the left demonstrate that high parasite-induced host mortality can drive cycling when parasites are semelparous, but not iteroparous.The panels on the right demonstrate that large parasite-induced decreases in host fecundity can drive cycling when parasites are semelparous, but not iteroparous.μ r = μ i = 5, ϕ = 0.1.All other parameters are the same as in Table 1.
to study the impact of different types of trade-offs between parasite latency period and other traits, such as the mortality rate of hosts that recovered from infection.However, increased host mortality following parasite infection is not predicted to strongly impact optimal latency periods in the current framework (Figure 7).
Host phenology impacts the timing and prevalence of transmission opportunities for parasites (Altizer et al., 2006;Biere & Honders, 1996;Gethings et al., 2015;Hamer et al., 2012;Ogden et al., 2018;Martinez, 2018;MacDonald et al., 2020;McDevitt-Galles et al., 2020), which selects parasite lifehistory strategies (Donnelly et al., 2013;Hamelin et al., 2011;King et al., 2009;MacDonald et al., 2022;MacDonald & Brisson, 2023;van den Berg et al., 2011).Past work has shown that host phenology can select for intermediate latency periods and select for multiple evolutionarily stable parasite strategies, but only in obligate-killer parasites (MacDonald et al., 2022;MacDonald & Brisson, 2023).The present study extends this area of research by predicting that host phenology can also select for intermediate latency periods and multiple evolutionarily stable strategies in non-lethal parasites.Thus, seasonal host patterns could act as a selective force in a wide range of disease systems given that non-lethal parasites are extremely common in nature.
Figure 1 .
Figure 1.Diagrammatic representation of the infectious cycle within each season.All parasites (v) emerge at the beginning of the season (t = 0), while all hosts (s) emerge at a constant rate between time t = 0 and t = t l .At time τ postinfection, parasite progeny (v) are released into the environment where they decay from exposure at rate δ.The top infection diagram shows the semelparous parasite model in which all parasite progeny released at time τ following infection at which point hosts recover to stage r.The bottom infection diagram shows the iteroparous parasite model in which parasites are released once infected hosts have entered the infectious stage (i) at time τ postinfection and continue to be transmitted until hosts recover to stage r at rate γ.If τ is less than half the season length, a second generation of infections can occur within the season.Parasite progeny that survive in the environment to the end of the season comprises the parasite population that emerge in the following season (v(T) = v(n + 1)).
Figure 2 .
Figure2.Seasonal host activity generates multiple parasite virulence attractors when parasite release is semelparous.(A) The pairwise invasibility plot (PIP) shows the outcome of invasion by mutant parasite strains into resident parasite populations that have latency period trait τ.Mutants possess an adaptive latency period trait and invade (black regions) or possess a maladaptive latency period trait and go extinct (white regions).The PIP shows two evolutionarily stable strategies (ESS) at τ = 2.8 and τ = 1.31 that are attractive and uninvasible for the parameter values used here.An evolutionary repellor lies between the two ESS at τ = 1.9.(B) Top: Parasites with the shorter latency period phenotype (density shown by solid line, τ = 1.31) complete two generations of infections during the season for the parameter values shown here and is thus polycyclic.Bottom: Parasites with the longer latency period phenotype (solid line, τ = 2.8) release new parasites just prior to the end of the season and are thus monocyclic.Dashed line shows new host infections over time.T = 4, t l = 1, α(τ) = b(τ + 0.5) 0.8 .All other parameters are the same as in Table1.
Figure 4 .
Figure 4. Host seasonality selects for parasite latency period bistability when the transmission of iteroparous parasite progeny is (left) or is not (right) conby a trade-off between the number of progeny released and the length of the latency period.Left: α(τ) = b(τ + 0.5) 0.8 , right: α = 400, T = 4, t l = 1.All other parameters are the same as in Table1.
Figure 7 .
Figure 7. Parasite-induced increases in host mortality (μ i ) or decreases in fecundity (ϕ) have minimal impact on parasite latency period evolution (τ).(A) Low infected host mortality rates select for slightly shorter latency periods for the shorter latency period ESS but have no impact on the longer latency period ESS for semelparous parasites.(B) Conversely, low infected host mortality rates select for slightly shorter latency periods for the longer τ ESS but have no impact on the shorter τ ESS for iteroparous parasites.(C) Low parasite-induced infected host fecundity selects for shorter semelparous latency periods for the shorter τ ESS but have minimal impact on the longer τ ESS.(D) In contrast, when iteroparous parasites minimally reduce infected host fecundity, slightly longer latency periods are adaptive for both the shorter τ ESS and the longer τ ESS.Black points indicate the global attractor; gray points indicate local attractors; hollow points indicate repellors.T = 4, t l = 1, α(τ) = b(τ + 0.5) 0.8 .All other parameters are the same as in Table1.
Figure 8 .
Figure8.Semelparous parasites are more likely to drive host-parasite demographic cycling than iteroparous parasites.The top panel shows the semelparous parasite discrete-time dynamics, and the bottom panel shows iteroparous parasite discrete-time dynamics for seasons 400-500 (i.e., when the system is at its ecological attractor).The panels on the left demonstrate that high parasite-induced host mortality can drive cycling when parasites are semelparous, but not iteroparous.The panels on the right demonstrate that large parasite-induced decreases in host fecundity can drive cycling when
Table 1 .
Model parameters and their respective values. | 7,191.8 | 2024-02-08T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Automated Segmentation of Infarct Lesions in T1-Weighted MRI Scans Using Variational Mode Decomposition and Deep Learning
Automated segmentation methods are critical for early detection, prompt actions, and immediate treatments in reducing disability and death risks of brain infarction. This paper aims to develop a fully automated method to segment the infarct lesions from T1-weighted brain scans. As a key novelty, the proposed method combines variational mode decomposition and deep learning-based segmentation to take advantages of both methods and provide better results. There are three main technical contributions in this paper. First, variational mode decomposition is applied as a pre-processing to discriminate the infarct lesions from unwanted non-infarct tissues. Second, overlapped patches strategy is proposed to reduce the workload of the deep-learning-based segmentation task. Finally, a three-dimensional U-Net model is developed to perform patch-wise segmentation of infarct lesions. A total of 239 brain scans from a public dataset is utilized to develop and evaluate the proposed method. Empirical results reveal that the proposed automated segmentation can provide promising performances with an average dice similarity coefficient (DSC) of 0.6684, intersection over union (IoU) of 0.5022, and average symmetric surface distance (ASSD) of 0.3932, respectively.
Introduction
Brain infarction, generally known as stroke, is a global health issue and public health priority. It is a significant cause of disability and the second leading cause of death worldwide [1]. Based on the up-to-date statistical data from the World Stroke Organization (WSO), over 13.7 million new cases and 5.5 million deaths of stroke are occurring annually [2]. Moreover, up to two-thirds of stroke survivors usually suffer residual disabilities and no longer participate in their daily activities [3]. Examples of disabilities may include transient or lasting paralysis on one or both sides of the body, difficulties in speaking or eating, and muscular coordination loss. Such devastating and life-altering results after brain infarction can seriously impact a critical economic and humanistic burden as well [1]. Approximately an annual $51.2 billion economic loss results from stroke-reducing approaches, for example, medical costs and costs for rehabilitation in poststroke patients such as physical functioning and caregiver involvement [4].
In medical terminology, "infarction" is also known as "necrosis." It is damage or death of tissues due to the failure of blood and oxygen supply to the affected area. Brain infarction or stroke is a type of infarction that mainly affects the brain. Specifically, it is a cerebrovascular disease resulting from the formation of necrotic or damaged tissues inside the brain. It commonly occurs when an artery in the brain gets blocked by clots (ischemic stroke) or bursts (hemorrhagic stroke) [5]. Fortunately, brain infarction is curable if it is
Related Works
In neuroimaging analysis, several studies for automated delineation of brain infarct lesions from MRI scans have emerged. Generally, previous studies that were conducted at the beginning of the last decade were based on the standard machine learning algorithms such as K Nearest Neighbors (KNN) [8], Naive Bayes (NB) [8,9], Support Vector Machine (SVM) [6,10,11], Random Forest (RF) [12][13][14], and so on. These conventional techniques were simple and easy to use; however, their major weakness is that the performance was strongly dependent on the quality of handcraft features. Extracting meaningful features from the images is crucial to make the machine learning models learnable and robust [15]. In practice, handling of such features is very time-consuming and challenging because machine learning engineers are not medical domain experts.
With the state-of-the-art advancements and substantial results of convolutional networks, there is no doubt that deep learning algorithms hit a milestone in medical image analysis. Unlike the standard machine learning techniques, deep learning-based methods are not necessary to extract handcrafted features. They can learn high-level features directly from the input images and can provide more reliable results. Due to these advantages, the deep learning-based methods became trendier in automated diagnosis of brain infarction. For example, the use of benchmark deep learning models such as AlexNet, VGG, Inception, ResNet, are found as transfer learning algorithms for infarct lesion detection and classification [16][17][18]. Although they are applicable for detection and classification tasks, they need to follow a general encoder-decoder architecture to perform a semantic segmentation. More specifically, such standard deep networks can be applied as encoders to extract the discriminative features from the inputs and to perform pixel-wise classification. However, due to the use of constitute convolutions in encoder networks, the resolution of the inputs becomes lower and cannot produce the segmentation results having the same dimension as the input. For this problem, a decoder network is necessary to upsample and enhance the resolution of the convoluted images. Based on a recent and intensive review of deep learning method for neuroimaging, it is found that fully connected network (FCN) and U-Net were widely used for semantic segmentation of infarct lesions. Both methods follow encoder-decoder structure, but FCN based semantic segmentations have only one upsampling layer in the decoder part and mainly use bilinear interpolation for upsampling. Unlike FCN, U-Net's architecture is designed using multiple upsampling layers along with skip-connections and concatenations. Moreover, it uses learnable weight filters instead of fixed interpolation for upsampling. This architecture makes U-Net more robust and provides better segmentation results compared to conventional FCN based segmentations. For these strengths, we choose U-Net over other deep learning models in this study.
Brain infarct lesions segmentation based on U-Net architecture was the most frequently used method in recent studies [19][20][21][22][23][24][25]. It is a baseline and famous state-of-the-art deep learning architecture in biomedical image segmentation. Depending upon the modification of the U-Net architecture, the names of the segmentation Nets were changed study by study. Among [19][20][21][22][23][24][25], Cross-Level fusion and Context Inference Network X-Net [21], (CLCI-Net) [22], Deep Residual Attention Convolutional Neural Network (DRANet) [23] used two-dimensional (2D) based U-Net architectures. They segmented the infarct lesions from input 2D slices of the MRI based on a single orientation. As a weakness, the performance of such 2D-based methods is limited because they cannot access the spatial information of the lesions from the other two planes. Moreover, those methods also required extensive postprocessing mechanisms to combine slice-by-slice predictions into final volumetric segmentation outputs.
Unlike the 2D-based U-Nets, multi-path 2.5D CNN [24] considered the volumetric information of the brain lesions by performing three different normalizations for each of the three axial, sagittal, and coronal planes. Nine different 2D paths resulted from the normalizations were then fed into the nine end-to-end U-Nets, and path-wise segmentations were performed. However, like the aforementioned 2D-based U-Nets, the 2.5D net also had to perform an extensive postprocessing task. It used 3D CNN to concatenate 2D lesion masks for postprocessing.
Apart from the previous 2D and 2.5D U-Nets, fully 3D architectures were also found in 3DCRF [19], D-UNet [20], and 3D-Res-UNet [25]. Since these U-Nets worked on the volumetric inputs in 3D space, they can fully utilize the contextual and spatial information of the infarct lesions to provide more robust predictions. However, as a tradeoff, these fully 3D models significantly spend more computational resources in training.
The primary purpose of this paper is to present an alternative and automatic scheme to segment infarct lesions from brain MRI scans. Like the previous 3D-based methods, our proposed approach is also based on volumetric segmentation using U-Net. However, as a difference, the automated infarct lesion segmentation proposed in this paper applies variational mode decomposition (VMD) followed by a three-dimensional U-Net-based segmentation. VMD is a popular preprocessing method, and its efficacy in brain MRI analysis had been proposed in [6,11,26]. However, all those studies utilized conventional machine learning algorithms, specifically support vector machine (SVM), combined with VMD to classify normal and abnormal brain lesions. To our knowledge, the use of VMD together with a deep learning model has not been conducted in previous brain abnormalities detections. Indeed, deep learning-based methods obviously outperform traditional machine learning algorithms, and it has been proven by several technical studies. For this reason, we decided to apply VMD and U-Net model in order to take advantage of both methods.
Our proposed method brings three significant technical contributions as follows: (i) For the first contribution, we proposed variational mode decomposition (VMD) as a preprocessing task. It helps remove non-infarct tissues from the input MRI scans and lessens the amount of unwanted information from the input volumes. (ii) For the second contribution, we presented overlapped patches strategy, which divides the input MRI volumes into smaller patches. The divided patches were fed into the U-Net model to perform patch-wise segmentation. The proposed overlapped patch strategy also performed patch pruning to reduce the workload of the segmentation model. Moreover, it records the reference numbers of patches aiming at seamless and intensive postprocessing. (iii) For the last contribution, we developed a three-dimensional U-Net model for the segmentation of infarct lesions from volumetric patches. Then, a postprocessing was followed in order to produce the final segmentation results.
The rest of the paper is organized as follows: Section 3 will discuss the details about the materials and methods applied in this study. Section 4 will explain the experimental results and discussion, and finally, Section 5 will summarize and conclude the paper.
Overview of the Proposed Method
This paper developed a deep learning-based algorithm for the segmentation of infarct lesions from chronic-stroke MRI scans. Figure 1 demonstrates an overview of the proposed method, and it consists of three fundamental processes, namely preprocessing, segmentation, and postprocessing.
The primary objective of preprocessing in this study is to reduce the computational workload by suppressing and removing unwilling parts from the input images, for instance, background, skull, and other non-infarct tissues. We conducted three main operations in the preprocessing step. They are (i) stripping of the skull using a pretrained model, (ii) removing non-infarct lesions using variational mode decomposition (VMD), and (iii) dividing the output volumes of VMD into small patches using overlapped patches strategy. The outputs of preprocessing step are three-dimensional patches of brain scans and associated lesion masks. In the second process, segmentation, the divided patches are subsequently fed into a three-dimensional U-Net model to perform patch-wise semantic segmentation. Finally, a postprocessing step is followed to combine the segmented patches and generate the segmented infarct lesions.
Data Source
The brain MRI scans applied in this study are obtained from a freely accessible and standard dataset called Anatomical Tracings of Lesions After Stroke (ATLAS) [3]. The raw images in the dataset were collected from chronic-stroke patients in 11 cohorts worldwide. There was a total of 304 T1-weighted MRIs in the original version of the dataset. Along with the dataset, manually delineated lesion masks and metadata can also be downloaded for the ground truths. The reliability of the lesion masks in ATLAS dataset were thoughtfully reviewed and confirmed by an expert radiologist. The individual subject of MRI contains at least one lesion, and 58% of subjects in the dataset have a single lesion. The rest, 42.1%, are multiple lesions, and separate lesion masks were used to identify them.
Besides the original raw MRIs, ATLAS also provides a standardized version of the dataset. This standard version was created to reduce the technical difficulties due to the image quality produced by different scanners. Some MRI subjects, especially collected using 1.5 T scanner, were removed from the original raw dataset (containing 304 T1weighted MRIs), and the rest were defaced, normalized to standard MNI-152 space. As a result, there were a total of 239 scans in the standard ATLAS dataset. In this study, we will apply the standard ATLAS dataset to conduct the experiments. Each input MRI in the standard dataset has a dimension of 197 × 233 × 189 mm 3 with a canonical voxel size of 1 mm 3 .
Variational Mode Decomposition (VMD)
Variational mode decomposition (VMD) is one of the most popular decomposition methods in biomedical image analysis. It decomposes an image into a specific number of spectral bands, having different directional and oscillatory characteristics. As a result of
Variational Mode Decomposition (VMD)
Variational mode decomposition (VMD) is one of the most popular decomposition methods in biomedical image analysis. It decomposes an image into a specific number of spectral bands, having different directional and oscillatory characteristics. As a result of decomposition, VMD produces a discrete number of modes in which each mode has limited bandwidth around its center frequency. For instance, suppose a two-dimensional input image f (x) is decomposed into k number of modes using VMD. The spatial bandwidths in each mode k are needed to be compact around a center pulsation ωk [27]. To calculate the bandwidths in each mode uk, analytical signals of each mode should be computed first using the following equations.
where ∝ is the bandwidth constraint and u AS, k (x) represents the analytic signal of the k th mode. However, the objective function in Equation (1) has a reconstruction constraint because it was calculated by setting one half-plane of the frequency domain to zeros. Therefore, quadratic penalty and Lagrangian multiplier are conducted to render this constraint. Finally, the optimal mode uk of the image can be obtained using the following equations [26,27].
where L in Equation (3) is the augmented Lagrangian, and the saddle point of L is the solution to the original constraint minimization problem. λ is the Lagrangian multiplier term, and the following Equation (4) can be derived to change into the quadratic penalty term, The main idea of applying VMD in this study is to extract silent image features from the spectral characteristics of the decomposed images. For a clear understanding, a visual representation can be seen in Figure 2. It compares the VMD of normal and infarcted (denoted by the red circle) brain MRI scans. As we can see in Figure 2, VMD transforms the input images into a number of spectral bands exposing different directions and oscillatory characteristics. Such spectral characteristics are the key indicators of distinctive anatomical features, which are very useful for further diagnostical analysis. In the second row of Figure 2, it can be evidently seen that the spectral bands around the abnormal (infarct) lesion exposes higher oscillations compared to other areas. Based on this fact, we create candidate infarct lesion masks by suppressing the mode oscillation values. Applying these masks, the objects showing low potential to be infarct lesions can be removed.
Overlapped Patches Strategy
Deep learning for medical images is notably arduous when the input images are volumetric data obtained from a stack of multiple sequential 2D images. The overlapped patches strategy proposed in this research intends to reduce the training efforts and time by dividing the input volume into smaller patches. As described in Section 3.2, the dimension of each input MRI is 197 × 233 × 189 mm 3 with a canonical voxel size of 1 mm 3 . Feeding the whole volume into the deep learning-based segmentation model is very bulky and computationally intensive. For this reason, our proposed overlapped patches strategy tries to create smaller, same-dimensioned patches before segmentation. Note that the MRI volumes in this stage were resulted from skull stripping and VMD decomposition, as illustrated in Figure 1. Since the skull stripping and VMD do not affect the input MRIs' dimension, the resulted volumes after passing those operations remain the same as the original input volume (197 × 233 × 189 mm 3 ).
Overlapped Patches Strategy
Deep learning for medical images is notably arduous when the input images are volumetric data obtained from a stack of multiple sequential 2D images. The overlapped patches strategy proposed in this research intends to reduce the training efforts and time by dividing the input volume into smaller patches. As described in Section 3.2, the dimension of each input MRI is 197 × 233 × 189 mm 3 with a canonical voxel size of 1 mm 3 . Feeding the whole volume into the deep learning-based segmentation model is very bulky and computationally intensive. For this reason, our proposed overlapped patches strategy tries to create smaller, same-dimensioned patches before segmentation. Note that the MRI volumes in this stage were resulted from skull stripping and VMD decomposition, as illustrated in Figure 1. Since the skull stripping and VMD do not affect the input MRIs' dimension, the resulted volumes after passing those operations remain the same as the original input volume (197 × 233 × 189 mm 3 ).
These skull-stripped and VMD-masked volumes are divided into small, overlapped patches using the overlapped patches strategy. Each patch has 64 × 64 × 64 in dimension, and ten voxels overlapped to its adjacent patches. A zero-padding of the original volume (197 × 233 × 189) is conducted to exactly divide the input volume into 64 × 64 × 64 patches. Thus, after padding, the volume size became 256 × 256 × 192. Therefore, every input volume MRI generates the same number of patches (48 patches in total). The corresponding annotation masks are also divided into patches. Moreover, the reference numbers of patches for each input subject are also recorded, aiming for seamless, intensive stitching in the postprocessing stage. We used the subject_ID and a serial number of the patch to record the reference numbers-for example, "c0003_patch_1" means the very first patch of the input c0003.
Although the primary purpose of separating overlapped patches is to reduce the volume size and computation effort, the padding makes the volume size bigger. Thus, our proposed overlapped patches strategy alleviates this problem by pruning unnecessary patches. Figure 3 demonstrates how the proposed overlapped patches strategy works. If the summation of all voxels in a patch is equal to zero, then that patch does not need to consider for segmentation. Moreover, patch pruning does not hinder the postprocessing thanks to using the same number of patches for every subject and recording the reference number of patches. These skull-stripped and VMD-masked volumes are divided into small, overlapped patches using the overlapped patches strategy. Each patch has 64 × 64 × 64 in dimension, and ten voxels overlapped to its adjacent patches. A zero-padding of the original volume (197 × 233 × 189) is conducted to exactly divide the input volume into 64 × 64 × 64 patches. Thus, after padding, the volume size became 256 × 256 × 192. Therefore, every input volume MRI generates the same number of patches (48 patches in total). The corresponding annotation masks are also divided into patches. Moreover, the reference numbers of patches for each input subject are also recorded, aiming for seamless, intensive stitching in the postprocessing stage. We used the subject_ID and a serial number of the patch to record the reference numbers-for example, "c0003_patch_1" means the very first patch of the input c0003.
Although the primary purpose of separating overlapped patches is to reduce the volume size and computation effort, the padding makes the volume size bigger. Thus, our proposed overlapped patches strategy alleviates this problem by pruning unnecessary patches. Figure 3 demonstrates how the proposed overlapped patches strategy works. If the summation of all voxels in a patch is equal to zero, then that patch does not need to consider for segmentation. Moreover, patch pruning does not hinder the postprocessing thanks to using the same number of patches for every subject and recording the reference number of patches.
Three-Dimensional U-Net (3D U-Net)
U-Net [28] is one of the state-of-the-art deep learning models for semantic segmentation of images, and it has been successfully applied in biomedical image segmentation. As the name implies, the architecture of U-Net exposes a U-shaped structure, which is comprised of two main parts: contracting path (encoder) and expansive path (decoder). The first path, the encoder, extracts the discriminative features from the input images. Specifically, it follows the typical architecture of a convolutional neural network and contains repeated series of two 3 × 3 convolutions, each followed by a rectified linear unit (ReLU) and a 2 × 2 max pooling. Since the goal of the contracting path is feature extraction, the number of feature channels in each downsampling step becomes double while the spatial dimensions are reduced. The bottommost layer of the U-Net is treated as a bridge between contraction and expansive paths, and it contains two 3 × 3 convolution layers followed by ReLU and one 2 × 2 up convolution layer.
Three-Dimensional U-Net (3D U-Net)
U-Net [28] is one of the state-of-the-art deep learning models for semantic segmentation of images, and it has been successfully applied in biomedical image segmentation. As the name implies, the architecture of U-Net exposes a U-shaped structure, which is comprised of two main parts: contracting path (encoder) and expansive path (decoder). The first path, the encoder, extracts the discriminative features from the input images. Specifically, it follows the typical architecture of a convolutional neural network and contains repeated series of two 3 × 3 convolutions, each followed by a rectified linear unit (ReLU) and a 2 × 2 max pooling. Since the goal of the contracting path is feature extraction, the number of feature channels in each downsampling step becomes double while the spatial dimensions are reduced. The bottommost layer of the U-Net is treated as a bridge between contraction and expansive paths, and it contains two 3 × 3 convolution layers followed by ReLU and one 2 × 2 up convolution layer.
U-Net applies a series of convolutional filters in the contraction path; thus, the spatial dimensions of the inputs at the bottommost layer become smaller than that of the original images [20]. Since the ultimate goal of U-Net is semantic segmentation, which is a classification of pixels to determine whether a specific pixel in the input is part of a lesion, the output should be the same dimension as the input. For this reason, the expensive (decoder) path of U-Net semantically projects the discriminative features (lower spatial dimensions) generated by the encoder onto the pixel space (higher spatial dimensions) to maintain symmetric dimensions between input and output images. Similar to the encoder, every step in the decoder path consists of a 2 × 2 upsampling (upconvolution), followed by two 3 × 3 convolutions with ReLU. At the final layer, U-Net ends up with a 1 × 1 convolution that converts the size of the feature map of the first last layer to the desired number of output classes. U-Net applies a series of convolutional filters in the contraction path; thus, the spatial dimensions of the inputs at the bottommost layer become smaller than that of the original images [20]. Since the ultimate goal of U-Net is semantic segmentation, which is a classification of pixels to determine whether a specific pixel in the input is part of a lesion, the output should be the same dimension as the input. For this reason, the expensive (decoder) path of U-Net semantically projects the discriminative features (lower spatial dimensions) generated by the encoder onto the pixel space (higher spatial dimensions) to maintain symmetric dimensions between input and output images. Similar to the encoder, every step in the decoder path consists of a 2 × 2 upsampling (upconvolution), followed by two 3 × 3 convolutions with ReLU. At the final layer, U-Net ends up with a 1 × 1 convolution that converts the size of the feature map of the first last layer to the desired number of output classes.
The detailed architecture of our proposed 3D U-Net is illustrated in Figure 4. The original version of U-Net was designed for the segmentation of two-dimensional color images. However, in this research, we tend to segment infarct lesions from volumetric patches of brain MRIs. Therefore, we develop a three-dimensional (3D) version of U-Net based using divided patches using overlapped patches strategy.
Three-Dimensional U-Net (3D U-Net)
U-Net [28] is one of the state-of-the-art deep learning models for semantic segmentation of images, and it has been successfully applied in biomedical image segmentation. As the name implies, the architecture of U-Net exposes a U-shaped structure, which is comprised of two main parts: contracting path (encoder) and expansive path (decoder). The first path, the encoder, extracts the discriminative features from the input images. Specifically, it follows the typical architecture of a convolutional neural network and contains repeated series of two 3 × 3 convolutions, each followed by a rectified linear unit (ReLU) and a 2 × 2 max pooling. Since the goal of the contracting path is feature extraction, the number of feature channels in each downsampling step becomes double while the spatial dimensions are reduced. The bottommost layer of the U-Net is treated as a bridge between contraction and expansive paths, and it contains two 3 × 3 convolution layers followed by ReLU and one 2 × 2 up convolution layer.
U-Net applies a series of convolutional filters in the contraction path; thus, the spatial dimensions of the inputs at the bottommost layer become smaller than that of the original images [20]. Since the ultimate goal of U-Net is semantic segmentation, which is a classification of pixels to determine whether a specific pixel in the input is part of a lesion, the output should be the same dimension as the input. For this reason, the expensive (decoder) path of U-Net semantically projects the discriminative features (lower spatial dimensions) generated by the encoder onto the pixel space (higher spatial dimensions) to maintain symmetric dimensions between input and output images. Similar to the encoder, every step in the decoder path consists of a 2 × 2 upsampling (upconvolution), followed by two 3 × 3 convolutions with ReLU. At the final layer, U-Net ends up with a 1 × 1 convolution that converts the size of the feature map of the first last layer to the desired number of output classes.
Configurations of the Proposed Method
Working with deep learning models can guarantee better performance, but they considerably demand a high number of hyperparameters. A correct configuration and the best choice of hyperparameters for the model are the most critical issues to get the accurate outputs. This section will discuss the details of the experimental setups and the results of our proposed method.
. Data Preparation
As stated in the materials and methods (Section 3), the experiments of our proposed method are done using 239 MRI exams of the standardized ATLAS dataset. We divided those input data into three partitions: 60% for training (143 scans), 20% for validation (48 scans), and 20% for testing (48 scans). Since some MRI scans contain more than one infarct lesion, we summarized data preparation details in Table 1.
Preprocessing
Once preparing the data, we perform the preprocessing of the input images. Since an input MRI exam is taken from an individual patient and contains multiple two-dimensional (2D) sequential slices, we stacked these 2D slices in sequential order and constructed them as a volumetric image. However, we did not normalize the input volumetric images because we used the standardized ATLAS dataset version. All the exams are already undergone a standardization process and formatted into 197 × 233 × 189 mm 3 dimensions with a canonical voxel size of 1 mm 3 . For this reason, we skipped the standardization process and continued the following preprocessing steps.
• Skull Stripping
Skull stripping is one of the most initial and crucial tasks in every type of neurological MRI analysis. On a head scan, the brain region occupies approximately one-third of the entire scan while the rest, two-third, is occupied by extra-meningeal tissues. Skull stripping detects the boundaries of the skull to determine the brain area. Subsequently, it removes nonbrain tissues outside of boundaries and extracts the brain region only. In this study, we focus on detecting the infarct brain lesions located within the brain area. Thus, this step is necessary not only to reduce search area and computational effort but also to improve the detection accuracy. Several approaches had been proposed to perform this operation. Among them, we applied a deep learning-based method called DeepBrain [29] for skull stripping. The main reasons for using DeepBrain includes (i) it is developed using T1 weighted MRIs, and we are also working on T1 weighted MRIs, (ii) it is easy to use and fast (only~20 s in CPU version and~2 s in GPU version), (iii) it is working well on 3D volumetric data without requiring any extra effort and (iv) it had proven high accuracy (>0.97 dice metric) using popular standard datasets.
•
Variational mode decomposition (VMD) After skull-stripping from the input brain MRIs, the next preprocessing step is variational mode decomposition (VMD). As described in Section 3.3, applying VMD as a preprocessing is one of the major contributions of this study. Our main objective of this contribution is to suppress the non-infarct lesions inside the brain. It can significantly reduce the computation effort and time because the deep learning model does not need to segment such non-infarct lesions. On the other side, VMD also provides a great deal of help in reducing overfitting and data imbalance problems. VMD works on a number of hyperparameters, and we selected the appropriate values of hyperparameters as described in Table 2. Among (K = 5) modes of decomposed images, experiments showed that mode number 3 is the most suitable for decomposition because it can represent most information about infarct lesions. Thus, we applied mode three decomposed images to create masks and applied them to cover out undesired non-infarct lesions.
• Overlapped Patches Strategy
Once masking out the non-infarct candidates, we then divided the resulted volumes into smaller patches. These patches have the same dimensions (64 × 64 × 64), and each patch possesses ten overlapped pixels from its adjacent patches. Moreover, each patch's reference position number is also extracted simultaneously, aiming for seamless and intensive stitching in the postprocessing step. These divided patches are then fed into the 3D U-Net algorithm in order to perform patch-wise segmentation.
Segmentation Using 3D U-Net
We preprocessed for all MRIs and associated ground-truth scans in the dataset hence each patch has its own associated mask patch. The outputs from the preprocessing step are 3D overlapped patches and corresponding masks, and they were used as the inputs for 3D U-Net based segmentation. As described in the graphical representation of the proposed U-Net (Figure 4), the inputs are 64 × 64 × 64 × 1 patches. Here, the color channel is assigned as one because MRIs are the grayscale images. The output of U-Net is 64 × 64 × 64 × 2 for two classes, that is one for background and another for the infarct lesion.
The proposed 3D U-Net was trained using 3D patches that were obtained from MRIs in training partition. The patches for validation were apart from the training patches and used to evaluate the model performance. Dice loss function is calculated to assess the training performance and Adam optimizer is applied to optimize the loss. Moreover, we applied batch normalization after each layer of convolution to improve the stability of the training and drop-out after each level of U-Net to reduce overfitting. Several times of training using different values of hyperparameters are conducted to get the lowest loss value on the validation samples. Based on the experimental trials and results, we achieved our best segmentation model using the following hyperparameters stated in Table 3. Table 3. Hyperparameter values for proposed 3D U-Net.
Postprocessing
The is the final step of our proposed infarct lesion segmentation. The segmented image patches using 3D U-Net were stitched together again, using the reference patch position numbers and 3D connected component labeling. In preprocessing, we pruned out some patches having all black voxels, and they were not fed into the U-Net for segmentation. Therefore, we substitute zero voxels again for those patches to get the same dimension as the original image. Then, we performed 3D connected component labeling using voxel connectivity 26 to finetuned on full-size images and generate the final segmentation of infarct lesions. In order to get more comprehensive understanding, Algorithm 1 describes the pseudo code to summarize the workflow of our proposed automated infarct lesion segmentation. training performance and Adam optimizer is applied to optimize the loss. Moreover, we applied batch normalization after each layer of convolution to improve the stability of the training and drop-out after each level of U-Net to reduce overfitting. Several times of training using different values of hyperparameters are conducted to get the lowest loss value on the validation samples. Based on the experimental trials and results, we achieved our best segmentation model using the following hyperparameters stated in Table 3. Figure 5 illustrates the learning curve of our proposed U-Net model, showing the training and validation loss. The model reached the best stage at epoch 20 with a mean DSC of 0.6738 for training, and 0.6718 for validation, respectively.
Postprocessing
The is the final step of our proposed infarct lesion segmentation. The segmented image patches using 3D U-Net were stitched together again, using the reference patch position numbers and 3D connected component labeling. In preprocessing, we pruned out some patches having all black voxels, and they were not fed into the U-Net for segmentation. Therefore, we substitute zero voxels again for those patches to get the same dimension as the original image. Then, we performed 3D connected component labeling using voxel connectivity 26 to finetuned on full-size images and generate the final segmentation of infarct lesions. In order to get more comprehensive understanding, Algorithm 1 describes the pseudo code to summarize the workflow of our proposed automated infarct lesion segmentation.
Results
To evaluate the performance of the proposed segmentation, we calculated the following assessment measurements. Note that all of these measurements were calculated after the postprocessing stage; that is, they were not calculated for patch-wise segmentation but for the final volumetric lesion segmentation.
•
Jaccard similarity coefficient (IoU) This index is also known as intersection over union (IoU) and measures the overlap between the segmented lesions and the ground truth images. IoU value is ranging from 0 (no overlapped) to 1 (perfect segmentation), and it can be calculated using the following equation: where X is the segmented lesion and Y is the ground truth lesion mask.
• Dice similarity coefficient (DSC) Similar to IoU, DSC also measures the overlapped between the segmented lesion and the ground truth lesion mask. DSC can be calculated by:
Results
To evaluate the performance of the proposed segmentation, we calculated the following assessment measurements. Note that all of these measurements were calculated after the postprocessing stage; that is, they were not calculated for patch-wise segmentation but for the final volumetric lesion segmentation.
• Jaccard similarity coefficient (IoU)
This index is also known as intersection over union (IoU) and measures the overlap between the segmented lesions and the ground truth images. IoU value is ranging from 0 (no overlapped) to 1 (perfect segmentation), and it can be calculated using the following equation: where X is the segmented lesion and Y is the ground truth lesion mask.
• Dice similarity coefficient (DSC) Similar to IoU, DSC also measures the overlapped between the segmented lesion and the ground truth lesion mask. DSC can be calculated by: • Average symmetric surface distance (ASSD) Unlike IoU and DSC, this index is a distance measurement. It calculates the average of all the distances from voxels on the boundary of the segmented lesion to those of the ground truth and vice versa [30]. The smaller number of ASSD indicates the better segmentation performance. Figure 6 demonstrates the raincloud plots showing the distribution of three assessment measurements on testing data. By analyzing the raincloud visual representation, it is evident that the plots for IoU and DSC are relatively short, meaning overall segmentation results have a high level of agreement with each other. For the ASSD, the plot is comparatively taller than the others because some outputs had low ASSD and some were high. The mean and standard deviation values of each assessment measurement were summarized in Table 4. Note that the lower number of ASSD value indicates the higher similarity between the segmentation result and ground truth.
Algorithm 1. Pseudo Code for Proposed Infarct Lesion Segmentation
Input: Brain MRI exams as S = {s 1 , s 2 , . . . ., s n }, and associated ground truth masks as M = {m 1 , m 2 , . . . ., m n }, where n is the total number of exams in the given dataset.
Step 5: Test the trained U-Net using S test and perform postprocessing.
Step 6: Evaluate the performance of U-Net using M test . Output: Segmented infarct lesions of tested MRIs and assessment measurements.
Discussion
Some example outputs of our proposed segmentation method are illustrated in Figure 7 using volume rendering. The first column of Figure 7 represents input MRI scans from four different patients containing different sizes of infarct lesions. Then, in the second column, we can see the skull stripped volumes of input images and associated lesion masks (highlighted by the green color). The lesion size of each subject was also described by measuring the number of voxels in the major axis length. And finally, the last column shows the segmentation results (highlighted by the blue color) and associated DSC values. From this figure, we can note that our proposed automated infarct lesion segmentation performs well for any size of lesions.
The mean and standard deviation values of each assessment measurement were summarized in Table 4. Note that the lower number of ASSD value indicates the higher similarity between the segmentation result and ground truth.
Discussion
Some example outputs of our proposed segmentation method are illustrated in Figure 7 using volume rendering. The first column of Figure 7 represents input MRI scans from four different patients containing different sizes of infarct lesions. Then, in the second column, we can see the skull stripped volumes of input images and associated lesion masks (highlighted by the green color). The lesion size of each subject was also described by measuring the number of voxels in the major axis length. And finally, the last column shows the segmentation results (highlighted by the blue color) and associated DSC values. From this figure, we can note that our proposed automated infarct lesion segmentation performs well for any size of lesions. Moreover, we evaluated the performance of our proposed method by performing a comparative analysis with state-of-art methods described in related work (Section 2). Comparing different methods that were trained using different datasets and measured using different assessments is very troublesome. For that reason, we ensure a fair and quantitative comparison by selecting the previous methods using the same dataset base on the same assessment method (DSC). Table 5 summarizes different infarct lesion segmentation methods applying the ALATS dataset. The details about the experimental setups of each method and reported performance (DSC) are also described in the table [25]. From this table, we can prove that our proposed infarct lesion segmentation method can provide a slightly higher DSC value compared to the previous methods. As stated in [24], the DSC of the human expert gold standard can be considered in the range of 0.67. Therefore, the DSC of our proposed segmentation method is quite close to that of the gold standard. Moreover, we evaluated the performance of our proposed method by performing a comparative analysis with state-of-art methods described in related work (Section 2). Comparing different methods that were trained using different datasets and measured using different assessments is very troublesome. For that reason, we ensure a fair and quantitative comparison by selecting the previous methods using the same dataset base on the same assessment method (DSC). Table 5 summarizes different infarct lesion segmentation methods applying the ALATS dataset. The details about the experimental setups of each method and reported performance (DSC) are also described in the table [25]. From this table, we can prove that our proposed infarct lesion segmentation method can provide a slightly higher DSC value compared to the previous methods. As stated in [24],
Conclusions
In this study, we have proposed a method for automated segmentation of infarct lesions from T1 weighted brain MRIs. For technical contributions, our study brings three major ideas: (1) applying variational mode decomposition (VMD) for preprocessing of input MRI volumes, (2) dividing the preprocessed MRIs into overlapped patches together with the associated reference numbers, and (3) segmenting the infarct lesions using threedimensional U-Net. The first contribution, VMD, decomposed the input MRIs into different images by highlighting different spectral bands, which are the key indicators to extract silent image features from infarct lesions. We suppressed the non-stroke (non-infarct) lesions inside the brain by analyzing the spectral characteristics of decomposed images. This contribution helps to reduce the computation effort and time because the segmentation model does not need to work on non-infarct lesions. Moreover, it can implicitly relieve the overfitting and data imbalance problems that specifically occurred due to the high numbers of non-infarct lesions. The second contribution, overlapped patches strategy, is also applied as a preprocessing aiming to reduce the workload of the 3D U-Net based segmentation. Instead of direct inputting the whole MRI volume (197 × 233 × 189 mm 3 ), overlapped patches are generated and fed into the U-Net model. Training the U-Net using multiple small patches is also an implicit way of data argumentation and it makes the model more robust. Moreover, thanks to the use of VMD in preprocessing, we can get rid of some empty patches from segmentation. This also considerably reduces the computational effort. Besides, as our proposed overlapped patches strategy records the reference number of the patches, we can easily finetune the final segmented volumes in the postprocessing step. Finally, the last contribution is the development of a three-dimensional U-Net using the extracted patches to segment the infarct lesions. U-Net model performed patch-wise segmentation, and its outputs are postprocessed to get the full-size segmented volumes.
Our proposed method is developed and evaluated using 239 T1 weighted MRI scans (with a total of 430 infarct lesions) from a standard dataset called ATLAS. Based on the experimental results, our method has achieved a mean DSC (0.6684), IoU (0.5022), and ASSD (0.3932), respectively. Moreover, empirical comparison with some previous popular works established using the same dataset also proved that our proposed method can provide preferable segmentation performance. Thus, we believe that our proposed automated infarcted lesion segmentation method can be applied as an adjunct tool to relieve the complications of manual lesion segmentation and assist in providing timely diagnosis decisions and treatments for patients. However, as a major limitation, our proposed work is a unimodal and focus on T1 weighted MRI scans. Hence, its efficacy can be further improved using multimodal MRI scans. Moreover, we have a great interest in improving the performance of our segmentation model using VMD with a combination of the modernized architecture of U-Net. Thus, we believe that the proposed idea of this paper will also be a great help for readers to get future research directions in the automatic diagnosis of any other neurological diseases. | 9,955.4 | 2021-03-01T00:00:00.000 | [
"Computer Science"
] |
Antioxidant potential of Moringa Stenopetala leaf extract on lager beer stored at room temperature
Abstract Modern breweries are focused on controlling oxidation in beer using natural antioxidants to improve shelf-life stability. The most significant quality issues in the brewing industry are flavor instability and oxidation. In this research, the effects of adding Moringa stenopetala leaf extract to lager beer at 400, 600, and 800 ppm concentrations for 30, 60, and 90 days storage at 25ºC were investigated. The total phenolic and total flavonoid content were measured by Folin-Ciocalteu and aluminum chloride method, respectively. Using the 1, 1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging activity and the phosphomolybdate assay, the antioxidant activity of each treatment was assessed and compared. Addition of the extract in beer showed a linear increment in the total phenolic content from 46.79 up to 88.30 milligram of gallic acid equivalent per liter of beer (mg GAE/L) and total flavonoid content from 123.36 up to 167.09 milligram of catechin equivalent per liter of beer (mg CE/L). Similar increment was observed within the DPPH scavenging potential, from 46.55 up to 67.16% and total antioxidant power from 139.12 up to 216.67 milligram of butylated hydroxytoluene equivalent per liter of beer (mg BHTE/L). The total phenolic and flavonoid contents and antioxidant activities of extract treated beer showed slight reduction as compared to untreated beer with increasing storage time. According to the findings, M. stenopetala leaf extract could be utilized as a component in beer to reduce oxidation and keep it fresh for extended periods of time.
PUBLIC INTEREST STATEMENT
Beer is one of the most widely consumed alcoholic beverages in the world for its fresh taste, low calories, and higher nutritional value than other alcoholic beverages, because of its minerals content.Flavor instability resulting from beer storage and oxidation remains one of the most important quality problems in the brewing industry.Moringa is one of the most powerful sources of natural anti-oxidants by supplying the free atoms and mitigate the effect of free radicals.Furthermore, moringa has antimicrobial properties that extend the shelf life of alcoholic beverages by suppressing lactic acid bacteria.Therefore, this study deals with the possible extension of shelf life of lager beer using moringa leaf extract to replace chemical antioxidants.The incorporation of moringa stenopetala at moderate level in beer reduces oxidation and increase the phenolic content that is typically reduced during the boiling, filtration, bottling, and storage stages of the brewing process.This has a potential promise to improve the stability and shelf-life of commercial beers without the incorporation of artificial or chemical preservatives.
antioxidant power from 139.12 up to 216.67 milligram of butylated hydroxytoluene equivalent per liter of beer (mg BHTE/L).The total phenolic and flavonoid contents and antioxidant activities of extract treated beer showed slight reduction as compared to untreated beer with increasing storage time.According to the findings, M. stenopetala leaf extract could be utilized as a component in beer to reduce oxidation and keep it fresh for extended periods of time.
Introduction
The fresh taste, low calorie content, and nutritional value of beer make it one of the world's oldest and most popular alcoholic drinks (Arnold, 2005).It is a good source of phenolic compounds (Oñate-Jaén et al., 2006) and contains potassium, magnesium, calcium and sodium (Styburski et al., 2018).Beer production involves a number of complex chemical and biochemical reactions, so the final beer contains many compounds with antioxidant activity derived mainly from yeast, malt, and hops, or formed during preparation (Quifer-Rada et al., 2015).Antioxidant types and concentrations in finished beer are mostly determined by the brewing technology, raw materials, and yeast used throughout the brewing process.Slight variations in the structural composition of these compounds can result in large changes in antioxidant activity, affecting the beer's overall oxidative or flavor stability.Beer flavor instability is becoming a major concern for most breweries due to the loss of freshness and quality of beer over time as its chemical composition changes.Flavor instability resulting from beer storage and shelf life is becoming a major concern for most breweries due to the loss of freshness and quality of beer over time due to changes in its chemical composition.Flavor stability is now the most critical issue in determining shelf life packaged beer, and reducing flavor staling can help increase shelf life (Aron et al., 2011).It is largely determined by oxygen content, brewing processes, and the materials used.As a result, research is currently being focused on improving the antioxidant activity of beer itself, because oxidative staining of beer can occur even at low oxygen levels of 0.1 mg/l.Since oxidative staling of beer is still apparent in low oxygen environments, researchers are now investigating ways to increase the antioxidant activity of beer (Bamforth et al., 2018).Scientists are searching different mechanisms to decrease beer oxidation using synthetic antioxidants.However, current research recommends not using synthetic antioxidants in food and beverage industries due to prolonged health effects and the evolution of food laws and regulations which forbids synthetic antioxidants in food (Arnold, 2005).Thus, natural antioxidants are increasingly being used in food products both for reasons of quality and potential health benefits.Therefore, replacement of synthetic antioxidants with natural antioxidants is the issue of modern breweries to tackle such antagonists and succeed in their business with customer satisfaction and respect government regulations.
Natural antioxidants, such as Moringa, are one of the most powerful sources since they provide bioactive compounds and inhibit the effects of free radicals.According to Nadeem et al. (2013), M. stenopetala is rich in phenolic compounds like cryptochlorogenic acid, astragalin, glucosinolates, and isothiocyanates.Also the leaves are rich in flavonoids such as isoquercetin and rutin, and the beta carotene present in M. stenopetala leaves also acts as antioxidants (Engeda & Rupasinghe, 2021;Tesfaye & Solomon, 2014).It also showed antimicrobial effect for shelf life extension of alcoholic beverages by suppressing lactic acid bacteria (Florence et al., 2016).Therefore, the aim of this study was to evaluate the effect of M. stenopetala extract on total phenolic and flavonoid contents and antioxidant activity of Lager beer stored at room temperature.
Raw materials
M. stenopetala leaves were taken from the Hawassa Teachers Training Center.Malts were obtained from Assela malt factory.From BGI Ethiopia, new Saccharomyces cerevisiae yeast (S-189 type) and brewing liquor were obtained for standard brewing.
Sample preparation and extraction
M. stenopetala leaves were carefully collected, wrapped in aluminum foil, and delivered to Hawassa University's Food Science and Post-harvest Technology lab.After that, it was cleaned with deionized water, dried and ground with grinder (Nadeem et al., 2013).The fine powder was combined with 80 percent ethanol at a ratio of 1 gm to 10 mL in a duplicate Pyrex beaker.An electrical shaker was used to macerate the beakers for 18 hours after they were tightly wrapped with cover bush.After that, the extracts were separated and filtered before being evaporated to dryness at 40°C under vacuum (Buchi, 3000 series, Switzerland).A stock solution was made and kept at 4°C in the refrigerator (Siddhuraju & Becker, 2003).
Experimental design and treatments
The stock solution (1000 ppm) was prepared by dissolving 50 mg of powder dried leaf extract in distilled water to make 50 mL solution.Using a factorial design, diluted leaf extract solutions with concentrations of 400, 600, and 800 ppm, as well as storage times of 1, 30, 60, and 90 days were investigated on total phenolic and flavonoid content and antioxidant potential of a Lager beer.A beer containing 12 ppm potassium metabisulphite (KMS) was used as a positive control, and a beer without the extract was used as a negative control.Three concentration of treatments (400, 600, and 800 ppm) were chosen based on the previous findings (Gedefaw et al., 2022) on sensory evaluation in which trained panelists assessed the maximum amount of M. stenopetala leaf extracts that could be applied without changing the original beer flavor.
Beer preparation
The beer was made utilizing a dry milling system using Pires and Brányik (2015) method.The mash was prepared with malt and water at 55°C, with water to grain ratio of 2.3 L/kg.It was heated at 64°C for 20 min.and then at 74°C for 15 min.Following saccharification of the mash with temperature increases to 78°C, the mash was filtered.A wort sample was boiled for 60 minutes with 0.12 kg CO 2 hop extract per hectoliter, and the hot trub was removed, cooled to 10°C, and aerated to 18 ppm oxygen.It was then fermented at 12°C until the initial gravity dropped to 8ºP, and then at 16°C until the vicinal diketones (VDK) level decrease to less than 0.18 ppm.The matured beer was then purged and filtered after being stored at −2°C for two days to allow for maturation at 0.5 bar counter pressure.The beer was diluted to 11.05°P with de-aerated water, carbonated to 5.8 gm/lit CO 2 , and packed using 330 mL sanitized amber bottles and crowned with manual crowner after addition of 3.3 mL of extracts at various concentrations.All samples were labeled and pasteurized at 60°C for about 20 min and kept at 25°C.Starting on the first day of sample preparation, the total phenolic, flavonoids and antioxidant activities of each sample were tested every 30 days for three months.
Total Phenolic Content (TPC)
The TPC of beer samples were measured by Zhao et al. (2010) using the Folin-Ciocalteu spectrophotometric method.0.1 mL beer sample (diluted five-folds) was mixed with 1 mL of Folin-Ciocalteu reagent (diluted ten-folds), and left for five minutes.The mixture was then incubated at 25°C for 90 min with 1 mL sodium carbonate (7.5%w/w).The absorbance of the solution was then measured at 765 nm with a UV-visible double beam spectrophotometer.From the gallic acid calibration curve (y = 0.023× + 0.014, R 2 = 0.996), the TPC was determined and reported as milligram gallic acid equivalent per liter of beer (mg GAE/L).
Where; y = the absorbance of the sample, x = the concentration established from the calibration curve (mg GAE/L).
Total Flavonoid Content (TFC)
The TFC of beer samples were determined using Pai et al. (2015) method.One mL of beer samples (diluted five-folds) was diluted with 1.25 mL of deionized water and 75 μL of NaNO 2 was added to the mixture.After 6 min, 150 m AlCl 3 was added to the reaction liquid, followed by 1 mL 1 M NaOH after 5 min.The absorbance was measured at 510 nm versus prepared water blank.A standard curve was prepared using catechin (5-1000 µg/mL).All of the results were calculated using standard calibration curves (y = 0.011× + 0.132, R 2 = 0.973) and presented as milligram catechin equivalents per liter of beer (mg CE/L).
Where; y = the absorbance of the sample, x = the concentration established from the calibration curve (mg GAE/L).
DPPH radical scavenging activity
The ability beer samples to scavenge DPPH radicals were measured using the method described by Tafulo et al. (2010).The beer samples were diluted to five-fold and 1.0 mL of the sample was mixed with 2.0 mL freshly prepared DPPH solution (0.06%, w/v) in ethanol.After vortexing the reaction mixture, it was left at room temperature in the dark for 30 min.The absorbance was then measured at 520 nm using a double beam UV-visible spectrophotometer (JENWAY-9500, UK).The discoloration of DPPH given as a percentage was then used to calculate free radical scavenging activity using the equation, where Ac is the absorbance of DPPH in the absence of the extract sample and As is the absorbance of DPPH in the presence of the extracts sample.
Total antioxidant activity using phosphomolybdate assay
The total antioxidant capacity of beer samples were measured by Huda-Faujan et al. (2009) method using phosphomolybdenum assay.In a capped test tube, 0.3 mL of each sample (diluted five times) was mixed with 3 mL of phosphomolybdenum reagent.The solutions were then incubated in water at 95°C for 90 minutes before being cooled to room temperature.Finally, using a spectrophotometer (JENWAY-6300, UK), the absorbance of each solution was measured at 695 nm against a blank containing 3 mL methanol.The overall antioxidant activities were determined using the equation y = 0.432× + 0.078, R 2 = 0.99, and presented as milligrams of butylated hydroxytoluene equivalent per liter of beer (mg BHTE/L).
Where; y = the absorbance of the sample, x = the concentration established from the calibration curve (mg BHTE/L).
Statistical Analysis
On the experimental data, an analysis of variance (ANOVA) was performed and Duncan's multiple range tests were employed to find differences (p < 0.05) between the mean values.The data was analyzed using SAS 9.0 and origin 8 program, and the results were expressed as mean and standard deviation.
Effect of M. Stenopetala leaf extract on TPC and TFC of lager beer
The effect of M. stenopetala leaf extract at different concentrations and storage period on the TPC and TFC of beer is presented in Table 1.The concentration of TPC in extract-treated beer influenced it considerably.Untreated beer had a total phenolic concentration of 46.79 mg GAE/L, whereas adding 400 ppm extract increased total phenolic content to 70.51 mg GAE/L, a 33% increase.Increasing the extract concentration to 800 ppm resulted in significantly (P < 0.05) increase in TPC to 88.60 mg GAE/L.This finding was supported with a study conducted by Ulloa et al. (2017) that bioactive compounds increased with the increasing concentration of propolis extract.This significant increment of TPC in beer samples enriched with extract might come from the potentially high accumulation of bioactive compounds in M. stenopetala leaf (Engeda & Rupasinghe, 2021).A study conducted by Tesfaye and Solomon (2014) on the antioxidant potential of M. stenoptetala found up to 92.8 mg GAE/100 g of dry weight which supports finding that probably increased phenolic content in beer samples enriched with the extract.The TPC of beer sample treated with commercial beer antioxidant, potassium metabisulphite (66.79 mg GAE/L) was significantly (P < 0.05) lower than the TPC of beer sample treated with 400 ppm of extract.
There was drastic decline of total phenolic content (44% reduction) of the controlled beer up to 26.16 mg GAE/L within 90 storage days.While beer samples enriched with potassium metabisulphite showed only 15% reduction in total phenolic content (66.79-56.78mg GAE/L).Besides, with storage 90 days, M. stenopetala leaf extract treated beer samples showed slight reduction of total phenolic content (70.51-61.73mg GAE/L for 400 ppm, 82.75-73.08mg GAE/ L for 600 ppm and 88.60-84.34mg GAE/L for 800 ppm).
Similarly the TFC of beer samples enhanced with increasing the concentration of the extract.Increasing the concentration of the extract from 400 ppm to 600 ppm significantly (p < 0.05) increased the total flavonoid content from 147.51 up to 159.82 mg CE/L and further increment to 800 ppm significantly (p < 0.05) increased its total flavonoid content to 167.09 mg CE/L.The beer sample enriched with 12 ppm potassium metabisulphite had also raised the concentration to 143.99 mg CE/L.The total flavonoid content for untreated beer sample was significantly affected by storage time with drastic reduction of total flavonoid content.The result disclosed that total flavonoid content had been reduced linearly from 123.36 mgCE/L (at the first date of storage) to 85.71 mg CE/L (after 90 days of storage) which implies 30% reduction.On the contrary, only 15% total flavonoid content reduction was observed for potassium metabisulfite treated beer sample (143.99-120.02mg CE/L) with in the same storage time.
Beer treated with different concentrations of M. stenopetala leaf extracts, S01: untreated, S02: treated with 12 ppm potassium metabisulphite, S03: treated with 400 ppm extract, S04: treated with 600 ppm extract, S05: treated with 800 ppm extract.Values are means and standard deviations of duplicated determinations; the means with the same letter across the column are not significantly different at P ˂ 0.05.
The beer samples enriched with M. stenopetala leaf extracts showed slight reduction of total flavonoid content from starting date to 90 days of storage (147.51-125.69mg CE/L for 400 ppm, 159.82-141.81mg CE/L for 600 ppm and 167.09-150.93 mg CE/L for 800 ppm) which is in average of only 10% reduction.According to Vignault et al. (2018), phenolic compounds significantly decrease during storage due to oxidative polymerization and the formation of colloidal hazes.A large amount of phenolic compounds would be liberated by malting and fermentation, and these compounds would undergo reaction with protein, leading to greater declines of phenolic compounds than those bound to proteins in most beers (Szwajgier et al., 2005).According to the study conducted by Zhao (2015), extract treated beer samples had a slight reduction in TPC due to the fact that the bioactive phenolic compounds introduced from extract were not equally reduced.In addition, there was no difference in the amount of some phenolic compounds during storage.The slight reduction of total phenolic and flavonoid contents of M. Stenopetala leaf extract treated beer might be the presence of large number of stable phenolic compounds bound with polysaccharides, in leaf cells of M. stenopetala (Dadi et al., 2019;Engeda & Rupasinghe, 2021).
Effect of M. Stenopetala leaf extract on beer DPPH scavenging activity
The DPPH scavenging activity of M. stenopetala leaf extract-enriched beer over time is demonstrated in Figure 1.The result for untreated beer was 46.55%, but its potential was decreasing with increasing storage time from 46.55 up to 30.91% which might be due the reduction of endogenous antioxidants happened by reacting with hydroxyl radicals through time (Dennis et al., 2004).However, beer enriched with potassium metabisulfite showed 61.80% DPPH scavenging activity, which is significantly much higher than untreated beer, and there was only a minor decrease in DPPH scavenging activity over time from the first date of production to three months of storage.
According to Lugasi (2003), metal sulphites had relatively high antioxidant potential which has been used by modern breweries for decades.But due to legislations on health of consumers, many developed countries avoid using this metallic salt.
DPPH scavenging activity for beer samples treated with the extracts revealed linear increment with concentration of 60.14, 71.60 and 86.44% for 400, 600 and 800 ppm, respectively.The abundantly identified bioactive chemicals and antioxidants (Tesfaye & Solomon, 2014) in the leaf of M. stenopetala contribute to the increase in DPPH scavenging capability with extract concentration.This finding also showed reduction of DPPH scavenging potential with storage time for beer samples enriched with the extracts.The reduction was from 60.14-34.29%(400 ppm sample), 71.60-41.61%(600 ppm sample) and 86.44-50.40%(800 ppm sample) within three months of storage time (Figure 1).
Beer treated with different concentrations of M. stenopetala leaf extracts, S01: untreated, S02: treated with 12 ppm potassium metabisulphite, S03: treated with 400 ppm extract, S04: treated with 600 ppm extract S05: treated with 800 ppm extract.The values are the average of duplicated experiment (mean± SD).At p < 0.05, values of the same concentration in the histogram bar with different letters are significantly different.
Effect of M. Stenopetala leaf extract on total antioxidant potential of beer
The total antioxidant potential of beer enriched with M. stenopetala leaf extract with storage time is illustrated in Figure 2. The total antioxidant potential for untreated beer was found 139.12 mg BHTE/L which is almost similar to study of light lager beers by Zhao et al. (2013) which was 140.23 mg BHTE/L.With increasing concentrations of the extract, the total antioxidant potential of beer sample was significantly (P < 0.05) enhanced.The total antioxidant potential increased by 30% with addition of 400 ppm of the extract and increasing the concentration of extract by 200 ppm raised the beer antioxidant potential by 5%.The increment was linear for all extract treated beer samples.Accordingly, the total antioxidant potential of beer treated with 400, 600 and 800 ppm were 190.16, 213.00 and 216.67 mg BHTE/L, respectively.The beer enriched with potassium metabisulfite also showed similar trained with beer treated with 400 ppm extract.The total antioxidant potential of untreated beer showed linear reduction with increasing storage time by 27% (139.12-101.72 mg BHTE/L) within three months of storage time.The reduction was drastic when compared with extract treated samples.Similar study conducted by Piazzon et al. (2010) that compared the antioxidant capacity of several beer kinds showed that light beer stored without antioxidants would lose 20-30% of its antioxidants within three months of storage.Storage time had slight reduction of total antioxidant potential for treated beer when compared with untreated beer.The slight reduction of total antioxidant for potassium metabisulphite and extract treated beer showed that these additives had strong antiradical potential when compared with untreated beer.The total antioxidant potential of propolis-treated beer samples increased linearly with increasing propolis concentration (Ulloa et al., 2017).Similar study conducted by Florence et al. (2016) proved that 600 ppm M. oleifera treated pito improved the total antioxidant potential of beer by 32% and showed slight reduction with storage time, which are consistent with the present findings.
Beer treated with different concentrations of M. stenopetala leaf extracts, S01: untreated, S02: treated with 12 ppm potassium metabisulphite, S03: treated with 400 ppm extract, S04: treated with 600 ppm extract and S05: treated with 800 ppm extract.The values are the average of duplicated experiment (mean± SD).At p < 0.05, values of the same concentration in the histogram bar with different letters are significantly different.
Conclusion
The possibility for developing and using M. stenopetala leaf extract as an additional source of bioactive phenolic compounds and increasing antioxidant activity in beer was highlighted in this work.The use of a moderate amount of the leaf extract in beer prevents oxidation and increased phenolic content, decreased slightly during the storage for three months.These findings support that the addition of M. stenopetala leaf extract increased the total phenolic and flavonoid contents and antioxidant activity of the beer samples.However, the effect of other phytochemicals like carotenoids, vitamins and minerals on the quality of Lager beer should be studied.Therefore, without the addition of synthetic preservatives (example, potassium metabisulfite), commercial beers may be able to maintain their flavor stability or reinforce the nutritional property of beer and extend shelf life by incorporating phenolic rich leaf extract of M. stenopetala.
Figure 1.Effect of M. stenopetala leaf extract on DPPH scavenging potential of beer.
Figure 2. Effect of M. stenopetala leaf extract on total antioxidant potential of beer. | 5,065.2 | 2023-08-22T00:00:00.000 | [
"Chemistry",
"Environmental Science"
] |
Emergence of Polycystic Neotropical Echinococcosis
The discovery of an unusual parasitic disease and its causative agents is recounted.
Echinococcosis is a parasitic zoonosis of increasing concern. In 1903, the fi rst cases of human polycystic echinococcosis, a disease resembling alveolar echinococcosis, emerged in Argentina. One of the parasites responsible, Echinococcus oligarthrus, had been discovered in its adult strobilar stage before 1850. However, >100 years passed from the fi rst description of the adult parasite to the recognition that this species is responsible for some cases of human neotropical polycystic echinococcosis and the elucidation of the parasite's life cycle. A second South American species, E. vogeli, was described in 1972. Obtaining recognition of the 2 species and establishing their connection to human disease were complicated because the life cycle of tapeworms is complex and comprises different developmental stages in diverse host species. To date, at least 106 human cases have been reported from 12 South and Central American countries. E chinococcosis is a parasitic zoonosis characterized by the development of a larval tapeworm stage (metacestode) in herbivorous intermediate hosts, such as rodents and ungulates, and accidentally in humans. The adult tapeworm is minute and inhabits the small intestine of canids or felids, the defi nitive hosts. Infections occur in intermediate hosts when they ingest eggs that have been passed in the feces of defi nitive hosts. In the past, many Echinococcus species have been described, but most have been abandoned or reclassifi ed. Molecular phylogeny reconstructions are complex, and the process of taxonomic revision has not yet been completed (1). The causative agent of cystic echinococcosis (hydatidosis), the dog tapeworm E. granulosus sensu lato, is cosmopolitan. The species responsible for alveolar echinococcosis (AE), the fox tapeworm E. multilocularis, is endemic to Holoarctic regions. Recently, E. shiquicus n. sp. was discovered in Tibet (2). The "neotropical" echinococcal species E. oligarthrus and E. vogeli are confi ned to the New World. Either species is capable of causing polycystic echinococcosis (PE) in its natural intermediate host and accidentally in humans. Disease due to E. vogeli is similar to AE and is characterized by aggressive infi ltrative growth and external budding, whereas infection with E. oligarthrus has a more benign course. PE thus comprises 2 disease entities. Each is characterized by distinctive epidemiology, clinical manifestations, and morphologic features of the adult and larval parasite (3). Today, PE is no longer a medical rarity as more and more cases are being discovered. The prevalence of the disease, however, is unknown.
First Description of Human Neotropical Echinococcosis
In 1903 and in the years following, Marcelo Viñas in the Buenos Aires province of Argentina described a few cases of what he thought was AE on the American continent. The patients in whom he diagnosed the disease had multilocular cysts with an alveolar aspect, resembling European AE. Notably, the patients came from rural areas and claimed that they had never been out of the country (4-6). At that time, only E. granulosus (described by Batsch in 1786) and E. multilocularis were known members of the genus Echinococcus. AE had never been detected in South America before and was thought to be restricted to temperate, Holoarctic regions. AE lesions had been recognized as echinococcal 48 years before, in 1855, by Rudolf Virchow (7); the causative agent, E. multilocularis, had been described by German parasitologist Rudolf Leuckart in 1863 (8). The life cycle of the parasite, which involves foxes and rodents, was not elucidated until the 1950s by Robert L. Rausch and Everett L. Schiller (9) and Hans Vogel (10). Since the patients described by Viñas had never left their home country, he concluded that they must have acquired the disease in Argentina. Would this be the fi rst description of AE in the New World?
Discovery of Adult Echinococcus oligarthrus
Many years earlier, on April 9, 1817, the Austrian emperor, Franz I, had sent a group of natural scientists to Brazil to explore the country. On board one of the ships was 36-year-old Johann Natterer (1781-1843), a passionate ornithologist (11). In his past search for parasitic worms in birds, Natterer had studied helminthology at the Naturalien-Cabinete of Vienna's Hofmuseum under the supervision of Johann Gottfried Bremser (1767-1827), a physician and helminthologist. Natterer was fascinated by Brazil and stayed abroad for 18 years. He explored the area from Rio de Janeiro to Mato Grosso and British Guyana. Natterer returned to Vienna in 1836 with a Brazilian wife, 3 children, and 37 boxes of collected material (11). Among the many specimens he brought home was a helminth he had found in the upper part of the small intestine of a puma, Felis (Puma) concolor.
Karl Moritz Diesing (1800-1867), a zoologist and successor to Bremser in Vienna, listed the helminth collected by Natterer in his famous Systema Helminthum of 1850 initially under the juvenile form of Taenia crassicollis ("Taeniolae in fele concolore lectae probabiliter pullae") found in F. concolor (12). Rudolf Leuckart (1822-1898) stated in a monograph (13) that these helminths may not be seen as juveniles of T. crassicollis because they share some characteristics with T. echinococcus. Diesing later reclassifi ed Natterer's specimen as Taenia oligarthra in his Revision der Cephalocotyleen, which was presented to the scientifi c academy in Vienna on November 5, 1863 (14). In his Latin description, Diesing noted the presence of only 3-4 proglottids (articuli), hence the name "oligarthrus" (Figure 1). Diesing stated that the low number of proglottids is similar to the number of proglottids in T. echinococcus. The organism was still not recognized as an echinococcus, however. The presence of hooks typical for echinococci was not mentioned, and the parasite was placed in a subgroup with hookless tapeworms. All of these scientifi c descriptions of the South American tapeworm were forgotten by 1903, when Viñas described the cases of possible AE in Argentina.
In 1910, Max Lühe (1870-1916), a German physician and zoologist from Königsberg, requested the cestode material from Vienna and extensively characterized the small helminth. Lühe noted that most of the specimens had lost their rostellar hooks but that they were still present in some organisms ( Figure 2). He believed that Diesing must have overlooked the few specimens with hooks. Besides the remarkable difference in body length, no discrepancy with T. echinococcus was found. Lühe therefore concluded that T. oligarthra and T. echinococcus were closely related (15). Sixteen years later, Thomas Wright Moir Cameron (1894Cameron ( -1980, from the London School of Hygiene and Tropical Medicine, rediscovered the adult tapeworm in a different South American felid, a jaguarundi (Felis yaguarondi), which had died at the London Zoo. Cameron proposed placing T. oligarthra in the genus Echinococcus (16), which had been established by Karl Asmund Rudolphi in 1801. At that time, a cystic larval stage of the parasite had not been found or assigned to a strobilar stage. Whether this parasite could cause human disease was still unknown because no connection to the early Argentinian cases had been established.
Description of the Larval Stage of E. oligarthrus
On May 22, 1914, Emile Brumpt (1877-1951 and Charles Joyeux (1881-1966) from the Laboratoire de Parasitologie in Paris autopsied 4 agoutis (Dasyprocta agouti, today: D. leporina, Figure 3) in the state of São Paulo, Brazil (17). In the spleen and liver of one of these South American rodents they found multiple cysts. The liquid of the cysts resembled hydatid sand. The authors stated that the cuticle of the larva was very thin and that this "reminded us that in Echinococcus granulosus this cuticle may reach several millimeters." The inner surface of the cysts contained a proliferative membrane with many vesicles and protoscolices, the larval stage of tapeworms. The authors extensively described the protoscolices and the amount and shape of the rostellar hooklets they found. They concluded that the cysts in the agouti resembled the general structure of E. granulosus cysts. After comparing the hooks with those from E. granulosus and E. multilocularis, Brumpt and Joyeux concluded that the larva found in the agouti must have originated from a very small tapeworm. They stated that it was "unfortunately impossible to assign our hydatid to a known adult form." The authors continued to speculate that "due to the origin of the material, it seems (14, p. 370). In addition to the morphologic characterization of the helminth, the 2 prior references from Diesing's Systema Helminthum (12) and from Leuckart's monography (13) are listed. Natterer, who collected the helminth in Brazil, is also mentioned.
absolutely indicated to think of Taenia oligarthra." However, they concluded that the hooklets previously described by Lühe were different in size and shape and that therefore the cysts in the agouti belonged to a not yet described adult tapeworm, which they tentatively named Echinococcus cruzi. Their observations were published 10 years later, in 1924 (17).
In 1926, Cameron proposed that E. cruzi is the larval stage of E. oligarthrus, on the basis of the similar size and shape of the rostellar hooks and their origin in the same geographic region (16). Cameron had compared the morphologic features of the helminths' rostellar hooks from the larval stage obtained from the agouti and from the strobilar stage he had rediscovered in the jaguarundi.
Parasite's Life Cycle and Human Infection
Around that time, more cases of the emerging South American PE were recorded by Viñas in Argentina (1932, [18]). A single case also occurred in Uruguay and was described by Félix Dévé (1872Dévé ( -1951 and co-workers in 1936 (19); a second one was described by G. Dardel in 1955 (20). Dévé, a French physician, thought that the new South American echinococcosis was a "forme intermédiaire" between AE and cystic echinococcosis. However, Dévé believed in the unicyst theory of echinococcosis: all types of hydatid disease were caused by a single Echinococcus species (21,22).
In 1966, Vernon E. Thatcher and Octavio E. Sousa from the Gorgas Memorial Laboratory in Panama presented a redescription of adult E. oligarthrus on the basis of material from a puma in Panama (23). They also implicated humans as possible intermediate hosts, which they deduced from a case report by Sousa and Lombardo Ayala in 1965 (24). The latter report described the case of a polycystic, multilocular, hepatic cyst in a native Panamanian; the cyst had characteristics distinct from E. granulosus and E. multilocularis cysts and was probably caused by a parasite indigenous to the American tropics. The authors concluded that the human hy-datid possibly represented E. oligarthrus. They further suggested that the polycystic multilocular human hydatidosis of the Panama-Colombia area, studied around that time by Antonio D'Alessandro from the Tulane University International Center for Medical Research in Colombia, might be caused by the same species of parasite.
One year later, adult E. oligarthrus was found again by the same authors in the small intestine of another wild felid, the Panamanian jaguar (Felis [Panthera] onca) (25). After a reexamination of material previously misconstrued by others, Thatcher and Sousa concluded that a metacestode found in a nutria (Myocastor coypus), a South American rodent that had died in a United States zoo, was the larval stage of E. oligarthrus (26). Until then, various South and Central American felids had been considered to be defi nitive hosts of E. oligarthrus, and the presumed larval stage of the parasite had been discovered in rodents from the same geographic area. Experimental work was needed at that time to elucidate the biologic defi nition and the life cycle of the parasite. Proof had to be found that the formerly described E. cruzi was indeed the presumed metacestode stage of E. oligarthrus.
Sousa and Thatcher achieved this aim in 1969 by experimentally inducing hydatidosis in different rodent species. Among others, climbing rats, spiny rats, and agoutis were fed gravid proglottids of E. oligarthrus obtained from a naturally infected puma (27). In these successfully infected intermediate hosts, mature metacestodes showing similar morphologic features to E. cruzi developed in the muscles and inner organs. In a second experiment, the experimentally induced hydatids of the agoutis transformed into adult and mature E. oligarthrus in the feline intestine when fed to domestic cats. In return, parasite material obtained from the infected cats produced hydatid cysts in agoutis. In contrast, dogs could not be infected. The house cat was therefore implicated as playing an important role as defi nitive host and as a potential risk to humans. The life cycle of the parasite, however, was considered to be mainly sylvatic (27). After nearly 120 years, the mystery of human PE seemed fi nally solved. In 1972, however, a second South American species, E. vogeli, was discovered.
Discovery of a Second South American Species, E. vogeli
In late 1969 or early 1970, Martin Stummer, an animal dealer at Amazon Ltd, a company supplying animals for zoos, captured a bush dog (Speothos venaticus) in the province of Esmeraldas in Ecuador. The animal was sent to the Los Angeles Zoo and routinely examined. After a deworming treatment had resulted in the expulsion of numerous cestodes of the genus Echinococcus, Calvin Schwabe from the School of Veterinary Medicine in Davis, California, examined the helminths and found unusual morphologic 294 Emerging Infectious Diseases • www.cdc.gov/eid • Vol. 14, No. 2, February 2008 (30). None of the researchers could know at that time that E. vogeli would soon be the most frequently encountered species of the 2 indigenous South American echinococcal tapeworms. The synonymy of E. cruzi with E. oligarthrus was then questioned. A reexamination in 1984 of material obtained from Brumpt's and Joyeux' initial case of the agouti demonstrated that the larval stage of E. oligarthrus was indeed the causative organism (31). In contrast, the metacestode found in the nutria and in the Panamanian patient described in 1965 was shown to be E. vogeli (30,32). The 11 cases described by Viñas in Buenos Aires and those noted by Dévé and Dardel from Uruguay could not be defi nitively assigned to either E. oligarthrus or E. vogeli. The presence of protoscolex hooklets, which are used for discrimination, was not described in detail in these reports (33). However, the cases are most likely caused by E. oligarthrus because the fi nal host of E. vogeli is not found in those areas (33). By the end of 2007, 3 cases of proven E. oligarthrus infection in humans have been reported: 1 cardiac case from Brazil (34) and 1 orbital case each from Suriname (35) and Venezuela (36).
Rausch and Bernstein predicted, on the basis of the known predator-prey relationship of the bush dog, that the larval stage of E. vogeli would also occur in rodents, including pacas (28). Indeed, parasitic cysts were found in a Colombian paca (Cuniculus paca, Figure 4) in 1975. The material was experimentally fed to a dog; in addition, larvae obtained from a Colombian human patient with PE (37) were given to a second canid. From both dogs, the strobilar stage of E. vogeli was later recovered (30). As suffi cient material was collected from the fi eld in Colombia and obtained from experimentally infected animals, R.L. Rausch, V.R. Rausch, and A. D'Alessandro were able to morphologically distinguish E. vogeli from E. oligarthrus. The rostellar hooks of each of the 2 South American species were found to consistently differ in length and form, which permitted discrimination of the tapeworms' larval stages. As a consequence, known human and animal cases of PE were reexamined, and some cases thought to have been caused by E. oligarthrus were shown to have been caused by E. vogeli instead (32). E. vogeli typically has a thick laminated outer layer and a thin inner germinal layer, whereas E. oligarthrus has the reverse. Calcareous corpuscles are abundant in the germinal layer and in the protoscolices of E. oligarthrus but are almost absent in E. vogeli (33).
In just a few years, a second indigenous South American echinococcal species had been discovered, and the life cycle of the parasite, involving the bush dog and the paca, had been described. In a survey of Colombian mammals, 73 (22.5%) of 325 pacas harbored metacestodes of E. vogeli, but only 3 (0.9%) of pacas harbored E. oligarthrus. Twenty (6.2%) more pacas were shown to be infected with polycystic larvae, but the species involved could not be determined. In addition to the bush dog, a domestic dog belonging to a hunter was found to be naturally infected with adult E. vogeli (38). Researchers then assumed that domestic dogs might play a role in the transmission of parasite eggs to humans.
Current Situation
As of 2007, at least 106 human cases of PE from 12 countries have been documented. The disease occurs exclusively in rural areas of the American tropics and often in regions where E. granulosus is not present (33). Most cases are reported from Brazil and Colombia (33,39), but PE is endemic from Nicaragua to Chile (35). Its rising frequency (12 cases from 4 countries in 1979, 72 cases by 1997, and 86 cases from 11 countries as of 1998) shows that human PE is an emerging disease and no longer a medical curiosity (33). Most cases are caused by E. vogeli, but many cases could not be assigned specifi cally to any of the 2 South American echinococcal species because the presence of hooks was not reported (33,39). In an advanced laboratory setting, Echinococcus species can be distinguished by PCR followed by sequencing or restriction fragment length polymorphism analysis (40). Parasite material obtained from those infected, for whom a diagnosis cannot be made by means of classic parasitology, can now be subjected to methods of molecular biology. Why most PE is caused by E. vogeli is unclear. Some have speculated that because felids cover their feces, contact with infectious ova of E. oligarthrus is less likely than contact with eggs of canid-borne E. vogeli (33). Accordingly, similar proportions in infection rates of the respective natural intermediate hosts have been found (38). Seven species of wild felids that were naturally infected with E. oligarthrus have been found. The geographic distribution of wild cats extends from northern North America to southern Argentina. In contrast, the bush dog, the only natural defi nitive host for E. vogeli, is found from Panama to south Brazil. The published number of human cases is probably just the tip of the iceberg (33); the true prevalence of human PE is far from being known.
Dr Tappe is a medical microbiologist at the Institute of Hygiene and Microbiology, University of Würzburg, and a fellow in clinical tropical medicine, Medical Mission Hospital, Würzburg, Germany. His research interests focus on tissue-dwelling parasites. | 4,310.6 | 2008-02-01T00:00:00.000 | [
"Biology",
"Environmental Science",
"Medicine"
] |
Generalized Second Law of Thermodynamics in Quintom Dominated Universe
In this paper we will investigate the validity of the Generalized Second Law of thermodynamics for the Quintom model of dark energy. Reviewing briefly the quintom scenario of dark energy, we will study the conditions of validity of the generalized second law of thermodynamics in three cases: quintessence dominated, phantom dominated and transition from quintessence to phantom will be discussed.
Introduction
law of thermodynamics was modified in the way that in generalized form, the sum of all time derivative of entropies related to horizons plus time derivative of normal entropy must be positive i.e. the sum of entropies must be increasing function of time. In [23], the validity of Generalized Second Law (GSL) for the cosmological models which departs slightly from de Sitter space is investigated. However, it is only natural to associate an entropy to the horizon area as it measures our lack of knowledge about what is going on beyond it. In this paper we show that the sum of normal entropy and the horizon entropy in phantom dominated universe is non-decreasing function of time. Also, the transition from quintessence to Phantom dominated universe is considered and the conditions of the validity of GSL in transition is studied. Also for quintom model of dark energy [16], we study the GSL in quintom dominated universe and conclude the same results when we consider two scalar fields with no coupling potential term. In our calculations we use c = 8πG N = 1.
The quintom model of dark energy
The quintom model of dark energy [16] is of new models proposed to explain the new astrophysical data, due to transition from w > −1 to w < −1, i.e. transition from quintessence dominated universe to phantom dominated universe. Here we consider the spatially flat Friedman-Robertson-Walker universe, where has following space-time metric Containing the normal scalar field σ and negative kinetic scalar field φ, the action which describes the quintom model is expressed as the following form where we have not considered the lagrangian density of matter field. In the spatially flat Friedman-Robertson-Walker (FRW) universe, the effective energy density, ρ, and the effective pressure, P, of the scalar fields can be described by So, the equation of state can be written as From the equation of state , it is seen that forσ >φ, w ≥ −1 and forσ <φ, we will have, w < −1. Alike [17], we consider a potential with no direct coupling between two scalar fields Where the λ φ and λ σ , are two dimensionless positive numbers characterizing the slope of the potential for φ and σ respectively. So, the evolution equation for two scalar fields in FRW model will have the following form where, H is the Hubble parameter.
Generalized second law and quintom model of dark energy
To study the GSL through the universe which is dominated by quintom scenario, we deduce the expression for normal entropy using the first law of thermodynamics.
From the equations (3), (4) we have and the Friedman constraint equation will be So, using relations (7) and (8), it is seen thaṫ Thus, ifφ 2 <σ 2 thenḢ < 0 , i.e. for the quintessence dominated universe and iḟ φ 2 >σ 2 thenḢ > 0, for the phantom dominated universe. Rewriting the first law of thermodynamics with respect to relations above and using V = 4 3 πR h 3 , in which the R h is the event of horizon, one can obtain where T is the temperature of the quintom fluid. Therefore, the time derivative of normal entropy will have the following forṁ As we know, the quintom is the combination of normal scalar filed, i.e, quintessence and phantom scalar field. From the definition of event of horizon where for different space times t s has different values, e.g for de Sitter space time t s = ∞, R h satisfies the following equation which is true for both scalar fields individuallẏ whereṘ h ≤ 0 for phantom dominated universe [22] andṘ h ≥ 0 for quintessence dominated universe [23]. As the final form, we write the time derivative of normal entropy of the quintom fluid using relation (13) As it is seen from relation (14), it is shown that, the sign ofṠ depends on the sign ofḢ, hence for quintessence dominated universeṠ < 0 and for phantom dominated universė S > 0. The entropy of a black hole is proportional to the area of its event horizon is well understood, it has deep physical meaning. The status of an entropy associated to a cosmological event horizon is not well established. In some cases like the case a de Sitter horizon this seems plausible, with some caveats, but in general this is a topic of current research; see [24]. If, the horizon entropy is taken to be S h = πR 2 h , the generalized second law stated thatṠ +Ṡ h ≥ 0 Thus, we will haveṠ To investigate the validity of equation (19), we will consider three different cases, the first case we dominate the phantom fluid, the second, the quintessence will be dominated, and the third, the transition from quintessence to phantom. a) Phantom dominated: In this caseṘ h ≤ 0 andḢ > 0, thenṠ h < 0. If the phantom fluid temperature T > 0 the condition for validity of GSL is asḢ If the temperature is assumed to be proportional to the de Sitter temperature [23] T = bH 2π (21) where b is a parameter, the GSL hold when: 22) in de Sitter spacetime case R h = 1 H , then b ≤ 1. In phantom model case which is small perturbed around de Sitter space, one can expect T ≤ H 2π , which is the condition that the phantom fluid be cooler than the horizon temperature.
b) Quintessence dominated:
In this caseṘ h ≥ 0, andḢ < 0 so the sum of normal entropy and horizon entropy could be positive, if T > 0 then the condition for validity of GSL is using eq.(21)this condition has following form c) Phase transition from quintessence to phantom: AsṘ h ≥ 0 in quintessence model andṘ h ≤ 0 in phantom model, and assuming that R h variates continually one can expect that in transition from quintessence to phantoṁ R h = 0. So the horizon entropy in transition time will be zero, also in transition timė H = 0, using eq. (17), we obtainṠ = 0. Therefore in the transition time the total entropy is differentiable and continuous.
Conclusion
In order to solve cosmological problems and because the lack of our knowledge, for instance to determine what could be the best candidate for DE to explain the accelerated expansion of universe, the cosmologists try to approach to best results as precise as they can by considering all the possibilities they have. Investigating the principles of thermodynamics and specially the second law-as global accepted principle in the universe -in different models of DE, as one of these possibilities, has been widely studied in the literature, since this investigation can constrain some of parameters in studied models, say, P. C. Davies [23] studied the change in event horizon area in cosmological models that depart slightly from de Sitter space and showed that for this models the GSL is respected for the normal scalar field, provided the fluid to be viscous.
In the present paper we have considered total entropy as the entropy of a cosmological event horizon plus the entropy of a normal scalar field σ and ghost scalar field φ. In the quintom model of dark energyḢ is given by eq. (12), for the phantom dominated caseḢ > 0, in this caseṘ h ≤ 0, then the horizon entropy is constant or decreasses with time, i.eṠ h ≤ 0, therefore the phantom entropy must increases with expansion so long as T > 0. In fact the phantom fluids possess negative entropy and equals to minus the entropy of black hole of radius R h . In contrast with the previous case in the quintessence dominated case,Ḣ < 0 andṘ h ≥ 0, thenṠ h ≥ 0. By considering the influence of the transition from the quintessence to phantom dominated universe on the GSL , one can obtain that the time derivative of the future event horizon and the entropy must be zero at the transition time. In the summary, we have examined the quintessence and phantom dominated universe, and we have shown that by satisfying the conditions (20), (23) the total entropy is non-decreasing function of time. Otherwise the second law of thermodynamics break down. Note that in [25] these calculations have been done for the case of interacting holographic dark energy with dark matter, the authors have shown, in contrast to the case of the apparent horizon, both the first and second law of thermodynamics break down if one consider the universe to be enveloped by the event horizon with the usual definitions of entropy and temperature.
Acknowledgment
The author would like to thank the referee because of his/her useful comments, which assisted to prepare better frame for this study. | 2,143.4 | 2006-10-05T00:00:00.000 | [
"Physics"
] |
Persistent susceptibility of Aedes aegypti to eugenol
Botanical insecticides are preferred for their environment and user-friendly nature. Eugenol is a plant-based monoterpene having multifarious biocidal activities. To understand whether eugenol would persistently work against Aedes aegypti, we performed larvicidal bioassays on thirty successive generations and determined median lethal concentration (LC50) on each generation. Results showed no apparent differences between LC50 at F0 (63.48 ppm) and F30 (64.50 ppm) indicating no alteration of susceptibility toward eugenol. To analyze, if eugenol has any effect on metabolic detoxification-associated enzymes, we measured esterases (alpha and beta), cytochrome P450, and GST activities from the survived larvae exposed to LC50 concentration from F0–F30. Results revealed a decrease of esterases, GST, and cytochrome P450 activities at the initial 4–8 generations and then a gradual increase as the generations progressed. GST activity remained significantly below the control groups. Synergists (TPP, DEM, and PBO) were applied along with eugenol at F30 and LC50 concentration, and the said enzyme activities were recorded. Results showed a noticeable decrease in LC50 and enzyme activities indicating effective inhibitions of the respective enzymes. Overall, present results inferred that eugenol would effectively work as a larvicide for a longer period in successive generations without initiating rapid resistance and therefore could be advocated for controlling A. aegypti.
www.nature.com/scientificreports/ and reduced cuticular penetration 22 . Among these mechanisms, metabolic resistance mediated by the complex multigene enzyme family of esterase, glutathione-s-transferase, and cytochrome P450 is prominent and well established [23][24][25] . Therefore, to evaluate the persistent toxicity of eugenol, the study was performed to determine LC50 concentration in each generation, from F0 to F30, and to spectrophotometrically estimate the said enzymes, using suitable substrates, to examine the effect of eugenol on these detoxifying enzymes.
Synergists like triphenylphosphate (TPP), diethyl maleate (DEM), and piperonyl butoxide (PBO) could inhibit esterases, glutathione-s-transferase, and cytochrome P450s, respectively. This inhibition enhances the toxicity of insecticides [26][27][28] . Such studies establish the importance of metabolic detoxification-associated enzymes in resistance. Therefore, an attempt was made to assess the effect of the combination of synergists with eugenol on (1) the toxicity in terms of LC50 and (2) the detoxification-associated enzyme activities on F30 larvae of A. aegypti.
Materials and methods
Establishment and maintenance of A. aegypti colony. The rearing of mosquitoes was done following the method described by earlier authors [29][30][31] . Egg strips were kept submerged in a shallow rearing tray containing almost 2 L of water. After emergence, larvae were fed with a diet of dog biscuit (Pedigree) and yeast powder, in the ratio of 3:1. After 5/6 days of emergence the larvae metamorphosed into pupae. The pupae were collected in plastic cups holding almost 150 ml water and put in an adult rearing cage. Adults emerged from the pupae after 1-2 days of pupal period. Adults were fed 10% glucose solution soaked in cotton. After 5 days of adult emergence, along with sugar solution, the adult females were offered blood meals from albino rats to obtain progeny. Each rat was used only once a week. For egg-laying, filter paper submerged in water in a glass beaker was provided. The culture was maintained at 28 ± 2˚C and 75 ± 5% RH, at 12:12 light: dark cycle. Ethics approval: the Institutional Animal Ethics Committee (IAEC), Gauhati University, had approved the protocol for the study (Permit No. IAEC/Per/2020-21/08). The guidelines for laboratory animals prescribed by the Committee for the Purpose of Control and Supervision of Experiments on Animals (CPCSEA), a statutory committee under the Ministry of Environment and Forests (Animal Welfare Division), Government of India, were strictly followed while using rats for the experiment. For the blood-feeding purpose, we followed the method as described by Morlan et al 31 for A. aegypti and the guidelines formulated by the Ethiopian Public Health Institute (EPHI) for mosquito rearing (Anopheles) and insectary handling 32 with few modifications.
Initially, the eggs were obtained from the Regional Medical Research Center (RMRC), Dibrugarh, Assam, India. In RCMR, the mosquito colony was established from field-collected A. aegypti, and reared for almost 14 years without exposure to any form of insecticides. In our laboratory (Laboratory of Entomology, Gauhati University), the mosquito colony was reared for about six years with no exposure to insecticides. So, by the time the bioassays were carried out, the mosquitoes were 20 years of culture in the laboratory.
Larvicidal activities. The larvicidal bioassay was carried out following the method of the world health organization (WHO) 33 . Graded concentrations of eugenol-1, 5, 10, 25, 50, 100, 250, and 500 ppm were prepared using DMSO as an emulsifier. Four replicates were prepared for each concentration of eugenol in 100 ml water. In each replica 25 number of 4th instar larvae were introduced. An equal number of negative and no-treatment controls were also set using DMSO-water and water only, respectively. The mortality of the larvae was constantly monitored, and data was recorded at 5 min, 10 min, 15 min, 30 min, 1 h, 2 h, 3 h, 4 h, 5 h, 6 h, and 24 h, respectively. Experiments were set in a separate laboratory away from the mosquito culture room. If any of the 4th instar larvae pupated during the exposure period, it was negated from the test, again, if more than 10% larvae from the control group died, the whole experiment was repeated. Larvae that did not show movement even after touching with a fine brush were considered dead. If mortality of control larvae occurred below 10%, the mortality of the treated groups was calculated using Abbott's formula 34 .
where C = percentage of larvae survived in the control group, T = percentage of larvae survived in the treated group.
Mortality percentage was calculated after 24 h exposure period. From the same larvicidal experiment, median lethal concentration (LC50) was calculated. The mortality values were analyzed through SPSS software using Probit analysis 35 . Log concentrations and probit obtained from the SPSS software (version 20) were then analyzed using MINITAB software. This LC50 was considered the LC50 for F0 generation.
Selection of larvae from F0 to F30 generation. From the original susceptible colony of mosquitoes, approximately 3000 fourth instar larvae were separated and introduced to a median lethal concentration (LC50) of eugenol determined as stated above for 24 h. Survived larvae were separated and transferred to clean water, and food was provided. When these larvae pupated, they were allowed to metamorphose into adults. The progeny obtained from those adults was the F1 generation. When the F1 larvae grew to the fourth instar, they were again used for the determination of LC50 using the same series of concentrations as described for F0. The newly calculated LC50 was the LC50 for F1. This LC50 was applied to approximately 3000 F1 larvae for 24 h. The survived larvae were separated, food was provided, and allowed to continue the next generation. The same procedure was followed for F2-F30 generations.
Simultaneously, metabolic detoxification-associated enzyme activity was determined using some of the larvae that survived the F0 selection. Likewise, the enzyme activity was determined using some of the F1 larvae that survived the F1 selection. The same procedure was followed for determining the enzyme activity of the remaining F2-F30 generations. www.nature.com/scientificreports/ Estimation of the detoxification enzyme activity. Three principal detoxification enzymes involved in metabolic resistance, viz. esterases, GST, and cytochrome P450 monooxygenase were quantified following the method as described by Safi et al 36 . Sample. Homogenate was prepared from larvae that survived the selection pressure. In each replica, three larvae were taken and homogenized in 900 µl (300 µl for each larva) 0.0625 M potassium phosphate buffer (pH 7.2) in a 1.5 ml microtube. Four replicates for each treatment and an equal number of controls were set. The larvae exposed to eugenol were used as treatment group, exposed to emulsifier DMSO were used as a negative control group whereas the larvae in tap water was used as a no-treatment control group. The samples were cold centrifuged at 10,000 rpm for 10 min at a controlled temperature of 4ºC, and the supernatant obtained was used as a crude enzyme extract for spectrophotometric determination of enzyme activities.
Esterase. Alpha naphthyl/beta naphthyl acetate was used as a substrate for the quantification of alpha and beta esterase, respectively. 200 µl of 30 mM of (alpha/beta) naphthyl acetate was mixed with 100 µl insect homogenate. Allowing to stand for 30 min, 10 µl 0.3% fast blue stain was added. Blank was also prepared in a similar manner in distilled water instead of insect homogenate. After 5 min, OD was measured at 570 nm. The mean of ODs was converted to product concentrations using the standard curve of alpha/beta naphthol. Enzyme activities are presented as μM of product formed/min/mg protein.
Glutathione-s-transferase. GST activity was quantified using a mixture of 200 μl of 10 mM GSH and 3 mM 1-chloro-2,4-dinitrobenzene (CDNB) (the mixture of these two chemicals is referred to as cocktail). 10 µl of insect homogenate was taken to which 100 µl of cocktail buffer was added. After 5 min, OD was taken kinetically for 5 min at an interval of 1 min. GST activity was quantified as mM of conjugate produced/min/mg protein taking the extinction co-efficient of CDNB multiplied by the path length.
Protein assay. Protein estimation was done following the method of Lowry et al 37 , taking BSA as standard.
Samples were prepared in quadruplet and OD was measured at 660 nm.
Efficacy of synergist. The best-known synergists-PBO, TPP, and DEM, known to inhibit cytochrome P450, esterase, and GST, respectively, were bought from MERCK and used with eugenol. For the synergistic experiment, application method as described by Xu et al 4 was followed, with some modifications. While for the larvicidal bioassay, the method of WHO 33 was followed as described above. For each liter of water, 5 mg of synergists (PBO, TPP, and DEM) was used. The synergized water was used to prepare a graded concentration of eugenol (1, 5, 10, 25, 50, 100, 250, and 500 ppm), and 25 numbers of F30 larvae were introduced into each concentration. Larval mortality was monitored at different time intervals, and median lethal concentration was calculated at 24 h. The LC50 thus calculated was applied to a large mass of larvae in one liter of synergized water for 24 h. From the survived larvae, enzyme activity was determined after 24 h.
Approval for animal experiments. The protocol for the study was approved by the Institutional Animal
Ethics Committee (IAEC), Gauhati University (Permit No. IAEC/Per/2017/RF/2018-05). Figure 1 represents the percent mortality of larvae in response to increasing concentrations of eugenol. As evident, eugenol exhibited dose-response mortality in larvae and did not show a noticeable decreasing or increasing effect on mortality over 30 generations. When plotted for every 5th generation, it appeared nearly overlapping graphs. Three of the tested concentrations-25 ppm, 50 ppm, and 100 ppm showed a modest difference in mortality percent, whereas the rest of the concentrations 1 ppm, 5 ppm, 10 ppm, 250 ppm, and 500 ppm exhibited almost similar larval mortality over a generational time ( Fig. 1 and Supplementary Table 1).
Results
LC50 values of the larvae from F0 to F30 generations. The bar diagram (Fig. 2) of the median lethal concentration (LC50) shows the initial decrease in the LC50 value, which was significant between F0 and F10, and then a gradual increase, with the increasing trend which continued up to the last generation studied. No significant difference in LC50 was found between F0 and F30 but was found to be significant between F0 and F10. The LC50 value dropped from 63.48 ppm in the F0 to 43.79 ppm in the F10 before progressively rising following the F10 generation (Supplementary Table 2). It remained fairly constant from F24 to F30 generation (63-65 ppm), which was almost similar to F0. Although there were fluctuations between generations, over 30 generations of exposure, the mosquitos showed no obvious changes in susceptibility to eugenol in terms of median lethal concentration. Rather, susceptibility increased up to the first ten generations as indicated by the low LC50 value. However, the mosquito began to adapt in the later generations, followed by a rise in LC50, and Table 2).
Quantification of enzyme activity. Esterases. Alpha esterase activity was found almost constant at F0, F1, and F2 generation of exposure. The enzyme activity of treated larvae was found significantly different compared to that of the control group from the F3 till F30 generation. It rose above the control groups in the 5th generation, and the titer remained elevated in the subsequent generations. However, from F21 onwards, the enzyme activity became quite steady (Fig. 3A,B and Supplementary Table 3).Throughout the generations, the negative and no treatment control groups remained almost constant with no significant difference. The Beta esterase enzyme activity of the treated group was inconsistent up to the F5 generation. In the F0, F1, F5 and F6 generations, it was found constant with the control groups. In the rest generations, it differed significantly. In the F7 generation, beta esterase activity rose significantly above the control group, and from the F7 to F30 generation, a gradual and significant rise in the treated group enzyme activity was observed ( Fig. 3B and Supplementary Table 3). However, from the F21 generation onwards, the enzyme activity became almost steady. No significant difference between the negative and no-treatment control groups was found in any generations.
Glutathione-s-transferase. In all generations studied, the GST enzyme titer remained below the level of control groups. It began to decrease from the F0 generation itself. F5 generation experienced the lowest GST activity. With the passage of generations, though it began to rise gradually, it still remained significantly below the control groups ( Fig. 4 and Supplementary Table 3). However, from the F27 generation, the enzyme activity became quite steady. No significant difference between the negative and no-treatment control groups was found.
Cytochrome P450. Initially, the p450 enzyme activity decreased from the level of control groups and, after certain generations, it began to rise. During the F0 to F5 and again in the F13 and F14 generations, the www.nature.com/scientificreports/ treated and control enzyme activity were almost at par. During the F6-F12 generation, the treated enzyme titer decreased significantly compared to control groups, and from the F15 generation onwards, it began to rise significantly. This rise in enzyme titer continued till the last generation studied. The enzyme titer showed a regular trend of increase and decrease ( Fig. 5 and Supplementary Table 3). However, after the passage of the F24 generation, it became almost steady. A significant difference between the negative and no-treatment control groups was not found.
Efficacy of synergists. While using eugenol in combination with PBO, TPP, and DEM specific synergist for P450, esterases, and GST, respectively, toxicity variations reflected in mortality in terms of LC50 value changes were observed (Fig. 6). While using PBO, the significantly higher toxicity of eugenol was observed, which was reflected by a decrease in LC50 value from 64.50 to 49.98 ppm. While using TPP, we recorded a non-significant higher toxicity of eugenol. Whereas DEM, a specific inhibitor of GST, was found to have vey negligible effect on increasing the toxicity of eugenol (Supplementary Table 4). In the Fig. 7A, A = eugenol alone, B = eugenol + PBO, C = eugenol + TPP, D = eugenol + DEM. As is evident, the PBO + eugenol combination had an effect on cytochrome P450 as well as on esterase enzymes, as both of these enzymes decreased significantly. It had the highest effect on P450 enzymes because this enzyme activity decreased almost threefold. However, the combination had no effect on GST enzymes. The β-esterase enzyme activity (± SE) in continuously exposed (F0 to F30) A. aegypti larvae. The asterisks represents the significant difference between the experimental groups. Tukey's post hoc test was employed to determine the significance in differences. The enzyme activity is expressed at the unit of μM of product formed/min/mg protein. *The mean difference is significant at the 0.05 level. www.nature.com/scientificreports/ combination of TPP and eugenol significantly inhibited esterases but had no effect on GST and P450. Similarly, the combination of DEM and eugenol affected GST activity significantly but had no effect on esterases and cytochrome P450 (Fig. 7 and Supplementary Table 5).
Discussion
Eugenol, a monoterpenoid commonly present in a major portion of clove, basil, nutmeg, and cinnamon essential oils, is a compound of interest in the medicinal, pharmaceutical, cosmetic, insecticidal, and food industries because of its multifarious properties. This phenolic terpene compound has been reported effective against a wide range of human ailments. It has been proved safe in food packaging 38 , antiseptics in the food industry 39 , food preservatives 40 . The environment protection agency (EPA) has approved eugenol use as an active insecticide ingredient because of its low mammalian toxicity 41 . In the research area of mosquito control, eugenol is reported to have mosquitocidal properties 42,43 , but a detailed investigation is wanting. So, the present investigation aimed to investigate the persistent susceptibility of A. aegypti toward eugenol, taking two primary parameters-toxicity in terms of median lethal concentration (LC50) and metabolic detoxification associated enzyme activities up to thirty successive generations. While looking at the values of LC50 obtained from the experiment, we observed an initially gradually decreasing trend up to F9, followed by a gradually increasing trend up to F24 and then a steady level till F30 generation (Supplementary Table 2). Though there was a fluctuation between F0 to F30, the initial and final LC50 values remained almost the same, which might imply that the treated A. aegypti population remained susceptible to eugenol for up to thirty successive generations. Susceptibility assessment across generations was examined in Culex quinquefasciatus, A. aegypti and Aedes albopictus against malathion, permethrin, and temephos by Hamdan et al 3 , where they showed that the median lethal concentration determined in one generation was not median lethal to the subsequent generation. They recorded in A. aegypti after exposure for 32generations, increase of resistance ratio by 4.97, 64.2, and 51.0 folds to malathion, permethrin, and temephos, respectively. Similarly, Hidayati et al. 5 reported lowered susceptibility status of A. aegypti when exposed to malathion for successive generations. After 45 generations of exposure, they recorded 52.7 folds increase in resistance ratio compared to F0. In contrast we have not observed marked changes of susceptibility status in terms of LC50 values up to F30 generations. Rather, at the tenth generation the treated mosquito populations was observed to be more susceptible with reduced LC50 value.
Esterases, glutathione-s-transferase, and cytochrome p450 protect insects against the lethal effect of insecticides by detoxifying them. Esterases cleave the carboxyl ester and phosphodiester bonds of insecticides, inactivating them and rendering the chemicals amenable to excretion 44 . Because these bonds do not exist in eugenol, esterase may not be directly involved in eugenol metabolism, but it may be involved in the metabolism of eugenol conjugates and metabolites. Fischer et al 45 recorded conjugates and nine metabolites of eugenol in human subjects. In the experiment, we noticed a gradual fall in both alpha and beta esterase activities initially up to 4-5 generations and then a gradual rise in the following generations, which was significant than the control group. The initial decrease of these enzymes in post-treatment matches with the findings of Koodalingam et al 46 , where they recorded a decline in esterase activity after treatment of A. aegypti with extract of soapnut Sapindus emarginatus. The initial decrease in the activity of esterases might be due to sudden shock of selection pressure which required some time to adapt, and also may be due to involvement in the detoxification process. So, the freely available enzyme titer decreased. Again, the increase in esterase activity on subsequent treatment matches with the findings of Cao et al 47 , where they recorded 4.54 fold increase in esterase activity in Aphis gossypii. The increased expression of alpha and beta esterases might require for providing life support in a stressful environment. Esterase comprises 0.4% of the total protein in the insect body 48 , which may increase by 50% upon the application of selection pressure to account for about 3% of the total body protein in insects 49 . Cao et al 47 reported a significant rise in esterase activity in Aphis gossypii after continuous exposure for ten generations with omathoate.
GST catalyzes the conjugation reaction, in which it conjugates glutathione to electrophilic substrates. Glutathione may also be added to eugenol itself or the metabolites of eugenol and helps detoxification and subsequent excretion. While studying GST in continuously exposed populations, we observed that GST activity significantly decreased in all the treated generations compared to control. GST activity might be inhibited by eugenol, as Rompelberg et al 50 reported the inhibition of GST upon eugenol exposure in rats, mice, and humans. When these enzymes are blocked, they become unable to metabolize insecticides, thereby toxicity of insecticides persists. Similar decrease in GST activity in Brontispa longissima fed with myristicine treated coconut leaves Cytochrome p450 monooxygenase is involved in organophosphate and pyrethroid resistance. Elevated levels of these enzymes accounts for the increased breakdown of insecticide, which alters the susceptibility status of insects 56 . P450 is involved in the detoxification of exogenous compounds. In the present investigation, P450 enzyme activity decreased till F8 generation, following which it gradually increased. By the F15 generation, the activity of P450 significantly rose above the control groups, and this rising trend continued in the later generations. A similar reduction in P450 following initial treatment was also recorded by Bullangpoti et al 57 on Spodoptera frugiperda when treated with the senescent leaf extract of Jatropha gossypifolia. This enzyme is also involved in a wide range of biological activities such as the metabolism of juvenile hormones, synthesis and degradation www.nature.com/scientificreports/ of ecdysteroids, as well as the metabolism of fatty acids and pheromones 28 . As a result, lower enzyme activity could be linked to greater toxicity, as seen during the initial exposure. For the confirmation of the involvement of the tested enzymes in the detoxification process, three specific synergists (TPP, DEM and PBO) were used individually with eugenol. On using TPP with eugenol, alpha and beta esterase enzyme activity were decreased significantly, but there was no apparent changes in LC50 value. This result matches with the reports of Koou et al 58 , where they recorded no significant increase in mortality after treatment with synergists. While combining DEM with eugenol, we recorded a significant decrease in GST activity but no effect on the toxicity of eugenol. Inhibition of GST activity post-treatment with DEM is well-established 59 . However, in the present study, no apparent alteration in toxicity was observed between DEM + eugenol treated group and eugenol alone group (63 and 64.50 ppm, respectively). But while combining PBO with eugenol, we recorded a significant decrease of P450 and esterase enzyme activities. PBO application imparted the highest toxicity to eugenol, as evident from the LC50 values. The median lethal concentration value when treated with eugenol alone and eugenol + PBO was 64.50 and 49.98 ppm, respectively. Similar to our results, Tak et al 28 reported the inhibition of cytochrome P450 as well as esterases by synergist PBO in Trichplusia ni.
Overall, the results represent that the LC50 value initially decreased till F9 generation before rising to meet the original LC50 value. The activity of metabolic detoxifying enzymes particularly cytochrome p450 and GST followed a similar pattern showing initial decrease and then the GST activity began to rise from the F5 generation, and cytochrome P450 from the F8 generation onwards. Thus, we observed that there exists an association between enzyme activity and susceptibility of the larvae toward eugenol. During the initial treatment, all the tested enzymes produced might have been used in physiological processes and in the detoxification of eugenol into its less harmful products, resulting in a drop in enzyme activity. The enzymes produced might not have been sufficient to detoxify eugenol, which resulted in increased toxicity. Under continuous stressed conditions, detoxification enzymes tend to increase in concentration than their normal counterpart. This increased production might affect the fat bodies, the site for ecdysone synthesis, and also the main source of acetyl groups needed for the synthesis of constitutive amino acids as well as other vital body processes. It is probable that in the later generations with the activation of detoxifying genes, enzymes were produced in larger quantities that led to increased detoxification and a subsequent rise in LC50. Though the exact mechanism of stimulation or decline of enzyme activity is not clear, synthesis of new protein after an exogenous compound binds to a cytosolic receptor and activation of structural gene products may be responsible for such activity. Alternatively, exogenous compounds may interfere with the degradation of existing proteins and favor de novo protein synthesis in insects 60 . TPP, DEM, and PBO work synergistically with eugenol. More specifically, PBO can be used to enhance the toxicity of eugenol towards the larvae of A. aegypti.
Conclusion
Eugenol caused dose-response mortality in larvae, and the effect on mortality did not change for 30 generations. Despite variations in LC50 across generations, the mosquitoes' susceptibility to eugenol did not appear to vary in terms of median lethal concentration. The LC50 value initially decreased till F9 generation and then rise to the original LC50 value after F24 generation and remained almost stable till F30. The activity of metabolic detoxifying enzymes followed a similar pattern showing initial decrease for up to F4-F10 generations and then gradual increase in subsequent generations. Thus, there is a relationship between higher toxicity and lower detoxifying enzyme activity in early generations and a subsequent reduction of toxicity with increasing detoxifying enzyme activity in later generations. In cytochrome p450, the pattern of an initial decrease and subsequent increase in enzyme activity was observed in the studied generation. The pattern matches with the initial increase in toxicity and later decrease in toxicity of eugenol on exposed mosquito populations. Hence, there might be a relationship between cytochrome p450 enzyme activity and the susceptibility of the larvae to eugenol. However, no such prominent pattern was observed for esterases and GST activities with respect to eugenol toxicity. GST remained below the control group throughout the generation and esterase increased significantly than control after the passage of 5-10 generations. When PBO was used as a synergist, the toxicity of eugenol increased significantly. Combining PBO with eugenol affected cytochrome P450 as well as esterases. P450 activity was observed to decline significantly by almost threefold. Overall, the authors suggest that eugenol would effectively work as an A. aegypti larvicide for a longer duration, without impairing its effectiveness. Combining synergists with eugenol would increase the toxicity of eugenol on A. aegypti larvae. | 6,221.6 | 2022-02-10T00:00:00.000 | [
"Biology"
] |
Double parton distributions out of bounds in colour space
We investigate the positivity of double parton distributions with a non-trivial dependence on the parton colour. It turns out that positivity is not preserved by leading-order evolution from lower to higher scales, in contrast to the case in which parton colour is summed over. We also study the positivity properties of the distributions at small distance between the two partons, where they can be computed in terms of perturbative splitting kernels and ordinary parton densities.
Introduction
Parton distribution functions and related quantities are crucial ingredients for computing hadronic cross sections at high energies, and they are the main quantities that describe the structure of hadrons at the level of quarks and gluons. It is hence important to know and understand their general properties. One of these properties is positivity. For ordinary parton distributions (PDFs) this is just the statement f a (x) ≥ 0 if the parton a and the hadron are unpolarised. In the polarised case, this generalises to a set of inequalities, namely the well-known Soffer bounds on polarised distributions [1]. Corresponding bounds have been formulated for transverse-momentum dependent distributions (TMDs) in [2], for impact parameter distributions in [3], and for double parton distributions (DPDs) in [4]. The latter appear in the description of double parton scattering and contain a wealth of information about correlations between two partons in a hadron; for a recent review we refer to the monograph [5]. DPDs have a non-trivial dependence not only on the polarisation of the partons, but also on their colour, and corresponding positivity bounds have been derived in [6]. Positivity bounds can be of considerable practical value. They may be used as constraints in fits of PDFs, and in the context of spin physics, ansätze that saturate certain bounds are often used to estimate the maximal allowed size of spin asymmetries. A corresponding strategy for DPDs in spin or colour space appears all the more attractive because our current knowledge of these distributions is very incomplete. Whether positivity bounds on parton distributions actually hold turns out to be a non-trivial question. At leading order (LO) accuracy, the positivity of PDFs can quite directly be deduced from the positivity of cross sections, whilst the situation is more involved at next-to-leading order (NLO) in the strong coupling [7,8]. When derived along such lines, positivity holds for renormalisation scales µ that are high enough for the approximations of leading-twist dominance and of the perturbative expansion to be valid. To formulate a corresponding approach for DPDs would be complicated due to the large number of involved degrees of freedom (two pairs of partons in each colliding hadron and two hard-scattering subprocesses), and we shall not pursue such an avenue here.
Intuitively, the positivity bounds on parton distributions are a consequence of their partonmodel interpretation as number densities or linear combinations of number densities. At a more formal level, one may use light-cone quantisation and write appropriate linear combinations of distributions as squared operator matrix elements that are summed over unobserved degrees of freedom. Equivalently, one can represent the distributions in terms of light-cone wave functions. This has for instance been done for ordinary PDFs in [9] and for impact parameter distributions in [10]. A corresponding representation holds for DPDs (see [11] for the momentum space version, which can readily be adapted to the case of definite transverse parton position). A limitation of this approach is that it does not account for the renormalisation of ultraviolet divergences in the matrix elements (at least not in customary schemes such as MS) nor for subtleties related with Wilson lines or the definition of light-cone gauge at infinite light-cone distances. It has long been realised that ultraviolet subtractions can in principle invalidate the positivity of distributions. A detailed discussion and examples can be found in the recent paper [12]. An important result is that LO DGLAP evolution to higher scales conserves the positivity of PDFs, both in the unpolarised sector [13,14] and in the polarised one [15,16]. This means that if PDFs satisfy the positivity bounds at a certain scale µ, these bounds remain valid at higher scales when the PDFs are evolved at leading order. Discussions for NLO evolution can be found in [17,18]. Conversely, experience shows that PDFs eventually turn negative when evolved down to very low scales. Examples for this can for instance be found in [19]. In [4] it was shown that positivity of spin dependent but colour summed DPDs is conserved by LO DGLAP evolution to higher scales. The first goal of the present paper is to investigate whether the same holds for the bounds derived in [6] for DPDs with non-trivial colour dependence. In addition to DGLAP evolution, we will also consider Collins-Soper evolution in a rapidity variable, which appears when the parton colours are not summed over. Following [6] we will limit ourselves to unpolarised partons throughout this work. For small transverse distances y between the two partons, DPDs can be computed in terms of ordinary parton densities and kernels for the perturbative splitting of one parton into the two observed partons and (at higher orders) additional unobserved ones. Using the results of the recent two-loop calculation in [20], we can investigate to which extent positivity of DPDs in colour space is realised in the small y limit. This is the second goal of our work. This paper is organised as follows. In section 2, we specify the two parametrisations for the colour structure of DPDs used in this work, specify the property of positivity, and discuss the perturbative splitting mechanism for DPDs at LO accuracy. In section 3, we analyse whether Collins-Soper evolution from smaller to larger scales conserves positivity of DPDs, and in section 4 we perform a corresponding analysis for LO DGLAP evolution. The perturbative splitting mechanism at NLO accuracy is investigated in section 5. Our results are summarised in section 6, and some technical formulae are collected in an appendix.
Colour structure of DPDs
In this section, we discuss the general colour structure of quark and gluon DPDs and state the hypothesis of positivity in colour space. Throughout this work, we consider distributions for unpolarised partons. As illustrated in figure 1, the colour structure of a DPD can be described in terms of four colour indices, one for each parton field in its definition. We generically write F r 1 r 1 r 2 r 2 a 1 a 2 , where a 1 and a 2 denote the two parton flavours. The colour indices are in the fundamental or adjoint representation as appropriate, with r 1 and r 2 referring to the partons in the amplitude of the scattering process and r 1 and r 2 to the partons in the conjugate amplitude. As described in [21], the definition of a DPD involves the hadronic matrix element of two twist-two operators, Figure 1: Assignment of colour labels for a quark-antiquark distribution (a) and a quarkgluon distribution (b). The dashed vertical line indicates the final state cut of the scattering process in which the distributions appear.
as well as a soft factor given as the matrix element of Wilson line operators in the vacuum. Both matrix elements contain ultraviolet divergences that need to be renormalised. One can take different renormalisation scales µ 1 and µ 2 for the two partons, and the dependence of the DPD on these scales is given by DGLAP equations, which will be discussed in section 4. The soft factor removes rapidity divergences in the hadronic matrix element, in a similar way as in the definition of transverse-momentum dependent distributions [22]. This leads to a dependence of the DPD on a rapidity scale ζ p , which is described by a Collins-Soper equation as discussed in section 3.
The s and t channel colour bases. The four colour indices of a DPD must be coupled to an overall colour singlet. This can be achieved by first coupling the colour of two parton pairs to an irreducible representation and then coupling these two representations to an overall singlet. Depending on the choice of parton pairs, we consider two bases for the colour coupling. In the s channel basis, we pair the partons in the amplitude and in the conjugate amplitude. The projection on irreducible representations can then be written as for (a 1 a 2 ) = (qq), (qq) with colour indices r 1 , r 2 , r 1 , r 2 in the fundamental or adjoint representation as appropriate.
The multiplicity m(R) of the representation R is m(1) = 1, (1) follows the choice made in [6] and corresponds to eq. (6) in [23]. We denote the conjugate of a representation R by R, where it is understood that some representations like the singlet and the octet are their own conjugate. The matrix P R R in (1) projects the colour of the parton pair a 1 a 2 on the representation R in the amplitude and on the representation R in the conjugate amplitude. Its explicit form is given in the appendix. Quark-antiquark distributions F RR qq are defined as in (1), but with transposed indices r 2 r 1 in P R R , which ensures that covariant indices are always contracted with contravariant ones. Likewise, the definition of F RR qq has transposed indices r 2 r 1 . In the t channel basis, we pair the partons with momentum fractions x 1 and x 2 . Following [21,24], we write Table 1: Combinations of colour representations in the s channel distributions F RR a 1 a 2 and the t channel distributions R 1 R 2 F a 1 a 2 . Two adjoint indices can couple to a symmetric (S) or an antisymmetric (A) octet. For two-gluon distributions, the colour combinations are identical in the s and t channels. Interchanging a 1 ↔ a 2 implies interchanging R 1 ↔ R 2 while keeping RR unchanged. (2) projects the colour indices r i r i of parton a i on the representation R i for i = 1, 2. For distributions with antiquarks, one needs to transpose the corresponding colour indices in P R 1 R 2 . The correct ordering is hence r 1 r 1 if a 1 =q and r 2 r 2 if a 2 =q. Combining the definition (1) with the completeness relation (63) and the explicit form of the singlet projector P 11 , one readily finds that for all parton combinations, where the sum runs over all relevant colour representations R. Throughout this work, we fix N = 3 for the number of colours. The colour factors used in later results thus have the values C F = 4/3, C A = 3, and T F = 1/2. The accessible colour representations for different parton combinations are given in table 1. We note that a quark and a gluon can couple to 6 (see e.g. table 24 in [25]) rather than to 6 as stated in equation (8c) of [23]. As shown in [21], the t channel distributions R 1 R 2 F are real valued, except for the decuplet sector in the pure gluon case, in which case one has ( 1010 F gg ) * = 1010 F gg . In the s channel basis, this translates into all distributions F RR being real, except for the mixed octet combinations, where one finds (F AS gg ) * = F SA gg .
Density interpretation and positivity. The parton model interpretation of DPDs can be obtained in the same way as for single parton distributions by expressing the field operators in terms of creation and annihilation operators in light-cone quantisation, neglecting all complications from Wilson lines and from renormalisation. Details can for instance be found in [22]. The t channel distributions are normalised such that 11 F a 1 a 2 (x 1 , x 2 , y) is the probability density for finding partons a 1 and a 2 with momentum fractions x 1 and x 2 at a transverse distance y from each other, with the density measure being dx 1 dx 2 d 2 y. The colours and polarisations of both partons are summed over in 11 F a 1 a 2 . Correspondingly, the s channel distribution F RR a 1 a 2 is the probability density for finding the parton pair in one of the m(R) states of the colour representation R. This provides an intuitive interpretation of the relation (3).
The positivity property for DPDs in full colour space is then the statement that which of course implies the weaker condition 11 F a 1 a 2 ≥ 0. Note that we define "positivity" as including the value zero. Note that in the pure gluon channel, the distributions in the s channel basis include the cases F AS gg and F SA gg , which correspond not to densities but to interference terms in colour space (and which may be complex valued as noted above). Given the large number of accessible colour channels in that case, we will not consider two-gluon DPDs in the remainder of this work, concentrating on the pure quark-antiquark sector and on mixed quark-gluon or antiquarkgluon distributions.
Basis transformations. Whilst the s channel basis is natural for considering positivity, the evolution of DPDs in the renormalisation and rapidity scales is much simpler in the t channel basis. We hence need the explicit transformations between the two representations. In the pure quark sector, the transformations between s and t channel bases read The transformation matrix for two antiquarks is Mqq = M qq . For gq and gq distributions, one has Perturbative splitting at leading order. If the transverse distance y between the two partons is small, DPDs can be computed in terms of a perturbative splitting process and ordinary PDFs. Example graphs for the perturbative splitting are shown in figure 2. This mechanism is interesting in our context because it generates a non-trivial colour dependence. Let us take a closer look at the splitting process at one-loop order, postponing the discussion of two-loop accuracy to section 5. The splitting formula at leading order reads [21,24] where f a 0 (x, µ) is the PDF for parton a 0 and we defined a s (µ) = α s (µ) 2π (10) and Note that (9) is an approximation for small y and receives corrections suppressed by a power of a s or y 2 Λ 2 , where Λ is a hadronic scale. The splitting kernels 11 V (1) a 1 a 2 ,a 0 (z) for the colour singlet channel are equal to the usual LO DGLAP splitting functions without the distributional parts (plus prescription and delta function) at z = 1. One therefore has 11 V (1) a 1 a 2 ,a 0 (z) > 0. This implies that 11 F a 1 a 2 > 0 at the scale µ where the LO splitting formula (9) is evaluated, provided of course that the PDFs are positive. To avoid large a 2 s corrections, one should take µ ∼ 1/y. The LO splitting kernels for other colour channels are proportional to 11 V (1) a 1 a 2 ,a 0 (z), and it is easy to transform them to the s channel colour basis. The result is where R 0 is the colour representation of the initial parton a 0 of the splitting process (with R 0 = A for g → gg). The parton pair a 1 a 2 must be in the representation R 0 because the LO splitting graphs are disconnected between the amplitude and conjugate amplitude (see figure 2(a)). The result (12) then follows from the relation (3). With 11 F a 1 a 2 > 0 one hence finds positivity of DPDs in colour space when taking the LO splitting approximation. We will see in section 5 whether this still holds at NLO.
Collins-Soper evolution
In this section, we investigate how Collins-Soper evolution affects the positivity of DPDs.
Collins-Soper evolution of DPDs does not mix different colour representations in the t channel basis, where one has Here we displayed all arguments of the functions. The Collins-Soper kernel R 1 J depends only on the multiplicity of R 1 (which is equal to the multiplicity of R 2 ) but not on the parton types. Note that colour singlet distributions in the t channel are ζ p independent, i.e. 1 J = 0. For all parton combinations except gg, the only non-trivial kernel needed is hence the one for the colour octet. Remarkably, this kernel satisfies the exact relation [26] 8 where K g (y, µ) is the Collins-Soper kernel for the evolution of single-gluon TMDs. Let us discuss the sign of the Collins-Soper kernel, which will be important in the following. The renormalisation group equation for the Collins-Soper kernel is solved by with a positive anomalous dimension that is proportional to the cusp anomalous dimension for adjoint Wilson lines: At given y one can hence always achieve a negative 8 J by taking the scales µ 1 and µ 2 sufficiently high.
To make a more specific statement, we first consider small distances y, where one can compute the kernel in perturbation theory and obtains where b 0 = 2e −γ ≈ 1.12 and γ is the Euler-Mascheroni constant. Bearing in mind that there are higher-order terms in (17), we see that the transition from positive to negative 8 J happens at µ around b 0 /y, as long as y remains in the perturbative regime. Not much is known about 8 J or K g for y in the nonperturbative domain. The situation is different for the Collins-Soper kernel K q for quark TMDs. Several phenomenological extractions find that K q (y, µ) is negative for large y, see e.g. [27] (figure 6), [28], and [29] (figure 23). Furthermore a number of lattice determinations, covering a distance range between about 0.1 fm and 0.8 fm, find that K q (y, µ) < 0 at µ = 2 GeV, see figure 7 in [30], figure 5 in [31], and figure 8 in [32]. We find it plausible to assume a qualitatively similar behaviour of K g (y, µ) and K q (y, µ) as functions of y (at perturbatively small y, one actually has K g /K q ≈ C A /C F up to corrections of order α 4 s , see footnote 10 in [21]). Under this assumption, we conclude that 8 J(y, µ 1 , µ 2 ) < 0 for scales µ 1 and µ 2 sufficiently larger than max(b 0 /y, 2 GeV).
Collins-Soper evolution in the s channel. In the s channel, different colour representations mix under Collins-Soper evolution. Starting with the qq channel and using the basis transform given in the previous section, we get Restoring all arguments, we find that the evolution equation is solved by with the matrix exponential where we abbreviate The Collins-Soper equation for the other parton combinations we consider has the same form as (18) with appropriate changes in the colour labels and matrices. In the quark-antiquark case, one hasĴ and for gq distributions we find where α is always given by (22). We furthermore getĴqq =Ĵ qq andĴ gq =Ĵ gq and corresponding equalities for the evolution matrices U a 1 a 2 . The evolution equations for distributions F q 1 q 2 , F q 1q2 , etc. with unequal flavours involve the same matrices as their counterparts for equal flavours.
We see that for all parton combinations a 1 a 2 except gg (which we do not consider) all elements of the evolution matrix U a 1 a 2 (α) are positive for α > 0. This is the case for forward evolution (ζ p > ζ 0 ), provided that 8 J < 0, which is the case when µ 1 and µ 2 are sufficiently large. Under this condition, Collins-Soper evolution to higher scales thus preserves positivity. With the notation the Collins-Soper equation in the s channel basis reads for all parton combinations considered here. Using that (Ĵ a 1 a 2 ) 2 =Ĵ a 1 a 2 , we can write its solution in the form Writing the relation (3) at rapidity scale ζ p and using that 11 We can now discuss the behaviour of Collins-Soper evolution for large negative α, which is relevant when evolving backward with 8 J < 0, and when evolving forward at scales µ 1 and µ 2 that are so low that 8 J > 0. In the regime where the factor e −α in (27) is much larger than 1, the condition (28) implies that the evolved distributions F RR a 1 a 2 (ζ p ) must be positive in some colour channels and negative in others. An exception to this statement is the case where the initial conditions satisfy (Ĵ a 1 a 2 F a 1 a 2 (ζ 0 )) RR = 0 for all R, so that all distributions are independent of ζ p . In the t channel basis, this is tantamount to all distributions other than 11 F a 1 a 2 being zero. Apart from this special case, evolution to large negative α always leads to a violation of positivity.
DGLAP evolution
In this section, we investigate how leading-order DGLAP evolution affects the positivity of DPDs. We limit ourselves to the evolution of two-quark and quark-antiquark distributions, which are the simplest cases as far as mixing and the number of colour channels are concerned. We first discuss evolution in µ 1 at fixed µ 2 and ζ p and then evolution in all three scales simultaneously.
Evolution in the scale of one parton
Let us consider evolution in the renormalisation scale of one parton, which we take to be the first one without loss of generality. For the parton combinations of interest, the LO evolution equations in the t channel basis read where the second parton a is a quark or an antiquark, and where R = 1 if R 1 = 1 and In the second equation we have made use of the charge conjugation relations betweenqg and qg splitting kernels; this gives a sign factor ε 2 (A) = −1 for the antisymmetric octet. The evolution equations involve the Mellin convolution whose lower integration boundary reflects the support region of DPDs in the momentum fractions: F (z 1 , z 2 , . . .) is zero for z 1 + z 2 > 1. The evolution kernels can be written as where b = q, g, γ q (µ) = 3C F a s (µ) + O(a 2 s ), 1 γ J (µ) = 0, and 8 γ J (µ) is given in (16). The z dependent part of (31) involves the familiar splitting functions and the colour factors Whilst our analysis is limited to the LO approximation of the splitting kernels, our results do not depend on whether one uses the LO or the NLO approximation for the anomalous dimension 8 γ J (µ), which is associated with Sudakov double logarithms. Taking such anomalous dimensions at NLO corresponds to next-to-leading logarithmic (NLL) approximation (see e.g. table 1 in [33] for single hard scattering and section 6.6 in [21] for DPS). If one takes 8 γ J (µ) at two-loop accuracy, one may also want to use the two-loop rather than the one-loop running of α s (µ). Our arguments in the present work do not depend on that choice. We will shortly need to know the sign of the Mellin convolutions (30).
On the other hand, the plus-prescription for P qq involves a negative term proportional to F (x 1 , x 2 , . . .): It can therefore have any sign, even if F (z 1 , z 2 , . . .) ≥ 0 for all z 1 , z 2 . To illustrate this, let us consider the DPDs computed with the LO splitting formula (9). We take the PDFs of the CT14lo PDF set [34], using the LHAPDF interface [35] via ManeParse [36]. We evaluate the DPD at µ = b 0 /y = 10 GeV and verify that the PDFs are positive at that scale. For the strong coupling, we use the value α s (10 GeV) = 0.178 provided by the PDF set. As shown in figure 3, the convolution of P qq with 11 F qq so obtained is indeed negative in a large region of the momentum fractions. The evolution equations in the s channel basis are derived along the same lines as the Collins-Soper equation in (18). If the first parton is a quark, we get where the function arguments of P and F are as those of P and F in (29). The colour mixing matrices read whilstĴ qq andĴ qq are given in (19) and (23), respectively. With we find that the evolution equations for Fqq and Fq q are respectively obtained from (35) and (36) by interchanging q ↔q in the DPDs and swapping their representation labels, i.e. F 33 qq → F 33 qq , F 33 gq → F 33 gq , etc. The splitting kernels, anomalous dimensions, and colour mixing matrices remain the same. The convolution x 1 x 2 P (z, µ) ⊗ x 1 11 F qq (z, x 2 , y, µ, µ) with P (z, µ) = a s (µ) P qq (z) and 11 F qq computed from the LO splitting formula (9) at µ = b 0 /y = 10 GeV. A weighting factor x 1 x 2 is included to keep details visible at higher momentum fractions.
The evolution equations have the same form for distributions with unequal flavours, i.e. one may replace F qq → F q 1 q 2 and F gq → F gq 2 in (35), or F qq → F q 1q2 and F gq → F gq 2 in (36), whilst keeping the splitting kernels and colour mixing matrices unchanged. Before analysing the effect of the evolution equations (35) and (36) on positivity, let us recall the situation for colour singlet distributions in the t channel basis. The LO evolution equation where a is a quark or an antiquark. If 11 F qa and 11 F ga are non-negative for all momentum fractions, the terms involving P qg or γ q in (40) are non-negative as well and hence conserve positivity. As we have shown in figure 3, the first term in (40) can be negative. However, the part that is responsible for a negative sign is proportional to 11 F qa (x 1 , x 2 , . . .) itself, as is easily seen in (34). This negative contribution hence decreases in magnitude when 11 F qa (x 1 , x 2 , . . .) approaches zero from above, and closer inspection shows that it cannot lead to a violation of positivity. This is shown in more detail in appendix B of [4]. Overall, positivity is hence conserved for LO evolution of 11 F qa to higher scales, and the same can be shown for all other parton combinations. We now analyse the sign of the different terms on the r.h.s. of (35) and (36) under the assumption that the s channel distributions on the r.h.s. are non-negative for all momentum fractions. For brevity, we write D 1 F = ∂F/∂ log µ 2 1 .
The scale variation
A negative contribution from the term with a R cannot lead to a violation of positivity, as just discussed. However, the contribution from the term with b R can remain large and negative even if F RR qa approaches zero. This term can therefore lead to a zero crossing of the distribution as one evolves to higher scales.
2. The matrices P qg,a have no negative elements, so that the terms with P qg are all nonnegative.
3. The terms with γ q are all non-negative.
4. The contributions with 8 γ J to D 1 F 33 qq and D 1 F 66 qq have opposite sign, as well as those to D 1 F 11 qq and D 1 F 88 qq . This follows from the form ofĴ qq andĴ qq in (19) and (23). Whether these contributions can violate positivity depends on the sign of log µ 2 1 /(x 2 1 ζ p ).
It follows that LO DGLAP evolution of DPDs is not guaranteed to conserve their positivity in colour space, in contrast to the colour singlet distributions 11 F a 1 a 2 . This is one of our main results. As a numerical illustration, let us take the initial conditions provided by the perturbative splitting mechanism at LO. According to (12), we have F 11 qq = F 66 gq = F 1515 gq = 0 at the scale µ where the splitting formula is evaluated. Inserting this in the evolution equation (36) and taking the LO approximation (16) of 8 γ J , we obtain at the point µ 1 = µ 2 = µ. We evaluate the r.h.s. numerically with the same settings as in figure 3 and the choice which is natural for the perturbative splitting mechanism [20]. With this choice, the term with ζ p is zero at x 1 = x 2 . At that point, a negative value of (41) must hence be due to the term with P qq . We see in figure 4 that there are regions in x 1 and x 2 for which D 1 F 11 qq < 0 at the scale µ. With F 11 qq = 0 at that scale, positivity is thus explicitly violated by evolution to a higher scale for the first parton.
Simultaneous evolution in all scales
When computing double parton scattering cross sections, the choice of scales µ 1 , µ 2 and ζ p is driven by the kinematics of the process. In particular, taking µ 1 = µ 2 is natural for processes with two hard scales of very different size. On the other hand, the simplest setting for the physical interpretation of a DPD is with all relevant scales set equal. We therefore investigate the evolution of DPDs with a common renormalisation scale µ = µ 1 = µ 2 for both partons and the rapidity scale given by ζ p = µ 2 /(x 1 x 2 ) as in (42). Let us briefly comment on the factor x 1 x 2 in (42). As explained in [20,21], the definition of ζ p involves the rapidity regulator and the plus-momentum of the target proton. By contrast, x 1 x 2 ζ p refers to the regulator and the plus-momenta of the two extracted partons, and in this sense is more closely related to the scales µ 1 and µ 2 that refer to the renormalisation of the operators associated with the two partons. This motivates our choice (42), along with the fact that x 1 x 2 ζ p is the combination of variables appearing at higher orders in the perturbative splitting formula for DPDs, see (44) and (45). Combining the DGLAP equations for the first and second parton with the Collins-Soper equation (18) yields With ζ p chosen as in (42), the term with 8 γ J in (35) has cancelled against the corresponding term from the evolution equation in µ 2 . The sign of the r.h.s. of (43) can be analysed along the same lines as in the previous subsection: 1. The terms with P qq can lead to a violation of positivity because of the negative part in the plus-distribution and of the positive off-diagonal entries in the matrix P qq,q . Figure 4: The right-hand side of (41) with qq = uū, evaluated at µ = b 0 /y = 10 GeV and ζ p = µ 2 /(x 1 x 2 ) with LO splitting DPDs. The individual terms going with P qq , P qg , and γ J are shown as well. The PDFs used in the splitting formula are specified in the text below (34). A weighting factor x 1 x 2 is included to keep details visible at higher momentum fractions. colour channel gives a positive contribution. This is consistent with our findings for Collins-Soper evolution in section 3.
The leading order expression (17) of 8 J(y, µ, µ) contains an explicit logarithm log(µ 2 y 2 ). In the leading double logarithmic approximation, one therefore has to keep only the last term in the evolution equation (43), which then reduces to the LO Collins-Soper equation with µ 2 = x 1 x 2 ζ p . The evolution equation for F qq has the same form as (43) and involves the colour mixing matrices P qq,q , P qg,q , andĴ qq . Its discussion proceeds in full analogy. In summary, we find that evolution to higher scales can violate positivity of DPDs in colour space, both when one evolves in the renormalisation scale of one parton and when one evolves in all scales simultaneously.
DPDs from parton splitting at two-loop accuracy
In section 2 we saw that the perturbative splitting mechanism gives DPDs that satisfy positivity if the splitting is computed at LO, i.e. at one-loop accuracy. This is not surprising, since the LO splitting formula in the s channel basis can be written as a squared matrix element (in the mixed representation of definite plus-momentum and transverse position for the observed partons). Starting from two loops, the splitting formula has explicit logarithms of the renormalisation scale µ and of the rapidity parameter ζ p , which respectively result from subtractions for ultraviolet and rapidity divergences. As a consequence of these subtractions, positivity is no longer guaranteed. It is then natural to ask whether the resulting DPDs at small y violate positivity in colour space, and if so, by how much. We address this question in the present section, using the results of the recent two-loop calculation in [20]. The generalisation of the LO splitting formula (9) to higher orders has the form and the special convolution Here we write and recall that according to (11). The analogue of (44) for distributions in the s channel basis is readily obtained using the transformations in section 2. We limit our attention to the quark-antiquark sector and consider the distributions Among these, only F 88 uū is nonzero at order a s , whilst all others start at order a 2 s .
Parton combinations appearing first at two loops. At order a 2 s the distributions F ud and F ud receive a contribution only from the kernels V qq ,q or V qq ,q , which respectively correspond to the graphs in figure 5(a) and 5(d) and further graphs with identical topology. As a consequence, distributions for different colour representations are proportional to each other, as specified in equation (4.40) of [20]. This leads to the relations in (49). Furthermore, the splitting kernels for F ud and F ud are proportional to each other, because 11 V qq ,q = 11 V qq ,q at two-loop accuracy. Using the basis transform (5) and the proportionality factors between t channel singlet and octet kernels in equation (4.35) of [20], one obtains with V 33 qq ,q = 2 9
Here the convolution V (z 2 , z 1 ) ⊗ f (z) is defined as in (46) with u and 1 − u interchanged on the r.h.s. In the last term of (51) we used the relation Vq q ,q = V qq ,q from charge conjugation invariance. Notice that the kernel 11 V qq ,q includes an overall factor a 2 s .
The distributions F 33 uu and F 66 uu receive contributions from the kernels V qq ,q and V v qq,q with different weights and are therefore not proportional to each other. Using equation (3.3) in [20], we find that the relevant kernels read and are to be convolved with f u (z). At order a 2 s , the kernel 11 V qq ,q and hence all kernels in (52) and (53) require ultraviolet renormalisation but no subtraction of rapidity divergences. As a consequence, they depend linearly on the renormalisation group logarithm L y . A natural choice of scale in the fixed-order formula (44) is µ = µ y , so that L y = 0. In a numerical study of the two-gluon distributions RR F gg in [20], we found that the size of a 2 s corrections relative to the a s term is moderate for µ = µ y but grows substantially for µ = µ y /2 or µ = 2µ y . In the present work, we therefore consider a smaller amount of variation and take µ = 1.2µ y or µ = µ y /1.2 as alternative scales, which corresponds to L y ≈ ±0. 36. We checked that the size of a 2 s corrections in the two-gluon sector remains moderate for these values. The DPDs depend on µ as specified by the relevant DGLAP equations. When comparing the fixed-order formula (44) for different µ, one thus sees evolution effects truncated at the lowest order in a s . In figures 6 and 7, we plot the kernels 11 V qq ,q , V 33 qq,q , and V 66 qq,q . We see that they are negative over wide ranges of z and u, although they are obtained from a sum of graphs that correspond to the squared amplitude for q → qq q or q → qqq. This illustrates that the subtraction of ultraviolet divergences can indeed lead to a negative result. We note that the kernels go to large positive values for z → 1, which can be traced back to terms diverging like log 1/(1 − z) in that limit. Finally, we observe that the kernels increase with µ for all z and u. We now investigate to which extent the negative regions in the splitting kernels lead to negative DPDs. To this end, we evaluate the splitting formula for the distributions in (49) and (50) with the central PDFs from the CT14nlo set, having checked that the PDFs are positive in the kinematics of interest. As we did in section 4, we fix y such that µ y = 10 GeV. In figure 8 we show F 33 ud and F 11 ud for µ = µ y /1.2 and µ = µ y . The curves are scaled such that the difference between them originates from the different PDFs in (51) and not from the different normalisation of the kernels in (52). We observe a strong effect of scale evolution: for x 1 ∼ x 2 the distributions are negative at the smaller scale but positive at µ = µ y (and also at µ = 1.2µ y , which is not shown in the figure). We also note that the negative values in figure 8(c) are tiny compared with the size of the same distributions at other values of x 1 for the same x 2 . The distributions F 33 uu and F 66 uu are shown in figure 9, with a scaling factor such that the difference between the curves is due to the contribution of 11 V v qq,q to the kernels in (53). The situation for µ = µ y /1.2 (not shown in the figure) is qualitatively similar to the one at µ = µ y , where we find negative values at x 1 ∼ x 2 for F 33 uu but not for F 66 uu . At the higher scale µ = 1.2µ y , all values are positive. As in figure 8(c), the negative values in figure 9(c) are tiny compared with the size of the distribution at other momentum fractions. In this sense, the violations of positivity we have shown so far may be regarded as minor.
Quark-antiquark distributions. The last two distributions in (50) are for a quark and an antiquark of equal flavour. F 11 uū receives contributions from all three kernels in the bottom row of figure 5 and from the real two-loop graphs for the splitting g → qq, an example of which is shown in figure 2(b). F 88 uū receives contributions from the same graphs, from the LO graph in figure 2(a), and from virtual two-loop graphs such as the one in figure 2(c). The uū at x 1 = x 2 at two-loop level, evaluated with x 1 x 2 ζ p = µ 2 for two values of µ. The curves labelled "g" are for the splitting g → qq, and the curve labelled "q" is for the sum of all splitting contributions initiated by a quark or an antiquark. The labels "regular", "[1 − z] + ", and "δ(1 − x)" refer to the different parts of the kernels V 11 [2,0] + L y V 11 [2,1] discussed below equation (55).
virtual graphs depend on the colour of the observed partons in the same way as the LO graph and hence do not contribute to F 11 uū . The distributions F 11 uū and F 88 uū require both ultraviolet renormalisation and the subtraction of rapidity divergences. The latter appears in the splitting process g → qq, whose kernels have a more complicated structure than the ones considered so far. Up to order a 2 s , they can be written as qq,g (u) = u 2 + (1 − u) 2 2 is the LO splitting kernel appearing in (9) and contains the double logarithms associated with rapidity divergences. We note that (55) is obtained with the standard definition of the MS scheme, and that the term π 2 /6 is absent if one instead uses the definition proposed by Collins in section 3.2.6 of [22]. The coefficients V [2,0] and V [2,1] in (54) are smooth functions of u but distributions in z. They consist of a regular part, a part proportional to the plus-distribution 1/[1 − z] + , and a term proportional to δ(1 − z). The regular part is a smooth function of z but may have a log(1 − z) singularity for z → 1, similar to what we saw for the pure quark kernels in figures 6 and 7.
In figure 10 we show the different contributions to F 11 uū for x 1 = x 2 . The regular part of the g → qq kernel turns out to be negative for all values of z and u and results in a large negative contribution to the DPD. At µ = 1.2µ y , another negative contribution comes from the part of the kernel that goes with the plus-distribution 1/[1 − z] + . Adding up all contributions, one obtains the distribution shown in the two upper rows of figure 11. The negative contributions dominate for x 1 = x 2 up to a few 0.01, whereas for larger momentum fractions the positive contributions from quark or antiquark splitting gradually takes over. In the two lower rows of figure 11, we see that the regions of negative F 11 uū are centred around x 1 ∼ x 2 , with values that are small compared with the size of the distribution at other values of x 1 at the same x 2 . This is the same phenomenon that we observed earlier for F ud , F ud , and F 33 uu . However, in the present case the negative values around x 1 ∼ x 2 decrease with µ, so that the violation of positivity in F 11 uū becomes more pronounced as µ becomes larger. In figure 11 we also see that a mild variation of the rapidity parameter ζ p has a rather small effect in the kinematics considered here. The choice of ζ p is only relevant if µ = µ y , because L ζ is multiplied by L y in (55). Let us finally turn to the distribution F 88 uū . Evaluated at one-loop accuracy, this distribution is positive, so that negative values at two-loop level can only appear when the contribution of order a 2 s is larger in size than the one of order a s . In such a case, one may of course worry whether the unknown contributions of yet higher orders will change the sign of the distribution again. This situation is qualitatively different from the one for the distributions discussed so far, where the terms of order a 2 s give the first non-vanishing contribution. In figures 12 and 13 we show F 88 uū for different values of the momentum fractions. We always take x 1 x 2 ζ p = µ 2 , bearing in mind that according to (54) the effect of varying ζ p is eight times smaller for F 88 uū than it is for F 11 uū . As can be seen in figure 12, the difference between the distributions at µ = µ y to µ = 1.2µ y is rather small, in contrast to what we found for the other distributions discussed so far. This is not surprising, because for F 88 uū the relative effect of changing the scale from µ a to µ b is of order a s log(µ a /µ b ), whereas it is of order log(µ a /µ b ) for distributions that receive their first nonzero contribution at two loops. In all panels of figures 12 and 13, we show the result obtained with either the one-loop kernel or with the sum of one-and two-loop kernels. In both cases we take the same NLO PDFs, so that the difference between the LO and the LO+NLO curves directly shows the impact of the two-loop kernels. We find that the size of the a 2 s corrections is often moderate but becomes large in several kinematic situations.
1. As discussed in section 4.3 of [20], the two-loop corrections for g → qq and q → q q are enhanced at small x 1 + x 2 . This is seen in the left panels of figures 12(a), 12(b), 13(b), and 13(c). As follows from equations (4.46) and (4.48) in [20], the enhanced corrections provide a positive contribution to F 88 uū . An all-order resummation of the enhanced corrections using techniques from small-x factorisation may be possible, but details of this have not been worked out.
2. As explained in section 4.4 of [20], the splitting graph for u → uū in figure 5(d) leads to a behaviour of the DPD like 1/(1 − u) ≈ x 1 /x 2 for x 2 x 1 , which is absent at order a s . This explains why in figure 13(a) the LO+NLO result for the scaled DPD x 1 x 2 F 88 uū goes to a finite value when x 2 x 1 , whereas the LO result goes to zero. It also explains the huge relative NLO corrections seen in the right panel of figure 13(b). In the latter case, the splitting process u → uū is further enhanced by the fact that the u quark distribution becomes the dominant PDF with increasing x.
The appearance of an additional power 1/(1 − u) is unique for the step from LO to NLO in this channel and will not repeat itself at yet higher orders.
For x 2
x 1 we find a large negative two-loop contribution to F 88 uū , which is seen in the right panel of figure 13(c) and more clearly in figure 13(d), where the LO+NLO result becomes just slightly negative. This can be traced back to a negative term − 5a 2 qq,g (u) log 2 u (56) in the kernel V 88 qq,g , which is enhanced by two powers of log u compared with the LO expression.
It would require further analysis to understand whether this type of enhancement repeats itself at yet higher orders and, if so, whether it can be resummed to all orders. We therefore cannot say whether the negative values seen in figure 13(d) will disappear when higher order terms are included.
Notice that figures 13(c) and figure 13(d) refer to the lowest scale µ = µ y /1.2 considered in this study. The corresponding plots for µ = µ y or 1.2µ y show large negative NLO corrections as well, but the LO+NLO curves do not become negative any more.
We remark in passing that the enhanced two-loop contributions described in points 1 and 3 scale like the LO kernel for g → qq and hence cancel out in F 11 uū . By contrast, the 1/(1 − u) behaviour discussed in point 2 appears both in F 88 uū and F 11 uū .
Colour summed distributions Let us finally consider the colour summed distributions 11 F ud , 11 F ud , 11 F uu , and 11 F uū . The relations (3) and (49) imply that 11 F ud and 11 F ud are negative whenever their s channel counterparts are. By contrast, we find that 11 F uu and 11 F uū remain positive for the kinematic settings shown in figures 9 to 13 .
Summary
In the context of the parton model, DPDs in the s channel colour basis are probability densities for finding two partons in a definite colour state (with specified longitudinal momenta and specified transverse distance from each other). This leads to the expectation that s channel DPDs for unpolarised partons should be non-negative. If it is satisfied, this positivity property provides valuable constraints on the colour dependence of DPDs, along with a strategy to model them by saturating the positivity bounds at a certain scale.
In the present work, we investigate whether evolution of DPDs to higher scales preserves the positivity property under the assumption that it holds at the starting scale. We limit ourselves to unpolarised partons, and we exclude two-gluon DPDs from our consideration because their colour structure is much more involved than the one of pure quark or quarkgluon distributions. DPDs in the s channel colour basis are subject to Collins-Soper evolution in the rapidity parameter ζ p , and this evolution includes mixing between different colour channels. Provided that the renormalisation scales µ 1 and µ 2 associated with the two partons are large enough compared with 1/y, the Collins-Soper kernel 8 J(y, µ 1 , µ 2 ) for DPDs is negative. Under this condition, evolution to higher ζ p preserves positivity. Conversely, backward evolution to sufficiently small ζ p eventually leads to negative distributions (except for the special case in which all t channel distributions other than 11 F a 1 a 2 are zero and hence all s channel distributions are independent of ζ p ). We next consider the DGLAP equations for evolution in one of the scales µ 1 or µ 2 , with the evolution kernels taken at LO. We find that evolution to higher scales is not guaranteed to preserve positivity: there are initial conditions that satisfy positivity but lead to negative s channel distributions at higher scales. This is due to the convolution of a plus distribution in the evolution kernel with a DPD different from the one being evolved. There is no contribution of this type in the LO evolution equations for polarised colour summed DPDs 11 F a 1 a 2 , which conserve positivity in the same way as the LO evolution of polarised PDFs [4]. In a numerical illustration, we choose initial conditions where certain s channel DPDs are zero and see that they turn negative at slightly higher scales. We also study joint DGLAP and Collins-Soper evolution in the common scale µ = µ 1 = µ 2 = x 1 x 2 ζ p and find that positivity is not preserved, for the same reasons as above. At small inter-parton distance y, the initial conditions for DPD evolution can be computed using the perturbative splitting mechanism, setting µ = µ 1 = µ 2 ≈ 1/y and using a fixedorder truncation of the DPD splitting kernels. It is easy to see that at order a s one obtains DPDs that satisfy positivity in colour space. At order a 2 s this no longer holds: in a numerical study we obtain negative values for colour channels in which the distributions are zero at order a s , and also for the distribution F 88 uū , which is nonzero at order a s . Negative values are also found for the colour summed distributions 11 F ud and 11 F ud . In several cases, the considered distributions have no ζ p dependence at order a 2 s , and negative values of them can be uniquely traced back to the subtraction of ultraviolet divergences implied in the definition of twist-two operators. The explicit form of this subtraction is given in section 2.6 of [20]. The negative values we find for the distributions are small compared with the size of the same distributions at other values of the momentum fractions x 1 and x 2 . In this sense, the violations of positivity we have seen may be regarded as "relatively small". In view of this, we should also caution that negative values obtained with the splitting formula at order a 2 s may turn into positive ones when yet higher orders are included. Note that the violations of positivity just discussed refer to DPDs defined with MS renormalisation of twist-two operators. By contrast, the violation of positivity by forward DGLAP evolution described earlier occurs at LO and is hence not specific to the MS scheme. We conclude that the positivity of DPDs in full colour space cannot be taken for granted and can be violated in physically realistic settings. Using positivity as a guide for modelling DPDs at large y may still be an option when there is a lack of better information. It should however be done with due caution, and one should check whether the chosen initial conditions give positive distributions when evolved to higher scales.
A Colour space projectors
In this appendix, we list the colour space projectors that appear in the definitions (1) and (2) of DPDs in the s and t channel bases. In the pure quark and the mixed quark-gluon sector, we have and P ij i j For completeness, we also give the projectors for the pure gluon sector, although they are not used in the present work.
In all cases, we have set the number of colours to N = 3. Further projectors are obtained by exchanging the representation labels and the corresponding indices: P r 1 r 2 r 1 r 2 RR = P r 1 r 2 r 1 r 2 R R .
One readily verifies the completeness relations R P r 1 r 2 r 2 r 1 RR = δ r 1 r 1 δ r 2 r 2 for P RR in (57) , R P r 1 r 2 r 1 r 2 RR = δ r 1 r 1 δ r 2 r 2 for P RR in (59), (60), (61) , where in each case the sum runs over all available representations. The multiplicity of a representation R can be computed from the trace m(R) = P r 1 r 2 r 2 r 1 RR for P RR in (57), P r 1 r 2 r 1 r 2 | 13,034 | 2021-09-29T00:00:00.000 | [
"Physics"
] |
An Introspective Comparison of Random Forest-Based Classifiers for the Analysis of Cluster-Correlated Data by Way of RF++
Many mass spectrometry-based studies, as well as other biological experiments produce cluster-correlated data. Failure to account for correlation among observations may result in a classification algorithm overfitting the training data and producing overoptimistic estimated error rates and may make subsequent classifications unreliable. Current common practice for dealing with replicated data is to average each subject replicate sample set, reducing the dataset size and incurring loss of information. In this manuscript we compare three approaches to dealing with cluster-correlated data: unmodified Breiman's Random Forest (URF), forest grown using subject-level averages (SLA), and RF++ with subject-level bootstrapping (SLB). RF++, a novel Random Forest-based algorithm implemented in C++, handles cluster-correlated data through a modification of the original resampling algorithm and accommodates subject-level classification. Subject-level bootstrapping is an alternative sampling method that obviates the need to average or otherwise reduce each set of replicates to a single independent sample. Our experiments show nearly identical median classification and variable selection accuracy for SLB forests and URF forests when applied to both simulated and real datasets. However, the run-time estimated error rate was severely underestimated for URF forests. Predictably, SLA forests were found to be more severely affected by the reduction in sample size which led to poorer classification and variable selection accuracy. Perhaps most importantly our results suggest that it is reasonable to utilize URF for the analysis of cluster-correlated data. Two caveats should be noted: first, correct classification error rates must be obtained using a separate test dataset, and second, an additional post-processing step is required to obtain subject-level classifications. RF++ is shown to be an effective alternative for classifying both clustered and non-clustered data. Source code and stand-alone compiled versions of command-line and easy-to-use graphical user interface (GUI) versions of RF++ for Windows and Linux as well as a user manual (Supplementary File S2) are available for download at: http://sourceforge.org/projects/rfpp/ under the GNU public license.
Introduction
Our research was motivated by an analysis of matrix-assisted laser desorption/ionization (MALDI) time of flight (TOF) data. MALDI-TOF data are high dimensional data, characterized by a large number of variables, a (typically) small number of subjects, and a high level of noise. These features complicate subsequent data analysis. Nonetheless, analyses of ion TOF data, including both MALDI-and surface-enhanced laser desorption/ionization (SELDI) TOF data, are used to discover disease-related biomarkers and identify features that discriminate between disease states [1][2][3][4][5][6][7][8][9][10][11][12].
Due to heterogeneous crystallization of the sample/matrix mixture spotted onto MALDI plates, and/or to account for dayto-day instrument variation for both MALDI and SELDI, it is common practice to obtain replicate spectra from the same subject sample, resulting in non-independent (cluster-correlated) subjectlevel data [13]. Here cluster refers to the collection of samples collected from the same subject. Since multiple samples are collected for the same subject, in principal the samples should be identical. The imperfections in technology and sample processing introduce some variation, resulting in non-identical replicate samples that are more similar to one another than samples from different subjects; that is to say, there is positive correlation between technical replicates from the same subject.
For replicate subject-level observations, we expect the intracluster correlation (ICC) to be moderate to high, while for other types of clustered data, the ICC can be quite low. When discriminating between the disease groups, correlated replicate data may not be considered independent [14,15]. Within-cluster data dependence limits the use of classifiers such as Random Forest (RF) without first altering the data to induce independence, for example, averaging the observations obtained from technical replicates from the same subject [16].
RF is an ensemble of decision trees. Decision trees have been used in bladder cancer diagnosis based on SELDI spectrum protein profiles [11]. Decision trees are examples of weak learners, that is, classifiers characterized by low bias but high variability [16,17]. Another advantage of decision trees is the ease in which variables and their associated values can be interpreted.
Minor data alterations can result in large changes in the structure of a single tree. RF overcomes this problem of overfitting by averaging across different decision trees. Specifically, each tree is built on a bootstrap sample of the training dataset, so that the bootstrap sample contains, on average, 63% of the unique original samples [16,18,19]. Bootstrap sampling, also called bagging (from bagged aggregation), exposes the tree construction algorithm to a slightly different subset of the training data for each tree, resulting in a collection of different trees. Since forests typically consist of thousands of trees, the examination of an individual tree or even a select subset of trees is dubious in regards to the effective determination of important variables and corresponding values. For this reason, several variable importance measures have been proposed that rank important variables by considering all trees in the RF [16,20]. We discuss one of these measures used in RF++ in the Methods section.
A small subsample of variables (the mtry parameter in the RF literature) is used at each tree node split, inducing further variation among trees. Together, bagging and variable subsampling reduce overfitting and make RF a more stable classifier than a single decision tree [21,22]. RFs have been shown to perform comparably to other classification algorithms with respect to both prediction accuracy and the capacity to accommodate large numbers of predictor variables [23][24][25].
RFs have been used in numerous biological applications, including the identification of cancer biomarkers, using a single observation per subject [23,26,27]. Vlahou et al. and Svetnik et al. used decision trees and RF, respectively, on averaged replicate data [11,24]. Although averaging induces independence, a consequence of the resulting data reduction is a loss of information. Moreover, if the number of replicates differs across subjects, averaging masks this imbalance and leads to each subject contributing equally to the resulting classifier.
In our novel RF implementation, we utilize subject-level bootstrapping (described in the Methods section), which enables the effective use of all data samples and allows for unequal contribution from the subjects. In the sections that follow, we describe a generalized Random Forest classifier, RF++, and simultaneously compare it with classical RF approaches for dealing with replicate data. In addition to providing a classification algorithm and measures of variable importance, RF++ accommodates cluster-correlated data in a manner that is consistent with the data's structure.
MALDI-TOF Simulated Data
We first investigated the ability of RF++ to correctly identify discriminating variables and classify subjects under conditions of varying: intra-cluster correlation (ICC), numbers of subjects, and numbers of replicates per subject. We grew forests using 125 simulated training datasets with 3 equally discriminating variables as described in the Methods section. We then assessed the forests' classification accuracy and variable selection ability using 25 new simulated testing datasets. We repeated the simulation 200 times to produce stable estimates of the median, 5 th and 95 th percentiles for the measurements presented below. The simulation study was designed to resemble characteristics observed in the MALDI-TOF data discussed in the previous section. Figures 1, 2, and 3, depict results corresponding to forests grown by RF++ with subject-level bootstrap sampling (SLB), dot-dashed blue lines; results corresponding to forests grown assuming all samples are i.i.d. (URF), solid red lines; and results corresponding to forests grown on subject-level averaged (SLA) samples, dashed black lines. For each performance measure, we present results only for ten (five in each class) and 30 (15 in each class) subjects. Simulation results for 20, 50 and 100 subjects were qualitatively similar to those shown for 30 subjects, and were therefore excluded in the interest of brevity.
Variable Importance. To compare each method's ability to select discriminating variables, we ranked the variable importance scores produced by the simulations for each forest and computed an average rank for the 3 equally discriminating variables. The best possible average rank was 2 when all discriminating variables were in the top 3 positions. Figure 1 shows the median and the 5 th and 95 th percentiles of the logarithm of the average rank for the 3 discriminating variables of the 200 simulations for the SLB, URF and SLA forests. Results are shown for simulations with 10 and 30 subjects in Figures 1A and 1B, respectively.
The RF++ variable importance ranks obtained from SLB and URF forests were consistently lower than the ranks from SLA forests. The ability to select discriminating variables decreased for both SLB and SLA forests as the ICC increased. This is expected, since the effective sample size for clustered data is n 1z m{1 ð ÞÃr which approaches the sample size for the SLA method when r~1, n=m. Here n is total number of samples, m is number of samples within a subject (cluster), and r is intracluster correlation coefficient. For 20 subjects or more, the intervals defined by the 5 th and 95 th percentiles of the average rank distribution for the three discriminating variables were uniformly lower and narrower for SLB and URF forests than for SLA forests.
For 10 subjects, SLB and URF forests performed better for all but the highest value of ICC = 0.9. There was little difference in the accuracy of variable selection between the SLB and URF forests, suggesting that both bootstrap methods can be equally used for variable selection.
As the number of subjects increased, all of the forests identified the discriminating variables with increasing accuracy across a wider range of ICC values. Note, for example, the straight line in Figure 1B at average rank = 2 for the ICC values from 0.1 to 0.5, which indicates nearly perfect identification of the 3 discriminating variables in this ICC range. The average rank increased to 4 or greater for ICC = 0.7 with a large increase in the width of the interval defined by the 5 th and 95 th percentiles. Figure 1 demonstrates that SLB and URF forests identify important variables equally well and usually better then SLA forests. Specifically, SLB and URF forests in our simulations produced lower discriminating variable importance ranks than the SLA forests for ICC values between 0.1 and 0.7. All forests performed poorly at ICC = 0.9 with median average ranks above 76.
Classification Accuracy
Proportion Correctly Classified. Because RF++ is constructed to accommodate clustered data, it summarizes classification both at the replicate and subject level. Replicates are classified based on the majority vote of all trees in the forest. Subjects are then classified by majority vote of their replicates, as described in the Methods section. Figures 2A and 2B show the median and the 5 th and 95 th percentiles of the proportion of subjects correctly classified for SLA, SLB and URF forests across 200 simulated test data sets for ten and 30 subjects, respectively. As expected, the algorithm predicted class membership with decreasing accuracy as the ICC increased, but the classification accuracy of SLA forests was uniformly equal to or less than that of SLB and URF forests (except for a single case for 30 subjects with 2 replicates and ICC = 0.5). This is most notable for small numbers of subjects (Figure 2A), with a nearly 15% difference in accuracy for the small values of ICC. The differences between the forests decreased as ICC increased, due to effective sample size for SLB and URF forest approaching the sample size of SLA forest as explained above. We also note that the forests achieved similar classification performance as the number of subjects increased. We observed no difference in classification performance between SLB and URF forests.
Area Under the Receiver Operating Characteristic
Curve. To assess classification performance of the forests in a manner independent from the decision threshold (for majority vote the decision threshold is 0.5, i.e. above 50% trees to vote for a particular classification in a two class classification), we computed the area under the receiver operating characteristic curve (AUC) [28]. Figures 3A and 3B show the median and the 5 th and 95 th percentiles of AUC for the three forests across 200 simulations for 10 and 30 subjects, respectively. URF forests produced greater median AUCs for 10 subjects with SLB tracing closely and SLA performing up to 18% worse. Although URF and SLB forests had similar median AUCs, SLB forests yielded consistently narrower 90% credible intervals than URF forests, representative of a more stable performance. Differences in AUCs among all forests decreased for 50 subjects and were negligible for 100 subjects. All forests produced similar 90% credible intervals for 100 subjects. It is noteworthy that all forests had similar performance at the extreme ICC values of 0.1 and 0.9 for numbers of subjects 30 or larger ( Figure 3B), but URF and SLB forests had greater AUCs than SLA forests at intermediate ICC values (0.3, 0.5, 0.7).
Application to Esophageal Cancer Data
We analyzed MALDI-TOF spectra derived from serum samples of esophageal cancer patients to further validate the results in classification accuracy on real MS data. Sera were obtained from 38 (30 cancer and 8 control) subjects, fractionated, and analyzed by MALDI-TOF MS. We obtained 507 spectra with the following numbers of replicates per subject: 28 subjects had 12 replicates; 5 subjects had 24 replicates; 4 subjects had 11 replicates; and 1 subject had only 7 replicates. Spectra were preprocessed using PrepMS with the mean spectrum smoothing threshold set to 20, individual spectra smoothing threshold set to 16, and signal-tonoise ratio set to 20 [29]. Intensities below 2000 kDa were considered matrix noise and were eliminated from the analysis. A total of 185 peaks were identified. Spectra were further normalized with EigenMS to eliminate any systematic bias [30]. One significant eigenpeptide (trend) that explained 88.25% of the variation was detected and its effects were removed.
We grew URF, SLB and SLA forests each with 2001 trees. We performed 100 experiments dividing the subjects into training and testing datasets. Two-thirds of the 38 subjects (26 subjects) were used for training, randomly choosing 6 of the subjects from the control group and 20 of the subjects from the disease group, respectively. The remaining 12 subjects were used for testing.
As depicted in Figure 4, all three forests performed similarly with 50th and 95th percentiles at 100% correct classification. Fifth percentiles differed with 83% for SLB, 91% for SLA and 100% for URF. These results are otherwise consistent with the results obtained using simulated data.
Discussion
Our motivation for this research was biomarker discovery based on MALDI-TOF mass spectrometry (MS) data. MS data are characterized by both a small number of subjects and a large number of variables (most of which are non-discriminating between the classes), and require the use of robust classifiers that can handle such constraints. Previously it was unclear whether correlation among replicate spectra (common with data obtained in MS experiments) should be specially handled.
Our study results indicate that RF++ provides an approach to the analysis of cluster-correlated data that matches the performance of the existing (unmodified) RF algorithm applied at the sample level. The only caveat is that OOB error rate produced by the URF forests is typically an underestimate. Error rates for clustered data analyzed with URF should properly be estimated on a separate test dataset. We further demonstrated that the performance of SLB forests is typically better than the performance of SLA forests with respect to the detection of discriminating variables, classification accuracy, and AUC.
When the ICC was near zero, we observed substantial gains in variable selection and classification capabilities for both URF and SLB as compared to SLA forests. This is not surprising because the replicates are nearly independent when the ICC is small, and therefore averaging results in the greatest loss of information. Conversely, when the ICC is large (close to 1), the within-subject data are nearly identical and there is little additional information in the replicates. Subsequently, we observed little performance improvement when comparing forests as ICC approaches 1.
Overall, for number of subjects greater then 100 any of the three forests discussed here will produce similar prediction and variable selection accuracy.
Although this manuscript has focused on the analysis of technical replicates, dependence must also be taken into account in longitudinal studies and designs in which the class assignments associated with subject replicates are potentially different. Our approach can be extended to longitudinal data by the utilization of a modified impurity measure [31][32][33] and to address the issue of the correlated predictor variables [34].
This report mainly considers the issue of classification of data clustered at the subject level. Some of the functionality of (the original) Breiman's RF has been omitted, such as regression analysis where the outcomes are continuous and weighted class analysis for unbalanced data sets. Missing values imputation for MS-based proteomics data has been described in Karpievitch et al. and can be performed prior to classification. We consider these features important and plan to incorporate them into future RF++ implementations.
MS data are an example of data with a small number of subjects and a large number of variables. The use of subject level bootstrapping (SLB) by RF++ is shown to be advantageous for the analysis of such data, because the sampling scheme is designed to accommodate data with multiple measurements for a given subject (e.g. technical replicates). Perhaps surprisingly, our results also suggest that it is still reasonable to utilize URF for the analysis of cluster-correlated data with two caveats: first, correct classification error rates must be obtained using a separate test dataset, and second, an additional post-processing step is required to obtain subject-level classification. Our studies also show that, even for moderate values of ICC, forests grown utilizing all available data (SLB or URF) classify and identify discriminating variables with greater accuracy than forests grown on averaged samples. RF++ constitutes a useful research tool contribution providing an easy-to-use graphical interface and eliminating the manual reconfiguration and recompilation requirements of Breiman's existing FORTRAN version. The SLB additions to the RF algorithm implemented in RF++ are valuable to researchers analyzing cluster-correlated data. RF++ can be used to effectively analyze both clustered and non-clustered data.
RF++ algorithm
RF++ is a classifier capable of analyzing cluster-correlated data. It was developed as a C++ implementation of the RF algorithm, as described by Breiman [16], with additional functionality specific to the structure of cluster-correlated data.
First, RF++ grows each tree on a bootstrap sample (a random sample selected with replacement) at the subject-level rather than at the replicate-level of the training data. Individual trees are unpruned classification/decision trees grown using the Gini impurity score. A particular subject is chosen at random from the pool of all available subjects and all of its replicates are allocated to the in-bag dataset. As mentioned previously, approximately 63% of the individual samples are in-bag (IB) and the remainder are held out in order to compute a runtime error estimate on the out-of-bag (OOB) samples. When using subjectlevel bootstrapping we also expect about 63% of the subjects to be placed in-bag. Subject-level bootstrapping ensures that bootstrap samples are constructed from independent units, or in this case, subjects, with correlated replicates collected from those subjects. Subject-level bootstrapping overcomes the problem of potentially exposing individual trees to all subjects (See Supplementary File S1 Section 1).
Since our primary goal is to provide a classification method applicable to cluster-correlated data, we are only interested in estimating the classification error rate and not in performing inference on the model components. For these reasons it is not necessary to include covariance estimates in the tree construction. Using the subject-level bootstrap results in unbiased classification error rate estimation, regardless of whether the dependence within clusters is incorporated into the tree construction.
Second, we provide a means for computing subject-level classification. Specifically, we first classify subject replicates at the sample-level and then perform a majority vote across the subject replicates in order to compute subject classification. The ability to classify at the subject level in addition to the replicate level is useful when analyzing clustered data in which all subject replicates belong to the same class. In such cases we are ultimately interested in subject-level classification, and not just classification of individual replicates from the same subject. Figure 5 illustrates RF++ replicate-and subject-level classification. If different replicates for the same subject belong to different classes (such as measurements taken at different time points), only replicate-level classification is produced.
Third, like Breiman's original Random Forest, RF++ provides an error rate based on OOB data [16]. The OOB replicate error rate estimate is always computed. When all subject replicates belong to the same class, we compute an unbiased running OOB subject-level error estimate. Occasional misclassifications (e.g. one or two replicate misclassifications out of a collection of replicates) generally have little effect on the final forest subject-level error rate.
It is important to note that even when subject-level error rate and classifications are computed, the replicate-level error rate and classifications are still computed and made available for closer examination on an individual replicate level. For example it may be of interest to know that 5 out of 10 subject replicates are correctly classified (replicate-level error rate of 50%). RF++ also produces proportions of votes for each class which gives an estimate of the probability that the subject (and/or the replicate) falls within a particular class. These proportions can be used in decision making models that use different cut-off values to distinguish between classes. For example, in a two class problem with 0/1 outcomes in which the cut-off is 0.5. However, one might want to explore the predictive performance (e.g. sensitivity, specificity, AUC) over a range of thresholds, and this is facilitated by the reporting of estimated probabilities of class membership. RF++ Variable Importance Measures. RF++ utilizes permutation-based variable importance measure implemented in Breiman's original RF. It has been shown that other variable importance measures (such as number-of-times-used and Gini importance) do not perform as well with respect to detecting discriminating variables [20]. Number-of-times-used, a count of how many times a variable is used to split a node in a forest, is susceptible to random variable subsampling effects at each node split. This means that, due to the selection of a variable from a much smaller set (usually a subset of size ffiffi ffi q p , where q is the total number of variables in the data set), the variable may be chosen for a split even if it is not truly discriminating. In fact, number-oftimes-used is not implemented in the current FORTRAN version of RF. The Gini importance measure, on the other hand, is more robust [35]. It quantifies the decrease in the ''Gini impurity score'' computed at each node split, and can be accumulated for each variable across all trees. Gini importance has been shown to be biased towards variables with larger numbers of possible values, including continuous variables [20]. For example, Gini importance ranks a continuous variable as more importance than a binary variable even if both are equally discriminating.
The permutation-based variable importance measure is the least biased towards variables with a large range of values, as described by Strobl et al. 2007. Systems biology studies produce variables with wide continuous ranges, and thus we are less likely to encounter bias when using a permutation-based variable impor-tance measure. RF++ provides two variations of the permutationbased importance measure. In RF++ the simple permutationbased importance measure for variable v, I v , is described in Equation 2 as Here p c,t is the proportion of correctly classified replicates out of the total number of OOB replicates in a given tree t, p v c,t is the proportion of OOB replicates correctly classified after variable v has been randomly permuted across all OOB replicates for tree t, and T is the total number of trees in the forest.
The second variable importance measure included in RF++ is the mean decrease in margin (MDM) for each variable as shown in Equation 3. Margin is defined as the proportion of votes for the correct class minus the largest proportion of votes for an incorrect class (that is, the incorrect class that received the largest number of votes). The mean decrease in margin for variable v is defined as where p c,t is the proportion of correctly classified replicates out of the total number of OOB replicates for a given tree, t, p r,t is the proportion of OOB replicates incorrectly classified; p v c,t and p v r,t are the proportions of correctly and incorrectly classified OOB replicates, respectively, after variable v has been randomly permuted within the OOB replicates for tree t; and T is the total number of trees in the forest.
Training and Testing Data Generation
To test the performance of the RF++ algorithm, we generated training and testing datasets with cluster-correlated observations in which each subject had more than one replicate and where some covariates may also be correlated. Our goal was to simulate data derived from the replicate spectra obtained from MS TOF experiments. Therefore, in our simulations, we considered data with a small number of subjects and a large number of variables, most of which possessed no discriminating information. We modelled data that has already been preprocessed, i.e. aligned along the m/z scale, denoised, baseline corrected and where peaks were detected. As a result the number of peaks are usually reduced from tens of thousands to hundreds and all peaks have the same m/z scale [29,36,37]. MS TOF data preprocessing is an essential step that is performed prior to analysis with any classifier including RF++.
Our simulation study addressed the effects of varying ICC on variable selection and classification abilities of RF++. The ICC is defined as the proportion of total variance attributable to between cluster variability, and is given by where s 2 e is the within cluster variance and s 2 b is the between cluster variance, i.e. the variance of the random effects, and as such influences dispersion among the cluster locations. The cluster locations become increasingly 'spread out' as s 2 b increases. Thus we refer to s 2 b as the 'between cluster variance'. In our simulations, we fixed s 2 e at 1, and, based on Equation (4), selected s 2 b values of 0.11, 0.43, 1, 2.33 and 9 to produce ICC values of 0.1, 0.3, 0.5, 0.7, and 0.9, respectively. The log-transformed normalized intensities in real MS data are less skewed, with more similar variances and are roughly normally distributed [28,36]. We therefore simulated all log peak intensities from a normal distribution. For convenience, we chose a mean of 6 and variance of 1. For peaks that were discriminating (randomly selected a priori), we took the original peak mean and added (subtracted) one standard deviation to (from) it producing two distinct disease group means corresponding to the disease and control classes. Standard deviations for the two disease groups were unchanged. For each subject i and peak k, we generated j replicate m/z log peak values using the corresponding means and adding a subject-specific random effect, b ik , assuming that b ik *Normal 0,s 2 b À Á . For a given subject, the value of b ik remained constant for all m/z log peak intensity replicates, thereby creating a common 'shift' in that subject's observations that corresponded to the specified m/z value. To provide additional variation to the values, we added noise, given by e ijk , which we assumed followed a standard normal distribution. Additionally, we assumed that the random effects and the errors were independent. Conditional on the random effect, the subject replicates were assumed to be independent, but marginally the within-subject observations were correlated. For a given m/z value, we generated replicate log peak intensities using where i is the subject index, j is the replicate index for subject i, k is the peak index, and m k is the mean log peak intensity for the specified m/z value corresponding to the disease group of the i th subject. We produced replicate log peak intensities corresponding to 185 total m/z values for each subject. Three of the m/z values (peaks) were discriminating features, and the remaining 182 m/z values were pure noise. Noise peaks were generated from the same distribution as the discriminating peaks but with the means of the two disease groups being equal. For two of the discriminating peaks, we selected m disease~5 and m control~7 . For the remaining discriminating peak, we specified m disease~7 and m control~5 .
In the design above, the peaks are uncorrelated. This is not the case in real MS datasets. For this reason, we generated datasets with correlation between peaks. We generated vector e ij from a multivariate normal distribution e ij *N 185 0,C C ð Þ, where C C is the correlation matrix computed from the esophageal cancer dataset described in the Results section. Readers interested in a more detailed description of the data generation and the classification and variable selection accuracy of the forests on these data are referred to Section 3 of the Supplementary File S1.
Simulation study
In our simulation study we compared the impact of varying ICC on variable selection and classification performance for 5 different ICC values (0.1, 0.3, 0.5, 0.7 and 0.9), 5 different numbers of subjects (10,20,30,50 and 100), and 5 different numbers of replicates within subjects (2, 3, 5, 8 and 10). We therefore generated 125 training data sets to accommodate all possible combinations of the 3 parameters. In the training data, the total number of subjects was always equally divided between 2 classes. Thus, a training data set with 10 subjects had five disease and five control subjects.
To test the prediction and variable selection accuracy of RF++ we fixed the number of subjects to 100 and generated 25 test data sets. We again allocated equal numbers of subjects to each class to facilitate easy comparison.
To mitigate the effects attributable to random number generations for each data set, and to provide measures of uncertainty in our estimates, we repeated each simulation 200 times for each combination of ICC, number of subjects, and number of replicates. For each simulation, we obtained the average importance ranks of the three discriminating variables based on the MDM variable importance scores, the proportion of subjects correctly classified, and the AUC. Based on the empirical distributions of these performance measures, we summarized our results by reporting the median and the 5 th and 95 th percentiles.
In each of these 200 simulations we regenerated both the training and testing data sets. For each of the training data sets we grew 3 different types of forests: a SLA forest grown on averaged subject samples, a SLB forest, and an unmodified Breiman's forest, URF. All forests contained 2001 trees. We subsequently tested each forest's performance using the same testing data set. For testing of the SLA forest, subject replicates were averaged.
Supporting Information
File S1 Supplementary materials | 7,179.2 | 2009-09-18T00:00:00.000 | [
"Biology",
"Chemistry",
"Computer Science"
] |
Lectin spatial immunolocalization during in vitro capacitation in Tursiops truncatus spermatozoa
Abstract Spermatozoa interactions with the female reproductive tract and oocyte are regulated by surface molecules such as glycocalyx. The capacitation process comprises molecular and structural modifications which increase zona pellucida binding affinity. Lectins allowed us to describe glycocalyx changes during maturation, capacitation and acrosome reaction. This study had as its aim to identify lectin binding patterns using four lectins with different carbohydrate affinity in bottlenose dolphin (Tursiops truncatus) spermatozoa both before and after in vitro capacitation. Two semen samples from the same dolphin obtained on consecutive days were used, with four different lectin binding patterns becoming visible in both samples before and after capacitation. A highly stained equatorial segment with prolongations at the edges appeared as the most frequent pattern with Wheat germ agglutinin (WGA) in uncapacitated spermatozoa. However, it was homogeneously distributed over the acrosomal region after capacitation. Instead, the use of Peanut agglutinin (PNA) resulted in most spermatozoa showing high labelling in the acrosomal periphery region before capacitation and a homogeneous staining in the acrosomal region within the population of capacitated spermatozoa. Nevertheless, the most representative patterns with Concavalin A (ConA) and Aleuria aurantia agglutinin (AAA) lectins did not change before and after capacitation, labelling the acrosomal region periphery. These findings could contribute to the understanding of the reproductive biology of cetaceans and the improvement of sperm selection techniques.
Introduction
Most of the interactions between the spermatozoon and its environment inevitably have their starting point in an interplay with the sperm glycoprotein and glycolipid covering; hence the tendency to associate the acquisition of a mature glycocalyx with the achievement of a full sperm fertilizing ability (Schröter et al., 1999). Sperm glycocalyx composition in mammals has already been studied by a variety of authors (Bearer and Friend, 1990;Schröter et al., 1999;Töpfer-Petersen, 1999;Tecle and Gagneux, 2015), most of whom used lectins. These molecules are proteins or glycoproteins of a non-immune nature that bind to specific membrane carbohydrate sequences which could be potentially involved in primary oocyte recognition (Osawa and Tsuji, 1987). The distribution of sugars in sperm glycocalyx has thus been described by means of various lectins, not only in different mammal species such as the goat (Bawa et al., 1993), the mouse (Baker et al., 2004), the rabbit (Nicolson et al., 1977), the monkey (Navaneetham et al., 1996) and the boar (Jiménez et al., 2002) but also, and especially, in humans (Lee and Damjanov, 1985;Gabriel et al., 1994;Fierro et al., 1996;Gómez-Torres et al., 2012). Likewise, studies suggest that the distribution of lectin receptors changes during spermatozoon epidydimal maturation, capacitation and acrosome reaction in mammals (Magargee et al., 1988;Bawa et al., 1993;Peláez and Long, 2007).
According to other authors, lectins may prove useful to select spermatozoon subpopulations with the highest fertilizing capacity (Gabriel et al., 1994;Purohit et al., 2008;Gómez-Torres et al., 2012), which stresses the usefulness of knowing how glycocalyx varies in other biologically or commercially valuable species. All this information could help increase the knowledge about the reproductive biology of endangered or vulnerable species, including cetaceans, which in turn can improve the set-up of artificial insemination (Robeck et al., 1998;Robeck et al., 2005).
Tursiops truncatus is currently listed in Appendix II, Annex A, of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (Cites.org, 2019). In other words, although not enough data exist to categorize it as a species threatened with extinction worldwide, control needs to be exerted on its trade so that guaranteeing the survival of this species can remain the main priority. Even though breeding programs have been performed in this species thanks to the collaboration of aquatic parks, the difficulty to avoid consanguinity for the purpose of ensuring captive population sustainability has aroused great interest in the application of assisted reproduction techniques with the bottlenose dolphin (Andrews, 2000). For this reason, the knowledge of the molecular changes suffered by spermatozoa during in vitro capacitation could make it easier to select T. truncatus spermatozoon subpopulations with the highest fertilizing capacity for their use in artificial insemination processes. This would additionally facilitate genetic exchange along with the subsequent increase in genetic variability derived from maintaining group stability, avoiding the transport of animals or keeping a high number of breeding males in a single facility. Such measures can accordingly avoid further conflict and enhance the well-being of a dolphin group at reproduction time.
This study sought to describe the changes occurred in the glycocalyx of T. truncatus spermatozoa before and after in vitro capacitation using four lectins with different sugar affinity by fluorescence microscopy.
Semen collection
Ejaculated spermatozoon samples were obtained from a healthy adult bottlenose dolphin (T. truncatus) (which was 24 years old and weighed 187-kg) trained for voluntary semen collection (Keller, 1986) in Valencia's Oceanographic. The collection was performed in accordance with the Animal Welfare Act for the care of Marine Mammals and in agreement with the Animal Care Protocol followed at the Oceanographic. Semen collection constitutes a routine medical behaviour -typically requested by trainers under veterinary supervisionfor this species. Two samples obtained on consecutive days were used for this specific study after being transported to the laboratory keeping the temperature constant at 37ºC.
Semen analysis
A basic semen analysis was performed in both samples -subsequently transferred to a sterile graduated container with the aim of determining the total volume and storage in an incubator at 37ºC for 30 minutes. As for pH, it was measured with a reactive strip.
A 20-micron Spermtrack chamber (Proiser R&D, Valencia, Spain) served to assess sperm concentration and a motility count was performed from visu. Mixing 5µl of semen with 5µl of the eosin solution on a tempered slide allowed us to evaluate sperm viability. The sample was left at rest for 30 seconds, monitoring the preparation in a phase contrast microscope. A total of 200 cells were evaluated in each sample and in each concentration, motility and viability analysis too.
In-vitro capacitation by swim-up
At this point, sperm samples were divided into two physiological conditions: uncapacitated sperm (UCAP); and capacitated sperm (CAP) (Figure 1). Seminal plasma was removed after a ten-minute centrifugation at 250g and subsequently washed with Human Tubaric Fluid (HTF) (Origio, Malov, Denmark) for 5 minutes at 250g. Capacitation took place using the swim-up technique for 1 hour with a HTF medium supplemented by 5mg/mL bovine serum albumin (BSA) (Sigma, Madrid, Spain) at 37°C and 5% CO2.
Sperm sample fixation
Samples before and after capacitation were fixed in a 2% paraformaldehyde (TAAB Essentials for Microscopy Ltd, Aldermaston, England) phosphate buffered saline solution (PBS) (Life Technologies, Grand Island NY, USA) for 1h at 4°C, after which they were diluted in PBS to a final concentration of 1 million cells/mL and stored at 4°C.
Lectin labelling
Carbohydrate surface distribution in UCAP and CAP was characterized by means of AAA, ConA, PNA, and WGA lectins conjugated with FITC (Vector Laboratories, Burlingame CA, USA) (see Table 1 for the taxonomic name of sources and the specificity of lectins used). After being placed on a 10mm diameter round coverslip, fixed samples (5µL) were air-dried so that cells could attach to the surface. Afterwards, they were rehydrated with PBS for 10 minutes and incubated with a 2% BSA-PBS block solution for 30 minutes. An incubation of the coverslip with each FITC-conjugated lectin at a final concentration of 20µg/mL for 1h at room temperature in a humid chamber followed the blocking. Coverslips were later washed three times in PBS for 5 minutes each and assembled with Vectashield H-1000 with 4´6-diamidino-2-phenylindole (DAPI) (Vector laboratories, Burlingame CA, USA). Negative control experiments were performed omitting the lectin.
Statistical analysis
We determined lectin binding patterns (P) through the assessment of 1,600 UCAP cells and 1,600 CAP cells (Figure 1) using fluorescence confocal microscopy (Leica TCS SP2 Microsystems GmBH, Wetzlar, Germany) and Leica Confocal Software. After examining every staining pattern identified with the different lectins, a decision was made to consider only those present in more than 5% of the spermatozoa in a sample, whether before or after capacitation. Statistical differences between UCAP and CAP spermatozoa were tested for the different lectins using a t-Student distribution. Differences were regarded as statistically significant at a 95% confidence level (P<0.05), the statistical analysis being performed with the 22.0 SPSS software version.
Semen analysis
The seminal parameter values corresponding to both semen samples used in this study can be found in Table 2.
Characterization of lectin labelling patterns in uncapacitated and capacitated sperm
Analyzing the lectin binding pattern in the head spermatozoa before and after capacitation allowed us to detect four seemingly consistent different patterns -shown in Figure 2. Lectin binding patterns were named as follows: Pattern 1(P1): highly labelled acrosomal region; Pattern 2 (P2): equatorial segment stained with elevations at the edges; Pattern 3 (P3): highly labelled edges of the acrosomal region, and Pattern 4 (P4): highly stained equatorial segment and weak fluorescence in the acrosomal region.
Changes in lectin binding patterns after in vitro capacitation
The most frequent pattern with AAA lectin both uncapacitated (63.25%) and in capacitated spermatozoa (60.06%) was P3. Significant differences were additionally identified in pattern P1 between cells before and after in vitro capacitation (36.75% and 30.63%, respectively). Regarding P4, despite not being observed in uncapacitated samples, it appeared at 9.31% of cells after the swim-up (Figure 3). The same as with AAA lectin, the most abundant pattern after capacitation with ConA lectin turned out to be P3 (48.25%), this pattern being the most representative one in non-capacitated sperm (38.25%) too. Patterns P2 (12.25%) and P4 (11.25%) significantly changed following in vitro capacitation, though, lowering frequency to 5.00% and 7.00%, respectively. No statistically significant differences appeared in pattern P1 after capacitation (Figure 3).
Most of the uncapacitated sperm showed pattern P3 (53.50%), which largely diminished to 39.50% after in vitro capacitation, with PNA lectin. Instead, pattern P1 stood out as the most representative in capacitated cells (60.50%) -being significantly lower in uncapacitated spermatozoa (23.75%). P2 was present in 22.75% of cells prior to in vitro capacitation, but it disappeared after the swim-up process (Figure 3).
The percentage of WGA lectin patterns and their variation rate (Figure 3) showed that the P1 percentage increased to a great extent after capacitation -from 10.33% to 42.16%-this pattern being the most frequent one in capacitated spermatozoa. Whereas P2 appeared in 38.67% of cells before capacitation, its percentage significantly dropped to 22.01% in capacitated cells. As for the P3 percentage in uncapacitated and capacitated sperm, it reached 32.33% and 24.25% respectively, this difference being significant. Finally, P4 showed a sizeable decrease from 18.67% in non-capacitated sperm to 11.57% in capacitated sperm (Figure 3).
Sperm subpopulations according to lectin patterns
Spermatozoon glycocalyx has been described by means of lectins in some mammal species (Nicolson et al., 1977;Bains et al., 1993;Bawa et al., 1993;Navaneetham et al., 1996;Jiménez et al., 2002;Baker et al., 2004). However, the composition and redistribution of spermatic glycocalyx during the capacitation process in T. truncatus still remains largely unknown. This report attests the presence of carbohydrates recognized by AAA, ConA, PNA and WGA lectins in bottlenose dolphin sperm glycocalyx before and after in vitro capacitation.
In relation to the total number of binding patterns identified (P1-P4), the results obtained in this work differ from those of other studies about sperm glycocalyx in other mammal species. By way of example, Gómez-Torres et al. (2012) identified seven patterns in human sperm using the same lectins. In turn, several studies have revealed that lectin receptor quantity and distribution varies between normozoospermic semen samples and oligozoospermic (Jiménez et al., 2002;Purohit et al. 2008) or teratozoospermic ones (Gabriel et al., 1994). In fact, according to Gabriel et al. (1994) a close relationship seems to exist between WGA receptors on human sperm membranes and sperm morphology. Therefore, we probably identified fewer patterns in T. truncatus sperm in contrast to human species due to the differences in the lower reference limit of sperm morphology between both species. The spermatic lower reference limit for normal sperm morphology in humans is 4% (WHO, 2010). Instead, normal morphology values of 90% have been described in previous studies dedicated to T. truncatus semen (Migliorisi et al., 2011;van der Horst et al., 2018), which suggests that seminal parameters could be better conserved in dolphins. The greater degree of homogeneity in bottlenose dolphin sperm morphology perhaps has to do with the fact that human sperm has more lectin binding patterns that bottlenose dolphin sperm.
Immunolocalization of WGA and PNA
We observed four fluorescence binding patterns with WGA lectin (P1-P4). More specifically, patterns with highly stained acrosomal region and two prolongations towards the periphery of the acrosomal region (P2) stood out as the most frequent lectin distribution prior to capacitation. In contrast, receptors for WGA were homogeneously extended throughout the acrosomal region after capacitation (P1). Jiménez et al. (2002) unveiled a connection between WGA binding sites and fertility in boars, since WGA labelling was significantly lower in the spermatozoa of subfertile than in those of fertile boars (Jiménez et al., 2002). A similar redistribution appeared with PNA lectin. Uncapacitated spermatozoa showed a highly stained periphery of the acrosomal region (P3) but a homogeneous bound to the whole acrosomal region after capacitation (P1). This similarity between PNA and WGA patterns could derive from the arrangement of the glyceride residues with which they have affinity within glycocalyx. Regarding WGA lectin, which recognizes sialic acid and N-acetylglucosamine, it requires the presence of N-acetylneuraminic acid bound to galactose or N-acetylgalactosamine, carbohydrates with PNA affinity (Lassalle and Testart, 1994). Checking that WGA and PNA present similar and common patterns in our study should consequently come as no surprise. The distribution of the receptors is probably similar after in vitro capacitation because of the analogous distribution within glycocalyx. Furthermore, the molecular glycocalyx model proposed by Tecle and Gagneux (2015), shows that the sialic acid and N-acetylglucosamine are closely linked to galactose and N-acetylgalactosamine residues.
Added to this, the redistribution over the acrosomal region (P1) after in vitro capacitation observed with WGA and PNA lectin could correlate with a larger contact surface before oocyte recognition. Moreover, PNA lectin has been previously used to assess acrosomal status in T. truncatus (Montano et al., 2012). Therefore, the use of this lectin acts as a membrane integrity indicator and could be used for assessing the acrosomal morphology.
Immunolocalization of AAA and ConA
In any case, our study did not reveal any significant differences between the most frequent patterns before and after capacitation with ConA and AAA lectins -which showed a highly stained periphery in the acrosomal region (P3). Perhaps mannose, glucose and fucose residues change in processes other than capacitation, such as gamete recognition, as exemplified by mice (Lee and Ahuja, 1987), or the methodology used prevents us from observing the changes occurred after in vitro capacitation. Fleming et al. (1981) argued that bottlenose dolphin spermatozoa were capable of fusing with zona-free hamster eggs only after preincubation for 2 hours, which leads us to think that the 1-hour-long incubation carried out in this study did not suffice to redistribute AAA and ConA receptors.
Role of sperm glycocalyx during fertilization
It also deserves to be highlighted that fluorescence distribution in P3 might suggest what the morphology of the anterior region of the spermatozoon head in T. truncatus is like. Kita et al. (2001) described the anterior region of the sperm head in this bottlenose dolphin as being thin, flat and slightly concave using scanning electron microscopy (SEM). Moreover, field-emission scanning electron microscopy (FE-SEM) has provided higher resolution images of mammalian spermatozoa than conventional SEMs, thus permitting to observe cetacean spermatic morphology in detail (Meisner et al., 2005). Therefore, different spermatozoa membrane domains have been described in this clade -e.g. "apical ridge", which refers to the marginal region of the anterior area of the sperm head. The peripheral zone which showed the P3 pattern in our study possibly corresponds to the elevated areas of the apical ridge.
In short, the specific distribution of lectin binding in bottlenose dolphin spermatozoa observed in this study definitely provides additional evidence not only about the presence of different domains in the plasma membrane surface but also about the changes that it experiences during in vitro capacitation.
Furthermore, two of the most common patterns identified in our results typically show a distribution at the periphery or at the boundary of membrane domains (P2 and P3). Studies on dog spermatozoa (Bains et al., 1993) and human spermatids (Lee and Damjanov, 1985) revealed a similar pattern, described as a "semilunar staining in the apical part of the acrosomal region". An outstanding characteristic found in the male reproductive tract of dogs is the absence of seminal vesicles (Bains et al., 1993) -a peculiarity shared by dolphins (Harrison, 1969). These structures, along with the prostate, secrete glycoproteins which bind to the sperm surface in a selective way. Therefore, the fact that their glycocalyx has been formed without the influence of seminal vesicles probably explains the similar distribution of carbohydrates in dolphin spermatozoa and in human spermatids as well as in ejaculated dog spermatozoa.
Receptors in ejaculated bottlenose dolphin spermatozoa are generally found at the equatorial segment level, which makes sense if we remember that one of the main functions of glycocalyx is cell recognition until meeting the oocyte (Friend, 1982;Tecle and Gagneux, 2015). The binding patterns of each lectin can thus be linked to the head regions of greater interaction between the spermatozoon and its immediate environment. Even though many molecular aspects of the fertilization process still remain unknown when it comes to bottlenose dolphins, several authors have described the presence of longitudinal edges in the postacrosomal region (Fleming et al., 1981;Kita et al., 2001;Meisner et al., 2005) which could play a role in the fusion of the spermatozoon with the oocyte and / or in the early post-fusion (Fleming et al., 1981). Consequently, the initial recognition of the dolphin spermatozoa with the oocyte is likely to occur in the region of the equatorial segment, after which fusion takes place in the postacrosomal area.
Finally, since labelling with lectins could thus prove useful in selecting the sperm subpopulations with a fertilizing potential, the most common patterns of each physiological condition studied in this report could represent sperm subpopulations with a higher fertilizing capacity.
Glycocalyx desing based on major AAA, ConA, PNA and WGA lectin patterns
In the light of all the above, we propose a model which represents the most representative location of different sugar surface in T. truncatus spermatozoa and its changes after in vitro capacitation ( Figure 4). As shown by our model, the glycans recognized by means of AAA and ConA lectin do not change after capacitation and are distributed around the acrosomal region. However, those recognized by WGA and PNA appear in the periphery of the acrosomal region and in the equatorial segment in uncapacitated spermatozoa and throughout the acrosomal region after in vitro capacitation. According to these changes, WGA and PNA could be used as indicator of in vitro capacitation.
Conclusion
These findings lead us to conclude that the labelling with WGA, PNA, ConA and AAA makes it possible to know the composition and distribution of sugars in the membrane of T. truncatus spermatozoa. Moreover, through the observation of the binding patterns in different stages; it could also evidence the presence of several sperm populations. According to this, it is possible the assessment of the fertilizing capacity based on the use of lectins. These findings would optimize artificial insemination in this species and improve the quality of life at the time of captive reproduction. | 4,547.2 | 2020-02-05T00:00:00.000 | [
"Biology"
] |
Mechanical parts picking through geometric properties determination using deep learning
In this study, a system for automatically picking mechanical parts required in the industrial automation field was proposed. In particular, using deep learning, bolts and nuts were recognized and geometric information of these parts was extracted. By applying YOLOv3 specialized in high recognition rate and fast processing speed, the recognition of target object, location, and postural information were obtained. The geometric information for the bolt can be obtained by creating two bounding boxes and calculating the orientation vector formed by these center values of two bounding boxes after successfully detecting two individual bounding boxes. Moreover, to obtain more precise geometric information on bolts and nuts, image distortion compensation on the detected object was done after detecting the center value of the bolt and nut through YOLOv3. Based on this result, it was proven that an automatic picking of the mechanical parts using a five-axis robot was successfully implemented.
Introduction
In modern society, factory automation by robots is being conducted extensively. Robots are being used in various process fields such as manufacturing, processing, packaging, and assembly, and processes that involve manual work gradually disappear. 1,2 In particular, robots have become essential in automated processes that require high load-bearing capacity and accuracy. 3 Recently, intelligent robots combined with rapidly developing artificial intelligence have attracted great attention and many studies are being conducted. 4,5 However, in most industries, industrial robots are mainly used to grab or move a target object with a fixed position and posture. This is because the level or cost of technology required to build an intelligent robot system for autonomously handling objects in arbitrary positions is high. 6 Additionally, since the existing automation method requires the complete design of the entire process system from the initial process design stage, it is difficult to respond when addition and correction of the intermediate process is required. In particular, if a variable process structure such as a smart factory is applied more in the future in the industry, the existing automation method will become more obsolete, so it is urgent to secure artificial intelligence-based process technology that can be implemented with relatively easy technology and low cost.
Therefore, technologies such as robot vision have been developed to solve this problem, which refers to a technology that combines a visual sensor with a robot and gives the robot the ability to recognize and identify objects through images. 7 However, to solve complex problems such as bin picking 8 using robot vision, it is necessary to estimate the 3D position and posture of an object, so a high-performance 3D camera sensor is essential. 9 The requirement to use an expensive 3D sensor is a significant obstacle to building an economical automation process, and it is the biggest reason that robot vision technologies are rarely adopted in the actual field even though highly useful robot vision technologies are being developed in various ways. To compensate for this problem, it is necessary to obtain the most accurate object information (class, position, orientation, etc.) using a relatively inexpensive 2D camera. 10 Previously, this work was mainly implemented through the classical image processing technology, but in the 2010s, image processing using deep learning, which is robust to changes in the surrounding environment, has been mainly performed. 11,12 Among the numerous deep learning models, the object detection model performs classification and location detection at the same time. The object detection model is a popular technology because it is useful in real life and professional fields, and it is fast and accessible. So far, starting with the first R-CNN series (R-CNN, Fast R-CNN, Faster R-CNN), various models such as YOLO, SSD, and RetinaNet 13 have been developed, and research on the development of models with better performance are in progress.
However, the object detection model is limited in its application to the actual automated process system in that it can give object class and position information but cannot provide orientation information. For this reason, object detection has been mainly used in cases where only approximate information of an object in the screen such as a security camera or a vehicle black box is required. Therefore, it has been known that the sensing system using only the object detection model is so difficult to apply in a process aimed at accurately grasping and picking up the posture of an object.
Generally, the method of obtaining orientation information through point cloud application 14 and additional sensor fusion 15 was considered, and the object detection model was only applied as a supplementary role. The above technologies require specialized technology and computing cost that cannot be compared to the use of a deep learning-based object detection model alone. Therefore, it is time for a simpler and affordable solution.
To solve the shortcomings of the existing deep learningbased object detection model that it cannot determine the orientation of an object, in this study, a new method that each part of the object to be detected is learned as a different object, and the orientation of the object is obtained through the position information of the separated object is presented.
Using object detection model, by giving different labels for each part of an object, which has a nonuniform feature, different bounding boxes for each part are found through deep learning. Consequently, an orientation vector that connects the center positions of each bounding box can be obtained. In this way, the proposed scheme can be used to effectively acquire the center position and orientation information of machine parts such as bolt and nut, through this scheme automatic machine part picking process can be completed. Some works have been reported on bin-picking system using deep learning. 16,17 These works mainly focused to classify and estimate the size of the target object buy creating a bounding box without specifying the detailed geometric information on the object such as posture.
In this study, using YOLOv3, a commercially available deep learning tool, we propose a method to find the center value and orientation of an object even when the shape is not uniform such as a bolt. It completes the automatic bolt and nut-picking system, which is different from the general bin picking that simply recognizes and picks up a target object. Afterward, the picking and moving operation of the bolts and nuts were directly implemented in a five-axis robot using an inverse kinematics solution. 18 The reliability of the proposed method was verified through repeated experiments after placing the bolts and nuts randomly on the plate. Moreover, the precise center and posture values of the detected target object to accurately pick it up were determined by correcting the distortion of the image that is inevitable in a cheap monocular camera.
Deep learning and object identification
In this work, we propose an automatic bolt and nut-picking system ( Figure 1) that recognizes a target object from bolts and nuts randomly placed on a flat plate, determines the center position and direction of the target object, and then picks up and moves it to the designated position. YOLOv3, a well-known object detection tool, is adopted here but special scheme to find the object and determine the geometric information of the target object is proposed. The input data set comes from the images captured by a camera installed above the robot.
Object identification
In the training process, M8 bolt and M8 nut images taken by the camera were used. The camera is the oCam-5cro-u-m model of WITHROBOT, a South Korean company, with a resolution of 1280 Â 720, and the five-axis robot is a lowcost robot driven by Dynamixel servo motors from Robotis company.
PyTorch-based YOLOv3 (eriklindernoren, github) 19 was used for image training and testing. The YOLO series (YOLO, YOLOv2, YOLOv3, etc.) is a deep learning model for object detection widely used in real-time image processing because it provides high-efficiency results in learning time through an optimized network.
The YOLO 20 series divides image into NÂN grids and extracts classification and bounding box information for each grid. Naturally, the loss function also reflects both the classification and the bounding box. For more details about the loss function, refer to the study of Redmon and Farhadi. 20 YOLOv3 21 used in this study is further developed from the existing YOLO and performs object detection with three-scale layers. YOLOv3 creates three layers of 13 Â 13, 26 Â 26, and 52 Â 52 grid scale by resizing input image of arbitrary size into 416 Â 416 and then conducting convolution through Darknet-53 CNN structure. Each of the three layers is responsible for capturing large, medium, and small objects. Finally, the following output of tensor T is derived for each grid as shown in equation (1) Here, ðt x , t y Þ are the center coordinates, ðt w , t h ) are the width and height of the bounding box with respect to the image plane (x, y), and p o is a confidence score indicating the probability that an object exists in the corresponding bounding box. p 1 *p c is the probability that the corresponding object will be classified into each class for a total of c classes. In the case of YOLOv3, three bounding boxes per grid can be predicted, so B ¼ 3. In other words, the output for one grid contains coordinates information and class probability for three bounding boxes.
Finally, only bounding box with a confidence score higher than the threshold specified by the user is displayed on the screen with the highest probability of the class name. Figure 2 shows the process of forming bounding boxes using YOLOv3.
The input image size is 1280 Â 720 Â 3. The threshold of confidence score was set to 0.85. The threshold for the NMS (non-maximum suppression) 22 function that controls the overlapping capture of the same bounding box was set to 0.1. Since the target objects of this work are M8 size bolts and nuts, training data sets using images of M8 bolts and nuts were created and used. Therefore, the total number of classes is four, including three classes on the bolt and one class on the nut.
The reason that the number of classes is four is aiming to finding the bolt orientation. To grip the bolt accurately in a robot gripper, information on the orientation is essential, but the YOLOv3 result only informs the center position (x, y), width (w), and height (h) of the object through a bounding box, thus, the orientation of the object is unknown.
In this work, rather than finding the bolt by YOLOv3, it is divided into three classes: Whole bolt, Bolt head, and Bolt tail. On the other hand, the shape of the nut circular, so one class is enough for nut detection and its geometric information.
In learning, as shown in Figure 3, the bolt head was designated as Bolt head, the screw part as Bolt tail, and the When the gripper is placed as shown in Figure 4, the orientation of the gripper can be determined through equations (2) to (4) On the other hand, as stated before, the orientation of the nut is unimportant due to its circular shape, so picking of the nut is possible with keeping the initial orientation of the gripper.
In this work, a data set for learning was produced by taking 1000 images of bolts and nuts placed randomly on the floor. Therefore, no other objects were put in the learning data, and all of the data set were taken directly with a camera. Thus, six to eight bolts and nuts were included per image, increasing the learning efficiency compared to the number of image data.
YOLO Training
At the first stage, labeling was conducted using YOLOv3 label-master (tzutalin, github). 23 Annotation was created by designating the class and size for each bolt and nut in image as shown in Figure 5. As previously explained, there are four classes in the bolt: Whole bolt, Bolt head, Bolt tail, and one class for the nut.
In addition, in the actual robot work, a human hand or a robot gripper could enter the work space. Therefore, to recognize only bolts or nuts, a data set including externally intervened objects was used. If the hand or gripper is not labeled within the data set, as shown in Figure 6, YOLOv3 determines it as an object not to detect and thus learns not to create a bounding box.
After that, data augmentation was performed to secure more input data set. Using the imgaug library (aleju/ imgaug), 24 five options were applied: Hue value change, brightness change, contrast, blur, and dropout ( Figure 7). Here, hue change was applied in common to other four argumentation. At this time, as shown in Figure 8, the process was repeated for every 100 raw data, and learning time was saved by producing the next data while the previous data were being learned. Finally, the existing 1000 image data set was amplified to 5000 through image augmentation.
Camera calibration
Next, in order for the robot to accurately pick up the bolt and nut placed on the floor and move it to the designated location, the 2D coordinates (x; y) for the object, which is obtained by performing object detection and determination of geometric information, should be converted into 3D coordinates based on the reference coordinates ( Figure 9). The transformation relationship between image frame and reference frame is summarized in the form of camera equation (5) below In equation (5), the values in I P c; avg is the camera's intrinsic parameters, representing the transformation between the image frame and the camera frame. The elements of c M r is the camera's extrinsic parameters, indicating the transformation between the camera frame and the reference frame. Each component of I P c; avg and c M r is shown on the last term in equation (5).
R 3Â3 is the rotational matrix, t 3Â1 is the translational vector, f x and f y are the focal lengths for the x and y components, respectively, c x and c y are the center points for the x and y coordinates, respectively, and skew À cf x is the asymmetry coefficient, which is a value that occurs when the image is tilted due to a precision problem during camera manufacturing, and it is zero in most cases. Finally, s is the scale factor.
Camera calibration 25,26 is the process of obtaining internal and external parameters, and it was obtained using the Camera Calibrator app of MATLAB. The camera is located 400 mm above the floor, and the calibration was repeatedly performed by taking pictures of 13 checkboards. Through the process of substituting and verifying the parameters obtained through the camera calibration, the most appropriate calibration matrix was confirmed by equation (6) Even after the calibration is done successfully, there is no guarantee that the position and orientation of the detected object with respect to the reference frame is correct because the image captured to the camera is likely to be distorted as long as a cheap camera is employed. In particular, the image of the object far from the plate center is more distorted than the image of object placed at the center of the working plate. Here, to correct the position and orientation of the object associated with the distorted image, the correct position and orientation of the object were obtained using the lens distortion coefficient. 27 where k 1 and k 2 are lens distortion coefficients, which are obtained from internal parameters during the calibration process and are unique values of the lens regardless of resolution. x distorted and y distorted are distorted values expressed as a pixel location obtained through object recognition, and x and y are distortion-corrected values that is also given as a pixel location. r is the shortest distance from the camera origin to the corresponding pixel, and it is obtained from r 2 ¼ x 2 þ y 2 . The distortion correction coefficients obtained here are k 1 ¼ À0:4328 and k 2 ¼ 0:338. That is, it can be seen that the distortion is small when an object appears in the center of the image frame, whereas the distortion increases as the distance from the image center increases. After applying distortion correction, the value of the object's center position relative to the image coordinate is converted back to the position relative to the reference coordinate through the correction matrix, which becomes the actual position where the robot can pick up the object. Then, the centers between the bolt head and the bolt tail are used to determine the orientation of the object, and details are described in the next section. Finally, the robot arm uses the detected bolt or nut and its geometric information to accurately pick it up and move it to the target position through the robot's inverse kinematics.
Experimental results in object detection by YOLOv3 and camera calibration
In the learning process to determine the four classes for the bolt, and one class for the nut, and geometric information of the bolt, 2000 epochs were trained for 6000 bolts and nuts data sets through YOLOv3, and the loss was finally reduced to about 0.03. Normally, in YOLOv3, if the loss is less than 0.06, it is considered that the learning is perfectly done. However, this loss is only for the training data set, so the accuracy of the object detection and its geometric information when the actual image is applied may not be guaranteed. To ensure the performance on detection of bolt and nut and orientation angle of the bolt, experiments were performed directly using the finally learned weight values and the success rate for picking up the bolt and nut was measured. The experiments were divided into three areas: Performance on the detection and geometric information for an object through YOLOv3, image correction, and object pick up and movement test.
Detection performance test
To check whether the proposed learning scheme through YOLOv3 to detect the bolt and nut and determine its geometric information was successful, we tried to check whether bounding boxes were created correctly after detecting the bolt. A total of eight objects (four bolts and four nuts) were randomly placed on the plate and the bounding boxes generated from the image were analyzed ( Figure 10).
If detection is performed perfectly, 16 bounding boxes should be created for 8 objects, three per bolt (Whole bolt, Bolt head, and Bolt tail) and one per nut (Nut), respectively. Among these bounding boxes, the accuracy was derived by calculating the number of times the bounding box was incorrectly captured. The four types in which the bounding box may be incorrectly caught are as follows.
(a) When an object to be caught is missing (b) When multiple bounding boxes for one object are captured (c) When captured where there is no object (d) When the label is incorrectly classified Among these, cases (c)and (d) can be solved by creating the test environment similar to the environment of the learning data set. In the actual test, only errors corresponding to cases (a) and (b) appeared.
To obtain the accuracy of detection for each class, the test is repeated 20 times to obtain 320 bounding boxes. After counting the bounding boxes that are detected as missing or duplicate for each class (80 each), the detection accuracy was derived for each class and the results are shown in Table 1.
All four classes showed an accuracy of 90% or more, and the nut was 100% accurate, which states that all bounding boxes for the nuts were detected perfectly without error. Bolt tail's accuracy was the lowest at 92.50%. This is associated with the fact that the tail shape does not have relatively distinct feature compared to other classes.
Since it plays a crucial role in determining the gripper's posture for pickup, a more data set learning is required.
Image calibration performance test
In the case of the monocular camera used in this study, the radial distortion occurred, which resulted in the shift of the image outward like a convex lens. Since the detected bounding box coordinates become also inaccurate by image distortion, image distortion should be corrected for the robot to successfully pick up the object. Object detection was done by placing one nut in the center of the working plate, where the distortion is least, and four nuts on the edge, where the radial distortion is the most (Figure 11).
By applying distortion correction (9-10) for the center coordinates of each bounding box of the five nuts (denoted by A, B, C, D, and E) the center coordinates for the bounding boxes are corrected. Table 2 shows the results of distortion correction for the center coordinates of five bounding boxes. As can be seen from this result, the more the object moves away from the center of the image, the greater the distortion occurs. After estimating the center value of the object using YOLOv3, it was transformed into the value with respective to the reference coordinates and then compared with the actual measured value. Table 3 shows the comparison results between the two center values. The resolution of the camera used here is 1280 Â 720.
Nut A in the center of the image has a zero error, and the remaining four nuts show errors of approximately 1 mm to 4 mm compared to the actual coordinates. Since B, C, D, and E are at the location where the image distortion is most severe, the relatively large error occurs.
Since the width of the gripper used in this work is 20 mm, the maximum error of 4 mm was judged to be within the allowable range for the gripper used, and it was not a big problem in picking the nut. However, since more precise control is required when assembling the actual nut, it is necessary to perform a more rigorous calibration work and distortion correction.
System configuration
Here, experiments were conducted to confirm the reliability of object detection and its geometric information determination. We checked the whole processes after placing several bolts and nuts on the working plate: detecting the bolt and nut, grasping it with right posture, finally moving it to the designated location. Figure 12 is the overall work flow of picking up, transferring, and dropping off bolts and nuts after identifying the target. Using the proposed method that divides several parts from one object by creating bounding boxes through YOLOv3 the center and orientation information of each bolt and nut are identified, and these values are transformed relative to the reference coordinates to inform the robot gripper to pick it up. Then, the robot determines the picking order for identified objects, and then the geometrical information on the bolts and nuts and picking order is delivered to the robot arm controller.
Once the position and orientation of the target object are identified through the proposed deep learning algorithm, the inverse kinematic solution to control the robot is adopted to work for picking up, transferring to the designated location, and dropping off the target object. Generally, since kinematic decoupling may not be designed satisfactorily in a degree-lacking robot system, it is difficult to solve the inverse kinematics problem using a geometric solution or an algebraic solution. Therefore, a numerical solution for solving the inverse kinematics was developed and then applied. The design of the inverse kinematics solution of the robot is similar to described in detail. 18 Now, the pseudo code of the whole process is summarized in the following box.
Bolt orientation test
In this part, experiments were conducted to check whether the bolt orientation obtained by the proposed method on object recognition and determination of geometric information was correct. The orientation angle of the bolt (Figure 13) that comes by connecting the two center values of the bolt head and bolt tail once two center values are successfully identified is calculated as follows ¼ Atan2 ðy ct À y ch ; x ct À x ch Þ where ðx ch ; y ch Þ is the bolt head center values and ðx ct ; y ct Þ is the tail center values that are determined by YOLOv3 training and test. Then, the orientation angle was compared with the directly measured angle. Figure 14 shows the bounding boxes of bolt head and tail for each bolt and the corresponding orientation angles for the bolts. Table 4 shows the comparison between the determined orientation angles of five bolts and the measured angles.
In this experiment, an insignificant orientation angle error of 0.3 or less was found for all five bolts. To increase the reliability of the method for determining the bolt posture, six additional experiments were conducted. Table 5 shows the results, similar to Table 4, and it can be seen that the average errors are less than 0.6 for all bolts.
Picking test
After placing two bolts and two nuts in the work space, the robot is controlled to pick them up and move them to specific positions one by one. Figure 15 shows the entire picking process when performing the task. Figure 16 shows that even with the gripper and human hand moving in the workspace during the operation, the bounding boxes are captured only for bolts and nuts on the plate by training the data set shown in Figure 6. It can be confirmed that object recognition of bolts and nuts proceeds smoothly even if such external intervention occurs. Table 6 summarizes the results of picking task for 20 times, 50 times, and 100 times each for bolt and nut, respectively.
Repeated picking task test
As a result of the tests, it was confirmed that the picking and subsequent transporting of the target object were performed very well. There was no significant difference in the success rate when picking bolts and nuts repeatedly 20 times, 50 times, and 100 times in the picking experiments. However, in the case of repeating 100 times, there were three times of bolt-gripping failures. Some failures belong to the second case described in section "Image calibration performance test," which was caused by an incorrect postural command because two bounding boxes for one bolt tail were caught for one bolt, and this can be resolved by more appropriately adjusting the YOLO v3's NMS value. Another failure factor is the fourth case described in section "Image calibration performance test," where the bolt head and tail are recognized as the same class. In other words, when the surrounding environment changes, the tail of the bolt is not recognized correctly, and the three bounding boxes are not clearly distinguished. This can be overcome by properly adjusting the threshold value of each corresponding bounding box.
Performance comparison with general YOLOv3 based on COCO data set
The performance of the proposed YOLOv3-based object detection process was compared with YOLOv3 (named Original) performed on the basis of the existing COCO data set. For mAP, the original referenced the results of YOLOv3-416 shown in Levine et al., 17 and the performance results of this study were obtained from the detection rate shown in Table 7. Frame per second (FPS) was set as the average value directly measured for 1 min.
The mAP was 96.25, which was significantly improved compared to the original case of 55.3. It is regarded as a result of learning by applying various image options in a limited workspace. For the actual application process, the goal was to achieve mAP of 90 or higher, and although it is not perfect, it is understood that it has reached a sufficiently applicable value.
In the case of FPS, it was reported as 34.48 in the original case, but in the actual execution, it was shown to be 12.07. This seems to be a difference due to computing power. The FPS of this study was measured to be 11.72, which was similar to the previous value.
As a result, this study succeeded in achieving sufficient mAP at a level that can be applied to the process while acquiring object orientation information that was previously impossible through YOLOv3 without reducing FPS.
Conclusions
In this work, an automatic bolt and nut-picking system that recognizes bolts and nuts and extracts geometric information at the same time by applying YOLOv3 architecture was introduced, and the effectiveness of this system was confirmed through actual tasks. In the case of bolt, by creating multiple bounding boxes for one bolt, the picking position was accurately determined by the center of the bolt head, and a vector connecting the two centers of the bounding boxes of the bolt head and bolt tail was found to determine the posture to pick up the bolt. Also, even if an object other than the target object intervened in the middle of object recognition, only the target object was detected by excluding the intervened objects in the training process. As a result, using a basic YOLOv3 architecture, it was confirmed that automatic pickup of target object from bolts and nuts randomly placed on the plate can be achieved with sophisticated object detection algorithm and its geometric information extraction. In this work, since object detection was performed with a low-cost monocular camera, the center value of the bounding box was different from the actual value due to the camera distortion. To solve this problem, the image correction is performed to find the correct object center and then send the information to the robot controller. Due to the limitation of the monocular camera, automatic picking was performed only for bolts and nuts placed on the flat working plate, which has the fixed Z axis value. By further expanding the work, it is expected to be able to perform automatic pickup of objects on a curved surface by introducing stereovision using a binocular camera system or an additional distance measurement sensor.
On the other hand, deep learning algorithm is advancing very rapidly, and the YOLO model applied to this system has been upgraded from YOLOv3 to a higher version such as YOLOv4 and YOLOv5. If the latest high-performance YOLO model for object detection along with appropriate sensors is employed, it is expected that an automatically picking a target arbitrary placed on 3D surface with higher reliability can be developed.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported by Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government(MOTIE) (P0008473, HRD Program for Industrial Innovation).
Supplemental material
Supplemental material for this article is available online. | 7,166.6 | 2022-01-01T00:00:00.000 | [
"Computer Science"
] |
ON THE STRUCTURE OF THE GLOBAL ATTRACTOR FOR NON-AUTONOMOUS DYNAMICAL SYSTEMS WITH WEAK CONVERGENCE
The aim of this paper is to describe the structure of global attractors for non-autonomous dynamical systems with recurrent coefficients (with both continuous and discrete time). We consider a special class of this type of systems (the so–called weak convergent systems). It is shown that, for weak convergent systems, the answer to Seifert’s question (Does an almost periodic dissipative equation possess an almost periodic solution?) is affirmative, although, in general, even for scalar equations, the response is negative. We study this problem in the framework of general non-autonomous dynamical systems (cocycles). We apply the general results obtained in our paper to the study of almost periodic (almost automorphic, recurrent, pseudo recurrent) and asymptotically almost periodic (asymptotically almost automorphic, asymptotically recurrent, asymptotically pseudo recurrent) solutions of different classes of differential equations.
Introduction
Denote by R n the n-dimensional real Euclidian space with the norm | · |, and by C(R × R n , R n ) the space of all continuous functions f : R × R n → R n equipped with the compact-open topology.
Consider the differential equation where f ∈ C(R × R n , R n ). Assume that the right-hand side of (1) satisfies hypotheses ensuring the existence, uniqueness and extendability of solutions of (1), i.e., for all (t 0 , x 0 ) ∈ R × R n there exists a unique solution x(t; t 0 , x 0 ) of equation (1) with initial data t 0 , x 0 , and defined for all t ≥ t 0 .
Equation (1) (respectively, the function f ) is called regular, if for every x 0 ∈ R n and g ∈ H(f ) := {f τ : τ ∈ R} (where by bar we denote the closure in the space C(R × R n , R n ) and f τ (t, x) := f (t + τ, x) for all (t, x) ∈ R × R n ), the equation possesses a unique solution ϕ(t, x, g) passing through the point x 0 at the initial moment t = 0, and defined on R + := {t ∈ R| t ≥ 0}.
Theorem 1.1. [9,ChII] Suppose that f ∈ C(R × R n , R n ) and H(f ) is a compact subset of C(R × R n , R n ). Then, the following statements are equivalent: (1) is uniformly dissipative; (ii) there exists a positive number R 0 such that for all (x, g) ∈ R n × H(f ).
At light of Theorem 1.1, it is said that equation (1) is dissipative (in fact, the family of equations (2) is collectively dissipative, but we use this shorter terminology) if (3) holds.
Problem (G. Seifert [20]): Suppose that equation (1) is dissipative and the function f is almost periodic (with respect to time). Does equation (1) possess an almost periodic solution?
Fink and Fredericson [20] and Zhikov [32] established that, in general, even when equation (1) is scalar, the answer to Seifert's question is negative.
Related to this result, there are the following interesting questions: a) To extract some classes of dissipative differential equations for which the response to Seifert's problem is positive; b) To indicate the additional (it is desirable "optimal") conditions which, jointly with dissipativity, guarantee the existence of at least one almost periodic solution of equation (1).
Below we include a short survey on results concerning the questions a) and b). a) For the following classes of dissipative equations of type (1), the response to Seifert's question is affirmative: linear equations [9,ChII], quasi-linear equations (weak non-linear perturbations of linear equations) [7,9]; holomorphic equations [5,6,8,9]. b) Zubov [35] established that equation (1) admits a unique almost periodic solution if it is convergent, i.e., it admits a unique solution which is bounded on R and also uniformly globally asymptotically stable. This result was generalized for equations (1) with recurrent coefficients by Cheban [9,ChII] and with pseudo recurrent coefficients by Caraballo and Cheban [4].
The main result for ODEs (Theorem 4.2 and its generalizations) that we prove in this paper is the following: we show that if equation (1) is weak convergent (i.e., there exists a positive number L such that lim t→+∞ |ϕ(t, x 1 , g) − ϕ(t, x 2 , g)| = 0 for all |x i | ≤ L (i = 1, 2) and g ∈ H(f )), and f is pseudo recurrent with respect to the time variable (in particular, f is recurrent, almost automorphic, Bohr almost periodic or quasi periodic), then, equation (1) We present our results in the framework of general non-autonomous dynamical systems (cocycles) and we apply our abstract theory to several classes of differential equations.
The paper is organized as follows.
In Section 2, we collect some notions (global attractor, minimal set, point/compact dissipativity, non-autonomous dynamical systems with convergence, quasi periodicity, Levitan/Bohr almost periodicity, almost automorphy, recurrence, pseudo recurrence, Poisson stability, etc) and facts from the theory of dynamical systems which will be necessary in this paper.
Section 3 is devoted to the study of a special class of non-autonomous dynamical systems (NDS): the so-called NDS with weak convergence. We give a generalization of the notion of convergent NDS. On the one hand, this type of NDS is very close to NDS with convergence (because they conserve some properties of convergent systems) and larger than that of convergent systems. On the other hand, we analyze the class of compact dissipative NDS with nontrivial Levinson center. The main results of our paper are proved in this section, namely Theorem 3.5 and Theorem 3.8 (see also Corollary 3.9 and Corollary 3.10) which provide sufficient conditions for the existence of a unique minimal set in the Levinson center which is homeomorphic to the base dynamical system (driving system). This means, in particular, that if the base dynamical system is a compact minimal set consisting of recurrent (respectively, almost automorphic, Bohr almost periodic, quasi periodic, periodic, stationary) points, then, under the conditions of Theorem 3.5, the Levinson center of a non-autonomous dynamical system contains a unique minimal set which consists of recurrent (respectively, almost automorphic, Bohr almost periodic, quasi periodic, periodic, stationary) points.
In Section 4, we exhibit some applications of our abstract results to different classes of differential equations. Namely, almost periodic and asymptotically almost periodic solutions (Subsection 4.1), uniformly compatible (by the character of recurrence with the right hand side) solutions of strict dissipative equations (Subsection 4.2).
Nonautonomous Dynamical Systems with Convergence
Let us start by recalling some concepts and notations about the theory of nonautonomous dynamical systems which will be necessary for our analysis.
2.1. Compact Global Attractors of Dynamical Systems. Let (X, ρ) be a metric space, R (Z) be the group of real (integer) numbers, R + (Z + ) be the semigroup of nonnegative real (integer) numbers, S be one of the two sets R or Z and T ⊆ S (S + ⊆ T) be a sub-semigroup of the additive group S.
A dynamical system is a triplet (X, T, π), where π : T × X → X is a continuous mapping satisfying the following conditions: π(s, π(t, x)) = π(s + t, x) (∀t, s ∈ T and x ∈ X). If T = R (R + ) or Z (Z + ), the dynamical system (X, T, π) is called a group (semigroup). When T = R + or R, the dynamical system (X, T, π) is called a flow, but if T ⊆ Z, then (X, T, π) is called a cascade (discrete flow ).
The function π(·, x) : T → X is called the motion passing through the point x at the initial moment t = 0, and the set Σ x := π(T, x) is called a trajectory of this motion.
A nonempty set M ⊆ X is called positively invariant (negatively invariant, invariant) with respect to the dynamical system (X, T, π) or, simply, positively invariant The set W s (Λ) (W u (Λ)), defined by the equality is called the stable manifold (unstable manifold ) of the set Λ ⊆ X. -asymptotically stable if it is orbitally stable and attracting; -globally asymptotically stable if it is asymptotically stable and W s (M ) = X.
The dynamical system (X, T, π) is called: − point dissipative if there exists a nonempty compact subset K ⊆ X such that, for every x ∈ X, − compact dissipative if the equality (4) takes place uniformly w.r.t. x on the compact subsets of X; − locally complete (compact) if for any point p ∈ X, there exist δ p > 0 and l p > 0 such that the set π(l p , B(p, δ p )) is relatively compact.
Let (X, T, π) be compact dissipative, and K be a compact set attracting every compact subset of X. Let us set It can be shown [9, Ch.I] that the set J defined by equality (5) does not depend on the choice of the attractor K, but is characterized only by the properties of the dynamical system (X, T, π) itself. The set J is called a Levinson center of the compact dissipative dynamical system (X, T, π).
Some properties of this set can be found in [9,21].
Some sufficient conditions and criteria ensuring the convergence of a dynamical system can be found in [9, Ch.II].
Thus, a non-autonomous dynamical system (X, T 1 , π), (Y, T 2 , σ), h is convergent, if the systems (X, T 1 , π) and (Y, T 2 , σ) are compact dissipative with Levinson centers J X and J Y respectively, and J X has "trivial" sections, i.e., J X X y consists of a single point for all y ∈ J Y . In this case, the Levinson center J X of the dynamical system (X, T 1 , π) is a copy (a homeomorphic image) of the Levinson center J Y of the dynamical system (Y, T 2 , σ). Thus, the dynamics on J X is the same as on J Y .
Remark 2.1. 1. We note that convergent systems are, in some sense, the simplest dissipative dynamical systems. If Y is compact and invariant, T 2 = S, (X, T 1 , π), (Y, T 2 , σ), h is a convergent non-autonomous dynamical system, and J X is the Levinson center of (X, T 1 , π), then (J X , T 2 , π) and (Y, T 2 , σ) are homeomorphic. Although the Levinson center of a convergent system can be completely described, it may be sufficiently complicated.
2. The concept of convergent system of differential equations is well developed (see, for example, B. P. Demidovich [17,18,19], Pavlov et al. [24], V. A. Pliss [25,26], V. I. Zubov [34], and many others). The non-autonomous system of differential equations is called convergent, if it admits a unique solution defined and bounded on R, which is uniformly globally asymptotically stable. It is possible to show that the non-autonomous dynamical system generated by the convergent equation (6) is convergent. However, the concept of convergent non-autonomous dynamical system is much more general (see [9,12] and the bibliography therein).
A point x ∈ X is called almost recurrent (respectively, Bohr almost periodic), if for any ε > 0, there exists a positive number l such that, in any segment of length l, there is an ε−shift (respectively, an ε−almost period) of the point x ∈ X.
If the point x ∈ X is almost recurrent, and the set H(x) := {π(t, x) | t ∈ T} is compact, then x is called recurrent, where the bar denotes the closure in X. A point x ∈ X is called Poisson stable in the positive direction if there exists a sequence {t n } ∈ N x such that t n → +∞ as n → ∞.
Remark 2.2. Every recurrent point is pseudo recurrent, but there exist pseudo recurrent points which are not recurrent (see [28,30]).
An m-dimensional torus is denoted by is an irrational winding of the torus T m and ω ∈ T m .
A point x ∈ X of the dynamical system (X, T, π) is called Levitan almost periodic (see [2,23]) if there exists a dynamical system (Y, T, σ), and a Bohr almost periodic point y ∈ Y such that N y ⊆ N x .
Thus, we can prove the following result.
Proof. The first statement follows directly from the corresponding definition. Let x ∈ X y 0 be an arbitrary point. We will show that lim Indeed, if we suppose that it is not true, then there are ε 0 > 0 and t n → +∞ ({t n } ⊆ T 1 ) such that Since the dynamical system (X, T 1 , π) is compact dissipative, we may suppose that the sequences {π(t n , x)} and {π(t n , x 0 )} are convergent. Denote by p := lim n→∞ π(t n , x), and p 0 := lim n→∞ π(t n , x 0 ). Then, p, p 0 ∈ X q ⊆ J X , where q := lim n→∞ σ(t n , y 0 ). Since the non-autonomous dynamical system (X, is convergent, then p = p 0 . On the other hand, passing to the limit in (7), we obtain ρ(p, p 0 ) ≥ ε 0 > 0. This contradiction proves our statement.
Non-Autonomous Dynamical Systems with Weak Convergence
In this section we will study a class of non-autonomous dynamical systems which is very close to convergent systems, but possessing a non-trivial global attractor. This means that this class of non-autonomous systems will conserve almost all properties of convergent systems, but will have a "non-trivial" global attractor J X , i.e., there exists at least one point y ∈ J Y such that the set J X X y contains more than one point.
A non-autonomous dynamical system (X, T 1 , π), (Y, T 2 , σ), h is said to be weak convergent, if the following conditions hold: (i) the dynamical systems (X, T 1 , π) and (Y, T 2 , σ) are compact dissipative with Levinson centers J X and J Y respectively; (ii) it follows that Remark 3.1. It is clear that every convergent non-autonomous dynamical system is weak convergent. The opposite statement is not true in general.
Indeed, the last statement can be confirmed by the following example. Let (X, T, π) be an autonomous dynamical system with compact global attractor J, which possesses a unique stationary attracting point p (i.e., lim |t|→+∞ ρ(π(t, x), p) = 0 for all x ∈ J) and J = {p}). For example, consider the dynamical system (X, R, π) on the space X = R 2 , generated by following system of differential equations The phase plane of this dynamical system (8) is described in Figure 1. For more We can now prove the following result which will be crucial in the proof of one of our main results (namely, Theorem 3.5).
, h be a non-autonomous dynamical system, and assume that the following conditions hold: Then, there exists a unique compact minimal set M ⊆ X such that: for all x ∈ X.
Proof. Since the positive semi-trajectory Σ + x0 of x 0 is relatively compact, the ω-limit set ω x 0 of the point x 0 is nonempty, compact, invariant and contains at least one minimal subset M ⊆ ω x 0 . We will prove that the dynamical system (X, T 1 , π) has at most one minimal set. Indeed, if we suppose that M 1 and M 2 are two different minimal sets of (X, T 1 , π), then M 1 M 2 = ∅ and, in particular, M 1y M 2y = ∅ for all y ∈ Y , where M iy := h −1 (y) M i (i = 1, 2). Let x i ∈ M iy and t n → +∞ such that σ(t n , y) → y and π(t n , x i ) →x i ∈ M iy (i = 1, 2) as n → ∞; (10) lim n→∞ ρ(π(t n , x 1 ), π(t n , x 2 )) = 0.
It is now easy to see that there exists such a sequence. From the equality (10) we havex 1 =x 2 ∈ M 1y M 2y . This is a contradiction and, therefore, M is the unique compact minimal set of the dynamical system (X, T 1 , π).
Let y ∈ Y be an arbitrary point, then it is recurrent. By Lemma 6.5.19 in [12, Ch.VI, p.226], there exists a unique recurrent point m y ∈ M y such that the equality (9) holds for all x ∈ M y . Now, we will prove that M y = {m y }. Indeed, if x ∈ M y then, there exists a sequence t n → +∞ such that π(t n , m y ) → x because M is minimal. On the other hand, σ(t n , y) → y and, since the point m y is uniformly compatible by the character of recurrence with the point y, then π(t n , m y ) → m y . Thus, we have x = m y and, consequently, M y = {m y } for all y ∈ Y .
To finish the proof it is sufficient to note that lim t→+∞ ρ(π(t, x), m σ(t,h(x)) ) = 0, for all x ∈ X.
Corollary 3.3. Under the assumptions in Lemma 3.2, the dynamical system (X, T 1 , π) is point dissipative and Ω X := ∪{ω x | x ∈ X} is a compact minimal set.
Below we give an example of point dissipative, but not compact dissipative, dynamical system with weak convergence.
Example 3.4. Let ϕ ∈ C(R, R) be a function possessing the following properties: the function ϕ is monotone increasing from 0 to 1 and it is decreasing from 1 to 2; 6. xϕ(x −1 ) → 0 as x → +∞.
A function ϕ with properties 1. − 6. can be constructed as follows. Let Then, the function ϕ(t) := ϕ 0 (t − 1) is as desired. We set X : where θ is the function from C(R, R) identically equal to 0. It is possible to show that the set X is closed in C(R, R), and it is invariant with respect to shifts. Thus, on the set X is induced a dynamical system (on C(R, R) is defined the dynamical system of translations or Bebutov's dynamical system), which we denote by (X, R, σ). We will indicate some properties of this system: (i) for every function ψ ∈ X the set {σ(t, ψ) : t ∈ R} is relatively compact and ω ψ = α ψ = {θ}, where α ψ denotes the α-limit associated to ψ; (ii) the dynamical system (X, R, σ) is pointwise dissipative, and Ω X = {θ}; (iii) W u (Ω X ) = X and, consequently, the dynamical system (X, R, σ) is not compact dissipative because the set X, evidently, is not compact; (iv) the dynamical system (X, R, σ) does not admit a maximal compact invariant set.
The necessary example is therefore constructed.
The subset A ⊆ X is said to be chain transitive (see [16,22]) if for any a, b ∈ A, and any ε > 0 and L > 0, there are finite sequences We can now establish and prove the main results of our paper.
Proof. According to Lemma 3.2, there exists a unique minimal set M ⊂ X for the dynamical system (X, T 1 , π) with properties (b) and (c). To finish the first statement of our theorem it is sufficient to show that M ⊆ ∂J. Let x ∈ ∂J, then M ⊆ ω x ⊆ ∂J because J is the maximal compact invariant set of (X, T 1 , π), and the set ∂J is also compact and invariant.
Then, we can prove the following theorems concerning the structure of compact positively invariant sets. Theorem 3.6. Let (X, T 1 , π), (Y, T 2 , σ), h be a non-autonomous compact dissipative dynamical system, M = ∅ be a compact positively invariant set. Suppose that the following conditions are fulfilled: Then, M is orbitally stable.
Since M is compact, then we may suppose that the sequence {x n } is convergent. Let x 0 := lim n→+∞ x n , with x y n ∈ M y n , ρ(x n , M ) = ρ(x n , x y n ) and y 0 = h(x 0 ). Then, x 0 = lim n→+∞ x yn and x 0 ∈ M y0 . Let q n = h(x n ), and note that as n → +∞, because q n → y 0 and x qn → x 0 . Taking into account (15), and the asymptotic stability of the set M , we have (16) ρ(π(t n , x n ), π(t n , x qn )) → 0.
But the equalities (14) and (16) are contradictory. Hence, the set M is orbitally stable in (X, T 1 , π). (i) the dynamical systems (X, T 1 , π) and (Y, T 2 , σ) are point dissipative; (ii) Ω Y is a compact minimal set; (iii) lim t→+∞ ρ(π(t, x 1 ), π(t, x 2 )) = 0, Proof. Let N := Ω Y . Under our assumptions, N is a compact minimal set of the dynamical system (Y, T, σ). Consider the non-autonomous dynamical system (X, T 1 ,π), (N, T 2 ,σ),h , whereX := h −1 (N ), (X, T 1 ,π) (respectively, (N, T 2 ,σ)) is the restriction of (X, T 1 , π) (respectively, (Y, T 2 , σ)) onX (respectively, on N ) andh := h X . According to Lemma 3.2, the dynamical system (X, T 1 ,π) contains a unique compact minimal set M such that h(M ) = N , and for all y ∈ N , the set M y =h −1 (y) consists of a single point. It is clear that the set M is also minimal with respect to the dynamical system (X, T 1 , π). Repeating the arguments in the proof of Lemma 3.2, we can prove that M is the unique compact minimal set of (X, T 1 , π). Let now x ∈ X be an arbitrary point. Then, its ω-limit set ω x is a non-empty, compact and invariant set.
Note that ω x contains at least one compact minimal set. Taking into consideration that M is the unique compact minimal set in (X, T 1 ,π), we then have M ⊆ ω x and, consequently, ω x X y = {m y } for all y ∈ N . Thus, we conclude that ω x = M and, consequently, Ω X = M .
Denote by L x := {{t n } ∈ M x : t n → +∞}. Recall (see [11]) that the point x ∈ X is called comparable with y ∈ Y by the character of recurrence in infinity if L x ⊆ L y . Corollary 3.9. Let (X, T 1 , π), (Y, T 2 , σ), h be a non-autonomous dynamical system such that the following conditions hold: (i) the dynamical systems (X, T 1 , π) and (Y, T 2 , σ) are point dissipative; Proof. The first and second statements follow from Theorem 3.8. To complete the proof we need to establish the third statement. Let x ∈ X, y := h(x) ∈ H + (y 0 ) := {σ(t, y 0 ) : t ∈ T + } and {t n } ∈ L y . We will show that {t n } ∈ L x . Indeed, the sequence {π(t n , x)} is relatively compact because (X, T, π) is point dissipative. Let p be a limit point for the sequence {π(t n , x)}, then there exists a subsequence {t n k } ⊆ {t n } such that p = lim k→∞ π(t n k , x). Denote by q = lim k→∞ σ(t n , y), then h(p) = q. It is clear that q ∈ Ω Y = N and p ∈ L X . Since L X X q contains at most one point, we conclude that p is the unique limit point of the sequence {π(t n , x)} and, consequently, it is convergent.
Recall that the dynamical system (X, T 1 , π) is called asymptotically compact if for every positively invariant bounded subset M ⊆ X, there exists a nonempty compact subset K ⊆ X such that, The next result will be crucial for our applications (particularly, Theorem 4.6).
Proof. This statement directly follows from Theorem 3.11 and Lemma 2.4.
Remark 3.13. 1. Note that in [4] we established an analogous result to Theorem 3.11 but under a stronger assumption. Namely, instead of Condition 5. of Theorem 3.11, we used in [4] the following one: for all (x 1 , x 2 ) ∈ X×X \ ∆ X and t > 0.
2. It is clear that (17) implies Condition 5. of Theorem 3.11. The converse is not true. Below we give the corresponding counterexample. It is easy to check that the norm of the operator A is equal to 1 and, consequently, ||A n (φ 1 −φ 2 )|| ≤ ||φ 1 −φ 2 ||. Thus, the dynamical system (X, Z + , π) is V -monotone, if we take the function V : X × X → R + defined by V (φ 1 , φ 2 ) := ||φ 1 − φ 2 ||. In this way, condition (17) does not hold for the dynamical system (X, Z + , π) constructed above. On the other hand, it is also easy to check that the norm of the operator A n is equal to 1 n! . Thus, for all The necessary example is therefore constructed.
Applications
Let X and Y be two complete metric spaces. Denote by C(X, Y ) the space of all continuous functions f : X → Y equipped with the compact-open topology.
4.1.
Almost periodic solutions of almost periodic dissipative systems. Let (Y, R, σ) be a dynamical system on the metric space Y . In this subsection we suppose that Y is a compact space. Consider the differential equation The function f ∈ C(Y × R n , R n ) (respectively, equation (18)) is said to be regular (see [27]) if for all u ∈ R n and y ∈ Y , equation (18) admits a unique solution ϕ(t, u, y) passing through the point u ∈ R n at the initial moment t = 0, and defined on R + .
Thus, the triplet R n , ϕ, (Y, R, σ) is a cocycle (non-autonomous dynamical system) which is associated to (generated by) equation (18). In this case, the dynamical system (Y, R, σ) is called the base dynamical system (or driving system).
Example 4.1. Let us consider the equation where f ∈ C(R × R n , R n ). Along with equation (19), consider the family of equations where g ∈ H(f ) := {f τ : τ ∈ R} and f τ is the τ -shift of f with respect to the time variable t, i.e., f τ (t, u) := f (t+τ, u) for all (t, u) ∈ R×R n . Suppose that the function f is regular (see [27]), i.e., for all g ∈ H(f ) and u ∈ R n there exists a unique solution ϕ(t, u, g) of equation (20). Denote by Y = H(f ) and (Y, R, σ) the shift dynamical system on Y induced by the Bebutov dynamical system (C(R×R n , R n ), R, σ). Now, the family of equations (20) can be written as if we define F ∈ C(Y × R n , R n ) by the equality F (g, u) := g(0, u), for all g ∈ H(f ), and u ∈ R n .
In this section we suppose that equation (18) is regular. Equation (18) is called dissipative (see [9]), if there exists a positive number r such that lim sup t→+∞ |ϕ(t, u, y)| < r for all u ∈ R n and y ∈ Y , where | · | is a norm in R n .
It is well known (see [20,32]) that, a dissipative equation with almost periodic coefficients (Y is an almost periodic minimal set), does not have, in general, an almost periodic solution. For certain classes of dissipative equations of the form (18), in the works [5]- [8] one can find sufficient conditions for the existence of at least one almost periodic solution. In this subsection we give a simple geometric condition which guarantees existence of a unique almost periodic solution, and this solution, in general, is not the unique solution of equation (18) which is bounded on R.
We can now establish the following interesting result.
Theorem 4.2. Suppose that the following conditions are fulfilled: (18) is regular and dissipative; (ii) the space Y is compact, and the dynamical system (Y, R, σ) is minimal; where ϕ(t, u i , y) (i = 1, 2) is the solution of equation (18) passing through u i at the initial moment t = 0, which is bounded on R.
Since Y is compact, it is evident that the dynamical system (Y, R, σ) is compact dissipative and its Levinson center J Y coincides with Y . By Theorem 2.23 in [9], the skew-product dynamical system (X, R + , π) is compact dissipative. Denote by J X its Levinson center and by I y := pr 1 (J X X y ) for all y ∈ Y , where X y := {x ∈ X : h(x) = y}, and pr 1 is the projection function with respect to the first variable, i.e. pr 1 (α, y) = α. According to the definition of the set I y ⊆ R n , and by Theorem 2.24 in [9], u ∈ I y if and only if the solution ϕ(t, u, y) is defined on R and bounded (i.e., the set ϕ(R, u, y) ⊆ R n is compact). Thus, I y = {u ∈ R n | (u, y) ∈ J X }. It is easy to see the condition (21) means that the non-autonomous dynamical system (X, R + , π), (Y, R, σ), h is weak convergent. To finish the proof, it is sufficient to apply Lemma 6.5.19 in [12, Ch.VI, p.226] and Corollary 3.10 for the non-autonomous system (X, R + , π), (Y, R, σ), h generated by equation (18). (18), but equation (18) has, generally speaking, more than one solution defined and bounded on R. Below, we will give an example which confirms this statement.
Example 4.4. Consider the following almost periodic system of two differential equations It is easy to check that the almost periodic function ϕ : R → R 2 defined by the equality ϕ(t) := (sin t, sin √ 2t) is a solution of system (22). Let now x := u − sin t and y := v − sin √ 2t. Then, the system (22) reduces to (8). Thus, every solution φ of the system (22) possesses the form φ = ϕ + ψ, where ψ is some solution of the system (8). Since (8) is weak convergent and admits more than one solution which is bounded on R, the system (22) possesses the same property.
4.2.
Uniform compatible solutions of strict dissipative equations. In this section we consider equation (18) when the driving system (Y, R, σ) is pseudo recurrent, and the function f ∈ C(Y × R n , R n ) is strict dissipative with respect to its second variable x ∈ R n , i.e., for all x 1 , x 2 ∈ R n (x 1 = x 2 ) and y ∈ Y .
Recall (see [28,29,30]) that the point x ∈ X is called comparable (respectively, uniformly comparable) by the character of recurrence with the point y ∈ Y if N y ⊆ N x (respectively, M y ⊆ M x ).
Let us now recall a result which plays an important role in the proof of our main result in this subsection.
Theorem 4.5. (See [28,30]) Let (X, T, π) and (Y, T, σ) be two dynamical systems, x ∈ X and y ∈ Y . Then, the following statements hold: (i) If x is comparable by the character of recurrence with y, and y is τ -periodic (respectively, Levitan almost periodic, almost recurrent, Poisson stable), then so is the point x. (ii) If x is uniformly comparable by the character of recurrence with y, and y is τ -periodic (respectively, quasi periodic, Bohr almost periodic, almost automorphic, recurrent), then so is the point x.
Theorem 4.6. Let (Y, R, σ) be pseudo recurrent, f ∈ C(Y × R n , R n ) be strict dissipative with respect to the variable x, and assume that there exists at least one solution ϕ(t, x 0 , y) of equation (18) which is bounded on R + .
Then, (i) equation (18) is convergent, i.e., the cocycle ϕ associated to equation (18) 2. If we replace condition (23) by a stronger condition, then Theorem 4.6 is also true without the requirement that there exists at least one solution which is bounded on R + . Namely, if there exists a function ζ ∈ K such that for all u 1 , u 2 ∈ R n and y ∈ Y , where ζ possesses some additional properties (see, for example, [15]).
3. It is easy to see that Theorem 4.6 remains true also for equation (18) in an arbitrary Hilbert space H, if we suppose that the cocycle ϕ, generated by equation (18), is asymptotically compact (i.e., the corresponding skew-product dynamical system is asymptotically compact), and we replace the condition about the existence of at least one solution which is bounded on R + , by the existence of a relatively compact solution ϕ 0 on R + (this means that ϕ(R + ) is a relatively compact subset from H). | 8,107.4 | 2011-10-01T00:00:00.000 | [
"Mathematics"
] |
A Review of Deep Learning Based Speech Synthesis
: Speech synthesis, also known as text-to-speech (TTS), has attracted increasingly more attention. Recent advances on speech synthesis are overwhelmingly contributed by deep learning or even end-to-end techniques which have been utilized to enhance a wide range of application scenarios such as intelligent speech interaction, chatbot or conversational artificial intelligence (AI). For speech synthesis, deep learning based techniques can leverage a large scale of <text, speech> pairs to learn effective feature representations to bridge the gap between text and speech, thus better characterizing the properties of events. To better understand the research dynamics in the speech synthesis field, this paper firstly introduces the traditional speech synthesis methods and highlights the importance of the acoustic modeling from the composition of the statistical parametric speech synthesis (SPSS) system. It then gives an overview of the advances on deep learning based speech synthesis, including the end-to-end approaches which have achieved start-of-the-art performance in recent years. Finally, it discusses the problems of the deep learning methods for speech synthesis, and also points out some appealing research directions that can bring the speech synthesis research into a new frontier.
Introduction
Speech synthesis, more specifically known as text-to-speech (TTS), is a comprehensive technology that involves many disciplines such as acoustics, linguistics, digital signal processing and statistics. The main task is to convert text input into speech output. With the development of speech synthesis technologies, from the previous formant based parametric synthesis [1,2], waveform concatenation based methods [3][4][5] to the current statistical parametric speech synthesis (SPSS) [6], the intelligibility and naturalness of the synthesized speech have been improved greatly. However, there is still a long way to go before computers can generate natural speech with high naturalness and expressiveness like that produced by human beings. The main reason is that the existing methods are based on shallow models that contain only one-layer nonlinear transformation units, such as hidden Markov models (HMMs) [7,8] and maximum Entropy (MaxEnt) [9]. Related studies show that shallow models have good performance on data with less complicated internal structures and weak constraints. However, when dealing with the data having complex internal structures in the real world (e.g., speech, natural language, image, video, etc.), the representation capability of shallow models will be restricted.
Deep learning (DL) is a new research direction in the machine learning area in recent years. It can effectively capture the hidden internal structures of data and use more powerful modeling capabilities to characterize the data [10]. DL-based models have gained significant progress in many fields such as handwriting recognition [11], machine translation [12], speech recognition [13] and speech synthesis [14]. To address the problems existing in speech synthesis, many researchers have also proposed the DL-based solutions and achieved great improvements. Therefore, to summarize the DL-based speech synthesis methods at this stage will help us to clarify the current research trends in this area. The rest of the article is organized as follows. Section 2 gives an overview of speech synthesis including its basic concept, history and technologies. In Section 3, this paper introduces the pipeline of SPSS. A brief introduction is given in Section 4 about the DL-based speech synthesis methods including the end-to-end ones. Finally, Section 5 provides discussions on new research directions. Finally, Section 6 concludes the article.
Basic Concept of Speech Synthesis
Speech synthesis or TTS is to convert any text information into standard and smooth speech in real time. It involves many disciplines such as acoustics, linguistics, digital signal processing, computer science, etc. It is a cutting-edge technology in the field of information processing [15], especially for the current intelligent speech interaction systems.
The History of Speech Synthesis
With the development of digital signal processing technologies, the research goal of speech synthesis has been evolving from intelligibility and clarity to naturalness and expressiveness. Intelligibility describes the clarity of the synthesized speech, while naturalness refers to ease of listening and global stylistic consistency [16].
In the development of speech synthesis technology, early attempts mainly used parametric synthesis methods. In 1971, the Hungarian scientist Wolfgang von Kempelen used a series of delicate bellows, springs, bagpipes and resonance boxes to create a machine that can synthesize simple words. However, the intelligibility of the synthesized speech is very poor. To address this problem, in 1980, Klatt's serial/parallel formant synthesizer [17] was introduced. The most representative one is the DECtalk text-to-speech system of the Digital Equipment Corporation (DEC) (Maynard, MA, USA). The system can be connected to a computer through a standard interface or separately connected to the telephone network to provide a variety of speech services that can be understood by users. However, since the extraction of the formant parameters is still a challenging problem, the quality of the synthesized speech makes it difficult to meet the practical demand. In 1990, the Pitch Synchronous OverLap Add (PSOLA) [18] algorithm greatly improved the quality and naturalness of the speech generated by the time-domain waveform concatenation synthesis methods. However, since PSOLA requires the pitch period or starting point to be annotated accurately, the error of the two factors will affect the quality of the synthesized speech greatly. Due to the inherent problem of this kind of method, the synthesized speech is still not as natural as human speech. To tackle the issue, people conducted in-depth research on speech synthesis technologies, and used SPSS models to improve the naturalness of the synthesized speech. Typical examples are HMM-based [19] and DL-based [20] synthesis methods. Extensive experimental results demonstrate that the synthesized speech of these models has been greatly improved in both speech quality and naturalness.
Traditional Speech Synthesis Technology
To understand why deep learning techniques are being used to generate speech today, it is important to know how speech generation is traditionally done. There are two specific methods for TTS conversion: concatenative TTS and parametric TTS. This paper will give a brief introduction to the two kinds of methods in the following sections.
Concatenative Speech Synthesis
The waveform concatenation based synthesis method directly concatenates the waveforms in the speech waveform database and outputs a continuous speech stream. Its basic principle is to select the appropriate speech unit from the pre-recorded and labeled speech corpus according to the context information analyzed from the text input, and concatenate the selected speech unit to obtain the final synthesized speech. With the guidance of the context infomation, the naturalness of the synthesized speech has been improved greatly.
There are two different schemes for concatenative synthesis: one is based on linear prediction coefficients (LPCs) [21], the other is based on PSOLA. The first method mainly uses the LPC coding of speech to reduce the storage capacity occupied by the speech signal, and the synthesis is also a simple decoding and concatenation process. The speech synthesized by this method is very natural for a single word because the codec preserves most of the information of the speech. However, since the natural flow of words when people actually speak is not just a simple concatenation of individual isolated speech units, the overall effect will be affected by the concatenative points. To address this problem, PSOLA, which pays more attention to the control and modification of prosody, has been proposed. Different from the former method, PSOLA adjusts the prosody of the concatenation unit according to the target context, so that the final synthesized waveform not only maintains the speech quality of the original pronunciation, but also makes the prosody features of the concatenation unit conform to the target context. However, this method also has many defects: (1) as stated in Section 2.2, the quality of the synthesized speech will be affected by the pitch period or starting point; and (2) the problem of whether it can maintain a smooth transition has not been solved. These defects greatly limit its application in diversified speech synthesis [22].
Parametric Speech Synthesis
The parametric speech synthesis refers to the method that uses digital signal processing technologies to synthesize speech from text. In this method, it considers the human vocal process as a simulation that uses a source of glottal state to excite a time-varying digital filter which characterizes the resonance characteristics of the channel. The source can be a periodic pulse sequence that is used to represent the vocal cord vibration of the voiced speech, or a random white noise to indicate undefined unvoiced speech. By adjusting the parameters of the filter, it can synthesize various types of speeches [15]. Typical methods include vocal organ parametric synthesis [23], formant parametric synthesis [24], HMM-based speech synthesis [25], and deep neural network (DNN)-based speech synthesis [26,27].
Statistical Parametric Speech Synthesis
A complete SPSS system is generally composed of three modules: a text analysis module, a parameter prediction module which uses a statistical model to predict the acoustic feature parameters such as fundamental frequency (F0), spectral parameters and duration, and a speech synthesis module. The text analysis module mainly preprocesses the input text and transforms it into linguistic features used by the speech synthesis system, including text normalization [28], automatic word segmentation [20], and grapheme-to-phoneme conversion [29]. These linguistic features usually include phoneme, syllable, word, phrase and sentence-level features. The purpose of the parameter prediction module is to predict the acoustic feature parameters of the target speech according to the output of the text analysis module. The speech synthesis module generates the waveform of the target speech according to the output of the parameter prediction module by using a particular synthesis algorithm. The SPSS is usually divided into two phases: the training phase and the synthesis phase. In the training phase, acoustic feature parameters such as F0 and spectral parameters are firstly extracted from the corpus, and then a statistical acoustic model is trained based on the linguistic features of the text analysis module as well as the extracted acoustic feature parameters. In the synthesis phase, the acoustic feature parameters are predicted using the trained acoustic model with the guidance of the linguistic features. Finally, the speech is synthesized based on the predicted acoustic feature parameters using a vocoder.
Text Analysis
Text analysis is an important module of the SPSS model. Traditional text analysis methods are mainly rule-based, which require a lot of time to collect and learn these rules. With the rapid development of data mining technology, some data-driven methods have been gradually developed, such as the bigram method, trigram method, HMM-based method and DNN-based method. When using the latter two methods for text analysis, the Festival [4] system is usually used to perform phoneme segmentation and annotation on the corpus, which mainly includes the following five levels: Phoneme level: the phonetic symbols of the previous before the previous, the previous, the current, the next or the next after the next; the forward or backward distance of the current phoneme within the syllable.
Syllable level: whether the previous, the current or the next syllable is stressed; the number of phonemes contained in the previous, the current or the next syllable; the forward or the backward distance of the current syllable within the word or phrase; the number of the stressed syllables before or after the current syllable within the phrase; the distance from the current syllable to the forward or backward most nearest stressed syllable; the vowel phonetics of the current syllable.
Word level: the part of speech (POS) of the previous, the current or the next word; the number of syllables of the previous, the current or the next word; the forward or backward position of the current word in the phrase; the forward or backward content word of the current word within the phrase; the distance from the current word to the forward or backward nearest content word; the POS of the previous, the current or the next word.
Phrase level: the number of syllables of the previous, the current or the next phrase; the number of words of the previous, the current or the next phrase; the forward or backward position of the current phrase in the sentence; the prosodic annotation of the current phrase.
Sentence level: The number of syllables, words or phrases in the current sentence.
Parameter Prediction
Parameter prediction is used to predict acoustic feature parameters based on the result of the text analysis module and the trained acoustic model. For the SPSS, there are usually two kinds of parameter prediction methods: HMM-based parameter prediction and DNN-based parameter prediction. This paper will give a review of these methods in the following.
HMM-Based Parameter Prediction
The HMM-based parameter prediction method mainly generates the sequence of F0 and spectral parameters from the trained HMMs. It is achieved by calculating the sequence of acoustic features with the maximum likelihood estimation (MLE) algorithm given a Gaussian distribution sequence. Due to the differences between F0 and spectral parameters, different methods have been adopted to model the two kinds of feature parameters. For the continuous spectral parameters, the continuous density hidden Markov model (CD-HMM) is used and the output of each HMM state is a single Gaussian or a Gaussian mixture model (GMM) [27]. However, for the variable-dimensional F0 parameters which include voiced and unvoiced regions, it is difficult to apply discrete or continuous HMMs because the values of F0 are not defined in unvoiced regions. To address this problem, the HMM-based method adopts multi-space probability distribution to model the voiced and unvoiced regions (e.g., voiced/unvoiced (V/UV) parameters), separately. To improve the accuracy and flexibility of acoustic parameter prediction, the authors in [28] introduce the articulatory feature that is related to the speech generation mechanism and integrates it with the acoustic features.
DNN-Based Parameter Prediction
It is well known that the acoustic features of a particular phoneme will be affected by the context information associated with the phoneme [30]. It means that the context information plays a significant role in the prediction of the acoustic features. Researchers show that the human speech generation process usually uses a hierarchical structure to convert the context information into a speech waveform [31]. Inspired by this idea, the deep structure models have been introduced in predicting acoustic feature parameters for speech synthesis [32]. The framework of the DNN-based parameter prediction progress can be seen in [20].
To compare with the HMM-based parameter prediction methods, the DNN-based methods can not only map complex linguistic features into acoustic feature parameters, but also use long short-term context information to model the correlation between frames which improves the quality of speech synthesis. In addition, for the HMM-based methods, the principal of MLE is used to maximize the output probability which makes the parameter sequence a mean vector sequence, resulting in a step-wise function. The jumps cause discontinuities in the synthesized speech. To address this problem, the maximum likelihood parameter generation (MLPG) algorithm is used to smooth the trajectory by taking the dynamic features including the delta and delta-delta coefficients into account. However, the DNN-based methods cannot suffer from this problem.
Vocoder-Based Speech Synthesis
Speech synthesizer or vocoder is an important component of statistical parametric speech synthesis, which aims at synthesizing speech waveform based on the estimated acoustic feature parameters. Traditional methods usually use the HTS_engine [33] synthesizer since it is free and fast to synthesize speech. However, the synthesized speech usually sounds dull, thus making the quality not good. To improve the quality of the synthesized speech, STRAIGHT [34,35] is proposed and used in various studies, making it easy to manipulate speech. Other methods such as phase vocoder [36], PSOLA [18] and sinusoidal model [37] are also proposed. Legacy-STRAIGHT [38] and TANDEM-STRAIGHT [38] were developed as algorithms to meet the requirements for high-quality speech synthesis. Although these methods can synthesize speech with good quality, the speed still cannot meet the real-world application scenarios. To address this problem, real-time methods remain a popular research topic. For example, the authors in [34] proposed the real-time STRAIGHT as a way to meet the demand for real-time processing. The authors in [38] proposed a high-quality speech synthesis system which used WORLD [39] to meet the requirements of not only high sound quality but also real-time processing.
Deep Learning Based Speech Synthesis
It is known that the HMM-based speech synthesis method maps linguistic features into probability densities of speech parameters with various decision trees. Different from the HMM-based method, the DL-based method directly perform mapping from linguistic features to acoustic features with deep neural networks which have proven extraordinarily efficient at learning inherent features of data. In the long tradition of studies that adopt DL-based method for speech synthesis, people have proposed numerous models. To help readers better understand the development process of these methods (Audio samples of different synthesis methods are given at: http://www.ai1000.org/sampl es/index.html.), this paper gives a brief overview of the advantages and disadvantages in Table 1 and makes a detailed introduction in the following.
Restrictive Boltzmann Machines for Speech Synthesis
In recent years, restricted Boltzmann machines (RBMs) [40] have been widely used for modeling speech signals, such as speech recognition, spectrogram coding and acoustic-articulatory inversion mapping [40]. In these applications, RBM is often used for pre-training of deep auto-encoders (DAEs) [41,42] or DNNs. In the field of speech synthesis, RBM is usually regarded as a density model for generating the spectral envelope of acoustic parameters. It is adopted to better describe the distribution of high-dimensional spectral envelopes to alleviate the over-smooth problem in HMM-based speech synthesis [40]. After training the HMMs, a state alignment is performed for the acoustic features and the state boundaries are used to collect the spectral envelopes obtained from each state. The parameters of the RBM are estimated using the maximum likelihood estimation (MLE) criterion. Finally, RBM-HMMs are constructed to model the spectral envelopes. In the synthesis phase, the optimal spectral envelope sequence is estimated based on the input sentence and the trained RBM-HMMs. Although the subjective evaluation result of this method is better than that of traditional HMM-GMM systems, and the predicted spectral envelope is closer to the original one, this method still cannot solve the fragementation problem of training data encountered in the traditional HMM-based method.
Multi-Distribution Deep Belief Networks for Speech Synthesis
The multi-distribution deep belief network (DBN) [43] is a method of modeling the joint distribution of context information and acoustic features. It models the coutinuous spectral, discrete voiced/unvoiced (V/UV) parameters and the multi-space F0 simultaneously with three types of RBMs. Due to the different data types of the 1-out-of-K code, the F0, the spectral and the V/UV parameters, the method uses the 1-out-of-K code of the syllable and its corresponding acoustic parameters as the visible-layer data of the RBM to train the RBMs. In DBNs, the visible unit can obey different probability distributions; therefore, it is possible to characterize the supervectors that are composed of these features. In the training phase, given the 1-out-of-K code of the syllable, the network fixes the visible-layer units to calculate the hidden-layer parameters firstly, and then uses the parameters of the hidden layers to calculate the visible-layer parameters until convergence. Finally, the predicted acoustic features are interpolated based on the length of the syllable.
The advantage of this method is that all the syllables are trained in the same network, and all the data are used to train the same RBM or DBN. Therefore, it cannot suffer from the training data fragementation problem. In addition, modeling the acoustic feature parameters of a syllable directly can describe the correlation of each frame of the syllable and the correlation of different dimensions of the same frame. The method avoids averaging the frames corresponding to the same syllable, thus reducing the over-smooth phenomenon. However, since this method does not distinguish syllables in different contexts, it still averages the acoustic parameters corresponding to the same syllable. In addition, compared to the high-dimensional spectral parameters, the one-dimensional F0s don't contribute much to the model, thus making the predicted F0s contain a lot of noise that reduces the quality of the synthesized speech.
Speech Synthesis Using Deep Mixture Density Networks
Although the DNN-based speech synthesis model can synthesize speech with high naturalness, it still has some limitations to model acoustic feature parameters, such as the single modality of the objective function and the inability to predict the variance. To address these problems, the authors in [44] proposed the parameter prediction method based on a deep mixture density network, which uses a mixture density output layer to predict the probability distribution of output features under given input features.
Mixture Density Networks
Mixed density networks (MDNs) [45] can not only map input features to GMM parameters (such as the mixture weights, mean and variance), but also give the joint probability density function of y given input features x. The joint probability density function is expressed as follows: where M is the number of mixture components, and w m (x), µ m (x) and σ 2 m (x) are the mixture weights, mean and variance of the m-th Gaussian component of GMM, respectively. The parameters of the GMM can be calculated based on MDN with Equations (2)-(4): where z (5), the model is trained by maximizing the log likelihood of M, which is expressed as Equation (6): T(1) , y where N is the number of sentences and T(n) is the number of frames in the n-th sentence.
Deep MDN-Based Speech Synthesis
When predicting speech parameters with deep MDN, the text prompt is first converted into a linguistic feature sequence {x 1 , x 2 , ..., x T }, and then the duration of each speech unit is predicted using a duration prediction model. The acoustic features including the F0, spectral parameters and their corresponding dynamic features are estimated with the forward algorithm and the trained deep MDN. Finally, the acoustic feature parameters are generated by the parameter generation algorithm and speech is synthesized with a vocoder.
Deep Bidirectional LSTM-Based Speech Synthesis
Although the deep MDN speech synthesis model can solve the single modality problem of the objective function and predict the acoustic feature parameters accurately to improve the naturalness of the synthesized speech, there are still some problems as elaborated in the following. Firstly, MDN can only leverage limited contextual information since it can only model fixed time span (e.g., fixed number of preceding or succeeding contexts) for input features. Secondly, the model can only do frame-by-frame mapping (e.g., each frame is mapped independently). To address these problems, the authors in [46] proposed a modeling method based on recurrent neural networks (RNNs). The advantage of RNN is the ability to utilize context information when mapping inputs to outputs. However, traditional RNNs can only access limited context information since the effects of a given input on the hidden layer and the output layer will decay or explode as it propagates through the network. In addition, this algorithm also cannot learn long-term dependencies.
To address these problems, the authors in [47] introduced a memory cell and proposed the long short-term memory (LSTM) model. To fully leverage contextual information, bidirectional LSTM [48] is mostly used for mapping the input linguistic features to acoustic features.
BLSTM
BLSTM-RNN is an extended architecture of bidirectional recurrent neural network (BRNN) [49]. It replaces units in the hidden layers of BRNN with LSTM memory blocks. With these memory blocks, BLSTM can store information for long and short time lags, and leverage relevant contextual dependencies from both forward and backward directions for machine learning tasks. With a forward and a backward layer, BLSTM can utilize both the past and future information for modeling.
Given an input sequence x = (x 1 , x 2 , ..., x T ), BLSTM computes the forward hidden sequence → h and the backward hidden sequence ← h by iterating the forward layer from t = 1 to T and the backward layer from t = T to 1: The output layer is connected to both forward and backward layers, thus the output sequence can be written as: The notations of these equations are explained in [49] and φ(·) is the activation function which can be implemented by the LSTM block with equations in [49].
Deep BLSTM-Based Speech Synthesis
When using a deep BLSTM-based (DBLSTM) model to predict acoustic parameters, first we need to convert the input text prompt into a feature vector, and then use the DBLSTM model to map the input feature to acoustic parameters. Finally, the parameter generation algorithm is used to generate the acoustic parameters and a vocoder is utilized to synthesize the corresponding speech. For instance, the authors in [48] proposed a multi-task learning [50,51] of structured output layer (SOL) BLSTM model for speech synthesis, which is capable of balancing the error cost functions associated with spectral feature and pitch parameter targets.
Sequence-to-Sequence Speech Synthesis
Sequence-to-sequence (seq2seq) neural networks can transduce an input sequence into an output sequence that may have a different length and have been applied to various tasks such as machine translation [52], speech recognition [53] and image caption generation [54], and achieved promising results. Since speech synthesis is the reverse process of speech recognition, the seq2seq modeling technique has also been applied to speech synthesis recently. For example, the authors in [55] employed the structure with content-based attention [56] to model the acoustic features for speech synthesis. Char2Wav [16] adopted location-based attention to build an encoder-decoder acoustic model. To tackle the instability problem of missing or repeating phones that current seq2seq models still suffer from, the authors in [57] proposed a forward attention approach for the seq2seq acoustic modeling of speech synthesis. Tacotron, which is also a seq2seq model with an attention mechanism, has been proposed to map the input text to mel-spectrogram for speech synthesis.
End-to-End Speech Synthesis
A TTS system typically consists of a text analysis front-end, an acoustic model and a speech synthesizer. Since these components are trained independently and rely on extensive domain expertise which are laborious, errors from each component may compound. To address these problems, end-to-end speech synthesis methods which combine those components into a unified framework have become mainstream in the speech synthesis field. There are many advantages of an end-to-end TTS system: (1) it can be trained based on a large scale of <text, speech> pairs with minimum human annotation; (2) it doesn't require phoneme-level alignment; and (3) errors cannot compound since it is a single model. In the following, we will give a brief introduction to the end-to-end speech synthesis methods.
Speech Synthesis Based on WaveNet
WaveNet [58], which is evolved from the PixelCNN [59] or PixelRNN [60] model applied in image generation field, is a powerful generative model of raw audio waveforms. It was proposed by Deepmind (London, UK) in 2016 and opens the door for end-to-end speech synthesis. It is capable of generating relatively realistic-sounding human-like voices by directly modeling waveforms using a DNN model which is trained with recordings of real speech. It is a complete probabilistic autoregressive model that predicts the probability distribution of the current audio sample based on all samples that have been generated before. As an important component of WaveNet, dilated causal convolutions are used to ensure that WaveNet can only use the sampling points from 0 to t − 1 while generating the tth sampling point. The original WaveNet model uses autoregressive connections to synthesize waveforms one sample at a time, with each new sample conditioned on the previous ones. The joint probability of a waveform X = {x 1 , x 2 , ..., x T } can be factorised as follows: Like other speech synthesis models, WaveNet-based models can be divided into training phase and generation phase. At the training phase, the input sequences are real waveforms recorded from human speakers. At the generation phase, the network is sampled to generate synthetic utterances.
To generate speech of the specified speaker or the specified text, global and local conditions are usually introduced to control the synthesis contents. While the WaveNet model can produce high-quality audios, it still suffers from the following problems: (1) it is too slow because the prediction of each sampling point always depends on the predicted sampling points before; (2) it also depends on linguistic features from an existing TTS front-end and the errors from the front-end text analysis will directly affect the synthesis effect.
To address these problems, the parallel WaveNet is proposed to improve the sampling efficiency. It is capable of generating high-fidelity speech samples at more than 20 times faster [61]. Another neural model, Deep Voice [62], is also proposed to replace each component including a text analysis front-end, an acoustic model and a speech synthesizer by a corresponding neural network. However, since each component is trained independently, it is not a real end-to-end synthesis.
Speech Synthesis Based on Tacotron
Tacotron [63,64] is a fully end-to-end speech synthesis model. It is capable of training a speech synthesis model given <text, audio> pairs, thus alleviating the need for laborious feature engineering. In addition, since it is based on character level, it can be applied in almost all kinds of languages including Chinese Mandarin.
Like WaveNet, the Tacotron model is also a generative model. Different from WaveNet, Tacotron uses a seq2seq model with an attention mechanism to map text to a spectrogram, which is a good representation of speech. Since a spectrogram doesn't contain phase information, the system uses the Griffin-Lim algorithm [65] to reconstruct the audio by estimating the phase information from the spectrogram iteratively. The overall framework of the Tacotron speech synthesis model can be seen in [63].
Since Tacotron is a fully end-to-end model that directly maps the input text to mel-spectrogram, it has received a wide amount of attention of researchers and various improved versions have been proposed. For example, some researchers implemented open clones of Tacotron [66][67][68] to reproduce the speech of satisfactory quality as clear as the original work [69]. The authors in [70] introduced deep generative models, such as Variational Auto-encoder (VAE) [71], to Tacotron to explicitly model the latent representation of a speaker state in a continuous space, and additionally to control the speaking style in speech synthesis [70].
There are also some works that combine Tacotron and WaveNet for speech synthesis, such as Deep Voice 2 [72]. In this system, Tacotron is used to transform the input text to the linear scale spectrogram, while WaveNet is used to generate speech from the linear scale spectrogram output of Tacotron. In addition, the authors in [73] also proposed the Tacotron2 system to generate audio signals that resulted in a very high mean opinion score (MOS) comparable to human speech [74]. The authors in [73] described a unified neural approach that combines a seq2seq Tacotron-style model to generate mel-spectrogram and a WaveNet vocoder to synthesize speech from the generated mel-spectrogram.
Speech Synthesis Based on Convolutional Neural Networks (CNNs)
Although the Tacotron-based end-to-end system has achieved promising performance recently, it still has a drawback that there are many recurrent units. This kind of structure makes it quite costly to train the model and it is also infeasible for researchers without high-performance machines to conduct further research on it. To address this problem, a lot of works have been proposed. The authors in [69] proposed a deep convolutional network with guided attention which can be trained much faster than the RNN-based state-of-the-art neural system. Different from the WaveNet model, which utilized the fully-convolutional structure as a kind of vocoder or a back-end, Ref. [69] is rather a frond-end (and most of back-end processing) that can synthesize a spectrogram. The authors in [75] used CNN-based architecture for capturing long-term dependencies of singing voice and applied parallel computation to accelerate the model train and acoustic feature generation processes. The authors in [76] proposed a novel, fully-convolutional character-to-spectrogram architecture, namely Deep Voice 3, for speech synthesis, which enables fully parallel computation to make the training process faster than that of using recurrent units.
Discussion
Compared with the concatenative speech synthesis method, the SPSS system can synthesize speech with high intelligibility and naturalness. Due to the limitations of the HMM-based speech synthesis model (such as the use of context decision trees to share speech parameters), the synthesized speech is not vivid enough to meet the requirements of expressive speech synthesis. The DL-based speech synthesis models adopt complete context information and distributed representation to replace the clustering process of the context decision tree in HMM, and use multiple hidden layers to map the context features to high-dimensional acoustic features, thus making the quality of the synthesized speech better than the traditional methods.
However, the powerful representation capabilities of DL-based models have also brought some new problems. To achieve better results, the models need more hidden layers and nodes, which will undoubtedly increase the number of parameters in the network, and the time complexity and space complexity for network training. When the training data are insufficient, the models usually have over-fitting. Therefore, it requires a large amount of corpora and computing resources to train the network. In addition, the DL-based models also require much more space to store the parameters.
There is no doubt that the existing end-to-end models are still far from perfect [77]. Despite many achievements, there are still some challenging problems. Next, we will discuss some research directions: • Investigating context features hidden in end-to-end speech synthesis. The end-to-end TTS system, mostly back-end, has achieved state-of-the-art performance since it was proposed. However, there is little progress in front-end text analysis, which extracts context features or linguistic features that are very useful to bridge the gap between text and speech [78]. Therefore, demonstrating what types of context information are utilized in end-to-end speech synthesis system is a good direction in future.
•
Semi-supervised or unsupervised training in end-to-end speech synthesis. Although end-to-end TTS models have shown excellent results, they typically require large amounts of high-quality <text, speech> pairs for training, which are expensive and time-consuming to collect. It is important and of great significance to improve the data efficiency for end-to-end TTS training by leveraging a large scale of publicly available unpaired text and speech recordings [79].
•
The application of other speech related scenarios. In addition to the application of text-to-speech in this paper, the application to other scenarios such as voice conversion, audio-visual speech synthesis, speech translation and cross-lingual speech synthesis is also a good direction.
•
The combination of software and hardware. At present, most deep neural networks require a lot of calculations. Therefore, parallelization will be an indispensable part of improving network efficiency. In general, there are two ways to implement parallelization: one is the parallelization of the machines; the other is to use GPU parallelization. However, since writing GPU code is still time-consuming and laborious for most researchers, it depends on the cooperation of hardware vendors and software vendors, to provide the industry with more and more intelligent programming tools.
Conclusions
Deep learning that is capable of leveraging large amount of training data has become an important technique for speech synthesis. Recently, increasingly more researches have been conducted on deep learning techniques or even end-to-end frameworks and achieved state-of-the-art performance. This paper gives an overview to the current advances on speech synthesis and compare both of the advantages and disadvantages among different methods, and discusses possible research directions that can promote the development of speech synthesis in the future. Acknowledgments: The authors would like to thank Peng Liu, Quanjie Yu and Zhiyong Wu for providing the material.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
Abbreviations
The following abbreviations are used in this manuscript: | 8,289.4 | 2019-09-27T00:00:00.000 | [
"Computer Science"
] |
dS${}_4$ universe emergent from Kerr-AdS${}_5$ spacetime: bubble nucleation catalyzed by a black hole
The emergence of a four-dimensional de Sitter (dS${}_4$) universe on an expanding bubble in the five-dimensional anti-de Sitter (AdS${}_5$) background has been suggested as a possible cosmological scenario. It is motivated by the difficulties in the realization of a stable de Sitter vacua in string theory. The bubble can be nucleated in a meta-stable pure AdS${}_5$ spacetime, but it is known that a pure AdS spacetime is non-perturbatively unstable. It means that the pure AdS${}_5$ background is an idealized situation, and in realistic situations, non-linear perturbations in AdS may lead to the formation of black holes due to the gravitational turbulent instability. To investigate how the proposed scenario works in a more realistic situation, we here study the nucleation process of a vacuum bubble in the Kerr-AdS${}_5$ spacetime. Especially we investigate conditions sufficient to ensure the nucleation of a vacuum bubble with a rotating black hole and how the black hole affects the transition rate. We find that even in the Kerr-AdS${}_5$ spacetime, a quasi-dS${}_4$ expansion can be realized on the nucleated vacuum bubble without contradicting the de Sitter swampland conjectures.
Introduction
The difficulty of constructing de Sitter vacua in string theory (See ref. [1] for a review) leads to a discrepancy with cosmology. Also, the swampland conjecture [2][3][4] has revived the debate on how a de Sitter universe, modeling inflation or the Universe dominated by dark energy, can be consistent with string theory. Recently, it was pointed out that an expanding bubble in a five-dimensional anti-de Sitter (AdS) spacetime mimics a four-dimensional de Sitter spacetime on the bubble [5][6][7]. The idea of the realization of the Universe on a brane was pioneered in Ref. [8]. The nucleation of an expanding bubble can be realized in a meta-stable AdS 5 vacuum via the Coleman de Luccia (CdL) transition [9]. The nucleated bubble mediates two different AdS 5 vacua, and the interior vacuum has a smaller vacuum energy density.
On the other hand, a pure AdS vacuum is non-linearly unstable and may eventually form a black hole due to turbulent instability [10,11]. AdS spacetime has a boundary and acts like a confining box. Therefore, any finite excitation in the AdS would not be dissipated and might be expected to explore all possible configurations, which may eventually lead to the formation of small black holes. For example, a rotating black hole is expected to form as a result of gravitational turbulent instability in AdS [11]. To see if the proposed scenario [5][6][7] is applicable even to a Kerr-AdS 5 black hole, it is important to study if we can construct the solution of an expanding bubble nucleated via the vacuum decay in Kerr-AdS 5 spacetime. The nucleation of a vacuum bubble surrounding a black hole has been actively studied (see, e.g., Refs. [12][13][14][15][16][17][18][19][20][21]), mainly in the motivation of the Higgs metastability [22,23].
In this paper, we study the nucleation process of a vacuum bubble in the Kerr-AdS 5 spacetime with the restriction of equal rotations. We here assume a thin-wall bubble that separates two different vacua and whose dynamics is governed by the Israel junction conditions [24]. We then compute the on-shell Euclidean action of a bubble solution, S E , which gives the exponent of a transition amplitude. We find that the semi-classical approximation is broken with S E < 0 for a massive and rapidly spinning seed black hole. We also derive the Friedmann-like equation describing the expansion of the four-dimensional universe realized on the bubble or brane. It is well known that the mass of the black hole acts like radiation on the brane, called dark radiation [25,26]. We also find that the rotations of the Kerr-AdS 5 black hole give a term of ∼ 1/(scale factor) 6 in the Friedmann-like equation. In addition, the anisotropy originating from the rotations in the induced metric on the bubble decays in 1/(scale factor) 4 . On the other hand, the rotations of the seed black hole lead to the superradiant instability. Thus, our computation and results are applicable only when the time scale of the supperradiant instability is much longer than that of the vacuum metastability, and it depends on the parameters of the false vacuum state. We then compare the decay rate of the false vacuum with the instability rate of the superradiance in the Kerr-AdS 5 background and reveal the parameter region of the seed Kerr-AdS 5 black hole where the condition is satisfied. This paper is organized as follows. In Sec. 2, we briefly review the Israel junction conditions governing the dynamics of the bubble by following Ref. [27] in which the Israel junction conditions were applied to a thin shell in the Kerr-AdS 5 spacetime. Then we derive the Friedmann-like equation from the Israel junction conditions. We then show the formalism to compute the on-shell Euclidean action of the thin wall. In Sec. 3, we show the condition of the bubble nucleation in the Kerr-AdS 5 background. We then estimate the decay rate by employing the Euclidean path integral to see the most probable transition process. As a result, we find that the stationary solution, in which the Euclidean bubble does not oscillate in the radial direction, gives the most probable process of vacuum decay. Also, we find the breakdown of the semi-classical approximation for a rapidly spinning and massive black hole. In Sec. 4, we compare the lifetime of the false vacuum, caused by the quantum mechanical instability, with that of the seed Kerr-AdS 5 black hole unstable due to the (classical) superradiance. Sec. 5 is devoted to the conclusions. Figure 1 illustrates the outline of our paper along with some schematic pictures. Throughout the paper, we take the natural units and the five-dimensional Newton's constant is set to G 5 = 1.
Dynamics of a bubble in the Kerr-AdS 5 spacetime
Based on the Euclidean path integral, one can estimate the transition amplitude from a false vacuum state to a true vacuum one by computing the difference of the on-shell Euclidean action before and after the transition. In the thin-wall approximation, the dynamics of a meta-stable field is described by that of a thin-wall bubble. We are interested in the motion of a thin wall that mediates two different Kerr-AdS 5 spacetime, which was studied in Ref. [28]. Here we review their analysis that employs the Israel junction conditions to describe the thin-shell dynamics in Kerr-AdS 5 spacetime.
The Kerr-AdS 5 spacetime has two rotations, and we consider the case of equal rotations for simplicity. Given the cosmological constant, Λ(≡ −6l 2 ), spin, a, and mass parameter, M , of a black hole, the line element on the Kerr-AdS 5 spacetime is expressed by the coordinates of x µ = (t, r, θ, ψ, φ) [29]: We then set a junction surface Σ = {x µ : t = T (τ ), r = R(τ )} that separates the interior and exterior spacetime in which the cosmological constant is Λ − = −6l −2 − and Λ + = −6l −2 + , respectively. In the following, the superscript or subscript of + (−) denotes the exterior (interior) quantities. Given the junction surface, one can define the induced metric on the inner and outer surface by q ab on each surface. The first and second Israel junction conditions are given by where K (±) = q ab K (±) ab and S ij is the reduced energy momentum tensor of the thin wall. Going to the comoving frame with the following coordinate transformations: ij , as (2.7) where τ is the proper time on the thin wall. From the condition of the comoving frame and the first Israel junction condition (2.2), we have where a dot represents the derivative with respect to the proper time τ . Note that the junction surface we take is not spherical although it locates at r = R(τ ) in our coordinates. In the proper length, the surface is axisymmetric and is deformed from a spherical shape. Therefore, it is natural to regard the thin wall as an imperfect fluid with anisotropic components [28]: The extrinsic curvature, K where e µ i ≡ dx µ /dy i and n µ is the unit normal vector on the wall Then the second Israel junction condition reads [28] , and ± is the sign ofṪ ± . Combining those conditions along with an extra condition for the fluid, i.e., the equation of state of the thin wall P = wσ, (2.18) we have the following equation It reduces to the following equation that governs the dynamics of the thin wall where m 0 is an integration constant and has the mass dimension 1 . We finally obtain the (integrated) equation of motion of the thin wall where V eff is the effective potential of the thin wall (2.22) The effective potential depends on the mass parameter (M ± ), spin parameter (a ± ), AdS radius (l ± ), and m 0 . As an example, Figure 2 shows some effective potentials, and one can see that for each potential, there exists a forbidden region in whichṘ 2 < 0. Assuming the brane is dominated by the effective potential of the meta-stable field, we set the equationof-state parameter as w = −1 and we have The equation of (2.23) can be compared with the Friedmann equation of a closed universe where R s is the scale factor and G is the four-dimensional Newton's constant. Identifying the radius of the bubble wall R with the scale factor R s , the equation (2.23) can be regarded as the Friedmann equation. Indeed, the induced metric, q ij , has the form where we used the fact that the metric on S 3 sphere is described by the combination of the Fubini-Study metric and the Kähler metric, A a dx a , along with ψ [30] g ab dx a dx b + (dψ + A a dx a ) 2 = dχ 2 + sin 2 χ(dζ 2 + sin 2 ζdξ 2 ). (2.27) Note that the last term in (2.26) is of the order of O(R −2 ) and it becomes negligible as the bubble, i.e., an emergent four-dimensional universe, expands. Therefore, in the case of M a 2 /R 4 1, the induced metric reduces to the Friedmann-Lemaître-Robertson-Walker (FLRW) metric. One can read some intriguing features from the Friedmann-like equation in (2.23): • The asymptotic behaviour of the expansion is equivalent to the (four-dimensional) de Sitter expansion.
• In the Friedmann-like equation and the FLRW-like metric, the effect of the rotations of the black hole is diluted in 1/R 6 s and in 1/R 4 s , respectively, in the limit of R s → ∞.
• The curvature of the space is positive.
Euclidean action
The nucleation rate of a vacuum bubble Γ is given by The prefactor A has the dimension of (time) −1 and is determined by the zero modes and loop corrections of the Euclidean solution. The exponent B determines the transition amplitude and is given by the difference of the on-shell Euclidean action before and after the phase transition: where φ bubble is the configuration of a nucleated bubble and φ false is the trivial solution of a false vacuum state. We compute the Euclidean action by following the procedures in Ref. [12] (see Ref. [31] for the case of five-dimensional background). The transition rate is mainly determined by the factor B, and therefore, we will concentrate on the estimation of B in this paper, and the prefactor A will be determined by the dimensional analysis later. The Euclidean space of the Kerr-AdS 5 spacetime is obtained by performing the Wick rotation, t → −it E , which leads to a complex metric 2 [32]. The Wick rotation of the proper time, τ → −iτ E , also changes the integrated equation of motion of the thin wall (2.21) aṡ The whole configuration after the phase transition can be divided into four parts: the region near the black hole horizon, H, bubble wall, W, and the interior and exterior regions, M − and M + , respectively. The on-shell action of the final state, S where andK E± is the trace of the extrinsic curvature associated withñ ±µ . The contribution of the black hole horizon to the Euclidean action is [32] where A ± is the horizon area of the remnant black hole [29,30], and this is nothing but the Bekenstein-Hawking entropy [33] multiplied by −1. On the other hand, from the Hamiltonian constraints and having the Killing coordinate t E , the bulk component is where κ ± is the non-zero surface term that originates from the Ricci scalar κ ± ≡ñ ±νũ µ ± ∇ µũ ν ± . This surface term reads Thus, the Euclidean action, S where we usedK ± = ±K ± and the second junction condition (2.3) in the first and second equalities. Finally, the exponent of the transition amplitude is
Nucleation of a dS 4 universe in Kerr-AdS 5 background
In this section, we numerically compute the on-shell Euclidean action that is the exponent of the transition amplitude and clarify which parameters of the thin-wall model describe bubble nucleation.
Parameter region for the nucleation of a vacuum bubble
We here compute the transition amplitude, e −B , by substituting the Euclidean solution into (2.42). In the thin-wall model, we have several parameters: m 0 , a ± , M ± , and l ± . Depending on the parameters, we have two types of solutions to the integrated equation of motion (2.30): a stationary and a non-stationary solution. For a non-stationary solution, the bubble in the imaginary time oscillates between the two largest single roots of V eff ( Figure 3-(a)). For a stationary solution, the nucleated bubble has no radial velocity,Ṙ = 0, and stands at the largest double root of V eff (Figure 3-(b)). Also, depending on the parameters in the model, there are neither stationary nor non-stationary solutions, which means one cannot construct the semi-classical solution describing the corresponding bubble nucleation (see Figure 3-(c)).
We first numerically study the allowed parameter range for the existence of stationary solutions. We set l + = 7, l − = 4, m 0 = 500 in the following computations. Fixing l ± and m 0 is equivalent to setting an effective potential of a metastable field that has the false and true vacuum states of the AdS radii l ± and a potential barrier leading to the thin wall with the tension m 0 . We then put constraints on the parameters of a seed black hole, a + and M + , by imposing the condition that the seed and remnant black holes are regular, i.e., no naked singularity, as is shown in Figure 4. From the plot, one can read that the condition that the remnant black hole is regular is more stringent than that of the regularity of the seed black hole. In the allowed region, we can take the parameter sets in which Euclidean solutions, describing bubble nucleation, exist. We perform the computation of the Euclidean action (2.42) to estimate the transition amplitude and find that the stationary solution exists at the minimum value of M − as shown in Figure 5. For a non-rotating black hole (a + = 0), the stationary solution gives the least action, i.e., the highest transition amplitude. For a + > 0, on the other hand, the action at the stationary solution does not lead to the least action. The least action can be found at another solution where the integration part in (2.43) vanishes. The integration includes gravity on the bubble (i.e., curvature K E± and surface gravity κ ± ) and the anisotropic wall tension (i.e., P + ∆P ) and can be zero due to the balance between gravity and the anisotropic tension. For a spherical black hole, it vanishes at the stationary point [12]. We find that when the rotation is involved and the bubble wall is anisotropic imperfect fluid, the solution for which the integration part in (2.43) vanishes can be non-stationary. This non-trivial behaviour may be determined by the complicated balance among gravity, anisotropic wall tension, and rotations. We leave further discussion to future work. In such a case, the transition amplitude of the most probable nucleation process is governed only by the Bekenstein-Hawking entropy (3.1) In most cases, ∆S BH is negative and the Bekenstein-Hawking entropy decreases in the vacuum decay process. However, we find that for a rapidly spinning seed black hole, the ∆S BH can be positive and the semi-classical approximation is apparently broken (Figure 6). In such a case, we have e −B 1 and the standard prescription to estimate the transition amplitude in the Euclidean path integral is not valid due to the breakdown of the semiclassical approximation. However, in the context of the thermodynamics, this is nothing but the second law of thermodynamics, i.e., a preferred direction of a transition is determined so that entropy increases. Therefore, if the transition with ∆S BH > 0 describes a mere thermal phase transition, it would be regarded as a physical process.
Superradiant instability of a Kerr-AdS 5 black hole
It is known that a rotating black hole in AdS 5 is unstable at least against scalar perturbations, which is known as the superradiant instability. In this section, we briefly review the scalar quasi-normal (QN) modes for the Kerr-AdS 5 spacetime. We then compare the classical instability of QN modes with the quantum-mechanical instability of the metastable Kerr-AdS 5 background that we study in the present work. In the former part of this section, we introduce the QN modes of a Kerr-AdS 5 black hole. In the latter part, we then numerically compute the time scale of the superradiant instability and compare it with the lifetime of the false vacuum we estimated in the previous section.
Scalar perturbations in Kerr-AdS 5
The Kerr-AdS 5 black hole with different two spins has the metric where the AdS radius is set to unity and ρ 2 ≡ r 2 + a 2 1 cos 2 θ + a 2 2 sin 2 θ. The inner and outer horizon radii are denoted as r − and r + , respectively, and r 0 is the imaginary root of ∆ r . Then we evaluate the superradiant instability with a scalar field Ψ(t, r, θ, φ, ψ) with mass µ. Let us start with the Klein Gordon equation, and decomposing Ψ as Ψ = e −iωt+im 1 φ+im 2 ψ Θ(θ)Π(r), one has the following radial and angular equations: (ω + m 1 a 1 + m 2 a 2 ) 2 + µ 2 (a 2 1 cos 2 θ + a 2 2 sin 2 θ) Θ(θ) = 0, where λ is the separation constant that is determined so that Θ(θ) is non-singular at the
Superradiant instability vs. vacuum metastability in Kerr-AdS 5
The superradiant instability of the Kerr-AdS 5 black hole appears when it has a QN frequency ω = ω lm 1 m 2 n whose imaginary part is positive as the amplitude of each QN mode is ∼ e Im(ω lm 1 m 2 n )t . If Im(ω lm 1 m 1 n ) > 0, the background spacetime is unstable against the perturbations. The time scale of the superradiant instability, τ SR , is estimated as , (4.17) where max(Im(ω lm 1 m 2 n )) is the maximum positive value of Im(ω lm 1 m 2 n ) among all QN modes. It is known that for small black holes, r + 1, with rapid rotations, there exist unstable QN modes whose frequencies satisfy the following inequality where Ω + ≡ a(1 − a 2 )/(r 2 + + a 2 ). This condition means that if the unstable modes exist, higher angular modes have a wider frequency band of unstable QN modes. To see this, we plot QN modes for l = 1, 2, 3, 4 in Figure 7. Then we compare the superradiant instability with the vacuum metastability in Kerr-AdS 5 spacetime. The lifetime of the false vacuum state in the Kerr-AdS 5 background, τ vacuum , is where A is the pre-factor that originates from the zero modes of the instanton and loop corrections to the saddle point solutions, and B is the Euclidean action of a stationary solution. We usually determine the pre-factor with the dimensional analysis as the magnitude of transition amplitude is mainly governed by the exponential factor. We here assume A = 1/r + as the size of a seed black hole determines the typical scale of a nucleated bubble [12]. We set the Kerr-AdS 5 background with M + = 2, l + = 7, l − = 4 and set the mass of the scalar field to µ = 0.01. We then compare the two lifetimes in Figure 8. It shows that the superradiant instability is absent in the low-spin region (a 0.8), but it appears for rapid spins (a 0.8). The superradiant instability is significant, i.e., τ SR is shorter, for rapid rotations. On the other hand, the more rapid the rotation of the seed Kerr-AdS 5 black hole is, the longer τ vacuum is. From Figure 8, one can see that at least in the parameter set, the superradiant instability does not interrupt the nucleation process of a vacuum bubble 3 . Note that we do not exclude the possibility that the superradiant instability dominates the false vacuum state with τ SR τ vacuum in different parameter sets.
Discussion and Conclusion
In this paper, we studied the nucleation process of a vacuum bubble in the Kerr-AdS 5 spacetime. It is an extension of the cosmological scenario [5][6][7] , ! , " Figure 7: Plot of unstable QN modes for the angular modes of l = 1, 2, 3, and 4. One can see that l = 1 mode has the largest value of Im(ω lm 1 m 2 n ). We fix the background metric with a = 0.001 and M = 10 −5 , and the mass of the scalar field is set to µ = 0.01. Note that the AdS radius is set to unity in this plot. Figure 8: Plot of τ SR and τ vacuum with respect to the spin parameter a + . We set µ = 0.01, M + = 2 and a + ≤ 0.92. The value of τ vacuum is computed for the most probable process of vacuum decay whose decay amplitude is determined only by ∆S BH .
expansion is realized on the nucleated bubble in an AdS 5 spacetime, to a situation where a gravitational impurity, i.e., a Kerr-AdS 5 black hole, is involved in the false vacuum state. It is known that an AdS spacetime is non-linearly unstable and small black holes may form in the AdS due to the gravitational turbulent instability [10,11]. We then found that the cosmological scenario works even in the less symmetric background, i.e., Kerr-AdS 5 false vacuum state, without contradicting the de Sitter swampland conjectures. We here computed the transition amplitude, e −B , of the bubble nucleation process with the Euclidean path integral technique. We then found that there exists a parameter region that admits the bubble nucleation in the Kerr-AdS 5 spacetime, and that the most probable process is given by a stationary solution for which the bubble wall has no oscilla-tion in the imaginary time and the transition amplitude is governed by the change of the Bekenstein-Hawking entropy e −B = e ∆S BH . We also found that for a rapidly rotating and massive seed black hole, the Bekenstein-Hawking entropy in the system increases due to the bubble nucleation and leads to e −B = e ∆S BH 1. It can be naively regarded as the breakdown of the semi-classical approximation. On the other hand, one could interpret it as a thermal transition with the increment of the entropy based on the generalized second law of thermodynamics [35][36][37]. In that sense, we may admit the parameter region leading to e ∆S BH 1 as a parameter region where a vacuum bubble is nucleated by the thermal activation of the seed black hole (see e.g., Ref. [38]).
As the seed black hole has the rotation while it is confined in the AdS barrier, the false vacuum we considered is not even classically stable due to the superradiant instability. Then we compare the lifetime of the false vacuum state determined by the vacuum bubble nucleation τ vacuum and that associated with the superradiance τ SR . We found that there exists a parameter set for which τ SR τ vacuum . Although we do not exclude the possibility that the superradiant instability disturbs the nucleation of vacuum bubbles in the Kerr-AdS 5 spacetime, we show that the cosmological scenario can be realized even with the superradiance.
that we here assume P = −σ. The energy momentum tensor is as we use the co-rotating coordinates. Still, the rotation effect remains in the geometrical quantity (e.g., centrifugal force) in the total onshell action. | 5,895.4 | 2022-09-12T00:00:00.000 | [
"Physics"
] |
Number of Non-Unique Minors ( of Various Orders ) and Elements in the Calculation of General Determinants
Problem statement: Many distinct properties of determinants have been studied and are known, yet a considerable number of properties stil l need further examination. This study investigates the number of minors (of various orders) and elemen ts of a matrix A contained in the expansion of the general determinant of A, irrespective of the indep endence, principality and distinctness of such minors and elements. Approach: A mathematical proof based approach is taken. Minor s of all orders and elements in the calculations of general determinants of matrices of sizes 2 ×2, 3×3, 4×4 and 5×5 respectively, are considered. Results: Two general expressions involving factorial terms are found: the first being equivalent to the number of minors of various orders found in the analysis of the considered matrices (mentioned abov e) and the second being equivalent to the number of elements found in the same analysis. Proo fs are then presented showing that the expressions hold in the general case of a matrix of size n×n. Conclusion: The results of this study present, with proof, expressions for the total numb er of minors (of various orders) and elements, respectively, in the general determinant of a matri x of size n×n, irrespective of the independence, principality and distinctness of such minors and el ements. Scope for further theoretical study, with applications in applied mathematics and the physica l and computer sciences is also indicated.
INTRODUCTION
The calculation of minors and determinants are crucial in many areas of mathematics, particularly in the teaching of linear algebra. In fact, interest in knowing the number of minors in the expansion of a general determinant dates back to 1928, when Stouffer determined an expression for the general determinant of a matrix in terms of its principal minors (Stouffer, 1928). In the same paper, Stouffer claimed that an expression for the number of independent principal minors in a matrix was known to MacMahon and later simplified by Muir. Aitken studied the number of distinct terms in the expansion of symmetric and skew determinants. Later in 1960, Metzler revised Muir's work titled 'The Theory of Determinants in the Historical Order of Development' (Muir, 2008), in which he derived the number of additive terms in the expansion of the general determinant of a matrix to be equivalent to n!, where n represented the number of columns (or rows) of the matrix (Metzler, 1960). Indeed, the same quantity re-appears in the results of this study, but the calculation here is done using the first row of a matrix.
Since then, interest in general expressions or relations between determinants and the number of minors of various orders and elements contained in them has faded. More recently though some references have begun to appear in mathematical literature; three examples of such are (Malek, 2011;Wilde et al., 2010;Jones, 2011).
Whilst the results of the studies discussed above provide much insight into properties of general determinants, independent principal minors and the relations between them, little attention was paid to finding expressions for the total number of minors (of various orders) and elements in a general determinant (irrespective of whether the minors and elements are principal or independent). If the determinant of a matrix is to be examined with respect to the number of minors and elements contained in the determinant, then these results pertain only to the case when the minors and/or elements in question are independent and situated with principality in the associated matrix. In an attempt to fill this gap, the authors present expressions in this study for the total number of minors (of various orders) and elements in a general determinant, irrespective of the independence, principality and distinctness of such minors and elements. The proofs of the same are also included.
Method, analysis and results:
Notation and terminology: Herein, the term uniqueness refers to the qualities of distinctness, independence and principality of a minor or element. That is, when counting the various minors and elements that are present in the complete expansion of the general determinant, each minor and element is counted as many times as it appears.
A n denotes a square matrix of dimensions n×n: • r: r ∈ {2, 3, 4,…, n}, denotes a minor of order r of the matrix, A n . Here order is defined as the size of the respective minor. As such, the statement r = n is to interpreted as the general determinant of the matrix, A n • N (n, r): r∈{2, 3, 4,..,n}, n∈{2, 3,4,…} denotes the number of minors of order r, irrespective of uniqueness of such minors, that appear in the general determinant of A n • a ij : i ∈{1, 2, 3,..,n}, j∈{1, 2, 3,..n}denotes an element of A n situated in the i th row and j th column of A n • N(a ij ) denotes the number of elements, irrespective of uniqueness of each element, that appear in the general determinant of A n Beginning with the case when n = 2 the determinant is then given by: Similarly when we consider the case of n = 4, the determinant is: Rewriting the above expansion similarly as in the 3×3 case and disregarding uniqueness of all minors and elements the following minors are noted: • N (a ij ) = 96 • N(4, 2) = 12 • N (4, 3) = 4 • N (4, 4) = 1 In the same manner, the following results are obtained when, n = 5: • N (a ij ) = 600 • N(5, 2) = 60 • N (5, 3) = 20 • N (5, 4) = 5 • N (5, 5) = 1 Main results: From the above analysis, an expression for the number of minors of order, r≥2, irrespective of uniqueness, that are present in the expansion of the general determinant of A n is Eq. 1: , r 2,3, 4,.., n , n 2,3, 4,... , r n r! = ∈ ∈ ≤ (1) The expression for the number of elements, irrespective of uniqueness, that are present in the complete expansion of the general determinant of A n is the given by Eq. (2) Proofs: Case 1: r 2,3, 4,.., n , n 2,3, 4,... , r n ∈ ∈ ≤ Sub-Case 1.1: r<n: Firstly, we note that this sub-case presents restrictions on r and n, such that r ≥ 2 and n≥3. Then the following is observed. The general determinant of a square matrix A n must contain n minors of order r = n-1: N(n, n 1) n , r n 1 Likewise, a minor of order r = n-1 that is in the general determinant of A n must contain n-1 minors of order, n-2, each of which is also in the general determinant of A n and thus it follows that: N(n, n 2) n(n 1) , r n 2 N(n, n 3) n(n 1)(n 2) , r n 3 N(n, n 4) n(n 1)(n 2)(n 3) , r n 4 .....
Sub-Case 1.2: r = n: As noted earlier, this gives the general determinant of A n and as such, this sub-case is trivial. Proving the converse of the above is present in the following.
Now let us suppose r = n. Again, this gives the general determinant of A n and as such, this sub-case is trivial.
Thus: Firstly, it is observed that for, n ≥2, any minor of order r = 2 that appears in the complete expansion of the general determinant of A n will be multiplied by n-2 elements a ij : i ∈{1, 2, 3,..,n}, j∈{1, 2, 3,..n} corresponding to the n-2 elements lying at the intersections of the rows and columns that must be struck out in order to isolate that minor. Upon multiplication, each of these n-2 elements contributes one element to each of the additive terms in each of these minors. Consequently, when a minor of order r = 2 is multiplied by these n-2 elements, the number of elements contributed to the complete expansion of the determinant by that multiplication is equivalent to (n-2)+2+(n-2)+2 = 2n.
Using the result proved in Case 1 earlier, the number of minors of order r = 2 irrespective of uniqueness in the general determinant of A n is given by, n! n! N(n,2) 2! 2 = = . It then follows that for n≥2, the total number of elements, irrespective of uniqueness, in the complete expansion of the general determinant is given by:
CONCLUSION
The results of this study present, with proof, expressions for the total number of minors of various orders and elements in a general determinant, irrespective of the independence, principality and distinctness of such minors and elements. The mathematics educational literature, particularly in the field of Linear Algebra, could employ these results as a contribution to the known properties of determinants. The results also indicate scope for further theoretical study of this property with applications in applied mathematics and the physical and computer sciences. A geometric interpretation of the results or a method for the calculation of general determinants based on these results would be logical subsequent theoretical studies, both having applications that would extend to the physical and computer sciences. | 2,167.2 | 2012-10-13T00:00:00.000 | [
"Mathematics"
] |
Fitness of ALS-Inhibitors Herbicide Resistant Population of Loose Silky Bentgrass (Apera spica-venti)
Herbicide resistance is an example of plant evolution caused by an increased reliance on herbicides with few sites of action to manage weed populations. This micro-evolutionary process depends on fitness, therefore the assessment of fitness differences between susceptible and resistant populations are pivotal to establish management strategies. Loose silky bentgrass (Apera spica-venti) is a serious weed in Eastern, Northern, and Central Europe with an increasing number of herbicide resistant populations. This study examined the fitness and growth characteristics of an ALS resistant biotype. Fitness and growth characteristics were estimated by comparing seed germination, biomass, seed yield and time to key growth stages at four crop densities of winter wheat (0, 48, 96, and 192 plants m-2) in a target-neighborhood design. The resistant population germinated 9–20 growing degree days (GDD) earlier than the susceptible population at 10, 16, and 22°C. No differences were observed between resistant and susceptible populations in tiller number, biomass, time to stem elongation, time to first visible inflorescence and seed production. The resistant population reached the inflorescence emergence and flowering stages in less time by 383 and 196 GDD, respectively, at a crop density of 96 winter wheat plants m-2 with no differences registered at other densities. This study did not observe a fitness cost to herbicide resistance, as often hypothesized. Inversely, a correlation between non-target site resistance (NTSR), earlier germination and earlier flowering time which could be interpreted as fitness benefits as these plant characteristics could be exploited by modifying the timing and site of action of herbicide application to better control ALS NTSR populations of A. spica-venti.
INTRODUCTION
Herbicide resistance is an large global problem observed in 250 weed species for 23 of the 26 known herbicide sites of action (SoA) (Heap, 2016). The overuse over many generations of herbicides targeting a few specific SoA in weeds has led to the evolution of weed populations surviving increasing doses of herbicides. Evolution through natural selection in plants is a continuous process which allows weed populations to eventually overcome most selection pressures that humans apply as eradication strategies. Therefore, the use of herbicides with novel SoA might control herbicide resistant weeds for a while, but inevitably some populations will evolve resistance. The herbicide resistance mechanisms are an important factor in the strength and speed at which herbicide resistance can evolve de novo or spread from neighboring populations. Non-target site resistance (NTSR) implies to repurpose pre-existent stress and defense enzyme pathways to defend the plant against herbicides. NTSR pre-dates herbicide use and implicate a wide diversity of pathways and is inherited in complex manners due to polygenic identity (Délye et al., 2011;Délye, 2013). NTSR normally confers multiple resistance to other herbicide SoA (Burnet et al., 1994;Petit et al., 2009;Cummins et al., 2013;Délye, 2013). Some weeds have shown to be resistant to herbicides that they had never been exposed to Espeby et al. (2011). Therefore, management of herbicide resistance cannot rely solely on chemical solutions and must incorporate evolutionary biology knowledge when dealing with NTSR resistant weeds (Neve et al., , 2014. Traditionally, fitness is defined as the number of viable and fertile offspring contributing to the next generation. In plants, this implies that fitness could only be measured by evaluating the quantity and quality of seeds . However, the number of seeds produced can depend on the health and growth of the individual. This is based on the allocation of resource theory stating that individual's metabolism has a limiting amount of energy to allocate to vegetative growth versus reproduction and that any extra energy allocated to health (for example if plants live in a sub-optimum environment) will have a negative effect on its reproductive ability (Maxwell et al., 1990;Herms and Mattson, 1992;Park et al., 2004;Vila-Aiub et al., 2015). Therefore, fitness can also be evaluated by using proxies such as growth and health measurements. Specifically regarding agricultural weeds, fitness has been defined as survival and reproductive success in field conditions (Menchari et al., 2007) which is directly linked to competitive ability. Therefore, evaluating traits linked to competitive ability such as germination rate, vegetative growth characteristics and time needed to reach specific growth stages can indicate relative fitness differences.
The direction of natural selection depends on the selective pressure (e.g., herbicide) and also on the fitness of individuals facing this new pressure. In weeds, it was expected, and has also been observed, that herbicide resistant individuals had a lower fitness than their susceptible counterparts in the absence of the herbicide in terms of biomass and/or seed production (Bergelson and Purrington, 1996;Purrington, 2000;Ashigh and Tardif, 2009). The presence of a fitness cost could dictate different management strategies in order to reduce the resistant population. A resistant weed population with a strong fitness penalty could, in theory, be reverted to a susceptible status if the herbicide is not used.
Different resistance mechanisms [target-site (TSR) vs. NTSR] result in different fitness (Roux et al., 2005;Vila-Aiub et al., 2009;Wang et al., 2010). NTSR mechanisms are hypothesized to have a fitness cost because of negative consequences due to the changes in pathway dynamics which could even alter ecological interactions. Also negative effects on energetic resources being diverted from growth and reproduction to defense mechanism have been suggested (Jasieniuk et al., 1996;Vila-Aiub et al., 2009). Increased detoxification due to cytochrome P450 monooxygenase (P450s) in an acetyl-CoA carboxylase inhibitor (ACCase) NTSR resistant Lolium rigidum was correlated with a reduced seed production, biomass and growth rate (Ashigh and Tardif, 2009). Similarly, NTSR resistant Bromus tectorum had a reduced biomass, seed number and leaf area (Park and Mallory-Smith, 2005). However, the assumption of negative fitness in resistant weeds has been challenged as more observations of neutral and even positive fitness have arised. Neutral fitness has been observed in glyphosate NTSR resistant Amaranthus powellii (Giacomini et al., 2014). Positive fitness was observed in ACCase resistant Setaria viridis and triazine resistant Phalaris paradoxa (Schönfeld et al., 1987). NTSR mechanisms are diverse and a multitude of large gene families are involved such as P450s, glycosyltransferase (GT), glutathione S-transferase (GST), ABC-transporters, esterase, etc. (de Carvalho et al., 2009;Délye, 2013). These gene families all act in different ways (e.g., conjugation, compartmentalisation, transport) to allow the plant to survive. This multitude of factors is why the fitness of resistant weeds must be evaluated on a case by case basis (Lehnhoff et al., 2013a;Vila-Aiub et al., 2015).
Acetolactate synthase (ALS) inhibitor herbicides inhibit the synthesis of branched-chain amino acids valine, leucine and isoleucine (Duggleby et al., 2008). ALS herbicides have a high number of resistant weed species than any other SoA (Heap, 2016). Loose silky bentgrass [Apera spica-venti (L.) Beauv.] is one of the most serious grass weed in Northern, Eastern and Central Europe (Hamouzová et al., 2011;Schulz et al., 2011). At a density of 200 plants m −2 , A. spica-venti reduces the yield of winter cereals up to 30%, and it can cause greater yield losses than Alopecurus myosuroides (Melander, 1995;Melander et al., 2008). The highest number of resistance cases in A. spica-venti has been reported for ALS herbicides (10 cases) followed by photosystem II (seven cases) and ACCase (three cases) (Heap, 2016). Two studies have investigated fitness differences between ALS resistant and susceptible A. spica-venti biotypes, both evaluating germination efficiency (Soukup et al., 2006;Gerhards and Massa, 2011). These studies revealed contradictory results where Soukup et al. (2006) found no fitness differences in a population with unknown resistance mechanism, while Gerhards and Massa (2011) found a threefold increase in germination rates in the TSR resistant biotypes. However, accurate estimation of fitness of herbicide resistant weeds is a difficult methodological task. Because of the importance of both environmental conditions and genetic background, experimental conditions and genotypes to be tested must be thoroughly controlled otherwise the fitness assessment cannot be solely attributed to the resistance allele(s) (Jasieniuk et al., 1996;Ashigh and Tardif, 2009;Vila-Aiub et al., 2009).
The lack of information of fitness in NTSR resistant A. spica-venti limits the implementation of evolution-knowledgebased resistance management strategies. This study aimed to evaluate the fitness and growth characteristics throughout the life cycle of ALS NTSR resistant A. spica-venti biotype with a randomized genetic background. The genetic backgrounds of resistant and susceptible populations were randomized over two successive generations. ALS resistance level and mechanisms were assessed. Fitness was estimated as seed germination at three different temperatures. A target-neighborhood experiment was conducted with three crop density where time to multiple key growth stages were recorded, as well as biomass and seed yield and compared to non-competitive conditions. We hypothesized that either no or higher rate of seed germination would be observed based on previous studies (Soukup et al., 2006; and that no differences in growth characteristics would be found based on previous field observations of ALS resistant and susceptible A. spica-venti populations in Denmark (Babineau, personal observation).
Population Selection
An ALS susceptible meta-population, named "S" was created by mixing the same proportion of seeds from five individual susceptible populations collected all over Denmark from 2004 to 2009 (Table 1). A meta-population approach was selected because we aimed to incorporate spatial and temporal genetic variations of susceptible A. spica-venti populations showing similar response to ALS. The resistant population 859P was selected because it showed a high resistance to the ALS herbicide iodosulfuron [Hussar OD, 100 g L −1 iodosulfuron-methyl Na + 300 g L −1 mefenpyr-diethyl (safener), Bayer CropScience, Germany], as well as high levels of multiple resistance to two other herbicides SoA: ACCase using fenoxaprop-P [Primera Super, 69 g L −1 fenoxaprop ethyl ester + 75 g L −1 mefenpyrdiethyle (safener), Bayer CropScience, Denmark], and fatty acid elongation using prosulfocarb (Boxer EC, 800 g L −1 prosulfocarb, Syngenta Crop Protection, Denmark) ( Table 1). The resistant population was collected from a different location compared to the populations in the S meta-population ( Table 1) which ensures different evolutionary origin (Delye et al., 2013).
Randomization of Genetic Background
This study opted for the randomized genetic background method previously used (Roux et al., 2005;Wakelin and Preston, 2006) where a succession of two crosses (F2) was used to study the fitness of herbicide resistant in Lolium rigidum (Wakelin and Preston, 2006). A succession of two generations of crosses (859P × S) was performed between the resistant population and the susceptible "S" in order to obtain F2 generation populations that have a randomized genetic background. Plants from susceptible and resistant population were grown in 2 L pots (1 plant per pot) filled with a potting mixture consisting of soil, peat, and sand (2.1:1 w/w) containing all necessary micro and macro nutrients until early flowering stage. Two seed-proof isolation cabinets with automatic watering were used for cross pollination with two resistant and two susceptible plants in each cabinet. Seeds were collected from the resistant plants, threshed, cleaned, and pooled. Seeds from the F1 cross (859F1) were sown in the same conditions as the previous year for the second crossing (859F1 × S). Seeds from the randomized genetic background F2 generation (859F2) were harvested, cleaned, and kept at 4 • C for at least 3 weeks to reduce primary dormancy.
To further confirm NTSR mechanisms in population 859F2, increased cytochrome P450s monooxygenase detoxification was assessed by exposing plants of each populations to malathion immediately before spraying ALS herbicides (Christopher et al., 1994). Malathion is an insecticide and inhibits some P450s in plants. Four plastic racks containing 100 plants were prepared for population 859F2, S and ALS NTSR resistant ID80. One rack remained untreated, one sprayed with iodosulfuron only, one sprayed with malathion only and the fourth with malathion prior to iodosulfuron. At the 2-3 leaf stages, two racks were sprayed with 1000 g of malathion/ha (440 g L −1 malathion, Cheminova). Immediately after, respective racks were sprayed with 3 g iodosulfuron/ha. Plant survival was recorded 21 days after spraying.
ALS Resistance Bioassays on F2 Populations
The ALS resistance level of the F2 population was assessed with a discriminating dose of 2.5 g iodosulfuron ha −1 . A total of 100 plants were sown in plastic racks filled with potting mixture. Plants were sprayed at the 2-3 leaf stage (BBCH 12-13) in a cabinet sprayer with a boom fitted with two Hardi ISO-F-020-nozzles operated at a pressure of 300 kPa, and a speed of 5.1 km h −1 resulting in a spray volume of 150 L ha −1 . Fresh weight (FW) and the number of surviving plants was measured.
Seed Germination
Seeds from 859F2 and S were cleaned using a seed blower (New Brunswick General Sheet Metal Work, New Brunswick, NJ, United States) for 1 min at an upper air velocity of 4.0 ms −1 to eliminate empty seeds. A 100 seeds were placed in Petri dishes (9 cm) containing four cellulose filter paper (Whatman No. 1) covered by one glass-fiber filter paper (Whatman GF/A). Five Petri dishes were prepared for each population at each of the three temperature tested: 10/6, 16/10, and 22/10 • C all at 14/10 h of day/night photoperiod. Three temperatures were selected to assess germination at different conditions. The 16/10 and 22/10 • C temperatures were selected following previous germination experiment with A. spica-venti (Andersson and Åkerblom Espeby, 2009). Seeds were imbibed with potassium nitrate (0.3%) and placed in climate cabinets for 20 days. Distilled water was added to Petri dishes when needed. Seeds were considered to have germinated when the radicle had emerged (BBCH 05). Germinated seeds were counted daily and removed. This experiment was replicated twice.
Target-Neighborhood Experiment
A target-neighborhood design was used to compare the vegetative and reproductive ability of ALS a susceptible metapopulation and the resistant biotype 859F2, in response to increasing densities of neighboring winter wheat (cv. Torp) throughout the life cycle of A. spica-venti. Similar targetneighborhood methods has been previously employed to study weed competition (Wakelin and Preston, 2006;Walsh et al., 2009;Keshtkar, 2015). Four winter wheat densities of 0, 48, 96, and 192 plants m −2 were established by sowing 0, 2, 4, and 8 plants in 10 L pots filled with potting mixture. Winter wheat plants were planted in a circular pattern order to have an equal distance among each winter wheat plant and with the A. spica-venti individual in the middle. The experiments were conducted from December 2015 to July 2016. Six pots per density per A. spica-venti population were placed on tables with an automatic watering system in greenhouse condition simulating field conditions (light and temperature) typical of the time of the year (January: 10 h light with 9.5 • C in the day, June: 17 h light with 21.4 • C in the day). Additional winter wheat and A. spicaventi plants were sown and used to replace plants that did not germinate or emerged later than the rest of the plants in their pots. Replacement was performed until the 2-leaf stage, after which no replacements were performed. This experiment was replicated twice.
A series of vegetative growth stages were measured every week for the A. spica-venti plants in each pot; number of tillers , time to the beginning of stem elongation (STEM, BBCH 30), time to the first visible inflorescence (INVIS, BBCH 51), time to the first inflorescence fully emerged (INFEM, BBCH 59), time to 100% flowering (FLO) and time to 50% mature seeds (SEED50). Additionally, the above-ground biomass was measured 6 weeks after sowing using three of the six pots per density per population, and final above-ground biomass (34 weeks after sowing) on the remaining three pots. Fresh (FW) and dry weight (DW) of the A. spica-venti plant and winter wheat plants were measured. Reproductive ability was evaluated indirectly using the potential seed production measurement (Melander, 1995). First a seed number to panicle length ratio was calculated using 10 panicles from all populations. Then, at the final harvest, each panicle length was measured and potential seed production estimated. Aphids and powdery mildew were controlled with the insecticide imidacloprid (Confidor 70 WG, 700 g kg −1 , Bayer CropScience, Denmark) and the fungicide metrafenon (Flexity, 300 g L −1 , BASF A/S, Denmark) respectively. At the beginning of the flowering stage, each population was isolated so that crossing occurred only within populations.
Statistical Analysis
Seed germination data was analyzed using a three-parameter loglogistic (LL.3) model (Eq. 1) as a function of the temperature sum according to the time-to-event approach (Ritz et al., 2013) where E is the number of germinated seeds at temperature sum x ( • C), e (ED 50 ) is the temperature sum required to reach 50% of the maximum germination, and b is the relative slope at e indicating germination rate.
Each temperature was fitted individually and then as a combined dataset. The cumulative temperature was from time of imbibition until the end of the experiment. The accumulation of thermal time (GDD, • Cd) was calculated using a base temperature of 0 (Scherner et al., 2016). The five vegetative growth stages (STE, INVIS, INFEM, FLO, and SEED50) were analyzed using a twoparameter log-logistic (LL.2) model (Eq. 2) also as a time-to-event approach where E is the number of A. spica-venti plants that have achieved the specific stage at temperature sum x ( • Cd), with e being the temperature required for 50% of plants population to reach a particular growth stage.
Each stage was fitted separated by crop density (D0, D2, D4, and D8). The temperature sum of each week was calculated by summing the average temperature of each day of the week using the temperature log in the greenhouse. The FW was used to analyze the biomass response from the two harvest time points Frontiers in Plant Science | www.frontiersin.org to the four crop densities using a density-response LL.3 (Eq. 1).
In the case of biomass, e is the density producing a fresh weight response half way between the upper limit, d, and the lower limit, 0, and b is the slope at e. Seed production potential was analyzed using a LL.3 (Eq. 1) model where e is the density producing a seed yield half way between the upper limit, d, and the lower limit, 0, and b is the slope at e. The density-response model is exactly the same as the dose-response model (Ritz and Streibig, 2005) but substitutes herbicide dose with crop density. All time-to-event and density-response model fittings were followed by graphical analysis of the distribution of residuals and a test for lack of fit comparing the residual sum of square of a two-way analysis of variance and the non-linear regression. If the graphical analysis of the residuals was not satisfactory, the biomass was transformed using a Box-Cox transformation and the model was fitted again.
Once each population was fitted with a satisfactory model, the ED 50 was compared between populations using the Bonferroni correction (Benjamini and Hochberg, 1995), an adjustment for multiple comparisons. Time-to-event and dose-response analysis were all performed in R (R Core Team, 2015) using package drc (Ritz and Streibig, 2005). The tiller number values were analyzed in R (R Core Team, 2015) using linear regression with an alpha significance threshold of 95%. An ANOVA was fitted to the regression, followed by a pairwise t-test with a Bonferroni correction for multiple p-value comparisons. Finally, a Tukey honest significance differences test was performed to identify which populations were significantly different. Tiller numbers were analyzed separately by week, by crop density and all densities together.
ALS Resistance Level and Mechanisms
The 859F2 population with a randomized genetic background was less resistant than the parental 859P populations ( Table 1). The F2 population was nonetheless showing a low level of resistance to iodosulfuron ( Table 1). The NTSR experiment using malathione (Figure 1) showed that malathione alone had no effect on plant survival and that the 859F2 population had an intermediate ALS resistance level compared to S and ID80. The application of malathione prior to the ALS herbicide reduced plant survival to zero in all populations. The malathione synergy experiment together with the absence of the known mutations causing TSR and the multiple resistance status of its parental population strongly implied that resistance was due to NTSR. The randomization of the genetic background therefore decreased the resistance levels in the F2 populations, but did not eliminate the resistance alleles.
Germination
The ALS resistant population showed significant differences in the maximum germination (d) and ED50 (e) for seed germination at all temperatures compared to the S meta-population (Figure 2 and Table 2). 859F2 showed significant difference in germination rate (b) only at 10 • C. The difference in the ED 50 • Cd between susceptible and resistant biotypes was largest when germination took place at 16 • C (19.8 • Cd) and similar (9.5 • Cd) at 22 and 10 • C ( Table 2). The susceptible population produced a higher number of germinating seeds than the resistant population (Figure 2).
Vegetative and Reproductive Growth Stages
No significant differences were found for the number of tillers between susceptible and resistant populations for the four time points recorded ( Figure 3C). No significant differences in e were found for biomass 6 weeks after sowing and final biomass between populations (Figures 3A,B and Supplementary Table S1). Further, no significant differences were observed in panicle number or potential seed production between populations ( Figure 3D). The R 2 value for seed number as a function of panicle length was 0.71 which is in line with similar estimates also made on A. spica-venti panicles (Melander, 1995). Across all four crop densities, population 859F2 and S produced a mean of 123107 ( ± 164178) and 81433 ( ± 99354) seeds respectively, showing large variation between densities but no significant differences between the two populations. Tiller number, biomass, panicle number, and seed production were all inversely proportional to crop density in all populations.
There were no significant differences in e for STEM, INVIS, or SEED50 (Figures 4A,B,E) at any crop density. It took on average 1 800 • Cd and 2 500 • Cd respectively to reach STEM and INFVIS by both resistant and susceptible populations ( Table 3 and Supplementary Table S2). However, significant differences in ED 50 were observed at INFEM and FLO stages at D4 (Table 3 and Figure 4). At the INFEM stage at D4, 859F2 was faster by 383 • Cd, for FLO at D4, 859F2 was faster by 196 • Cd (Table 3). No significant differences were observed at D0, D2, or D8 in any of the five growth stages (Figure 4 and Supplementary Table S2). Even in stages with no significant differences (STEM, INFVIS, SEED50) trends between the different densities and populations Frontiers in Plant Science | www.frontiersin.org can be seen (Figure 4). The differences between resistant and susceptible population were also observable when estimating the GDD until 90% of the population reached INFEM and FLO (Figure 5).
DISCUSSION
The fitness and competitive ability of one ALS NTSR resistant population of A. spica-venti was tested against a susceptible biotype throughout its life cycle. No differences were found in terms of tiller number, biomass, time to stem elongation, time to first visible inflorescence, time to 50% mature seed, nor in final yield. The absence of biomass differences in competitive conditions was also observed in NTSR resistant Avena fatua (Lehnhoff et al., 2013a). A significant correlation was observed for the resistant population in terms of seed germination, time to emerged inflorescence (INFEM), and time to 100% flowering (FLO). The resistant population germinated 9 to 20 • Cd earlier than the susceptible population which in Denmark would represent 1-2 days difference at the time of sowing of winter cereals. This result is consistent with the results reported for TSR ALS resistant B. tectorum and Kochia scoparia biotypes which showed significantly higher germination rate at low temperatures (5 • C) compared to the corresponding susceptible biotypes (Dyer et al., 1993;Park et al., 2004). Differences in germination rate were observed only at the lowest temperature unlike a previous study of ALS resistant A. spica-venti where a threefold increase in germination for the TSR (Pro197, Trp574, and Arg377) resistant biotypes was observed at 20 • C/15 • C with 12 h photoperiod. This difference could be due to resistance mechanisms as TSR and NTSR have very different genetic implications and different genes are modified to allow survival (Délye, 2013). Single nucleotide mutations are hypothesized to have negative pleiotropic effect on gene function whereas NTSR mechanisms are thought to have negative effect on growth as a limited amount of resources have to be allocated to either defense or growth (Vila-Aiub et al., 2015). Therefore, the modified ALS gene, which is expressed in seeds and is essential in the synthesis of amino acids leucine, valine and isoleucine, in the TSR individuals could explain the increased germination rate found by Gerhards and Massa (2011). A reduction in total germination for the resistant population was observed in this study compared to the susceptible, in contrast to Soukup et al. (2006) who found no differences between four ALS resistant and four susceptible field populations of A. spica-venti tested in Petri dishes (20 • C) and pot experiment. The difference in total germination could be due to differences in dormancy proportions between resistant and susceptible, as A. spica-venti has previously shown variation in seed primary dormancy (Andersson and Åkerblom Espeby, 2009).
Weeds that germinate earlier can better compete with crop and other weeds which gives them a considerable advantage (Kleemann and Gill, 2013;Owen et al., 2015). More than 30 different P450s are expressed in A. spica-venti seeds at different germination stages and NTSR, via an increased cytochrome P450s activity, can be hypothesized to have a pleiotropic effect on germination (Babineau et al., 2017). The results of this study also show differences in the magnitude with germination at 16 • C showing the largest difference between resistant and susceptible population compared to 10 and 22 • C. Fitness differences between germination temperatures were also observed in resistant B. tectorum showing germination differences between ALS resistant and susceptible populations only at low (5 • C vs. 15 • C and 25 • C) temperature (Park et al., 2004). This could be explained by the known association of germination efficiency and temperature and might reflect regional adaption. Alternatively, there could be linkage between genes associated with germination timing and P450s under selection for ALS resistance. This is supported by the fact that P450s genes in Arabidopsis thaliana have been shown to evolved from ancient whole genome and successive gene duplication of P450s tandem arrays resulting in P40s genes being tightly linked and distributed on multiple chromosomes (Bak et al., 2011). Several P450s (CYP707A1-A4) have been found to be directly involved in decreasing the levels of abscisic acid (ABA) during seed imbibition in A. thaliana which allows seeds to germinate by lowering dormancy (Schopfer et al., 1979;Kushiro et al., 2004). The same relationship between P450s and seed germination has not yet been established in A. spicaventi, but the increased germination in individuals showing a high variation in P450s due to herbicide selection this is possible.
At the medium density of 96 wheat plants m −2 the ALS NTSR resistant population took less time to reach INFEM and FLO by 383 and 196 • Cd respectively, which corresponds to around 25 and 13 days, respectively, in field condition in Denmark. The importance of timing of trait has been underlined before (Paris et al., 2008). Similarly, NTSR resistant A. fatua reached anthesis earlier than susceptible biotypes (Lehnhoff et al., 2013b). Flowering time appears to be a plastic trait as flowering time in Raphanus raphanistrum was halved within five generations using directional selection (Ashworth et al., 2016). The ability to reach inflorescence emergence and flowering earlier is an advantage to weed populations allowing them to escape potential eradication by late season management strategies or harvesting. Herbicide NTSR resistance mechanisms can have different pleiotropic effects that could manifest themselves only at specific growth stages. In the field, difference in flowering time could imply selective interbreeding only between members of a resistant population which could result in sub-population structuring and differentiation over consecutive generations. Therefore, different ALS resistance mechanisms present in neighboring populations would not mix due to gene flow and populations could instead become increasingly genetically isolated.
The effect of crop competition on fitness and growth characteristics of ALS resistant biotypes shows an interesting pattern in this study. There were no significant differences at low densities (D0 and D2) neither at the high density (D8) in any of the five growth stages. Growth differences only manifested at 96 winter wheat plants m −2 (D4). This result indicates that high crop density reduces the fitness of resistant biotypes to an equal level to the fitness of susceptible populations as previous studies have been shown (Park et al., 2004;Paris et al., 2008). Recovery of growth timing differences at intermediate crop density and not at the lower densities has also been shown previously at D4 in A. fatua (Lehnhoff et al., 2013a,b). This could indicate differences in the regulation of NTSR constitutively expressed genes. Competition is known to influence the traits that exhibit fitness differences. Evolution in competitive conditions will favor fitness cost on early growth traits while evolution in non-competitive conditions will favor fitness cost on later growth traits (Paris et al., 2008). However, a fitness advantage of the resistant population in both early (germination) and later (inflorescence emergence and flowering) growth stages might imply very different and variable competitive conditions when ALS resistance evolved. The crop density in which we observe significant growth differences is much lower than the realistic wheat density (200-300 m −2 ) in farm conditions. We did not observed any differences in the growth of NTSR A. spica-venti at the lower end of the farm wheat density spectrum (D8; 192 m −2 ). Therefore, the timing of growth stages differences observed in this study most likely will not be observed in field conditions at normal wheat densities. However, in crops with lower farm densities or different competitive ability compared to wheat, the differences observed here in the timing of growth stages for this weed could be observable and would be worth investigating. This study did not demonstrated a fitness cost of resistance, as often hypothesized, but found a correlation with earlier germination and growth which could be a fitness benefit in some field conditions. Earlier germination can translate to a better access to nitrogen and nutrients and early vigor. However, this benefit could turn into a cost in case of false seedbed or pre-emergence application of foliar-active herbicides.
Fitness benefits have been attributed to two possible processes: compensatory and replacement hypothesis (McKenzie et al., 1982;Andersson, 2003;Paris et al., 2008;Darmency et al., 2015). Compensatory mechanism implies specific pleiotropic consequences, where the resistance allele is linked to other alleles (modifier genes) that help compensate for the fitness cost of resistance (Andersson, 2003;Darmency et al., 2015). These modifier genes can be linked to resistance alleles from the beginning or can evolve rapidly (McKenzie et al., 1982). The replacement hypothesis argues that resistance alleles carrying a fitness penalty can be replaced over time with similar resistance alleles that do not carry a fitness penalty (Paris et al., 2008). In the case of the resistant population examined in this study, both hypotheses could apply since it is unknown how long ago the resistance evolved and therefore replacement or modifier genes could have had enough time to evolve and be fixed.
In general, fitness is defined as "to survive and produce a number of fertile and viable offspring that will contribute to the next generation" (Barker, 2009). In strict evolutionary terms, there were no fitness cost or benefit observed in this study as the final seed number (viability was not measured) was not different between resistant and susceptible A. spica-venti. However, in an agricultural landscape, plants (crops and weeds) are managed throughout their life cycle and their competitive ability has a direct effect on their survival which often cannot be measured solely based on seed production. Small differences in competitive ability or in management practices can have large effect on weed population level which will obviously be observed in seed yield at the end of the season. But this difference in seed yield will not identify the precise life stage and precise differences that led to the higher total fitness which is key in weed management. Therefore fitness evaluation and measurement in agricultural conditions has to be enlarged to encompass growth and competitive differences much more prominently than viable offspring evaluation seen in natural environmental studies (Leimu et al., 2006). For these reasons, we surmise that fitness cost or benefits in weeds should consistently aim to assess plants throughout their life cycle. The strict definition of fitness based on viable offspring number is not appropriate for evaluating plants in intensively managed anthropogenic environments.
Another theoretical aspect emerging from this study is the use of randomized genetic background in studying NTSR mechanisms. This randomization of the genetic background method used to compare individual fitness in herbicide resistant weeds was originally designed for one allele difference, e.g., TSR and was used as such (Roux et al., 2004;Menchari et al., 2007;Delye et al., 2013). The handful of studies investigating fitness cost of NTSR mechanisms have used resistant and susceptible individuals from the same population (P450s; Vila-Aiub et al., 2005), reduced translocation (Wakelin and Preston, 2006;Pedersen et al., 2007), or the genetic background randomization (EPSPS gene amplification, Giacomini et al., 2014). However, just as in our study, the exact NTSR allele(s) was not identified (e.g., which P450s isoenzyme) and fitness evaluation was based on bulk assessment of involvement of one or multiple large gene families. The problem with evaluating fitness cost from NTSR is that, as mentioned before, these mechanisms are assumed to be polygenic and generally involves multiple alleles, from multiple loci across a varied functional background. All of these have not only their own evolutionary history regarding herbicide resistance, genetic linkage and inheritance, but most likely have their own fitness cost associated. This bulk assessment of NTSR fitness cost comprises a mixture of genes that could have additive, multiplicative, agonist, antagonist, and compensatory effects on fitness phenotype depending on the combination that has evolved in a particular population in a particular weed species.
The aim of the genetic randomization method is to have similar genetic background except for the alleles conferring resistance by equally mixing the whole genetic background of both herbicide resistant and susceptible individuals by performing crosses for a few generations. In this context, it is uncertain how the various NTSR alleles will be distributed after a few generation of genetic randomization background; they could be randomly distributed but then no "true" susceptible will be found, they could be lost, or some alleles could co-segregate due to linkage or recombination. The latter would result in individuals that are resistant to herbicide due to NTSR but not the same allele combinations than found in the field which could not guide weed management strategies. The assessment of field populations displaying NTSR will be difficult because of the reasons explained above and also because NTSR mechanisms pre-exist at a certain degree in every individuals no matter their phenotype. The study of the bulk effect of NTSR mechanism on fitness is useful as it represents more closely the situation in the field and also gives a response to the question of the presence or absence of a fitness cost. In the absence of a fitness cost, the resistant individuals are not likely to decrease without the herbicide and other management strategies can be sought. If a fitness cost (or benefit) is detected from the bulk analysis, then the identification of the specific alleles and their individual effect on fitness cost becomes necessary.
An appropriate method for estimating fitness cost from individual NTSR alleles and their respective roles in fitness would be to create different lines with very similar genetic backgrounds (using near isogenic lines or model plant A. thaliana) but differing by only one allele. Increasing gradually the number of NTSR alleles in some lines could allow to effectively estimate the epistatic effect (cumulative, antagonist, etc.) of the different genes involved in NTSR. This method requires time, large spatial resources and genetic transformation ability (if used on a weed species) but most importantly requires to identify each NTSR alleles thought to be involved in herbicide resistance in a particular individuals showing a fitness cost which would necessitate a differential gene expression pre-analysis. This method could also allow testing the hypothesis of modifier genes. To our knowledge, no study has performed such an analysis yet but with the increasing number of herbicide NTSR fitness cost studies (Park and Mallory-Smith, 2005;Vila-Aiub et al., 2005;Pedersen et al., 2007;Preston and Wakelin, 2008;Giacomini et al., 2014) and transcriptomic studies identifying herbicide resistant genes (Hu et al., 2009;Cummins et al., 2013;Gaines et al., 2014;Duhoux et al., 2015) this method has the basic knowledge available to be used. Once used across a range of NTSR alleles, the comparison with different NTSR weed species or populations might inform about the universality of fitness cost associated to certain alleles which could help weed management. In our study for example, the inconsistent fitness difference observed (germination and flowering stage only) could be explained by the role of different NTSR genes having pleiotropic effects at different life stages.
The presence of fitness differences between herbicide resistant and susceptible biotypes has been used previously to develop management strategies that exploit those differences (Neve et al., , 2014Preston et al., 2009), however, most have focused on a fitness cost assumption. The finding of potential beneficial differences in ALS NTSR resistant A. spica-venti, as in this study and by Gerhards and Massa , calls for the development of management strategies that take into account different direction and magnitudes of fitness consequences (Colbach et al., 2006). The early germination observed for the resistant biotype in this study could imply a selective eradication by preemergence application of foliar-active herbicides or stale seedbed methods. The earlier flowering and seed production stages will be difficult to manage. However, flowering differences were recovered at a crop density lower than in Danish fields, which means that at the current sowing densities of winter wheat these differences are most likely non-existent and would not cause new management problems. The management of such growth differences in NTSR biotypes cannot be chemically managed because of the high risk of multiple resistance, which emphasizes the importance of herbicide resistance prevention strategies.
Lastly, the results observed here come from only one NTSR population. Therefore, the investigation of more A. spica-venti NTSR populations in terms of germination and growth characteristics might confirm, or nuance, the results observed here, as sample number is an important factor is the estimation of herbicide resistance fitness cost in weeds (Cousens et al., 1997).
CONCLUSION
This study observed a correlation between germination and growth differences and NTSR resistance in an A. spica-venti population. The resistant population germinated 9-20 GDD earlier at 10, 16, and 22 • C, reached inflorescence 383 GDD earlier and flowered 196 GDD earlier at a density of 96 wheat m −2 compared to the susceptible population. No differences were observed at other wheat densities or regarding tiller number, biomass, time to stem elongation, time to first visible inflorescence, time to 50% mature seed, nor in final yield. The differences in growth characteristics identified could be used to better manage ALS NTSR loose silky bentgrass populations in the future.
AUTHOR CONTRIBUTIONS
MB designed, conducted, analyzed data and drafted the manuscript. SM, MK, and PK designed the experiment, advised with data interpretation and reviewed the manuscript.
FUNDING
Funding was provided by Aarhus University, the Danish Council for Strategic Research, and Bayer CropScience. | 9,231.6 | 2017-09-25T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Comparison of minimally invasive transforaminal lumbar interbody fusion and midline lumbar interbody fusion in patients with spondylolisthesis
Background This study aimed to compare surgical outcomes, clinical outcomes, and complications between minimally invasive transforaminal lumbar interbody fusion (MIS TLIF) and midline lumbar interbody fusion (MIDLIF) in patients with spondylolisthesis. Methods This study retrospectively compared the patients who underwent MIS TLIF (n = 37) or MIDLIF (n = 50) for spinal spondylolisthesis. Data of surgical outcomes (postoperative one-year fusion rate and time to bony fusion), clinical outcomes (visual analog scale [VAS] for pain and Oswestry Disability Index [ODI] for spine function), and complications were collected and analyzed. Results There was more 2-level fusion in MIDLIF (46% vs. 24.3%, p = 0.038). The MIS TLIF and MIDLIF groups had similar one-year fusion rate and time to fusion. The MIDLIF group had significantly lower VAS at postoperative 3-months (2.2 vs. 3.1, p = 0.002) and postoperative 1-year (1.1 vs. 2.1, p = < 0.001). ODI was not significantly different. The operation time was shorter in MIDLIF (166.1 min vs. 196.2 min, p = 0.014). The facet joint violation is higher in MIS TLIF (21.6% vs. 2%, p = 0.009). The other complications were not significantly different including rate of implant removal, revision, and adjacent segment disease. Conclusion In this study, postoperative VAS, operation time, and the rate of facet joint violation were significantly higher in the MIS TLIF group. Comparable outcomes were observed between MIDLIF and MIS TLIF in terms of fusion rate, time to fusion, and postoperative ODI score.
Introduction
Transforaminal lumbar interbody fusion (TLIF) has been proven to be an effective solution for spinal instability [1,2].To minimize surgical trauma and enhance patient's recovery, minimally invasive techniques are introduced in the past decades such as minimally invasive TLIF (MIS TLIF) [3].MIS TLIF provides less estimated blood loss, less tissue trauma, and shorter hospital stays compared to traditional open TLIF [4,5].
Pedicle screws used in MIS TLIF are placed in traditional trajectory.The traditional trajectory screw is inserted parallel to endplate, aims from lateral to medial, and placed convergently.Traditional trajectory screw provides well posterior fixation of spinal fusion.However, some disadvantages of traditional pedicle screw insertion including medial pedicle wall breaching [6], facet joint violation [7], damage of the medial branches of dorsal rami of spinal nerves (MBN) [8,9], and higher screw loosening rate in osteoporotic patients [10,11].The stability of traditional trajectory screws is provided by dorsal cortex and the surrounding cancellous bone.Osteoporosis would attenuate the screw strength and lead to higher screw loosening rate [12][13][14].
In order to enhance the screw strength, screw insertion through cortical bone trajectory (CBT) was first introduced by Santoni et al. in 2009 [15].CBT increased 30% pull-out strength and 70% insertion torque compared with traditional trajectory [15,16].With the new trajectory aims from medial to lateral and from caudal to cephalad, CBT can purchase more cortexes and provides more strength.The different trajectory also brings the advantages of less facet joint violation, less medial pedicle wall breaching, and less surgical trauma [17].A meta-analysis revealed that CBT and traditional trajectory received similar fusion rate, whereas CBT was associated with less blood loss and shorter hospital stays than traditional trajectory [17].Additionally, CBT screw insertion could be performed with laminectomy and decompression simultaneously through a posterior midline approach (MIDLIF).Due to the advantages of CBT, MID-LIF is gaining popularity recently.However, few studies directly compare MIDLIF to MIS TLIF.This study aimed to compare surgical outcomes, clinical outcomes, and complications between MIS TLIF and MIDLIF.
Patients
The retrospective cohort study was approved by the Institutional Review Board of Show Chwan Memorial Hospital (No. 1100706).Eligible patients were those who underwent MIS TLIF between November 2014 and March 2018 (MIS TLIF group) or underwent MID-LIF (MIDLIF group) between April 2018 and April 2021 in the Show Chwan Memorial Hospital, had spinal instability due to degenerative or isthmic spondylolisthesis Meyerding grade I-II [18], fusion levels less than three, and received postoperative follow-up for at least one year.Patients were excluded if they had active infection, malignancy, prior history of spinal surgery, or postoperative follow-up for less than one year.All operations were performed by the one experienced spinal surgeon.
Surgical techniques MIS TLIF
Operation was performed using Wiltse approach as previously described [19].A 4 cm incision was made on the cage insertion side.The dissection was made between multifidus and longissimus and down to the lamina and facet.Unilateral approach bilateral decompression was performed to relieve pressure on the spinal nerves in cases of spinal stenosis.The intervertebral body space was carefully prepared including disc material removal, decortication of bony endplate, autograft placement, and cage selection.Superior facet joint was preserved to prevent future adjacent segment disease.Staple wounds were made on the other side to facilitate percutaneous screwing.Pedicle screw was placed with guide pins and dilators under fluoroscopic guidance.Soft tissue around the entry point was not cauterized due to the limited operative field of the staple wound.After the procedure of decompression, cage placement and screw insertion were done, the rod were assembly and secured.Checked the final structure under fluoroscope before wound closure.The wound was closed layer by layer.The screws we used in MIS TLIF were MANTIS (Stryker, Kalamazoo, MI, USA) and Trend I systems (Biomech, Taipei, Taiwan).All polyetheretherketone cage (G cage, Biomech, Taipei, Taiwan) was used in MIS TLIF.
MIDLIF
MIDLIF was performed using CBT screw insertion as previously described [20,21].The midline incision was made and dissection was made between spinal process and paraspinal muscles.Exposed the lamina and facet.The bilateral entry point of the screw was exposed with electrocautery, which is different from MIS TLIF.The CBT screw was inserted divergently under fluoroscopy.The decompression was done on the symptomatic side and until the pulsation of the spinal cord restored.After the decompression and screw insertion were done, the rod and crosslink were assembled and secured.The final structure under fluoroscope was checked before wound closure.Wound was closed layer by layer.The CBT screws placed in MIDLIF were Wiltrom (Wiltrom, Hsinchu, Taiwan) and Trend II (Biomech, Taipei, Taiwan) systems.Interbody fusion was done using all polyetheretherketone cage (G cage, Biomech, Taipei, Taiwan).
Data collections
Data were collected through retrospective chart review.Demographics, body mass index (BMI), bone marrow density (BMD), smoking status, comorbidity (including diabetes mellitus, hypertension, and coronary artery disease), diagnosis, perioperative data (i.e., operation time and blood loss), postoperative data (i.e., change in hemoglobin and hospital stay), surgical outcomes, clinical outcomes, and complications were collected.
Surgical outcomes included fusion status at postoperative one-year and time to bony fusion.Fusion status was assessed by computed tomography (CT) scan.The CT was arranged once the lumbar spine flexion extension radiograph showed the angular motion change less than 5 degree at fusion level, trabecular bony bridge formation without radiolucent line, and no implant failure [22,23].
Clinical outcomes included pain degree and spine function, which were assessed using visual analog scale (VAS) and Oswestry Disability Index (ODI), respectively.The VAS score ranges from 0 to 10 (0 = least pain, 10 = worst pain).The ODI contains 10 patient-completed questions to evaluate spine function.Each question is scored on the scale of 0 to 5 (0 = best outcome, 5 = worst outcome).The overall ODI score ranges from 0 to 100% and a lower score indicates better function [24].Evaluation of pain and spine function were performed at preoperative, postoperative 3-month, and postoperative 1-year.
Complications included implant removal due to screw head irritation, revision surgery, screw loosening, and implants related complication.Implants related complications consisted of medial breaching, lateral breaching, and facet joint violation (Fig. 1).Screw loosening was defined as presence of radiolucent area of more than 1 mm surrounding the screw and double halo sign on lumbar spine radiograph [25].The CT scan was arranged when symptomatic complications occurred.The screw malposition was investigated by authors using the fusion CT as mentioned above.Safe zone was defined as breaching less than 2 mm [26].Screw breaching more than 2 mm were recorded.All the images were interpreted independently by two orthopedists.Disagreement of the interpretation was resolved by further discussion.Lumbar spine radiography was performed preoperatively, immediate postoperatively, and at postoperative 1-, 2-, 3-, 6, and 12-months.
Statistical analysis
Continuous variables were presented as mean (standard deviation) and categorical variables as count (percentage).To compare the MIS TLIF and MIDLIF groups, Mann-Whitney U test and Fisher's Exact test were used for continuous variables and categorical variables, respectively.One-year fusion rate was compared using log-rank test.A two-tailed p < 0.05 indicated statistical significance.All analyses were performed using IBM SPSS Statistics for Windows, version 24 (IBM Corporation, Armonk, NY, USA).
Results
A total of 87 patients were included in this study, 37 in the MIS TLIF group and 50 in the MIDLIF group.There were no significant differences between the two groups regarding age, gender, BMI, smoking, chronic diseases, pathology, and preoperative spondylolisthesis grade.There were lower BMD (0.885 vs. 0.697, p = 0.002) and more 2-level fusion (24.3 vs. 46%, p = 0.038) in MIDLIF (Table 1).
As presented in Table 2, the operation time (196.2 vs. 166.1,p = 0.014) was shorted in MIDLIF.No significant differences in blood loss, change in hemoglobin, and hospital stay were observed.These two groups had similar time to fusion and one-year fusion rate (Table 2).
Complications
No significant difference was observed between the two groups regarding implant removal, revision, and adjacent segment disease.The facet joint violation (21.6% vs. 2%, p = 0.009) was higher in MIS TLIF (Table 3).Medial breaching was also higher in MIS TLIF, but the difference was not statistically significant.
Five patients experienced reoperation, four in the MIS TLIF group and one in the MIDLIF group.The duration until reoperation ranged from 2 days to 34.4 months.The indications of reoperation included symptomatic medial screw breaching, facet joint violation with nonunion, and screw head irritation.Two patients received revision at postoperative 2-day due to symptomatic screw breaching, one in each group.Three patients presented with chronic low back pain postoperatively in MIS TLIF.Two were diagnosed screw head irritation and one facet joint violation with nonunion.The symptoms were resolved after the operations (Table 4).
Discussion
Although both MIS TLIF and MIDLIF are common surgical approaches for spinal disorders, evidence of directly comparing MIS TLIF and MIDLIF is limited.In this study, there were no significant differences in one-year fusion rate, time to fusion, and improvement of spinal function between MIS TLIF and MIDLIF, except that MIDLIF provided better effect of pain relief than MIS TLIF at postoperative 3-month and one-year.Other complications were comparable.The MIS TLIF group had numerical higher incidence of implant removal, and revision than the MIDLIF group.
The higher postoperative pain score and incidence of implant removal in the MIS TLIF group may be related with MBN-induced back pain after spinal instrumentation [27,28].MBN lies between facet and transverse process [29,30] and is fixed by the strong fibers of mammillo-accessory ligament, which extends between the mammillary process and accessory process (Fig. 3A) [9].A cadaveric study by Regev et al. compared MBN injury after mini-open versus percutaneous pedicle screw insertion [8].MBN transection was observed in 84% of the pedicles when using mini-open technique and in 20% of the pedicles when the screw was placed via percutaneous approach (P < 0.01).When the MBN is transected or ablated during pedicle screw insertion, there would be less MBN-related postoperative pain.Conversely, when performing percutaneous screw insertion via traditional trajectory, the screw head is just beside the intact MBN.Thus, it might result in nerve impingement or irritation, contributing to postoperative back pain (Fig. 3B).In MIDLIF, soft tissues around the entry point are ablated, which may damage the MBN.In MIS TLIF, the MBN is relatively preserved due to percutaneous insertion technique.The difference in entry points and soft tissue preservation around entry points may lead to greater postoperative pain with MIS TLIF.
Our results revealed that both MIDLIF and MIS TLIF groups had one-year fusion rate of over 90%, which were comparable with previous reports [31][32][33].Several studies also observed higher fusion rate in MIDLIF than in MIS TLIF [31][32][33].The greater proportion of two-level fusion in the MIDLIF versus MIS TLIF group (40.7% vs. 24.3%)lead to lower one-year fusion rate and longer time to fusion but none of these were statistically significant.Most of previous studies focused on the patient undergoing one-level spinal fusion [32][33][34][35] or included only a few patient with two-level fusion [31].By contrast, this study included more patients with two-level fusion, which indicated more complicated nature of the patients.This study revealed similar blood loss between two groups.The operation time was statistically faster in the MIDLIF group even with a larger proportion of two-level fusion.Previous reports [32][33][34] also showed shorter operation time in MIDLIF.The narrow surgical field of view and high technical demands of MIS TLIF increase operation time, especially when resecting contralateral lesions.On the other hand, MIDLIF is performed via a posterior midline incision and bilateral lesions could be approached more easily.
Our study observed a trend towards a lower complication rate for MIDLIF compared to MIS TLIF, although the difference was not statistically significant.The study by Wu et al. revealed similar results that overall complication rate was lower in MIDLIF than in MIS TILF A meta-analysis published by Hu et al. indicated that no difference was found in VAS score when comparing MIDLIF and other posterior fusion technique [17].A study published by Wu et al. better VAS leg pain at post operative 6 months but no difference found at 1 year follow up [33].In our study, we revealed the same tendency and the better VAS score at post operative 3 months and 1 year were noted.
This study had limitations.The first one came from the retrospective study design.Potential selection bias and reporting bias could not be avoided.All patients were operated in the same hospital.The single-institutional results may not be applicable in other institutions, which limited the external validity of this study.Additionally, all patients were postoperatively followed for at least one year.However, some long-term complications, such as adjacent segment disease, might not be thoroughly observed.Furthermore, there were more two-level fusion done in MIDLIF, which indicated the severity of the patient was not evenly distributed and may produce bias.
In conclusion, this study observed comparable oneyear fusion rate, time to fusion, function improvement, and complications between the patients receiving MIS TLIF and MIDLIF.MIDLIF provided better pain relief at postoperative 3-months and one-year.Further largescale studies are warranted for identifying the patients who would benefit most from MIS TLIF and MIDLIF respectively.
Fig. 1
Fig. 1 Coronal view (A) and axial view (B) of CT showing L5 right facet joint violation in a patient undergoing MIS TLIF
Fig. 2
Fig. 2 Changes over time in VAS (A) and ODI (B) scores between patients undergoing MIS TLIF and those who underwent MIDLIF.*Asterisks indicated statistical significance between the two groups (P < 0.05)
Table 1
Demographics and baseline characteristics between the patients undergoing MIS TLIF and those who underwent MIDLIF
Table 2
Operative data and surgical outcomes between the patients undergoing MIS TLIF and those who underwent MIDLIF
Table 3
Complications between the patients undergoing MIS TLIF and those who underwent MIDLIF *Screw malposition, including breaching and facet joint violation, was determined by CT | 3,473.4 | 2024-05-09T00:00:00.000 | [
"Medicine",
"Engineering"
] |
Word Embedding for Response-To-Text Assessment of Evidence
Manually grading the Response to Text Assessment (RTA) is labor intensive. Therefore, an automatic method is being developed for scoring analytical writing when the RTA is administered in large numbers of classrooms. Our long-term goal is to also use this scoring method to provide formative feedback to students and teachers about students' writing quality. As a first step towards this goal, interpretable features for automatically scoring the evidence rubric of the RTA have been developed. In this paper, we present a simple but promising method for improving evidence scoring by employing the word embedding model. We evaluate our method on corpora of responses written by upper elementary students.
Introduction
In Correnti et al. (2013), it was noted that the 2010 Common Core State Standards emphasize the ability of young students from grades 4-8 to interpret and evaluate texts, construct logical arguments based on substantive claims, and marshal relevant evidence in support of these claims. Correnti et al. (2013) relatedly developed the Response to Text Assessment (RTA) for assessing students' analytic response-to-text writing skills. The RTA was designed to evaluate writing skills in Analysis, Evidence, Organization, Style, and MUGS (Mechanics, Usage, Grammar, and Spelling) dimensions. To both score the RTA and provide formative feedback to students and teachers at scale, an automated RTA scoring tool is now being developed (Rahimi et al., 2017). This paper focuses on the Evidence dimension of the RTA, which evaluates students' ability to find and use evidence from an article to support their position. Rahimi et al. (2014) previously developed a set of interpretable features for scoring the Evidence rubric of RTA. Although these features significantly improve over competitive baselines, the feature extraction approach is largely based on lexical matching and can be enhanced.
The contributions of this paper are as follows. First, we employ a new way of using the word embedding model to enhance the system of Rahimi et al. (2014). Second, we use word embeddings to deal with noisy data given the disparate writing skills of students at the upper elementary level.
In the following sections, we first present research on related topics, describe our corpora, and review the interpretable features developed by Rahimi et al. (2014). Next, we explain how we use the word embedding model for feature extraction to improve performance by addressing the limitations of prior work. Finally, we discuss the results of our experiments and present future plans.
Related Work
Most research studies in automated essay scoring have focused on holistic rubrics (Shermis and Burstein, 2003;Attali and Burstein, 2006). In contrast, our work focuses on evaluating a single dimension to obtain a rubric score for students' use of evidence from a source text to support their stated position. To evaluate the content of students' essays, Louis and Higgins (2010) presented a method to detect if an essay is off-topic. Xie et al. (2012) presented a method to evaluate content features by measuring the similarity between essays. Burstein et al. (2001) and Ong et al. (2014) both presented methods to use argumentation mining techniques to evaluate the students' use of evidence to support claims in persuasive essays. However, those studies are different from this work in that they did not measure how the essay uses material from the source article. Furthermore, young students find it difficult to use sophisticated argumentation structure in their essays. Rahimi et al. (2014) presented a set of interpretable rubric features that measure the relatedness between students' essays and a source article by extracting evidence from the students' essays. However, evidence from students' essays could not always be extracted by their word matching method. There are some potential solutions using the word embedding model. Rei and Cummins (2016) presented a method to evaluate topical relevance by estimating sentence similarity using weighted-embedding. Kenter and de Rijke (2015) evaluated short text similarity with word embedding. Kiela et al. (2015) developed specialized word embedding by employing external resources. However, none of these methods address highly noisy essays written by young students.
Data
Our response-to-text essay corpora were all collected from classrooms using the following procedure. The teacher first read aloud a text while students followed along with their copy. After the teacher explained some predefined vocabulary and discussed standardized questions at designated points, there is a prompt at the end of the text which asks students to write an essay in response to the prompt. Figure 1 shows the prompt of RT A M V P Two forms of the RTA have been developed, based on different articles that students read before writing essays in response to a prompt. The first form is RT A M V P and is based on an article from Time for Kids about the Millennium Villages Project, an effort by the United Nations to end poverty in a rural village in Sauri, Kenya. The other form is RT A Space , based on a developed article about the importance of space exploration. Below is a small excerpt from the RT A M V P article. Evidence from the text that expert human graders want to see in students' essays are in bold.
"Today, Yala Sub-District Hospital has medicine, free of charge, for all of the most common diseases. Water is connected to the hospital, which also has a generator for electricity. Bed nets are used in every sleeping site in Sauri." Two corpora of RT A M V P from lower and higher age groups were introduced in Correnti et al. (2013). One group included grades 4-6 (denoted by M V P L ), and the other group included grades 6-8 (denoted by M V P H ). The students in each age group represent different levels of writing proficiency. We also combined these two corpora to form a larger corpus, denoted by M V P ALL . The corpus of the RT A Space is collected only from students of grades 6-8 (denoted by Space).
Based on the rubric criterion shown in Table 2, the essays in each corpus were annotated by two raters on a scale of 1 to 4, from low to high. Raters are experts and trained undergraduates. Table 1 shows the distribution of Evidence scores from the first rater and the agreement (Kappa, and Quadratic Weighted Kappa) between two raters of the double-rated portion. All experiment performances will be measured by Quadratic Weighted Kappa between the score from prediction and the first rater. The reason to only use the score of the first rater is that the first rater graded more essays. Figure 1 shows an essay with a score of 3.
Rubric Features
Based on the rubric criterion for the evidence dimension, Rahimi et al. (2014) developed a set of interpretable features. By using this set of features, a predicting model can be trained for automated essay scoring in the evidence dimension. Provides pieces of evidence that are detailed and specific (SPC) Elaboration of Evidence Evidence may be listed in a sentence (CON) Evidence provided may be listed in a sentence, not expanded upon (CON) Attempts to elaborate upon evidence (CON) Evidence must be used to support key idea / inference(s)
Plagiarism
Summarize entire text or copies heavily from text (in these cases, the response automatically receives a 1) Table 2: Rubric for the Evidence dimension of RTA. The abbreviations in the parentheses identify the corresponding feature group discussed in the Rubric Features section of this paper that is aligned with that specific criteria (Rahimi et al., 2017).
Number of Pieces of Evidence (NPE):
A good essay should mention evidence from the article as much as possible. To extract the NPE feature, they manually craft a topic word list based on the article. Then, they use a simple window-based algorithm with a fixed size window to extract this feature. If a window contains at least two words from the topic list, they consider this window to contain evidence related to a topic. To avoid redundancy, each topic is only counted once. Words from the window and crafted list will only be considered a match if they are exactly the same. This feature is an integer to represent the number of topics that are mentioned by the essay. Concentration (CON): Rather than list all the topics in the essay, a good essay should explain each topic with details. The same topic word list and simple window-based algorithm are used for extracting the CON feature. An essay is concentrated if the essay has fewer than 3 sentences that mention at least one of the topic words. Therefore, this feature is a binary feature. The value is 1 if the essay is concentrated, otherwise it is 0.
Specificity (SPC): A good essay should use relevant examples as much as possible. For matching SPC feature, experts manually craft an example list based on the article. Each example belongs to one topic, and is an aspect of a specific detail about the topic. For each example, the same windowbased algorithm is used for matching. If the window contains at least two words from an example, they consider the window to mention this example. Therefore, the SPC feature is an integer vector. Each value in the vector represents how many examples in this topic were mentioned by the es-say. To avoid redundancy, each example is only to be counted at most one time. The length of the vector is the same as the number of categories of examples in the crafted list.
Word Count (WOC): The SPC feature can capture how many evidences were mentioned in the essay, but it cannot represent if these pieces of evidence support key ideas effectively. From previous work, we know longer essays tend to have higher scores. Thus, they use word count as a potentially helpful fallback feature. This feature is an integer.
Word Embedding Feature Extraction
Based on the results of Rahimi et al. (2014), the interpretable rubric-based features outperform competitive baselines. However, there are limitations in their feature extraction method. It cannot extract all examples mentioned by the essay due to the use of simple exact matching.
First, students use their own vocabularies other than words in the crafted list. For instance, some students use the word "power" instead of "electricity" from the crafted list.
Second, according to our corpora, students at the upper elementary level make spelling mistakes, and sometimes they make mistakes in the same way. For example, around 1 out of 10 students misspell "poverty" as "proverty" instead. Therefore, evidence with student spelling mistakes cannot be extracted. However, the evidence dimension of RTA does not penalize students for misspelling words. Rahimi et al. (2014) showed that manual spelling corrections indeed improves performance, but not significantly.
Prompt: The author provided one specific example of how the quality of life can be improved by the Millennium Villages Project in Sauri, Kenya. Based on the article, did the author provide a convincing argument that winning the fight against poverty is achievable in our lifetime? Explain why or why not with 3-4 examples from the text to support your answer.
Essay: In my opinion I think that they will achieve it in lifetime. During the years threw 2004 and 2008 they made progress. People didnt have the money to buy the stuff in 2004. The hospital was packed with patients and they didnt have alot of treatment in 2004. In 2008 it changed the hospital had medicine, free of charge, and for all the common dieases. Water was connected to the hospital and has a generator for electricity. Everybody has net in their site. The hunger crisis has been addressed with fertilizer and seeds, as well as the tools needed to maintain the food. The school has no fees and they serve lunch. To me thats sounds like it is going achieve it in the lifetime. Finally, tenses used by students can sometimes be different from that of the article. Although a stemming algorithm can solve this problem, sometimes there are words that slip through the process. For example, "went" is the past tense of "go", but stemming would miss this conjugation. Therefore, "go" and "went" would not be considered a match.
To address the limitations above, we introduced the Word2vec (the skip-gram (SG) and the continuous bag-of-words (CBOW)) word embedding model presented by Mikolov et al. (2013a) into the feature extraction process. By mapping words from the vocabulary to vectors of real numbers, the similarity between two words can be calculated. Words with high similarity can be considered a match. Because words in the same context tend to have similar meaning, they would therefore have higher similarity.
We use the word embedding model as a supplement to the original feature extraction process, and use the same searching window algorithm presented by Rahimi et al. (2014). If a word in a student's essay is not exactly the same as the word in the crafted list, the cosine similarity between these two words is calculated by the word embedding model. We consider them matching, if the similarity is higher than a threshold.
In Figure 1, the phrases in italics are examples extracted by the existing feature extraction method. For instance, "water was connected to the hospital" can be found because "water" and "hospital" are exactly the same as words in the crafted list. However, "for all the common dieases" cannot be found due to misspelling of "disease". Additional examples that can be extracted by the word embedding model are in bold.
Experimental Setup
We configure experiments to test several hypotheses: H1) the model with the word embedding trained on our own corpus will outperform or at least perform equally well as the baseline (denoted by Rubric) presented by Rahimi et al. (2014). H2) the model with the word embedding trained on our corpus will outperform or at least perform equally well as the model with off-the-shelf word embedding models. H3) the model with word embedding trained on our own corpus will generalize better across students of different ages. Note that while all models with word embeddings use the same features as the Rubric baseline, the feature extraction process was changed to allow non-exact matching via the word embeddings.
We stratify each corpus into 3 parts: 40% of the data are used for training the word embedding models; 20% of the data are used to select the best word embedding model and best threshold (this is the development set of our model); and another 40% of data are used for final testing. For word embedding model training, we also add essays not graded by the first rater (Space has 229, M V P L has 222, M V P H has 296, and M V P ALL has 518) to 40% of the data from the corpus in order to enlarge the training corpus to get better word embedding models. We train multiple word embedding models with different parameters, and select the best word embedding model by using the development set.
Two off-the-shelf word embeddings are used for comparison. Mikolov et al. (2013b) presented vectors that have 300 dimensions and were trained on a newspaper corpus of about 100 billion words. The other is presented by Baroni et al. (2014) and includes 400 dimensions, with the context window size of 5, 10 negative samples and subsampling. We use 10 runs of 10-fold cross validation in the final testing, with Random Forest (max-depth = 5) implemented in Weka (Witten et al., 2016) as the classifier. This is the setting used by Rahimi et al. (2014). Since our corpora are imbalanced with respect to the four evidence scores being predicted (Table 1), we use SMOTE oversampling method (Chawla et al., 2002). This involves creating "synthetic" examples for minority classes. We only oversample the training data. All experiment performances are measured by Quadratic Weighted Kappa (QWKappa).
Results and Discussion
We first examine H1. The results shown in Table 3 partially support this hypothesis. The skip-gram embedding yields a higher performance or performs equally well as the rubric baseline on most corpora, except for M V P H . The skip-gram embedding significantly improves performance for the lower grade corpus. Meanwhile, the skip-gram embedding is always significantly better than the continuous bag-of-words embedding.
Second, we examine H2. Again, the results shown in Table 3 partially support this hypothesis. The skip-gram embedding trained on our corpus outperform Baroni's embedding on Space and M V P L . While Baroni's embedding is significantly better than the skip-gram embedding on M V P H and M V P ALL .
Third, we examine H3, by training models from one corpus and testing it on 10 disjointed sets of the other test corpus. We do it 10 times and average the results in order to perform significance testing. The results shown in Table 4 support this hypothesis. The skip-gram word embedding model outperform all other models.
As we can see, the skip-gram embedding outperforms the continuous bag-of-words embedding in all experiments. One possible reason for this is that the skip-gram is better than the continuous bag-of-words for infrequent words (Mikolov et al., 2013b). In the continuous bagof-words, vectors from the context will be averaged before predicting the current word, while the skip-gram does not. Therefore, it remains a better representation for rare words. Most students tend to use words that appear directly from the article, and only a small portion of students introduce their own vocabularies into their essays. Therefore, the word embedding is good with infrequent words and tends to work well for our purposes.
In examining the performances of the two offthe-shelf word embeddings, Mikolov's embedding cannot help with our task, because it has less preprocessing of its training corpus. Therefore, the embedding is case sensitive and contains symbols and numbers. For example, it matches "2015" with "000". Furthermore, its training corpus comes from newspapers, which may contain more high-level English that students may not use, and professional writing has few to no spelling mistakes. Although Baroni's embedding also has no spelling mistakes, it was trained on a corpus containing more genres of writing and has more preprocessing. Thus, it is a better fit to our work compared to Mikolov's embedding.
In comparing the performance of the skip-gram embedding and Baroni's embedding, there are many differences. First, even though the skipgram embedding partially solves the tense problem, Baroni's embedding solves it better because it has a larger training corpus. Second, the larger training corpus contains no or significantly fewer spelling mistakes, and therefore it cannot solve the spelling problem at all. On the other hand, the skip-gram embedding solves the spelling problem better, because it was trained on our own corpus. For instance, it can match "proverty" with "poverty", while Baroni's embedding cannot. Third, the skip-gram embedding cannot address a vocabulary problem as well as the Baroni's embedding because of the small training corpus. Baroni's embedding matches "power" with "electricity", while the skip-gram embedding does not. Nevertheless, the skip-gram embedding still partially addresses this problem, for example, it matches "mosquitoes" with "malaria" due to relatedness. Last, Baroni's embedding was trained on a corpus that is thousands of times larger than our corpus. However, it does not address our problems significantly better than the skip-gram embedding due to generalization. In contrast, our task-dependent word embedding is only trained on a small corpus while outperforming or at least performing equally well as Baroni's embedding. Overall, the skip-gram embedding tends to find examples by implicit relations. For instance, "winning against poverty possible achievable lifetime" is an example from the article and in the meantime the prompt asks students "Did the author provide a convincing argument that winning the fight against poverty is achievable in our lifetime?". Consequently, students may mention this example by only answering "Yes, the author convinced me.". However, the skip-gram embedding can extract this implicit example.
Conclusion and Future Work
We have presented several simple but promising uses of the word embedding method that improve evidence scoring in corpora of responses to texts written by upper elementary students. In our results, a task-dependent word embedding model trained on our small corpus was the most helpful in improving the baseline model. However, the word embedding model still measures additional information that is not necessary in our work. Improving the word embedding model or the feature extraction process is thus our most likely future endeavor.
One potential improvement is re-defining the loss function of the word embedding model, since the word embedding measures not only the similarity between two words, but also the relatedness between them. However, our work is not helped by matching related words too much. For exam-ple, we want to match "poverty" with "proverty", while we do not want to match "water" with "electricity", even though students mention them together frequently. Therefore, we could limit this measurement by modifying the loss function of the word embedding. Kiela et al. (2015) presented a specialized word embedding by employing an external thesaurus list. However, it does not fit to our task, because the list contains high-level English words that will not be used by young students.
Another area for future investigation is improving the word embedding models trained on our corpus. Although they improved performance, they were trained on a corpus from one form of the RTA and tested on the same RTA. Thus, another possible improvement is generalizing the modelfrom one RTA to another RTA. | 4,981.2 | 2017-07-01T00:00:00.000 | [
"Computer Science"
] |
Maintenance of Cell Fate by the Polycomb Group Gene Sex Combs Extra Enables a Partial Epithelial Mesenchymal Transition in Drosophila
Epigenetic silencing by Polycomb group (PcG) complexes can promote epithelial-mesenchymal transition (EMT) and stemness and is associated with malignancy of solid cancers. Here we report a role for Drosophila PcG repression in a partial EMT event that occurs during wing disc eversion, an early event during metamorphosis. In a screen for genes required for eversion we identified the PcG genes Sex combs extra (Sce) and Sex combs midleg (Scm). Depletion of Sce or Scm resulted in internalized wings and thoracic clefts, and loss of Sce inhibited the EMT of the peripodial epithelium and basement membrane breakdown, ex vivo. Targeted DamID (TaDa) using Dam-Pol II showed that Sce knockdown caused a genomic transcriptional response consistent with a shift toward a more stable epithelial fate. Surprisingly only 17 genes were significantly upregulated in Sce-depleted cells, including Abd-B, abd-A, caudal, and nubbin. Each of these loci were enriched for Dam-Pc binding. Of the four genes, only Abd-B was robustly upregulated in cells lacking Sce expression. RNAi knockdown of all four genes could partly suppress the Sce RNAi eversion phenotype, though Abd-B had the strongest effect. Our results suggest that in the absence of continued PcG repression peripodial cells express genes such as Abd-B, which promote epithelial state and thereby disrupt eversion. Our results emphasize the important role that PcG suppression can play in maintaining cell states required for morphogenetic events throughout development and suggest that PcG repression of Hox genes may affect epithelial traits that could contribute to metastasis.
them to invade the overlying larval epidermis, creating perforations that coalesce and allow the wing discs to be externalized, and subsequently lead the epithelial migration that results in thorax closure. Failure of any these events can disrupt eversion leading to loss of thoracic tissue and midline clefts, and disruptions to the wings, including internalization, mis-positioning and reduction in size (Martín-Blanco et al. 2000;Pastor-Pareja et al. 2004;Ishimaru et al. 2004;Srivastava et al. 2007;Manhire-Heath et al. 2013).
To find EMT factors, we conducted an RNAi screen in which the Ubx-GAL4 driver, which expresses strongly in peripodial cells, was used to knockdown genes during third-instar larval development, and adult flies (both eclosed and pharate), were scored for eversion defects (Golenkina et al. 2021). This screen identified Netrin-A (NetA) as a key regulator of the peripodial EMT (Manhire-Heath et al. 2013). NetA facilitates the breakdown of the adherens junctions of the peripodial epithelium (PE) via downregulation of its receptor Frazzled.
Here we present our analysis of another gene identified in this screen, the Polycomb Group (PcG) gene: Sex combs extra (Sce). Sce is a Drosophila ortholog of vertebrate RING1, an E3 ubiquitin-ligase that monoubiquitinates H2A at K118 leading to chromatin compaction (Fritsch et al. 2003;Gorfinkiel et al. 2004). In Drosophila, PcG genes are well-known for their role in maintaining the patterns of Hox gene expression that are established during embryogenesis (Beuchle et al. 2001) but have not previously been associated with regulation of epithelial plasticity. In humans the PcG components EZH2 and Bmi1 have been linked with increased EMT and metastasis in cancer (Kleer et al. 2003;Wu and Yang 2011;Tong et al. 2012) as well as EMT during endometriosis (Zhang, Dong, et al. 2017). EZH2 forms a complex with Snail and HDAC1/HDAC2 to repress E-Cadherin expression (Cao et al. 2008;Tong et al. 2012), while Bmi1 cooperates with Twist to again silence E-Cadherin expression as well as the tumor suppressor p16INK4A (Yang et al. 2010;Wu and Yang 2011).
Here we show that loss of Sce results in a general failure of the wing disc to undergo the partial EMT of the PE, with effects on both the breakdown of zonula adherens (ZA) and basement membrane (BM). DamID transcriptional profiling revealed that Sce knockdown resulted in de-repression of the well-established PcG target genes abd-A and Abd-B along with a small group of other genes, which together comprise a strong epithelial signature. We found that Abd-B was upregulated in cells lacking Sce and RNAi knockdown of Abd-B was able to substantially repress the Sce RNAi phenotypes. Misregulation of Abd-B is clearly only partly responsible for the Sce phenotypes, however, as knockdown of other genes was also able to rescue to some extent, and ectopic expression of Abd-B, while having potent effects on epithelial morphology, did not, itself, recapitulate the Sce.IR phenotypes. Our results suggest that PcG activity in peripodial cells is required to keep them in a cell state that is competent to undergo the pEMT required for successful eversion. Loss of PcG repression causes a general shift in gene expression toward a more epithelial state, which inhibits eversion.
Targeted DamID
The Targeted DamID protocol was as described (Marshall and Brand 2017), with minor alterations. For each replicate of each genotype, 30 wing discs were dissected from wandering third instar larvae in 1xPBS, pooled, excess PBS removed, and then frozen at -80°until required. Tissue was processed using a Qiagen DNeasy Kit. For the Dam-Pol II experiments, tissue from the freezer was thawed, 40ul of 500mM EDTA, 180ul of ATL buffer, and 20ul Proteinase K added, mixed gently and incubated for 56°overnight, cooled to RT and 20ul of RNAase (12.5ul/ul) added and incubated for 2 min 400ul of a 1:1 mix of Buffer AL and 100% ethanol was added and mixed gently, before processing the solution through the DNeasy kit spin columns. The genomic DNA was then digested overnight with DpnI, cleaned up with a Qiagen PCR Purification kit, and DamID Adapters blunt ligated with T4 ligase, digested again with DpnII, and then adapterligated fragments PCR amplified using DamID primers and Advantage PCR kit DNA polymerase (Clontech). Adapters were then removed with AlwI digestion, and final DNA fragments processed by the Melbourne Australian Genome Research Facility with a shotgun library prep protocol and 100bp single end reads generated on an Illumina HiSeq machine. For the Dam-Polycomb experiment, wing discs were prepared in the same way, though MyTaq polymerase (Bioline) was used for amplification, a TruSeq Nano Low throughput kit (Illumina) was used for library preparation and 86 base single-end reads were obtained on an Illumina MiSeq. damidseq_pipeline, genome visualization and statistical analysis Sequencing data for Targeted DamID were mapped to release 6.03 of the Drosophila genome using damidseq_pipeline (Marshall and Brand 2015). Transcribed genes (defined by Pol II occupancy) were identified using a Perl script described in (Mundorf et al. 2019) based on one developed by (Southall et al. 2013) (available at https:// github.com/tonysouthall/Dam-RNA_POLII_analysis). Drosophila genome annotation release 6.03 was used, with a 1% threshold. To compare data sets, log2 ratios were subtracted, in this case, producing 2 replicate comparison files (as 2 biological replicates were performed). These data were then analyzed as described above to identify genes with significantly different Pol II occupancy. Due to the presence of negative log2 ratios in DamID experiments, these genes were filtered to check that any significantly enriched genes were also bound by Pol II in the experiment of interest (numerator data set). A gene list was generated from the transcript data using the values from the associated transcript with the most significant FDR.
Replicate bedgraph files for each genotype were scaled by dividing each dataset by its standard deviation and averaged to create the profiles shown in Figures 2 and Fig. S3 which were visualized using pyGenomeTracks (Ramírez et al. 2018). Gene Ontology enrichment analysis was carried out using Flymine (Lyne et al. 2007). For the Dam-Pc vs. Dam-Pol II analysis, log2 ratios were first scaled by the standard deviation and averaged, and then filtered to only include genes with significant occupancy in the Dam-Pc control, significant occupancy of Dam-Pol II in both genotypes, and with Dam-Pc occupancy .1 in control and below one in the Sce.IR discs.
Statistics
Fisher's exact test (two-tailed) was used for comparison of proportions of categories in disc culture and adult eversion tests. All 95% confidence intervals were calculated using the Wilson score method with no continuity correction.
Data availability
Reagents generated in this study are available on request. Figure S1 shows a validation of Sce knockdown. Figure S2 shows the peripodial driver expression patterns. Figure S3 shows that JNK activation and Fra expression are unaffected in Sce.IR discs. Figure S4 shows the Sce.IR derepression loci. Supplementary Data File1 shows gene lists showing Targeted DamID comparison of RNApol2 using the Ubx-GAL4 driver in third instar wing discs, with and without Sex Combs Extra RNAi. Transcriptome files generated in this study have been uploaded to the Gene Expression Omnibus (Edgar et al. 2002), Reference Series GSE153905. Supplemental material available at figshare: https://doi.org/10.25387/g3.12606437.
RESULTS
Polycomb group gene expression in the peripodial epithelium is required for wing disc eversion To find genes required for the peripodial EMT the Ubx-GAL4 driver was crossed to UAS-RNAi lines and pharate or eclosed adult flies screened for eversion defects. Phenotypes were categorized in increasing level of severity ( Figure 1) as: i. malformed wing; the thorax is normal but one or both wings are affected in some way such as being smaller, mispositioned, or crumpled ( Figure 1B). ii. thoracic cleft: both wings everted but a gap remaining in the middle of the thorax ( Figure 1C); iii. single-eversion failure: one wing failed to evert, resulting in an adult lacking half a thorax ( Figure 1D); iv. double-eversion failure: neither wing everted and thoracic tissue missing ( Figure 1E); v. early pupal lethal: adult structures such as wings, legs and head not discernible ( Figure 1F).
As expected, knockdown of genes known to play a role in eversion such as components of the JNK (fos, slpr) and TGFb pathways (dpp, punt, Mad) generated eversion phenotypes (data not shown) as did NetA and NetB as previously described (Manhire-Heath et al. 2013). Two other genes with highly penetrant, and phenotypically severe, eversion defects were the PcG genes, Sex combs extra (Sce) and Sex combs midleg (Scm). Knockdown of these genes had similarly strong effects. RNAi to Sce using UAS-Sce.IR B31612 resulted in a high proportion of single and double eversion failure (18.6%, n = 113) and crumpled wings (16.8%) (Table 1; Figure 1H). Similarly, knockdown of Scm with UAS-Scm.IR B31614 produced high levels of single and double eversion failure (80.5%, n = 41; Table 1; Figure 1). For further analysis we focused our attention on Sce.
To check for off-target effects, two other RNAi lines for Sce were tested: UAS-Sce.IR V106328 and UAS-Sce.IR V27465 . At 29°these also produced eversion defects, though in one case (UAS-Sce.IR V106328 ) the primary phenotype was early lethality (86.1%, n = 79; Table 1; Figure 1). However, subsequent tests using a temperature shift regime to restrict knockdown to a tighter developmental window, also produced a high proportion of double-eversion failures for this RNAi line (see below), suggesting that the early lethality was due to a stronger n■ Weak phenotype = malformed wing; Strong phenotype = thoracic clefts, single, and double eversion failure (see Table 1 n■ RNAi effect. Occasional eversion defects could also be generated by creating random Sce KO mutant clones using the MARCM technique (Lee and Luo 2001) ( Figure 1G). Immunostaining confirmed that Sce was expressed ubiquitously throughout the wing disc, including the peripodial epithelium, was predominantly nuclear, and appeared relatively constant between third instar and white prepupal stages (Fig. S1A, E). As expected there was a marked reduction of Sce levels in Ubx . Sce.IR V106328 peripodial cells (Fig. S1C'').
We next wished to see if Sce RNAi knockdown using other peripodial GAL4 drivers could also disrupt eversion. The PE has genetically distinct subdomains and different drivers express in different regions. The Ubx-GAL4 driver has a broad expression domain throughout the central area of the PE but posterior to the anterior/posterior border, while the odd-GAL4 driver expresses in the medial anterior cells, and the puc-GAL4 driver, a reporter for JNKactivation, expresses strongly in peripodial cells nearest the stalk region (Pastor-Pareja et al. 2004;Tripura et al. 2011;Aldaz et al. 2013) ( Fig. S2). Knockdown of Sce with both odd-GAL4 and puc-GAL4 produced eversion failures though the penetrance was less than for Ubx-GAL4 (Table 1; Figure 1H).
Note that although Ubx is part of the bithorax complex along with abd-A and Abd-B, and that region is known to be regulated by PcG repression, our TaDa expression profiling showed that the Ubx locus was not affected by loss of Sce (see below) making it unlikely the Ubx-GAL4 driver was itself being affected by loss of PcG repression.
Taken together these results show that Sce is required for eversion and suggest that target genes of PcG repression must remain repressed for successful eversion to occur.
Sce RNAi affects the partial EMT of the wing discs Since eversion is a complex multi-step process it can be affected at several stages: the initial apposition of the wing disc to the body wall, the degradation of the BM, the pEMT of the PE, the invasion of the epidermis, or the subsequent epithelial migration (Pastor-Pareja et al. 2004). Previously, we and others have found that the first steps of eversion, the pEMT and BM breakdown, can occur when discs are cultured in the presence of ecdysone (Milner 1977;Aldaz et al. 2010;Manhire-Heath et al. 2013). This provides an opportunity to determine if eversion failures are due to those early events, or later stages of the process. At 29°, eversion typically begins after 6-7 hr of culturing and is complete by 9-10 hr. To obtain an overall readout of eversion success we cultured discs for .16 hr, a period long enough to ensure complete eversion. Under these conditions we have found discs fall into three categories (Golenkina et al. 2021): i. successfully everted. discs that have flattened, wing-like morphologies and the PE forms a disorganised clump; ii. partially everted. discs show evidence of breakdown of the PE but have not flattened out, iii. uneverted. discs show no evidence of PE and BM breakdown although the DP may have undergone some bending.
Next, we looked at discs after 7 hr of culturing, which, at 29°, is a time when most discs are initiating epithelial dissociation by dismantling their AJs and are breaking down their BM. Discs were fixed and immunostained for E-Cadherin, Rhodamine-Phalloidin, and anti-Laminin to label AJs, F-Actin and BMs, respectively ( Figure 1J-M). The 7hr results were consistent with overnight eversion. In control discs only 9.1% (n = 66) of discs showed an intact AJs compared to 52% (n = 50) in Sce.IR discs (P = 0.0001)the remaining discs showing either a loss of AJs or small to large perforations in the PE ( Figure 1N). Similarly, the proportion of discs with an intact BM was doubled from 32.8% of control discs to 58% of Sce.IR discs (P = 0.0085) ( Figure 1O). Thus, there was overall inhibition of these processes in Sce.IR discs but no other obvious qualitative differences were detected.
Next, we tested whether two other key events in wing eversion were affected by loss of Sce: activation of the JNK pathway Figure 4 abd-A is partially repressed by Sce. (A-F) Third instar wing discs stained for abd-B and E-Cadh. In control discs (A-B) there is no nuclear expression of abd-B though some cytoplasmic staining in PE cells was apparent. (C-D) Ubx . Sce.IR discs, appeared the same, though the cytoplasmic staining appeared somewhat stronger. (E-F) In Sce MARCM discs there was clearly some nuclear expression of abd-A in some clones (E, E', F, F', arrows) though this was of varying strength within a clone (F', arrow), and some clones showed no expression (F', arrowhead).
Figure 5
Knockdown of de-repressed loci substantially represses Sce.IR eversion phenotypes. Effects on adult eversion failure when Ubx-GAL4 knockdown of Sce is accompanied by expression of the indicated UAS RNAi lines, or UAS-GFP control. Co-expression of GFP does not significantly decrease the rates of eversion failure in Ubx . Sce.IR discs, but coexpression of UAS RNAi lines for Abd-B, abd-A, cad, and nub all repress eversion failure. Ubx-GAL4 expression of Abd-Bm produces a high proportion of weak phenotypes in which the thorax is normal, but wings are deformed or mispositioned (56%, n = 222). Expression of abd-A has no effect (n = 84). Error bars = 95% confidence interval (Wilson score method). (Martín-Blanco et al. 2000;Pastor-Pareja et al. 2004;Srivastava et al. 2007), and downregulation of the Netrin receptor Frazzled (Manhire-Heath et al. 2013). However, expression of the JNK reporter Tre-RFP and Frazzled appeared normal (Fig. S3) suggesting that whatever genes were being misregulated, they were not involved in these pathways.
Targeted DamID identifies de-repression of a small set of genes
To find which genes were affected we used Targeted DAMID (TaDa) with Dam-Pol II (Southall et al. 2013) to examine the change in transcriptional profile when Sce was knocked down. UAS-mCherry-Dam-Pol II and UAS-mCherry-Dam were expressed in control and Sce.IR discs and the ratios between Dam-Pol II and Dam profiles calculated (see Materials and Methods). Reproducibility between replicates was good with pair-wise Pearson correlation coefficients for GATC values over the genome between replicates ranging from 0.52-0.7 for control discs and 0.59-0.68 for Sce.IR discs.
We determined the list of genes that were significantly expressed in both genotypes (FDR , 0.01; see Materials and Methods) and those whose expression was significantly increased or decreased in Sce.IR discs compared to control discs. 17 genes were significantly increased in Sce.IR discs (hereafter "de-repressed") (Table 2; Figure 2; Fig. S4; Supp. File 1). This list included the well-known PcG targets abd-A and Abd-B. 110 genes showed a significant reduction in expression, with a fold-change of .1.3 (Supp. File 1).
We performed Gene Ontology enrichment analysis on the lists of significantly changed genes, and on significantly expressed genes in the two genotypes (see Materials and Methods). For the 17 derepressed genes, the most significant terms for biological function are "epithelium development" (10/17 genes; P = 0.001178; Holm-Bonferroni correction used for all enrichment analysis; Table 2) and "anatomical structure morphogenesis" (12/17 genes; P = 4.3e-4). There is also significant enrichment of genes with molecular function of transcription factors (8/17 genes; P = 4.37e-4), seven of which contain homeodomains. In contrast, for genes whose expression significantly decreased in Sce.IR discs there is no GO Term enrichment in any category.
Similar results were obtained when analysis was expanded to the entire set of significantly expressed genes in the two genotypes. The most strongly enriched biological function in Sce.IR discs is "epithelium development" (333/2045 genes; P = 1.84e-45), whereas for control discs it is "cellular-metabolic-process" (917/1898 genes; P = 2.8e-11). In Sce.IR discs there is also an enrichment of "cellular component" for cell junction proteins (62 genes; P = 2.0e-11) and of "molecular function" for actin binding (51 genes; P = 3.1e-7) consistent with cellular changes impacting upon pEMT processes.
Since direct targets of the PcG complexes would be expected to have increased expression we focused our attention on the 17 derepressed genes. Changes in expression levels for these genes, averaged across the whole gene locus were relatively modest, ranging from 0.61 to 0.047 log 2 (i.e., fold-change of 1.5 to 1.03) averaged over the gene locus.
To confirm that these genes corresponded to regions of PcG repression we again used TaDa to examine the binding profile of the PcG component, Polycomb using UAS-myr-GFP-Dam-Polycomb and a UAS-myr-GFP-Dam control (Materials and Methods). The Dam-Pc ratio profile exhibited the expected genomic patterns of Polycomb binding for known PcG target areas, such as the engrailed/ invected and the bithorax regions (Tolhuis et al. 2006) indicating that the method had worked. For each of the 17 genes we then calculated the average level of Pc-binding in control discs (Fig. S4B). The genes with the most significant fold-change in Sce.IR discs vs. controls (FDR , 1e-4) (Figure 2) also tended to have higher levels of Pc-binding (Fig. S4). We also examined the Dam-Pc profile in Sce.IR discs but found the pattern of binding largely unchanged from control discs though the average ratio levels across the genome were reduced (Figure 2; Fig. S4; and data not shown).
Thus, the loss of Sce has resulted in increased expression of a small number of genes in PcG-repression regions, and this is accompanied by a genome-wide change in genes from those associated with cellular metabolism to those involved in epithelial development, consistent with an inhibition of the PE pEMT.
Abd-B is upregulated in the peripodial epithelium of Sce.IR discs and required for eversion failure Based on the expression profiles of the de-repressed genes, we conducted further tests on four of the genes that had a distinct change in expression profile and higher levels of Pc-binding: abd-A, Abd-B, cad and nub.
We first used immunostaining to determine if any of the four genes showed significant upregulation in the PE of Sce.IR discs. Of the four genes, only Abd-B showed a clear change in expression in PE cells with nuclear staining apparent in the Sce.IR discs but not in control discs ( Figure 3A-F). We further confirmed that loss of Sce was responsible for Abd-B upregulation by examining MARCM clones for the null allele Sce KO . Clones in both the PE and DP showed clear upregulation of Abd-B ( Figure 3G-I). In addition, there was a morphological change in both PE and DP clones in that they showed a "segregation-phenotype" whereby they became more rounded and developed furrowing/invagination at the borders with wild type cells as previously reported for several PcG genes (Beuchle et al. 2001;Fritsch et al. 2003;Gandille et al. 2010;Curt et al. 2013).
n■ Although no obvious change in abd-A, Nub or Cad expression/ localization was seen in Ubx . Sce.IR PE cells, a subset of Sce KO MARCM clones also showed clear upregulation of abd-A, though the levels were variable (Figure 4). We speculate that while Abd-B is directly controlled by PcG complexes, abd-A faces more complex regulation and may be being suppressed by Abd-B and/or the noncoding RNA mir-iab-8 which is also located in the de-repressed region between abd-A and Abd-B. In the case of Nub and Cad there was no nuclear expression though we cannot discount the possibility of a mild increase in cytoplasmic signal.
Next, we tested whether RNAi knockdown of any of the four genes could suppress the Sce.IR phenotypes. We utilized the Sce.IR V106328 RNAi line but used a temperature shift regime to restrict the period of GAL4 expression to third instar stages, thereby avoiding the excessive early pupal lethality. Two independent RNAi lines were used for each gene ( Figure 5). Knockdown of any of the four genes was able to partly rescue the defects while co-expression of an arbitrary UAS construct, UAS-GFP, had no effect (normal progeny = 4.8%, n = 165, P = 0.65). Of the four genes loss of Abd-B had the strongest effect increasing the proportion of normal eversion from 4% in Sce.IR discs (n = 379) to 47.15% in Sce.IR;Abd-B.IR discs (n = 397, Figure 5; Table 3; P , 0.0001). The results suggest that the inhibition of eversion may not be due to any one of these genes, but rather to a genome-wide change in transcriptional profile toward an epithelial state. The other implication is that the maintenance of epithelial/BM integrity in Sce.IR discs is relatively unstable, since knockdown of any of the four PcG targets was enough to substantially restore successful eversion.
Finally, we tested whether over-expression of either of the two genes with strongest rescue, Abd-B and abd-A, could phenocopy loss of Sce. Ubx-GAL4-driven expression of Abd-B in the PE did not block eversion, though a high proportion of adults had reduced/misplaced wings ( Figures 5, 6A-C). Clonal expression of Abd-B did, however, recapitulate the epithelial invagination/segregation phenotype, as has previously been described (Gandille et al. 2010) (Figure 6F-I). Ubx-GAL4-driver expression of abd-A had no effect, however clonal expression of abd-A also created invaginations suggesting that this phenotype is a conserved ability of Hox genes to regulate epithelial morphology (data not shown). Since sole expression of Abd-B was not able to recapitulate the Sce.IR phenotypes we conclude that while the epithelial morphology changes induced by Abd-B, and to a lesser extent, abd-A, may contribute to eversion failure, they are not sufficient.
Overall, our results imply that the eversion failure of Sce.IR discs is due to a genome-wide change in gene expression toward an epithelial state, and that Abd-B likely plays the major role in this change.
DISCUSSION
We have uncovered a new role for PcG repression during Drosophila development: maintenance of the state of peripodial cells such that they are able to undergo the partial EMT that allows eversion to proceed. Loss of Sce leads to de-repression of a small number of target genes and an overall shift in gene expression toward a cell-state associated with "epithelial development", and hence eversion is impeded. Thus, PcG repression is not only crucial for maintaining segmental identity but also for maintaining cells in a state of readiness for the epithelial plasticity events that occur later during development and which are necessary for successful eversion.
Our TaDa analysis of Dam-Pol II binding identified a surprisingly small number of genes that were upregulated in Sce.IR discs. Only 17 genes had an FDR , 0.01 and two of these were the known PcG targets, abd-A and Abd-B. Using Dam-Pc we confirmed that, for most of these genes, their loci corresponded to Polycomb binding regions of the genome.
In contrast there were 110 genes that were significantly downregulated in Sce.IR discs but these showed no GO-term enrichments and did not include well-known Drosophila EMT regulators, such as Snail and Serpent. However, one gene that is linked to EMT in mammals, and was among the most significantly reduced genes, was the lipid raft protein Flotillin-1 (Flo1). In Drosophila Flo1 has been shown to regulate collagen turnover (Lee et al. 2014) which could well promote the eversion process. In mammals Flotillins are more strongly linked to EMT, where they promote endocytosis and turnover of both cell adhesion molecules and ECM proteins and promote cancer metastasis (Gauthier-Rouvière et al. 2020). Interestingly, the Drosophila paralog Flo2, is also upregulated during wound healing (Juarez et al. 2011), a cellular event with many parallels to thorax closure, including the involvement of Src42A and the JNK pathway. It will be of great interest, therefore, to explore the role of the two Flotillins in the eversion process.
We focused our attention on four of the genes with a clear change in Dam-Pol II profile and tested whether RNAi knockdown could repress the eversion defects of Ubx . Sce.IR. Surprisingly, we found that all had a significant effect on rescue, though the knockdown of Abd-B was the most significant. It is possible that co-expression of multiple UAS lines might result in a reduction in the strength of the UAS-Sce.IR phenotype, simply due to competition for GAL4. However, we found no effect of combined expression of UAS-GFP. We speculate that PE breakdown and the eversion process as a whole, are "threshold events" that tend to proceed to completion once begunlike a membrane tearing. In a genotype such as Sce.IR, where eversion is failing about half the time, the PE is presumably poised at that critical thresholdsuch that a small change in gene expression can have a large effect. Other dominant modifier tests we have conducted involving eversion have shown a similar sensitivity to genetic perturbation (data not shown). Although the expression of these genes was clearly important in blocking eversion, over-expression of Abd-B and abd-A on their own, was unable to recapitulate the eversion blockage, suggesting that it is the combined expression that produces a cell state necessary to inhibit the pEMT and BM breakdown.
Others have shown previously that loss of various PcG genes in wing discs results in ectopic expression of Ubx, Abd-B and Cad, and epithelial morphogenesis changes (Beuchle et al. 2001;Fritsch et al. 2003;Gandille et al. 2010;Curt et al. 2013). Interestingly, the results of this study for Sce and Scm clones (Beuchle et al. 2001) was that only Ubx and Abd-B were expressed in the time-window used. Our results agree with these in that we saw Abd-B upregulation, occasional abd-A upregulation but no Caudal. We did not look at Ubx protein expression in disc-proper cells. Abd-B expression was the clearest effect of loss of Sce and could induce clear morphological changes on epithelial cells. Abd-B plays a well characterized role in the formation of posterior spiracles in the embryo, and this also involves invagination of epithelial tissue. In that case a small downstream regulatory network has been established involving the four immediate target genes, cut, spalt, upd1, and ems, as well as crumbs, Gef64C and five cadherins (Lovegrove et al. 2006). None of these genes showed significant upregulation in Sce.IR discs, however, suggesting that there may exist other Abd-B targets that affect epithelial plasticity.
The importance of PcG repression of Abd-B has also been seen in the context of testes development and the closure of the tergites. PcG repression of Abd-B in cyst stem cells of the testes is critical for normal cell fate identity and self-renewal of the stem cells (Zhang, Pan, et al. 2017). Mutation of regulatory elements the Boundary Elements and Polycomb Response Elements can also cause increased and ectopic expression of Abd-B that results in dorsal closure defects in the adult abdominal epithelium (Singh and Mishra 2015).
While Abd-B was always derepressed in cells lacking Sce (i.e., Sce.IR and Sce KO mutant cells) abd-A was intermittently and variably expressed. We speculate that this may be a manifestation of the posterior dominance rule, whereby expression of Abd-B expression can repress abd-A (Karch et al. 1990;Macías et al. 1990;Sánchez-Herrero 1991). It is also possible that abd-A is being regulated by the non-coding RNA mir-iab-8 (Gummalla et al. 2012) since it is also located in the region of increased Dam-Pol II binding.
In conclusion, we have demonstrated a new role for PcG repression in maintaining cell competency for a developmental EMT event and shown that silencing of abd-A and Abd-B is crucial in this process. An important question now is what downstream targets of Abd-B and abd-A, and perhaps other TFs like Caudal and Nubbin, are inhibiting the pEMT and are these gene-regulatory interactions conserved in mammals. Based on the effects of EZH2 and Bmi1 on E-Cadherin, we expected increased expression at the shg locus in the Sce.IR discs, but this was not seen. Mammalian Hox genes control many processes involving epithelial plasticity such as cancer metastasis, wound healing and angiogenesis, but they can have both positive and negative effects (Abate-Shen 2002; Kachgal et al. 2012). For example, HOXB9 promotes differentiation and mesenchymal-epithelial transition, while inhibiting migration and invasion, in both colon adenocarcinoma (Zhan et al. 2014) and gastric carcinoma cells (Chang et al. 2015). Conversely, other studies have found the same gene is overexpressed in breast carcinoma cells and correlates with high tumor grade (Hayashida et al. 2010) and overexpression in colon cancer cells promotes metastasis and poor prognosis (Huang et al. 2014). Thus, understanding how epithelial plasticity is regulated by Hox genes is likely to be complex and context dependent, but remains an important future goal. | 7,243.6 | 2020-10-13T00:00:00.000 | [
"Biology"
] |
Perceptions of registered nurses on facilitators and barriers of implementing the AI-IoT-based healthcare pilot project for older adults during the COVID-19 pandemic in South Korea
Objective This study explored the perceptions of registered nurses on the facilitators and barriers to implementing an AI/IoT (Artificial Intelligence/Internet of Things)-based healthcare pilot project, designed to prevent frailty and improve health behaviors by providing Bluetooth-enabled smart devices (including blood pressure and blood glucose meters) for the older adults aged over 65 years and above in South Korea. Methods Using a qualitative descriptive methodology, interviews and qualitative surveys were conducted with 15 registered nurses from 11 public health centers. Data were analyzed using qualitative content analysis. Results The study found that the AI·IoT-based healthcare pilot project was well received by participants, leading to increased client satisfaction and improved health behaviors. Government support and funding were crucial facilitators of project implementation. However, technical challenges and disparities in digital literacy among older adults pose significant barriers. Conclusion The findings highlight the potential of AI·IoT technologies in improving the healthcare of older adults. Efforts to address technological challenges and enhance digital literacy among vulnerable populations are necessary for successfully implementing such interventions. Government support and ongoing training for healthcare professionals can help optimize the AI·IoT-based healthcare services for older adults.
Introduction
South Korea's rapidly aging population and the ongoing COVID-19 pandemic have highlighted the need for innovative healthcare delivery methods (1).Among the approaches explored, the Artificial Intelligence/Internet of Things-based healthcare Pilot Project (AI•IoT-PP) launched by the Korean government in the latter half of 2020 stands out (1,2).The AI•IoT-PP, designed to prevent frailty and improve health behaviors, provides Bluetooth-enabled smart devices such as blood pressure meters, blood glucose meters, smart scales, activity-tracking Bluetooth pedometers, and AI speakers to older adults aged 65 years and above (1).These devices are incorporated into the country's home-visiting healthcare service, operated by registered nurses from public health centers (RN-PHC), to offer contactless public health services through a mobile health application (mHealth app).
The AI•IoT-PP initiative offers an integrated approach to remote healthcare consultations for older adults.In this method, participants undergo an initial health screening, after which they are categorized into different groups: healthy, high-risk frail, and frail.These categories determine the level and frequency of non-face-to-face health consultations.Each participant, based on their health conditions, is assigned specific health goals, fostering a proactive approach towards health maintenance.The Today's Health app, pivotal to the AI•IoT-PP, allows for efficient data sharing between the AI•IoT devices and RN-PHCs, streamlining communication and collaboration.
The implementation of AI•IoT-PP is crucial in an era where faceto-face services have become limited due to the pandemic, particularly for vulnerable groups, including older adults, those from low-income families, and individuals with health problems (1,3,4).These challenges in providing public health services to vulnerable community members have underlined the importance of advancements in mHealth technologies (5).Notably, mHealth technologies can facilitate improved self-management, efficient home healthcare services, and enhanced communication and collaboration among older adults with chronic diseases (6).
The current proliferation of digital health initiatives, including AI•IoT-PP, presents new opportunities but also significant challenges, particularly in the context of an aging society and amidst public health emergencies like the COVID-19 pandemic (3,6).While several studies have focused on the potential of digital health technologies, comprehensive research that specifically addresses the facilitators and barriers experienced by healthcare professionals in implementing these technologies remains scarce.Particularly in the context of home-visiting healthcare services, understanding these challenges is critical to the successful integration of AI•IoT technologies.Therefore, this study contributes to the existing body of knowledge by providing an in-depth analysis of RN-PHC's experiences and identifies strategies for overcoming barriers and leveraging facilitators.The findings of this study have the potential to guide future healthcare policies and strategies for the implementation of digital health technologies, ultimately benefiting vulnerable groups and enhancing the overall quality of healthcare services.
In light of these considerations, this study aims to investigate the facilitators and barriers encountered by RN-PHC during the implementation of AI•IoT-PP in 2021.The primary focus is on understanding the implications of incorporating this new technology into home-visiting healthcare services.This study seeks to identify key facilitator and barrier domains and provide recommendations for improving the delivery and dissemination of AI•IoT healthcare services to older adults in public health.The study's implementationfocused research question is: What are the facilitators and barriers related to AI•IoT-PP, as experienced by RN-PHC?
Design
This study adopted a qualitative descriptive method (7,8) to explore the perceptions of RN-PHC on their experience of facilitators and barriers while implementing AI•IoT-PP during the initial phase of the COVID-19 pandemic.The research employed a qualitative descriptive approach, as suggested by Sandelowski (7,8), to gather and document narratives from managers.This method prioritizes proximity to the original data by employing straightforward sorting and coding techniques.
Participants and setting
The AI•IoT-PP, is a pilot initiative funded by the government of South Korea.This program involved 24 out of 256 public health centers across eight out of 17 cities and provinces, nationwide.The selection of participating public health centers for this government-run pilot health project was made after considering the project's requisites and appropriateness within the predetermined budget constraints.The study focused on RN-PHC with a minimum of 2 years of experience in home-visiting healthcare services, and who had also engaged in the AI•IoT-PP for over 3 months.The study utilized a sample of 15 participants from 11 public health centers, with one to three participants per center (Table 1).To qualify for inclusion in the study, participants had to satisfy two criteria: (1) active involvement as a service provider in the AI•IoT-PP within the participating public health centers for over 3 months and (2) consented to permit the use of their data for the research objectives.The study did not define any specific exclusion criteria.Ethical approval was obtained from Ajou University medical center (AJOUIRB-SUR-2021-330).
Procedures and data collection
Data was gathered through a combination of individual interviews and qualitative surveys.Participants were given the option to select either a qualitative interview or a qualitative survey, taking into consideration their personal circumstances.The interview guide was developed through an extensive literature review on the adaptability of online-based public health projects (3,9,10).We used open-ended questions to encourage rich, detailed responses from the participants.Each interview lasted between 60-90 min.The interviews were carried out by the authors in the fall of 2021, either in a location chosen by the participant or online through the Zoom platform.All interviews were audio-recorded with the permission of the participants and later transcribed verbatim for analysis.Table 2 included examples of interview questions: "Can you describe your experience with the AI•IoT-PP?, " "What are the advantages of contactless programs compared to face-to-face home healthcare services (in terms of health promotion for recipients, nursing practice for nurses, health center budgets, workforce, etc.)?, " "What do you consider as the barriers of AI•IoT-PP programs?, " and "What do you consider as the facilitators of AI•IoT-PP programs?" In addition to the interviews, we employed a qualitative survey method that involved the distribution of open-ended questions via email (11).This approach was designed to give participants ample time and flexibility to provide thorough and in-depth responses, thereby reflecting their unique perspectives and experiences related to the research subject (11).The aim of this qualitative survey was to acquire a comprehensive and deep understanding of the phenomena being investigated.
Program
The goal of this initiative is to use AI•IoT technologies to deliver non-face-to-face healthcare consultations to older adults aged 65 years or older who experience difficulty in managing their own health, which not only improves the efficacy of home-visiting healthcare services but also assists in the provision of healthcare for the older adults in vulnerable groups, even during the COVID-19 pandemic.The program included initial health screening, tailored goal setting with the old adults, self-check of health using the Bluetooth device for 6 months, non-face-to-face health consulting using Today's Health app, and health reevaluation at 6 months of enrollment.The Today's Health app has been developed to automatically update the content of AI•IoT devices owned by older adults and share that information with RN-PHCs in an effort to enhance communication and collaboration.
Based on the initial health screening results, the older adults were divided into three groups: healthy, high-risk frailty, and frail.Every day, the participants measured their own blood pressure, blood sugar, and weight, using the provided health-measuring Bluetooth device and send the data to the Today Health app.Nurses in public health centers monitor the health status of participants in real-time via the Today Health app and provide health education and non-face-to-face health consultations to motivate participants to engage in healthy behaviors.The frequency and intensity of non-face-to-face health consultations vary among the groups.The frail group receives non-face-to-face health consultation and education twice a month, whereas the high-risk frail and healthy groups receive it once a month.All groups were assigned health related goals based on their health conditions, such as taking their medications on time every day, measuring their blood pressure daily, and walking for 30 min every day.The participants completed the missions and reported them on the app, while nurses checked whether they were completed.Incentives were provided according to how well the mission was accomplished.
Data analysis and rigor
We implemented both qualitative surveys and interviews in our data collection process, both adhering to the same interview protocols.The structured nature of the surveys ensured consistency across participants, while the open-ended interviews enabled the elicitation of deeper and more nuanced responses.This approach facilitated a comprehensive collection of data.
We utilized the qualitative data analysis method proposed by Hsieh and Shannon (12) for our research.The execution of this process was primarily carried out by one author (Oh) using ATLAS.ti8 Windows software, resulting in a robust coding structure.The process began with discussing the definitions of codes and examining their similarities and differences until consensus was achieved by authors.These related codes were then categorized based on similar experiences.Subsequently, the interrelationships among the categories were analyzed, and initial themes were identified.The following step entailed group deliberations on the identified subthemes, paying special attention to ensure no crucial detail was missed, thereby bolstering the validity of the themes.After these collective discussions, the themes were refined and finalized, then defined and categorized, resulting in precise and substantial findings (Table 3).
To enhance the credibility of our study, we initiated the analysis concurrently with data collection, which provided an immediate and continuous reflection on the data.We augmented the robustness of our findings through a cross-checking process with two other RN-PHCs.In addition, we adopted reflexivity and peer debriefing strategies.Reflexivity allowed us to continually assess our biases throughout the research process, while peer debriefing offered a platform for discussion with impartial peers, helping identify and correct potential biases.These strategies collectively fortified the credibility of our research.Regular meetings were held by the authors during the analysis phase to review the data, reflect on it, and discuss the results, further assuring the credibility and grounding of our findings (12,13).Transformation "People checked their blood pressure, blood sugar, steps, and weight, and could see how they compared to their old info.It was easy to manage their own health, and they could get into good habits with the daily missions.They looked at the numbers with the nurses, who could give advice without seeing them in person."(5:9) Self-management "Once the older adults get used to it, being able to manage their health with their smartphones gives them a sense of accomplishment.A notable advantage is that they can see the results of their efforts.The pedometer was especially popular, although it had limited functionality, only measuring Management "Because of the pandemic, managing your health became harder.But some people found it helpful that they could use technology to manage their health at home.For example, they could easily check their blood pressure and blood sugar levels and see how they were doing over time.
Provisions
Users encountered situations where they couldn't fix issues with their device while trying to measure their health, so they had to ask for help.They felt let down when the step count disappeared due to machine errors after putting in effort to walk more.Moreover, they faced difficulty when their device did not connect well with their phone due to differences in phone model, which required them to manually input data, and that was quite "Based on the participants' characteristics, it seemed like there was a correlation between their economic status, education level, and ability to use mobile phones.Those who found it easy to use the device participated more actively and even took on additional missions.However, some participants found even the basic mission of wearing the activity tracker every morning challenging and dropped out of the project." (6:12) It's taking a while for our participants to adapt to the devices.We found that there were only a few people who were suitable for the IOT project among the home visiting project participants.(9:2) Boo and Oh In order to meet our research goals and maintain a rigorous process, we adopted a clear methodological framework.This facilitated the capturing and recording of participants' expressions using straightforward descriptive data (7,8).We employed a comprehensive sampling strategy, enlisting participants from 11 out of 24 participating public health centers, which enabled us to encapsulate a spectrum of managerial perceptions regarding RN-PHCs.
Results
The RN-PHC expressed excitement about the potential benefits and possibilities of AI•IoT-PP.Despite identifying critical areas for improvement in the intervention and its implementation over the period of 6-9 months, they still gave the pilot program high evaluations.These insights are delineated below as key facilitators and barriers in the execution of the AI•IoT-PP.Furthermore, the AI•IOT initiative was viewed as an innovative healthcare approach and an expansion of public health services.Although remote public health services have been identified as a national priority, there are potential challenges concerning real-time interventions for the older adults who may be unfamiliar with digital devices.Consequently, these challenges need to be addressed, and the project has expanded.
Facilitators
The RN-PHC viewed the implementation of AI•IoT-PP as highly beneficial and pragmatic owing to several significant aspects.
Digital health empowerment and transformation
Digital health technology has been transformative for older adults, encouraging proactive healthcare management through personal devices.Although these individuals initially faced challenges in using the technology, consistent training and support led to their proficiency, heightened satisfaction, and self-assuredness.They embraced the immediate feedback and self-monitoring capabilities offered by these tools, appreciating the accessibility and personalization.The introduction of mobile devices particularly saw high levels of satisfaction, illustrating the power of digital health empowerment and transformation in fostering health autonomy and advancing comprehensive health.
The effect is greatly shown by improving interest in healthcare methods according to the use of smartphones and devices. Expectations for possible changes in health [-related] habits due to voluntary participation. (P1:14).
People checked their blood pressure, blood sugar, steps, and weight, and could see how they compared to their old info.It was easy to manage their own health, and they could get into good habits with the daily missions.They looked at the numbers with the nurses, who could give advice without seeing them in person.(P5:9).
Subthemes Themes
In my opinion, we should aid vulnerable individuals.With solutions for mobile phone compatibility issues and adequate education, I think the IOT project can work for older adults and vulnerable participants (with help from their children if needed).( 7
A notable advantage is that they can see the results of their efforts. The pedometer was especially popular, although it had limited functionality, only measuring steps and pulse. (P10:16).
RN-PHCs underscore the effectiveness of a daily digital health program that fosters health autonomy and facilitates proactive selfcare among participants.Central to this approach was a goal-setting functionality, viewed as indispensable for older adults to define and reach personal health objectives.Older adults were supported by diverse devices, incentives, and personalized health missions, all of which were integral to the sustained progress and success of the program.The ability to autonomously tailor and administer missions and rewards further underscored the program's commitment to personalized care and highlighted the transformative potential of combining AI•IoT-PP in public health care.
The best part for the people doing the program was that doing the missions every day, kind of like homework, made their lives better by helping them exercise and manage their health. (P8:12).
People were happy to take part because it was about their own health.The program included counseling about health and updates on their health indicators, and helped them learn how to take care of themselves.When they were given health info and rewards, they were more likely to keep participating in the program.(P2:19).
Government support for digital health management
During the COVID-19 pandemic, government support played a critical role in expanding digital health services to vulnerable groups, such as older adults.These efforts were made feasible through sufficient funding and capitalizing on South Korea's widespread mobile technology use.Digital health tools provided valuable data, allowing healthcare professionals to enhance their counseling services and improve the precision and individuality of their interventions.Through the allocation of funds, the government effectively promoted healthy behaviors, provided incentives for health-related missions, and enabled access to crucial health devices, demonstrating its commitment to enhancing health outcomes in challenging times.This comprehensive approach to digital health management has led to high levels of participant satisfaction and the overall success of the program.
During the COVID-19 pandemic, it was better to have fewer faceto-face visits and instead offer more remote services like phone counseling. This helped keep everyone safe. (P4:10).
Because of the pandemic, managing your health became harder.But some people found it helpful that they could use technology to manage their health at home.For example, they could easily check their blood pressure and blood sugar levels and see how they were doing over time.(P5:2).
Barriers
The implementation of the project encountered notable challenges, which were primarily attributed to technical issues and substantial disparities in digital literacy skills among older adults.This disparity was particularly pronounced among economically and educationally vulnerable seniors who exhibited lower levels of digital literacy and were at a higher risk of facing digital exclusion.
Tech challenges in digital health for the older adults
The adoption of digital health technologies by older adults has been marked by numerous technical challenges, spanning device malfunctions, software usability issues, and connectivity difficulties.A common problem is that devices like pedometers frequently disconnect or malfunction, which is particularly challenging given the older participants' limited familiarity with technologies like Bluetooth.Additionally, delays in notification about these issues further complicate the user experience and pose obstacles during phone consultations.Therefore, it's clear that additional tech support and resources are essential to address these challenges and ensure the successful integration of digital health technologies among the older population.
Users encountered situations where they could not fix issues with their device while trying to measure their health, so they had to ask for help.They felt let down when the step count disappeared due to machine errors after putting in effort to walk more.Moreover, they faced difficulty when their device did not connect well with their phone due to differences in phone model, which required them to manually input data, and that was quite annoying.(P4:4).
RN-PHC faced challenges in sustaining older adults' participation in the digital health program due to varied interest and skill levels.Complications arose from dropouts not returning devices and the absence of clear withdrawal guidelines.These issues underscore the need for additional resources and strategic planning for effective implementation of digital health initiatives for the older population.Some participants dropped out of the project due to low participation, loss of contact, and other reasons, and there were difficulties because they did not return the devices.Despite my attempts to encourage them to complete the mission and stay in contact, I was unable to reach them, and their level of participation in the project was very low.Furthermore, there are no clear guidelines for withdrawing from the project.(P5:12).
The digital alienation of vulnerable older adults
Among older adults and vulnerable populations, such as those residing alone, certain individuals faced obstacles in participating because of the limited functionality of their mobile phones.Older adults or diminished cognitive abilities also encounter difficulties with device connectivity and other technical complications.Furthermore, a considerable number of individuals did not possess smartphones, which complicates their involvement.Particularly, the older adults required extensive education, encouragement, and assistance to actively engage in the project.
Some participants wanted to join the project, but they had little experience with the device, especially older and less tech-savvy individuals. (P1:23).
Based on the participants' characteristics, it seemed like there was a correlation between their economic status, education level, and ability to use mobile phones.Those who found it easy to use the device participated more actively and even took on additional missions.However, some participants found even the basic mission of wearing the activity tracker every morning challenging and dropped out of the project.(P6:12).
The RN-PHC highlighted that certain older adults required more time to acclimate to the devices and faced challenges in adapting to AI•IoT projects.However, they emphasized the significance of delivering services to vulnerable older adults and expressed confidence that addressing technical issues would enable effective service provision.In essence, they believed that with adequate education, vulnerable older adults would be able to access the services they require.
It's taking a while for our participants to adapt to the devices.We found that there were only a few people who were suitable for the IOT project among the home visiting project participants.(P9:2).
Discussion
This study aims to explore the facilitators and barriers experienced by RN-PHCs during the implementation of AI•IoT-PP in 2021, with a primary focus on understanding the implications of integrating this emerging technology into home-visiting healthcare services.This study emphasizes the effective use of AI-IoT-PP in enhancing public health care for older adults, thanks to goal-setting functionalities and government support.
Our study highlights the transformative potential of digital health technologies, especially AI•IoT-PP interventions, in enabling older adults to actively manage their health.The pilot phase of the AI•IoT-PP intervention proved feasible and was positively received, demonstrating the effectiveness of mobile technology in enhancing communication and goal setting between RN-PHCs and clients.These findings echo community-based studies that underscored the critical role of health worker involvement and efficient clinician workflows in the successful adoption of mobile apps among low-income populations and vulnerable families (9,14).This aligns with the literature (14, 15), which advocated for mHealth applications as significant facilitators in health management.Mohammed's study showcased how mHealth apps can assist individuals with chronic conditions in effectively tracking their health-related goals, indicating a potential for these apps to boost health-promoting behaviors.Our findings further reaffirm the necessity of health worker participation in ensuring successful mobile app adoption within disadvantaged populations.The RN-PHCs were instrumental in implementing a daily digital health program that delivered high client satisfaction, leveraging AI•IoT-PP interventions to improve communication, goal setting, and provide user-friendly access to health features.In particular, the goal-setting functionality emerged as critical for helping older adults set and attain personal health objectives, demonstrating how technology can encourage self-care and consistent progress in health management.Moreover, RN-PHCs reported that AI•IoT-PP interventions enabled timely interventions, leading to positive health behavior changes, with optimism regarding the program's long-term outcomes.Second theme was Government Support for Digital Health Management.Given the nature of community-based health nursing, a limited number of RN-PHC are responsible for a large number of community-dwelling clients.Consequently, existing services have primarily focused on one-way interventions, including health screening, health and lifestyle counseling, referrals, and documentation (16).As part of the AI•IoT-PP, participants were equipped with health-related devices such as blood pressure monitors, blood glucose meters, pedometers, and smart scales.RN-PHCs reported that clients' heightened satisfaction stemmed from receiving these devices free of charge and monetary incentives, which was made possible, in part, by government-funded support.Almost all RN-PHCs emphasized the indispensability of government support for the continued provision of services.This support is crucial not only for enrolling more clients and delivering services, as previously suggested (10), but also because of concerns regarding the program's sustainability resulting from insufficient funding transparency.If the feasibility and effectiveness of the pilot project are confirmed, further analysis of the cost-effectiveness and suitability of government support should be conducted.Additional deliberation is necessary regarding the process of selecting appropriate candidates for AI•IoT projects among the recipients of home healthcare services.
The third theme identified in this study was the Tech Challenges, which served as barriers to Digital Health adoption among older adults.These difficulties encompassed technical glitches, usability issues with software, and connectivity problems, all of which obstructed the successful integration of these technologies, despite their immense potential.Existing literature echoes these challenges, indicating the widespread nature of these technical hurdles when older adults interact with digital health technologies (14,17).Further complicating this issue, not all available telehealth technologies are suitable for older adults due to age-related changes like diminished vision, impaired hearing, and reduced dexterity, potentially restricting their ability to efficiently use various telehealth devices (18,19).This study was unable to conclusively establish a direct correlation between the physical changes associated with aging in older adults, advanced age itself, and the technology challenges encountered.Further research is warranted to elucidate this relationship more precisely.
Moreover, these issues were further complicated by participant attrition and a lack of clear protocols for withdrawal, thereby underlining the pressing need for increased technical assistance, resources, and strategic planning for successful and sustainable adoption within this demographic (10).In response to these challenges, some studies have proposed potential solutions, such as the design of more user-friendly interfaces and the establishment of Initially perceived as minor, these technical difficulties morph into intricate and disruptive obstacles during the implementation of interventions, requiring the addressing of additional barriers, including the capacity and integration of workflows for RN-PHC.As part of their role, RN-PHCs are tasked with providing technical support to clients, which necessitates possessing the necessary skills and familiarity with the application while promptly resolving client concerns.
Despite the profound influence of information technology in driving digitization in daily life, a persistent disparity between those who have access to information and those who do not remains.Consequently, addressing this information gap among vulnerable populations has emerged as a pressing concern (21).Given this context, the incorporation of previous literature on similar technical challenges and their potential solutions provides a more holistic understanding of the complexities involved in adopting digital health technologies, especially among older adults.The digital alienation of vulnerable older adults, particularly those living alone or with diminished cognitive abilities, is exemplified by their challenges in using mobile phones, facing connectivity issues, and even lack of smartphone ownership, necessitating comprehensive education and support.However, RN-PHC highlighted that with time and adequate education, these vulnerable groups could successfully adapt to AI-IoT projects and gain access to essential services.
South Korea has the highest smartphone distribution rate globally.However, the level of digital health literacy varies significantly depending on age and income levels.Almost all individuals in their 20s and 40s own smartphones, while the ownership rate decreases to 80% for those in their 60s and is as low as 38% for individuals aged 70 and above (22).In the context of a globally aging population, it is crucial to engage older adults in digital technology, including mHealth, to promote their health and functioning (23).The literature suggests that telehealth and mHealth offer valuable solutions for remote support to frail older adults (24,25).
However, the older adult population in particular, frequently face exclusion and a sense of alienation from utilizing information technology.This phenomenon, known as digital alienation, often leads to feelings of unfamiliarity and helplessness due to a widening technology gap (26)(27)(28).Previous research demonstrated that individuals with higher income and educational levels are more inclined to adopt and express satisfaction with technology, whereas those with lower income and educational levels experience greater digital alienation (28).Considering that recipients of communitybased home healthcare services often belong to low-income groups with limited educational backgrounds, it is crucial to enhance their confidence and competence in using technology.Interventions incorporating video technologies and telephone support have shown promise in reducing isolation and improving health outcomes (4, 21).As frontline service providers, RN-PHCs play vital roles in endorsing and adopting technological solutions.Their adoption of technology enhances client engagement, satisfaction, and overall outcomes (9,29).Therefore, for participants experiencing exclusion and a sense of alienation, a variety of approaches including video technologies, telephone support, or a combination of online and offline visits could potentially be beneficial.
In the era of the digitalization and smartification of nursing practice, nurses are required to act as catalysts for change and possess the ability to seamlessly deliver advanced nursing services.As society increasingly embraces contactless practices, there will be a growing demand for various forms of nursing services.Hence, it is essential to develop and provide patient-centered, tailored nursing education, counseling, and care management programs.These programs should utilize blended approaches that combine online services with traditional face-to-face consultation and education.Amidst these transformative changes, it is of utmost importance for RN-PHC to recognize the unique needs of vulnerable populations and assume an even more significant role in ensuring that their health is effectively managed without facing marginalization.
Limitation
This study acknowledges a few limitations.Initially, as it was a pilot study, the AI-IoT-PP was deployed in a restricted number of health centers, potentially limiting the wide-ranging applicability of our results.Despite the lingering potential for either over-or underestimation, we believe the preservation of an open and non-judgmental interview is key to accurately depicting participants' experiences and viewpoints.While we do not foresee major issues regarding the authenticity of their responses, the possibility of overestimation or underestimation, given the RN-PHCs' role in implementing the pilot program, is a consideration that we have recognized.Despite efforts, the complete eradication of these biases remains a challenge due to sample size restrictions.An additional limitation lies in the relatively low adoption rate of AI•IoT-PP among enrollees of the home-visiting healthcare service, even with explicit registration suggestions.This pattern underscores potential hurdles in promoting the wider adoption of AI•IoT-PP within this demographic.To address these limitations, we understand the imperative for more extensive research, particularly emphasizing the verification of the authenticity and accuracy of data obtained from RN-PHCs.
Conclusion
This study underlines the transformative potential of the AI-IoT-PP in older adults' healthcare in the public health sector.Key facilitators include technology-assisted behavioral adoption, real-time interventions, and government support, which are particularly relevant during the COVID-19 pandemic.However, technological hurdles and disparities in digital literacy skills among the older adults, especially those who are economically and educationally disadvantaged, have emerged as significant barriers.Therefore, strategies aimed at enhancing digital literacy and addressing technological challenges are critical for ensuring a more inclusive and effective healthcare system.
While these challenges persist, this study also reveals that with continued support and training, the older adult population can adapt to healthcare technology.These findings reinforce the necessity for persistent efforts to support this demographic in an evolving digital healthcare landscape.Hence, this research underscores the significance of technological interventions, such as AI•IoT-PP, especially for vulnerable demographics in the public health sector.This further highlights the necessity for all-encompassing strategies to optimize their effectiveness while maintaining an inclusive approach.
What are the advantages of contactless programs compared to face-to-face home healthcare services (in terms of health promotion for recipients, nursing practice for nurses, health center budgets, workforce, etc.)?What do you consider as the barriers of AI•IoT-PP programs?What do you consider as the facilitators of AI•IoT-PP programs?Considerations and Improvements for Program Expansion Do you think the "AI•IoT-PP" can be applied to other public health centers with different conditions (population demographics, regional characteristics, etc.)?What aspects of the "AI•IoT-PP" need modification for program expansion, and how do you propose to make those modifications?What potential issues may arise when applying the program in other public health centers?How do you perceive the Today's health app and smart devices used in the "AI•IoT-PP"?AI•IoT-PP, artificial intelligence/internet of things-based healthcare pilot project.
steps and pulse." (10:16) Health Monitoring Real-time Health Advice Sense of accomplishment Voluntary Participation "The best part for the people doing the program was that doing the missions every day, kind of like homework, made their lives better by helping them exercise and manage their health."(8:12) Health management/Improvement Comprehensive Health Advancement through Structured Engagement "People were happy to take part because it was about their own health.The program included counseling about health and updates on their health indicators, and helped them learn how to take care of themselves.When they were given health info and rewards, they were more likely to keep participating in the program."(2:19) Participant Engagement Program Structure "During the COVID-19 pandemic, it was better to have fewer face-to-face visits and instead offer more remote services like phone counseling.This helped keep everyone safe." (4:10) (5:2)" Use of Technology for Health monitoring "This program has enough budget to promote healthy behaviors and provide incentives for individualized health-related missions." (5:19) Financial Incentives Support Beyond Material Challengesin Digital Health for the older adults Some participants dropped out of the project due to low participation, loss of contact, and other reasons, and there were difficulties because they did not return the devices.Despite my attempts to encourage them to complete the mission and stay in contact, I was unable to reach them, and their level of participation in the project was very low.Furthermore, there are no clear guidelines for withdrawing from the project.(5:12)Participant EngagementProgram management Challenges "Some participants wanted to join the project, but they had little experience with the device, especially older and less tech-savvy individuals."(1:23)
TABLE 2
Interview guides.
TABLE 3
Data analysis.
"The effect is greatly shown by improving interest in healthcare methods according to the use of smartphones and devices.Expectations for possible changes in health [-related] habits due to voluntary participation."(1:14) | 7,737.8 | 2023-10-10T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
A spread spectrum approach to time-domain near-infrared diffuse optical imaging using inexpensive optical transceiver modules
We introduce a compact time-domain system for near-infrared spectroscopy using a spread spectrum technique. The proof-of-concept single channel instrument utilises a low-cost commercially available optical transceiver module as a light source, controlled by a Kintex 7 field programmable gate array (FPGA). The FPGA modulates the optical transceiver with maximum-length sequences at line rates up to 10Gb/s, allowing us to achieve an instrument response function with full width at half maximum under 600ps. The instrument is characterised through a set of detailed phantom measurements as well as proof-of-concept in vivo measurements, demonstrating performance comparable with conventional pulsed time-domain near-infrared spectroscopy systems.
Introduction
Over the last few decades, near-infrared (NIR) spectroscopy (NIRS) systems have significantly evolved from merely experimental tools to proper, non-invasive monitoring methods with numerous clinical applications [1,2]. The predominant application of NIRS is monitoring and evaluating increases or decreases in oxygenated hemoglobin (HbO 2 ), de-oxygenated hemoglobin (HHb), total hemoglobin (tHb) and oxygen saturation (SO 2 ) in tissues. The use of NIRS for the purpose of real-time brain haemodynamics recordings while the subject is performing a functional task is called functional NIRS (fNIRS) and is nowadays a popular technique for functional neuroimaging [3][4][5][6][7]. Moreover, the application of the NIRS principle to non-invasive tomographic and topographic imaging of tissues and organs has seen a rapid increase over this period, resulting in the generation of various diffuse optical topography and tomography (DOT) systems, which allow the two-and three dimensional reconstruction of tissues and organs by solving an inverse problem [8] to reconstruct images from measured raw data [9].
NIRS measurements are categorised into three different interrogation techniques: (i) continuouswave (CW); (ii) frequency-domain (FD); (iii) time-domain (TD). Each one relies upon a different light emission and detection method, with its own advantages and disadvantages. CW NIRS instruments are capable of measuring only the intensity of the diffuse light, and consequently light scattering and absorption effects can not be easily differentiated. Despite this limitation, CW instruments are the most commercially exploited to date, in part owing to their simplicity and costeffectiveness. On the other hand, within the last decade, the interest in FD and TD NIRS has started to steadily increase, mainly due to the fact that the richer set of measurements allow the recovery of both absorption and scattering information, but also to potentially superior contrast-to-noise properties and the ability to detect signals deeper within a turbid medium [10][11][12][13][14]. However, with FD NIRS depth discrimination can sometimes become challenging (e.g. in reflectance geometry measurements) in comparison with TD NIRS, which was proven to be the most efficient approach when it comes to depth sensitivity, recovery of the absolute value of the optical properties of the subjects under test and tomographic results [9].
As it has been repeatedly shown in literature [15], conventional TD techniques are challenging mainly due to their long integration time, their sensitivity to the ambient environment, especially in the case of time correlated single photon counting (TCSPC)-based detection, and due to the fact that the single light source often needs to be split and routed to the various sources. Classical TD NIRS instruments are bulky, expensive, typically employ sensitive optoelectronics, which are susceptible to vibrations, and switching between wavelengths could potentially be slow depending on the system's architecture (in [15] Torricelli et al. report ∼10sec switching time for solid state lasers). In addition, it is worth mentioning that in the case where pulsed diode lasers are selected as light sources, the warm-up time required to achieve pulse time stability in the picosecond range may be long (potentially ≥60 min) [12,15]. These factors limit the applicability of the technique to use in a hospital or research environment. A smaller, more robust implementation could facilitate wider applications in emergency medicine, enabling, for example, deployment in an ambulance.
In this work, we demonstrate experimental measured results from an alternative TD NIRS instrumentation setup, relying upon the spread spectrum method for time-of-flight (TOF) resolved measurements. The instrument's light source utilises a commercially available, low-cost optical transceiver module, widely used in telecommunication applications, controlled by a Kintex 7 FPGA. The proposed setup can generate sub-ns system's instrument response functions (IRFs), which are competitive with conventional pulsed excitation systems, exhibits sufficient accuracy and low noise properties, requires very short warm-up time and even in its current, proof-ofconcept form occupies significantly less space compared to most traditional TD NIRS instruments. The paper is structured as follows: in Section 2, the implemented technique and instrument are demonstrated, while in Section 3, various characterisation experimental results are provided. In Section 4, tissue-equivalent phantom results are illustrated accompanied by a proof-of-concept in vivo arterial cuff occlusion experiment. Finally, a detailed discussion of the potentials, limitations and future improvements of the proposed setup is offered.
Spread spectrum technique
Typically in conventional single channel TD NIRS, an optical source and detector are placed appropriately around the object of interest. Subsequently, an ultra-short light pulse (circa few picoseconds pulse width) from the source is injected into a turbid medium, whilst the temporal point spread function (TPSF), i.e. the photon distribution of the time-of-flight (DTOF) is detected at the detector. A TPSF represents the tissue's impulse response function which is the optimal measurement to characterise a system and is assessed based on the level of its delay, broadening or attenuation. A TPSF can be evaluated by a series of techniques for modelling and data analysis, such as (a) the forward model [16], (b) the inverse model [8] or (c) semi-empirical approaches [17]. In conventional TD NIRS instruments using pulse excitation (PE) for TCSPC, the following linear relationship holds between the measured TPSF and the real TPSF: where * denotes convolution. With respect to the system's IRF PE (t), the following relationship holds: IRF with the source and detector IRFs depending on the selected optical fibres (their length and dispersion affect the delay and the width of the measured TPSF, respectively), coupling between different optical components in the setup and between the optics and the subject under test [18]. From equations (1) and (2) it is implied that in order to obtain the true DTOF, deconvolution between the measured TPSF and the system's IRF needs to be performed.
Recently spread spectrum methods were applied to TD NIRS as an alternative to pulsed excitation, which could help to overcome some of the instrumentational challenges discussed earlier. Spread spectrum techniques have been used for long time in telecommunication applications and their main advantages include low bit-error rate, interference rejection, and selective addressing capability [19]. This way a much higher signal-to-noise ratio (SNR) can be achieved for the same measurement. The spread spectrum method uses a noise-like signal such as a pseudorandom binary sequence (PRBS) to spread the input signal in the frequency domain, thus, making use of the full bandwidth of the communication channel and reducing the instantaneous power of the signal. For TOF measurements the input signal is a delta function in time, and thus it is the PRBS signal itself that is transmitted. By applying the spread spectrum approach to single photon counting, the response recorded by the TCSPC card (G SS (t)) does not resemble a conventional TOF histogram. Instead, the response is the convolution of the transmitted PRBS with the system's IRF and the impulse response of the medium, i.e.: with P(t) denoting the binary sequence with which the optical transceiver is modulated. In order to apply the spread spectrum technique to our setup, a maximum-length sequence (MLS) was chosen to be optically transmitted, due to its excellent autocorrelation properties, which are similar to a delta function [20]. MLSs are spectrally flat and provide the maximum possible period if N MLS =2 q -1 for a given degree q. The circular autocorrelation of an MLS is a Kronecker delta function with DC offset and time delay, depending on selected implementation. For a zero-symmetric mapping, its autocorrelation (R X X ) is given by: where P * denotes the complex conjugate, τ is the time delay and T is the transmitted sequence period. By cross-correlating the TCSPC card response, G SS (t), with the binary MLS P(t) we have: with TPSF SS (τ) representing the measured TPSF of the subject under test, valid only when the TPSF's duration is less than the transmitted MLS period.
So far, other research groups have demonstrated the benefits of spread spectrum techniques into TD NIR imaging by using either bulky PRBS generators [21][22][23][24][25] or FPGAs [26], programmed to generate PRBSs to modulate fast vertical-cavity surface-emitting lasers (VCSELs), as also shown in [27], where an FPGA was employed, in order to generate a dual-channel 2 10 -1 PRBS at 2.5Gb/s line rate. The core principle in [27] relies upon the modulation and demodulation of the produced PRBSs with a low-frequency reference signal, by means of an analogue modulator (AM), thermoelectrically cooled avalanche photodiodes and a data acquisition (DAQ) device, which eventually leads to a significant compromise with respect to the resolution of the system's IRF and acquired TPSFs. A schematic diagram of the proof-of-concept, developed instrument for TD NIRS can be found in Fig. 1. The system exploits the use of a Gigabit Optical transceiver (AFBR-709SMZ, Avago), utilising an 850nm VCSEL, a TCSPC card (DPC-230, Becker & Hickl), a single photon counting module (SPCM) using a thermoelectrically cooled and temperature controlled silicon avalanche photodiode (SPCM-AQR-14-FC, PerkinElmer) and the Multi-Gigabit Transceivers (MGTs) of a Kintex 7 FPGA (KC705, Xilinx). MGTs are in practice embedded Serialise/De-serialise (Ser-Des) devices, providing built-in solutions for gigabit bandwidth applications that require the transmission or reception of data using the FPGA [28]. The MGTs of Kintex 7 are called GTX Transceivers and are used as a basic block for common interface protocols (e.g. PCIe and SATA) [29]. The Kintex 7 FPGA features sixteen GTX ports, which can drive similar Gigabit optical transceiver modules in parallel, and modulate them in a synchronous or asynchronous manner, supporting line speeds from 500Mb/s to 12.5Gb/s. The selected optical transceiver is part of a family of enhanced small form-factor pluggable (SFP + ) modules, supporting data rates up to 16Gb/s. The transceiver's VCSEL is Class 1 with 0.79mW maximum output power, which is eye safe under all circumstances. The radiant power of the accessible laser beam is always below or equal to the maximum permissible exposure value, according to the BS EN 60825-1:2014 British standard. The module's overall power dissipation does not exceed 1W (typ. 600mW). As Fig. 1 illustrates, in the proposed setup two GTX ports of Kintex 7 FPGA were employed, both programmed to operate at 10Gb/s in a synchronous manner, clocked by the same, dedicated GTX reference clock. A fixed length MLS is transmitted from the SFP + port via the optical transceiver to the sample with fixed period. Similarly, a single 100ps width electrical pulse is sent to an SMA output port with the same period. The fast electrical pulse stemming from the SMA output of the FPGA board is used as the SYNC electrical signal for the TCSPC acquisition board. Single photon detection is performed by the SPCM and a single-ended TTL pulse is produced for every detected photon. The TTL signal from the SPCM acts as the STOP electrical signal for the TCSPC acquisition board. Accuracy in the various timings of the GTX process is of paramount importance and is ensured by employing an on-board ultra low jitter (<0.32ps) crystalto-low-voltage differential signalling (LVDS) clock generator as our system's GTX reference clock (ICS844021I, IDT). Both source and detector are fibre-coupled to the samples under test by means of multimode glass optical fibres with 50/125 µm of core/cladding. The electrical pulses are transmitted to the TCSPC card through standard high-frequency SMA/BNC cables. In order to obtain the correct polarity and amplitude for the optimal TCSPC card operation, the electrical pulses also pass through pulse inverter (A-PPI-D, Becker & Hickl) and appropriate attenuation modules.
System configuration
The GTX transceiver can be configured to transmit data of any of the following available widths, i.e. 16, 20, 32, 40, 64 or 80 bits. Taking into consideration normal TPSF times (typically around 4-6ns) and the fact that transmission needs to be performed with 10Gb/s line rate, an MLS with q=7, i.e. 127-bit width was chosen, corresponding to a sequence repetition rate of 78.74MHz. The transmitted MLS was first generated in Matlab (Mathworks, Inc.) using a linear-feedback shift register algorithm which was automatically transferred to a read-only memory (ROM) file that was subsequently loaded into the FPGA. Because a 127-bit MLS was too long to be transmitted at once, the MLS was split into 64 bit segments into the ROM file as shown in Fig. 2, transmitted in a circular manner. This method ensures that MLS data will always be transmitted without any transmission data loss or delay. It is worth mentioning that an MLS generator can be easily implemented on the FPGA, feeding its output data into the GTX without the need to use a ROM file.
Collected data post-processing
An indicative example of a typical TCSPC card response after the MLS excitation can be seen in the TCSPC Raw Data block of Fig. 3. The data are sent automatically to the user's PC, where the post-processing steps shown in Fig. 3 The post-processing stages presented in Fig. 3 are standard procedures for TCSPC data post-processing, apart from the cross-correlation step in stage 3, arising from the unconventional excitation method. More specifically, raw data are acquired with a fixed integration period and stored in multiple files. Subsequently, at stage 2, averaging and filtering of the raw data takes place. The digital filter employed is a standard low-pass finite impulse response (FIR) with N FIR =30 and cut-off frequency at 2GHz. At stage 3 the TCSPC raw data are cross-correlated with the optically transmitted 127-bit long binary MLS. The product of this cross-correlation is the sought after 127 points IRF or TPSF. Stage 4 introduces some additional processing, by subtracting the DC offset from the produced IRF and TPSFs and applying a time window. Finally, stage 5 is where mean time is calculated from the measured data.
Characterisation of the proposed system
A set of standardised experiments was performed, in order to characterise various aspects of the proposed system, evaluating both its hardware and software performance. The system was characterised across a range of performance metrics, closely following the Basic Instrumental Performance protocol proposed by [30], including responsivity, differential non-linearity (DNL) of the timing electronics and IRF's full width at half maximum (FWHM). Moreover, long term stability, linearity, accuracy and noise were investigated. In Table 1 the properties of the VCSEL source are summarised. The responsivity of the detection channel is ∼1.55×10 -8 m 2 sr and the DNL of the timing electronics of the system was found to be <1%, implying good uniformity of the width of the time channels. The system's IRF FWHM was found to be ∼583ps, as shown in Fig. 4. Figure 4(a) demonstrates an indicative response of the TCSPC acquisition system, once the aforementioned MLS is optically transmitted by the SFP + module. Figure 4b reveals the IRF, once the MLS response of Fig. 4(a) is cross-correlated with the transmitted binary MLS (stage 3 of Fig. 3). In Fig. 4(b) a distinct second peak appears, relatively long after the primary peak (∼10ns). This peak is likely to be due to a reflection occurring during the IRF measurement. From the inset of Fig. 4(b) the dynamic range of the IRF can be seen, which is one or two orders of magnitude lower than other reported conventional state-of-the-art TD NIRS instruments [11,12]. This seems to be an inherited property of the proposed source, once a spread spectrum technique is applied, that is difficult to overcome with commercially-available components. However, as it can be also seen in the various experimental results below, the system's overall performance is acceptable for the proposed applications.
Stability
To investigate the stability of the system we recorded the system's intensity and relative mean TOF variations over an eight hour period. The results, summarised in Fig. 5, indicate intensity and relative mean TOF stability during the whole experiment. Following an initial warm-up period of thirty minutes, the instrument demonstrated intensity stability of ±1% corresponding to standard deviation of ∼715 photons and stability in mean TOF of ±25ps. During the thirty minute stabilisation period, intensity drops and relative mean TOF increases slightly, however, the maximum intensity change does not exceed 3%, while for the relative mean TOF the maximum change does not exceed 60ps. Compared to the aforementioned conventional state-of-the-art TD NIRS instruments [11, 12], our system is able to stabilise significantly faster, achieving similar stability performance and decreasing warm-up time by a factor of 10 (30min instead of the reported 300min in [12]).
Linearity and accuracy
For the evaluation of the system's linearity and accuracy, homogeneous liquid-based phantoms were employed, in which suitable absorbing and scattering agents were added to deionised water to control the absorption and reduced scattering coefficients (µ a and µ s respectively) of the final solution. To accommodate the liquid-based phantoms, a custom-made clear acrylic tank was fabricated with dimensions 120×120×35mm. NIR absorbing dye S109564 (ICI, U.K.) was selected to adjust the absorption coefficient, while 20% W/V intralipid (Fresenius Kabi, U.K.) was used to modify the reduced scattering coefficient. For the characterisation of the near-infrared absorbing dye, a 1×1cm cuvette was inserted into a NIR spectrometer (PerkinElmer, USA) to measure the transmittance between 650nm and 950nm. From this spectrum the Beer-Lambert law was used to calculate the dye's absorption coefficient. The reduced scattering coefficient of the intralipid between 650nm and 950nm was obtained using a broadband TD-NIRS instrument in our group [31]. With knowledge of their spectral properties, we were able to quantify the exact concentration of each component required in a specific volume of deionised water to achieve the desired absorption and reduced scattering coefficients in our phantoms. coefficient, the scattering coefficient was held constant at µ s = 0.8mm −1 , whilst the absorption coefficient was varied over the range 0.007 ≤ µ a ≤ 0.026mm −1 . For the second experiment, where the linearity for reduced scattering coefficients was measured, µ a value was held constant and set equal to 0.01mm -1 , while µ s was increased to achieve nominal values of 0.5, 0.965 and 1.4mm -1 . Ten measurements on each phantom were taken. The experimental TPSFs were subsequently fitted to a standard model of photon diffusion theory [32] by using a nonlinear least-squares fit algorithm in Matlab. Before fitting, the ideal TPSFs produced by the model were convolved with our system's IRF. The linearity and accuracy results for absorption and reduced scattering are summarised in Fig. 6. In both Fig. 6(a) and Fig. 6(b), the dashed lines are the first order linear fit of the measured points, with coefficient of determination for the absorption coefficients R 2 =0.9998 and R 2 =0.9855 for the reduced scattering coefficients. In both cases, the relative accuracy error for each measurement ranged between 2 and 9%.
Noise
The Gigabit optical transceiver achieves fast modulation speeds by exploiting an internal optical DC offset, where light is modulated above and below this level. This inherited property of the optical transceiver combined with the applied spread spectrum technique, which spreads the system's noise across the whole frequency spectrum uniformly, results into a TD NIRS system that does not follow traditional noise statistics. More specifically, we expect the noise in our system to be constant and Gaussian, unlike the Poisson type of noise that exists in conventional TD NIRS systems. As one would expect for a Gaussian statistics, noise can be averaged away by increasing integration time. In order to validate the above, the following set of experiments was performed.
The developed proof-of-concept system does not have yet the ability to control the output power of the laser source and consequently the overall intensity. Therefore, in order to change count rate, different turbid media were employed. Two count rate cases were selected for our noise experiment. In the first one, a thin slab was chosen in transmittance geometry, resulting in a count rate of about 10 6 photons/sec. For the second case, a much thicker slab was selected in reflectance geometry, resulting into a count rate of approximately 1.15×10 5 photons/sec. In both cases, TPSFs were collected with integration time set at 0.2 seconds. In order to investigate the impact of integration time upon the noise levels of the system, the 0.2 seconds TPSFs were averaged (as shown in Fig. 3, stage 2), allowing us to obtain different integration time values. For both experiments, the coefficient of variation (CV) and the system's SNR were calculated. The results for the first set of experiment can be seen in Fig. 7(a) and Fig. 7(b), while for the second one in Fig. 7(c) and Fig. 7(d). Fig. 7(c) mean value and STD of indicative TPSFs, obtained using the low count rate setup and 20 seconds integration time, are shown. As expected, the STD in both cases is constant due to the spread spectrum technique. Moreover, in both cases, there is a linear relationship between integration time and the system's SNR. The overall behaviour of the two experimental setup is also consistent. For example, in the first case, in order to obtain a CV equal to 5% and SNR around 25dB, a 10 seconds integration time is required. Similarly in the second case, the aforementioned CV and SNR values are obtained when integration time is roughly 100 seconds.
Phantom and in vivo evaluation experiments
In this section TD-related results are presented using both a tissue-equivalent phantom and also by performing a proof-of-concept in vivo experiment.
Evaluation on a tissue-equivalent phantom
In this experiment a tissue-equivalent phantom was employed which was already presented and described in detail in [33]. The phantom consists of a solid block of epoxy resin (dimensions 95×175×60mm) with uniform optical properties and a cylindrical cavity through which a rod (diameter of 10mm and length of 130mm) can be manually translated back and forth. A threedimensional representation of the phantom can be seen in [33]. Based on the values provided, the rectangular block has a transport scattering coefficient µ s =1.0mm -1 and absorption coefficient µ a = 0.0112mm -1 at 850nm. The rod has exactly the same optical properties with the rectangular block apart from its central target region, which has an absorption coefficient µ a = 0.112mm -1 . This means that maximum attenuation should be observed when the target is translated across the centre of the cavity.
The source and the detector were positioned on the top of the surface with a ∼35mm distance between them, as described in [33]. For this experiment we selected to perform a 300 seconds continuous measurement comprising of three stages: (a) we kept the target off-centre for 100 seconds; (b) translated the target across the centre of the cavity for 100 seconds; (c) returned the target back to its off-centre position and record for another 100 seconds. Ten measurements were performed and the integration time for each TPSF was set equal to 5 seconds. Figure 8 and Fig. 9 summarise the obtained results of the experiment. In Fig. 8(a) the recorded intensity change is shown, related to the target's position. As approximated in [33], this change (relative to a homogeneous block and the change in the absorption coefficient) corresponds to approximately 0.5dB. Figure 8(b) demonstrates relative mean TOF changes over experiment time. An approximately 20ps-25ps mean TOF change can be seen during the different phases of the experiment. The spikes observed at 100 and 200 seconds in Fig. 8(b) are due to momentary exposure to ambient light (the selected SPCM can detect single photons of light over the 400nm to 1060nm wavelength range). Finally, Fig. 9 demonstrates another TD-related information for the phase angle of the Fast Fourier Transform (FFT) performed on the captured TPSFs for different frequency components (bins) of the FFT. It can be seen that frequency bins up to 470MHz reveal the change in the optical properties of the rod phantom. Once again, spikes observed at 100 and 200 seconds are due to momentary ambient light exposure. The results of Fig. 9 also verify that the post-processing stages of Fig. 3 did not affect the mean TOF results.
Arterial cuff occlusion in the arm
A standard arterial cuff occlusion (200mmHg) of the left arm was performed on an adult male subject. The protocol followed consists of three stages: In the first one the arm was in resting position for 2 minutes; the cuff was rapidly inflated to a pressure of 200mmHg to provide an abrupt vascular (venous and arterial) occlusion, maintained for another 2 minutes; and finally the cuff was released and the recovery phase followed for 2 more minutes. The source and detector were placed on the subject's forearm 25mm apart and were stabilised with bandages. The experiment was performed in dark room conditions with a blackout material also been placed around the source and detector on the subject's forearm. Once again, the integration time for each TPSF was set equal to 5 seconds. Figure 10 and Fig. 11 summarise the obtained results of the experiment.
In Fig. 10, intensity and relative mean TOF changes during the experiment are shown. More specifically, in Fig. 10(a), a steady intensity baseline can be seen for the first 120 seconds. During cuff inflation intensity drops and stabilises around 150 seconds (roughly 30 seconds after onset of occlusion). Subsequently, around 240 seconds, when the cuff is released intensity gradually returns towards its original baseline. The total intensity change corresponds roughly to 1.8dB. Figure 10(b) illustrates the relative mean TOF during the cuff occlusion experiment. It exhibits similar trends to the intensity curve, with a mean TOF change corresponding to ∼60ps. Finally, Fig. 11 demonstrates changes on the phase angle of the FFT performed on the captured TPSFs for four different FFT frequency components (bins). In this experiment, where the change is slightly bigger compared to the previous rod-phantom experiment, it can be seen that frequency bins up to 787MHz can reveal the change in the optical properties of the tissue. Once again, the results of Fig. 11 verify that the post-processing stages of Fig. 3
Discussion
The characterisation results in Section 3 demonstrated clearly the pros and cons of the aforementioned system/technique for TOF resolved measurements. More specifically, the responsivity and DNL of the proposed system is comparable with the state-of-the art systems shown in [15]. Our system's IRF is approximately 1.5-3 times wider compared to other published TD NIRS instruments. However, as literature also indicates, for (f)NIRS applications such system IRF is acceptable, since it is well within the sub-ns range. The dynamic range of the IRF is one or two orders of magnitude lower than other reported conventional state-of-the-art TD NIRS instruments [15]. As already mentioned previously, the limitation in dynamic range seems to be a combination of the optical transceiver's modulation capabilities (sacrifices dynamic range for speed) and the spread spectrum approach, which spreads the system's noise across the whole frequency spectrum uniformly, thus, deviating from the Poisson noise of conventional TD NIRS systems. The Gigabit optical transceiver achieves fast modulation speeds by exploiting an optical DC state, where light is modulated above and below this state. This means that compared to traditional instruments, the baseline of our TPSFs will be noisier. On the other hand, stability and linearity/accuracy is comparable to traditional systems. As shown in Fig. 5, our system is stable within the first ∼30 minutes, achieving ±1% intensity and ±25ps mean TOF temporal stability, a time period significantly lower compared to conventional reported systems that require an order of magnitude more time to reach similar intensity and mean TOF values. The linearity for both absorption and reduced scattering is satisfying, given the R 2 of Fig. 6, and accuracy error ranged between 2 and 9%. The cost (∼£60) and size (47.5×14×13mm) of the laser source combined with the reported performance makes the system an attractive candidate for portable, low-cost solutions to similar applications.
The evaluation results of Section 4 indicate that we are able to extract meaningful TD-related information. The tissue-equivalent rod phantom experiment shows that the system is capable of detecting small intensity changes, in the order of 0.5dB, and small mean TOF changes around 20-25ps with integration time set equal to 5 seconds. Longer integration times (e.g. 10 seconds) provide more distinct mean TOF changes, however, we appreciate that these values might not be very practical for challenging functional experiments. The results of the rod-phantom experiment demonstrate that a function task detection should be possible with the same integration period and reasonable averaging. Finally, our proof-of-concept in vivo experiment shows much larger intensity and mean TOF changes, due to the bigger change in optical properties of the subject under test. Both evaluation experiments are accompanied by useful phase angle change graphs. For the rod phantom case, frequency bins up to 470MHz can reliably reveal the change in the optical properties of the subject, while in the in vivo experiment frequency bins up to 787MHz can be used to extract information from the sample.
By using, to the best of our knowledge for the first time in NIRS, mature telecommunication transceiver modules, a single channel proof-of-concept system was developed, which exhibits not only performance comparable with conventional pulsed TD NIRS systems but also versatility. The small footprint, low-cost optical transceivers modulated by the FPGA at high line rates can transmit various types of optical sequences with sequence repetition rates that can be easily defined by the user, ranging from few MHz up to hundreds of MHz, depending on the application. The large number of MGT ports existing in standard modern FPGAs allow for the simultaneous modulation of many optical modules with different patterns in a synchronous or asynchronous manner. Switching time between the multiple optical modules can take place almost instantaneously, overcoming the slow switching problems that exist in some conventional TD NIRS instruments.
Work is ongoing to improve the presented system even further. One of the most important improvements would be to enable spectroscopic measurements, therefore, future work will concentrate on the development of custom-made SFP + optical transceiver modules, where VCSELs of different wavelengths will be driven by standard IC drivers, allowing us to control the speed and output power of the optical transceiver. Whilst the transmit side of the electronics is nicely integrated, the receive side still uses multiple pieces of hardware which could be integrated into the FPGA fabric, reducing significantly the system's overall size and cost. More specifically, the TCSPC module can at later stages be substituted by a custom-made, picosecond range time-to-digital converter, implemented entirely on the FPGA platform [34,35], reducing the complexity and the size of the total experimental setup even more (in practice only the FPGA platform and a SPCM will be needed). Finally, the relatively high-cost SPCM could be substituted later by silicon photomultipliers, which have been developed in recent years as an alternative to traditional photomultiplier tubes, allowing the whole setup to be even smaller, without compromising speed or accuracy. The width of our system's IRF is mainly due to the selected SPCM, whose timing properties and limitations are reported at length in [36]. An alternative choice of detector, and the use of graded index optical fibres could allow a substantial reduction in the FWHM of the IRF.
Conclusion
We have developed and characterised a proof-of-concept, single channel TD NIRS system, relying upon the spread spectrum technique. The proposed system utilises for the first time in literature a commercially available, low-cost, optical transceiver module, widely used in telecommunication applications, as a light source controlled by a Kintex 7 FPGA, which modulates the optical transceiver with MLS at 10Gb/s. The preliminary characterisation results of the system as well as the encouraging preliminary tissue-equivalent phantom and in vivo evaluation results demonstrate the potentials of this instrument as an alternative to conventional TD NIRS instruments, once more channels with different wavelengths are included. The specific approach to TD NIRS is still in its infancy, however, the obtained proof-of-concept results combined with the low-cost and small footprint of the instrument allow us to proceed even further with this new TOF resolved technique, not only for biomedical but also for industrial purposes. | 7,537.6 | 2018-05-10T00:00:00.000 | [
"Physics"
] |
CRISPR/Cas9‐mediated genome editing: From basic research to translational medicine
Abstract The recent development of the CRISPR/Cas9 system as an efficient and accessible programmable genome‐editing tool has revolutionized basic science research. CRISPR/Cas9 system‐based technologies have armed researchers with new powerful tools to unveil the impact of genetics on disease development by enabling the creation of precise cellular and animal models of human diseases. The therapeutic potential of these technologies is tremendous, particularly in gene therapy, in which a patient‐specific mutation is genetically corrected in order to treat human diseases that are untreatable with conventional therapies. However, the translation of CRISPR/Cas9 into the clinics will be challenging, since we still need to improve the efficiency, specificity and delivery of this technology. In this review, we focus on several in vitro, in vivo and ex vivo applications of the CRISPR/Cas9 system in human disease‐focused research, explore the potential of this technology in translational medicine and discuss some of the major challenges for its future use in patients.
embryonic stem cells to generate mice with a specific genotype. 6 Since then, this technique has enabled the study of human diseases in mouse and other animal models and contributed considerably in the process of drug discovery and development.
Nevertheless, this approach has several limitations, such as its low editing efficiency and unwanted genome-editing events where the donor DNA template is more frequently inserted into the genome randomly than at the desired location. 7 To overcome these limitations, several groups have developed tools that allowed the introduction of site-specific double-stranded breaks (DSBs) into a genomic locus of interest using 'meganucleases'. This refers to endonucleases with an extremely rare recognition site that recognizes and cleaves specific DNA sequences in order to stimulate homology-directed repair (HDR) mechanism. [8][9][10][11] This approach requires that a DNA donor template with ends homologous to the break site is delivered and used by the polymerase to copy information along the break site. 9,10 However, besides HDR, non-homologous end joining (NHEJ) also occurs at the sites of DSBs. 11 NHEJ is able to unify the two ends of the break by introducing a random nucleotide insertion or deletion (indels). While NHEJ repair mechanism is exceptionally successful in obtaining functional gene knockouts, the generation of indels emerges as an undesired side effect. 12 Therefore, the generation of site-specific DSBs that specifically trigger HDR and simultaneously blunt NHEJ activity is still a current challenge in the field.
Both ZFs and TALENs are fusion proteins made up of an engineered DNA binding domain fused to a non-specific nuclease domain from the FokI restriction enzyme. Unlike DNA-binding proteins, ZF and TALEN amino acid sequences can be designed to cleave virtually any target sequence in the genome with high specificity. [17][18][19][20][21][22] However, the routine use of these editing tools in the laboratory has been impaired by difficulties in protein design, synthesis and validation. 23 The development of the CRISPR/Cas9 system has proven to be a major scientific breakthrough and made gene editing more accessible.
Distinct from the protein-guided DNA cleavage used by TALENs and ZFs, CRISPR/Cas9 depends on a small RNA to introduce a site-specific DSB. [24][25][26] The requirements of the endonuclease Cas9 to match a DNA target sequence are elegant and simple: It only requires a 20-nucleotide 'guideRNA' (sgRNA) that base pairs with the target DNA and the presence of a DNA 'protospacer-adjacent motif' (PAM), a short DNA sequence adjacent to the complementary region that varies according to the bacterial species of the Cas9 protein being used. [23][24][25][26][27][28][29] This two-pronged system in which the sgRNA guides the Cas9 nuclease to target any DNA sequence of interest has replaced the laborious protein design procedure associated with ZFs and TALENs. 1,[24][25][26] The simplicity of CRISPR/Cas9 technology coupled with a unique DNA cleaving mechanism, the ability to target multiple regions, and the existence of different type II CRISPR-Cas system variants, has enabled notable progresses using this cost-effective and user-friendly technology to precise and efficiently modify the genomic DNA of a wide collection of cells and organisms. 23 Although the CRISPR/Cas9 system has been widely adopted as the preferred genetic editing tool for most researches worldwide, the use of this technology in pre-clinical and clinical setting is now bursting with new and exciting studies. In this review, we summarize some of the recent disease-focused studies that have applied the CRISPR/Cas9 system and explore the advantages of this technology as well as discuss the major obstacles involved in translating it to the clinic.
| CRIS PR /C A S9: HIS TORY AND MECHANIS M
In 1987, Ishino et al. 30 Interestingly, the biggest breakthrough came in 2005 when the same group realized that these spacer sequences were from unknown origin. [32][33][34] Together with the observation that many CRISPR-associated (Cas) genes encode proteins with putative nuclease and helicase domains, it was postulated that CRISPR may constitute an adaptive immunity system 33-36 by using RNAs as memory signatures of previous infections. 37 In 2007, Barrangou et al. 38 , using a well-characterized phage-sensitive S thermophilus strain and two bacteriophages, showed experimentally that CRISPR confers adaptive immunity. In 2008, CRISPR RNAs (crRNAs) were shown to serve as guides in a complex with Cas proteins to promote phage resistance. 39 The same year, Marraffini and Sontheimer recognized that CRISPR/Cas system was essentially a programmable restriction enzyme targeting DNA. 40 Interestingly, their paper was the first to explicitly predict that CRISPR might be repurposed for genome editing in heterologous systems. In recent years, work from different groups has been crucial to identify the different components that constitute the recombinant CRISPR/Cas9 system and immense work has been done to demonstrate its functionality in mammalian cells. 1,23,25,27,41,42 CRISPR mechanisms are very diverse but can be mainly classified into two distinct classes, class 1 and class 2, depending on the organization of the effector protein complex. Class 1 comprehend three different types I, III and IV that are further subdivided into 15 subtypes. Distinct from class 1, that is characterized by the presence of a multi-protein effector complex, class 2 is defined by a single-protein effector module. This class is divided into types II, V and VI. 43 The other CRISPR systems have been extensively reviewed elsewhere. 44,45 In CRISPR type II, DNA from viruses or plasmids of previous infections is cut into small pieces and integrated into a CRISPR locus amongst short repetitive sequences (30-40 bp) separated by equally short spacer sequences. The loci are transcribed, and precursor CRISPR RNAs (pre-crRNAs) are then processed to generate small crRNAs. The pre-crRNA processing relies on a trans-activating CRISPR RNA (tracrRNA) that has sequence complementarity to the CRISPR repeat sequence. Upon crRNA:tracrRNA base pairing, which is stabilized by Cas9, endogenous RNAse III cleaves the precursor RNA (pre-crRNA) into mature crRNAs. The latter are used as guide sequences that will lead Cas nucleases to target and cleave invading DNA based on sequence complementarity. Cleavage of the target sequence, also known as a protospacer, triggers a host immune response by destroying the invader's genome. [23][24][25][26][27]29,46 The characteristic that makes the type II CRISPR mechanism unique compared to other CRISPR systems is the fact that only one Cas protein (Cas9) is required for gene silencing. 23,27 During the destruction of target DNA, the two nuclease domains of Cas9, the HNH and RuvC-like nuclease domains, cleave both DNA strands matching the 20-nucleotide target sequence resulting in the formation of double-stranded breaks (DSBs). 25,47 The HNH domain and the RuvC domains cleave the complementary strand and non-complementary strand, respectively. 47 The Cas9 double-stranded endonuclease activity also requires that a short-conserved sequence (2-5 nucleotides), known as protospacer adjacent motif (PAM), is present immediately downstream of the 3´ crRNA. DNA is cleaved three base pairs upstream of the PAM sequence in the complementary DNA strand. In fact, the activity of Cas9 is impaired in the absence of a PAM sequence even if there is complete complementarity by the Cas9-RNA. 48 It is important to note that the Cas9 can cleave the non-complementary DNA strand and generate DSB within 3 bp to 8 bp upstream of the PAM. 25 This can be of relevance when aiming to perform precise gene editing in a therapeutic setting.
The natural occurring type II CRISPR mechanism is a simple three-component system (Cas9 along with the crRNA and tracRNA) that showed promising potential to be adapted for genome editing. 25 This combined version is shown to be as effective as Cas9 programmed with separate tracRNA and crRNA in guiding targeted gene alterations (Figure 1). 25 The CRISPR/Cas9 system is the most simple, effective and versatile system to date, requiring only the design of a customized sgRNA to generate DSBs at almost any DNA target site. For this reason, this editing technology has quickly widespread within the scientific community to manipulate the genome of numerous cell types and organisms ranging from mice and monkeys to primary human T cells, organoid cultures and stem cells, as well as plants, bacteria and fungi. 49
| The first studies: a proof of concept
In January 2013, three independent studies have shown that CRISPR/ Cas9 mechanism could be repurposed to generate DSBs in DNA. By tweaking this naturally occurring mechanism, researchers were able to perform mammalian genome editing using DNA repair systems, including the NHEJ and the less-frequent template HDR. 25,42 NHEJ is the preferred pathway to generate gene knockouts by inducing indels within a coding exon, which might ultimately lead to frameshift mutations and premature stop codons. Alternatively, HDR is used to introduce or alter a specific sequence by using properly designed repair templates ( Figure 1A). 25,42 Cong et al. 42 developed a more precise variant of the CRISPR/Cas9 system by generating a mutant form that only has nickase activity, known as Cas9D10A or Cas9n.
Cas9D10A cuts DNA to generate single-stranded breaks and does not activate NHEJ. Instead, the HDR repair pathway is activated in the presence of a homologous repair template resulting in reduced indel mutations ( Figure 1B
| Application of the CRISPR/Cas9 system in cancer biology
The cancer genetics field is one of the research areas in which
| Application of the CRISPR/Cas9 system in patient-derived primary and induced pluripotent stem cells
Since the discovery by Yamanaka and colleagues that somatic cells could be reprogrammed into a pluripotent state, human induced pluripotent stem cells (iPSCs) have held great promise in several disease models, regenerative medicine, drug discovery and F I G U R E 1 CRISPR/Cas9 genome-editing tools in mammalian cells. (A) Double-stranded DNA breaks (DSBs) are generated by CRISPR/ Cas9 system, which triggers endogenous DNA repair mechanisms resulting in genetic manipulation. Non-homologous end joining (NHEJ) is an error-prone mechanism that is able to disrupt the target gene through the formation of insertions/deletions (indels). Alternatively, homology-directed repair (HDR) could be activated in the presence of a properly designed DNA repair template to alter a DNA sequence at a specific locus. (B) Mutated Cas9 with only nickase activity (Cas9n) makes a site-specific single-stranded nick and does not activate NHEJ. Double-stranded breaks only occur upon delivery of two sgRNAs that can be later repaired by HDR or NHEJ. (C) Nuclease-deficient Cas9 (dCas9) can be fused to different effector domains, which allow for the activation or repression of particular target genes in their native context without creating DSBs development. 63,64 Because CRISPR has shown to be highly efficient at genome editing in iPSCs when compared to alternative systems like TALENs or ZFs, this technology has been commonly used to generate iPSC-based models of human disease. 65,66 There are different approaches to generate isogenic disease models in iPSCs using the CRISPR/Cas9 system. For example, it is possible to generate Cas9-mediated iPSC knockout cell lines via NHEJ that could be used to determine whether a given human mutation is indeed directly responsible for causing the disease or to simply study gene function. [67][68][69][70] As an alternative approach, specific disease-related mutations could be introduced into iPSCs using the CRISPR/Cas9 system and HDR-mediated genome editing to generate in vitro models of human disease. 67,71 A study by Wang et al. 72 have demonstrated how CRISPR could be used to help researchers around the world to decipher the underlying cause of human genetic diseases. In this study, the authors shed new light on the pathophysiology underlying the cardiomyopathy of Barth syndrome (BTHS), a mitochondrial disorder caused by a mutation on the tafazzin (TAZ) gene, by combining tissue engineering with patient-derived and genetically engineered iPSCs. 72 Furthermore, the authors were able to assess the effect of potential therapies for Barth syndrome using these BTHS iPSC-derived cardiomyocytes. This pioneering study lays groundwork to develop 'patient-to-patient' treatment strategies. 72 Finally, as iPSCs have the capacity to differentiate into any cell type, the generation of genetically engineered iPSCs allows the proper study of human genetic variations in a broad array of tissues in cell culture. 1 One of the most exciting CRISPR/Cas9 applications with relevance to human health is gene therapy, in which a patient-specific mutation or mutations are genetically manipulated in order to provide a definitive cure. 1 Different groups have used in their studies, the CRISPR/ Cas9 system to correct human genetic mutations in patient-derived primary cells, including Fanconi anaemia, 73 Duchenne muscular dystrophy (DMD), 74 haemophilia, 75 cystic fibrosis 76 and beta thalassaemia. 77 Additionally, primary immune cells have been edited to knockout the CCR5 or CXCR4 receptor genes using CRISPR/Cas9, resulting in cells resistant to HIV infection. 1,[78][79][80] Together, all these studies highlight the impact that this technology might have in the forthcoming future for the treatment of human genetic disorders. Cas9 version is referred to as CRISPR/Cas9 activation or CRISPRa.
| Application of the CRISPR/Cas9 system in transcriptional regulation
However, in the majority of the cases the dCas9-VP64 system required multiple sgRNAs complementary to the target sequence to achieve strong gene activation. 82,84 A strategy to boost gene expression levels was to couple several transcriptional activation domains to the dCas9/sgRNA complex (eg tripartite activator system [dCas9-VPR], synergistic activation mediator [SAM] or dCas9-SunTag). [85][86][87] These second-generation dCas9-activator fusions proved to exhibit robust transcriptional activation in wide panel of mammalian cell types ( Figure 1C). 88 Furthermore, CRISPRa can be used in genetic screens to unveil molecular targets of novel compounds or to study drug resistance mechanisms in cancer cells. Yang et al. 89 used a genome-scale CRISPRa screen and identified Sall1 as a gene that contributes to reprogramming mouse embryonic fibroblasts into induced pluripotent stem cells.
Conversely, dCas9 has also been utilized in genome-wide experiments for targeted gene transcriptional repression. 1,23 Commonly known as CRISPR interference (CRISPRi), this strategy relies on the fact that dCas9 shows high affinity to target DNA and therefore can be repurposed as a transcriptional repressor by blocking transcriptional elongation, RNA polymerase binding and recruitment of transcription repressors. 81 Moreover, dCas9 can also be fused to the Kruppel-associated box (KRAB) transcriptional repressor for efficient target gene silencing ( Figure 1C). 82,90,91 Overall, these dCas9 versions that allow for the activation (CRISPRa) or repression (CRISPRi) of target genes are powerful tools that can be used for functional genomic studies under different physiological and developmental conditions without creating DSBs.
More recently, an in vivo study on a type I diabetes mouse model repurposed Cas9 to epigenetically induce gene activation and observed a significant improvement on disease phenotypes such as acute kidney injury and muscular dystrophy. 92 This study further supports that a Cas9-mediated epigenetic remodelling of target loci could potentially be used as a powerful therapeutic tool to treat several human diseases.
| Application of the CRISPR/Cas9 system in the rapid generation of animal models
CRISPR/Cas9 technology brought a lot of excitement within the scientific community since it revolutionized how fast researchers are able to make a genetically modified animal models. 60 Previously, the generation of a mouse model was a time-consuming process that comprehended several laborious steps. Initially, an embryonic stem cell had to be edited to introduce the desired mutation and then injected into the mouse blastocyst. Finally, the offspring had to be screened for germline transmission. 93 This process was inefficient, labour-intensive and expensive, which has slowed the generation of genetically engineered animal models. In 2013, the CRISPR/Cas9 system was adapted as an efficient gene-targeting technology to generate mice carrying mutations in multiple genes in a single editing step by zygote injection. 94 A few months later, the same group used the CRISPR/Cas9 system to develop a one-step knock-in procedure to generate mice carrying reporter and conditional alleles. 95 Since then, several studies have shown that injecting CRISPR/Cas9 components (Cas9 messenger RNA or protein; sgRNA; HDR template) into a zygote can lead to efficient gene knockout at multiple loci in several animal species, including mice, 96,97 rat, 97,98 rabbits 99 and monkeys, 100 bypassing targeting in embryonic stem cells. Moreover, the microinjection of zygotes with CRISPR/Cas9 enables researchers to generate additional mutations in pre-existing animal models of diseases without the need for embryonic stem cell derivation or complex genetic crosses. 60 Finally, being that CRISPR/ Cas9 is a novel genome-engineering technology that facilitates multiplexed gene targeting, multiple genes can be targeted simultaneously. Therefore, it is easy to obtain mice with multiple gene knockout without the need for crossing single knockout strains. 60 This is of great interest when the goal is to generate animal models for complex diseases such as cancer. It is important to bear in mind that the majority of the published studies have been performed in murine cancer models that only harbour a low number of mutated genes or alleles. 101,102 Therefore, the CRISPR/Cas9 system provides an alternative to study cancer in models that resemble the genetic heterogeneity of human cancer genomes. It facilitates the generation of genetically engineered mouse models that harbour mutations in multiple genes involved in cancer progression and also allows the induction of chromosomal translocations or other chromosomal rearrangements, characteristic of many human cancers. 50 Altogether, CRISPR/Cas9 promises to revolutionize the generation of genetically modified animal models of disease for translational applications by reducing the cost and the time that is necessary to generate in vivo targeted models. 103 This was one of the first studies that utilized the CRISPR/Cas9 system to efficiently correct a genetic disease. This approach was also used in another study that use the mdx mice, a model of Duchenne muscular dystrophy. It is a rare disorder that is inherited in an X-linked recessive pattern caused by mutations in the gene that encodes for dystrophin, a protein essential for muscle fibre integrity. DMD is characterized by rapid and progressive muscle weakness and a shortened lifespan, and there is no known cure. Mouse zygotes were injected with Cas9 nuclease, sgRNA and a donor template capable of correcting the Dmd gene mutation. This experiment resulted in genetically mosaic progeny ranging 2%-100% gene correction and varying degrees of muscle phenotypic rescue. 104 In 2014, another study demonstrated that using the mouse model of hereditary tyrosinemia type 1 (HT1), CRISPR/Cas9 system could be used to successfully correct a mutation in post-natal animals. 105 They used the Fah59815B mouse model that harbours a homozygous G to A point mutation in the fumarylacetoacetate hydrolase wall. 111 The authors found that co-injecting Cas9 with sperm into M-phase oocytes resulted in 72.4% of embryos showing a homozygous wild-type genotype. 110 Another group was able to extend survival and improve cardiac function after ablating the PLN gene using CRISPR/Cas9 in a transgenic mice overexpressing a model of severe heart failure. 112 The availability of viable and efficient delivery methods represents one of the biggest challenges of translating the CRISPR/ Cas9 into the clinic, as it will be discussed in more detail in the following section. Finn et al. 113 reported the development of a lipid nanoparticle system capable of genetically editing the transthyretin (Ttr) gene in the mouse liver with a single administration. The authors combined the nanoparticle system with CRISPR/Cas9 components to target the Ttr gene and observed a significant reduction in serum protein levels that persisted for at least 12 months. It will be interesting to test whether this approach will be effective and durable in other disease models other than cardiac amyloidosis.
| Application of the CRISPR/Cas9 system in ex vivo gene therapy
The success of ex vivo gene therapy relies on the establishment of optimized protocols for culturing patient-derived primary cells that after genome editing can be transplanted back into the patient. The hematopoietic system is an excellent target for this approach, because target cells can be easily withdrawn from the patient peripheral blood and can be re-injected after editing and expansion. 60 Clinical trials using ZFs as a tool for ex vivo gene therapy are being conducted on patients with several blood disorders, including severecombined immunodeficiency, Fanconi anaemia, Wiskott-Aldrich syndrome and sickle-cell anaemia. 114,115 Recently, a clinical trial has shown that gene editing can be used in humans to test and treat HIV safe and effectively. 116 In this study, ZFs were used to disrupt the C-C motif chemokine receptor 5 (CCR5), the major co-receptor used by HIV strains to infect T cells. The infusion of autologous T cells genetically edited at the CCR5 locus resulted in the partial induction of acquired genetic resistance to HIV infection. 116 This approach is now being tested in phase 1/2 clinical trials. However, genetically manipulated T cells do not self-renew and so this treatment could only be effective for a specific period of time. The disruption of CCR5 in human self-renewing hematopoietic stem cells (HSC) as shown by Holt et al. 117 using ZFs could potentially solve this limitation. A more recent study used the CRISPR/Cas9 gene-editing technology to target the CCR5 gene in human CD34+ hematopoietic stem and progenitor stem cells (HSPCs). HSPCs that were successfully edited via CRISPR/Cas9 technology maintained multi-lineage potential. 118 Another important example of ex vivo CRISPR/Cas9 application is the CAR T cell-mediated immunotherapy discussed in more detail in the 'cancer biology' section above.
The precise selection of genetically modified cells harbouring the correct edited allele without undesirable off-target mutations represents one of the most important aspects of ex vivo gene therapy. As the process of selection is very efficient, and only selected cells will be transferred back into the patient, the accuracy of CRISPR/Cas9 is less critical in ex vivo than in in vivo gene therapy. 119 However, one of the major downsides of ex vivo approaches is that additional genomic alterations can occur during the required cell expansion step in culture. This is of pivotal importance, as the cells used for the gene-editing step are normally stem/progenitor cells susceptible to accumulate mutations and copy number variations during reprogramming and expansion. Accordingly, it will be important to develop assays to measure the integrity and normal functioning of genetically modified stem/progenitor cells before advancing a therapy to the clinics. Nevertheless, despite these challenges, the CRISPR/Cas9 genome-editing tool shows enormous potential for bringing ex vivo gene therapy into the clinic in a near future. It is important to note that unwanted effects of CRISPR/Cas9 such as off-target editing and off-target binding might result in malignant transformation and other unforeseeable consequences. [120][121][122][123][124][125][126][127][128][129] The development of methods that minimize off-target effects of CRISPR/Cas9 approaches has been a major focus of research.
| CRISPR/Cas9-mediated genomeediting applications in translational medicine
One of these strategies requires that two separate Cas9 binding events simultaneously occur at the same locus in order to execute cleavage of DNA. The inactivation of either of the two catalytic residues within Cas9 converts the enzyme into a nickase (Cas9n) which cleaves or 'nicks' a single DNA strand instead of a double strand. 1,130 By generating two distinct sgRNAs, DSBs only take place with simultaneous binding events because separate Cas9ns nick opposite DNA strands. As the probability of two off-target sites being adjacent in the genome is low, this strategy increases the stringency significantly. 130 Alternatively, the dimerizing FokI nuclease domain which is part of the genome-editing tools ZFS and TALLENs can be fused to a nuclease-deficient dCas9 and thereby induce DSBs exclusively upon paired binding. [131][132][133] A recent strategy to limit off-target events is based on sgRNA or protein engineering increasing specificity. 134 More interestingly, Cas13 nuclease has non-specific RNAse activity highlighting its potential as a diagnostics tool. 143,144 Besides cutting the target DNA or RNA, this family of nucleases is able to cleave surrounding single-stranded RNAs (ssRNA). This unique property facilitated the development of diagnostic kits by using ssRNA reporter molecules that fluoresce upon Cas nuclease activity towards a specific disease-guided RNA.
Another major obstacle for the clinical translation of CRISPR/ Cas9 is the limited efficiency of HDR-mediated gene correction.
Factors known to determine this efficiency include cell type, cell state and competition with the NHEJ. As many treatments of human genetic diseases are based on HDR-mediated gene correction, in which a template sequence is delivered to replace the mutated version, major progress in the efficiency of HDR is necessary.
Several efforts have been made to increase HDR efficacy, including the rational design of single-stranded DNA donors. 145 Importantly, the design of the sgRNA is also critical to ensure complementarity to the target sequence and minimize off-target cleavage. It has been described that mismatches at the proximal 5' region, relative to the PAM sequence, are better tolerated than those at the 3' region. 146 Therefore, design of sgRNA should avoid mismatches further away from the PAM as it increases the probability of off-target events. Another strategy is inhibiting the NHEJ pathway 147,148 or increasing the similarity of the donor template and the double-stranded break sites. 149 'Base editing' is a recent method of genome engineering that facilitates direct, irreversible conversion of a specific target DNA base into another through RNA-programmed mechanism, without a dsDNA backbone cleavage or the need of a donor template. 'Base editing' could represent an alternative to HDR-mediated gene correction. 150 Fusion of dCas9 to a cytidine deaminase enzyme, that acts on single-stranded DNA, allows C to U conversion within a window as small as approximately five nucleotides. The fused enzyme is capable of efficiently correcting a range of disease-relevant point mutations. 150 Another group has also developed an adenine base editor that mediates conversion of AT to GC in genomic DNA using a tRNA adenosine deaminase fused with Cas9. 151 Besides its low efficiency, HDR has been considered to be mainly limited to applications in dividing cells. 152 This fact represents an important setback for its broad use in the treatment of human genetic diseases, making it challenging to apply the technique to post-mitotic cells. However, more recently it has been shown that adeno-associated virus (AAV)-mediated delivery of donor template in combination with DNA cleavage by CRISPR/Cas9 allows for precise genome editing through HDR in post-mitotic neurons in mouse brain. 153 Another challenge imposed by the need of correcting specific mutations refers to the mutational variability amongst patients with the same disease. This becomes a big hurdle to overcome when there is the need of designing patient-tailored sgRNAs and DNA donor templates. In particular, customizing CRISPR/Cas9 gene therapy drugs represents a major challenge for effectively scaling production in the future. 154 Virtually, all macromolecular therapies have to solve issues of delivery that often limit their efficacy. 155 Efficient in vivo gene therapy using CRISPR/Cas9 will depend on the efficient and tissue-specific delivery of its components. The majority of in vivo studies report the delivery of therapeutic CRISPR/Cas9 components through viral vectors, especially AAV. [156][157][158] AAV vectors engineered for gene therapy seem particularly promising because they can infect both dividing and non-dividing cells, they do not integrate into the host genome, and they fail to induce a significant host immune response and efficiently transduce a broad range of cell types. 159 However, AAVs have a limited packaging capacity for foreign DNA of ≃4.5 kb. 160 Consequently, it is generally not possible packaging all the CRISPR/ Cas9 components, including the Streptococcus pyogenes Cas9 (spCas9) gene(4.2 Kb), the sgRNA, the donor template as well as associated promoters and regulatory sequences into AAV. 1 Strikingly, a recent study used a significantly smaller Cas9 gene (3.2 Kb) from Staphylococcus aureus (saCas9) allowing for the integration of a Cas9 together with a sgRNA into a single AAV. 129 Alternatively, the genes coding for SpCas9 and its sgRNA can be packaged into separate AAV vectors as demonstrated for in vivo CRISPR/Cas9-mediated genome editing in mouse brains 161 and livers. 162 Host immune responses induced by the delivery of bacterial Cas9 proteins or gene therapy vectors represent another challenge for the translation of CRISPR/Cas9 approaches into the clinic. More recently, a mouse model for non-alcoholic steatohepatitis (NASH), a frequent liver disease in humans characterized by excessive fat build-up in the liver, was generated using spCas9 to delete Pten, a tumour suppressor gene involved in NASH and a repressor of the PI3/AKT pathway.
Surprisingly, this study describes the production of Cas9-specific antibodies and the secretion of IL-2 from splenocytes that had been engineered with Cas9 system targeting the Pten locus. 163 In this study, the Cas9 was delivered by adenoviral vectors, known to trigger an immune response and might have enhanced that outcome. 164 A promising way to avoid the immunogenicity of viral vectors is the use of non-viral vectors including nanoparticle-and lipid-based vectors. 165,166 A possible strategy to limit the immunogenicity of Cas9 peptides is humanizing the Cas9 protein. 154 Accordingly, finding methods that reduce the immunogenicity of the in vivo CRISPR/Cas9-mediated gene editing will be an important focus of upcoming researches.
| Ethical concerns
The burst of CRISPR/Cas9 applications also highlighted the potential of this system and the ethical concerns associated with the possible creation of permanent and inheritable changes in the human genome. Having in mind the ethical implications, legal action was taken to delay germline genome editing. The first report using CRISPR in human embryos was back in 2015 where Liang et al. 167 performed experiments in discarded embryos with an extra pair of chromosomes. Nonetheless, gene correction efficiency was very low and the successful ones showed genetic mosaicism with a low percentage of cells being accurately edited. Another group reported that they had successfully edited three out of six viable human embryos. 168 They have used immature oocytes that had to go through in vitro maturation. More recently, Ma et al. 110 utilized CRISPR in human diploid zygotes to correct a mutation causative of hypertrophic cardiomyopathy, a congenital heart disease, claiming high-efficiency and few side effects. The group of Dr Huang at ShangaiTech University used a new CRISPR method, the recently developed base editing (discussed in the previous section) to correct a single base in the FBN1 gene involved in the Marfan syndrome, a rare autosomal dominant disorder in heterozygous embryos. 169 The embryos were obtained by injecting sperm from a patient with Marfan syndrome into a mature oocyte. The authors showed that 89% of the embryos were efficiently edited, and more importantly, no off-target and indels were detected. It is evident that we are on an accelerated pace towards using CRISPR genomic engineering as a biomedical therapy. But is also urgent that discussions about ethical guidelines within international multidisciplinary groups take place to regulate and minimize the potential risks associated with this powerful tool.
| CON CLUS IONS
The CRISPR/Cas9 RNA-guided DNA endonuclease system is a versatile technology that has rapidly transformed genome editing and basic science research. The development of improved CRISPR/ Cas9 tools with high degree of DNA specificity, increased selectivity and low level of by-products made this technology accessible to researchers worldwide to study human diseases. For example, it is now feasible to generate in vivo animal models of specific diseases in a few weeks. It is now possible to envision the treatment of genetic diseases in the near future using this technology. In fact, several clinical trials using CRISPR/Cas9 approach to treat human genetic diseases are underway (NCT03872479 or NCT03399448). However, we still need to improve the efficiency, specificity and delivery of this technology for its broader application in the clinics. A major concern that accompanies the use of CRISPR/Cas9 in the clinical setting relates to the potential risk of misusing this technology. The development of ethical and regulatory guidelines is critical to ensure that the benefits outweigh and minimize the risks. The discovery of CRISPR/Cas9 technology and its application in the clinics is a true example of the importance of bridging basic research and translational medicine. Once the CRISPR/Cas9 mechanism was unveiled, the possibilities of medical exploitation were enormous and will definitely change the way we will treat genetic disorders in the future.
ACK N OWLED G EM ENTS
We thank Nelma Ferreira, BA, for designing and creating Figure 1.
CO N FLI C T O F I NTE R E S T
The authors declare that they have no conflict of interest.
AUTH O R CO NTR I B UTI O N
FV.J performed research and wrote the manuscript. WL wrote the manuscript. BI.F conceptualized the study, edited, wrote and submitted the manuscript. | 7,656.6 | 2020-02-25T00:00:00.000 | [
"Biology"
] |
Environmental Stability and Its Importance for the Emergence of Darwinian Evolution
The emergence of Darwinian evolution represents a central point in the history of life as we know it. However, it is generally assumed that the environments in which life appeared were hydrothermal environments, with highly variable conditions in terms of pH, temperature or redox levels. Are evolutionary processes favored to appear in such settings, where the target of biological adaptation changes over time? How would the first evolving populations compete with non-evolving populations? Using a numerical model, we explore the effect of environmental variation on the outcome of the competition between evolving and non-evolving populations of protocells. Our study found that, while evolving protocells consistently outcompete non-evolving populations in stable environments, they are outcompeted in variable environments when environmental variations occur on a timescale similar to the average duration of a generation. This is due to the energetic burden represented by adaptation to the wrong environmental conditions. Since the timescale of temperature variation in natural hydrothermal settings overlaps with the average prokaryote generation time, the current work indicates that a solution must have been found by early life to overcome this threshold.
Introduction
Beyond the answers provided by mythology and religion, the question of how life originated on Earth has fascinated scientists for well over a century [1].Since then, scientists have defined cells as the basic unit of life on Earth.Thus, questioning the origins of life has mainly meant investigating how the first cells came to be.Because cells are understood as being an integrated network of functional units (a compartment, a genome, and a metabolism), the main research axes explored in the origins of life field concern the origins of these different units: the first genomic material [2][3][4], the emergence of metabolism [5][6][7][8][9][10][11], and the first compartments [12][13][14][15][16][17][18][19][20].
The emergence of evolutionary processes is also a critical prerequisite for cellular life to have reached the complexity and diversity it shows today.Several hypotheses have been put forward on the origin of evolution, and while the appearance of nucleic acids is critical, evolution may have occurred before the genome existed [21][22][23][24][25].
Yet, when hypothesizing about the emergence and evolution of life on Earth, it must be remembered that not the entire space of possibility is relevant.Rather, a proposed scenario must remain consistent with the geological environment of its time [26], as inferred on the basis of modern environments and Early Earth paleoenvironmental records.
Presentation of the Model
In the following, parameters written in italics change during the course of the simulation, and parameters written in bold are constant for the duration of the simulation.The numerical model describes a population of N protocells inside a compartment containing five different types of molecule: food molecules (concentration F), and growth catalyst molecules A 1 , . .., A 4 (constant concentrations in the compartment A X,C ).This compartment could represent pores in hydrothermal chimneys, or pools in terrestrial fields of hot springs.Neither protocells nor molecules A 1 , . .., A 4 are able to enter or exit the compartment, but there is a regular input of food molecules at the rate Fi.The conditions in the compartment can also change.The environmental parameter P (which represents temperature, pH, redox level, etc.) varies following a sinusoidal function of period Pt and amplitude Pa.Protocells ingest food, to achieve their own maintenance and growth.Once the volume of a protocell reaches a threshold value (equal to double their initial volume), it divides into two daughter cells.
It is assumed that food enters the cells through diffusion, following Fick's law [37].Since all food that enters the cells is consumed immediately, the amount of food entering a cell during an infinitesimal amount of time dt is equal to: df = Df•V (2/3) •dt, where V is the cell volume and Df is the diffusion constant.
An important assumption in this model is that the dependence on the environment is mediated by the growth catalyst molecules A1, . .., A4.Depending on the value of the environmental parameter P, different molecules are active for growth catalysis (see Figure 1).This aspect simulates the effect of environmental parameters, such as temperature or pH, on the speciation of critical biomolecules.
Catalyst molecules are exchanged between the cell and the compartment through diffusion, but they can also be actively uptaken by cells (e.g., through energy consumption).This active uptake can be represented by protein membrane pumps.The change in concentration in the cell of molecule A x , A X , during an infinitesimal amount of time is therefore: where D is the constant of diffusion for catalyst molecules, A X,C is the concentration of A x in the compartment, and Ux is the active uptake rate of Ax.The food required for maintenance and for the active uptake of molecules A 1 , . .., A 4 during dt is: where fm and fu are two constants.
Any remaining food is used for the growth of the protocell; the volume change dV during dt is therefore: where the growth rate Gr is a constant and A i is the concentration in the growth catalyst that is active under the current value of P.
The concentrations of the molecules A X in the protocells are constantly adjusted in response to changes in protocell volume V, in accordance with the following equation: dA X /A X = V/dV Protocells may die with a probability that increases when the food 'debt' dfm − df increases.This represents the necessity for cells to spend energy to maintain their structure.The probability of cell death as a function of dfm − df follows a sigmoidal law (see Figure S1), with a threshold for dfm − df = 0, according to: where L is a constant.
When a protocell divides, the daughter cells have the same concentration of A X and a volume corresponding to half of the volume of the mother cell.For non-evolving protocells, the rates of active uptake U x stay the same between the mother and daughter cells.For evolving protocells, the rates of active uptake U x change stochastically in the new generation.The distribution of changes in U x (∆U x ) between generations follows a normal law: where S is a constant; higher values of S translate into faster changes in U x between generations.
Life 2023, 13, x FOR PEER REVIEW 3 o Catalyst molecules are exchanged between the cell and the compartment throu diffusion, but they can also be actively uptaken by cells (e.g., through energy consum tion).This active uptake can be represented by protein membrane pumps.The change concentration in the cell of molecule Ax, AX, during an infinitesimal amount of time therefore: where D is the constant of diffusion for catalyst molecules, AX,C is the concentration of in the compartment, and Ux is the active uptake rate of Ax.
Reference Population
First, we describe how a population of non-evolving protocells (also referred to as 'Type 1') behaves under the reference conditions defined below: F0 = 0.5 and N0 = 100 are the initial food concentration and initial number of protocells in the compartment, respectively.Fi = 0.2 is the rate of food input in the compartment.A X,C = 100 refers to the concentration of catalyst molecules in the compartment and the protocell.
Under these reference conditions, since the cells do not evolve, the environmental parameter P does not influence the concentrations A 1 , . .., A 4 .The outcome of the model is therefore unaffected by P.
In Figure 2, it can be seen that the number of cells (N) and the amount of food in the compartment (F) vary over time under these reference conditions for six different iterations.Variation between individual runs under these conditions is solely due to the randomness of protocell death (see the probability law for death described in Section 2).
Reference Population
First, we describe how a population of non-evolving protocells (also referred to 'Type 1') behaves under the reference conditions defined below: F0 = 0.5 and N0 = 100 are the initial food concentration and initial number of p tocells in the compartment, respectively.Fi = 0.2 is the rate of food input in the compartment.AX,C = 100 refers to the concentration of catalyst molecules in the compartment a the protocell.
Under these reference conditions, since the cells do not evolve, the environmen parameter P does not influence the concentrations A1, …, A4.The outcome of the mo is therefore unaffected by P.
In Figure 2, it can be seen that the number of cells (N) and the amount of food in compartment (F) vary over time under these reference conditions for six different ite tions.Variation between individual runs under these conditions is solely due to the r domness of protocell death (see the probability law for death described in Section 2).It can be seen that, under these conditions, the system evolves towards a dynamic equilibrium, characterized by: (1) A common period of oscillations for N and F of approximately 10 timesteps, which can also be considered an average generation time; (2) A number of protocells in the compartment oscillating between 40 and 80; and (3) An amount of food in the compartment oscillating between 0.2 and 0.5.
The average number of protocells reached under equilibrium-which can be named the 'carrying capacity' of the compartment-is not dependent on the initial amount of food F0 (Figure 3A, curve color) or on the initial number of cells N0 (Figure 3B, curve color) in the compartment, but depends on the rate of food input in the compartment Fi (Figure 3, curve style).
can also be considered an average generation time; (2) A number of protocells in the compartment oscillating between 40 and 80; and (3) An amount of food in the compartment oscillating between 0.2 and 0.5.
The average number of protocells reached under equilibrium-which can be named the 'carrying capacity' of the compartment-is not dependent on the initial amount of food F0 (Figure 3A, curve color) or on the initial number of cells N0 (Figure 3B, curve color) in the compartment, but depends on the rate of food input in the compartment Fi (Figure 3, curve style).
Influence of Protocell Constants
We then explore how the different constants that are intrinsic to protocells, Df, fm, Gr and L, influence the protocell populations.
It can be observed that for low Df values (Figure 4A, Df < 0.00125), the extremely slow diffusion of food in the protocells prevents their maintenance (see purple curve).For higher values, and up to Df = 0.01, the value of Df does not influence the population dynamics-they reach the same dynamic equilibrium.However, for values beyond Df = 0.01, the number of protocells in the compartment appears to grow exponentially.
Decreasing the food maintenance factor fm has a marked influence on population dynamics (Figure 4B).Lower values of fm translate into a lower probability of death and more food available for growth.For values of fm less than 0.00025, the number of protocells in the compartment increases exponentially.On the other hand, for fm = 0.016, the maintenance cost is too high, and the population dies out.
An increase in the growth factor Gr (Figure 4C) primarily modifies the period of oscillations, or generation time, of the system.With increasing Gr, growth is accelerated, and the period of oscillations decreases.Once Gr reaches a threshold value, the dynamic equilibrium is altered, and an exponential increase in the number of protocells is observed.
When the life factor L increases (Figure 4D), the probability of death of protocells decreases.For L values equal to 4000 or higher, the probability of death is very low, and the number of protocells increases exponentially.
Due to obvious resource limitations, the exponential growth of a population in a closed compartment cannot be realistically sustained over extended periods of time.
Influence of Protocell Constants
We then explore how the different constants that are intrinsic to protocells, Df, fm, Gr and L, influence the protocell populations.
It can be observed that for low Df values (Figure 4A, Df < 0.00125), the extremely slow diffusion of food in the protocells prevents their maintenance (see purple curve).For higher values, and up to Df = 0.01, the value of Df does not influence the population dynamics-they reach the same dynamic equilibrium.However, for values beyond Df = 0.01, the number of protocells in the compartment appears to grow exponentially.
Competition between Evolving and Non-Evolving Population-Stable Environment
The parameters used in the following are the reference parameters listed in Section 3.1.A second population of protocells-the evolving population, also referred to as 'Type 2'-is added to the compartment.For this population, the active uptake rates U1, Decreasing the food maintenance factor fm has a marked influence on population dynamics (Figure 4B).Lower values of fm translate into a lower probability of death and more food available for growth.For values of fm less than 0.00025, the number of protocells in the compartment increases exponentially.On the other hand, for fm = 0.016, the maintenance cost is too high, and the population dies out.
An increase in the growth factor Gr (Figure 4C) primarily modifies the period of oscillations, or generation time, of the system.With increasing Gr, growth is accelerated, and the period of oscillations decreases.Once Gr reaches a threshold value, the dynamic equilibrium is altered, and an exponential increase in the number of protocells is observed.
When the life factor L increases (Figure 4D), the probability of death of protocells decreases.For L values equal to 4000 or higher, the probability of death is very low, and the number of protocells increases exponentially.
Due to obvious resource limitations, the exponential growth of a population in a closed compartment cannot be realistically sustained over extended periods of time.Consequently, when modeling protocell populations in hydrothermal compartments-whether those compartments represent chimney pores or larger hot spring pools-the results shown in Figure 4 indicate that upper boundaries of 0.1, 0.001, 0.02 and 1000 must be adopted for Df, fm, Gr and L, respectively.
Competition between Evolving and Non-Evolving Population-Stable Environment
The parameters used in the following are the reference parameters listed in Section 3.1.A second population of protocells-the evolving population, also referred to as 'Type 2'-is added to the compartment.For this population, the active uptake rates U 1 , . .., U 4 of the four catalyst molecules A 1 , . .., A 4 vary stochastically between generations.The average change in U x between two timesteps is S, in parallel to a non-genomic evolution rate.Figure 5 shows how the two populations, evolving and non-evolving, compete with each other for different evolution rates S of the Type 2 population.For low values of S (S = 1), the outcome of competition is highly variable.On average, evolving populations tend to outcompete non-evolving ones, but the standard deviations for both curves strongly overlap.
For S values up to 8, increasing S is favorable to evolving populations (green curves), which more consistently outcompete the non-evolving ones.In this stable environment, the same catalyst molecule Ai is active for the entire simulation.Type 2 pro- For low values of S (S = 1), the outcome of competition is highly variable.On average, evolving populations tend to outcompete non-evolving ones, but the standard deviations for both curves strongly overlap.
For S values up to 8, increasing S is favorable to evolving populations (green curves), which more consistently outcompete the non-evolving ones.In this stable environment, the same catalyst molecule A i is active for the entire simulation.Type 2 protocells that randomly see an increase in the active intake for this molecule, U i , grow and divide faster than other protocells: they are positively selected.This can be observed in Figure 6: the molecule A 3 is the active catalyst under these conditions, and the average U 3 (green line) in Type 2 protocells increases throughout the duration of the run.In general, the Type 2 population therefore adapts to this environment, leading to it outcompeting the non-evolving Type 1 population.For low values of S (S = 1), the outcome of competition is highly variable.On ave age, evolving populations tend to outcompete non-evolving ones, but the standard dev ations for both curves strongly overlap.
For S values up to 8, increasing S is favorable to evolving populations (gre curves), which more consistently outcompete the non-evolving ones.In this stable env ronment, the same catalyst molecule Ai is active for the entire simulation.Type 2 pr tocells that randomly see an increase in the active intake for this molecule, Ui, grow an divide faster than other protocells: they are positively selected.This can be observed Figure 6: the molecule A3 is the active catalyst under these conditions, and the avera U3 (green line) in Type 2 protocells increases throughout the duration of the run.In ge eral, the Type 2 population therefore adapts to this environment, leading to it outcom peting the non-evolving Type 1 population.Figure 6.Evolution of the average of the four different U x in evolving (Type 2) protocells in an example run.The environmental parameter is constant at P = 100, corresponding to molecule A 3 being active (see Figure 1).The correspondence between the curve color and U x is given on the graph legend.The average values of U x , calculated over all Type 2 protocells in the simulation, are represented.
However, beyond S = 8, there is a reversal in this trend, and the outcome of the competition varies between runs, even reversing in favor of the Type 1 population for S = 16 and higher (Figure 5F-H).This is because, with high evolution rates, although an increase in U x in one generation will lead to a higher growth rate, this increase is likely to be compensated by a decrease in the following generation.As a consequence, U x and the growth rate are very weakly correlated across generations.This is a phenomenon akin to the error threshold described in evolutionary biology [38], and prevents the adaptation of the Type 2 protocell population to its environment.
Competition between Evolving and Non-Evolving Populations-Changing Environment
In the following, the reference values used in Section 3.1 are employed, but sinusoidal temporal variations in the environmental parameter P (see Figure 1) are introduced.We studied how these P variations influence the outcome of the competition between nonevolving and evolving protocells.In order to discard the known effect of the error threshold, a maximum S value of 12 was used in these simulations.
Three example runs with Pa = 75, S = 8 and three values of Pt-100, 10 and 800-are shown Figure 7. it can be clearly observed that, in contrast to stable environments, evolving protocells do not always outcompete non-evolving protocells in changing environments (Figure 7A,B).This is in accordance with the initial hypothesis that environmental variations can be detrimental to evolving protocells.During a period in which the molecule Ai is the active catalyst, evolving protocells with higher Ui-that is, higher active intake for the molecule Ai-are selected (see Figure 5).However, as P oscillates and the active catalyst changes to molecule Aj (Figure S1), this higher Ui becomes a nonfunctional energetic burden.
Second, the frequency of environmental variations also appears to influence the system, with evolving populations that are more favored when fluctuations are very fast or very slow (Figure 7E-H).
In order to more systematically assess the effect of environmental oscillations, different combinations of values for the evolution rate S (1 to 12), the period of environmental oscillations Pt (10 to 800) and their amplitude Pa (25 or 75) were explored.For First, it can be clearly observed that, in contrast to stable environments, evolving protocells do not always outcompete non-evolving protocells in changing environments (Figure 7A,B).This is in accordance with the initial hypothesis that environmental variations can be detrimental to evolving protocells.During a period in which the molecule A i is the active catalyst, evolving protocells with higher U i -that is, higher active intake for the molecule A i -are selected (see Figure 5).However, as P oscillates and the active catalyst changes to molecule A j (Figure S1), this higher U i becomes a nonfunctional energetic burden.
Second, the frequency of environmental variations also appears to influence the system, with evolving populations that are more favored when fluctuations are very fast or very slow (Figure 7E-H).
In order to more systematically assess the effect of environmental oscillations, different combinations of values for the evolution rate S (1 to 12), the period of environmental oscillations Pt (10 to 800) and their amplitude Pa (25 or 75) were explored.For each Life 2023, 13,1960 9 of 15 combination of values, 20 simulation iterations of 400 timesteps were conducted.The results are shown in Figure 8. N1 and N2 respectively refer to the number of non-evolving (Type 1) and evolving (Type 2) protocells at the end of the simulations.Here, the standard deviation is important, and highlights that although general trends are clear, there can be large differences between individual runs.The possibility of evolving protocells being outcompeted in changing environments is confirmed by the negative N2-N1 values observed in Figure 8.Additionally, as the amplitude of P oscillations (Pa) increases, it can be observed that evolving protocells generally become less competitive (the brown curve is significantly lower than the blue curve).
In accordance with the runs presented in Figure 7, the rate of environmental change is found to have a nonlinear influence on the system.Increasing Pt from 10 to ~100 negatively impacts Type 2 protocells.However, a further increase in Pt from ~100 to 800 favors Type 2 protocells.This effect appears even more significant at high mutation rates S and/or with greater environmental changes (higher Pa).The possibility of evolving protocells being outcompeted in changing environments is confirmed by the negative N2-N1 values observed in Figure 8.Additionally, as the amplitude of P oscillations (Pa) increases, it can be observed that evolving protocells generally become less competitive (the brown curve is significantly lower than the blue curve).
In accordance with the runs presented in Figure 7, the rate of environmental change is found to have a nonlinear influence on the system.Increasing Pt from 10 to ~100 negatively impacts Type 2 protocells.However, a further increase in Pt from ~100 to 800 favors Type 2 protocells.This effect appears even more significant at high mutation rates S and/or with greater environmental changes (higher Pa).This effect of Pt can be understood in light of the average doubling time of protocells, which is roughly 10-15 timesteps with the parameters used here (Figures 2, 3, 5 and 8).
For short periods (P T ~10), the transition between active catalysts occurs more often than protocell division, so selection does not occur under the same environmental conditions over several generations, limiting the development of energetic burdens.
For intermediate periods (Pt ~100), selection occurs with the same active catalyst over a few generations.However, the frequent change in the active catalyst leads to frequent energetic burdens.
For longer periods (Pt ~800), selection occurs under the same active catalyst for a large number of generations.In Figure 7I, the successive selection of U 3 , U 4 , and U 3 again can be clearly seen.Consequently, the energetic burdens are larger (note the significant drop in Type 2 cells at the timesteps 100 and 300), but also less frequent.
Both the degree and frequency of energetic burdens thus play a role in the long-term competitive abilities of evolving protocells.
Comparison of the Model with Hydrothermal Environments and Single-Celled Organisms
One important result is that the effect of environmental variations depends on their time scale relative to protocell generation period (Figure 8).The current model indicates that environmental variations that are shorter or much longer than the generation period do not negatively affect evolving protocells.However, environmental variations that are only slightly longer than the generation time disfavor evolving protocells.In the following, we compare timescales of (1) environmental variations in modern hydrothermal environments and of (2) generation or doubling times in prokaryote cells.
The existence of spatiotemporal variations in parameters such as temperature, pH and redox levels in hydrothermal environments are well known, but, partly due to technical difficulties, few studies have reported time series.Nonetheless, precise temperature measurements from several marine hydrothermal vents ( [33] and references therein) indicate short-term temperature variations on the order of 0.1 K/min, with longer-term variations on the order of 30 K/day.Time series from the siliceous hot spring of El Tatio, Chile [35] and from Fox Glacier, Uruni and Hanmer Springs, New Zealand [33] indicate daily variations on the order of 10 K, partly due to daily fluctuations in air temperature.Volcanic fumaroles from the La Soufriere volcano were found to have several orders of temperature variations, with daily variations of 1-2 K and monthly variations of 10 K [39].
On the other hand, laboratory cultivation experiments indicate typical generation times for prokaryotes ranging from a few minutes to several days [40][41][42][43].Dormant or vegetative cells, in the deep biosphere, for example, probably present much longer doubling times on the order of a year [44].The generation timescale of prokaryotes and the environmental variation timescales therefore significantly overlap (Figure 9).Note that protocell division has also been achieved in the laboratory, with a combined time for growth and division ranging from ~30 min [45] to a few days [46], which are similar durations to prokaryote generation time.
Assuming that these timescales are similar to those existing when early life evolved, this overlap indicates that environmental variation had a probable influence on the survival and the competitive abilities of the first evolving protocells.
The current study also indicates that evolution rate is an important factor for the outcome of the model.In stable environments, increasing S from 1 to 8 leads to evolving protocells outcompeting non-evolving protocells in a shorter time (Figure 5).A further increase leads to error thresholds, disfavoring evolving protocells, which are outcompeted for S values of 16 and above.In a changing environment, increasing S from 1 to 5 favors evolving protocells, but a further increase is detrimental to them (Figure 8). 1) the timescales of environmental variation in hydrothermal environments-where temperature variations of more than 10 K are considered-against (2) the average timescales of generation/doubling times in prokaryote populations [33][34][35][40][41][42]44].The asterisk (*) represents the uncertainty of the position of the upper boundary for the generation times, which is little known for vegetative life (e.g., in the Deep Biosphere).
Assuming that these timescales are similar to those existing when early life evolved, this overlap indicates that environmental variation had a probable influence on the survival and the competitive abilities of the first evolving protocells.
The current study also indicates that evolution rate is an important factor for the outcome of the model.In stable environments, increasing S from 1 to 8 leads to evolving protocells outcompeting non-evolving protocells in a shorter time (Figure 5).A further increase leads to error thresholds, disfavoring evolving protocells, which are outcompeted for S values of 16 and above.In a changing environment, increasing S from 1 to 5 favors evolving protocells, but a further increase is detrimental to them (Figure 8).
In modern prokaryotic cells, mutations occur with a frequency of 1/10 9 base pairs per generation in DNA [47], and with a frequency of around 1/10 5 bases during RNAm transcription [48,49].However, the evolution rate used in the current model is a phenotypic rate, which describes the average change of active uptake Ux.It is very difficult to compare those phenotypic rates with mutation rates, since the effect of genome changes on phenotype is not linear and highly variable depending on the phenotypic parameter.Relative to the reference values of Ux of 100, S rates of 1-12 correspond to changes of 1-12% per generation, which seems high.However, homeostatic processes in protocells were probably much simpler than they are today, potentially leading to higher phenotypic plasticity.
The current study indicates that heritable changes in the first evolving protocells could not have exceeded the error threshold (S ~ 16 here), since this would have (i) prevented any adaptation to the environment and (ii) led to evolving populations being systematically outcompeted by non-evolving populations (Figure 5).Alternatively, with low evolution rates (S = 1 here), the outcome of competition between evolving and nonevolving populations is variable in both stable (Figure 5) and variable environments (Figure 8).Two alternative scenarios can therefore be proposed: either (1) the first evolving protocells had intermediary evolution rates, corresponding to S ~ 2-12, or (2) the first evolving protocells evolved slowly, with S values of 1 or lower, but competition with non-evolving protocells occurred several times, maybe in several places, leading evolving protocells to outcompete in at least one occurrence.
Limitations of the Model
This model is a strong simplification of the biology of protocells.First, it is assumed that evolution relies on the selection of stochastic changes in the active uptake of different catalyst molecules.In line with previous studies [21][22][23][24][25], we consider it unlikely that the first evolutionary processes were based on genomes.However, this is only one rep- In modern prokaryotic cells, mutations occur with a frequency of 1/10 9 base pairs per generation in DNA [47], and with a frequency of around 1/10 5 bases during RNA m transcription [48,49].However, the evolution rate used in the current model is a phenotypic rate, which describes the average change of active uptake U x .It is very difficult to compare those phenotypic rates with mutation rates, since the effect of genome changes on phenotype is not linear and highly variable depending on the phenotypic parameter.Relative to the reference values of U x of 100, S rates of 1-12 correspond to changes of 1-12% per generation, which seems high.However, homeostatic processes in protocells were probably much simpler than they are today, potentially leading to higher phenotypic plasticity.
The current study indicates that heritable changes in the first evolving protocells could not have exceeded the error threshold (S ~16 here), since this would have (i) prevented any adaptation to the environment and (ii) led to evolving populations being systematically outcompeted by non-evolving populations (Figure 5).Alternatively, with low evolution rates (S = 1 here), the outcome of competition between evolving and non-evolving populations is variable in both stable (Figure 5) and variable environments (Figure 8).Two alternative scenarios can therefore be proposed: either (1) the first evolving protocells had intermediary evolution rates, corresponding to S ~2-12, or (2) the first evolving protocells evolved slowly, with S values of 1 or lower, but competition with non-evolving protocells occurred several times, maybe in several places, leading evolving protocells to outcompete in at least one occurrence.
Limitations of the Model
This model is a strong simplification of the biology of protocells.First, it is assumed that evolution relies on the selection of stochastic changes in the active uptake of different catalyst molecules.In line with previous studies [21][22][23][24][25], we consider it unlikely that the first evolutionary processes were based on genomes.However, this is only one representation, and early evolution may have been vastly different from what is modeled here e.g., [24].
It is also assumed that the environmental parameter P influences protocells by controlling the nature of the active catalyst molecule.This type of environmental influence is consistent with observations in modern environments, with temperature, pH or redox levels affecting the speciation of critical biomolecules.However, the environment affects biochemistry and metabolism in a much more complex way.In today's cells, for example, temperature or pH are known to modify the conformation of proteins, often leading to their inactivation.We are limited in this modeling work by contemporary knowledge of early life and of the biochemical functioning of protocells. in this model, catalyst uptake consumes protocells' energy, meaning that changes in the nature of the active catalyst can lead to energetic burdens for evolving protocells.Such a phenomenon is unlikely in modern cells, where the characteristics of cytoplasm are regulated through homeostasis.However, in protocells, homeostatic processes were likely less developed, and the active uptake represents the (risky) energetic investment required for adaptation.
In order to give further breadth to these results, the model could be improved to test (1) alternative modalities of evolution, (2) alternative modalities of coupling between the environment and protocells, and (3) alternative modalities representing the energetic investment of adaptation.
Conclusions
The emergence of the first evolutionary processes on Early Earth remains a fundamental step in the development of life as we know it.The current study sheds some critical light on the conditions necessary for the first evolving protocells to survive over time in their environment.
In this study, using a numerical model, we assessed the influence of various factors on the outcome of the competition between the first evolving protocells and non-evolving protocells.
It was found that, through adaptation, in stable environments, evolving protocells with small to moderate evolution rates can consistently outcompete non-evolving protocells in a few generations.However, very high rates of evolution prevent adaptation and lead to the demise of evolving protocells.
With this model, we also confirm the hypothesis that in environments with fluctuating conditions, such as hydrothermal environments, evolving protocells can be outcompeted by non-evolving ones.This is because adaptation to certain conditions requires an energetic investment that becomes a burden if conditions change.This phenomenon is amplified when the environmental changes are greater (which is modeled here by the transition between a larger number of catalyst molecules).
The period of environmental change is also critical, since evolving protocells are only negatively affected by changes occur on timescales greater than one and shorter than tens of timescales that actually also correspond to the timescales of temperature variations in modern hydrothermal environments.
Lastly, during this study we noted large variations between individual runs (see error bars in Figure 8).Consequently, even if evolving populations are outcompeted on average, where evolutionary processes appear a large number of times and/or in many places, they may survive in some instances.
Overall, this study emphasizes the need to consider more thoroughly in future studies how early protocells interacted with their complex biotic and abiotic environment on the origin of life.
Figure 1 .
Figure 1.Principle of environmental dependency.Parameter P varies following a sinusoidal tre The different growth catalyst molecules are each active for a specific range of values of P (shad areas).As a consequence, over time, there are transitions in the nature of the active growth catal molecule (red dashed line).
Figure 1 .
Figure 1.Principle of environmental dependency.Parameter P varies following a sinusoidal trend.The different growth catalyst molecules are each active for a specific range of values of P (shaded areas).As a consequence, over time, there are transitions in the nature of the active growth catalyst molecule (red dashed line).
Figure 2 .
Figure 2. Evolution of (A) the number of non-evolving protocells (N, green curve) and of (B) food concentration in the compartment (F, blue curve) over the course of the simulation under reference conditions and for 6 different iterations.
Figure 2 .
Figure 2. Evolution of (A) the number of non-evolving protocells (N, green curve) and of (B) the food concentration in the compartment (F, blue curve) over the course of the simulation under the reference conditions and for 6 different iterations.
Figure 3 .
Figure 3. Evolution of the number of non-evolving protocells over the course of the simulation under the reference conditions.(A) Comparison of different combinations of food input rates Fi (curve style) and initial food amounts F0 (curve color).(B) Comparison of different combinations of food input rates Fi (curve style) and initial cell numbers N0 (curve color).
Figure 3 .
Figure 3. Evolution of the number of non-evolving protocells over the course of the simulation under the reference conditions.(A) Comparison of different combinations of food input rates Fi (curve style) and initial food amounts F0 (curve color).(B) Comparison of different combinations of food input rates Fi (curve style) and initial cell numbers N0 (curve color).
Life 2023, 13, x FOR PEER REVIEW 6 of 16Consequently, when modeling protocell populations in hydrothermal compartmentswhether those compartments represent chimney pores or larger hot spring pools-the results shown in Figure4indicate that upper boundaries of 0.1, 0.001, 0.02 and 1000 must be adopted for Df, fm, Gr and L, respectively.
Figure 4 .
Figure 4. Evolution of the number of non-evolving protocells (vertical axis, log-scaled) over the course of the simulation under different values of (A) food diffusion factor Df, (B) factor of maintenance fm, (C) volume growth factor Gr, and (D) life factor L.
Figure 4 .
Figure 4. Evolution of the number of non-evolving protocells (vertical axis, log-scaled) over the course of the simulation under different values of (A) food diffusion factor Df, (B) factor of maintenance fm, (C) volume growth factor Gr, and (D) life factor L.
Life 2023, 13, x FOR PEER REVIEW 7 of 16
Figure 5 .
Figure 5. Number of non-evolving protocells (black curve) and number of evolving protocells (green curve) plotted over the course of the simulation.From top to bottom and left to right, the eight graphs present increasingly high values for the evolution rate S: (A) S = 1, (B) S = 2, (C) S = 5, (D) S = 8, (E) S = 12, (F) S = 16, (G) S = 25, (H) S = 50.The average and standard deviation, calculated over 15 iterations, are presented.
Figure 5 .
Figure 5. Number of non-evolving protocells (black curve) and number of evolving protocells (green curve) plotted over the course of the simulation.From top to bottom and left to right, the eight graphs present increasingly high values for the evolution rate S: (A) S = 1, (B) S = 2, (C) S = 5, (D) S = 8, (E) S = 12, (F) S = 16, (G) S = 25, (H) S = 50.The average and standard deviation, calculated over 15 iterations, are presented.
(
green curve) plotted over the course of the simulation.From top to bottom and left to right, t eight graphs present increasingly high values for the evolution rate S: (A) S = 1, (B) S = 2, (C) S = (D) S = 8, (E) S = 12, (F) S = 16, (G) S = 25, (H) S = 50.The average and standard deviation, calcul ed over 15 iterations, are presented.
Figure 7 .
Figure 7. Example runs with a Pa of 75 and S of 8 illustrating the effect of Pt on selection and on competition.Three different values of Pt (Pt = 100, Pt = 10 and Pt = 800) are shown on the three rows.(A,D,G) P variations.(B,E,H) Changes in the numbers of non-evolving protocells (black curve) and evolving protocells (green curve) over the course of the simulation.(C,F,I) Change in the average active intake for the four molecules A1, …, A4 in Type 2 protocells over the course of the simulation.
Figure 7 .
Figure 7. Example runs with a Pa of 75 and S of 8 illustrating the effect of Pt on selection and on competition.Three different values of Pt (Pt = 100, Pt = 10 and Pt = 800) are shown on the three rows.(A,D,G) P variations.(B,E,H) Changes in the numbers of non-evolving protocells (black curve) and evolving protocells (green curve) over the course of the simulation.(C,F,I) Change in the average active intake for the four molecules A 1 , . .., A 4 in Type 2 protocells over the course of the simulation.
Life 2023 ,
13, x FOR PEER REVIEW 10 of 16 each combination of values, 20 simulation iterations of 400 timesteps were conducted.The results are shown in Figure8.N1 and N2 respectively refer to the number of nonevolving (Type 1) and evolving (Type 2) protocells at the end of the simulations.Here, the standard deviation is important, and highlights that although general trends are clear, there can be large differences between individual runs.
Figure 8 .
Figure 8. Influence of Pa, Pt and S on the competition between non-evolving (Type 1) and evolving (Type 2) protocells.From top to bottom and left to right, the four graphs present increasingly high values for the evolution rate S: (A) S = 1, (B) S = 5, (C) S = 8, (D) S = 12.The differences in numbers of both types of protocell at the end the run (N2-N1) are shown on the vertical axis (the average and standard deviation, calculated over 20 iterations of 400 timesteps, are presented).As a consequence, positive values mean that evolving protocells outcompete non-evolving protocells, while negative values mean that the former are outcompeted by the latter.The horizontal axis represents the period of P oscillations, Pt, and the colors of the curves correspond to two different amplitudes of P (Pa = 25 in blue, and Pa = 75 in red).
Figure 8 .
Figure 8. Influence of Pa, Pt and S on the competition between non-evolving (Type 1) and evolving (Type 2) protocells.From top to bottom and left to right, the four graphs present increasingly high values for the evolution rate S: (A) S = 1, (B) S = 5, (C) S = 8, (D) S = 12.The differences in numbers of both types of protocell at the end the run (N2-N1) are shown on the vertical axis (the average and standard deviation, calculated over 20 iterations of 400 timesteps, are presented).As a consequence, positive values mean that evolving protocells outcompete non-evolving protocells, while negative values mean that the former are outcompeted by the latter.The horizontal axis represents the period of P oscillations, Pt, and the colors of the curves correspond to two different amplitudes of P (Pa = 25 in blue, and Pa = 75 in red).
Figure 9 .
Figure 9.Comparison of (1) the timescales of environmental variation in hydrothermal environments-where temperature variations of more than 10 K are considered-against (2) the average timescales of generation/doubling times in prokaryote populations [33-35,40-42,44].The asterisk (*) represents the uncertainty of the position of the upper boundary for the generation times, which is little known for vegetative life (e.g., in the Deep Biosphere).
Figure 9 .
Figure 9.Comparison of (1) the timescales of environmental variation in hydrothermal environments-where temperature variations of more than 10 K are considered-against (2) the average timescales of generation/doubling times in prokaryote populations [33-35,40-42,44].The asterisk (*) represents the uncertainty of the position of the upper boundary for the generation times, which is little known for vegetative life (e.g., in the Deep Biosphere). 2 | 9,760.8 | 2023-09-25T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
Dynamic Frame Update Policy for UHF RFID Sensor Tag Collisions
The current growing demand for low-cost edge devices to bridge the physical–digital divide has triggered the growing scope of Radio Frequency Identification (RFID) technology research. Besides object identification, researchers have also examined the possibility of using RFID tags for low-power wireless sensing, localisation and activity inference. This paper focuses on passive UHF RFID sensing. An RFID system consists of a reader and various numbers of tags, which can incorporate different kinds of sensors. These sensor tags require fast anti-collision protocols to minimise the number of collisions with the other tags sharing the reader’s interrogation zone. Therefore, RFID application developers must be mindful of anti-collision protocols. Dynamic Frame Slotted Aloha (DFSA) anti-collision protocols have been used extensively in the literature because EPCglobal Class 1 Generation 2 (EPC C1G2), which is the current communication protocol standard in RFID, employs this strategy. Protocols under this category are distinguished by their policy for updating the transmission frame size. This paper analyses the frame size update policy of DFSA strategies to survey and classify the main state-of-the-art of DFSA protocols according to their policy. Consequently, this paper proposes a novel policy to lower the time to read one sensor data packet compared to existing strategies. Next, the novel anti-collision protocol Fuzzy Frame Slotted Aloha (FFSA) is presented, which applies this novel DFSA policy. The results of our simulation confirm that FFSA significantly decreases the sensor tag read time for a wide range of tag populations when compared to earlier DFSA protocols thanks to the proposed frame size update policy.
Introduction
Traditionally, Radio Frequency Identification (RFID) technology applications focused on item identification, location, and authentication. In the past years, the growing interest in wireless sensors has also reached RFID, and it has been transformed to a technology for both identification and sensing applications. As a result, RFID has become a crucial element of the Internet of Things (IoT) platform. Industry alliances, such as the NFC forum (for HF RFID) and the RAIN RFID alliance (for UHF RFID), have been formed to motivate and promote these efforts. The use of RFID technology to sense our physical world has expanded tremendously in the last decade. This has enabled the sensing ability of RFID technology to gather information from real-world objects and seamlessly integrate this data within the IoT.
RFID applications using wireless sensors require a fast communication protocol to read the sensor's data, especially with increasing tag populations. The main purpose of the protocol is for the 1. An analysis and classification of the state-of-the-art DFSA tag anti-collision protocols according to their frame update policy.
2.
A novel fast frame update policy for DFSA protocols. This policy first applies fuzzy logic to select the value of the slot where the frame size is updated. It then calculates the frame size as a function of the estimated number of tags inside the reader interrogation zone and the duration of the different time slots of the RFID platform.
3.
We introduce the anti-collision Fuzzy Frame Slotted Aloha (FFSA) protocol, which applies the previous policy to lower the average time to read a sensor data packet from one tag compared with existing recent strategies.
The rest of this paper is organised as follows. Section 2 analyses the frame update policy of DFSA protocols. Next, Section 3 presents the related work and classifies the main state-of-of-art DFSA protocols according to their frame update policy. A novel frame update policy and the FFSA protocol are presented in Section 4. Section 5 provides the results of the performance evaluation followed by some of the limitations that we have identified. Finally, Section 6 concludes this paper and presents some recommendations for future work.
Analysis of Frame Update Policy of Dfsa Protocols
In order to improve different metrics regarding the process of tag identification, several DFSA anti-collisions protocols have been studied in the literature. Each strategy employs a different approach to update the frame size, with the focus of improving different performance metrics. Establishing a clear classification of all DFSA protocols is not straightforward. The key feature that differentiates DFSA protocols is the strategy that they follow to update the frame size. This section establishes a novel approach to classify the main frame update policies employed by DFSA anti-collision protocols. This classification considers three different perspectives to update L, which respond to the following three questions: how is L calculated? When is L examined? And, when must a new frame be started? The classification of the main up-to-date policies is summarised in Table 1. The literals in this table will be defined in the next section.
Frame Size Calculation
The reader adjusts L in each reading cycle according to the responses from the competing tags in each frame. Two main strategies can be found in the literature to set a value for the frame size in DFSA protocols: the first calculates L as a function of the parameter Q, and the second sets L as a function of the estimated number of tagsn. The parameter Q is an integer value used in the EPC C1G2 to set L as L = 2 Q .
1.
Parameter Q, f(Q): the frame size can be adjusted by controlling the number and types of the slots in each frame with the parameter Q, so that Q increases when collisions are detected and decreases with increasing number of idle slots. Several approaches in the literature update L by adjusting Q [1,7,14-17].
2.
Tag set size estimation: several works in the literature have addressed the tag estimation task to provide an optimal frame size according to the estimated number of tags. It is known that a DFSA protocol reaches its maximum slot efficiency, which is defined as the ratio between the number of tags and the total number of slots required to identify them, when the frame size is equal to the number of tags. Therefore, to maximise this metric, the reader should set the frame size equal to the estimated number of tags. However, this condition of setting L =n is only satisfied if the reader assumes that the three types of slots have equal duration. However, the standard EPC C1G2 determines that each time slot has a different duration. Consequently, some approaches set the frame size according ton but assume unequal processing duration for each type of slot (single, collision, idle) [12,18].
Once the tag set size has been estimated, the next step is to calculate L according ton. Two main strategies to set L as a function ofn can be found in the literature, which is presented next.
•
Continuous function ofn, f(n): the first strategy is to set L as a continuous function ofn. The reader analyses the information extracted from the tags' responses and then sets L as a function of these values. Several anti-collision protocols follow this strategy [9][10][11][12][19][20][21][22][23][24]. • Look-up table (LUT) according ton, LUT(n): the second strategy is to set L according to an LUT based onn. The idea is to define different ranges ofn and assign a different value of L for eachn range. Several approaches in the literature follow this strategy, including [8,13,[25][26][27][28].
Frame Size Examination
This section answers the question related to when (and in which slot) L must be examined, considering that an examination refers to a new calculation of L. DFSA algorithms update L dynamically. Therefore, a strategy is defined to establish in which slot or slots within a frame the value of L must be examined. Three main strategies can be found in the literature, as follows:
2.
Pointer by Pointer (PbP): some protocols have defined some particular slots within the frame, referred as the pointer p in the present paper (p < L), where L is examined to check its appropriateness [7,8,12,13,28]. These pointers are usually set as a fraction of the current frame size.
Frame Break Condition
This section presents the different policies followed by the reader to decide, after calculating L, whether a new frame must be started, or if the reader must proceed with the next slot. Six main strategies can be found in the literature.
2.
L fits n from an LUT, LUT(n): some algorithms define an LUT based onn and L [8,13] to check the appropriateness of L. First, the reader searches in the LUT for the corresponding value of L for the previously obtainedn. Then, if this new value differs from the current one, a new frame is started. Otherwise, the reader proceeds to the next slot of the current frame.
3.
Higher expected number of successful slots, c s (n, L): the authors in [10] define a policy to break the current frame and start a new one if the expected number of successful slots in the rest of the current frame c s1 (n, L) is less than that expected in the new frame c s2 (n, L). In other words, a new frame is started if c s2 (n, L) > c s1 (n, L).
4.
Lower Identification Time, (lower t IT ): the authors in [16] present a frame cancellation strategy to minimise the total expected time to identify a tag set.
5.
Lower sensor read time, (lower t R ): this work presents a strategy where a new frame is started if the expected average time for reading one sensor packet t R (n, L) in the new frame is lower than the one in the current frame. 6.
End of Frame, (EoF): a new frame is started when the current frame has finished. This strategy is intrinsic to a DFSA-based anti-collision protocol and it is applied in all the protocols analysed in the present paper.
Related Work: Classification of Dfsa Protocols
In this section, we will present and classify some of the most relevant related work in DFSA protocols, including Slot Counter [1], FuzzyQ [7], Chen14 [8], Eom [9], ILCM-FbF [11], ILCM-SbS [10], Chen16 [12], and SUBEB-Q [13]. The performance evaluation of these protocols will be analysed and evaluated with detail in the next section, and will also be compared with the proposed solution FFSA.
The analysis performed in this work is based on the standard RFID wireless communication model, and it is shown in Figure 1. This figure shows the different reader and tags messages along with their corresponding duration meeting the EPC C1G2 requirements. A sequence of L slots is referred to as a frame, where L represents its size. The reader distinguishes between three different type of slots: idle (no tag respond), collision (two or more tags transmit a message simultaneously), and single read (the reader correctly receives the tag EPC during T s and one sensor data packet with during T p ). The duration of each slot is referred to as T i , T k , and T sp , respectively. T 1 , T 2 , and T 3 separate the reader commands and tags responses. Next, Table 2 presents a novel classification of the previous protocols, including the proposed FFSA. The classification is made according to the frame update policy followed by each protocol to identify a group of tags of size n. Table 2. Classification of main DFSA anti-collision protocols according to their frame update policy.
L Calculation L Exam Frame Break Condition
Lower t R(n,L) at p or EoF
The Proposed Frame Update Policy
This section introduces the novel fuzzy frame update policy. The arbitration of RFID communication is a stochastic process of unknown behaviour. Therefore, fuzzy logic is an efficient tool to model the process of identifying RFID tags. Fuzzy control for RFID anti-collision protocols was first introduced in [7], where a fuzzy system was used to give an intuitive value of the frame size. This work presents a Fuzzy Rule Based System (FRBS), which obtains the value of the pointer slot p to only accurately examine the frame size when appropriate. This solution is combined with a time-minimising function to update the value of L at slot p. The resulting proposed policy lowers the average time required to read one sensor packet from one tag compared to existing strategies. The three parts of the proposed policy (frame size calculation, frame size examination, and frame break condition) are presented next.
Frame Size Calculation to Minimise T R (N, L)
The first part of the policy sets the value of L to minimise the expected time to receive one sensor data packet from one tag in a frame. For this purpose, the sensor data read time t R (n, L) is defined as the expected time to identify one tag among n in a frame of size L and read one sensor data packet: where c s (n, L), c k (n, L), and c i (n, L) are defined as the expected value of the number of single, collision, and idle slots in a frame, respectively. The duration of the slots, T s , T p , T k , and T i , are set according to the standard and where T command is the duration of the reader command Qc, QA, or QR, referred as T Qc , T QA , and T QR , respectively. The parameters T EPC and T RN16 correspond to the duration of the EPC and RN16 tag messages, respectively. These two parameters are calculated as a function of the Tag-to-Reader synchronisation time T Preamble TR , the length of each parameter, and the tag data rate DR t , calculated as The parameter BLF refers to the Backscatter-link frequency. Thus, and The length of the sensor data packet T data is calculated by taking a commercial UHF RFID accelerometer sensor tag as a reference [6]. According to the sensor data sheet, each accelerometer data packet contains 10 bytes of data.
The reader transmits one QA or Qc command in the first slot of each frame. Then, it transmits consecutive QR commands in the following slots of the frame until it reaches the last slot of the frame. Assuming a frame with sufficiently large L, T command = T QR is applied in Equations (2), (4), and (5) when one frame is analysed.
The duration of the reader commands Qc, QA, QR, Req RN , and ACK are calculated as and The duration of the Read command T Read is calculated using a commercial UHF RFID accelerometer sensor tag as a reference [6]. Thus, The parameters T FSync RT or T Preamble RT correspond to the Reader-to-Tag synchronisation time as defined in [1], and the reader data rate DR r is obtained as where T symbol 0 = Tari, and T symbol 1 = 1.5·Tari. Tari represents the reference time interval for a symbol-0 (FM0 symbol) transmission. Next, the value of L minimising t R (n, L) is obtained by evaluating an RFID system with n tags and one reader. In this system, we can apply a a binomial distribution P r (n, L) [9] to approximate the probability that r tags among n select one slot along a frame of size L Additionally, p s (n, L), p k (n, L), and p i (n, L) correspond to the probabilities that only one tag, more than one tag or no tag, respectively, occupy a slot [7]. In order to obtain the expected number of idle, single, and collision slots in a frame with a size L sufficiently large, we can apply a Poisson distribution with mean ρ = n/L [9]. c i (n, L) is approximated with r = 0 in Equation (18) by c s (n, L) is approximated with r = 1 in Equation (18) by Then, c k (n, L) is approximated by By substituting Equations (19), (20), and (21) into (1), and applying n/ρ n/ρ−1 ≈1, the following expression is obtained Computing the derivative of t R (ρ) in Equation (1) with respect to ρ yields Then, by posing dt R (ρ) dρ = 0, we obtain the following equation By solving Equation (24), the value of ρ that minimises t R (ρ) is obtained: where W(x) is the Lambert W-function. Finally, the optimal frame size which minimises t R (n, L) is where ρ is obtained from Equation (25). The value of ρ in Equation (25) is evaluated and presented in Figure 2 as a function of T i /T k . It can be appreciated that ρ decreases when the difference between T i and T k grows, which results in an increasing L. In conclusion, a higher difference in the values of T i and T k (with T i ≤ T k ) will result in a higher L. This result is coherent regarding the process of RFID tags identification and sensor data reading. If the duration of collision slots is much higher than that of idle slots, then it is necessary to increase L to reduce the number of collision slots. This occurs at the expense of an increase in the number of idle slots. However, because idle slots are much shorter than collision slots, this is an acceptable effect.
The previous analysis and Equation (26) demonstrate that the frame size calculation of the proposed policy is timing-aware. This means that the calculation is made as a function of the number of tags n and the timing parameters (ultimately the duration of the reader commands and tags responses) of the RFID scheme.
Frame Size Examination: Pbp
The second part of a frame update policy refers to the slot where L is examined. The FbF strategy is not efficient in the case of large frames filled with many collisions because the reader must wait until the frame has finished to update the frame size, which increases the identification time [11]. The SbS strategy involves the calculation of L at every single slot of the frame. As a consequence, one drawback of this solution is that it could overload a system with limited resources. Finally, the PbP strategy provides the flexibility of breaking the current frame before it ends, which maintains a low computational complexity in the reader. Therefore, the proposed policy applies a PbP strategy where the value of the pointer slot is dynamically updated using fuzzy logic.
The proposed policy applies a fuzzy rule-based system (FRBS) to adjust the value of the pointer efficiently. Consequently, the current L and the tag collision rate col_rate are modelled as fuzzy sets to adaptively update the value of the pointer. A zeroth-order Takagi-Sugeno-Kang fuzzy system with a complete AND-composed rule [29] is proposed. The membership functions that we have used to codify the input variables are trapezoidal (see Figure 3) and the t-norm minimum is used to implement the AND operator. Among the traditional shapes of membership functions (triangular, trapezoidal, Gaussian, generalized bell, and sigmoid), trapezoidal membership functions have been selected due to their representation simplicity, which allows faster calculations. The proposed system has two inputs, as follows: • Q: codifies the current value of this parameter which determines L, where Q ∈ N and 0 ≤ Q ≤ 20. • col_rate: codifies the tag collision rate up to the current slot. This is defined by col_rate = c k /slot_index, and 0 ≤ col_rate ≤ 1.
Additionally, the variable slot_index represents the reader's internal counter, which keeps track of the present slot in the current frame. The output p represents the slot where L must be examined. Specific values for membership functions and consequents in the rule base have been adjusted experimentally. The rules were designed also experimentally, considering the typical behaviour of an RFID system: on the one hand, more collisions (higher col_rate) require us to promptly examine L (smaller output p); while on the other hand, a smaller frame size (smaller Q) requires the examination of L in a later time slot (higher output p). The experimental values for the membership functions and the rules have been obtained by evaluating different ranges and selecting the one with the best performance in t R . Figure 4 shows the surface representation of the proposed FRBS that determines the output p, normalised to L = 16. To illustrate an example, for the inputs Q = 10 and col_rate = 0.3, the output is p = L/9. Then, the new value of the pointer slot is p = round(2 Q /9) = round(2 10 /9) = 114.
Frame Break Condition: Lower T R (N, L)
Finally, the last part of the policy determines the condition to break the current frame and start a new one. The expected average time to read one sensor data packet [6] among n sensor tags in the current frame of size L c is obtained as t R c = t R (n, L)| n=n,L=L c , (27) and the expected average time to read one sensor data packet among n sensor tags in the newly calculated frame of size L n is t R n = t R (n, L)| n=n,L=L n .
To lower the tag sensor data read time, a new frame will be started if the condition t R n < t R c is satisfied. Thus, at slot p, the reader obtains t R n and t R c with Equations (27) and (28), assuming T command = T QR , and then compares these values. Following this strategy, the reader guarantees that if a new frame is started at slot p, then the expected average time required to read one sensor data packet will be reduced.
The Proposed Fuzzy Frame Slotted Aloha Protocol
The novel FFSA protocol is introduced in this work, which applies the previously presented DFSA policy: determines the frame size minimizing t R (n, L) (Section 4.1), examines the frame size following a PbP strategy (Section 4.2), and starts a new frame with the condition to lower t R (n, L) (Section 4.3). FFSA is compliant with the EPC C1G2 standard, meaning that it meets the specific communication timing requirements and uses power-of-two values for L. As a consequence, this policy can be used to identify commercial sensor tags.
In order to calculate the frame size in Equation (26), FFSA applies the traditional Mean Minimum Square Error (MMSE) estimator [26] to calculaten aŝ MMSE has been applied in FFSA due to its computational simplicity while providing a relatively low estimation time.
The pseudocode of FFSA is presented in Algorithm 1. Initially, the reader sets the value of ρ with Equation (25) according to the RFID system timing parameters, and starts the identification procedure by broadcasting Qc. Each tag selects a slot in the frame to transmit its RN16, and the reader updates the variables c s , c k , and c i accordingly. When the reader reaches the last slot of the frame, the remaining tag population size is estimated with Equation (29). Then a new frame is started by broadcasting QA, specifying the new frame size as Q n = log 2 ((n − c s )/ρ), L n = 2 round(Q n ) . At every slot, col_rate is calculated and p is set as the current slot if col_rate = 1. If the current slot is a pointer, the reader calculatesn with Equation (29) and sets L n with. Then, it obtains t R c and t R n with Equations (27) and (28). If the condition t R n < t R c is satisfied, a new frame is started and p is updated with the FRBS. Otherwise, the reader broadcasts QR to proceed to the next slot. The sensor tags reading process ends when there are no collision slots in the current frame and the frame is terminated. col − rate = c k /slot_index 14: if col − rate=1 then 15: p = slot_index 16: end if 17: if slot_index = p then 18:n =MMSE(c s , c k , c i ) 19:
20:
if t SR n < t SR c then 21: p = FRBS(col − rate,Q n ), L c =L n
Performance Evaluation
This section evaluates the performance of FFSA in terms of the average time to read one sensor data packet from one tag t R . This metric is calculated as the total sensor read time divided by the total number of tags n in one inventory round: In one inventory round, the variables c s T , c k T , and c i T are the total number of single, collision, and idle slot, respectively. The value of T command in Equations (2), (4), and (5) will vary depending on the slot position within a frame:
•
First slot of the inventory round: T command = T Qc . • First slot of the frame: T command = T QA . • None of the above: T command = T QR . Table 3 summarizes the most relevant variables covered in this work. For each scenario, t R is evaluated as a function of the control variable indicated with *, n in S1 and BLF in S2. BLF is varied from 40 kbps (the minimum value allowed by the standard) to 640 kbps (maximum). S2 represents a special case because BLF also influences Tari, which represents the reference time interval for a data-0 transmission, and affects RTcal, TRcal, T 1 , and T 2 . These parameters are also modified every time that BLF changes during the simulation. In both scenarios, the initial L is set to 16. In FFSA, the initial value for p is set to eight and this value has been obtained experimentally. Table 4 shows the parameter values that we have employed. Expected value of the number of idle, single, and collision slots in one frame t R Time to read one sensor data packet from one tag t R (n, L) Expected time to read one sensor data packet from one tag among n in a frame of size L Next, the protocols presented in Section 3 are evaluated and compared with FFSA for different performance metrics. A scenario with one reader and a varying number of tags has been evaluated with Matlab R2019, where the tags are uniformly distributed. The simulation responses have been averaged over 1000 iterations to ensure accuracy in the results. The performance evaluation followed in this work focuses on the media access control layer, ignoring the physical layer effects (assuming no capture effect and a non-impaired channel). This approach is widely accepted and incorporated by several studies in the related literature [8,[10][11][12]. Our evaluation is performed for one inventory round, which is defined as the period of time that begins when the reader transmits the initial command Qc and which ends when the reader interrupts the reading process and the tags lose their state.
Impact of the Number of Tags in S1
This section compares the selected protocols in terms of t R of Equation (30) by varying the number of tags n from 64 to 8192 (see Table 4). Additionally, c k T and c i T per tag are measured because t R is mostly influenced by them. The results of t R evaluation are shown in Figure 5. The average percentage improvement of FFSA compared to the rest of the protocols in terms of t R ranges from 3% to 9% in S1. This improvement will be more notable (above 9%) for shorter sensor data length (lower T d ata). Most protocols show a quasi-constant t R for n up to 2048 in Figure 5. FFSA requires the lowest average time read one sensor data packet from one tag. The strategy Chen14 shows an increasing t R for n > 2048 because it limits the frame size to 1024 when n is greater than 710. FuzzyQ also presents a peak at n = 2048, because the value of the Q parameter is upper-bounded. The improvement in FFSA comes from the reduction in c k T at the expense of an increase in c i T , as can be appreciated in Figure 6a,b, respectively. Because the duration of an idle slot (Equation (5)) is shorter than that of a collision slot (Equation (4)), the reduction in c k T leads to a lower t R for FFSA. The strategies ILCM-FbF and Chen14 present the highest c k T , leading to the highest t R .
Impact of the Tag Backscatter Link Frequency in S2
This section compares the selected protocols in terms of t R (Equation (30)), while varying the tag Backscatter Link Frequency BLF. Therefore, the previous protocols are evaluated by varying BLF from 40 to 640 kbps; minimum and maximum values specified in the current standard. Tari is set to its minimum value 6.25 µs. The simulation results are averaged for n from 64 to 8192, and are shown iupn Table 5. The value of ρ employed by FFSA is also presented, which has been obtained with Equation (25). All of the protocols present a decreasing t R with increasing BLF. For the highest values of BLF, all of the protocols present a similar behaviour and FFSA does not introduce a significant performance improvement. This occurs because the value of ρ (see Table 5) takes a significantly higher value, which causes a larger number of collision slots. As BLF decreases, FFSA shows a significant reduction in t R in relation to the prior protocols.
To analyse the previous results, c k T and c i T per tag are measured as functions of BLF and averaged for all the tag set sizes n in S2, and the simulation results are shown in Figure 7. When BLF gets close to its upper limit, the increase in c i T of FFSA is not compensated by the small reduction in c k T in relation to the prior protocols, which limits the performance improvement of the proposed protocol. On the other hand, while the prior protocols present a quasi-constant c k T with decreasing BLF, FFSA presents a notably decreasing c k T , which is reflected in a reduction in t R in relation to the prior protocols. Although for BLF > 80 kbps Chen16 behaves similarly to FFSA, the improvement introduced by FFSA becomes notably clear when BLF gets close to its upper or lower limit. This occurs because as BLF gets closer to its lower bound, Chen16 results in a low value of y (y is used by Chen16 in its algorithm to obtain L), which leads to an increasing c k T and decreasing c i T .
Discussion
The previous section evaluated the performance of FFSA in terms of t R (Equation (30)). To demonstrate the benefits of the proposed protocol, its performance was compared with several related works presented in Section 3. In terms of the sensor data read time, the main parameter evaluated in this work, FFSA, presents the lowest t R for most of the values of n and BLF evaluated. The parameter BLF was selected as a control variable because it is related to the tag data rate. The two scenarios evaluated in this work consider that tags use Miller modulation with M = 4. The relationship between the tag BLF and the tag data rate is DR t = BLF/M. Thus, a higher BLF results in a faster tag, and vice versa. Consequently, the time to identify one tag t R is lower for higher BLF values. This effect is appreciated in Table 5. FFSA analyses this characteristic and takes into consideration the value of BLF to adjust L according to Equation (25). Therefore, FFSA lowers the sensor read time of the comparative protocols for a wide range of tag data rate configurations. In conclusion, the savings in the tag sensor data read time of FFSA is substantial for most of the range of BLF and n, which confirms that the proposed protocol is a time saving procedure in S1 and S2.
Identified Limitations
The protocols performance evaluation analysed in this work assumed an ideal communication channel, because it focused on the media access control layer. However, in a real scenario for passive RFID systems, the capture effect is typically present [30]. The capture effect occurs when the reader successfully resolves one tag reply in a collided slot. This effect could benefit the performance of FFSA because fewer collided slots and more single slots would occur, decreasing t R . However, there is a negative impact of this effect over FFSA performance. The capture effect may hide some tags, which provides erroneous information to the tag estimator and increases the estimation error. Thus, the updated L value may not be appropriate, which negatively affects t R . A study of the capture effect on t R and an evaluation of FFSA taking this effect into account is recommended for future work.
Conclusions
A comprehensive survey and classification of the frame update policies for RFID DFSA anti-collision protocols has been presented. In general, this policy can be divided into three parts: L update, L calculation, and frame break condition. Then, several state-of-the-art DFSA anti-collision protocols have been analysed and classified according to this policy. Finally, a novel frame update policy has been proposed. This results in the Fuzzy Frame Slotted Aloha (FFSA) protocol, which is a fast DFSA anti-collision protocol and is compliant with the current UHF RFID standard. With a significant improvement in the sensor data read time in relation to the current anti-collision protocols, FFSA is a suitable candidate where low sensor data read time is sought in UHF RFID systems that require a varying number of sensor tags.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results. | 7,893.2 | 2020-05-01T00:00:00.000 | [
"Computer Science"
] |
Nanobots in Medical Field: A Critical Overview
Today, the health care industry is focusing on developing minimum invasive techniques for diagnosis, as well as for the treatment of ailments. Advancement in technology is more essential for the treatment of many problems such as implant of bones and membranes. One such technique flourishes robots using nanotechnology which are known as nanobots. Nano robotics is an emerging technology creating machines or robots whose components are at or near the scale of a nanometre ( meters). Nanobots will help to build a platform between the technological gaps of physics, chemistry and biology on the nano-scale. These nanobots can address the path to many creative approaches and will result in new methods and products for both technological and medical-pharmaceutical applications. Furthermore, nanobots are good applicants for complex treatments with their size being very small. Nanobots are used as drug delivery systems and contrast agents. It is contended that coated nanobots are functionalized with targeted molecules are interacting with external devices and offering real potentials for medical applications. This paper directs how a nanobot work and how it contribute to medical robotization with their advantages
INTRODUCTION Nanobots are robots that carry out a very specific function and are approximately 50-100 nm wide. They can be used very actively for drug delivery. Normally, drugs pass through the entire body before they reach the diseaseaffected area. Using nanotechnology, the drug can be picked out to a specific site which can reduce the chances of possible side effects. More specifically, Nano robotics refers to the nanotechnology engineering discipline of designing and building Nano robots ranging in size from 0.1-10 micrometers and is constructed in a nanoscale or molecular components. In the present circumstances, the nano devices which are under research and development are known by the terms nanobots, nanoid, nanite, nanomachine or nanomite. Nanomachines are used primarily in the research and development phase but some primitive molecular machines and nanomotors have been tested.
The study of manipulating matter on an atomic and molecular scale based on nanotechnology is basically termed as nanobots. Generally the size of nanobots lies between 1 to 100 nm. It can play a major role in medical applications, especially for targeted interventions into the human body through the vascular network. The first crucial step toward developing real-world applications for targeted drug delivery and other uses are represented by recent fabrication, actuation, and steering demonstrations of nanoscale robots. It has huge impact in targeted cancer therapy, such as delivering therapeutic agents directly to the tumor through the vascular network. Today, when we talk about nanobots, the self-propelled nanomotors and other biodegradable nano devices made of bio-nano components, which carry cargo to the target sites, i.e. deliver drugs to diseased cells. For example, these nanorobots can transport molecular payloads throughout the body by programming and cause on-site tumor blood supply blockages which can lead to tissue death and thus shrink the tumor. Currently nanobots and their application in the medical field are under development. In association with medicine, nanobots are programmed to perform specific biological tasks and when they are injected into the blood, they work on cancer cells or any other affected cells. Nanobots blended with biological research will set a new milestone in the development of medical studies.
II. ADVANTAGES OF NANOROBOTICS OVER C CONVENTIONAL MEDICAL TECHNIQUES We Homo sapiens (advanced humans) have always been fascinated with our own anatomy. Ages ago, techniques to diagnose body ailments as well as to repair them have been developed. The progression of humanity has been quite a lot in terms of safety and reliability of the procedure. Techniques such as endoscopy have been developed to give a better understanding of the innermost parts as well as aid diagnosis. But as we all know, all technology certainly has to be phased out sometime. And as historical procedures have developed to overcome the drawbacks of their predecessors, nanorobotics will aim to overcome the following drawbacks of today's medical technology: 1. Cut harmful tissue layers which take time to heal.
2. Painful Anesthesia can be used to limit the pain to a great extent, yet it is only for a short time.
3. Still there is no 100% success for delicate surgeries such as eye surgery. 4. In any of the invasive techniques, the patient's life is totally in the hands of the operator, surgeon or physician. It is risky, as one mistake could take the life of the patient.
For last few centuries, conventional techniques of investigation and diagnosis have been widely used and thus, soon it is going to fall behind as the technological age advances. Also all these procedures will soon become robotically controlled by machines.
However scientists and researchers are working on compelling, reliable and bio-compatible approach. Instead of curing a disease from outside, they course to defend it inside the body. That is where medical nanorobotics comes in. The major advantages of this technology are: 1. Minimal or no tissue trauma. 2. Considerably less recovery time.
3. Less post-treatment care required. 4. Rapid response to a sudden change. 5. Continuous monitoring and diagnosis from the inside.
Some features of nanobots would also allow us to store and process previous data, identify patterns and hence, help to predict the attack of an ailment. Nanobots can be guided externally as per programmed and deliver payloads such as drugs, or healthy cells to the specific location in the body. An added advantage is that these nanobots can navigate through natural biological pathways.
A. Sugar level monitering bots
The sugar level in the blood can be monitored by inserting special sensor nanobots into the blood in which an electrical impulse signal is emitted by microchips which are coated with human molecules. The drug carriers consist of walls that are just 5-10 atoms thick and inner drug-filled cell which is usually 50-100 nm wide. When they detect signs of the disease, thin wires in their walls emit an electrical pulse which causes the walls to melt and the drug to be released. An intensive advantage of using nanobots for drug delivery is that the the electrical pulse can be controlled which easily controls the amount and time of drug release to the specific site. Moreover, the walls melt and dissolve easily and are therefore harmless to the body.
B. Enzyme-proprelled nanorobot
Urea-coated nanotubes turn into a propulsion system in a urea-containing liquid because the enzyme breaks down the urea into gaseous products. A current in the liquid is generated by the reaction products since the tubes always have small asymmetries. This active motor based drug delivery approach promises an effective and improved drug delivery compared to conventional methods.
C. Cancer detection and treatment
Nanorobots are successfully programmed by the scientists from Arizona State University and China's National Centre for Nanoscience and Technology (NCNT) to detect and shrink cancer tumours in the brain. With 25 million nanometers per inch, these miniature robots may provide an extra help that oncologists need to reduce cancer, such as enhancing their capabilities to detect, diagnose and treat cancer cells. Fig3: Nanobots fighting cancer Today, drug delivery for cancer is difficult to control. Chemotherapy anguishes healthy tissue in addition to malignant tissue. We cannot prevent the harm effects of chemotherapy on other parts of our body. But this is not done by nanobots. Nanobots could be used to deliver drugs particularly only to the tumor cells, thus preventing the auxiliary effect of the drug. Primarily, nanobots are sent to the targeted tissue or tumor to provoke it which is a part of machine gun approach but a lot of the bots will be wasted. However, only the tumor is provoked and not any other tissue in the entire body is affected. Now, a second wave of bots is sent to targeted tissue and this wave of bots contains the actual chemotherapy drug. It releases its payload i.e. the drug only after sensing the provoked tissue. Thus, we have highly concentrated targeted action, with no peripheral impact.
IV. POTENTIAL USES OF NANOBOTS
The budding uses for nanorobotics in medicine include prior diagnosis and targeted drug-delivery for cancer, surgery, pharmacokinetics, monitoring of diabetes and biomedical instrumentation. In such a deal, future medical nanotechnology is expected to make use of nanorobots that are injected into the patient's body to perform its work at a cellular level. These nanorobots should not identical because duplication leads to increase in device complexity, reduce its accuracy and tamper with the medical mission.
Nanotechnology provides a wide range of new technologies for developing adapted means to adjust the delivery of pharmaceutical drugs. Today, harmful side effects of treatments such as chemotherapy are mutually a result of drug delivery methods that doesn't distinguish their intended target cells accurately.
Another useful application of nanorobots is cooperating in the repair of tissue cells along with white blood cells. Mobilizing inflammatory cells or white blood cells (which include neutrophil granulocytes, lymphocytes, monocytes, and mast cells) to the affected area is the first response of tissues to any injury. Because of their small size, nanorobots could attach themselves to the surface of mobilized white blood cells to crunch their way out through the walls of blood vessels and arrive at the injury site, where they can cooperate in the tissue repair process.
A. Detect Bacteria
A lot of nanobot's estimate uses are related to medicine in some manner. For instance, it is believed that nanobots will be able to detect the presence of bacteria and other microbes in the human body, which in turn, means that they will be able to detect whether someone has been infected or not as well as what kind of response should be set based on the kind of infection.
B. Detect Cancer
It is esteemed that nanobots will be able to act as an early warning system, which will pick up changes in the human body that signal the mutation of healthy cells into cancer cells. By doing so, they will allow the implementation of solutions in a timely manner.
C. Determines the Effectiveness of Drug
One of the biggest challenges in medicine is to figure out the effect that a particular medicine is having on the patient so that the medical expounder can tackle the problem by decreasing the side effects. Moreover, determining the effectiveness of medicine is also important because it allows the medical expounder to know how to treat the patient as soon as possible. Nanobots will be able to help with both of these tasks.
D. Detect Particular Chemicals
On a associated note, it is believed that nanobots will be able to detect the presence of particular chemicals in the human body as well, which will provide crucial information to medical expounders about the condition of the patient so that it can be used to ensure more efficient and effective treatment.
E. Deliver Cancer-Fighting Drugs
Chemotherapy can be cruel on a cancer patient because it can kill cancer cells as well as the healthy cells surrounding the cancer cells. There has been some motion made regarding the use of nanobots to make sure that the cancer-fighting drugs are delivered right to the cancer cells, thus restricting the parallel damages. This means that nanobots can be used to target harder-to-reach portions of cancerous tumors, which in turn, means that there will be a potential increase in the chances of successful chemotherapy as well.
F. Clear Blocked Blood Vessels
There is a lot of interest coming up with potential solutions as well as potential preventatives for cardiovascular disease which is one of the most common killers. Theoretically, blockages in blood vessels which are responsible for both strokes and heart attacks can be cleared by using nanobots. But practically, if these bots are not able to wholly solve the problem, they are able to reduce the chances of dying from either one of those conditions, which will be an incredible improvement even.
G. Serve as Antibodies
Nanobots are used to boost the existing antibodies for the people with weak immune systems who cannot manage all the bacteria and other microbes that they await to attack. Here, the nanobots actually being used to potentially destroy the dangerous foreign substances in the human body. Alternatively, this could consist of the nanobots that direct the existing immune processes at the sources of danger.
H. Clean Up Pollution
In future, it might be possible to use nanobots to clean up pollutions, thus restoring polluted environments to a clean and virgin condition. Inspecting the impacts that pollution can cause the health of entire ecosystems including human health, nanobots can be considered as an inestimable boon because nanobots would be deployable in toxic sites thus reducing the risk to human counterparts.
VI. ADVANTAGES AND DISADVANTAGES OF NANOBOTS ADVANTAGES
The main advantages of the bots are its speed and longevity. There are abundant benefits through nanobots that are provided by present methods of drug delivery. Nanobots are very specific and accurate with fewer side effects which release drugs in a controlled manner. It also minimizes surgeon mistakes. Computer controlled drug delivery and greater speed of drug action are its sublimit.
DISADVANTAGES
The main disadvantage is that it is expensive to design the nanobots and a lot of complications are involved in designing them as well. The most daunting obstacle is the power supply. For the bots to overcome the body's immune response, more research work has to be done so that. If the
International Journal of Engineering Research & Technology (IJERT)
ISSN: 2278-0181 http://www.ijert.org nanobots are misused by terrorists, it could even be used as bio-weapons and may become a threat to the society. If nanobacterias are present in our body, they can cause a serious of effects, which means nanobots are foreign to the body. Due to so many foreign particles inside the body, biodegradability will be a significant problem. Hence, a vigorous care has to be taken to overcome all these drawbacks. A harmful version of the bots could be created if nanobots self-replicate and our immune system can be challenged if we depend a lot on nanotechnology.
VII. CONCLUSION In the field of medicine, the use of nanorobotics has a wider scope than any other sub-field that has emerged to date. These bots can be used comparatively high anywhere in association with human physiology. It provides enormous advantages over conventional medicine such as lower cost, quicker recovery and low or almost no invasion. There will be a great revolution in medicine, comparable to the industrial revolution which reshapes the world in the age of integrative activity. With a flock of nanobots protecting us from inside, we could actually be free from disease in the next few decades with life expectancy. Cancer detection, data storage, and pipeline monitoring offers some of the strongest cases of development using nanorobotics for the future. Thus, nanorobotics is an ideal field to explore progressively. | 3,462.2 | 2019-12-11T00:00:00.000 | [
"Medicine",
"Engineering"
] |
The relative contributions of weathering and aeolian inputs to postglacial formation of Mediterranean alpine loess
1 Between the southern margin of the European loess belt and Sahara Desert, thin and irregularly 2 distributed loess deposits occur in Mediterranean mountains. During the most recent deglaciation, along 3 the Pleistocene-Holocene boundary, the deposition of glacial, periglacial and outwash sediments, was 4 the main local source of Mediterranean alpine loess, whereas proximal alluvial planes comprised a 5 secondary source. The mid-Holocene termination of African Humid Period and subsequent aridification 6 of Sahara Desert occurred simultaneously with a change of the regional climate from Atlantic to 7 Mediterranean-dominated, characterized by frequent episodes of southerly winds. This resulted to a 8 change of the loess source, as deflation of quartz rich silts enriched in Zr during intense episodes of 9 Sahara dust transport became more dominant. Here, a 32cm loess profile from the Plateau of Muses 10 (PM), below the summit of Mount Olympus, Greece, is investigated on the basis of grain size, 11 mineralogy, environmental magnetism and geochemistry. Comparisons of loess samples with glacial 12 and periglacial deposits, enables us to differentiate relative contributions of local sources and 13 allochthonous aeolian inputs. Calcite sand rich in feldspars makes up the glacial and periglacial clast 14 free matrix. In contrast, PM loess is composed by clay and fine silt fractions with minor calcite sand 15 contributions. The mineralogical matrix of loess contains quartz, phyllosilicates and mixed layer clays, 16 while its geochemical composition contains high amounts of detrital Fe-Ti oxides and aeolian 17 transported Al and Zr. Based on the multi-proxy approach applied here, the loess profile is partitioned 18 in three layers. Holocene average deposition rates (~2.5 cm/ka) broadly agree with modern Sahara dust 19 deposition (~2.0 cm/ka) and long-term postglacial Mediterranean mountain denudation rates (~0.5 20 cm/ka). Such low rates provided ample time for post depositional modifications, such as decalcification, 21 deferrification and removal of K, evident from the trends of chemical weathering proxies Ca/Sr, Fe/Ti 22 and K/Rb, respectively. 23 27 28
INTRODUCTION 29
The most recent deglaciation of the Mediterranean mountains between 12 and 9.5 ka BP resulted 30 to deposition of large sequences of glacial, periglacial and outwash sediments that were mainly confined 31 in the highest valleys of the massifs (Hughes and Woodward, 2016;Oliva et al., 2018;Allard et al., 32 2020). Antecedent to glacial retreat was the deposition of loess and subsequent formation of alpine soils 33 on moraines, plateaus and outwash plains (e.g. Muhs 2007). Synergistic to the in-situ genesis of alpine 34 soils, is the deposition windblown dust, which results to the formation of alpine loess soils (Muhs and determines the rate of geomorphic processes, such as landscape denudation. Furthermore, the study of 39 deflated sediments within alpine soils and loess can provide insights on the local and regional 40 atmospheric circulation patterns, reflected by the depositional dynamics of aeolian dust (e.g. Muhs et 41 al., 2007). 42 In the Mediterranean region, the formation of loess is influenced to a large extent by its proximity to 43 Sahara Desert (Pye, 1995 Mount Olympus is the highest mountain of Greece, rising 2918 meters above the Aegean Sea (Fig. 1). 74 In the lower part of the mountain, Mediterranean type climate prevails with wet winters and generally 75 dry summers. Wet winters are linked to cyclogenesis in the Aegean Sea basin that results from enhanced 76 mid-latitude westerlies ( Fig. 1 pattern B) and the influence of Atlantic climate (Xoplaki et al., 2000). 77 This pattern was dominant during the first part of the Holocene (Peyron et al., 2017). Dry winters are 78 associated with outbreaks of northerly continental cold and dry airflows ( Fig. 1 pattern B) funneling 79 through the large fluvial valleys exiting on the Aegean Sea (Rohling et al., 2002), which are connected 80 to the presence of high-pressure systems over the northern Balkans and/or Siberia (e.g. Xoplaki et al., 2000;Bartzokas et al., 2003). This pattern was persistent throughout the Holocene, when short periods 82 of cold and dry winters linked to the intensification of Siberian High (e.g. Rohling et al., 2002;Marino 83 et al., 2009) and resulted in Mediterranean rainfall minima associated Sahara dust transport episodes 84 (Zielhofer et al., 2017a). The transport of Sahara dust in the North Aegean occurs today under strong 85 southerly (Sirocco) winds ( Fig. 1 pattern C) during the winter and spring (Nastos, 2012), but there is 86 lack of evidence of how the southerly winds outbreaks evolved during the Holocene. However, the study 87 of Mediterranean alpine loess archives, where the Sahara dust signal is not blurred by erosion, reworking 88 and pedogenesis, can provide valuable information on the tempo of southerly warm and moist wind 89 outbreaks and their impacts on different ecosystems.
Glacial erosion 98
The geologic structure of Mount Olympus involves a stratigraphic upwards sequence of Triassic, and 99 Lower Cretaceous to Eocene metacarbonates, uplifted since the late Miocene along a major NW -SE 100 trending frontal fault ( Fig. 2A) (Nance, 2010). During uplift, deposition of erosional products along the 101 eastern (marine) and the western (continental) piedmonts occurred ( Fig. 2A). Their Quaternary counterparts 102 include thick sequences of glaciofluvial and alluvial fan deposits with intercalated soils, exposed along the 103 main river valleys and the frontal fault scarp (Fig. 3 in Smith et al., 2006). During the 104 Last Glacial Maximum (LGM), between 28 and 24 ka BP (Allard et al., 2020), an ice cap covered Mount 105 Olympus' highest cirques and upland plateaus extending to elevations of ~ 2000m (Kuhlemann et al., 106 2008). The post LGM retreat was followed by a Late Glacial (LG) glacier expansion at ~15 ka BP that 107 was confined in the highest cirques at elevations above 2200 m (Styllas et al., 2018). (Fig 2E). The Plateau of Muses extents 0.8 km 2 and is covered by unconsolidated 143 glacial and periglacial sediments. Periglacial features such as solifluction beds are present below the exposed 144 bedrock of the surrounding peaks, while patterned grounds exist along its topographically lower surface 145 (Styllas et al., 2018). These features are tentatively considered to have formed during cold intervals over the 146 last ~12 ka BP, following the deglaciation of TZ cirque, but may be still active today as the permafrost 147 elevation of the region is placed at 2700 m (Dobiński, 2005). The formation of PM is the result of the 148 combined action of glacial scouring and karstic dissolution. The low relief in combination to the elliptical 149 to circular plan shape of the plateau, points to a doline type karstic depression filled with glacial and 150 periglacial sediments with a thickness between 4 to 10m (unpublished data from geophysical survey). The 151 surface layer (> 35cm) of PM sedimentary sequence is composed by a red to yellow homogenous fine-152 grained accumulation, with its basal part composed by glacial and/or snowmelt outwash limestone sand and 153 gravel, mixed with silty sediments (Fig. 2E, Fig. 3). 154 155 Fig. 3. Pictures of the PM 32cm soil loess profile with the respective discrete samples taken every 2cm.
HYPOTHESIS AND STUDY DESIGN 157
Based on the considerations regarding: the onset of deglaciation Mediterranean mountains at ~12 ka BP 158 and the termination of the African Humid Period at ~6 ka BP, this study, based on a suite of analytical 159 methods applied to samples from Mount Olympus, Plateau of Muses loess, is challenging the hypothesis 160 that the evolution of Mediterranean alpine loess, occurred along three distinct phases: ii) The early to mid-Holocene phase between 10 and 6 ka BP when, under a warming and 166 seasonal Mediterranean climate (Peyron et al., 2017), the formation of loess was mainly 167 sourced by local glacial sediments and expanding alluvial planes in lower elevations. iii) 168 The mid to late Holocene from 6 to 0 ka BP, where following the termination of the African 169 Humid period and desiccation of Sahara Desert, along with a change of regional winter 170 climate from Atlantic to Mediterranean, increasing amounts of Sahara dust reached the 171 Mediterranean mountains during episodes of southerly advection. 172
Grain-size analyses 175
Grain-size analyses were performed on 21 samples. Five samples were retrieved from distinct clast free 176 horizons of the MK and TZ stratified scree deposits and sixteen samples from the PM loess sequence 177 (Fig. 2B). Samples were wet-sieved through a 350 μm sieve and were then analyzed with a Mastersizer 178 3000 laser diffraction particle size analyzer (Department of Earth Science, University of Bergen, 179 Norway), with a sensitivity of 0.01 -350 μm, to define the bulk grain-size distributions (GSD) of the 180 fine sand to clay fractions. GSD statistical analyses were performed with MATLAB Curve Fitting Lab (CFLab), which performs curve fitting on sediment grain size distributions using Weibull Probability 182 Distribution Functions (Wu et al., 2020). 183
Chemical methods (XRF) and mineral analysis (XRD) 185
All samples were analyzed for their bulk mineralogy and geochemistry. The relative elemental 186 composition was determined by X-ray fluorescence using an ITRAX core scanner in the Department of 187 Earth Science of the University of Bergen in Norway. One cubic centimeter of the finer (<350 μm) 188 fraction of the samples was air-dried, filled into sample cups and compacted by hand. Four units of 21 189 sample cups were mounted on sample holders and measurements with the ITRAX XRF core-scanner 190 were performed using a Mo-tube, which can detect a wide range of elements from Al to U (Croudace et 191 al., 2006). Counting time was 10 s and power supply at 30 kV/55 mA. XRF spectra were translated into 192 element counts by mathematical peak fitting using Q-spec software (Croudace et al., 2006). Major and 10 -8 m /kg). During the measuring procedure, every sample was measured at least 3 times and the average 233 value was assigned as a measurement. Two air measurements before and after the samples' measurements were performed. Additionally, frequency dependent susceptibility (χFD%) was calculated according to 235 Dearing et al. (1996) [χFD% = 100(χLF-χHF)/χLF]. 236
RESULTS 238
The PM sediment profile (Fig. 2E) Quartz grains display a rounded shape and variable grain sizes smaller than 15μm (Fig. 7F), which imply 338 their presence within M1, M2 and M3 grain size modes, respectively. The rounded shape of quartz grains 339 depicted from the SEM images, likely is a product of long-range aeolian transport. PM loess is additionally enriched in Zr and Cr, which appear in negligible quantities in MK and TZ 373 samples (Fig. 8E to H), so their origin from mechanical weathering of bedrock, is implausible. An 374 alternative mechanism for their transport and deposition in PM loess is deflation from distal or local 375 sources, such as the Sahara Desert, the proximal ophiolitic Pieria Mountains and Katerini alluvial plane. for the upper layers, respectively (Fig. 9D). Due to its sensitivity to super paramagnetic (SP) particles, 392 χfd is often used to identify ultrafine-grained iron oxide formation e.g., magnetite, maghemite, and with an increase in M3 concentration (Fig. 9 D & E) and with a decrease of the K/Rb ratio (Fig. 9F),
Local weathering 409
The low correlation between M5 grain size and the Ca XRF counts among all samples (r = 0.45, p < 410 0.05, n = 21), contrasts the notion that the coarse rich sands, are only produced by physical weathering 411 of bedrock carbonate formations. The low correlation can be attributed to dissolution kinetics and 412 leaching of Ca during disintegration of carbonate bedrock to gravel and sand. Within PM loess sequence, 413 the positive correlation between M1 and M2 concentrations with M5 grain-size (r = 0.67, p < 0.05), 414 suggests that the production of coarser sandy debris is associated with higher concentrations of fine 415 particles. A physical mechanism that can explain this statistical relationship is the isovolumetric 416 replacement of Ca-rich sand to clay, as proposed by Merino Within PM loess profile, the weight percent (wt%) concentration of mica and Zr XRF counts, display 466 high correlations (r > 0.70, p < 0.003) with M3 concentration. This relation argues that additionally to 467 mica (muscovite) presence in bedrock formations (TZ and MK samples) depicted from XRD spectra, 468 micaceous silt grains are also transported during Sahara dust episodes. The range of the M3 mean grain 469 size ranges between 14 and 28μm and is similar to modern values Sahara dust modal and median grain 470 sizes from Crete (Fig. 1), which range between 8 and 30 μm (Goudie and Middleton, 2001) and 4 to 16 471 μm (Mattson and Niéhlen, 1996), respectively. Thus, it is reasonable to support that M3 can be 472 considered a representative grain size mode of Sahara dust contribution to PM loess. 473 474 However, rounded quartz grains occur in a variety of grain sizes from 2 to 15 μm (Fig. 7F), which is 475 also supported by the correlation between the sum of M1, M2 and M3 modal concentrations with quartz 476 wt% (r = 0.74, p < 0.001). This suggests that transport of Sahara dust to Mount Olympus includes finer 477 particles in clayey silt range, assuming that all aeolian quartz comes from Sahara region. Since quartz is 478 traced in minor quantities in MK and TZ samples (Fig. 5A), the conclusion that the finer modes M1 and 479 M2 contain aeolian components, either from Sahara, or from local sources (Pieria Mountains and 480 Katerini alluvial plane), is valid, but the exact origin of quartz cannot be defined from the existing 481 analyses. Therefore, synergistic to the weathering of Mount Olympus carbonates and deposition of 482 detrital components with subsequent post depositional production of fine particles and aggregates rich 483 in Fe-Ti oxides, is the deposition of fine dust incorporated into M1 and M2. Background dust with grain 484 size similar to M1 and M2 (~3μm), is found in many European loess sequences and represents local, by the fact that decalcification of PM loess, largely occurs within the finer fractions, with subsequent 520 replacement of calcite to the formation of clay particles and mixed aggregates found in SEM images 521 (Fig. 7). 522 The observed Rb enrichment in PM, compared to MK and TZ samples (Fig. 6D), results from the 523 weathering of K-bearing minerals, such as mica (e.g. Anderson et al., 2000;Hošek et al., 2015;Zech et 524 al., 2008). In the previous section, it is argued that in addition to its detrital origin, mica is an inherent 525 component of Sahara dust transport to Mount Olympus and is identified in small concentrations (~6%) 526 in PM loess. The loss of mica to smectite cannot be quantified, but it appears that after its initial 527 deposition, mica is subjected to post depositional weathering with removal of K (Buggle et al., 2011, 528 Bosq et al., 2020. This is supported by the low values of K/Rb elemental ratio (Fig. 9F), used on many 529 occasions to describe the weathering intensity and removal of K from loess deposits (Profe et al., 2016). conditions under the cirque headwalls are low due to extensive snow cover, slope steepness, aspect, and 535 high production rates of coarse carbonate debris that enhances percolation of snowmelt. Translocation 536 of clay particles in the coarse matrix of glacial till and stratified scree deposits may also be responsible 537 for the minor contents of fine particles, but the assessment of these factors is beyond the scope of this 538 study. 539 540
Relative chronology of PM loess 541
The main step in establishing the relative chronology of PM loess deposition is to constrain the transition 542 period between the upper and lower layers from 14 to 16 cm of profile depth that partition several 543 sedimentological and geochemical changes. The 15% increase of M3 concentration along the transition 544 layer (Fig. 4C), suggests a growth in Sahara dust availability that can be associated with the 545 midHolocene termination of the African Humid Period (AHP; 10-6 ka BP) and the regional climatic The curve similarity of the three profiles shown in Fig. 10, tentatively confirms the previous 573 consideration that the transition period between the lower and upper layers of PM loess broadly coincides with the termination of African Humid Period at ~6 ka BP. A subsequent peak in Sahara dust transport 575 around 4.5 ka BP marks the upper boundary of this transition layer. Of particular interest is the temporal 576 constrain of the profile base with the relative date of sample PM3 placed ~10 ka BP. This implies that 577 the calcite rich samples PM1 and PM2 were deposited during the initial stages of Mount Olympus 578 deglaciation phase, between 12 and 10 ka BP, in agreement with the stabilization of moraines in TZ 579 cirque ( Fig. 2A). During this phase, influx of meltwater from the retreating cirque glaciers provided 580 aggressive solutions that were reacting with the carbonate bedrock dissolving it in high rate. westerlies. A mid-Holocene swift in the regional climate from Atlantic to Mediterranean type with drier 625 conditions and more frequent periods of Scirocco winds coincided with the termination of AHP and 626 increased deflation of Sahara dust grains from the desiccated areas. This regional climatic shift resulted 627 in prominent increases in the aeolian silt deposition (increase of M3 concentrations) and Zr/Al ratio 628 between 6 and 4.5 ka BP, with a concomitant decrease in the pedogenic modification of the deposited 629 dust and decreasing clay particle formation (decrease of M1 and M2 concentrations). Contrary to the 630 enhancement of Sahara dust transport on Mount Olympus since 6 ka BP, is the decrease of local dust 631 from the Pieria mountaintops and Katerini plain, as shown by the correlation of clay and fine silt with 632 Cr and Ni. The associated decrease of clay concentration with the heavy elements, can result either from 633 decreases either in summer convection and/or to northern winds outbreaks. 634 635 PM loess is decalcified and subjected to secondary syn or post depositional chemical weathering, which 636 include removal of Ca and K respectively. The upwards decreasing trends of Ca/Sr and K/Rb imply that 637 the elemental modification of PM loess has been gradual and independent of the aeolian deposition and regional climatic dynamics. The secondary mineralogical modification may be responsible for the high 639 amounts of smectite and kaolinite observed in the clay fraction, through weathering of mica to smectite 640 and plagioclase to kaolinite, but further conclusions on these processes cannot be achieved through the 641 analyses presented here. In addition, during deposition of the upper PM loess layer (6 -0 ka BP), wetter 642 than present summer conditions likely resulted in waterlogging and subsequent dissolution of Fe from 643 the Fe-Ti oxides (deferrification) and to pedogenic depletion of the magnetic signal. 644 645 Overall, the mechanisms responsible for the formation of PM loess are complex and involve several 646 convoluting processes, such as mechanical weathering of the glacial carbonate debris, chemical 647 dissolution of the weathered products, syn and post depositional alteration and formation of aggregates, 648 pedogenetic modification and aeolian dust deposition from local and regional sources. In the absence of 649 continuous reconstructions from Mediterranean alpine settings, future analyses of alpine loess deposits 650 in the sub cm scale can provide a powerful tool to study the local weathering dynamics and regional 651 atmospheric circulation patterns, focusing on periods of Sahara dust events and enhanced Sirocco winds 652 throughout the Holocene. 653 654 655 | 4,552.8 | 2021-10-19T00:00:00.000 | [
"Geography",
"Environmental Science",
"Geology"
] |
TASK-DEPENDENT BAND-SELECTION OF HYPERSPECTRAL IMAGES BY PROJECTION-BASED RANDOM FORESTS
The automatic classification of land cover types from hyperspectral images is a challenging problem due to (among others) the large amount of spectral bands and their high spatial and spectral correlation. The extraction of meaningful features, that enables a subsequent classifier to distinguish between different land cover classes, is often limited to a subset of all available data dimensions which is found by band selection techniques or other methods of dimensionality reduction. This work applies Projection-Based Random Forests to hyperspectral images, which not only overcome the need of an explicit feature extraction, but also provide mechanisms to automatically select spectral bands that contain original (i.e. non-redundant) as well as highly meaningful information for the given classification task. The proposed method is applied to four challenging hyperspectral datasets and it is shown that the effective number of spectral bands can be considerably limited without loosing too much of classification performance, e.g. a loss of 1% accuracy if roughly 13% of all available bands are used.
INTRODUCTION
The semantic analysis of hyperspectral images is of outmost importance in many applications as for example urban planning (Taubenbck et al., 2012) or agriculture surveys (Alcantara et al., 2012), but states on the other hand a hard challenge due to the high dimensionality of the data, the high spatial and spectral correlation, the high in-class variation, as well as measurement noise.The high number of spectral bands hinders a direct and exhaustive visualization of the image data and makes the usually applied approach of extracting a large set of image features infeasible.
Common approaches to deal with the large number of spectral bands range from (semi-)automatic preselection of bands (e.g.manual rejection of noisy bands), to band-fusion by dimensionality reduction (e.g.principal component analysis (Benediktsson et al., 2005)), to more sophisticated band-selection techniques.The work of (Guo et al., 2006) proposes an approach based on information theory.It uses the mutual information between the spectral signatures of different target variables to select bands that are considered to be of equal information content with respect to the given classification task.In (Tuia et al., 2014) the authors propose an incremental selection of the best features from a large set of possible features.In each iteration new features are generated and only added to an active set, if the overall performance increases.The choice of an efficient classifier as well as enforcing sparseness of the active set reduce the computational load.The classification performance increases, if PCA is applied to the spectral bands prior to feature computation.
After reducing the amount of spectral bands, a (mostly predefined) set of feature extraction operators is applied to the remaining set of channels to further extract meaningful information.Examples are texture descriptors ((Pacifici et al., 2009)) and morphological operators ( (Tuia et al., 2009)).Landcover maps tend to be smooth in the sense that neighboring pixels have a high probability to belong to the same class (Schindler, 2012).This spatial context is exploited by the application of spatial image filters.Their output is used (potentially additionally to the original bands) during further classification steps (Fauvel et al., 2013).1.The set of features is defined a priori and heavily depends on expert knowledge.The set of filters might be suboptimal by lacking features important for the classification task while including other, less informative features.
2. The computational load of applying all filters of a large filterbank to all (or a reasonably sized subset of) spectral bands is tremendous.
This paper proposes an approach of automatic band selection, that relies on neither any kind of predefined features nor taskindependent dimensionality reduction techniques.Instead of any kind of preprocessing or explicit feature extraction, Projection-Based Random Forests (ProB-RFs) are directly applied to the hyperspectral image data.ProB-RFs have been introduced in (Hänsch, 2014) in the context of object classification from polarimetric synthetic aperture radar images and are a variation of the general concept of Random Forests (Breiman, 2001).They are designed and optimized for the semantic analysis of images, but keep the general advantages of RFs including their high efficiency during training and application as well as the ability to provide robust and accurate results.
ProB-RFs have been adapted and applied to hyperspectral data in (Hänsch and Hellwich, 2015), where the authors show the general applicability to classification tasks from hyperspectral images.The work discussed here focuses on the usage of ProB-RFs for automatic band selection instead of a standalone classification framework.In this context the high efficiency and the built-in feature selection of RFs are of particular interest.The band selection proposed by this work is based on two steps: On the one hand, the correlation of classification maps based on single bands (Figure 1 shows the corresponding correlation matrix in grey) is used to reject redundant bands, i.e. bands that contain information with similar descriptive power as other bands.On the other hand, provide RFs a built-in feature selection that forces the classification to focus on bands with superior information content (with respect to the given classification task).The usage frequency of the individual bands within the RF (visualized as blue curve in Figure 1) is an interesting insight into the given classification problem and can serve as information source to built more specialized systems.
The proposed approach of band selection, especially regarding correlation-based band rejection, is closely related to band clustering (Li et al., 2011).While band clustering merges only adjacent bands in one cluster, the groups as used by the proposed approach do not follow any predefined order.The idea to use correlation coefficients to group bands is also investigated in (Zhao et al., 2011).While the authors used the correlation between the data itself, the proposed methods computes the correlation of classification maps.Even if two bands show very distinct features (i.e.correlate less on the data level), the information contained in these bands might still be redundant given a specific classification task.Other works of band selection apply methods based on information theory such as mutual information (Martinez-Uso et al., 2006, Bigdeli et al., 2013, Li et al., 2011).The disadvantage of these approaches is, that two different methods are used to judge the descriptive power of a band and to actually use it to infer the classification decision.In the proposed work the classifier selects meaningful bands by itself.Redundant bands are rejected beforehand, but by classifiers of the same framework which ensures a higher consistency.
ProB-RFs as used in this work and their implicit feature computation and selection are discussed in Section 2., while Section 3. explains their usage for band selection.The proposed framework is applied to hyperspectral datasets in Section 4..The experimental results show that ProB-RFs not only lead to an accurate probabilistic estimate of the class posterior.They also provide information about which bands have been useful to solve the classification task and which bands do not contain descriptive information.These bands can be used to develop optimized expert systems to further increase classification performance or lower the computational load.
PROJECTION-BASED RANDOM FORESTS
As an instance of Ensemble Theory (Dietterich and Fisher, 2000) Random Forests combine the output of many (suboptimal) decision trees to one final system answer.Over the last decade many different tree-based ensemble learning methods have been proposed including Randomized Trees (Dietterich and Fisher, 2000), Extremely Randomized Trees (Geurts et al., 2006), Perfect Random Trees (Cutler and Zhao, 2001), Rotation Forests (Rodriguez et al., 2006), and Projection-Based RFs (ProB-RFs) (Hänsch, 2014).
The task of pixel-wise labelling is usually solved by computing a feature vector for each pixel, which serves as input to a classifier.The features can be as simple as the radiometric information contained in one pixel alone, or more sophisticated by including spatial and radiometric information from the neighborhood.Common Random Forests define decision boundaries, which are piecewise constant and parallel to the coordinate axes of the feature space.Instead of treating the provided pixel-wise feature vectors independently, ProB-RFs analyse the spatial context of images and are therefore especially well suited for image analysis problems.In (Hänsch, 2014) their classification capabilities have been shown in various image processing tasks with a focus on object categorization of polarimetric synthetic aperture radar data.
Their usage in the work proposed here is based on their adaption to hyperspectral images in (Hänsch and Hellwich, 2015).Similar to this work, no preprocessing of the data is performed, in particular no manual band selection or feature extraction.The classifier is directly applied to the hyperspectral images as they are.However, in contrast to (Hänsch and Hellwich, 2015) this work does not focus on the mere classification performance but rather on how ProB-RFs can be used in order to detect spectral bands that are meaningful for the classification task at hand.
ProB-RFs as used in this work belong to the group of supervised learning methods, i.e. tree creation and training are based on training data.For each sample of the training data the class label is provided additionally to the spectral information itself.Instead of using the whole dataset, each tree creates its own individual subset by drawing random samples from the training data (Bagging, (Breiman, 1996)).The process of tree creation can be interpreted as a partitioning procedure of these training samples.Each non-terminal node has two child nodes.Starting from the root node of each tree, each node applies a binary test on every data point of the local subset of the whole dataset, which was propagated to this node by its parent node.Based on the outcome of this binary test, a data point is propagated to either the left or the right child node, respectively.
To exploit the spatial context of images image patches are used instead of single pixels.This allows the classifier to access not only radiometric information of the center pixel (i.e. the pixel under investigation) as well as the radiometric information in its surrounding, but also spatial (e.g.texture) information.
In this work hyperspectral images with B spectral bands are used.
No predefined features are computed, but the hyperspectral data is used as it is.ProB-RFs represent each data point x as a threedimensional data cube x ∈ R B,Nx,Ny , where Nx × Ny is the spatial dimension of the used local neighborhood.
The test function te : R B,Nx,Ny → {0, 1} is not defined in this high-dimensional space directly.All data points x are projected to scalar values x ∈ R by a projection function pr : R B,Nx,Ny → R.
The projection function selects one spectral band b and applies an operator op (e.g.average, minimal/maximal value) to one to four regions Ri within the patch (based on the projection type pt).
The final scalar value is the difference of the operator outputs (see Equation 2).
All parameters of the projection used by a node (spectral band, region size and position, reference value rv, operator) are randomly sampled (see (Hänsch, 2014) for more information).
The spectral-spatial projection represents each high-dimensional data cube x as a single real-valued scalar x.The test function (Equation 1) becomes a simple threshold operation (i.e.x < θ?) within this one-dimensional space.The test outcome determines whether a data point is propagated to the left or right child node.
There exist several approaches to define the split point θ, ranging from simple uniform sampling to supervised selection approaches for example based on the probability of misclassification.This work defines the split point as the median of the set of projected values Dn t ⊂ R at node nt.It is sufficiently easy to compute and leads to equally sized subsets that are propagated to the child nodes.Given a sufficient tree height, this provides a fine partition of the input space and leads to accurate results.
The created splits rely only on the data itself, but do not depend on the supervision signal provided by the training data.To generate splits that are stronger optimized with respect to the classification task, each node creates several test functions, i.e. based on different projections (e.g. by selecting different spectral bands).Optimal splits would lead to child nodes, that are of equal size (i.e.contain the same amount of data) and are as pure as possible (i.e.contain samples of as few classes as possible ).While balanced splits are ensured by the median-based split function, the impurity I(n) of the child nodes is estimated by the Gini-index (Equation 3) based on the local posterior class distribution of the corresponding sample subset.For each of the possible split candidates the drop of impurity ∆I(θ) from the parent nt to the child nodes n t,L/R is computed by Equation 4, where C is the set of class labels, P (c|n) is the local class posterior estimate at node n, and P L/R denotes the relative size of the child nodes.From all split candidates the one with the largest drop of impurity is selected.
The recursive splitting of the input set is stopped and a terminal node (leaf) is created instead of a non-terminal node, if either the tree has reached a maximal height, all samples belong to the same class, or the number of samples at this node is below a certain threshold.In this case the local class posterior P (c|nt) is estimated from the samples within this leaf nt and assigned to it.
A query sample x is propagated through all trees of the forest during prediction.It falls in exactly one leaf per tree.The final class posterior is estimated as the weighted sum in Equation 5, where the weight wn t depends on the size of nt (see (Hänsch, 2014) for details).
RFs offer an interesting method of assessing the strength of the individual base learners, which plays an important role within this work.Since each tree only uses a certain subset of all training samples (bagging), there is a set of samples that have never been used by a tree.These out-of-bag (OOB) samples can be used to estimate an approximation of the generalization error without the need of an additional hold-out set.Each tree uses its own OOBsamples, estimates their class posterior and compares it with the reference signal.The computed error serves as measurement of the strength of each individual tree.
AUTOMATIC BAND-SELECTION
While the previous Section 2. briefly explains the principle framework of ProB-RFs, this section discusses how the characteristics and mechanisms of ProB-RFs can be used for automatic band selection of hyperspectral data.The overall goal is to limit the total number of spectral bands without loosing too much classification accuracy.
This work investigates two possible reasons to decrease the influence of a spectral band on the classifier: 1) The band does not contain information, that is meaningful for the given classification task given the information contained within the other bands.The reason might be, that it does not contain any meaningful information at all, as for example very noisy bands, or another band contains similar information but in higher quality as for example with less noise or higher contrast.This case is discussed in Subsection 3.1.
2) Certain spectral bands might be redundant with respect to the classification task.The measurements of hyperspectral images are not completely independent of each other, but the data of two spectrally adjacent bands will correlate to a certain extent.The similarity of the data contained in these bands causes the classification decisions based on them to be correlated as well.Even if the data of two bands is not similar in terms of correlation, the classification results obtained from these bands can still correlate.
In both cases, the data contained in one band can not contribute new information to the classification decision, if the data of the other band is already available and used.This case is discussed in Subsection 3.2.
Descriptive Band Selection
In this work each node of the ProB-RF creates multiple candidate splits and selects the best of these splits based on the drop of impurity (see Section 2.).The different splits are based on different projections, in particular different spectral bands are used.In this way, each band is tested many times with different spatial projections, whether it can lead to a significant decrease of impurity.
A band that is more descriptive than others will be selected more often.A band that does not contain meaningful information with respect to the classification task given the other bands will be less often selected.Thus, the overall frequency with which the nodes of the forest used a specific spectral band is a strong indicator for its descriptive power.
This band selection is carried out during the training procedure of the ProB-RF.It is a direct byproduct of the tree creation process and does not require any additional calculations.
Redundant Band Rejection
The built-in feature selection of ProB-RF (as discussed above) is only able to select bands that are descriptive with respect to the classification task, but it cannot detect bands that contain redundant information.A straight-forward attempt to detect those bands is to compute the correlation coefficient between the corresponding slices of the hyperspectral image cube.However, with respect to the classification task, the image data of two bands can show only small correlation, but are still not able to contribute new information to the classification decision.
In order to detect those redundant bands, an additional ProB-RF is created, which contains as many trees as spectral bands.Each of these trees have access to only one single band and no two trees have access to the same band.Consequently, the preliminary classification decision of each of the homogeneous-feature trees (HFT) depends on one of the spectral bands alone.After all HFTs have been trained, they are applied to the provided image data and the individual classification maps are compared.
The correlation between two of these maps is used as indicator, whether the two corresponding spectral bands are redundant.If the correlation coefficient between two classification maps lies above a specified threshold, only the band is kept, where the corresponding HFT has the lower OOB-error.
After the redundant bands have been detected and rejected, the subsequent training and application of the final ProB-RF is based on the remaining bands alone.
EXPERIMENTS
Four different datasets are used to evaluate the proposed band selection schemes.These datasets cover natural and man-made targets, contain low as well as high-resolution images with different numbers of spectral bands, and represent classification tasks with different numbers of classes.
Indian Pines 1992
The Indian Pines 1992 dataset was acquired by the AVIRIS spectrometer over North-western Indiana in June 1992.The image data contains 145 × 145 pixels with a resolution of 20m and contains 220 spectral bands in the wavelength range 400−2500 nm.The available ground truth provides labels of 16 different crop types for 10, 366 pixels.Figure 2a shows an exemplary band of this dataset, which often serves as benchmark due to two major challenges: 1) The number of training samples are unevenly distributed among the classes.2) Some of the crops are in a very early stage of growth causing a strong mixture between plant and soil signatures.A typical preprocessing step for this dataset is the manual removal of bands covering the region of water absorption.This preprocessing step is omitted in the current study.
Kennedy Space Center
The Kennedy Space Center dataset shown in Figure 2b is acquired by the AVIRIS sensor over Florida in 1996.Since the images are taken at an altitude of approximately 20km, the spatial resolution of this dataset is 18m.The ground truth provides labels of 13 different land cover types.2d show a sample band of these datasets, where areas with no information have been removed.
Label generation and evaluation criteria
The classification performance is measured by the balanced accuracy, i.e. the average true positive rate over all classes.In each experiment 10% of all labelled samples are randomly selected for testing.The remaining samples are used for training, but excluding those in a 3 × 3 neighborhood of the test samples.Each experiment is repeated ten times and the performance averaged.It should be noted, that this information is a direct byproduct of the classification process based on any kind of RFs, that generate multiple tests for split selection.No additional dimensionality reduction techniques have to be applied beforehand or afterwards.This built-in feature selection allows the classifier to focus on information that is actually important to solve the classification task at hand.It is therefore highly task-dependent.Given the same data but a different classification task, the usage frequency of the individual bands will change if other bands prove to contain descriptive information for this task.The obtained information can subsequently be used to built expert systems to further improve the classification performance if necessary.
Classification performance
Figure 5 shows the confusion matrices obtained by averaging over ten experiments.In each run a single ProB-RF as described in Section 2. is trained and evaluated on the hyperspectral data, where the corresponding usage frequencies of the spectral bands are presented in Section 4.3.1In all four cases the balanced accuracy a (i.e.average true positive rate) is over 90%.
To put the achieved performance into perspective, the work of (Tuia et al., 2014) shall serve as an example which proposes a highly sophisticated method of iterative feature selection based on active sets.The reported performance for the Indian Pines 1992 dataset is κ = 0.83 ± 0.02, which is further increased to κ = 0.89 ± 0.03 by using PCA to the original bands before feature computation.The processing steps necessary to achieve this performance involve the manual rejection of noisy bands, application of PCA, enforcing a balanced training set, computation of a large set of features, and a complex iterative feature selection method.As discussed above, ProB-RF are directly applied to the original data and still achieve a performance of κ = 0.85 ± 0.03.
Redundant band removal
Although the built-in feature selection of ProB-RF reliably rejects bands with no descriptive power, it cannot detect bands that contain redundant information with respect to the given classification task.For this goal an additional ProB-RF is generated prior to the creation of the final classifier.This forest consists of as many homogeneous feature trees (HFTs) as there are spectral bands, i.e. trees that have access to only a single spectral band (see Section 3.2).
Figure 6 shows the correlation matrices of the spectral bands based on the corresponding HFTs for all four datasets.A high correlation at position (t1, t2) means, that the corresponding HFTs t1 and t2 made the same decisions despite having access to two different bands b1 and b2.The higher the individual strength of the trees (i.e.low OOB-error), the more correct decisions are made, and the higher is the correlation.Figure 6 shows, that several bands show low correlation with all other bands.These bands are very likely to contain less information about the given classification task, causing the corresponding HFT to make wrong decisions which (by definition) do not correlate with the (more correct) decisions of other trees.
Also visible in Figure 6 are groups of bands, that have high correlation between each other, but low correlation to bands outside of the group.These bands are considered to contain redundant information with respect to the given classification task.Using the whole group, or only one suitable exemplar of this group will not significantly change the quality of information the ProB-RF has access to solve the classification task.Therefore, these bands can be removed to limit the total amount of bands.Figure 7 shows how many bands remain and how the accuracy of the classification maps changes, if from a group of bands, that have a pairwise correlation over a given threshold, only the strongest (in terms of OOB-error of the HFT) is used and the others are rejected.If the correlation threshold is low, all bands are considered as redundant and only the strongest band is selected.Although the subsequent classification by a ProB-RF is based on one single spectral band alone, the classification accuracy is still in a reasonable range.
When the correlation threshold is increased, less and less bands are considered as redundant, leading to a larger number of bands and an increased classification performance.When the correlation threshold is high enough, no bands are considered as redundant and all bands are used leading to the highest performance.However, as can be seen in Figure 7, a classification performance close to the top-performance of using all bands can already be achieved with considerably few bands.Using only a single strong band leads to an accuracy of 83.8% for the Indian Pines 1992 dataset, which is considerably increased to 89.3% by using 24 bands.By using all 220 bands, the further gain in accuracy is only 1.3%.The usage of only 20 out of 176 bands of the Kennedy Space Center dataset leads to an accuracy of 97.3%, which could not be improved by using more bands.For the Pavia Center and Pavia University datasets, 12 and 18 of the roughly 100 bands have been enough to reach a performance of 97.8% and 96.2%, respectively, which increased only slightly by less than 1% if all bands are used.This fact is emphasized in Figure 8, which summarizes the relationship between the number of used bands and the classification accuracy.Although the achieved accuracy is a monotonous function with respect to the number of bands, it increases only slightly if more than a certain number of bands is used.The largest increase of accuracy is achieved by using around the 20 strongest of the available bands.On average, using more than 13% of the spectral bands increased the classification accuracy less than 1%.The effective number of bands, i.e. number of bands with a significant usage frequency, is therefore lower than the number of input bands.A final classifier, which might incorporate also more sophisticated features than the bands itself, should be based on these bands alone.They contain original (i.e.non-redundant) as well as highly meaningful information for the given classification task.If necessary, it might be worth to try to access this information by more complex features.Since the number of bands is considerably limited at this step, an exhaustive application of feature operators is feasible.
CONCLUSION AND FUTURE WORK
The Prob-RF classifier, that is used in this work, does not rely on a computationally expensive feature extraction step, but does work directly on the hyperspectral images.Nevertheless, it automatically computes semantic maps with state-of-the-art accuracy.
The high efficiency, accuracy, and robustness of this classifier is exploited to gain a deeper insight into the classification task.The built-in feature selection capabilities of RFs is used to estimate how relevant each spectral band is for the given classification task.The relevance measurement is based on the relative frequency with which a spectral band is used by the nodes of all trees in the forest.
The feature selection of RFs is only able to detect whether a band contains descriptive information with respect to a specific classification task.It is however unable to detect, whether a group of bands contains the same or similar useful information.In order to find these sets of bands an additional ProB-RF is created prior to the final classification.This RF contains only trees, that have access to one single band.The correlation of the resulting classification maps of two individual trees serves as measurement whether the information contained in the corresponding bands is redundant.
The experiments show that both approaches increase the classification accuracy.The number of spectral bands can be considerably limited without a significant loss of classification accuracy.
On average the usage of only roughly 13% of all available bands resulted in a decreased accuracy of less than 1%.
Future work will investigate the characteristics of the proposed method further, especially with respect to two effects: 1.The automatic feature selection of RFs becomes more and more random the higher within a tree it is carried out.Most of the "easy" decisions are already made by then and the continued splitting is more and more based on noise or random fluctuations within the data.This effect should be taken into account, if the relative usage frequency of a band is used to measure its importance for a given classification task.
2. The stronger the individual trees, the higher is the correlation between the corresponding classification maps.This leads to the fact that good bands show stronger correlation and are more likely to be considered as redundant than weak bands.Consequently, the redundancy estimation based on the correlation of classification maps should be corrected for this bias.
Figure 1 :
Figure 1: Proposed band selection based on usage frequencies of individual bands in a ProB-RF (red, blue) and correlation between classification maps (grey) 4.1.3Pavia Center and Pavia University Both datasets are acquired over Pavia, Italy, by the ROSIS sensor and have a spatial resolution of 1.3m.Pavia Center is a 1096×1096 pixels large image with 102 spectral bands, while Pavia University is 610 × 610 pixels large and consists of 103 spectral bands.The ground truth provided labels of nine different classes.Figures 2c-
Figure 4 :
Figure 2: Sample bands of Pavia datasets
Figure 5 :
Figure 5: Confusion matrices (with blue and red colors corresponding to zero and one, respectively), κ-statistic, and balanced accuracy a for different datasets
Figure 7 :
Figure 6: Correlation of bands based on classification maps
Figure 8 :
Figure 8: Classification accuracy versus (relative) number of bands
Figure 9 :
Figure 9: Usage frequency of all (blue) and selected (red) spectral bands | 6,718.6 | 2016-06-07T00:00:00.000 | [
"Environmental Science",
"Computer Science"
] |
MPRAdecoder: Processing of the Raw MPRA Data With a priori Unknown Sequences of the Region of Interest and Associated Barcodes
Massively parallel reporter assays (MPRAs) enable high-throughput functional evaluation of numerous DNA regulatory elements and/or their mutant variants. The assays are based on the construction of reporter plasmid libraries containing two variable parts, a region of interest (ROI) and a barcode (BC), located outside and within the transcription unit, respectively. Importantly, each plasmid molecule in a such a highly diverse library is characterized by a unique BC–ROI association. The reporter constructs are delivered to target cells and expression of BCs at the transcript level is assayed by RT-PCR followed by next-generation sequencing (NGS). The obtained values are normalized to the abundance of BCs in the plasmid DNA sample. Altogether, this allows evaluating the regulatory potential of the associated ROI sequences. However, depending on the MPRA library construction design, the BC and ROI sequences as well as their associations can be a priori unknown. In such a case, the BC and ROI sequences, their possible mutant variants, and unambiguous BC–ROI associations have to be identified, whereas all uncertain cases have to be excluded from the analysis. Besides the preparation of additional “mapping” samples for NGS, this also requires specific bioinformatics tools. Here, we present a pipeline for processing raw MPRA data obtained by NGS for reporter construct libraries with a priori unknown sequences of BCs and ROIs. The pipeline robustly identifies unambiguous (so-called genuine) BCs and ROIs associated with them, calculates the normalized expression level for each BC and the averaged values for each ROI, and provides a graphical visualization of the processed data.
INTRODUCTION
Although numerous regulatory elements have been identified in eukaryotic genomes (Narlikar and Ovcharenko, 2009;Taher et al., 2011;Kellis et al., 2014), so far there is no complete understanding of why these elements are active in specific cell types and at specific levels. Accordingly, the effect of a particular mutation within a regulatory element can be hardly predicted, especially for a particular cell type (1000Genomes Project Consortium et al., 2015Albert and Kruglyak, 2015;Rojano et al., 2019). The recent development of massively parallel reporter assays (MPRAs) allows high-throughput functional characterization of native transcriptional regulatory elements (first of all, enhancers and promoters) as well as their mutant variants (reviewed in Haberle and Lenhard, 2012;Inoue and Ahituv, 2015;Trauernicht et al., 2020;Mulvey et al., 2021). In an MPRA, regions of interests (ROIs), e.g., putative enhancers or promoters, together with unique barcodes (BCs) are assembled into reporter constructs to obtain MPRA plasmid libraries that consist of thousands or even millions of individual molecules (Kheradpour et al., 2013;Kwasnieski et al., 2014;van Arensbergen et al., 2019). Specific MPRA libraries can also be packaged in lentiviruses to deliver reporter constructs into the target genome (O'Connell et al., 2016;Inoue et al., 2017;Maricque et al., 2017;Gordon et al., 2020).
From the structural point of view, BCs are always placed within the transcription unit [usually in the 5 or 3 untranslated region (UTR)], whereas ROIs are typically outside this unit ( Figure 1A). As a result, the BC sequences are present in the reporter mRNA molecules and, thus, allow quantitative evaluation of the regulatory effects caused by their cis-paired ROI variants using next-generation sequencing (NGS) ( Figure 1B and Supplementary Figure 1). For that, cells of interest are transfected by an MPRA plasmid library or transduced by a lentiviral MPRA library, and subsequently, transcriptional activity levels of barcoded reporters are assessed on episomal plasmids and/or after stable integration of the constructs at random or specific genomic loci (Melnikov et al., 2012;Sharon et al., 2012;Kheradpour et al., 2013;White et al., 2013;O'Connell et al., 2016;Tewhey et al., 2016;Ulirsch et al., 2016;Maricque et al., 2017;Inoue et al., 2019). More specifically, the "expression" and "normalization" samples are prepared by PCR amplification of the BC sequences from cDNA synthesized on total RNA isolated from the transfected/transduced cells and the plasmid DNA used to transfect cells or total DNA isolated from the transduced cells, respectively. These samples are subjected to NGS to determine the normalized expression level of each BC, which is calculated as the ratio between the BC abundance in the expression and normalization samples.
Depending on the MPRA library design, the ROI and BC sequences as well as their associations can be either a priori known or not. Completely predetermined MPRA libraries are generated by using sequences synthesized on custom highdensity DNA microarrays (Patwardhan et al., 2009;Melnikov et al., 2012;Sharon et al., 2012;Kwasnieski et al., 2014). MPRA libraries with unknown sequences of ROIs and BCs are made by cloning randomly sheared genomic fragments or pooled synthetic DNA fragments or by PCR-mediated mutagenesis and/or by cloning oligonucleotides containing randomized stretches of nucleotides (Patwardhan et al., 2012;Mogno et al., 2013;Vvedenskaya et al., 2015;Verfaillie et al., 2016;van Arensbergen et al., 2017;Kircher et al., 2019;Omelina et al., 2019). In some cases, the ROI sequences are predetermined although associated BCs are not known in advance (Smith et al., 2013;O'Connell et al., 2016;Tewhey et al., 2016;Grossman et al., 2017;Gordon et al., 2020). For the libraries that are not completely predetermined, there is a need to identify cloned ROI and/or BC sequences as well as their associations. Hereafter, the procedure of finding unique BC-ROI associations is referred to as "mapping" by analogy with the thousands of reporters integrated in parallel (TRIP) experiments (Akhtar et al., 2013(Akhtar et al., , 2014. The mapping is typically done by PCR amplification of BC-ROI regions of MPRA constructs followed by Illumina NGS (Patwardhan et al., 2012;Mogno et al., 2013;Tewhey et al., 2016;Omelina et al., 2019). Importantly, associations of the same BC with different ROI sequences are excluded from the further analysis although the association of the same ROI with different BCs allows revealing and excluding the possible influence of particular BC sequences on the measurements.
The MPRAdecoder pipeline described in this study was developed for the processing of NGS data generated for MPRA libraries with a priori unknown sequences of ROIs and BCs, for example, those cloned by the usage of oligonucleotides with randomized stretches of nucleotides. The pipeline (i) robustly identifies unambiguous (hereafter genuine) BCs and their mutant variants as well as associated ROIs, (ii) calculates the normalized expression level for each genuine BC and the averaged values for each ROI, and (iii) provides a graphical visualization of the processed data. The functionality of the pipeline was demonstrated using a data set obtained for an MPRA library designed to study the effects of sequence variations located at a certain distance downstream of the transcription termination site (TTS) of the eGFP reporter on its expression at the transcription level.
Preparation of the MPRA Mapping, Expression, and Normalization Samples and Illumina NGS
The MPRA plasmid library, in which random-sequence BC and ROI are separated by an 83-nt fixed-sequence region and located, respectively, in 3 UTR and downstream of the TTS of the eGFP reporter, was generated earlier (Omelina et al., 2019). The wild-type and mutant deltaC (Boldyreva et al., 2021) reporter plasmids carrying specific 20-nt BCs were constructed by standard molecular cloning procedures and verified by sequencing. An equimolar pool of two such wild-type and two deltaC mutant plasmids was mixed in a 1:99 molar ratio with the MPRA plasmid library. Immortalized human embryonic kidney (HEK293T) cells were obtained from ATCC (United States) and were maintained and transfected as described previously (Boldyreva et al., 2021).
The mapping samples were prepared according to a previously reported two-round conventional PCR procedure that prevents the formation of chimeric products (Omelina et al., 2019). Briefly, primers specific to the ends of fixed sequences mCP1 and mCP3 ( Figure 1B and Table 1) were used, and a specific, customdesigned 8-nt index along with other sequences necessary for Illumina NGS was introduced in the PCR products of each sample replicate. The normalization samples were obtained in the same way, using primers specific to the ends of fixed sequences neCP1 and neCP2 ( Figure 1B and Table 1) and 2.5 ng of the plasmid library as a template. To prepare expression samples, BCs were amplified as specified above but using 1/20 of cDNA prepared from the transfected cells as described earlier (Boldyreva et al., 2021) as a template. Phusion High-Fidelity DNA Polymerase (Thermo Fisher Scientific) was used for all amplification reactions. All obtained PCR products were purified on spin columns, mixed together, and sequenced on an Illumina MiSeq instrument as 151-nt single-end reads. Notice that the read length was shorter than the amplified plasmid fragments for all samples. Therefore, there was no need to remove Illumina adapter sequences from the reads. Finally, to prepare an example data set, a representative subset of the reads was randomly selected from the obtained fastq file. A copy of this subset was demultiplexed using Cutadapt (Martin, 2011).
Pipeline Code and Documentation Availability
The MPRAdecoder pipeline source code written in Python, the example data set, and the corresponding expected outputs as well as detailed documentation are publicly available on GitHub repository 1 .
Hardware and Software Requirements
The MPRAdecoder installation and analyses were performed on a computer with an Intel R Core TM i7-3770 processor, 31.4 Gb RAM, Linux Ubuntu 14.04 64-bit system, and Python version 3.8.6.
Overview of the MPRAdecoder Pipeline
A workflow of the MPRAdecoder pipeline is shown in Figure 2. Briefly, after providing details of a particular MPRA data set to be analyzed, the pipeline parses the input fastq file(s) and demultiplexes them if required. Next, all expected parts of the mapping, normalization, and expression reads are detected, particularly the sequences of BCs and ROIs. Then, a list of BCs common for all samples is generated with the assumption that some BCs have zero counts in the expression data. After that, genuine BCs and their mutant variants as well as associated 1 https://github.com/Code-master2020/MPRAdecoder ROIs are identified. Finally, the data are averaged over expression and normalization replicates, normalized, and averaged over ROIs, and the results are visualized in different plots. Below, these steps are described in more detail with the help of the example MPRA data set.
Characteristics of the Example Data Set
To demonstrate the capabilities of the MPRAdecoder pipeline, we used a data set consisting of two biological replicates of mapping, normalization, and expression samples obtained using an MPRA library, in which the BC and ROI (both cloned by using oligonucleotides containing fully randomized sequences) are located in 3 UTR and downstream of TTS, respectively (the option is shown at the bottom of Figure 1A), being separated by 83 nts of fixed sequence (Omelina et al., 2019). The samples were sequenced as 151-nt single-end reads on the Illumina MiSeq platform and were indexed with custom-designed 8-nt sequences located at the beginning of the reads ( Figure 1B). Important features of the data set are listed in Table 1. Note that the BC sequences were in forward and reverse-complement orientations in the mapping and normalization/expression samples, respectively. In addition, about 1% of the reads in each sample contained four unique 20-nt BCs associated with spikedin reference constructs; the TTCCAAGTGCAGGTTAGGCG and TGTGTACGGCTTGCTCTCAA sequences tagged the wildtype construct, whereas GAGCCCGGATCCACTCCAAG and TGTCACGTCAGCTAACCCAC sequences marked the deltaC mutant construct that is characterized by a higher expression level than the wild-type one (Boldyreva et al., 2021). The substantially longer length of the BC (18 nts) compared to the ROI (8 nts) ensures that each ROI is associated with multiple different BCs in a representative large plasmid library. This allows controlling the potential influence of individual BC sequences on the studied phenomenon.
Specifying Characteristics of an MPRA Data Set to Be Analyzed
The information on the input MPRA data set is provided in the two complementary forms. First, most details, such as (i) names and lengths of all expected parts in the mapping and normalization/expression reads for each MPRA library (including indexes), (ii) sequences of the predetermined parts (including indexes and optional reference BCs), (iii) relative orientation of BC sequences in mapping and normalization/expression reads, (iv) a maximum allowed error rate and the Phred quality score threshold for different parts, (v) a minimum number of read counts required for a BC and a BC-ROI association, and (vi) settings for identification of genuine BCs and associated ROIs, are specified in the configuration file. A detailed description of this file is available on the GitHub page of this project. Second, a user has to manually input the following details in the command prompt: (vii) names of the appropriate fastq file(s) and their locations as well as a location for output files, (viii) a number of replicates of each sample for each MPRA library, (ix) names of indexes used for sample multiplexing and (x) information on whether the fastq file(s) should be demultiplexed by the pipeline.
MPRA Data Demultiplexing by Pairwise Sequence Alignment
The pipeline is able to process either fastq files that are already demultiplexed, for example, by the Illumina software, or fastq files containing custom-designed index sequences at the beginning of the reads. In the latter case, detection of a predetermined index sequence in each read is performed using a pairwise sequence alignment tool from Biopython (Cock et al., 2009). For that, all index sequences specified in the configuration file are aligned, one by one, against the beginning of a read. The following alignment scoring system is used: +1 for a match, 0 for a mismatch, and -1 for an indel. If the maximum alignment score is higher than or equal to the threshold value (calculated as the index length -the maximum allowed error rate + 1 for each insertion) and the Phred quality score for each base (Cock et al., 2010) is higher than a threshold (equal to 10 for the example data set), the corresponding index sequence is considered to be identified; otherwise, the read is discarded. To generate the example data set, 8-nt index sequences differing from each other by at least 2 nts were used as suggested for the short (5-10 nts) predefined BCs (Patwardhan et al., 2009;Sharon et al., 2012). At the same time, the maximum allowed error rate was set to ∼10% based on our experience with PCR-amplification and subsequent NGS of predetermined sequences under experimental conditions identical to those used in this study (including the quality of oligonucleotide primers). Together, these factors ensure that one allowed single-base mutation (substitution, deletion, or insertion) in the index sequence cannot lead to an error in its identification. At the end, the reads are divided into an appropriate number of groups based on the detected indexes.
Identification of the BC and ROI Sequences in the Reads
Detection of the mCP1, mCP2, mCP3, neCP1, neCP2 ( Figure 1B and Table 1), and reference BC sequences in the reads is performed for each replicate of each sample by using the pairwise sequence alignment approach described above for the index, taking into account location(s) of the preceding part(s), which can be already identified (e.g., the mCP1/neCP1) or just estimated (e.g., the BC). Sequences of BCs and ROIs are defined as spacers between the appropriate constant parts. By default, the Phred quality scores are ignored for the mCP1, mCP2, mCP3, neCP1, and neCP2 sequences. For the BCs (including the reference ones) and ROIs, the quality score for each base should be higher than a threshold (e.g., set to 10 for the example data set); otherwise, reads are discarded from the downstream analysis. More specifically, in the case of the mapping reads, the process includes the following sequential steps. First, the mCP1 sequence is detected. Second, if sequences of the reference BCs are specified in the configuration file, the reads with such BCs are identified and excluded from the subsequent structural analysis. This is done because the functional sequences (e.g., wild-type or deltaC in the example data set) associated with the reference BCs might be located outside the ROI (e.g., within the mCP2 sequence as in the example data set). Third, the mCP2 sequence is detected, and the sequence between mCP1 and mCP2 is recognized as the BC if its length is within the range set in the configuration file (e.g., ≥16 and ≤20 nts for the example data set). Fourth, the mCP3 sequence is identified, and the sequence between mCP2 and mCP3 is recognized as the ROI if its length is within the range defined in the configuration file (e.g., ≥7 and ≤9 nts for the example data set). In the case of the normalization and expression reads, the last step is omitted. Lastly, if the ROI and/or BC sequences are in reverse-complement orientations in the mapping or normalization/expression samples (this is specified in the configuration file), they are converted to their forward counterparts.
Data Filtering and Generation of a List of Unique BCs
At the next step, the number of supporting reads for each BC (with a random or reference sequence) is counted for each replicate of all samples. Then, these numbers are divided by the total number of effective reads (i.e., those that passed all filters described above) in a replicate and multiplied by 1 × 10 6 to calculate the reads per million (RPM) values. After that, unique BC-ROI associations and BCs are assessed for reproducibility and robustness. Although preliminary results can be obtained using single replicates of the mapping, normalization, and expression samples, at least two replicates of each sample are strongly recommended. Under such conditions, only the BC-ROI associations that are revealed with at least m raw read counts (e.g., one for the example data set) in at least two out of any available number of replicates of the mapping data are retained for further analysis. Also, only the BCs with n raw read counts (e.g., three for the example data set) in each replicate of the normalization data are kept. For the expression data, the threshold read count e is set by default to zero, as some BCs might be present with very low frequency or even completely absent in the reporter transcripts due to the properties of particular ROI sequences. The threshold values m, n, and e are arbitrarily set in the configuration file. Finally, a list of BCs that are common for all samples is generated considering that some BCs might have zero counts in some or all replicates of the expression data.
Identification of Genuine BCs
Oligonucleotides with a totally randomized part (characterized by an equal representation of all four nucleotides at each position) of 15-20 nts in length can ensure cloning of ∼1 × 10 9 to 1 × 10 12 unique BCs, some of which might be different from each other just at one position. However, in practice, the size of FIGURE 3 | Identification of genuine BCs, their mutant variants, and associated ROIs. (A) The clustering of similar BC sequences is achieved by their decomposition into overlapping k-mers and by the subsequent pairwise alignment of BCs that share identical k-mers. At the top, three BCs are shown as an example. K-mers (6-mers) shared by BC 1 and BC 2 and by BC 1 and BC 3 are indicated by red and green arrows, respectively. At the bottom, the pairwise sequence alignment of the candidate similar BC sequences is depicted. The BC 1 and BC 2 are recognized to be similar because their sequences differ from each other only at two positions (≤the maximum allowed error rate). BC 1 and BC 3 are considered to be different because their sequences differ from each other at three positions (>the maximum allowed error rate) even though these BCs share more common k-mers than the BC 1 and BC 2 . (B) Identification of genuine BCs. One cluster of seven similar BCs along with the associated ROI sequences is shown as an example. BC 1 is the most abundant BC (as in A) and the ROI sequence, which is associated with it most frequently (n 1 > n 4 and n 1 > n 7 ), is considered as the putative ROI for the cluster. By default, if the putative ROI is supported by at least 90% of normalized read counts calculated for all ROI sequences found in the cluster, the BC 1 becomes genuine. Otherwise, the entire cluster is excluded from the subsequent analysis. Optionally (indicated by an asterisk), if mismatches within the ROI are permitted (e.g., a difference at one position could be allowed for the example data set), then normalized read counts for the putative ROI and its allowed mutants are summed. Notice that differences between the ROI sequences associated with similar BCs should be allowed with caution, especially for very short ROIs. Dashed horizontal lines separate different groups of ROIs: the putative sequence, its allowed mutants, and all other sequences. Gray arrowheads denote mismatches in both panels.
a typical MPRA plasmid library is significantly less (by orders of magnitude) than the theoretical values. Nevertheless, in MPRA data sets, BCs with similar sequences do appear, partly due to errors introduced during PCR amplification and NGS steps. Thus, there is a need to find similar BC sequences, group them, and identify the genuine BCs in each such group (referred to below as a cluster). Two BC sequences are considered to be similar if they differ at no more than s positions (by substitutions, deletions, and/or insertions), where s is equal to the maximum allowed error rate for this part. By default, up to two mismatches (E) Effect of the BC sequences on normalized expression values as estimated by using a subset of the ROIs, each associated with more than one BC. For each such ROI, only two different BCs, which are randomly assigned to groups "1st BC" and "2nd BC" are used for the comparison (for the ROIs associated with three or more BCs, only two of them are randomly sampled). At the top, density plots of normalized expression values obtained for BCs from the groups "1st BC" and "2nd BC" are shown. At the bottom, the correlation of these values between the groups is visualized as regular (left) and density (right) scatterplots. The rest of the description of the left scatterplot is as in (A). (F) Distribution of normalized expression values of genuine BCs, each associated with the ROI of the wild-type (WT) sequence. For (A-F), it is worthwhile noting that the plots shown were generated by using the entire fastq file obtained (see section "Materials and Methods"), from which the example data set was randomly sampled.
are allowed for BCs of the example data set, as suggested previously (Akhtar et al., 2013). Because identification of similar BCs by the means of alignment approaches is rather time-consuming, especially for thousands or even millions of sequences to compare (Song et al., 2014;Zielezinski et al., 2017), the MPRAdecoder pipeline first preselects candidate BCs for their subsequent pairwise sequence alignment ( Figure 3A). The preselection is achieved by decomposing all unique BC sequences into overlapping k-mers and then revealing BCs that share identical k-mers (Haubold, 2014;Zielezinski et al., 2017). The length of k-mers (e.g., six for the example data set) is calculated as the BC length/(s+ 1) rounded down to the nearest whole number. Next, BCs sharing each particular k-mer are directly compared by using the pairwise sequence alignment (see above), taking into account their normalized read counts (RPM values). Then, similar BCs are grouped into clusters, and a number of quality control steps are applied to ensure the absence of overlap between the clusters (ambiguous cases are removed).
After that, for each cluster, it is verified whether the most abundant ROI associated with the most abundant BC is supported by the majority of normalized read counts obtained for all ROI sequences present in a cluster ( Figure 3B). As a default setting, an arbitrary cutoff at ≥ 0.9 (specified in the configuration file) is used, similar to earlier studies (Akhtar et al., 2013;Mogno et al., 2013). If the criterion is not satisfied, probably due to associations of the same BC with different ROIs during the cloning by a chance or formation of chimeric molecules during PCR amplification of the mapping samples (Omelina et al., 2019), the entire cluster is excluded from the downstream analysis. If the criterion is satisfied, the most abundant BC and all other BCs are recognized as genuine and its mutant variants, respectively (the appropriate information is saved in a tab-delimited text file), and the RPM values of all BCs in such cluster are summed for each replicate of each sample. Eventually, all genuine BC sequences are different from each other by at least s + 1 position(s) (e.g., three for the example data set).
Data Normalization and Visualization
Once genuine BCs are identified, their RPM values in the normalization and expression replicates are averaged. Next, for each genuine BC, the normalized expression value is calculated as a ratio between its expression and normalization RPM values. Then, if reference constructs were spiked in the plasmid library, the pipeline can further normalize data by dividing them by the value obtained for one of these references (specified in the configuration file; e.g., for the wild-type construct in the case of the example data set). After that, values obtained with different genuine BCs but for the same ROI sequence are averaged. The raw and normalized read counts per unique BC-ROI association for each replicate of the mapping samples and per unique BC for each replicate of the expression and normalization samples, the RPM values averaged over these replicates as well as the ultimate expression values obtained for genuine BCs after each step of the data normalization and averaging are saved as tabdelimited text files. Also, the important details of data processing are reported in additional files. Among them are the numbers of allowed mismatches in the expected parts of the reads; the list of input fastq files used for a run; and statistics on (i) total and effective read counts per fastq file, (ii) numbers of unique and genuine BCs, and (iii) numbers of genuine BCs per ROI.
Finally, the pipeline generates a number of plots to help evaluate data quality and interpret the results (Figure 4). In particular, the reproducibility of the measurements between the replicates of the expression and normalization samples, the potential influence of the BC sequences on the measurements, and the sequence peculiarities of the ROIs with different properties are visualized.
Performance of the Pipeline
The pipeline can process 1 million reads of a non-demultiplexed fastq file in ∼20 min using the hardware and software specified in Materials and Methods. For larger data sets, the processing time can be estimated by assuming a linear dependence on the read number.
The MPRAdecoder pipeline is primarily intended for the processing of data obtained for MPRA libraries generated using oligonucleotides with randomized stretches of nucleotides for cloning the ROI and BC sequences. Such libraries are most suitable for the investigation of the properties of all possible sequence variants within a certain small region of a regulatory element. Considering the current capabilities of NGS as well as the necessity for several different BCs per ROI, the length of the region that can be subjected to saturation mutagenesis is in the range of 8-10 nts. The need for multiple BCs per ROI is dictated by the following two main factors. First, the BC sequences themselves might influence the measurements performed (Ernst et al., 2016;Ulirsch et al., 2016; Figure 4F), most probably due to occasional occurrence of binding sites for specific DNA-or RNA-binding proteins or microRNA in them. Therefore, in order to identify and exclude such cases, it is necessary to analyze each ROI sequence in combination with different BCs. Second, mutations may appear in both the ROI and BC sequences due to errors in PCR amplification and NGS although the frequency of such events was previously estimated to be relatively low (the error rate per nt ≤ 0.3%) (Pfeiffer et al., 2018;Ma et al., 2019). At the same time, all possible variants of the short ROI sequence are expected to be present in a high-quality MPRA library, making identification of mutant ROI variants in the reads practically impossible. However, the use of multiple BCs for each ROI allows detecting outliers, which can be, in particular, caused by mutated ROI sequences, and excluding them from the analysis.
Multiple BCs per ROI can be simply ensured by a longer sequence of the BCs compared to the ROIs (e.g., 18 and 8 nts, respectively, in the example MPRA library). In addition, such design allows excluding as much as possible mutant or just very similar BC sequences from the analysis. Namely, only such BCs (referred to as genuine) (Akhtar et al., 2013;Omelina et al., 2019) are used, which sequences differ from each other by at least a certain number of nts. For example, when predefined BCs up to 20 nts in length are used, the difference between each pair of them of at least at two to three positions is typically set (Patwardhan et al., 2009;Sharon et al., 2012). For BCs with random sequences of 16 nts in length, the minimum difference at three positions also provides reliable measurements (Akhtar et al., 2013(Akhtar et al., , 2014. In our case, we linked the allowed error rate in the BC sequences (as well as in all other parts of the reads, except for the ROI, in which we do not allow errors by default) with the experimentally determined error rate detected for fixed sequences amplified and sequenced in same conditions. Note that with the ROI length of 8 nts, a total of 4 8 = 65,536 sequence variants are possible, whereas the BC length of 18 nts provides 4 18 = 68,719,476,736 variants. Of the latter, obviously, not all can be genuine BCs (satisfy the Levenshtein distance ≥ 3) (Faircloth and Glenn, 2012;Hawkins et al., 2018), but nevertheless, each ROI can be associated with more than enough number of different BCs.
The use of oligonucleotides with randomized stretches of nucleotides to clone the ROIs and BCs as well as regular primers to amplify the mapping, normalization, and expression samples means that the following considerations should be taken into account during the processing of raw MPRA data. First, although synthetic oligonucleotides are purified by polyacrylamide gel electrophoresis (PAGE) or high-performance liquid chromatography (HPLC), their actual length in the preparation may vary due to the presence of deletions (more often) and insertions (less often) ( Figure 4B). Second, our experience shows that most errors found in the reads come from imperfection in oligonucleotide primer synthesis and purification (however, this could strongly depend on a supplier). Therefore, substitutions, deletions, and insertions are quite possible in the sequences of the ROIs and BCs as well as in the regions of the constant parts flanking them (that were generated by oligonucleotides used at the plasmid library cloning step). The same is true for the edges of PCR-amplified products, which are introduced by appropriate primer pairs. Along with the general drop in the quality of sequencing toward the end of the reads, this is the main reason why we allow a fairly high percentage of errors (∼10%) in all expected parts of the reads. The described issues with the use of synthesized oligonucleotides are generally consistent with previous studies (Faircloth and Glenn, 2012;Hawkins et al., 2018). In addition, considering the possible variation in the BC length, especially its shortening (Figure 4B), it seems reasonable to equip the reference constructs that can be spiked into an MPRA library with slightly longer BC sequences (e.g., 20 nts in the example MPRA library). This could minimize the chance of accidental coincidence of sequences of the reference BC and a random BC.
Because many of the pipeline settings are arbitrary (set in the configuration file), it is important to note the following. First, of course, it is possible to set the allowed error level for all expected parts of reads to 0%; however, in the case of the example data set, this leads to a decrease in the number of genuine BCs by more than two times compared with the default settings described above. Second, because it is well known that the quality of sequencing gradually decreases toward the end of the reads, it seems appropriate to map the mCP3 and neCP2 regions in the reads not completely, but only by their beginnings. In particular, the use of only 10 instead of 17 nts for mCP3 and 20 instead of 86 nts for neCP2 for the example data set ultimately makes it possible to detect more than ∼1.5 times more genuine BCs with the error level in all parts of the reads set to 0%, but this gives only negligible gain (<0.1%) with the default settings described above. Third, the difference in the number of minimum reads, in which unique BCs should be detected in replicates of the mapping and normalization samples (parameters m and n), is associated with the fact that, when performing the mapping procedure, it is more important to identify the fact of different BC-ROI association(s) although data from the normalization samples are eventually quantified. Moreover, both of these parameters, as well as the parameter e, which determines the minimum number of reads for each unique BC in replicates of the expression samples, largely depend on both the complexity of a particular MPRA library (the number of unique clones in it) and the sequencing depth of the samples. Fourth, the threshold level of 0.9 controlling the identification of genuine BCs can be increased if necessary. This parameter is also highly dependent on the expected number of unique BC-ROI associations in the samples and their sequencing depth.
Although it is strongly recommended to obtain at least two biological replicates of the mapping, normalization, and expression samples, we notice that the pipeline nevertheless can process single replicates of these samples as well. This option can be useful when performing pilot experiments for a quick and preliminary evaluation of the results. Also, it is possible to load raw data obtained for different MPRA libraries into the pipeline simultaneously.
Finally, the results obtained for the example data set ( Figure 4C) indicate that sequence variations in the region located after the TTS (which is not present in mature mRNA molecules) are able to substantially influence the reporter transcript level. This suggests a potentially high regulatory potential of the sequences located at the 3 -ends of genes, which has not yet been systematically studied.
DATA AVAILABILITY STATEMENT
The MPRAcode code written in Python is publicly available at https://github.com/Code-master2020/MPRAdecoder. The example input data as well as expected outputs are included in the GitHub repository. Detailed information on program can be found in the GitHub repository.
AUTHOR CONTRIBUTIONS
AL and AP conceived the study. AL, EO, and AI developed the pipeline. EO and AL performed experiments and applied the pipeline to the obtained data sets. AP supervised the project. AP, EO, and AL wrote the manuscript. All authors contributed to the article and approved the submitted version.
FUNDING
This work was mainly supported by the Russian Science Foundation Grant 16-14-10288 and in part of the preparation and deposition of the materials to the GitHub repository by the Russian Science Foundation Grant 20-74-00137. | 8,148 | 2021-05-11T00:00:00.000 | [
"Computer Science"
] |
Modified Newtonian Dynamics as an entropic force
Under natural assumptions on the thermodynamic properties of space and time with the holographic principle we reproduce a MOND-like behaviour of gravity on particular scales of mass and length, where Newtonian gravity requires a modification or extension if no dark matter component is introduced in the description of gravitational phenomena. The result is directly obtained with the assumption that a fundamental constant of nature with dimensions of acceleration needs to be introduced into gravitational interactions. This in turn allows for modifications or extensions of the equipartion law and/or the holographic principle. In other words, MOND-like phenomenology can be reproduced when appropriate generalised concepts at the thermodynamical level of space and/or at the holographic principle are introduced. Thermodynamical modifications are reflected in extensions to the equipartition law which occur when the temperature of the system drops below a critical value, equals to Unruh's temperature evaluated at the acceleration constant scale introduced for the description of the gravitational phenomena. Our calculations extend the ones by Verlinde (2011) in which Newtonian gravity is shown to be an emergent phenomenon, and together with it reinforces the idea that gravity at all scales is emergent.
I. INTRODUCTION
The laws for black hole mechanics have suggested a remarkable similarity with the three laws of thermodynamics, in such a way that quantities associated to black holes properties have their corresponding thermodynamic equivalent interpretation [1][2][3]. In particular, the black hole area, which is determined by its horizon, is related to the associated black hole entropy, in the sense that it cannot decrease in time under any physical process on a closed system. The temperature of the black hole is given by the Hawking-Zeldovich temperature and is inversely proportional to the mass of he black hole [8,23]. The well known interpretation of entropy as a quantity that offers a measure of non-available information or disorder in a system, has leaded directly to the idea that the increase in entropy, and therefore in area, is due to the lost of information when a particle crosses the horizon and it has no more causal relation with the rest of the universe [2].
All the above suggest the possibility for a deep relation between thermodynamics and gravity. This has been studied mainly in the relativistic regime under the concept of emergent gravity, considering thermodynamics as a more fundamental theory from which, general relativity can be derived [see e.g. 20, and references therein]. Using a metric treatment of the thermodynamic variables in a curved space-time, Jacobson [11] has been able to derive Einstein's field equations. In the non-relativistic regime, Verlinde [24] used very simple assumptions about space, energy and information in order to show that the first law of thermodynamics, along with an entropy formula, leads directly to Newton's law of gravity. This is encouraging, since similar arguments like Verlinde's ones can be used * Email address<EMAIL_ADDRESS>† Email address<EMAIL_ADDRESS>to search a more profound fundamental basis for an extended gravity theory like the one proposed by Mendoza et al. [16], which under simple conditions reproduces a MOND-like phenomenology [18] and has proven to be in good agreement with observations in astrophysical systems across different scales without invoking any dark matter component [5,6,9,10,15,16].
In this work, we show how, using arguments about thermodynamics and information, it is possible to derive an equation for the gravitational force in a MONDlike modified gravity regime, which support the idea that gravity can be understood as an emergent force, i.e. a consequence of deeper fundamental principles. The article is organised as follows: in section II we use dimensional analysis arguments to find an expression for the number of bits contained inside a surface under the assumption that Milgrom's acceleration constant a 0 is a fundamental constant of nature. In section III we perform a similar approach as the one made by Verlinde [24] in order to show how a MOND-like gravity force can be obtained from thermodynamical arguments. Finally, in section IV we discuss our main results.
II. DIMENSIONAL ANALYSIS
One of the most important assumptions made by Verlinde [24], is that the information describing a physical system is stored on spatial surfaces, or screens, that are ruled by the holographic principle. Every surface behaves as a "stretched horizon" of a black hole and when a particle interacts with it, the entropy, and consequently, the amount of information gets affected. In principle we do not know the shape of the surfaces, so, for simplicity we can consider each screen as closed and spherical with radius r. Each surface contains N bits of information. One can also think that on each fundamental Planck square area, the maximum information that can be stored is one bit. This length is constructed with three fundamental constants of nature: (1) Newton's constant of gravity G, (2) the velocity of light c and (3) Planck's constant . As shown by [4,5,16], the introduction of a new fundamental constant of nature a 0 ≈ 1.2 × 10 −10 m s −2 allows for a general understanding of gravity on extended regimes where either Newtonian and/or relativistic standard gravity fails to explain different phenomena usually ascribed to the existence of an unknown dark matter component. For the same reason, when Milgrom's acceleration constant a 0 is introduced in the description of gravitational phenomena, the precise way in which the information is stored on a particular screen must differ from Verlinde's calculation in this extended regime of gravity.
Buckingham's Π-theorem of dimensional analysis [21] gives a useful approach to the problem of finding the characteristic area in which the information is contained. With the introduction of a new fundamental quantity a 0 , there are two considerations that become important: the first one is the choice of the independent variables, and the other one is the existence of a degree of freedom in the resultant system of equations. In order to begin visualising the dimensional problem, note that two characteristic lengths can be constructed: The characteristic length l P is Planck's length and the length λ is a characteristic length that appears in a natural way when the fundamental constant a 0 is introduced in the description of gravitational phenomena [16]. Since there are two natural lengths that appear in this extended description of gravity, it is not straightforward to know how to calculate the fundamental area storing one bit of information. To do so, note that the number of bits N stored on a screen at a distance r from the origin is given by a general unknown function f as: Assuming the validity of the holographic principle, the amount of information must be proportional to the area of the screen, i.e. to r 2 . For the Newtonian case analysed by Verlinde [24], the amount of information N ∝ r 2 /l 2 P which follows directly from Buckingham's Π-Theorem since a 0 does not enter into the description of gravitational phenomena at the Newtonian level. With this, we can write equation (2) as: for Newtonian gravity. F (λ/l P ), for MOND-like gravity, for a dimensionless unknown function F (λ/l P ), which according to Buckingham's Π-Theorem is given by F ∝ (λ/l P ) b . The precise value of the unknown exponent b will be found by requiring a match with MOND's force formula. In other words, in the extended regime of gravity we expect the number of bits of information to be:
III. EMERGENCE OF MODIFIED GRAVITY
The main motivation by Verlinde [24] to think of gravity as a force related to entropy has its origin on the the restitutive force that acts on a polymer when it suffers a displacement ∆x. This force tends to back the polymer to its original position since this configuration maximises the entropy. The link with gravitation consists on a similar idea for which there is an entropic force that emerges as a consequence of the system searching for a configuration of maximum entropy when a particle approaches to a particular screen. We assume that inside the screen, the dynamics allow us to define energy, and consequently, the associated mass M and temperature T are well defined quantities. With this, we can use the first law of thermodynamics to find the force F associated to changes in the stored information, i.e. due to a change in entropy ∆S given by: for a constant volume. Let us now find the expression for the gravitational force by considering gravity as an entropic force. For this, we follow the approach by Verlinde [24] & Jacobson [11] analysing the behaviour of a mass m particle near a black hole horizon. At a distance of one Compton length from the horizon, the particle can be considered to be part of the black hole and so its entropy is increased in the following way [24]: when a displacement ∆x occurs. In other words, a change in the particle position causes an increment on the system entropy, such that it tries to maximise it and as such, the horizon can be substituted by a screen. The other assumption to make is that the energy contained inside the surface satisfies the principle of equipartition, and that it can be transformed into a mass M . Thus, using these facts on equations (4), (5) and (6) it follows that: In this form, the last equation has a non-straightforward MOND-like representation and has a 1/r 2 Newtonianlike behaviour. However, it can be written in a more convenient way using the fact that the total mass M can be expressed in terms of Planck's mass M P := ( c/G) 1/2 as: Direct substitution of this last equation into relation (7) yields: in which the value N has been substituted by the righthand side of equation (4)
IV. DISCUSSION
As explained by Famaey and McGaugh [7], Mendoza [14] and in a more profound and empirical way by Mendoza and Olmo [17], if gravitational phenomena requires to be modified at a certain scales of mass and length, one needs to incorporate a new fundamental constant of nature relevant in all gravitational phenomena at those scales. As shown by Mendoza and Olmo [17] this gravitational constant is as important as Newton's constant of gravity and can be mathematically manipulated as to have dimensions of acceleration which converge to Milgrom's acceleration constant a 0 . This is so since gravitational phenomena does not follow the standard Newtonian (or general relativistic) behaviour of gravity at scales which greatly differ from the ones in which precise gravitational experiments have been performed to test the validity of Newton's law of gravitation (or Einstein's general relativity cf. [25]). The behaviour of gravity at those scales can be considered as independent of the behaviour of standard gravity and as such, a new fundamental constant of nature has to be introduced in the description of gravitational phenomena [21]. In this article we have introduced this extra fundamental constant of nature a 0 in the description of gravity and used thermodynamic and information properties of space and time in order to show that a MONDian force law can be obtained by assuming the validity of the holographic principle.
A full non-relativistic theory of gravity can be constructed assuming a modification of inertia as described by Famaey and McGaugh [7], but as shown in this work the modification naturally appears in the force sector and not on the dynamical one. As such, the developments made by Mendoza et al. [16] in which the extensions of gravity are made in the force sector seem to be more appropriate. From basic fundamental principles and with no field equations in the description of gravitational phenomena, Mendoza and Olmo [17] have shown that such an extended theory of gravity can be relativistically constructed. An example of such a theory has been developed in the works of Bernal et al. [5], Mendoza [14] and Mendoza et al. [15]. Furthermore, in recent years a growing number of independent observations have suggested that gravity requires modification [9,10,12,13,19,22] and not the inclusion of unknown dark matter entities.
With a few natural assumptions about space and information, the main result of this article is to show that gravity can be considered an emergent phenomenon also in the MONDian regime. This suggests that the force of gravity on this extended regime is not a fundamental force of nature, but a consequence of the inherent properties of space and time. Since Verlinde [24] showed that Newtonian gravity emerges from the thermodynamical properties of space and time, this all suggests that gravitation is an emergent phenomenon at all scales of mass and length. | 2,940 | 2014-02-25T00:00:00.000 | [
"Physics"
] |
Portable Automated Radio-Frequency Scanner for Non-destructive Testing of Carbon-Fibre-Reinforced Polymer Composites
A portable automated scanner for non-destructive testing of carbon-fibre-reinforced polymer (CFRP) composites has been developed. Measurement head has been equipped with an array of newly developed radio-frequency (RF) inductive sensors mounted on a flexible arm, which allows the measurement of curved CFRP samples. The scanner is also equipped with vacuum sucks providing mechanical stability. RF sensors operate in a frequency range spanning from 10 up to 300 MHz, where the largest sensitivity to defects buried below the front CFRP surface is expected. Unlike to ultrasonic testing, which will be used for reference, the proposed technique does not require additional couplants. Moreover, negligible cost and high repeatability of inductive sensors allows developing large scanning arrays, thus, substantially speeding up the measurements of large surfaces. The objective will be to present the results of an extensive measurement campaign undertaken for both planar and curved large CFRP samples, pointing out major achievements and potential challenges that still have to be addressed.
Introduction
Future generation of engineered structures in the civil, aerospace, automotive and marine industries will consist mostly of carbon composite materials, due to their performance and structural efficiency. However, one of major issues to be resolved is that the modes of failure in composite intensive structures such as the increasingly popular carbon-fibrereinforced polymer (CFRP), composite reinforced concrete beams [1][2][3] or the recent operational aircraft, Boeing 787 Dreamliner (with 50 % composite) [4], are not fully known as they are still near the beginning of their design life. It is clear that these CFRP materials are susceptible to internal impact damage, not visible with an un-aided eye at the surface. In spite of this, inspections at the point of manufacture and in service is largely manual with consequent low area coverage. Operation downtime is usually inevitable during scheduled or unscheduled inspection. Common non-destructive testing (NDT) techniques utilized for CFRP include ultrasonic testing (UT) [5,6], eddy current testing (ECT) [7][8][9], shearography [10], microwave and millimeter wave characterization [11,12]. However, results obtained are difficult to interpret for most NDT techniques due to the intrinsic anisotropy and inhomogeneity of the CFRP structure [13]. Moreover, there are requirements for specific techniques which may be easily applicable in-situ.
One of promising techniques that successfully addresses the aforementioned challenges is radio-frequency inductive testing (RFIT) [14,15] with a single sensor built of two coupled spiral inductors manufactured on a printed circuit board (PCB). First of all, as it has been shown in [14], the point spread function (PSF) of the sensor is strongly anisotropic, thus, enabling the characterization of such materials like CFRP. Second of all, the measurement with the RFIT sensor allows easily determining the depth of defects buried in the CFRP materials and obtain a cross-sectional scan by the appropriate combination of measurements data at a few frequencies. The RFIT technique does not require calibration as the obtained RF C-scan is a differential measure of the magnitude of power transmission coefficient. In addition, RFIT sensors can be developed on PCB with a very high repeatability, thus, substantially suppressing any issues associated with the use of large scanning arrays.
The RFIT technique is similar to ECT [9,14], as both methods take the advantage of electromagnetic fields to sense the material under test (MUT). However, there are a few differences thoroughly pointed out in [14]. First of all, RFIT is not necessarily based on eddy currents sensing, but rather on surface currents induced in the vicinity of the gap between adjacent inductors, which gives more flexibility in adjusting sensor's characteristics to the given properties of the MUT. For instance, induction of typical coils wounded on a ferrite core applicable in ECT drops at frequencies higher than a couple of MHz, mainly due to increasing capacitive parasitics and magnetic loss of ferrites. On the contrary, simple planar inductors manufactured on a printed circuit board, which do not exhibit any frequency limitations in RF spectra, are used in the RFIT technique. Moreover, magnitude of power transmission measured in that technique allows achieving sensitivity to buried defects at the level of 3 dB, which means 100 % change of transmitted power with respect to the measurement over the area without defects.
As a result of the recent successful development of the planar coupled spiral inductors tailored for the NDT of CFRP composites [14,15], the sensor and auxiliary electronic modules were subject to further development [16] in which a line array of sensors is realized. The sensor array is integrated with a portable automated scanner. Since the sensors and measurement procedures have already been discussed in [14][15][16], the attention is focused in this paper on the scanner developed to carry out the experiments validating the whole RFIT NDT system on both planar and curved CFRP composites with typical buried and surface defects. The results clearly demonstrate the applicability of such automated RFIT system for reliable and efficient CFRP inspection.
Scanner
The whole NDT system, as shown in Fig. 1, consists of the XY scanner with a sensor array, data acquisition board (DAQ), and a PC station.
XY Scanner
One of major objectives was to minimize a scanner's weight, while providing high scanning rates, good resolution, positioning repeatability in both absolute and relative For that reason, an important functional requirement was the development of an ergonomic portable chassis with the weight less than 10 kg. Furthermore, the necessary space for seamless integration of all modalities that will implement the requested functional requirements of the scanner should be provided. Based on the aforesaid, the scanner frame was designed and manufactured through the use of two aluminum plates as presented on a schematic design in Fig. 2. The chassis was secured via the application of four tubes with collars that interconnect the two layers of aluminum plates. The aluminum plates were pre-cut with all necessary cut-outs and bores so as to be able to attach all necessary parts, gantry, motor supports, electronics, and pneumatic systems. The chassis structure has the advantage that all parts are accessible and can be removed readily for repair or replacement. It can be easily disassembled, machined if modifications are necessary during service period and reproduced on request. The overall size of the chassis frame is 600×400×250 mm 3 .
The main function of the x-y gantry system is to implement a scanning motion of a sensor arm. For that purpose, an Igus DryLin stage drive was selected, which provides lubricant-free linear axles that are driven either by trapezoidal thread, steep thread or toothed belt. The user can choose a suitable individual solution from lightweight solid plastic units up to massive stainless steel solutions. For our appli- cation trapezoidal threads with 2 mm pitch were chosen for both X and Y directions. Along with robust design of these components, their main features include ruggedness and insensitivity to dirt, water, chemicals, heat or impacts.
Magnetic encoders have a resolution of 1024 ticks per shaft revolution. One revolution is converted to a linear motion of 2 mm with a resolution of 10 µm per revolution of the DC motors. The power source is mounted within the chassis boundaries and provides the scanner with voltage supply options of 5-12-24 VDC. The scanner is controlled via RS-485 protocol converted to USB before it is connected to the PC.
Another function of the scanner is the attachment to the CFRP surface. Due to operational principles of the inspection method and the composite material characteristics, it is not possible to implement magnets and clamping systems for the robust mounting of the scanner. For that reason, plastic suction cups were developed, which satisfy two crucial conditions: (a) there is no magnetic field interference with RF sensors, and (b) no surface damage can be induced to the MUT. The mounting of the scanner is performed by the following procedure. Initially, the operator places the scanner on the MUT's surface, ensuring that the supporting legs and the plastic cups are conformed on the surface. Subsequently, pressure can be produced to the cups by pressing an air compressor activation button. As a result of a network pressure of 8 bar, the scanner becomes firmly attached to the surface.
Sensor Arm
Sensor arm, as shown in Fig. 3, is a mounting platform for the array of RF sensors. It is made of aluminum housed with bearings, pivot brackets, and springs to produce a lab jack mechanism with better conformance of the sensors to the MUT's surface. RF sensors are placed in PA6 plastic protection pads, which are mounted to a holder arm with the aid of a plastic holder plate to prevent any large metallic parts in the Fig. 4 fit precisely to the plastic protection pads, the attachment can be further enhanced with thermoplastic adhesive. The dimensions of the holder plate are 100 × 120 mm 2 with a thickness of 8 mm (see Fig. 5). Vertical force pushing the sensors to the CFRP surface is provided by the lab jack mechanism that drives a passive spring on the outside of a telescopic pair shown in Fig. 4. An exerted force over the sensor plate is adjusted by the use of springs in order to maintain a firm contact with the MUT's surface during a sliding movement of the sensors. The total travel distance on the z-axis is 40 mm, enabling the application of proper pressure to the sensor plate, in order to ensure its smooth conformance on curved surfaces. The vertical displacement can be monitored with an integrated absolute encoder.
RF Sensors
Sensor array is made in the form of a line of RF inductors manufactured on the PCB, although arrays are also developed and used, as shown in Fig. 6. Single measurement utilizes two adjacent inductors treated as primary and secondary windings of a transformer. As a result, there are N − 1 measurement points for N aligned inductors [16].
If RF sensors are attached to the MUT's surface with the aid of the sensor arm, a magnetic field penetrates the MUT provided that the penetration depth d p is large enough at a given frequency. For instance, conductivity of CFRP composites is usually at the level of about σ = 10 4 S/m or more [13,14], which means d p is over 1 mm for frequencies below ca. 30 MHz. It implicitly determines the frequency range for RF inductors if the thickness of CFRP panels is given.
Data Acquisition Board
In the view of the above considerations of the frequency range of operation of the RF sensors, electronic circuitry, dedicated to the measurement of the number of individual sensor channels, has been developed (see Fig. 7). The main role in that system is played by a direct digital synthesizer (DDS), controlled with a 32-bit ST microcontroller, which also serves as an interface with a PC host station [16]. DDS generates signals spanning from 20 MHz up to 300 MHz, as this is the spectrum where the largest sensitivity to defects buried in ca. 1 mm thick CFRP panels is expected. The signal is multiplexed sequentially to all measurement channels. Each channel is equipped with an individual logarithmic wideband power detector, which is to be connected to the secondary winding of a given RF sensor. As a result, each measurement shot, consisting of frequency sweeping, provides the whole spectrum of power transmission through each coupled sensor pair, as indicated in Fig. 6. The system does not require any calibration procedures to be invoked as the whole RF image can be studied using comparative measures (see details in [14]).
Measurements
Characterization of flat and curved CFRP panels will be presented in this Section. Both panels, manufactured by ATARD [17], consist of four layers of CFRP twill immersed in epoxy resin. All the samples have been measured with a single line of RF inductive sensors shown at the top of Fig. 6 mounted in the scanner depicted in Fig. 1 with N a = 12 measurement pairs. As the scanning step has been set to 2 mm, an image of a 300 × 200 mm 2 surface consists of over 15,000 points, which means that the scanner has to undertake 1250 incremental shifts across the surface. That number can Fig. 3), which will be the subject for future enhancements of the system.
A single measurement of an individual sensor at a given frequency takes about t e = 8 µs, while each mechanical shift to another position lasts for ca. t m = 0.5 s. Hence, the scanning still much room for further improvements of the scanning rate by the reduction of t e , e.g. by exchanging sequential data acquisition with a parallel one, which may speed up the measurements by the order of magnitude or more.
Although the measurement step is as large as 2 mm, a large area of the PSF of a single RF sensor allows detecting even strong localized spots (see [14] for details). The knowledge on the spatial distribution of the PSF can be used to enhance the RF image resolution much further, but that issue goes beyond the scope of this paper.
Flat CFRP Panel
A flat CFRP panel is shown in Fig. 8. As it can be noticed, a few types of defects have been intentionally introduced, such as holes, bubbles, cracks, and delamination. As in [14], all RF images presented hereafter are constructed from a power transmission coefficient given in a logarithmic dB-scaling. Figure 9 shows RF images stored at two distinct frequencies at the south west part of the panel shown in Fig. 8. The image at 50 MHz clearly indicates a large hole with the diameter of ca. 55 mm, which is not visible with an unaided eye at the front surface of the panel. It is confirmed by the lack of distinctiveness of the hole at the RF image measured at 200 MHz, which can be attributed to a substantially smaller penetration depth at this frequency.
In particular, regular fracture appearing in Fig. 9b is correlated with the orientation of the front CFRP twill [15]. Moreover, the RF image measured at a higher frequency of 200 MHz, which is more sensitive to surface roughness of the inhomogeneous CFRP sample, contributes to larger fluctuations visible in Fig. 9b. In addition, scanner arm and other mechanisms controlling the movement of the sensors introduce additional uncertainty, thus, leading to irregular features of RF images. Figure 10 indicates an RF image measured at 20 MHz at the areas of the flat CFRP panel shown in Fig. 8, where buried bubbles were intentionally introduced by the manufacturer. Indeed, several tiny fluctuations can be observed in the central part and the right-hand side of the image, while the changes at the left-hand side of the image are rather smooth and results mostly due to slight misalignments of the CFRP twill.
Subsequently, Fig. 11 shows RF images stored at the south east part of the panel shown in Fig. 8 with vertical cracks Eventually, Fig. 12 shows delamination buried in the CFRP sample shown in Fig. 8. As it can be noticed in Fig. 12a, there is large deep minimum at a left-hand side of the Figure 13 shows the sample of a curved CFRP panel with several types of defects introduced in a similar way as in the flat sample. Curvature radius of the panels is ca. 1.5 m. As it will be shown, measurement of curved panels brings additional issues related to uneven attachment of the sensor line to the surface (lift off), which has to be de-embedded from raw measurement data. Figure 14 depicts specific bubbles occurring at a front surface of the curved CFRP panel. Due to the chosen direction of the sensor indicated in Fig. 14, the sensor's line in unevenly attached to the curved surface and, consequently, the corresponding RF image consists of four individual stripes, as show in in Fig. 15a. Fortunately, the uneven attachment results in a linear lift off occurring in power transmission measured along the sensor line, so it can be determined and removed at a post-processing stage. Consequently, Fig. 15b is smoother enabling better recognition of the bubbles, the location of which corresponds very well with those shown in Fig. 14. Figures 16,17,18 show another types of defects measured with the RF scanner at 20 MHz with the linear trend already removed. Figure 16 indicates the RF image stored in the area where delamination is expected and, indeed, it can be seen in the middle of the image. In addition to that, there is also a vertical defect visible at the left-hand side of the RF image shown in Fig. 16, which may be some buried crack unintentionally introduced by the manufacturer.
Curved CFRP Panel
Subsequently, Fig. 17 shows several horizontal cracks measured on the curved CFRP sample. Eventually, Fig. 18 depicts the hole clearly visible in the top right corner of the image. Similarly to the unintentional crack depicted in Fig. 16, there is a horizontal fault visible in Fig. 18, which has not been explicitly highlighted by the manufacturer. | 3,969 | 2016-03-28T00:00:00.000 | [
"Materials Science",
"Engineering"
] |
Fine Facet Digital Watermark (ffdw) Mining from the Color Image Using Neural Networks Abstract— on Hand Watermark Methods Employ Selective Neural Network Techniques for Watermark Embedding Efficiently. Similarity Based Superior Self Organizing Maps (sbs_som) a Neural Network Algorithm for Watermark
is learned by the SBS_SOM neurons and the very fine RGB feature values are mined as digital watermark. Discrete Wavelet Transform (DWT) is used for watermark entrench. Similarity Ratio and PSNR values prove the temperament of the Fine Facet Digital Watermark (FFDW). The Proposed system affords inclusive digital watermarking system.
INTRODUCTION
Digital image watermarking is a technique which embeds additional information called digital signature or watermark into the digital content in order to secure it [10].A watermarking system is usually divided into four distinct steps, collecting digital watermark, embedding, attack and detection.Digital watermark may be any text, image, signals or any derived values.The proposed system generates watermark from host image by using Similarity Based Superior Self Organizing Maps (SBS_SOM) a neural network algorithm.In embedding, an algorithm accepts the host and the data to be embedded and produces a watermarked signal.The watermarked signal is then transmitted or stored.
If anybody makes a modification, this is called an attack.While the modification may not be malicious, the term attack arises from copyright protection application, where pirates attempt to remove the digital watermark through modification.There are many possible modifications, for example, lossy compression of the data, cropping an image or intentionally adding noise.This analysis verifies the robustness of watermark by smooth JPEG compression and adding standard noise.Detection (often called extraction) is an algorithm which is applied to the attacked signal to attempt to extract the watermark from it.In robust watermarking applications, the extraction algorithm should be able to produce the watermark again, even if the modifications were strong.In fragile watermarking, the extraction algorithm should fail if any change is made to the signal.Reference [8] uses BPN (Back Propagation Network) model to learn the relationship between the watermark and the watermarked image.Reference [5] used a full counter-propagation neural network (FCNN) for copyright protection where the ownership information was embedded and detected by a specific FCNN.Reference [1] proposed a new blind watermarking scheme in which a watermark was embedded into the DWT (Discrete Wavelet Transform) domain.It also utilized RBF Neural network to learn the characteristic of the image, using which the watermark would be embedded and extracted.Reference [4] presented a specific designed full counter-propagation neural network for digital image watermarking.Most of the systems used CPN, BPN and RBF algorithms.Various neural network algorithms were used to strengthen and retrieve the watermark values, but not for watermark value generation.The proposed system SBS_SOM a neural network algorithm was trained to generate digital watermark values from the image.The proposed system submits robust watermarking scheme.The section two gives detail about basic techniques Similarity based superior SOM which is used for watermark generation and detection and Discrete Wavelet Technique (DWT) for embedding.Section three describes the proposed system process.The experimental results are discussed in the section four.Section five summarizes the result.
II. PROPOSED SYSTEM ENVIRONMENT
Instead of using standard media as digital watermarks or mathematically derived watermarks the projected system imply a neural network algorithm called Self Organizing Maps (SOM).The SOM is a particular type of neural network used in clustering, visualization and abstraction [6].It is an unsupervised competitive learner.Its learning pattern and order are unpredictable.
Existing applications on SOM express the benefits of using the SOM in applications with massive sets of data in finance, macroeconomics, medicine, biology, and other fields.The Self-Organizing Maps have been used at the Research Center in such applications as: Automatic speech recognition, Clinical voice analysis, Monitoring of the condition of industrial plants, processes Cloud classification from satellite images, analysis of electrical signals from the brain organization and retrieval of data from large document collections.The outputs of SOM applications are highly visual, which assists the analyst in understanding the data"s internal relationships.
However in the field of watermarking SOM was used for the process of embedding.The paper recommends the improvised SOM known as Similarity Based Superior SOM (SBS_SOM) for watermark generation.Conventional SOM and most of the extended SOM are using Euclidean-based distance metrics.The proposed SBS_SOM is more efficient www.ijacsa.thesai.orgthan the conventional SOM.It uses Jaccard or Dice measures for winner node selection.So the training starts from maximum distance nodes, whereas SOM stars it"s training from minimum distance nodes.Its factors such as weight initialization, learning rate, training epochs and neighborhood function are influencing the training nature of the network.These settings are standardized as per the rules.The same environment is created with conventional SOM for the analysis intention.
A. Similarity Based Superior SOM
References [2] and [3] reliably recommend Similarity based Self Organizing Maps neural network for image training.The Similarity based Superior Self Organizing Maps is the focal process to generate watermark.Significant initial factors are set by authorized person.Without knowing those values detection of watermark is impossible.
1) Algorithm:
Step0: Initialize weights with random method or by having previous knowledge of Pattern distribution.Set Topological neighborhood parameters.Set learning rate parameter Step1: While stopping condition is false, do steps 2 -8 Step2: For each input vector x, do steps 3-5 Step3: For each j , compute j d by using any one of the following distance measures as in ( 1) or (2).a) Jaccard Coefficient: Step4: Find index J such that d ( j) is a minimum.
Step5: For units j within a specified neighborhood of J and for all I W i j ( new)= W i j ( old)+ [ X i -W i j (old)] Step6: Update learning rate Step7: Reduce radius of topological at specified times Step8: Test stopping condition.
The Learning Rate is a slowly decreasing function of time.The radius of the neighborhood around a cluster unit also decreases as the clustering process progresses.The updated weight network is well equipped with the host image neural structure.SBS_SOM exactly imitates human neural learning logic.Hence trivial imbalanced values can be identified through the analysis.These insignificant map elements in SBS_SOM network is determined and used as watermark values.
B. Discrete Wavelet Transformation(DWT)
The first Discrete Wavelet Transform (DWT) was invented by the Hungarian mathematician Alfred Haar.For an input represented by a list of 2 n numbers, the Haar wavelet transform may be considered to simply pair up input values, storing the difference and passing the sum.This process is repeated recursively, pairing up the sums to provide the next scale: finally resulting in 2 n − 1 differences and one final sum.
DWT decomposes input image into four components namely LL, HL, LH and HH.The lowest resolution level LL consists of the approximation part of the original image.The remaining three resolution levels consist of the detail parts and give the vertical high (LH), horizontal high (HL) and high (HH) frequencies.In the proposed technique, embedding and extraction of watermark takes place in the high frequency component.For a one level decomposition, the discrete twodimensional wavelet transform of the image function f(x, y) is found in [7] and [9].DWT transform based watermarking scheme is robust against many common image attacks.The analysis results proved robustness very well.
III. FINE FACET DIGITAL WATERMARK SYSTEM (FFDW)
In the proposed system host image is learned by the SBS_SOM neurons and the very fine RGB feature values are mined as digital watermark.Discrete Wavelet Transform (DWT) is used for watermark entrench.
A. FFDW Generator 1) Preprocessing:
1) Collect the input images.Select the image for watermarking.2) Extract its RGB colors attributes in separate 2-D spaces.
2) Apply SBS-SOM: 1) Set three SBS_SOM network for representing Red, Green and Blue with three layers.(Input, hidden and output with two dimensional space matrix).2) Initialize its weight vectors, neighborhood function, fix the epochs, set initial learning rate and fix reduction of learning rate for each epoch.3) Embedding: 1) Activate one-level DWT to original image"s red vectors.
2) The red attribute watermark is embedded in to the high frequency component HH of DWT. 3) Execute inverse wavelet transform to obtain the watermarked red features.4) Repeat the above three steps for other two green and blue colors too.www.ijacsa.thesai.org5) By combining RGB watermarked plane the watermarked image will be obtained.
B. FFDW Detector
Projected watermarking proposal is capable of mine watermark information in the absence of the original image or secrete key.Hence it is unsighted watermarking.
1) Trigger one level DWT to the destination image and take away the embedded watermark from the HH sub band.
2) Regenerate watermark from the transferred image by using SBS_SOM neural logic as mentioned in the generation algorithm.
IV. PROFICIENCY ANALYSIS ON FFDW
In order to prove the efficiency of SBS_SOM mined FFDW, the quality assessment experiments were done.It is proved that the proposed watermark technique has robustness, imperceptibility and authenticity.3), and Similarity Ratio (SR) as in (1), are estimated between host image and watermarked image.Besides, the same process was done with conventional SOM to compare the efficiency of the proposed SBS_SOM for watermarking.The experiment was carried out with the sample image gallery.The ten image samples and its corresponding watermarked image PSNR and SR values are tabularized in the Table 1.Among those watermarking process four different color host images and its related watermarked images are given in the Fig. l (b) with SBS_SOM high level imperceptibility is proved.Visibly there is no degradation of image was found in the watermarked image.From the Table 1, it was proved that host image and watermarked image has fine similarity ratio "1".Well decent PSNR also found in proposed watermarking technique.Compressed and transferred watermarked image without any attack at the destination system receives imperceptible watermarked images.On the other hand SOM watermarking shows visible changes in the watermarked images.
Fig. 1(c) column is the evident for SOM watermarking.
The robustness of the watermarking is checked by applying special kinds of noise like Gaussian, Poisson, Salt & Pepper and Speckle.The attacked image of cap girl is shown in the Fig. 2. The watermarks are detected and extracted from the transferred watermarked image.The extracted watermark is One and the same results were evaluated in the case of other images too.The reasonable PSNR value even after noise attacks in the case of SBS_SOM proves the robustness of the watermarking.At the receiver end the attacked image can be strongly identified.For this the watermark is detected by authorized method and removed from the received image to get the original image back.Watermark removed image and host image PSNR and SR are calculated and tabulated in the Table 3.The Table 3 clearly states that received watermarked image without any attack will produce exact host image values.The PSNR is at its maximum infinitive.This means that there is no difference between the values of host and watermark removed image at the receiver side.
At the receiver end the attacked image can be strongly identified.For this the watermark is detected by authorized method and removed from the received image to get the original image back.Watermark removed image and host image PSNR and SR are calculated and tabulated in the Table 3.It clearly states that received watermarked image without any attack will produce exact host image values.The PSNR is at its maximum infinitive.This means that there is no difference between the values of host and watermark removed image at the receiver side.FFDW system does not affect the originality of the host image.And also the watermark is not detectable because the FFDW system uses unsupervised SBS_SOM for watermark generation.This neural network is unpredictable and the almost similar results can be evaluated only through the known parameter settings.Each and every parameter settings have their own authority on neural node organization.Hence using common statistical analysis or correlative attacks will never be able to find the watermark.Without proper watermark detection the original image cannot be restructured.Since FFDW system assures for secured watermarking.
The attacked images had lost their originality.Even though the watermark is robust the original image was not reconstructed.The Table 3 flaunts the trim down of PSNR and SR.For the copy right claim and authentication applications, the infinitive PSNR values are needed.FFDW system can be strongly proposed for authentic applications.The Fig. 3 visibly presents the secured image transformation with its selfgenerated digital watermarking quality assurance.The robustness, imperceptibility and securities were analyzed and plotted as a graph to explicitly show the good quality of FFDW system with SBS_SOM.The Fig. 3 exhibits the significance of proposed watermarking.SBS_SOM based FFDW system in watermark generation is one more land mark in watermarking techniques.
V. CONCLUSION SBS_SOM mined Fine Facet Digital Watermark system is an innovative technique for watermarking research.The other image, text or audio digital values are not used as digital watermark to embed.The purpose of embedding values in this process is not protecting embedding values but identifying the misuse of the host image.The watermark of each individual image is mined from the corresponding host image.Hence collecting watermark values from other image is impossible.Unauthorized users cannot detect the watermark values from watermarked image by using any statistical or correlation calculation because SBS_SOM is a complete unsupervised network trainer.FFDW is a good digital watermarking technique because it satisfies the basic requirements such as Robustness, Imperceptibility and Security.The image communication to claim authentication may use the proposed system confidently.The proposed system used DWT for the watermark embedding.In future the SBS_SOM itself will be used for embedding to prove its comprehensiveness.
3 )
Train the RGB networks.Find the trained feature maps from output layer.4) Find the difference between input values and trained values for each color.The resultant values are accepted as watermark values.Thus the first pass obtains three sets of watermarks.
For the experimentation, the digital watermark values with 64×64 size was generated from the host image which is be watermarked.Since the FFDW is mined through SBS_SOM neural network for each individual host image, it is unique.No one can predict the watermark values by means of common calculations.Minute details having inaptness were identified and collected as digital watermark.The RGB colors are trained individually.Hence the result organized three sets of digital watermarks.Each 2-D plans are embedded by means of corresponding color FFDW values by using DWT.Subsequent to the embedding process the three 2-D planes are combined to form 3-D watermarked image.The Peak Signal to Noise Ratio (PSNR) as in (
Figure 3 .
Figure 3. Fig. 4.3 (a) Imperceptibility Analysis of watermarked image (b) Robustness verification of watermark (c) Security Check of watermark removed image at the destination system
TABLE 1 .
PSNR & SR OF 10 SAMPLE HOST IMAGES
TABLE 2 .
The cap girl image watermark PSNR values with various attacks are mentioned in the Table 2. ROBUSTNESS OF WATERMARK
TABLE 3 .
QUALITY OF AN IMAGE AFTER THE REMOVAL OF WATERMARK | 3,378.2 | 2011-01-01T00:00:00.000 | [
"Computer Science"
] |
A Semantic-Based Framework for Summarization and Page Segmentation in Web Mining
ion: a. first abstraction level: a semantic network is used to extract a set of concepts from every token; eventually, a list of concepts is obtained; b. second abstraction level: the concepts are grouped in homogeneous sets (domains).
Introduction
The World Wide Web has become a fundamental resource of information for an increasing number of activities, and a huge information flow is exchanged today through the Internet for the widest range of purposes.Although large-bandwidth communications yield fast access to virtually any kind of contents by both human users and machines, the unstructured nature of most available information may pose a crucial issue.In principle, humans can best extract relevant information from posted documents and texts; on the other hand, the overwhelming amount of raw data to be processed call for computer-supported approaches.Thus, in recent years, Web mining research tackled this issue by applying data mining techniques to Web resources [1].This chapter deals with the predominant portion of the web-based information, i.e., documents embedding natural-language text.The huge amount of textual digital data [2,3] and the dynamicity of natural language actually can make it difficult for an Internet user (either human or automated) to extract the desired information effectively: thus people every day face the problem of information overloading [4], whereas search engines often return too many results or biased/inadequate entries [5].This in turn proves that: 1) treating web-based textual data effectively is a challenging task, and 2) further improvements are needed in the area of Web mining.In other words, algorithms are required to speed up human browsing or to support the actual crawling process [4].Application areas that can benefit from the use of these algorithms include marketing, CV retrieval, laws and regulations exploration, competitive intelligence [6], web reputation, business intelligence [7], news articles search [1], topic tracking [8], and innovative technologies search.Focused crawlers represent another potential, crucial area of application of these technologies in the security domain [7,9].
The research described in this chapter tackles two challenging problems in Web mining techniques for extracting relevant information.The first problem concerns the acquisition of useful knowledge from textual data; this is a central issue for Web content mining research, which mostly approached this task by exploiting text-mining technologies [1].The second problem relates to the fact that a web page often proposes a considerable amount of information that can be regarded as 'noise' with respect to the truly informative sections for the purposes at hand [10].According to [10], uninformative web page contents can be divided into navigation units, decoration items, and user interaction parts.On one hand, these elements drain the user's attention, who has to spend his/her time to collect truly informative portions; on the other hand, they can affect the performances of algorithms that should extract the informative content of a web page [10].This problem is partially addressed by the research area of semantic Web, which aims to enrich web pages with semantic information accessible from humans and machines [5].Thus semantic Web mining aims to combine the outcomes of semantic Web [11] and Web mining to attain more powerful tools that can reliably address the two problems described above [5].
The approach adopted in this work, however, does not rely on semantic information already embedded into the Web resources, and the semantic characterization of words and sentences plays a crucial role to reach two outcomes: • to work out from a Web resource a concise summary, which outlines the relevant topics addressed by the textual data, thus discarding uninformative, irrelevant contents; • to generate a web page segmentation that points out the relevant text parts of the resource.
Semantic characterization is obtained by applying semantic networks to the considered Web resource.As a result, natural language text maps into an abstract representation, that eventually supports the identification of the topics addressed in the Web resource itself.A heuristic algorithm attains the latter task by using the abstract representation to work out the relevant segments of text in the original document.Page segmentation is then obtained by properly exploiting the information obtained on the relevant topics and the topics covered by the different sections of the Web page.
The novelty contribution of this work lies in a framework that can tackle two tasks at the same time: text summarization and page segmentation.This result is obtained by applying an approach that extracts semantic information from the Web resource and does not rely on external information that may not be available.Combining effective page segmentation with text summarization can eventually support advanced web content mining systems that address the discovery of patterns, the tracking of selected topics and the efficient resource finding.
Experimental results involved the well-know DUC 2002 dataset [12].Such dataset has been used to evaluate the ability of the proposed framework to consistently identify the topics addressed by a document and eventually generate the corresponding summary.The ROUGE tool [13] has been used to measure the performance of the summarization algorithm exploited by the present framework.Numerical results proved that the research described in this chapter compares positively with state-of-the-art approaches published in the literature.
The rest of the chapter is organized as follows.Section 2 gives an overview of the state of the art in the different research areas involved.Section 3 introduces the overall approach proposed in this research, while Section 4 discusses the actual implementation of the framework.Section 5 presents the experimental results.Some concluding remarks are made in Section 6.
Related work
The current research proposes a web mining algorithm that exploits knowledge-based semantic information to integrate text-summarization and web page-segmentation technologies, thus improving the overall approach effectiveness.The following sections overview the state of the art in the different research areas involved: web content mining, text summarization, and web page segmentation.The Section also highlights the points of novelty introduced by the present research with respect to previous works.
Web content mining
Web mining is the use of data mining techniques to automatically discover and extract information from web documents and services; the applicative areas include resource finding, information selection, generalization and data analysis [14].Incidentally, machine-learning methods usually address the last two tasks.Web mining includes three main sub-areas: web content mining, web structure mining, and web usage mining [15].The former area covers the analysis of the contents of web resources, which in general comprise different data sources: texts, images, videos and audio; metadata and hyperlinks are often classified as text content.It has been proved that unstructured text represents the prevailing part of web resources [14,16] this in turn motivates the large use of text mining technologies.
A wide variety of works in the literature focused on text mining for web content mining [17].Some web content mining techniques for web search, topic extraction and web opinion mining were explored in [18].In [19], Liu et al. showed that web content mining could address applicative areas such as sentiment classification, analysis and summarization of consumer reviews, template detection and page segmentation.In [20], web content mining tackled business applications by developing a framework for competitive intelligence.In [21], an advanced search engine supported web-content categorization based on word-level summarization techniques.A web-page analyzer for detecting undesired advertisement was presented in [22].The work described in [23] proposed a web-page recommendation system, where learning methods and collaborative filtering techniques cooperated to produce a web filter for efficient user navigation.
The approach presented in this research differs from those related works in two main aspects: first, it exploits semantic-based techniques to select and rank single sentences extracted from text; secondly, it combines summarization with web page segmentation.The proposed approach does not belong to the semantic web mining area, which refers to methodologies that address the development of specific ontologies that enrich original web page contents in a structured format [11,24].To the best of the authors' knowledge, the literature provides only two works that used semantic information for web content mining.The research described in [25] addressed personalized multimedia management systems, and used semantic, ontology-based contextual information to attain a personalized behavior in content access and retrieval.An investigation of semantic-based feature extraction for web mining is proposed in [26], where the WordNet [27] semantic network supported a novel metrics for semantic similarity.
Text summarization
A summary is a text produced by one or more other texts, expressing important information of original texts, and no longer than half of the original texts [28].Actually, text summarization techniques aim to minimize the reading effort by maximizing the information density that is prompted to the reader [29].Summarization techniques can be categorized into two approaches: in extractive methods, summaries stem from the verbatim extraction of words or sentences, whereas abstractive methods create original summaries by using natural language generators [30].
The works of Das et al. [30] and Gupta et al. [31] provided extensive surveys on extractive summarization techniques.Several methods relied on word frequency analysis, cue words extraction, or selection of sentences according to their position in the text [32].More recent works used tf-idf metrics (term frequency -inverse document frequency) [33], graphs analysis, latent semantic analysis [34], machine learning techniques [35], and fuzzy systems [36,37].Other approaches exploited semantic processing: [38] adopted lexicon analysis, whereas concepts extraction supported the research presented in [39].Abstractive summarization was addressed in [40], where the goal was to understand the main concepts of a document, and then to express those concepts in a natural-language form.
The present work actually relies on a hybrid extractive-abstractive approach.First, most informative sentences are selected by using co-occurrence of semantic domains [41], thus involving an extractive summarization.Then, abstractive information is produced by working out the most representative domains for every document.
Web page segmentation
Website pages are designed for visual interaction, and typically include a number of visual segments conveying heterogeneous contents.Web page segmentation aims to grasp the page structure and split contents according to visual segments.This is a challenging task that brings about a considerable number of issues.Different techniques were applied to web page segmentation in the past years: PageRank [42], graphs exploration [43], rules [10,44,45], heuristics [46,47,48,49], text processing [50], image processing [51], machine learning [52,53], and semantic processing [54].
Web page segmentation methods apply heuristic algorithms, and mainly rely on the Document Object Model (DOM) tree structure that is associated to a web resource.Therefore, segmentation algorithms may not operate properly when those ancillary features are not available or when they do not reflect the actual semantic structure of the web page.Con-versely, the approach presented in this chapter only relies on the processing of the textual information that can be retrieved in the web resource.
A Framework for Text Summarization and Segmentation
The processing of textual data in a Web page yields two outcomes: a text summary, that identifies the most relevant topics addressed in the Web page, and the set of sentences that are most correlated with those topics.The latter indirectly supports the segmentation of the web page, as one can identify the substructures that deal with the relevant topics.Several advanced applications for Web mining can benefit from this approach: intelligent crawlers that explore links only related to most informative content, focused robots that follow specific content evolution, and web browsers with advertising filters or specific content-highlighting capabilities.This Section presents the overall approach, and introduces the various elements that compose the whole framework.Then, Section 4 will discuss the actual implementation of the framework used in this work.
Overall system description
The approach relies on a two-level abstraction of the original textual information that is extracted from the web page (Figure 1); semantic networks are the tools mainly exploited to accomplish this task.First, raw text is processed to work out concepts.Then, concepts are grouped into domains; here, a domain represents a list of related words describing a particular subject or area of interest.According to Gliozzo et al [55], domain information corresponds to a paradigmatic relationship, i.e., two words with meanings that are closely related (e.g., synonyms and hyponyms).
Semantic networks allow to characterize the content of a textual resource according to semantic domains, as opposed to a conventional bag of words.The ultimate objective is to exploit a coarse-grained level of sense distinctions, which in turn can lead to identify the topics actually addressed in the Web page.Toward that end, suitable algorithms must process the domain-based representation and recognize the relevant information in the possibly noisy environment of a Web page.Indeed, careful attention should be paid to the fact that many Web pages often address multiple, heterogeneous domains.Section 4 presents in detail the procedure implementation to identify specific domains in a Web page.
Text summarization is obtained after the identification of the set, Θ, of domains that characterize the informative content of the Web page.The summary is obtained by detecting in the original textual source the sentences that are mostly correlated to the domains included in Θ.To complete this task sentences are ranked according to the single terms they involve, since the proposed approach only sets links between terms and concepts (domains).The process can generate the eventual summary according to two criteria: the first criterion yields a summary that describes the overall content of the Web page, and therefore does not distinguish the various domains included in Θ; the second criterion prompts a multiplicity of summaries, one for each domain addressed in Θ.
That approach to text summarization supports an unsupervised procedure for page segmentation, too.Indeed, the described method can 1) identify within a Web page the sentences that are most related to the main topics addressed in the page itself, and 2) label each sentence with its specific topic.Thus text summarization can help assess the structure of the Web page, and the resulting information can be combined with that provided by specific structure-oriented tools (e.g., those used for tag analysis in html source code).
Figure 2 shows the two alternative strategies that can be included in the Web mining system.The first strategy uses the text summarization abilities to find relevant information in a Web page, and possibly to categorize the contents addressed.The second strategy targets a selective search, which is driven by a query prompted by the user.In the latter case, text summarization and the eventual segmentation allow the mining tool to identify the information that is relevant for the user in the considered Web page.
Overall system description
The overall framework can be schematized according to the following steps (Figure 3): b. build a summary by using the most significant sentences according to the rank.
Page Segmentation: a. use the sentences ranking to select the portions of the web page that deal with the main topics.
Step 4 (Content Analysis) and Step 5 (Outputs) can be supported by different approaches.Section 4 will discusses the approaches adopted in this research.
Implementation
The processing starts by feeding the system with the download of a web page.Raw text is extracted by applying the 'libxml' parsing library [56] to the html source code.
Text preprocessing
This phase receives as input the raw text and completes two tasks: 1) it identifies the beginning and the end of each sentence; 2) it extracts the tokens from each sentence, i.e., the terms that compose the sentence.Additional subtasks are in fact involved for optimal text processing: after parsing raw text into sentences and tokens, idiom is identified and stop-words are removed accordingly; this operation removes frequent and semantically non-selective expressions from text.Then, lemmatization simplifies the inflectional forms of a term (some-times derivationally related forms) down to a common radix form (e.g., by simplifying plurals or verb persons).These subtasks are quite conventional in natural language processing systems [57], and aim to work out a set of representative tokens.
The process that extracts sentence and tokens from text is driven by a finite-state machine (FSM), which parses the characters in the text sequentially.The formalism requires the definition of the following quantities: • state STARTT: a token begins; • state ENDT: end of token achieved; • state STARTS: a sentence begins (hence, also a token begins); • state ENDS: end of sentence achieved (hence, end of token also achieved); • set tdelim, which includes space, tab and newline codes, plus the following characters: "\',/:;.!?[]{}()*^-~_= • set sdelim, which includes common sentence delimiter characters, such as :;!?'" • set number, which includes all the numbers; • set lower, which includes all the lower case alphabet characters; • set upper, which includes all the upper case alphabet characters; • set character, which is obtained as the union of set lower and set upper; • set dot, which only include the dot character.
A detailed description of the complete procedure implemented by the FSM is provided in Figure 4. Actually, Figure 4(a) refers to the core procedure, which includes the initial state STARTS; Figure 4(b) refers to the sub-procedure that starts when the state NUMBER is reached in the procedure of Figure 4(a); Figure 4(c) refers to the sub-procedure that starts when the state ALPHA is reached in the procedure of Figure 4(a).In all the schemes the elements with circular shape represent the links between the three procedures: the light-grey elements refer to links that transfer the control to a different procedure; the dark-grey elements refer to links that receive the control from a different procedure.
The process implemented by the FSM yields a list of tokens, a list of sentences and the position of each token within the associated sentence.Stop-word removal takes out those tokens that either are shorter than three characters or appear in a language-specific list of terms (conjunctions, articles, etc).This effectively shrinks the list of tokens.Finally, a lemmatization process reduces each token to its root term.Different algorithms can perform the lemmatization step, depending on the document language.WordNet morphing features [27] support best lemmatization in the English idiom, and has been adopted in this research.
In the following, the symbol Ω will define the list of tokens extracted after text preprocessing: Ω = {t i ; i = 1,..,N t }, where t i is a token and N t is the number of tokens.
The abstraction process: from words to domains
The framework uses a semantic network to map tokens into an abstract representation, which can characterize the informative content of the basic textual resource on a cognitive basis.The underlying hypothesis is that to work out the topics addressed in a text, one cannot just depend on the mentioned terms, since each term can in principle convey different senses.On the other hand, the semantic relations that exist between concepts can help understand whether the terms can connect to a single subject or area of interest.
The present approach implements such an abstraction process by mapping tokens into domains.An intermediate step, from tokens to concepts, supports the whole procedure.Two well-known semantic networks have been used to complete this task: EuroWordNet [58], i.e the multilanguage version of WordNet [27], and its extension WordNet Domains [41].Both EuroWordNet and WordNet Domains are ontologies designed to decorate words or sets of words with semantic relations.The overall structure of EuroWordNet and WordNet Domains are based on the conceptual structures theory [59] which describes the different types of relations that can tie together different concepts.
From tokens to concepts
The abstraction from tokens to concepts is accomplished by using EuroWordNet.Euro-WordNet is an extension of WordNet semantic knowledge base for English, inspired by the current sycholinguistic theory of human lexical memory [27].Nouns, verbs, adjectives and adverbs are organized in sets of synonyms (synsets), each of which represents a lexical concept.Actually, the same word can participate in several synsets, as a single word can have different senses (polysemy).Synonym sets are connected to other synsets via a number of semantic relations, which vary based on the type of word (noun, verb, adjective, and adverb); for example, synsets of noun can be characterized by relations such as hyponymy and meronymy.Words can also be connected to other words through lexical relations (e.g., antinomy).EuroWordNet supports different languages; thus, in principle, the approach proposed in this chapter can be easily extended to documents written in Italian, Spanish, French, and German.Table 1 gives, for each language, the number of terms and the number of concepts provided by EuroWordNet [58].
In the present research, the list of concepts that characterize a text is obtained as follows: a.For each token t i ∈ Ω, extract the list of concepts (i.e., synsets) Χ i that EuroWordNet associate to the token: Χ i = {c k ; k = 1,..,N c,i }, where N c,i is the number of different concepts in Χ i .
b. Assemble the overall list of concepts
To not inflate the list of concepts, in this work the tokens that connect to more than eight concepts are discarded.Such threshold has been set empirically by exploiting preliminary experiments.The list of concepts, Σ, represents an intermediate step to work out the domains; this step will be discussed in the next subsection.
The use of synsets to identify concepts possibly brings about the drawback of word disambiguation.The problem of determining which one, out of a set of senses, are invoked in a textual context for a single term is not trivial, and specific techniques [55,60,61] have been developed to that purpose.Word disambiguation techniques usually rely on the analysis of the words that lie close to the token itself [61,62].Other approaches exploit queries on a knowledge base.A notable example of this approach exploits WordNet Domains and is discussed in [63].As a matter of fact, word disambiguation methods suffer from both high computational complexity [60,64] and the dependency on dedicated knowledge bases [65].
In this work, word disambiguation is implicitly obtained by completing the abstraction from concepts to domains.
From concepts to domains
WordNet Domains [41] supports the abstraction from concepts to domains.A domain is a structure that gathers different synsets belonging to a common area of interest; thus a domain can connect to synsets that pertain to different syntactic categories.Conversely, one synset can be linked to multiple domains.Each domain groups meanings into homogenous clusters; therefore, one can use the abstraction from concepts to domains to work out the topics that are actually addressed in the underlying set of tokens Ω.This can be done as follows: a. identify the domains that can be associated to the concepts included in Σ; b.For each concept c l ∈ Σ, extract the list of domains Θ l that WordNet Domains associate to that concept: Θ l = {d j ; j = 1, …, N d,l }, where N d,l is the number of different domains in Θ l .
design a criterion to work out the foremost domains from Θ.
Different approaches can support the latter step.The implicit goal is to attain word disambiguation, i.e. to remove the ambiguity that may characterize single tokens when they are viewed individually.Thus, one should take advantage of the information obtained from a global analysis; the underlying hypothesis is that the actual topics can be worked out only correlating the information provided by the single tokens.In the present work, that information is conveyed by the list of domains, Θ.The domain-selection algorithm picks out the domains that occur most frequently within the text.The procedure can be formalized as follows: a. Create an array F with N d elements, where is the cardinality |Θ| of set Θ = {d j ; j = 1,..,N d } b.Set each element of F to 0: Identify the list of domains to which t i is linked: The array F eventually measures the relevance of each domain d j .The algorithm evaluates the relevance of a domain by taking into account the intrinsic semantic properties of a token.Thus, the relative increment in the relevance of a domain is higher when a token can only be linked to one domain.The rationale behind this approach is that these special cases are not affected by ambiguities.
The array of relevancies, F, provides the input to the task designed to work out the most relevant topics and eventually generate the summary.
Text Summarization
The framework is designed to generate a summary by identifying, in the original text, the textual portions that most correlate with the topics addressed by the document.Two tasks should be completed to attain that goal: first, identifying the topics and, secondly, correlating sentences with the set of topics themselves.
Figure 5. Two examples of array of domains relevancies
The first subtask is accomplished by scanning the array of relevancies, F. In principle, the relevant topics should correspond to the domains having the highest scores in F. However, the distribution of relevancies in the array can play a crucial role, too. Figure 5 illustrates this aspect with two examples.Figure 5(a) refers to a case in which a fairly large gap separates a subset of (highly relevant) domains from the remaining domains.Conversely, Figure 5(b) depicts a case in which the most relevant domains cannot be sharply separated from the remaining domains.The latter case is more challenging as it may correspond either to a text that deals with heterogeneous contents (e.g., the home page of an online newspaper) or to an ineffective characterization of the domains.
To overcome this potential issue, the proposed algorithm operates under the hypothesis that only a limited number of domains compose the subset of relevant topics.The rationale behind this approach is that a tool for content mining is expected to provide a concise description of the Web page, whereas a lengthy list of topics would not help meet such a conciseness constraint.The objective of the algorithm therefore becomes to verify if the array F can highlight a limited subset of domains that are actually outstanding.
The algorithm operates as follows.First, a threshold α is used to set a reference value for the relevance score of a domain; as a result, all the domains in F that did not achieve the reference value are discarded, i.e., they are considered not relevant.Then, a heuristic pruning procedure is used to further shrink the subset of candidate domains; the eventual goal -as anticipated above-is to work out a limited number of topics.
The selection procedure can be formalized as follows: a. Sort F in descending order, so that f 1 gives the score r 1 of the most relevant domain
Else it is not possible to select relevant domains
The heuristic pruning procedure is applied only if the number of selected domains (i.e., the domains included in F*) is larger than a threshold θ, which set an upper limit to the list of relevant topics.The heuristic procedure is designed to identify a cluster of relevant domains within the set F*; to achieve this goal, the gap between consecutive domains is evaluated (the domains in F* are provided in descending order according to the relevance score).The parameter χ sets the threshold over which a gap is considered significant.As anticipated, the latter procedure may also provide a void subset of relevant topics.
The eventual summary is obtained by picking out the sentences of the original text that most correlate with the relevant topics.To do so, the list of available sentences is sorted in order of relevance scores.Score values are worked out by considering the tokens that form each sentence: if a token can be related to any selected topic, then the relevance of the associate sentence increases.The eventual score of a sentence, finally, stems from normalizing the number of tokens linked to the relevant topics with respect to the total number of tokens that compose the sentence.The procedure can be outlined as follows: a. Inputs: The list of selected domains Φ = {d j ; j = 1,..,N w }, where N w is the cardinality of Φ.
The list of sentences Σ = {s l ; l = 1,..,N s }, where N s is the cardinality of Σ.
The list of tokens included in a sentence s l , Ω l = {t lq ; q = 1,..,N tl }, where N tl is the cardinality of Ω l .
b. Create an array R with N s elements; each element registers the relevance of the l-th sentence c.For each sentence s l ∈ Σ For each token t lq ∈ Ω l If the token can be linked to a domain in Φ The most relevant sentences are obtained by ranking the array R. Actually the selection removes the sentences that are too short to be consistently evaluated.The eventual rank of the sentences is used to build the summary.In general, the summary will include all the sentences that achieved a relevance greater than a threshold.
Experimental Results
The DUC 2002 dataset [12] provided the experimental basis for the proposed framework.
The dataset has been designed to test methodologies that address fully automatic multidocument summarization.It is organized as follows: • 59 subjects; • for each subject, from 5 to 10 different news about that event; • for each subject, an extractive summary (400 word) created by involving human participants.
Thus, a summarization technique can be evaluated by comparing the outcome of the computer-driven process with that provided by the dataset (the ground truth).
In this work, the DUC 2002 dataset supported two experimental sessions.The first session aimed at evaluating the ability of the proposed framework to generate an effective summary from the documents included in the dataset.The second session was designed to analyze the behavior of the framework in a typical scenario of Web mining: a text source obtained from a Web page that includes different contributions possibly addressing heterogeneous topics.
The first experimental session: summarization effectiveness
To evaluate the method's ability at effective summarization, this session adopted the ROUGE software [13].This made it possible to measure the performances of the proposed approach (as per Section 4) on the DUC 2002 dataset.
ROUGE is a software package for automatic evaluation of summaries that has been widely used in recent years to assess the performance of summarization algorithms.The ROUGE tool actually supports different parameterizations; in the present work, ROUGE-1 has been implemented, thus involving 1-gram co-occurrences between the reference and the candidate summarization results.Using DUC 2002 as a benchmark and ROUGE as the evaluation tool allowed a fair comparison between the present approach and other works already published in the literature.Table 2 shows that the methodology presented in this chapter attained results that compared favorably with those achieved by state-of-the-art algorithms [66] on DUC 2002.
In this regard, one should consider that the best performance obtained on DUC 2002 is characterized by the following values [66]: recall = 0.47813, precision = 0.45779, F-measure = 0.46729.This confirmed the effectiveness of the underlying cognitive approach, mapping raw text into an abstract representation, where semantic domains identified the main topics addressed within each document.Numerical results point out that the highest F-measure was attained when the summarization algorithm picked out at least the most 20 relevant sentences in a text.
An additional set of experiments further analyzed the outcomes of the proposed approach.In this case, the goal was to understand whether the topic-selection criterion actually fit the criterion implicitly applied by human subjects when summarizing the texts.This involved the array, F, measuring the relevance of a set of domains (as per section 4.2.2); for each subject included in DUC 2002, the array F was computed with respect to: • the news linked to that subject; • the corresponding summary provided by the dataset.
Figure 6 gives a sample of the pair of arrays associated with one of the subjects in the DUC 2002 dataset; in the graph, light-grey lines are associated with the actual reference scores in the benchmark, whereas dark-grey lines refer to the relevance values worked out by the proposed method.
Statistical tools measured the consistency of the domain-selection process: chi-square test runs compared, for each subject, the pair of distributions obtained; the goal was to verify the null hypothesis, namely, that the two distributions came from the same population.The standard value of 0.05 was selected for the confidence level.
The results obtained with the chi-square tests showed that the null hypothesis could not be rejected in any of the 49 experiments involved (each subject in DUC 2002 corresponded to one experiment).This confirmed that the distributions of the relevant domains obtained from the whole text could not be distinguished from those obtained from the (human generated) summaries in the DUC 2002 dataset.
The second experimental session: web mining
The first experimental session proved that the framework can effectively tackle this task (and eventually generate a proper summary) when the input was a news-text, which mainly dealt with a single event.A web page, however, often collects different textual resources, each addressing a specific, homogenous set of topics.Hence, the second experimental session was designed to evaluate the ability of the proposed framework to identify the most informative subsections of a web page.
The experiments involved the DUC 2002 dataset and were organized as follows.A set of new documents were generated by assembling the news originally provided by DUC 2002.Each new document eventually included four news articles and covered four different topics.Then, the list of documents was processed by the proposed framework, which was expected -for each document -to select as the most relevant topics those that were chosen in the set up.Table 3 reports on the results of this experiment; each row represents a single document: the first column gives the topics actually addressed by the document, while the second column gives the topics proposed by the framework.The table reports in boldface the topics that the framework was not able to pinpoint.
Experimental evidence confirmed that the proposed framework yielded satisfactory results in this experiment, too.In this regard, one should also take into account that • the relative length of the single news somewhat influenced the overall distribution of the topics relevance; • in several cases the real topics not identified by the framework as the most relevant (i.e., the topics in bold) had relevance scores very close to those characterizing the selected ones.The dataset involved in the experiment was artificially generated to evaluate the effectiveness of the proposed framework in a scenario that resembles a "real word" case.Hence, a fair comparison with other methodologies cannot be proposed.However, Table 3 provides a solid experimental evidence of the efficiency of the approach introduced in this research, as the 'artificial' web pages were composed by using the original news included in the DUC 2002 dataset.As a result, one can conclude that the performances attained by the framework in terms of ability to identify the relevant topics in an heterogeneous document are very promising.
5.3.Web Page Segmentation
The framework can analyze a web page according to two different strategies.The first strategy, identifying the most relevant topics, typically triggers further actions in advanced web-content mining systems: gathering a short summary of the web page (possibly a short summary for each main topic), page segmentation, graphic editing of the web page to favor readability.In both cases, the web page included a main section that actually defined the addressed contents, together with other textual parts that did not convey relevant information.The framework supported web content mining by identifying the sentences that actually linked to the relevant topics.These sentences have been highlighted in Figure 7 and Figure 8.
The second strategy typically aims to support users that want to track selected topics.In this case, the goal is to identify the web-page sections that actually deals with the topics of interest.Figure 9 provides an example: the selected topic was 'pharmacy/medicine,' and the web page was the 'News' section of the publisher InTech.The figure shows that an advanced web content mining system could exploit the information provided by the framework to highlight the text parts that were considered correlated with the topic of interest.
Conclusions
The research presented in this chapter introduces a framework that can effectively support advanced Web mining tools.The proposed system addresses the analysis of the textual data provided by a web page and exploits semantic networks to achieve multiple goals: 1) the identification of the most relevant topics; 2) the selection of the sentences that better correlates with a given topic; 3) the automatic summarization of a textual resource.The eventual framework exploits those functionalities to tackle two tasks at the same time: text summarization and page segmentation.
The semantic characterization of text is indeed a core aspect of the proposed methodology, which takes advantage of an abstract representation that expresses the informative content of the basic textual resource on a cognitive basis.The present approach, though, cannot be categorized under the Semantic Web area, as it does not rely on semantic information already embedded into the Web resources.
In the proposed methodology, semantic networks are used to characterize the content of a textual resource according to semantic domains, as opposed to a conventional bag of words.Experimental evidences proved that such an approach can yield a coarse-grained level of sense distinctions, which in turn favors the identification of the topics actually addressed in the Web page.In this regard, experimental results also showed that the system can emulate human assessors in evaluating the relevance of the single sentences that compose a text.
An interesting feature of the present work is that the page segmentation technique is based only on the analysis of the textual part of the Web resource.A future direction of this research can be the integration of the content-driven segmentation approach with conventional segmentation engines, which are more oriented toward the analysis of the inherent structure of the Web page.The resulting framework should be able to combine the outcomes of the two modules to improve the performance of the segmentation procedure.Future works may indeed be focused on the integration of semantic orientation approaches into the proposed framework.These techniques are becoming more and more important in the Web 2.0 scenario, where one may need the automatic analysis of fast-changing web elements like customer reviews and web reputation data.In this regard, the present framework may provide content-filtering features that support the selection of the data to be analyzed.
Figure 1 .
Figure 1.The two abstraction layers exploited to extract contents from textual data.
From
the Web page to textual data: a. get a Web page; b. iextract textual data from the source code of the Web page.Text preprocessing: Text Mining a. identify words and sentences terminators to split text into words (tokens) and sentences; b. erase stop words; c. lemmatization.Abstraction:a.first abstraction level: a semantic network is used to extract a set of concepts from every token; eventually, a list of concepts is obtained;b.second abstraction level: the concepts are grouped in homogeneous sets (domains).Content analysis: a. strategy: automatic selection of domain b.identify the informative contents addressed by processing the list of domains obtained after Step 3 (Abstraction); c. strategy: user-driven domain process the list of domains obtained after Step 3 (Abstraction) to search for the topics indicated by the user.Outputs: Summarization: a. use the output of Step 4 (Content Analysis) to rank the sentences included in the textual source;
Figure 2 .Figure 3 .
Figure 2. The proposed system can automatically detect the most relevant topics, or alternatively can select single text sections according to the user requests
Figure 4 .
Figure 4.The Finite State Machine that extracts sentences and tokens from text.The three scheme refers to as many sub-procedures
b. 1 a.. Else 1 . 2 .
Obtain F* by removing from F all the domains with relevance smaller than α r If the cardinality of F* is smaller or equal to θ bFind the largest gap g mn between consecutive domains in F* If g mn is larger than χ and m is smaller or equal to θ select as relevant all the domains from d 1 to d m
Figure 6 .
Figure 6.Comparison between the relevance of domains -for the same subject of DUC 2002-in the DUC summary and in the summary provided by the proposed algorithm
Figure 7 .
Figure 7.An example of web page analysis supported by the proposed framework
Figure 7 and
Figure7and Figure8provide examples of this kind of application.In both cases, the web page included a main section that actually defined the addressed contents, together with other textual parts that did not convey relevant information.The framework supported web content mining by identifying the sentences that actually linked to the relevant topics.These sentences have been highlighted in Figure7and Figure8.
Figure 8 .
Figure 8.A second example of web page analysis supported by the proposed framework
Figure 9 .
Figure 9. Tracking a selected topic by using the proposed framework
Table 1 .
EuroWordNet: supported languages and corresponding elements
Table 2
gives the results obtained by the proposed framework on the DUC 2002 dataset.The Table compares experiments tested under different configurations of the summarization algorithm; in particular, experimental set-ups differ in the number of sentences used to generate the summary.The first column gives the number of most informative sentences extracted from the original text; the second, third, and fourth columns report on recall, precision, and f-measure, respectively, as measured by ROUGE.
Table 2 .
The performance achieved by the proposed framework on the DUC 2002 dataset as assessed by ROUGE
Table 3 .
Comparison between actual document topics and topics proposed by the framework | 9,518.6 | 2012-11-21T00:00:00.000 | [
"Computer Science"
] |
An innovative asphalt patch repair pre–heating method using dynamic heating
method for patch repair has been investigated. Asphalt slabs with 45 mm, 75 mm and 100 mm deep pothole excavations were subjected to dynamic heating with infrared heater operating power from 6.6 kW to 7.7 kW. The heater was kept either stationary or moving slowly across the exca- vations at 130 mm and 230 mm offsets. The tests included evaluating temperature increase throughout the excavations and inside the slab, recording heat power of infrared heater and heating time to avoid burning the asphalt. Irrespective of excavation depth, heating power and offset, the temperature distribution was found non-uniform in the pothole excavations and into the asphalt slab. The temperatures were higher at the faces of the excavation than inside the slab. Dynamic heating for approximately 10 min yielded better heat distribution while minimising the possibility of asphalt overheating and long pre-heating time. It has been concluded that 45 mm and 100 mm deep pothole excavations can be preheated with 6.6 kW and stationary heater or 7.5 kW and moving heater at 230 mm and 130 mm offset respectively. 75 mm deep excavation can be pre-heated with 7.1 kW and stationary heater at 230 mm offset. (cid:1) 2018 The an the CC BY license
Heating technology evolution in asphalt patch repair
One of the major distresses in asphalt pavement is potholes. They can be locally developed and are created due to the presence of water into the pavement and repeated traffic loading [1]. The main objective of permanently repairing a pothole is to create high quality repair in terms of (a) patching lifetime (this meaning quality and durability same as existing pavement), (b) low patching costs (high costs are mainly caused by labor, equipment and traffic control) [2], (c) minimum traffic disruption time ((b) and (c) can be achieved by fewer repetitions of the same patching), and (d) effective patching process (this referring to patching done in any weather conditions) [3]. To reach these objectives, infrared, microwave and induction heating has been used in asphalt paving operations for the last thirty to forty years.
Anderson and Thomas [4] mention that infrared or radiant heat is typically used for repairing overlays, smoothing and blending utility cuts and levelling of old patches. However, they do not recommend the use of infrared heat for full-depth repair. Blaha [3] built an automated patching machine and used infrared technology to heat asphalt to its softening point and ensure high bonding between the new fill mixture and old pavement. A comprehensive description of the machine is given, however, the procedure of the experiments and the study of heat flow to determine productive use of the heating system in asphalt repair are unclear and roughly explained. The study jumps to the conclusion of 1 min heating time for a surface asphalt softening point between 71°C and 82°C with the heater set to an extremely high heat power of 58 kW.
Clyne et al. [5], Uzarowski et al. [6], Freeman and Epps [7] and Leininger [8] used infrared or microwave heat to clear failed asphalt and/or heat the pothole fill material. The purpose of preheating was to achieve high adhesion between fill mixture and cold old pavement increasing its temperature. In these studies, only the surface temperature of the formed repairs is measured. The authors suggest a heating pattern and arrangement between the heater and distressed area to soften the asphalt. However, the authors do not acknowledge the influence of the following parameters in preheated asphalt repairs: climatic conditions, asphalt pavement temperature and thermal properties, asphalt ageing, repair geometry and pre-compaction temperatures of fill mixture. Further, the interaction between asphalt mixture and infrared heat has been studied mainly from on-site observations under diverse climatic conditions, not from controlled laboratory tests.
Obaidi et al. [9] performed, analysed and evaluated in the laboratory pothole repairs using asphalt tiles. They were bonded in the pothole cavity with a styrene-butadienestyrene (SBS) membrane filled with metal particles, steel fibres or chicken wire, induction heating and slight compaction. Tensile bond tests (TBT's) and shear bond tests (SBT's) were used to evaluate the tensile adhesion strength and shear strength respectively of the repair interface. The authors found that depending on the number of bonding layers, percentage of open area of loose fibres and induction heating time, maximum TBT and SBT were 0.35 MPa and 0.2 MPa respectively. In the case of the repairs with chicken wire of 37% to 74% open area, the TBT ranged from approximately 0.1 to 0.37 MPa and SBT ranged from 0.04 to 0.13 MPa.
Further, in the same study, test samples repaired with tiles and cold mix and test samples without any repair (original test samples) were tested using the wheel track test. The results showed that tests samples with asphalt tiles suffered 16.9% more rutting than the original test samples. Rutting in test samples with cold mix asphalt was approximately 40 times higher than in the original test samples. Therefore, tests samples with asphalt tiles outper-formed repairs with cold mix. However, further research is suggested mainly when the excavated pothole contains loose stones or dirt between the tile and the old pavement or has uneven surfaces [9].
Infrared heat transfer in asphalt pavement
Thermal radiation is emitted by any object with a temperature above 0°Kelvin (À273°C). Typical transmission of radiation is by electromagnetic waves that are defined by their wavelength and frequency categorized by the electromagnetic spectrum. The infrared portion of the spectrum is from 0.7 mm (equal to a 7e À7 m) to 10 3 mm (equal to a 1e À3 m). The energy transmitted by an infrared heater is proportional to its temperature. The higher the temperature, the shorter the wavelength and the higher the amount of energy radiated [10].
When the transmitted radiation energy of the heater hits the asphalt surface, then infrared heat transfer occurs. A portion of this radiation is absorbed and increases the temperature of the asphalt mixture by conduction, whereas other portions are transmitted or reflected back to the surrounding area [10]. Therefore, in an infrared-heater-asphalt thermal efficient relationship, the effectiveness of radiant energy emittance of the heater (associated with the heater emissivity (e)), the transmitted percentage of radiative energy by the heater that strikes the asphalt (associated with the view factor (F)) and the amount of this energy absorbed by the asphalt (associated with asphalt emissivity (e)) are dominant.
Other parameters to add to this relationship are the thermophysical properties of the asphalt mixture. These properties affect heat transfer and storage inside the pavement initiated by the absorbed radiation energy of radiative heat application on the surface of the pavement. There are two distinct categories of these properties: transport and thermodynamic properties. The transport properties relate to energy transfer through asphalt and are absorptivity (a), albedo (1-a), emissivity (e) and thermal conductivity (k). The thermodynamic properties relate to the equilibrium state of asphalt mixture and are density (q) and specific heat capacity (c P ) [11].
Thermal conductivity of asphalt is affected by the mixture type, aggregate type [12], aggregate gradation [13], mixture density [14], mixture temperature [15] and presence of moisture in the mixture [13,14]. For example, Hassn et al. [14] found that as density increases thermal conductivity may increase too since the voids of air in the mixture decrease. In addition, moisture and freezing conditions may also increase asphalt thermal conductivity as reported by Mirzanamadi et al. [13]. However, asphalt thermal conductivity may decrease at temperatures higher than 25°C reported by Chadbourn et al. [15]. Specific heat capacity and thermal diffusivity are both affected by thermal conductivity levels. For example, Hassn et al. [14] found that when air voids increase, and thermal conductivity decreases then specific heat capacity and thermal diffusivity decrease too.
Research motivation
As discussed above, heating the underlying layer prior to pothole filling and compaction enhances the bonding between the cold host pavement and the new hot-fill mix. Infrared, microwave and induction heating technologies have been investigated for this purpose. In the case of infrared heated repairs that concern this research, it seems that the current literature lacks fundamental experimental investigation and theoretical analysis of those repairs. To address this, the authors have concluded that the effect of the following parameters in the infrared repair operation should be investigated and fully understood: pothole geometry and depth; ambient temperature; host pavement initial temperature; fill mix-ture temperature; host pavement and fill mixture thermophysical properties; infrared heater properties; infrared heating time; infrared heater offset and position; temperature distribution in host pavement external faces and heat flow inside the host pavement resulting from infrared heat application and repair work. This study has worked along these parameters to assess the use of infrared heat in asphalt repair and set a scientifically based foundation for infrared heated repairs. Below is described the status of each parameter set for this study:
Materials
Asphalt slabs were manufactured with 20 mm dense bitumen macadam (DBM). The mixture comprised of granite coarse and fine aggregate and limestone filler. The bitumen used was 100/150 pen. The mixture gradation curve is shown in Fig. 1 and its design con-forms to BS EN 13108, part 1 (2016) [16]. The binder complies with the Manual of contract documents for Highway works, Volume 1, Specification for highway works (2008) [17]. The preparation of the aggregate, filler and bitumen prior to mixing, the asphalt mixing, and the control procedure of the mix temperature conform with BS EN 12697, part 35 (2016) [18].
Construction of asphalt slabs
The construction parameters of the asphalt slabs are shown in Table 1. The construction process for the 45 mm deep pothole excavations is shown in Fig. 2. For the 75 mm and 100 deep excavation similar construction method was followed but with a larger number of batches as shown in Table 1 and aluminium tubes (discussed in Section 2.3 below). In total twelve slabs of 695 (±5) mm  695 (±5) mm were built. Each slab was designed with one pothole excavation of 305 (±2) mm  165 (±2) mm located in the middle of it. The depths of the excavations were 45 (±2) mm, 75 (±2) mm and 100 (±2) mm with respective slab heights 100 (±5) mm and 140 (±5) mm. The chosen pothole depths represent shallow and deep potholes considering pothole depth range stated by the following authors. Miller and Bellinger [19] note that potholes may be deeper than 50 mm and McDaniel et al. [2] state that patches may range from 38 mm to 152 mm.
Each slab was constructed upside down in batches of 7.6 kg. Twelve, seventeen and eighteen asphalt batches were used to build slabs S1 -S4, S5 -S8 and S9 -S12 respectively. Slabs S1 -S4 were compacted in two lifts, whereas slabs S5 -S12 were compacted in three lifts. Each lift was approximately 50 mm deep and was compacted for 7 min using a vibrating plate as described in Standard Code of Practice, New Roads and Street Works Act 1991, Specification for the Reinstatement of Openings in Highways (2010) [20]. The lifts were bonded together by dynamically pre-heating each compacted lift with infrared heat to an average surface temperature 110 (±10) o C. The pre-heating time was 3 min.
The slabs were demoulded 19 h after their construction. The pothole moulds were removed using infrared heat. To do this, the heater was put above the pothole mould at 230 mm offset. The mould was then heated two or three times for 45 s with 1 min cooling time between the heating times. This was done to allow heat to be conducted from the pothole mould to the excavation wall and warm up the asphalt. The mould was then removed by manually pulling it out. There was not any destruction observed in the excavations with the conducted process.
Method for measuring temperatures within the pothole excavation under dynamic heating
T-type thermocouples (accuracy of 0.5°C) [21] were used to measure real-time temperatures inside the slabs under dynamic 50, 50, 40 S10 S11 S12 infrared heating. The positions of the thermocouples are described in Tables 2 and 3 and shown in Fig. 3. Hollow aluminium tubes, 4 mm in diameter, were put into the slabs during their construction to accommodate the thermocouples. For a 45 mm deep pothole excavation, 4 slabs (S1-S4, Table 2) were in total built and 7 aluminium tubes of lengths L1, L2 and L3 (Table 3) were put at varying depths inside each slab. Six tubes were fixed perpendicular to sides 1 and 2 shown in Fig. 2(b), (c) and (f) and one tube was positioned below and near the bottom face of the excavation ( Fig. 2(g)). Although the positioning of the tubes for all slabs was similarly done, the tubes moved during the construction of the slabs. This is expected to have been mainly happening during the compaction of the slab mixture. For example, Tables 2 and 3 show that thermocouple T1 measured temperature in a depth from the slab surface of 22 (±5) mm and a distance L2 = 243 (±5) mm (Fig. 3). This means that temperature at T1 may have been mea- T-type thermocouples [21] were also used to measure temperatures at eight points into the pothole excavations. A thin steel mesh was used to keep the thermocouples in place during the application of dynamic heat. The mesh helped also to retain the shape of the excavations. Two thermocouples were placed at the bottom corner and mid-area respectively of the excavation, four thermocouples were placed in the middle of the excavation vertical sides, and two thermocouples in the mid-top periphery of the excavation ( Fig. 4 and Table 4).
Description of infrared heating equipment and dynamic heating method for thermal tests
The parameters of the thermal tests are summarised in Table 5. An experimental infrared heater was used to perform the thermal tests of this study. The heater is described in Byzyka et al. [22] and is shown in Fig. 5 of Section 2.5. The heater consists of a steel frame of 1.60 m (L) Â 1.55 m (W). The frame has adjustable height and is supported by four wheels. The heater contains two heating elements of 165 mm (L) Â 455 mm (W) Â 102 mm (H). They can operate at heat powers between 6.6 kW and 7.7 kW and they can be set to be either stationary or moving over the pavement with a constant speed of 0.04 m/s. The heater is operated by its central control unit.
For the thermal tests, the pothole excavations were heated in heating-cooling cycles, referred to as ''dynamic heating". For the heating part of the cycle, the heater was operating at heat powers 6.6 kW, 6.7 kW, 7.1 kW, 7.5 kW and 7.7 kW. The excavations were heated until the thermocouples closest to the heating element plate (see T32, T33, T40, T41, T48 and T49 in Fig. 4 and Table 4) measured asphalt temperatures between 140°C and 160°C. This was done to avoid burning the asphalt. Similar asphalt heating levels are also suggested by previous studies. Uzarowski et al. [6] suggest heating the repair area to a temperature not greater than 190°C. Nazzal et al. [23] suggest to pre-heat the old pavement until temperatures reach 135°C to 190°C levels. In addition, Huang et al. [24] note that heating asphalt between 137°C and 226°C reduces ageing or charring of asphalt binder. Asphalt ageing happens due to volatilisation, oxidation, and other chemical processes. Asphalt oxidation should be avoided because it leads to pavement failure due to asphalt binder hardening, change in viscosity, asphalt separation, asphalt embrittlement, loss of bitumen cohesion and bitumen-aggregate adhesion [25]. For the cooling part of the cycle, the heater was simply turned off and no heating was applied until temperatures of T32, T33, T40, T41, T48 and T49 reached temperatures between 70°C and 80°C. This was done to allow heat to be conducted within the slab and warm up the asphalt mixture around the pothole excavations.
Further, the tests were conducted with heating applied at 130 mm and 230 mm offsets from the surface of the slab. The heating elements were stationary above the excavations or moving across the excavations. The moving distance within the heater steel frame was 1 m. This distance could be automatically set to be covered by the heating elements by moving forward and backwards repetitively within heater steel frame.
Thermal tests set up
Dynamically heating the pothole excavation is intended to improve common pothole repair practices with or without preheating to ultimately increase interface bonding and therefore pothole repair durability. The pothole pre-heating method studied in this paper would be expected to follow after the failed asphalt mixture of the pavement is removed and the excavation is cleaned from debris and water. Since the pothole excavations studied in the paper were artificially created, these steps were not included in the described processes.
The thermal test set up is shown in Fig. 5. Sixty thermal tests were in total completed (20 tests per pothole excavation depth). Temperatures were measured for a dynamic heating duration of approximately 30 min. The tests were performed 30 days after the construction of the slabs and measurement of thermal conductivity. As the sample size was big, this timeframe was necessary to optimise laboratory resources during sample preparation and production and carry out additional test (like thermal conductivity) on the whole sample. The ambient temperature during the tests ranged between 20°C and 22°C.
Thermal conductivity (k)
For this study, thermal conductivity was measured at eleven points per slab prior to commencing the thermal tests. During the measurement of thermal conductivity, the ambient temperature ranged between 18.5°C and 22.5°C. At the same time, the temperature of the slabs ranged between 20.5°C and 22°C. Thermal conductivity was measured with the transient line source (TLS) [26] shown in Fig. 6. The temperature of the slabs was also measured using the TLS.
The TLS includes the TLS controller and a 50 mm needle designed for testing samples that are tough to drill such as rock, concrete or asphalt samples. The TLS has an accuracy of 5% and reproducibility of 2%. The TLS method follows ASTM D5334 (2000) [27] and has been previously used in investigations of thermal conductivity by Chadbourn et al. [15], Blázquez et al. [28] and Lu et al. [29]. To measure thermal conductivity with the TLS-50 first a 4 mm (D) x 50 mm (H) hole was drilled in the asphalt slab. Then, the excess powder in the hole from drilling was blown out with compressed air. The needle was covered with a thermal paste called Arctic Alumina [30] before inserting it completely into the slab. The thermal paste helps to fill any air gaps in the hole and promote good thermal contact between the slab mixture and the needle. Thermal conductivity was calculated by the TLS using Eq. (1): where k = thermal conductivity, W/m K; q = heating power, W; a = slope. The slope comes from a plot of temperature rise in the sample when heated by the TLS versus the logarithm of time [26].
where m = mass of each material, kg; and c = specific heat capacity of each material, J/kg K.
Air voids content
The air voids content of the compacted asphalt slabs was calculated based on the calculation of the maximum specific gravity and the maximum theoretical specific gravity of asphalt mixture. The bulk specific gravity (G mb ) was determined through the AASHTO T166 (2007), method A [31] and the maximum theoretical specific gravity (G mm ) was calculated with Eq. (4) [32]. In this equation, the effective specific gravity of aggregate (G se ) was used as 2.65 and the specific gravity of bitumen (G b ) as 1.01. Thereafter, the percentage of air voids in the mixture was calculated with Eq. (5).
where W T = total eight of asphalt mixture, gr; Wagg = weight of aggregate, gr; W AC = weight of total asphalt binder, gr.
where VTM = voids in total mix, %. Table 6 shows air voids, thermal conductivity, calculated specific heat capacity and thermal diffusivity of the compacted asphalt slabs. The table shows the percentage of air voids from the average of five cores per slab for slabs S1 and S2 and from the average of eight cores per slab for slabs S3 to S12. The cores were extracted throughout the whole sample in order to test overall air voids distribution.
Thermophysical properties
The cores were extracted from the slabs at the end of the thermal tests conducted per slab. After the coring, the slabs were not used in any testing. The results show that air voids content ranged from 10% to 13%. The effect of asphalt preheating in air voids has not been investigated in this study due to lack of laboratory equipment and no other study for infrared heated patch repair was found to mention it. However, the authors acknowledge that in some areas of the slabs, air voids may have been reduced, while in other areas they may have been increased. This assumption was done after Norambuena-Contreras and Garcia [33] noted similar effect of microwave and induction heating in asphalt mixture air voids. The affected areas by the pre-heating would probably be those closer to the heated pothole excavation, near the heater where some cores were also taken or in the areas where heat was conducted due to excavation pre-heating. Table 6 demonstrates that thermal conductivity of the asphalt slabs ranged from 0.98 W/m K to 1.24 W/m K. The results show that thermal conductivity is significantly affected by the high percentage of air voids in the slabs. This happened because air has much lower thermal conductivity (0.025 W/m K [34]) compared to the thermal conductivity of granite aggregate (2.68 W/m K [34]) and limestone filler (2.92 W/m K [35]) used in this study and binder (0.39 W/m K [36]). High air voids mean also lower interlocking between the aggregates of the mixture, less thermal conductance paths into the asphalt mixture and therefore lower thermal conductivity. This effect of air voids in asphalt thermal conductivity has been also reported by Mirzanamadi et al. [13] and Hassn et al. [14].
Further, in this study, thermal conductivity was measured with the TLS method. The TLS needle was inserted into holes drilled throughout the slabs and thermal conductivity was calculated by the TLS controller using the heat conduction equation. This means that the measurement of thermal conductivity was affected by the distribution of mineral materials around the drilled hole, the cleanliness of the hole and the thermal contact between the asphalt mixture and the needle. In addition, specific heat capacity was found equal to 865.44 J/kg K. Specific heat capacity is considerably affected by the temperature and mass of the asphalt mixture. In this study, both parameters remained at similar levels for all slabs. Finally, thermal diffusivity had been found to range between 5.20 Â 10 À7 m 2 /s and 6.72 Â 10 À7 m 2 /s. These values were affected by the range of the thermal conductivity and volumetric heat capacity. The discussed effect is also noted by Mirzanamadi et al. [13] and Hassn et al. [14].
Temperature distribution under dynamic heating
The temperatures captured in the 45 mm, 75 mm and 100 mm deep excavations wall and inside the slabs are presented in Figs. [7][8][9][10][11][12]9 and 11 show temperature profiles for stationary heater above the excavations. Whereas, Figs. 8, 10 and 12 show temperature profiles for moving heater across the excavation. Each figure contains ten graphs and demonstrates temperatures per thermocouple position at the end of approximately 10 min, 20 min and 30 min dynamic heating. Temperatures are reported for operation of heater with 6.6 kW, 6.7 kW, 7.1 kW, 7.5 kW and 7.7 kW heat powers and for 130 mm and 230 mm heater offsets. The slab number used to perform the thermal tests and the number of heating-cooling cycles for each dynamic heating time are also reported. All thermal tests finish with the heating part of the cycles (or half cycle) and therefore the heating-cooling cycles are reported with numbers such as 3.5, 6.5, 8.5 etc. For example, in Fig. 7, for 10 min dynamic heating and heater operating at 6.6 kW heat power at 130 mm offset, three and a half cycles were done. For the three whole cycles, the heater was on and off repeatedly. For the half-cycle, the heater was on for some time and then removed from the pothole pre-heating procedure. The hating-cooling cycle procedure was also described in Section 2.4.
Overall, the results show higher temperatures in the faces of the pothole excavation than inside the slabs. This happened because the temperatures in the excavation increase due to radiation. Whereas, the percentage of the heater radiative energy that reaches the slab and is absorbed by it will increase the temperature of the asphalt mixture inside the slab. The absorptivity of asphalt mixture depends on the colour of the mixture, as light colour asphalt surfaces have higher reflectance [37], and the surface roughness [38]. Hassn et al. [14] showed that asphalt pavements with high percentage of air voids, like the slabs in this study, have higher reflectance than pavements with lower percentage of air voids. This happens because the illumination surfaces and angles are higher for high air void content mixtures [38]. Once the heater energy is absorbed by the asphalt, the increase of mixture temperature inside the slabs depends on initial slab temperature and heat transfer mainly due to conduction which is dependent on the thermal properties of the mixture.
Slabs with 45 mm deep pothole excavation
The heating effects of dynamic heating are shown in Figs. 7 and 8. It is observed that temperatures in the pothole excavation and inside the slab were increased non-uniformly. The temperature increase rate inside the slab was higher for the first 10 min of heating than between 10 min and 30 min of heating. This happened because thermal conductivity decreases while mixture temperature increases. The effect of temperature in thermal conductivity of asphalt mixture was not measured in this study. However, it has been previously noted by Chadbourn et al. [15] for temperatures between 25°C and 75°C and by Pan et al. [35] for temperatures between À20°C and 60°C. Further, Pan et al. [35] suggest that the decrease of thermal conductivity at higher mixture temperatures is mainly affected by the thermal conductivity of aggregates, as they account for more than 90% of the mixture, than the thermal conductivity of the binder.
Overall, temperatures measured in the mid-bottom of the pothole excavation (T26) exhibited the highest increase in temperature and reached 140°C to 160°C. High temperatures but lower than T26, mainly in the region of 120°C to 140°C, were observed in the mid-top periphery of the pothole excavation (T32 and T33). There were two reasons that T26 had higher temperatures than T32 and T33. Firstly, temperatures on the heating element plate were higher in the central region of the plate than in the periphery and the ends of the plate (this is shown in Ref. [22]). Secondly, T32 and T33 were located at the top of the excavation periphery and it seems that the heat loss was higher in those points than at the bottom of the excavation. This also shows that the effect of the external environment was higher for T32 and T33 than T26. In addition, the lowest temperatures were observed in the sides (T28-T31) and the corner (T27) of the pothole excavation. This shows that the heater had a larger view of the points located in the horizontal faces of the pothole excavation than the points located in the vertical faces of the excavation. To this extent, temperatures for T28-T31 and T27 ranged from 80°C to 120°C.
Overall, temperatures in the pothole excavation were higher for 7.7 kW than for 6.6 kW. However, no increasing or decreasing trend was observed for temperatures resulted for heater heat powers in between the 6.6 kW-7.7 kW range. This meaning that temperatures in the pothole excavation may either increase or decrease without depending on the heater heat power between the noted above heat power range. Similar reaction was also observed when excavation temperatures were compared (a) between 130 mm and 230 mm heater offsets for stationary and moving heater for each heater operating heat power and (b) between stationary and moving heater for 130 mm and 230 mm heater offsets for each heater operating heat power.
Heat inside the slabs was transferred from top to bottom. Temperatures tend to increase during the heating-cooling cycles but had a lowering trend from the top to the bottom of the slabs. The latter is mainly attributed to the slow heat transfer due to low thermal conductivity and volumetric specific heat capacity of the asphalt mixture of this study. Overall, thermocouples T1 and T5 located closer to the top surface of the slabs captured temperatures between 40°C and 80°C. Below T1 and T5, temperatures for T2-T4, T6 and T7 ranged from 20°C to 70°C. A similar pattern in temperature increase between temperatures in pothole excavation and inside the slabs was observed for the three different durations of dynamic heating with heater being either stationary or in motion.
Slabs with 75 mm deep pothole excavation
The temperature profile under dynamic heating is shown in Figs. 9 and 10. Similarly, with the 45 mm deep pothole excavation, temperatures inside the slab were lower than in the excavations as a result of the thermal properties of the mixture. Temperatures inside the slabs showed also a lowering trend from top to bottom. The effect of dynamic heating in the temperatures of the excavation faces was larger for 230 mm heater offset than 130 mm offset. This happened for all heat powers with stationary heater above the excavation and for heat powers between 6.7 kW and 7.5 kW for moving heater. However, temperature was at similar levels in the pothole excavation for moving heater with 6.6 kW and 7.7 kW and 230 mm heater offset. Overall temperature distribution in the 75 mm deep pothole excavation was more uniform than in the 45 mm deep pothole excavation for stationary heater. T39 located in one of the vertical faces of the excavation was the only thermocouple that had lower temperatures than the rest of the excavation. This may have happened because the thermocouple moved or was mistakenly covered by a wire of the steel mesh that was holding the thermocouples in place during the thermal tests. From Fig. 10, it is also observed that for moving heater thermocouple T41 had significantly lower temperatures than T40 although both thermocouples were located at the top of the exca-vation. This happened because the heater was moving across the excavation and perpendicular to the long sides of the pothole where T41 was located in the middle of it. The air circulation due to the moving heater seems to have significantly affected and cooled down the temperatures of T41 compared to T40.
Whereas, at heater offset 230 mm, temperatures for T34-T38 were 96°C -140°C; for T39 were 83°C -110°C; for T40 were 140°C -156°C; and for T41 were 90°C -137°C (Fig. 10). For internal slab temperatures, for both stationary and heater in motion at 130 mm offset, temperatures were fluctuating from 30°C to 70°C for T8. This sensor was located closer to the top surface of the slab. Below T8, temperatures ranged between 25°C and 50°C (T9-T11 and T14-T16). Similar temperatures to T8 were captured from thermocouple T12 located below the bottom surface of the pothole excavation. For 230 mm offset, the temperature profile inside the slab did not change significantly. Specifically, T8 captured temperatures between 50°C and 90°C and temperatures for T9-T11 and T14-T16 ranged from 25°C to 75°C and 20°C to 55°C respectively. T12 measured temperatures between 40°C and 75°C. T13 showed temperatures between 50°C and 110°C (Figs. 9 and 10).
Slabs with 100 mm deep pothole excavation
The heating effects of dynamic heating are shown in Figs. 11 and 12. Higher temperatures were seen between the excavation and inside the slab. Temperatures inside the slab were higher in the upper part of the slab and lower near the bottom of the slab.
Both temperature trends were seen for the 45 mm and 75 mm deep excavations. However, temperatures measured in the 100 mm deep excavation were even more uniform than the 45 mm and 75 mm deep excavations. Overall, temperatures for thermocouples T42 to T46 located within the excavation ranged between 80°C and 110°C. The lowest temperatures in the excavation were seen for T47. For this sensor, for stationary and heater in motion at 130 mm offset, temperatures were between 50°C and 65°C. For 230 mm offset, temperatures ranged between 65°C and 85°C. The highest temperatures were observed in the top of the excavation. These temperatures were captured by T48 and T49 and ranged between 110°C and 160°C. The effect of cooling at T48 due to heater moving across the excavation was mainly observed at heater operating heat powers between 6.7 kW and 7.7 kW. Inside the slab mixture, the temperature for points closer to the top surface of the slab ranged from 30°C to 70°C (T17) and from 45°C to 85°C (T22). At lower depths, temperatures were between 20°C and 55°C. These temperatures were measured by T18-T20 and T23-T25 and were affected by the thermal properties of the mixture as already discussed. Finally, temperatures between 30°C and 55°C were captured from thermocouple T21 located below the bottom surface of the pothole excavation. For each temperature sampling point, the effect of heater power, offset and state above the excavations was analysed. This was discussed in Sections 5.2.1-5.2.3. The optimum heating method per pothole excavation was found by examining only the temperatures inside the slabs for approximately 10 min of heating. This was done because the results showed that the temperature increase rate inside the slab was higher for the first 10 min of heating than between 10 min and 30 min of heating. The optimum methods were chosen after finding the method that offered the highest increase of temperature per thermocouple; was performed with lower number of heating-cooling cycles; and low heat power was used for the heating part of the cycles. The temperatures in the pothole excavation were not considered because the results showed that same temperature levels were achieved with each heating-cooling cycle. However, more heating time was needed to warm up the internal mixture of the slabs.
Initially, five best heating methods were chosen among the twenty thermal tests conducted per pothole excavation. Figs. 13, 14 and 15 present the concluded five methods. In these figures, the temperature increase per thermocouple location is compared against each method. Then, the optimum pre-heating method for 45 mm, 75 mm and 100 mm deep excavation was concluded and shown in Fig. 16, after eliminating the heating methods that did not meet the criteria discussed in the previous paragraph.
The time frame of heating-cooling cycles for optimum dynamic heating methods for all excavations are presented in Fig. 17. For 45 mm and 100 mm deep pothole excavations optimum heating methods with stationary and heater in motion are suggested. How- Fig. 17. Heating -cooling cycle times for optimum dynamic heating methods for 45 mm, 75 mm and 100 mm deep pothole excavations. ever, to avoid overheating a larger area around a pothole excavation of 305 Â 165 mm 2 , stationary heater is preferable.
Conclusions and future work
This study experimentally investigated the temperature profile in asphalt slabs and various depths of pothole excavations during dynamic infrared heat application. Temperatures were studied for various slab-heater configurations. These were 130 mm and 230 mm heater offsets and operating heat powers from 6.6 kW to 7.7 kW for stationary and moving heater. For each temperature sampling point, the effect of heater power, offset and state above the excavations was analysed. The main conclusions drawn from the research are the following: Temperatures under dynamic infrared heating in 45 mm, 75 mm and 100 mm deep pothole excavations and their host pavement were non -uniformly distributed. There was higher concentration of temperatures in the pothole excavation than inside the host pavement. This happened because temperatures in the excavations increase due to radiation. Whereas, the temperature profile inside the host pavement depends on the thermal properties of the asphalt mixture. Temperatures inside the host pavement increased more for the first 10 min of heating than for heating between 10 min and 30 min. Dynamically heating a pothole excavation ensures heating up of its external surfaces and internal asphalt mixture of host pavement without burning or overheating the asphalt. For this reason and to keep patching time to a minimum, 10 min-12 min dynamic heating time is better. It is suggested that 45 mm deep pothole excavation is dynamically heated for approximately 10 min with (a) 6.6 kW heat power and stationary heater above the pothole excavation at an offset from the asphalt surface 230 mm and (b) 7.5 kW heat power and heater in motion at an offset 130 mm. Method (b) preferred for pothole areas bigger than 305 Â 165 mm 2 . It is suggested that 75 mm deep pothole excavation is dynamically heated for approximately 10 min with 7.1 kW heat power at an offset 230 mm. It is suggested that 100 mm deep pothole excavation is dynamically heated for approximately 10 min with (a) 6.6 kW heat power and stationary heater at an offset 230 mm and (b) 7.5 kW heat power and heater in motion at an offset 130 mm. Method (b) preferred for pothole areas bigger than 305 Â 165 mm 2 . Dynamically heating the pothole excavation is intended to improve interface pothole repair bonding and therefore repair durability over time. The suggested optimum heating may be implemented in asphalt patch repairs prior to any pothole filling and compaction and after the failed asphalt is removed and the cavity is cleaned from debris and water. Further investigation is underway to evaluate interfacial bonding and long-term performance under moving wheel load as a quality improvement tests. In addition, the impact of preheating at different environment condition, host asphalt pavement mixtures and pothole fill materials are also under investigation. From the research it has also been concluded that future research must explore the effect of dynamic infrared heating in bitumen loses and disturbance of air voids in host pavement. The effect of thermal properties, asphalt absorptivity and surface roughness in infrared heated repairs should also be further investigated. | 8,995 | 2018-11-01T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
Monocyte and Macrophage in Neuroblastoma: Blocking Their Pro-Tumoral Functions and Strengthening Their Crosstalk with Natural Killer Cells
Over the past decade, immunotherapy has represented an enormous step forward in the fight against cancer. Immunotherapeutic approaches have increasingly become a fundamental part of the combined therapies currently adopted in the treatment of patients with high-risk (HR) neuroblastoma (NB). An increasing number of studies focus on the understanding of the immune landscape in NB and, since this tumor expresses low or null levels of MHC class I, on the development of new strategies aimed at enhancing innate immunity, especially Natural Killer (NK) cells and macrophages. There is growing evidence that, within the NB tumor microenvironment (TME), tumor-associated macrophages (TAMs), which mainly present an M2-like phenotype, have a crucial role in mediating NB development and immune evasion, and they have been correlated to poor clinical outcomes. Importantly, TAM can also impair the antibody-dependent cellular cytotoxicity (ADCC) mediated by NK cells upon the administration of anti-GD2 monoclonal antibodies (mAbs), the current standard immunotherapy for HR-NB patients. This review deals with the main mechanisms regulating the crosstalk among NB cells and TAMs or other cellular components of the TME, which support tumor development and induce drug resistance. Furthermore, we will address the most recent strategies aimed at limiting the number of pro-tumoral macrophages within the TME, reprogramming the TAMs functional state, thus enhancing NK cell functions. We also prospectively discuss new or unexplored aspects of human macrophage heterogeneity.
Introduction
In the past decade, we have witnessed a scientific breakthrough with immunotherapy that has grabbed the cover of the most prestigious scientific journals due to its significant impact on patients' survival [1,2]. In particular, the blockade of PD-1/PD-Ls immunecheckpoint molecules represents the gold standard immunotherapeutic approach in different cancers including melanoma [3,4] and non-small cell lung cell cancer (NSCLC). Currently, it has become the first-line monotherapeutic approach in some types of cancer such as advanced NSCLC lacking mutations in targetable Tyrosine Kinases [5,6]. However, despite the impressive results seen in a percentage of patients, others showed unresponsiveness or resistance to this immunotherapy. This represents a significant challenge for further application and forces the exploration of new combined strategies to overcome the failures [7]. Moreover, some patients, despite having clinical benefits, developed autoimmune diseases that forced the interruption of therapy with fatal tumor recurrence. To understand what mostly impacts the therapy effectiveness and disease's course, different aspects of the tumor landscape have been analyzed, including patients' microbiota [8][9][10], and the grade and quality of tumor infiltration by different immune cell types, termed "Immunoscore" [11][12][13][14]; these include macrophages and natural killer (NK) cells showing prognostic and predictive significance also in terms of immunotherapies' efficacy [15,16]. chemo, and radiotherapy) by killing tumor cells through complement activation and triggering the cytotoxicity of FcγRpos cells, such as macrophages and NK cells. Unfortunately, however, the anti-GD2 immunotherapy does not cause long-lasting benefits and more than 50% of HR-NB patients fatally relapse within five years. Whether some properties of phagocytes and NK cells have been positively correlated with a better response, including the presence at genotypic levels of FcγRs polymorphism with different affinities [36,38], other "immune responsiveness signatures" are far from being identified. These could include the presence of specific macrophage and NK cell populations present at the tumor site.
With this scenario, the review aims to sum up and discuss the most relevant data on macrophages in NB, looking at future promising immunotherapeutic strategies able to potentiate their antitumor activity and their crosstalk with other cell types colonizing TME, particularly NK cells.
Macrophages in Neuroblastoma Microenvironment
Most pre-clinical immunotherapeutic approaches against NB have been tested administrating to mice syngeneic tumor cell lines. More recently, the Tyrosine hydroxylase-MYCN (TH-MYCN) transgenic mouse model has been adopted. Although lacking spontaneous metastasis, TH-MYCN transgenic mice overexpress MYCN under the control of the tyrosine hydroxylase promoter, presenting aggressive tumors that recapitulate the location, histology, biology, and cytogenetics abnormalities of human NBs.
An immortalized 9464D cell line was derived from a spontaneous NB arising in TH-MYCN transgenic C57BL/6 mice. It grew much more quickly when injected intra-adrenally (IA) in TH-MYCN mice as compared to the subcutaneous (SC) injection. Moreover, intraadrenal tumors were much more densely infiltrated by TAMs, which expressed low levels of MHC class II and displayed a more immunosuppressive M2-like phenotype [39]. These models were also utilized to test immunotherapies. After treatment with cyclophosphamide to create a therapeutic window of minimum residual disease allowing host immune development, it was observed that immune cell infiltration was dramatically different between IA and SC murine NB models. While showing similar GD2 and MHC class I expression, IA tumors showed a type of immune infiltration more similar to that observed in human cancers. Cyclophosphamide was also administered to TH-MYCN transgenic in combination with an anti-GD2 or anti-4-1BB monoclonal antibody (mAb). In both combination regiments, increased survival was observed. Thus, data indicate that the TH-MYCN transgenic mouse represents a suitable model for investigating NB immunobiology and testing immunotherapies in a preclinical scenario [40].
Recently, patient-derived orthotopic xenografts (PDXs) have been proposed as preclinical models more reliable than cell line-derived xenografts since they would better predict clinical outcomes. Undissociated tumor fragments from HR-NB patients were implanted into the para-adrenal area of immunocompromised NOD/SCID/gamma(c)(null) (NSG) mice. PDXs reproduced the genetic and histological features of original tumors cells and were capable of metastasizing to lungs, liver, and BM. The main TME hallmarks of the aggressive parental tumors, such as the presence of abundant cancer-associated fibroblasts (CAFs), TAMs, extracellular matrix (ECM) components, pericyte lining, and abundance of lymphatic and blood vessel vascularization, were maintained. However, information is still lacking about the survival capability of co-engrafted human tumor stroma, and the relative contribution of the human and murine stroma. In this context, in PDXs from both MYCN-amplified and -non-amplified tumors, Braekeveldt et. al. observed infiltration of mouse F4/80pos macrophages but not positivity for the human macrophage CD68 marker. This suggests the involvement of murine stroma in the tumor formation and lack of survival of the human stromal counterpart [41]. Given the relevance of the human TME in the disease progression, NB-PDXs need to be optimized to more precisely predict the efficacy of current and novel anti-cancer immunotherapies.
It should not be disregarded that some cancers can develop after a period of protracted chronic inflammation caused by microbial infections or non-biological events, such as physical or chemical stress; this would occur more in adults than in very young children, such as those affected by NB. This could have an impact on the TME composition, which seems to be quite different in adult and pediatric cancers in terms of infiltrating inflammatory cells. While in adults tumor-infiltrating cells typically include a variety of leukocytes, in pediatric ones the majority of cells are represented by macrophages, which tend to accumulate in necrotic regions [42,43]. Moreover, in the NB microenvironment, differences in cell composition and functions could be also related to the heterogenic nature of the tumor, which arise from errors in the neural crests' differentiation program during the early phase of embryonic development. It is conceivable that tumor growth and immune cell development could occur simultaneously [44]. It is known that NBs have an immunosuppressive TME, related at least in part to MYCN amplification [45]. This allows cancer cells to evade host immune responses. In a cohort of 102 non-MYCN-amplified, untreated, primary NB tumors, high levels of inflammation-related genes characterizing M2 macrophages and a restricted gene signature (IL-6, IL-6R, IL-10, and TGF-β) were found to correlate with a worse prognosis [46]. In HR-NB patients, cancer promoting macrophages predominate both in locoregional tumors, showing high expression of CD163 and CD206 M2 markers [42], and in metastatic NBs characterized by a high presence of TAMs. Moreover, a TAM-associated gene signature, including CD33/CD16/IL-6R/IL-10/FCγR3 genes, was more frequently detected in metastatic patients lacking MYCN amplification and diagnosed at age ≥ 18 months than in patients diagnosed at age <18 months. The signature above contributed to 25% of the accuracy of a novel 14-gene-based tumor classification score significantly correlated with a worse five-year progress free survival [47].
Crucial questions remain. How TAMs can contribute to the development of NB and what are the main mechanisms acting at the level of the TME? In this context, it has been observed in several types of malignancies, including HR-NB, that high levels of IL-6 and the soluble form of its receptor (sIL-6R) in the patient's blood and BM, correlated with bad prognosis [48,49] and would support tumor growth [50]. Furthermore, monocytederived IL-6 and sIL-6R activate STAT3 in NB cell lines promoting drug resistance. NB cell lines pretreated with IL-6 showed a remarkable increase in survival rate when exposed to chemotherapeutic agents. The effect was boosted by the addition of human monocyte-derived IL-6R known to have a trans-acting agonistic effect [51] and was linked to the upregulation of survival factors, such as survivin (BIRC5) and Bcl-xL (BCL2L1). Accordingly, the protection from drug-induced apoptosis was lost in the presence of STAT3 inhibitors or STAT3 gene knockdown. These data provide new insights into the role of monocytes in promoting resistance of NB to cytotoxic effects of therapeutic agents through STAT3 activation [52]. This opens the way to the possible targeting of TME inflammationassociated biologic pathways in NB. In this context, anti-IL-6 mAb have been already tested in adult cancers [53].
It has also been investigated whether IL-6 released by TAMs influenced the proliferation of NB cells through STAT3 activation and up-regulation of the c-MYC transcription. Surprisingly, slow down of NB growth, reduction of STAT3 activation, and c-MYC upregulation were not observed in vitro, blocking IL-6, or in IL-6 knockout mice. On the contrary blocking of JAK-STAT activation, greatly inhibited the TAMs-sustained development of NBs implanted subcutaneously in NSG mice. c-MYC protein levels were also partially reduced by the inhibition of STAT3 phosphorylation, indicating that TAMs can affect NB proliferation stimulating c-MYC expression by a STAT3-and IL-6-independent mechanism [54].
All the findings above highlight the relevance of the functional interactions between tumor cells and TAMs, which should be further explored and possibly targeted in NB treatment. NBs are constitutively characterized by low expression of MHC class I [55][56][57][58], and easily evade the killing activity of cytotoxic T cells. Therefore, potentiating T-independent anti-tumor responses could represent a more effective approach limiting NB growth.
Along this line, Buhtoiarov et al. analyzed mice engrafted with the NXS2 mouse NB cell line and treated with the anti-CD40 mAb. Macrophages increased the expression of intracellular toll-like receptor 9 (TLR9), becoming more sensitive to CpG-containing oligodeoxynucleotides (CpG), a TLR9 agonist. This effect was accompanied by an increased release of IFN-γ, IL-12, and TNF-α by phagocytes, and significant inhibition of tumor growth [59]. Moreover, in NSG or NOD/SCID immunodeficient mice, the depletion of monocytes/macrophages through blockade of colony stimulating factor 1 receptor (CSF1R, also known as macrophage colony-stimulating factor receptor, M-CSFR) significantly enhanced the cyclophosphamide plus topotecan combination therapy. This is in line with the in vitro observation that, topotecan can increase the release of CSF-1 (M-CSF) by NB cells, favoring TAMs differentiation [60]. Previous studies also suggested that CFS-1R blockade could antagonize the activity of CSF-1 released by stromal cells in response to chemotherapy [61,62]. Overall, these studies point out the central role of TAMs in favoring NB growth and resistance to pharmaceutical treatments.
It is of note that the TME consists of a complex network of tumor and different nonmalignant cells, all of which can have fundamental interplays with macrophages. In this context, studies by Hashimoto and colleagues provided important data supporting the cooperation of TAMs and CAFs in supporting tumor progression. They demonstrated that there is a reciprocal influence among NB cells, TAM-like macrophages (CD68pos, CD163pos, CD204pos), and CAFs, identified as alpha smooth muscle actin (αSMA) positive cells. In in vitro experiments, PBMC-derived macrophages and BM-derived mesenchymal stem cells differentiated into TAM-like and CAF-like cells, respectively, after being attracted by the NB cell line. In turn, TAMs and CAFs colonization increased tumor invasiveness and growth. Moreover, TAM-like macrophages significantly promote CAFs proliferation, resulting in a synergistic effect favoring NB progression [63]. Thus, TAMs and CAFs may serve as prognostic indicators and possible therapeutic targets in NBs. Their abundance together with the co-presence of mesenchymal stromal cells (MSCs), correlated in human NBs with high histological malignancy and low T and NK cell infiltration [63,64]. Still, how NB, MSCs, CAFs, and monocytes/macrophages collaborate in establishing a protumorigenic TME and supporting immune evasion is poorly understood. In a recent study, the interaction of monocytes and MSCs with NB cells was shown to cause significant and peculiar upregulation of several pro-tumorigenic factors including TGF-β1 and IL-6, which protects monocytes from spontaneous apoptosis, promoting TAMs differentiation. This cross-talk has been confirmed in both xenotransplanted tumors and primary tumors from patients. It was also shown that there was a strong correlation between the presence of CAFs and the activation in NB of p-SMAD2 and p-STAT3, which participate in the TGF-β1 and IL-6 signal, respectively [65].
With these premises, new strategies targeting the stromal compartment may heavily impact the clinical outcomes of HR-NB patients. This possibility drive studies aimed at better clarifying the cellular properties of TME including metabolic pathways regulating their activity. In this regard, arginase 2 (ARG2), which regulates arginine metabolism, was found to play a key role in NB proliferation and the establishment of immunosuppressive TME. In particular, a reciprocal crosstalk between cancer and immune cells occurred regulating ARG2 expression in NB cells and favoring tumor development. In contrast with most studies, Fultang and colleagues showed that NB-conditioned PBMC-derived monocytes turned into CD206neg Arg1low cells, a phenotype that the authors defined as M1-like; cells produced IL-1β and TNF-α that, in turn, stimulated ARG2 expression in NB. Accordingly, in patients with stage IV NB, the presence of an IL-1β and TNF-α enriched TME correlated with a worse prognosis. These findings suggest a clinically exploitable, immunological metabolic regulatory loop between tumor cells and myeloid cells regulating ARG2 [66]. The unusual detection of M1-like macrophages in NB specimens might further underline the complexity and heterogeneity of the human NB microenvironment and suggest that different areas of the tumors could be colonized by cells with different functional properties.
Regarding lipidic metabolism, the fatty acid binding protein 4 (FABP4) expression in TAMs was associated with advanced clinical stages and adverse NB histology. Invasion, migration, and growth of NB were all accelerated by FABP4pos macrophages. In macrophages, FABP4 physically interacted with ATP Synthase B (ATPB), leading to its ubiquitination with the reduction of ATP levels, deactivation of the NF-κB/RelA-IL-1α pathway, and reprogramming towards an anti-inflammatory phenotype. Thus, FABP4 could be considered a new functional marker of protumor TAMs in NB, and a possible target in immunotherapeutic approaches [67].
Other strategies to limit the pro-tumoral macrophage activity could consist in hampering monocyte recruitment at tumor sites. In this context, the lipid sphingosine-1-phosphate (S1P) was shown to promote in NB the expression of the CCL2 chemokine, known to attract monocytes to inflammation sites. Blocking the S1P2 downstream signal by selective antagonists reduced CCL2 expression, and resulted in a remarkable reduction of F4/80pos macrophages in NB xenograft, and decreased tumor growth [68,69]. In humans, non-cytotoxic doses of the tyrosine kinase inhibitors (TKI) imatinib and nilotinib caused interesting off-target effects, including the reduction in the expression in monocytes of CCR1 and CSF-1, and inhibition of their differentiation towards macrophages [70].
Efforts have been made to understand how to effectively reprogram the M2-like macrophages towards an M1-like, anti-tumoral, phenotype. Bacillus Calmette-Guerin (BCG) can induce M1 polarization of M0 macrophages and revert that of M2 [27], and BCG treatment is increasingly accepted by multiple guidelines for invasive bladder cancer [71].
A recent work published by Relation et al. engineered MSCs to produce and release IFN-γ at the tumor site. This strategy led to the transient polarization of macrophages toward the M1-like phenotype (expressing IL-17 and IL-23p19) in orthotopic NB xenografts, with reduced tumor growth and increased overall survival, without any systemic toxicity [72]. CAF-derived prostaglandin E2 (PGE2) stimulates NB growth and alters immune responses via a variety of mechanisms [73,74], including the induction of M2 macrophage polarization. The inhibition of PGE2 in TH-MYCN transgenic mice reprogrammed macrophages to M1 phenotype and reduced NB growth, angiogenesis, and CAFs infiltration [75].
Interactions between different cellular types occurring in the NB TME have been represented in Figure 1, while innovative therapeutic approaches in the pre-clinical scenario targeting the monocyte/macrophage compartment have been summarized in Table 1.
Macrophages and Natural Killer Cells Crosstalk
As discussed above, strategies aimed at reprogramming TAMs represent a promising approach. This is also due to the positive effect of M1 polarized macrophages on NK cell functions, as demonstrated by studies in mouse models and humans. In particular, TLR agonists, such as LPS or BCG, engage M2 and TAMs, inducing their polarization toward M1 that, in turn, activate human NK cells, as demonstrated by the increased cytotoxic function and IFN-γ release [23,27]. Most of the effects require NK-to-macrophages contacts. In this context, the interactions between DNAM-1 and 2B4, on NK, and their ligands on macrophages play a fundamental role. sIL-18 released during M1 polarization provides a significant contribution to NK cell activation that was compromised by mAbs blocking either the cytokine or the specific receptor. Interestingly, treatment with monensin, which hampers intracellular protein transport, indicates that the sIL-18 released could derive by shedding of the membrane form. mIL-18, expressed on the cell surface of M0, M2, and TAM [23,24], is lost upon TLR activation, a phenomenon that is paralleled by the detection of the soluble form in the supernatant and NK cell activation. The mIL-18 expression is induced by M-CSF in a subpopulation (30-40%) of macrophages differentiating from both CD16neg and CD16pos monocytes; while it is undetectable in monocytes, GM-CSFtreated monocytes, and monocyte-derived DC. mIL-18 expression is significantly reduced by the treatment with the caspase-1 inhibitor suggesting the requirement of an assembled inflammasome for IL-18 surface expression ( Figure 2). Interestingly, high percentages (up to 90%) of macrophages present in the peritoneal fluid of ovarian cancer patients expressed mIL-18, suggesting a possible role in TME [23].
contacts. In this context, the interactions between DNAM-1 and 2B4, on NK, and their ligands on macrophages play a fundamental role. sIL-18 released during M1 polarization provides a significant contribution to NK cell activation that was compromised by mAbs blocking either the cytokine or the specific receptor. Interestingly, treatment with monensin, which hampers intracellular protein transport, indicates that the sIL-18 released could derive by shedding of the membrane form. mIL-18, expressed on the cell surface of M0, M2, and TAM [23,24], is lost upon TLR activation, a phenomenon that is paralleled by the detection of the soluble form in the supernatant and NK cell activation. The mIL-18 expression is induced by M-CSF in a subpopulation (30-40%) of macrophages differentiating from both CD16neg and CD16pos monocytes; while it is undetectable in monocytes, GM-CSF-treated monocytes, and monocyte-derived DC. mIL-18 expression is significantly reduced by the treatment with the caspase-1 inhibitor suggesting the requirement of an assembled inflammasome for IL-18 surface expression (Figure 2). Interestingly, high percentages (up to 90%) of macrophages present in the peritoneal fluid of ovarian cancer patients expressed mIL-18, suggesting a possible role in TME [23]. In addition to TAMs reprogramming, the activation of NK cells may reduce in vivo the number of TAMs. In vitro experiments showed that properly activated NK cells, isolated from PBMC and peritoneal fluid of ovarian cancer patients, efficiently killed autologous TAMs, which were characterized by low, "non-protective" levels of MHC class I molecules [23]. Mattiola I. and collaborators described another macrophage/NK functional interaction potentially relevant for future therapeutic approaches [28]. This involves the membrane-spanning four domains A4A (MS4A4A) molecule whose expression is detected in CD163pos TAMs. MS4A4A colocalizes with dectin-1 in lipid rafts and is crucial to support optimal Syk phosphorylation and dectin-1 functions such as the production of inflammatory cytokines and reactive oxygen species. Importantly, dectin-1 induces on macrophages the expression of IFN regulatory factor 3-dependent NK-activating molecule (INAM, also known as Fam26), promoting NK cell activation with increased tumor cell killing.
Other strategies potentiating the NK cell function could involve methods blocking the immunosuppressive loops occurring in TME, particularly those involving the monocyte and macrophages compartment. As already discussed above, IL-6, released by mononuclear phagocytes upon NB conditioning, and TGF-β1, produced by both immune and tumor cells, inhibit the IL-2-mediated activation of NK cells, through the activation of the STAT3 and SMAD2/3 pathways and suppression of IFN-γ, granzymes, and perforin release. This is in line with the well-documented regulatory role of TGF-β in NK cell activation that emerged from several studies [32,76] supporting the development in NB of preclinical [77] and clinical studies with combining immunotherapies including the block of the TGFβ activity. Importantly, in NK cells, TGF-β also decreased the expression of activating receptors involved in NB recognition and modified the chemokine receptor repertoire, likely hampering the NK cell recruitment at the tumor sites [78]. This observation could have in vivo a profound pathophysiological impact.
The use of immune-modulating drugs was also revealed to be effective in restoring NK cell activity. Lenalidomide, which is known to induce in T cells the secretion of IL-2, IFN-γ, and TNF-α, showed promising results in several pre-clinical cancer models in combination with mAbs inducing in NK cells antibody-dependent cytotoxicity (ADCC) (e.g., anti-CD20 in lymphoma and chronic lymphocytic leukemia) [79][80][81] and in clinical trials both in adult and children, with an increased number of cytotoxic NK cells [82,83]. Moreover, in in vitro and in NOD/SCID mouse models, lenalidomide blocked the adverse effects of both IL-6 and TGF-β1, adjuvating the anti-tumor effect of anti-GD2 immunotherapy [84]. Along this line, the combination of histone deacetylase inhibitors (HDACi, Vorinostat) and anti-GD2 immunotherapy is also presently being investigated with encouraging results. In an aggressive orthotopic mouse model, the combined approach increased NB cell death and shaped tumor and stromal cell phenotype and composition. In particular, tumor cells surviving the drug treatment increased the expression of GD2, and TME was characterized by a high number of macrophages, expressing high amounts of MHC class II and FcRs, and a reduced quantity of myeloid-derived suppressor cells (MDSC). Collectively, these data provide a rationale for the clinical testing of anti-GD2 mAbs and Vorinostat combining therapy in NB patients [85,86].
Strategies boosting NK cell cytotoxicity and reducing the pro-tumoral effects of macrophages and other suppressor cells may represent promising adjuvants potentiating standard immunotherapy. Along this line, in a recently published study, the anti-GD2 mAb hu14.18 has been linked to IL-15 or IL-21 immunostimulatory cytokines. In immunocompetent mice engrafted with syngeneic NB, this approach enhanced NK cell-mediated ADCC against NB; it also increased in TME the number of CD8pos T cells and M1-polarized TAMs, while decreasing that of regulatory T cells and MDSC [87]. Current immunotherapy mad be also influenced by tumor-derived small extracellular vesicles highlighted the role of tumor-derived small extracellular vesicles (sEV). Liu et al. highlighted the role of sEVs as crucial mediators regulating responses to immunotherapy demonstrating that NB-derived sEV attenuated the in vivo effectiveness of the anti-GD2 mAb dinutuximab (Qarziba) and promoted an immunosuppressive TME rich in TAMs and poor in NK cells. NB-sEVs were also able to block anti-GD2-mediated NK cell ADCC in vitro and splenic NK cell maturation in vivo. When sEVs secretion was pharmacologically reduced using tipifarnib, an FDA-approved farnesyltransferase inhibitor, a significant improvement in the dinutuximab efficacy was observed with reduced tumor growth and immunosuppressive environment [88]. Another possibility to increase the effectiveness of the anti-GD2 immunotherapy could be contrasting the detrimental effects of MSCs. In this context, MSCs and monocytes promoted NB growth and negatively affected ADCC mediated by dinutuximab-activated NK cells, both in vitro and in NSG mice, and using NB cell lines and PDX. This detrimental effect was efficiently antagonized by anti-CD105 antibodies that depleted MSCs, endothelial cells, and macrophages from the TME [89].
Conclusions and Future Directions
HR-NB represents a worldwide emergency due to the high failure rate of those patients who do not respond to the current standard therapy. It is commonly recognized that non-malignant cells, residing or recruited within the tumor site, are fundamental for the development and growth of tumors such as NB. The TME, which includes immune cells, also supports cancer cells in evading the anti-tumor activity of the immune system. As discussed above, a crucial role is played by the myeloid compartment. In particular, macrophages assume an anti-inflammatory M2-like phenotype, which promotes tumor progression through a reciprocal crosstalk. This often correlates with a worse prognosis in HR-NB patients. Several studies investigated a variety of approaches possibly contrasting these tumor-promoting effects in different cancers including NB; they comprise the reprogramming of macrophage polarization toward M1, enhancement of mAb-dependent phagocytosis, and reinforcement of the NK-mediated cytotoxicity by the standard clinically approved anti-GD2 mAbs used alone or in the combination with other therapeutics [90].
The understanding of the heterogeneity of TAMs within the TME still remains a challenge. It is widely accepted that the original M1/M2 dichotomy in macrophage polarization is an oversimplification and new techniques will contribute to the puzzle solving. For example, the proteomic analysis could identify molecules differentially expressed in the various macrophage subsets; two of these molecules, mIL18 and MS4A4A, have been identified, whose role in macrophage heterogeneity needs to be clarified. It will be relevant to understand the prognostic value of the macrophage subpopulations and correlate their phenotypic/functional properties with anti-tumor responses and, in particular, the capability of modulating NK cell activity. In this regard, it will be also crucial to better investigate the phenotypic and functional heterogeneity of NK cells infiltrating NB to design novel and more effective therapeutic approaches simultaneously enhancing the anti-tumoral activity of NK cells and macrophages as well as their reciprocal crosstalk. Decisive is adding information on the NK [58,91,92] and macrophage landscape in tissues, particularly in the BM, the most frequent site of NB metastasis and relapses. The strategies aimed at potentiating macrophage/NK interactions should also consider the possible modulation of molecules negatively regulating their function. In this context, the currently exploited therapeutic approach can result in undesired side effects, such as the upregulation of immune checkpoints [93][94][95][96], observations that are guiding the choice of promising combination therapeutic approaches.
Many efforts are also dedicated to find alternative effective therapeutic approaches with less toxicity. For example, O-acetylated GD2 (OAcGD2) is a promising novel tumorassociated molecule that is not expressed by peripheral nerves, being targetable with reduced painful side effects. In a pre-clinical setting, the anti-OAcGD2 mAb activated the immune system and increased the macrophage infiltration/function within the TME. However, the treatment efficacy was hampered by the upregulation on NB cells of CD47 [95], which interacts with the SIRPα receptor on macrophages, limiting phagocytosis [97][98][99]. This further highlights the need to increasingly evaluate the use of combined therapies.
Various mechanisms occur within the TME favoring cancer progression, which are often mediated by stromal and immune cells. In this context, even if the T cell-mediated surveillance could be poorly relevant in NB lacking MHC-I expression, T cells represent essential effectors in cancer immunotherapy and are subjected to modulation by specific pathways arising within the TME. For example, human cancers characterized by a poor T cell infiltration showed strong activation of the WNT/β-catenin pathway [100][101][102][103][104]. This pathway is involved in T cell exclusion, as well as in tumor progression, invasion, and metastasis. In mouse models of ovarian cancer, the inhibition of this pathway decreased tumor progression, enhanced the survival, and increased the number of CD8pos T cells within the TME [105,106]. In NB, WNT/β-catenin plays a pivotal role in cellular proliferation and apoptosis as well as in the embryonic development, with implications in NB onset, progression, and relapse. Its activation also enhances MYCN amplification and favors chemoresistance [107,108]. Wang et al. demonstrated that MYCN knockdown in NB cell lines remarkably reduced cell viability, accelerated apoptosis, and blocked WNT/βcatenin signaling [109]. Several factors seem to interact with this pathway inducing in NB cells a malignant phenotype, cancer stemness, or epithelial-to-mesenchymal transition (EMT). These include the nucleotide binding oncotarget BORIS [110], cell surface proteoglycan Glypican-2 (GPC2) [111], and transmembrane protein human tripartite motif 59 (TRIM59) [112]. WNT/β-catenin was shown also to regulate the CAFs activity [113] and macrophage interactions with tumors. In hepatic tumors, reciprocal influence between cancer cells and phagocytes has been demonstrated; TNF-α produced by TAMs induced EMT and stemness in liver tumor cells [114], the latter in turn promoted M2 macrophage polarization via the Wnt/β-catenin signaling [115]. It was also reported that macrophagederived soluble factors activated the WNT signaling pathway in colorectal cancer [116]. For instance, tumor cells stimulated macrophages to release IL-1β, which enhanced the levels of β-catenin, resulting in higher expression of WNT target genes in cancer cells [102,103,116]. Furthermore, macrophage-induced IL-6 favored the migration and invasion of colon cancer cell via WNT/β-catenin in a STAT3/ERK-dependent manner [117]. As recently investigated in mice, WNT/β-catenin signaling blockade might be used in combination with therapeutic strategies, limiting the expression of inhibitory ligands in cancer cells such as CD47 [101] or blocking PD-1/PD-Ls interactions [118].
Novel therapeutic approaches need to be tested in highly predictive preclinical platforms, a crucial step when investigating new drugs. However, the currently available tools are only partially reliable and present limitations that could explain the high rate of failure in translating novel approaches into clinics. As mentioned above, most studies have been conducted using human NB cell lines or various mouse models. Currently used long-term NB cell lines have been extensively characterized. However, they could develop genetic alterations and undergo clonal selection due to their prolonged expansion in 2D cultures, acquiring phenotypic and functional properties far from the original tumor. The in vivo models have been largely used in cancer research. Different kinds of mouse models have also been exploited to understand the role of the TME in NB immune resistance, with a particular focus on TAMs. Syngeneic, orthotopic, and transgenic NB models, as well as PDXs, have been largely used to study disease progression and validate innovative therapeutic approaches. However, researchers are aware that even the more complex mouse models have important limitations: the great variety of incidence of metastases among different models, which is often dramatically poor; the absence in NOD-SCID and NSG mouse of immune pressure due to the lack of T, B, and NK cells; the presence in commonly used immunodeficient mice models of an incomplete TME containing cells (macrophages, fibroblasts) of mouse origin. In this context, it is unlikely that PDXs could maintain their original characteristics and, surely, could not follow the tumor evolution occurring in the patient.
The host xenogeneic stroma replacing the human counterpart in implanted tumor specimens hampers the possibility to analyze the interactions occurring among NB and stromal and immune cells. Therefore, the scientific community is working hard to realize more complex and reliable tools. These include three-dimensional (3D) cell cultures or organ-on-chip in vitro models allowing cells to grow in a spatial organization more similar to in vivo tissues and to experience dynamic stimuli, e.g., mechanical stimulations, and fluid flow, occurring in different human organs. Relevant 3D systems have been developed in recent years and promising results have been obtained that will be useful to better address the complex functional crosstalk occurring at the tumor site [94,[119][120][121][122][123][124][125][126]. In addition, the development of "humanized" mice generated by the transplantation of human hematopoietic cells into immune-compromised mice could help in studying the in vivo human immune response in tumors and during inflammation. Indeed, the socalled MISTRG mice robustly develop multiple immune cell types, including macrophages, neutrophils, dendritic cells (DC), and NK cells [127,128].
Although important steps forward in the discovery of novel cures have been made, HR-NB still represents a challenge. Given the high heterogeneity of this tumor, strategies may lead to good responses in specific subgroups of patients while being ineffective in others [129]. Therefore, to personalize the diagnostic and therapeutic approach, it is mandatory to better characterize the TME, taking into consideration the tumor, stroma, and immune compartment as well as their molecular and functional crosstalk.
Author Contributions: Conceptualization, C.V., C.B. and R.C.; writing-original draft preparation, C.V.; writing-review and editing, R.C. and C.B. All authors have read and agreed to the published version of the manuscript.
Funding:
The authors don't declare a specific grant for this article from any funding agency in the public, commercial or not-for-profit sectors.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable. | 7,544.6 | 2023-03-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
Antihelical Edge States in Two-dimensional Photonic Topological Metals
Topological edge states are the core of topological photonics. Here we introduce the antihelical edge states of time-reversal symmetric topological metals and propose a photonic realization in an anisotropic square lattice of coupled ring resonators, where the clockwise and counterclockwise modes play the role of pseudospins. The antihelical edge states robustly propagate across the corners toward the diagonal of the square lattice: The same (opposite) pseudospins copropagate in the same (opposite) direction on the parallel lattice boundaries; the different pseudospins separate and converge at the opposite corners. The antihelical edge states in the topological metallic phase alter to the helical edge states in the topological insulating phase under a metal-insulator phase transition. The antihelical edge states provide a unique manner of topologically-protected robust light transport applicable for topological purification. Our findings create new opportunities for topological photonics and metamaterials.
Introduction.-The fundamental concepts in condensed matter physics introduced to topological photonics inspire the rapid development of photonic topological states [1][2][3]. The chiral edge states of topological insulators unidirectionally propagate along boundaries and require the breaking of time-reversal symmetry [4][5][6][7][8][9]. The degenerate clockwise and counterclockwise modes of ring resonators experience opposite artificial magnetic fields and provide a pseudospin degree of freedom [6]. The helical edge states of time-reversal symmetric topological insulators with different pseudospins unidirectionally propagate in opposite directions. The edge states of topological metals are topologically-protected in the gapless phase. Antichiral edge states have been proposed and implemented on zigzag edges by modifying the next-nearestneighbor hopping phase of the Haldane model [10][11][12][13]. Antichiral edge states propagate along the parallel lattice boundaries in the same direction. Recent progress in antichiral edge states has led to the development of a solution for creating topological metals. Topological metals are difficult to create because of the challenge of separating gapless bands and breaking time-reversal symmetry in photonics. Thus, is it possible to have timereversal symmetric topological metals with topologicallyprotected edge states and robust propagation?
Here, we introduce the antihelical edge states [ Fig. 1(a)] of time-reversal symmetric topological metals and a photonic realization is proposed in the twodimensional anisotropic square lattice of coupled ring resonators. The pseudospins are time-reversal symmetric counterparts, and the introduction of pseudospins addresses the difficulty of breaking time-reversal symmetry in photonics. The symmetric component of next-nearestneighbor couplings creates a nontrivial topology, and the anti-symmetric component of next-nearest-neighbor couplings separates energy bands and supports the antihelical edge states on both horizontal and vertical boundaries. Antihelical edge states robustly copropagate along corners with opposite pseudospins that separate and converge at the opposite corners on the diagonal of a lattice.
The antihelical edge states become helical edge states during a metal-insulator phase transition [ Fig. 1(b)]. Photonic topological metals provide a new direction for research on topological photonics.
Anisotropic square lattice of ring resonators.- Figure 1(c) presents a two-dimensional anisotropic square lattice of coupled ring resonators (see Supplementary materials A). The clockwise mode (pseudospin-up) and counterclockwise mode (pseudospin-down) are timereversal counterparts, and they experience the opposite Peierls phases in the horizontal couplings between the nearest-neighbor resonators [6]. The lattice for the pseudospin-up (pseudospin-down) is presented in Fig. 1(d) [ Fig. 1(e)]. The horizontal couplings ±iJ 1 indicated in green and ±iJ 2 indicated in orange are tunneling-direction-dependent, break the time-reversal symmetry of the Hamiltonian for each individual pseudospin, and separately affect the edge states localized on the upper and lower boundaries. The vertical coupling between the nearest-neighbor resonators is κ indicated in black. The cross couplings between the next-nearestneighbor resonators are χ 1 = χ + δ, χ 2 = χ − δ [14]. The symmetric component χ opens a band gap [ Fig. 1(f)] and creates a nontrivial topology, whereas the anti-symmetric component δ affects the band structure, separates the bands in both the x and y directions, and creates antihelical edge states that are localized on the left and right boundaries in the topological metallic phase.
Topological phases.-The Bloch Hamiltonian for the pseudospin-up is h ↑ (k) = d 0 (k) σ 0 + d (k) · σ, where σ 0 is the identical matrix and σ = (σ x , σ y , σ z ) is the Pauli matrix. The first term adjusts the band energy and the second term determines the band topology. We have d 0 (k) = (J 1 + J 2 ) sin k x and d (k) = r 1 (k x ) − r 2 (k y ) with r 1 (k x ) = (κ + 2χ cos k x , −2δ sin k x , (J 1 − J 2 ) sin k x ), r 2 (k y ) = (−κ cos k y , κ sin k y , 0). h ↑ (k) respects the particle-hole symmetry σ z h T ↑ (k) σ −1 z = −h ↑ (−k). The topological phase belongs to the class D and it is characterized by a Z topological invariant, that is, the spin-Chern number Antihelical 0 Fig. 1(g)]. The band topology is captured by the effective magnetic field d (k). After d (k) is substituted with r 1 (k x ) − r 2 (k y ) in C ↑ , the spin-Chern number becomes the definition of a linking number of the two independent periodic vectors r 1 (k x ) and r 2 (k y ) (see Supplementary materials B). r 1 (k x ) is an ellipse that passes through (κ − 2χ, 0, 0) and (κ + 2χ, 0, 0). r 2 (k y ) is a circle centered at the origin with a fixed radius |κ| in the z = 0 plane. Thus, the two closed curves r 1 (k x ) and r 2 (k y ) are linked when 0 < |χ| < |κ|.
In Fig. 2(a), the red circle r 2 (k y ) that is fixed on the z = 0 plane is always clockwise; the rotation direction of the blue ellipse r 1 (k x ) reverses along the topological phase transition plane J 1 = J 2 , where the band gap vanishes and r 1 (k x ) becomes coplanar to r 2 (k y ). The blue ellipse r 1 (k x ) circles counterclockwise around the red circle r 2 (k y ) once in the region J 1 > J 2 (C ↑ = +1); and the blue ellipse r 1 (k x ) circles clockwise around the red circle r 2 (k y ) once in the region J 1 < J 2 (C ↑ = −1). The righthand (left-hand) rule identifies C ↑ = +1 (C ↑ = −1): orient the thumb pointing along the arrow of the red circle, curl the rest of the fingers pointing along the arrow of the blue ellipse. Notably, in the phase diagram [ Fig. 2(b)], two topological phases with opposite spin-Chern numbers have an identical number of edge states.
Antihelical edge states.-Topological edge states appear in the situation that the next-nearest-neighbor coupling χ is weak than the nearest-neighbor coupling κ. The nontrivial topology supports helical and antihelical edge states [ Fig. 1(h)] that are different from their band structures and their ways of propagation. The antihelical (helical) edge states on the parallel boundaries propagate in the same (opposite) direction, being robust against disorder because of the topological protection and the spatial separation between the edge and bulk states [10].
In Fig. 1(b), the regions J 1 J 2 < 0 are the topological insulating phase; the helical edge states with different pseudospins propagate clockwise or counterclockwise along the lattice boundaries. The topological phase undergoes a metal-insulator phase transition at J 1 J 2 = 0. The regions J 1 J 2 > 0 are the topological metallic phase [15]; the edge states excited by the pseudospin-up propagate toward the left (or right) on the two parallel boundaries, but the edge states excited by the pseudospindown propagate toward the right (or left) boundaries for J 1 , J 2 > 0 (or J 1 , J 2 < 0). These edge states associated with the two pseudospins constitute the antihelical edge states, and the required counterpropagating modes for the antihelical edge states are the bulk states. The bands touch at the topological phase transition, where topological edge states are degenerate and exhibit identical dispersion and propagation velocity.
The square lattice exhibits nontrivial topology throughout the parameter space J 1 -J 2 -δ [ Fig. 2(a)]. The various types of edge states, as distinguished by their propagation, are presented in Fig. 2(b). The helical edge states are in J 1 J 2 < 0 indicated in cyan and the antihelical edge states are in the other regions J 1 J 2 > 0. The antihelical edge states differ from the helical edge states in term of their copropagation on the parallel boundaries [6,10]. The antihelical edge state excitations with different pseudospins unidirectionally propagate in the opposite directions along the lattice boundaries and enable the separation of the robustly copropagated chiral mode.
(b) , , , , The helical edge states in the topological insulating phase always appear in both the horizontal and vertical directions; however, the antihelical edge states in the topological metallic phase usually appear only in one direction because of the inseparable band energy in the other direction. Here, the antihelical edge states are simultaneously present in both the horizontal and vertical directions for δ 2 > J 1 J 2 > 0, where the energy bands are separable in both directions. The antihelical edge states are only present in the horizontal directions for δ 2 < J 1 J 2 . In Fig. 2(b), the surfaces J 1 J 2 = 0 and δ 2 = J 1 J 2 divide the parameter space J 1 -J 2 -δ into three phases with eight regions. The eight types of edge state propagation for the pseudospin-up excitations are presented in Fig. 2(c). Fig. 2(c), the helical edge states propagate counterclockwise (clockwise) along the lattice boundaries for C ↑ = +1 (C ↑ = −1) in the cyan region J 1 > 0, J 2 < 0 (J 1 < 0, J 2 > 0). The antihelical edge states along the horizontal direction in the red region J 1 , J 2 > 0 (J 1 , J 2 < 0) of δ 2 < J 1 J 2 ; the edge state excitation propagates leftward (rightward), scatters into the bulk at the corners, and goes backward. The antihelical edge states are present along both the horizontal and vertical directions when 0 < J 1 J 2 < δ 2 ; the edge state excitation propagates half a closed-loop along the lattice boundaries, scatters into the bulk at the corners, and goes backward along the diagonal direction with the support of the scattering states. The four cases are respectively distributed in the orange regions (J 1 , J 2 , δ > 0), (J 1 , J 2 < 0, δ > 0), Figures 2(d)-2(g) present the four cases of the band structures for the pseudospin-up presented on the lefthalf of Fig. 2(c). The propagation of antihelical edge states in Figs. 2(f) and 2(g) for the pseudospin-up are simulated in Figs. 2(h) and 2(i), and the corresponding propagation for the pseudospin-down are simulated in Figs. 2(j) and 2(k). The separation and convergence of different pseudospins toward the opposite corners of the square lattice are observed.
In conclusion, we have introduced the antihelical edge states in the time-reversal symmetric topological metallic phase, and proposed the photonic realization in coupled ring resonators. The antihelical edge states with different pseudospins propagate in the opposite directions along the corners toward the diagonal of the square lattice. Unconventional copropagation is applicable for the robust spin purification. The antihelical edge states become the helical edge states after undergoing a metal-insulator phase transition. The wide range of reconfigurable robust light propagation enables the flexible control of light flow at the edges. Our findings provide insight into the photonic topological metals and are applicable for the acoustic lattices and other two-dimensional metamaterials. Concepts pertaining to topological metals and antihelical edge states are inspiring in the condensed matters and topological materials.
SUPPLEMENTARY MATERIALS A. Experimental realization of the square lattice
In this section, the experimental realization of the square lattice in the two-dimensional (2D) coupled resonator array is discussed. Figure 3(a) is the schematic of the 2D coupled resonator array. The ring resonators are the primary resonators for the sites of the square lattice. The resonators in green stand for the sites A and the resonators in orange stand for the sites B. The linking resonators mediate the photons tunneling among the primary resonators and induce the effective couplings between the nearest-neighbor primary resonators and the next-nearest-neighbor primary resonators. The linking resonators and the primary resonators are coupled through their evanescent fields and the coupling strengths depend on the positions between the linking resonators and the primary resonators. For example, the couplings for the CW mode (pseudospin-up) of the primary resonators are mediated by the CCW/CW mode of the linking resonators.
The vertical coupling κ is reciprocal, the path lengths for the photons tunneling upward and downward between the neighbor ring resonators are equal. The coupling strength is approximately characterized by κ = κ 2 l /∆ l [16], where κ l and ∆ l = ω c − ω link are the hopping and detuning between the primary resonators and the linking resonators; and the frequency of the primary (linking) resonators is ω c (ω link ).
The horizontal couplings iJ 1 and iJ 2 are nonreciprocal, carrying the Peierls phase e iπ/2 in the couplings. The Peierls phase is implemented through the optical path length difference for the photons tunneling rightward and leftward between the neighbor ring resonators. The CW mode photons tunneling rightward from the lower half of the linking resonator experience an additional path length l = λ/2 than the CW mode photons tunneling leftward from the upper half of the linking resonator [6], where the wave length is λ. Therefore, the photons tunneling rightward acquire an extra phase factor e iφ = e iπl/λ = i and the photons tunneling leftward acquire an extra phase factor e −iφ = e −iπl/λ = −i in the front of the horizontal couplings iJ 1 and iJ 2 . The cross coupling χ 1 (χ 2 ) between the primary resonators on the diagonal of the square plaquette is directly (indirectly) mediated by the CCW (CW) mode of the linking resonator along the main diagonal of the square plaquette as indicated by the blue (red) arrows. The main diagonal of the square plaquette refers to the line along the upper-left and the lower-right corners of the square plaquette. The cross couplings χ 1 and χ 2 are independently mediated through the linking resonators along the diagonals of the square plaquette [14].
The antihelical edge states can appear in the topological metal phase of the square lattice at the specific case χ 1 χ 2 = 0. Thus, the single cross coupling case is adequate for the observation of antihelical edge states in experiments. A concrete system is χ = δ. In this situation, one of the two cross couplings χ 2 = 0 vanishes as schematically illustrated in Fig. 3(b). This simplifies the setup in the experiment and facilitates the realization of antihelical edge states. The robust propagation with strong χ 2 can be observed in the situation χ 1 = 0 through switching the orientation of the linking resonator to alter the connection between the nearest-neighbor resonators on the diagonals of the square plaquette. In this manner, the proposed square lattice can alter between two simple situations χ 2 = 0 and χ 1 = 0.
For example, we consider a simple configuration in experiment that the resonators are neatly arranged in both horizonal and vertical directions of the 2D square lattice. Consequently, the horizontal couplings have equal strengths J 1 = J 2 . A candidate platform for the possible realization of the photonic topological metal and the antihelical edge states can be chosen as follows. In a situation κ = 1, χ = δ = 1/2, J 1 = J 2 = 1/4, the two cross couplings are χ 1 = 1, χ 2 = 0 and the cross coupling strength equals to the vertical coupling strength χ 1 = κ. The round trip length of the resonator is about 70 µm. The resonator supports a single mode transverse electric field at the telecom wavelength 1.55 µm [17]. The coupling strengths evanescently decay as the width of the air gap between the neighbor resonators; and the coupling strengths approximately decay from ∼ 30 GHz to ∼ 5 GHz for the air gap width increasing from 150 nm to 250 nm [18]. The topologically robust transport can be experimentally implemented in a 20 × 20 size square lattice of coupled resonators at the vertical coupling strength κ ∼ 20 GHz, the cross coupling strength χ 1 ∼ 20 GHz, χ 2 = 0, and the horizontal coupling strengths J 1 ∼ 5 GHz, J 2 ∼ 5 GHz. In this situation, the 2D square lattice has a nontrivial topology because of χ = (χ 1 +χ 2 )/2 < κ; and the system is a topological metal hosting the antihelical edge states in both horizontal and vertical directions because of δ 2 > J 1 J 2 > 0.
B. Spin-Chern number characterized by the linking number of the effective magnetic field
The Chern number characterizes the band topology of the 2D topological phase and the Chern number is proportional to the Hall conductance. In a 2D time-reversal invariant system, although the total Chern number is zero, the spin-Chern numbers are quantized. The spin-Chern number of the lower band for the pseudospin-up (CW) mode is defined by where |Ψ − ( k) is the eigenstate of the lower band [19]. The spin-Chern number is an integral of the Berry curvature in the entire Brillouin zone (BZ). Figure 4 provides the numerical results of the Berry curvature in the BZ for the topological phases C ↑ = +1 and C ↑ = −1 [20].
In the following, we explain the relation between the spin-Chern number and the linking number. The spin-Chern number for the two-band Hamiltonian h ↑ (k) = d 0 σ 0 + d(k) · σ for the pseudospin-up is expressed in the form of [21] The effective magnetic field in h ↑ (k) is d(k) = r 1 (k x ) − r 2 (k y ) with r 1 (k x ) = (κ + 2χ cos k x , −2δ sin k x , (J 1 − J 2 ) sin k x ) and r 2 (k y ) = (−κ cos k y , κ sin k y , 0); and d 0 = (J 1 + J 2 ) sin k x . Substituting d(k) = r 1 (k x ) − r 2 (k y ) into equation (2), the spin-Chern number is rewritten in the form of (3) In geometry, equation (3) is the definition of the linking number of two independent closed curves r 1 (k x ) and r 2 (k y ). The linking number is a topological invariant that characterizes the number of times that r 1 (k x ) and r 2 (k y ) wrap around each other [22]. Thus, the spin-Chern number is equivalent to the linking number of two closed curves r 1 (k x ) and r 2 (k y ) of the effective magnetic field d (k) for the pseudospin-up Hamiltonian h ↑ (k). The spin-Chern numbers shown in Fig. 4 are in accords with the representative links shown in Fig. 2(a) The spin-Chern number obtained from the linking of the effective magnetic field d x , d y , − d z yields C ↓ = −C ↑ . This is straightforward because that the curve r 2 (k y ) and two components of the curve r 1 (k x ) in the x and y directions are unchanged, and the component of the curve r 1 (k x ) in the z direction changes into the opposite.
C. Robustness of the antihelical edge states
In the topological phases of the square lattice, both the helical edge states and the antihelical edge states are topologically-protected and robust to disorder. In comparison, the helical edge states in their propagations are even more stable to the disorder because of the band gap protection. In this section, we provide more details for the robustness of antihelical edge states and their unidirectional propagations in the presence of coupling disorder; notably, the couplings with random disorder satisfy the particle-hole symmetry, which also exists in the square lattice for the pseudospin-up (pseudospin-down). The numerical simulations of the band structures and the robust propagations of antihelical edge states are shown for the imperfect square lattice. The localized edge states and the extended bulk states are spatially separated. Thus, the imperfection in the bulk almost does not affect the edge states and their robust propagations.
The distribution and the localization of the edge states on the boundaries are insensitive to the lattice size; however, the distribution and the extended feature of the bulk states closely depend on the lattice size. The larger lattice size leads to a better spatial separation between the edge states and the bulk states. The influence on the edge states for the disorder in the bulk of the square lattice is slight in comparison with the influence on the edge states for the disorder on the boundaries of the square lattice. This point is elucidated in Fig. 5, where we depict the energy bands for the disorder in the bulk and on the boundaries, respectively; and the random disorder is chosen on all the couplings either inside or outside the central one-half area of the square lattice as schematically illustrated in the upper panels of In the numerical simulations, the initial excitation has the Gaussian profile in the form of |Ψ(t 0 ) = Ω −1/2 kε e −(kǫ−k0) 2 /(2α 2 ) e −iNc(kǫ−k0) |ψ , (4) where |ψ is the edge mode with the momentum k ǫ = k x or k y , and k 0 = π. N c is the center of the wave packet and α controls the width of the Gaussian profile. aries. The dynamics in the numerical simulations are close to the dynamics in the absence of disorder exhibited in the main text Fig. 2; this is a consequence of the limited lattice size 40 × 40 in the numerical simulations. The robust propagation against disorder is more excellent in the numerical simulations performed in the larger size system as shown in Fig. 7.
In the topological metal phase possessing the antihelical edge states in both the horizontal and vertical directions, there are two types of right-angled (90 • degrees) corners in the square lattice. One type of corners help the right-angled turning of the edge mode excitations when they propagate along the boundaries; and the other type of corners help the edge mode excitations entering the bulk of the square lattice. Each corner is associated with a cross coupling; and the two types of corners are distinguished from their associated cross couplings χ + δ and χ − δ. The formation of the right-angled corners associated with the weak cross coupling enables the 90 • degrees turning across the corners for the edge mode excitations propagating along the lattice boundaries. The formation of the right-angled corners associated with the strong cross coupling enables the edge mode excitations entering the bulk of the square lattice. | 5,563.6 | 2023-01-01T00:00:00.000 | [
"Physics"
] |
Prevention of acute rejection after rescue with Belatacept by association of low-dose Tacrolimus maintenance in medically complex kidney transplant recipients with early or late graft dysfunction
Background Increased acute rejection risk in rescue protocols with Belatacept may limit its use particularly in medically complex patients where preexisting increased risk of rejection couples with CNI toxicity. Methods Retrospective analysis was performed in 19 KTs shifted to a Belatacept-based immunosuppression with low-dose Tacrolimus (2–3 ng/mL) after evidence of allograft disfunction, including patients with primary non-function (PNF), chronic-active antibody-mediated rejection (cAMR), history of previous KTs and/or other concomitant transplants (liver, pancreas). Evaluation of CD28+ CD4+ effector memory T cell (TEM) before conversion was performed in 10/19. Results Kidney function significantly improved (median eGFR 16.5 ml/min/1.73m2 before vs 25 ml/min after; p = 0.001) at a median time after conversion of 12.5 months (9.1–17.8). Overall graft and patient survival were 89.5% and 100% respectively. Definitive weaning from dialysis in 5/5 KTs with PNF was observed, whereas 7/8 patients lost their graft within first year in a control group. eGFR significantly ameliorated in re-trasplants (p = 0.001) and stabilized in KTs with other organ transplants or cAMR. No acute rejection episodes occurred, despite the significant risk suggested by high frequency of CD28+ CD4+ TEM in most patients. Opportunistic infections were limited and most common in early vs late-converted. Conclusions Rescue association of Belatacept with low-dose Tacrolimus in medically complex KTs is a feasible option that allows prevention of acute rejection and amelioration of graft function.
Introduction Belatacept, a selective costimulation blocker consisting of soluble CTLA4/IgG fusion protein, prevents T cell CD28 signaling by efficiently binding with its ligands CD80 and CD86 expressed by antigen-expressing cells (APCs) [1][2][3]. A long-term trial has shown an improvement of graft survival in kidney transplanted patients in comparison to cyclosporine [2,4]. Improved graft function was observed also in comparison to Tacrolimus (TAC) maintenance [5].
However, an increased incidence of acute rejection (AR) [2,4,[6][7][8] in patients treated with Belatacept was observed, mainly in free-calcineurin inhibitors (CNI) regimen [9], and raised the concern of its use in patients with moderate or high immunologic risk. AR occurs very early, 82% within three months from conversion [2]. Recently, Adams et al [5] contained the incidence of AR in patients started on Belatacept from the beginning of the transplant by transiently combining TAC to Belatacept. In order to obtain an acceptable rejection rate (about 16%) TAC should be tapered slowly in 9 months after KT [5].
Based on its characteristics, Belatacept is now been mainly adopted as rescue therapy in case of CNI-induced nephrotoxicity or graft function impairment, especially in marginal kidneys recipients [10,11]. Both early and late conversion were explored [10,12,13]. Switch to Belatacept within three months after KT demonstrated better results in terms of estimated glomerular filtration rate (eGFR) increasing [10]. However, also in this setting, AR occurs (8.2% in Retrospective Multicenter European Study [10], 4% and 11.4% in Le Meur et al's study [14] and in Brakemeier et al's study [15] respectively). The AR rate reaches 25% in Perez-Saez et al [16] probably due to the inclusion of patients at high immunological risk, even if these data are not confirmed by Gupta et al [17].
In both ab-initio and rescue protocols the majority of AR are classified as T-cell mediated (TCMR) [18] with good response to steroids; nonetheless, some patients need a second-line treatment with anti-lymphocyte polyclonal antibodies, and a few number also experiences antibody-mediated rejection (AMR) and graft loss [10]. Moreover, also if the episode was successfully treated, all AR-related therapies are associated with increased morbidity and mortality especially due to infectious cause, with higher risk in old and frail subjects [10,19].
In the present study, we analyze our experience with the adoption of Belatacept-based immunosuppression in association with low-dose Tacrolimus (2-3 ng/mL) in a specific population of KTs at high immunological risk with a high medically complex profile (i.e. combined transplants). The rational of this protocol is to combine the Belatacept positive effects with a reduced CNI exposure for minimizing the risk of AR.
Study design
We performed a retrospective analysis, including 19 adult KT recipients. Belatacept was associated to maintenance immunosuppressive therapy between May 2017 and August 2019.
Patients were converted in case of a) early allograft disfunction, intended as primary non function (PNF) (dialysis dependence or creatinine clearance <20 ml/min after three months from KT) or persistent graft disfunction (after the third month and within 9 months post KT) or b) late allograft disfunction [suboptimal kidney function with histological diagnosis of chronic antibody mediated rejection (cAMR) and/or interstitial fibrosis-tubular atrophy (IF-TA)].
Exclusion criteria for Belatacept association were: Epstein Barr virus (EBV) negative serology, pregnancy or breastfeeding, no active contraception for women, acute infections.
All patients were closely monitored for adverse events and severe adverse events, also including serial evaluation of EBV and CMV viral load. CMV prophylaxis was administered post-KT according to donor/recipient serologic status and induction therapy.
The study was performed in adherence with the last version of the Helsinki Declaration and with the Principles of the Declaration of Istanbul on Organ Trafficking and Transplant Tourism. All patients signed an informed consent before switching to Belatacept-based immunosuppressive therapy, including their permission to have data from their medical records used in research. This study is covered by our Ethical Committee (
Betalacept based-immunosuppression protocol
Our immunosuppressive protocol is summarized in Fig 1. Briefly, Belatacept was administered intravenously at a dose of 5 mg/Kg in 30 minutes on day 1, 15, 29, 43 and 57 with subsequent doses scheduled every 28 days thereafter. During the first two weeks following Belatacept initiation TAC dosage was unchanged. On day 15 it was reduced by 40-50% of the initial dose; after the 3th dose of Belatacept TAC was maintained at trough level 3-5 ng/ml and then 2-3 ng/ml.
Mycophenolate mofetil/mycophenolic acid (MMF/MPA) and prednisone (PN) were also maintained in association with Belatacept and low dose TAC unless clinical conditions required discontinuation.
Statistical analysis
Continuous variables were described as median and interquartile range (IQR) according to their non-normal distribution. To compare independent groups we used Mann-Whitney test and to compare related variables we used the Wilcoxon signed-rank test.
Categorical variables were presented as fraction and Pearson's or, for small samples, Fisher's exact test was employed to compare groups. Cumulative survival was analyzed by Kaplan-Meier (KM) curves. Significance level for all tests was set at α<0.05.
All statistical analyses were performed using Spss (IBM Corp. Released 2020. IBM SPSS Statistics for Windows, Version 26.0. Armonk, NY: IBM Corp.), including an additional analysis with a historical cohort of KTs with PNF before Belatacept availability in our center or clinical contraindication to its use (negative EBV serology).
Patients characteristics and causes of association
A total of 19 KT recipients (13 males, 6 females) were included in the study. Baseline characteristics of our population are summarized in Table 1. Table 1. Baseline characteristics of studied population.
One out of 19 received a kidney-pancreas combined transplant; 2 patients had another organ transplant before KT (one liver and one liver-pancreas). Seven out of 19 have an history of previous KT, including 3 third and 1 fifth KT. This fifth KT was performed during the treatment with Belatacept, administered as rescue therapy for the failing fourth graft.
Seven patients experienced rejection before Belatacept association: three were diagnosed as acute T-cell mediated rejection (aTMR), one as acute antibody mediated rejection (aAMR), and three as chronic antibody mediated rejection (cAMR). All cAMR KTs started Tocilizumab (TCZ; 8 mg/kg/monthly) before conversion at a median time of 6.7 months (min-max 2.1-9.6).
HLA-DSAs, mostly anti-class II, were already detected in 4 out of 19 patients before the start of Belatacept.
Reason for Belatacept association was rescue therapy in early or late allograft disfunction (median time after KT 4.2 months, 1.3-7.4). Early association was performed in 15 patients (79%, 11/15 with PNF and 4/15 with persistent renal functional impairment). The remaining 4 KTs were late associations with significant IF-TA on kidney biopsies (2/4 were also treated with TCZ for cAMR). Follow up median time from the association was 12.5 months (9.1-17.8).
Immunosuppressive medications pre-and post-Belatacept were detailed in Table 2.
At transplantation time all patients received induction therapy, consisted of either basiliximab (Simulect; Novartis Pharmaceuticals Corp., East Hanover, NJ) or rabbit anti-thymocyte globulin (rATG; Thymoglobulin; Genzyme, Cambridge, MA) in association with steroids, according to donor (standard or ECD) and recipient characteristics (i.e. immunological risk). Maintenance therapy was composed by TAC, MMF/MPA (18/19) or azathioprine (1/19) and steroids; two out of three patients with other organ transplant already received TAC and MMF/MPA before KT (the remaining one was also treated with steroids). During the observation period after conversion, one patient stopped AZA and 9 MMF/MPA. In our study population, according to Literature data [22,23], we used TCZ in patients (5/ 19) with histological diagnosis of cAMR. In 3 cases, it was associated before and in 2 cases after Belatacept starting. During the follow-up 2 patients suspended TCZ, one because of an episode of severe sepsis from cholangitis secondary to biliary obstruction of the transplanted liver and one for functional deterioration and poor patient compliance.
Eighteen out of 19 patients received a kidney biopsy at a median time before conversion of 3,02 (0.82-6.55) months (Banff scores are summarized in Table 3). A significant estimated glomerular filtration rate (eGFR, CKD-EPI formula) amelioration after belatacept association was observed at the end of f/up (median improvement of 8.5 ml/ min/1.73m 2 ; p = 0.001).
Renal function, graft survival and acute rejection rate
Stratifying the population according to shifting time, optimal results were obtained in early-converted KTs (15/19), both in dialysis (HD) and non-dialysis dependent patients. We Table 2. Immunosuppressive medication before and after conversion. (Table 4 and Fig 3). In these eight patients no stable recovery of renal function was reached at a similar f/up (p<0.001) and 7/8 patients lost definitely their graft. The 10/15 early converted non-HD dependent KTs also showed a significant eGFR improvement (from 17.3 to 23.6 ml/min/1.73m 2 ; p = 0.005).
Additional analysis in high medically complex subgroups
Renal function modifications after association of belatacept was analyzed in specific subgroups of high complex patients, such as those with diagnosis of cAMR concomitantly treated with TCZ (5/19), kidney re-transplants (7/19) or other organ transplants besides KT (3/19) (Fig 5).
Among kidney re-transplants we observed significant amelioration of renal function (12.7 vs 25 ml/min/1.73m 2 , p = 0.028) at the end of follow-up. In one patient bearing a fourth kidney transplant in end-stage renal disease, Belatacept was able to bridge to a pre-emptive fifth transplant that was successfully performed, nine months later, in the same immunosuppressive regimen. The low number of patients with other organ transplants does not allow to really demonstrate an eGFR improvement in this subgroup; additionally, 2/3 were late conversion with both severe renal failure and IF-TA before the shift.
Cardiovascular and metabolic changes
All patients were monitored for systolic and diastolic blood pressure during the observational period. No significant changes in mean systolic and diastolic (vs) blood pressures were observed before conversion and at the end of the follow-up (133±21 and 78±15 mmHg vs 134 ±14 and 76±11 mmHg, respectively). A total of 11 patients (57,9%) received at least one antihypertensive medication at baseline and at last f/up; among them, 3/11 were treated with three or more drugs before conversion vs 4/11 at last evaluation.
Mean cholesterol values were similar during the observation time (167±47 mg/dL at baseline vs 166±49 at the end of the follow-up). Despite not significant, a decrease in triglyceride Table 4
Belatacept-treated PNF (n = 5) Historical cohort PNF (n = 8)
Age values with a combined increase in HDL cholesterol was observed after conversion (168 ±73 mg/dL and 42± 13 vs 133±38 mg/dL and 49±18 mg/dL, respectively). There were no significant differences in the proportion of patients using lipid-lowering drugs. In a total of 16 patients (84%) HbA1c was measured. Compared to baseline, HbA1c levels were slightly lower at the end of the follow-up (39±9 mmol/L vs 38±10). Six patients were treated with subcutaneous insulin at baseline, 4/6 after occurrence of new-onset diabetes after transplantation (NODAT); one had a pancreas transplantation before KT needing insulin therapy since the first transplant, and another one discontinued insulin administration after conversion due to improved glycemic control.
Adverse events
No death or neoplastic complication (including post-transplant lymphoproliferative disease) were recorded during the follow-up. Twelve out of 19 patients experienced viral (including CMV and EBV reactivation) or bacterial infections (incidence 0.062 episode/month of exposure considering a cumulative exposure time of 257 months of Belatacept) ( Table 5).
Infection episodes are more common in early (94% of total events) vs late-converted KTs. The only patient in late rescue group experienced acute cholangitis secondary to biliary obstruction of the transplanted liver with severe sepsis, leading to graft failure.
Hospitalization was needed only in 7/19 patients with significant clinical symptoms, and all of them recovery after appropriate therapy without Belatacept interruption (except the one KT with cerebral toxoplasmosis who stopped the drug). Pneumocystis pneumoniae occurred in one patient, who ended routinely 6 months post-KT prophylaxis before the event.
Discussion
Improvement in both graft function and survival for KTs treated with Belacept vs cyclosporine ab-initio is clearly demonstrates in BENEFIT study [1,2,24,25]. Similar results on large registry data are observed comparing Belatacept to TAC [5]. However, in all available protocols AR incidence is not negligible, raising concerns about drug adoption in real-life settings [17,18,26,27]. This issue may be of great importance in high-complexity KTs with combined evidence of CNI toxicity (i.e. patients with previous history of KTs, high PRA and or combinedorgan transplants): in these patients an approach to minimize CNI exposure is challenging but imperative, considering that re-transplant may be an unfeasible option in most cases. A theorical use could be also hypothesized in children, despite Belatacept is being studied cautiously in this subgroup owing to concern about the risk of EBV-related PTLD [28]. Rescue conversion to Belatacept was shown to have a 20% risk of rejection [2,19]. Most of the observed rejection events were cellular rejection and resolved after steroid pulse; however, few of them need treatment with anti-lymphocyte polyclonal antibodies, few were antibodymediated rejection and some graft losses occurred [18]. In association, even if a rejection event is cured, its treatment may be associated with increased infection morbidity and even mortality, particularly for older patients. An even higher risk of rejection after conversion (25%) has been observed by Perez-Saez et al [16] and has been associated to the higher rate of retransplants and PRA% >30% in their case-series.
In a recent report [29] the avoidance of CNI and steroids in maintenance immunosuppressive regimen with Belatacept, showed a high risk of AR (36%), causing the early interruption of the trial.
Adams et al effectively reduces AR episodes maintaining Belatacept and TAC transiently for 9 months after transplantation, without an increase in recorded infections [5]. The association of CNI with Belatacept in the maintenance immunosuppressive regimen has not been originally conceived in the obvious attempt of overcoming the need of chronic use of CNI. Moreover, in preclinical studies, association of CNI with Belatacept paradoxically lead to increase in rejection rate and reduction of allograft tolerance [30].
In our study we explored the feasibility of the maintenance of TAC at very low doses after association of Belatacept in patients where the rejection risk was higher and/or where the occurrence of rejection with subsequent treatment side effects would have been more dangerous due to the medical complexity of these KTs.
Even in the absence of CNI withdrawal, the drastic reduction in its exposure induces a significant eGFR amelioration, especially in patients with HD-dependent PNF. In this setting, the benefit of conversion was particularly striking by acting as a graft saving protocol when compared to the outcome of a historical cohort.
Alternative strategies to reduce CNI levels without exposing to excessive risk of rejection in the case of prolonged DGF and poor graft function may theoretically involve the use of mTOR inhibitors (mTORi), but these drugs may conversely reduce acute tubular necrosis recovery through their anti-proliferative effect, and are frequently not well-tolerated (30% of drop-out rate due to side-effects) [31,32].
A portion of the present cohort was composed by patients bearing also a liver or a pancreas transplant, and in one case both. An initial trial comparing Belatacept vs TAC regimen in liver transplantation was stopped early for increased rate of AR, graft loss and death in the Belatacept arm [33,34]. This is the reason why we maintained low-dose TAC in these patients.
There is no published experience of the use of Belatacept in pancreas transplantation, and therefore this work includes the first report.
We also adopt Belatacept as rescue therapy in a selected group of patients with cAMR. All these patients were contemporary treated with TCZ, a IL-6 receptor-blocker, which previously showed positive effects in cAMR by reducing DSAs and microvascular inflammation (g+ptc score) [22,23].
Based on our observation of the absence of rejection with combined Belatacept/low-dose TAC, we adopted this regimen together with TCZ as a rescue in patients with poor graft function, cAMR and coexisting histological signs of acute CNI nephrotoxicity. In 4 out of 5 patients this protocol achieved eGFR stability whereas one patient is currently on dialysis after an episode of severe sepsis from a cholangitis secondary to biliary obstruction of the transplanted liver.
Considering the observational nature of the study and the characteristics of the patients, it is difficult to determine if the combination of Belatacept with low-dose TAC increased infections. Particular infectious events, such as cerebral toxoplasmosis or pneumocystis pneumonia, that are usually rare albeit possible in immunosuppressed patients, were observed in one case each.
Indeed, opportunistic infections after full conversion to Belatacept were also recently observed by Bertrand et al [35]. In our work, where low dose TAC is kept, seems consistent with his observation [35]. In the study of Adams et al [5] the combined treatment didn't show difference in serious infection rate when compared to standard TAC-based regimen. In association, in a rescue setting the risk of infection has to be compared to that occurring in case of dialysis reentry, where a marked increase in mortality risk mostly due to infectious events is present [36,37]. This may be particularly true for patients with a previous transplant of other organs (i.e. liver or pancreas) who are forced to maintain full immunosuppressive therapy.
We are aware of the limits of our study (low numerosity, retrospective design, absence of a control group) and, obviously, randomized controlled trials for rescue protocols are highly needed to set the exact risk/benefit ratio. On the other hand, high medically complex patients who may mostly benefit from this therapeutic approach are frequently excluded from RCT.
In conclusion, in our experience maintenance of low-dose Tacrolimus after rescue conversion to Belatacept in high medically complex KTs is a feasible option that allows prevention of acute rejection, amelioration of graft function without increasing substantially infectious complications.
These data may expand the use of Belatacept, thus conferring new interesting perspectives for its adoption. | 4,442.2 | 2020-10-15T00:00:00.000 | [
"Medicine",
"Biology"
] |
Integrating Sensor Networks for Energy Monitoring with Service-Oriented Architectures
More accurate predictions of energy consumption are a strong motivator for utility providers to deploy a smart grid infrastructure. However, measurements which only reflect the consumption of a household lose the details associated with the behaviour of individual devices. Finding a flexible and efficient way to process these readings is essential. Using standard application techniques to integrate device-oriented sensor networks and data-oriented applications is a serious challenge due to the architectural gap between the different approaches. Additionally, this device-level information should be shared with the end-users in a trusted manner to increase their energy awareness. We propose a novel platform for the smart grid which enables the seamless integration of sensor networks with a service-oriented architecture approach. The platform hides the device-specific details from the applications and transforms data into a device-independent format. Specifically, we present an in-depth description of the architecture of our platform and a full implementation and evaluation of it in a live residential energy management deployment.
Introduction
Accurate prediction of electricity consumption is a major challenge for utility providers.Even a small increase in accuracy can lead to significant improvements in regulating supply and therefore costs for the utility provider [1].The switch to using smart metering infrastructure has allowed the utility provider to get frequent (typically around every 15 minutes) updates as to the power consumption perhousehold.However, this information only reflects the consumption of the household in total, not the individual devices (e.g., DVD player, TV, fridge, washing machine, etc.).Since each device has a different usage pattern, it is important for accurate predictions to take into account the specific characteristics of each device.
The convergence of consumer electronics and information technology together with the developments in communication systems gave rise to a new range of services, for example, building automation, smart metering, health, fine-grained demand-side management, safety and security, and so forth.In accordance with current best practices in software engineering, a service-oriented architecture (SOA) approach is often used to provide the infrastructure for such interoperating services or applications.SOA is useful as it creates a set of independent, loosely coupled services which communicate with each other through a set of well-defined interfaces.One of the key challenges with developing applications and services that interface to sensor networks is the architectural gap between device-oriented sensor networks and data-oriented applications.For example, the addressable component of a typical sensor network such as ZigBee or Z-Wave is the network interface address of the sensor whereas an application may provide electricity consumption information to an end-user.Therefore, there is a significant difference between the granularity of data produced by the sensors and the data which a SOA system can typically process.
In order to make per-device measurements available to both the utility provider and the end-user for both energy prediction and energy awareness purposes in a trusted way, we present a platform for the smart grid based on a SOA approach which allows the integration of sensors measuring the energy consumption of individual devices.The platform is capable of both supporting the usage of per-device sensors International Journal of Distributed Sensor Networks and also hosting multiple applications which manipulate and visualize the per-device sensor data for the end-users.
In order to bridge such an architectural gap, existing SOA systems typically integrate heterogeneous services by using an Enterprise Service Bus (ESB) [2] in order to create a set of loosely coupled services.However, the protocol-or technology-specific parts of the ESB (also known as binding components) export data-oriented service interfaces rather than device-oriented interfaces.Thus, the challenge remains of how to integrate device-oriented sensor networks into a data-or service-oriented architecture.
The main contributions of this paper are twofold: (1) the design and implementation of a novel platform for the smart grid which enables the integration of perdevice sensor measurements with a SOA approach (Sections 3 and 4); (2) an evaluation of our platform, used as a smart energy monitoring system, in a number of real-world deployments and a selected set of experimental results (Section 5).
In addition, we also present a discussion of the relevant related work in Section 6 and a summary of our insights in Section 7.
PeerEnergyCloud Architecture
Sensor networks have successfully been deployed for various and different use cases.These include controlling and monitoring industrial plants, supervising weather stations, and military purposes.However, with declining hardware costs, sensor networks also find their way into private homes.
Corresponding research projects focus on monitoring those homes concerning energy consumption as well as security and safety reasons [3].Furthermore, actuators allow for home automation based on measured sensor values or user input.
One such project is PeerEnergyCloud (PEC) [4], which specifically focuses on the energy domain.The importance of sensor networks for PEC lies in the detailed logging of energy consumption on a per-device level (e.g., refrigerator, television, and washing machine).Analyses of logged values furthermore provide the opportunity to predict future consumption values and hence an in-depth planning of energy production on the utility provider's side.
The architectural framework of PEC including the key components is shown in Figure 1.
In the architecture drawing, all components except the private home installations are located in the Cloud.Cloud computing allows for the abstraction from real IT infrastructures and hence supports elastic scalability for the handling of big data.That is, computing power and storage may be easily increased or decreased depending on the current sensor data load.This is an important feature since for the described use case the load varies, for example, depending on the time of day.Algorithmic frameworks such as MapReduce support elasticity and scalability when deployed in the Cloud.
Concerning PEC, algorithms specifically attract notice when it comes to predicting energy consumption which will be performed by the energy trading agents.
An important component of the architecture is the Backend which represents the point of contact between the private home installations and the different parts of the PEC architecture.
The private home installation is displayed on the lower left.It consists of a Home Gateway which represents the coordinator between the network and sensors/actuators.All sensors and actuators are connected to the Home Gateway and regularly provide it with measured values.Deployed sensor types include so-called smart plugs for the energy measurement, temperature sensors, light intensity sensors, air pressure sensors.These data will be pushed to the different parts of the PEC architecture through the Backend.
With regard to the user's privacy settings, the Backend pushes selected sensor data to the trusted smart grid datastore (TSGD), registered valued-added services, and its energy trading agent.The TSGD is responsible for managing data from all connected smart homes and provides access to this data for valued-added services which the user has registered to.
Value-added services may include energy analysis tools, home security management, and intelligent automation applications.
Energy trading agents use the detailed consumption data in order to predict future energy requirements and to place contracts for energy purchase through the marketplace.Such a system allows for explicit energy management and the easy integration of customer-owned energy generators, such as solar panels or wind engines.
Figure 1 provides an outlook for the overall system complexity.It also shows the importance of a reliable sensor network with detailed measurements.
With the overall picture in mind, the following sections will describe the Smart Home Subsystem which includes the Sensor Network, the Home Gateway, and the Backend.
Smart Home Subsystem Design
The overall Smart Home Subsystem architecture is depicted in Figure 2, whereas Figure 1 provides the deployment view of the end-to-end PEC architecture where the software components that comprise the Backend system have been deployed using Cloud technologies.Figure 2 shows the component view of the introduced platform in terms of the functional components of the system and their interconnections.
The architecture comprises a set of sensors/actuators (e.g., smart plugs), one or more Gateways, a Backend, a Frontend, and one or more client applications.All the parts of the architecture are independent from each other; in this way it is possible to modify/update each of them without the need of updating or rebuilding the whole architecture.
Users can benefit from such an architecture due to the fact that it allows a simple connection of new devices (enabling in this way the possibility to increase the number and the variety of connected devices), and developers can exploit the ease of implementing new applications thanks to the abstraction provided, without worrying about the hardwarespecific details of the devices.
In the following, we describe in detail the components of the architecture.
Sensors and Actuators.
In general, the network connecting sensors/actuators can be realized by using wireline or wireless technology.Wireline communications have the advantage of using the existing wiring infrastructure and can have higher data rate as compared to wireless technologies, but some technologies, such as Power Line Communications [5], are exposed to varying channels conditions which depend on the number and type of appliances connected to the power line.Other wireline technologies, such as KNX, require dedicated infrastructure to be installed.
One of the main advantages in using wireless sensors and actuators is that there are less restrictions regarding where the sensors have to be installed.This is a real benefit in deployment scenarios in hazardous or irregular environments.However, the lack of cabling introduces some challenges related to the energy consumption of the wireless nodes.
Gateway.
The Gateway provides an interface between the sensor network and the rest of the system.This means not only bridging the gap between the likely short-range network used by the sensors, but also the device-specific details present in the Gateway and the higher-level, device-agnostic middleware in the Backend.
The Gateway for this solution provides an abstraction layer which removes the device-specific details of the sensors and offers the gathered data in a standardized, deviceindependent way.The Gateway also handles the management of the devices which are attached to it, for example, managing the registration of new devices, security provisioning, and fault detection.The Gateway's functionality should be extensible during the deployment lifetime of the Gateway and the gathered data should also be stored in a cache on the Gateway for optimization purposes.
The Gateway design is split into three main components (see Figure 2).
Core Bundles.
The core bundles perform the majority of tasks carried out by the Gateway.The set of Core Bundles comprises the following components.
(i) The Device List which should handle joins and leaves of the devices to and from the sensor network so that the registration and subsequent deregistration of devices can be pushed to the Backend.(ii) Sensor Data and Descriptions: the attached sensors need to be described in a device-independent way so that the Backend can manipulate the data and descriptions without need of having explicit knowledge of the underlying implementation details of the various sensors.(iii) Cache Manager: to guard against failures of network connectivity, the Gateway needs to cache sensor data temporarily until connectivity has been restored.The data needs to be kept in a cache for a predetermined amount of time before being removed.should have a number of device-specific protocol adaptors so that a number of different devices using different networking technologies can be attached to the Gateway in a modular fashion.Each protocol adaptor should completely encapsulate the device-specific details of the sensor, leaving the rest of the Gateway to operate in a generic way as possible.
The protocol adaptor will handle the actual interface to the sensor network in order to receive data and all associated additional device-specific functionalities such as sending commands to the sensors for both actuation (where available) and management purposes.
Platform Services.
Platform Services are generic housekeeping services to manage the Gateway as a service platform.In order for the Gateway to be upgraded during its deployment lifecycle, it must be implemented in a modular way.These modules must be "hot pluggable, " meaning that they can be dynamically upgraded without having to stop the running system.In order to achieve that, a set of Platform Services must be present providing an appropriate system and service platform which enables such hot plugging of software components.
Backend.
The Backend hosts both data processing and data management functionalities (see Figure 2).Complying strictly to the idea of loosely coupled systems, the Backend has been designed based on an ESB architecture.That is, various service-oriented components communicate through the ESB to exchange data without relying on objects of other components.The Backend is designed to work with sensor data in a generic, high-level fashion.
Depending on the accessing components, the Backend is typically subject to two different information processing flows.
The first information processing flow is initiated by the Gateway whenever a new sensor reading comes in through the Backends Data Push Interface.This reading is then pushed on the ESB and consumed by the Sensor Data Parser.The parser transforms the data into a valid format and pushes it back to the bus.Finally, the Persistence Connector accesses the sensor data and persists it in the corresponding table of the Sensor Data Repository.
In addition to the persistence connector, a Complex Event Processing (CEP) engine receives the incoming sensor data [6].The CEP engine allows to run continuous queries on the incoming data, like filtering, aggregation, or pattern matching.Outputs of the CEP engine are filtered or aggregated sensor data as well as higher level events that are derived from a set of multiple sensors values.
The aggregated sensor data and the events produced by the CEP are again pushed on the bus.From there, they can be forwarded to the applications using the Data Push Interface.
The second information processing flow is initiated by the Frontend through the Aggregate Sensing/Actuation Interface.This interface allows the applications to access aggregated sensor data as well as events that have been produced by the CEP.Moreover, it allows the applications to access the Gateway forwarding the requests from the Backend to the Gateway using the ESB.This enables the applications to actuate and configuring devices as well as administrate the Gateway itself.Thereby, the Aggregate Sensing/Actuation Interface enriches the transmitted data with data already stored in the Backend repository.If, for example, a list of all devices connected to a Gateway is requested, the interface adds metadata about the sensors taken from the repository (e.g., the units that the sensors measure in).In addition, the interface also provides basic functions to administrate the CEP.
Another feature that can be activated using the Aggregate Sensing/Actuation Interface is the Replay Control service, which can be used for analyzing stored data and testing features.It is activated when an application sends a specific replay command (including a timestamp and a time duration) through the interface.The interface then pushes this command to the ESB in order to activate the Replay Control.Once this happen, the Replay Control service accesses the database through the Persistence Connector and requests all the data stored after the specified timestamp and before the end of the time duration.The Replay Control then produces sensor data messages from these stored data and sends the messages in the same interval they have been received.Moreover, it uses the same format and access mechanism requested by the Data Push Interface.Only the timestamps are updated to the current values; this means that all of the other services cannot distinguish the replayed data from the real current sensor data.Thereby, every action that usually would be executed, including persisting the data, is executed as well.
Frontend and Client Applications.
The generic access of sensor data through the Backends Aggregate Sensing/Actu-ation Interface allows the development of arbitrary Frontend applications.
In the case of residential energy management, examples of applications include monitoring energy consumption (how much energy is currently consumed by the devices?), analyzing energy usage history (which is the most expensive device?), recording temperature fluctuation (is the heater working well?), and switching devices on and off (comfort).
Smart Home Subsystem Implementation
4.1.Sensors and Actuators.In our implementation, we use ZigBee devices [7].ZigBee is a low-power wireless mesh network standard based on IEEE 802.15.4 standard for wireless personal area networks, and it is commonly used in building automation deployments.Compared to other competing technologies, such as X10 and KNX, which typically use dedicated wiring for communication, ZigBee has the advantage of being wireless.Additionally, unlike Z-Wave or EnOcean, it is an open standard and offers a high level of configurability that is critical for a research project.Finally, it provides comprehensive security features compared to competing technologies [8].
In particular, in our deployments we have used XBee devices, working in the 2.4 GHz ISM band [9], in a mesh topology with the Personal Area Network (PAN) coordinator located in the Gateway.
Gateway.
In order to meet the requirements described in the previous section and satisfy the modularity and extensibility needs, we chose to build the Gateway using the OSGi service framework [10].OSGi provides a platform where Java modules (called bundles in OSGi) can be dynamically installed, stopped, started, updated, and uninstalled.By offering a service-oriented architecture, bundles can register themselves as services, discover existing services, and bind to listening services.These factors together mean that the functionality of the Gateway can be modified during its deployment life-cycle, thereby meeting our requirements.
We created the following services in Java on top of OSGi.
The XBee protocol driver implements the XBee specific driver and performs the following functions: (i) device discovery: it discovers XBee devices in the network; (ii) data handling: data is received from the attached devices, and the payload is parsed and sent to the Cache Manager and also to the Data Push Interface so that the data can be forwarded to the Backend for further processing; (iii) actuation: if an application wishes to switch a device on or off, then the XBee protocol driver creates the appropriate message to be sent to the device so that it can be activated or deactivated.
The Sensor Data and Descriptions are a set of high-level descriptions of the sensors themselves and the data they collect.The sensor descriptions are used by the Gateway to keep track of which sensors are attached to the Gateway at any particular time as we typically have sensors of multiple types in a deployment.The sensor data descriptions are used to convert the data sent by the sensors into a format which can be used by the Backend and thereby the rest of the system.
When a message arrives from a sensor, it is processed by the Gateway and the relevant data is extracted.A single sensor device can send messages with different types of data as it may have multiple sensors onboard (e.g., temperature, light, humidity, etc.), so the Gateway needs to be aware of the different types of data that can be expected from the sensors.Box 1 shows an example of the unformatted sensor data as it arrives from the sensor at the Gateway.The Gateway parses this data and creates an internal representation of this data using the sensor data itself, information from the packet header, and the time and date when the packet has been received.
This internal representation is then converted into a JSON stanza which is then sent to the Backend for processing.An example of the JSON stanza is shown in Box 2.
The Cache Manager interacts with a local database (e.g., PostgreSQL) to temporarily store the gathered sensor data.The database is also accessible via the Sensing/Actuation Interface so that the applications can access the cache if necessary.
The Representational State Transfer (REST) Interface is the implementation of the interfaces used by the Gateway to interact with the Backend.It provides the following features: getting the latest data from the database for a specific device, turning the relay of a device on or off (actuation capabilities), and sending gathered data from a sensor to the Backend.
The REST Interface is implemented using the JAX-RS framework [11] and formats the payload of the REST calls in a JavaScript Object Notation (JSON) format [12].
Backend.
In order to implement the service-oriented architecture as shown in the previous section, we implemented the Data Processing Tier of the Backend using ServiceMix 4.4.2[13].This allows reliable messaging using Apache ActiveMQ, routing using Apache Camel, and the relatively simple creation of complex RESTful interfaces using Apache CXF and JAX-RS.Thereby, the ActiveMQ allows the fast transportation of messages using the Java Messaging Service (JMS) which enables the communication using several protocols such as Stomp or OpenWire.
On top of that, ActiveMQ supports a wide number of language clients besides Java, for example, Ajax, C#, PHP, and Ruby.Thanks to it, applications can easily be integrated into the system.
The integration framework Apache Camel then allows the definition of routing and rules on top of the messaging.Therefore, several domain-specific languages, for example, Spring, Fluent API, or Scala DSL, can be used.As Apache Camel uses URIs, it can work directly with messaging models as HTTP, JMS, and ActiveMQ.
The Sensor Data Repository has been realized using a PostgreSQL 9.1 database [14].It has been integrated into Ser-viceMix using the EclipseLink 2.3.2 implementation of the Java Persistence API.As a low-memory alternative, we also implemented a Sensor Data Repository using PostgreSQL and JDBC.The chosen data exchange format is again the JSON due to its lightweight data exchange format which requires less resources when compared to certain XML formats.
CEP is implemented based on the open-source solution ESPER [15].This engine provides high throughput and supports an expressive declarative language for complex event queries.The engine is wrapped as a service and receives all incoming sensor data from the bus in a push-based manner.Incoming sensor data are encoded in JSON format.The wrapper maps the data to an internal Plain Old Java Object (POJO) representation of raw events, and a sensor stream is created for each sensor type.CEP queries can be defined over these streams or any combination of them via the ESPER query language.Query results are continuously pushed to listeners, which transform the results into a JSON representation and push them to the bus.Thereby, filtered sensor data and derived high level events are made available in the Frontend.
Frontend and Applications.
To enable interaction with the overall system through a variety of end devices from smart phones to desktop PCs, we have decided to use a web application server to implement the Frontend.
The Frontend provides an interface which client applications can use to access the sensor data of the system in a user-friendly way.In particular, we used the Apache [13] HTTP Server for the server-side, and we have implemented sample Web applications on the client-side using a dedicated JavaScript framework which supports touch devices.
Smart Home Subsystem Evaluation
5.1.Deployment Description.In order to evaluate the platform, we have developed an energy usage management application based on our architecture and tested it in a field trial.Using this application, one can identify large consumers and unnecessary energy consumption in order to reduce overall electricity consumption.It also provides an interface for remote monitoring and control of the appliances.We tested this application in a field trial where the sensors were deployed in a private household.In order to collect energy consumption values and control the appliances, we used smart plugs with metering and actuating capabilities.All the smart plugs were connected to the Gateway using ZigBee, establishing a mesh topology network, and the smart plugs reported current consumption data to the Gateway every 2 seconds.Their distance to the Gateway was ranging from less than one meter (line-of-sight connection) to around ten meters, in which case radio signal was propagating through two inner walls and one inner floor of the house.Good radio propagation conditions resulted in a long-term stable operation of the sensor network, in our experience.
All the measurements related to the energy consumption were sent to the Backend which was hosted in a Cloud environment built with virtual machines in a remote data center.In addition, we implemented a web-based Frontend that provides the user with statistics over the collected sensor data.Examples of charts reported are shown in Figures 3 and 4. Specifically, Figure 3 shows the average consumption during days of a week at the trial household over a four-week period, and Figure 4 drills down into the devices'-specific consumption for a more detailed analysis.
Figure 3 reveals two consumption peaks, that is, one smaller peak after lunchtime and a higher peak in the evening.Drilling down into the devices'-specific consumption (see Figure 4) explains these peaks.Among the observed devices, there are a freezer and a fridge that run through regular cooling cycles during the day and explain a large percentage of the energy consumption.After lunch, a TV is switched on for a few houses, resulting in the first peak.A second and bigger peak occurs in the late afternoon, when the TV is switched on in combination with further entertainment electronics (i.e., a second TV set and a stereo).These statistics enable the user to get detailed feedback about the energy usage and show potential for energy saving, for example, by adapting the behavior or replacing energy consuming devices with more efficient ones.For instance, Figure 4 shows that, among the two TV used, the less energy efficient one is used more frequently than the other and swapping the two would result in savings.
Support for Electricity Demand Prediction.
As already shown previously, the proposed platform was developed in the context of the PEC project [4], where one of the aims is to improve the electricity demand prediction for private households.In the PEC architecture, trading agents use our system for collecting sensor data as input to their prediction models (see Figure 1).Fine-grained devices'-specific measurements can improve the accuracy of short term forecasts as illustrated in Figure 5. Specifically, the figure shows a sevenhour snapshot of data from our field trial.The displayed curves are the individual load profile (average weekday load of the specific household over 4 weeks), the real load, the predicted load using a persistent prediction, and load prediction based on device-specific measurements and machine learning techniques (ML based) [16].
The results show that the actual load drastically diverges from the long-term average.However, the prediction mechanisms both provide a good match for short-term forecasts [17].The highlighted parts in the figure mark where the persistent prediction and the ML-based prediction differ.As one can see, the ML-based prediction that uses devicespecific sensor data adapts better to sharp drops in the load and provides better matches to short spikes.This results in an overall better prediction accuracy that is enabled by our sensing solution.
Performance Evaluation Results
. Since the typical deployment has multiple Gateways connected to one Backend, we expect the Backend to be the bottleneck of the system.For that reason, the evaluation presented in this section focuses on this component.Another critical component impacting on the performance of the system as a whole is the sensor network.However, its performance evaluation is not closely related to the software architecture, but it is rather in the domain of the radio propagation, which is out of scope for this paper.
For the performance evaluation of the Backend, we studied an urban site comprising 50 households (i.e., 50 Gateways), each of them equipped with 15 smart plugs reporting electricity consumption.Each message delivering the consumption report was 192 bytes long.We used JMeter [18] to emulate the data and signaling load going to the Backend REST interface.The JMeter engine was running on a dedicated server machine connected to the same Gigabit Ethernet segment as the Backend server machine.All the operating system tasks that were irrelevant to the load test were dormant, while network traffic on the Ethernet segment was monitored by Wireshark.
In the first test plan, we emulated only the traffic coming from the Gateway with the updates from smart plug measurements, and in the second one, we evaluated the performance of the system under mixed traffic coming from the Gateways and from the applications reading the measured consumption values.
Test Plan 1.
In order to measure the performance under different load intensities, we varied the frequency of smart plug measurements updates as presented with the black curve (target traffic) in Figure 6.In a real deployment (i.e., a production environment), the Backend will be required to demonstrate robustness to varying loads.Accordingly, we chose a stepwise pattern for the load test, rather than reset the Backend to a clean state before setting a particular load.This approach is useful in understanding the steady-state behavior of the system and for assessing the stability of the system's performance for different (steady-state) load levels, until the system reaches saturation.Specifically, with the last step of the target traffic shape we evaluate how the system recovers after operating in saturation.
As a primary measure of performance, we analyzed the rate of HTTP responses and the response time as a function of the load.Figure 6 depicts the rate of HTTP requests to the Backend REST interface over time.In the first 4 levels of the load (up to 100 requests/s), the rate of HTTP responses matches the input load.As the target input load grows further up to 140 requests/s, the total measured rate does not grow anymore, and the processing time at the Backend increases.This clearly indicates that the system is in saturation.As the load drops down to 70 requests/s, the system recovers very fast (within seconds) and continues to respond to all the incoming HTTP requests.
Figure 7 shows measured response time as a function of time, and therefore, load.We observed a steady performance for the first four levels of load which clearly deteriorates as the load increases (i.e., when the system is saturated).This clearly demarcates the good operational region of the system in terms of load and also provides insight into the scale-out properties of the system.
Test Plan 2.
The tests performed in the second stage had the goal to evaluate the performance of the Backend under more realistic conditions.The traffic generated by JMeter was a mix of HTTP requests coming from the Gateways (writing the sensor measurements) and from applications (reading them from the Backend).We used the same setup as in Test Plan 1, and in addition we assumed 50 applications (one for each household).The total target traffic for the Gateways was kept constant to 60 requests/s, as shown in Figure 8, whereas the applications traffic is changed stepwise from 30 to 140 requests/s and dropped to 70 in the last stage (see Figure 9).The grey curves in Figures 8 and 9 show the traffic that can be successfully handled by the Backend.As it is possible to see, the traffic coming from the Gateways can be handled throughout the whole duration of the test plan.On the other hand, the traffic coming from the applications cannot be handled at a rate higher than 100 requests/s, that is, when the Backend reaches saturation.Similar to the first test plan, the system recovers very fast once the rate of HTTP requests is reduced.What is also interesting to note is that the traffic from the Gateways has higher variance from minute 8 of the test, which corresponds to the saturation point.Figures 10 and 11 show the response time of the HTTP requests from the Gateways and from the applications.Delayed response by the Backend from minute 8 until minute 16 confirms that the Backend is in saturation at this point.Also, once the load from the applications drops to 70 requests/s, the response time is reduced as well.
In order to have an overview on the number of HTTP requests affected by high delay, in Figures 12 and 13 we show the response time distribution for both Gateway and applications requests, respectively, during the whole duration of the test plan.As it is possible to see, the majority of queries show a response time below 100 ms, that demonstrates again the good operational region of the system in terms of load.Since the database represents the crucial component for our system's performance, we performed some measurements of the PostgreSQL that are reported in Tables 1 and 2. Looking at Table 1, we notice that the average duration time of the queries is 5 ms whereas the minimum and maximum duration is below 1 ms and 216 ms, respectively.The maximum duration has been achieved with a queries peak of 787 queries/s.As expected, SELECT queries last more than INSERT queries due to the fact that the SELECT function includes also a search inside the database, while the INSERT is only an operation of "append." In order to have insight about the high values of delay perceived during the phase of system saturation, we report in Table 2 some measurements related to the database checkpoint.A checkpoint is a point in time where all modified or added data in the system's memory are guaranteed to have been written to the disk.In case the database is written too heavily, such as in our case, it is possible to suffer from significant delay (many seconds long) during the periodic database checkpoints [14].As it possible to see, during our evaluation time a recycling checkpoint (meaning that after the writing procedure, an old log segment file no longer needed is renamed to be used afterwards), lasting about 184 s, has been performed.During this time, the system performance is affected causing the delayed response by the Backend described previously.
Related Work
The last years have shown a growing trend toward interconnecting devices to cooperate with each other to fulfill complex functionalities.Several systems integrating sensor networks with energy management systems at the consumer premises have been proposed so far.In [19], the authors evaluate the performance of an in-home energy management system based on a ZigBee wireless sensor network.The focus here is on the performance of the system on the application level, meaning the investigation of potentials of both energy management and demand management.Without going into details of the architecture of the underlying software or its performance, evaluation that has been performed is based on simulation results, and not on real network deployments in households as in our work.
Similarly, in [20], energy management in homes has been investigated, but based on a pilot consisting of only 3 households in Sacramento.The solution has been implemented using off-the-shelf components based on power line communication, and it included a web-based monitoring and control of home appliances.
In [21], the authors provide a set of algorithms and systems for enabling energy saving in a smart home.This work also makes use of the OSGi service platform; however, the authors do not focus on how the data gathered by their system could be reused by multiple applications nor do they provide details on how their system is implemented or how well it performs.
Closing the gap between the diversity of sensors, actuators, network technologies, and the ease of new application development, have been widely explored in the last few years, but there is still lacking a widely accepted solution and a robust set of implementations.
One of the first efforts to provide a higher level of abstractions for applications, development in wireless sensor network domain has been done in [22] where authors introduce a simple service-oriented model in which the responsibility for handling the service requests is delegated to an external entity acting as a bridge between external requestors and internal system functionalities.An adaptive middleware supports mechanisms for cooperative data mining, self-organization, networking, and energy optimization to build higher-level service structures.Whilst the concepts are promising, no implementation has been done for validating the presented concepts.
In the work presented in [23], service-oriented models, in particular web services approaches, have been applied to WSN.This is similar to our work (e.g., RESTful web services and JSON data format for data exchange), but this proposal lacks a real implementation which can unveil realworld problems related to the adoption of the proposed architecture.The same is the case for the work presented in [24], where a middleware service for sensor networks is proposed.Moreover, services on the sensor networks do not follow the conventional request/response service model which is widely accepted in SOA applications, and to this purpose new solutions should be developed.
In [25], the idea of providing an abstraction between the devices and the applications has been applied to a real use case: technology in the home.Specifically, authors propose HomeOS which represents the first architecture capable of providing a PC-like abstraction for home technology.Users and applications can find, access, and manage these devices via a centralized operating system.This operating system also simplifies the development of applications abstracting differences among devices and homes.The HomeOS project has a different focus to our work as it concentrates more on general consumer services rather than energy-specific services.Additionally, they do not provide a similar set of Backend features such as a CEP engine which is necessary for performing real-time analytics.
An IP-based smart monitoring system where nodes send data using web services is presented in [3].In particular, authors use the REST-based web services onto the 6LoWPAN architecture, allowing a direct IP access to the sensor network.This work looks more into the routing aspects of the wireless sensor network, which is a different angle to the work presented in this paper.Additionally, they work more with the REST API on the devices, rather than having a REST service on the Gateway and the Backend.Finally, the CEP engine integration which provides a great deal of functionality to our system is also not present in this work.
A Distributed Operating System (DOS) to support peer Internet access to home network devices is proposed in [26].Besides cache management, it schedules access to device services in coordination with their power saving policies and realizes resource control policies.Whilst the project is promising, there are very little details provided about the implementation details, and there appears to be no plans for a large-scale field trial such as the one underway as part of the PEC project that this work is part of.
An architecture for an adaptive system comprising selfadaptive gateways and sensors is proposed in [27].A middleware layer employs machine learning mechanisms to detect recurring event patterns.Contextual information (e.g., temperature, humidity, etc.) conveyed by the sensors drive the dynamic selection of the appropriate service, thus enabling adaptation.This approach focuses on evaluating a selfclustering (i.e., adaptive) protocol in the sensor network to optimize its power usage.Whilst there are architectural similarities between the design of their work and the work presented in this paper, our implementation efforts go significantly further.In particular, we provide a full implementation of our Backend system and show how it can be used in a live deployment.
Conclusions
We presented a novel platform which enables a smooth integration of sensors/actuators (e.g., smart plugs) with a SOA approach.The proposed platform hides the device-specific details and transforms data into a device-independent format, giving the possibility to use a variety of different applications on top of the platform.As an example, an application for energy consumption monitoring and its prediction for the optimization of the processes of utility companies has been presented.As a proof of concept, we have shown implementation and evaluation of our platform when used for residential energy monitoring.
For future work, we are deploying our system in a field trial where 100 households will be equipped with a residential energy monitoring system.This system will use the software presented in this paper and a variety of sensors, and it will offer a number of value-added services running on top of the platform.In particular, data analytics over the data set collected by the households will be investigated in order to provide support for value-added services.Coupled to this, another topic of our future work will be in the area of security and privacy, which is a prerequisite for a wider deployment of such a system.
Figure 4 :
Figure 4: Measured consumption on one day.
Figure 5 :
Figure 5: Demand forecast with smart building sensors.
Figure 6 :
Figure 6: Test Plan 1: target traffic sent from the HTTP traffic emulator and the rate of HTTP responses.
Figure 9 :Figure 10 :
Figure 9: Test Plan 2: target traffic sent from the application HTTP traffic emulator and the rate of HTTP responses.
Figure 11 :Figure 12 :
Figure 11: Test Plan 2: end-to-end response times for the application requests over time.
Figure 13 :
Figure 13: Test Plan 2: end-to-end response time distribution for the application requests over time. | 9,276.4 | 2013-06-01T00:00:00.000 | [
"Computer Science",
"Engineering",
"Environmental Science"
] |
Perceptual Gaps Between Clinicians and Technologists on Health Information Technology-Related Errors in Hospitals: Observational Study
Background: Health information technology (HIT) has been widely adopted in hospital settings, contributing to improved patient safety. However, many types of medical errors attributable to information technology (IT) have negatively impacted patient safety. The continued occurrence of many errors is a reminder that HIT software testing and validation is not adequate in ensuring errorless software functioning within the health care organization. Objective: This pilot study aims to classify technology-related medical errors in a hospital setting using an expanded version of the sociotechnical framework to understand the significant differences in the perceptions of clinical and technology stakeholders regarding the potential causes of these errors. The paper also provides some recommendations to prevent future errors. Methods: Medical errors were collected from previous studies identified in leading health databases. From the main list, we selected errors that occurred in hospital settings. Semistructured interviews with 5 medical and 6 IT professionals were conducted to map the events on different dimensions of the expanded sociotechnical framework. Results: Of the 2319 identified publications, 36 were included in the review. Of the 67 errors collected, 12 occurred in hospital settings. The classification showed the “gulf” that exists between IT and medical professionals in their perspectives on the underlying causes of medical errors. IT experts consider technology as the source of most errors and suggest solutions that are mostly technical. However, clinicians assigned the source of errors within the people, process, and contextual dimensions. For example, for the error “Copied and pasted charting in the wrong window: Before, you could not easily get into someone else’s chart accidentally...because you would have to pull the chart and open it,” medical experts highlighted contextual issues, including the number of patients a health care provider sees in a short time frame, unfamiliarity with a new electronic medical record system, nurse transitions around the time of error, and confusion due to patients having the same name. They emphasized process controls, including failure modes, as a potential fix. Technology experts, in contrast, discussed the lack of notification, poor user interface, and lack of end-user training as critical factors for this error. Conclusions: Knowledge of the dimensions of the sociotechnical framework and their interplay with other dimensions can guide the choice of ways to address medical errors. These findings lead us to conclude that designers need not only a high degree of HIT know-how but also a strong understanding of the medical processes and contextual factors. Although software development teams have historically included clinicians as business analysts or subject matter experts to bridge the gap, development teams will be better served by more immersive exposure to clinical environments, leading to better software design and implementation, and ultimately to enhanced patient safety. (JMIR Hum Factors 2021;8(1):e21884) doi: 10.2196/21884 JMIR Hum Factors 2021 | vol. 8 | iss. 1 | e21884 | p. 1 http://humanfactors.jmir.org/2021/1/e21884/ (page number not for citation purposes) Ndabu et al JMIR HUMAN FACTORS
Background
The widespread use of information technology (IT) has contributed to improved patient safety in the hospital setting [1][2][3][4][5]. However, many different kinds of medical errors attributable to the use of IT in health care have negatively impacted patient safety [6,7]. The number of patients who experience adverse events is estimated to be 40% of all patients who visit primary and ambulatory care [8]. These safety events may lead to an extended hospital stay, additional side effects, or distress and in some cases death. In addition to the loss of life and health impairment, the consequences of adverse events include increased financial costs to patients and the society at large [9].
In hospital settings, several benefits, including health care delivery improvement and reduction in medication errors, have been attained through the use of health information technology (HIT) [3]. However, new patient safety errors attributable to the use of HIT continue to be a significant issue [7]. For example, according to a recent study [10], in Pennsylvania alone, a total of 889 medication error reports listed HIT as a factor contributing to events submitted to the Pennsylvania Patient Safety Authority in the first 6 months of 2016. The study also shows that dose omission, wrong dosage, and extra dosage were the most commonly reported events. The most common HIT systems implicated in the events were the computerized prescriber order entry system, the pharmacy system, and the electronic medication administration record. Several government agencies and academic and clinical practitioner committees have been concerned about the unintended consequences of introducing IT in clinical environments. Several articles [9][10][11] report such adverse patient safety events related to HIT and emphasize the need for more cohesive HIT development processes to reduce the gulf of evaluation between medical and IT teams.
This pilot study seeks to classify patient safety events in hospital settings and to understand the differing perspectives of HIT designers and users concerning the potential causal factors of technology-related medical errors. In addition, the study suggests prescriptive measures to prevent reoccurrences of errors. Understanding the perspectives of both medical and IT stakeholders could help resolve the root causes of medical errors. The proposed classification could be used in facilitating medical and technology stakeholders in working together and working through different perspectives on the causes of HIT-related errors to identify likely solutions and ultimately design better HIT artifacts. To better understand the significant differences, we selected from our list of errors collected through the literature review, 12 archetype errors that occurred in a clinical setting, and examined them using the lens of sociotechnical theory from both clinical and IT systems perspectives. In the next section, we introduce the sociotechnical framework and present the proposed error classification. Following this, the Methods section details data collection and analysis. Subsequently, the results and discussion are presented before the Conclusions section.
Sociotechnical Framework
The sociotechnical theory posits that organizational performance depends on the interactions between social and technical factors, grouped into 4 pillars: technology, process, people, and environment [12]. Prior research suggests that developing applications that cater to end-user needs requires designers and developers to understand the workflow structures, organizational culture, and environment in which these systems will operate [13]. Hence, patient safety improvement is contingent on the joint optimization of social and technical factors in the hospital setting.
This paper creates a more detailed taxonomy by adding subcomponents of the 4 central pillars to the sociotechnical framework [12,13]. The expanded taxonomy allows for a better classification of errors and the development of more precise solutions. Furthermore, we classify the errors in terms of the causes based on the feedback of medical experts and IT professionals. Using the results of this classification process, we provide more in-depth insights into the significant differences in medical and clinical staff members' and IT professionals' perceptions regarding these errors and offer a prescription to mitigate them.
Several studies have used the sociotechnical framework to examine several aspects of HIT implementation and use, including human-computer interaction [14], the impact of policy, infrastructure, and people on the quality of health information [15], ergonomic and macroergonomic aspects of health technologies [16][17][18][19][20], risk assessment of electronic medical record safety [18], and usability factors [14,18]. The sociotechnical framework has also been used to classify patient safety events [21][22][23]. However, these studies have classified errors on the sociotechnical framework's high-level dimensions on which errors map the most ( Table 1 shows a comparison of the 3 published papers closest to our efforts and details how this study is different). The sociotechnical framework suggests that multiple forces from multiple dimensions (and different hierarchical levels of a particular dimension) are at work when errors occur [24]. As patient safety events occur in a complex environment, there is a need for a classification that considers the impacts of multiple dimensions of the framework on each patient safety event's occurrence. Table 1 provides a summary differentiating the studies closest to the work in this paper. These studies were included because the authors used the sociotechnical framework to classify medical errors [21,23] or HIT-related sentinel events [22]. Medical error classifications have been developed using other approaches. The System Theoretic Accidents Models and Process framework has been used to classify medical errors in 3 broad categories: feedback, control action, and knowledge errors [25]. The Human Factors Classification Framework [26] has been adapted to health care to classify medical errors in 5 categories: decision errors, skill-based errors, perceptual errors, routine violations, and exceptional violations [27,28]. Other studies have developed taxonomies without the use of a particular framework [29][30][31]. Prior studies have not applied the sociotechnical framework on medical errors with the intent of exploring the root causes and potential avenues through which the errors can be fixed. Furthermore, the dimensions of sociotechnical frameworks described in the extant research literature have not considered the emergence of new technologies such as cloud computing, n-tier architectures, and new management paradigms, including DevOps and microservices architecture. We adapted and extended the sociotechnical framework with additional dimensions that reflect new trends in IT. A group of expert researchers in information systems and sociotechnical theory reviewed this model [32]. Feedback from these experts was incorporated to refine the classification model, which is presented in Figure 1.
Proposed Classification
Sociotechnical theory emphasizes the interplay of the social and technical aspects of adopting and using technology [17,18,33]. The theory hinges on four basic constructs (technology, people, process, and environment) and the interaction between these constructs. In the expanded version of the sociotechnical framework, we detail the components of the technology dimension to include the IT infrastructure, which in turn comprises hardware, software, and apps. These also include emerging technologies, such as cloud computing, the internet of things, mobile apps, and the use of artificial intelligence, predictive and prescriptive analytics, and robotics. The technology dimension can also be partitioned based on the type of use, broadly classified as either administrative (including administrative IT and resource scheduling) or clinical. The need to investigate at this level of detail stems from the fact that the type of interaction varies based on the interacting subcomponents. Furthermore, the app layers can be viewed as comprising the user interface, middleware (including the logic layer), backend (including the logic layer), and data.
The process dimension includes administrative and clinical workflows. Administrative workflows related to IT include the collection, storage, processing, and presentation of information for more effective resource management, such as clinical and IT staff management, operating room scheduling, risk and safety management, billing and facility management, and inventory management to ensure the business management of the hospitals. The subdimensions of IT processes are software development, HIT implementation and maintenance, and training and support. Clinical processes include patient record management, clinical pathways, patient bed assignment, and physician notes. Some processes are both clinical and administrative; these include the inventory management of drugs and clinical supplies, surgery room and equipment scheduling, and patient discharge management. Processes in health care settings allow all stakeholders to perform tasks in a predetermined manner to obtain successful outcomes [24,34,35]. Patient safety errors manifest when there is a misalignment between the elements of IT and clinical processes.
The people dimension includes patients, clinical staff, and administrative staff. People interact with each other and with the technology available to them. The hospital employee space consists of providers with different competencies and clinical authorities and administrative staff with priorities that are often very different from those of clinical providers. Several examples are worth mentioning here. First, clinical staff members prioritize patients' clinical health, whereas IT personnel are more concerned with the processes involved in health care. Inconsistencies in their priorities often lead to errors. As people interact with the entire work system, a mismatch between people and any other components increases the risk of harm to patients. Human errors are also a threat to patient safety [36]. Therefore, it is essential to build user interfaces and systems that consider the priorities and goals of the different types of users of the system, and these goals go beyond the purely functional and technical requirements of the job.
The environment consists of the care setting (eg, ambulatory, emergency, and in-patient), regulatory (eg, compliance, privacy, and security related), and culture. Culture stems from management style, organizational policy, and other systemic factors. Furthermore, different types of employees prioritize different goals, and conflicts in achieving these goals are often manifest in the building, implementation, and functioning of systems. Patients receiving services are external to the health care organization. To ensure more effective health care service provisioning, patient participation in the process is very important. In some areas, tasks must be performed by patients away from the health care organization. Contextual environments and skills to perform the required tasks differ from those of health care providers [33,35]. Regulations can also have a constraining effect on the error-free functioning of all subsystems. A thorough classification of patient safety events should consider specific areas of interaction between the environment dimension and all other dimensions. We use this expanded classification model to understand the gap in the mental models of clinical staff and technology professionals regarding the root cause of errors and how they should be addressed. We articulate our research design in the next section.
Research Design
The research design is comprised of 2 significant steps: developing a shortlisted set of IT-related patient safety issues and the classification of the root causes of medical errors with the sociotechnical lens using expert interviews. Figure 2 depicts the flow of the study.
Error Collection Using Literature Review
In this study, we first developed an extended sociotechnical framework that includes a finer level of granularity. Next, we systematically reviewed the literature on patient safety and medical errors from Ovid-MEDLINE, Embase, and Web of Science, which are leading medical databases in addition to Google Scholar. The systematic review process shown in Figure 2 aligns with commonly used steps of the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines [37], as depicted by several exemplar papers [38][39][40]. The searches were performed using the following search terms: ("Patient Safety" OR "Medical") AND ("issue" OR "error") AND ("health information technology" OR "information technology"). Initially, the title, abstract, and index terms were used to screen published journal papers, conference papers, proceedings, case studies, and book chapters. We also used ancestral search to locate potentially relevant articles. Subsequently, the shortlisted papers were reviewed entirely. Two reviewers performed the screening independently. The reviewers met regularly to discuss the inclusion of the studies. A third reviewer was consulted when there was a discrepancy. Interrater reliability indicated a high agreement (Cohen κ value of 0.95).
Inclusion criteria included studies that addressed patient safety by identifying specific issues that occurred in health care settings and linked these errors to HIT. Furthermore, we excluded studies that were not available as the full text in the final search; were not in English; or were reports, abstracts only, letters, or commentaries.
Expert Interviews
An invitation email to participate in the study was sent to the alumni of the University at Buffalo. The email contained the eligibility criteria consisting of ≥5 years of HIT experience and at least 1 IT-related professional certification. A separate invitation email mentioning the selection criteria was sent to medical experts through the Office of Business Coordination at the University at Buffalo. A minimum experience of 5 years working as a medical doctor or as a registered nurse was required to qualify for the interview. All participants who responded met the selection criteria and were included in the study.
To better understand the perspectives of different stakeholders, we conducted multiple semistructured interviews [41] with different stakeholders, namely 6 IT and 5 medical experts to map the errors on the different dimensions of the expanded sociotechnical framework. Experts could map an error on multiple (or on all) subdomains of the sociotechnical framework to show the different sociotechnical factors that could contribute to the error. The purpose of accounting for the different perspectives was to understand how each group understood the predicates of the problem and allow us to reflect on how best the error could be addressed. Interviews were selected based on their domain experience, education, and industry certifications. The IT experts, who were recruited from the alumni list of the State University of New York at Buffalo, were software development professionals with a master's degree and IT professional certifications, such as the certified scrum master, the health level 7 control specialists, and the project management professional certifications. The minimum work experience cutoff for IT experts was 5 years for HIT in addition to possessing at least one IT-related professional certification.
IT experts who were interviewed had extensive IT experience (mean 10.33, SD 1.11 years) with significant HIT experience (mean 8.83, SD 2.03 years; Multimedia Appendix 1 uploaded as a supplementary file for brief profiles of IT interviewees). The medical experts interviewed were physicians and registered nurses with broad primary care experience from working with multiple health care institutions across the United States and Canada. They are all currently working with hospitals and institutions affiliated with the university at Buffalo (Multimedia Appendix 2). Medical experts had a mean experience of 16.6 (SD 7.33) years. The minimum and maximum numbers of years of HIT experience for IT experts were 5 and 12, respectively. The work experience of medical experts varied from 8 to 27 years. The questionnaire and interview process are detailed in Multimedia Appendix 3. Experts were asked to provide their opinions on why the selected errors (Multimedia Appendix 4 [42][43][44][45][46][47][48]) occurred and how the errors could be prevented. The extensive experience of both IT and medical experts in their respective domains qualifies them to map medical errors on the sociotechnical framework. The study was approved in November 2019 (IRB# STUDY00003838).
Search Results
The literature search resulted in 344 articles, 141 of which were duplicates. After removing articles based on their content, we retained 36 articles [10,28,[42][43][44][45][46][47] that met the 2 criteria set for the study. We then extracted 67 unique patient safety events from the articles in which 12 specific issues related to IT use in the hospital setting were shortlisted. The process followed the PRISMA methodology [37] as detailed in Figure 3. The remaining errors occurred outside a health care setting and were excluded from the study. The error description includes the error context in the literature review format commonly known as problems, interventions, comparisons, and outcomes model [37]. The articles describing the errors contained a clear purpose, literature review, research methodology, results, and conclusions.
Study Characteristics and Error Classification
In this study, experts categorized errors based on their opinion of where the source of the error lies. Experts were provided with the definitions of the elements of the framework and were informed that an error could result from multiple sources. They were asked to map each error at the lowest level of one or multiple dimensions of the sociotechnical framework. The authors then interacted with the experts to understand the reasons behind their mapping selection. The interactions included questions related to suggestions on the best way to address the problems and prevent them from occurring. In line with extant literature on data analysis in qualitative research coding [77,78], expert interviews were subsequently deconstructed into keywords and phrases and then grouped into ideas and concepts. The output of the analysis is summarized in the "key observations" below, for example, in Error 1: "Copied and pasted charting in the wrong window: Before, you could not easily get into someone else's chart accidentally...because you would have to pull the chart and open it." Medical experts highlighted several contextual issues, such as the number of patients a health care provider is set to see in a short time frame, unfamiliarity with a new electronic medical record system, nurse transitions around the time of the error, and confusion due to patients having the same name. They emphasized process controls, including failure modes, as a potential fix. The technology experts discussed the lack of notification, poor user interface, and lack of end-user training as critical factors in this error. Error 2: "Incompatible data standards across multiple mobile applications led to the missing of vital data fields, which led to information loss." Like the first sample, medical experts attributed this error to system software-related interoperability issues. They also highlighted several changes in the International Classification of Diseases (ICD) during the transition from ICD 9 to ICD 10 as an example of a situation that could lead to errors. Technology experts, however, emphasized data formats, data transfer protocols, and service-orientated architecture as potential causes of errors.
Although we have detailed 2 instances here, the experts reviewed all 12 errors and identified the most likely set of possible dimensions to which the errors could be attributed. The sample errors used in the study are presented in Multimedia Appendix 2, and the results of analyzing these data are presented in Table 2, followed by several key observations. A patient received only half of their usual quantity of blood pressure medication because a repeat prescription for the medication did not transfer to a new software system when the patient's historical records were migrated. Because they did not have enough medication the patient tried to stretch out the old dose by taking the medication on alternate days. The patient had a stroke but made a full recovery.
Implementation and maintenance Patient A child had a full body x-ray. Some of the images went missing from the archival system where they were digitized. The x-ray was repeated to acquire the missing images, re-exposing the child to high levels of radiation A compound in high demand such as Rifampicin was not listed in the computerized physician order entry system. The consequence was that the physician could not order rifampicin.
When an update is made to the frequency field on an existing prescription, the frequency schedule ID is not simultaneously updated on new orders sent to the pharmacy via (application) Software-systems Software-systems Monitoring and Eavesdropping on Patient Vital Signs by hacking into the packet transfer from an internet of things device to the central system Vulnerabilities of the hospital's IOT devices were exploited to initiate a denial-of-service attack to bring down hospital's servers which disrupted normal functioning Use of portable devices that are not password protected makes the patient record vulnerable to the invasion of privacy Incompatible data standards across multiple mobile applications led to the missing of vital data fields which led to information loss
Principal Findings
Some of the crucial observations include (1) The identified potential sources of the errors and solution areas differed considerably between clinicians and IT specialists; (2) both groups identified multiple factors as potential causes of the errors; (3) the clinicians often focused on postproduction (eg, implementation, maintenance, training, context, and the way the application is used) issues as causal factors; (4) IT experts focused on software functionality, software development, and technical implementation issues as causal factors; (5) on most occasions when IT experts identified an issue as a "data" problem, clinicians seemed to think that the problem lay elsewhere, including the software system, software development, or patient pathways; (6) both groups seem to be congruent with the issues of compliance and security; and (7) IT experts rarely identified clinical pathways or workflows as an issue.
The classification of the identified medical errors using the framework is presented in Table 2. The continued occurrence of many errors is a reminder that current HIT software testing and validation do not seem adequate in terms of ensuring the functioning of the software within the health care organization. The attribution of the errors to different aspects of the sociotechnical framework by clinicians and IT professionals informs us that technologists and clinicians generally differ in their perspectives on factors that impact IT-related safety events. Software experts are often not acclimatized to the environment in which HIT software and tools are used, which could be a cause to the problem.
Although IT and medical experts' perceptions are similar in security and privacy, IT specialists often tend to assume that the issues are either software or hardware or user interface related. In contrast, clinicians tend to consider environmental, contextual, and process factors as contributors to patient safety events. The benefit of such a classification suggests that designers and developers who fix the errors consider the artifact's environment and the people using the artifact. A key realization is that such errors will continue to occur if health IT system developers do not fully grasp the importance of technology functioning in an environment of care delivery where the patient needs are paramount.
A careful review of the IT experts' classification of errors highlights the view that IT experts consider technology as the source of most errors and suggest solutions that are mostly technical. The IT experts highlighted the software systems and development as the top 2 sources of most errors. Similarly, the suggestions of potential fixes mostly revolve around the software. However, a common refrain that accompanied their answers was, "The doctor should double-check..." In contrast, clinicians tended to assign the source of errors within the people, process, and contextual (environmental) dimensions for the most part.
The difference in perspective could be explained by the fact that clinicians tend to deal with the system after implementation. In contrast, IT experts tend to look at the same problem from an IT development perspective. For example, for "Error 1," for which IT experts were asked how they would prevent a doctor from using the wrong chart when he had multiple charts open, the answer was always to restrict access to 1 open chart at a time. However, clinicians prefer having multiple windows open so that they can quickly consult with multiple patients in different rooms without having to close out and reopen a chart. For them, the issue is, "How easy is it for a physician to realize the mistake," and "Physicians should still be able to open multiple charts." The differing perspectives between designers and developers of the technology and its users can contribute to medical errors.
The development teams of clinical applications typically include clinicians who provide domain expertise. However, our study indicates that this may not be sufficient as IT experts do not fully grasp the clinical environment and how workloads and other patient-related variabilities impact the use of the software. Therefore, as a future investigation, we suggest that software companies immerse developers in clinical environments for a short period, so that the understanding of the environment is built into their psyche and translates into a more robust design.
HIT systems can be made less error prone if programmers and systems developers understand the health care organization's operating environment. Current systems do not have fail-safe mechanisms that could prevent some of the errors. For example, consider the documented error, "the nurse was supposed to enter a prescription...the nurse failed to change the default amount and dispensed too much medication"; from a software perspective, better checks and warnings can be developed. In this specific instance, a system challenge asking the nurse to review the dosing amount could have prevented the problem. From a process perspective, nurses could be trained to reexamine the dosage. Creating a poka-yoke (like a check-off box for dose amount) would force nurses to check the dosing before refilling the prescriptions. As the clinical experts and IT experts suggested slightly different predicates for the error, a solution that addresses the issue from both technical and from a process and workforce training perspective would provide multiple layers of defense against such failures. The different views expressed by IT and clinical experts can be used to create technical and process solutions so that there is a more robust defense against these types of errors.
Limitations and Future Studies
The results of this study should be interpreted cautiously, as there are several limitations to this study. The first shortcoming is related to the smaller number of participants interviewed in this study. Only 11 interviews comprising 5 medical providers and 6 HIT professionals were conducted. Therefore, this study should be considered a pilot study suggesting the differences in the mental models of the clinical and technical staff, which potentially leads to ineffective systems analysis and ultimately manifests as errors in practice. In addition, both IT and medical experts have, for the most part, acquired their education and expertise at affiliated institutions in the Northeast of the United States. Future studies should examine the hypothesis that medical experts are more likely to attribute medical errors to contextual factors, whereas IT experts on technical factors use a nationally representative sample.
Second, we shortlisted 12 unique errors that occurred in a hospital setting; the findings of this study cannot be generalized beyond that context. Furthermore, we extracted the errors used in this study from articles written in the English language. Future studies could examine errors that occurred in medical homes, patients' homes, or other nonhospital settings or include studies written in other languages.
Third, the study did not examine errors that were discovered by HIT users before the occurrence of a patient safety event. Future studies should examine near-miss errors to determine their potential root causes and fixes using the lens of sociotechnical theory.
Conclusions
This study classifies medical errors gathered from extant literature based on an expanded sociotechnical framework. Interviews from health care and IT experts reveal differing perspectives on why medical errors occur in clinical settings. Health care experts were more likely to attribute the source of an error to the implementation and use of an IT tool, whereas IT experts were likely to identify software design and functionality as causal factors of medical errors. From the results of this study, we offer several error-prevention prescriptions that can be tested in future research. First, IT experts should observe the functioning of HIT postimplementation and collect metrics related to its impact on (1) physician consultation time, (2) physician efficiency, (3) patient-physician relationship, (4) training needs, and (5) how the software fits into the workflow and culture of the organization. Software developers should be trained to be sensitive to the provider and patient needs because their lack of exposure to postproduction issues and usage contexts leads to the development of applications that do not cater to all user situations. Understanding these situations may lead to building software constraints and improved user training. Although software development teams have historically included clinicians as business analysts or subject matter experts to bridge the gap, development teams will be better served by more immersive training and exposure to clinical environments, leading to better software design and software implementation strategies. | 7,116.2 | 2020-06-29T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Ishige okamurae Extract and Its Constituent Ishophloroglucin A Attenuated In Vitro and In Vivo High Glucose-Induced Angiogenesis
Diabetes is associated with vascular complications, such as impaired wound healing and accelerated vascular growth. The different clinical manifestations, such as retinopathy and nephropathy, reveal the severity of enhanced vascular growth known as angiogenesis. This study was performed to evaluate the effects of an extract of Ishige okamurae (IO) and its constituent, Ishophloroglucin A (IPA) on high glucose-induced angiogenesis. A transgenic zebrafish (flk:EGFP) embryo model was used to evaluate vessel growth. The 3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT), gap closure, transwell, and Matrigel® assays were used to analyze the proliferation, migration, and capillary formation of EA.hy926 cells. Moreover, protein expression were determined using western blotting. IO extract and IPA suppressed vessel formation in the transgenic zebrafish (flk:EGFP) embryo. IPA attenuated cell proliferation, cell migration, and capillary-like structure formation in high glucose-treated human vascular endothelial cells. Further, IPA down regulated the expression of high glucose-induced vascular endothelial growth factor receptor 2 (VEGFR-2) and downstream signaling molecule cascade. Overall, the IO extract and IPA exhibited anti-angiogenic effects against high glucose-induced angiogenesis, suggesting their potential for use as therapeutic agents in diabetes-related angiogenesis.
Introduction
Diabetes is associated with secondary metabolic complications, such as insulin resistance and hyperinsulinemia, leading to abnormal angiogenesis [1]. Development of new microvessels from the existing vessels is known as angiogenesis [2]. Diabetes is characterized by inadequate angiogenesis in some organs and excessive angiogenesis in some others [3]. Apart from its role in several pathological conditions, angiogenesis also plays a crucial role in normal growth and development [4]. Excessive angiogenesis causes the degradation of vascular endothelial cells from the extracellular matrix, enhancement of cell proliferation, migration, and formation of extravascular networks [5]. Excessive angiogenesis is observed in diabetic retinopathy and nephropathy, resulting in the loss of vision and renal failure, respectively [6].
Marine algae are considered a prolific source of important bioactive compounds that aid in maintaining normal health and mitigating disease risks [7]. Among the marine algae, brown algae and its constituent phlorotannins are widely studied globally for various biological effects by several research groups [8]. Ishige okamurae (IO) is an edible brown alga found abundantly in the coastal areas of Jeju Island. It has been reported that IO exerts several biological activities, such as anti-α-glucosidase, free-radical scavenging, cytoprotective, anti-obesity, and anti-inflammatory activities [9][10][11]. Diphlorethohydroxycarmalol (DPHC), also a kind of phlorotannin isolated from IO extract has been studied in our previous work [11], and it exhibited anti-angiogenic effects against high glucose-induced angiogenesis. Ishophloroglucin A (IPA) is a novel phlorotannin isolated from IO extract, which has been studied for standardizing the anti-α-glucosidase activity of IO [12]. However, the effects of IO extract and IPA in the context of diabetic-related pathologies have not been examined. Therefore, in the present study, IO extract and IPA were studied for their anti-angiogenic effects on high glucose-induced vascular growth.
The zebrafish model is widely used in studies on angiogenesis due to its characteristics. Transgenic zebrafish lines are more suitable for imaging of the vessels with fluorescent labeling and the alterations can be clearly visualized [13]. In this study, we used transgenic zebrafish Tg (flk:EGFP), which has fluorescently-labeled complete vasculature and is widely used for screening anti-angiogenic compounds [14]. The vascular endothelium is a biologically-important layer present in the blood vessels, and its dysfunction results in various vascular pathologies [6]. The EA.hy926 cell line is frequently used in different angiogenesis studies. It is established by the fusion of primary human umbilical vein endothelial cells (HUVEC) and human lung carcinoma cell line A549 [15]. These cells are more appropriate than primary vascular cells because they are mortal and do not possess variations associated with the donor [16]. The cell line EA.hy926 has been experimented for vascular endothelial cell characteristics and is known to possess characters of macro and micro vessels [16]. In this study, the anti-angiogenic effects of IPA were evaluated in EA.hy926 cells. We also investigated cell proliferation, cell migration, and capillary-like structure formation in high glucose-treated EA.hy926 cells.
In diabetes-induced angiogenesis, vascular endothelial growth factor receptor 2 (VEGFR-2) is activated, and the downstream signaling events associated with its activation play a significant role in angiogenesis [17]. Therefore, the expression of VEGFR-2 and its downstream signaling molecules were evaluated to elucidate the mechanisms by which IPA affects high glucose-induced angiogenesis.
Effects of IO Extract on High Glucose-Treated Zebrafish Embryo
The toxicity of IO extract in transgenic zebrafish (flk:EGFP) embryo was investigated using different concentrations of IO extract (10, 30, and 100 µg/mL). As shown in Figure 1A, 10 µg/mL IO extract showed no significant toxicity in transgenic zebrafish (flk:EGFP) embryo. Furthermore, there was no toxicity following treatment with 10 µg/mL and 130 mM glucose together ( Figure 1B). Hence, this concentration was used in further experiments.
Transgenic zebrafish (flk:EGFP) embryos were treated with 130 mM glucose [11] to induce angiogenesis in the whole body, including hyaloid-retinal vessels ( Figure 1). Treatment with glucose (130 mM) yielded 162.7% of retinal vessel compared with that of the blank (no glucose). Treatment with 10 µg/mL IO extract significantly suppressed the retinal vessel diameter (99.5% similar to the blank) ( Figure 1C,D). Fluorescence intensity was measured for the quantitative analysis of vascular growth in the whole body. Treatment with glucose (130 mM) yielded 182.8% fluorescence intensity compared with that of the blank ( Figure 1E,F). Treatment with 10 µg/mL IO extract suppressed the high glucose-induced vascular growth in the whole body (107.1% intensity compared with that of the blank).
Effects of IPA on High Glucose-Treated Zebrafish Embryo
Initially, the toxicity of IPA on transgenic zebrafish (flk:EGFP) embryo was investigated with different concentrations of IPA (0.3, 1.5, 3, and 5 µM). The results showed (Figure 2A) that IPA at concentrations of up to 3 µM had no significant toxic effects. Hence, we selected IPA concentrations of 0.015, 0.05, 0.15, and 0.5 µM to evaluate the anti-angiogenic effects in transgenic zebrafish (flk:EGFP) embryo.
Glucose treatment yielded 170.4% retinal vessel. When treated with IPA at concentrations of 0.015, 0.05, 0.15, and 0.5 µM, the retinal vessel diameters were decreased to 144.49%, 117.87%, 109.14%, and 104.36%, respectively, compared with that of the blank ( Figure 2B,C). The fluorescence intensity of glucose treatment was 157.8%. Following treatment with IPA at concentrations of 0.15 and 0.5 µM, the fluorescence intensity significantly decreased to 124.43% and 120.9%, validating the anti-angiogenesis effect of 10 µg/mL IO extract with 0.0907 µM IPA ( Figure 2D,E). After observing vascular growth in the hyaloid-retina and the whole body, it could be inferred that treatment with IPA may lead to anti-angiogenic effects against high glucose-induced angiogenesis. Figure 2B,C). The fluorescence intensity of glucose treatment was 157.8%. Following treatment with IPA at concentrations of 0.15 and 0.5 µM, the fluorescence intensity significantly decreased to 124.43% and 120.9%, validating the anti-angiogenesis effect of 10 µg/mL IO extract with 0.0907 µM IPA ( Figure 2D,E). After observing vascular growth in the hyaloid-retina and the whole body, it could be inferred that treatment with IPA may lead to anti-angiogenic effects against high glucose-induced angiogenesis.
Effects of IPA on High Glucose-Induced Cell Proliferation, Migration, and Capillary-Like Structure Formation
Prior to assessing the anti-angiogenic effects of IPA, the 3-(4,5-Dimethylthiazol-2-yl)-2,5 -diphenyltetrazolium bromide (MTT) assay was performed to evaluate its cytotoxicity in EA.hy926 cells. The cell viability was 92.94%, 91.31%, 90.24%, 86.78%, and 78.48% when treated with IPA at concentrations of 0.05, 0.15, 0.5, 1.5, and 2.5 µM, respectively ( Figure 3A). The non-toxic IPA concentrations of 0.05, 0.15, 0.5, and 1.5 µM were used in later experiments, as >80% cell viability was selected for use in the cellular experiments [18]. The anti-angiogenesis effect of IPA was evaluated with regard to cell proliferation, cell migration, and capillary formation. The cell viability was used as an indicator of cell proliferation, while in our previous study [11], we used Muse™ Cell Analyzer to confirm the significant cell proliferation at 30 mM glucose treatment. As shown in Figure 3B, significant cell proliferation (124.93%) was observed after treatment with 30 mM glucose. Once the cells were treated together with 30 mM glucose and ascending concentrations of IPA, cell proliferation was decreased significantly in a concentration-dependent manner. The results were 117.12%, 102.95%, 97.80%, and 92.21% when treated with IPA at concentrations of 0.05, 0.15, 0.5, and 1.5 µM, respectively. These results suggest that IPA exerts anti-angiogenic effects by inhibiting high glucose-induced vascular cell proliferation.
The scratch-wound cell migration and transwell migration assays were used to determine the effects of IPA on high glucose-induced cell migration. In the scratch-wound cell migration assay, the cell migration ability was compared by calculating the gap closure percentage ( Figure 4A,B). The higher gap closure percentage indicated higher cell migration ability and vice versa. The highest cell migration recorded was 22.91% after treatment with 30 mM glucose. It was significantly decreased, by 20.42%, 17.76%, and 16.8%, following treatment with IPA at concentrations of 0.15, 0.5, and 1.5 µM, respectively.
A similar result was obtained in the transwell migration assay. The percentage of cell migration through the transwell was higher when the cells were treated with 30 mM glucose than that under normal glucose condition. With IPA treatment, high glucose-induced cell migration was inhibited significantly in a dose-dependent manner ( Figure 4C,D). The migrated cell percentage was 113.33%, 110.54%, and 99.66% when treated with IPA at concentrations of 0.15, 0.5, and 1.5 µM, respectively. These observations indicated that IPA effectively suppressed high glucose-induced cell migration.
Vascular endothelial cells cultured in Matrigel ® matrix can differentiate into capillary-like structures [19]. This characteristic feature was used to evaluate the IPA effects on high glucose-induced capillary-like structure formation ( Figure 5A,B). The angiogenic score was determined for quantitative evaluation of capillary formation. An increased angiogenic score is an indicator of higher capillary formation. According to the results, a higher angiogenic score was reported as 7.01 × 10 5 in the cells treated with 30 mM glucose. After IPA treatment, the angiogenic score was significantly decreased. The angiogenic score was 5.47 × 10 5 , 4.97 × 10 5 , and 2.47 × 10 5 when treated with IPA at concentrations of 0.15, 0.5, and 1.5 µM, respectively. These data suggest that IPA exerts anti-angiogenic effects by suppressing capillary formation.
Effects of IPA on VEGFR-2 and the Downstream Signaling Cascade
The expression of pVEGFR2 and its downstream signaling molecules were detected by western blotting (Figure 6). pVEGFR-2 expression was significantly increased in the high glucose-treated EA.hy926 cells compared with that of the blank. As shown in Figure 6B, high glucose-induced pVEGFR-2 expression was decreased significantly in the cells treated with IPA. In addition, high glucose treatment showed higher protein expression in the downstream signaling molecules extracellular signal-regulated kinase (ERK), protein kinase B (AKT), c-Jun N-terminal kinase (JNK), and endothelial nitric oxide synthase (eNOS). With IPA treatment, these parameters were significantly down regulated.
Effects of IPA on VEGFR-2 and the Downstream Signaling Cascade
The expression of pVEGFR2 and its downstream signaling molecules were detected by western blotting (Figure 6). pVEGFR-2 expression was significantly increased in the high glucose-treated EA.hy926 cells compared with that of the blank. As shown in Figure 6B, high glucose-induced pVEGFR-2 expression was decreased significantly in the cells treated with IPA. In addition, high glucose treatment showed higher protein expression in the downstream signaling molecules extracellular signal-regulated kinase (ERK), protein kinase B (AKT), c-Jun N-terminal kinase (JNK), and endothelial nitric oxide synthase (eNOS). With IPA treatment, these parameters were significantly down regulated. capillary formation were normalized to C (control (30 mM glucose + 0 µM IPA)). Scale bar (A) 1000 µm. *p ˂ 0.05, **p ˂ 0.01, ***p ˂ 0.001, ### p ˂ 0.001.
Discussion
Studies have demonstrated that seaweeds are rich in bioactive components with medicinal values [20]. IO has long been used as an edible seaweed in Korea. Previous studies have demonstrated the potential of the ethanolic extract of IO to treat chronic inflammation [21]. A recent study has shown that the ethanolic extract of IO possesses anti-diabetic activities by inhibiting αglucosidase [12]. To the best of our knowledge, there has been no study demonstrating the antiangiogenic effects of the ethanolic extract of IO. Here, for the first time, we demonstrated the antiangiogenic effects of the ethanolic extract of IO on high glucose-induced-angiogenesis.
Discussion
Studies have demonstrated that seaweeds are rich in bioactive components with medicinal values [20]. IO has long been used as an edible seaweed in Korea. Previous studies have demonstrated the potential of the ethanolic extract of IO to treat chronic inflammation [21]. A recent study has shown that the ethanolic extract of IO possesses anti-diabetic activities by inhibiting α-glucosidase [12]. To the best of our knowledge, there has been no study demonstrating the anti-angiogenic effects of the ethanolic extract of IO. Here, for the first time, we demonstrated the anti-angiogenic effects of the ethanolic extract of IO on high glucose-induced-angiogenesis.
IPA is a phlorotannin isolated from the IO extract, and it is known for its α-glucosidase inhibitory activity and constitutes 1.81% ± 0.362 of IO [12] (Supplementary Figure S1). Treatment with 10 µg/mL of IO extract showed anti-angiogenic effects against high glucose-induced vascular growth. Based on this observation, we hypothesized that IPA from IO extract could be a key molecule involved in the anti-angiogenic effects.
IPA contains hydroxyl groups bonded with its benzene structure (Supplementary Figure S2). The comparatively higher number of hydroxyl groups may be advantageous to its biological activities. It has been reported that phlorotannins, which contain > 10 hydroxyl groups, show relatively high anti-oxidant activities [22]. Analysis of our data revealed that IPA exerted anti-angiogenic effects in the concentration range of 0.05−0.15 µM. This represents approximately 0.1−0.3 µg/mL of IPA (molecular weight of IPA, 1984 g/mol). Therefore, the anti-angiogenic effects of IO extract could be attributed to the IPA present in the IO extract. Further studies were carried out with IPA in vascular endothelial cells EA.hy926 to evaluate the cellular mechanisms against high glucose-induced angiogenesis.
Angiogenesis is a step-by-step process involving cell proliferation, migration, and capillary formation [23]. In angiogenesis, cell migration is an essential event where the cells move towards a controlled direction before capillary morphogenesis [24] and, in high glucose treatment, cell migration is also increased [25]. IPA significantly reduced the high glucose-induced cell migration in the scratch wound migration and transwell migration assays. This was further validated via inhibition of MMP-2 and -9. The MMPs are a kind of proteases that are critically important in degrading the extracellular matrix (ECM) to facilitate endothelial cell migration in the angiogenesis process [26]. Among various MMPs, MMP-2 and -9 more efficiently degrade basement membrane components [27]. Furthermore, capillary formation was evaluated because, in the process of developing drugs targeting angiogenesis, the 3D capillary formation is an important aspect [28]. Overall, our data showed that IPA was efficacious in inhibiting high glucose-induced endothelial cell proliferation, migration, and capillary formation.
VEGFR-2 is the principal receptor involved in endothelial cell development and has attracted considerable attention in the anti-angiogenic therapeutic intervention [17]. Bevacizumab is an example of a drug that targets VEGFR inhibition, although it causes several adverse effects, such as hypertension, fatigue, rash, and myalgia, due to lack of target specificity [29]. Therefore, today, interventions by anti-angiogenic drugs obtained from natural compounds are preferred because of the low adverse effects profile. Qi et al. [30] reported the anti-angiogenic effects of bromophenol bis(2,3-dibromo-4,5-dihydroxybenzyl) ether from a marine source in vascular endothelial cells by suppressing the VEGFR signaling pathway. Further, Lu and Basu [31] studied chebulagic acid, a polyphenol of myrobalan fruits that suppressed the VEGFR-2 phosphorylation and inhibited the angiogenesis in vascular endothelial cells.
According to previous supporting evidence [32], downstream signaling mediators of VEGFR-2, including ERK, AKT, JNK, and eNOS, are involved in the regulation of endothelial cell proliferation and survival. ERK and JNK are actively involved in endothelial cell proliferation [33], whereas AKT plays an important role in endothelial cell survival [34]. eNOS is involved in the production of large amounts of nitric oxide (NO) in the endothelial cells and plays a critical role in all the processes of angiogenesis, including matrix breakup, endothelial cell migration, proliferation, network structure organization, and lumen formation [35]. Our results demonstrated that IPA, isolated from a marine alga, exerts its anti-angiogenic effects by interfering with the VEGFR-2 signaling pathway.
Preparation of IO Extract and IPA
Ishige okamurae was collected in April 2016 in Seongsan, Jeju Island, South Korea. IO extract was prepared and IPA isolated using a previously described method [12]. Briefly, 50% ethanolic extract of IO was fractionated using centrifugal partition chromatography (CPC 240, Tokyo, Japan) and further purified using semipreparative HPLC column (YMC-Pack ODS-A; 10 mm × 250 mm, 5µm) to obtain IPA. The identity of IPA (99% of purity) was verified using MS fragmentation of m/z 1986.26 using ultrahigh resolution Q-TOF LC-MS/MS coupled with an electrospray ionization (ESI) resource (maXis-HD; Bruker Daltonics, Breman, Germany) at the Korea Basic Science Institute (KBSI) in Ochang, South Korea. According to a previously validated method [12], the IO extract used in this study had 1.81% ± 0.362 IPA.
Treatment of Zebrafish Transgenic (flk:EGFP) Embryos with IO Extract and IPA
Before assessing the anti-angiogenesis effect, the survival rate following treatment with IO extract and IPA was determined in zebrafish transgenic (flk:EGFP) embryos. Five embryos were placed in each well of 24-well plates and maintained in embryonic water containing different concentrations of IO extract (10, 30, and 100 µg/mL) or IPA (0.3, 1.5, 3 and 5 µM), and the survival rate was assessed for 168 h post fertilization (hpf). The survival rate of zebrafish transgenic (flk:EGFP) embryos after treatment with 10 µg/mL IO extract and 130 mM glucose was determined.
Zebrafish Transgenic (flk:EGFP) Embryos and Angiogenesis Assay
Zebrafish embryos with high glucose-induced angiogenesis was developed as described previously [36] by maintaining zebrafish transgenic (flk:EGFP) embryos (3 days post fertilization (dpf)) in embryonic water containing 130 mM glucose for 3 days. High glucose-treated embryos were treated with 10 µg/mL of IO extract or different concentrations of IPA (0.05, 0.15, 0.5, and 1.5 µM), and vessel growth was examined in hyaloid retinal vessels and in the whole body. After 3 h of treatment, images were captured using a fluorescence microscope (LIONHEART FX automated live cell imager). Vessel formation in retinal vessels was evaluated by measuring the retinal vessel diameter of the images (10× magnification) at five different places using Gen5 3.04 software, and then it was averaged. Vessel formation in the whole body was assessed by measuring the fluorescence intensities of the images (4× magnification) using Image J software, followed by calculation of the corrected total object fluorescence (CTOF).
Cell Culture and MTT Assay
The human vascular endothelial cell line EA.hy926 was cultured in DMEM containing 10% FBS and 1% penicillin-streptomycin mixture. The cells were maintained in an atmosphere of 5% CO 2 at 37 • C, and plates were split 1:3 when they reached confluence.
The cytotoxicity of IPA in EA.hy926 was assayed using the MTT assay. Briefly, 1 × 10 5 of EA.hy926 cells were seeded in each well of 96-well plates. After incubation for 24 h, the cells were treated with different concentrations of IPA (0, 0.05, 0.15, 0.5, 1.5, and 2.5 µM), with three replicates for each concentration. After 24 h, the medium was replaced with 50 µL of MTT stock solution (2 mg/mL in PBS), followed by incubation for 3 h at 37 • C. The insoluble formazan product was dissolved in 100 µL of DMSO, and the absorbance was measured at 540 nm using a microplate reader (Synergy HT, BioTek Instruments, Winooski, VT, USA). Cell viability is expressed as a percent of the blank (no IPA treatment).
The effect of IPA on high glucose-induced cell proliferation was determined by measuring cell viability using the MTT assay. The cells were treated with 30 mM of glucose to induce angiogenesis [11]. The cells were treated simultaneously with glucose and different concentrations of IPA (0, 0.05, 0.15, 0.5, and 1.5 µM). After 24 h of treatment, cell viability was assessed using the MTT assay. Cell viability was expressed as a percent of the blank (0 mM glucose + 0 µM IPA). The effect of 30 mM glucose for cell viability was compared with that of the blank, and the effect of IPA on high glucose-induced cells was compared with that of the control (30 mM glucose + 0 µM IPA).
In further experiments, the blank and control treatments were defined as follows: blank: 0 mM glucose + 0 µM IPA and control: 30 mM glucose + 0 µM IPA.
Scratch-Wound Cell Migration Assay
Cell migration was evaluated according to a previously described method with slight modifications [30]. EA.hy926 cells were seeded in 96-well plates, and the cells were grown to 80% confluence. The cell monolayer was scraped at the middle of the well using a sterile 10-µL pipette tip, and the cells were washed twice with PBS. The cells were treated together with glucose and IPA (0.15, 0.5, and 1.5 µM). After sample treatment, the cells were photographed (LIONHEART FX automated live cell imager), and the initial gap length (0 h) was measured (Gen5 3.04). After 12 h of incubation, the final gap length was measured. The gap width was measured at five different places, and then it was averaged. To determine the effect for cell migration, the gap closure percentage was calculated as follows [11]: Gap closure % = Initial gap length − f inal gap length initial gap lenth × 100 (2)
Transwell Migration Assay
Cell migration was evaluated using 8-µM pore-sized transwell filter chambers. Briefly, 100 µL of EA.hy926 cells was added to the upper chamber at a density of 3 × 10 4 cells per well. The cells were subjected to different treatments with glucose and IPA (0.15, 0.5, and 1.5 µM) in serum-free media. Then, 500 µL of medium with 20% FBS was added to the lower chamber followed by incubation at 37 • C for 24 h. The cells on the upper side of the filter membrane were removed using cotton swabs. The cells on the lower side of the membrane were fixed by soaking in 4% paraformaldehyde for 30 min and stained with hematoxylin. Cell migration was determined by counting the stained cells at five different microscopic fields, and migrated cell percentage was calculated. EA.hy926 cells were seeded in six-well plates at a density of 1 × 10 5 cells/well. The cells were treated together with glucose and IPA (0.15, 0.5, and 1.5 µM). The culture media were collected after 48 h of incubation, and MMP (MMP-2 and -9) expression levels were evaluated using commercial ELISA kits, according to the manufacturer's instructions.
Tube Formation Assay
EA.hy926 cells were seeded on top of Matrigel ® matrix to determine the effect of IPA in high glucose-induced capillary formation according to a previously described method [37]. Briefly, 96-well plates were coated with 75 µL of Matrigel ® per well and polymerized at 37 • C for 30 min. The trypsinized EA.hy926 cells were divided into approximately equal number of cells (1 × 10 5 ), and the cell pellets were subjected to different treatments with glucose and IPA. After 6 h of incubation, cultures were photographed (4×) and analyzed using the plugin "Angiogenesis Analyzer" of image J software. Angiogenic score was calculated as follows [38]: Angiogenic score = Number o f branches × total branch length (3)
Western Blot Analysis
Protein extraction was performed separately for cytosolic proteins and membrane proteins, using a protein extraction kit (MEM-PER™ Plus Kit; Thermo Scientific, Waltham, MA, USA). The extracted proteins were quantified (Pierce™ BCA Protein Assay Kit; Thermo Scientific, Waltham, MA, USA), and equal amount of proteins (30 µg) were separated using 7.5% or 12% SDS-PAGE. The resolved proteins were transferred onto nitrocellulose membranes (GE Healthcare Life Science, USA) and blocked for 3 h with nonfat dry milk at room temperature. The membranes were then incubated overnight at 4 • C with the following primary antibodies: phosphorylated and/or total VEGFR-2, extracellular signal-regulated kinase (ERK), protein kinase B (AKT), c-Jun N-terminal kinase (JNK), endothelial nitric oxide synthase (eNOS), and glyceraldehyde 3-phosphate dehydrogenase (GAPDH). Following incubation with the secondary antibodies for 2 h, protein bands were detected using a chemiluminescence reagent (Maximum sensitivity substrate; Thermo Scientific, Waltham, MA, USA), and images were captured using Fusion Solo apparatus (Vilber Lourmat, Collégien, France). The relative levels of protein expression were measured using image J software and normalized to expression of the respective total form or GAPDH.
Conclusions
The findings of the present study demonstrated the anti-angiogenic effects of IO from a marine source. IO extract attenuated high glucose-induced vascular growth in transgenic zebrafish (flk:EGFP). IPA isolated from IO extract exerted anti-angiogenic effects against high glucose-induced angiogenesis. Furthermore, the anti-angiogenic mechanism of IPA in vascular endothelial cells showed that IPA suppressed high glucose-induced cell proliferation, cell migration, and capillary formation, which are known to be key steps involved in angiogenesis. IPA treatment suppressed VEGFR-2 receptor expression and downstream signaling cascade in high glucose-induced vascular endothelial cells. Thus, IO extract and IPA could be developed as potential therapeutic candidates for diabetes-related angiogenesis. | 5,838 | 2019-11-01T00:00:00.000 | [
"Biology"
] |
Rehabilitation of a green sea turtle (Chelonia mydas) after collision with motorboat in the archipelago of Fernando de Noronha, Brazil
1 Projeto Cetáceos da Costa Branca, Universidade do Estado do Rio Grande do Norte, Mossoró, RN, Brazil. 2 Centro de Estudos e Monitoramento Ambiental, Areia Branca, RN, Brazil. 3 Programa de Pós-Graduação em Ciência Animal, Universidade Federal Rural do Semi-Árido, Mossoró, RN, Brazil. 4 Programa de Pós-Graduação em Ciências Naturais, Universidade do Estado do Rio Grande do Norte, Mossoró, RN, Brazil. 5 Programa de Doutorado em Desenvolvimento e Meio Ambiente, Universidade Federal do Rio Grande do Norte, Natal, RN, Brazil. 6 Programa de Pós-Graduação em Biologia Estrutural e Funcional, Departamento de Morfologia, Centro de Biociências, Universidade Federal do Rio Grande do Norte, Natal, RN, Brazil. 7 ICMBio Instituto Chico Mendes de Conservação da Biodiversidade, Fernando de Noronha, PE, Brazil.
Of the seven species found worldwide, green turtles (Chelonia mydas) account for most of the reported deaths caused by boat collisions [11], which may be due to several factors, especially the coastal habit of the species [12].
Traumatic injuries may not cause death immediately, but lead to disorientation and weakness, and may serve as entry points for pathogens that then act as immunosuppressants and promote the development of opportunistic infections [13], [14].
Most scientific publications have focused on collisions between boats and North Atlantic whales [15], [16], and there is a scarcity of information on collisions of boats with smaller species, and with all animals in the South Atlantic [17]. Transverse fractures of the carapace, one in the anterior region on the second vertebral shield and corresponding secondary structures including the lateral and marginal shield on the right side ( Figure 1A), and the second, posterior, fracture on the fourth vertebral shield with trauma to the lateral shields on both left and right sides ( Figure 1B After clinical stabilization, the turtle was referred for surgical reconstruction of the carapace. The turtle was positioned in dorsal recumbency, tramadol hydrochloride was given at 5 mg/kg IM as a preanesthetic medication, and anesthesia was induced by inhalation of 4% isoflurane via a vaporizer with 100% oxygen by mask. The trachea was intubated with a cuffed endotracheal tube. The time from induction to extubation was 110 minutes. After intubation, anesthesia was maintained with 2% sevoflurane via a vaporizer. The turtle moved when we started the operation, so anesthesia was changed to isoflurane at a concentration of 3.5-4.0% during the surgical procedure. The endtidal CO2 partial pressure immediately after intubation was 63 mmHg and was maintained at 22-46 mmHg during surgery using artificial ventilation as needed. All unstable fracture fragments of the lesions were removed, 15 mm steel cannulated conical screws were fixed at the edges of the fracture line, and 0.8 mm steel cerclage wire was used to approximate the edges to support regeneration of discontinuous structures ( Figure 3A).
Discussion and Conclusion
Turtle injuries resulting from trauma caused by collisions with boats usually include multiple carapace and plastron fractures, which are difficult to restore [18], [19], [13]. The type and severity of the injuries may depend on the part of the boat that collides with the animal [19], [13]. In particular, propellers produce multiple wounds that are linear, parallel, penetrating, and regularly spaced at a given angle [13]. These findings were also observed in the present case report. Several techniques have been described for carapace repair, including epoxy resin, screws, wires, and metal plates [20], [21]. However, each technique should be evaluated for the specificities of trauma and the animal's physical and clinical conditions [18]. In addition to fracture, soft tissue injuries with secretion leakage, immediate repair, or use of epoxy-type materials for repair can prevent drainage and cleaning of the underlying wound. This increases the risk of systemic infection, which supports the need for caution and attention when selecting the method to be used [22].
The clinical stabilization and presurgical treatment recommended in this report proved to be effective in controlling pain and infection. The topical use of a supersaturated sugar solution, in addition to keeping the injured tissue viable, contributed to tissue remodeling and wound healing [23], [24], [25], [26].
The healing time reported for carapace damage in turtles is quite variable, as some reports indicate 3 to 9 months [27] and others up to 30 months [28]. In this case, complete wound healing of the affected regions was observed after 115 days.
The good body condition and hydration of the animal at the time of stranding were fundamental to the period in which the turtle was fasted and had to be kept in an area out of the water, with only parenteral nutrition. During this phase, parenteral nutrition provides the minimum necessary conditions until surgical reconstruction of the carapace is performed. | 1,107.4 | 2021-01-01T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
Time Series Risk Prediction Based on LSTM and a Variant DTW Algorithm: Application of Bed Inventory Overturn Prevention in a Pant-Leg CFB Boiler
The pant-leg design is typical for higher capacity circulating fluidized bed (CFB) boilers because it allows for better secondary air penetration, maintaining good air-coal mixing and efficient combustion. However, the special risk, nominated as bed inventory overturn, remains a big challenge and it hinders the application of pant-leg CFB boilers. For a time series risk, it is critical to do the bed inventory overturn prevention to leave enough time for the adjustment. This paper proposed a new framework combing long short-term memory (LSTM) and dynamic time warping (DTW) methods to do the risk prediction. Pattern matching of data difference discrimination is employed for DTW algorithm, instead of the traditional Euclidean metric. The pattern matching has the merits in reduction of calculation and improvement of the adaptability to variables with different dimensions. After variable processing of the time series data by the variant DTW algorithm, the bed pressure drop prediction model is established based on the LSTM structure in this framework. Compared with some traditional prediction method, the framework in this paper has achieved superior results in the application of bed inventory overturn prevention.
I. INTRODUCTION
The circulating fluidized bed (CFB) boiler is believed to be an improvement over the conventional pulverized coal furnace in some respects [1]. Operation of industrial CFB boilers has confirmed some advantages like fuel flexibility, low NOx emissions, high sulphur capture efficiency and so on [2]. As for the CFB furnace, the heat import is proportional to bed cross-sectional area while the heat absorption is proportional to the perimeter of the furnace [3]. With the growing required capacity, both the furnace volume and heat transfer surface increase accompanied by the contradiction that the former increases faster, naturally in need of higher furnace height to control the temperature [4]. However, the furnace height of a 300MW CFB boiler is mostly limited to about 50m due to commercial consideration.
The associate editor coordinating the review of this manuscript and approving it for publication was Wei Liu. To have extra heat transfer surface, the external heat exchanger (EHE) could be a good alternative option to a scaled-up CFB boiler [5], and the corresponding structure of the furnace is called the pant-leg [6]. A pant-leg bottom CFB boiler with EHE has several advantages [7]: (1) The reheat steam temperature can be adjusted through regulating the control valves of EHE rather than the spray water, improving the efficiency of power unit.
(2) Bed temperature can be controlled flexibly and reliably.
(3) EHE increases the heat storage of the unit to a certain extent, which could enhance the fuel flexibility.
The industrial operations prove that the pant-leg structure can improve the mixing of air and solid in the furnace and reduce the carbon content in fly ash greatly. However, the two independent distributors at the bottom of the pant-leg boiler will always cause bed inventory imbalance. When the imbalance aggravates, one special phenomenon, nominated as bed inventory overturn, will occur and cause the shutdown of boiler without timely adjustment [8]. Because of the considerable negative influence on CFB boiler operation security, more and more attention is paid to the risk prediction and prevention of bed inventory overturn. To understand the mechanisms underlying the bed inventory overturn, Wang [9] looked into the bed material imbalance on both sides related to the bed inventory overturn. Liu et al. [10] proposed the reasons for bed inventory overturn based on mechanism operation analysis, and the imbalances of bed pressure are specially analyzed. Basu et al. [11] had showed the details about the pressure balance behavior affected by the loop-seal operation. The effects of particle diameter, particle density and gas distributor design on the hydrodynamics of CFB were studied by Qi et al. [12]. Li experimentally studied the lateral transfer of solid particles in a small-scale, cold CFB riser with pant-leg structure [13]. A compounded mathematical model of pressure drop was established, and it was concluded that the main reason for lateral migration of solid particles is the lateral pressure gradient of the gas phase in the CFB and once the pressure balance is broken, it is difficult to restore the balance without timely adjustment of the primary air fan. Therefore, monitoring the risk trend of bed inventory overturn is vital, which can be considered as a typical accident with time series characteristics. A prediction method of bed pressure fluctuation in CFB riser was proposed by Zhao et al. [14], and this method reconstructed the phase space trajectory evolution by establishing discrete dynamic mapping equation. Afsin Gungor [15] established a CFB axial pressure distribution prediction model based on particle method (PBA). Some of mechanism model methods above have been well applied in laboratory situations, providing clear analyzation of both reasons and modes for the bed inventory overturn. However, due to the high complexity and low adaptability of the mechanism modeling, the existed methods are difficult to be applied in industrial situation to do the bed inventory overturn prevention. Therefore, mechanic-based prediction model, rather than a simple risk predictor, is employed for condition design application in most investigation. With the developments of machine learning and data management system equipped in modern industrial process, more and more data driven methods are applied in performance monitoring and prediction.
Using the least squares support vector machine (LSSVM) to construct the model, a dynamic model was developed to predict the bed temperature of a CFB boiler [16]. Li proposed a method based on wavelet decomposition (WD) and a second-order gray neural network combined with an augmented Dickey-Fuller (ADF) test is proposed to improve the accuracy of load forecasting [17]. A novel hybrid ensemble deep learning (HEDL) approach was presented for deterministic and probabilistic low-voltage load forecasting. And the deep belief network (DBN) is applied to low-voltage load point prediction with the strong ability of approximating nonlinear mapping [18]. A forecasting study of hydroelectricity consumption in Pakistan was presented based on Auto-Regressive Integrated Moving-Average (ARIMA) modeling, and the research was useful in better planning and management for future [18]. Que proposed a data-driven integrated framework for health prognostics for steam turbines, which is based on extreme gradient boosting (XGBoost) and dynamic time warping (DTW). And the proposed framework has achieved good results in practical application [19].
Since deep learning specialized in abstracting complex relationship among variables with multiple layers, the risk prediction model, with time series characteristics, could be established by analysing temporal relevance of operating industrial data. Recurrent neural network (RNN) is one of the effective algorithms that can accommodate relevance between consecutive time steps [20]. Long short-term memory (LSTM) units are suggested as a possible solution to the vanishing gradient problem noticed in the simple RNN [21]. The sequence to sequence approach based on LSTM has been previously employed in speech recognition, speech emotion classification machine translation applications [22], short-term weather forecasting [23], medium-to-long term electricity consumption for commercial and residential buildings [24]. For the bed temperature monitoring in a 300 MW CFB unit, Li [25] presented a 2D-interval prediction model based on LSTM. The results revealed that the model structure could effectively described the characteristic of bed temperature of CFB unit.
Based on the characteristics of bed inventory overturn, this paper, combining LSTM and DTW algorithms, proposed a new monitoring framework to predict the risk and prevent the overturn. DTW would be employed to extract the temporal dynamic characteristics, while LSTM has unique superiority in time series analysis, thus leading to an effective framework to prevent the overturn.
The rest of paper is organized as follow, Chapter 2 describes of investigated object, and Chapter 3 introduces the method used in risk prediction. The prediction framework will be discussed in Chapter 4. After that, the verification results are shown in Chapter 5, and Chapter 6 draws the final conclusion.
A. THE GENERAL LAYOUT OF THE INVETIGATED BOILER
This paper mainly investigates a 300MW coal-fried CFB boiler, which belongs to the 1# unit of JoinLion power plant in China. It is a subcritical reheat boiler that respectively represent the typical pant-leg CFB furnace currently.
The material balance in the main loop of the pant-leg CFB furnace can be shown in Fig. 1. The coal and limestone are recycled many times to increase the fuel combustion efficiency as well as improve the utilization rate of limestone. To separate the heavier particles from the flue gas and return to the furnace for recirculation, two cyclones are arranged at each side of the furnace that with a pant-leg type in lower part of boiler. Circulating solids captured by cyclones enter loop seal and external heat exchanger (EHE) installed at the end of each cyclone standpipe respectively. A cone valve is set at the inlet of each EHE, then the portion of low and high temperature solids return to the furnace are controlled by adjusting cone valves opening [28].
Once the bed inventory occurs, the operator has to adjust the air valve to improve the air flow rate in the leg with defluidization but turn down the flow rate in the leg with little bed inventory. If adjustments are effective and timely, the imbalance would be reduced and the bed material would restore to the origin. Then the risk of bed inventory overturn could be avoided. Due to the hysteresis and complicated dynamic of the bed inventory process, it is quite difficult to do the risk prediction. The untimely or inaccurate adjustments usually cause the shuttle of bed inventory between the two legs. The risk of bed inventory overturn is a typical accident with time series characteristics. It can make a great contribution to the effective and timely adjustment strategy with the precise risk prediction of the bed inventory imbalance between the two legs in a pant-leg CFB boiler.
B. THE BED INVENTORY OVERTURN IN CFB
The pant-leg structure can strengthen the penetration of secondary air in large CFB boilers to improve the fluidization and combustion efficiency. However, the unique structure of two independent distributors at the bottom will always cause bed inventory imbalance between the two legs. If the imbalance aggravates further, the bed inventory overturn would happen. As shown in Fig. 2, the bed pressure drop in the right leg increases while the air flow rate decreases, until the bed materials in the left leg is blown out and transferred into the right leg. The material quantity in the dense phase area of the legs on both sides is different due to the deviation between two air distribution plates. As a result, the bed pressure drop in the right leg gets further increase until the bed inventory is too vast to be fluidized by the primary air.
III. METHODS OF THE ALGORITHM A. LONG SHORT-TERM MEMORY
Based on the recursive structure, RNN algorithm has the ability to memorize results of different layers, thus solving the time series problems successfully and remarkably. However, there is a demerit for RNN, known as long-term dependence, which means if dealing with the key points in message needs to employ former information with long distance, RNN tends to have errors and even break down. The LSTM algorithm has avoided this disadvantage via improving the structure, and it is displayed as Fig. 3.
Compared with the single layer in RNN structure, the process for LSTM is more complicated. As shown in Fig. 3, there are four neural networks interacting with each other in a particular way within one single LSTM cell. Represented asthe horizontal line through the top of the diagram, the state of the cell in LSTM is transferred as a conveyer belt, running through the whole chain structure with only a few minor linear interactions. The output of the specific layer is to convey the state of the cell into C t from C t−1 .
Three control gates are employed to command the state of the cell, including forget gate, input gate and output gate. In the forget gate processing, input data x t and state data h t−1 would be concentrated and calculated with the equation as follows.
where σ is the sigmoid function, and • is the Hardamand product operation.
The result f t would be put into the state C t−1 . If f t is 0, the forget gate would delete all the information. On the contrary, the information would be saved thoroughly. And the expression of the sigmoid layer σ is To add new information for the whole process, input gate needs deciding whether to save or delete information with the equation.
In the meanwhile, the updating factor of cell status, the vector C t , would be generated with the layer operation tanh.
The forget gate and the input gate provide essential elements to convert the cell status intoC t .
The output information would be assigned by the output gate, which would be based on the state of the cell as well as going through some filtration. Once the sigmoid layer has determined the output content of the cell, the tanh layer would be employed to transfer the state of the cell between -1 and 1, following with calculating the output of the sigmoid layer.
B. DYNAMIC TIME WARPING DTW is a similarity measurement method which is able to match and map the time series morphology by bending the time axis. It can measure the time series data with the same length, and has the ability to measure the similarity among time series with various length. The best merit is represented in its insensitivity to the abnormal and abrupt point in time series, for which the asynchronous similarity comparison is implemented well. Take two time series Q and U for analysis, where Q = {q 1 , q 2 , . . ., q n }, and U = {u 1 , u 2 , . . ., u n }. By calculating the Euclidean distance among points within the two sequences, the distance matrix D n×m for these data points can be implemented as (8), and d(i, j) is the Euclidean distance between q i and u j . The matrix D n×m represents distances among data of various time points in the two sequences.
The specific similarity between two sequences can be calculated by DTW, and DTW algorithm finds the shortest distance path in the distance matrix. The similarity between two sequences is characterized by the sum of distances on the path. DTW is dedicated to find a continuous path H = {h 1 , h 2 ,. . . , h s } such that the sum of all elements in the path is minimal while three necessary requirements are satisfied, including boundary limits, continuity and monotonicity.
Optimal path (H ) is searched by implementing dynamic programming, constructing the cost matrix D c (also known as the cumulative matrix) based on the matrix D n×m to record the shortest path from the beginning point to the end point, which would be obtained mainly relying on the following steps: 1) The element in the first row, the first column is the element in the first row, the first column of D n×m . 2) The values of elements sitting in other locations (D c (i, j)) would be calculated step by step, by the formula: The element in the final row, the final column of D c illustrates the distance between the two sequences Q and U, shown the output of DTW meanwhile.
IV. DEVELOPMENT OF THE RISK PREDICTION FRAMEWORK
This paper proposed a new framework combing LSTM and DTW methods to do the risk prediction. Firstly, with analyzing the data correlation and filtering out irrelevant variables, the time series relationship among data has been adjusted by DTW method. In the processing part, the filtered data is normalized and non-linear conversion is performed by the Sigmoid function. Secondly, the data is employed to train the prediction model and obtain the prediction value, which could carry out early warnings. Instead of conventional Euclidean metric, DTW employs the data-difference-discrimination pattern matching, which could reduce the calculation and improve the adaptability of variables in different dimensions. Once the time series data are preprocessed by variant DTW algorithm, the bed pressure drop prediction model would be established based on LSTM algorithm. The whole process is illustrated in the following figure.
A. VARIABLE PROCESSING OF THE TIME SERIES DATA BY THE VARIANT DTW ALGORITHM
DTW is used to analyze the correlation of variables and obtain the analysis results, according to which unnecessary variables would be filtered out and the variable data structure would be adjusted. DTW model is computationally intensive in practical application, especially for a large number of time series data. And Euclidean metric is used as distance formula in the traditional DTW model, which is disadvantageous for the variables with different dimensions.
In this paper, a pattern matching method is used in DTW distance formula. The continuous time series is converted into discrete representation features, which not only simplifies the computation, but also enhances the compatibility of data in different dimensions. Continuous data are divided into three categories, rising, maintaining and falling. The concept of pattern matching is used to replace the Euclidean metric as a representation of the differences between data. In the following (10), 1, 0.5, 0 represent rise, hold and VOLUME 8, 2020 fall models respectively In the practical industrial process, as a consequence of the large inertia characteristics of objects, the influence among data, like the coal supply and the unit load, tends to be out of synchronization. When the coal supply changes, the unit load often lags for some time before it changes. Therefore, in data pre-processing, we should not only analyse the correlation between the data but also consider the inertia between the data. The optimization path of DTW indicates the direction of time series compression, which enables data to be inertia processed and be lagged to a certain extent. y = x + b is used to fit the optimization path. The number of lag samples is obtained according to the lookback of the model and the absolute value of b.
B. DATA STANDARDIZATION AND SELECTION OF MODEL LOSS FUNCTION
Data features are extracted from various dimensions to avoid inaccuracy of model precision and confusion of optimization trajectory when training model. The prediction target is bed pressure, so the bed pressure is normalized by min-max shown as (11). (11) where x min and x max represent the minimum and maximum values in the data, y i represents the data after min-max standardization, and x i represents the data before processing. Linear change of data between 0 and 1 also benefits for training and initialization of network model weights and offsets. In the actual industrial process, there are usually some wrong data points or mutation points. Min-Max normalization method cannot solve the influence of these error points or mutation points on the overall data. On the contrary, this kind of normalization is so sensitive to the abnormal data that the overall trend of the data would be not clear under the influence of a single bad value. However, linear change cannot solve the problem distinguishably. In this paper, Sigmoid function is used to convert data nonlinearly. Since the domain of Sigmoid function is (−∞, +∞) and the corresponding domain is (0, 1), the information-loss problem would be avoided when converting data with Sigmoid function. If the computer precision allows, when a sudden change point occurs, its conversion value will only be infinitely close to 1 or 0, and the value of the sudden change point can be restored by inverse transformation. Sigmoid function is shown as follows: For the normalized data, we transform Sigmoid function to adapt to our data. The formula is as follows: where y represents the converted data and x represents the data before conversion. a and b respectively represent the extent of expansion of data trends. The values of a and b are determined according to the results of data normalization.
C. BED PRESSURE DROP PREDICTION MODEL BASED ON THE LSTM
The conventional LSTM network, including LSTM layers and full connection layers, performs inadequately in data prediction with high complexity, while high-depth layers requires longer training time and more advanced machine configuration. Thus, the structure combining with deepening width, full connection layer and LSTM layer is applied shown as Fig. 5.
V. RESULTS AND DISCUSSIONS OF A CASE STUDY
In this paper, a typical 330MW unit is considered in the case study. The data come from JoinLion 1 # 330MW circulating fluidized bed unit in China. In order to facilitate the training of LSTM, Sigmoid function is used to adjust Min-max normalization. The Sigmoid function enlarges data change trend and reduces the influence of deviation point on overall data. In addition, DTW path is used to process the time-data symmetrically to reduce the influence of large inertia links in mechanism objects on model prediction.
A. CORRELATION ANALYSIS BASED ON DTW
Correlation analysis is carried out on data variables, and input variables of the model are selected according to the correlation with target variables. The target variable of the model is bed pressure difference, so correlation analysis is carried out on bed pressure and bed pressure difference simultaneously. As long as the variables satisfy one of them, they can be used as input variables of the model. The correlation coefficients are shown in Tab.1 and Tab.2. According to experience, the DTW coefficient threshold is set to 260. According to the results, we selected 15 related variables such as corrected total fuel quantity, instantaneous flow of 4# weighing coal feeder, instantaneous flow of 3# weighing coal feeder and instantaneous flow of 2# weighing coal feeder.
B. INERTIA PROCESSING WITH DTW ALGORITHM
DTW algorithm finds the shortest path in the metric space with the dynamic time adjustment, which also represents the correlation between two groups of data. Besides, the direction of associated data are presented in the path, describing the relationship among data. In the industrial process, huge inertia exists among variables, which means data out of asynchronization should be noticed and arranged. This paper analyses variable correlation with DTW algorithm. What's more, in order to reduce the influence of inertia, the article adjusts the time matching of the data based on the DTW path. As shown in Fig. 6, line y = x + b gets close to the DTW path. With this procedure, the input data would combine the data from DTW path and learn the mapping relationship based on the LSTM sequential processing.
Taking the lower left secondary air as an example, since the model choses 100 lookback, the closing line in the DTW image should contain the DTW path as much as possible. The part between the dotted red line and the solid line represents the input region of the model. This part should contain DTW paths as much as possible to ensure that the input model data contains correlation information between variables. After data processing, we could see the prediction from the model. Here the article compares the results from seven different algorithms.
C. NORMALIZATION WITH SIGMOID FUNCTION
The framework will normalize the data and perform Sigmoid nonlinear conversion according to the normalization result. The result of the left primary wind after normalization is illustrated in Fig. 7, in which the data almost presents a straight line with the occurrence of few abnormal points data in the whole data set. According to the result, we select 0.996 and 1 as the values of a and b respectively. After Sigmoid processing, the data are shown in Fig. 8. It is obvious that the data trend has been enlarged, enabling it to be clearer and conducive to model training. Similarly, all the screened variables are processed.
D. COMPARISON AND DISCUSSION OF THE RISK PREDICTION PERFORMANCES
To develop the model, we choose Mean Square Error (MSE) as the loss function of the model, train the model with 20000 sets of data, and test with 1440 sets of data. Three common neural networks models are selected for comparison, including BP neural network, RNN neural network and LSTM neural network model. The comparison results are shown in the Fig. 9.
As illustrated in these figures, best performance, meaning the fastest speed to adjust trend when breakdown happens, occurs in the approach presented in this paper, the bed pressure drop prediction model. The presented model can effectively predict bed pressure, with an average prediction advance time of about 25s. In the normal status, since the normal data take up a large proportion, the advantage of differential prediction is not obvious. As shown in the chart, the prediction accuracy of RNN and LSTM algorithm is fluctuated while normal running, and the accuracy is low when the breakdown happens. BP algorithm has high accuracy at the fault point. However, it behaves worse than other algorithms while normal running. The algorithm raised by the article could guarantee the accuracy while normal running and response quickly when the fault happens at the same time. The stability of its prediction is very helpful to the operation of the field person.
Compared with the conventional error function, which employs the Euclidean distance among data to obtain accuracy, the advance time weights more in practical application. Hence, we use the concept of advance degree to characterize the result of model prediction. The formula for the advance is as follows: According to the formula, the four algorithm models are calculated, and the result is shown in the Tab. 3 As illustrated in Tab.3, it can be seen that the algorithm in this paper has the best advance at the fault and can ensure a certain advance during normal operation. This is consistent with the results of previous analysis of the resulting images.
In this paper, the data processing also plays a very important role in the prediction accuracy of the model. Fig. 10 shows the comparison of processed data and unprocessed data. It is observed that the prediction under normal operation is smoother after the data processing with DTW algorithm, and the prediction for faults is much more accurate than the model without data processing.
VI. CONCLUSION
A data-driven framework for time series risk prediction has been proposed and validated with a case of bed inventory overturn prevention in a pant-leg CFB boiler. The underlying principle in the proposed framework is to excavate the time series characteristics between variables. Instead of conventional Euclidean metric, DTW employs the data-difference-discrimination pattern matching to reduce the calculation and improve the adaptability of variables in different dimensions. After variable processing of the time series data by the variant DTW algorithm, the bed pressure drop prediction model is established based on the LSTM structure. The framework performance is discussed and validated with real operational data. The tested variant DTW approach found consistent correlations between variables and the input model data. The LSTM model can effectively predict bed pressure, with an average prediction advance time of about 25s. Compared with other common neural network models, including BP, RNN and LSTM, the bed pressure drop prediction model has the best performance both during normal running and when fault happens. Therefore, the proposed framework does have the ability to leave enough time for boilers to adjust the primary air volume prevent the occurrence of bed inventory overturn risks, thus protecting CFB boilers to operate stably. JIYU CHEN received the B.S. degree in automation from North China Electric Power University, Beijing, in 2018, where he is currently pursuing the Ph.D. degree in control theory and control engineering. His main research field is the application of artificial intelligence algorithm in industrial process.
ZHIYU ZHANG is currently pursuing the B.S. degree in telecommunications with management with the Beijing University of Posts and Telecommunications (BUPT), China.
From 2018 to 2019, he was a member of JP Innovation Project. His research interest includes the application of deep learning in thermal power generation and the network communication.
Mr. Zhang is a member in the joint-program of BUPT and NCEPU. He has participated some of the researches in the group.
RUI WANG was born in Shandong, China, in April, 1999. She is currently pursuing the degree in telecommunications engineering with management with the International School, Beijing University of Posts and Telecommunications (BUPT).
From 2018 to 2020, she was selected into Yepeida Innovation and Entrepreneurship College of BUPT, getting education in innovation and artificial intelligence. In Summer 2019, she went to the University of Cambridge for Summer School, learning artificial intelligence and entrepreneurship. For the three years in BUPT, she has won second-class scholarship and first-class scholarship. Under the guidance of her mentor, her research interest is employing some artificial intelligence algorithms in evaluating and adjusting the performance of boilers in electric grids.
MINGMING GAO was born in Shanxi, China, in 1979. He received the B.S. degree in computer science and technology from Central South University, Changsha, in 2002, the M.S. degree in computer software and theory from Central South University, Changsha, in 2005, and the Ph.D. degree in control theory and control engineering from North China Electric Power University, Beijing, in 2013.
He is currently an Associate Professor with the School of Control and Computer Engineering, North China Electric Power University, Beijing. He is the author of more than 40 articles. His research interest includes the optimal control and engineering and operation condition monitoring of thermal power generation systems. | 6,851.6 | 2020-01-01T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Coherent multi-mode dynamics in a Quantum Cascade Laser: Amplitude and Frequency-modulated Optical Frequency Combs
We cast a theoretical model based on Effective Semiconductor Maxwell-Bloch Equations and study the dynamics of a multi-mode mid-Infrared Quantum Cascade Laser in Fabry Perot with the aim to investigate the spontaneous generation of optical frequency combs. This model encompasses the key features of a semiconductor active medium such as asymmetric,frequency-dependent gain and refractive index as well as the phase-amplitude coupling of the field dynamics provided by the linewidth enhancement factor. Our numerical simulations are in excellent agreement with recent experimental results, showing broad ranges of comb formationin locked regimes, separated by chaotic dynamics when the field modes unlock. In the former case, we identify self-confined structures travelling along the cavity, while the instantaneous frequency is characterized by a linear chirp behaviour. In such regimes we show that OFC are characterized by concomitant and relevant amplitude and frequency modulation.
Introduction
Quantum Cascade Laser (QCLs) have attracted a remarkable interest as THz and Mid-IR sources capable of self-starting Optical Frequency Combs (OFCs) under DC current injection [1][2][3][4]. OFCs are generally meant as lasers emitting, under particular bias conditions, a set of equally spaced optical lines with low phase and amplitude noise [5]. These optical sources are appealing for a wealth of applications in the Mid-IR and THz range, encompassing high precision molecular spectroscopy, broad band free space optical communication and hyperspectral imaging [6,7]. From an experimental point of view, the OFC regime has been mainly characterised through the intermode beatnote (BN) spectroscopy and associated with a narrow BN linewidth (typically less than 100KHz). By sweeping the bias current, it was found bias current ranges of irregular dynamics (phase unlocked optical and wide BN linewidth) alternated with current ranges of OFC operation (phase locked lines and narrow BN linewidth) [6,8,9]. Figures of merit of the OFC are typically the number of locked modes in the −40dB (or −20dB) spectral bandwidth and the OFC dynamic range, intended as the range of bias current where OFC emission occurs. In this regard, THz QCLs emitting at 3.1T Hz can provide up to few tens of modes in the −40dB spectral bandwidth of about 1.1T Hz; whereas Mid-IR QCL can give self-locked optical lines in a bandwith of about 3THz centered at 36.5THz [10,11]. In absence of any dispersion compensation [12,13] or microwave modulation [14], stable OFC regimes have been found in ranges of current of about one hundred milliamperes starting from about two times the lasing threshold [10,11]. Only recently the temporal dynamics of the optical field became accessible through the Shifted Wave Interference Fourier Transform Spectroscopy (SWIFTS) technique [3] that allows to retrieve the amplitude and phase of the optical field from experimental data [11,15]. This additional information revealed the true nature of the self-generated OFC in QCLs: it occurs not only in presence of a Frequency Modulated (FM) laser emission, but its formation also implies a significant (or even dominant) Amplitude Modulation (AM), appearing as intra-cavity optical pulses which propagate on a quasi-homogeneous background field [11,16]. In addition, the inspection of the temporal evolution of the field phase and the consequent instantaneous frequency, demonstrates the existence of linear frequency chirp with a frequency jump at the time instants where the field amplitude is modulated by the pulse [11,15,16]. Well before experimental SWIFTS measurements, theoretical predictions of such pulses having a solitary wave character was provided in [17]. Although several theoretical efforts have been made in order to provide a physical understanding of the fascinating phenomenon of self-starting OFCs in QCLs, to the best of our knowledge there is still a lack of models able to reproduce the experimentally measured coexistence of optical pulses and linear frequency chirp, and also the alternance between locked and unlocked regimes. We believe that such tools would be promising to predict possible strategies to extend the OFC dynamic range by employing externally controllable signals, by optimizing the device gain material or the laser cavity design. The approaches proposed so far are based on the classical set of Maxwell-Bloch equations valid only for an ideal two o three levels atom-like material system [18][19][20][21][22]. This model, while grasping some basic features of the laser physics, fails in correctly describing the phase-amplitude coupling (quantified by the linewidth enhancement factor, LEF ) peculiar of semiconductor lasers. In absence of the phase-amplitude coupling, the relevant mechanism in determining the multimode instability threshold and influencing the possibility to observe OFCs was only ascribed to the Spatial Hole Burning (SHB) [20,23,24] consisting in a carrier grating excited by the interfering counter propagating field of the Fabry-Perrot (FP) laser cavity. More recently, a non-null LEF and an inhomogeneous gain broadening have been "ad hoc" included in [15,16]; new kind of CW instability and multi-mode dynamics have been found with a better match with some of the experiments reported for e.g. in [11] In [17] we adopted a model consisting on a set of Effective Semiconductor Maxwell-Bloch Equations (ESMBEs) [25] to study THz QCLs and we demonstrated it could well reproduce the experimental observation of self-generated OFCs alternated with ranges of irregular multi-mode regimes [8]. The ESMBEs was based on a non-linear optical susceptibility model that describes radiation-matter interaction by fitting microscopic calculated and/or experimentally measured optical gain spectra and refractive index dispersion. This allowed us to point out the role played by the LEF in reproducing and explaining: • the instability of the CW lasing even close to lasing threshold, whereas it was originally predicted to occur above about ten times the threshold current due to the Risken-Nummedal-Graham-Haken (RNGH) instability • the multi-mode dynamics, due to the onset of solitary pulses travelling in the resonator, and narrow BN spectra at the round trip frequency (or at the double of the BN explaining the disappearance of the BN in some experiments) • the self-generation of OFCs not only close to the threshold current but also in current ranges beyond regions of irregular unlocked dynamics Since the model in [17] was rigorously valid only in a unidirectional ring configuration and with no coupling to the output waveguide, the aim of this work is extending that approach to the case of a more realistic FP laser by including in the model the formation of the standing wave field pattern along the laser cavity through the additional presence of the SHB caused by the carrier grating [24,26]. We apply the model to study Mid-IR FP QCL and we show that simulation results are in very good agreement with the experimental evidences reported in [11]. Specifically, we can exhaustively describe the phenomenon of self-starting OFCs and the coexistence of the AM and FM regime characterized by weak pulses with the cavity round trip repetition rate and FM linear chirp of the output optical field. Thanks to a campaign of simulations exploring the parameter space of LEF, carrier lifetime and optical gain bandwidth, we can show the impact of these parameters on the extension of the current range of self-OFC generation and on the number of locked optical modes. In Section 2 we derive the model for a FP QCL, remarking the role of the carrier grating in the medium dynamics and linking the model parameters to physical quantities, relevant for comparison with experiments. In Section 3 we start by illustrating results from our model, relative to a realistic case. We show that laser field, upon current ramping, can exhibit locking regimes, multiple locking windows, window of chaotic dynamics, AM and FM dynamics associated thereof as well as instantaneous frequency chirping (FCh), with close similarity to experimental evidence. Some quantifiers are introduced, to characterize both OFC formation (beyond standard reference to BN linewidth) and FCh regimes. Following this reference case, in paragraph 3.1 we illustrate the role of critical medium parameters, mainly the LEF and the gain bandwidth, in ruling the OFC regime extension, the spectral character of the OFC and we compare our evidence to laser dynamics in absence of SHB and approaching the two-level case. Finally in paragraph 3.2 the role of carrier rates is considered, confirming that 'slower' carriers give rise to longer pulses and eventually lead to the loss of the pulsed regime. Conversely, faster carriers give rise to shorter pulses and we can show that, interestingly, the corresponding simulations also reveal the formation of OFC encompassing larger number of modes, occurring for larger gain linewidth and in ampler current ranges, thus in very good agreement with the results recently reported for e.g. in [11]. Sec.4 draws conclusions and prospects future developments.
The model. Effective Semiconductor Maxell-Bloch Equations for a Fabry-Perot multimode QCL
Our model encompasses the semiconductor susceptibility, typical of a QCL, originally developed in [17] for an unidirectional resonator, combined with the multiple scale approach adopted for Quantum Dot (QD) lasers in [26] to account for carriers grating due to standing wave pattern and responsible for Spatial hole burning, with the goal to properly describe a bidirectional Fabry-Perot (FP) resonator (see also chapter 14 in [28]). We consider a FP cavity a few millimeters long and start by treating the spatio-temporal evolution of the electric field. We start from the d'Alembert equation where E is the electric field, P is the polarization and v is the radiation group velocity. We then assume that the electric field and the polarization can be expressed as where E + (z, t), E − (z, t) are respectively the slowly varying envelopes for the forward field and backward field inside the resonator and P 0 (z, t) is the polarization envelope, assumed to vary slowly only in time for reasons that will be clarified in the following passages,ω 0 and k 0 are respectively the reference frequency (cold cavity mode closest to the gain peak) and its wavenumber. Inserting Eqs. (2) and (3) in Eq. (1) and applying the slowly varying envelope approximation (SVEA) we obtain the following equation: where g is a complex coefficient given by: N p is the number of stages in the cascading scheme, Γ c is the optical confinement factor (that takes into account the overlap between the optical mode and the active region) and n is the effective background refractive index of the medium. The field dynamics is coupled to the active medium's one. Starting from the carrier density, we assume that in each transition, within the cascaded superlattice, the ground state is always empty, because of the fast depopulation due to the LO phonon-electron scattering processes. Therefore in our model only the carrier density of the upper laser level N(z, t) appears as a dynamical variable. The evolution equation is retrieved from the Bloch two-level approach [28] in the rotating wave approximation. We consider a pumping current I, the carrier nonradiative decay time τ e , and take into account the forward and backward field envelopes, as it is required for FP cavities. We obtain: where V is the medium volume and e is the electron charge.
The equation for the polarization dynamics is derived following the approach described in detail in Sec.2 of [17]. We start by introducing a phenomenological optical susceptibility χ(ω, N) that allows to describe spectrally asymmetric curves for gain and dispersion, generally dependent on the carrier density; it has the form 1: In Eq. (7) we have assumed, for simplicity, that the gain maximum coincides with the reference cavity frequency ω 0 2. Eq. (7) is associated in the time domain to the following polarization equation where the peculiar feature of the FP resonator is made evident by the dependency from the counterpropagating field envelopes: where α is the LEF and 1 τ d is the effective polarization decay rate 3. For further convenience we introduce δ hom = 1 πτ d , which is a measure of the FWHM of the gain spectrum in the limit α << 1 where the susceptibility χ(ω, N) becomes that of homogeneous broadened two-level system gain [17].
At this point our equations include field-carrier interactions at all spatial orders (measured in multiples of λ), but in order to retain physical insight and numerical viability, a relevant simplification can be introduced by exploiting a multiple scale approach [29-31]. Specifically, we expand in Fourier series the spatial variation at the wavelength scale of P and N [31]: 1We use a different sign convention respect to [17], due to the assumptions for the expression of the complex electric field and polarization (Eqs. (2)-(3)).
2The FSR is large enough so that a moderate frequency shift of the gain peak is of little relevance to the laser dynamics. 3Note that the effective polarization decay time corresponds to Γ τ d in Eq.(13) of [17].
Inserting Eqs. (9) and (10) into Eqs. (4), (6) and (8) and neglecting the terms with spatial frequency higher than 2k 0 , we get the final set of Effective Semiconductor Maxwell-Bloch Equations (ESMBEs) for QCL in FP configuration in the form: Finally, the model equations must be completed by the boundary conditions which read: where R is the reflectivity of each mirror of the FP cavity.
The numerical simulations: Self-generated frequency and amplitude modulated OFCs
In this section we present the results obtained by numerical integration of the ESMBEs (11-16) with the boundary conditions (17)-(18) for typical Mid-IR QCL parameters reported in Table 1 and adopted from literature [4,11]. The numerical code is based on a TDTW algorithm, which exploits a finite differences scheme, discretizing both in time and space [26]. Our first aim is the reproduction of OFC regimes with characteristics similar to those experimentally observed [8,11,[32][33][34], namely: a combination of FM and AM OFCs occurring close to the lasing threshold and in a significant bias current range, followed by a current range of unlocking with irregular dynamics and, possibly, occurring again in a second window for larger bias currents, a feature that is commonly observed in experiments, but that, to the best of our knowledge, was never found theoretically.
In such perspective, we first present the typical results adopting the realistic values of α = 0.4 and δ hom = 0.48THz. The corresponding light-current plot is reported in Fig. 1. Further on, we will present the results of a massive campaign of simulations showing a broad zoology of Table 1.
dynamical regimes and the impact of the LEF, optical gain bandwidth and carrier lifetime on the figure of merit of the self-generated OFC. We first focus on the identification of OFC regimes by sweeping the bias current I. In our simulations, the emergence of a OFCs regime can be characterized, as typically done in experiments, by a narrow BN linewidth at Radio Frequency (RF). However a better assessment can be achieved by estimating some additional phase and amplitude noise quantifiers that we have recently introduced for the numerical characterization of OFCs in QD lasers [26]. To calculate them, the spectrum of the optical field at z=L (exit facet of the simulated device) is filtered so as to retain only the modes within a 10dB power ratio to the spectral maximum. We then consider the temporal evolution of each filtered optical line of the spectrum: the modal amplitudes P q (t), q = 1, ..., N 10 and the temporal phase difference between one mode and the adjacent one ∆Φ q (t), q = 1, ..., N 10 , where N 10 is the number of optical lines in the −10dB spectral bandwidth [26]. Given the amplitude and phase dynamics of each optical line, we calculate the quantities: where: and the symbol < > indicates the temporal average. The indicators defined by Eq.19 measure the average fluctuations of the power and phase of the selected optical lines. An ideal OFC should have no intensity noise fluctuation of the power of each line (ie: low RIN per line) and zero differential phase noise such that both indicators should be zero. In our simulations we observe residual fluctuations, so that we will define in the following an OFC regime when the indicators are M σ P < 10 −2 mW and M ∆Φ < 2 · 10 −2 rad.
An example of dynamical behaviour corresponding to the self-starting OFC is shown in Fig. 2, for I/I thr = 2.31, where I thr is the threshold current of the laser. The propagation of confined field structures sitting on an almost constant background in the intensity trace ( Fig. 2.a) appears intrinsically paired with an instantaneous frequency chirp in the time range where the intensity is Table 1. Temporal evolution of laser power (blue curve) and instantaneous frequency (red curve). A propagating pulse at the round trip frequency sits on an almost constant background associated with a linear frequency chirp. (b) Optical spectrum of the emitted radiation showing 10 modes in the −10dB spectral bandwidth. c) Zoom around one peak of the optical spectrum. almost constant, followed by discontinuous jumps when the field structure occurs (Fig. 2.a); note the remarkable similarity with the experimental evidences in Fig. 2.b of [11]. This evidence suggests that OFC is a locking phenomenon where concomitant AM and FM is a commonplace (see also fig.8).
Additionally, we observe 10 locked lines in the −10dB spectral bandwidth of 0.2 THZ (Fig. 2.b); each line has a very narrow linewidth as shown by the zoom around one line in Fig. 2.c When the laser unlocks, an irregular dynamics is observed as for example at I/I thr = 3.46. The field intensity and its instantaneous frequency versus time are shown in Fig. 3a; whereas the whole optical spectrum of Fig. 3b is apparently not too different from Fig. 2.b, we note that each line is significantly enlarged with several side bands close to the main peak (Fig. 3.c)). Fig. 2 indicates, in excellent agreement with experimental evidence, that the OFC regime with a broad and flat optical spectrum is characterized by an almost linear frequency chirp. To quantify the linearity of the chirp at different bias currents and/or for different sets of parameters, we introduce here an indicator of chirp linearity, based on the comparison of the simulated instantaneous frequency with a perfect frequency sawtooth [35]. Since the moduli of two adjacent Fourier coefficients (c n,st and c n+1,st ) of the Fourier series of an ideal sawtooth stay in the ratio |c n+1, s t | |c n, s t | = n n+1 ; we calculate the Fourier transform of the instantaneous frequency signal, we define c n the peak of each n-th component of the spectrum and the ratio R n = |c n+1 | |c n | . The relative error ( n ) between n n+1 and R n and its average over N c components ( c ) are defined respectively as: n = R n − n/(n + 1) n/(n + 1) The indicator c is therefore a relative error aimed at quantifying the discrepancy between the QCL instantaneous frequency signal and an ideal sawtooth. We assume that a regime can be reasonably defined as 'linearly chirped' when c < 10 −1 .
As Fig. 4.a shows, our QCL starts off with a CW emission at threshold (I thr = 260mA), which is soon destabilized towards a multimode dynamics associated with the appearance of a BN at I/I thr between 1.25 and 1.64. In this current range we see an OFC regime characterized by a gradual increase of N 10 , low intensity and phase noise (since M σ P < 10 −2 mW and M ∆Φ < 2 · 10 −2 rad) and rather large linear chirp indicator ( c > 10 −1 ). We also report a BN shift of 0.03GHz around I/I thr = 1.34, which is in agreement, in terms of order of magnitude, with recent experimental results [36]. Around I/I thr = 1.73 the OFC regime is lost; we observe the onset of several lines around the BN causing an important broadening of the BN linewidth. This broadening is a finger print of an unlocked regime characterized by an amplitude modulation with a period equal to the inverse of the separation between the BN and adjacent side bands. The corresponding phase and intensity noise indicators increase of nearly two order of magnitude. This regime ceases just before I/I thr = 1.83 where a new OFCs regime appears, thus reproducing the locked/unlocked state alternance found in some experiments [8]. This regime is even more sizeably extending up to I/I thr = 3.08, after which chaotic emission sets in. Comparing in the Fig.4.e the linear chirp indicator of the first and the second locking window, we see that for all current I/I thr < 2 the value of c is higher than 10 −1 . In this region N 10 is less than 9. In the second locking region for I/I thr > 2 we have linear chirp with N 10 > 10 and an increase of the number of locked modes is accompanied by a further reduction of the linear chirp indicator. The observed correlation between the reduction of c and an increasing number of locked modes suggests that linear chirp is a complex cooperative phenomenon involving a highly multimode dynamics (note that in calculating our c we choose N c =5). As proposed in [6] the spontaneous formation OFCs is due to efficient Four Wave Mixing (FWM) that for sufficiently high interactivity field intensity (or bias current) acts as a self-injection locking mechanism in compensating the cavity mode dispersion and fixing their relative phase differences.
OFCs properties: the role of LEF and gain/dispersion bandwidth
In order to highlight the role of the LEF and the gain/refractive index dispersion in affecting both the bias current range of the OFC regime and the figure of merit of the optical comb, we run systematic sets of long (> 500ns) simulations by sweeping the bias current between the threshold I thr and 3I thr with a step of 0.19I thr , and considering α ∈ (0.4, 1) and δ hom ∈ (0.16THz, 1.27THz). The other parameters are those in Table 1. Our results are conveniently summarized in Fig. 5, where we report for each pair (α, δ hom ) a black circle when no locking is observed, and a red circle in case of OFC emission; in the latter case inside the circle we also report the FWHM of the gain spectrum at threshold, the maximum number of locked modes found in the −10dB spectral bandwidth and the extension of the bias current interval ∆I where the OFC regime is found.
We first observe that spontaneous OFC formation is found diffusely throughout the considered values of α and of δ hom . Also, as a general trend, in the locked regime the number of locked modes N 10 tends to increase with the FWHM of the gain curve. We also report that, for a fixed value of δ hom , larger values of α increase the modal competition via nonlinear dispersion and reduce the range of ∆I where OFC is met in agreement with the results in [17]. As an example, for e.g. δ hom = 0.32THz where OFCs are reported for all values of α, we found that ∆I drastically decreases as α increases. For fixed value of δ hom , the increase of LEF is equivalent to an increase of the asymmetry or inhomogeneity of the semiconductor material gain spectrum which is deviating from the ideal symmetric homogeneous gain of two-level atoms. On the contrary, low value of LEF implies a symmetric small inhomogeneous gain broadening, whereas the increase of δ hom can be read as a reduction of the de-phasing time as typically observed increasing temperature. At fixed α, as a general trend an increment of δ hom reduces the current range (or occurrence) for OFC regimes. These evidences seem consistent with the fact that the number of dispersed cavity modes for which the gain overcomes the losses increases with δ hom , but the quantity N 10 is actually limited by the efficiency of the FWM in locking the lasing modes that typically is an inverse function of distance from the resonance [6]. In this regard an anomalous behaviour is found at the map edge where, for α = 1 and δ hom = 0.16THz, we could not find any locked regime contrary to what happens for the two neighbouring circles of the map. We may argue that this low value of the gain FWHM implies a destabilization of the single mode solution for high bias currents where the multimode regime is prone to be chaotic for the relatively high value α = 1. To corroborate this interpretation we checked that for α = 1 and δ hom < 0.16THz only irregular multimode regimes are reliazed beyond the CW instability threshold. Let us briefly analyze the results about the size ∆I of bias current generating the combs. If we focus on the case α = 0.4 where we report OFC formation for all the considered δ hom , for the lowest value of δ hom we found a comb regime spanning just a few mA in the whole simulation interval (I thr , 3I thr ); nevertheless, an extended comb regime of ∆I = 1000mA can be found for higher values of the pump current (I/I thr > 3). For larger values of δ hom , ∆I keeps growing, it is maximum at δ hom = 0.48THz and then decreases.
In order to clarify the role of α in triggering the CW multimode , we observe that it was already shown how increasing this parameter lowers the threshold for the multi-mode lasing (see Fig.3a in [17]). In fact, since amplitude fluctuations lead to frequency fluctuations via α, in presence of sufficiently large gain and bias current, we expect that a CW emission will be destabilized more easily in presence of larger α. This mechanism is the only possible multimode source in an unidirectional ring resonator, but in a FP configuration it would compete with SHB, a second well known mechanism for CW instability [20,24].
We numerically verified the previous considerations by simulating the QCL dynamics for α = 0 (ideal two-level system). We set δ hom = 0.48THz, since it corresponds to the largest ∆I and maximum N 10 when α 0. In absence of SHB, we verified the expected CW emission even very far from threshold. We estimated the instability threshold (see chapter 20-22 in [28]) and could verify that beyond that value (I inst > 13I thr ) a RNGH multimode instability sets in reducing our code to match the treatment of an unidirectional resonator and in the limit of small transmissivity. This result is consistent with the expectation that in unidirectional, two-level case the well known RNGH instability is the only means to destabilize the single mode emission, triggered by the resonance of one cavity mode with the Rabi oscillation. By increasing α (e.g. setting α = 1.5) and without SHB, we can confirm, in line with [17,37], that the multimode instability affecting the single mode CW emission appears just above threshold. When instead, keeping α = 0, and the SHB is switched on, we observe again CW destabilization just above the lasing threshold as we recently demonstrated for the QD laser case [26]. We therefore conclude that either the LEF or the SHB can (alone or together) contribute to the multi-mode emission which however does not necessarily lead to an OFC regime. The self-locked regime is found only for proper bias currents, for proper combinations of LEF and homogeneous braodening linewidth and, as shown in the following, for fast enough carrier dynamics.
Pulses, chirping and OFC: the role of carrier dynamics
A relevant role in the formation of regular dynamics from multimode emission is played by the carrier decay time. In slow (τ e 100ps − 1ns) conventional semiconductor lasers (for example in quantum well laser diodes) the spontaneous OFC formation is scarcely reported. In agremement with that, our numerical simulations showed that increasing τ e from 1ps to 1.3ps leads to a pulse broadening (Fig.6). For larger τ e , mode locking is lost for the same set of parameters of Fig.4.
On the other direction, we investigated the behaviour for a fast carrier life time τ e = 0.2ps (smaller than the value considered in previous sections). We also set α = 0.4 and δ hom = 3.18THz, which gives a FWHM of the gain bandwidth at threshold of 3.7T Hz, much larger then those considered in the map of Fig. 5. This gain bandwidth is comparable with the one measured in [38]. We interestingly found that a reduction of the carrier lifetime is very beneficial in giving OFC regimes in quite wide bias current range and even for very large gain bandwidth FWHM. Whereas the map of Fig.5 shows that increasing the gain FWHM the OFC regime might be lost, we stress here that the OFC regime is also very dependent on the carrier lifetime. Thanks to the increased gain bandwidth we also observe a significant increase of the number of comb lines N 10 . The OFC indicators versus bias current are in Fig.7, where we see one very large comb region (red rectangle) characterized also by the presence of linear chirped regime, since c < 10 −1 for all the current values in this region. The maximum number of locked modes is N 10 = 61 found at I/I thr = 2.16; the corresponding AM and FM dynamics at this bias current, shown in Fig.8, shows shorter pulses and markedly linear chirps as compared to Fig.2. Fig.9 reports the map for τ e = 0.2ps in the parameter space α ∈ (0.4, 1) and δ hom ∈ (3.18THz, 5.74THz); for each parameter configuration the bias current has been scanned between I thr and Table 1. The width of blue pulse is estimated 25ps, and 35ps for the red one. 3I thr , with current step 0.08I thr of 100mA. The other values are those in Table 1. For α = 0.4 we find locked cases for all the considered values of δ hom . The wider bias current range for OFC corresponds to δ hom = 3.18THz and the highest number of locked modes is achieved with a FWHM gain linedwith of 6.47THz. Locked states are found also for an higher (and probably more realistic) value of α = 0.7 [39], whereas locking is completely lost for α = 1. The trend is similar to the one in Fig.5: the increase of the LEF causes a reduction of N 10 as well as a reduction of the bias current range of OFC operation. Fig. 9. Case τ e = 0.2ps: analysis of locked regimes upon variation of parameters δ hom and α. Black dots indicate that no locked regime could be found upon scanning the pump current in the interval ((I thr , 3I thr ). Red dots indicate parameter pairs where such regime could be found. In the dots the FWHM gain bandwidth (see text) in THz is reported along with the locking current range where locking was found and the corresponding value of N 10 .
Conclusions
In this paper we have presented results concerning spontaneous OFC obtained in an original model we developed to encompass critical features for the coherent multimode dynamics of a QCL such as 1) a FP resonator with counterpropagating fields, which allows to include SHB effect in the gain dynamics, 2) effective semiconductor medium dynamics which reproduce asymmetric gain and dispersion spectra. Simulations correctly predict formation of OFC for bias currents close to lasing thresholds and, spanning the current up to a few time the threshold, they could also predict the recurrence of OFC ranges, spaced out by current intervals where modes delock and cause irregular field dynamics. Our work is thus successful in providing a unique model capable of replicating the main evidences of several experiments in the field. We have characterized the OFC regimes and their dependence on the laser's gain bandwidth and LEF, finding in particular that an increase of the LEF, which corresponds to an increase of the phase-amplitude coupling, determines a reduction of the locking regime extension and the predominance of a chaotic behaviour and also implies a reduction of the number of locked modes. We qualified OFC regimes not only on the basis of a narrow BN spectral line (which is nevertheless a commonplace in experiments), but also observing reduced instantaneous frequency jitter and modal power fluctuations, as measured by purposely introduced quantifiers. Another feature of our simulations is the confirmation that OFC associated to a sufficiently large number of locked modes exhibit the propagation of well defined pulses inside the cavity (on an almost flat field background) and a linear chirping of the instantaneous frequency, which we also conveniently characterized. This allows us to evidence how AM and FM modulations of the emitted field are simultaneously present in OFC. Finally, we investigated the role of carrier decay rates, i.e. the speed with which the medium evolves in time with respect to coherence and optical field, showing that faster carriers, with rates below (1ps) −1 , allow for shorter pulse formation in the OFC regimes, and, in association, for broader period of linear frequency chirping. Having achieved such a powerful model opens a broad range of possible investigations aimed to improving the search for better-quality, more robust OFC existing in ever-wider current ranges. Also, we plan to extend our analyses towards devices where RF injection provides a forcing element for active frequency locking, as well as towards lasers with an external coherent injection, acting as an external control exploitable in principle for locking and structure formation addressing. On a more fundamental basis, the analysis of the instability leading to multimode emission in a QCL will be a focus of interest, since the characterization of phase/amplitude instabilities is crucial for the determination of the general dynamical behaviour of our optical system. | 7,904.4 | 2020-04-28T00:00:00.000 | [
"Physics"
] |
Lower bound on the radii of light rings in traceless black-hole spacetimes
Photonspheres, curved hypersurfaces on which massless particles can perform closed geodesic motions around highly compact objects, are an integral part of generic black-hole spacetimes. In the present compact paper we prove, using analytical techniques, that the innermost light rings of spherically symmetric hairy black-hole spacetimes whose external matter fields are characterized by a traceless energy-momentum tensor cannot be located arbitrarily close to the central black hole. In particular, we reveal the physically interesting fact that the non-linearly coupled Einstein-matter field equations set the lower bound $r_{\gamma}\geq {6\over5}r_{\text{H}}$ on the radii of traceless black-hole photonspheres, where $r_{\text{H}}$ is the radius of the outermost black-hole horizon.
I. INTRODUCTION
Theoretical [1][2][3][4][5][6][7] as well as observational [8] studies have recently established the fact that closed light rings exist in the external spacetime regions of generic black holes.It has long been known that the presence of null circular geodesics in highly curved spacetimes has many implications on the physical and mathematical properties of the corresponding central black holes .
For instance, the unstable circular motions of massless fields along closed null rings determine the characteristic relaxation timescale of a perturbed black-hole spacetime in the short wavelength (eikonal) regime [9][10][11][12].In addition, the optical appearance of a black hole to far away asymptotic observers is influenced by the presence of a light ring in the highly curved near-horizon region [13][14][15].Moreover, as measured by asymptotic observers, the equatorial null circular geodesic determines the shortest possible orbital period around a central non-vacuum black hole [16,17].
Intriguingly, it has also been proved [5,11,18,19] that the innermost light ring of a nontrivial (non-vacuum) black-hole spacetime determines the non-linear spatial behavior of the supported hair.In particular, it has been revealed, using the non-linearly coupled Einsteinmatter field equations, that the non-linear behavior of external hairy configurations which have a non-positive energy-momentum trace must extend beyond the null circular geodesic that characterizes the curved black-hole spacetime [5,11,18,19].
Motivated by the well established fact that null circular geodesics (closed light rings) are an important ingredient of generic black-hole spacetimes [1][2][3][4][5][6][7][8], in the present paper we raise the following physically intriguing question: How close can the innermost light ring of a central black hole be to its outer horizon?This is a seemingly simple question but, to the best of our knowledge, in the physics literature there is no general (model-independent) answer to it which is rigorously based on the Einstein equations.
In the present compact paper we shall reveal the fact that, for spherically symmetric hairy black-hole spacetimes whose supported field configurations are characterized by a traceless energy-momentum tensor, the non-linearly coupled Einstein-matter field equations provide an explicit quantitative answer to this physically important question.In particular, we shall explicitly prove that the radii of light rings in spherically symmetric traceless hairy black-hole spacetimes are bounded from below by the functional relation where r H is the radius of the outermost horizon.
It is worth noting that our theorem, to be presented below, is valid for the canonical family of colored black-hole spacetimes that characterize the non-linearly coupled Einstein-Yang-Mills (EYM) field theory (see [23,24] and references therein).In particular, it is worth emphasizing the fact that the highly non-linear character of the coupled Einstein-Yang-Mills field equations has restricted most former studies of this physically important field theory to the numerical regime.It is therefore of physical interest to reveal, using purely analytical techniques, some of the generic physical characteristics of this highly non-linear field theory.This is one of the main goals of the present paper.
II. DESCRIPTION OF THE SYSTEM
We shall study, using analytical techniques, the radial locations of compact photonspheres (closed light rings) in spherically symmetric hairy black-hole spacetimes which are described by the curved line element [16,22,25] where {t, r, θ, φ} are the Schwarzschild-like coordinates of the spacetime.
The radial functional behaviors of the matter-dependent metric functions µ = µ(r) and δ = δ(r) are determined by the non-linearly coupled Einstein-matter field equations G µ ν = 8πT µ ν [16,22]: and where the radially-dependent matter functions [26] ρ in the differential equations ( 3) and ( 4) are respectively the energy density, the radial pressure, and the tangential pressure of the external matter configurations in the non-trivial (non-vacuum) black-hole spacetime (2).
Our theorem, to be presented below, is based on the assumption that the external matter fields respect the dominant energy condition, which implies that the energy density is positive semi-definite [27], and that it bounds from above the absolute values of the pressure components of the matter fields [27]: In addition, we shall assume that the external matter fields are characterized by a traceless energy-momentum tensor: where T = −ρ+p+2p T .In particular, the analytically derived lower bound on the characteristic radii of compact photonspheres [see Eq. ( 31) below] would be valid for the well-known colored black-hole spacetimes that characterize the composed Einstein-Yang-Mills field theory [23].
Taking cognizance of the Einstein field equation (3), one finds the functional relation for the dimensionless metric function µ(r), where the radially-dependent physical parameter is the gravitational mass which is contained within an external sphere of radius r ≥ r H .
Here m(r H ), which is characterized by the simple relation is the horizon mass (the mass contained within the black hole).
III. LOWER BOUND ON THE RADII OF LIGHT RINGS IN SPHERICALLY SYMMETRIC TRACELESS BLACK-HOLE SPACETIMES
In the present section we shall address the following question: How close can a black-hole photonsphere be to its outer horizon?Intriguingly, below we shall prove that an explicit answer to this physically important question, which is based on the non-linearly coupled Einstein-matter field equations, can be given for non-trivial (non-vacuum) hairy black-hole spacetimes whose external matter fields are characterized by a traceless energy-momentum tensor.In particular, we shall reveal the fact that the innermost light rings cannot be located arbitrarily close to the outer horizons of the central black holes.
The radial locations of null circular geodesics (closed light rings) in spherically symmetric hairy black-hole spacetimes are determined by the roots of the dimensionless function [18] N (r) ≡ 3µ − 1 − 8πr 2 p .
Taking cognizance of the fact that non-extremal black holes are characterized by the dimensionless horizon relations [27] 0 one finds that the function ( 16) is characterized by the horizon boundary condition [see Eq. ( 6)] In addition, from Eqs. ( 8), ( 11), (13), and ( 14) one deduces the asymptotic functional behavior which implies the simple radial behavior The characteristic properties ( 18) and ( 20) of the dimensionless radial function ( 16) guarantee the existence of an external compact sphere with the property r = r γ > r H for which The functional relations ( 21) and ( 22) determine the radial location of the innermost light ring which characterizes the spherically symmetric non-vacuum (hairy) black-hole spacetime (2).
Before proceeding, it is worth emphasizing that it has recently been proved [6], using the non-linearly coupled Einstein-matter field equations, that extermal black-hole spacetimes are characterized by the horizon relations N (r = r H ) = 0 and [dN /dr] r=r H < 0 which, together with the asymptotic radial behavior N (r → ∞) → 2 [see Eq. ( 20)] of the dimensionless function ( 16), guarantee that extremal black holes, like non-extremal ones, possess external light rings (with r = r γ > r H ) which are characterized by the functional properties (21) and (22).Thus, our analysis is also valid for spherically symmetric extremal black-hole spacetimes.
Taking cognizance of the Einstein equations ( 3) and ( 4) together with the characteristic conservation equation one finds the gradient relation which yields the functional relation [see Eqs. ( 3) and ( 16)] [28] Substituting Eq. ( 25) into (22) and using the trace relation (12) for the external matter fields, one obtains the relation which, using the dominant energy condition (11), yields the characteristic dimensionless at the radial location of the black-hole innermost photonsphere.Furthermore, substituting into (27) the relation (21), which characterizes the null circular geodesics of the black-hole spacetime (2), one obtains the inequality [6µ(r) − 1] r=rγ ≥ 0 (28) which, using the functional relation (13), can be written in the form Finally, taking cognizance of Eqs. ( 10), ( 14), (15), and ( 29), one obtains the series of inequalities It is interesting to point out that the canonical family of electrically charged Reissner-Nordström black-hole spacetimes are characterized by the relations r 29], in which case one finds that the dimensionless ratio r γ /r H is a monotonically increasing function of the dimensionless charge-to-mass ratio |Q|/M of the black hole from the value r γ /r H = 3/2 for Q = 0 to the value r γ /r H = 2 for the extremal black hole with |Q| = M. Thus, charged Reissner-Nordström black-hole spacetimes respect the analytically derived lower bound (30).
IV. SUMMARY
The non-linearly coupled Einstein-matter field equations of general relativity predict the existence of compact photonspheres in the external regions of curved black-hole spacetimes.
In particular, it is well established in the physics literature that closed light rings (null circular geodesics on which photons and gravitons can perform closed orbital motions around highly compact astrophysical objects) are of central importance in determining the physical, mathematical, and observational properties of generic (non-vacuum) black-hole spacetimes .
Motivated by the important roles that photonspheres play in the physics of black holes, in the present paper we have addressed the following question: How close can the blackhole innermost light ring be to the outer horizon of the corresponding central black hole?Perhaps somewhat surprisingly, to the best of our knowledge there is no general answer to this intriguing question in the physics literature.
Interestingly, in the present compact paper we have proved, using analytical techniques, that an explicit answer to this physically important question can be given for spherically symmetric black-hole spacetimes whose external hairy configurations are characterized by a traceless energy-momentum tensor [It is worth noting that our main focus here is on the canonical family of colored black-hole spacetimes that characterize the non-linearly coupled Einstein-Yang-Mills field equations [23].However, it should be emphasized that our analytically derived results are also valid for any Einstein-matter field theory for which the external matter fields satisfy the traceless energy-momentum condition (12)].
In particular, we have presented a remarkably compact theorem that reveals the physically interesting fact that the non-linearly coupled Einstein-matter field equations set the dimensionless lower bound [see Eq. ( 30)] on the radii of photonspheres (closed light rings) in spherically symmetric [30] traceless hairy black-hole spacetimes. | 2,469.6 | 2023-11-29T00:00:00.000 | [
"Physics"
] |
Magnetic Domain Patterns in Bilayered Ribbons Studied by Magnetic Force Microscopy and Magneto-Optical Kerr Microscopy
The magnetic domain patterns of amorphous bilayered FeSiB/FeNbSiB and FeNbCuSiB/CoSiB ribbons are observed and analysed using the magneto-optical Kerr microscopy (MOKM) and magnetic force microscopy (MFM). Both microscopic techniques are highly sensitive to the sample surface; possibility of Kerr microscopy to visualize the domains separately in both layers is achieved by focusing the laser spot on the ribbon cross section. Wide curved domains as well as fine fingerprint domains were detected at the surface of ribbons due to presence of local stresses coming from the preparation process. With respect to high lateral resolution of MFM and its out-of-plane magnetization sensitivity, the perpendicularly magnetized crossed stripe domain patterns can be selected as well. Coiling of the ribbons on the half-round-end sample holder is often used to induce and control the magnetic anisotropy of these alloys. Changes in the magnetic domain structure at the outer-coiled surface and its dependence on the sign of magnetostriction coefficient are discussed in detail. Finally, the MFM images in the presence of external in-plane magnetic field up to ±40 kA/m are shown.
Introduction
The amorphous and/or nanocrystalline alloys are deeply examined by many research teams world-wide due to their excellent soft magnetic properties [1,2]. They are produced in many forms and geometries (ribbons, wires, and thin films) by various techniques [1,3,4]. Besides other fabrication methods suitable for production of soft magnetic materials the planar flow casting (PFC) is referred to as most utilizable [5].
Recently, the innovations of PFC technology connected with the integration of double-nozzle allow the preparation of bilayered (BL) and/or multilayered (ML) functional materials. They are used mainly in sensor applications, like deflection sensors [6] and displacement sensors [7] and also as ferromagnetic shape-memory alloys [8] or as alloys with enhanced magnetocaloric [9] and GMI effect [10]. The initial production of monolithic BL system started back in the 1990s, where two compositions of FeNiB/CoFeCrSiB were put together [11]. Promising step towards the development of such structures resulted in the inhomogeneous properties of mentioned layers mainly due to the fact that both compositions were separated in two crucibles during the injection on rotating wheel. Consequent efforts led to the production of BL ribbons involving one crucible with separated chambers for production of bilayered ribbons, where two melts are cast almost at the same time. Since that time, the preparation process has been significantly improved 2 Scanning [6] leading to better homogeneity of the layers and interface [12]. Particular applications of these materials are closely related to the magnetic anisotropy originated in the bulk and on the surface during the ribbon preparation process [13]. However, the changes of magnetic anisotropy in these soft magnetic materials are reflected by the magnetic domain patterns. Nowadays, the magnetic force microscopy (MFM) is advanced well established surface-sensitive technique for magnetic domain observations in a variety of magnetic materials (e.g., recording media [14], particles [15], nanocomposites [16,17], amorphous and/or nanocrystalline alloys [18][19][20], and thin films [21]). It is considered as easy available micromagnetic method with sufficient resolution; on the other hand, quantitative expertise of MFM images remains debatable and still very challenging. Soft magnetic materials studied by MFM are very sensitive to the perturbation effects of the tip (thin ferromagnetic films) or sample stray fields and their mutual changes during measuring process. As a consequence the domain structures are occasionally hard to interpret. Anyway, there is an optical technique based on magneto-optical Kerr effect (MOKE) suitable for detection of surface magnetic properties in these alloys. Surface magnetic anisotropy and depth sensitivity are often investigated by measuring the MOKE hysteresis loops [22,23]. Magnetic domains from the near-surface region are observed using the magneto-optical Kerr microscopy (MOKM) based on light polarization and its change after reflection from the sample surface [24]. The resolution achieved by MOKM compared to the MFM is lower and strictly limited by the resolution of an optical element (objective). Despite this the MOKM offers fast measurements directly sensitive to the sample magnetization with sufficient contrast of magnetic images and therefore could serve as a proper tool for interpreting MFM response. The combination of both techniques has been successfully presented, for example, on Co and NdFeB crystals [25], where the force sensor was integrated into the objective revolver of an optical polarization microscope, on Fe-Ga bulk alloys [26], or on a single iron crystal [27].
The aim of the paper is complex observation and analysis of magnetic domains in bilayered FeSiB/FeNbSiB and FeN-bCuSiB/CoSiB amorphous ribbons. We benefit from high resolution of MFM setup compared with MOKM flexibility and direct interpretation of domain images. MOKM technique is used to detect the induced magnetic anisotropy that is changing during the ribbon fixing on the half-roundend sample holder. Possibility of obtaining magneto-optical contrast at the cross section of both bilayered samples is presented. MFM domain patterns without and with the presence of external magnetic field are also deeply investigated.
Materials and Methods
As-cast, 36 m thick and 8mm wide, amorphous bilayered Fe 77.5 Si 7.5 B 15 /Fe 74.5 Nb 3 Si 13.5 B 9 (BL-FF) and Fe 73.5 Nb 3 Cu 1 Si 13.5 B 9 /Co 72.5 Si 12.5 B 15 (BL-FC) ribbons were prepared by PFC technique using crucible divided into two chambers [6]. During the preparation process FeSiB and FeNbCuSiB layers of the sample were in contact with surrounding atmosphere (air side), while the opposite FeNbSiB and CoSiB layers were in contact with rotating wheel (wheel side). As confirmed by the X-ray diffraction (XRD) the ribbons are fully amorphous [13]. Basic structural and magnetic parameters of ribbons are summarized in Table 1.
Magnetic domains were investigated by two surfacesensitive techniques. Magneto-optical Kerr microscopy (MOKM) consists of specially designed polarization microscope for direct observation of magnetic domains; see schematic description in Figure 1. The white light from the Xe lamp passes through the system of optical elements composed of aperture diaphragm, polarizer, and polarization objective and incidents of the sample surface. Reflected light goes through the analyser almost crossed with the polarizer. Such arrangement is necessary for optimal domain contrast that is obtained by subtracting two images. Firstly we apply the magnetic field necessary to saturate the sample and the surface image is stored as a reference. Then the value of magnetic field is gradually decreased and we observe the difference between the actual image at applied magnetic field < and reference. In most cases the MOKM domains were investigated in remnant state, that is, after switching off the magnetic field (H = 0). Sensitivity to individual magnetization components can be adjusted using the aperture diaphragm. Opening and closing of the diaphragm enable illuminating different areas of the conoscopic image, "Maltese cross," occurring in the microscope back focal plane. As seen in Figure 1 at all MOKM experiments the light was screened to incident the edge part of conoscopic image and in this way the sensitivity to magnetization components lying in the ribbon plane (longitudinal, longitudinal with transversal sensitivity) was obtained. However, it is well known that out-of-plane (polar) magnetization component is also present due to oblique angle of light incidence. Subplots ribbon, the special sample holder was fabricated [12]. Vertical position of the sample is ensured by plastic clamp mounted into the acrylic case. The surface of the cross section of the BL ribbon was treated by grinding wheel with fine grain sizes and additionally polished for 1 hour using the Vibromet machine. MOKM magnetic domain investigation was supplemented by atomic/magnetic force microscopy (AFM/MFM) measurements with and/or without external magnetic field. The AFM/MFM experiments were carried out in air at room temperature with the scanning probe microscopy (SPM) platform (Ntegra Prima, NT-MDT, Russia) using the Co-Cr coated cantilevers (see Table 2) in semicontact (tappinglift) mode. The tips were magnetized perpendicularly to the sample surface and MFM senses the vertical component of the derivative of force between the sample and the tip. The coercivity of the tips is up to 16 kA/m. Firstly, the topography of specimen is obtained, and then the magnetic contrast is achieved by lifting the probe into the distance of 250 nm above the surface. All images have been collected both at remnant state and/or as a function of external magnetic field.
The AFM/MFM setup takes advantage from longitudinal magnetic field generator (electromagnetic coil) that is able to create the magnetic field along the sample surface up to 80 kA/m (see Figure 2). To suppress the influence of metal parts on probe position while operated in magnetic field, the measuring head and exchangeable mount are made from nonmagnetic materials. ribbon surface deeply in the bulk. In the near-surface region (a few tens of nm) the Kerr microscopy is sensitive mainly to their in-plane magnetization component. Presence of weak out-of-plane component can be easily verified at normal light incidence configuration, when the MO contrast practically diminishes. Contrary the wide curved domains originate as a consequence of tensile stress and follow the in-plane easy magnetization axis. Due to higher roughness on the wheel side of the samples (see Table 1), where the magnetic domains are overlapped by irregularities and structural defects coming from preparation, it is more complicated to visualize the magnetic pattern. There are no fingerprint domains; there is only the glimpse of wide domains that traces the direction of magnetization within the ribbon plane. Generally we can say that each ribbon place will have its own unique magnetic domain structure reflecting on the one hand stresses coming from preparation process and on the other hand postpreparation treatment. Differences between the ribbons and their sides are visible from Figure 3. Due to preparation process the investigated BL-FF and BL-FC ribbons exhibit low coiling either with the air or wheel side out. Their uncoiling on the planar sample holder used for magnetic domain observations and the sign of magnetostriction coefficient of corresponding ribbon layer (see Table 1) are responsible for random fluctuations of the easy magnetization axis on the ribbon surface. Due to mentioned factors (i) the wide stripe domains have directions close to the ribbon axis on both sides of BL-FF ribbon, while on the wheel side of BL-FC sample they are rather perpendicular to it, and (ii) fingerprint-like domains corresponding to the prevailing planar compressive stress are visible partly on the BL-FF air surface and almost everywhere on the BL-FC air surface. Figure 4 shows the possibility of controlling the induced magnetic anisotropy due to coiling of the ribbons into the half-round-end sample holder of diameter 13 mm. The measurements were done at the outer tensile-stressed side of the BL-FC sample, while the inner side was exposed to the planar compressive stress. MOKE surface-sensitive hysteresis loops with the magnetic field applied along the ribbon axis (Figure 4(a)) as well as the changes of domain patterns (Figure 4(b)) in comparison to the nonstressed samples clearly confirm the origin of easy and hard magnetization axis on the air and wheel side due to coiling. Different magnetic behaviour observed at both sides is connected, however, with nonidentical size and sign of magnetostriction coefficients (see Table 1). Similarly, positive sign of in both layers of BL-FF ribbon induces the easy magnetization axis in the ribbon axis at both surfaces.
Results and Discussion
The magnetic behaviour observed at the ribbon interface using the MOKM is shown in Figure 5. Movement of domain walls in the BL-FF sample is presented at Figure 5(a), while magnetic domains observed in remnant state of BL-FC ribbon are shown in Figure 5 incidence is parallel with magnetic field applied along the interface between layers. As was already discussed in the previous papers [13,28] the behaviour of magnetic domains is influenced mainly by the magnetostriction of individual layers. Both layers of BL-FF sample have positive magnetostriction coefficients and domains propagate separately in each of them. The domains move as whole blocks from surface towards the interface (or vice versa) with walls parallel to the ribbon interface. BL-FC sample consists of the layers with magnetostriction of opposite signs and exhibits higher tendency to coiling. Typical domain patterns similar to the "chess-board" with alternating black and white (grey) fields are detected. Domains are separated by the walls that are perpendicular to the interface inside the layer and touch the interface on the boundary between layers avoiding the domain propagation in the second layer. Movement of domains along the interface is observed contrary to the BL-FF sample.
Magnetic Force Microscopy.
Due to higher roughness of the ribbon wheel sides (see Table 1) only the air sides of BL-FF (FeSiB side) and BL-FC (FeNbCuSiB side) samples have been investigated by AFM/MFM. Figures 6 and 7 show the experiments without the presence of external magnetic field, where (a) corresponds to the AFM topography and (b) corresponds to the magnetic image (phase shift, MFM). The BL-FF structure ( Figure 6) consists of crossed stripe domains. The orientation of magnetization in bright and/or murky regions is very similar to the fingerprint ones measured by MOKM, indicating the local perpendicular anisotropy. Sensitivity of MFM to outof-plane magnetization components has been published by many authors analysing the properties of magnetic materials [17,21,26,27].
However, there are also places, where the crossed stripe domains completely vanish. Those regions can be found on the surfaces of both ribbons. An example of such situation is depicted in the case of BL-FC sample (Figure 7). We expect that in these places the wide curved domains observed by the MOKM are present, but they have not been detected by MFM probably due to their large size (see MOKM experiments) and low sensitivity of MFM to locally homogeneous magnetic field and in-plane magnetization components. Both mentioned cases on the surface of amorphous ribbons are schematically sketched in Figure 8(a). Nevertheless, the situation presented in Figure 7 is not quite clear. Although MFM image is closely related to the topography, it is unlikely that the observed fluctuations are just topography artefacts, as the second pass MFM lift height was high above the surface structures (250 nm) and the influence of the topography should be suppressed. Figure 8(b) schematically explains low MFM contrast of observed cross stripe domains. It is generally known that, to minimize the magnetoelastic anisotropy energy of the system, the closure domains are formed in small areas close to the surface. Therefore, the out-of-plane domains pass continuously into the in-plane domains in the near-surface region and the whole pattern looks like a horseshoe. For cross stripe structure the horseshoe is very broad and the tip influence area reflects only its apex. Therefore one can see the surface closure domains having greater in-plane and weaker out-of-plane magnetization components. MFM phase contrast is then much weaker due to low interaction between the tip and the surface. These results are in good agreement with MOKM detection of fingerprint domains.
In Figure 6 the BL-FF ribbon was investigated without the presence of external magnetic field. The same place was used for analysis of MFM domains with an applied in-plane external magnetic field ext . Results are shown in Figure 9. The direction of ext is indicated by the arrows. The bottom figure is divided into two parts each corresponding to the positive and negative values of applied field during the same measurement, respectively, where upper and lower row of images refer to the domain structure obtained for positive and negative values of ext = 4, 8, 16, and 40 kA/m. Because both the sample and the tip are exposed to the in-plane external magnetic field, their out-of-plane magnetization components become weaker at the expense of increasing the in-plane ones. This influences the magnetic interaction between the tip and the sample and stripe domain patterns differ from Scanning 9 that observed without applied ext . One can see slow broadening of the stripes with increasing ext . At 16 kA/m the sample is partly saturated and the magnetic domain structure completely disappeared after reaching 40 kA/m. Then the external magnetic field was switched off (remnant state) and its amplitude was further increased with the opposite polarity. However, the remnant magnetization of the sample from the previous step is responsible for changes in the domain patterns in comparison to the positive magnetic field polarity. Therefore, stripe domain arrangement is visualized even at 40 kA/m showing practically symmetric zig-zag domain pattern.
Conclusions
Combination of magneto-optical Kerr microscopy (MOKM) and magnetic force microscopy (MFM) was successfully used for observations of magnetic domain patterns at surfaces and at cross section of bilayered FeSiB/FeNbSiB and FeN-bCuSiB/CoSiB ribbons. The following types of domains have been detected and discussed.
Fingerprint Domains. Surface closure domains indicating the presence of out-of-plane magnetic anisotropy in the ribbon bulk, coming from local planar compressive stresses originated during the ribbon preparation. MOKM shows high surface magnetic contrast arising from strong in-plane magnetization component and weak out-of-plane magnetization component; MFM exhibits lower magnetic contrast due to the sensitivity to weak out-of-plane magnetization component. MFM response completely vanishes in external magnetic field of 40 kA/m due to the ribbon saturation.
Wide Curved Domains. In-plane magnetic domains coming from local tensile stresses, their directions reflect random orientation of local in-plane easy magnetization axis on the surface. MOKM detects wide curved domains at the outer tensile-stressed side thanks to the sensitivity to in-plane magnetization component; practically no corresponding MFM response was observed.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper. | 4,050.2 | 2018-03-26T00:00:00.000 | [
"Materials Science"
] |
Accurate Determination of Camera Quantum Efficiency from a Single Image
Knowledge of spectral sensitivity is important for high-precision comparison of images taken by different cameras and recognition of objects and interpretation of scenes for which color is an important cue. Direct estimation of quantum efficiency curves (QECs) is a complicated and tedious process requiring specialized equipment, and many camera manufacturers do not make spectral characteristics publicly available. This has led to the development of indirect techniques that are unreliable due to being highly sensitive to noise in the input data, and which often require the imposition of additional ad hoc conditions, some of which do not always hold. We demonstrate the reason for the lack of stability in the determination of QECs and propose an approach that guarantees the stability of QEC reconstruction, even in the presence of noise. A device for the realization of this approach is also proposed. The reported results were used as a basis for the granted US patent.
Introduction
The determination of camera spectral sensitivity (quantum efficiency (QE)) is important for many problems related to image acquisition.These problems include color correction for comparison of colors in images acquired by different cameras and under different illuminations, camera simulations [1], and sensor designs [2].Another example problem is the reconstruction of the "true" color of an object imaged through an absorbing medium (for example, water), i.e., the reconstruction of the color that the object would have in the image if it were taken in air.
The "Gold Standard" colorimetric camera calibration procedure is described in [3][4][5].This is a time-consuming procedure requiring expensive, specialized equipment and controlled conditions.However, even this procedure suffers from subjectivity.Acquired images lack spatial homogeneity; hence, the authors of Ref. [4] used averaging over a 21 × 21-pixel patch in the center of the image.Thus, although the QECs recovered by this technique are termed "ground truth", the validity of this designation remains questionable.Thus, it is not surprising that several approaches have been proposed to simplify the calibration procedure, such as utilizing an LED-based emissive chart [6], taking several images under arbitrary lighting conditions [7], or even taking just a single image of a multicolored target [8].
In these approaches, the image being processed is usually that of a standard reflective color target, such as the Gretag-Macbeth chart with known reflection spectra for each colored patch.Reconstruction of QECs is an ill-posed problem, as noted in [8,9], so the proposed techniques make use of additional constraints, such as the smoothness of the illuminant spectrum, fulfillment of Luther conditions, and non-negativity of QE functions.The ill-posedness of the problem is usually related in the literature to the limited dimensionality of the reflectance spectra [10][11][12].
For example, it was concluded that out of 1257 reflectance spectra from [13], only seven or eight are truly independent [10], and the rest can be constructed from the minimal set.From this conclusion, it follows that using only these seven or eight "almost" linearly independent spectra, QECs can be recovered at seven to eight wavelengths only, which is insufficient for practical purposes.This, in turn, leads to the conclusion that the Munsell chips and the Gretag-Macbeth chart are non-optimal choices for QEC recovery.
It was concluded that the optimal choice of 20 color samples gives almost as good a reconstruction of QECs as the use of all Munsell chips [12].Those authors proposed to minimize the influence of noise by using the principal eigenvector (or rank-deficient pseudoinverse) solution.This paper also states that the sensor dynamic range plays an important role, and the increase of the range from the common 8 to 12 bits significantly improves the reconstruction.The simulation described by the authors has shown that the best root-mean-square error for spectral sensitivity estimation is 0.01796 (all 1269 reflectance spectra, 12-bit dynamic range).
The use of 16 glass dichroic transmission filters was proposed in [14].The reconstruction of QECs required taking 16 images, cubic spline interpolation of the averaged measurements, and power correction.Note a strong overlap between filter transmission curves that led to distortion of the reconstructed curves.In this paper, it is shown that high overlap is the main cause of the reconstructed QECs' distortion.
A spectrally tunable light source was employed for the same purpose in [15].Recently, a comprehensive review of the spectral sensitivity estimation methods and a framework for QE estimation for consumer cameras was published [16].However, the accuracy of the QEC recovery by the proposed approach remains questionable.
Currently, fast, reliable estimation of sensor QECs remains a problem for individual photographers and small companies lacking expensive equipment.It is worth noting that even cameras of the same make and model may have different QECs, as mentioned in [16].The objective of this paper is to propose and describe a fast and accurate method for QEC determination.
Mathematical Formulation
To define notation for parameters, measured values, and spectral functions, the equations describing the color formation model for a trichromatic sensor and Lambertian shading conditions can be written as follows: where v f is a pixel value recorded by a color channel f , I(λ) is a light source depending on wavelength λ, s f (λ) is the sensor quantum efficiency, C(λ) is a target reflectivity function (or spectral signature), and ω describes settable camera-related properties, such as gain, exposure time, etc. Effectively, integration is carried out over the visible range of the spectrum.By sampling spectral functions with the often-chosen ∆λ = 10 nm interval, the integral for a pixel i can be rewritten as a sum: where N is a number of samples with an interval ∆λ in the visible spectrum, and C i (λ n ) is the reflectance imaged at the pixel i.For M color patches, the known light source spectrum, and the patches' reflectivity spectra, the above can be rewritten in matrix form, with The elements of the N × M matrix P consist of patches' reflectivities for each dλ interval, and → F f is an element-wise product of I(λ) and S f .P must be inverted (or pseudoinverted, if M > N) to obtain three QECs.Due to this inversion being ill-posed, several techniques for obtaining sensible solutions have been proposed, such as Tikhonov regularization, Tikhonov derivatives-based regularization [17], linear models using some basis functions [9,12,18], and quadratic programming [19].
Note that M defines an upper bound for the number of samples N, and the greater the number of color patches used, the higher the spectral resolution of reconstructed QECs.Expecting a commonly accepted 10 nm resolution, no fewer than 31 different colors are needed for the 400-700 nm range and no fewer than 36 colors for the extended 380-730 nm range.
Previous Work and the Proposed Approach
The original Macbeth ColorChecker consists of 24 colored patches [20], which are chosen to represent "primary colors" and are "of general interest and utility for test purposes" [21].The latest versions of charts manufactured by X-Rite have 140 or 240 patches.The reflectivity of these patches is known for the spectral range 380-730 nm with a 10 nm resolution.Increasing the number of different colors used in the QEC's reconstruction process or choosing an "optimal" subset of colors does not improve the stability of the solution to Equation (3).The reason for this instability is the large condition number of the matrix P, as was already noted in [5].Even minor perturbations of input data → V f lead to dramatic changes in recovered QECs.Whether all the Munsell colors are used or an optimally chosen subset of these colors, the condition number remains large, which guarantees instability in inversion.
To get a feel for the condition number value, 36 different random patches from the X-Rite ColorChecker were chosen.Repeating colors and glossy patches had previously been eliminated from consideration, leaving 189 different spectra.By increasing the number of random selections and keeping those with the smallest condition number, the latter saturates around the value of 31,000.According to [22], this means that matrix inversion leads to the loss of more than four digits of accuracy (in addition to the loss of precision due to the specifics of arithmetic methods and inaccuracy in input data measurements).In other words, the errors in the input data are multiplied by ~31,000, resulting in a significantly erroneous output.This leads to the conclusion that the reflectivity spectra of the X-Rite patches are not the ones that would allow for accurate QEC recovery.
However, if the reflectivity spectra of patches comprise a disjoint set (only one reflectivity spectrum has a non-zero value at any wavelength), the condition number associated with the QECs' recovery problem is exactly 1, and the precision of the solution is no worse than the input data (that is, measurements of the reflectivity spectra and RGB triplets).For the proof of concept, we conducted a numerical simulation based on Equation (3).To estimate the bandwidth of the spectra required for noise-tolerant QEC recovery, it is assumed that 36 reflective spectra have the Gaussian shape with the same standard deviation σ and maxima that are evenly distributed over the (extended) visible spectrum 380-730 nm, i.e., the number of Gaussians is L = 36.Note that this number of spectra allows for the recovery of up to 36 points on each QE curve.For simulations, the standard spectrum of the incandescent lamp shown in Figure 1 was used, as were QECs found on the Internet for GoPro cameras (their exact shapes and the illumination spectra are irrelevant for the proof of concept).Colors recorded by a hypothetical camera were calculated using Equation ( 3) and scaled such that the maximum value for all colors and all color channels equals 255 (8 bits per pixel per channel).
Subsequently, the RGB triplets were corrupted by random noise, with the amplitude proportional to each value.Thus, K-percent noise changes the pixel value ρ to ρ = min 0, max 255, ρ × 1 + R K 100 , where R is a random number in the [-1, 1] interval.All reported simulations used K = 5 unless otherwise stated.Subsequently, the RGB triplets were corrupted by random noise, with the amplitude proportional to each value.Thus, -percent noise changes the pixel value ρ to = min (0, max 255, * 1 + ℛ ), where ℛ is a random number in the [-1, 1] interval.
All reported simulations used = 5 unless otherwise stated.
Figure 2a shows the overlap of reflectivity spectra when Gaussians have a standard deviation of 15 nm. Figure 2b shows the deviation of the recovered QECs from the ground truth.The recovered QECs look much like those in [23].The metric reflecting the quality of recovery is as follows: where is the ground truth value of QE at the k-th wavelength and is the corresponding recovered value.Reducing the standard deviation to 10 nm leads to an almost perfect recovery of the QECs. Figure 3a shows the dependence of () on the standard deviation of the Gaussians, and Figure 3b shows the dependence of the condition number on σ.The non-monotonic behavior of the standard deviation of error is likely to be related to the randomness of the added noise.The main result of the simulation is that the reduction of the standard deviation of Gaussians representing the reflectivity spectra of color chips from 15 nm to 10 nm leads from nonsensical recovered QECs to almost perfect ones.Figure 2a shows the overlap of reflectivity spectra when Gaussians have a standard deviation of 15 nm. Figure 2b shows the deviation of the recovered QECs from the ground truth.The recovered QECs look much like those in [23].The metric reflecting the quality of recovery is as follows: where Q GT k is the ground truth value of QE at the k-th wavelength and Q R k is the corresponding recovered value.Reducing the standard deviation to 10 nm leads to an almost perfect recovery of the QECs. Figure 3a shows the dependence of E(σ) on the standard deviation of the Gaussians, and Figure 3b shows the dependence of the condition number on σ.The non-monotonic behavior of the standard deviation of error is likely to be related to the randomness of the added noise.The main result of the simulation is that the reduction of the standard deviation of Gaussians representing the reflectivity spectra of color chips from 15 nm to 10 nm leads from nonsensical recovered QECs to almost perfect ones.Subsequently, the RGB triplets were corrupted by random noise, with the amplitude proportional to each value.Thus, -percent noise changes the pixel value ρ to = min (0, max 255, * 1 + ℛ ), where ℛ is a random number in the [-1, 1] interval.
All reported simulations used = 5 unless otherwise stated.
Figure 2a shows the overlap of reflectivity spectra when Gaussians have a standard deviation of 15 nm. Figure 2b shows the deviation of the recovered QECs from the ground truth.The recovered QECs look much like those in [23].The metric reflecting the quality of recovery is as follows: where is the ground truth value of QE at the k-th wavelength and is the corresponding recovered value.Reducing the standard deviation to 10 nm leads to an almost perfect recovery of the QECs. Figure 3a shows the dependence of () on the standard deviation of the Gaussians, and Figure 3b shows the dependence of the condition number on σ.The non-monotonic behavior of the standard deviation of error is likely to be related to the randomness of the added noise.The main result of the simulation is that the reduction of the standard deviation of Gaussians representing the reflectivity spectra of color chips from 15 nm to 10 nm leads from nonsensical recovered QECs to almost perfect ones.However, the authors are not aware of paints or pigments with reflectivity spectra satisfying the criteria formulated above.In this paper, the use of transmitted light instead of reflected light is proposed.Interference filters with ultra-narrow transmission bands are available from many manufacturers.It should be noted that the use of interference filters for colorimetric calibration has been proposed in [11].Those authors used a tungsten-halogen light to illuminate the standard ColorChecker through a set of broadband and narrowband interference filters.Photographs of the board and the spectral power distribution from each patch recorded by a spectrophotometer were then used to estimate the QECs of a camera.In this paper, the selection of a set of filters with non-overlapping bands and their illumination by a broadband light source through a diffusion plate for spatial homogenization are proposed.The transmitted light blob is then photographed.The typical sizes of filters are 1/2 inch and 1 inch.Forty filters assembled in an 8 by 5 array would have a size of approximately 16 by 10 cm.Thus, the use of a single light source is inconvenient due to the inhomogeneous illumination of different filters.The use of an array of identical LEDs, each back-lighting the corresponding interference filter (Figure 4), is proposed.Note that ambient light might substantially affect the accuracy of QEC recovery.Thus, the image must be taken in a dark room.However, the authors are not aware of paints or pigments with reflectivity spectra satisfying the criteria formulated above.In this paper, the use of transmitted light instead of reflected light is proposed.Interference filters with ultra-narrow transmission bands are available from many manufacturers.It should be noted that the use of interference filters for colorimetric calibration has been proposed in [11].Those authors used a tungsten-halogen light to illuminate the standard ColorChecker through a set of broadband and narrowband interference filters.Photographs of the board and the spectral power distribution from each patch recorded by a spectrophotometer were then used to estimate the QECs of a camera.In this paper, the selection of a set of filters with non-overlapping bands and their illumination by a broadband light source through a diffusion plate for spatial homogenization are proposed.The transmitted light blob is then photographed.The typical sizes of filters are 1/2 inch and 1 inch.Forty filters assembled in an 8 by 5 array would have a size of approximately 16 by 10 cm.Thus, the use of a single light source is inconvenient due to the inhomogeneous illumination of different filters.The use of an array of identical LEDs, each back-lighting the corresponding interference filter (Figure 4), is proposed.Note that ambient light might substantially affect the accuracy of QEC recovery.Thus, the image must be taken in a dark room.However, the authors are not aware of paints or pigments with reflectivity spectra satisfying the criteria formulated above.In this paper, the use of transmitted light instead of reflected light is proposed.Interference filters with ultra-narrow transmission bands are available from many manufacturers.It should be noted that the use of interference filters for colorimetric calibration has been proposed in [11].Those authors used a tungsten-halogen light to illuminate the standard ColorChecker through a set of broadband and narrowband interference filters.Photographs of the board and the spectral power distribution from each patch recorded by a spectrophotometer were then used to estimate the QECs of a camera.In this paper, the selection of a set of filters with non-overlapping bands and their illumination by a broadband light source through a diffusion plate for spatial homogenization are proposed.The transmitted light blob is then photographed.The typical sizes of filters are 1/2 inch and 1 inch.Forty filters assembled in an 8 by 5 array would have a size of approximately 16 by 10 cm.Thus, the use of a single light source is inconvenient due to the inhomogeneous illumination of different filters.The use of an array of identical LEDs, each back-lighting the corresponding interference filter (Figure 4), is proposed.Note that ambient light might substantially affect the accuracy of QEC recovery.Thus, the image must be taken in a dark room.
Algorithm for Estimating QECs
Summing up the proposed approach, one can outline the following algorithm: 1.
Take a single image of an array of cells containing interference filters covering the entire visible spectrum (36 or 40 individual filters).
2.
Calculate the average intensity for each cell around the brightest pixel.The choice of radius for averaging depends on the camera resolution but must be the same for all cells.
3.
A vector of reflectivities (or transmissivities in this case) can be obtained from the known peak wavelength for each cell and its conversion to an RGB triplet (for example, as in [24]).4.
Matrix P. is (pseudo-)inverted.The inversion is stable as the matrix condition number is close to 1 (i.e., the matrix is almost diagonal).
The vector
→ F f is element-wise divided by known illumination intensity to obtain QECs for all three channels.
Proposed Implementation and Installation
The most comprehensive sets of narrow band-pass interference filters are offered by Spectrogon [25], Omega Optical [26], Andover Corporation [27], and Thorlabs [28].The transmission spectra of 10 nm FWHM filters manufactured by Omega Optical are published by the manufacturer, and some are shown in Figure 5. Simulations have demonstrated that recovered QECs have a sizable standard deviation of error E = 0.051192 (i.e., around 5 percent) (Figure 6), which is consistent with the calculated value of the condition number 1237.71.The difference between the condition numbers calculated in the simulations described above and those of Omega Optical filters is due to the filters' shape; the latter are far from having a Gaussian shape.
Algorithm for Estimating QECs
Summing up the proposed approach, one can outline the following algorithm: 1. Take a single image of an array of cells containing interference filters covering the entire visible spectrum (36 or 40 individual filters).2. Calculate the average intensity for each cell around the brightest pixel.The choice of radius for averaging depends on the camera resolution but must be the same for all cells.3. A vector of reflectivities (or transmissivities in this case) can be obtained from the known peak wavelength for each cell and its conversion to an RGB triplet (for example, as in [24]).4. Matrix P. is (pseudo-)inverted.The inversion is stable as the matrix condition number is close to 1 (i.e., the matrix is almost diagonal).5.The vector ⃗ is element-wise divided by known illumination intensity to obtain QECs for all three channels.
Proposed Implementation and Installation
The most comprehensive sets of narrow band-pass interference filters are offered by Spectrogon [25], Omega Optical [26], Andover Corporation [27], and Thorlabs [28].The transmission spectra of 10 nm FWHM filters manufactured by Omega Optical are published by the manufacturer, and some are shown in Figure 5. Simulations have demonstrated that recovered QECs have a sizable standard deviation of error = 0.051192 (i.e., around 5 percent) (Figure 6), which is consistent with the calculated value of the condition number 1237.71.The difference between the condition numbers calculated in the simulations described above and those of Omega Optical filters is due to the filters shape; the latter are far from having a Gaussian shape.
The ultra-narrow band-pass filters from Andover Corporation (Figure 7) (the website shows the parameters of the manufactured filters that were used in simulations) have FWHM 3 nm; their spectra have almost no overlap; the standard deviation of error = 0.00204, and the condition number is 1.003775.There is no visible difference between the ground truth QECs and the recovered ones.The ultra-narrow band-pass filters from Andover Corporation (Figure 7) (the website shows the parameters of the manufactured filters that were used in simulations) have FWHM 3 nm; their spectra have almost no overlap; the standard deviation of error E = 0.00204, and the condition number is 1.003775.There is no visible difference between the ground truth QECs and the recovered ones.Interestingly, the simulations indicate that even noisy measurements of RGB triplets lead to lower noise in recovered QECs, which is demonstrated in Figure 8.For an RGB triplet error of 15%, the standard deviation of error for QECs does not exceed 5%.
Because a full set of filters (~36-40) is costly, it was decided to prove the concept with a single filter that was already acquired, specifically the interference filter with maximum transmittance at 532 nm and FWNM 3 nm manufactured by Thorlabs [28].Using the setup shown in Figure 4, the spectra of light passing through the diffuser alone and through both the diffuser and the filter were recorded (Figure 9).Interestingly, the simulations indicate that even noisy measurements of RGB triplets lead to lower noise in recovered QECs, which is demonstrated in Figure 8.For an RGB triplet error of 15%, the standard deviation of error for QECs does not exceed 5%.
Because a full set of filters (~36-40) is costly, it was decided to prove the concept with a single filter that was already acquired, specifically the interference filter with maximum transmittance at 532 nm and FWNM 3 nm manufactured by Thorlabs [28].Using the setup shown in Figure 4, the spectra of light passing through the diffuser alone and through both the diffuser and the filter were recorded (Figure 9).Interestingly, the simulations indicate that even noisy measurements of RGB triplets lead to lower noise in recovered QECs, which is demonstrated in Figure 8.For an RGB triplet error of 15%, the standard deviation of error for QECs does not exceed 5%.
Because a full set of filters (~36-40) is costly, it was decided to prove the concept with a single filter that was already acquired, specifically the interference filter with maximum transmittance at 532 nm and FWNM 3 nm manufactured by Thorlabs [28].Using the setup shown in Figure 4, the spectra of light passing through the diffuser alone and through both the diffuser and the filter were recorded (Figure 9).Due to the point light source, the illumination of both filters is spatially inhomogeneous, leaving uncertainty about how exactly the values of RGB triplets should be calculated.Figure 10 shows the dependence of the value of the green component (the two other components are nearly zero) on the radius of the averaged area.Note that this is essentially the same uncertainty that is present in [4].As this bias is the same for all measurements and the recovered QECs are determined only up to a scale anyway, this uncertainty is not likely to affect the outcome.In our measurements, however, the illumination can be made spatially homogeneous with well-known techniques (for example, see [29]).Due to the point light source, the illumination of both filters is spatially inhomogeneous, leaving uncertainty about how exactly the values of RGB triplets should be calculated.Figure 10 shows the dependence of the value of the green component (the two other components are nearly zero) on the radius of the averaged area.Note that this is essentially the same uncertainty that is present in [4].As this bias is the same for all measurements and the recovered QECs are determined only up to a scale anyway, this uncertainty is not likely to affect the outcome.In our measurements, however, the illumination can be made spatially homogeneous with well-known techniques (for example, see [29]).Due to the point light source, the illumination of both filters is spatially inhomogeneous, leaving uncertainty about how exactly the values of RGB triplets should be calculated.Figure 10 shows the dependence of the value of the green component (the two other components are nearly zero) on the radius of the averaged area.Note that this is essentially the same uncertainty that is present in [4].As this bias is the same for all measurements and the recovered QECs are determined only up to a scale anyway, this uncertainty is not likely to affect the outcome.In our measurements, however, the illumination can be made spatially homogeneous with well-known techniques (for example, see [29]).
Discussion and Conclusions
We propose a technique and describe a device for determining the QECs for photo or video cameras using just a single picture.The main part of the device is a set of ultranarrow band-pass interference filters.The spectra of these filters should overlap with each other as little as possible for reliable noise-tolerant recovery of QECs.The number of filters employed determines the number of wavelengths at which QECs are recovered.This suggests the use of filters with FWHM not exceeding 3 nm (and preferably with 1 nm FWHM) for maximally accurate recovery.The device can be used by manufacturers of imaging sensors and cameras and individual photographers for fast colorimetric calibration.
The numerical results given in the paper show that the main cause of inaccuracies in QECs reconstruction using images of colored chips is the input data noise amplification.Reduction of noise can be achieved by using a disjoint (non-overlapping) set of input data elements-in our case, signals from ultra-narrow band filters.
The proposed approach allows us to estimate QECs much more quickly than those approaches mentioned in introduction, as it only requires taking a single photograph.The "Gold Standard" technique produces the same results but requires at least 20-30 min to obtain 36 points on the QECs.To estimate QECs with the proposed technique, one needs to take a single image, which may take just a few seconds.The techniques that use images of colored chips are less accurate and often lead to the appearance of artifacts, as mentioned above.
The recent developments of pigments utilizing quantum dots [30] allow for the possibility of replacing interference filters with pigments with different properties.This direction of research has potential and deserves further investigation.
This work shows by means of numerical simulation that the use of ultra-narrow band interference filters allows for accurate reconstruction of camera QECs even in the presence of noise in the input data.
Discussion and Conclusions
We propose a technique and describe a device for determining the QECs for photo or video cameras using just a single picture.The main part of the device is a set of ultra-narrow band-pass interference filters.The spectra of these filters should overlap with each other as little as possible for reliable noise-tolerant recovery of QECs.The number of filters employed determines the number of wavelengths at which QECs are recovered.This suggests the use of filters with FWHM not exceeding 3 nm (and preferably with 1 nm FWHM) for maximally accurate recovery.The device can be used by manufacturers of imaging sensors and cameras and individual photographers for fast colorimetric calibration.
The numerical results given in the paper show that the main cause of inaccuracies in QECs' reconstruction using images of colored chips is the input data noise amplification.Reduction of noise can be achieved by using a disjoint (non-overlapping) set of input data elements-in our case, signals from ultra-narrow band filters.
The proposed approach allows us to estimate QECs much more quickly than those approaches mentioned in introduction, as it only requires taking a single photograph.The "Gold Standard" technique produces the same results but requires at least 20-30 min to obtain 36 points on the QECs.To estimate QECs with the proposed technique, one needs to take a single image, which may take just a few seconds.The techniques that use images of colored chips are less accurate and often lead to the appearance of artifacts, as mentioned above.
The recent developments of pigments utilizing quantum dots [30] allow for the possibility of replacing interference filters with pigments with different properties.This direction of research has potential and deserves further investigation.
This work shows by means of numerical simulation that the use of ultra-narrow band interference filters allows for accurate reconstruction of camera QECs even in the presence of noise in the input data.
Patents
US Patent No. US11,202,062B2, titled "Methods and systems of determining quantum efficiency of a camera", issued on 14 December 2021, claiming priority to the provisional application No. 62/589,104, filed on 21 November 2017.
Figure 1 .
Figure 1.Spectrum of the illuminant used in simulations.
Figure 2 .
Figure 2. (a) Gaussian reflectance spectra in case of significant overlap; (b) comparison of ground truth (GT) and recovered (Rec) QECs for reflectance spectra shown in (a).
Figure 1 .
Figure 1.Spectrum of the illuminant used in simulations.
Figure 1 .
Figure 1.Spectrum of the illuminant used in simulations.
Figure 2 .
Figure 2. (a) Gaussian reflectance spectra in case of significant overlap; (b) comparison of ground truth (GT) and recovered (Rec) QECs for reflectance spectra shown in (a).Figure 2. (a) Gaussian reflectance spectra in case of significant overlap; (b) comparison of ground truth (GT) and recovered (Rec) QECs for reflectance spectra shown in (a).
Figure 2 .
Figure 2. (a) Gaussian reflectance spectra in case of significant overlap; (b) comparison of ground truth (GT) and recovered (Rec) QECs for reflectance spectra shown in (a).Figure 2. (a) Gaussian reflectance spectra in case of significant overlap; (b) comparison of ground truth (GT) and recovered (Rec) QECs for reflectance spectra shown in (a).
Figure 3 .
Figure 3. Reflectance spectra have a Gaussian shape.The errors in reconstructed QECs dramatically reduce when overlap between nearest Gaussians is approaching zero: (a) for the case of 36 measurements in the visible spectrum, the drop occurs between 10 and 7 nm; (b) the drop is directly related to the condition number of the matrix P. The condition number is approaching a value of 1.
Figure 4 .
Figure 4. Proposed setup for a single cell with an interference filter.The complete device consists of 36 or 40 such cells.Detailed explanations are in the text.
Figure 3 .
Figure 3. Reflectance spectra have a Gaussian shape.The errors in reconstructed QECs dramatically reduce when overlap between nearest Gaussians is approaching zero: (a) for the case of 36 measurements in the visible spectrum, the drop occurs between 10 and 7 nm; (b) the drop is directly related to the condition number of the matrix P. The condition number is approaching a value of 1.
Figure 3 .
Figure 3. Reflectance spectra have a Gaussian shape.The errors in reconstructed QECs dramatically reduce when overlap between nearest Gaussians is approaching zero: (a) for the case of 36 measurements in the visible spectrum, the drop occurs between 10 and 7 nm; (b) the drop is directly related to the condition number of the matrix P. The condition number is approaching a value of 1.
Figure 4 .
Figure 4. Proposed setup for a single cell with an interference filter.The complete device consists of 36 or 40 such cells.Detailed explanations are in the text.
Figure 4 .
Figure 4. Proposed setup for a single cell with an interference filter.The complete device consists of 36 or 40 such cells.Detailed explanations are in the text.
Figure 5 .
Figure 5. Transmission curves of some 10 nm filters manufactured by Omega Optical[26].The nearest curves have some overlap.Figure5.Transmission curves of some 10 nm filters manufactured by Omega Optical[26].The nearest curves have some overlap.
Figure 5 .
Figure 5. Transmission curves of some 10 nm filters manufactured by Omega Optical[26].The nearest curves have some overlap.Figure5.Transmission curves of some 10 nm filters manufactured by Omega Optical[26].The nearest curves have some overlap.
Figure 6 .
Figure 6.Difference between ground truth (GT) and recovered (Rec) quantum efficiency curves for 10 nm Omega Optical filters.
Figure 7 .
Figure 7. Transmission curves of some 3 nm filters manufactured by Andover Corporation [27].The nearest curves have almost no overlap.
Figure 7 .
Figure 7. Transmission curves of some 3 nm filters manufactured by Andover Corporation [27].The nearest curves have almost no overlap.
Figure 7 .
Figure 7. Transmission curves of some 3 nm filters manufactured by Andover Corporation [27].The nearest curves have almost no overlap.
Figure 8 .
Figure 8. Dependence of mean and standard deviation of error in recovered QECs as a function of error in measured RGB triplets.Each simulation has been repeated five times.
Figure 9 .
Figure 9. Spectra of light passed through the diffuser and through the diffuser and the filter.
Figure 8 .
Figure 8. Dependence of mean and standard deviation of error in recovered QECs as a function of error in measured RGB triplets.Each simulation has been repeated five times.
Figure 8 .
Figure 8. Dependence of mean and standard deviation of error in recovered QECs as a function of error in measured RGB triplets.Each simulation has been repeated five times.
Figure 9 .
Figure 9. Spectra of light passed through the diffuser and through the diffuser and the filter.
Figure 9 .
Figure 9. Spectra of light passed through the diffuser and through the diffuser and the filter.
Figure 10 .
Figure 10.The dependence of the measured value in the green channel on the radius of averaging.
Patent No. US11,202,062B2, titled "Methods and systems of determining quantum efficiency of a camera", issued on 14 December 2021, claiming priority to the provisional application No. 62/589,104, filed on 21 November 2017.
Figure 10 .
Figure 10.The dependence of the measured value in the green channel on the radius of averaging. | 7,953.4 | 2024-07-01T00:00:00.000 | [
"Physics"
] |
Preparation and properties of silicone rubber materials with foam/solid alternating multilayered structures
In this paper, silicone rubber materials with foam/solid alternating multilayered structures were successfully constructed by combining the two methods of multilayered hot-pressing and supercritical carbon dioxide (SCCO2) foaming. The cellular morphology and mechanical properties of the foam/solid alternating multilayered silicone rubber materials were systematically studied. The results show that the growth of the cell was restrained by the solid layer, resulting in a decrease in the cell size. In addition, the introduction of the solid layer effectively improved the mechanical properties of the microcellular silicone rubber foam. The tensile strength and compressive strength of the foam/solid alternating multilayered silicone rubber materials reached 5.39 and 1.08 MPa, which are 46.1% and 237.5% of the pure silicone rubber foam, respectively. Finite element analysis (FEA) was applied and the results indicate that the strength and proportion of the solid layer played important roles in the tensile strength of the foam/solid alternating multilayered silicone rubber materials. Moreover, the small cellular structures in silicone rubber foam can provided a high supporting counterforce during compression, meaning that the microcellular structure of silicone rubber foam improved the compressive property compared to that for the large cellular structure of silicone rubber foam. The silicone rubber materials with foam/solid alternating multilayered structure have been constructed by combining two methods of the multilayered hot-pressing and supercritical carbon dioxide (SCCO2) foaming. The growth of the cell is restrained by the solid layer, resulting in a decrease of the cell size. In addition, the introduction of the solid layer can effectively improve the mechanical properties of the microcellular silicone rubber foam. The experimental results are analyzed by finite element analysis (FEA).
Introduction
Silicone rubber foam is a porous polymer widely used in packaging, transportation, electronics, aerospace, and other fields [1][2][3][4]. It not only has the advantages of the high/low temperature resistance, aging resistance, radiation resistance, waterproofing, and biocompatibility of silicone rubber [5,6], but also has the physical properties of low density, high elasticity, and abilities to absorb mechanical vibrations and impact excellently [7,8]. However, it is difficult to control the cellular structure of silicone rubber foam by means of traditional preparation methods, resulting in poor mechanical properties. Hence, how to improve the mechanical properties of silicone rubber foam has become a focus of many scholars [9][10][11]. For example, Bai et al. [12] improved the cell morphology and mechanical properties of microcell silicone rubber foam by adding nanometer graphite into the foam. Luo et al. [13] found that the cell size of the foam had a significant influence on the mechanical properties of silicone rubber foam materials. In addition, Xiang et al. [14] found that the microcellular and nanocellular structures played an important role in improving the mechanical properties of silicone rubber foam. Park et al. [15] found that the mechanical properties of silicone rubber foam can be improved by regulating its microstructure. Based on the existing research, the improvement of the mechanical properties of silicone rubber foam is still limited. Therefore, it is necessary to seek other ways to improve the mechanical properties of silicone rubber foam materials.
Materials with multilayered structures in nature such as shells, feathers, and butterfly wings have rich multilayered interfaces, alternating periodic arrangements and excellent mechanical properties. Inspired by multilayered structure, scholars have invented many molding methods to prepare multilayered polymeric composite materials. Micronano multilayer coextrusion technology [16][17][18][19][20][21] and layer-by-layer stacking technology [22] have mainly been studied. A multilayered structure can improve the barrier performance, acoustic absorption performance, electromagnetic shielding performance, mechanical strength, and puncture resistance. For example, Zhao et al. [23] successfully prepared polypropylene (PP) foam board with a foam/solid alternating multilayered structure through multilayered coextrusion technology, and found that the mechanical properties of the foam board with the foam/solid alternating multilayered structure were much higher than those of a pure PP foam board. Jiang et al. [24] prepared BT/NBR-PU foam materials with multilayered structures, and the BT/NBR-PU foam materials with multilayered structures had better sound absorption performance than the single materials. Zhou et al. [25] prepared PMMA multilayered microporous foam materials by means of laminated hot-pressing, and found that the regulation of the cellular morphology of the microporous foam improved. It can be seen from the above results that the construction of the foam/solid alternating multilayered structure in silicone rubber matrix may greatly improve the mechanical properties of silicone rubber foam materials. Within the scope of our knowledge, there are few reports on the research of multilayered silicone rubber foam materials.
In this work, silicone rubber materials with alternating multilayered structures were successfully constructed by stacking and supercritical fluid foaming. The cellular characteristics and mechanical properties of the foam/ solid alternating multilayered silicone rubber foam were systematically studied. Preparation of silicone rubber with a foam/solid alternating multilayered structure
Materials
The formula in Table 1 is used to prepare silicone rubber mixture, and a mixer (PolyLab OS-RheoDrive 7, HAAKE, Germany) is used to mix the materials. The temperature is 105°C and the rotation speed is 90 r/min. First, silicone rubber, hydroxyl silicone oil, and silica are blended for 25 min. After that, silicone rubber mixture is removed and the mixer again for 25 min. After cooling to room temperature, silicone rubber mixture is mixed with the vulcanizing agent in the mixer. Finally, the mixed silicone rubber is applied to the preparation of foam/solid alternating multilayered silicone rubber foam materials after being held at room temperature for 24 h. Figure 1 shows the preparation process for the foam/solid alternating multilayered silicone rubber foam materials. First, the samples in Table 1 are made into sheets with a thickness of 0.2 mm. The samples are cut into 30 mm × 40 mm sheets, according to the size of the mold. Then, sample A 1 is alternately stacked with the sheets from the samples A 2 , A 3 , and A 4 . Different alternating multilayered silicone rubber foam samples are constructed according to Table 2. The sample is placed in a mold (30 mm × 40 mm × 2 mm) and pressed by a flat vulcanizer (Laboratory platen press, P300E) for 10 min at room temperature. After that, the samples are prevulcanized for 10 min at 125°C by the flat vulcanizer to prepare alternating multilayered silicone rubber.
The prevulcanized alternating multilayered silicon rubber sheet is cut into sample strips with dimensions of 10 mm × 30 mm. SCCO 2 is used as the foaming agent, and supercritical fluid foaming equipment is used to produce silicone rubber foam according to the process in Table 2. The pressureholding time is 30 min. Then, the kettle is quickly decompressed within 2 s, and the sample strip is removed and placed in an air-circulating oven at 160°C for 30 min to full vulcanization. Finally, the sample is removed and placed in the aircirculating oven at 190°C for 60 min for heat treatment to obtain foam/solid alternating multilayered silicone rubber foam materials.
Analysis of the apparent density
The apparent density of the sample is measured by a Vernier caliper and analytical balance. The density of the foam layer is computed by using a two-phase model [26].
where ρ foam is the density of the foam layer, ρ soild is the density of the solid layer, ρ is the density of the alternating layers of the foam/solid silicone rubber foam materials, X foam is the volume fraction of the foam layer, X solid is the volume fraction of the solid layer, d is the thickness of the foam sample, d solid is the total thickness of the solid layer in the scanning electron microscopy (SEM) images.
Analysis of the cellular morphology and layer structure
A field emission scanning electron microscope (SIGMA 300, Zeiss, Germany) is used to observe the cellular structures and alternating multilayered structure of silicone rubber foam. The sample is first soaked in liquid nitrogen for 5 min, then a section of the sample is sprayed with gold, and this section of the sample is observed by SEM. Image J software used to analyze the SEM images and obtain the cell size of the samples. The following formula is used to calculate the cell density N f (cells/cm 3 ).
where n is the number of cells in the SEM image of silicone rubber foam section, and A is the area of the SEM image (cm 2 ).
Rheological property test
A rubber processing analyzer (RPA2000, ALPHA, America) is used to test silicone rubber mixture. The test temperature is 165°C and the test time is 60 min. The energy storage modulus (G′), loss modulus (G″) and viscosity (η*) changes over time during the silicone rubber mixture vulcanization process.
Mechanical performance
The tensile and compression properties of the foam/solid alternating multilayered silicone rubber foam are tested by a solid rheology analyzer (RSA G2, TA, America). The tensile test samples are cut into dumbbell samples with a length of 20 ± 0.5 mm, a width of 5 ± 0.1 mm, and a thickness of 2 ± 0.5 mm. The tensile rate during
Finite element analysis (FEA)
A numerical two-dimensional plane strain mode in silicone rubber foam is designed. The model dimension is 200 × 200 µm. Circular voids are distributed uniformly with a radius from 2.5 to 50 μm and the void fraction is set at 54.0%. Thhe Mooney-Rivlin constitutive model is adapted for the silicone rubber. The boundary conditions are as follows: the bottom is constrained, both the left and right sides are free, the loading is from the upper side, and the compressive rate is set at 15.0%. Then, the supporting counterforce of the cell is calculated by ANSYS software which is based on the finite element method [27]. A linear elastic model of the uniaxial tensile test curves from the experimental materials is used for qualitative analysis. A three-dimensional finite element model of the foam/solid alternating multilayered silicone rubber is designed. Three-dimensional finite element analysis is used to study the stress distribution and tensile strength of the samples with different layered structures. The size of the model is 57 × 15 × 3 mm. The vertical displacement is 57 mm (strain is 100%), and the stress distribution and tensile load are analyzed.
Results and discussion
Basis of the foam/solid alternating multilayered silicone rubber foam Figure 2 shows the effect of the hydroxyl silicone oil content on the storage modulus (G′), the loss modulus(G″), and the viscosity(η*) of silicone rubber. During the whole process of vulcanization, the content of hydroxyl silicone oil, and the G′, G″, and η* values for the silicone rubber materials gradually decrease. G′ is essentially the elastic modulus, which is an index of the rebound after deformation. The larger G′ is, the easier it is to recover after deformation. However, the loss modulus and viscosity reflect the deformation ability. As shown Fig. 2, the G′, G ″, and η* of sample A 1 are highest among all other samples, which is not conducive to the growth and formation of cells during foaming process. Arefmanesh and Advarn et al. [28,29] investigated the growth process of diffusioncontrolled cells in a viscous fluid, and found that the greater the viscosity of silicone rubber substrate is, the greater the resistance to cell growth. The stress of the elastic energy acting on the cellular interface prevent the cell from growing and promotes cell shrinkage during the cellular shaping stage. During supercritical fluid foaming, the elastoplasticity of silicone rubber matrix plays an important role in controlling the microcellular structure [30]. Hence, the above analysis conclusion is the basis for the construction of the Fig. 2 The effect of the hydroxyl silicone oil content on the G′, G″, and η* of the silicone rubber. A Storage modulus, B loss modulus, and C viscosity. Hydroxyl silicone oil content: A 1 -1 phr, A 2 -3 phr, A 3 -5 phr, and A 4 -7phr foam/solid alternating multilayered silicone rubber foam materials.
Cellular morphology of the foam/solid alternating multilayered silicone rubber foam
Construction of the alternating multilayered foam/solid silicone rubber foam Figure 3 shows the influence of the silicone oil content on the cellular morphology of the silicone rubber foam. Table 3 shows the density, cell size, and cell density statistics of the samples. According to Fig. 3 (S 1 ) and Table 3, one can see that the cells are not present in silicone rubber matrix when the silicone oil content is 1 phr. As the silicone oil content exceeds 3 phr, cells appears in silicone rubber matrix. Moreover, with increasing silicone oil content, the cell size of the microcellular silicone rubber foam increases, but the cell density decreases. This is because in the prevulcanized silicone rubber matrix, the partial vulcanized silicone rubber matrix exhibits elasticity, while the unvulcanized silicone rubber matrix shows plasticity. The silicone oil has a plasticizing effect on the silicone rubber matrix. Hence, when the content of silicone oil is low, the prevulcanized silicone rubber matrix has an increased elasticity. During foaming, elastic shrinkage occurs in the silicone rubber foam, leading to the reduction in the cell size or even a disappearance of the cellular structure. When the content of silicone oil is high, the prevulcanized silicone rubber matrix retains great plasticity. Therefore, silicone rubber matrix can retain the microcellular structure, which results in the formation of microcellular silicone rubber foam. Figure 4 shows the cellular morphology of the foam/solid alternating multilayered silicone rubber materials. From Fig. 4 (S 5 ), it can be seen that when the silicone oil content ratio of the foam layer and solid layer is 3 to 1, a foam/solid alternating multilayered structure cannot be obtained, and cracks appear in the sample. When the silicone oil content ratio of the foam layer and solid layer exceeds 5 to 1, foam/ solid alternating multilayered silicone rubber materials are successfully built (S 6 and S 7 ). The interface between the foam layer and solid layer is continuous and dose not crack. The solid layer is flat, and the foam layer has dense and uniform cells, which is an obvious closed-cell foam. Table 3 shows that under the same foaming conditions, the cell size in the foam/solid alternating multilayered silicone rubber foam is significantly smaller than that in the pure silicone rubber foam, and its cell density is higher than that in the pure silicone rubber foam. In addition, the average cell size of the foam/solid alternating multilayered silicone rubber material (S 7 ) is 3.98 μm, which decreases to 14.23 μm compared with that of the pure silicone rubber foam (S 4 ). The cell density of the foam layer is 6.82 × 10 9 cells/cm 3 , which is much higher than that of the pure silicone rubber foam (S 4 , 1.75 × 10 8 cells/cm 3 ). This may be due to two reasons: on the one hand, the solid layer restricts the cellular growth in the foam layer as shown in Fig. 5. On the other hand, the solid layer squeezes the foam layer during the formation of the cellular structure, which promotes the shrinkage of the foam layer. These two reasons result in a Effect of the saturation pressure on the cellular morphology of the foam/solid alternating multilayered silicone rubber foam Figure 6 shows the effect of the saturation pressure on the cellular morphology of the foam/solid alternating multilayered silicone rubber foam. As shown in Fig. 6 and Table 3, with increasing saturation pressure, the average According to the classical nucleation theory [31][32][33], the higher the saturation pressure is, the larger the nucleation rate. Therefore, with an increase in the saturation pressure, the cell size of the foam/solid alternating layers of silicone rubber foam gradually decreases, and the cell density of the foam/solid alternating multilayered silicone rubber foam gradually increases.
Effect of the saturation temperature on the cellular morphology of the foam/solid alternating multilayered silicone rubber foam Figure 7 shows the effect of the saturation temperature on the cellular morphology of the foam/solid alternating multilayered silicone rubber foam. From Fig. 7 and Table 3, one can see that with an increase in the saturation temperature, the average cell size gradually increases, and the cell density gradually decreases. When the saturation temperature is 40°C, the cell size of the sample (S 11 ) reaches 0.92 μm, and the cell density reaches 2.64 × 10 10 cells/cm 3 . The result is attributed to three reasons: first, the higher the saturation temperature is, the greater the nucleation barrier, which leads to a reduction in the nucleation rate. The second one is that with an increase in the saturation temperature, the content of carbon dioxide entering silicone rubber matrix decreases, leading to a reduction in the nucleation rate. Finally, the cells are more likely to coalesce because the strength of silicone rubber matrix decreases when the saturation temperature increases. The phenomenon is consistent with the conclusions reached by Hong and Lee [34] and Yang et al. [35].
Mechanical properties of the foam/solid alternating multilayered silicone rubber foam
The effect of the cellular structure on the tensile properties of silicone rubber materials , which are 46.07% and 44.6% higher than that of (S 3 ) and (S 4 ), respectively. In addition, the elongation at break of the foam/solid alternative multilayered silicone rubber foam reaches 585.91% (S 6 ) and 696.06% (S 7 ), indicating that the solid layer can retain a high elongation at break in the pure silicone rubber foam. Compared with that break of the pure solid silicone rubber materials, the elongation at break of sample S 6 and sample S 7 are improved by 70.13% and 102.12%, respectively, which indicates an increased tensile deformation. However, the tensile strength and elongation at break of sample S 5 are both relatively low. According to Fig. 4, this is because when the viscosity difference between the foam layer and the solid layer is relatively small, crack defects are likely to form at the layer interface, resulting in poor mechanical properties. Figure 9 shows the influence of the foaming parameters on the tensile properties of the foam/solid alternating multilayered silicone rubber foam materials. According to Fig. 9A, when the saturation pressure increases from 12 to 18 MPa, the tensile strength and elongation at break of the foam/solid alternating multilayered silicone rubber increase from 3.88 to 5.25 MPa, and 416.09% to 699.28%, respectively. This is because the saturation pressure mainly affects the cell size and cell density of the foam layer. The higher the foam pressure is, the smaller the cell size. Therefore, a sample with a small cell size can exhibit improved mechanical properties [36]. In addition, with an increase in the cell size, defects form at the interface, which leads to a reduction in the mechanical properties. From Fig. 9B, when the saturation temperature increases from 40 to 70°C, the tensile strength and elongation at break of the foam/solid Fig. 8 Tensile strength comparison of alternating multilayered silicone rubbers with that of pure solid silicone rubber and pure foam. The content ratios of the solid layer and the foam layer are A 1:3, B 1:5, and C 1:7 alternating multilayered silicone rubber materials decrease from 5.34 to 4.01 MPa, and 789.22% to 536.16%, respectively. With increasing saturation temperature, the cell size of the foam/solid alternating multilayered silicone rubber foam gradually increases, indicating that the mechanical properties of the foam/solid alternating multilayered silicone rubber foam materials decrease.
The effect of the number of layers on the tensile properties is shown in Fig. 10. When the number of layers increases to 3, the tensile strength of the foam/solid alternating multilayered silicone rubber foam decreases sharply to 5.39 MPa. With an increase in the number of layers, the tensile strength of the foam/solid alternating multilayered silicone rubber foam decreases slightly, but the tensile strength of all samples is higher than 5 MPa. To better understand the effect of the alternating multilayered structure on the tensile properties of the foam/solid alternating multilayered silicone rubber foam, we analyze the results by theoretical analysis and finite element simulation.
The stress of the solid layer and the foam layer can be expressed as follows: where σ 1fs is the fracture stress of the solid layer, σ 2fs is the fracture stress of the foam layer, ε 1 is the fracture strain of the solid layer, ε 2 is the fracture strain of the foam layer, E 1 is the elastic modulus of the solid layer, and E 2 is the elastic modulus of the foam layer. From the above study, σ 1 is higher than σ 2 but ε 2 is higher than ε 1 . According to the parallel model, it is assumed that the fracture strain is the same during the tensile process as follows: where ε is the fracture strain of the foam/solid alternating multilayered silicone rubber foam. Hence, the stress of the foam/solid alternating multilayered silicone rubber foam can be expressed as: where σ is the fracture stress of the foam/solid alternating multilayered silicone rubber foam, and η is the proportion of the solid layer thickness. In addition, finite element analysis is used to analyze the stress of the samples according to the equivalent modulus of the elasticity, which can be expressed as: When a fracture forms, the fracture strain is the same, i.e., ε ¼ ε 1 ¼ ε 2 . The above equation can be expressed as: The stress distribution of the samples is shown in Fig. 11. In this study, it is assumed that the strain is the same, so there is no significant change in the stress distribution. Figure 12 shows that the theoretical analysis and simulation results are essentially the same. With an increase in the number of layer, the tensile strength of the foam/solid alternating multilayered silicone rubber foam decreases, indicating that the tensile strength of the foam/solid alternating multilayered silicone rubber foam mainly depends on the strength and proportion of the solid layer. Moreover, the Fig. 9 The effect of the foaming parameters on the tensile properties of the alternating multilayered silicone rubber (9 L): A pressure and B temperature Fig. 10 The tensile properties of foam/solid alternating multilayered silicone rubber foam materials with different numbers of layers tensile strength obtained from both theoretical analysis and simulation calculation is higher than the experimental value. It is likely that the theoretical analysis and simulation calculation do not consider the interface effect. The cellular structure of the interface layers exhibits defects during the stretching process, which may lead to the decrease in the tensile strength of the foam/solid alternating multilayered silicone rubber foam. Therefore, through this section of the investigation, an approach to improve the tensile properties of the foam/solid alternating multilayered silicone rubber foam is formulated, to improve the strength and proportion of the solid layer.
The effect of the cellular structure on the compressive properties of silicone rubber materials Figure 13 shows the effect of the cell size on the stress-strain curves of silicone rubber foam. From Fig. 13, one can see that the stress-strain curve of the microcellular silicone rubber foam with 18.21 μm cells is obviously higher than that of silicone rubber foams with 73.78 μm cells. When the foam density is~0.6 g/cm 3 and the cell volume fraction is~50%, the compressive stress of the foam with 18.21 μm cells is improved by~20.0% compared with that of the foam with 73.78 μm cells. These investigations [37][38][39][40] are in agreement with our results, and they deduce that foams with small cells are stronger than those with larger cells. Furthermore, the smaller cells in the bimodal thermoplastic foam are believed to be the main reason for the significantly elevated compressive properties. When the strain reaches to 40%, the stress sustained by the foam/solid alternating multilayered silicone rubber foam can reaches 0.19 MPa, while the stress of the pure silicone rubber foam is only 0.32 MPa. The compressive strength of the foam/ solid alternating multilayered silicone rubber improves by 237.5% compared with that of the pure silicone rubber foam.
However, these investigations [37][38][39][40] only prove that the microcellular structure can improve the compressive properties, but they do not give the reason. Therefore, finite Fig. 11 Stress distribution of silicone rubber materials with different structures: A pure silicone rubber, B pure silicone rubber foam, C 3 L, D 5 L, E 7 L, and F 9 L Fig. 12 Comparison of the analytical and simulation results of the fracture stress of the alternating multilayered silicone rubber materials element analysis is used to obtain the reason for these results. Figure 14 shows the strain distribution of the different cell sizes. The cellular wall of the microcellular silicone rubber foam receives a supporting counterforce during the compressive process. The higher the supporting counterforce is, the stronger the compressive resistance of the microcellular silicone rubber foam. The effect of the cell size on the supporting counterforce of the cellular wall is shown in Fig. 15. One can see that with the reduction in the cell size, the supporting counterforce of the cellular wall increases as shown in Fig. 15. In other words, the small cellular structure leads to a decrease in the radius of curvature, which can obtain an increased supporting counterforce. Thus, the microcellular silicone rubber foam can improve the compressive property. Figure 16 shows the compressive stress-strain curve of the silicone rubber materials with three different structures. The compression behavior of silicone rubber foam materials is reflected in the compressive stress-strain curve as follows: (1) In the linear elastic stage, the initial stress should become linear. (2) In the plateau stage, with an increase in the compression strain, the stress is maintained in a constant range. (3) In the compaction stage, cell collapse leads to materialization of the foam materials, and the stress and strain return to having a linear relationship. It can be seen from Fig. 16 that the compressive stress-strain curve of the alternating multilayered silicone rubber foam materials and pure foam silicone rubber materials are obviously divided into three regions, which is consistent with the compression behavior of the traditional foam materials described by Gibson et al. [41], but the compressive stress-strain curve of the pure solid silicone rubber is essentially linear. The plateau area for the foam/solid alternating multilayered silicone rubber materials in the compressive stress-strain curves is narrower than that for the pure foam silicone rubber materials and ranges from 15 to 35%. This is because the cell size of the foam layer is small and the deformation required for compaction of the whole material changes slightly. In addition, the compression resistance of the foam/solid alternating multilayered silicone rubber materials is higher than that of the pure silicone rubber foam materials. The foam/solid alternate multilayered structure enables the foam layer to form small cells that improve the compression resistance of the foam materials [42]. When the strain reaches 40%, the stress sustained by the foam/solid alternating multilayered silicone rubber foam reaches 1.08 MPa, while the stress of the pure silicone rubber foam is only 0.32 MPa. The compressive strength of the foam/solid alternating multilayered silicone rubber is 237.5% higher than that of the pure silicone rubber foam.
Conclusion
In this paper, the microcellular structure of silicone rubber foam is controlled by adjusting the viscoelasticity of the silicone rubber matrix, and foam/solid alternating multilayered silicone rubber materials are successfully constructed by means of layer-by-layer stacking and supercritical foaming. By introducing solid layers into the silicone rubber foam materials, the cellular growth is restricted, and the foam layer forms a small cellular structure. The tensile strength of the foam/solid alternating multilayered silicone rubber materials reaches 5.39 MPa, which is 46.1% higher than that of the pure silicone rubber foam. Moreover, when the compressive strain reaches 40%, the stress of the foam/solid alternating multilayered silicone rubber materials reaches 1.08 MPa, which is 237.5% higher than that of the pure silicone rubber foam. It may therefore be possible to improve the tensile properties of the foam/solid alternating multilayered silicone rubber foam by increasing the strength and proportion of the solid layer.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons. org/licenses/by/4.0/. Fig. 15 The effect of the cell size on the supporting counterforce of the cell Fig. 16 Stress-strain curves of the silicone rubber materials with different structures | 6,766.2 | 2021-02-01T00:00:00.000 | [
"Materials Science"
] |
A Novel Space Large Deployable Paraboloid Structure with Power and Communication Integration
*e combination of a solar array and a communication antenna can reduce the entire mass, physical size, and cost in space applications. Currently, related studies mainly focus on the combination of the two structures on the one flat plate structure (FPS). Compared with the FPS, a paraboloid structure has a lower surface density and higher conversion efficiency. *erefore, a novel space large deployable paraboloid structure with power and communication integration (SSPCI) is proposed and designed in detail, for spacecraft on a sun synchronous earth orbit; it consists of a cable mesh membrane reflector (CMMR), an energy conversion device (ECD), and a three-extensible-rod (TER) pointing mechanism. To achieve the goal of integrating power and communication, the TER pointing mechanism drives the CMMR/ECD to track the sun in the sunshine region or to turn to face the ground station/other target in the Earth’s shadow region. *en, through simulation analyses of the deploying process, static force at a limit orientation, and sun tracking process of the SSPCI, it is proved that the SSPCI is feasible and has satisfactory performance. Finally, deploying experiments of the folded hoop of the CMMR and sun tracking experiments of the TER pointing mechanism on the ground were carried out successfully, which proves that the folded hoop can be deployed successfully with fairly high deploying dependability, and the TER pointing mechanism is feasible for the SSPCI from the mechanism principle and the control mode in space applications indirectly. Moreover, the tracking accuracy of the TER pointing mechanism is estimated to be within ±0.4° although the machining precision of its components is not high.
Introduction
As limitations of size of rockets carrying and cost of launching, it is a challenging issue to reduce the mass, physical size, and cost of major spacecraft components in space applications [1]. Generally, most spacecraft include at least one large-size solar array to provide power and one large-size communication antenna to establish a high gain and robust communication with the ground station. Both the two structures require a massive and large back support to maintain their surface tolerance. Moreover, they may interfere with each other in space. One solution to reduce the burden of these two large structures on the spacecraft is to combine them into one.
Currently, related research mainly focuses on the combination of a solar array and a communication antenna on the one flat plate structure (FPS). One technical scheme is patch antennas are directly integrated onto the solar cells of a small satellite to save valuable surface real estate that are proposed by Turpin and Baktur [4]. O'Conchubhair et al. studied the effect of a solar cell lattice on the performance of an IFA antenna [5]. An et al. designed, manufactured, and tested a Ka band reflectarray antenna integrated with solar cells, where solar cells are used as the reflectarray antenna substrate [6]. Because the communication antenna is on the front of the solar array, although the antenna has satisfactory radiation characteristics, an optical blockage for sunlight must exist in this structure. For the second idea, related research is less than the one of the first idea. Vaccaro et al. presented a new antenna which combines solar cells and printed patches. It is designed so as to accommodate the solar cells which provide power to a monolithic microwave integrated circuit amplifier [7]. Solar cells are on the top of this antenna. Similarly, solar cells may affect radiation characteristics of the antenna because the solar array is on the front of the communication antenna.
For the second technical scheme, it is easier to implement than the first one. Huang proposes the back of the microstrip reflectarray can be used for solar arrays [8]. Holland et al. proposed an origami style solar/antenna panel where the phased array is located on the opposite side of the solar array [9]. Compared with the first technical scheme, this scheme is applied in the case where sunlight and electromagnetic waves incident the spacecraft from both sides.
As mentioned above, although a solar array and a communication antenna could be integrated on one FPS, they will interact with each other when arranged on the same side of a FPS. Furthermore, the surface density of an integrated FPS is high, and its conversion efficiency is low. Compared with the FPS, a paraboloid structure has a lower surface density and higher conversion efficiency because it focuses sunlight or electromagnetic waves at the focal point. However, there are few studies on a paraboloid structure with power and communication integration. Only Lichodziejewski and Cassapakis have developed a power antenna concept, utilizing an inflatable membrane paraboloid reflector to concentrate solar energy for space electrical power generation, while concurrently or alternatively acting as an antenna with large aperture and high gain [10]. is power antenna is used for spacecraft on a deep space exploration orbit (DSEO), where sunlight and electromagnetic waves irradiate the spacecraft continuously from the same side. By adjusting the attitude of the spacecraft, this power antenna can concurrently receive sunlight and electromagnetic waves.
A sun synchronous earth orbit (SSEO), where the sun and earth are on both sides of a spacecraft, respectively, is different from a DSEO. erefore, sunlight and electromagnetic waves irradiate a spacecraft on a SSEO from different sides, respectively. Most spacecraft on the SSEO need more power and higher communication capacity than those on the DSEO. However, a paraboloid structure with power and communication integration for spacecraft on the SSEO has never been studied.
For this purpose, a novel space large deployable paraboloid structure with power and communication integration (SSPCI) for spacecraft on the SSEO is proposed in this paper, as shown in Figure 1. Section 2 of this paper presents a novel idea for integrating power and communication for the SSPCI, and then the overall structural design of the SSPCI is described, which consists of a cable mesh membrane reflector (CMMR), an energy conversion device (ECD), and a TER pointing mechanism for driving the CMMR/ECD to track the sun for power in the sunlight region or to turn to face the ground station for communication or achieving other targets in the earth shadow region. Following this, the key components of the SSPCI are structurally designed in Section 3. e tracking principle is deduced and elaborated for the SSPCI in Section 4. e simulation analyses of deploying process, static force, and sun tracking process of the SSPCI are addressed in Section 5. Deploying experiments of the CMMR and sun tracking experiments of the TER pointing mechanism on the ground were carried out successfully, as described in Section 6. Finally, Section 7 concludes the work with a summary.
Description of Space Large Deployable Paraboloid Structure with Power and Communication Integration
As is known, spacecraft on the SSEO always spend part of their time in the sunshine region and part of their time in the earth shadow region in a flight cycle. Based on this, a novel idea for integrating power and communication is proposed: when spacecraft are in the sunshine region, the SSPCI loaded on it tracks the sun for power; when spacecraft are in the earth shadow region, the SSPCI turns to face the ground station for communication or achieving other targets.
Energy conversion device
Cable mesh membrane reflector TER pointing mechanism Spacecra Figure 1: Schematic of the SSPCI.
International Journal of Antennas and Propagation
Based on this idea and our research on space large deployable antennas [11][12][13], the overall structure of the SSPCI is proposed, as shown in Figure 2. e SSPCI consists of a CMMR, a TER pointing mechanism, and an ECD. From the perspective of the whole system structure, the SSPCI has a simple and efficient structure, and each component is itself a mature structure. erefore, the system has high stability and reliability.
Consideration of Cable Mesh Membrane Reflector and
Energy Conversion Device. For the reflector of the SSPCI, an ultralight CMMR is proposed, as shown in Figure 2. It uses a folded hoop and a central cylinder as its rigid backbone; flexible cables are used with them to form a mesh surface on which the aluminized membrane is assembled. e aluminized membrane can reflect both sunlight and electromagnetic waves. Meanwhile, a preliminary physical model of the CMMR is shown in Figure 3.
e ECD is fixed to the focal point of the CMMR by the central cylinder. A beam splitter or metallic grid is mounted in front of a solar cell array or lightweight thermoelectric conversion device [14] and is used to reflect electromagnetic waves onto a feed, as shown in Figure 2 [10]. In this way, sunlight and electromagnetic waves can be separated and utilized for power or communication.
Design of ree-Extensible-Rod (TER) Pointing Mechanism.
e TER pointing mechanism is the most key component of the SSPCI which is used to drive the CMMR/ ECD to track the sun or to turn to face the ground station/ other target. Before proceeding with design, the mounting position of the SSPCI on the spacecraft should be determined firstly.
Most spacecraft on the SSEO adopt three-axis attitude stabilization, and their body coordinate system (BCS) and orbital coordinate system (OCS) coincide to implement ground orientation easily. e sun's path on the OCS is a spiral shape when a spacecraft is travelling on the SSEO. It can be treated as a circle in one cycle. According to the mounting position of solar arrays on a spacecraft and the position relationship between the sun and the earth's center relative to a spacecraft, a mounting position for the SSPCI is determined. For example, for a hexahedron spacecraft, the SSPCI is mounted at the center of the plane of the spacecraft that is parallel to the orbital plane and faces towards the sun, as shown in Figure 4. Not only can the SSPCI satisfy the requirements of sun tracking but also it can easily turn to face the ground station for communication or achieving other targets at this position.
After the mounting position has been determined for the SSPCI, a TER pointing mechanism is proposed, as shown in Figure 5. e TER pointing mechanism includes a base platform, a mobile platform, and three extensible rods. ree extensible rods, all with the same structure, are arranged between the two platforms in an equilateral triangle. e lower end of each extensible rod is connected to the base platform by a rotary joint, and the upper end is connected to the mobile platform by a compound joint. Huang and Zeng point out that the TER pointing mechanism has three degrees of freedom, two rotational motions, and an independent vertical translational motion [15]. e two rotational motions can satisfy the requirement of the SSPCI, while the independent vertical translational motion can be used to pull the mobile platform down to the base platform to reduce the carrying size of the TER pointing mechanism in the rocket fairing. e TER pointing mechanism is simple and lightweight in structure. It has high tracking accuracy, low inertia, and a fast response speed [16].
Achievement of Integrating Power and Communication.
As shown in Figure 2, the SSPCI is connected to the spacecraft by a supporting rod; a rotary joint is at the lower end of the supporting rod where a motor is equipped for the deployment of the SSCPI, and the upper end of the supporting rod is fixed to the base platform of the TER pointing mechanism. e mobile platform is connected to the lower end of the central cylinder of the CMMR by a rotary joint on which a motor is equipped for the deployment of the SSPCI. When the SSPCI is tracking the sun for power, the length of three extensible rods are controlled to allow the mobile platform to rotate along axis When the SSPCI needs to turn to face the ground station, the TER pointing mechanism keeps the elevation angle in the process of tracking the sun unchanged and rotates along axis Y b back to the direction of axis Z b , while the SSPCI rotates around the rotary joint at the lower end of the supporting rod until the central cylinder points towards the center of the earth, as shown in Figure 6.
Deployable Joint of CMMR.
e deployable joint, which is used to connect and drive the folded hoop to deploy in place, is the most key structure of the CMMR. e joint with a pair of synchronize gears, which is driven by torsion springs, is a fair satisfactory selection for a space large deployable structure [17]. Figure 7 shows the physical model of the deployable joint. Simplified taper gears are used to drive rods of the hoop from the horizontal position to vertical position. When the folded hoop is folded, the stretch energy is stored in torsion springs. Clamps and pins are used to connect the parts of the joint together. e deployable joint is simple and lightweight in structure, which has a fairly high deploying dependability.
Compound Joint of TER Pointing Mechanism.
For tracking the sun, the TER pointing mechanism needs to implement a large range motion. Instead of a spherical joint, a smart compound joint with a hook joint and rotary joint is used to connect each extensible rod with the mobile platform, whose axes converge at one point, as shown in Figure 10 presents the kinematic model of the TER pointing mechanism. First, a static coordinate system, O − XYZ, is established at center O of the base platform where axis Z is vertically upward along the base platform, axis X points towards the center point of rotary joint (R) R 1 at the lower end of extensible rod 1, and axis Y is formed as the crossproduct of the two previous axes (right-hand rule is applied). A mobile coordinate system, o − xyz, is established at center o of the mobile platform; axis z is vertically upward along the
Sun
Centre of earth mobile platform, axis x points towards the center of the compound joint, S 1 , at the upper end of extensible rod 1, and axis y is formed as the cross-product of the two previous axes (right-hand rule is applied). r is the radius of the mobile platform, and R is the radius of the base platform. In addition, the mobile platform is connected to the lower end of the central cylinder of the CMMR by a rotary joint (R) at its center point R 4 , which rotates along axis y. e position and orientation of the TER pointing mechanism can be expressed as where c and s denote cos and sin, respectively. According to the structure constraint of the TER pointing mechanism, x, y, and z can be represented by other parameters as follows: Substituting equation (3) in equation (4), L i is described as Carrying out a modular operation to two ends of equation (5) yields
For Sun Tracking. As mentioned above, the BCS
of the spacecraft on the SSEO coincide to implement ground orientation easily. According to VSOP (variation seculaires des orbits planetaires) 87 theory [18,19], the unit direction vector of the sun in the second equatorial coordinate system O s − X s Y s Z s , S s , is given by where χ and ε denote the apparent right ascension and declination of the sun, respectively. Figure 9: An exploded diagram of the smart compound joint. Figure 10: Geometry of the TER pointing mechanism. 6 International Journal of Antennas and Propagation where ω is the latitude angle, i is the orbital inclination, and Ω is the right ascension of the ascending node; R x , R y , and R z denote the unit rotation matrix around axis x, y, and z, respectively. e TER pointing mechanism is mounted on the spacecraft, as shown in Figure 11. e unit direction vector of the sun in O − XYZ can be computed as en, the azimuth angle of the sun ϕ and elevation angle φ in O − XYZ is given by As aforementioned, there are three degrees of freedom in the TER pointing mechanism: two-dimensional rotations, and a translation motion along direction Z. For sun tracking, only the orientation of the mobile platform is required to be determined, which can be done by α and β. e third parameter z is adjustable, which can be used in interesting ways or be optimized as a useful objective function.
Hence, α and β can be given by where if S d (y) ≥ 0 and α � − α.
Substituting equation (11) to equation (6), the length of each extensible rod in the process of sun tracking can be given by
For Ground Station Communication or Achieving Other
Targets. When the TER pointing mechanism drives the CMMR/ECD to face the ground station or achieving other targets, α, β, and δ are can be given as en, l i and δ can be given by [20]. e orbital parameters are listed in Table 1 [21]. Structural parameters of the CMMR are listed in Table 2, and those of the TER pointing mechanism are shown in Table 3. e energy conversion device is replaced by a cylinder with dimensions of 0.2 m × 0.2 m, and the mass is 3 kg. In the finite element analysis, the rigid structures of the SSPCI use the material of the carbon fiber, the cables use the material of Kevlar, and the reflective membrane uses the material of the aluminized membrane. e properties of the related materials are listed in Table 4. e mass of the CMMR and the ECD is 7.129 kg, the mass of the TER pointing mechanism is 4.3678 kg, and the overall mass of the SSPCI is 11.4968 kg.
Interference Detection of SSPCI.
e relevant dimensions of the system have been given in Section 5.1. However, it should be judged that whether the moving scope of the TER pointing mechanism can satisfy the requirement of the mission, and the SSPCI will interfere with the satellite body in the tracking process. Figure 11: Position of the TER pointing mechanism mounting on the spacecraft.
International Journal of Antennas and Propagation
According to the parameters of the orbit listed in Table 1, through the commercial simulation software, the angle between the sun vector and the orbital plane during one year is plotted in Figure 12. e minimum value of the angle is 18.504°, the maximum value is 29.884°, and the sun vector is always on the one side of the orbital plane. e moving scope of the TER pointing mechanism is mainly limited by the length of each extensible rod and the rotation angles of each compound joint and rotary joint. As listed in Table 3, the minimum length of the extensible rod is 492 mm, and the maximum length is 908 mm. As shown in Figure 8, the maximum rotation angle of the compound joint is 85°. According to engineering experience, the maximum rotation angle of the rotary joint is 57°. According to literature [22], the moving scope of the TER pointing mechanism is obtained, as shown in Figure 13. e blue part is the workspace of the TER pointing mechanism under the limiting condition, and the orange part is the required workspace for the SSPCI. It can be observed that the moving scope of the TER pointing mechanism can satisfy the requirement of the SSPCI.
In the process of tracking the sun, the most likely time for interference is when the SSPCI turns the diagonal section of the satellite body at the minimum pitch angle, as shown in Figure 14(a). e minimum working height of the supporting structure is calculated as 1393 mm. Because the working height of the TER pointing mechanism is 700 mm, the length of the supporting rod is 693 mm. Figure 14(b) shows the limit position of the SSPCI for communication. It can be observed that the SSPCI will not interfere with the satellite body in the tracking process, thus verifying the correctness of the relevant dimensions.
Simulation of Deploying Process.
e deploying probability is the key performance for a deployable structure. e SSPCI can be deployed by two steps: the deployment of the support structure and the deployment of the folded hoop. e deploying process is simulated by multibody dynamic simulation software.
Support Structure Deployment.
e satellite platform is unconstrained in this process. e acceleration of gravity is set to zero. As shown in Figure 15, two rotary drives are applied at the position of motor 1 and motor 2, respectively. e rotation angles of two motors are planned by a cubic polynomial in the deploying process. e relevant settings are made as follows: the deployment time is 200 s; both the angles of the two rotary drives are 90°; both the initial and terminal angular velocities of the two rotary drives are 0°/s. Simulation results show that the support structure can be deployed from status 1 to status 2 successfully, as shown in Figure 15. Dynamic curves of the spacecraft's center of mass with time are shown in Figure 16. e maximum displacement is no more than 15 mm. e maximum velocity and acceleration is no more than 0.35 mm/s and 0.01 mm/s, respectively; the results illustrate that the deployment of the support structure has little effect on the spacecraft. Figure 17 displays the dynamic curves of driving torques of the two motors with time. e maximum of them is not more than 25 Nmm, which indicates that two small and lightweight motors can implement the target.
Folded Hoop Deployment.
e deployment of the folded hoop is the second and most important step for the deployment of the SSPCI. e folded hoop is unconstrained in this process. e acceleration of gravity is set to zero. Every two rods are connected by two rotary pairs driven by a torsion spring, which is used to simulate the deployable joint Figure 18. Figure 19 shows that the displacement, velocity, and acceleration of the mass center of one of the rotary pairs with time, respectively.
Static Force Analysis.
e SSPCI has to keep a satisfactory mechanical property in space environment. Meanwhile, the CMMR requires a high surface precision for power or communication. us, static force analysis of the SSPCI at a limit orientation in zero gravity was carried out.
A simplified Φ5 m finite element method (FEM) model of the SSPCI was constructed. Parameters of the SSPCI are set based on Section 5.1. BEAM4 is used for the folded hoop, the central cylinder, the TER pointing mechanism, and the supporting rod. LINK10 is used for flexible cables. SHELL41 is used for the aluminized membrane. SOLID45 is used for the ECD and the satellite. e upper end node of each extensible rod and the corresponding position node of the mobile platform are coupled to constrain their three translational degrees of freedom (DOF) to simulate the compound joint. Meanwhile, the lower end node of each extensible rod and the one of the base platform are coupled to constrain their three translation DOF and two rotational DOF to simulate the rotary joint. e bottom of the ECD and the upper end of the central cylinder are rigidly fastened. e Solid45 of the ECD and the spacecraft is generated by mapping mesh into tetrahedron elements, while the other element types are generated by nodes directly. e lower surface of the satellite is fully constrained. e acceleration of gravity is set to zero. Because the main external load of the SSPCI is the temperature load in space, the temperature loads of 20°C, − 100°C, and 100°C are applied to the SSPCI, respectively. e large deformation and automatic time step are opened, and the finite element model is analyzed by static force analysis and modal analysis.
Simulation results are shown in Figures 20-23. e maximum displacement of the SSPCI at the temperature load of 20°C is 44 mm, as shown in Figure 20; the RMS of the CMMR is 3.6 mm to the best fit paraboloid after posttreatment, which is small enough to satisfy the requirements of concentrating both sunlight and electromagnetic waves. e fundamental frequency of the SSPCI is 0.9282 Hz, as shown in Figure 21.
e fundamental frequency can be increased by further system-level optimization. e maximum displacement of the SSPCI at the temperature load of − 100°C is 39 mm, as shown in Figure 22; the RMS of the CMMR is 12 mm. e maximum displacement of the SSPCI at the temperature load of 100°C is 35 mm, as shown in Figure 23; the RMS of the CMMR is 24 mm. Currently, the RMS of the CMMR at the temperature of 100°C is not satisfactory, which can be solved by necessary temperature control measures and further system-level optimization.
Sun Tracking Process Analysis.
As aforementioned, the sun's path can be treated as a circle relative to the spacecraft in one orbital cycle. erefore, the CMMR/ECD of the SSPCI has to rotate in a circle to follow the sun, which is driven by the TER pointing mechanism. e simulation process in one orbital cycle starting from 0 : 00 on July 7, 2017, is built up. Figure 24 shows the simulation model of the SSPCI. e reflector remains the pitch angle of 18.504°r elative to the mounting plane. e velocity of the attitude angle is planned, as shown in Figure 25. Figure 26 displays the dynamic curves of the spacecraft's center of mass with time.
e maximum value of displacement, velocity, and acceleration is no more than 20 mm, 0.015 mm/s, and 0.0015 mm/s 2 , respectively; the results illustrates that the motion of the SSPCI has little effect on the spacecraft. Figure 27 displays the dynamic curves of driving forces of three extensible rods with time. e maximum force of each extensible rod is no more than 0.015 N at start up time, which indicates three low power and lightweight motors can implement the target. Figure 28 displays the dynamic curves of the consumed power of each extensible rod with time from 13 s to 6075 s. Furthermore, the consumed power of three extensible rods in a flying cycle is 2.4186 × 10 − 4 J. e spacecraft flies 14.2 orbits around the earth in one day. e consumed power of the TER pointing mechanism in one day is 2.9 × 10 − 3 J, and the consumed power of the TER pointing mechanism in one year with 365 days is 1.06 J. Figure 29 displays the dynamic curves of the distance between the upper and lower U plates of each compound joint with time. It can be observed that the distance of each compound joint is always greater than zero; the results illustrate that there is no interference between the upper and lower U plates, and the design of the compound joint is feasible, stability, and reliability for the SSPCI. e folded hoop is constructed with 36 aluminum bars, whose outer diameter is Φ1 mm and thickness is 0.2 mm. e stowed size of the folded hoop is Φ17 mm × 19 mm. e actual model is hung to compensate the gravity influence, as shown in Figure 30(a). After releasing the rubber hand, the model can be deployed successfully as similar as the simulation analysis. Figure 30(b) shows the deployed status of the actual model.
Key Experiments for SSPCI on the Ground
Due to the parallel driving manner of the torsion spring, although the actual model is not precisely manufactured and assembled, it can still be deployed successfully. is illustrates that the structure has a fairly high deploying dependability.
Sun Tracking Experiments of TER Pointing
Mechanism on the Ground. Although theoretical and simulation analysis of the TER pointing mechanism have been made, it is still doubtful for its feasibility without an actual test. us, we have carried out sun tracking experiments of the TER pointing mechanism on the ground successfully. Although the environment on the ground is different from the one in space, and the mechanism principle and control mode of the TER pointing mechanism are identical. erefore, it can be indirectly predicted that the TER pointing mechanism is feasible for the SSPCI in a space environment. Figure 31 shows three moments of the sun tracking process in a day. Although the precision of manufacture is not high, it can still be observed that the central cylinder has no projection on the mobile platform at the three moments (morning moment in Figure 31(a), noon moment in Figure 31(b), and afternoon moment in Figure 31(c)). e shadow region on the mounting sleeve on the morning moment is shown in Figure 31(d). e wall thickness of the mounting sleeve is 6 mm, and the length of the central cylinder is 895 mm. is illustrates that the tracking precision of the TER pointing mechanism is within 0.4°(i.e., arctan (6/895)), which is sufficient for the SSPCI.
Conclusions
A novel SSPCI is designed in this paper. e theoretical analysis, simulation analysis, and preliminary experiments on the ground are conducted for the SSPCI. Analytical deduction and experiments show that the SSPCI has several remarkable advantages, which can be summarized as follows: (1) Due to integrating two functions (for power and communication) on one paraboloid structure, a large-size solar array and a large-size communication antenna can be combined into one. (2) e SSPCI is an ultralight paraboloid structure. Compared with the solar panel [23], the surface density of SSPCI is lower, while its power density is higher, as listed in Table 5. (3) Although only the SSPCI with an aperture of Φ5 m is simulated and analyzed, its aperture may be further expanded to Φ20 m to meet the requirements of a higher power spacecraft because of its low surface density. (4) e novel TER pointing mechanism with simple structure, lightweight, and high pointing accuracy can be used for many other applications in space environment.
One of the major area of focus for future works would be the optimal design of the size and shape of the SSPCI in a specific orbit for a given power supply and communication rate required by a specific satellite. In addition, the stability and reliability of the SSPCI should be analyzed in more detail according to the space environment.
Data Availability
e data used to support the finding the study are available from the corresponding author upon request.
Conflicts of Interest
e authors declare that there are no conflicts of interest regarding the publication of this paper. | 7,093.8 | 2019-11-13T00:00:00.000 | [
"Engineering",
"Physics"
] |