Functional model for boundary-value problems

We develop a functional model for operators arising in the study of boundary-value problems of materials science and mathematical physics. We then provide explicit formulae for the resolvents of the associated extensions of symmetric operators in terms of the associated generalised Dirichlet-to-Neumann maps, which can be utilised in the analysis of the properties of parameter-dependent problems as well as in the study of their spectra.

Introduction

The need to understand and quantify the behaviour of solutions to problems of mathematical physics has been central in driving the development of theoretical tools for the analysis of boundary-value problems (BVP). On the other hand, the second half of the last century witnessed several substantial advances in the abstract methods of spectral theory in Hilbert spaces, stemming from the groundbreaking achievement of John von Neumann in laying the mathematical foundations of quantum mechanics. Some of these advances have made their way into the broader context of mathematical physics [35,21,43]. In spite of these obvious successes of spectral theory applied to concrete problems, an operator-theoretic understanding of BVP has been lacking. However, in models of short-range interactions, the idea of replacing the original complex system by an explicitly solvable one, with a zero-radius potential (possibly with an internal structure), has proved to be highly valuable [6,47,14,8,32,33,56]. This facilitated an influx of methods of the theory of extensions (both self-adjoint and non-self-adjoint) of symmetric operators into problems of mathematical physics, culminating in the theory of boundary triples.

The theory of boundary triples introduced in [24,22,29,30] has been successfully applied to the spectral analysis of BVP for ordinary differential operators and related setups, e.g. that of finite "quantum graphs", where the Dirichlet-to-Neumann maps act on finite-dimensional "boundary" spaces, see [19] and references therein. However, in its original form this theory is not suited for dealing with BVP for partial differential equations (PDE), see [12, Section 7] for a relevant discussion. The key obstacle to such analysis is the lack of boundary traces Γ_0 u and Γ_1 u entering the Green identity for functions u : Ω → R (where Ω is a bounded open set with a smooth boundary) in the domain of the maximal operator A corresponding to the differential expression considered (e.g. the operator −∆ on the domain of L^2(Ω)-functions u such that ∆u is in L^2(Ω)); in other words, dom(A) ⊄ dom(Γ_0) ∩ dom(Γ_1).

Recently, when the works [26,27,5,23,52,12] started to appear, it transpired that, suitably modified, the boundary-triples approach nevertheless admits a natural generalisation to the BVP setup; see also the seminal contributions by M. S. Birman [10], L. Boutet de Monvel [4], M. S. Birman and M. Z. Solomyak [11], G. Grubb [25], and M. Agranovich [1], which provide an analytic backbone for the related operator-theoretic constructions. In all the cases mentioned above, one can see the fundamental rôle of a certain Herglotz operator-valued analytic function, which in problems where a boundary is present (and sometimes even without an explicit boundary [2]) turns out to be a natural generalisation of the classical notion of a Dirichlet-to-Neumann map.
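For the reader's orientation, we recall in display form the classical object that this Herglotz function generalises; this is standard material, and the sign convention for the normal derivative varies between sources (the papers discussed below use −∂u/∂n):

\[
  \Lambda(z)\colon\ \varphi \longmapsto -\left.\frac{\partial u}{\partial n}\right|_{\partial\Omega},
  \qquad\text{where } u \text{ solves}\quad
  \Delta u = z u \ \text{in } \Omega, \qquad u|_{\partial\Omega} = \varphi .
\]

Thus, for each z off the Dirichlet spectrum, the map Λ(z) sends the boundary datum of a solution to its normal derivative on ∂Ω; the operator-valued analytic function z ↦ Λ(z) is, with a suitable choice of sign, Herglotz.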
The emergence of this object yields the possibility of applying to BVP advanced methods of complex analysis in conjunction with abstract methods of operator and spectral theory, which in turn sheds light on the intrinsic interplay between the mentioned abstract frameworks and concrete problems of interest in modern mathematical physics.

The present paper is a development of the recent activity [15,16,17,20] aimed at implementing the above strategy in the context of problems of materials science and wave propagation in inhomogeneous media. Our recent papers [18,19] have shown that the language of boundary triples is particularly fitting for direct and inverse scattering problems on quantum graphs, as one of the key challenges to their analysis stems from the presence of interfaces through which energy exchange between different components of the medium takes place. In the present work we continue the research initiated in those papers, adapting the technology so that BVP, especially those stemming from materials science, come within reach. As in [18,19], the ideas of [46,39] concerning the functional model allow one to efficiently incorporate into the analysis information about the mentioned energy exchange, by employing a suitable Dirichlet-to-Neumann map. In our analysis of BVP, we adopt the approach to the operator-theoretic treatment of BVP suggested by [52], which appears to be particularly convenient for obtaining sharp quantitative information about scattering properties of the medium, cf. e.g. [20], where this same approach is used as a framework for the asymptotic analysis of homogenisation problems in resonant composites.

We next outline the structure of the paper. In Section 2 we recall the main points of the abstract construction of [52] and introduce the key objects for the analysis we carry out later on, such as the dissipative operator L at the centre of the functional model. In Section 3 we construct the minimal dilation of L, based on the ideas of [50], which in the context of extensions of symmetric operators followed the earlier foundational work [39]. Using the functional model framework thus developed, in Section 4 we construct a new version of Pavlov's "three-component" functional model for the dilation [45] and pass to his "two-component", or "symmetric", model [46] (see also [39,50]), based on the notion of the characteristic function for L, which is computed explicitly in terms of the M-operator introduced in Section 2. In Section 5 we develop formulae for the resolvents of boundary-value operators for a range of boundary conditions αΓ_0 u + βΓ_1 u = 0, with α, β from a wide class of operators in L^2(∂Ω), including those relevant to applications. The last two sections are devoted to applications of the framework: based on the derived formulae for the resolvents, in Section 6 we establish the resolvent formulae for the operators of boundary-value problems belonging to the class discussed earlier in the functional spaces stemming from the functional model, and in Section 7 we apply these formulae to obtain a description of the operators of BVPs in a class of Hilbert spaces with reproducing kernels.

Ryzhov triples for BVP

In this section we follow [52] in developing an operator framework suitable for dealing with boundary-value problems. The starting point is a self-adjoint operator A_0 in a separable Hilbert space H with 0 ∈ ρ(A_0), where ρ(A_0), as usual, denotes the resolvent set of A_0.
Alongside H, we consider an auxiliary Hilbert space E and a bounded operator Π : E → H such that Since Π has a trivial kernel, there is a left inverse Π −1 , so that Π −1 Π = I E . We define where neither A nor Γ 0 is assumed closed or indeed closable. The operator given in (2.2) is the null extension of A 0 , while (2.3) is the null extension of Π −1 . Note also that (2.4) ker For z ∈ ρ(A 0 ), consider the abstract spectral boundary-value problem where the second equation is seen as a boundary condition. As asserted in [52,Theorem 3.1], there is a unique solution u of the boundary-value problem (2.5) for any φ ∈ E. Thus, there is an operator (clearly linear) which assigns to any φ ∈ E the solution u of (2.5), referred to as the solution operator 1 for A and denoted by γ(z). An explicit expression for it in terms of A 0 and Π can be obtained as follows. Using the fact that A ⊃ A 0 , one can show (see [52,Remark 3.3]) that for all φ ∈ E one has and therefore Furthermore, note that By (2.6), one has ran(γ(z)) ⊂ ker(A − zI), but the inverse inclusion also holds. Indeed, taking a vector u ∈ ker(A − zI) and writing it in the form In view of (2.6), (2.7), the last expression shows that u ∈ ran(γ(z)). Putting together the above, one arrives at (2.9) ran(γ(z)) = ker(A − zI) . We remark that, since A is not required to be closed, ran(γ(z)) is not necessarily a subspace. This is precisely the kind of situation that commonly occurs in the analysis of BVPs. In what follows, we consider (abstract) BVP of the form (2.5) associated with the operator A, with variable boundary conditions. To this end, for a self-adjoint operator Λ in E, define The operator Λ can thus be seen as a parameter for the boundary operator Γ 1 . Definition 1. For a given triple (A 0 , Π, Λ), define the operator-valued M -function associated with A 0 as follows: for any z ∈ ρ(A 0 ), the operator M (z) in E is defined on the domain dom(M (z)) := dom(Λ), and its action is given by The above abstract framework is illustrated (see [52] for details) by the classical setup where A 0 is the Dirichlet Laplacian on a bounded domain Ω with smooth boundary ∂Ω, so A 0 is self-adjoint on dom(A 0 ) = W 2 2 (Ω) ∩W 1 2 (Ω). In this case Π is simply the Poisson operator of harmonic lift, its left inverse is the operator of boundary trace for harmonic functions and Γ 0 is the null extension of the latter to W 2 2 (Ω) ∩ W 1 2 (Ω) ∔ ΠL 2 (∂Ω). Furthermore, Λ can be chosen as the Dirichlet-to-Neumann map 2 which maps any function φ ∈ W 1 2 (Ω) =: dom(Λ) to −(∂u/∂n)| ∂Ω , where u is the solution of the boundary-value problem ∆u = 0, (see e.g. [55]). Due to the choice of Λ, it follows from (2.10) that Note that (2.13) follows from the fact that Π * f = −(∂u/∂n)| ∂Ω for u = A −1 0 f . Therefore, the M -operator M (z), z ∈ ρ(A 0 ), is the Dirichlet-to-Neumann map φ → −(∂u/∂n)| ∂Ω of the spectral boundary-value problem where φ belongs to L 2 (∂Ω), and M (z) is understood as an unbounded operator 3 defined on dom(M (z)) = W 1 2 (∂Ω). This example shows how all the classical objects of BVP appear naturally from the triple (A 0 , Π, Λ). In particular, it is worth noting how the energy-dependent Dirichlet-to-Neumann map M (z) is "grown" from its "germ" Λ at z = 0. Returning to the abstract setting and taking into account (2.10), one concludes from Definition 1 that (2.14) M (z) = Λ + zΠ * (I − zA −1 0 ) −1 Π. 
From this equality, one verifies directly that In this work we consider extensions (self-adjoint and non-selfadjoint) of the "minimal" operator Still following [52], we let α and β be linear operators in E such that dom(α) ⊃ dom(Λ) and β is bounded on E. Additionally, assume that α + βΛ is closable and denote its closure by ß. Consider the linear set 2 For convenience, we define the Dirichlet-to-Neumann map via −∂u/∂n| ∂Ω instead of the more common ∂u/∂n| ∂Ω . As a side note, we mention that this is obviously not the only choice for the operator Λ. In particular, the trivial option Λ = 0 is always possible. Our choice of Λ is motivated by our interest in the analysis of classical boundary conditions. 3 More precisely, M (z) is the sum of an unbounded self-adjoint operator and a bounded one, which will be obvious from (2.14). 4 Following [52, Lemma 4.1], the identity implies that αΓ 0 + βΓ 1 is well defined on dom(A 0 ) ∔ Π dom(Λ). The assumption that α + βΛ is closable is used to extend the domain of definition of αΓ 0 + βΓ 1 to the set (2.18). Moreover, one verifies that H ß is a Hilbert space with respect to the norm It follows that the constructed extension αΓ 0 + βΓ 1 is a bounded operator from H ß to E. According to [52,Theorem 4.1], if the operator α + βM (z) is boundedly invertible for z ∈ ρ(A 0 ), the spectral boundary-value problem has a unique solution u ∈ H ß , where, as above, αΓ 0 + βΓ 1 is a bounded operator on H ß . Under the same hypothesis of α + βM (z) being boundedly invertible for z ∈ ρ(A 0 ), it follows from [52, Theorem 5.1] that the function Among the extensions A αβ of A, we single out the operator that is, α = −iI and β = I. Since in this case α and β are scalar operators, and dom(Γ 1 ) ⊂ dom(Γ 0 ), by virtue of (2.18) one has The definition of dom(L) implies that for all h ∈ H, z ∈ C − , since, by (2.4) and the fact that L, A 0 ⊂ A, one has where the second equality is deduced in the same way as the first. In what follows, we will use the following relations, which are obtained by combining (2.11) and (2.23): It is proven in [52, Theorem 6.1] that the operator L of formula (2.21) is dissipative and boundedly invertible (hence maximal). We recall that a densely defined operator L in H is called dissipative if Im Lf, f ≥ 0 ∀f ∈ dom(L). A dissipative operator L is said to be maximal if C − ⊂ ρ(L). Maximal dissipative operators are closed, and any dissipative operator admits a maximal extension. Furthermore, the function turns out to be the characteristic function of L, see [36,54]. Since M is a Herglotz function (see (2.16)), one has the following formula: We remark that the function S is analytic in C + and, for each z ∈ C + , the mapping S(z) : E → E is a contraction. Therefore, S has nontangential limits almost everywhere on the real line in the strong operator topology [53]. Recall that a closed operator L is said to be completely non-selfadjoint if there is no subspace reducing L such that the part of L in this subspace is self-adjoint. We refer to a completely non-selfadjoint symmetric operator as simple. Proof. Suppose that L has a reducing subspace H 1 such that L| H1 is self-adjoint. Take a nonzero w ∈ dom(L) ∩ H 1 . Then (2.12) and (2.22) The nontrivial invariant subspace H 1 of L is a nontrivial invariant subspace of its restriction A as long as H 1 ∩ dom( A) = ∅. This last condition has been established above. Finally, since A is symmetric, H 1 is actually a reducing subspace of A. Clearly A is self-adjoint in H 1 . 
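To record why S is contractive, as quoted above, here is a short sketch of the standard computation, under the assumption (consistent with (2.25) and with [39,52]) that S is the Cayley transform of M:

\[
  S(z) = \bigl(M(z) - iI\bigr)\bigl(M(z) + iI\bigr)^{-1}, \qquad z \in \mathbb{C}_+ .
\]

Writing out I − S(z)*S(z) and using ((M(z)+iI)^{-1})* = (M(z)*−iI)^{-1}, the cross terms combine into 4 Im M(z):

\[
  I - S(z)^*S(z)
  = 4\,\bigl((M(z)+iI)^{-1}\bigr)^*\,\operatorname{Im} M(z)\,(M(z)+iI)^{-1} \ \ge\ 0,
\]

since Im M(z) ≥ 0 in C_+ by the Herglotz property (2.16); hence ‖S(z)‖ ≤ 1, in agreement with the contraction property of the characteristic function.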
Self-adjoint dilations for operators of BVP and a 3-component functional model Any completely non-selfadjoint dissipative operator L admits a self-adjoint dilation [53], which is unique up to a unitary transformation, under an assumption of minimality, see (3.2) below. There are numerous approaches to an explicit construction of the named dilation [13,39,40,41,45,46,50,51,54]. In applications, one is compelled to seek a realisation corresponding to a particular setup. In the present paper we develop a way of constructing dilations of dissipative operators convenient in the context of BVP for PDE. In the formulae below, we use the subscript "±" to indicate two different versions of the same formula in which the subscripts "+" and "−" are taken individually. Recall that for any maximal dissipative operator L, its dilation is defined as a self-adjoint operator A in a larger Hilbert space H ⊃ H with the property We start by constructing a minimal dilation of the operator L of the previous section, defined by (2.21), following a procedure similar to the one used in [44,45]. Let In this Hilbert space, the operator A is defined as follows. Its domain dom(A) is given by where W 1 2 (R + , E) and W 1 2 (R − , E) are the Sobolev spaces of functions defined on R + and R − , respectively, and taking values in E. We remark that the results of the previous section imply that in our case H ß = dom(Γ 1 ). On this domain, the operator A acts according to the rule Proof. The fact that A is an extension of L follows from (2.21) and (2.22). Let us establish the self-adjointness of A. Furthermore, taking into account the conditions defining dom(A), one obtains It follows by combining (3.6) and (3.7) that A is symmetric. To complete the proof, it suffices to show that ran(A − zI) = H for all z ∈ C \ R. To this end, consider the operators ∂ ± and ∂ 0 ± in L 2 (R ± , E) given by Here, for some e ∈ E. Also, where to obtain the first equality we use (2.21), and the second equality follows from (2.8) and Definition 1. Thus In addition, we have The equalities (3.10) and (3.11) imply that (f − , f, f + ) ⊤ ∈ dom(A), see (3.4). Next, we show that On the one hand, it follows from (3.9) and the first line of (3.8) that On the other hand, due to the fact that L ⊂ A and the property (2.9), one has In conformity with (3.5), the identities (3.13), (3.14) yield (3.12). In the same way as above, it can be shown that which completes the proof. Remark 1. In the proof of Theorem 3.1, we have obtained the following formulae for the resolvent of A : where (f − , f, f + ) ⊤ is given by (3.8) for z ∈ C − and by (3.15) for z ∈ C + . The following technical result will be used to prove that A is a minimal dilation of L; at the same time, it is of a clear independent interest. Lemma 3.2. Each of the sets Proof. Due to (2.23) and the fact that dom(M (z)) is dense in E, it suffices to prove the assertion of the lemma about the first set. Since L is densely defined, one clearly has We next show that Finally, fixing Im z and taking the Fourier transform with respect to Re z yields g(ξ) = 0 for a.e. ξ ∈ R + , which concludes the proof of (3.16). By a similar argument, one also shows that which completes the proof. For convenience, we introduce the following families of sets in H. For any z + ∈ C + and z − ∈ C − , define where are dense in the spaces H and H, respectively. Proof. To simplify notation, denote by Y the closure of the first set in (3.17). 
It follows from Remark 1 that Using the formulae for the resolvent of the dilation (see (3.8) for z ∈ C − , (3.15) for z ∈ C + , and Remark 1), one immediately obtains Suppose that u ∈ H is such that u ⊥ G(z + , z − ) for all z + ∈ C + , z − ∈ C − . Taking into account that vectors in E in (3.20) can be chosen independently in the first and second summands, we obtain In particular, for z + ∈ C + we have Similarly, we establish that u ∈ ran( A − z − I) for z − ∈ C − . Since z + ∈ C + , z − ∈ C − above are arbitrary, it follows that The assumption that A is simple is equivalent (see [34,Section 1.3]) to the fact that the set on the right-hand side of (3.21) is trivial, and hence u = 0. This concludes the proof of (3.19). Remark 2. The terms on the right-hand side of (3.20) are linearly independent. Proof. Assume that e 1 , e 2 ∈ E are such that Applying Γ 0 and Γ 1 to (3.22) and using the definition of γ, we obtain respectively. Substituting the first identity above into the second one yields Then the first equality in (3.23) Two-component spectral form of the functional model Following [39], we introduce a Hilbert space in which we construct a functional model for the operator family A αβ , in the spirit of Pavlov [44,45,46]. The functional model for completely non-selfadjoint maximal dissipative operators that can be represented as additive perturbations of self-adjoint operators was constructed in [44,45,46] and further developed in [39] to include non-dissipative operators. In the context of boundary triples an analogous construction was carried out in [50]. In the most general setting to date, namely the setting of adjoint operator pairs, an explicit three-component model akin to the one we presented in the previous section was constructed in [13], which however stops short of constructing a "spectral", twocomponent, form of the model, which is particularly convenient for the development of a scattering theory for operator pairs. 4 In this section we we carry out such a construction, tailored to study operators of BVP, in the case when symbol of the operator is formally self-adjoint (but the operator itself can be non-selfadjoint due to the boundary conditions). Next, we recall some concepts relevant to the construction of [39]. In what follows, we assume throughout that A, see (2.17), is simple and therefore L is completely non-selfadjoint (see Proposition 2.1). A function f, analytic on C ± and taking values in E, is said to be in the Hardy class H 2 ± (E) when 4 We refer the reader to the paper [ . We now return to the setup of Section 2 and prove a fundamental regularity property for the expressions (2.24), which is crucial for our construction. Since L is maximal dissipative, it admits a self-adjoint dilation A [53]. (In the case of the operator L considered here, this dilation is given explicitly by Theorem 3.3. However, we do not require this fact here.) One concludes, by resorting to the resolvent identity, that Denoting by E(t), t ∈ R, the resolution of identity [9, Chapter 6] for A and setting z = k − iǫ, k ∈ R, ǫ > 0, one has Now, using Fubini's theorem, we obtain Taking supremum with respect to ε, it follows that The second inequality in (4.1) of the lemma is proven in the same way. 11 As mentioned in Section 2, the characteristic function S, given in (2.25), has nontangential limits almost everywhere on the real line in the strong topology. Thus, for a two-component vector function g g ∈ L 2 (R, E) ⊕ L 2 (R, E), the integral 5 2) vanishes is assumed. 
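The integral in question is the one defining the (possibly degenerate) norm of the model space; a sketch of its standard symmetric form, following [46,39], with (g̃, g)^⊤ denoting the two-component vector above and elements of zero norm identified with zero:

\[
  \left\| \binom{\tilde g}{g} \right\|_{\mathfrak H}^2
  := \int_{\mathbb R} \left\langle
     \begin{pmatrix} I & S^*(k) \\ S(k) & I \end{pmatrix}
     \binom{\tilde g(k)}{g(k)}, \binom{\tilde g(k)}{g(k)}
     \right\rangle_{E \oplus E} dk \;\ge\; 0 .
\]

The space H is then the completion of two-component vector functions in this norm (the placement of S and S* in the weight matrix depends on the convention for which component carries the tilde).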
Naturally, not every element of the set can be identified with a pair g g of two independent functions, however we keep the notation g g for the elements of this space. Another consequence of the contractive properties of the characteristic function S is the inequalities They imply, in particular, that for every sequence { gn gn } ∞ n=1 that is Cauchy with respect to the H-topology and such that g n , g n ∈ L 2 (R, E) for all n ∈ N, the limits of g n + S * g n and S g n + g n exists in L 2 (R, E), so that the objects g + S * g and S g + g can always be treated as L 2 (R, E) functions. 6 Consider the following subspaces of H : 7 It is easily seen [46] that the spaces D − and D + are mutually orthogonal in H. Define the subspace which is characterised as follows (see [44,46]): The orthogonal projection P K onto K is given by (see e.g. [38]) where P ± are the orthogonal Riesz projections in L 2 (E) onto H 2 ± (E). 5 This is in fact the same construction as proposed by [46] and further developed by [39]. Henceforth in this section we follow closely the analysis of the named two papers, facilitated by the fact that essentially this way to construct the functional model only relies upon the characteristic function S of the maximal dissipative operator and an estimate of the type claimed in Lemma 4.1 above. A similar argument for extensions of symmetric operators, based on the theory of boundary triples, was developed in [50], [18]. 6 In general, g + S * g and S g + g are not independent of each other, see [28]. 7 In the language of scattering theory [35], the subspaces D − , D + are "incoming" and "outgoing" subspaces, respectively, for the group of translations of H, as was first observed in [44]. Definition 2 ([50]). The mappings F and Based on the above definition, we will now introduce a map from H to H, which will prove to be unitary. We will then show that H serves as a representation space for the spectral form of the functional model discussed in Section 3. We implement this strategy in Lemmata 4.2-4.6. Lemma 4.2. Fix z + ∈ C + , z − ∈ C − , and consider the map Φ : where w + , w − ∈ E are determined uniquely, by Remark 2, from The map Φ satisfies Proof. Taking into account Definition 2, one immediately verifies that (4.7) holds for v = 0. Since Φ, F ± are linear, it only remains to prove the assertion when E). Under this assumption, consider the first row in the vector equality (4.7), where v is replaced by the formula (4.6): In what follows, we show that and therefore (4.8) holds, as required. To verify (4.9) first, consider z ∈ C − . Using the second resolvent identity, it follows from (2.26) that Therefore, by (2.15), (2.24), one has Passing to the limit as z approaches a real value, we infer that (4.9) is satisfied for all w − ∈ E. To prove (4.10) for all w + ∈ E, we proceed in a similar way. By straightforward calculations, one has, for z ∈ C − , Proceeding in the same way as (4.12) was obtained from (4.11), one obtains which, by passing to the limit as z approaches the real line, yields the required property. The second entry of the vector equality (4.7) is proved in a similar way. Lemma 4.3. The mapping Φ, given in Lemma 4.2, is an isometry from Thus, taking into account that the spaces D − and D + are orthogonal (see the discussion following the formula (4.3)), one has Finally note that The surjectivity of the mapping follows from the fact that the Fourier transform is a unitary mapping between L 2 (R ± , E) and H 2 ± (E), by the Paley-Wiener theorem. Lemma 4.4. 
The mapping Φ, given in Lemma 4.2 and extended by linearity to is an isometry from the set (4.13) to H. Proof. Due to (4.4) and Lemma 4.3, the assertion will be proved if one shows first that and, second, that for all z ± ∈ C ± and v ∈ G(z + , z − ) one has 14 In view of the definition of Φ, see Lemma 4.2, to establish (4.14) it suffices to verify that, for z ± ∈ C ± and w + , w − ∈ E chosen as in (4.6), the vectors To this end, consider h ± ∈ H ± 2 (E). Taking into account the fact that (4.16) . (4.17) Now analytically continuing the function S * to the lower half-plane and using the fact that we conclude that the expression (4.17) vanishes, as required. Due to Lemma 3.4 and Lemma 4.4, the mapping Φ can be extended by continuity to the whole space H, provided that the operator A is simple. We will use same notation Φ for this extension. Proof. We prove the statement for z ∈ C + , as the case z ∈ C − is established in a similar way. Consider an arbitrary (h − , h, h + ) ⊤ ∈ H, and let (f − , f, f + ) ⊤ be the vector defined by (3.15). It follows from (3.13) that Recall that h ± and f ± are the Fourier transforms of h ± and f ± , respectively. According to Definition 2 and (3.15), one has where to obtain the expression in the second square brackets we invoke (4.20). Thus, using the resolvent identity and (2.25), Consider the third term on the right-hand side of (4.21) evaluated at ζ ∈ C + . Using the property (cf. (3.4)) we write it as follows: where for the second equality f is replaced by (3.15), while for the third and fourth equalities we have used (2.8) and the second resolvent identity, respectively. Furthermore, we utilise (2.15) to obtain the fifth equality. The identities (2.24) now yield the final expression (4.21). It follows that the second and third terms on the right-hand side of (4.21) cancel each other as ζ approaches the real line. We have therefore shown that Similarly, one proves that Combining Proof. In view of Lemma 4.4, the mapping Φ is an isometry defined in the whole space H. It thus suffices to show that the range of Φ is dense in H. To this end, suppose g ∈ H is such that By Lemma 4.3 and the definition of the subspace K, see (4.4), this is equivalent to the existence of a nonzero g ∈ K such that (4.24) holds with v − = 0, v + = 0. On the other hand, since Φ * g ∈ H, one has which by Lemma 3.4 yields Φ * g = 0, and hence g = 0. Combining the above lemmata, we obtain the following result, concerning the representation of the dilation A as the operator of multiplication in the two-component model space H. where Φ is unitary from H to H. Boundary traces of the resolvents of BVP Our aim here is to derive an explicit formula for the solution operator of the spectral boundary-value problem (2.19). To this end, consider the operator (see (2.20), (5.5), cf. [52,Section 5]) for all z such that 0 ∈ ρ(α + βM (z)). It is convenient to assume that β is boundedly invertible, which we do henceforth. Recall, that above (see Section 2) we have also required that β is bounded, and α is such that dom(α) ⊃ dom(Λ) and α + βΛ is closable. We note that M (z) := M (z) − Λ is bounded and Furthermore, one has dom(Λ) ⊂ dom(α + βΛ), and In addition, β −1 (α + βΛ) is closed, as a consequence of the general fact that whenever T 1 is bounded with a bounded inverse and T 2 is closed, the operator T 1 T 2 is closed. 
Therefore, β^{-1}α + Λ is closable and Combining (5.1) and (5.2), we obtain and [52, Theorem 5.1] implies that For convenience, henceforth we use the notation Q_B: Notice that [52, Theorem 5.1] requires Q_B ≠ ∅, which cannot be guaranteed in the most general setup. In the present article we focus on the PDE setting, where the standard choice of boundary conditions implies that Λ is the Dirichlet-to-Neumann map [52]. This allows us to make some reasonable assumptions that are bound to hold provided the boundary of the spatial domain in the BVP is smooth, so that [52, Theorem 5.1] is applicable and the resulting operator A_αβ has discrete spectrum in C_- ∪ C_+. In what follows, we use the standard notation S_∞ for the Banach algebra of compact operators [9, Section 11] on the boundary space E.

Lemma 5.1. Suppose that Λ is the Dirichlet-to-Neumann map of a BVP, such that it is a self-adjoint operator with purely discrete spectrum accumulating to −∞. Then M(z)^{-1} ∈ S_∞ for all z ∈ C \ R.

Proof. Choose a finite-rank operator K such that Λ + K has trivial kernel and (Λ + K)^{-1} ∈ S_∞. Such a choice is obviously always possible. Furthermore, by the second Hilbert identity, where Ξ is a bounded operator. Hence, M(z)^{-1} ∈ S_∞.

Corollary 5.2. Under the conditions of Lemma 5.1, if B is bounded, then

Remark 3. Note that if one drops the condition that B is bounded, it is possible for Q_B to be empty. Indeed, put α = −Λ and β = I (as shown in [52], under these assumptions the operator A_αβ is the Kreĭn extension [3] of the operator A). Then by (2.14) one has B + M(z) = zΠ*(I − zA_0^{-1})^{-1}Π, which is shown to be compact under the assumptions of Lemma 5.1. However, the following theorem suggests that, instead of the restriction that B be bounded, it suffices to assume that it is compact relative to M(z) in order to ensure that Q_B coincides with C \ R with the exception of a discrete set.

Theorem 5.3. Suppose that BM(z)^{-1} ∈ S_∞ for at least one z ∈ C_+ and at least one z ∈ C_- (and hence at all z ∈ C \ R), where B is defined by (5.3). If I + BM(z)^{-1} is invertible for at least one z ∈ C_+ and at least one z ∈ C_-, then 1) the operator A_αβ has at most discrete spectrum in C \ R (accumulating to the real line only).

Proof. By the Analytic Fredholm Theorem, see [48, Theorem 8.92], the operator I + BM(z)^{-1} is invertible at all z ∈ C \ R with the exception of a discrete set of points. Therefore, for any z such that the inverse exists, one has (B + M(z))^{-1} = M(z)^{-1}(I + BM(z)^{-1})^{-1}. This implies that the "Kreĭn formula", cf. (2.20), holds at all z ∈ C \ R with the exception of a discrete set of points: and therefore the spectrum of A_αβ in C \ R is discrete, which proves the first claim. Furthermore, the right-hand side of (5.5) is analytic whenever its left-hand side is, i.e. on the set ρ(A_αβ), which immediately implies the inclusion ρ(A_αβ) ⊂ Q_B. The second claim of the theorem now follows by comparing this with (5.4).

Lemma 5.4. Assume that where Θ_B and Θ_B are defined via their inverses:

Proof. Fix an arbitrary h ∈ H and define In order to prove (5.6), suppose that z ∈ C_- ∩ Q_B, so the resolvents (L − zI)^{-1} and (A_αβ − zI)^{-1} are defined on the whole space H. Clearly, the vector is an element of ker(A − zI). It follows from g_{-iI,I} ∈ dom(L) and g_αβ ∈ dom(A_αβ) that Γ_1 g_{-iI,I} = iΓ_0 g_{-iI,I} and βΓ_1 g_αβ = −αΓ_0 g_αβ, and therefore one has where in the last equality we also use the fact that g ∈ ker(A − zI), together with Definition 1.
Hence, by collecting the terms in the calculation (5.10), one has (cf. (5)) α + βM (z) Γ 0 g = (α + iβ)Γ 0 (g + g αβ ) = (α + iβ)Γ 0 g −iI I , which, in turn, implies that, for z ∈ Q B one has Finally, using the second resolvent identity we obtain where we use the formula (2.26). The identity (5.7) is proved by an argument similar to the above, where the vector g −iI I is replaced with with g iI I , for z ∈ C + , and the formula (2.25) is used instead of (2.26). Remark 4. Note that the boundedness condition imposed on B in Lemma 5.4 can be relaxed. Not only can we assume that B is such that BM (z) −1 ∈ S ∞ , as suggested by Theorem 5.3, but the latter condition can be relaxed even further by assuming that B is bounded relative to M (z) with the bound 9 less than 1 (see [31]), which clearly suffices for B + M (z) = B + M (z). In present paper, however, we limit ourselves to physically motivated applications to BVP, which renders these considerations unnecessary. For this reason in what follows we will only consider the case when the parameter B is bounded. Functional model for non-necessarily dissipative operators In this section we obtain a useful representation for the resolvent of A αβ in the Hilbert space H, i.e. in the spectral functional model representation of L. The results of this section generalise those of [50]. We start by proving the following lemma. Throughout we assume that the condition imposed by Lemma 5.1 holds. The following is the main result of this section and is similar in form to [50,Theorem 2.5] and [39,Theorem 3]. Its proof closely follows the lines of the mentioned works. . (ii) If z ∈ C + ∩ Q B and ( g, g) ⊤ ∈ K, then Here, ( g + S * g)(z) and (S g + g)(z) denote the values at z of the analytic continuations of the functions g + S * g ∈ H 2 − (E) and S g + g ∈ H 2 + (E) into the lower half-plane and the upper half-plane, respectively. Proof. We prove (i). The proof of (ii) is carried out along the same lines. For this one should establish the validity of the identities: First we compute the left-hand-side of (6.3). It follows from Lemma 5.4 that for z, λ ∈ C − ∩ Q B , h ∈ H one has Letting z = k − iǫ, k ∈ R, it follows from the above calculation that Combining the expression for F + from Definition 2 with (6.4) yields Hence, in view of the identity F + h = g + S * g, which follows from (4.6), we obtain On the basis of Lemma 5.4 and reasoning in the same fashion as was done to write (6.5), one verifies Let us focus on the right hand side of (6.3). Note that where (4.5) is used in the first equality and in the second the fact that if f ∈ H 2 − (E), then, for all z ∈ C − , Now, apply F + Φ −1 to (6.7) taking into account that F + h = g + S * g once again: (6.8) where for the last equality we have used Lemma 6.1. By combining (6.8) with (6.5), we establish the first identity in (6.3). 21 Finally, applying F − Φ −1 to (6.7) and using the identity F − h = S g + g, we obtain where in the last two equalities we use Lemma 6.1. Comparing this with (6.6), we arrive at the second identity in (6.3). Application: a unitary equivalent model of an operator associated with BVP in a space with reproducing kernel In the present section we demonstrate that in the setting of operators of BVP, the results of Section 4 lead to the representation of (L * − zI) −1 as the Toeplitz operator P S f (·)(· − z) −1 | KS , where P S is the orthogonal projection of H 2 + (E) onto K S := H 2 + (E) ⊖ SH 2 + (E). 
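For context, we note the reproducing-kernel structure behind this representation; a sketch, assuming S is inner (cf. Remark 5 below) and using the standard Szegő kernel of the upper half-plane:

\[
  k^S_w(z) \;=\; \frac{I - S(z)\,S(w)^*}{2\pi i\,(\bar w - z)}, \qquad z, w \in \mathbb{C}_+ ,
\]

so that ⟨f, k^S_w e⟩ = ⟨f(w), e⟩_E for f ∈ K_S and e ∈ E. This is the structure referred to in Remark 6 at the end of the section.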
The results of Section 6 can then be used to represent the resolvent of A_αβ as a "triangular" perturbation of the aforementioned Toeplitz operator. Throughout the section we assume that the condition imposed by Lemma 5.1 holds, that the operator B is bounded, and that the operator A is simple. The following proposition carries over, together with its proof, from [28]. For the spaces K_S and K_S^† one additionally has the element-wise equality S*K_S = K_S^†.

Remark 5. It can be verified that the characteristic function S is indeed inner if the spectrum of the operator L is discrete. The latter is satisfied, by the Kreĭn resolvent formula, provided that the conditions of Lemma 5.1 hold and the operator of the BVP with Dirichlet conditions has discrete spectrum, the latter being the case under minimal regularity conditions; see, e.g., the discussion in [37] and references therein.

The formula (6.5) applied to the operator L and a similar computation in relation to the operator L* now yield the following result.

Theorem 7.2. The operator (L − zI)^{-1} for z ∈ C_- is unitarily equivalent to the Toeplitz operator f ↦ P_S^† f(·)(· − z)^{-1} in the space K_S^†; the operator (L* − zI)^{-1} for z ∈ C_+ is unitarily equivalent to the Toeplitz operator f ↦ P_S f(·)(· − z)^{-1} in the space K_S. Here P_S^† and P_S are the orthogonal projections onto K_S^† and K_S, respectively: where P_+, P_- are the orthogonal projections onto the Hardy classes H^2_+(E), H^2_-(E), respectively.

For the operators of BVPs defined by different boundary conditions parameterised by the operator B, including self-adjoint ones, a similar argument yields the following representation.

Theorem 7.3. The operator (A_αβ − z)^{-1} for z ∈ C_- ∩ ρ(A_αβ) is unitarily equivalent to a "triangular" perturbation of the Toeplitz operator f ↦ P_S^† f(·)(· − z)^{-1} in the space K_S^†, namely, to the operator For z ∈ C_+ ∩ ρ(A_αβ) the resolvent (A_αβ − z)^{-1} is unitarily equivalent to the operator

Remark 6. It is rather well known that the spaces K_S and K_S^† are Hilbert spaces with reproducing kernels, closely linked to the corresponding de Branges spaces in the "scalar" case dim E = 1. We refer the reader to the book [42] for an in-depth survey of the subject area and of the related developments in modern complex analysis. The applications of the latter theorem to the direct and inverse spectral problems for operators of BVPs are outside the scope of the present paper and will be dwelt upon elsewhere.
IShTAR ICRF antenna field characterization in vacuum and plasma by using probe diagnostic

RF sheath physics is one of the key topics relevant for improvements of ICRF heating systems, which are present on nearly all modern magnetic fusion machines. This paper introduces the development and validation of a new approach to understanding general RF sheath physics. The presumed cause of enhanced plasma-antenna interactions, the parallel electric field, is not measured directly, but is proposed to be obtained from simulations in COMSOL Multiphysics® Modeling Software. Measurements of RF magnetic field components with B-dot probes are done on the linear device IShTAR (Ion cyclotron Sheath Test ARrangement) and then compared to simulations. Good agreement between the two is proposed as the criterion for the trustworthiness of the parallel electric field estimated as a component of the electromagnetic field in the modeling. A comparison between simulation and experiment for one magnetic field component in vacuum has demonstrated a close match. An additional complication to this ICRF antenna field characterization study is imposed by the helicon antenna, which is used as a plasma ignition tool in the test arrangement. The plasma case, in contrast to the vacuum case, must be approached carefully, since the fields of the ICRF antenna and the helicon antenna overlap. The two fields are distinguished by an analysis of the correlation between measurements with both antennas together and with each one separately.

Introduction

Interaction between ICRF antenna fields and the plasma of a tokamak edge has been studied for decades. RF sheath-accelerated ions have been shown to be the main cause of antenna limiter sputtering [1]. Attempts made on different experimental machines to reduce the negative effects accompanying plasma-wall interactions are usually heuristic, with conclusions drawn based on consequential parameters like impurity concentration, RF currents or the temperature of limiting structures [2,3], not on the electric field. Improvements are typically achieved for specific machine parameters: magnetic field angle, antenna geometry, toroidal phasing, etc. Besides that, modelling tools have been developed for SOL ICRF physics simulations. They are reviewed in detail in [4].

In this paper, an approach to understanding general RF sheath physics is described and first results are presented. The parallel electric field, supposedly responsible for the enhancement of interactions between antenna and plasma, is inherently connected to the magnetic field of an RF wave. Experimental results of magnetic field component measurements in the linear device IShTAR are used as a reference for simulations in COMSOL Multiphysics® Modeling Software in order to obtain the target parallel electric field values.

Field characterization with probe diagnostic

The object of interest, a thin layer in the vicinity of an antenna, is hardly approachable by diagnostic tools, especially since the most common (and so far the only one applicable in SOL plasma) diagnostic for potential measurements, electrical probes, is known to introduce significant changes by its presence. That is why indirect measurements are preferred, though their results necessarily have to be linked to the E-field values.
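As an illustration of the inference step behind such probe measurements, the following minimal sketch converts a measured B-dot coil voltage amplitude into a magnetic field amplitude via Faraday's law, V = N * A_eff * dB/dt. The coil turn number, effective area and attenuation factor below are hypothetical placeholders, not IShTAR hardware values.

import numpy as np

def bdot_field_amplitude(v_amp, freq_hz, n_turns, area_m2, atten=1.0):
    """Infer the magnetic field amplitude (T) seen by a B-dot probe.

    For a harmonic field B(t) = B0*sin(2*pi*f*t), Faraday's law gives a
    coil voltage amplitude V0 = N * A_eff * 2*pi*f * B0, hence
    B0 = V0 / (N * A_eff * 2*pi*f).  `atten` undoes a known detector
    attenuation factor (frequency dependent in general).
    """
    return (v_amp * atten) / (n_turns * area_m2 * 2.0 * np.pi * freq_hz)

# Example with placeholder values: a 10-turn, 3 mm radius coil at 5.22 MHz.
b0 = bdot_field_amplitude(v_amp=0.05, freq_hz=5.22e6,
                          n_turns=10, area_m2=np.pi * (3e-3) ** 2)
print(f"B0 ~ {b0:.3e} T")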
The electromagnetic field of an ICRF antenna can be simulated and values for each component of the field can be calculated separately. A relatively simple B-dot probe diagnostic is used for magnetic field measurements in our experimental device. A B-dot probe is able to provide a local measurement of one component of the magnetic field. Assuming that agreement of experimental and numerical results for the magnetic field components implies the equality of the electric field values, we can expect to deduce the sought parallel electric field from the simulation, as soon as we confirm the match of experiment and simulation results for the magnetic field.

Experimental setup

The objectives of IShTAR (Ion cyclotron Sheath Test ARrangement) are described in [5,6]. Fig. 1 illustrates all important features of the device: the main vacuum chamber of 0.5 m radius with an ICRF antenna (5.22 MHz, 1 kW of power) and a manipulator in front of it; a small vacuum chamber attached on the side, with a helicon antenna for plasma ignition (11.76 MHz, up to 3 kW); and two sets of magnetic coils of up to 0.24 T and 0.03 T, surrounding the big and small vacuum chambers, respectively. This setup allows around 10 seconds of plasma confinement, with a high density (~10^17 m^-3) plasma column of 15-20 cm in radius being in close proximity to the specially designed one-strap ICRF antenna. The working gas is He (Ar can also be used). The ICRF antenna magnetic field components are measured by B-dot probes. An array of 4 probes (Fig. 2) is installed on a radially movable manipulator parallel to the main magnetic field (z-direction), with coordinates of -15, -5, 5 and 15 cm counting from the ICRF antenna center.

Experimental results

The first thing that should be noted before proceeding to the experimental results is that there is more than one radio-frequency antenna in the experimental setup. The helicon antenna situated in the small chamber and dedicated to starting a discharge also emits RF waves. Its presence complicates the task of ICRF antenna magnetic field measurements, and its contribution must always be taken into account. In order to distinguish the magnetic fields of the two antennas, the following scenarios have been studied:

(a) Plasma with only the helicon antenna.
(b) Plasma with only the ICRF antenna (since this antenna is not designed to ignite plasma, only weak plasma is produced; no wave propagation is registered, and the evanescent wave signal decays as in vacuum).
(c) Plasma with both antennas simultaneously.

Experimental results (Fig. 3) reveal an evident superposition of the two antenna fields. In the range of positive values of radius (farther away from the ICRF antenna), the profiles of H_z for all 4 probes have a similar shape on two plots: Fig. 3a, which shows the case with the helicon antenna only, and Fig. 3c, which is for the case with two antennas. In the vicinity of the ICRF antenna (negative r), the measurements demonstrate the impact of the ICRF antenna field on the resulting signal. A significant rise of the signal towards the ICRF antenna, similar in shape to that observed in Fig. 3b, is especially well seen for the two central probes.
It can be noticed, however, that the absolute values of the ICRF antenna magnetic field in Fig. 3b are significantly higher than those measured in plasma with both antennas. The reason is the B-dot probe detectors, which have a constant attenuation factor for frequencies greater than 10 MHz, thus aiming to cut low-frequency noise. It is still possible to measure the 5.22 MHz signal of the ICRF antenna with those detectors, using a different calibration (as was done for the results in Fig. 3b). But in combination with the 11.76 MHz frequency of the helicon antenna, the signal of the ICRF antenna becomes weak, being attenuated much more strongly than the helicon antenna signal. For a complete analysis of the superimposed magnetic fields of the two antennas, new detectors need to be made, with an equal attenuation factor for both frequencies. Nevertheless, it can already be concluded that it is feasible to distinguish the ICRF antenna magnetic field from the sum of the two fields.

Simulation

A realistic geometry of all in-vessel parts (including the ICRF antenna, manipulator and probes) has been built in COMSOL Multiphysics®. Fig. 4 is a view of the ICRF antenna without a side wall: the antenna strap, the limiting (supporting) structure and the feeding coaxial transmission line can be seen. In the background the small chamber is visible.

Vacuum case

The standard COMSOL Electromagnetics Module makes it possible to model the vacuum electromagnetic field of the IShTAR ICRF antenna. In vacuum, an RF wave of 5.22 MHz propagates in the coaxial transmission line and then becomes evanescent, causing very small fields inside the IShTAR chamber. The magnetic field distribution in logarithmic scale (log of A/m) is plotted in Fig. 5. The magnetic field measured by a B-dot probe is represented in the simulation as an average field inside a cylindrical volume at the position, and of the size, of the real inductor used in a probe. The radial profiles of H_z at 5.22 MHz for the 4 probes (Fig. 6) have the same shape in the experiment and in the modeling. Due to the perfect symmetry of the ICRF antenna and probes in the model, the results for the two outer and the two central probes are identical, which is well reproduced in the experiment. Absolute values do not play any role here, since they are fully dependent on the amount of injected power and the matching system parameters. These can always be varied in the simulation, if needed. Four radial profiles obviously do not provide a full 2D plot of the magnetic field parameters. Here we are making the assumption that if a comparison of the 4 radial experimental and simulation profiles is successful, the electromagnetic field distribution from the simulation is considered to closely depict the real field in our device. For more certainty, not only one component of the magnetic field can be compared, but at least two. Since a simulation of the electromagnetic field in COMSOL gives all components of the field at any geometrical point, the parallel electric field is thus known at any required position. The calculated H_z and E_z in front of the antenna are provided as examples in Figs. 7 and 8.

Plasma case

Plasma can be programmed in COMSOL's Electromagnetics Module as a material with manually assigned physical properties. The next step of the validation of the approach presented in this paper is planned to be a plasma simulation in the IShTAR geometry. Ideally, each case (ICRF antenna, helicon antenna, both antennas together) needs to be addressed separately, and comparisons with experiment have to be made accordingly.
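The separation of the two antenna contributions described above amounts to comparing narrow-band amplitudes of the probe signal at the two drive frequencies across the three scenarios. A minimal sketch of that step, assuming a uniformly sampled probe voltage trace (all numbers are illustrative, not IShTAR settings):

import numpy as np

def band_amplitude(signal, fs, f0, bw=0.1e6):
    """Amplitude of the spectral line of `signal` within f0 +/- bw/2."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs > f0 - bw / 2) & (freqs < f0 + bw / 2)
    # Single-sided amplitude estimate of the dominant line in the band.
    return 2.0 * np.abs(spec[band]).max() / len(signal)

# Synthetic probe trace: 5.22 MHz (ICRF) + 11.76 MHz (helicon) + noise.
fs = 100e6
t = np.arange(200_000) / fs
v = 0.02 * np.sin(2 * np.pi * 5.22e6 * t) \
  + 0.05 * np.sin(2 * np.pi * 11.76e6 * t) \
  + 0.005 * np.random.default_rng(0).standard_normal(t.size)

print("ICRF component:   ", band_amplitude(v, fs, 5.22e6))
print("Helicon component:", band_amplitude(v, fs, 11.76e6))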
Conclusion

An approach for obtaining the parallel electric field of an ICRF antenna using a combination of experiment and simulation is presented. The obtained agreement of experimental and modelling results for one of the magnetic field components in the vacuum case supports the feasibility of the proposed approach. Further comparisons, for other magnetic field components in vacuum and for the plasma case, will provide a more confident and detailed assessment of the applicability of the discussed approach.

Fig. 2. Probe array on a manipulator in front of an ICRF antenna inside IShTAR.
Fig. 5. Magnetic field of the IShTAR ICRF antenna in vacuum in logarithmic scale.
Fig. 7. Calculated 2D distribution of H_z (in A/m) in front of the ICRF antenna in vacuum.
Fig. 8. Calculated 2D distribution of E_z (in V/m) in front of the ICRF antenna in vacuum.

This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 under grant agreement No 633053. The views and opinions expressed herein do not necessarily reflect those of the European Commission.
Mixed Boundary Value Problem on Hypersurfaces

and on Γ_D the Dirichlet boundary conditions are prescribed, while on Γ_N the Neumann conditions. The unique solvability of the mixed BVP is proved, based upon the Green formulae and the Lax-Milgram Lemma. Further, the existence of the fundamental solution to div_S(A∇_S) is proved, which is interpreted as the invertibility of this operator in the setting H^s_{p,#}(S) → H^{s-2}_{p,#}(S), where H^s_{p,#}(S) is a subspace of the Bessel potential space and consists of functions with mean value zero.

In [3], the boundary value problem for the Laplace-Beltrami equation with the Dirichlet boundary condition was considered, where ν_Γ := (ν_{Γ,1}, ..., ν_{Γ,n})^⊤ is the unit normal vector field to the boundary Γ and tangent to the hypersurface C, and (·)^+ denotes the trace on the boundary. The derivative is tangent to the hypersurface C and normal with respect to the boundary Γ. The BVPs (1) and (2) were investigated in [3] in the following classical weak setting: and also in the nonclassical weak setting: and the following was proved. For the solvability of the Neumann problems (2), (4) and (2), (5), the necessary and sufficient compatibility condition should be fulfilled, which guarantees the existence and the uniqueness of a solution. If f and h are regular integrable functions, the compatibility condition (6) acquires the form

In Remarks 15 and 16, it is shown that the unique solvability of the Dirichlet BVP (1), (4) and the Neumann BVP (2), (4) in the classical formulation follows from the Lax-Milgram Lemma. The investigation in [3] is based on the technique of Günter's derivatives developed in the preprint of Duduchava from 2002 and later in the paper of Duduchava et al. [2], and applies the potential method. Similar problems, for p = 2, were investigated earlier by a different technique in the paper of Mitrea and Taylor [4].

The purpose of the present paper is to investigate the boundary value problems for the anisotropic Laplace equation with mixed boundary conditions: where ∂C = Γ = Γ_D ∪ Γ_N is a decomposition of the boundary into two connected parts, A = {a_jk} is an n × n strictly positive definite matrix, and for all X ∈ C. We consider the BVP (8) in the weak classical setting (4). The nonclassical weak setting (5) will be considered in a forthcoming paper.

Remark 2. As shown in [14], page 196, condition (4) does not ensure the uniqueness of solutions to the BVPs (1), (2) and (8). The right hand side f needs the additional constraint that it belongs to the subspace H^{-1}_0(Ω) ⊂ H^{-1}(Ω), which is the orthogonal complement to the subspace H^{-1}_Γ(Ω) of those distributions from H^{-1}(Ω) which are supported on the boundary Γ = ∂Ω of the domain only.

For the classical setting (4), we apply the Lax-Milgram Lemma and prove the unique solvability of the problem rather easily, while the investigation of the nonclassical setting (5) relies again on the potential method. Mixed BVPs for the Laplace equation in domains have been investigated by the Lax-Milgram Lemma by many authors (see, e.g., the recent lecture notes online [5]).

BVPs on hypersurfaces arise in a variety of situations and have many practical applications. See, for example, [6, Section 7.2] for the heat conduction by surfaces, [7, Section 10] for the equations of surface flow, [8,9] for the vacuum Einstein equations describing gravitational fields, and [10] for the Navier-Stokes equations on spherical domains, as well as the references therein.
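To indicate how the Lax-Milgram Lemma enters, the following is a sketch of the weak formulation behind the mixed problem (8), written under the assumption (consistent with the setting above) that the solution is sought among H^1 functions on C with vanishing trace on Γ_D:

\[
  a(u, v) := \int_{\mathcal C} \langle A \nabla_S u, \nabla_S v \rangle \, d\sigma
  = \int_{\mathcal C} f v \, d\sigma + \int_{\Gamma_N} h\, v^+ \, ds
  \qquad \text{for all } v \in H^1(\mathcal C),\ v^+|_{\Gamma_D} = 0 .
\]

Since A is strictly positive definite, the form a(·,·) is bounded and, by a Poincaré-type inequality on this subspace (using that Γ_D has positive measure), coercive, so the Lax-Milgram Lemma yields a unique weak solution.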
A hypersurface S in R has the natural structure of an (−1)-dimensional Riemannian manifold and the aforementioned PDEs are not the immediate analogues of the ones corresponding to the flat, Euclidean case, since they have to take into consideration geometric characteristics of S such as curvature.Inherently, these PDEs are originally written in local coordinates, intrinsic to the manifold structure of S. Another problem considered in the present paper is the existence of a fundamental solution for the Laplace-Beltrami operator.An essential difference between differential operators on hypersurfaces and the Euclidean space R lies in the existence of fundamental solution: in R fundamental solution exists for all partial differential operators with constant coefficients if it is not trivially zero.On a hypersurface even Laplace-Beltrami operator does not have a fundamental solution because it has a nontrivial kernel, constants, in all Bessel potential spaces.Therefore we consider Laplace-Beltrami operator in Hilbert spaces with detached constants , for all 1 < < ∞, ∈ R, and prove that it is an invertible operator.Another description of the space W ,# (S) is that it consists of all functions ∈ W (S) (distributions if < 0, which have the zero mean value, (, 1) S = 0).The established invertibility implies the existence of the certain fundamental solution, which can be used to define the volume (Newtonian), single layer, and double layer potentials. The structure of the paper is as follows.In Section 2, we expose all necessary definitions and some auxiliary material, partly new ones.Here the invertibility of the Laplace-Beltrami operator in the setting W ,# (S) → W −2 ,# (S) is proved.In Section 3, using the Lax-Milgram Lemma, it is proved that the basic mixed BVP (8) has a unique solution in the weak classical setting (4). Auxiliary Material We commence with definitions of a hypersurface.There exist other equivalent definitions but these are the most convenient for us.Equivalence of these definitions and some other properties of hypersurfaces are exposed, for example, in [3,11].Definition 3. A subset S ⊂ R of the Euclidean space is called a hypersurface if it has a covering S = ⋃ =1 S and coordinate mappings such that the corresponding differentials have the full rank rank Θ () = − 1, ∀ ∈ , = 1, . . ., , = 1, . . ., ; that is, all points of are regular for Θ for all = 1, . . ., .Such a mapping is called an immersion as well. A hypersurface is called smooth if the corresponding coordinate diffeomorphisms Θ in (10) are smooth ( ∞smooth).Similarly is defined a -smooth hypersurface. The next definition of a hypersurface is implicit. Definition 4. Let ⩾ 1 and ⊂ R be a compact domain.An implicit -smooth hypersurface in R is defined as the set where Let S be a closed hypersurface in R and let C be a smooth subsurface of S, given by an immersion with a boundary Γ = C, given by another immersion and let ^(X) be the outer unit normal vector field to C and let N() denote an extended unit field in a neighborhood C of C. ^Γ() is the outer normal vector field to the boundary Γ, which is tangential to C. A curve on a smooth surface C is a mapping of a line interval I to C. 
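Before proceeding, it may help to recall the Günter derivatives on which the calculus of [2,3] mentioned above is based; a sketch of the standard definitions, stated in the ambient coordinates of R^n with ν the unit normal field:

\[
  \mathcal D_j := \partial_j - \nu_j \partial_\nu, \qquad
  \partial_\nu := \sum_{k=1}^{n} \nu_k \partial_k, \qquad j = 1, \dots, n,
\]
\[
  \nabla_S u = (\mathcal D_1 u, \dots, \mathcal D_n u)^\top, \qquad
  \operatorname{div}_S V = \sum_{j=1}^{n} \mathcal D_j V_j \quad \text{for a tangential field } V,
\]

so that, on a closed hypersurface, Δ_S = div_S ∇_S is expressed entirely through the D_j, in agreement with the intrinsic definitions via the metric tensor recalled below.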
A vector field U ∈ V(Ω) defines the first order differential operator where F U () is the orbit of the vector field .Let be a first order differential operator with real valued (variable) matrix coefficients, acting on vector-valued functions in R , and its principal symbol is given by the matrix-valued function To distinguish an open and a closed hypersurface, we use the notation S for a closed hypersurface without the boundary rtialS = 0 (we remind the reader that the notation C is reserved for an open hypersurface with the boundary Γ := rtialS).Definition 6.We say that is a tangential operator to the hypersurface S, with unit normal ^, if (; ^) = 0 on the hypersurface S. (25) for every 1 function defined in a neighborhood of S. We continue with the definition of the surface divergence div S , the surface gradient ∇ S , and the surface Laplace-Beltrami operator Δ S . According to the classical differential geometry, the surface gradient ∇ S of a function ∈ 1 (S) is defined by and the surface divergence of a smooth tangential vector field V is defined by where Γ denotes the Christoffel symbols and := [ ] is the covariant Riemann metric tensor, while −1 := [ ] is the inverse to it-the contravariant Riemann tensor.div S is the negative dual to the surface gradient: The Laplace-Beltrami operator Δ S on S is defined as the composition Theorem 8 (cf.[2]).For any function ∈ 1 (S), one has Also, for a 1-smooth tangential vector field The Laplace-Beltrami operator Δ S on S takes the form M 2 , ∀ ∈ 2 (S) . (34) Corollary 9 (cf.[2]).Let S be a smooth closed hypersurface.The homogeneous equation has only a constant solution in the space W 1 (S). Proof.Due to (31) and ( 35), we get which gives ∇ S = 0.But the trivial surface gradient means constant function = const (this is easy to ascertain by analysing the definition of Günter's derivatives; see, e.g., [3]). Let C be a subsurface of a smooth closed surface M, C ⊂ M, with the smooth boundary Γ := rtialS.The space H (C) is defined as the subspace of those functions ∈ H (M), which are supported in the closure of the subsurface, supp ⊂ C, whereas H (C) denotes the quotient space H (C) = H (M)/ H (C ), and C := M \ C is the complementary subsurface to C. The space H (C) can be identified with the space of distributions on C which have an extension to a distribution ℓ ∈ H (M).Therefore, C H (M) = H (C), where C denotes the restriction operator of functions (distributions) from the surface M to the subsurface R . By X (M) we denote one of the spaces: It is obvious that and 0 = 0.Moreover, X (M) decomposes into the direct sum and the dual (adjoint) space is In fact, the decomposition (39) follows from the representation of arbitrary function ∈ X (M), because the average of the difference of a function and its average is zero: ( 0 ) aver = ( − aver ) aver = 0. Since the Sobolev space W ,# (M) with integer smoothness parameter = 1, 2, . . 
does not contain constants, due to Corollary 9 an equivalent norm in this space can also be defined via the surface derivatives alone. In particular, in the space W^1_{p,#}(M) the equivalent norm is ‖φ | W^1_{p,#}(M)‖ = ‖∇_S φ | L_p(M)‖.

The description (40) of the dual space follows from the fact that the dual space to X^s(M) is X^{−s}(M) (see [12]); therefore, due to the decomposition (39) and the Hahn-Banach theorem, the dual space to X^s_#(M) should be embedded into X^{−s}(M). The only functional from X^{−s}(M) that vanishes on the entire space X^s_#(M) is the constant 1 ∈ X^{−s}(M) (see definition (37)). After detaching this functional, the remainder coincides, due to (39), with the space X^{−s}_#(M), which is the dual to X^s_#(M).

The perturbed operator div_S ∇_S − H is invertible, which can be interpreted as the existence of a fundamental solution to div_S ∇_S − H. Therefore, div_S ∇_S has a fundamental solution in the setting (45).

Proof. The first part of the theorem is proved in [3, Theorem 7.1] for the space setting W^1(S) → W^{−1}(S) only. Therefore, we will prove it here in full generality. First of all, note that the operator (44) is bounded, and, as an elliptic operator on a closed hypersurface, div_S ∇_S − H in (44) is Fredholm for all s ∈ R and 1 < p < ∞ (it has a parametrix if S is infinitely smooth; see [13,15,16]).

Let us prove the uniqueness of the solution. For this, consider the homogeneous boundary conditions φ⁺ = 0 and (ν_Γ, ∇_S φ)⁺ = 0 on Γ. Then (div_S ∇_S − H)φ = 0 and ∫_Γ ⟨(ν_Γ, ∇_S φ)⁺, φ⁺⟩ = 0, and finally we get ∇_S φ = 0. The conclusion φ = const = 0 follows as in the case (i). Therefore, Ker(div_S ∇_S − H) = {0}. Since the operator is self-adjoint, the same is true for the dual operator, Coker(div_S ∇_S − H) = Ker(div_S ∇_S − H) = {0}, which, together with the Fredholm property, gives the invertibility.

Let A be, as in (23), a first order differential operator with C^1-smooth coefficients. A is tangential if and only if the adjoint operator A^* is tangential. If A is tangential to S and φ is defined in a neighborhood of S, then
2,912
2014-08-17T00:00:00.000
[ "Mathematics" ]
A Block Coordinate Descent-based Projected Gradient Algorithm for Orthogonal Non-negative Matrix Factorization This article utilizes the projected gradient method (PG) for a non-negative matrix factorization problem (NMF), where one or both matrix factors must have orthonormal columns or rows. We penalise the orthonormality constraints and apply the PG method via a block coordinate descent approach. This means that at a certain time one matrix factor is fixed and the other is updated by moving along the steepest descent direction computed from the penalised objective function and projecting onto the space of non-negative matrices. Our method is tested on two sets of synthetic data for various values of penalty parameters. The performance is compared to the well-known multiplicative update (MU) method from Ding (2006), and to a modified globally convergent variant of the MU algorithm recently proposed by Mirzal (2014). We provide extensive numerical results coupled with appropriate visualizations, which demonstrate that our method is very competitive and usually outperforms the other two methods. Motivation Many machine learning applications require processing large and high dimensional data. The data could be images, videos, kernel matrices, spectral graphs, etc., represented as an m × n matrix R. The data size and the amount of redundancy increase rapidly when m and n grow. To make the analysis and the interpretation easier, it is favorable to obtain a compact and concise low rank approximation of the original data R. This low-rank approximation is known to be very efficient in a wide range of applications, such as text mining [2,27,30], document classification [3], clustering [19,32], spectral data analysis [2,12], face recognition [35], and many more. There exist many different low rank approximation methods. For instance, two well-known strategies, broadly used for data analysis, are singular value decomposition (SVD) [9] and principal component analysis (PCA) [11]. Much real-world data is non-negative, and the related hidden parts express physical features only when the non-negativity holds. The factorizing matrices in SVD or PCA can have negative entries, making it hard or impossible to put a physical interpretation on them. Non-negative matrix factorization was introduced as an attempt to overcome this drawback, i.e., to provide the desired low rank non-negative matrix factors. Problem formulation A non-negative matrix factorization problem (NMF) is a problem of factorizing the input non-negative matrix R into the product of two lower rank non-negative matrices G and H: R ≈ GH, (1) where R ∈ R^{m×n}_+ usually corresponds to the data matrix, G ∈ R^{m×p}_+ represents the basis matrix, and H ∈ R^{p×n}_+ is the coefficient matrix. With p we denote the number of factors, for which it is desired that p ≪ min(m, n). If we consider each of the n columns of R as a sample of m-dimensional vector data, the factorization represents each instance (column) as a non-negative linear combination of the columns of G, where the coefficients correspond to the columns of H. The columns of G can therefore be interpreted as the p pieces that constitute the data R. To compute G and H, condition (1) is usually rewritten as a minimization problem using the Frobenius norm: min_{G≥0, H≥0} f(G, H) = ‖R − GH‖²_F. (NMF) It is demonstrated in certain applications that the performance of the standard NMF in (NMF) can often be improved by adding auxiliary constraints, which could be sparseness, smoothness, or orthogonality.
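For concreteness, the (NMF) objective just written can be evaluated directly; this is a minimal sketch (NumPy assumed, names illustrative rather than taken from the authors' MATLAB code):

```python
import numpy as np

def nmf_objective(R, G, H):
    """Frobenius-norm NMF objective f(G, H) = ||R - G H||_F^2."""
    return np.linalg.norm(R - G @ H, 'fro') ** 2

# Toy usage: random non-negative data with inner dimension p = 2.
rng = np.random.default_rng(0)
R = rng.random((6, 5))
G, H = rng.random((6, 2)), rng.random((2, 5))
print(nmf_objective(R, G, H))
```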
Orthogonal NMF (ONMF) was introduced by Ding et al. [8]. To improve the clustering capability of the standard NMF, they imposed orthogonality constraints on the columns of G or on the rows of H. Considering the orthogonality on the columns of G, it is formulated as follows: min_{G≥0, H≥0} ‖R − GH‖²_F subject to GᵀG = I. (ONMF) If we enforce orthogonality on the columns of G and on the rows of H, we obtain the bi-orthogonal ONMF (bi-ONMF), which is formulated as min_{G≥0, H≥0} ‖R − GH‖²_F subject to GᵀG = I and HHᵀ = I, (bi-ONMF) where I denotes the identity matrix. Related work The NMF was first studied by Paatero et al. [26,1] and was made popular by Lee and Seung [17,18]. There are several different existing methods to solve (NMF). The most used approach to minimize (NMF) is the simple MU method proposed by Lee and Seung [17,18]. In Chu et al. [23], several gradient-type approaches are mentioned. Chu et al. reformulated (NMF) as an unconstrained optimization problem and then applied the standard gradient descent method. Considering both G and H as variables in (NMF), it is obvious that f(G, H) is a non-convex function. However, considering G and H separately, we obtain two convex sub-problems. Accordingly, a block-coordinate descent (BCD) approach [18] is applied to obtain values for G and H that correspond to a local minimum of f(G, H). Generally, the scheme adopted by BCD algorithms is to recurrently update blocks of variables only, while the remaining variables are fixed; a skeleton of this scheme is sketched below. NMF methods which adopt this optimization technique are, e.g., the MU rule [17], the active-set-like method [15], and the PG method for NMF [20]. In [20], two PG methods were proposed for the standard NMF. The first one is an alternating least squares (ALS) method using projected gradients. This way, H is fixed first and a new G is obtained by PG. Then, with G fixed at the new value, the PG method looks for a new H. The objective function in each least squares problem is quadratic. This enabled the author to use the Taylor expansion of the objective function to obtain a condition equivalent to the Armijo rule, while checking the sufficient decrease of the objective function as a termination criterion in a step-size selection procedure. The other method proposed in [20] is a direct application of the PG method to (NMF). There is also a hierarchical ALS method for NMF, which was originally proposed in [6,10] as an improvement to the ALS method. It consists of a BCD method with single component vectors as coordinate blocks. As the original ONMF algorithms in [19,32] and their variants [33,34,5] are all based on the MU rule, there has been no convergence guarantee for these algorithms. For example, Ding et al. [8] only prove that the successive updates of the orthogonal factors will converge to a local minimum of the problem. Because the orthogonality constraints cannot be rewritten into a non-negatively constrained ALS framework, convergent algorithms for the standard NMF (e.g., see [20,14,13,16]) cannot be used for solving the ONMF problems. Thus, no convergent algorithm was available for ONMF until recently. Mirzal [24] developed a convergent algorithm for ONMF. The proposed algorithm was designed by generalizing the work of Lin [21], in which a convergent algorithm was provided for the standard NMF based on a modified version of the additive update (AU) technique of Lee [18]. Mirzal [24] proves the global convergence of his algorithm for solving the ONMF problem. In fact, he first proves the non-increasing property of the objective function evaluated at the sequence of iterates.
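As referenced above, the generic BCD loop underlying all of these methods can be summarised in a few lines; this is an illustrative skeleton only, where `update_g` and `update_h` stand for whichever block solver is plugged in (MU, MAU, or PG):

```python
def bcd(R, G, H, update_g, update_h, max_iter=100):
    """Block coordinate descent: update one factor while the other is fixed."""
    for _ in range(max_iter):
        G = update_g(R, G, H)  # (approximately) minimize over G with H fixed
        H = update_h(R, G, H)  # (approximately) minimize over H with G fixed
    return G, H
```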
Secondly, he shows that every limit point of the generated sequence is a stationary point, and finally he proves that the sequence of iterates possesses a limit point. Our contribution In this paper, we consider the penalty reformulation of (bi-ONMF), i.e., we add the orthogonality constraints, multiplied by penalty parameters, to the objective function to obtain reformulated versions of (ONMF) and (bi-ONMF). The main contributions are: • We develop an algorithm for (ONMF) and (bi-ONMF), which is essentially a BCD algorithm, in the literature also known as alternating minimization, coordinate relaxation, the Gauss-Seidel method, subspace correction, domain decomposition, etc., see e.g. [4,29]. For each block optimization, we use a PG method with the Armijo rule to find a suitable step-size. • We construct synthetic data sets of instances for (ONMF) and (bi-ONMF), for which we know the optimum value by construction. • We use MATLAB [31] to implement our algorithm and two well-known (MU-based) algorithms: the algorithm of Ding [8] and that of Mirzal [24]. The code is available upon request. • The implemented algorithms are compared on the constructed synthetic data-sets in terms of: (i) the accuracy of the reconstruction, and (ii) the deviation of the factors from orthonormality. Accuracy is measured by the so-called root-square error (RSE), defined as RSE = ‖R − GH‖_F / ‖R‖_F, and deviations from orthonormality are computed using formulas (17) and (18) from Sect. 4 (see the code sketch below). Our numerical results show that our algorithm is very competitive and almost always outperforms the MU algorithms. Notations Some notations used throughout our work are described here. We denote scalars and indices by lower-case Latin letters, vectors by lower-case boldface Latin letters, and matrices by capital Latin letters. R^{m×n} denotes the set of m by n real matrices, and I symbolizes the identity matrix. We use the notation ∇ for the gradient of a real-valued function. We define ∇⁺ and ∇⁻ as the positive and (unsigned) negative parts of ∇, respectively, i.e., ∇ = ∇⁺ − ∇⁻. The symbols ⊙ and ⊘ denote element-wise multiplication and element-wise division, respectively. Structure of the paper The rest of our work is organized as follows. In Sect. 2, we review the well-known MU method and the rules used for updating the factors per iteration in our computations; we also outline the globally convergent MU version of Mirzal [24]. In Sect. 3 we then present our PG method and discuss its stopping criteria. Sect. 4 presents the synthetic data and the results of the implementation of the three decomposition methods presented in Sects. 2 and 3. This implementation is done for both the problem (ONMF) and (bi-ONMF). Some concluding remarks are presented in Sect. 5. 2 Existing methods to solve (NMF) 2.1 MU method of Ding [8] Several popular approaches to solve (NMF) are based on so-called MU algorithms, which are simple to implement and often yield good results. The MU algorithms originate from the work of Lee and Seung [18]. Various MU variants were later proposed by several researchers; for an overview see [7]. At each iteration of these methods, the elements of G and H are multiplied by certain updating factors. As already mentioned, (ONMF) was proposed by Ding et al. [8] as a tool to improve the clustering capability of the associated optimization approaches.
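Before reviewing the individual algorithms, the comparison metrics introduced above can be made concrete; the RSE follows the definition just given, while the exact forms of (17) and (18) did not survive extraction, so the unnormalised Frobenius distances below are assumptions:

```python
import numpy as np

def rse(R, G, H):
    """Root-square error: Frobenius residual normalised by ||R||_F."""
    return np.linalg.norm(R - G @ H, 'fro') / np.linalg.norm(R, 'fro')

def infeas_g(G):
    """Deviation of the columns of G from orthonormality (assumed form of (17))."""
    return np.linalg.norm(G.T @ G - np.eye(G.shape[1]), 'fro')

def infeas_bi(G, H):
    """Combined deviation for bi-ONMF: G columns and H rows (assumed form of (18))."""
    return infeas_g(G) + infeas_g(H.T)
```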
To adapt the MU algorithm for this problem, they employed standard Lagrangian techniques: they introduced the Lagrange multiplier Λ (a symmetric matrix of size p × p) for the orthogonality constraint and minimized the Lagrangian function L(G, H, Λ) = ‖R − GH‖²_F + Trace(Λ(GᵀG − I)), where the orthogonality constraint is moved to the objective function as the penalty term Trace(Λ(GᵀG − I)). The complementarity conditions from the related KKT conditions can be rewritten as a fixed point relation, which finally leads to the MU rule (3) for (ONMF). They extended this approach to non-negative three-factor factorization with the demand that two factors satisfy orthogonality conditions, which is a generalization of (bi-ONMF). The MU rules (28)-(30) from [8], adapted to (bi-ONMF), are the main ingredients of Algorithm 1, which we will call Ding's algorithm.

Algorithm 1. Ding's MU algorithm for (bi-ONMF)
INPUT: R ∈ R^{m×n}_+, p ∈ N
1. Initialize: generate G ≥ 0 as an m × p random matrix and H ≥ 0 as a p × n random matrix.

Algorithm 1 converges in the sense that the solution pairs G and H correspond to a local minimum of the problem. If R has zero columns or rows, a division by zero may occur. Moreover, denominators close to zero may still cause numerical problems. To avoid this situation, we follow [28] and add a small positive number δ to the denominators of the MU terms (4). Note that Algorithm 1 can be easily adapted to solve (ONMF) by replacing the second MU rule from (4) with the second MU rule of (3).

MU method of Mirzal [24]

In [24], Mirzal proposed an algorithm for (ONMF) which is designed by generalizing the work of Lin [21]. Mirzal used the so-called modified additive update rule (the MAU rule), where the update term is added to the current value of each of the factors. This additive rule was used by Lin in [21] in the context of the standard NMF. He also provided in his paper a convergence proof, stating that the iterates generated by his algorithm converge in the sense that the RSE is decreasing and the limit point is a stationary point. In [24], Mirzal discussed the orthogonality constraint on the rows of H, while in [25] the same results are developed for the case of (bi-ONMF). Here we review Mirzal's algorithm for (bi-ONMF), presented in the unpublished paper [25]. This algorithm actually solves the equivalent problem (pen-ONMF), where the orthogonality constraints are moved into the objective function (the so-called penalty approach) and the importance of the orthogonality constraints is controlled by the penalty parameters α, β. The gradients of this penalised objective function with respect to G and H enter the update rules; a sketch of both is given below. For the objective function in (pen-ONMF), Mirzal proposed the MAU rules along with the use of Ḡ = (ḡ_ij) and H̄ = (h̄_ij), instead of G and H, to avoid the zero-locking phenomenon [24, Section 2], where the modified entries are bounded away from zero by a small positive number ν. Note that the algorithms working with the MU rules for (pen-ONMF) must be initialized with positive matrices to avoid zero locking from the start, but non-negative matrices can be used to initialize the algorithm working with the MAU rules (see [25]). Mirzal [25] used the MAU rules with some modifications, considering Ḡ and H̄ in order to guarantee the non-increasing property, with a constant step to make δ_G and δ_H grow in order to satisfy this property. Here, δ_G and δ_H are the values added within the MAU terms to the denominators of the update terms for G and H, respectively. The algorithm proposed by Mirzal [25] is summarised as Algorithm 2 below.
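Before turning to Algorithm 2 and the PG method, here is a reconstruction sketch of the (pen-ONMF) objective and the gradients referred to above; their displayed formulas were lost in extraction, so the objective follows the verbal description and the gradients are derived by the chain rule (the scaling constants are an assumption, not taken from [25]):

```python
import numpy as np

def pen_onmf_objective(R, G, H, alpha, beta):
    """Penalised bi-ONMF objective: Frobenius fit plus orthonormality penalties."""
    p = G.shape[1]
    fit = np.linalg.norm(R - G @ H, 'fro') ** 2
    pen_g = alpha * np.linalg.norm(G.T @ G - np.eye(p), 'fro') ** 2
    pen_h = beta * np.linalg.norm(H @ H.T - np.eye(p), 'fro') ** 2
    return fit + pen_g + pen_h

def pen_onmf_gradients(R, G, H, alpha, beta):
    """Gradients of the penalised objective with respect to G and H (chain rule)."""
    p = G.shape[1]
    grad_g = 2 * (G @ (H @ H.T) - R @ H.T) + 4 * alpha * G @ (G.T @ G - np.eye(p))
    grad_h = 2 * (G.T @ (G @ H - R)) + 4 * beta * (H @ H.T - np.eye(p)) @ H
    return grad_g, grad_h
```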
Main steps of PG method

In this subsection we adapt the PG method proposed by Lin [20] to solve both (ONMF) and (bi-ONMF). Lin applied PG to (NMF) in two ways. The first approach is actually a BCD method: it consecutively fixes one block of variables (G or H) and minimizes the simplified problem in the other variable. The second approach by Lin directly minimizes (NMF). Lin's main focus was on the first approach, and we follow it.

Algorithm 2. Mirzal's algorithm for bi-ONMF [25]
INPUT: inner dimension p, maximum number of iterations: maxit; small positive δ, small positive step to increase δ.

We again solve the penalised version of the problem, (pen-ONMF), by the block coordinate descent method, which is summarised in Algorithm 3. The objective function in (pen-ONMF) is no longer quadratic, so we lose the nice properties of the Armijo rule that represent advantages for Lin. We nevertheless used the Armijo rule directly and still obtained good numerical results, see Sect. 4. We refer to (8) and (9) as sub-problems. Obviously, solving these sub-problems in every iteration could be more costly than the iterations of Algorithms 1-2. Therefore, we must find effective methods for solving these sub-problems. Similarly to Lin, we apply the PG method to solve the sub-problems (8)-(9). Algorithm 4 contains the main steps of the PG method for solving the latter and can be straightforwardly adapted for the former. For the sake of simplicity, we denote by F_H the function that we optimize in (8), which is actually a simplified version (pure H terms removed) of the objective function from (pen-ONMF) for H fixed. Similarly, for G fixed, the objective function from (9) will be denoted by F_G.

Algorithm 3. BCD method for (pen-ONMF)
Repeat:
Fix H := H_k and compute a new G by solving sub-problem (8).
Fix G := G_{k+1} and compute a new H by solving sub-problem (9).

In Algorithm 4, P is the projection operator which projects the new point (matrix) onto the cone of non-negative matrices (we simply set negative entries to 0). Inequality (10) is the Armijo rule for finding a suitable step-size guaranteeing a sufficient decrease. Searching for λ_k is a time-consuming operation; therefore we strive to perform only a small number of trials for a new λ in Step 3.1. Similarly to Lin [20], we allow any positive value for λ. More precisely, we start with λ = 1, and if the Armijo rule (10) is satisfied, we increase the value of λ by dividing it by γ < 1. We repeat this until (10) is no longer satisfied or the same matrix H_λ as in the previous iteration is obtained. If the starting λ = 1 does not yield an H_λ which satisfies the Armijo rule (10), then we decrease it by the factor γ and repeat this until (10) is satisfied. The numerical results obtained using different values of the parameters γ (the updating factor for λ) and σ (the parameter in (10)) are reported in the following subsections.

Stopping criteria for Algorithms 3 and 4

As is customary in the literature (e.g. see [22]), in a constrained optimization problem with a non-negativity constraint on the variable x, a common condition to check whether a point x^k is close to a stationary point is ‖∇^P f(x^k)‖ ≤ ε, where f is the differentiable function that we try to optimize, ∇^P f(x^k) is the projected gradient, defined componentwise as ∇^P f(x)_i = ∇f(x)_i if x_i > 0 and min(0, ∇f(x)_i) if x_i = 0, and ε is a small positive tolerance.

Algorithm 4. PG method using the Armijo rule to solve sub-problem (9)
INPUT: 0 < σ < 1, γ < 1, and initial H_0.
2. Find a λ (using the updating factor γ) such that (10) holds.
3. Repeat until some stopping criterion is satisfied.
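A minimal sketch of one Armijo-backtracked projected-gradient update for the H sub-problem, reusing `pen_onmf_objective` and `pen_onmf_gradients` from the sketch above (σ and γ play the same roles as in inequality (10); only the back-tracking branch is sketched, and all names are illustrative):

```python
import numpy as np

def project(M):
    """Projection P onto the non-negative cone: negative entries are set to 0."""
    return np.maximum(M, 0.0)

def pg_step_h(R, G, H, alpha, beta, sigma=0.001, gamma=0.1, max_tries=20):
    """One projected-gradient update of H with Armijo back-tracking."""
    f_old = pen_onmf_objective(R, G, H, alpha, beta)
    grad_h = pen_onmf_gradients(R, G, H, alpha, beta)[1]
    lam = 1.0
    for _ in range(max_tries):
        H_new = project(H - lam * grad_h)
        # Armijo sufficient-decrease test, in the spirit of inequality (10).
        if (pen_onmf_objective(R, G, H_new, alpha, beta) - f_old
                <= sigma * np.sum(grad_h * (H_new - H))):
            return H_new
        lam *= gamma  # shrink the step and try again
    return H
```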
For Algorithm 3, (11) becomes the analogous condition on the projected gradients of the penalised objective with respect to the pair (G, H). We impose a time limit in seconds and a maximum number of iterations for Algorithm 4 as well. Following [20], we also define stopping conditions for the sub-problems: the matrices G_{k+1} and H_{k+1} returned by Algorithm 4 must satisfy the corresponding sub-problem conditions with tolerance ε̄, where ε is the same tolerance used in (13). If the PG method for solving the sub-problem (8) or (9) stops after the first iteration, then we decrease the stopping tolerance as follows: ε̄ ← τ ε̄, where τ is a constant smaller than 1.

Numerical results

In this section we demonstrate how the PG method described in Sect. 3 performs compared to the MU-based algorithms of Ding and Mirzal, which were described in Subsections 2.1 and 2.2, respectively.

Artificial data

We created two sets of synthetic data using MATLAB [31]. The first set we call the bi-orthonormal set (BION). It consists of instances of matrices R ∈ R^{n×n}_+, which were created as products of G and H, where G ∈ R^{n×k}_+ has orthonormal columns while H ∈ R^{k×n}_+ has orthonormal rows. We created five instances of R for each pair (n, k₁) and (n, k₂) from Table 1. Matrices G were created in two phases: firstly, we randomly (uniform distribution) selected a position in each row; secondly, we selected a random number from (0, 1) (uniform distribution) for the selected position in each row. Finally, if it happened that after this procedure some column of G was zero or had a norm below 10⁻⁸, we found the first non-zero element in the largest column of G (according to the Euclidean norm) and moved it into the zero column. We created H similarly. Each triple (R, G, H) was saved as a triple of txt files. For example, NMF BIOG data R n=200 k=80 id=5.txt contains the 200 × 200 matrix R obtained by multiplying matrices G ∈ R^{200×80} and H ∈ R^{80×200}, which were generated as explained above. With id=5 we denote that this is the 5th matrix corresponding to this pair (n, k). The second set contains data similar to BION, but only one factor (G) is orthonormal, while the other (H) is non-negative but not necessarily orthonormal. We call this dataset uni-orthonormal (UNION). All computations were done using MATLAB [31] and a high performance computer available at the Faculty of Mechanical Engineering of the University of Ljubljana. This is an Intel Xeon X5670 (1536 hyper-cores) HPC cluster and an E5-2680 V3 (1008 hyper-cores) DP cluster, with an IB QDR interconnection, 164 TB of LUSTRE storage, 4.6 TB RAM and 24 TFlop/s performance.

Numerical results for UNION

In this subsection, we present the numerical results obtained by Ding's, Mirzal's, and our algorithm for the uni-orthogonal problem (ONMF), using the UNION data introduced in the previous subsection. We adapted the last two algorithms (Algorithms 2, 3) for UNION data by setting α = 0 in the problem formulation (bi-ONMF) and in all formulas underlying these two algorithms. Recall that for UNION data we have, for each pair n, k from Table 1, five symmetric matrices R for which we try to solve (ONMF) by Algorithms 1, 2 and 3. Note that all these algorithms demand as input the internal dimension k, i.e. the number of columns of the factor G, which is in general not known in advance. Even though we know this dimension by construction for UNION data, we tested the algorithms using internal dimensions p equal to 20%, 40%, . . ., 100% of k. For p = k we know the optimum of the problem, which is 0, so for this case we can also estimate how good the tested algorithms are in terms of finding the global optimum.
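For reproducibility, the factor-generation procedure described under "Artificial data" above can be sketched as follows; the 10⁻⁸ threshold and the column-repair rule follow the text, while the final normalisation is an assumption needed to obtain orthonormal columns:

```python
import numpy as np

def generate_factor(n, k, rng):
    """Create G in R^{n x k}: one uniformly chosen entry per row, set to U(0,1)."""
    G = np.zeros((n, k))
    cols = rng.integers(0, k, size=n)      # random position in each row
    G[np.arange(n), cols] = rng.random(n)  # random value at that position
    # Repair (near-)zero columns: move the first non-zero element of the
    # largest column (Euclidean norm) into the empty column.
    for j in range(k):
        if np.linalg.norm(G[:, j]) < 1e-8:
            big = int(np.argmax(np.linalg.norm(G, axis=0)))
            i = int(np.nonzero(G[:, big])[0][0])
            G[i, j], G[i, big] = G[i, big], 0.0
    # Columns have disjoint supports, hence are mutually orthogonal;
    # normalising each column makes them orthonormal (assumed final step).
    return G / np.linalg.norm(G, axis=0)

G = generate_factor(200, 80, np.random.default_rng(5))
```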
The first question we had to answer was which value of β to use in Mirzal's and our PG algorithm. It is obvious that larger values of β move the focus from optimizing the RSE to guaranteeing the orthonormality, i.e., feasibility for the original problem. We decided not to fix the value of β but to run both algorithms for β ∈ {1, 10, 100, 1000} and report the results. For each solution pair G, H returned by the algorithms, the non-negativity constraints hold by the construction of the algorithms, so we only need to consider the deviation of G from orthonormality, which we call infeasibility and define as the deviation of GᵀG from the identity matrix (formula (17)). The computational results that follow in the rest of this subsection were obtained by setting the tolerance in the stopping criterion to ε = 10⁻¹⁰, the maximum number of iterations to 1000 in Algorithm 3 and to 20 in Algorithm 4. We also set a time limit of 3600 seconds. Additionally, for σ and γ (the updating parameter for λ in Algorithm 4) we chose 0.001 and 0.1, respectively. Finally, for τ from (16) we set a value of 0.1. In general, Algorithm 3 converges to a solution in early iterations and the norm of the projected gradient falls below the tolerance shortly after the algorithm starts. Tables 2 and 3 present the RSE and infeasibility results, respectively.

Numerical results for bi-orthonormal data (BION)

In this subsection we provide the same type of results as in the previous subsection, but for the BION dataset. We used almost the same settings as for the UNION dataset: ε = 10⁻¹⁰, maxit = 1000, σ = 0.001 and time limit = 3600 s. The parameters γ, τ were slightly changed (based on experimental observations): γ = 0.75 and τ = 0.5. Additionally, we decided to take the same values for α, β in Algorithms 2 and 3, since the matrices R in the BION dataset are symmetric and both orthogonality constraints are equally important. We computed the results for values of α = β from {1, 10, 100, 1000}. In Tables 4-5 we report the average RSE and the average infeasibility, respectively, of the solutions obtained by Algorithms 1, 2, and 3. Since for this dataset we need to monitor how orthonormal both matrices G and H are, we adapt the measure of infeasibility to include both factors (formula (18)). The values of α = β do not have a big impact on RSE and infeasibility for Algorithm 3; a significant difference can be observed only when the internal dimension is equal to the real internal dimension, i.e., when p = 100%. Based on these numerical results, we can conclude that smaller β achieves better RSE and almost the same infeasibility, so it would make sense to use β = 1. For Algorithm 2 these differences are bigger and it is less obvious which β is appropriate. Again, if RSE is more important, then smaller values of β should be taken; otherwise, larger values.

Concluding remarks

We presented a projected gradient method to solve the orthogonal non-negative matrix factorization problem. We penalized the deviation from orthonormality with some positive parameters and added the resulting terms to the objective function of the standard non-negative matrix factorization problem. Then we minimized the resulting objective function under the non-negativity conditions only, in a block coordinate descent approach. The method was tested on two sets of synthetic data, one containing uni-orthonormal matrices and the other containing bi-orthonormal matrices. Different values of the penalty parameters controlling orthogonality were applied in the implementation to determine good pairs of such values. The performance of our algorithm was compared with two algorithms based on multiplicative update rules.
Algorithms were compared regarding the quality of factorization (RSE) and how much the resulting factors deviate from orthonormality. We provided an extensive list of numerical results which demonstrate that our method is very competitive and outperforms the others.

Table 2. Six plots which illustrate the quality of Algorithms 1, 2 and 3 regarding RSE on UNION instances with n = 100, 500, 1000, for β ∈ {1, 10, 100, 1000}. We can see that regarding RSE the performance of these algorithms on this dataset does not differ a lot. As expected, larger values of β yield larger values of RSE, but the differences are rather small. However, when p approaches 100% of k, Algorithm 3 comes closest to the global optimum RSE = 0.

Table 3. Six plots which illustrate the quality of Algorithms 1, 2 and 3 regarding infeasibility on UNION instances with n = 100, 500, 1000, for β ∈ {1, 10, 100, 1000}. We can see that regarding infeasibility the performance of these algorithms on this dataset does not differ a lot. As expected, larger values of β yield smaller values of infeasG, but the differences are rather small.

Table 4. RSE obtained by Algorithms 1, 2 and 3 on the BION data. For the latter two algorithms, we used α = β ∈ {1, 10, 100, 1000}. For each n ∈ {50, 100, 200, 500, 1000} we take all ten matrices R (five of them corresponding to k = 0.2n and five to k = 0.4n). We run all three algorithms on these matrices with inner dimensions p ∈ {0.2k, 0.4k, . . ., 1.0k} and all possible values of α = β. As before, each row represents the average (arithmetic mean) of the RSE obtained on instances corresponding to a given n and a given p as a percentage of k. We can see that the larger the β, the worse the RSE, which is consistent with expectations.

Table 5 (layout as in Table 4). We can observe that with these settings all algorithms can very often bring the infeasibility down to the order of 10⁻³, for all values of β.
6,258
2020-03-23T00:00:00.000
[ "Computer Science" ]
The concept of monotheism in the Book of Proverbs and an African (Yoruba) perspective Traditionally, King Solomon was regarded as the author of the Book of Proverbs because his name is mentioned in Proverbs 1:1, 10:1; 25:1; and I Kings 3:28. However, the Book of Proverbs also mentions Agur and Lemuel in Proverbs 30:1 and 31:1. From the above, it might be better to consider the Book as originally short collections (although some are long, e.g. Pr 31:10-31) meant to summarise the basic values of Israelite society so that they can be easily remembered (Matthews & Moyer 2012:239). These sayings are not unique to ancient Israel because most of their wisdom is borrowed and recycled from their Near Eastern neighbours. Introduction Traditionally, King Solomon was regarded as the author of the Book of Proverbs because his name is mentioned in Proverbs 1:1, 10:1; 25:1; and I Kings 3:28. However, the Book of Proverbs also mentions Agur and Lemuel in Proverbs 30:1 and 31:1. From the above, it might be better to consider the Book as originally short collections (although some are long, e.g. Pr 31:10-31) meant to summarise the basic values of Israelite society so that they can be easily remembered (Matthews & Moyer 2012:239). These sayings are not unique to ancient Israel because most of their wisdom is borrowed and recycled from their Near Eastern neighbours. The Hebrew word used for Proverbs is מׁשל. The exact meaning is not clear, but it could be related to the word 'rule' or 'to be like' in the form of comparison (Lucas 2015:2). The uniqueness of Proverbs among other wisdom books is incontestable because it uses יהוה as the name of God. Remarkably, Proverbs mentions יהוה about 94 times as its primary way of referring to God (Bostrom 1990:33; Lucas 2015:246). There are similarities between Israel's Book of Proverbs and those of other nations. The comparison of the Book of Proverbs with the surrounding nations' proverbs shows how truly international Israel's Book of Proverbs is (Bartholomew & O'Dowd 2011; Bostrom 1990:34; Lucas 2015:246). The Old Testament Book of Proverbs resembles the African (Egyptian) and Mesopotamian wisdom materials called the Instruction of Amenemope, the Instruction of Shuruppak and the Counsels of Wisdom. Similarities also exist with the Canaanite wisdom material called Ahiqar, which is regarded as the most important non-biblical Canaanite wisdom text. However, there is no consensus as to the extent to which the borrowing took place. What is certain is that whatever the amount of borrowing from these surrounding nations, it was given a unique Hebrew stamp, that is, Yahwisized. The uniqueness of the Book of Proverbs among other wisdom books is incontestable because it uses יהוה as the name of God. Its regular use of the name means that the Book is concerned about God's monotheism. The mention of that proper name (יהוה) 94 times and the generic name אלהים only twice (this generic name still refers to יהוה) emphasises the concept of monotheism. Monotheism in ancient Israel is not the denial of the existence of other gods, but the exclusive worship of Yahweh as the only one true God. Although the origin and the meaning of the name Yahweh are debatable, the majority of scholars believe that its origin lies in Exodus 3:13-15. Although the definition of proverbs is debatable, they can be defined as traditional sayings that give advice and instruction. A proverb is 'a relic of ageless tradition' that has a pithy structure.
Generally, scholars believe that the Yoruba religious tradition also holds that Yahweh is monotheistic, by virtue of the name given to him (Olodumare). Unfortunately, the Yoruba translation of the Hebrew word יהוה is Oluwa instead of Olodumare. According to Birch et al. (1999:384), wisdom teaching is theological literature; this is true because it witnesses to Yahweh and Yahweh's purpose for the world. What wisdom teaches, observes, and reflects on is 'a world order that is willed, governed, and sustained by Yahweh'. Wisdom theology is also a theology of creation. Yahweh, the creator, intends for the world to be 'whole, safe, prosperous, peaceful, just, fruitful, and productive', that is, for the world to be at peace (Birch et al. 1999:384). However, he sets limits, building reward and punishment into that order. That is the reason Proverbs 1:7 becomes the focal point and motto. A true understanding of reality is the recognition of Yahweh, who is the creator. Any disregard for the will of Yahweh will surely bring punishment. In any discussion of biblical faith by scholars, the wisdom books, particularly the Book of Proverbs, seem to pose the greatest difficulty because they do not seem to fit into the type of faith expressed in the historical and prophetic works of literature. Old Testament theology is centred on Yahweh's acts in history and the interpretation of these acts (Wright 1979:103). The questions that arise are whether wisdom theology contradicts, or complements and/or supplements, other parts of Old Testament theology. The question of whether African (Yoruba) Traditional Religion is monotheistic or not has been a subject of debate among biblical scholars. In light of the above, my question is: can the Book of Proverbs and African (Yoruba) Traditional Religion be monotheistic? This article aims to discuss the monotheistic nature of the Book of Proverbs and African (Yoruba) Traditional Religion. In order to achieve this aim, I will use African Biblical Hermeneutics (a methodology that makes African socio-cultural contexts the subject of interpretation, reappraising ancient biblical tradition in the light of African worldviews, cultures and life experiences). To achieve this goal, it will be necessary to discuss the meaning of proverbs, the origin of biblical proverbs, the concept of monotheism in the Book of Proverbs, and African religion. The article also discusses the translation of יהוה as Oluwa instead of Olodumare in the Yoruba Bible. Definition of proverbs Despite the familiarity of proverbs all over the world, there is still no unanimous agreement concerning their definition (Olumuyiwa 2012:106-120). Despite the lack of a universally accepted definition, proverbs can still be recognised as a universal phenomenon, although the meanings given to particular proverbs may differ from one culture to another. Proverbs have agitated scholars of different disciplines (Awolalu 1979). According to Wolfgang Mieder (1985:1-12), proverbs can be recognised through the use of common sense. Proverbs can be defined as traditional sayings that give advice and instruction (Mieder 1989:2). A proverb is 'a relic of ageless tradition' that has a pithy structure (Fayemi 2009:2). Olatunji (1984:167) seems to agree with this definition when he sees proverbs as 'an inheritance from elders' who have countless experiences. Below are the 'essential features of proverbs' that are generally accepted (Fayemi 2009:6-7): 1. Proverbs originate from oral tradition. 2.
They are passed on from generation to generation, and their meaning can change. 3. They are metaphorical, and can only be understood metaphorically. 4. They are relics of cultural experiences. 5. Human observation, experience, and nature are important bases for proverbs. 6. They are universal and particular, and can make one think. 7. They can establish what life truly is. 8. Proverbs can be applied to almost all situations. Monotheism Monotheism is the belief that there is only one true God (Goldingay 1988:443). Ringgren (1978:602) defines monotheism as 'the belief in and exclusive worship of one god'. Strict monotheism, which implies the denial of the existence of other gods, is a fairly rare phenomenon, represented primarily by Christianity, Judaism, and Islam, and to a certain extent Zoroastrianism. According to the evolutionistic school of comparative religion, monotheism is the last and highest stage in the evolution of religion (Ringgren 1978:602-604). Lang and Schmidt contend that the origin of religion is primitive monotheism, but Pettazzoni thinks that strict monotheism came into existence as a protest against polytheism (cited by Ringgren 1978:602-604). Primitive monotheism is the belief in one God who is high above other gods. He is often identified with the sky. He is the master and controller of man's destiny and is superior to all other gods. The term monolatry was first introduced by Wellhausen in 1880 and was taken up by W.R. Smith as a necessary evolutionary stage of transition from polytheism to monotheism (Cross 1974:931). According to these scholars, monolatrism was Israel's religious condition from the Sinai Covenant in the Book of Exodus to the time of the prophets (Cross 1974:931). Some scholars maintain that ancient Israelites originally practiced monolatrism or henotheism (Eakin 1971:70, 263). Day (1992:1835) and McKenzie (1990:1287) also believe that monolatrism existed in the Old Testament. Heiser (2008:1) also thinks that all the passages cited in support of absolute monotheism (Ps 82; Dt 4:32, 39; Dt-Is 43:10-12; 45:5-7, 14, 18, 21-22) do not make sense in that reading because of the existence of divine plurality. He further says that the view that monotheism means the absolute denial of the existence of other gods in the Bible is indeed problematic (Heiser 2008:1-30). Throughout the Book of Proverbs, no other god is mentioned as frequently (94 times) as יהוה. The author of the Book consistently uses the personal and proper name of God, יהוה, to designate the God of Israel. The direct mention of the personal name of God (Yahweh) can be found 21 times in Proverbs 1-9; 57 times in Proverbs 10:1-22:16; 5 times in 22:17-24:34; 7 times in chapters 25-29; and 4 times in chapters 30-31 (Lucas 2015:246). It seems to me that the direct references in the Book of Proverbs to the personal name of God so many times show that the author wants to demonstrate or emphasise that Yahweh is the only God of Israel, including of the sages. In the wisdom literature of the surrounding nations such as Egypt and Mesopotamia, by contrast, the deity is usually referred to by a generic term instead of a proper name (Lucas 2015:246). It seems to me that, as far as Proverbs is concerned, there is no other God to be worshipped but Yahweh, as Deuteronomy 6:4 declares. To the author of Proverbs, the name יהוה means monotheism, and that is attested by the frequent use of the name.
It will, therefore, be appropriate to discuss the origin and the real meaning of the name יהוה. Monotheism of God in Proverbs In the Book of Proverbs, God is referred to directly by his proper name יהוה about 94 times (Bostrom 1990:33; Lucas 2015:246). The title אלהים with clear reference to Yahweh is used only twice, in Proverbs 2:5 and 3:4. In passages such as 8:26-31, God is referred to anaphorically as he/him/his in English Bible translations (Lucas 2015:24). On two other occasions the singular form אלוה is used, in 25:2 and 30:9. In Proverbs 23:11 and 24:12, he is referred to as 'their redeemer', and as 'he who weighs the heart' and 'he who keeps watch over your soul', respectively (NRSV). It means that 12% of the Book's 915 verses refer to God (Lucas 2015:246), understood monotheistically. It is very remarkable that almost all the references to God in Proverbs use the tetragrammaton (יהוה), the covenant name of the God of Israel, and אלהים only twice. The concept of God in the Book of Proverbs reflects the Israelite concept of God as monotheistic. Despite so many references to Yahweh in the Book of Proverbs, many Old Testament theology scholars have found it difficult to incorporate wisdom literature into their theologies (Lucas 2015:239). The Book of Proverbs seems to present scholars with some problems as far as Israel is concerned. The book does not express faith like the historical and prophetic writings. Wright (1952:103) and others see Old Testament theology as centred mainly on Yahweh's acts in history and the interpretation of and response to these acts. There is an absence of direct reference to Israel's historical tradition. Eichrodt (1961:67-81) believes that wisdom in Israel is secular; in other words, this wisdom was borrowed from other nations such as Egypt and Mesopotamia. Lucas (2015:240) quotes Preuss as arguing that Proverbs 10-29 is entirely in accord with international wisdom and is alien to the faith of ancient Israel. Wisdom teaching in Proverbs is theological literature because it witnesses to Yahweh and Yahweh's large purposes for the world. What the Book of Proverbs teaches, observes, and reflects upon is 'a world order that is willed, governed, and sustained by Yahweh' (Birch et al. 1999:384). Yahweh presents himself like a father (Pr 3:12). He can be trusted and he is omnipresent (3:5; 16:1, 2). The fear of him is wisdom (15:33a) and understanding (9:10), which is the faith of Israel, later taken over by the Christian Church (Goldingay 1988:443). Moreover, the Old Testament affirms that Yahweh has unrivaled power and wisdom, and that his being is uniquely eternal (Goldingay 1988:443). Origin and meaning of the name יהוה The origin of Israelite monotheism is debatable. The origin of the name in Exodus 3:13-15 is also debatable. Moses asks God for his name and what to tell his people (Ex 3:14). Yahweh has three answers for Moses (Phillips 1998:81-). The ambiguity of these answers to Moses' question is maintained by Sachs (2010:244-246). The debates about the origin of the word יהוה concern whether it is Mosaic or pre-Mosaic. Some scholars are of the opinion that it is pre-Mosaic (Foerster 1965:1065-1066). Others believe that Moses invented the name (Beitzel 1980:5-20; Hamilton 2011:64). Beitzel (1980:5-20) believes that the root הוה or היה is the origin of the word יהוה. According to Adamo (2015a:12) and Davis (2008:442), the words הוה or היה and יהוה are related and mean 'to exist'.
Exodus 3:14 can be interpreted both negatively (God is unknown) and positively (the revelation of God himself) (Finkelstein & Silberman 2001:50). From the above, if the name יהוה has its origin in the word היה or הוה, which God himself spoke to Moses, it is a self-revelation and self-affirmation (Finkelstein & Silberman 2001:50). It shows that the author agrees that the meaning of יהוה is 'I alone am the only God who exists' (Payne 1980:210-212). It is a proclamation of strict monotheism in the Book of Proverbs. Monotheism in African (Yoruba) indigenous tradition Yoruba people of Nigeria The majority of the Yoruba people occupy southwestern Nigeria, with others in Kwara and Kogi states. Others are in the Benin Republic and Sierra Leone. They are regarded as the largest ethnic group in Sub-Saharan Africa (Bascom 1969:1). They are also one of the most interesting and important peoples in West Africa (Bascom 1969:1). According to Booth (1977:179), they are an important group not only in terms of numbers but in historic significance and contemporary influence (Ilega 2000:105-138). According to Bascom, no other ethnic group in Africa has had so much influence as the Yoruba people, who spread their influence to the New World, the Americas (Bascom 1969:1). The Yoruba people are widely known, and are called Aku in Sierra Leone, Nago in Brazil, and Lucumi in Cuba (Kilson & Rotberg 1976:7-8). The multiplicity of divinities or Orishas in the Yoruba pantheon has led many scholars to conclude that African (Yoruba) Traditional Religion is polytheistic rather than monotheistic. There are three categories of scholars with three different opinions about the concept of God in Africa. The first category comprises scholars who think that the idea of God is philosophical and that, therefore, 'primitive' Africans cannot comprehend him (Ludwig 1950:1; Baudin, quoted by Awolalu 1979:vii; Kato 1975:56). Scholars who think that what is called the Supreme Being in Africa is too remote from Africans form the second category. In other words, though Africans believe that he exists as the creator of the world and God of gods or divinities, he is too remote from them. He is in heaven at rest and has no direct dealings with Africans. The third category comprises scholars who think that the idea of a monotheistic God does not exist in Africa: what is called the Supreme Being in Africa is not the same as the universal God of the world (or of Israel). Ludwig (1950:1), an anthropologist and sociologist, represents the first category. Baudin, a French Roman Catholic priest and scholar writing in 1884, can be taken as a representative of the second category (quoted by Awolalu 1979:vii). The third category of scholars can be represented by Kato (1975), who, criticising Idowu, Mbiti, and Awolalu, believes that: [T]he traditional idea of God in Africa is defective, inferior, and unworthy of his Divine Supremacy because it is only the gifted Semites of the first century that had a clear vision of the concept of God. (p. 56) Kato (1975:56, 69, 91-158) accuses Idowu, Awolalu, Mbiti, and others of Hellenising the African God. From the above, one may summarise that these foreign writers did not credit Africans with any kind of knowledge of the true God or Supreme Being (Bewaji 1998:1-17).¹ According to Olupona (2014:19-26), the Yoruba religion combines elements of both monotheism and polytheism. According to Idowu, there is what he calls 'implicit monotheism' or 'diffused monotheism'.
It means the existence of one God who is Supreme, together with the divinities who are his representatives on earth (Idowu 1960:49). The above review shows that most of these scholars do not understand the actual nature of the Yoruba religion and tradition. They do not understand the Yoruba idea of the monotheism of Olodumare.¹ To grasp the real concept of the monotheism of Olodumare, one needs to examine the various names and meanings of Olodumare, the Supreme Being (Johnson & Oyinade 2004:3). [Footnote 1: This is not to deny that a few Africans deny Africans their true God; Kato, as mentioned above, is one of them.] Translation of the Book of Proverbs into the Yoruba Language An unfortunate thing is that even though the Book of Proverbs uses the proper name of God, יהוה, profusely (94 times) throughout the Book, except on a few occasions (twice only) when the generic name אלהים is used to refer to יהוה (2:5; 3:4), the Yoruba translators did not translate this unique name of God by its exact equivalent in the Yoruba language, Olodumare. Instead, יהוה was translated as Oluwa, which means 'Lord', throughout the Book. Perhaps the translators were not very proficient in the Hebrew language, or they were misled by the English translations (KJV, RSV, NIV, and others) which were probably the basis of their translation. However, no one seems to know why the translators did this, the more so since they could translate the Hebrew word אלהים appropriately as Olorun, which is also a generic name in the Yoruba language. For example, in Proverbs 2:5 in the Yoruba Bible, יהוה was translated Oluwa, while אלהים is translated Olorun in Bibeli Mimo Atoka. Proverbs 3:4 also reads Bee ni iwo o ri ojurere, ati ona rere loju Olorun ati eniyan (Bibeli Mimo Atoka 1980) ('So you will find favor and repute in the sight of God and of people', NRSV). Since the Yoruba language has an equivalent of God's proper name יהוה, the translation should be Olodumare and not Oluwa. I would like to discuss why Oluwa is not the appropriate translation of יהוה. The name Olodumare belongs to the Supreme Being alone and is never given to any other deity or person among the Yoruba people. His uniqueness is not contestable. But Oluwa means oga (master) and can be applied to any person who is superior to another. The essence of divinity is absent from the word Oluwa; the very uniqueness of the proper name Olodumare is also absent. The meaning and attributes of Olodumare make it the more appropriate translation of יהוה in the Yoruba Bible. Although there are many names for Olodumare, two names stand out: Olodumare and Olorun (cf. Yahweh, Elohim). According to Idowu, the name Olodumare has three parts: the first is Ol, the second is Odu, and the third is Mare. The Ol is a prefix that means 'ownership'; Odu means 'largeness', 'very full', 'extensive', and 'superlative greatness' in size and quality; Mare means 'does not change or move', or 'stable' (Adamo 2017:13-16; Idowu 1960:34). The names below are very important because they represent the totality of what the Supreme Being, Olodumare, is: • Olorun: The owner of heaven. • Eleda: The one who creates. • Atererekariaiye: He spreads and covers the entire universe. • Olojo Oni: The owner and the controller of the day. A closer look at some of the attributes of Olodumare demonstrates his monotheistic nature (Awolalu 1979:vii, 12-18; Idowu 1960:38-47; Mbiti 1979:31-41):
All power belongs to him (omnipotent); all knowledge belongs to him (omniscient). He is transcendent and immanent. That is the reason why the Yoruba people gave him the names Olorun, owner of heaven, and Atererekariaiye, the one who spreads over all the earth. An examination of other tribes in Africa also shows that the oneness of God is affirmed not only among the Yoruba people of Nigeria but among others throughout Africa. African people refer to God as 'One and separate Deity' because he has both heavenly and earthly aspects (Adamo 2014:47-62; Mbiti 1979:30). The Ndebele believe in God the Father, the Mother and the Son, and yet One (Mbiti 1979:30). The Gikuyu try to emphasise the unity and oneness of God by saying that 'God is all alone', without parents or companions (Adamo 2014:47-62). To the Lugbara of Congo and Uganda, God is not only transcendent and immanent; he is of one essence, and that is why they refer to him as 'One but many' (Adamo 2014:47-62). The Shilluk of Sudan believe that God is One Spirit but also of a plurality (Mbiti 1979:30). The Vugusu people of Kenya also believe in a plurality of God headed by the Supreme Being (Adamo 2014:47-62; Mbiti 1979:29). Conclusion This article discusses the meaning of the monotheism of God in the Book of Proverbs and in Yoruba Indigenous Religion. It argues that Proverbs proclaims monotheism by its very frequent use of the personal divine name יהוה. In other words, the Book is monotheistic in nature and theology. To support the monotheistic nature of Proverbs, the origin and meaning of the word יהוה are critically discussed. In the African (Yoruba) context, the article has discussed how the translation of the Book of Proverbs fails to represent the Hebrew original and the monotheistic nature of the Book by rendering יהוה as Oluwa in Yoruba, when there exists a most appropriate Yoruba equivalent for יהוה. While one appreciates the good effort of the translators of the Book of Proverbs into the Yoruba language, the translation is not acceptable. Therefore, a re-translation of the Book of Proverbs into the Yoruba language is needed. Perhaps the entire Yoruba Bible needs re-translation, because the rendering of יהוה as Oluwa occurs throughout the Yoruba Bible. Such a translation of יהוה as Oluwa obscures the strict monotheistic nature of God in the African (Yoruba) context. It should be emphasised that it will be difficult to find an indigenous African person, particularly among the Yoruba people of Nigeria, who is an atheist. If such a person exists at all, he or she must have been exposed to non-African influence (Johnson & Oyinade 2004:1-8). The Supreme Being is responsible for creating and directing the entire creation. He alone is supreme. His proper name is Olodumare, and he is acknowledged and worshipped by all Yoruba divinities. He is not one among many, and his status of supremacy is not disputed among the Yoruba and the divinities. The Yoruba owe him their ultimate, first, and last daily allegiance (Awolalu 1979:53; Bascom 1969:53). As Meiring (2007:733, 744, 748) has said, Western Christianity has a lot to learn from African Traditional Religion, especially its emphasis on the sense of community.
5,517.4
2021-08-18T00:00:00.000
[ "Philosophy", "History" ]
Impedance Based Temperature Estimation of Lithium Ion Cells Using Artificial Neural Networks: Tracking the cell temperature is critical for battery safety and cell durability. It is not feasible to equip every cell with a temperature sensor in large battery systems such as those in electric vehicles. Apart from this, temperature sensors are usually mounted on the cell surface and do not detect the core temperature, which can mean detecting an offset due to the temperature gradient. Many sensorless methods require great computational effort for solving partial differential equations or require error-prone parameterization. This paper presents a sensorless temperature estimation method for lithium ion cells using data from electrochemical impedance spectroscopy in combination with artificial neural networks (ANNs). By training an ANN with data from 28 cells and estimating the cell temperatures of eight more cells of the same cell type, the neural network (a simple feed-forward ANN with only one hidden layer) was able to achieve an estimation accuracy of ΔT = 1 K (10 °C < T < 60 °C) with low computational effort. The temperature estimations were investigated for different cell types at various states of charge (SoCs) with different superimposed direct currents. Our method is easy to use and can be completely automated, since there is no significant offset in the monitored temperature. In addition, the prospect of using the above-mentioned approach to estimate additional battery states such as SoC and state of health (SoH) is discussed. Introduction The performance of lithium ion batteries (LIBs) is strongly dependent on the cell temperature, particularly with regard to battery aging and safety issues. At low temperatures there is a risk of lithium plating due to reduced reaction kinetics, which results in decreased lithium availability. However, operating LIBs at a high temperature can cause a rise in undesirable side reactions that cause rapid degradation, including capacity and power loss [1]. Furthermore, there is a risk of material decomposition, which can trigger a so-called thermal runaway and may lead to self-ignition and even an explosion [2]. High temperature issues are caused by cell-internal heat generation, while low temperature issues arise from environmental temperatures. Various temperature indication methods exist. Raijmakers et al. provided a broad overview of various temperature indication methods for LIBs [3]. The most common approach uses a conventional temperature sensor, such as a thermocouple or thermistor, placed on the housing of the cell. However, the core temperature can differ widely from the surface temperature in cases of heavy loading, and the temperature rise can only be detected with a time-shift, or it requires extensive thermal models [4][5][6]. Moreover, the accuracy varies with the thermal contact and the position of the temperature sensor. In addition, in most cases not every cell is equipped with a sensor, and therefore the pack design needs to be considered to detect feasible hot spots [7]. Thus, a thermal runaway can only be detected stochastically [2]. By placing temperature sensors internally, these problems can be avoided. This might, however, lead to an increase in costs, more complexity for manufacturers, and possible negative effects on battery life [1,8].
Impedance based methods have gained substantial interest because of their ability to measure the average internal temperature without dedicated internal or external sensor hardware [9]. Therefore, the method is also known as sensorless temperature measurement. Figure 1 illustrates the strong dependency of the battery's characteristic impedance response on temperature: with increasing temperature, a significant reduction in the impedance response can be observed. Such measurements were conducted by the authors; EIS measurements were performed on Samsung INR18650-15L cells at different temperatures. Details of the experimental process are described in Section 2.1. Other battery states, such as the state of health with respect to nominal capacity (C/C_N, SoHC) and the state of charge (SoC), can also be detected in the impedance; therefore, those other states can be crucial input variables for battery management systems [3,10]. In Figure 2 the progression of the impedance over different cycles for three different cells is presented, pointing out the continuous increase in impedance over the lifecycle. Details of the experimental process are described in Section 2.1. However, it is important to distinguish the various influence factors from each other and find the optimal basis for predictions [11][12][13][14]. Srinivasan et al. were the first to find the relation between the impedance at a specific frequency and the temperature; more precisely, the phase shift at a frequency that is associated with the solid electrolyte interface [12]. In contrast, Schmidt et al. made use of the real part of the impedance measured at higher frequencies, because its time constant has a significantly lower level of correlation with the SoC, which enables improved temperature estimation at unknown SoCs [11]. Similarly, Richardson et al. analyzed the influence of internal thermal gradients on the impedance. It can be shown that this technique estimates the volume-average temperature and is therefore able to detect internal hotspots without any time delays [8]. Most publications in this field either show a correlation between a measurable value (e.g., impedance) and the state of interest (e.g., SoC or temperature), or present state estimation results for a single cell. The state estimation of a number of cells of the same cell type is more complicated due to the existence of variations among individual cells. In this case it is crucial to select the required input parameters. Several studies have addressed the question of finding the optimum input variable for determining the temperature. Beelen et al. compared some of these approaches and performed a sensitivity analysis to optimize the prediction accuracy [15]. In the case of an unknown SoC, the achieved average bias was ±0.4 K with an average standard deviation of ±0.7 K. The study highlights the importance of selecting the appropriate input parameters for temperature determination. This study investigates an approach using ANNs, which are promising for handling multidimensional feature problems. The procedure can be automated and can easily be transferred to other cell chemistries. Only a few studies are available which present data based temperature estimation methods using artificial neural networks. Feng et al. combined the advantages of physical models with artificial neural networks to enhance the performance of SoC and temperature estimation [16].
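To make the "impedance at a specific frequency" idea from the studies above concrete, here is a minimal sketch of extracting such a feature from a measured spectrum; NumPy is assumed, and the choice of 1 kHz as the evaluation frequency is purely illustrative, not taken from the cited works:

```python
import numpy as np

def impedance_feature(freqs_hz, z_complex, f_eval=1000.0):
    """Interpolate the real part of a measured EIS spectrum at one frequency.

    freqs_hz:  measured frequencies in Hz, in ascending order
    z_complex: complex impedance values at those frequencies
    f_eval:    evaluation frequency in Hz (illustrative choice)
    """
    return np.interp(f_eval, freqs_hz, z_complex.real)
```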
estimated cell temperature based on a nonlinear autoregressive exogenous artificial neural network and time series data, namely, current and ambient temperature, for a battery container [17]. In addition, a number of studies present ANN based methods for estimating the SoC and SoH [18][19][20][21]. Khumprom et al. confirmed the ability of ANNs to approximate a nonlinear system by comparing a deep ANN against other machine learning algorithms for SoH prediction; the former could match or outperform the other algorithms [22]. Furthermore, some researchers have used impedance data in combination with ANNs. Messing et al. used impedance data for equivalent circuit parameterization and as input data for the ANN [23]. However, most approaches require a lot of computational effort due to the need to solve partial differential equations and to fit physical models or time series. For this study, we chose an approach using impedance data derived from directly measurable quantities (voltage, current, time) as input data, linking impedance based temperature estimation with ANNs. The implied advantage is that error-prone parameterization can be dispensed with. Since the voltage of a lithium ion cell has to be monitored constantly anyway, it should be possible to perform a four-point measurement on each cell by using an AC current source in the battery system to create the EIS spectra. Our main focus lies in demonstrating the technical feasibility of this concept. In contrast to many other publications, the state estimation was not performed for a single cell but for a number of cells. We show that it is possible to train an ANN with data from a number of cells to estimate the temperatures of other cells of the same cell type. An advantage of the EIS-ANN method is that once the ANN is trained, the temperature estimation is completed within milliseconds, since there is no need to solve partial differential equations. In addition, a perspective is given on the possibility of utilizing this method of impedance based state estimation using ANNs to estimate the SoC and the SoHC. In Section 2, we describe the data acquisition and the architecture of the ANN. In Section 3 we present the results of the temperature, SoC and SoHC estimations and discuss the limitations of the ANN method. Materials and Methods Lithium ion cells were set to well defined states in which the SoHC, SoC and temperature were varied. For each state, an electrochemical impedance spectroscopy (EIS) measurement was performed, so that every EIS spectrum is related to a defined state. To simulate dynamic working conditions, the EIS measurements were performed as soon as the SoC was adjusted, without relaxation. Since it was not possible to perform EIS measurements for every possible combination of SoHC, SoC and temperature, different series of measurements were performed; each series mainly focused on one state, e.g., temperature. These series of measurements are described in the following subsections. The EIS datasets were separated into a training dataset and a test dataset. The training dataset was used to train the ANN: in the first step, the ANN was trained using the EIS spectra as input and the related cell states as target values. After the training process, the ANN was evaluated with the test dataset, for which it had to estimate the related state by itself; finally, the estimated states were compared with the measured states to evaluate the estimation quality.
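As a minimal sketch of the workflow just described, the following snippet trains a one-hidden-layer feed forward network on EIS features and evaluates it on held-out data. The arrays, the random split and the use of scikit-learn are illustrative assumptions; the authors split by whole cells and used the MATLAB NN toolbox.

```python
# Minimal sketch of the train/test workflow, assuming each EIS spectrum has
# been flattened into one row of X (e.g., 61 real-part values) and y holds the
# known cell temperatures. Placeholder data; the paper splits by whole cells.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 61))           # placeholder EIS feature matrix
y = rng.uniform(10.0, 60.0, size=1000)    # placeholder temperatures in degC

# Separate the test data before training so they remain unseen by the ANN.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(11,), activation="tanh", max_iter=2000)
ann.fit(X_train, y_train)

rmse = np.sqrt(np.mean((ann.predict(X_test) - y_test) ** 2))
print(f"RMSE on held-out data: {rmse:.2f} K")
```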
Electrochemical Impedance Spectroscopy EIS is a non-destructive technique for characterizing electrochemical systems by applying a sinusoidal excitation and measuring the corresponding response, as shown in Figure 3. The impedance was calculated from the complex voltage, the complex current and the phase shift φ between them, Z(ω) = (Û/Î)·e^(jφ) = R + jX, from which the real part R and the imaginary part X were obtained. Applying AC currents at different frequencies (usually between 10 kHz and 10 mHz) yields an electrochemical impedance spectrum. The results of an EIS measurement are usually plotted in a Nyquist plot, as shown in Figure 4. In order to test the performance under different loading conditions and to guarantee charge conservation, the galvanostatic mode was chosen. All EIS measurements were performed using a Gamry Reference 3000AE potentiostat multiplexed on a Basytec CTS. Each internal process can be allocated a characteristic time constant; therefore, the operating frequency range was varied between 1 Hz and 10 kHz with 15 frequencies per decade (61 frequencies in total). Moreover, the lower end of the frequency band is limited by the change of charge that occurs while a superimposed current is running. As a compromise, the AC current was set to a C-rate of 1/10 C for all measurements, in order to guarantee linearity while still achieving a good signal-to-noise ratio. The temperature was controlled by a Memmert ICP 110 thermal chamber (ΔT = ±0.1 K). The SoC adjustment was performed by charging/discharging with constant current/constant voltage (CC/CV) (current limit: C/30); the selected voltage corresponds to the open circuit potential. For measurements with a superimposed DC current during the EIS measurement, the SoC was first set 5% higher than the SoC of interest by CC/CV. From there, the cell was discharged by CC with the same current value that was used for the superimposed EIS measurement. After reaching the target SoC, the EIS measurement was performed without relaxation, to simulate dynamic working conditions. To make sure that the presented results can be generalized, cells of different types, such as cylindrical high power and prismatic high energy cells, were investigated. To evaluate the impact of the cell-to-cell variance, at least nine cells per cell type were measured using the same load. The investigated cell types are shown in Table 1. Investigations were performed on Samsung INR18650-15L1 1500 mAh lithium ion cells; for this purpose, 36 cells at different SoHCs (SoHC = state of health regarding the ratio of actual capacity to nominal capacity, C/C_N) were used. The cells were aged by cycling (at T = 40 °C, constant current charge at 2C, discharge at 3C) using a PEC ACT0550. More detailed information about the SoHC of all 36 cells is given in Table 2. Table 2. State of health with respect to nominal capacity (C/C_N). Thirty-six cells were investigated. For each SoHC range, the data of 2 cells were defined as test data; the data of the other cells were used as the training dataset for the artificial neural network (ANN). This guaranteed that each SoHC range would be taken into account. The labels Hx refer to the investigated cells. For each temperature and SoC state, EIS measurements were performed with 8 different superimposed DC currents (C-rate: 0C, −1/4C, −1/2C, −3/4C, −1C, −3/2C, −7/8C, +1C). For the training and testing process, a multi-dimensional dataset was created by performing more than 20,000 EIS measurements at different states.
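For concreteness, the frequency grid and the impedance decomposition described above can be sketched as follows; the phasor values are illustrative placeholders, not measured data.

```python
# 15 logarithmically spaced frequencies per decade from 1 Hz to 10 kHz gives
# 4 decades * 15 + 1 = 61 points, matching the setup described above.
import numpy as np

freqs = np.logspace(0, 4, num=4 * 15 + 1)    # 1 Hz ... 10 kHz, 61 frequencies

# Z = (U/I) * e^{j*phi}: complex impedance from the voltage and current
# amplitudes and the phase shift phi between them (illustrative values).
u_amp, i_amp, phi = 0.012, 0.15, -0.35
Z = (u_amp / i_amp) * np.exp(1j * phi)
R, X = Z.real, Z.imag                         # real part R, imaginary part X
print(f"|Z| = {abs(Z)*1e3:.2f} mOhm, R = {R*1e3:.2f} mOhm, X = {X*1e3:.2f} mOhm")
```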
A relaxation time of 1 h was used after changing the temperature to ensure that the cells were at the same temperature as the thermal chamber. To simulate a real application, there was no relaxation time between charging/discharging and the EIS measurements. A shorter series of measurements for the temperature estimation was performed on the Panasonic NCA 103450 (9 cells, SoHC = 100%, C_N = 2350 mAh, prismatic high energy) and on the Sony US18650VTC6 (9 cells, SoHC = 100%, C_N = 3000 mAh, cylindrical high energy). The temperature was varied from 10 to 50 °C in 5 K steps. For each temperature, the SoC was varied from 10% to 90% in steps of 20%. For each temperature setting and SoC state, EIS measurements were performed with 4 different superimposed DC currents (C-rate: 0C, −1/4C, −1/2C, −1C). State of Charge Estimation The SoC estimation was performed for all cell types shown in Table 1; the SoHC of every cell was nearly 100%. EIS measurements were performed at 4 different temperatures (20, 25, 30, and 35 °C). To take the charging/discharging history (hysteresis) of the cells into account, the cells were discharged in steps from 95% to 5% (in total 36 SoCs: 95%, 92%, 90%, 87%, 85%, 82%, . . . , 5%) and then went through the same SoCs for charging (again 36 SoCs). During the EIS measurement, no DC was applied, only an AC of C/10. State of Health Estimation The SoHC estimation with respect to total capacity (SoHC = C/C_N) was performed only for the Samsung INR18650-15L1 cells. To generate the training data, three lithium ion cells were aged by cycling (at T = 40 °C, constant current: charge at 2C, discharge at 3C, 1400 cycles) using a Basytec CTS. After every hundredth cycle, the capacity was determined (charge: constant current C/10, constant voltage C/20; discharge: constant current C/10) and an EIS measurement was performed (at 25 °C, at 70% SoC, 0C DC current). The data of the investigated cells are described in Section 2.1.1 (at 25 °C, at 70% SoC, 0C DC current). Artificial Neural Network Architecture The ANN used in this study is based on the MATLAB NN toolbox. The architecture is a feed forward network with a single output depending on the state of interest (temperature, SoHC, SoC), as shown in Figure 6. The ANN is subdivided into three main layers: input layer, hidden layer and output layer. Each neuron is connected to the neurons of the following layer in the forward direction. In the context of this work, the number of hidden layers was limited to a single layer; when more than one hidden layer is used, the network is a deep neural network. Within the hidden layer, the number of neurons was varied: in order to determine the optimal number of neurons, a grid search approach was used, in which the number of neurons was increased until the prediction accuracy no longer improved without restricting the generalization ability. The number of neurons within the hidden layer is given in the discussion for each investigated case separately. In the hidden layer, a hyperbolic tangent sigmoid transfer function is used, which is given by f(x) = 2/(1 + e^(−2x)) − 1. As input parameters, the real or imaginary part of the EIS measurements over the frequency domain was used. The training dataset was selected in such a way that the generalization across the battery cell variance could be verified and validated.
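The grid search over the hidden-layer size can be sketched as below; scikit-learn and cross-validation stand in for the MATLAB toolbox and the exact stopping rule used by the authors.

```python
# Sketch of the grid search for the number of hidden neurons: evaluate a range
# of layer sizes by cross-validation and keep the size with the lowest RMSE.
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

def best_hidden_size(X, y, candidates=range(2, 21)):
    scores = {}
    for n in candidates:
        ann = MLPRegressor(hidden_layer_sizes=(n,), activation="tanh",
                           max_iter=2000, random_state=0)
        # Cross-validation guards against overfitting with large layers.
        scores[n] = cross_val_score(ann, X, y, cv=5,
                                    scoring="neg_root_mean_squared_error").mean()
    return max(scores, key=scores.get)   # highest (least negative) score wins
```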
The bias values and weights were updated according to two different optimization strategies: Bayesian regularization backpropagation (BRBP) and Levenberg-Marquardt backpropagation (LMBP). The prediction accuracy was assessed using the root mean square error and the coefficient of determination. The code was developed using the MATLAB ANN toolbox. Results The following discussion presents the results of the temperature, SoC and SoHC estimations. Note that the test dataset was separated from the training dataset before training the neural network, in order to ensure that the test data were completely unknown to the ANN. As shown in Table 2, for each investigated SoHC two cells were selected for testing. By including various DC currents, SoHCs and SoCs, the realistic performance of a cell was simulated. The overall root mean square error (RMSE) was less than 1 K; the maximum RMSE for a single temperature estimation was about 5 K. Only the real part of the impedance was used as an input parameter, since the influence of the temperature is much greater on the real part than on the imaginary part, as shown in Figure 1. A combination of real and imaginary parts, as well as the imaginary part alone, was also investigated as input; however, as expected, the temperature estimation became less accurate when using the imaginary part. For the presented results, the actual SoHC, the SoC and the DC current applied during the EIS measurement were used as additional input parameters. The ANN was also able to estimate the cell temperature without these additional input parameters; however, the time required to train the ANN increased significantly, and the overall RMSE increased to 1.5 K. During the measurements, no temperature sensors were attached to the cells; the temperature was taken directly from the thermal chamber. The self heating effect in the Samsung 18650-15L cells caused by the applied DC current was not taken into account for the state estimation. Measurements with temperature sensors placed at the cell center showed that the temperature difference between the center and the surface was less than 2 K; for our proof of concept study, this discrepancy was assumed to be negligible, since every measurement was performed in the same way. The results show that in general the ANN was able to estimate the temperature from a corresponding EIS spectrum. Figure 8 shows the evolution of the ANN during training: after 230 epochs, the RMSE reached its lowest value and stabilized. The RMSE evolution was used to find a suitable configuration of the ANN. For the presented results, a Bayesian regularization backpropagation neural network with one hidden layer consisting of 11 neurons was used; using more than 11 neurons in the hidden layer tended to overfit. Additional investigations were performed on prismatic Panasonic NCA 103450 high energy cells and cylindrical Sony US18650VTC6 high energy cells to show that the presented method is independent of cell geometry and cell type (high energy/high power). Figure 9a presents the results for the Panasonic NCA 103450 cells with an overall RMSE of 0.7 K, and Figure 9b shows the results for the Sony VTC6 cells with an overall RMSE of 0.5 K. Each figure shows 360 temperature estimations at different SoCs and different superimposed DC currents. For both cell types, a Bayesian regularization backpropagation neural network with one hidden layer consisting of five neurons was used.
As input parameters, both the real and the imaginary parts of the impedance were fed to the ANN; the information about the SoHC, the applied DC current and the SoC was withheld. Temperature Estimation The SoHC of every Sony and Panasonic cell was about 100%; the temperature estimations for different SoHCs were performed only for the Samsung cells. Nevertheless, we showed that temperature estimation by an ANN using EIS data can be realized for different cell types. However, it is necessary to create an individual ANN for each cell type, with individual hyperparameters. To fully generalize this method, more cells with different aging profiles and higher DC currents need to be investigated. Due to the self heating of the cells at high currents, temperature sensors should then be installed within the cells to measure the exact temperature in real time. The superimposed DC current affects the EIS spectrum, especially at low temperatures; the presented ANN method is therefore a powerful tool especially at high temperatures, where the influence of the DC current is reduced, which makes it well suited for real-life applications. In comparison to other sensorless temperature estimation methods, the main advantages are that there is no need to store time series data and that the computational effort is reduced, since no complicated equations, such as partial differential equations, have to be solved. The method is suitable for different cell types, and it also takes a superimposed DC current and the actual SoHC into account. The use of EIS data greatly improved the estimation accuracy: various internal processes in a lithium ion cell show different temperature dependencies, and these processes can be allocated to different frequency domains measured via EIS [24]. For some electrochemical processes in lithium ion cells, the influence of the temperature predominates over the influence of the cell-to-cell variance. The estimation accuracy can be further improved by adding impedance data from various frequency domains in addition to the cell resistance. State of Charge Estimation The state estimation methodology using EIS data and an ANN was also investigated for the estimation of the SoC. Figure 10 shows the results for the Panasonic NCA 103450 cells. Since the influence of the SoC on the impedance varies, it was necessary to split the SoC range into area 1 from 5% to 45%, as shown in Figure 10a, for which the RMSE was 1.9%, and area 2 from 48% to 95%, as shown in Figure 10b, for which the RMSE was 2.2%. In both cases, a Bayesian regularization backpropagation neural network with one hidden layer consisting of four neurons was used. For the lower SoCs, only the imaginary part was used as input parameter; at higher SoCs, both the real and the imaginary parts were used. Using only one network for the whole SoC range increased the RMSE significantly. The RMSE shows higher deviations in the mid SoC range (30-70%), where the EIS data mostly overlap, which makes it harder for the ANN to distinguish SoCs. For the Sony VTC6 cells it was also necessary to split the SoC range into two areas: for the lower SoCs from 5% to 35% the RMSE was 3.1%, as shown in Figure 11a, and Figure 11b presents the results from 40% to 95% with an overall RMSE of 2%. In both cases a Levenberg-Marquardt backpropagation artificial neural network with one hidden layer and five neurons was used, and only the real part of the impedance was used as an input parameter.
The RMSE increased for the estimation of lower SoCs because the impedance varies only slightly at low SoC. The SoC estimation for the Samsung 15L cells is not shown, since the best overall RMSE was about 8%, with a maximum estimation discrepancy of up to 20%, which is not sufficient for any application. This was caused by the influence of the SoC on the impedance spectra being of the same order of magnitude as the influence of the cell-to-cell variance. The presented results for the SoC estimation clearly show that it is necessary to create an individual ANN for each cell type. If there is a clear dependency between the SoC and the EIS spectra and only little cell-to-cell variance, the signal-to-noise ratio is large enough to allow the state estimation, as shown for the Panasonic NCA 103450 cells. In general, SoC estimation is much more complex than temperature estimation, because the behavior of the EIS spectra depends less uniformly on the state: as shown in Figure 1, an increase in temperature always causes a decrease in impedance, whereas the SoC dependency of the EIS spectra varies among different cell types. For some cell types, an increase in SoC causes a decrease in the impedance at low SoC and an increase at high SoC; in such a case, several ANNs for different SoC ranges are required. The advantages of this method compared to other methods are, again, the small computational effort, since no partial differential equations need to be solved and no error-prone parameterization needs to be performed. Furthermore, there is no need to store and handle time series data. The utilizable capacity of a cell depends strongly on the cell temperature; therefore, the SoC varies with the temperature. The SoC was calculated by the ANN using an EIS spectrum that is characteristic of the measured state. State of Health Estimation Another interesting application is the estimation of the state of health with respect to the capacity (SoHC). For this purpose, three cells were aged and characterized (EIS and capacity) and used to train the ANN. The cells which were aged and used for the investigation of the temperature estimation were characterized afterwards and used to test the SoHC estimation. Figure 12 shows the estimated and the measured cell SoHC; the RMSE for every SoHC estimation was below 2%. A Bayesian regularization backpropagation neural network with one hidden layer consisting of four neurons was used, and both the real and the imaginary part of the impedance were used as input parameters. We were unable to estimate the SoHC at the very beginning of life, because the shape and the value of the EIS spectrum vary strongly during the first cycles: the impedance spectrum first decreases and then increases, and only after a number of cycles does the impedance growth stabilize, as shown in Figure 2. Therefore, only the data of cells which had completed 100 or more cycles were used. Since the cells that were used to collect the training data were characterized only up to 1400 cycles, data from cells with more cycles could not be used for testing, because the ANN is not able to extrapolate. The aging profiles of the cells used for the training data and of the cells used for the test data were partially comparable; after aging, the testing cells experienced further aging during the measurements at different temperatures and currents. Nevertheless, to generalize this system it will be necessary to investigate the presented state estimation method with data from different aging profiles.
The advantage of the SoHC estimation method compared to other algorithms is the ability to estimate the total utilizable capacity from an EIS spectrum within a few seconds. Unlike the current pulse method, where only one data point is determined, the impedance spectrum offers many data points belonging to a single state, which increases the estimation accuracy and reduces other influences, such as that of contact resistances. Further Discussion We have developed a sensorless ANN method based on EIS data to estimate the temperature of a lithium ion cell. The temperature estimation was performed at different SoCs, SoHCs and discharge currents. Furthermore, we have given a perspective on the possibility of using the presented method to estimate the SoC and the SoHC. One of the biggest advantages of the presented method is that a well-known ANN with a very simple architecture, requiring little computational effort, is able to estimate the cell temperature successfully; this makes it even more interesting and practical for industrial applications. The ANN was trained within minutes for each system; the time needed to train the ANN depends on the availability of supplementary input data, such as the SoC, SoH or applied DC current. Once the ANN is trained, the calculations for the state estimation are performed within milliseconds. The EIS measurement from 1 Hz to 10 kHz was performed in less than 1 min; however, we predict that the measurement time could be reduced to milliseconds by selecting suitable frequencies. The calculations for the state estimation by the ANN were performed in less than a second. Further, we showed that the presented model is independent of the cell geometry by investigating cylindrical and prismatic cells. The focus of this study lay on 18650 lithium ion cells; to generalize the presented method, we will investigate larger cells with higher capacities in future work. Furthermore, we will use temperature sensors in the cell core for data acquisition and investigate connected cells in battery modules. Due to its data-driven nature, we suggest that the model can be adapted to any other cell chemistry, as long as there is a strong correlation between the EIS spectrum and the investigated state; we were able to achieve reasonable results with LFP cells. As our data-driven model is easily applicable to other systems, it is attractive for practical applications, since cell manufacturers usually do not reveal the exact cell chemistry. Conclusions In this work we presented a sensorless method for predicting the temperatures of lithium ion cells that uses ANNs with electrochemical impedance spectra as input data. Investigations were performed on Samsung INR18650-15L1, Sony US18650VTC6 and Panasonic NCA 103450 cells. To simulate real applications, the SoC was varied, a superimposed DC current was applied during the EIS measurement, and for every cell type at least nine cells were investigated to include the cell-to-cell variance. In addition, Samsung 15L cells with different SoHCs were investigated. The RMSE for all temperature estimations was around 1 K, which makes the presented method attractive for practical applications. SoC estimation was investigated likewise: for the Sony VTC6 and the Panasonic NCA 103450 cells, the RMSE was about 3%, with different cell temperatures during the EIS measurement taken into account. The SoC estimation for the Samsung 15L cells was not successful, with an RMSE above 8%.
In this case the influence of the SoC on the impedance spectrum was of the same order of magnitude as the influence of the cell-to-cell variance. For the SoC estimation, it is therefore necessary to investigate the applicability for each cell system individually. Finally, the ANN was applied to estimate the SoHCs of Samsung 15L cells. Since the estimation errors for all cells were below 2%, this seems to be a feasible use of the method as well; however, further investigations on cells with different aging profiles are necessary for a definitive evaluation of its suitability. The advantages compared to other temperature estimation methods are that there is no need to fit a battery model to the data and that no differential equation needs to be solved. Furthermore, the ANN needs only a single EIS spectrum to estimate the cell temperature, so there is no need to handle time series data. The presented prediction method appears to be a promising way to estimate the inner cell temperature with high accuracy in a short time, as little effort regarding measurements and calculations is required. Further work is suggested to investigate the ability of the neural network to estimate the temperatures of cells within electrical circuits. Furthermore, a reduction in the number of input parameters will be investigated to improve this method by reducing and simplifying the computational effort and the measurement time.
7,354.2
2021-12-12T00:00:00.000
[ "Engineering", "Materials Science", "Computer Science" ]
YOLOv7-UAV: An Unmanned Aerial Vehicle Image Object Detection Algorithm Based on Improved YOLOv7: Detecting small objects in aerial images captured by unmanned aerial vehicles (UAVs) is challenging due to their complex backgrounds and the presence of densely arranged yet sparsely distributed small targets. In this paper, we propose a real-time small object detection algorithm called YOLOv7-UAV, which is specifically designed for UAV-captured aerial images. Our approach builds upon the YOLOv7 algorithm and introduces several improvements: (i) removal of the second downsampling layer and the deepest detection head to reduce the model's receptive field and preserve fine-grained feature information; (ii) introduction of the DpSPPF module, a spatial pyramid network that utilizes concatenated small-sized max-pooling layers and depth-wise separable convolutions to extract feature information across different scales more effectively; (iii) optimization of the K-means algorithm, leading to the development of the binary K-means anchor generation algorithm for anchor allocation; and (iv) utilization of the weighted normalized Gaussian Wasserstein distance (nwd) and intersection over union (IoU) as indicators for positive and negative sample assignment. The experimental results demonstrate that YOLOv7-UAV achieves a real-time detection speed that surpasses YOLOv7 by at least 27% while significantly reducing the number of parameters and GFLOPs to 8.3% and 73.3% of YOLOv7, respectively. Additionally, YOLOv7-UAV outperforms YOLOv7 with improvements in the mean average precision (map (0.5:0.95)) of 2.89% and 4.30% on the VisDrone2019 and TinyPerson datasets, respectively. Introduction With the decrease in the cost of drones, the civilian drone market has entered a period of rapid development. At the same time, target detection technology based on deep learning has made remarkable progress in recent years, which has brought drones and target detection technology closer together. The integration of the two can play an important role in many fields, such as crop detection [1], intelligent transportation [2], and search and rescue [3]. However, most target detection models are designed based on natural scene image datasets, and there are significant differences between natural scene images and drone aerial images. This makes it a meaningful and challenging task to design a target detection model specifically suited to the aerial drone perspective.
In practical application scenarios, real-time target detection on the aerial video stream of an unmanned aerial vehicle (UAV) places a high demand on the detection speed of the algorithm model. Furthermore, unlike in natural scene images, due to the high altitude of UAV flights and the existence of a large number of small targets in aerial images, there are fewer extractable features for these targets. In addition, the actual flight altitude of UAVs often varies greatly, leading to drastic changes in object proportions and a low detection accuracy. Finally, complex scenes are often encountered during actual flight shooting, and there may be a large amount of occlusion between densely packed small targets, making them easily obscured by other objects or the background. In general, generic feature extractors [4][5][6] downsample the feature maps to reduce spatial redundancy and noise while learning high-dimensional features; however, this processing inevitably erodes the representation of small objects. Additionally, in real-world scenarios, the background exhibits diversity and complexity, characterized by various textures and colors; consequently, small objects tend to be easily confounded with these background elements, which increases the difficulty of their detection. In summary, there is a need to design a real-time target detection model for UAV aerial photography that is suitable for dense small target scenarios in order to meet practical application requirements. Object detection algorithms based on neural networks can generally be divided into two categories: two-stage detectors and one-stage detectors. Two-stage detection methods [7][8][9] first use region proposal networks (RPNs) to extract object regions, and then detection heads use the region features as input for further classification and localization. In contrast, one-stage methods directly generate anchor priors on the feature map and then predict classification scores and coordinates. One-stage detectors have a higher computational efficiency but often lag behind in accuracy. In recent years, the YOLO series of detection methods has been widely used for object detection in UAV aerial images due to their fast inference speeds and good detection accuracies. YOLOv1 [10] was the first YOLO algorithm; subsequent one-stage detection algorithms based on its improvements mainly include YOLOv2 [11], YOLOv3 [12], YOLOv4 [13], YOLOv5 [14], YOLOx [15], YOLOv6 [16], YOLOv7 [17], and YOLOv8 [18]. YOLO algorithms directly regress the coordinates and categories of objects, and this end-to-end detection approach significantly improves the detection speed without sacrificing much accuracy, which meets the basic requirements of real-time object detection for unmanned systems. Previous improvement methods for target detection in UAV aerial images can be categorized into three types: (i) utilizing more shallow feature information, such as adding small target detection layers [19]; (ii) enhancing the feature extraction capability of the target detection network, such as improving the neck network [20] or introducing attention mechanisms [21]; and (iii) increasing the input feature information, such as generating higher resolution images [22], image copying [23], and image cropping [24,25].
Taking the aforementioned discussion into consideration, we propose a high-precision real-time algorithm, namely YOLOv7-UAV, for aerial image detection by unmanned aerial vehicles (UAVs). In summary, the contributions of this paper are as follows: (1) We have optimized the overall architecture of the YOLOv7 model by removing the second downsampling layer and introducing an innovative approach to eliminate its final neck and detection head. This modification significantly enhances the detection model's utilization of shallow-level information. (2) We present the DpSPPF module as an alternative to the SPPF module. It replaces the original max pooling layers with a concatenation of smaller-sized max pooling layers and depth-wise separable convolutions. This design choice enables a more detailed extraction of feature information at different scales. (3) We propose the binary K-means anchor generation algorithm, which avoids the problem of local optimal solutions and increases the focus on targets of sparse sizes by reasonably dividing the anchor generation range into intervals and assigning a different number of anchors to be generated in each interval. (4) Extensive experiments were conducted on both the VisDrone dataset and the TinyPerson dataset to validate the superiority of our proposed method over state-of-the-art real-time detection algorithms. YOLOv7 YOLOv7 is one of the most advanced single-stage object detection algorithms, satisfying both real-time and high-precision requirements. YOLOv7 incorporates several trainable bag-of-freebies, which can significantly enhance the detection accuracy without increasing the inference cost. It uses the "extend" and "compound scaling" methods to improve the utilization of parameters and computational resources. YOLOv7 also incorporates improved re-parametrization modules and label assignment strategies. The YOLOv7 model is mainly composed of three parts: a backbone network (Backbone), a bottleneck layer network (Neck), and a detection network (Head). The backbone network includes standard convolutional layers, max pooling layers, Extended Efficient Layer Aggregation Network (ELAN) modules, and SPPCSPC modules. The backbone network performs feature extraction, where the ELAN module increases the cardinality of newly added features using group convolution without altering the original gradient propagation path. It merges features from different groups by mixing and merging their cardinalities, which enhances the features learned from different feature maps and improves the usage of parameters and computations. The SPPCSPC module performs feature extraction through max-pooling with different pooling kernel sizes, which expands the model's receptive field. To fuse feature information on different scales, the neck uses three feature maps of different sizes extracted from the backbone for feature fusion. This part still uses the PANet [26] structure based on FPN, which adds channels from shallow to deep networks. The model's head can be viewed as the YOLOv7 classifier and regressor. However, the YOLOv7 algorithm was not specifically designed for small object datasets, so it cannot be directly applied to the detection of aerial images from unmanned aerial vehicles (UAVs). Spatial Pyramid Pooling Spatial pyramid pooling (SPP) was proposed by Kaiming He et al.
[27]. It aggregates features of different sizes by using pooling layers of different scales and produces an output of fixed size. In the YOLO series, YOLOv4 was the first to incorporate the SPP structure. YOLOv5 replaced the three parallel max pooling layers in SPP with three concatenated max pooling layers of smaller sizes to obtain Spatial Pyramid Pooling-Fast (SPPF). The impacts of SPPF and SPP on the neural network output are nearly identical, but SPPF has a faster processing speed. YOLOv7 uses the SPPCSPC module, which is a fusion of the SPP and CSPNet [28] modules. Compared to the SPP module, the SPPCSPC module can extract richer feature information, but it has a higher number of parameters and a higher computational complexity. Anchor Generation Algorithm Anchors were first introduced in Fast-RCNN as pre-defined bounding boxes used to label regions in an image that may contain objects; they aid in the precise and efficient localization of targets for object detection algorithms. During detection, anchor-based object detection models adjust the sizes of anchors and filter them to obtain the final predicted boxes. In the past, there have been two main approaches to obtaining anchors: one involves manual design, while the other involves clustering algorithms such as K-means and K-means++. In the YOLO series, the anchor mechanism was first introduced in YOLOv2. The YOLOv3, YOLOv4, YOLOv5, and YOLOv7 object detection models employ a genetic algorithm to refine the anchors generated by the K-means algorithm. Bounding Box Regression Loss Function The bounding box regression loss function is an important component of object detection tasks, which measures the difference between predicted detection boxes and true boxes. In early object detection methods, the Mean Square Error (MSE) loss function was a common choice, which calculates the squared error between the predicted coordinates of the detection box and the true coordinates. However, the MSE loss function is highly sensitive to outliers. To address this issue, Fast R-CNN introduced the Smooth L1 loss function, which uses a quadratic function when the error is small and a linear function when the error is large, making it more robust. IoU Loss is a loss function based on the intersection over union (IoU), which optimizes the model by minimizing the IoU distance between the detection box and the true box, thereby considering the degree of overlap between the boxes more directly. GIoU Loss [29] is an improved version of IoU Loss, which considers not only the intersection and union of the two boxes but also the distance between their bounding boxes. Zhaohui Zheng et al. [30] proposed DIoU and CIoU: DIoU Loss improves on GIoU Loss by using a more accurate distance metric, and CIoU Loss additionally considers the difference in aspect ratios. Compared to CIoU Loss, EIoU Loss [31] directly considers the differences in length and width, and SIoU Loss [32] adds considerations for the angle of the bounding box regression. Jinwang Wang et al. [33] pointed out that the IoU is too sensitive to position deviations of small objects; thus, they designed an evaluation metric for small objects (nwd, normalized Gaussian Wasserstein distance) based on the Wasserstein distance.
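For reference, a minimal computation of the IoU and GIoU discussed above, for axis-aligned boxes given as (x1, y1, x2, y2); this follows the standard definitions in [29], not any particular YOLO implementation.

```python
# IoU and GIoU for two axis-aligned boxes (x1, y1, x2, y2); valid, non-empty
# boxes are assumed so that the union area is positive.
def iou_giou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    iou = inter / union
    # GIoU subtracts the fraction of the smallest enclosing box not covered
    # by the union, penalizing distant boxes even at zero overlap.
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    c_area = (cx2 - cx1) * (cy2 - cy1)
    return iou, iou - (c_area - union) / c_area
```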
YOLOv7-UAV YOLOv7 is one of the most advanced single-stage object detection models, comprising seven distinct versions: YOLOv7-tiny, YOLOv7, YOLOv7-X, YOLOv7-W6, YOLOv7-E6, YOLOv7-D6, and YOLOv7-E6E. Considering the trade-off between detection accuracy and speed, we selected the YOLOv7 model as the foundation for constructing the YOLOv7-UAV network architecture. The overall structure of the YOLOv7-UAV model is illustrated in Figure 1, and it differs from YOLOv7 in four aspects. In the following four subsections, we introduce each of these four modifications in detail. It should be noted that, in order to ensure a fair comparison, we performed an overall scaling of the channel numbers on the modified model so that the compared models had similar GFLOPs. The scaling of channel numbers is accomplished through the approach described in Equation (1), C_2 = W · C_1, where C_1 and C_2 are the number of channels in a neural network layer before and after scaling, respectively, and W is the scaling factor. We denote the scaling factor for the portion of YOLOv7 located before the second downsampling layer as W1, and the scaling factor for the remaining portion as W2. The GFLOP calculation for a convolutional layer is shown in Equation (2), FLOPs = H · W · k² · C_i · C_o, where H and W represent the height and width of the output feature map, k denotes the size of the convolutional kernel, and C_i and C_o correspond to the channel numbers of the input and output feature maps, respectively. Removing the second downsampling layer in the YOLOv7 model results in a two-fold increase in the height and width of the feature maps following that layer, leading to a significant rise in the model's GFLOPs. However, the model's GFLOPs can be substantially reduced by reducing the number of feature map channels using a scaling factor W. Additionally, since removing the second downsampling layer does not affect the size of the feature maps preceding it, different values of the parameter W can be assigned to the feature maps before and after the second downsampling layer. It is worth noting that the settings of W1 and W2 in this paper not only make the GFLOPs of the model similar before and after the adjustment but also adhere to the ratio of GFLOPs between the parts of the YOLOv7 model before and after the second downsampling layer. Clearly, there are infinitely many combinations of W1 and W2 that satisfy this condition; however, due to experimental constraints, we compared only a few of them. Reducing the Receptive Field of YOLOv7 We removed the second downsampling layer of YOLOv7 in order to reduce the receptive field and mitigate the loss of fine-grained feature information caused by downsampling. Although deep-level feature information is beneficial for object classification, there exists a semantic gap between the feature information extracted from different layers, and the overly large receptive field of deep networks is not conducive to detecting small objects. Thus, we also removed the third detection head of the YOLOv7 model to enhance its utilization of fine-grained feature information. We then adjusted the model's channel numbers by scaling them with W1 = 0.75 and W2 = 0.5.
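A short sketch of the bookkeeping behind Equations (1) and (2) as reconstructed above; the layer dimensions below are illustrative, not taken from the YOLOv7 configuration.

```python
# Per-layer cost of Equation (2): H * W * k^2 * C_i * C_o multiply-accumulate
# operations for one convolutional layer, plus the channel scaling of Eq. (1).
def conv_flops(h, w, k, c_in, c_out):
    return h * w * k * k * c_in * c_out

def scale_channels(c, w_factor):
    # Equation (1): channels scaled by factor W, rounded to an integer count.
    return max(1, round(c * w_factor))

flops = conv_flops(h=80, w=80, k=3,
                   c_in=scale_channels(128, 0.75),
                   c_out=scale_channels(256, 0.75))
print(f"{flops / 1e9:.3f} GFLOPs")
```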
Replacing SPPCSPC with DpSPPF We believe that a large-sized max pooling layer results in the loss of fine-grained feature information, which is detrimental to small object detection. Therefore, we propose the DpSPPF module, which replaces the max pooling layer (kernel size 5) in the SPPF module with interconnected smaller depth-wise separable convolution (kernel size 3) and max pooling (kernel size 3) layers. The structure of the DpSPPF module is illustrated in Figure 2. Subsequently, we incorporated the DpSPPF module into the deepest layer of the backbone of the YOLOv7 model that had undergone the two modifications above, in order to aggregate feature information of different scales. Binary K-Means Anchor Generation Algorithm In UAV-based image detection tasks, there exist targets of different sizes, and these targets are imbalanced both in the dataset and in real-world scenarios. YOLOv7 initially generates anchors using the K-means algorithm and then applies a standard genetic algorithm to mutate these anchors based on their fitness, which is determined by the overlap between the generated anchors and the dimensions of all the targets in the training set. However, the K-means algorithm is highly influenced by initial points and outliers, which may result in clustering results that are only locally optimal. In addition, when the K-means algorithm is combined with the genetic algorithm in the anchor clustering process, it often focuses on samples with common sizes, while some samples with rare sizes may deviate significantly from the clustering results. To address these issues, we propose an improved anchor generation algorithm referred to as the binary K-means anchor generation algorithm. The binary K-means anchor generation algorithm first obtains K cluster centers on the dataset using K-means and the genetic algorithm. Based on the width and height of the cluster anchor with the largest area, the algorithm divides the target size distribution of the dataset into three intervals for anchor generation, requiring each interval to generate at least one anchor. This helps the generated anchors to be closer to some rare sizes and reduces the probability of them becoming local optimal solutions. In addition, the algorithm determines the number of anchors to be generated in each interval from the ratio between the numbers of cluster centers contained in the intervals, so that more attention is paid to samples with common sizes during anchor generation. Its steps are shown in Algorithm 1 (reproduced in condensed form in the next subsection), and a simplified code sketch is given below. The selection of the K value affects the degree of attention paid by the target detection algorithm to targets of different sizes, so K needs to be selected within an appropriate range. Through the experiments in Section 4.3.2, we found that when generating six anchors on the VisDrone2019 dataset, as long as the value of K is within [11, 19], the binary K-means anchor generation algorithm performs better than the plain K-means anchor generation algorithm. The default value of K for YOLOv7-UAV is 12, which was determined to be optimal through testing on the VisDrone2019 dataset.
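The following is a heavily simplified sketch of the two-stage idea, under stated assumptions: the interval boundaries, the scalar size measure (w + h) and the proportional allocation rule are guesses where the original listing was garbled, and the genetic-algorithm refinement is omitted entirely.

```python
# Simplified two-stage (binary) K-means anchor generation. Assumptions: sizes
# are compared via w + h, the three intervals are (0, 2b), (2b, 4b), (4b, inf)
# with b = (A_K[0] + A_K[1]) / 6, and anchors are allocated to intervals in
# proportion to the stage-1 centres they contain (at least one each); the
# rounded allocation may differ slightly from k.
import numpy as np
from sklearn.cluster import KMeans

def binary_kmeans_anchors(wh, K=12, k=6):
    wh = np.asarray(wh, dtype=float)                 # (n, 2) box sizes (w, h)
    # Stage 1: K cluster centres over all sizes, sorted by area.
    centres = KMeans(n_clusters=K, n_init=10).fit(wh).cluster_centers_
    centres = centres[np.argsort(centres.prod(axis=1))]
    b = (centres[-1, 0] + centres[-1, 1]) / 6.0      # partition threshold
    bounds = [(0.0, 2 * b), (2 * b, 4 * b), (4 * b, np.inf)]
    size = wh.sum(axis=1)
    groups = [wh[(size >= lo) & (size < hi)] for lo, hi in bounds]
    csize = centres.sum(axis=1)
    counts = [max(1, int(((csize >= lo) & (csize < hi)).sum()))
              for lo, hi in bounds]
    alloc = [max(1, round(k * c / sum(counts))) for c in counts]
    # Stage 2: re-cluster each non-empty interval separately.
    parts = [KMeans(n_clusters=int(min(a, len(g))), n_init=10)
             .fit(g).cluster_centers_
             for a, g in zip(alloc, groups) if len(g) > 0]
    return np.vstack(parts)
```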
To illustrate more clearly the difference in clustering performance between the binary K-means anchor generation algorithm and the approach that sequentially uses K-means and the genetic algorithm, Figure 3 presents a comparison of the two algorithms on the VisDrone2019 dataset and the TinyPerson dataset. It can be observed that the anchors generated by the binary K-means anchor generation algorithm are more widely dispersed, yet they also place greater emphasis on objects of common sizes. Algorithm 1 (condensed; parts of the original listing were lost in extraction). Input: T, the set of target bounding box sizes; K, the number of clusters in the first clustering; k, the number of anchors needed. Output: anchors (a_1, a_2, ..., a_k). (i) Consecutively apply K-means and the genetic algorithm to obtain K cluster centers on T, listed in ascending order of area as A_1, ..., A_K. (ii) Initialize N_1, N_2, N_3 ← 0; these represent the numbers of anchors allocated to the three intervals. (iii) Set the partition threshold b ← (A_K[0] + A_K[1])/6, which defines the three intervals. (iv) Assign the targets in T to the interval sets T_1, T_2, T_3 and determine the per-interval anchor counts from the cluster centers falling into each interval (the detailed loop was garbled in the source). (v) Successively apply K-means and the genetic algorithm for clustering on T_1 (k = k_1), T_2 (k = k_2), and T_3 (k = k_3) to obtain the anchors. Nwd and Positive/Negative Sample Allocation Strategy YOLOv7 determines the number of positive samples required (k) for each ground truth object by summing its top 10 IoU scores. The model then selects, for each ground truth object, the k samples with the smallest cost (the cost being the classification loss and the regression loss added in a ratio of 1:3) as positive samples. Because the IoU is overly sensitive to size deviations of small objects, this approach assigns an insufficient number of positive samples to small ground truth targets during training. The normalized Gaussian Wasserstein distance (nwd) is a novel metric for small object detection, which models the bounding boxes as two-dimensional Gaussian distributions and measures the similarity between predicted and ground truth objects regardless of their overlap. The nwd is less affected by the scale of objects, making it particularly suitable for evaluating small objects. In YOLOv7's positive/negative sample allocation strategy, we use a weighted combination of the nwd metric and the IoU metric instead of the IoU metric alone. The IoU loss is retained, as it is more suitable for medium- to large-sized objects. The nwd is computed as in Equation (3), nwd(N_a, N_b) = exp(−√(W_2²(N_a, N_b))/C), where C is a constant related to the dataset (we adopted the same setting of C = 12.8 as in the original paper [33]) and W_2²(N_a, N_b) is the squared 2-Wasserstein distance of Equation (4) between the Gaussian distributions N_a and N_b modeled from the boxes B_a = (x_a, y_a, w_a, h_a) and B_b = (x_b, y_b, w_b, h_b). After experimental tuning, we set nwd:IoU = 0.5:0.5 for the TinyPerson dataset, and nwd:IoU = 0:1.0 for the VisDrone2019 dataset (a short code sketch of the nwd is given below). The VisDrone2019 [34] dataset consists of a large number of annotated images captured by drones, with a total of 7019 images divided into 10 classes. The training and validation sets contain 6471 and 548 images, respectively. This dataset mainly contains small- and medium-sized targets.
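The nwd of Equations (3) and (4) is compact enough to sketch directly before the dataset descriptions continue; the box values in the example are illustrative.

```python
# Normalized Gaussian Wasserstein distance following Wang et al. [33]: each
# box (cx, cy, w, h) is modelled as a 2-D Gaussian, and the squared
# 2-Wasserstein distance between two such Gaussians has this closed form.
import math

def nwd(box_a, box_b, C=12.8):
    xa, ya, wa, ha = box_a
    xb, yb, wb, hb = box_b
    # Equation (4): W_2^2 = ||(xa, ya, wa/2, ha/2) - (xb, yb, wb/2, hb/2)||^2
    w2_sq = ((xa - xb) ** 2 + (ya - yb) ** 2 +
             ((wa - wb) / 2.0) ** 2 + ((ha - hb) / 2.0) ** 2)
    # Equation (3): exponential normalization into (0, 1].
    return math.exp(-math.sqrt(w2_sq) / C)

print(nwd((10, 10, 4, 4), (11, 10, 4, 6)))
```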
The TinyPerson [35] dataset consists of 1610 images with a total of 72,651 annotated bounding boxes, mainly focusing on small objects. The images in this dataset were mainly captured by unmanned aerial vehicles and are categorized into two groups, namely sea_person and earth_person. The training and testing sets comprise 794 and 816 images, respectively. A few annotation boxes in TinyPerson can be ignored, including densely packed crowds that are difficult to separate, ambiguous regions, and shadow regions in water. These annotation boxes were replaced by the mean value of the image region in [35], and we simply ignore them. Evaluation Metrics We evaluated the performance of the object detection algorithms using four metrics, namely the mean average precision (map), GFLOPs, Frames Per Second (FPS), and the number of parameters. The map is computed as in Equation (5), map = (1/N) Σ_{i=1}^{N} ∫₀¹ P_i(R) dR, where N represents the total number of categories, P represents precision, and R represents recall. map0.5 denotes the map calculated at an IoU threshold of 0.5, and map(0.5:0.95) denotes the mean of the map scores across 10 IoU thresholds ranging from 0.5 to 0.95 with a step size of 0.05. GFLOPs quantify the computational complexity of the model, the parameter count measures the size of the model, and FPS represents the actual inference speed of the model. In the experiment conducted on the TinyPerson dataset, we set the number of epochs, the batch size, and the image input dimensions to 150, 4, and 960 × 960, respectively. In the experiment conducted on the VisDrone2019 dataset, we set the number of epochs, the batch size, and the image input dimensions to 300, 4, and 640 × 640, respectively. The iteration numbers of the K-means algorithm and the genetic algorithm were set to 30 and 1000, respectively. Each object detection model was evaluated for FPS on the test set corresponding to the dataset on which it was trained. Ablation Experiments We divided the construction process of YOLOv7-UAV into four steps. In step 1, we removed the second downsampling layer and the deepest detection head of YOLOv7 and then reduced the number of channels in the model using Equation (1) with the weights W1 = 0.75 and W2 = 0.5. In step 2, we introduced the DpSPPF module. In step 3, we utilized the binary K-means anchor generation algorithm. In step 4, we used the weighted nwd and IoU instead of the IoU alone in the positive and negative sample allocation strategy. In the tables of each subsection of this section, the experimental settings highlighted in bold serve as the baseline for the subsequent subsection. Tables 1 and 2 illustrate the changes in the performance of the object detection model during the construction of YOLOv7-UAV on the VisDrone2019 and TinyPerson datasets, respectively. The results in these two tables indicate that each improvement step we applied to YOLOv7 was effective. In the following four sections, we further describe the ablation experiments conducted for each step and compare YOLOv7-UAV with other advanced object detection algorithms. Due to the presence of a large number of intermediate-sized objects in VisDrone2019, we did not apply step 4 to the models trained on VisDrone2019. We present the impact of changes to the model architecture on the detection model in Tables 3 and 4.
In order to facilitate a fair comparison, we ensured that the experimental setups had similar GFLOP levels by adjusting the overall channel numbers of the models. The comparative results of the first and third rows, as well as those of the second and fourth rows in both tables, demonstrate that removing the detection head located at the deepest position of YOLOv7 not only significantly increases the detection speed but also greatly improves the accuracy of detecting unmanned aerial vehicle (UAV) images. The comparative results from the fifth to the seventh rows in both tables indicate that, in terms of both detection speed and accuracy, the DpSPPF module achieves the best performance compared to the popular SPPF and SPPCSPC modules. The experimental setup in the seventh row of both tables corresponds to step 2 as described in Tables 1 and 2. The Impact of the K Value in the Binary K-Means Anchor Generation Algorithm We conducted an empirical study on the impact of the choice of the K value in the first iteration of the binary K-means anchor generation algorithm on the performance of the object detection algorithm. The results, shown in Table 5, indicate that as the value of K increases, the generated range of anchors tends to become larger. Furthermore, the binary K-means anchor generation algorithm performs better when the value of K falls within the range [11, 19]. Specifically, the results in the table also indicate that the object detection algorithm achieves the highest map (0.5:0.95) with K = 12 (i.e., step 3 in Tables 1 and 2); the six anchors generated for K = 12 are (4, 6), (12, 8), (8, 15), (22, 20), (48, 34), and (91, 56). In Table 6, we present the impact of the binary K-means anchor generation algorithm on several popular object detection models. For the object detection algorithms with nine anchors, we simply set K to 18 by taking 12 × k_1/k_2 = 12 × 9/6 = 18. As demonstrated in this table, all detection algorithms exhibited better performance on both datasets, indicating the excellent generalization ability of the binary K-means anchor generation algorithm. Table 7 presents the impact of using different weights of nwd and IoU in the positive/negative sample allocation strategy on the detection performance. The results in the table indicate that on the TinyPerson dataset, setting nwd:IoU = 0.5:0.5 (i.e., step 4 in Table 2) yields the best performance, while the VisDrone dataset, which contains a large number of medium-sized objects, does not require the use of the nwd. Algorithm Comparison The detection performance of YOLOv7-UAV and other state-of-the-art real-time object detection algorithms on the UAV datasets is compared in Table 8. The algorithm tph-YOLOv5 in the table is specifically designed for detecting small targets in UAV imagery, while YOLOv8m-p2 is a version of YOLOv8 specifically designed for small object detection. The results indicate that YOLOv7-UAV outperforms its counterparts in both detection speed and accuracy. Figure 4 illustrates the detection results of YOLOv7 and YOLOv7-UAV; both models were trained on the training set of the VisDrone2019 dataset. It can be observed that YOLOv7-UAV has a lower false negative rate and generally higher confidence levels compared to YOLOv7.
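For completeness, a minimal sketch of the AP integral behind the map values compared throughout this section; the matching of detections to ground truth at a given IoU threshold, which precedes this step in a real evaluation, is omitted.

```python
# VOC-style average precision: area under the precision-recall curve with a
# monotone precision envelope; mAP is the mean over classes (Equation (5)).
import numpy as np

def average_precision(recall, precision):
    r = np.concatenate(([0.0], np.asarray(recall), [1.0]))
    p = np.concatenate(([0.0], np.asarray(precision), [0.0]))
    p = np.maximum.accumulate(p[::-1])[::-1]      # precision envelope
    idx = np.where(r[1:] != r[:-1])[0]            # recall change points
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

def mean_average_precision(ap_per_class):
    return float(np.mean(ap_per_class))
```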
Conclusions Detecting the large numbers of small targets that appear at diverse shooting angles in unmanned aerial vehicle (UAV) images poses a significant challenge for existing detection algorithms. This paper proposes YOLOv7-UAV, an algorithm that can detect UAV images in real time. Firstly, the algorithm reduces the loss of feature information and improves the model's utilization of fine-grained feature information by removing the second downsampling layer and the deepest detection head of the YOLOv7 model. Secondly, the algorithm replaces the max pooling layer in the SPPF module with concatenated smaller depth-wise separable convolution and max pooling layers, optimizing its ability to extract fine-grained feature information while retaining the ability to aggregate multi-scale feature information. Subsequently, the paper proposes a binary K-means anchor generation algorithm that reasonably divides the anchor generation range into intervals and retains a focus on common sizes to obtain better anchors. Finally, YOLOv7-UAV introduces the weighted nwd and IoU as evaluation metrics in the label assignment strategy on the TinyPerson dataset. The results on the VisDrone2019 and TinyPerson datasets demonstrate that YOLOv7-UAV outperforms most popular real-time detection algorithms in terms of detection accuracy, detection speed, and memory consumption. Despite the excellent performance of our proposed method on the unmanned aerial vehicle object detection datasets, there are still some limitations. Specifically, the effectiveness of YOLOv7-UAV on low-power platforms, such as embedded devices, requires further testing and optimization. Moreover, the determination of the K value in the binary K-means anchor generation algorithm is based solely on experimental results, without a sufficient analysis of the factors influencing K. In real-world unmanned aerial vehicle tasks, adverse weather conditions such as fog and darkness may be encountered, for which our method has not been specifically optimized. Additionally, unmanned aerial vehicles can be equipped with camera systems that have a larger field of view (FOV), which introduces radial and barrel distortions that significantly affect the detection of small targets, an aspect that our proposed algorithm has not specifically addressed. In future work, we plan to optimize the performance of YOLOv7-UAV on low-power platforms by employing model compression techniques such as pruning and distillation. Moreover, we aim to improve the anchor generation process of the binary K-means algorithm by determining an appropriate K value directly from the distribution of the data within the dataset and the receptive fields of the target detection model. We plan to incorporate advanced generative networks or construct richer datasets to enhance the performance of the YOLOv7-UAV algorithm in challenging environments, such as those with dense fog or low light. Additionally, we intend to investigate the rectification of wide-angle camera images and devise targeted data augmentation methods to enhance the detection performance of object detection algorithms on images captured with a larger FOV.
Figure 3. Comparison of clustering results between the binary K-means anchor generation algorithm and the approach that sequentially uses k-means and a genetic algorithm; (a) clustering results on VisDrone2019, (b) clustering results on TinyPerson. ('k' and 'K' represent the number of anchors to be generated and the number of clusters in the first clustering step of the binary K-means prior box generation algorithm, respectively. The graph's coordinate points indicate the size of the targets, with a darker shade of blue indicating a higher number of targets.)

T1, T2 and T3 represent the sets of object bounding box sizes within the three intervals used to generate anchors.

Figure 4. The detection results of YOLOv7 and YOLOv7-UAV (the sample images are all from the testing set of the VisDrone2019 dataset).

Table 1. Impact of each step evaluated on VisDrone2019.
Table 2. Impact of each step evaluated on TinyPerson.
Table 3. Impact of changes in the YOLOv7s architecture evaluated on VisDrone2019 ('dsp' refers to the second downsampling layer).
Table 4. Impact of changes in the YOLOv7s architecture evaluated on TinyPerson ('dsp' refers to the second downsampling layer).
Table 5. The impact of the K value in the binary K-means anchor generation algorithm on VisDrone2019.
Table 6. Impact of the binary K-means anchor generation algorithm on several detection models.
Table 7. Impact of using different weights of nwd and IoU.
Table 8. Performance of the YOLOv7-UAV algorithm and other object detection algorithms.
Anamorphic and spatial frequency dependent phase modulation on liquid crystal displays: optimization of the modulation diffraction efficiency

In this work we present experimental evidence of an anamorphic and spatial frequency dependent phase modulation in commercially available twisted nematic liquid crystal spatial light modulators. We have found that the phase modulation depth depends on the magnitude of the local spatial frequency component along the horizontal direction. Along the vertical direction the phase modulation depth does not depend on the spatial frequency. This phenomenon is related to the electronics driving the device and in no way related to liquid crystal physics. It causes a reduction of the optical efficiency of a diffractive optical element displayed on this type of modulator. We present an algorithm to correct this effect and display a diffractive optical element more efficiently. We apply it to the particular case of a Fresnel lens. Experimental results that confirm the improvement in the efficiency of the displayed diffractive lens are presented. ©2005 Optical Society of America

OCIS codes: (050.1970) Diffractive optics; (230.3720) Liquid-crystal devices; (230.6120) Spatial light modulators; (220.3620) Lens design; (070.4560) Optical data processing.

References and links
1. H.-K. Liu, J. A. Davis and R. A. Lilly, "Optical-data-processing properties of a liquid-crystal television spatial light modulator," Opt. Lett. 10, 635-637 (1985).
2. A. Márquez, C. Iemmi, J. Campos, J. C. Escalera and M. J. Yzuel, "Programmable apodizer to compensate chromatic aberration effects using a liquid crystal spatial light modulator," Opt. Express (in press).
3. R. Dou and M. K. Giles, "Closed-loop adaptive optics system with a liquid crystal television as a phase retarder," Opt. Lett. 20, 1583-1585 (1995).
4. P. Yeh, Optics of Liquid Crystal Displays (John Wiley & Sons, New York, 1999).
5. J. Nicolás, J. Campos and M. J. Yzuel, "Phase and amplitude modulation of elliptic polarization states by nonabsorbing anisotropic elements: application to liquid-crystal devices," J. Opt. Soc. Am. A 19, 1013-1020 (2002).
6. K. Miyamoto, "The phase Fresnel lens," J. Opt. Soc. Am. 51, 17-20 (1961).
7. Y. Takaki and H. Ohzu, "Liquid-crystal active lens: a reconfigurable lens employing a phase modulator," Opt. Commun. 126, 123-134 (1996).
8. V. Laude, "Twisted-nematic liquid crystal pixelated active lens," Opt. Commun. 153, 134-152 (1998).
9. D. A. Buralli and G. M. Morris, "Effects of diffraction efficiency on the modulation transfer function of diffractive lenses," Appl. Opt. 31, 4389-4396 (1992).
10. D. A. Pommet, M. G. Moharam and E. B. Grann, "Limits of scalar diffraction theory for diffractive phase elements," J. Opt. Soc. Am. A 11, 1827-1834 (1994).
11. M. Kuittinen and H. P. Herzig, "Encoding of efficient diffractive microlenses," Opt. Lett. 20, 2156-2158 (1995).
12. J. A. Davis, D. M. Cottrell, R. A. Lilly and S. W. Connely, "Multiplexed phase-encoded lenses written on spatial light modulators," Opt. Lett. 14, 420-422 (1989).
13. E. Carcolé, J. Campos and S. Bosch, "Diffraction theory of Fresnel lenses encoded in low-resolution devices," Appl. Opt. 33, 162-174 (1994).
14. E. Carcolé, J. Campos, I. Juvells and S. Bosch, "Diffraction efficiency of low resolution Fresnel encoded lenses," Appl. Opt. 33, 6741-6746 (1994).
15. I. Moreno, J.
Campos, C. Gorecki and M. J. Yzuel, "Effects of amplitude and phase mismatching errors in the generation of a kinoform for pattern recognition," Jap. J. Appl. Phys. 34, 6423-6434 (1995).
16. I. Moreno, C. Iemmi, A. Márquez, J. Campos and M. J. Yzuel, "Modulation light efficiency of diffractive lenses displayed onto a restricted phase-mostly modulation display," Appl. Opt. 43, 6278-6284 (2004).
17. R. D. Juday, "Optimal realizable filters and the minimum Euclidean distance principle," Appl. Opt. 32, 5100-5111 (1993).
18. R. D. Juday, "Generality of matched filtering and minimum Euclidean distance projection for optical pattern recognition," J. Opt. Soc. Am. A 18, 1882-1896 (2001).
19. J. L. Bougrenet de la Tocnaye and L. Dupont, "Complex amplitude modulation by use of liquid crystal spatial light modulators," Appl. Opt. 36, 1730-1741 (1997).
20. A. Márquez, C. Iemmi, I. Moreno, J. A. Davis, J. Campos and M. J. Yzuel, "Quantitative prediction of the modulation behavior of twisted nematic liquid crystal displays," Opt. Eng. 40, 2558-2564 (2001).
21. Z. Zhang, G. Lu and F. T. S. Yu, "A simple method for measuring phase modulation in LCTVs," Opt. Eng. 33, 3018-3022 (1994).
22. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996), 16-19.

Introduction

Spatial light modulators (SLM) are optical devices useful for applications in optical image processing, programmable diffractive optics and adaptive optics [1-3]. Among the different technologies, liquid crystal displays (LCD) have become the most available and most widely employed SLMs for these applications [4]. For instance, twisted nematic LCDs can be extracted from projection devices, and they can act as phase-mostly SLMs when the input and output polarization configurations are properly selected [5]. These phase-mostly SLMs are very interesting devices for diffractive applications, where a designed phase-only mask is displayed onto the liquid crystal display. One case of particular interest is Fresnel lenses. They have long been proposed as replacements for, or complements to, conventional refractive lenses [6], and they actually play an important role in modern optical technology. Real-time diffractive Fresnel lenses have been produced using LCDs in Refs. [7,8].
One key parameter for the realization of such diffractive optical elements is the diffraction efficiency [9]. The pixelated structure of the display produces a characteristic 2D diffraction grid pattern, which strongly reduces the efficiency of the displayed element (its reconstruction is replicated at every point of the grid pattern). In spite of this effect, the use of pixelated SLMs is still very interesting for diffractive applications. The low spatial resolution of electronically addressed SLMs makes it possible to restrict the efficiency analysis of the displayed diffractive elements to scalar theory [10]. This limited spatial resolution also reduces, in general, the diffraction efficiency of the displayed lenses and induces a quantization of the phase levels at the lens edges [11]. If the focal length is too short, a lens array effect is produced, which degrades the efficiency of the designed lens [12-14]. Here we restrict our analysis to lenses with focal lengths long enough to avoid this effect. In this situation, the diffraction efficiency of the displayed phase-only diffractive element is reduced when the modulation produced by the display is not a perfect linear phase-only modulation with 2π phase depth [15]. We have recently defined a modulation diffraction efficiency parameter and developed a model to evaluate it numerically as a function of the complex modulation provided by the LCD [16].

In recent experiments we have observed that the LCD phase modulation strongly depends on the orientation and spatial frequency components of the displayed image. We have found that this phenomenon is related to the electrical signal addressing scheme used for large-information-content displays, i.e. it is in no way related to liquid crystal physics. Basically, since the LCD is fed a video signal, the signal carries a low frequency along the vertical direction and a high frequency along the horizontal direction, as is also usual in television circuits. When the frequency along the horizontal direction is very high, the signal can be affected by the bandwidth of the electronic circuitry. This electrical analysis will be discussed in greater detail in a future paper. In this paper we present measurements of the phase modulation for our LCD working in a phase-mostly configuration that evidence an anamorphic behavior. The phase modulation remains invariant over the whole range of vertical frequencies (rows per period) and for low horizontal frequencies (columns per period). However, the phase modulation depth for high horizontal frequencies is seriously reduced. In general, a limited phase modulation depth seriously reduces the modulation diffraction efficiency. Therefore this anamorphic effect degrades the efficiency of any diffractive phase-only element displayed onto the LCD. Another consequence is that the shape of the diffractive element may become distorted, e.g. losing its rotational symmetry in the case of a Fresnel lens.

We demonstrated in Ref.
[16] that the application of a projection of the complex values based on the minimum Euclidean distance principle [17,18] leads to an important improvement in the efficiency of a diffractive phase element, even if the phase modulation depth provided by the display is seriously reduced. The technique is valid for phase-only functions with values distributed over a 2π range with a constant probability density function. This is the case for a Fresnel lens. Here we present an algorithm for the design of the displayed phase-only diffractive element that compensates for the anamorphic and spatial frequency dependent behavior of the phase modulator. The algorithm is based on applying a different Euclidean projection to spatial frequencies with different orientations. We apply this technique to the optimization of the efficiency of displayed Fresnel lenses and experimentally demonstrate an improvement that agrees with the predictions of the theory.

The paper is organized as follows. In Section 2 we present the measurements of the phase modulation obtained from gratings displayed onto the LCD with different orientations and frequencies; they demonstrate the anamorphic behavior mentioned above. In Section 3 the concept of modulation efficiency is reviewed and the algorithm to correct the anamorphic phase modulation is presented. In Section 4 we present experimental results on the efficiency of the displayed lenses that show the improvement obtained when the anamorphic phase modulation is corrected. Finally, in Section 5 the conclusions of the work are presented.

Anamorphic and spatial frequency dependent phase modulation depth

Liquid crystal displays modulate an input light beam through the change in the orientation of the birefringent liquid crystal molecules when a voltage is applied. When inserted between two polarizers they generally produce a complex amplitude modulation [19]. We have recently demonstrated that phase-mostly modulation can be obtained with this kind of display when a proper elliptically polarized light configuration is employed [5,20]. For this purpose we use a combination of polarizers and wave plates in front of and behind the LCD, in order to optimize the response of the display and achieve a phase-mostly modulation.

In the experiments we use a twisted nematic LCD from Sony, model LCX012BL, with VGA resolution (640 columns × 480 rows), extracted from a Sony VPL-V500 video projector. We use a short wavelength, λ = 458 nm, in order to achieve a large value of the phase modulation depth. Figure 1 shows the intensity of the transmitted light as a function of the addressed grey level in the phase-mostly modulation configuration. This polarization configuration was obtained following the procedure described in Ref. [20]. These data are obtained by addressing a uniform image to the display and measuring the transmission as a function of the addressed gray level, which ranges from 0 to 255. The intensity transmission in Fig. 1 has been normalized with respect to the maximum value. A quite flat curve is obtained.

In this polarization configuration we also measure the phase modulation produced by the display. We use a method proposed in Ref. [21], where a binary grating is addressed to the display and the ratio between the first and zero diffraction orders is measured. This ratio depends on the phase difference between the two levels of the grating. Since the intensity transmission remains almost flat over the entire range (Fig.
1), the LCD approximately produces a binary phase grating. We have observed that the LCD response strongly depends on the orientation and spatial frequency components of the displayed image. We have displayed gratings along the vertical and horizontal directions, with different periods. Figure 2 shows the intensity of the zero and first diffraction orders for different binary gratings displayed on the LCD. The gratings are generated with a fixed gray level g1 = 0 and a second gray level g2 variable in the range from 0 to 255. In Figs. 2(a) and 2(b) the grating is displayed along the vertical direction, with periods of 64 and 2 rows/period respectively. As the gray level g2 increases, the intensity of the zero order (DC) is reduced while that of the first order increases. When g2 reaches a value around 180 the intensity of the first order shows a maximum and the zero order is strongly reduced, indicating a π phase difference between both gray levels. For higher values the DC term increases again while the first diffraction order is reduced, showing that the phase modulation approaches 2π for g2 = 255. Both gratings, with 64 and 2 rows/period, show very similar behavior, proving that the phase modulation is not affected by the spatial frequency in this direction. This is not the situation when the grating is displayed along the horizontal direction. In Figs. 2(c) and 2(d) the binary grating has a period of 64 and 2 columns/period respectively. In Fig. 2(c) the results are equivalent to those of the previous gratings. However, when the period of the grating is of only 2 columns the evolution of the diffracted orders is very different. In Fig. 2(d) a maximum phase depth of only around π radians is observed for the maximum gray level (note that the first order intensity is equal to zero for this gray level).

We tested this asymmetric behavior of the phase modulation depth for gratings with other periods. Figure 3 shows the phase modulation as a function of the addressed gray level. Figure 3(a) corresponds to gratings with vertical frequencies of 64, 32, 16, 8, 4 and 2 rows/period. The phase modulation remains invariant over the whole range of frequencies, reaching a maximum phase depth very close to 2π. Thus it represents an ideal phase-mostly modulation device for diffractive optics. Figure 3(b) shows the phase modulation now measured using gratings with horizontal frequencies of 64, 32, 16, 8, 4 and 2 columns/period. These results show that for low frequencies the phase modulation is equivalent to that produced for vertical frequencies. However, as the spatial frequency increases, the maximum phase modulation depth is reduced, reaching only π radians when the period is of only 2 columns. Therefore, we conclude that the device is not as efficient at displaying diffractive optical elements with high horizontal frequencies as it is with low horizontal frequencies or with vertical frequencies. If a regular spherical Fresnel lens is displayed onto this LCD, the modulation diffraction efficiency along the vertical direction is higher than along the horizontal direction.
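The Ref. [21] measurement just described can be summarized with a short numerical helper. For an ideal binary phase grating with 50% duty cycle, scalar diffraction theory gives I0 ∝ cos²(δ/2) and I1 ∝ (4/π²) sin²(δ/2) for the zero- and first-order intensities, so the phase step δ follows from their measured ratio. This is a sketch under those idealized assumptions (pure phase modulation, equal duty cycle), not the exact data reduction used in the experiments:

```python
import math

def phase_step_from_orders(I0, I1):
    """Recover the phase step δ of an ideal 50% duty-cycle binary phase
    grating from measured zero- and first-order intensities, using
    I0 ∝ cos²(δ/2) and I1 ∝ (4/π²) sin²(δ/2) (scalar theory, pure phase
    modulation assumed).  The intensities only fix δ up to the branch of
    δ/2, so values above π must be resolved by tracking the curves (e.g.
    the zero order vanishing at δ = π and growing again toward 2π)."""
    ratio = math.sqrt(I1 / I0) if I0 > 0 else float("inf")
    return 2.0 * math.atan((math.pi / 2.0) * ratio)
```

For instance, a vanishing zero order (I0 → 0) returns δ = π, matching the behavior observed around gray level g2 ≈ 180 in Fig. 2.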
Encoding algorithm for phase-only diffractive elements with high modulation efficiency

In this section we present a method to encode a diffractive optical element on this anamorphic phase modulator with optimal modulation diffraction efficiency. First we briefly review the method based on the minimum Euclidean distance to encode phase-only diffractive elements onto a phase-only modulator with limited phase modulation depth. Then we extend the method to incorporate the anamorphic phase modulation described above.

Minimum Euclidean projection

Let us consider a designed spatial phase-only distribution ϕ(x,y) to be displayed onto the LCD. The display produces in general a complex modulation m(ϕ) = a(ϕ)exp(ip(ϕ)), which is a function of the addressed phase ϕ. The minimum Euclidean projection states that the most efficient way of projecting a complex diffractive mask onto a restricted modulation domain is to assign the available complex value closest in the complex plane [17,18]. In Refs. [15,16] we applied this technique to the realization of a phase-only diffractive optical element with a pure phase-only modulator with a maximum phase depth smaller than 2π. The maximum phase depth is ε = 2π(1−c), where the mismatch parameter c is in the range [0,1]. In this case, the optimal realization given by the minimum Euclidean projection involves the following encoding of the phases p as a function of the addressed phase ϕ [16]:

p(ϕ) = ϕ for 0 ≤ ϕ ≤ ε,  p(ϕ) = ε for ε < ϕ < ε/2 + π,  p(ϕ) = 0 for ε/2 + π ≤ ϕ < 2π,  (1)

where we omitted the dependence on the spatial coordinates (x,y) for simplicity. If the phase-only function ϕ(x,y) has values distributed over a 2π range with a constant probability density function, the modulation diffraction efficiency (η_m) can be defined as the squared modulus of the coefficient of the first term in the Fourier expansion of the function m(ϕ) [16]. Evaluating the modulation diffraction efficiency η_m for the encoding in Eq. (1) gives [16]

η_m = [(1 − c) + sin(πc)/π]².  (2)

A Fresnel lens is a particular case of this situation, where the spatial dependence ϕ(x,y) of the designed phase-only function is given by ϕ(x,y) = (πr²/λf) mod 2π, with r = (x² + y²)^{1/2} the radial coordinate and f the focal length of the lens. In Ref. [16] we demonstrated that a great improvement in the modulation diffraction efficiency of the displayed lens is obtained when the minimum Euclidean projection is applied.

Algorithm for the correction of the phase modulation

As described above, the anamorphic behavior of the LCD reduces the efficiency of the displayed diffractive elements. To overcome this drawback we propose an algorithm for the correction of the phase modulation. In principle, the algorithm is valid for diffractive optical elements (DOE) exhibiting a phase variation slow enough that it makes sense to define a local spatial frequency at each point of the DOE [22]. In Section 2 it was shown that the LCD response is almost uniform for vertical spatial frequencies but strongly dependent on the horizontal frequency values. Therefore, a correction must be introduced to compensate for this effect.

The first step is to analyze, for each pixel of the image, the horizontal frequency value in its neighborhood, i.e.
we calculate the local frequency component ν_x along the horizontal direction. This local frequency is given by [22]

ν_x(x,y) = (1/2π) ∂ϕ(x,y)/∂x.

In principle, the concept of local spatial frequency makes sense if the variation of the phase function ϕ(x,y) is slow enough [22]. In the case of the Fresnel lens phase introduced above the result is

ν_x(x,y) = x/(λf),

where we recover the known result that the local spatial frequency of a quadratic phase function depends linearly on the coordinate. Once this value is established we have to know the maximum phase modulation (ε) achieved for that frequency. Figure 4 shows the maximum phase modulation depth as a function of the horizontal period. As the next step, if the pixel phase value ϕ is larger than ε, according to Eq. (1) we have to assign the phase ε if ε < ϕ < ε/2 + π, or the phase 0 if ϕ > ε/2 + π. If ϕ < ε we have to apply a look-up table that connects the phase value ϕ with a gray value. Figure 5 shows these curves for the different spatial frequencies for which the phase modulation has been measured. These curves are in fact the inverses of those plotted in Fig. 3(b). For frequency values different from the measured ones, a linear interpolation between the nearest frequencies is applied. Figure 5 sketches an example of this situation: a phase ϕ is desired for a spatial period of 6 pixels in the horizontal direction; since this frequency has not been calibrated, it is assigned a gray level g obtained from the interpolation between the curves for periods of 4 and 8 pixels.
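To make the procedure of this subsection concrete, the following sketch encodes a spherical Fresnel lens with the frequency-dependent correction: it evaluates the local horizontal frequency ν_x = x/(λf), interpolates the measured maximum phase depth ε for the corresponding horizontal period, and applies the projection of Eq. (1). The calibration arrays, pixel pitch and all names are placeholders of ours, not the measured data of Figs. 4 and 5, and the final phase-to-gray-level look-up of Fig. 5 is omitted:

```python
import numpy as np

def euclidean_projection(phi, eps):
    """Minimum Euclidean projection of a desired phase phi in [0, 2π)
    onto a modulator with maximum phase depth eps (Eq. (1))."""
    return np.where(phi <= eps, phi,
                    np.where(phi < eps / 2 + np.pi, eps, 0.0))

def corrected_lens_phase(n, wavelength, focal, pitch, periods_px, depths):
    """Phase map of a spherical Fresnel lens corrected for the anamorphic
    response.  periods_px/depths tabulate the measured maximum phase depth
    versus horizontal period in pixels (cf. Fig. 4); placeholder values."""
    x = (np.arange(n) - n / 2) * pitch
    X, Y = np.meshgrid(x, x)
    phi = np.mod(np.pi * (X ** 2 + Y ** 2) / (wavelength * focal), 2 * np.pi)
    nu_x = np.abs(X) / (wavelength * focal)          # local horizontal frequency
    period = 1.0 / np.maximum(nu_x * pitch, 1e-12)   # local period in pixels
    eps = np.interp(period, periods_px, depths)      # available depth per pixel
    return euclidean_projection(phi, eps)

# Hypothetical calibration: depth falls from ~2π at long periods to ~π at 2 px.
periods_px = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
depths = np.array([1.0, 1.5, 1.8, 1.9, 2.0, 2.0]) * np.pi
mask = corrected_lens_phase(480, 458e-9, 1.0, 40e-6, periods_px, depths)
```

Note that np.interp clamps periods outside the calibrated range to the end values, which corresponds to using the lowest-frequency (full-depth) curve near the lens centre.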
Experimental results

To prove the enhancement produced by the proposed technique, we generate diffractive lenses to be displayed onto the LCD. The efficiency of the lens is directly given by the modulation diffraction efficiency η_m in Eq. (2). We first compare this theoretical modulation diffraction efficiency with the experimental results for cylindrical lenses, and we demonstrate the differences between the horizontal and vertical directions. Although our LCD reaches almost 2π phase modulation for vertical frequencies, we generate lenses on the computer assuming different values of ε, in order to verify Eq. (2). In this way it is possible to experimentally reproduce the situation for modulators that have a smaller phase modulation depth.

To study this dependence we generate a series of different lenses as follows: first the ideal lens phase-only function modulo 2π is calculated. From this function we generate a series of copies. A different maximum phase value ε = 2π(1−c) is assigned to each copy, and each pixel phase value is encoded following the minimum Euclidean projection given by Eq. (1). Then, these numerical functions have to be displayed onto the LCD. For cylindrical lenses with axis along the horizontal direction, the phase modulation depth does not depend on the spatial frequency, and the phase-only functions are directly addressed to the display. For cylindrical lenses with axis along the vertical direction we compare two procedures. In the first one the horizontal frequency dependence of the modulation is not taken into account and we consider that the phase modulation curve is the one corresponding to the vertical or low horizontal frequencies. In the second procedure, the frequency dependence is taken into account and the algorithm described in Section 3.2 is applied before sending the phase-only function to the LCD.

Figure 6 shows the modulation efficiency for both vertical and horizontal cylindrical lenses. The continuous line shows the theoretical evolution of the modulation diffraction efficiency as a function of the mismatch parameter c given by Eq. (2). When c = 0 the maximum phase modulation depth is ε = 2π and therefore the modulation efficiency is perfect. As the value of c increases the modulation efficiency decreases. However, the use of the minimum Euclidean projection makes this reduction slow; for instance, η_m is still around 0.7 for c = 0.5 (maximum phase ε = π). The experimental modulation efficiency is calculated by measuring the intensity of the focused light. The experimental data are normalized to the value obtained for the ideal lens, i.e., the one reaching 2π maximum phase modulation. This is obtained for the horizontal cylindrical lens when the maximum phase range is available (c = 0). The good agreement between the theoretical curve given by Eq. (2) and the experimental data corresponding to horizontal lenses with different phase depths is noticeable (see Fig. 6, triangles). However, when the vertical lenses are displayed, the efficiency is much lower (Fig. 6, circles), because the phase modulation changes as a function of the frequency. The modulation efficiency of the vertical lenses is around 75% of that of the equivalent horizontal lens. However, when these vertical lenses are corrected according to the procedure described in the previous section, their efficiency is noticeably enhanced (Fig. 6, squares), approaching that of the horizontal lenses. This proves the enhancement that can be obtained with the proposed correction method.

Then we generate spherical diffractive lenses and repeat the procedure. Figure 7 shows the equivalent results. Again the lenses are generated with a limited maximum phase and then encoded following Eq. (1). Here we compare two situations: first we consider lenses with no correction of the anamorphic phase modulation (white circles), and secondly we apply the algorithm described in Section 3.2 with the correction of the anamorphic phase modulation (black circles). Again the correction results in an efficiency improvement of around 25% for values of c up to 0.5.
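As a quick numerical check of the theoretical curve in Fig. 6, the closed form of Eq. (2), as reconstructed above, reproduces the η_m ≈ 0.7 quoted for c = 0.5:

```python
import numpy as np

def eta_m(c):
    """Modulation diffraction efficiency of the Eq. (1) encoding for a
    uniformly distributed phase with maximum depth 2π(1 − c) (Eq. (2))."""
    return ((1.0 - c) + np.sin(np.pi * c) / np.pi) ** 2

for c in (0.0, 0.25, 0.5):
    print(f"c = {c:.2f}: eta_m = {eta_m(c):.3f}")  # c = 0.50 -> ~0.67
```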
Conclusions

In summary, we have analyzed the modulation diffraction efficiency of liquid crystal SLMs that present an anamorphic and spatial frequency dependent behavior of the phase modulation depth. We have shown experimental evidence that an LCD may produce this kind of phase modulation and we have studied how this situation affects the efficiency of a displayed phase-only diffractive optical element. We have used a twisted nematic LCD operating in a phase-mostly modulation regime. We have measured the phase modulation as a function of the spatial frequency, and we have shown that it strongly depends on its value for horizontal frequencies (columns/period). We have developed an encoding algorithm based on the minimum Euclidean projection to display the diffractive element with an optimal modulation diffraction efficiency. In principle, the algorithm is valid for DOEs exhibiting a phase variation slow enough that it makes sense to define a local spatial frequency at each point of the DOE. We have applied the algorithm to the case of Fresnel lenses, and experimental results have been included to verify the developed theory. We have displayed cylindrical lenses and found that different diffraction efficiencies are produced depending on their orientation. This asymmetry of the phase modulation with the frequency orientation produces a lack of rotational symmetry in spherical lenses displayed onto the LCD. We have demonstrated that the application of the proposed correction algorithm results in an important enhancement of the efficiency of the lens.

Fig. 1. Normalized intensity transmission for the phase-only modulation configuration of the twisted nematic LCD.
Fig. 4. Maximum phase modulation as a function of the horizontal period, measured in pixels.
Fig. 5. Curves relating the addressed gray level to the phase for the calibrated horizontal periods; for non-calibrated periods a linear interpolation between the nearest curves is applied.
Fig. 6. Modulation diffraction efficiency for cylindrical lenses as a function of the mismatch parameter c of a modulator with limited phase depth. The line shows the theoretical efficiency (Eq. (2)). H: horizontal lenses. V: vertical lenses. VC: corrected vertical lenses.
Fig. 7. Modulation diffraction efficiency of spherical lenses as a function of the mismatch parameter c. White dots correspond to lenses without correction and black dots to equivalent lenses with correction of the anamorphic phase modulation.
Inhibition of inflammation and oxidative stress by an imidazopyridine derivative X22 prevents heart injury from obesity

Abstract

Inflammation and oxidative stress play an important role in the development of obesity-related complications and cardiovascular disease. Benzimidazole and imidazopyridine compounds are a class of compounds with a variety of activities, including anti-inflammatory, antioxidant and anti-cancer effects. X22 is an imidazopyridine derivative we synthesized and evaluated previously for anti-inflammatory activity in lipopolysaccharide-stimulated macrophages. However, its ability to alleviate obesity-induced heart injury via its anti-inflammatory actions was unclear. This study was designed to evaluate the cardioprotective effects of X22 using cell culture studies and a high-fat diet rat model. We observed that palmitic acid treatment of cardiac-derived H9c2 cells induced a significant increase in reactive oxygen species, inflammation, apoptosis, fibrosis and hypertrophy. All of these changes were inhibited by treatment with X22. Furthermore, oral administration of X22 suppressed high-fat diet-induced oxidative stress, inflammation, apoptosis, hypertrophy and fibrosis in rat heart tissues and decreased serum lipid concentrations. We also found that the anti-inflammatory and anti-oxidative actions of X22 were associated with nuclear factor-kappaB (NF-κB) inhibition and Nrf2 activation, respectively, both in vitro and in vivo. The results of this study indicate that X22 may be a promising cardioprotective agent and that Nrf2 and NF-κB may be important therapeutic targets for obesity-related complications.

Introduction

In the past few decades, the prevalence of obesity has increased dramatically worldwide to become a global epidemic. Obesity has been linked to a dramatic escalation of nephropathy and cardiomyopathy and is also associated with elevated risks of cardiovascular and cerebrovascular disease, hypertension, sleep disorders and dyslipidemia [1,2]. Furthermore, increasing evidence shows that obesity is associated with structural and functional changes in the heart in both humans and animal models [3-5]. Early postmortem investigations and subsequent autopsy findings confirmed increased heart weight and left and right ventricular hypertrophy in proportion to the degree of obesity [6-8]. Myocardial changes associated with obesity are becoming increasingly recognized as obesity cardiomyopathy, a condition independent of diabetes, hypertension, coronary artery disease and other etiologies. The most important mechanisms in the development of obesity cardiomyopathy include metabolic disturbances, activation of the renin-angiotensin-aldosterone and sympathetic nervous systems, myocardial remodelling and small-vessel disease [6]. Obesity has also become increasingly characterized as an inflammatory state, as chronic low-grade inflammation and oxidative stress play important roles in the pathogenesis of obesity-related complications [9-13]. It is well known that circulating free fatty acids (FFAs) associated with obesity, including palmitic acid (PA), can cause chronic inflammation, insulin resistance and cardiovascular disease.
Free fatty acids can also increase the expression of pro-inflammatory cytokines and induce cellular oxidative stress, and it has been demonstrated both in vitro and in vivo that FFAs can activate the nuclear factor-kappaB (NF-κB) pathway, subsequently increasing the expression of several pro-inflammatory cytokines such as tumour necrosis factor (TNF)-α, interleukin (IL)-6 and IL-1β [1,2]. In both human and animal models, this low-grade inflammation, combined with oxidative stress in various organs such as the heart, can manifest itself as hypertrophy, apoptosis and fibrosis [8,14,15].

Currently, treatment options for obesity are limited primarily to diet, exercise and lifestyle modifications, all of which have high failure rates. Few obesity drugs exist, and those that do are not very effective [16]. However, as more studies confirm the role of inflammation and oxidative stress in the development and progression of obesity-related complications, molecules with anti-inflammatory and antioxidant properties may increase the efficacy of current treatment protocols for obesity-, FFA- and high-fat diet (HFD)-induced injury.

Previous studies have shown that imidazopyridines possess anti-inflammatory properties. Ashwell et al. reported the discovery and optimization of a series of imidazopyridines as effective inhibitors of protein kinase B, which functions as a key signalling node in cell proliferation, survival and the inflammatory stress response [17-19]. Other studies have also described imidazopyridines as potential antioxidant and anti-cancer agents [20,21]. Building on this previous evidence, our group synthesized a series of new imidazopyridine derivatives and screened them for anti-inflammatory activities. Of the 23 imidazopyridine derivatives tested, X22 was among the few that showed significant potential, inhibiting lipopolysaccharide (LPS)-induced TNF-α and IL-6 production in macrophages [17], and because of these results, X22 was targeted for further analysis (Fig. 1A).

While preliminary studies by our group had found that X22 can inhibit the LPS-induced inflammatory response in macrophages, whether it can protect against FFA-induced inflammation and oxidative stress was unclear. Therefore, to validate our ideas, we explored the effects of X22 in vitro in rat heart H9c2 cells. Furthermore, we explored whether X22's in vitro results translate in vivo, using an HFD rat obesity model to investigate whether X22 can inhibit FFA-induced myocardial injury, including cardiomyocyte hypertrophy, fibrosis and apoptosis.

Chemicals and reagents

Palmitate (PA) was purchased from Sigma-Aldrich (St. Louis, MO, USA). Stock solutions of 5 mM PA/10% bovine serum albumin (BSA) were prepared and stored at 4°C. Stock solutions were heated for 15 min. at 55°C and then cooled to room temperature prior to use. The PA/BSA solution was diluted to a PA concentration of 500 μM for the cellular experiments. X22 was dissolved in dimethyl sulfoxide (DMSO) for in vitro experiments and in carboxymethylcellulose sodium (CMC-Na; 0.5%) for in vivo experiments.
Antibodies used in the experiments were purchased from the following suppliers: Nrf2, Bcl-2-like protein 4 (Bax), B-cell lymphoma 2 (Bcl-2), NF-κB p65, inhibitor of κB (IκB), CD68, cleaved poly(ADP-ribose) polymerase (PARP), A-type natriuretic peptide (ANP), transforming growth factor (TGF)-β and collagen IV from Santa Cruz Biotechnology (Santa Cruz, CA, USA); TNF-α from Abcam (Cambridge, MA, USA); anti-cleaved caspase-3 and 3-NT from Cell Signaling Technology (Danvers, MA, USA); and horseradish peroxidase-conjugated anti-rabbit secondary antibodies from Santa Cruz. Enhanced chemiluminescence (ECL) reagent and a fluorescein isothiocyanate (FITC) annexin V apoptosis detection kit were obtained from Beyotime (Beijing, China).

Cell culture and treatment

All cellular studies were conducted with H9c2 rat heart-derived embryonic myocytes (CRL-1446; American Type Culture Collection, Manassas, VA, USA) cultured in DMEM/F12 supplemented with 10% (v/v) foetal bovine serum, 100 U/ml penicillin G, 100 mg/ml streptomycin and 2 mM L-glutamine. Cells were incubated at 37°C with 5% CO2 and 95% air. For all experiments, cells were plated in six-well plates or 35-mm culture dishes at 5.0 × 10^4 cells/cm². For cell stimulation, after the H9c2 cells had adhered for 12 hrs, X22 was added 1 hr prior to PA. Then, the old medium was removed and replaced with new medium, and the cells were stimulated with 500 μM PA. Cells were incubated for the indicated time points and then harvested for biochemical or molecular assays. All experiments were repeated at least three times to demonstrate their reproducibility.

Determination of intracellular ROS

Dihydroethidium (DHE) and 2,7-dichlorodihydrofluorescein diacetate (DCFH-DA) assay: the presence of free radicals in the H9c2 cells after PA stimulation was determined using DHE or DCFH-DA assay kits, respectively (Beyotime, Nanjing, China). After the H9c2 cells had adhered for 12 hrs, X22 was added 1 hr prior to PA in all experiments. Immediately after 500 μM PA stimulation, cells were washed with PBS, incubated in fresh culture medium containing 2 μM DHE or 2 μM DCFH-DA for 30 min. at 37°C and washed three times. Fluorescence intensity was measured with a fluorescence microscope, with excitation wavelengths of 535 nm or 488 nm. We then collected cells for flow cytometry using a BD FACSCalibur (BD Biosciences, San Jose, CA, USA) and Cell Quest software.

GSH/GSSG assay

The cells were stimulated with 0.5 mg/ml LPS for 8 hrs or 500 μM PA for 10 hrs. After treatment, cells were lysed and 30 μl of the collected proteins were used for reduced glutathione (GSH)/oxidized glutathione (GSSG) determination using a commercial GSH/GSSG assay kit (Beyotime Biotech, Nantong, China) according to the manufacturer's instructions.

Immunofluorescence assay for NF-κB p65 and TGF-β

Immediately after stimulation, cells were fixed with 4% paraformaldehyde and permeabilized with 100% methanol at −20°C for 5 min. After fixation and permeabilization, cells were washed twice with PBS containing 1% BSA and then incubated with primary antibodies against transcription factor p65 or TGF-β (Santa Cruz Biotechnology) overnight at 4°C, followed by FITC- or phycoerythrin (PE)-conjugated secondary antibody (Santa Cruz Biotechnology). Then, the cells were counterstained with 4′,6-diamidino-2-phenylindole (DAPI). The stained cells were viewed under a fluorescence microscope (200× magnification; Nikon, Tokyo, Japan).
Preparation of nuclear extracts

Nuclear protein extraction from H9c2 cells was performed using a nuclear protein extraction kit (Beyotime Biotech) according to the manufacturer's instructions. The protein concentration was determined using Bio-Rad protein assay reagent (Hercules, CA, USA). The nuclear extract (15 μg protein) was used for Western immunoblot analysis.

Morphological analysis and rhodamine-phalloidin staining

Immediately after stimulation, H9c2 cells were fixed with 4% paraformaldehyde and phase micrographs were taken using a light microscope (400× magnification; Nikon). Then, the cells were permeabilized with 0.1% Triton X-100, stained with rhodamine-phalloidin at a concentration of 50 μg/ml for 30 min. at room temperature, washed with PBS and visualized by fluorescence microscopy (400× magnification; Nikon).

Detection and quantification of cell apoptosis

Living, apoptotic and necrotic cells were detected and quantified by fluorescence microscopy using Hoechst staining (Beyotime, Nanjing). After treatment, H9c2 cells were harvested, washed twice with ice-cold PBS, and evaluated for apoptosis by double staining with FITC-conjugated annexin V and propidium iodide (PI) in binding buffer for 30 min. using a FACSCalibur flow cytometer (BD Biosciences).

Animals and treatment

The animals were obtained from the Animal Center of Wenzhou Medical University. All animal care and experimental procedures complied with the 'Ordinance in Experimental Animal Management' (Order No. 1998-02, Ministry of Science and Technology, China) and were approved by the Wenzhou Medical College Animal Policy and Welfare Committee (Approval Document No. wydw2014-0105). Twenty-one male Wistar rats (360-370 g) were randomly divided into three weight-matched groups. Seven rats were fed a low-fat diet (cat. #MD12031; MediScience Diets Co. Ltd, Yangzhou, China) containing 10 kcal% fat, 20 kcal% protein and 70 kcal% carbohydrate for 12 weeks and served as the normal control group (Ctrl). The remaining 14 rats were fed an HFD (cat. #MD12033; MediScience Diets Co. Ltd) containing 60 kcal% fat, 20 kcal% protein and 20 kcal% carbohydrate for 12 weeks. After 8 weeks of feeding, the HFD-fed rats were further divided into two groups: an HFD group (n = 7) and an HFD plus X22-treated group (n = 7). X22 was given daily by oral gavage at a dose of 20 mg/kg in 0.5% CMC-Na solution for 4 weeks. Rats in the Ctrl and HFD groups were gavaged with vehicle only. All the animals were provided with food and water ad libitum. During the experiment, bodyweight and blood glucose were monitored once every week. At the end of the experimental period, all the animals were killed by cervical decapitation. The bodyweight was recorded and blood samples were collected and centrifuged at 4°C for 10 min. to collect serum. The heart was excised aseptically, blotted dry and weighed, followed by immediate freezing in liquid nitrogen and storage at −80°C before further analysis.

Histological and histochemical analyses

Excised heart tissue specimens were fixed in 4% formalin, processed in graded alcohol and xylene, and then embedded in paraffin. Paraffin blocks were sliced into sections 5 μm in thickness. After rehydration, the sections were stained with haematoxylin and eosin, Masson's trichrome or sirius red, respectively. Images of the sections were captured using a light microscope (400× magnification per image; Nikon).
Immunohistochemical determination

The paraffin sections (5 μm) were dewaxed with xylene, rehydrated in a graded alcohol series, subjected to antigen retrieval in 0.01 mol/l citrate buffer (pH 6.0) using a microwave, and then placed in 3% hydrogen peroxide in methanol for 30 min. at room temperature. After blocking with 5% BSA, the sections were incubated with anti-3-nitrotyrosine (3-NT) antibody (1:500), anti-TNF-α antibody (1:500) or anti-cluster of differentiation 68 (CD68) antibody (1:200) overnight at 4°C, followed by the secondary antibody (1:200; Santa Cruz Biotechnology). The reaction was visualized with 3,3′-diaminobenzidine solution. After counterstaining with haematoxylin, the sections were dehydrated and viewed under a microscope (400× magnification; Nikon).

Measurement of serum lipid levels

The components of serum lipid, including total triglyceride (TG), total cholesterol (TCH) and low-density lipoprotein (LDL), were measured using commercial kits (Nanjing Jiancheng Bioengineering Institute, Jiangsu, China).

Western blot assay

Tissues (30-50 mg) or cells were lysed, and protein concentrations were determined using the Bradford protein assay kit (Bio-Rad). Aliquots (about 100 μg cellular protein) were subjected to electrophoresis and transferred to nitrocellulose membranes, which were then blocked in Tris-buffered saline containing 0.05% Tween 20 and 5% non-fat milk. The membrane was then incubated overnight with specific antibodies. Following incubation with appropriate secondary antibodies, immunoreactive proteins were visualized with ECL reagent (Bio-Rad) and quantitated by densitometry. Stripped membranes were reprobed with antibodies against glyceraldehyde 3-phosphate dehydrogenase to assess protein loading. The amounts of the proteins were analysed using ImageJ analysis software version 1.38e and normalized to their respective controls.

Statistical analysis

All in vitro experiments were assayed in triplicate. Data are expressed as mean ± S.E.M. All statistical analyses were performed using GraphPad Prism 5.0 (GraphPad, San Diego, CA, USA). Student's t-test and two-way ANOVA were employed to analyse the differences between sets of data. A P-value < 0.05 was considered significant.

X22 mitigated the NF-κB-mediated inflammatory response in PA-induced H9c2 cells

The chemical structure of X22 is shown in Figure 1A. To determine whether X22 exhibits anti-inflammatory activity in FFA-stimulated cells, an immunofluorescence assay was conducted to detect the expression and distribution of NF-κB p65 in H9c2 cells. Our results showed that in PA-treated cells, NF-κB p65 accumulated in the nuclei (Fig. 1B). Further protein analysis confirmed this translocation of p65 from the cytoplasm to the nucleus, showing that in PA-treated cells there was a decrease of p65 in the cytoplasm coupled with an increase of p65 in the nucleus (Fig. 1C). Since the degradation of the inhibitor of NF-κB (IκB-α) is an important component in mediating the activation of the NF-κB pathway, we assayed IκB-α levels. While PA induced IκB-α protein degradation, pre-treatment with X22 restored the protein level of IκB-α back to normal media levels (Fig. 1D). Building on these results, we examined the mRNA expression of NF-κB-activated inflammatory cytokines, such as TNF-α, IL-1β and IL-6, and the cell adhesion molecules ICAM-1 and VCAM-1.
As shown in Figure 1E and F, we found that treatment with X22 significantly inhibited the PA-induced mRNA expression of both the inflammatory cytokines TNF-α, IL-1β and IL-6 and the adhesion molecules ICAM-1 and VCAM-1.

X22 prevents PA-induced ROS production and oxidative stress in H9c2 cells

Previous studies have shown that elevated FFA levels can lead to increased oxidative stress in cardiovascular tissues [22]. It has also been shown that at a concentration of 500 μM, PA can stimulate ROS production and increase oxidative stress in H9c2 cells [1]. We wanted to examine whether X22 could prevent PA-induced ROS production and oxidative stress in H9c2 cells. Using DHE (for O2−) and DCFH-DA (for H2O2) probes, we were able to show that ROS production was significantly increased following PA treatment (500 μM for 10 hrs). However, pre-treatment with X22 at 20 μM for 1 hr significantly decreased ROS production back to control culture levels (Fig. 2A). The mean fluorescence intensity values also showed that X22 significantly reduced the PA-induced increase in ROS-positive cells (Fig. 2B). In addition, we used the GSH/GSSG assay to confirm the antioxidant activity of X22. The result in Figure 2C showed that X22 reversed the PA-decreased GSH/GSSG ratio in H9c2 cells. We also wanted to confirm whether X22's observed antioxidant properties act through modulation of the antioxidant factor Nrf2. Using fluorescent staining, we observed that H9c2 cells treated with PA and X22 showed an increase in the nuclear translocation of Nrf2, indicating that X22 could promote Nrf2 transcriptional activity (Fig. 2D). Furthermore, when we examined the downstream target genes of Nrf2, including HO-1, GCLC and GCLM, we observed that X22 significantly induced the mRNA expression of these antioxidant genes (Fig. 2E).

X22 attenuates PA-induced cardiac hypertrophy and fibrosis in H9c2 cells

The effect of X22 on cardiac cell hypertrophy was examined using rhodamine-phalloidin staining. Following the literature [1], the H9c2 cells were pretreated with X22 at 20 μM for 1 hr and then incubated with PA at 500 μM for 6 hrs. As shown in Figure 3A, X22 significantly suppressed PA-induced hypertrophy in H9c2 cells. X22 also significantly inhibited the PA-induced mRNA expression of ANP and BNP (Fig. 3B). The effect of X22 on fibrosis was determined using TGF-β staining, which revealed that X22 also significantly reduced the PA-induced increase in TGF-β (Fig. 3C). Further, RT-qPCR and Western blot analyses showed that X22 suppressed the PA-induced mRNA expression of TGF-β, CTGF and collagen I (Fig. 3D), as well as the protein expression of collagen IV and TGF-β (Fig. 3E).

X22 attenuates PA-induced apoptosis in H9c2 cells

The numbers of apoptotic cells were significantly higher in PA-treated cells (Fig. 4B). However, in cells pre-treated with X22, cell morphological changes were less pronounced and the number of apoptotic cells was decreased, as evidenced by both Hoechst staining and flow cytometry (Fig. 4A and B). Furthermore, when we examined the levels of key proteins involved in the apoptotic pathway, such as B-cell lymphoma (Bcl)-2, Bcl-2-associated X protein (Bax) and PARP, we observed significant PA-induced increases in the protein levels of the pro-apoptotic proteins Bax and cleaved PARP and decreased levels of Bcl-2 (Fig. 4C). Treatment with X22 reversed these changes, suppressing the PA-induced increases in the protein expression of Bax and cleaved PARP and inducing the protein expression of Bcl-2 (Fig. 4C).
In addition, X22-alone treatment did not affect NF-κB p65 nuclear translocation, ROS level, SOD activity, hypertrophy, or collagen IV and cleaved PARP expression in H9c2 cells (Fig. S1), indicating no difference between the Ctrl group and the X22-alone group.

Administration of X22 attenuated HFD-induced changes in the lipid profiles of rats

We used an HFD-fed rat model to investigate whether X22 exhibits cardioprotective effects in vivo in obesity. Rats on an HFD for 8 weeks were subsequently treated with X22 at a dosage of 20 mg/kg/day or vehicle control for 4 weeks. Rats fed a normal diet (ND) were used as the control group. Bodyweight and blood glucose levels were monitored every week, and at the end of the experiment blood samples were collected and serum was analysed for levels of TG, TCH and LDL. HFD-fed rats became slightly obese, with a bodyweight above 550 g on average at the 12-week time-point, while treatment with X22 markedly reduced the bodyweight gain in HFD-fed rats (P < 0.05 versus HFD group, Fig. 5A). Neither HFD-fed rats nor X22-treated HFD rats exhibited significant changes in the level of blood glucose when compared with the ND group (Fig. 5B). Compared with the ND-fed rats, the HFD-fed rats displayed significantly elevated serum levels of TG, TCH and LDL (Fig. 5C-E). In contrast, treatment with X22 significantly inhibited the HFD-induced increases of TG, TCH and LDL. These data indicate that X22 also possesses anti-obesity effects in HFD-fed rats.

X22 attenuated HFD-induced inflammation in the myocardial tissues of HFD-fed rats

The inflammatory and oxidative indexes were determined in the myocardial tissues of HFD-fed rats. Immunohistochemical staining for TNF-α in formalin-fixed myocardial tissues of HFD-fed rats showed a significant increase in the accumulation of TNF-α (Fig. 6A). This result was further supported by RT-qPCR analysis, which revealed a significant increase in the mRNA expression of the inflammatory markers TNF-α, IL-6 and IL-1β (Fig. 6B). In contrast, treatment with X22 normalized the TNF-α protein level (Fig. 6A) and significantly inhibited the HFD-induced inflammatory cytokine expression (Fig. 6B). In addition, X22's anti-inflammatory properties were also confirmed by determining adhesion molecule expression and by immunohistochemical staining for CD68, a marker of infiltrated macrophages. CD68 staining revealed a significant accumulation of CD68 in the myocardial tissues of HFD-fed rats that was normalized following treatment with X22 (Fig. 6C). Furthermore, while HFD induced marked increases in the mRNA expression of the adhesion markers VCAM-1 and ICAM-1 in myocardial tissues, X22 administration significantly inhibited the HFD-induced expression of both adhesion markers (Fig. 6D). We also assessed whether X22's effect on the expression of inflammatory cytokines was due to changes in the NF-κB pathway. As shown in Figure 6E, while protein levels of IκB-α were significantly decreased in the HFD group, indicating NF-κB activation, treatment with X22 restored IκB-α to ND levels.

X22 attenuated HFD-induced oxidative stress in the myocardial tissues of HFD-fed rats

We then examined the effects of X22 on HFD-induced oxidative stress in rat myocardial tissues. 3-NT was used as a biomarker for the formation of ROS and reactive nitrogen species (RNS). Staining for the accumulation of 3-NT in HFD-fed rats revealed that HFD led to a significant increase of 3-NT, indicative of increased ROS/RNS accumulation, which was normalized by treatment with X22 (Fig. 7A).
Data from Figure 7B and C also showed that Nrf2 mRNA and protein expression was significantly decreased in the myocardial tissue of HFD-fed rats. Treatment with X22 induced a significant increase in the mRNA and protein expression of Nrf2 (Fig. 7B and C). A fluorescent staining assay also showed that X22-treated rat hearts had an increased nuclear distribution of Nrf2 when compared with the HFD-alone group, indicating a transcriptional activation of Nrf2 by X22 (Fig. 7D). Furthermore, we also examined the downstream target genes of Nrf2, including HO-1, GCLC and GCLM, by real-time qPCR assay. As expected, X22 significantly increased the mRNA levels of these antioxidant genes in HFD-fed rat hearts (Fig. 7E-G; *, versus HFD group; #, versus Ctrl group; * and # P < 0.05; ** and ## P < 0.01; n = 7 per group).

X22 administration improved histological abnormalities, hypertrophy, fibrosis and apoptosis in the myocardial tissues of HFD-fed rats

To further investigate the in vivo cardioprotective effects of X22, we examined its effects on the morphology of the heart. Haematoxylin and eosin staining showed that the hearts of HFD-fed rats displayed structural abnormalities, including broken fibres and irregular cellular structures, and a significantly increased cardiomyocyte transverse cross-sectional area, while those of HFD-fed rats treated with X22 did not (Fig. 8A). Furthermore, in the cardiac tissues of HFD-fed rats, cardiac hypertrophy was characterized by an increased cell surface area, increased mRNA expression of the cardiac hypertrophic markers ANP and BNP (Fig. 8B), and increased protein expression of ANP (Fig. 8C). However, as shown in Figure 8A-C, X22 treatment had a protective effect on HFD-induced cardiac remodelling and also significantly inhibited the mRNA expression of ANP and BNP, suggesting that X22 prevents the development of cardiac hypertrophy in HFD-fed rats. Further staining with Masson's trichrome and sirius red demonstrated the anti-fibrotic properties of X22 in vivo. While Masson's and sirius red staining revealed a significant increase in collagen accumulation and fibrosis in the hearts of HFD-fed rats, treatment with X22 markedly reduced the degree of collagen deposition and fibrosis (Fig. 8D). These observations were further confirmed by RT-qPCR analysis, which revealed increases in the mRNA expression of type 1 collagen (Fig. 8E) and in the protein expression of the cardiac fibrosis marker TGF-β in HFD-fed rats (Fig. 8F); these changes were significantly blocked by administration of X22. Western blot analysis of key apoptotic proteins showed that HFD-fed rats had decreased levels of Bcl-2, a key regulator of apoptosis, and increased levels of both cleaved PARP and cleaved caspase-3 (Fig. 8G). In contrast, HFD-fed rats treated with X22 showed significantly increased protein levels of Bcl-2 and reduced levels of cleaved PARP and cleaved caspase-3, suggesting that X22 has anti-apoptotic properties.

Discussion

Recent studies have implicated chronic inflammation and oxidative stress in the pathophysiology of obesity-related cardiovascular disorders [23-26].
Increased and uncontrolled production of inflammatory cytokines and reactive oxygen species due to hyperlipidemia impairs regular cellular function and causes cell apoptosis in an array of tissues, including the heart [27]. Therefore, given the potential roles that inflammation and oxidative stress play in cardiovascular disorders, molecules with anti-inflammatory and antioxidant properties may be targets to enhance the efficacy of therapeutic options for obesity- and HFD-induced cardiovascular disorders. A number of studies have demonstrated that imidazopyridines have a wide variety of pharmacological activities, such as anti-inflammatory [17,28], antioxidant [20], antiviral [29] and anticancer [21,22]. In a previous study, our group synthesized imidazopyridine derivatives and evaluated their anti-inflammatory activity. The results of the present study show that the imidazopyridine derivative X22 inhibits the PA/HFD-induced inflammatory response, oxidative stress and apoptosis in vitro in rat H9c2 cells and in vivo in HFD-fed rats. At the same time, X22 treatment significantly decreased serum TG, TCH and LDL levels and also improved histological abnormalities, hypertrophy and fibrosis in the myocardial tissues of HFD-fed rats, suggesting that X22 treatment has a protective effect against HFD-induced cardiac remodelling and injury.

Hyperlipidemia is defined as a condition with elevated levels of cholesterol, triglycerides, LDL and FFAs, which have been shown to increase the risk of heart disease, stroke and other health problems. In obesity, FFA levels are usually elevated, and prolonged, chronic elevation can result in many pathophysiological consequences. Therefore, FFAs play an important role both in the development of obesity-related complications and in atherosclerotic vascular diseases [30]. Palmitic acid is a major saturated FFA in the plasma that stimulates inflammatory cytokine expression and ROS production both in cultured aortic smooth muscle cells and in endothelial cells [31]. In this study, we also observed a significant increase in the production of inflammatory cytokines, ROS and oxidative stress in H9c2 cells treated with PA (Figs 1 and 2). Furthermore, PA has also been shown to affect vascular functions [32], and we also observed PA-induced cardiac hypertrophy and fibrosis in H9c2 cells (Fig. 3).

The relationship between obesity and inflammation is well established. Obesity is now often associated with a state of chronic, low-grade inflammation, suggesting that inflammation may serve as a potential mechanism leading to obesity-related cardiovascular complications [33,34]. Elevated FFA levels have also been linked to increased production of pro-inflammatory cytokines [1, 23-26, 34, 35], and in macrophages FFAs have been found to trigger inflammatory responses via toll-like receptor 4. The NF-κB pathway is critical in the regulation of inflammatory responses, with many of its downstream targets being inflammatory cytokines, chemokines, cell adhesion molecules, stress response genes and regulators of apoptosis. Our results demonstrated that PA/HFD decreased the IκB-α level and increased p65 translocation and NF-κB activity in vitro in H9c2 cardiac cells (Fig. 1) and in vivo in the myocardial tissue of rats (Fig. 6), resulting in increased expression of the pro-inflammatory cytokines TNF-α, IL-6 and IL-1β and the cell adhesion molecules VCAM-1 and ICAM-1.
In contrast, X22 significantly inhibited NF-κB activation, attenuating the PA/HFD-induced expression of inflammatory cytokines and cell adhesion molecules. These results indicate that X22 inhibits PA/HFD-induced inflammation via upregulation of IκB-α and subsequent inactivation of NF-κB. While we had previously discovered that X22 has anti-inflammatory properties, it was through this study that we explored its potential antioxidant properties. Indeed, the connection between inflammation and oxidative stress in obesity has been well documented. Adipocytes and pre-adipocytes have been identified as sources of pro-inflammatory cytokine production, including TNF-α, IL-1β and IL-6, and these cytokines are also potent stimulators of the production of reactive oxygen and nitrogen species by macrophages and monocytes [23]. Therefore, the increased presence of excessive adipose tissue, which leads to a rise in the production of inflammatory cytokines, may be responsible for elevated levels of ROS and subsequent oxidative stress. Furthermore, many genes involved in oxidative stress have been confirmed to be either directly or indirectly regulated by Nrf2 [1]. Under normal conditions, Nrf2 is found within the cytoplasm, but under stressed conditions, activated Nrf2 travels to the nucleus, where it binds to the promoter regions of anti-oxidative genes, initiating the expression of those genes and the corresponding proteins [36]. Others have also demonstrated that Nrf2 inhibits the oxidative stress induced in the liver by FFA accumulation in HFD-fed mice [37], plays a role in pulmonary protection [38] and is critical in the defence against high glucose-induced oxidative damage in cardiomyocytes [1,39]. The results of this study confirmed these findings. As shown in Figures 2 and 7, both HFD and PA suppressed Nrf2 expression and nuclear translocation. Furthermore, we found that X22 could reverse these changes, leading to increased expression of Nrf2 and Nrf2-downstream genes in HFD-fed rats and increased translocation of Nrf2 into the nucleus of PA-treated H9c2 cells. These results confirmed X22's antioxidant properties and suggest that the observed protective effects of X22 against oxidative stress-induced cardiac injury could potentially be due to its regulation and activation of Nrf2. Oxidative stress and inflammation in cells are strongly associated with cell apoptosis [40,41]. As a result of PA/HFD-induced oxidative stress and inflammation, we showed that cardiac cells undergo apoptosis (Figs 4 and 8G). We also investigated the protective effects of X22 against PA/HFD-induced changes in cell morphology and apoptosis, and our findings show that X22 inhibited PA/HFD-induced cardiomyocyte apoptosis. In HFD-fed rats, X22 treatment resulted in decreased levels of cleaved PARP and cleaved caspase-3 and increased Bcl-2 levels (Fig. 8G). From these results, we find it reasonable to conjecture that X22 attenuates PA/HFD-induced apoptosis by inhibiting the NF-κB pathway and alleviating oxidative stress. Oxidative stress and inflammation have also been implicated in the development and progression of cardiac hypertrophy [42,43]. Increased expression of proteins such as ANP and BNP and increased cell surface area are often seen as important molecular markers of cardiac hypertrophy.
Our results showed that while PA and HFD led to a significant increase in the gene expression of ANP and BNP both in vitro and in vivo, and to an increase in cell surface area in vivo (Figs 3 and 8), X22 was able to attenuate the PA/HFD-induced cardiac hypertrophy. Another key feature of cardiac remodelling is fibrosis, which is characterized by the expansion of the extracellular matrix as a result of collagen accumulation. Inflammation and oxidative stress also contribute to cardiac fibrosis [44]. Oxidative stress was found to either directly or indirectly affect the progression of cardiac fibrosis through the activation of TGF-β [45]. As expected, we observed that while PA and HFD resulted in increased expression of pro-fibrotic markers, such as collagen I, TGF-β and CTGF (in vitro), X22 significantly inhibited this increased expression and produced visible changes in the cardiac tissues of HFD-fed rats, demonstrating the anti-fibrotic activity of X22 in cardiac tissue. The oxidative stress and inflammatory pathways in obesity-related cardiomyopathy are closely interrelated; they interact and crosslink throughout the complicated process of obesity-related cardiomyopathy. Thus, independently blocking either pathway may not be effective for the treatment of this disease. Our results further suggest the therapeutic potential of treating obesity-related cardiovascular complications by attenuating both the initial oxidative stress and the inflammation induced by hyperlipidemia. Agents such as X22, with both anti-oxidant and anti-inflammatory properties, may attract more attention for the treatment of this disease. In addition, it should be noted that treatment with X22 resulted in decreased bodyweight gain and decreased serum TG, TCH and LDL levels in HFD-fed rats (Fig. 5A), indicating that the obesity-lowering effects of X22 are also significant. Thus, the in vivo cardioprotective effects of X22 may also result, at least in part, from its obesity-lowering action. From the animal outcomes, we cannot differentiate between the direct effects of X22 as an anti-inflammatory and anti-oxidative agent against hyperlipidemia-induced cardiac injury and the secondary effects of X22 as an anti-obesity agent re-establishing a healthy lipid profile. Although the in vitro data confirmed the anti-oxidant and anti-inflammatory properties of X22, the combined merits of X22 may underlie its cardioprotective effects and make X22 more valuable. A further question is whether the observed anti-inflammatory and antioxidant effects of X22 are related to the decreased lipid levels. In conclusion, the findings of this study confirm the cardioprotective role of X22 against PA- and HFD-induced inflammation, oxidative stress, hypertrophy and fibrosis both in vivo and in vitro. Also, Figure S2 showed that X22 is highly effective in protecting H9c2 cells from LPS-induced injuries, including NF-κB activation, oxidative stress, hypertrophy, collagen-4 overexpression and PARP activation. Together, these data indicate that the compound X22 is indeed protective via its anti-inflammatory actions in cardiomyocytes. Thus, imidazopyridine derivatives such as X22 could be promising therapeutic options for the treatment of obesity-related cardiac complications.
While these results provide a deeper understanding of the roles Nrf2 and NF-κB play in hyperlipidemia-induced cardiac injury and support targeting the Nrf2 and NF-κB pathways in the treatment of obesity-related complications, more information is needed to clarify the mechanism behind the cardioprotective effects of X22. In addition, it is very interesting that X22 could reduce obesity in HFD-fed mice. Our future work on X22 will also include drug development and a mechanistic investigation of X22 as an anti-obesity or hypolipidemic candidate.
Supporting information
Additional Supporting Information may be found in the online version of this article: Figure S1 X22 treatment alone does not induce any changes of phenotype in H9c2 cells.
7,721
2016-03-28T00:00:00.000
[ "Chemistry", "Environmental Science", "Medicine" ]
The influence of some model parameters on the impurity distribution implanted into substrate surface
A model describing the initial stage of ion implantation into the surface layer of a metal is presented. The interdependence of the embedded impurity concentration and the deformations arising from the impact of particles on the surface is investigated. The model takes into account particle diffusion, the finite time of mass flux relaxation, the appearance of stresses due to a composition change of the surface layer, and mass transfer under the action of a stress gradient. It is established that the interaction of mechanical and concentration waves leads to a concentration distribution that does not correspond to a pure diffusion process. Examples of the solution of coupled problems for different sets of model parameters are presented.
Introduction
Research on the surface treatment of materials by beams of charged particles has been carried out by many authors [1-4]. The composition, the structure and, consequently, the surface properties of materials can be controlled by these methods. In spite of extensive experimental and theoretical data, the possibilities of the implantation method have not been fully exploited. This is connected with a lack of understanding of the physical processes in solids under irradiation [5] at short times. During implantation into the substrate surface, different phenomena occur simultaneously with the diffusion process: mechanical disturbances appear and propagate, structural and phase changes happen, and point defects are generated as a result of the impact of the particles on the surface and of the arising mechanical stresses. All these phenomena influence each other; therefore, the interrelation between the different processes must be considered during modeling [6]. In the literature, thermoelastic waves caused by the action of high-energy sources on the surface of materials attract most of the attention. It should be noted, however, that there are coupled models of ion implantation which take into account the processes associated with the difference in the properties of the base material and the introduced elements; in such models, diffusion and physico-chemical phenomena are rarely treated. In [7] it was shown that the interrelation between mechanical and diffusion waves leads to a distortion of the deformation (and stress) wave profile, and that the concentration distribution does not correspond to a pure diffusion process. The purpose of this work is the study of the initial stage of ion implantation.
The mathematical formulation
In the present paper, the model of [7] is used. This model is built under the assumption that the temperature is constant. It is assumed that the occurring stresses are elastic and the deformations are small. The model also takes into account the fact that the transfer coefficients can change due to changes in the activation volume of diffusion. In the dimensionless variables, the model under deformations takes the form of the coupled system (1)-(7).
Results and discussions
In the solutions of system (1)-(7), a sloping segment appears on the wave between the extremes, and it grows with time. This reflects the weak mutual influence of the waves at long times. By these times the pulse no longer acts on the substrate material, as can be seen from the trailing edges of the waves. A change of the sign of the strain corresponds to the depth of impurity penetration.
This phenomenon is related to the fact that the impurity is introduced at a smaller depth than that reached by the mechanical wave. Thus, the maximum range of the ions is the distance at which compressive deformation converts into tensile deformation (or vice versa, depending on the properties of the materials used). The variation of the model parameters therefore results in qualitative and quantitative changes of the impurity concentration profile, and these parameters must be calculated accurately in order to obtain results corresponding to experimental observations.
Conclusion
In this work, a mathematical model of the initial stage of ion implantation has been described. The concentration and strain distributions in the waves are strongly dependent on the relations among the parameters of the model.
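Since the governing equations (1)-(7) are not reproduced above, the following is only a schematic illustration, not the authors' model: a minimal 1D sketch coupling Cattaneo-type (finite-relaxation-time) diffusion, a stress contribution from the composition change, a stress-gradient term in the mass flux, and an elastic displacement wave. All parameter values and coupling coefficients (D, tau, cw, beta, Bs) are our own assumptions, chosen only to reproduce the qualitative interplay described in the text.

```python
import numpy as np

# Illustrative sketch (our assumptions, not the paper's system (1)-(7)):
# (i) hyperbolic (Cattaneo) diffusion with finite flux relaxation time tau,
# (ii) an elastic displacement wave forced by the concentration gradient,
# (iii) mass transfer driven by the stress gradient.

nx, L = 400, 1.0
dx = L / nx
x = np.linspace(0.0, L, nx)

D, tau = 1e-3, 5e-3           # diffusivity, flux relaxation time (dimensionless)
cw, beta, Bs = 1.0, 0.5, 0.1  # wave speed, concentration->stress, stress->flux couplings
dt = 0.4 * min(dx / cw, dx * dx / (2 * D))  # conservative explicit time step

c = np.zeros(nx)              # impurity concentration
J = np.zeros(nx)              # mass flux
u = np.zeros(nx)              # displacement
v = np.zeros(nx)              # velocity du/dt

def grad(f):
    g = np.zeros_like(f)
    g[1:-1] = (f[2:] - f[:-2]) / (2 * dx)
    return g

t, t_end, t_pulse = 0.0, 0.2, 0.02
while t < t_end:
    c[0] = 1.0 if t < t_pulse else c[1]       # surface source acts only during the pulse
    strain = grad(u)
    stress = strain - beta * c                # stress includes the composition term
    # Cattaneo flux: tau*dJ/dt + J = -D*dc/dx + Bs*dsigma/dx
    J += dt / tau * (-J - D * grad(c) + Bs * grad(stress))
    c[1:-1] -= dt * grad(J)[1:-1]             # mass conservation: dc/dt = -dJ/dx
    # elastic wave forced by the concentration gradient
    v[1:-1] += dt * (cw**2 * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
                     - beta * grad(c)[1:-1])
    u += dt * v
    t += dt

print("approximate penetration depth:", x[np.argmax(c < 1e-3 * c.max())])
```

Varying tau, beta and Bs in such a sketch reproduces the qualitative effect discussed above: the concentration profile departs from the pure-diffusion one wherever the mechanical and concentration waves interact.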
903
2016-04-01T00:00:00.000
[ "Engineering", "Physics", "Materials Science" ]
Polytropic Dynamical Systems with Time Singularity
In this paper we consider a class of second-order singular homogeneous differential equations, called equations of Lane-Emden type, with a time singularity in the drift coefficient. Lane-Emden equations are singular initial value problems that model phenomena in astrophysics such as stellar structure and are governed by polytropes, with applications to isothermal gas spheres. A hybrid method is proposed to approximate the solution of this type of dynamic equation.
Introduction
Laplace's equation and Poisson's equation are important examples of elliptic partial differential equations which are used broadly in applied mathematics and theoretical physics, see, e.g., [22]. For instance, Poisson's equation is used to calculate gravitational fields in potential theory and can be seen as a generalization of Laplace's equation. By removing or reducing dimensions from Poisson's equation, we obtain a second-order nonlinear differential equation called the Lane-Emden-type equation (LE, for short). The Lane-Emden equation (a.k.a. the polytropic dynamic equation) is one of the well-studied classical dynamical systems with many applications in nonlinear mathematical physics and non-Newtonian fluid mechanics (see, for instance, [2,3,6,10,11,21,26]). A preliminary study of the LE equations (polytropic and isothermal) was undertaken by the astrophysicists Lane (1870) and Emden (1907); the interest in the LE equation derives from its nonlinearity and singular behavior at the origin. The point x_0 is called an ordinary point (or regular point) of the dynamic equation (2) if the coefficients of x, x' are analytic in an interval about x_0; otherwise, it is called a singular point. In solving singular boundary value problems (BVPs), some numerical techniques are based on the idea of replacing a two-point BVP by two suitable initial value problems [14,21,25]. In this paper we adopt this idea (called the shooting method) to study dynamical models that play an essential role in the theory of star structure and evolution, thermodynamics, and astrophysics (see, e.g., [9]). Equation (1) describes and models the mechanical structure of a spherical body of gas, such as a self-gravitating star, and has also appeared in the study of stellar dynamics (see, for instance, [8,11] and the references therein). The solutions of the LE equation, which are known as polytropes, are functions of density versus the radius, expressed by x(t) in (2). The index n determines the order of the solution. Nonlinear singular LE equations can be formulated as

x''(t) + (2/t) x'(t) + f(x(t)) = 0,   (1)

or, with the polytropic nonlinearity f(x) = x^n,

x''(t) + (2/t) x'(t) + x^n(t) = 0,   (2)

subject to x(0) = 1, x'(0) = 0. The dynamical system model (2), along with the initial conditions, forms a special type of initial value problem (IVP) which has several applications in the fields of celestial mechanics, quantum physics and astrophysics [6,10,14,26]. The following figure is a motivating example showing finite solutions of the Lane-Emden equation for the values n = 0, 1, 2, 3, 4, 5, 6 in equation (1) or (2).
For the special cases n = 0, 1, 5, exact analytical solutions were obtained by Chandrasekhar [8], while for all other values of n approximate analytical methods have been developed, such as: the Adomian decomposition method [20,27], the homotopy analysis method [5], power series expansions [16], the variational method [13], and linearization techniques [23] (which provide accurate closed-form solutions around the singularity). Numerical discretization of equation (1) has been the object of several studies in the last decades (see, e.g., [1-3, 6, 10, 21, 25, 26] and the references therein). In [16], the authors presented a numerical method for solving singular IVPs by converting the Lane-Emden-type equation (1) to an integral operator form and then rewriting the resulting Volterra integral equation in terms of a power series. Ramos [23] applied a linearization method to the numerical solution of singular initial value problems for linear and nonlinear, homogeneous and nonhomogeneous second-order dynamic equations. Russell and Shampine [25] discussed the solution of the singular nonlinear BVP for certain dynamical systems in the context of analytical geometry and symmetry, with boundary conditions x'(0) = 0 (or, equivalently, x(0) finite) and x(b) = λ for some scalar λ; the convergence is uniform over the interval [0, 1]. Biles et al. [6] considered an initial value problem of Lane-Emden type of the form (4), where a ∈ R and p(t) may be singular at t = 0. They introduced the following definition and theorem, respectively, where the theorem gives conditions for the existence and uniqueness of solutions of second-order linear BVPs: x is a solution of equation (4) if and only if there exists some T > 0 such that x, x' are absolutely continuous on [0, T].
Our paper is organized in the following fashion. In Section 2, we provide some necessary notations and essential background. In Section 3 we present the second-order dynamical system of Lane-Emden type; the BVP is transformed into an IVP by the shooting method, and Euler's method is then applied to the resulting initial value problem to obtain approximations to the solution of the LE equation. The convergence results and error estimation are analyzed in Section 4. Finally, numerical examples are provided to demonstrate the validity and efficiency of the proposed technique.
Preliminaries
In this section we introduce some basic definitions and conventional notations. Let C^1(I) be the space of all continuously differentiable functions defined on an interval I. A set D in the Euclidean space R^n is compact if and only if it is closed and bounded. The basic space used throughout this paper is the space of continuous functions C[0, 1] on the compact set [0, 1], with the associated norm (distance) function defined by

∥x∥ = max_{0 ≤ t ≤ 1} |x(t)|.

In particular, all C^1 functions are locally Lipschitz. The following two theorems address the existence and uniqueness of solutions to an IVP; under their hypotheses, the IVP has a unique solution x = x(t). To better understand the theorem, we illustrate it with an example on the interval [1, 2] instead of [0, 1]: consider the BVP x''(t) + sin x' + e^{-tx} = 0 with x(1) = x(2) = 0 and t ∈ [1, 2]. Applying the theorem, the condition is satisfied and the BVP has a unique solution. The reader might ask how this theorem can be applied to the Lane-Emden equation. Theorem 2.2 can be simplified by taking into account that the functions (sin x')/x' and e^{-tx} are continuous on the interval (0, ∞), which ensures that the differential equation can be treated as linear.
Computational Methods for Dynamical Systems
In this section, we start by presenting the methods (shooting, to transform the BVP into an IVP, and Euler's method, for the regular singularity in the drift term) and apply them to the second-order singular dynamical system.
Shooting method
The shooting method treats the two-point BVP as an IVP. The idea, basically, is to write the BVP in vector form, begin the solution at one end of the BVP, and then "shoot" to the other end with any IVP solver, such as a Runge-Kutta or multistep method in the linear case and the secant method or Newton's method in the nonlinear case, until the boundary condition at the other end converges to its correct value. To be precise, the second-order ordinary differential equation, together with its initial conditions, must normally be written as a system of first-order equations before it can be solved by standard numerical methods. The next figure shows graphically the mechanism of the shooting method. Roughly speaking, we 'shoot' out trajectories in different directions until we find a trajectory that has the desired boundary value. The drawback of the method is that it is not as robust as methods designed for BVPs, such as the finite difference or collocation methods presented in [1,25], and there is no guarantee of convergence. The shooting method can be used widely for solving a BVP by reducing it to an associated IVP, and is valid for both linear BVPs (where it is also called the chasing method) and nonlinear BVPs [18]. The next theorem provides the existence and uniqueness of the BVP's solution.
Theorem 3.1. Consider the BVP

x''(t) = f(t, x(t), x'(t)), a ≤ t ≤ b, x(a) = α, x(b) = β,   (8)

and assume f is a continuous function on D = {(t, x, x') : a ≤ t ≤ b, -∞ < x, x' < ∞}. Suppose that f_x and f_{x'} are continuous on the same set D and that (i) f_x(t, x, x') > 0 on D, and (ii) there exists M > 0 such that |f_{x'}(t, x, x')| ≤ M on D; then the BVP (8) has a unique solution.
A special case of this theorem is the following corollary, i.e., when the right-hand side of (8) is linear. For linear Lane-Emden equations, one can use the Frobenius method to determine the analytical solutions of (1) near the singularity; see, for instance, [23].
Corollary 3.2. Consider (8) given by

x''(t) = p(t) x'(t) + q(t) x(t) + r(t),   (9)

where the time-dependent coefficients p(t), q(t), r(t) are continuous functions on the domain [a, b] and, further, q(t) > 0; then the BVP (8) has a unique solution.
Proof. We need to consider two cases: (i) equation (9) with the conditions x(a) = α, x'(a) = 0 has a unique solution x_1(t); (ii) equation (9) with r(t) = 0 and x(a) = 0, x'(a) = 1 has a unique solution x_2(t). Therefore, one can easily check that the linear combination

x(t) = x_1(t) + ((β - x_1(b)) / x_2(b)) x_2(t)

is the unique solution to (9), and hence to (8), due to the existence and uniqueness guaranteed by the Picard-Lindelof theorem (2.1).
Euler's Method
Euler's method is a numerical approach for solving initial value problems iteratively, as follows. We divide the time interval [t_0, T] into N equal subintervals, each of length h = Δt = t_{n+1} - t_n, for n ≥ 0, start with the initial value x(0), and then move forward using the step size towards x(T); that is, given the second-order ordinary differential equation (7), we convert it into two first-order dynamic equations (i.e., a dynamical system). Discretize the interval [t_0, T] into subintervals, and let y_n denote the approximation to x(t_n) and v_n the approximation to u(t_n), where u = x'. Euler's method can then be expanded, as a two-term truncated Taylor series, by the following algorithm for a second-order differential equation.
Forward Euler's Algorithm. Step 1.
(Forward step): Given t_n, y_n, v_n, define

y_{n+1} = y_n + h v_n,
v_{n+1} = v_n + h â(t_n, y_n, v_n).

The local error at every step is proportional to the square of the step size h, and the global error at a given time is proportional to h. Moreover, the order of the global error can be calculated from the order of the local error (i.e., by summing up the local errors). We can understand Euler's method by appealing to the idea that some differential equations provide us with the slope at all points of the function, while an initial value provides a point on the function. Using this information we can approximate the function by a tangent line at the initial point. It is known that the tangent line is only a good approximation over a small interval. When moving to a new point, we can construct an approximate tangent line, using the actual slope of the function and an approximation to the value of the function at the tangency point. Repeating this procedure, we eventually construct a piecewise-linear approximation to the solution of the differential equation. Moreover, this approximation can be seen as a discrete function; to make it a continuous function, we interpolate (linearly) between each pair of these points.
In the following, we study and analyse the Lane-Emden-type equation with an endpoint singularity in the independent variable, which has the form

x''(t) = â(t, x(t), x'(t)), 0 ≤ t < 1,   (10)

where â(t, x(t), x'(t)) : [0, 1) × R × R → R and the Lipschitz functions a(t, x), g(t, x) ∈ C^1([0, 1) × R) for all 0 ≤ t < 1. At t = 1, the -a(t, x)/(1 - t) term is singular, but symmetry implies the boundary condition x'(0) = 0. With this boundary condition, the term -(a(t, x)/(1 - t)) dx/dt is well defined as t → 1. The solution of (10) can be given by the system

x'_t = u_t, u'_t = â(t, x_t, u_t),   (11)

where we define x_t := x(t), x'_t := x'(t). By the fundamental theorem of calculus, and provided that all integrals exist (are finite), we notice that equation (11) is equivalent to the nonlinear system of integral equations

x_{t_{n+1}} = x_{t_n} + ∫_{t_n}^{t_{n+1}} x'_s ds,
x'_{t_{n+1}} = x'_{t_n} + ∫_{t_n}^{t_{n+1}} â(s, x_s, x'_s) ds,   (12)

where 0 = t_0 < t_1 < t_2 < ... < 1. Expanding the integrands in (12), we have

x_{t_{n+1}} = x_{t_n} + h_{n+1} x'_{t_n} + ∫_{t_n}^{t_{n+1}} ∫_{t_n}^{s} â(u, x_u, x'_u) du ds,
x'_{t_{n+1}} = x'_{t_n} + h_{n+1} â(t_n, x_{t_n}, x'_{t_n}) + ∫_{t_n}^{t_{n+1}} (â(s, x_s, x'_s) - â(t_n, x_{t_n}, x'_{t_n})) ds.

For simplicity we write

L^{(1)}_n = ∫_{t_n}^{t_{n+1}} ∫_{t_n}^{s} â(u, x_u, x'_u) du ds,

and L^{(2)}_n for the corresponding remainder in the second equation. Thus the system becomes

x_{t_{n+1}} = x_{t_n} + h_{n+1} x'_{t_n} + L^{(1)}_n,
x'_{t_{n+1}} = x'_{t_n} + h_{n+1} â(t_n, x_{t_n}, x'_{t_n}) + L^{(2)}_n,

where h_{n+1} = t_{n+1} - t_n. In order to estimate the error, we need to find bounds for the integrands in L^{(1)}_n and L^{(2)}_n. The double integrals in both L^{(1)}_n and L^{(2)}_n yield the local truncation error if we define the numerical values by

y_{n+1} = y_n + h_{n+1} v_n,
v_{n+1} = v_n + h_{n+1} â(t_n, y_n, v_n),

where h_{n+1} = t_{n+1} - t_n.
Discretization and Convergence Analysis
Consider a sequence of times 0 = t_0 < t_1 < t_2 < ... < 1 and the corresponding step sizes h_n = t_n - t_{n-1}. Define x_n = x(t_n) and x'_n = x'(t_n), where (x(t), x'(t)) is a solution of (5). With y_n, v_n as defined above, let the errors be ε_n = x_n - y_n and ε'_n = x'_n - v_n; using the inequality (x + y)^2 ≤ 2x^2 + 2y^2, the error can be estimated. Next, we introduce some assumptions on the functions a(t, x(t)), g(t, x(t)) and their partial derivatives for t ∈ [0, 1), x ∈ R. But before that, we remind ourselves of the value of â from Section 3,

â(t, x(t), x'(t)) = -(a(t, x(t))/(1 - t)) x'(t) - g(t, x(t)).

Also, for any T_1, T_2 ∈ [0, 1), Lipschitz conditions are assumed for a and g, together with explicit bounds on a, g and their partial derivatives; the final bound applies along the solution path. Taking the difference between the computed and the exact values of â, adding and subtracting the required terms, and applying twice a very well known result from functional analysis, the Cauchy-Schwarz inequality, to L^{(1)}_n and L^{(2)}_n, we obtain

|L^{(1)}_n| ≤ D_1 h_{n+1}^2,

for some constant D_1 which does not depend on h_{n+1} and n, and

|L^{(2)}_n| ≤ D_2 h_{n+1}^2,

where D_2 is independent of n and h_{n+1}.
To avoid the singularity and produce a better estimate for testing the efficiency of the algorithm, we introduce a variable step size by fixing ĥ > 0 and then defining the step sizes h_n and node points t_n in terms of ĥ. In the process of estimating the global error, we need the following two fundamental lemmas.
Lemma 4.1. For all x ≥ -1 and any m > 0, we have 0 ≤ (1 + x)^m ≤ e^{mx}. The proof of this result follows by applying Taylor's theorem with f(x) = e^x, x_0 = 0, and n = 1.
Lemma 4.2. If M_1 ≥ -1 and M_2 ≥ 0 are real numbers and {a_n}_{n=0}^N is a sequence with a_0 ≥ 0 such that a_{n+1} ≤ (1 + M_1) a_n + M_2 for each n, then

a_{n+1} ≤ e^{(n+1)M_1} (a_0 + M_2/M_1) - M_2/M_1.

Proof. Fix a positive integer n; then (20) can be written in the recursive form above. Now, adding the two error inequalities together and using the definition of the norm ∥ε_n∥^2 = (ε'_n)^2 + (ε_n)^2, the error system can be simplified to

∥ε_{n+1}∥^2 ≤ (1 + m_1 ĥ) ∥ε_n∥^2 + m_2 (ĥ)^3,

where m_1 and m_2 are constants independent of h_{n+1} and t_{n+1}. Now we apply Lemma 4.2 with a_n = ∥ε_n∥^2, M_1 = m_1 ĥ and M_2 = m_2 (ĥ)^3, which gives the order of the method in terms of the step size. The following theorem asserts, for the variable step size, the uniform convergence of the solutions produced by the method.
Simulation and Numerical Experiments
In this section we run the algorithm over some examples to show the validity of the method. We used MATLAB with built-in functions such as ode45 and EulerSolver.
Example 5.1. Consider the second-order differential equation (10) with a(t, x) = sin x and g(t, x) = x^5, where the step size is 0.05 and the time interval is [0, 1], along with the initial conditions x(0) = 0, x'(0) = 2. Table 1 compares the two dependent solutions x(t) and x'(t) for equation (10) given the above numerical values, and the figures below draw the relationships between the trajectories of the differential equation and time. The analytical solution to this problem is somewhat lower than our approximation; by shrinking the size of the interval Δt, we could calculate a more accurate estimate.
Example 5.5. In this example we consider the non-autonomous inhomogeneous second-order system with the right-hand side t^3 e^{2t}, a(t, x) = 4, and g(t, x) = 4x, where the step size is 0.01, along with the initial conditions x(0) = 0, x'(0) = 0, in the absence of the singularity. The graphs and tables are shown below.
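To make the procedure concrete, here is a minimal Python sketch of the forward Euler scheme of Section 3 applied to Example 5.1, together with a secant-based shooting wrapper in the spirit of Section 3.1. The reading of equation (10) as x'' = -(a(t, x)/(1 - t)) x' - g(t, x), the stopping point just short of the singular endpoint t = 1, and all function names are our assumptions; the MATLAB built-ins mentioned above (ode45, EulerSolver) are replaced here by a plain loop.

```python
import math

# Forward Euler for the first-order system x' = v, v' = a_hat(t, x, v), with
# Example 5.1 data: a(t, x) = sin x, g(t, x) = x^5, h = 0.05, x(0) = 0, x'(0) = 2.
# The form of a_hat and the early stopping near t = 1 are our assumptions.

def a(t, x):
    return math.sin(x)

def g(t, x):
    return x ** 5

def a_hat(t, x, v):
    return -(a(t, x) / (1.0 - t)) * v - g(t, x)

def euler_second_order(x0, v0, h=0.05, t_end=1.0):
    """Forward Euler, stopping one step short of the singular endpoint t = 1."""
    t, x, v = 0.0, x0, v0
    out = [(t, x, v)]
    while t + h < t_end - 1e-12:
        x, v = x + h * v, v + h * a_hat(t, x, v)   # simultaneous update
        t += h
        out.append((t, x, v))
    return out

def shoot(target, t_end=0.95, s0=0.0, s1=1.0, tol=1e-8, max_iter=50):
    """Secant-method shooting (Section 3.1): find the slope s such that the
    IVP solution with x(0) = 0, x'(0) = s satisfies x(t_end) = target."""
    def endpoint(s):
        return euler_second_order(0.0, s, h=0.01, t_end=t_end)[-1][1]
    f0, f1 = endpoint(s0) - target, endpoint(s1) - target
    for _ in range(max_iter):
        if f1 == f0:
            break
        s2 = s1 - f1 * (s1 - s0) / (f1 - f0)   # secant update
        s0, f0 = s1, f1
        s1, f1 = s2, endpoint(s2) - target
        if abs(f1) < tol:
            break
    return s1

# Example 5.1 as an IVP:
for t, x, v in euler_second_order(0.0, 2.0)[::4]:
    print(f"t = {t:4.2f}   x = {x: .5f}   x' = {v: .5f}")
```

As the text notes, halving the step size h roughly halves the global error, consistent with the first-order convergence established in Section 4.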
Conclusion and Extensions
In this paper our primary goal was to investigate second-order singular Lane-Emden-type equations, and we have successfully arrived at the solutions by the forward Euler algorithm combined with the shooting method, which reduces the boundary value problem to an initial value problem; the method proved to be precise and time-saving. The Lane-Emden equations were solved for the polytropic indices 1, 2, 3 and 5, with constants, linear functions and periodic functions in the drift term. The numerical solution of the problem for these values of the indices replaces the unsolvable version of the equation and any closed-form solution that we might wish to find. For the case n = 2 the solution is obtained as an infinite power series. Graphical representations of these results give us information about polytropes for different values of the polytropic indices, which may be helpful in the study of the behavior of stellar structures in astrophysics. One good extension of this work is to implement the backward Euler formula for second-order differential equations, where the recursion formula is the same except that the dependent variable is a vector. Another possible modification is to use the reliable Runge-Kutta method, which promises accurate results in deriving the solutions of the Lane-Emden equations; it is also effective in handling highly nonlinear differential equations with fewer computations and a larger interval of convergence. More globally, finite difference methods may be used to replace the shooting method in treating the boundary value problem. Finally, we may think of adding additive noise to the second-order differential equation (it is then called a stochastic differential equation); in this case, Euler's method is replaced by the Euler-Maruyama algorithm, see, for instance, [12,15].
Figure 1: Comparison between the approximated solution by Euler's method and the actual solution for the equation x'' + 4x' + 4x = t^3 e^{2t}.
Consider a continuous function f : D → R^n, where D is an open subset of R^{n+1}. Let D be a nonempty set, suppose there is a function f from D to itself, and let 0 ≤ L < 1, where L is free of x and y. If for any two points x, y ∈ D we have |f(x) - f(y)| ≤ L|x - y| for all x, y ∈ D, then f is called a contraction. The smallest such value of L is called the Lipschitz constant of f, and f is then called a Lipschitz function.
Definition 2.2. A function f : D ⊂ R^{n+1} → R^n is said to be locally Lipschitz in x if for each compact set contained in D and each x, y ∈ D there exists L > 0 such that |f(t, x) - f(t, y)| ≤ L|x - y|.
4,578
2024-05-14T00:00:00.000
[ "Physics" ]
Numerical Investigation of Fine Particulate Matter Aggregation and Removal by Water Spray Using Swirling Gas Flow
In this paper, a mathematical model for cyclonic spray dedusting is proposed, based on the two-fluid frame model coupled with the population balance model, which considers the aggregation of particles and droplets in detail. The model is applied to study the characteristics of the multiphase flow field and the effects of the gas velocity, spray volume, and particle concentration on the removal efficiency. In addition, the simulation results are verified against experimental data. The results suggest that the turbulence kinetic energy increases near the wall as the inlet velocity increases, and that the spray region grows as the spray volume increases. This is conducive to turbulent mixing of the particles and droplets, so the agglomeration efficiency of the particles is improved, the particle size increases, and the simulated particle removal efficiency increases to 99.7%, within the allowable range of error of the experimental data (about 99-99.5% dedusting efficiency). As the particle concentration increases, the particle removal efficiency initially increases and then decreases, reaching its highest value at 2 g/m³, which is due to the limited adsorption capacity of the spray droplets. The results help provide a theoretical basis for using spray to promote the agglomeration of particles and for improving the dust removal efficiency in a swirl field.
Introduction
Fine particulate matter (FPM) released from coal combustion is a main pollutant in the environment. The suspension of FPM (mainly PM2.5, i.e., particles smaller than 2.5 microns) in the atmosphere poses serious hazards to human health and the environment. Due to its small size, it can easily travel deep inside the respiratory tract, causing respiratory diseases and even lung cancer. Although existing dust removal devices have removal efficiencies as high as 99% or more, they still fall short of catching very fine particulate matter, and a great deal of FPM is still observed in the atmosphere [1-3]. Therefore, the study of FPM removal technology is particularly important. In industrial applications, different dust removal systems are currently applied to reduce PM emission. Available dust removal technologies vary in removal efficiency, collected PM size and cost [4]. Fabric filters and electrostatic precipitators (ESPs) have the highest removal efficiency for PM2.5. Fabric filters are mainly based on the sieve effect, produced by filtering textiles on which particles are captured. However, they have high maintenance costs due to the rapid clogging of the filter, which can cause re-suspension of previously collected particles [5]. ESPs remove FPM from the flue gas by electric force, but they also have high investment and operational costs [6]. The higher costs make fabric filters and ESPs economically suitable only for industrial applications. Wet scrubbers have some advantages over fabric filters and ESPs: scrubbers are simpler and have lower capital and maintenance costs. The collection efficiency of wet scrubbers reaches over 80% for FPM with design optimization [7]. However, one of the main drawbacks of wet scrubbers is the high amount of water needed for particle removal [8]. As an alternative, cyclonic separators, as well as other inertial separation systems, have been widely used; they also have low installation, operation and maintenance costs [9].
Nevertheless, cyclones' collection efficiency generally only reaches values between 60% and 80% for particle diameters between 2 and 10 µm, making them a good choice for a pre-collection device; moreover, they can be attached to other equipment with higher efficiency, depending on the process requirements [10]. Innovative methods to improve the collection efficiency of a conventional cyclone have been investigated. Spraying in the interior of the cyclone can promote the agglomeration of particles, which has been regarded as an effective and inexpensive way to deal with FPM [11]. In this device, a strong cyclonic flow, also called swirling flow, is introduced to increase the relative velocity of the dust particles and droplets [12], enhance the gas-liquid turbulence, increase the contact probability between the fine particles and droplets, and accelerate the aggregation and growth of the fine particles [13]. Bo W. et al. [14] developed a new fine particle removal technology, Cloud-Air-Purifying, which aggregates FPM and increases the particle size, and found that the collection efficiency of FPM was improved compared with a traditional gas cyclone. Luke S. Lebel et al. [15] discussed the washing mechanism of a cyclone spray scrubber and established a numerical model to predict the aerosol behaviour effectively. Krames and Buttner [16] found that a cyclone scrubber was more economical and feasible than a wet scrubber in cleaning: for particles larger than 3 µm, the collection efficiency reached 99%, and the water consumption was 0.05-0.25 L/m³. Lee et al. [17,18] performed both experimental and theoretical research on the particulate scrubbing efficiency based on the aerodynamic diameter of the particles, to study the development and application of a novel swirl cyclone scrubber; they derived a model of the particle collection efficiency due to Brownian diffusion, inertial collisions, and gravitational sedimentation. Ali et al. [8,19] investigated a model of a centrifugal wet scrubber via numerical simulations and found that droplet carryout has an important effect on the predicted collection efficiency. Liu et al. [20] proposed a tangential swirl coagulating device and found that swirling flow is beneficial to the mixing and collision of fine particles. A survey of existing research reveals that the enhancing effect of cyclonic spray dedusting on the efficiency has been demonstrated, but that the influence of the swirl motion on the multiphase flow characteristics and removal efficiency has not yet been quantitatively analyzed. With the reduction in computing costs in recent years, numerical simulations have been extensively adopted for both scientific study and engineering design. Wang et al. [21] carried out a study on a spray scrubber using the discrete phase model (DPM) to simulate urea particle removal, and they predicted the removal efficiencies under different conditions. However, the DPM requires a great deal of computing resources and cannot provide information such as the collision and coalescence effects of the particles. Widespread theoretical research is being conducted on the population balance model (PBM) based on the two-fluid frame; it is used to describe the spatiotemporal evolution of the particle size distribution (PSD) of the dust particles and water droplets. Duangkhamchan et al.
[22] developed a multi-fluid model combined with the PBM as an alternative approach for modelling the spray in a tapered fluidized bed coater and predicted the temporal evolution of the distributions with respect to the particle size and the liquid distribution. Akbari [23] studied the segregation of a wide range of PSDs in an industrial gas phase polymerization reactor using a computational fluid dynamics (CFD)-PBM coupled model, which helped to reveal the physical details of the multi-phase flow field. As was previously mentioned, few numerical studies have investigated the dynamic properties and interaction mechanism of water droplets and aerosol particles during the dedusting process, and the PBM has not previously been applied to cyclonic spray dedusting. In this study, a mathematical model for cyclonic spray dedusting, based on the two-fluid frame model coupled with the population balance model, was developed, which considers the aggregation of the particles and droplets in detail. The model was applied to study the multiphase flow characteristics and the key factors affecting the particle removal efficiency, such as the gas flow velocity, spray flow rate, and particle concentration. The results of the CFD simulation help provide a theoretical basis for using spray to promote the agglomeration of particles and for improving the dust removal efficiency in a swirl field, and they can also provide guidance for the optimum design of a cyclonic spray scrubber in practical engineering applications.
Mathematical Model
Two-Fluid (Euler-Euler) Model
The particle size of the dust and droplets is very small, and they are sparsely distributed in space; nevertheless, the interaction between the phases should be taken into account. Therefore, the two-fluid (TF) model (the primary phase is the air and the secondary phase is the particles) is used to calculate the velocity field [24]. The equations for the conservation of mass and momentum can be written as

∂(ϕρ)/∂t + ∇·(ϕρv) = 0,   (1)

∂(ϕρv)/∂t + ∇·(ϕρvv) = -ϕ∇p + ∇·τ + ϕρg + Σ_{p=1}^{2} R_p + F + F_vm,   (2)

where ϕ is the volume fraction of each phase, ρ is the phase density, v is the phase velocity, ∇p is the pressure gradient, τ is the stress tensor, ϕρg is the gravity term, g is the acceleration due to gravity, Σ_{p=1}^{2} R_p is the interphase drag term, F is an additional body force, and F_vm is the virtual mass force. The drag force R_p uses a simple interaction term of the form

R_p = K_p (v_p - v_q),

where K_p is the interphase momentum exchange coefficient, which can be written in the general form

K_p = ρ_p f d_p A_i / (6 τ_p),

where A_i is the interfacial area, τ_p = ρ_p d_p² / (18 μ_q) is the particulate relaxation time, d_p is the diameter of the particles or droplets, and f = C_D Re / 24 is the drag function. The Schiller and Naumann model is acceptable for general use for all fluid-fluid pairs of phases and can be written in the form

C_D = 24 (1 + 0.15 Re^{0.687}) / Re for Re ≤ 1000, C_D = 0.44 for Re > 1000.
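As a quick numerical check of the drag terms just introduced, the sketch below evaluates the textbook forms of the Schiller-Naumann correlation and the particulate relaxation time for an air-droplet pair. The property values and the probe slip velocity are illustrative assumptions, not data from this study.

```python
# Illustrative evaluation of the interphase drag quantities of the two-fluid
# model above, using the standard Schiller-Naumann correlation and the
# textbook particulate relaxation time. Property values are typical for
# air and water and are assumptions, not values from the paper.

def schiller_naumann_cd(re):
    """Drag coefficient C_D(Re) of the Schiller-Naumann correlation."""
    if re <= 0.0:
        return 0.0
    if re <= 1000.0:
        return 24.0 / re * (1.0 + 0.15 * re ** 0.687)
    return 0.44

def relaxation_time(rho_p, d_p, mu_q):
    """Particulate relaxation time tau_p = rho_p d_p^2 / (18 mu_q)."""
    return rho_p * d_p ** 2 / (18.0 * mu_q)

rho_g, mu_g = 1.2, 1.8e-5      # air density [kg/m^3] and viscosity [Pa s]
rho_d, d_d = 1000.0, 50e-6     # water droplet density [kg/m^3] and diameter [m]
u_slip = 2.0                   # assumed gas-droplet slip velocity [m/s]

re = rho_g * u_slip * d_d / mu_g          # particle Reynolds number
cd = schiller_naumann_cd(re)
tau_p = relaxation_time(rho_d, d_d, mu_g)
f = cd * re / 24.0                        # drag function f = C_D Re / 24
print(f"Re = {re:.2f}, C_D = {cd:.2f}, tau_p = {tau_p*1e3:.2f} ms, f = {f:.2f}")
```

For a 50 µm droplet slipping at 2 m/s this gives Re of order 10 and a relaxation time of a few milliseconds, which is why the droplets respond quickly to the swirling gas in the scrubber.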
The components of the multiphase flow in this paper are air, dust particles, and water mist, and turbulence plays an important role in the aggregation of the particles. Compared with the standard k-ε model, the Reynolds stress model (RSM) introduces the correlation terms of rotation and curvature and avoids the occurrence of negative normal stresses, so the RSM turbulence model was applied [25]. The Reynolds stress term constitutes the turbulence closure, which must be modelled to solve Equation (2). In the RSM, the transport equation can be written as

∂(ρ u_i'u_j')/∂t + ∂(ρ u_k u_i'u_j')/∂x_k = D_T,ij + P_ij + ϕ_ij + ε_ij.   (3)

The turbulent diffusion term is

D_T,ij = ∂/∂x_k [(μ_t/σ_k) ∂(u_i'u_j')/∂x_k],   (4)

the stress production term is

P_ij = -ρ (u_i'u_k' ∂u_j/∂x_k + u_j'u_k' ∂u_i/∂x_k),   (5)

the pressure strain term is

ϕ_ij = p (∂u_i'/∂x_j + ∂u_j'/∂x_i),   (6)

and the dissipation term is

ε_ij = -(2/3) δ_ij ρε.   (7)

In Equations (3)-(7), δ_ij is the Kronecker factor and μ_t is the turbulent viscosity.
Population Balance Model
The population balance equation (PBE) model is used to calculate the aggregation of the particles in this paper; the breakup of particles is not considered. Based on the particle sparsity hypothesis, the zero-dimensional balance equation for the PSD function in the Eulerian coordinate system, with only particle aggregation considered, can be written as [26]

∂n(v, t)/∂t = (1/2) ∫_0^v β(v - u, u, t) n(v - u, t) n(u, t) du - n(v, t) ∫_0^∞ β(v, u, t) n(u, t) du,

where n(v, t) denotes the number density function of the particles with volume v at time t (1/m³), and β(v - u, u, t) denotes the aggregation kernel for particles with volumes u and v - u (m³/s). The first term on the right side of the equation denotes the number of new particles with volume v generated through aggregation, and the factor 1/2 indicates that two particles participate simultaneously in a single aggregation event. The second term denotes the number of particles of volume v that vanish as a result of aggregation into larger particles.
Aggregation Kernel Model
In practical engineering applications, the droplet size produced by various atomizers is larger than 10 µm [27], so there is no aggregation caused by the Brownian motion of the droplets. Although aggregation of dust particles occurs when the local humidity increases to a certain value, the Brownian aggregation of sub-micron dust particles can still be neglected compared with the capture of dust by droplets. Therefore, the free molecular aggregation model is not discussed in this paper, and the turbulent aggregation model is selected. In a turbulent flow field, the turbulence within the fluid always generates eddies, which in turn dissipate the energy. Energy is transferred from the largest eddies to the smallest eddies, where it is dissipated via viscous interaction. The size of the smallest eddies is the Kolmogorov microscale, which is expressed as a function of the kinematic viscosity ν and the turbulent energy dissipation rate ε as

η = (ν³/ε)^{1/4}.

According to the sizes of the two particles, aggregation is described in terms of the following three models.
(a) When the diameters of the two particles i and j satisfy d_i < η and d_j < η, based on the study conducted by Saffman and Turner [28], the collision rate is expressed as

β(d_i, d_j) = ζ_T (8π/15)^{1/2} (ε/ν)^{1/2} ((d_i + d_j)/2)³,

where ζ_T is a pre-factor that takes into account the capture efficiency coefficient of the turbulent collision. The empirical capture efficiency coefficient of turbulent collision can be written as a function of N_T [29], where N_T is the ratio between the viscous force and the Van der Waals force, H is the Hamaker constant, a function of the particle material, and λ̇ denotes the deformation rate:

λ̇ = (4ε/(15πν))^{1/2}.   (16)

The shear rate γ is given by γ = (ε/ν)^{1/2}.
(b) When the diameters of the two particles i and j satisfy d_i > η and d_j > η, the aggregation rate can be expressed using Abrahamson's model [30]:

β(d_i, d_j) = 2^{3/2} π^{1/2} ((d_i + d_j)/2)² (U_i² + U_j²)^{1/2}.

(c) When the diameters of the two particles i and j satisfy d_i ≥ η and d_j < η, or d_i < η and d_j ≥ η, Zheng's model [31] can be used to express the aggregation rate; in it, St is the ratio of the particle relaxation time scale to the fluid characteristic time scale, and U_i² and U_j² are the mean square velocities of particles i and j, respectively.
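The size-regime dispatch described above can be summarised in a few lines of code. The Saffman-Turner and Abrahamson prefactors below are standard literature forms assumed for illustration, the dissipation rate is an illustrative value, and Zheng's mixed-regime kernel [31] is deliberately left unimplemented because its closed form is not reproduced in the text.

```python
import math

# Sketch of the size-regime dispatch for the turbulent aggregation kernel.
# The prefactors are standard literature forms (our assumption); Zheng's
# mixed-regime kernel [31] is omitted since its closed form is not given here.

def kolmogorov_scale(nu, eps):
    """Kolmogorov microscale eta = (nu^3 / eps)^(1/4)."""
    return (nu ** 3 / eps) ** 0.25

def saffman_turner(d_i, d_j, nu, eps, zeta_t=1.0):
    """Collision kernel for d_i, d_j < eta (viscous subrange)."""
    return zeta_t * math.sqrt(8.0 * math.pi / 15.0) \
        * math.sqrt(eps / nu) * ((d_i + d_j) / 2.0) ** 3

def abrahamson(d_i, d_j, u2_i, u2_j):
    """Collision kernel for d_i, d_j > eta (inertial particles)."""
    return 2.0 ** 1.5 * math.sqrt(math.pi) * ((d_i + d_j) / 2.0) ** 2 \
        * math.sqrt(u2_i + u2_j)

def aggregation_kernel(d_i, d_j, nu, eps, u2_i=0.0, u2_j=0.0):
    eta = kolmogorov_scale(nu, eps)
    if d_i < eta and d_j < eta:
        return saffman_turner(d_i, d_j, nu, eps)
    if d_i > eta and d_j > eta:
        return abrahamson(d_i, d_j, u2_i, u2_j)
    # mixed case d_i >= eta > d_j (or vice versa): Zheng's model [31]
    raise NotImplementedError("mixed-size kernel of Zheng [31]")

nu, eps = 1.5e-5, 1000.0   # air kinematic viscosity [m^2/s], assumed dissipation [m^2/s^3]
print("eta =", kolmogorov_scale(nu, eps))                    # ~ tens of micrometres
print("beta(1 um, 10 um) =", aggregation_kernel(1e-6, 10e-6, nu, eps))
```

With these illustrative values the microscale comes out at roughly 40 µm, so micron-scale dust pairs fall into the Saffman-Turner regime while dust-droplet pairs straddle the mixed regime, which is exactly why three kernel models are needed.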
Boundary Conditions and Computational Method
The simulated flue gas at the inlet is set as a two-phase flow with a particle phase and a gas phase, and the PBM is implemented in the Fluent software. During the computations, the inlet and outlet are set as a velocity inlet and a pressure outlet, respectively. The gas flow velocity at the inlet varies from 8 to 16 m/s. The flow rate of the nozzle is set to 1.2-2.4 L/min. The dust loading is maintained at 1-3 g/m³. According to the experimental research on pressure swirl spraying, including particle size analysis, the particle size distribution of each phase is listed in Table 1. A pressure-based solver is used in the CFD calculations, a coupled algorithm is used for the pressure-velocity coupling, a second-order upwind difference scheme is used for the momentum discretization, and a first-order scheme is used for the turbulent kinetic energy and turbulent dissipation rate. The transient simulation is carried out using a time step of 0.0001 s. The convergence criterion for all scalars requires that the normalized residuals be less than 10⁻⁴. The near-wall treatment uses the non-equilibrium wall function. Due to the specific requirements regarding the PSD and the respective volume fraction ratios, the more advantageous inhomogeneous discrete method is adopted.
Model Validation
A schematic diagram of the experimental setup is presented in Figure 1a. Contaminated gases were introduced via tangential inlets at the bottom of the cyclone, and the rotation of these gases in the body of the system induced a vortex. Some large particles were thrown to the wall and separated under the action of centrifugal force. Fine particles were not collected as the gases rotated upward along the wall of the cyclone; while the gases migrated through the system, the water spray provided a counter-current flow arrangement that cleaned the gases by washing out the suspended aerosols. A swirl plate was placed above the gas inlet to enhance the mass transfer through the scrubber. Finally, the purified flue gas was discharged from the top to achieve efficient removal of fine particles. The field experimental device of the cyclonic spray scrubber is shown in Figure 1b. The cyclonic spray scrubber has a rectangular tangential inlet of 0.1 m height and 0.05 m width and consists of a cylindrical Perspex column (2 m in height and 0.2 m in diameter) with a spray nozzle at the centre of the scrubber. In this stage, the Fluent software (version 19.2, 1987-2018 ANSYS, Canonsburg, PA, USA) was used to simulate this experimental model. The Euler two-fluid model and population balance model described above were adopted. All of the boundary conditions were set according to the experimental data. The PSD according to the experimental data was divided into seven bins (Table 1). According to the cyclonic spray scrubber in the experimental system, a simplified three-dimensional numerical model was constructed (Figure 1c). The domain, containing 2,302,306 cells, was discretized using a structured hexahedral mesh in the ANSYS ICEM software (Figure 1c). The value of the minimum orthogonal quality was 0.15, which indicated that the mesh quality could meet the calculation requirements. We also tested three grid domains in our preliminary computations, containing 1,801,185, 2,302,306, and 2,875,240 cells, respectively. The mesh sensitivity test showed that with a mesh of 2,302,306 cells mesh independence was achieved, since a further increase in the cell number only caused a 2% change in the predicted airflow velocity.
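The mesh-sensitivity bookkeeping described above amounts to comparing a monitored quantity across successive refinements. A small sketch follows; the probe velocities in it are placeholders (only the cell counts and the 2% acceptance threshold come from the text).

```python
# Sketch of the mesh-independence check described above. The velocity values
# are placeholders, not the study's data; only the cell counts and the 2%
# acceptance threshold are taken from the text.

meshes = [1_801_185, 2_302_306, 2_875_240]   # cell counts tested
u_pred = [11.8, 12.1, 12.2]                  # predicted probe velocity [m/s] (illustrative)

for (n0, u0), (n1, u1) in zip(zip(meshes, u_pred), zip(meshes[1:], u_pred[1:])):
    change = abs(u1 - u0) / u0 * 100.0
    verdict = "mesh-independent" if change <= 2.0 else "refine further"
    print(f"{n0:>9,} -> {n1:>9,} cells: {change:4.1f}% change  ({verdict})")
```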
In addition to the mesh independence study, the base model results were validated against field measurement data before accepting the model for use in parametric studies. In this study, gas velocities and dust concentration were employed for the base model validation. We used a high precision anemometer to measure the airflow at three points (A, B, and C in Figure 1c) and a TE-10-800 Anderson particle detector to measure the particle concentration at the outlet. Table 2 compares the model-estimated and measured air flow velocities and dust concentrations. Considering the influence of the measurement error, the simulation results are acceptable. In general, the model developed in this study has a high accuracy and application value in calculating the dedusting efficiency. Figure 2 shows a cloud diagram of the gas flow velocity distribution in section Y = 0. The color gradient in Figure 2 indicates the magnitude of the gas velocity. As can be seen from Figure 2 the velocity distribution within the scrubber is basically axisymmetric. Due to the effect of the centrifugal force on the airflow and the effect of the blade guide, the velocity is the smallest in the central area below the nozzle. It gradually increases along the radial direction from the center to the wall, and gradually decreases to zero near the wall boundary layer. The flue gas cannot flow upward through the blind plate, which is located at the center of the swirl plate, so the airflow velocity is very small in the center of the cyclonic spray scrubber. As the flow velocity increases, the central spray region is strongly affected by the vortex shear, and the flow velocity increases slightly. As the spray flow rate increases, the flow velocity of the flue gas in the spray area decreases sharply, and the low airflow velocity area affected by the spray becomes larger. This is mainly due to the higher concentration and speed of the droplets due to the larger nozzle In addition to the mesh independence study, the base model results were validated against field measurement data before accepting the model for use in parametric studies. In this study, gas velocities and dust concentration were employed for the base model validation. We used a high precision anemometer to measure the airflow at three points (A, B, and C in Figure 1c) and a TE-10-800 Anderson particle detector to measure the particle concentration at the outlet. Table 2 compares the model-estimated and measured air flow velocities and dust concentrations. Considering the influence of the measurement error, the simulation results are acceptable. In general, the model developed in this study has a high accuracy and application value in calculating the dedusting efficiency. Figure 2 shows a cloud diagram of the gas flow velocity distribution in section Y = 0. The color gradient in Figure 2 indicates the magnitude of the gas velocity. As can be seen from Figure 2 the velocity distribution within the scrubber is basically axisymmetric. Due to the effect of the centrifugal force on the airflow and the effect of the blade guide, the velocity is the smallest in the central area below the nozzle. It gradually increases along the radial direction from the center to the wall, and gradually decreases to zero near the wall boundary layer. The flue gas cannot flow upward through the blind plate, which is located at the center of the swirl plate, so the airflow velocity is very small in the center of the cyclonic spray scrubber. 
As the flow velocity increases, the central spray region is strongly affected by the vortex shear, and the flow velocity increases slightly. As the spray flow rate increases, the flow velocity of the flue gas in the spray area decreases sharply, and the low airflow velocity area affected by the spray becomes larger. This is mainly due to the higher concentration and speed of the droplets due to the larger nozzle flow rate, and the air flow is more affected by the droplet resistance, so the air flow decreases rapidly. Figure 3 shows the velocity distribution of the droplets at transverse Z = 850 mm under different operating conditions. As can be seen from Figure 3, as the gas velocity increases, the droplet velocity decreases slightly in the region of 0.3 < X/D < 0.7, while the velocity in the regions on both sides increases. As the spray volume increases, the droplet velocity increases in the region of 0.1 < X/D < 0.9, while the velocity decreases slightly in the regions on both sides. It can be seen from the characteristics of the swirl flow field that the airflow velocity in the central area is small, and the airflow is obviously affected by the droplets. The larger the flow rate, the larger the area of the droplets along the ra−dial direction. The airflow velocity on both sides is large, and the droplet is obviously affected by the airflow. The larger the velocity, the stronger the droplet carried by the airflow. Figure 3 shows the velocity distribution of the droplets at transverse Z = 850 mm under different operating conditions. As can be seen from Figure 3, as the gas velocity increases, the droplet velocity decreases slightly in the region of 0.3 < X/D < 0.7, while the velocity in the regions on both sides increases. As the spray volume increases, the droplet velocity increases in the region of 0.1 < X/D < 0.9, while the velocity decreases slightly in the regions on both sides. It can be seen from the characteristics of the swirl flow field that the airflow velocity in the central area is small, and the airflow is obviously affected by the droplets. The larger the flow rate, the larger the area of the droplets along the ra−dial direction. The airflow velocity on both sides is large, and the droplet is obviously affected by the airflow. The larger the velocity, the stronger the droplet carried by the airflow. flow rate, and the air flow is more affected by the droplet resistance, so the air flow decreases rapidly. Figure 3 shows the velocity distribution of the droplets at transverse Z = 850 mm under different operating conditions. As can be seen from Figure 3, as the gas velocity increases, the droplet velocity decreases slightly in the region of 0.3 < X/D < 0.7, while the velocity in the regions on both sides increases. As the spray volume increases, the droplet velocity increases in the region of 0.1 < X/D < 0.9, while the velocity decreases slightly in the regions on both sides. It can be seen from the characteristics of the swirl flow field that the airflow velocity in the central area is small, and the airflow is obviously affected by the droplets. The larger the flow rate, the larger the area of the droplets along the ra−dial direction. The airflow velocity on both sides is large, and the droplet is obviously affected by the airflow. The larger the velocity, the stronger the droplet carried by the airflow. Figure 4 presents a cloud diagram of the volume fraction distribution of the droplets in section Y = 0. 
As can be seen from Figure 4, the distribution of the droplet volume concentration is hollow and conical. The central area is low, and the side wall area is Figure 4 presents a cloud diagram of the volume fraction distribution of the droplets in section Y = 0. As can be seen from Figure 4, the distribution of the droplet volume concentration is hollow and conical. The central area is low, and the side wall area is high. The droplet covers the interaction area between the flow passage section and the fine particles, which is conducive to the collision and agglomeration of the droplets and 9 of 15 fine particles. The distribution of the droplet particle volume concentration is basically the same under different inlet gas velocities. As the inlet velocity increases, the axial velocity of the droplets in the central region decreases gradually under the action of the gas-liquid two-phase velocity difference, and the droplet movement distance along the axial direction becomes shorter. However, the centrifugal force on the droplet in the side wall region increases gradually, and the droplet concentration along the radial direction decreases gradually. high. The droplet covers the interaction area between the flow passage section and the fine particles, which is conducive to the collision and agglomeration of the droplets and fine particles. The distribution of the droplet particle volume concentration is basically the same under different inlet gas velocities. As the inlet velocity increases, the axial velocity of the droplets in the central region decreases gradually under the action of the gas-liquid two-phase velocity difference, and the droplet movement distance along the axial direction becomes shorter. However, the centrifugal force on the droplet in the side wall region increases gradually, and the droplet concentration along the radial direction decreases gradually. Particle Size Distribution under Different Conditions The average particle size distribution can be calculated using the population balance model. Figure 5 shows the average particle size distribution under different conditions. It can be seen from Figure 5 that the particle size in the spray area is large when the airflow velocity is small. As the flow velocity increases, the particle size gradually increases and tends to become stable after passing through the swirl plate. This is mainly because the air around the nozzle changes the direction of the velocity due to the influence of the droplets, and a reflux area is generated under the nozzle, which causes the flue gas and droplets to be sucked up, increases the interaction time between the particles and droplets, and enhances the agglomeration effect. As the flow velocity increases, the turbulent kinetic energy increases, the collision between the droplets and particles through the swirl plate becomes more severe, the particles significantly agglomerate, and the particle size along the axial direction gradually increases. Particle Size Distribution under Different Conditions The average particle size distribution can be calculated using the population balance model. Figure 5 shows the average particle size distribution under different conditions. It can be seen from Figure 5 that the particle size in the spray area is large when the airflow velocity is small. As the flow velocity increases, the particle size gradually increases and tends to become stable after passing through the swirl plate. 
Figure 5b shows the particle size distribution under different spray volume flow rates in the section Y = 0. Figure 5b shows that the particle size gradually increases along the axial direction as the amount of spray increases. As the nozzle flow rate increases, the concentration and velocity of the droplets increase and the gas-liquid turbulence becomes more intense in the spray area; the airflow speed decreases rapidly due to the effect of droplet resistance, and the velocity of the fine particles moving with the airflow is also reduced. The interaction time between the particles and droplets increases and the interparticle collision coalescence strengthens, so the particle size increases significantly.

Effect of Gas Flow Velocity on Particle Removal Efficiency

The changes in the concentration of the fine particles at the outlet when the spray volume is 1.2 L/min, the inlet dust concentration is 2 g/m³, and the flue gas flow rate is 8-16 m/s are shown in Figure 6. Figure 6a shows that as the airflow rate increases, the number density of the fine particles decreases in each particle size segment after spray agglomeration, but relatively more of the small particles remain. This indicates that large particles are easily removed, while small particles are relatively difficult to remove. This is because the fine particles follow the airflow, have a short residence time, and do not make full contact with the droplets.
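The size dependence of capture (large particles easy, small particles hard) is what single-droplet inertial impaction models predict. As an illustration only (the paper does not specify its capture model), a Calvert-type approximation η = (Kp/(Kp + 0.7))² with the inertial impaction parameter Kp can be sketched as follows; all parameter values are assumptions:

```python
def impaction_parameter(dp, rho_p, u_rel, dd, mu_g=1.8e-5):
    """Inertial impaction parameter K_p = rho_p * dp^2 * u_rel / (9 * mu_g * dd)."""
    return rho_p * dp**2 * u_rel / (9.0 * mu_g * dd)

def single_droplet_efficiency(dp, rho_p, u_rel, dd):
    """Calvert-type single-droplet impaction efficiency (assumed correlation)."""
    kp = impaction_parameter(dp, rho_p, u_rel, dd)
    return (kp / (kp + 0.7)) ** 2

# 1 um vs 5 um dust (assumed density 2000 kg/m^3) against a 100 um droplet, 12 m/s slip
for dp in (1e-6, 5e-6):
    print(dp, single_droplet_efficiency(dp, 2000.0, 12.0, 100e-6))
```

With these assumed values the 5 um particle is captured almost deterministically while the 1 um particle is captured with well under half the efficiency, mirroring the trend read off Figure 6a.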
In order to study the influence of the flow velocity on the turbulent coalescence of the particles, the distribution curve of the turbulent kinetic energy at the transverse section Z = 850 mm in the scrubber was obtained (Figure 6b). As can be seen from Figure 6b, as the flue gas velocity increases, the Reynolds number in the flow field increases and the turbulent kinetic energy also increases, which improves the agglomeration efficiency of the particles and promotes the collision and agglomeration of the fine particles and droplets.

Figure 6. Effect of the gas velocity on the (a) particle number density, (b) turbulent kinetic energy at Z = 850 mm, and (c) particle concentration and removal efficiency.

As can be seen from Figure 6c, as the air inlet velocity increases from 8 to 16 m/s, the concentration of the particulate matter at the outlet decreases from 60 mg/m³ to 2 mg/m³, meeting the ultra-low emission requirement of 5 mg/m³. The removal efficiency increases from 96.8% to 99.9%; the fine particle removal efficiency gradually improves and then becomes stable. The efficiency values simulated by the model are within the allowable error range (the experimentally measured dedusting efficiency is about 99-99.5%). The reasons are as follows: as the gas velocity increases, the dust-capture efficiency of the droplets increases, the turbulent kinetic energy increases, the gas-liquid mixing becomes more uniform, the probability of collisions between the droplets and particles increases, and the particle size increases, so more particles are removed under the action of the centrifugal force.
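The quoted efficiencies follow directly from the inlet and outlet concentrations via η = 1 − C_out/C_in; a quick check using the outlet values read off Figure 6c:

```python
c_in = 2000.0  # inlet dust concentration, mg/m^3 (2 g/m^3)

for v, c_out in [(8, 60.0), (16, 2.0)]:   # outlet concentrations from Fig. 6c
    eta = 1.0 - c_out / c_in
    print(f"v = {v} m/s: eta = {eta:.1%}")
```

The 8 m/s case comes out at 97.0% against the quoted 96.8%, i.e., consistent to within the reading accuracy of the figure; the 16 m/s case reproduces the quoted 99.9% exactly.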
Effect of Spray Volume on Dust Removal Efficiency

The variation in the spray volume in the range 1.2-2.4 L/min was investigated under the conditions that the inlet flue gas velocity was 8 m/s and the inlet concentration was 2 g/m³. The number densities of fine particles at the inlet and outlet are shown in Figure 7a. The results show that as the spray volume increases, the number of fine particles in each particle size interval at the outlet decreases, and the rate of decrease increases. This is because, as the spray volume increases, the number of droplets increases and the contact area between the droplets and the dust-laden gas also increases, raising the probability of the dust particles being captured by the droplets; escaped fine particles and dust are re-entrained into the flow field, the interaction time between the particles and droplets increases, and the agglomeration effect is enhanced. Figure 7b shows the distribution of the turbulent kinetic energy at the transverse section Z = 850 mm for different spray flow rates. It can be seen from Figure 7b that as the spray flow rate increases, the turbulent kinetic energy in the spray area increases, which is conducive to the turbulent mixing of the particles and droplets. In addition, the collision probability and the number of particles per unit volume increase, thus enhancing the turbulent coalescence effect of the particles. As can be seen from Figure 7c, as the spray volume increases from 1.2 L/min to 2.4 L/min, the collection efficiency increases significantly, and the concentration of the particulate matter at the outlet decreases from 60 mg/m³ to 4 mg/m³, meeting the ultra-low emission requirement of 5 mg/m³. As the spray flow rate increases, the droplet velocity increases, which enhances the inertia, diffusion, and interception effects on the dust particles and improves the capture efficiency of the particulate matter. As the flow rate increases, the motion of the droplets increases the disturbance of the surrounding flue gas and the turbulence intensity in the spray area increases, which greatly increases the collision probability between the dust particles and droplets. The particle size of the fine particles increases after humidification and agglomeration, and more particles are removed under the action of the centrifugal force. Increasing the slurry spray volume can increase the dust particle removal efficiency, which has been confirmed in many wet dust removal studies [32,33].
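The recurring link between turbulent kinetic energy and agglomeration can be made quantitative through a turbulent collision kernel. The paper does not state which kernel its population balance model uses; a classical choice is the Saffman-Turner shear kernel, sketched below with assumed parameter values:

```python
import math

def saffman_turner_kernel(r1, r2, eps, nu=1.5e-5):
    """Saffman-Turner turbulent shear collision kernel (m^3/s).
    r1, r2: particle/droplet radii (m); eps: turbulence dissipation rate (m^2/s^3);
    nu: kinematic viscosity of the gas (m^2/s)."""
    return math.sqrt(8.0 * math.pi / 15.0) * (r1 + r2) ** 3 * math.sqrt(eps / nu)

# doubling the dissipation rate raises the collision rate by sqrt(2), about 41%
for eps in (1.0, 2.0):   # assumed dissipation rates, m^2/s^3
    print(eps, saffman_turner_kernel(1e-6, 50e-6, eps))
```

The square-root dependence on the dissipation rate is why a moderate increase in turbulent kinetic energy in the spray zone already produces a noticeable gain in coalescence and hence removal efficiency.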
Effect of Inlet Particle Concentration on Dust Removal Efficiency

The changes in particle concentration in the range 1-3 g/m³ were investigated under the conditions of a flue gas flow rate of 12 m/s and a spray volume of 1.2 L/min. The changes in the number density of the fine particulate matter at the outlet are shown in Figure 8a. As can be seen from Figure 8a, as the particle concentration increases, the number density of the fine particles initially decreases and then increases. Figure 8b shows the turbulent kinetic energy distribution at the transverse section Z = 850 mm for different particulate matter concentrations. It can be seen from Figure 8b that as the particle concentration increases, the turbulent kinetic energy remains about the same, and therefore the collision kernel function also remains about the same. As can be seen from Figure 8c, as the particulate matter concentration increases, the removal efficiency of the fine particulate matter initially increases and then decreases, reaching its highest point at 2 g/m³ in the simulation results, which is in good agreement with the experimental results. In addition, the particulate matter concentration at the outlet is less than 10 mg/m³. This is because, on the one hand, when the particulate matter concentration is large, the number of particles increases and the spacing between the particles continuously decreases, which increases the collision probability of the fine particles [20]. This is conducive to enhancing the agglomeration effect of the particles, forming some large particles and thus significantly improving the removal efficiency of the fine particles. On the other hand, because the number of droplets in the scrubber is relatively fixed while the number of dust particles increases, the adsorption capacity of the spray droplets is limited, which leads to a decrease in the capture efficiency of the dust particles.

Conclusions

In this study, a mathematical model based on a two-fluid model for cyclonic spray dedusting was developed. The model considers in detail the aggregation between particles and droplets caused by turbulence.
The hydrodynamic characteristics in a cyclonic spray scrubber were analyzed and the removal efficiencies were predicted for different gas flow velocities, spray flow rates, and particle concentrations. Based on our analysis, the following conclusions were drawn.
1. The velocity in the cyclonic spray scrubber is basically axisymmetric; that is, the velocity in the central region is the smallest, the velocity gradually increases along the radial direction from the center to the wall, and the velocity gradually decreases to zero near the wall boundary layer, exhibiting the characteristics of swirling flow. The volume concentration distribution of the droplets is hollow and conical; the central region is low and the side-wall region is high.
2. As the flue gas flow velocity increases, the turbulent kinetic energy increases, and the efficiency of the turbulent aggregation of the particles increases. The particle size along the axial direction increases, the number density of the fine particles within each particle size interval decreases, and the removal efficiency gradually increases. The particle concentration at the outlet reaches the ultra-low emission requirement of less than 5 mg/m³.
3. As the spray flow rate increases, the number of droplets increases, the contact area between the droplets and the air increases, the turbulent kinetic energy in the spray area increases, and the particle size increases significantly after wetting and agglomeration. The particle concentration in all of the size intervals at the outlet decreases, and the removal efficiency reaches 99.7%.
4. As the particle concentration increases, the spacing between the particles decreases continuously, and the particles agglomerate more closely. However, due to the limited adsorption capacity of the spray droplets, the removal efficiency of the fine particles reaches its highest value at 2 g/m³.
Multi-valued versions of Nadler, Banach, Branciari and Reich fixed point theorems in double controlled metric type spaces with applications

1 Department of Mathematics, University of Malakand, Chakdara Dir(L), KPK, Pakistan 2 Department of Mathematics and General Sciences, Prince Sultan University, P.O. Box 66833, Riyadh 11586, Saudi Arabia 3 Department of Medical Research, China Medical University, Taichung 40402, Taiwan 4 Department of Computer Science and Information Engineering, Asia University, Taichung, Taiwan 5 Department of Mathematics, Islamia College Peshawar, Peshawar, KPK, Pakistan

Introduction

Numerous applications in engineering and scientific fields can be managed through Fredholm or Volterra integrals. A substantial number of initial value and boundary value problems can be transformed into Fredholm or Volterra integral equations, with applications in the mathematical modeling of physics and biology. For instance, one finds these equations in the biological sciences (heat transfer and heat radiation, biological species living together, the Volterra population growth model), in scattering in quantum mechanics, and in kinetic theory. The generalization of metric spaces did not stop here: authors keep finding ways to restrict or control the triangle inequality, thereby introducing new metric type spaces. In recent times, Mlaiki et al. [35] introduced the concept of control metric spaces, which is in fact an extension of b-metric spaces and extended b-metric spaces. Abdeljawad et al. [49] modified the control metric space via two control functions, which generalizes b-metrics, extended b-metrics and control metric spaces. In this study, theoretically inspired by the above contributions, we establish some multi-valued fixed point theorems in the setting of double controlled metric spaces, and the established results are then used to analyze the existence of solutions to a Volterra integral inclusion and to singular Fredholm integral inclusions of both types. The obtained results can be utilized in many research problems seeking the existence of solutions to various kinds of integral equations. For example, one can also use these results to obtain the existence of a solution to the Riemann-Liouville fractional neutral functional stochastic differential equation with infinite delay of order 1 < β < 2 [54].

Then ρ is a metric on X. The pair (X, ρ) is called a metric space. Remark 2.4. From the above example it can be concluded that, in general, a b-metric is not continuous. It is obvious that for S = 1 every b-metric is a standard metric. Definition 2.5. Let (X, ρ) be a b-metric space, such that for all ζ, η ∈ X. Definition 2.6. [51] Let X be a nonempty set and let θ : X × X → [1, ∞). A function ρ : X × X → [0, ∞) is called an extended b-metric if, for all ζ, η, ν ∈ X, (1) ρ(ζ, η) = 0 if and only if ζ = η; (2) ρ(ζ, η) = ρ(η, ζ); (3) ρ(ζ, ν) ≤ θ(ζ, ν)[ρ(ζ, η) + ρ(η, ν)]. The pair (X, ρ) is called an extended b-metric space. Note that the first two axioms in Definition 2.6 hold trivially. For real numbers x, y with x = 0 or |x| ≥ 1 and y = 0 or |y| ≥ 1, we establish the corresponding relation. Recently, Mlaiki et al. [35] generalized the notion of b-metric space to control metric space. Definition 2.8. [35] Let X be a nonempty set and let α : X² → [1, ∞). A function ρ : X × X → [0, ∞) is called a control metric if, for all ζ, η, ν ∈ X, (1) ρ(ζ, η) = 0 if and only if ζ = η; (2) ρ(ζ, η) = ρ(η, ζ); (3) ρ(ζ, ν) ≤ α(ζ, η)ρ(ζ, η) + α(η, ν)ρ(η, ν). The pair (X, ρ) is called a control metric space. Thus the above metric is a control metric; however, it is not an extended b-metric.
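Definitions of this kind lend themselves to brute-force numerical verification on finite examples. The sketch below checks the double controlled inequality ρ(ζ, ν) ≤ α(ζ, η)ρ(ζ, η) + β(η, ν)ρ(η, ν), introduced next; the concrete ρ, α, β here are hypothetical toy choices, not the paper's examples (taking α = β recovers the control metric condition):

```python
import itertools

X = [0.0, 0.5, 1.0, 2.0]                  # a toy finite point set
rho = lambda x, y: abs(x - y) ** 2        # hypothetical candidate "distance"
alpha = lambda x, y: 2.0 + x + y          # control functions with values >= 1
beta = lambda x, y: 1.0 + 2.0 * (x + y)

def is_double_controlled(X, rho, alpha, beta):
    """Check (rho3): rho(x,z) <= alpha(x,y)rho(x,y) + beta(y,z)rho(y,z) for all triples."""
    return all(
        rho(x, z) <= alpha(x, y) * rho(x, y) + beta(y, z) * rho(y, z) + 1e-12
        for x, y, z in itertools.product(X, repeat=3)
    )

print(is_double_controlled(X, rho, alpha, beta))   # -> True on this toy example
```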
Thabet et al. [49] introduced the double controlled metric type space: given two control functions α, β : X × X → [1, ∞), a function ρ : X × X → [0, ∞) is a double controlled metric if (ρ1) ρ(ζ, η) = 0 if and only if ζ = η; (ρ2) ρ(ζ, η) = ρ(η, ζ); (ρ3) ρ(ζ, ν) ≤ α(ζ, η)ρ(ζ, η) + β(η, ν)ρ(η, ν) for all ζ, η, ν ∈ X (Definition 2.10). Remark 2.11. A controlled metric type space is, in fact, a double controlled metric space. However, the converse is not true, as is verified by the examples given below. It is simple to show that the above metric is a double controlled metric; however, one can deduce that ρ is not an extended b-metric. Define the functions α, β : X × X → [1, ∞) as in the example. It is obvious that (ρ1) and (ρ2) in Definition 2.10 hold, and we claim that (ρ3) is satisfied as well. On the other hand, the analogous inequality with a single control function fails, so the above double controlled metric is not a control metric. Now we explore the topological concepts of the double controlled metric type space; the following definitions are immediate. Definition 2.14. [49] Let (X, ρ) be a double controlled metric type space (by one or two functions). (1) The sequence {ζn}n∈N converges to some ζ ∈ X if and only if for each ε > 0 there exists an integer N such that ρ(ζn, ζ) < ε for each n > N; in this case we write lim n→∞ ζn = ζ. (2) The sequence {ζn}n∈N is a Cauchy sequence if and only if lim n,m→∞ ρ(ζn, ζm) = 0. (3) (X, ρ) is complete if every Cauchy sequence in X converges. When τ is continuous at ζ in (X, ρ), ζn → ζ implies that τζn → τζ as n → ∞. The quest for the existence of fixed points of multi-valued mappings on complete metric spaces originated with Nadler in 1969, who initiated the study of the multi-valued version of the Banach contraction mapping using the idea of the Hausdorff metric. We recall some of the fundamental concepts from multi-valued fixed point theory that will be of use in the present study. Definition 2.17. The Hausdorff metric H on CB(X) is defined by H(A, B) = max{ sup_{a∈A} ρ(a, B), sup_{b∈B} ρ(b, A) }, where ρ(ζ, A) = inf_{η∈A} ρ(ζ, η). The mapping H is a metric for CB(X) and is called the Hausdorff metric. It can be deduced that the metric H in fact depends on the metric of X, and two equivalent metrics for X need not generate equivalent Hausdorff metrics for CB(X). Let C(X) denote the collection of all non-empty closed subsets of X; with ρ(ζ, A) = inf_{η∈A} ρ(ζ, η), the corresponding map H is known as the generalized Hausdorff distance induced by ρ. Definition 2.20. [44] Let (X, ρ1) and (Y, ρ2) be metric spaces. A mapping F : X → CB(Y) is said to be a multi-valued Lipschitz mapping of X into Y iff for all ζ, η ∈ X we have H(Fζ, Fη) ≤ c ρ1(ζ, η). (2.2) The constant c in (2.2) is called a Lipschitz constant. Definition 2.21. If the Lipschitz constant c in (2.2) is less than 1, i.e., c < 1, then F is called a multi-valued contraction mapping. Definition 2.24. An element ζ ∈ X is said to be a fixed point of a multi-valued operator τ : X → N(X) if ζ ∈ τ(ζ); in symbols, Fix(τ) = {ζ ∈ X : ζ ∈ τ(ζ)}.
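Before turning to the fixed point results, the Hausdorff metric of Definition 2.17 can be illustrated by a direct computation on finite subsets of the real line; the sets chosen are arbitrary examples:

```python
def hausdorff(A, B, rho=lambda x, y: abs(x - y)):
    """Hausdorff distance H(A,B) = max{ sup_{a in A} rho(a,B), sup_{b in B} rho(b,A) }
    for finite non-empty sets, where rho(x, S) = inf_{s in S} rho(x, s)."""
    d = lambda x, S: min(rho(x, s) for s in S)
    return max(max(d(a, B) for a in A), max(d(b, A) for b in B))

print(hausdorff({0, 1, 2}, {0, 4}))   # -> 2: point 4 is 2 away from A, point 2 is 2 away from B
```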
Multi-valued fixed point results

In this section some fixed point results are proved in the setting of double controlled metric spaces. Our first result is a Nadler-type fixed point theorem (Theorem 3.1): under its contraction hypotheses, τ has a fixed point. For a positive constant u ∈ (0, 1), define the set H_u^ζ ⊂ X by H_u^ζ = {η ∈ τ(ζ) : u ρ(ζ, η) ≤ ρ(ζ, τ(ζ))}. Theorem 3.4. Let (X, ρ) be a complete double controlled metric type space with two control functions α, β : X × X → [1, ∞), and let τ : X → C(X) be a multi-valued mapping. Suppose there exists a constant k ∈ (0, 1) such that for any ζ ∈ X there is η ∈ H_u^ζ satisfying condition (3.5). Moreover, suppose for each ζ ∈ X that lim n→∞ α(ζ, ζn) and lim n→∞ α(ζn, ζ) exist and are finite. Then τ has a fixed point, provided that k < u and f is lower semi-continuous. Let m, n be integers with n < m; then from Theorem 3.1 we arrive at the following. The ratio test together with (3.6) implies that the limit of the real sequence {Sn} exists, and so {Sn} is a Cauchy sequence. In fact, the ratio test is applied to the term x_i = ∏_{j=0}^{i} β(ζj, ζm) α(ζj, ζ_{i+1}). Letting m, n → ∞ in (3.9), we obtain lim m,n→∞ ρ(ζm, ζn) = 0 at the rate L^n, where L = k/u; since k < u, we have L^n → 0 as n → ∞, from which it is deduced that the sequence {ζn}∞n=1 is a Cauchy sequence. Since (X, ρ) is a complete double controlled metric space, the sequence {ζn}∞n=1 converges to some point ζ ∈ X. We claim that ζ is a fixed point of τ. As a matter of fact, from the argument above it follows that {ζn}∞n=0 converges to ζ, while on the other hand f(ζn) is a decreasing sequence and hence converges to 0. Since f is lower semi-continuous, we get 0 ≤ f(ζ) ≤ lim inf n→∞ f(ζn) = 0; therefore f(ζ) = 0. Consequently, the closedness of τ(ζ) implies ζ ∈ τ(ζ). The above double controlled metric is not a control metric, as can be seen, and one checks that (X, ρ) is a complete double controlled metric space. Consider the multi-valued operator τ of the example. Now it is simple to deduce that f(ζ) = ρ(ζ, τ(ζ)) = 0, and we see that f(ζ) is continuous. In addition, there exists η ∈ H^x_{8/9} such that condition (3.5) is satisfied. Moreover, for every ζ0 ∈ X condition (3.6) is satisfied. Thus all the hypotheses of Theorem 3.4 are fulfilled; hence {a, b, c} are the fixed points of τ for k = 2/3. However, τ is not a contraction in the sense of Nadler, which shows that Theorem 3.1 is a generalization of Nadler's multi-valued fixed point theorem. We now proceed to extend the Branciari integral fixed point theorem to double controlled metric type spaces. The theorem is proved in three steps. Step 2: we show that {ζn}∞n=0 is a Cauchy sequence; from Theorem 3.1 we have (3.12). Step 3: the ratio test together with (3.11) implies that the limit of the real sequence {Sn} exists, and so {Sn} is a Cauchy sequence. Letting m, n → ∞ in (3.12) gives lim m,n→∞ ρ(ζm, ζn) = 0.

Applications to integral inclusions

In the current section, existence theorems for a Volterra type integral inclusion and for singular Fredholm integral inclusions are obtained via the multi-valued fixed point results. The Volterra type integral inclusion can be expressed as (4.1), where ϑ(α) is a continuous function on the given interval, Γ(α, τ) is a family of non-empty compact and convex sets on the interval, φ is the unknown solution of the inclusion, and a ≤ x ≤ b. Moreover, consider a multi-valued operator L : X → CB(X) defined by (4.2); clearly the operator (4.2) has non-empty closed values. Furthermore, consider the double controlled metric ρ : X × X → [0, ∞) defined by ρ(α, β) = |α − β| and the control functions µ, ν : X × X → [1, ∞) defined by µ = α + 9β + 7 and ν = 6α + 2β + 4. Additionally, let (4.2) be the operator from X to CB(X). Then the integral inclusion (4.1) has a solution provided that the following conditions hold. In general, two cases of singularity arise in Fredholm and Volterra integral equations [3,26,38]; one of them deals with the limits a → −∞ and b → ∞. The general forms of both types are given below, where ϑ(α) is a function continuous on [a, b], Γ(α, τ) is a family of non-empty compact and convex sets on the interval, φ is the unknown solution of the inclusion, and a ≤ x ≤ b in each case. Proof. The given double controlled metric space is a complete space.
Then the theorem can be proved easily by following the same steps as in Theorem 4.1, and by Theorem 3.10 the integral inclusion (4.4) possesses a solution.
Quasiparticle interference and charge order in a heavily overdoped non-superconducting cuprate

One of the key issues in unraveling the mystery of high-Tc superconductivity in the cuprates is to understand the normal state outside the superconducting dome. Here we perform scanning tunneling microscopy and spectroscopy measurements on a heavily overdoped, non-superconducting (Bi,Pb)2Sr2CuO6+x cuprate. Spectroscopic imaging reveals dispersive quasiparticle interference, and the Fourier transforms uncover the evolution of the momentum space topology. More interestingly, we observe nanoscale patches of static charge order with √2 × √2 periodicity. Both the dispersive quasiparticle interference and the static charge order can be qualitatively explained by theoretical calculations, which reveal the unique electronic structure of strongly overdoped cuprates. The superconducting (SC) state of high-Tc cuprates exists within a "dome" in the phase diagram and disappears both in the severely underdoped and heavily overdoped limits. Because the cuprates are widely believed to be doped Mott insulators [1], the underdoped regime near the parent compound has been extensively studied by various experimental techniques, which have revealed highly unusual phenomena such as the pseudogap phase [2] and complex charge/spin orders [3-14]. On the contrary, the heavily overdoped regime is much less explored, because it is generally considered to be a rather conventional Fermi liquid (FL) state. This point has been illustrated by the crossover from a non-FL-like linear temperature (T) dependent resistivity at optimal doping to the T² dependent resistivity characteristic of a Landau FL in the heavily overdoped regime [15-19], as well as by quantum oscillation experiments revealing a single hole-like Fermi surface (FS) [20,21]. Because the physics of the FL is well understood, the heavily overdoped limit can actually serve as another valid starting point, presumably more accessible than the Mott insulator limit, for understanding the origin of superconductivity in cuprates. Previous experiments on overdoped cuprates have revealed a number of important features of the electronic structure. Angle-resolved photoemission spectroscopy (ARPES) shows a FS topology transition from a (π,π)-centered hole-like pocket to a (0,0)-centered electron-like pocket [22-24]. In the single-band tight-binding model [25,26], the change of FS topology in two dimensions should be accompanied by a logarithmic divergence of the electron density of states (DOS) known as the Van Hove singularity (VHS) [27]. Recent scanning tunneling microscopy (STM) experiments provide direct evidence for the VHS in a heavily overdoped cuprate, as well as for the existence of a pseudogap [26]. However, it is still unclear what the main difference is, from the electronic structure and electronic order point of view, between the FL and SC states across the phase boundary on the overdoped side. In particular, the charge order phenomenon, which is ubiquitous in underdoped cuprates and entangles intricately with superconductivity [7,28-30], has been mostly neglected in the heavily overdoped non-SC regime of the phase diagram. In order to elucidate the electronic structure and electronic order in the overdoped regime outside the SC dome, here we perform STM studies on a heavily overdoped, non-SC Bi2-xPbxSr2CuO6+δ (Pb-Bi2201) cuprate.
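The logarithmic DOS divergence at the saddle point of a two-dimensional band, mentioned above, is easy to reproduce numerically. The sketch below uses the simplest nearest-neighbour tight-binding dispersion with an assumed hopping t; the actual Bi2201 band structure includes further hopping terms, so this is an illustration of the generic effect, not of the paper's fit:

```python
import numpy as np

t = 1.0                                  # nearest-neighbour hopping (assumed units)
N = 800                                  # k-points per direction
kx, ky = np.meshgrid(np.linspace(-np.pi, np.pi, N), np.linspace(-np.pi, np.pi, N))
energy = -2.0 * t * (np.cos(kx) + np.cos(ky))   # saddle points at (pi,0), (0,pi): E = 0

dos, edges = np.histogram(energy.ravel(), bins=200, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
peak = centers[np.argmax(dos)]
print(f"DOS peaks at E = {peak:.3f} (Van Hove singularity at the saddle-point energy)")
```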
Tunneling spectroscopy reveals the VHS feature and its evolution into the pseudogap phase, and the dispersive quasiparticle interference (QPI) patterns reveal the change of FS topology. More remarkably, we observe nanoscale patches of static charge order with √2 × √2 periodicity. The possible origin of the charge order and its implications for superconductivity are discussed.

Results

Spatially resolved tunneling spectroscopy. Pb-doped Bi2201 is chosen because it can be overdoped into the non-SC regime and has an ideal cleaved surface. High-quality Pb-Bi2201 single crystals are grown by the traveling solvent floating zone method, and the Tc of the as-grown sample is about 3 K [31]. The non-SC sample studied in this work is obtained by annealing the as-grown sample in high-pressure O2 (~80 atm) at 500 °C for 7 days to further increase the hole density. It exhibits no sign of superconductivity down to 2 K. Figure 1(a) depicts the schematic electronic phase diagram, and the red arrow shows the approximate location of the non-SC sample. The Pb-Bi2201 crystal is cleaved in the ultrahigh vacuum chamber at T = 77 K and is then transferred into the STM chamber with the sample stage cooled to T = 5 K. STM topography is taken in the constant current mode with an electrochemically etched tungsten tip, which has been treated and calibrated on a clean Au(111) surface [32]. The dI/dV (differential conductance) spectra are collected using a standard lock-in technique with modulation frequency f = 423 Hz. All the data reported here are taken at T = 5 K. Figure 1(b) shows the exposed (Bi,Pb)O surface topography of a non-SC sample, which exhibits a regular square lattice. The structural supermodulation usually observed in Bi-based cuprates is suppressed by the Pb doping [28,33]. Around 13% of the atomic sites appear as bright spots, which is consistent with the 12.2% Pb substitution of Bi determined by the sample growth conditions [28,33]. There are spatial inhomogeneities with a typical size of a few nanometers, which presumably result from the non-uniform distribution of the local hole density [34,35]. The local electronic structure is probed by dI/dV spectroscopy, which is approximately proportional to the electron DOS. Figure 1(c) displays a series of representative spectra taken at the various locations indicated by the correspondingly colored dots in Figure 1(b). The spectra exhibit significant yet systematic variations. Roughly speaking, there are two types of spectra: one with a prominent peak near the Fermi energy (E_F), and the other with a DOS suppression around E_F that is reminiscent of the pseudogap. In Fig. 1(d) we show that the peaks in dI/dV can be fitted well by the simple function a + b log|E − E_VHS|, with E_VHS denoting the peak position, which is consistent with the spectrum expected in the presence of a VHS [36]. The spectra with DOS suppression are quite similar to those of OD cuprates with lower hole density in the overdoped SC regime [37,38]. The spatially averaged dI/dV spectrum in Fig. 1(e) exhibits a DOS peak around E_F, revealing that statistically the dominant spectral feature in this sample is of the VHS type. The spatial variations of the spectra reflect that the VHS-type spectra gradually evolve into the pseudogap type with reduced doping, which is consistent with the expected band structure evolution of overdoped cuprates.
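A fit of the form a + b log|E − E_VHS| used above can be sketched on synthetic data. Because the model is singular at E = E_VHS, the sketch below grid-searches the peak position and solves for a, b by linear least squares, which is more robust than a general nonlinear fit; the trace and all numbers are synthetic, not the paper's data:

```python
import numpy as np

def vhs_model(E, a, b, E_vhs):
    """Peak model used in the paper: a + b*log|E - E_vhs| (b < 0 gives a peak)."""
    return a + b * np.log(np.abs(E - E_vhs) + 1e-9)   # small cutoff avoids log(0)

# synthetic dI/dV trace with a VHS peak at +12 meV plus noise
E = np.linspace(-100.0, 100.0, 201)
rng = np.random.default_rng(0)
data = vhs_model(E, 1.0, -0.2, 12.0) + 0.02 * rng.standard_normal(E.size)

# grid-search the singular parameter E_vhs; fit a, b by linear least squares
best = None
for e0 in np.arange(-50.0, 50.0, 0.5):
    X = np.column_stack([np.ones_like(E), np.log(np.abs(E - e0) + 1e-9)])
    coef, *_ = np.linalg.lstsq(X, data, rcond=None)
    sse = np.sum((X @ coef - data) ** 2)
    if best is None or sse < best[0]:
        best = (sse, e0, coef)
print("fitted E_VHS = %.1f meV" % best[1])   # recovers ~12 meV
```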
The dI/dV maps and dispersive QPI patterns. Next we focus on the electronic structure and electronic order in this sample.

The static √2 × √2 charge order structure. In addition to the dispersive QPI features, a more important and totally unexpected feature revealed by the dI/dV maps is the existence of a non-dispersive structure in the low-energy dI/dV maps. As displayed by the dashed squares in Fig. 3(a), the DOS map at zero bias exhibits nanoscale patches of short-range charge order rotated by 45 degrees with respect to the atomic lattice. This feature has never been observed in cuprates before [7,26,29,38]. The dI/dV maps obtained at different bias energies indicate that the charge order is more pronounced around E_F and is visible over the entire energy range. To inspect its fine structure, we show in Figs. 4(b) and 4(c) the zoomed-in topographic and dI/dV maps acquired at Vb = 0 mV on the area enclosed by the green dashed square in Fig. 3(a). The comparison of these two maps clearly illustrates that the charge order locally imposes a commensurate √2 × √2 superstructure on the original atomic lattice. Moreover, the charge order patterns of different patches in Fig. 3(a) do not align with each other. The lack of long-range order indicates that the charge order may be affected by local impurities or by the inhomogeneous distribution of the hole concentration. We gain more insight into the charge order by investigating its dependence on the bias voltage. Depicted in Figs. 4(d)-4(g) are the dI/dV maps of a small charge-ordered patch (marked by the red dashed square in Figs. 4(b) and 4(c)) acquired at Vb = 0 mV, -5 mV, -10 mV and -20 mV, demonstrating that this charge order keeps a commensurate periodicity without any dispersion within the energy range from -20 mV to 0 mV. This suggests that the √2 × √2 pattern is a static charge order, in sharp contrast to the dispersive QPI patterns.

Discussion

Previous STM studies of underdoped cuprates have revealed the ubiquitous existence of charge order with a period of around 4a0 along the Cu-Cu bond direction [7,30]. However, the √2 × √2 charge order reported here has never been observed before in cuprates. In fact, the issue of charge order in heavily overdoped cuprates has been mostly neglected so far, and a previous study on an overdoped Bi2201 with Tc = 15 K did not observe such charge order [38]. Therefore, the √2 × √2 charge order could be a unique feature of the non-SC regime of the cuprate phase diagram, and it may reveal key information regarding how superconductivity is suppressed by strong overdoping. The main questions regarding the observed charge order concern its origin and its implications for the SC phase. Below we present a possible mechanism to account for the √2 × √2 charge order in the strongly overdoped regime. A likely cause for the charge order is the competition between the onsite Coulomb repulsion U and the nearest-neighbor interaction V, in combination with the proximity to the VHS. In the simplest classical picture, i.e., if the kinetic energy of the electrons is neglected, the potential energy per site for the √2 × √2 charge configuration in Fig. 4(a) is:
Global existence and blow-up of solutions to porous medium equation and pseudo-parabolic equation, I. Stratified groups

In this paper, we prove global existence and blow-up of positive solutions to the initial-boundary value problems for the nonlinear porous medium equation and the nonlinear pseudo-parabolic equation on stratified Lie groups. Our proof is based on the concavity argument and the Poincaré inequality established in Ruzhansky and Suragan (J Differ Equ 262:1799-1821, 2017) for stratified groups.

Introduction

The main purpose of this paper is to study the global existence and blow-up of positive solutions to the initial-boundary value problems (1.1) and (1.2) for the nonlinear porous medium equation and the nonlinear pseudo-parabolic equation, where m ≥ 1 and p ≥ 2, f is locally Lipschitz continuous on R with f(0) = 0, and f(u) > 0 for u > 0. Furthermore, we suppose that u0 is a non-negative and non-trivial function in C¹(D), with u0(x) = 0 on the boundary ∂D, for p = 2, and in L∞(D) ∩ S̊^{1,p}(D) for p > 2, respectively.

Definition 1.1. Let G be a stratified group. We say that an open set D ⊂ G is an admissible domain if it is bounded and if its boundary ∂D is piecewise smooth and simple, that is, it has no self-intersections.

Let G be a stratified group and let D ⊂ G be an open set; we define the functional spaces S^{1,p}(D) = {u : D → R; u, |∇_H u| ∈ L^p(D)}. (1.3) We consider the associated functional J_p. Thus, the functional class S̊^{1,p}(D) can be defined as the completion of C¹₀(D) in the norm generated by J_p; see e.g. [7]. (b) Let N₁ be as in (a) and let X₁, . . ., X_{N₁} be the left-invariant vector fields on G such that X_k(0) = ∂/∂x_k|₀ for k = 1, . . ., N₁. Then the Hörmander rank condition must be satisfied, and we say that the corresponding triple is a stratified group. Recall that the standard Lebesgue measure dx on Rⁿ is the Haar measure for G (see e.g. [14], [39]). The left-invariant vector field X_j has an explicit form; see e.g. [39]. The following notation is used throughout this paper: ∇_H := (X₁, . . ., X_{N₁}) for the horizontal gradient, and L_p u := ∇_H · (|∇_H u|^{p−2} ∇_H u) for the p-sub-Laplacian. When p = 2, the second-order differential operator is called the sub-Laplacian on G. The sub-Laplacian L is a left-invariant homogeneous hypoelliptic differential operator, and it is known that L is elliptic if and only if the step of G is equal to 1. One of the important examples of nonlinear parabolic equations is the porous medium equation, which describes a wide range of processes involving fluid flow, heat transfer or diffusion, with further applications in fields such as mathematical biology, lubrication, and boundary layer theory. Existence and nonexistence of solutions to problem (1.1) for the reaction term u^m in the cases m = 1 and m > 1 have been actively investigated by many authors, for example [3,4,9,11,12,15,16,20,21,22,28,30,41,42,43]; Grillo, Muratori and Punzo considered the fractional porous medium equation [17,18], also in the setting of Cartan-Hadamard manifolds [19]. By using the concavity method, Schaefer [44] established a condition on the initial data of a Dirichlet type initial-boundary value problem for the porous medium equation with a power-function reaction term under which blow-up of the solution occurs in finite time, as well as a condition under which global existence of the solution holds. We refer for more details to Vazquez's book [45], which provides a systematic presentation of the mathematical theory of the porous medium equation.
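Since the concavity method is the engine of all blow-up proofs below (and of Schaefer's result just cited), its core mechanism is worth recording. The following is a standard summary of the Levine-type argument, not a statement taken verbatim from this paper:

```latex
\textbf{Concavity lemma (standard form).} Suppose $E\colon[0,T)\to(0,\infty)$
satisfies, for some $\sigma>0$,
\[
  E''(t)\,E(t) - (1+\sigma)\bigl(E'(t)\bigr)^2 \;\ge\; 0,
  \qquad E(0)>0,\quad E'(0)>0 .
\]
Then
\[
  \bigl(E^{-\sigma}\bigr)'' \;=\; -\,\sigma E^{-\sigma-2}
  \Bigl(E''E-(1+\sigma)(E')^2\Bigr)\;\le\;0,
\]
so $E^{-\sigma}$ is concave; having negative initial slope, it satisfies
\[
  E^{-\sigma}(t)\;\le\; E(0)^{-\sigma}-\sigma E'(0)\,E(0)^{-\sigma-1}\,t .
\]
Hence $E^{-\sigma}$ reaches zero, i.e.\ $E(t)\to\infty$, no later than
\[
  T^\ast \;\le\; \frac{E(0)}{\sigma\,E'(0)} .
\]
```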
The energy for an isotropic material can be modeled by a pseudo-parabolic equation [10]. Some wave processes [6] and the filtration of two-phase flow in porous media with dynamic capillary pressure [5] are also modeled by pseudo-parabolic equations. The global existence and finite-time blow-up of solutions to pseudo-parabolic equations in bounded and unbounded domains have been studied by many researchers; see, for example, [26,27,33,34,37,47,48,49] and the references therein. Also, blow-up of solutions to the semi-linear diffusion and pseudo-parabolic equations on the Heisenberg groups was derived in [1,2,13,24,25]. In addition, in [40] the authors found the Fujita exponent on general unimodular Lie groups. In some of our considerations a crucial role is played by a condition of the type introduced by Chung and Choi [8] for a parabolic equation; we will deal with several variants of this condition.

• The Poincaré inequality established by the first author and Suragan in [38] for stratified groups, with constant |N₁ − p|^p/(pR)^p. Note that it is possible to interpret the constant |N₁ − p|^p/(pR)^p as a measure of the size of the domain D; then β in (1.7) depends on the size of the domain D.

Our paper is organised as follows: we discuss the existence and nonexistence of positive solutions to the nonlinear porous medium equation in Section 2, and to the nonlinear pseudo-parabolic equation in Section 3.

Nonlinear porous medium equation

In this section, we prove global existence and blow-up of solutions of the initial-boundary value problem (1.1).

2.1. Blow-up solutions of the nonlinear porous medium equation. We start with the blow-up property. Theorem 2.1. Assume that the function f satisfies condition (2.1), with α as in (2.2). Then any positive solution u of (1.1) blows up in finite time T*, i.e., there exists a finite T* such that the estimate (2.3) holds, where M > 0 and σ = √(pmα/(m+1)) − 1 > 0. In fact, in (2.3) an explicit choice can be made.

Remark 2.2. Note that the condition (2.1) on the nonlinearity includes the following cases:
1. Philippin and Proytcheva [35] used the condition (2 + ε)F(u) ≤ uf(u), u > 0, where ε > 0. It is a special case of an abstract condition of Levine and Payne [31].
2. Bandle and Brunner [4] relaxed this condition by allowing an additional term, with ε > 0 and γ > 0.
Those results were established on bounded domains of the Euclidean space, and ours is a new result on stratified groups.

Proof of Theorem 2.1. Assume that u(x, t) is a positive solution of (1.1). We use the concavity method to establish the blow-up phenomenon. We introduce the functional J(t); by (2.2) we obtain a lower bound for it, and J(t) can be written in a convenient form. Define E(t), with M > 0 to be chosen later. Then the first derivative of E(t) with respect to t is computed. By applying (2.1), Lemma 1.2 and the condition 0 < β ≤ m + 1, we estimate the second derivative of E(t). By employing the Hölder and Cauchy-Schwarz inequalities, we obtain an estimate for [E′(t)]², valid for arbitrary δ > 0.
So we obtain (2.10). The previous estimates, together with the choice σ = δ = √(pmα/(m+1)) − 1 > 0 (positivity follows from α > m + 1), imply the desired differential inequality. By assumption J(0) > 0; thus we select M as in (2.11). It follows that, for t ≥ 0, E^{-σ} is concave; then, for σ = √(pmα/(m+1)) − 1 > 0, we arrive at the concavity estimate, and some rearrangements with E(0) = M give the blow-up time T*. That completes the proof.

Theorem 2.3 concerns global existence: assume f satisfies (2.12) for suitable parameters, where R = sup_{x∈D} |x′| and x = (x′, x″) with x′ being in the first stratum. Assume also that u0 ∈ L∞(D) ∩ S̊^{1,p}(D) satisfies the corresponding inequality. If u is a positive local solution of problem (1.1), then it is global and satisfies the stated estimate.

Proof of Theorem 2.3. Recall from the proof of Theorem 2.1 the functional J(t). Let us define E(t). By applying (2.12) and Lemma 1.2, one finds the required bounds. We can rewrite E′(t) by using (2.9) and α ≤ 0; that gives E(t) ≤ E(0). This completes the proof of Theorem 2.3.

Nonlinear pseudo-parabolic equation

In this section, we prove global existence and blow-up of solutions of the initial-boundary value problem (1.2).

3.1. Blow-up phenomena for the pseudo-parabolic equation. We start with conditions ensuring the blow-up of solutions in finite time. Assume that

αF(u) ≤ uf(u) + βu^p + αγ, u > 0, (3.1)

where γ > 0 and R = sup_{x∈D} |x′|. Assume also that u0 ∈ L∞(D) ∩ S̊^{1,p}(D) satisfies the corresponding inequality. Then any positive solution u of (1.2) blows up in finite time T*, i.e., there exists a finite T* such that the stated estimate holds, where σ = α/2 − 1 > 0 and M > 0.

Proof of Theorem 3.1. The proof is based on the concavity method. The main idea is to show that [E_p^{−σ}(t)]″ ≤ 0, which means that E_p^{−σ}(t) is a concave function, for E_p(t) defined below. Let us introduce some notation and define E_p(t), with a positive constant M > 0 to be chosen later. We estimate E″_p(t) by using assumption (3.1) and integration by parts. Next we apply Lemma 1.2, which, with F(t) as in (3.6), allows E″_p(t) to be rewritten in a convenient form. Also, for arbitrary δ > 0, in view of (3.7), and taking σ = δ = α/2 − 1 > 0, we arrive at the desired inequality. Note that in the last line we have used an auxiliary inequality which follows from the Hölder and Cauchy-Schwarz inequalities. By assumption F(0) > 0; thus we can select M as in (3.9). It follows that, for t ≥ 0, E_p^{−σ} is concave; then, for σ = α/2 − 1 > 0, we arrive at the blow-up of E_p(t). This completes the proof.

3.2. Global solution for the pseudo-parabolic equation. We now show that positive solutions, when they exist for some nonlinearities, can be controlled.

Theorem 2.1. Let G be a stratified group with N₁ being the dimension of the first stratum. Let D ⊂ G be an admissible domain. Let 2 ≤ p < ∞ with p ≠ N₁.

Theorem 3.1. Let G be a stratified group with N₁ being the dimension of the first stratum. Let D ⊂ G be an admissible domain. Let 2 ≤ p < ∞ with p ≠ N₁.
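The dichotomy proved above (blow-up for data satisfying the (2.1)-type conditions, global existence for controlled data) can be observed numerically even in the simplest Euclidean special case. The sketch below integrates a one-dimensional toy version u_t = (u^m)_xx + u^p with zero Dirichlet data; the group setting, the sub-Laplacian, and all parameter values are replaced by assumptions, and divergence of the explicit scheme is taken as a numerical signature of blow-up:

```python
import numpy as np

def run(amp, m=2, p=3, nx=101, dt=5e-7, t_max=0.01):
    """Explicit scheme for u_t = (u^m)_xx + u^p on (0,1) with u = 0 on the boundary.
    Divergence of the scheme is interpreted as numerical blow-up."""
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    u = amp * np.sin(np.pi * x)            # positive initial datum, zero on the boundary
    t = 0.0
    while t < t_max:
        w = u**m
        lap = np.zeros_like(u)
        lap[1:-1] = (w[2:] - 2.0 * w[1:-1] + w[:-2]) / dx**2
        u = u + dt * (lap + u**p)
        u[0] = u[-1] = 0.0
        t += dt
        if not np.isfinite(u).all() or u.max() > 1e8:
            return f"blow-up detected at t ~ {t:.4f}"
    return f"still bounded at t = {t_max} (max u = {u.max():.3f})"

print("large datum :", run(amp=20.0))   # reaction dominates: finite-time blow-up
print("small datum :", run(amp=0.2))    # diffusion dominates: solution stays bounded
```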
Study on the Fatigue of a Spherical Bearing Under Different Loading Conditions Considering Rotation Effect

The minimum fatigue life of the bearing occurs under the joint action of tension and X-direction shear force, and when the rotation angle of the upper bearing plate is 1°. In other working conditions, the lowest fatigue life of the bearing is also mostly found in the wedge-shaped area of the upper and lower bearing plates. Therefore, the wedge-shaped area should be strengthened in the design.

Introduction

In recent years, with the rapid development of long-span spatial structures, the stress analysis of such structures has become more and more demanding. Joints are an important part of spatial structures and the key to their wider adoption [1][2]. A bearing is a kind of joint. As an important component connecting the superstructure and substructure, it transfers the reaction force of the superstructure to the substructure and coordinates and releases the deformation of the superstructure [3]. Besides, it is an important factor in realizing the assumed structural boundary conditions and influences the stability bearing capacity of the structure [4]. Bearing forms mainly include plate bearings, rubber cushion bearings and spherical bearings; in recent years, lead-core rubber bearings, high-damping rubber bearings and other new energy-dissipating bearings have also appeared [5][6][7]. The spherical bearing is a new kind of bearing developed on the basis of the basin-type rubber bearing and belongs to the class of steel bearings. Due to the advantages of long service life, large bearing capacity, flexible rotation, and suitability for large rotation angles and large displacements of the beam end [8,9], spherical bearings are widely used in highways, bridges and long-span structures [10,11]. In recent years, because of its good machining performance, cast steel is often used for the connectors of joints with large stress and complex structure in spatial structures (especially special-shaped joints). Abroad, especially in developed countries such as Germany and Japan, cast steel joints have been widely used [12][13][14]. Spherical bearing joints in engineering structures are in a complex stress state, which is controlled by wind load and earthquake action in addition to gravity load and temperature action [15]. Under the action of earthquake and temperature, the bearing may sustain large horizontal forces, while under wind load it may withstand significant wind suction [7,16,17]. However, the traditional spherical bearing is designed to transfer vertical load: its rotational capacity under the action of horizontal force is poor [10-13], and its resistance to pull-out under tension is poor. According to the disaster statistics of the Wenchuan and Lushan earthquakes, bearing joints may fracture and fail during earthquakes due to the reciprocating action of complex stresses such as bending, shear and tension [18], and fatigue fracture is one of the main failure modes. Therefore, it is very important to study the mechanical properties of spherical bearing joints under static loads and their fatigue life under cyclic loads. At present, there are few studies on the fatigue performance of spherical bearings. He [3] analyzed the fatigue life of a railway spherical bearing with the finite element software ANSYS and the fatigue software FEMFAT, and obtained the fatigue life of the bearing under actual working conditions.
Liu [19] predicted and analyzed the service life of a light rail bearing by combining finite element simulation and testing; the results verified the rationality of the finite element method, and the fatigue life of the bearing met the design requirements. He et al. [20] studied the fatigue temperature effect of a lead-core rubber bearing through cyclic loading tests and found that the temperature change caused by the fatigue temperature effect of the bearing is proportional to the initial yield stress and inversely proportional to the bearing height. Tie [21] evaluated the service life of the rubber and steel bearings of a bridge with the finite element software ANSYS and concluded that the recommended service life of the rubber bearings is 50 years, while the steel bearings can work normally within their design life. In addition, Wu [18], Wang [22] and Wang [23] studied the fatigue of the anchor bolts of bearing joints by means of tests. It can thus be seen that most research on the fatigue life of bearings is focused on railway and bridge bearings; the bearings studied are mainly rubber bearings, and most studies adopt experimental methods, which involve long test periods, demanding test conditions and high costs. By using finite element software to simulate the fatigue performance of a bearing in its working state, the reliability of tests can be verified, and a convenient method for fatigue life estimation of bearings is provided. In this paper, the finite element software ABAQUS is used to study the static performance of a large cast steel spherical bearing and, on this basis, the fatigue life of each of its components is analyzed with fatigue software.

Basic structure of the spherical bearing

The traditional spherical bearing is composed of an upper bearing plate (including a stainless steel plate), a planar PTFE plate, a spherical PTFE plate and a dustproof structure, etc. Sliding between the planar PTFE plate and the stainless steel plate of the upper bearing plate accommodates the displacement of the bearing, and the rotation function is realized by sliding between the spherical crown liner plate and the spherical PTFE plate [24]. Considering the requirements on the pull-out resistance and rotation performance of the bearing, an improvement of the traditional spherical bearing is proposed in this paper. Four wedge-shaped parts are used to make the upper and lower bearing plates butt together (as shown in Fig. 1), which effectively improves the pull-out capacity of the bearing. At the same time, a rotation angle of 0-3° is allowed between the upper bearing plate and the ball core, so the rotational capacity of the bearing is also improved, as shown in Fig. 2.

(a) upper bearing plate; (b) ball core; (c) lower bearing plate.

Where the ball core is in contact with the upper and lower bearing plates, lubricating oil is applied to reduce wear, and a special wear-resistant material is added to compensate for the loosening caused by wear.

Model building and unit division

The finite element model is established as shown in Fig. 4, with a total of 33,376 nodes. The contact region between the lower bearing plate and the upper bearing plate adopts C3D4 elements, 8,320 in total, and the other parts adopt C3D8R elements, 25,472 in total. The minimum element size is 3.6 mm, and the mesh of the whole bearing is shown in Fig. 5.
Contact and constraint settings

Considering that the upper bearing plate, the lower bearing plate and the ball core are not completely fixed in the actual working state but are allowed to slide and rotate, the surface-to-surface contact method is adopted in ABAQUS to constrain them and simulate the actual working state [30]. A fixed constraint is applied at the bottom of the lower bearing plate, and the vertical pressure, vertical tension and horizontal shear force are applied to the upper bearing plate as surface loads.

Load condition

According to the actual working state of the spherical bearing, the following four working conditions are defined.
1. Working condition 1: 3400 kN vertical pressure;
2. Working condition 2: 1000 kN vertical tension;
3. Working condition 3: 3400 kN vertical pressure and 800 kN horizontal shear force in the X or Z direction;
4. Working condition 4: 1000 kN vertical tension and 800 kN horizontal shear force in the X or Z direction.

Results and analysis

By modeling in ABAQUS, defining the materials, and adding the boundary conditions and loads, the stress results of the bearing under the above loads can be obtained. The maximum stress of the bearing and its position under working conditions 1 and 2 are shown in Table 1 (Table 1: stress and position of the bearing under working conditions 1 and 2). It can be seen from Table 1 that, under the same vertical pressure, the maximum stress of the bearing gradually increases with the rotation angle of the upper bearing plate, and its position is at the ball core. Under the same vertical tension, the maximum stress of the bearing is largest when the rotation angle of the upper bearing plate is 2°, and it is located in the wedge-shaped region of the lower bearing plate. Under working condition 3, the maximum stress and its position are shown in Table 2. As can be seen from Table 2, under the combined action of the same vertical pressure and X-direction shear force, the maximum stress of the bearing gradually decreases as the rotation angle of the upper bearing plate increases, and the position of the maximum stress mostly occurs at the ball core. Under the combined action of the same vertical pressure and Z-direction shear force, the maximum stress increases with the rotation angle of the upper bearing plate, and its position is always the wedge-shaped part of the lower bearing plate. Under working condition 4, the maximum stress and its position are shown in Table 3. From Table 3, it can be concluded that, under the combined action of the same vertical tension and X-direction shear force, the maximum stress appears when the rotation angle of the upper bearing plate is 1°, located at the wedge-shaped part of the upper bearing plate. Under the combined action of the same tension and Z-direction shear force, the maximum stress also appears when the rotation angle of the upper bearing plate is 1°, again at the wedge-shaped part of the upper bearing plate. Under vertical pressure, the force is mainly carried by the ball core: because of the deflection of the upper bearing plate, the contact area between the ball core and the upper bearing plate becomes smaller, so the stress increases.
Under vertical tension, according to the force transmission mechanism of the bearing, the force is mainly carried by the wedge-shaped parts; as the deflection angle increases, the stress on the deflection side gradually increases. Under the combined action of vertical pressure and shear force, the wedge-shaped parts of the upper and lower bearing plates come into contact because of the shear force, so the maximum stress no longer appears only at the ball core. Under the combined action of vertical tension and shear force, the effect of the deflection angle is accommodated by the ball core, so the maximum stress is less than in the case without a deflection angle.

Fatigue life of the spherical bearing structure

The finite element stress results obtained with ABAQUS in the previous section were imported into the fatigue software for fatigue life analysis. The fatigue attributes, algorithm and load of the bearing were set, and the fatigue calculation results were then imported into ABAQUS for post-processing.

Fatigue attributes

The correct definition of the material fatigue attributes is one of the necessary conditions for fatigue life prediction, and their accuracy directly affects the accuracy of the fatigue analysis. The model of the spherical bearing is made of cast steel. In order to obtain accurate fatigue properties, the required materials can be defined in the fatigue software: G20Mn5QT in the material library is selected as the cast steel material, which contains information such as the yield strength, ultimate strength, elastic modulus, Poisson's ratio, and the corresponding S-N and ε-N curves.

Fatigue algorithm

In the fatigue simulation, it is necessary to choose an appropriate fatigue algorithm from those provided by the software; the selected algorithm is used throughout this paper to effectively avoid calculation errors.

Fatigue load

For the load input of the fatigue analysis, the nodal stress results under a unit load are input first and then taken as the time load history, multiplied by the corresponding load history coefficients. In this paper, the fatigue life of the spherical bearing was simulated under four working conditions, namely cyclic pressure load, cyclic tension load, cyclic pressure-shear load, and cyclic tension-shear load. Therefore, when setting the load history coefficients, the selected coefficients are 1, 0 and 1. The load spectrum is shown in Fig. 6.

Vertical pressure action

As can be seen from Fig. 7, under the action of vertical pressure, when the rotation angle is 1°, the position of the lowest fatigue life of the bearing is the blue region of the ball core, that is, the area where fatigue failure occurs first, which is consistent with the maximum stress calculation. The fatigue life at this position is 10^5.483 = 304,088 cycles. The fatigue life of the upper and lower bearing plates is close to "infinite life"; the maximum fatigue life limit set for the cast steel material is 1×10^7 = 10 million cycles. When the upper bearing plate tilts at 0°, 1°, 2° and 3°, the minimum fatigue life is "infinite life", 10^5.483 = 304,088 cycles, 10^5.490 = 309,029 cycles and 10^5.488 = 307,609 cycles, respectively, and the location is at the ball core, consistent with the position of the maximum stress.
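The S-N curves mentioned above map a stress amplitude to a life. As a rough illustration of this mapping (using the Basquin relation with assumed placeholder coefficients, not the G20Mn5QT parameters actually stored in the fatigue software's material library):

```python
def basquin_life(sigma_a, sigma_f=900.0, b=-0.09):
    """Basquin S-N relation: sigma_a = sigma_f * (2N)^b  =>  N = 0.5*(sigma_a/sigma_f)^(1/b).
    sigma_f (fatigue strength coefficient, MPa) and b are assumed placeholder values."""
    return 0.5 * (sigma_a / sigma_f) ** (1.0 / b)

for s in (200.0, 300.0, 554.5):   # 554.5 MPa is the worst-case stress reported in the conclusions
    print(f"sigma_a = {s:6.1f} MPa -> N = {basquin_life(s):.3g} cycles")
```

With these assumed coefficients the highest reported stress maps to a life on the order of 10^5 cycles, the same order as the minimum lives computed below; the steep power law also explains why modest stress differences between rotation angles translate into large life differences.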
Vertical tension action
The stress results of working condition 2 were imported into the software for fatigue life calculation. Taking the rotation angle of 1° as an example, the calculation result is shown in Fig. 8.
As can be seen from Fig. 8, under the action of vertical tension, when the rotation angle is 1°, the lowest fatigue life of the bearing is at the wedge-shaped position of the lower bearing plate, which is also the first area where fatigue failure occurs. The fatigue life at this position is 10^5.489 = 308,318 cycles. The lowest fatigue life of the upper bearing plate is 10^5.974 = 941,889 cycles, which also appears in the wedge-shaped part. Because the ball core carries little force under vertical tension, its fatigue life is close to "infinite life". When the upper bearing plate tilts at 0°, 1°, 2° and 3°, the minimum fatigue life is 10^5.893 = 781,627 cycles, 10^5.489 = 308,318 cycles, 10^5.407 = 95,940 cycles, and "infinite life", respectively, and it occurs in the wedge-shaped region of the lower bearing plate, consistent with the position of the maximum stress.
Combined action of pressure and shear
The stress results of working condition 3 were imported into the software for fatigue life calculation. Taking the rotation angle of 1° as an example, the calculated results are shown in Figs. 9 and 10.
Fig. 9. Fatigue life cloud diagram of the bearing under pressure and X-direction shear force: a) upper bearing plate, b) ball core, c) lower bearing plate.
As can be seen from Fig. 9, under the combined action of vertical pressure and X-direction shear force, when the rotation angle is 1°, the lowest fatigue life of the bearing is at the blue region of the ball core, which is the area where fatigue failure occurs first. The fatigue life at this position is 10^5.514 = 326,587 cycles. The fatigue life of the upper and lower bearing plates is close to "infinite life". When the upper bearing plate tilts at 0°, 1°, 2° and 3°, the minimum fatigue life is 10^5.130 = 134,896 cycles, 10^5.514 = 326,587 cycles, 10^5.517 = 328,851 cycles and 10^5.485 = 305,492 cycles, respectively. When the rotation angle is 0°, fatigue failure first occurs at the wedge-shaped part of the lower bearing plate; at the other rotation angles it first occurs at the ball core, consistent with the maximum stress results.
Under pressure and Z-direction shear, when the upper bearing plate tilts at 0°, 1°, 2° and 3°, the minimum fatigue life of the bearing is 10^5.235 = 171,790 cycles, 10^5.194 = 156,314 cycles, 10^5.183 = 152,405 cycles and 10^5.231 = 170,215 cycles, respectively. Fatigue failure first occurs in the wedge-shaped region of the lower bearing plate, consistent with the position of the calculated maximum stress.
Combined action of tension and shear
The stress results of working condition 4 were imported into the software for fatigue life calculation. Taking the rotation angle of 1° as an example, the calculated results are shown in Figs. 11 and 12. When the upper bearing plate tilts at 0°, 1°, 2° and 3°, the minimum fatigue life is 10^5.778 = 599,791 cycles, 10^5.560 = 363,078 cycles, 10^5.796 = 625,172 cycles and 10^5.815 = 653,130 cycles, respectively, occurring at the upper bearing wedge, the lower bearing wedge, the upper bearing wedge and the lower bearing wedge, respectively.
Relative to the minimum at 2° under pressure and Z-direction shear, the minimum fatigue life increases by 12.7%, 2.5% and 11.6% at rotation angles of 0°, 1° and 3°, respectively (these ratios match the cycle counts listed above, e.g. 171,790/152,405 ≈ 1.127). Under vertical tension and X-direction shear, the minimum fatigue life of the bearing is smallest when the rotation angle of the upper bearing plate is 1°.
Compared with 1°, when the tilt angle is 0° and 2°, the proportions of the minimum fatigue life are 199.9% and 973.9%, respectively; when the rotation angle is 3°, the fatigue life is close to "infinite life". Under vertical tension and Z-direction shear, the minimum fatigue life of the bearing is also smallest when the rotation angle of the upper bearing plate is 1°. Compared with 1°, when the rotation angle of the upper bearing plate is 0°, 2° and 3°, the proportions of the minimum fatigue life are 65.1%, 72.1% and 79.8%, respectively.
Conclusions
To study the fatigue life of a large spherical bearing under actual working conditions, the finite element software ABAQUS was first used to build the spherical bearing model and simulate the mechanical behaviour of the bearing at different rotation angles under the four working conditions of pressure, tension, pressure-shear and tension-shear, and the stress characteristics under the corresponding working conditions were obtained. On this basis, the stress results were imported into the fatigue analysis software, and the fatigue life of each component of the spherical bearing was calculated under the four working conditions. The results show the following. The rotation angle of the upper bearing plate has an obvious influence on the maximum stress of the bearing under all working conditions. The maximum stress is 554.5 MPa and is located in the wedge-shaped region of the lower bearing plate under the combined action of tension and X-direction shear force when the rotation angle of the upper bearing plate is 1°. Under the other working conditions, the maximum stress of the bearing mostly occurs at the wedge-shaped positions of the upper and lower bearing plates. The influence of the rotation angle of the upper bearing plate on the minimum fatigue life of the bearing shows the same trend as its influence on the maximum stress. The minimum fatigue life is 13,000 cycles and is located in the wedge-shaped region of the lower bearing plate under the combined action of tension and X-direction shear force when the rotation angle of the upper bearing plate is 1°. Under the other working conditions, the lowest fatigue life of the bearing is also mostly found in the wedge-shaped regions of the upper and lower bearing plates. The wedge-shaped regions should therefore be strengthened in the design.
Keywords: spherical bearing, finite element analysis, static performance, fatigue life.
4,561.2
2022-06-21T00:00:00.000
[ "Engineering", "Materials Science" ]
Multistep Wind Speed Forecasting Using a Novel Model Hybridizing Singular Spectrum Analysis, Modified Intelligent Optimization, and Rolling Elman Neural Network
High-accuracy wind speed forecasting, an important part of electrical system monitoring and control, is essential to protect the safety of wind power utilization. However, wind speed signals are intermittent and intrinsically complex, which makes them difficult to forecast accurately. Many traditional wind speed forecasting studies have focused on single models, which leads to poor prediction accuracy. In this paper, a new hybrid model is proposed to overcome the shortcomings of single models by combining singular spectrum analysis, modified intelligent optimization, and the rolling Elman neural network. In this model, in addition to the multiple seasonal patterns used to reduce interference from the original data, the rolling model is used to forecast the multistep wind speed. To verify the forecasting ability of the proposed hybrid model, 10 min and 60 min wind speed data from the province of Shandong, China, were used as the case study. Compared with the other models, the proposed hybrid model forecasts the wind speed with higher accuracy.
Introduction
In the past few decades, with environmental degradation and resource depletion, renewable energy [1] has received increasing attention. Wind energy, one of the cleanest forms of renewable energy, is developing rapidly throughout the world. With the rapid increase in the utilization of wind energy, the primary concern is the security and stability of feeding electricity into the grid [2]. High-accuracy wind speed forecasting is an important part of electrical system monitoring and control. However, due to the instability of wind energy and its inherent complexity, transferring electricity into the power grid is limited and costly [3,4]. To improve the efficiency of wind power and reduce the overall cost of wind energy, accurate prediction of wind speed is necessary.
Many methods have been proposed to improve the forecasting accuracy of wind speed in recent decades. Based on their computational mechanism, these forecasting models can be grouped into four main categories: (i) physical models, (ii) statistical models, (iii) intelligence models, and (iv) hybrid forecasting models [5].
Physical methods [6,7], which are based on the lower atmosphere or numerical weather prediction (NWP), can forecast the wind speed accurately. However, physical methods require long running times and are not suitable for short-term forecasting. Statistical models [8-11], known as time-series-based models, rely on historical data. These models are trained with measurement data, and the differences between the forecasted and actual wind speed are used to adjust the model parameters. The ARMA and ARIMA models are the most popular models used to forecast future wind speed. Many forecasting results based on statistical models show that these models are useful in the wind speed forecasting field [12-15]. These models have numerous advantages: they need only historical wind speed data and are easy to implement. However, if the nonlinear characteristics of the wind speed series are prominent, the prediction accuracy of these methods decreases rapidly. Intelligent methods adopt artificial intelligence (AI) theories or evolutionary algorithms to forecast wind speed. Many intelligent methods are used for wind speed forecasting, such as the ANN (Artificial Neural Network) [16-18], the FLM (Fuzzy Logic Method) [19,20], and the SVM (Support Vector Machine) [18,21-26]. Unlike single methods, the hybrid methods proposed by experts and scholars combine several models to improve the accuracy of wind speed forecasting. Most recently proposed forecasting methods are hybrids, and decomposition algorithms are often used to enhance their precision [27,28].
Chaotic theory has been used to handle time series in many fields [29-31]. Considering the chaotic characteristics of wind speed series, a hybrid prediction model using the largest Lyapunov exponent prediction method was introduced in [32]. Due to the inherent complexity of wind speed, describing its moving trend and predicting it accurately is difficult. Therefore, many studies use additional methods to enhance the forecasting capacity on the original series. These include hybrid models that employ different approaches or combine different forecasting models to extract the inner traits of the original series in different respects. For the former type of hybrid models, the most common methods, such as the wavelet transform (WT), singular spectrum analysis (SSA), and empirical mode decomposition (EMD), are used to preprocess the original series before forecasting the wind speed [32-35]. These data processing methods eliminate the influence of outliers, thereby improving the forecasting accuracy.
In this paper, a novel algorithm is proposed that hybridizes SSA (singular spectrum analysis), FAPSO (Firefly Algorithm and Particle Swarm Optimization), and the RENN (rolling Elman neural network) to forecast wind speed. To verify the performance of the model, several hybrid models and single models are also used to forecast wind speed. In this model, besides the multiple seasonal patterns used to reduce interference from the original data, the rolling model is used to forecast the multistep wind speed. To verify the forecasting ability of the proposed hybrid model, 10 min and 60 min wind speed data from the province of Shandong, China, were used as the case study.
The details of the algorithm are described below, and the flow diagram is shown in Figure 1.
Step 1. The SSA is used to decompose the original wind speed datasets into several subseries, from which a new series is reconstructed. The wind speed data used in this paper are typical chaotic time series, and the use of SSA can eliminate the influence of outliers and improve the prediction accuracy of the wind speed forecasting model.
Step 2. The hybrid optimization algorithm (FAPSO), which combines the FA with the PSO, is used to optimize the weights and thresholds of the ENN model. The optimization algorithm provides better initial weights and thresholds to the ENN and improves the search ability. Compared with a single optimization model, the hybrid optimization model has a better optimization effect.
Step 3. Construct the ENN model for the reconstructed series, and use the established model to forecast the one-step wind speed. The optimized ENN model avoids becoming trapped in local optima, and the global searching ability of the algorithm is enhanced.
Step 4. The rolling ENN model is used to forecast the multistep results. High-precision multistep wind speed forecasting helps electricity production in various ways, such as avoiding a power-grid collapse, reducing production costs, and reducing the spinning reserve capacity of thermal power units.
Step 5. The Diebold-Mariano test is used to validate the accuracy and stability of the proposed model.
Methodology
Numerous methods are involved in this paper. In this section, the relevant algorithms, including singular spectrum analysis, the firefly algorithm, particle swarm optimization, and the hybrid model, are described in detail.
Singular Spectrum Analysis. Singular spectrum analysis [36-38] is a signal processing technique capable of capturing the intrinsic oscillation modes of a signal. The SSA has two main stages: decomposition and reconstruction.
To perform the embedding, the original time series (x_1, ..., x_N) is mapped into a sequence of lagged vectors of size L by forming K = N − L + 1 lagged vectors X_i = (x_i, ..., x_{i+L−1})^T, i = 1, ..., K. The trajectory matrix is then derived as
X = [X_1 : X_2 : ... : X_K].
Both the rows and columns of X are subseries of the original series, and X is a Hankel matrix, i.e. it has equal elements on the antidiagonals.
In the singular value decomposition step, the SVD of the matrix X can be computed through the eigenvalues and eigenvectors of the matrix XX^T. Let λ_1 ≥ λ_2 ≥ ... ≥ λ_L ≥ 0 be the eigenvalues of XX^T and U_i the corresponding eigenvectors. The singular value decomposition of the trajectory matrix X is then
X = X_1 + X_2 + ... + X_d, with X_i = √λ_i U_i V_i^T and V_i = X^T U_i / √λ_i,
where d is the number of nonzero eigenvalues. The matrices X_i are elementary (rank-one) matrices. The collection (√λ_i, U_i, V_i) is called the i-th eigentriple of the SVD. Each eigentriple consists of an eigenvector, a factor vector, and a singular value; U_i and V_i are the i-th left and right singular vectors of X, respectively.
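To make the two SSA stages concrete, here is a minimal NumPy sketch of the embedding, SVD, grouping, and diagonal-averaging steps described above. The function name and arguments are illustrative rather than taken from the paper, and the commented example uses the paper's settings (window length L = 400, trend from the leading eigentriples).

```python
import numpy as np

def ssa_trend(x, L, n_components):
    """Basic SSA: embed the series, take the SVD, keep the leading
    eigentriples, and diagonally average back to a series."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    # Trajectory (Hankel) matrix: column i is the lagged vector (x_i, ..., x_{i+L-1}).
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Grouping: sum the elementary matrices of the leading eigentriples.
    X_hat = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    # Diagonal (Hankel) averaging maps the matrix back to a length-N series.
    trend = np.zeros(N)
    counts = np.zeros(N)
    for j in range(K):
        trend[j:j + L] += X_hat[:, j]
        counts[j:j + L] += 1
    return trend / counts

# Example with the paper's settings for the 10 min series:
# x = np.loadtxt("wind_speed_10min.txt")  # hypothetical data file
# trend = ssa_trend(x, L=400, n_components=150)
```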
Elman Neural Network (ENN). The Elman recurrent neural network, proposed by Elman, is a partial recurrent network model [39]. Compared with the classic feed-forward perceptron and the pure recurrent network, the ENN has a context layer that feeds the hidden-layer outputs of the previous time steps back into the network. The context layer enhances the ability to process dynamic information and improves the forecasting accuracy. The neurons contained in each layer disseminate information from one layer to another. The nonlinear state-space expression of the Elman network is
y(t) = g(w_3 x(t)),
x(t) = f(w_1 x_c(t) + w_2 u(t − 1)),
x_c(t) = x(t − 1),
where y is the m-dimensional output vector; x is the n-dimensional hidden-layer vector; u is the r-dimensional input vector; x_c is the feedback state vector; w_1, w_2, and w_3 denote the corresponding weights; g(·) is the transfer function of the output neurons; and f(·) is the transfer function of the hidden neurons. The weights of the network are then adjusted to minimize the squared error between the actual values and the forecasting results,
E = Σ_t [y(t) − ỹ(t)]²,
where ỹ(t) is the target output vector.
Although the ENN has strong predictive power, its limitations are obvious: the initial weights and thresholds of the ENN are randomly generated, the training speed is slow, and the ENN is susceptible to falling into local optima. An intelligent optimization algorithm can effectively overcome these shortcomings.
Firefly Algorithm and Particle Swarm Optimization (FAPSO). The optimization algorithm is composed of the firefly algorithm and particle swarm optimization. Compared with a single optimization algorithm, the proposed optimization algorithm avoids many shortcomings and finds a better solution.
Firefly Algorithm (FA). The firefly algorithm, proposed by Yang, is a multimodal nature-inspired metaheuristic algorithm based on the flashing behavior of fireflies [40,41]. The algorithm has proved effective in solving linear design problems and multimodal optimization problems. The firefly algorithm has two stages, described as follows.
Step 1. The brightness depends on the intensity of the light emitted by the firefly. Suppose there is a group of fireflies and the position of the i-th firefly is x_i, where f(x_i) indicates its fitness value. The brightness I_i of a firefly is chosen to reflect the fitness value of its current position, I_i = f(x_i).
Step 2. Every firefly has a unique attractiveness β, which indicates its ability to attract other fireflies. The attractiveness is related to the distance r_ij between the two corresponding fireflies at locations x_i and x_j, and the attractiveness function β(r) is computed as
β(r) = β_0 e^{−γ r²},
where β_0 is the largest attraction and γ is the coefficient of light absorption. The movement of a less bright firefly i toward a brighter firefly j is computed as
x_i = x_i + β_0 e^{−γ r_ij²} (x_j − x_i) + α · rand,
where α is the randomization parameter and rand is a randomly selected number in the interval [0, 1].
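The two FA rules above translate directly into code. The following is a minimal sketch of a single movement step; beta0, gamma, and alpha are assumed parameter values, and the random term uses the common rand − 1/2 centring convention, which may differ in detail from the paper's exact formula.

```python
import numpy as np

rng = np.random.default_rng(0)

def move_firefly(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.2):
    """Move firefly i toward a brighter firefly j.

    Attractiveness decays with squared distance as beta0 * exp(-gamma * r^2);
    the alpha-scaled random term keeps the search stochastic.
    """
    r2 = float(np.sum((x_i - x_j) ** 2))
    beta = beta0 * np.exp(-gamma * r2)
    return x_i + beta * (x_j - x_i) + alpha * (rng.random(x_i.shape) - 0.5)

# One step on a 2-D problem: x_i drifts toward the brighter x_j.
x_i, x_j = np.array([0.0, 0.0]), np.array([1.0, 1.0])
print(move_firefly(x_i, x_j))
```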
Particle Swarm Optimization (PSO). Particle swarm optimization, developed by Kennedy and Eberhart [42,43], is based on the behavior of bird flocks. The principle of PSO is to assume particles that have no mass or volume and fly like birds in a multidimensional space; each particle not only adjusts its position but also exchanges information about its current position in the search space according to its own earlier experience and that of its neighbors. In this mechanism, the members of the swarm communicate their information and modify their positions and velocities using the group information, according to the best positions appearing in the current movement of the swarm. The particles of the swarm find the optimal point by cooperation. The particle velocity and position updating formulas are
v_i(t + 1) = w v_i(t) + c_1 rand(1) (p_i(t) − x_i(t)) + c_2 rand(1) (g(t) − x_i(t)),
x_i(t + 1) = x_i(t) + v_i(t + 1),
where w is the inertia weight; v_i is the velocity of the i-th particle; c_1 and c_2 are acceleration coefficients (nonnegative constants); rand(1) is a random value between 0 and 1; x_i(t) is the position of the i-th particle; and p_i(t) and g(t) denote the local best of the i-th particle and the global best among the local bests at time t. A rudimentary PSO algorithm is outlined in Algorithm 2.
Hybrid Optimization Algorithm. In this section, a modified optimization model hybridizing the firefly algorithm and particle swarm optimization is proposed to improve the accuracy of wind speed forecasting. The specifics of FAPSO are described below.
Step 1. The firefly algorithm is used to optimize the fireflies: (1.1) initialize the fireflies; (1.2) compute the brightness of each firefly using the objective function; (1.3) move the fireflies and evaluate the new fireflies; (1.4) rank the fireflies and record the current best; (1.5) optimize the fireflies in total.
Step 2. The particle swarm algorithm is used to search for the best particle.
Test of FAPSO. To verify the optimization performance and convergence speed of the modified algorithm, four benchmark functions are selected in this paper. These benchmark functions have different characteristics, which are used to fully investigate the optimization ability of the algorithm. The four common test functions are shown in Table 1, and the experimental parameters of PSO and FAPSO are shown in Table 2. Thirty independent experiments, searching for the minimum value point over 2000 iterations, were carried out. The results, including the maximum, minimum, average, and standard deviations, are displayed in Table 3. The results show that the optimization effect of FAPSO is better than that of PSO.
The Proposed Hybrid Model. In this paper, a novel algorithm, SSA-FAPSO-RENN, is proposed to forecast wind speed. SSA is used to acquire the moving tendency of the wind speed and enhance the forecasting ability. The hybrid optimization algorithm (FAPSO), which combines the FA and the PSO, is used to optimize the parameters of the ENN model. To forecast the multistep wind speed, the rolling Elman neural network (RENN) model is used. For convenience, the proposed hybrid model is named the SSA-FAPSO-RENN model.
Experimental Simulation
In this section, the details of the experimental simulation are introduced. Wind speed series of 10 min and 60 min resolution are used to verify the performance of the model.
Performance Metric. The primary concern is to determine whether the prediction model is superior to other models. The performance of a model is usually evaluated using statistical criteria. To estimate the forecasting performance, the Diebold-Mariano (DM) test and three error criteria are adopted: MAE, MAPE, and MSE. The DM test [44] is a comparison test that focuses on predictive accuracy and can be used to evaluate the forecasting performance of the proposed hybrid model against the comparison models. The details of the DM test are as follows. The DM statistic is
DM = d̄ / √(S² / n), with d_t = L(e_{t+τ}^{(1)}) − L(e_{t+τ}^{(2)}),
where L(·) is the loss function, e_{t+τ}^{(1)} and e_{t+τ}^{(2)} are the forecast errors from the two models, d̄ is the mean loss differential over the n forecasts, and S² is an estimator of the variance of d_t. The null hypothesis is that the two forecasts have the same accuracy. Under the null hypothesis, the test statistic DM is asymptotically N(0, 1) distributed. If |DM| > z_{α/2}, the null hypothesis is rejected.
The detailed equations of the three error criteria are given as follows.
MAE (Mean Absolute Error): MAE = (1/n) Σ_{t=1}^{n} |y_t − ŷ_t|.
MAPE (Mean Absolute Percentage Error): MAPE = (100%/n) Σ_{t=1}^{n} |(y_t − ŷ_t)/y_t|.
MSE (Mean Squared Error): MSE = (1/n) Σ_{t=1}^{n} (y_t − ŷ_t)².
Here y_t and ŷ_t denote the real and predicted values at time t, respectively. To further assess the forecasting accuracy, every wind speed series is divided into a training set and a validation set. In addition, an entire day of data is used as a test set to test the forecasting ability of the models.
Wind Speed. The first case study is 10 min wind speed forecasting. The total number of available samples is 1152. The training set includes 806 wind speed data points and the validation set includes 140 data points; the remaining data are used to assess the predictive ability of the models. Figure 2 shows four wind speed datasets from three wind observation sites corresponding to the four seasons.
The second case study is 60 min wind speed forecasting. The total number of available samples is 1032. The training set includes 806 data points, the validation set includes 202 data points, and the remaining data form the test set. Figure 3 shows four wind speed datasets (60 min wind speed) from three wind observation sites corresponding to the four seasons.
From Figures 2 and 3, several features can be summarized: (a) the data for the four seasons are quite different; (b) there are three wind observation sites, and the wind speed data from the same site are similar; (c) the intensity of the wind is large in winter but small in summer; (d) the experimental datasets reveal the chaotic nature and intrinsic complexity of wind speed.
Setting the parameters of the SSA is very important for the forecasting effect. The window length L is the only parameter in the SSA decomposition process. The window length is chosen as an integer fulfilling the conditions 1 < L < N and L ≤ N/2, where N is the data length. In this paper, the data lengths of the 10 min and 60 min wind speed series are 1152 and 1032, respectively; therefore, the window length is chosen as 400. Choosing L = 400 allows the trend to be extracted simultaneously. Since the trend of the wind speed series is complex, many eigentriples are required to reconstruct it. In this paper, the trend is reconstructed from eigentriples 1-150 in the 10 min wind speed experiments, and the 60 min wind speed trend is reconstructed from eigentriples 1-100. Figure 4 depicts the initial series and trend of the 10 min wind speed series from wind observation site A.
Parameters of the Hybrid Model. Setting the parameters is very important for the prediction of wind speed. To compare the prediction performance of the models and reach a scientific conclusion, the initial parameters of these models need to be unified. The details are shown in Table 4.
Figure 5 shows the multistep predicted results for the original wind speed data from the spring datasets of wind observation site A, using the different models involved. The forecasting results are given in Tables 5-7. To reflect the forecasting results more directly, the results of Tables 5-7 can be averaged. The average results were calculated over the three wind observation sites and the four seasons; they are shown in Table 8. Table 8 indicates the following: (1) The 1-step forecasting results are better than the 2-step, 3-step, and 5-step results. For example, the MAPE values of the SSA-FAPSO-RENN model change from 5.06% to 6.43%, 7.69%, and 9.45% at 2-step, 3-step, and 5-step. The same conclusion can be reached for the other models.
(2) Among all the single models involved, the RENN model has the best performance except for the 1-step forecasting result, and the ARIMA model has the worst performance at every forecasting step. (3) Compared with the combined models, the forecasting performance of the single models is relatively poor.
High-accuracy wind speed forecasting, an important part of electrical system monitoring and control, is crucial to protect the safety of wind power utilization but is always a difficult and arduous task. Compared with the other forecasting models involved in this paper, the proposed hybrid model has better forecasting ability in the 10 min wind speed forecasting study.
Case Study Two: 60 Min Wind Speed Forecasting. In this case, one-hour wind speed series are used to test the forecasting capacity of the proposed hybrid model. Figure 6 depicts the initial series and trend of the 60 min wind speed series from wind observation site A. The forecasting results of the proposed hybrid model, SSA-FAPSO-RENN, are compared with those of BPNN, ARIMA, RENN, SSA-RENN, and SSA-PSO-RENN. Figure 7 shows the multistep predicted results for the one-hour wind speed data from the spring datasets of wind observation site A, using the different models involved. The estimated results of these predictions are given in Tables 9-11.
In Tables 9-11, the values in bold indicate the smallest values of MSE, MAPE, and MAE; the minimum values and the predictive values of the proposed model are approximately the same. Compared with the 10 min forecasting results, the one-hour wind speed forecasts have larger errors, and the error rises as the number of prediction steps increases. Some wind speed series are better suited to other models but, overall, the proposed model has better prediction ability.
To reflect the forecasting results more directly, the results of Tables 9-11 can be averaged; the average results are shown in Table 12. Table 12 indicates the following: (7) The 1-step forecasting results are better than the 2-step, 3-step, and 5-step results; the MAPE values of the proposed model show this. (10) The above conclusion can also be reached with the MSE and MAE values.
The forecasting results are generally good. The proposed hybrid model can be used to forecast 60 min wind speed. Compared with the traditional single models and the other models involved in this paper, the proposed model has the best forecasting ability. The forecasting results also show that the model performs better in the 10 min wind speed study than in the 60 min study.
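The three error criteria and the DM test used throughout Tables 5-13 can be sketched compactly as below. The DM implementation is simplified: it omits the long-run variance (autocorrelation) correction that a careful multistep comparison would include, so it illustrates the statistic rather than reproducing the paper's exact test.

```python
import numpy as np
from scipy import stats

def mae(y, y_hat):
    return np.mean(np.abs(y - y_hat))

def mape(y, y_hat):
    return 100.0 * np.mean(np.abs((y - y_hat) / y))

def mse(y, y_hat):
    return np.mean((y - y_hat) ** 2)

def dm_test(e1, e2, power=2):
    """Simplified Diebold-Mariano test for equal predictive accuracy.

    e1, e2 are the forecast error series of the two competing models;
    the loss differential uses |error|**power as the loss function.
    """
    d = np.abs(e1) ** power - np.abs(e2) ** power
    dm = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
    p_value = 2.0 * (1.0 - stats.norm.cdf(abs(dm)))  # DM is asymptotically N(0, 1)
    return dm, p_value
```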
The average results of study one and study two are shown in Table 13. The proposed model has the best performance as evaluated by MAPE, MAE, and MSE. The Diebold-Mariano values of the SSA-PSO-RENN and SSA-RENN models are larger than the upper limits at the 10% significance level, the DM values of the RENN and BP models are larger than the upper limits at the 5% significance level, and the DM value of the ARIMA model is larger than the upper limit at the 1% significance level.
Conclusions
Wind power systems need accurate and reliable technology for short-term wind speed forecasting. Due to the influence of various meteorological factors, wind speed series are intermittent and random, making it difficult to forecast wind speed using a single model. The focus of recent research has been the development of new methods and combinations of methods. However, individual models do not always achieve desirable performance. Hybrid models can decrease the negative influences intrinsic to each of the individual models: they can use the advantages of each individual model and are less sensitive, in certain cases, to the factors that cause the individual models to perform poorly. Therefore, the hybrid model is more effective than individual models for wind speed forecasting.
To forecast the 10 min and one-hour wind speed more accurately, a new hybrid model, SSA-FAPSO-RENN, is proposed, which overcomes many limitations of single models, such as poor prediction accuracy and manually chosen parameters. The forecasting results show that the proposed model improves the accuracy of 10 min and 60 min wind speed forecasting. Compared with the other models involved in this paper, the prediction precision of the proposed model is the highest.
Figure 1: The flowchart of the proposed integrated forecasting model (the FA searches for 20 better fireflies, which serve as the initial particles for the PSO; the coordinates of the best particle supply the ENN parameters ω and b, and a rolling window of length N produces the multistep forecasts).
Figure 2: Four wind speed datasets (10 min speed) from three wind observation sites corresponding to the four seasons.
Figure 3: Four wind speed datasets (60 min speed) from three wind observation sites corresponding to the four seasons.
Figure 4: The initial series and trend of the 10 min wind speed series from wind observation site A.
Figure 5: The multistep predicted results of the 10 min wind speed series using the different involved models.
Figure 6: The initial series and trend of the 60 min wind speed series from wind observation site A.
Figure 7: The multistep predicted results of the one-hour wind speed series using the different involved models.
Table note: the values in bold indicate the smallest values of MSE, MAPE, and MAE. The smallest values are not always those of the proposed model, but the minimum values and the predictive values of the proposed model are very close; the results may differ between error criteria. The proposed model is shown to have better prediction accuracy for most of the sample wind speed series.
Table 2: The experimental parameters of PSO and FAPSO.
Datasets. To verify the forecasting ability of the proposed hybrid model, 10 min and 60 min wind speed data (January 1, 2011, to November 9, 2011) from the province of Shandong, China, are used as the case study in this paper. In the two tests, multiple seasonal patterns are used to reduce interference from the original data: March 1 to May 31 (spring), June 1 to August 31 (summer), September 1 to November 9 (fall), and January 1 to February 28 (winter); the wind speed datasets are randomly selected.
Case Study One: 10 Min Wind Speed Forecasting. In this section, 10 min wind speed series, taken from four datasets at three wind observation sites, are used to test the forecasting capacity of the proposed hybrid model. The forecasting results of the SSA-FAPSO-RENN model are compared with those of BPNN, ARIMA, RENN, SSA-RENN, and SSA-PSO-RENN. The BPNN, ARIMA, and RENN models are single models; the others are combination models. The parameters of the BPNN are the same as those of the ENN. The MAE, MAPE, and MSE values are the evaluation standard.
Table 3: Test results of PSO and FAPSO.
Table 4: Parameters of the hybrid model.
Table 5: Performance evaluations of different models for the forecast of the 10 min wind speed series from wind observation site A.
Table 6: Performance evaluations of different models for the forecast of the 10 min wind speed series from wind observation site B.
Table 7: Performance evaluations of different models for the forecast of the 10 min wind speed series from wind observation site C.
Table 8: Average results of the 10 min wind speed.
Table 9: Performance evaluations of different models for the forecast of the one-hour wind speed series from wind observation site A.
Table 10: Performance evaluations of different models for the forecast of the one-hour wind speed series from wind observation site B.
Table 11: Performance evaluations of different models for the forecast of the one-hour wind speed series.
Table 12: Average results of the one-hour wind speed.
6,011.6
2016-12-04T00:00:00.000
[ "Engineering", "Computer Science" ]
Pilot Evaluation of the Long-Term Reproducibility of Capillary Zone Electrophoresis–Tandem Mass Spectrometry for Top-Down Proteomics of a Complex Proteome Sample
Mass spectrometry (MS)-based top-down proteomics (TDP) has revolutionized biological research by measuring intact proteoforms in cells, tissues, and biofluids. Capillary zone electrophoresis–tandem MS (CZE-MS/MS) is a valuable technique for TDP, offering a high peak capacity and sensitivity for proteoform separation and detection. However, the long-term reproducibility of CZE-MS/MS in TDP remains unstudied, which is a crucial aspect for large-scale studies. This work investigated the long-term qualitative and quantitative reproducibility of CZE-MS/MS for TDP for the first time, focusing on a yeast cell lysate. Over 1000 proteoforms were identified per run across 62 runs using one linear polyacrylamide (LPA)-coated separation capillary, highlighting the robustness of the CZE-MS/MS technique. However, substantial decreases in proteoform intensity and identification were observed after some initial runs due to proteoform adsorption onto the capillary inner wall. To address this issue, we developed an efficient capillary cleanup procedure using diluted ammonium hydroxide, achieving high qualitative and quantitative reproducibility for the yeast sample across at least 23 runs. The data underscore the capability of CZE-MS/MS for large-scale quantitative TDP of complex samples, signaling its readiness for deployment in broad biological applications. The MS RAW files were deposited in the ProteomeXchange Consortium with the data set identifier PXD046651.
■ INTRODUCTION
Mass spectrometry (MS)-based top-down proteomics (TDP) is a powerful technique for the identification and quantification of proteoforms in biological samples. 1 During the last several years, TDP has been deployed widely to discover new proteoform biomarkers of various diseases, e.g., cancer, 2−5 neurodegeneration, 6−9 cardiovascular diseases, 10 infectious disease, 11−14 and immunobiology. 15 MS-based TDP is providing more and more new insights into the functions of proteins in modulating cellular processes.
Due to the high complexity of the proteoforms in cells or tissues, high peak capacity separation of proteoforms before MS is crucial. Liquid chromatography (LC)-MS has been the widely used technique for TDP of complex samples. 16,17 Capillary zone electrophoresis (CZE) offers highly efficient separations of biomolecules according to electrophoretic mobility (μ_ef), which relates to their charge-to-size ratios. 18−25 Our group has shown the identification of hundreds to thousands of proteoforms from complex samples by single-shot CZE-MS measurements via innovations in capillary coating, online proteoform stacking, etc. 19,20,26 We further boosted the number of identified proteoforms from human cell lines to over 23,000 by coupling LC fractionation to CZE-MS. 3 Most recently, we developed online two-dimensional high-field asymmetric waveform ion mobility spectrometry-CZE-MS (FAIMS-CZE-MS) to benefit the identification of large proteoforms 27 and histone proteoforms. 28 We also showed the capability of CZE-MS for TDP of membrane proteins. 29 The Kelleher group documented the high sensitivity of CZE-MS for TDP and the reasonable complementarity between CZE-MS and LC-MS for proteoform identification. 30 The Ivanov group illustrated the potential of CZE-MS for TDP of single mammalian cells. 31
CZE-MS has made drastic progress in TDP and has been widely accepted as a useful tool for proteoform characterization. However, to use CZE-MS for large-scale TDP studies, we need to validate its long-term reproducibility for top-down MS measurements of complex samples. In this work, for the first time, we performed a pilot investigation of the long-term reproducibility of CZE-MS for TDP of a complex sample (i.e., a yeast cell lysate) to achieve a better understanding of the advantages, issues, and potential solutions of CZE-MS for large-scale TDP.
Sample Preparation
Yeast was cultivated in YPD (Yeast Extract–Peptone–Dextrose) Broth using a well-defined procedure. To begin, 50 g of YPD Broth was blended with 1 L of distilled water, ensuring a precise mixture. This suspension was autoclaved at 121 °C for 15 min. Following this, yeast cultures were introduced into detergent-free containers. Brief vortexing was then carried out to uniformly disperse the yeast cells throughout the medium. The yeast cultures were subsequently grown in a shaking incubator at 300 rpm.
After yeast cell collection and washing with PBS, 5 g of yeast cells was suspended in lysis buffer containing 8 M urea, complete protease inhibitors and PhosSTOP (Roche), and 100 mM ammonium bicarbonate (pH 8.0), followed by incubation on ice for 30 min with periodic vortexing. The cells were lysed for 3 min using a homogenizer (Fisher Scientific) and then sonicated at a 50% duty cycle, level 10 output, for 20 min on ice with a Branson Sonifier 250 (VWR Scientific). The yeast lysate was centrifuged at 14,000g for 10 min at 4 °C to collect the supernatant containing the extracted proteins. The total protein concentration was measured with a bicinchoninic acid (BCA) kit (Fisher Scientific) according to the manufacturer's instructions, and the sample was stored at −80 °C.
Buffer Exchange
In this study, an Amicon Ultra centrifugal filter (Sigma-Aldrich) with a molecular weight cutoff (MWCO) of 10 kDa was used for buffer exchange to remove the urea from the protein samples. The procedure began with wetting the filter with 20 μL of 100 mM ammonium bicarbonate, followed by centrifugation at 14,000g for 10 min. Subsequently, an aliquot of 200 μg of proteins was added to the filter, and centrifugation was carried out for 20 min at 14,000g. Then, 200 μL of 100 mM ammonium bicarbonate was added to the filter, followed by centrifugation at 14,000g for 20 min; this step was repeated twice to remove the urea and other small interferences completely. The final protein solution in 35 μL of 100 mM ammonium bicarbonate (protein concentration of 3 mg/mL) was collected for CZE-MS analysis. All centrifugation steps were performed at 4 °C.
Preparation of the Linear Polyacrylamide (LPA)-Coated Capillary
An LPA-coated capillary (1 m, 50 μm i.d., 360 μm o.d.) was prepared according to our previous procedure with minor modifications. 32
First, 3 μL of ammonium persulfate (APS) solution (5% [w/v] in water) was added to 500 μL of acrylamide solution (4% [w/v] in water), and the mixture was degassed with nitrogen gas for 5 min to remove the oxygen from the solution. Then, the mixture was loaded into the pretreated capillary using a vacuum, both ends of the capillary were sealed with silica rubber, and the capillary was incubated in a water bath at 50 °C for 40 min. Finally, a small portion (∼5 mm) of the capillary at each end was removed with a cleaving stone, and the unreacted solution (of an agarose-gel-like consistency) was pushed out of the capillary with water (200 μL) using a syringe pump. One end of the separation capillary was etched with hydrofluoric acid to reduce its outer diameter to around 100 μm. 33
CZE-ESI-MS/MS Analysis
The automated CE operation was performed using an ECE-001 CE autosampler from CMP Scientific (Brooklyn, NY). Through an electrokinetically pumped sheath-flow CE-MS interface (CMP Scientific, Brooklyn, NY), the CE system was coupled to a Q-Exactive HF mass spectrometer (Thermo Fisher Scientific). 34,35 For the CZE separation, the LPA-coated capillary (50 μm i.d., 360 μm o.d., 1 m in length) was used. A background electrolyte (BGE) of 5% (v/v) acetic acid (pH 2.4) was used for CZE. The sample buffer was 100 mM ammonium bicarbonate (pH 8). The dramatic difference in pH between the BGE and the sample buffer enabled online dynamic pH junction-based sample stacking. 26 The sheath buffer contained 0.2% (v/v) formic acid and 10% (v/v) methanol. The sample was injected into the capillary by applying pressure, and the sample injection volume was calculated from the pressure and injection time using Poiseuille's law. In this study, 5 psi was applied for 20 s for sample injection, corresponding to about 100 nL of sample-loading volume for a 1 m long separation capillary (50 μm i.d.). At the injection end of the separation capillary, a high voltage (30 kV) was applied for separation, and a voltage of 2−2.2 kV was applied in the sheath buffer vial for ESI. ESI emitters were pulled from borosilicate glass capillaries (1.0 mm o.d., 0.75 mm i.d., 10 cm length) with a Sutter P-1000 Flaming/Brown micropipette puller and had an opening size of 25−35 μm.
All experiments were conducted using a Q-Exactive HF mass spectrometer. A data-dependent acquisition (DDA) method was used for the yeast protein sample. The MS parameters were 120,000 mass resolution (at m/z 200), three microscans, a 3E6 AGC target value, a 100 ms maximum injection time, and a 600−2000 m/z scan range. For MS/MS, 60,000 mass resolution (at m/z 200), 1 microscan, a 1E6 AGC target, a 200 ms maximum injection time, a 4 m/z isolation window, and 20% normalized collision energy (NCE) were used. The top 8 most intense precursor ions in one MS spectrum were isolated in the quadrupole and fragmented via higher-energy collisional dissociation (HCD). Fragmentation was performed only on ions with intensities greater than 1E4 and charge states greater than 5. Dynamic exclusion was enabled with a duration of 30 s, and the "Exclude isotopes" function was enabled.
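As a cross-check of the roughly 100 nL loading volume quoted above, the Hagen-Poiseuille relation can be evaluated directly. The sketch below assumes a water-like sample viscosity of 1.0 mPa·s; the function name and argument layout are illustrative.

```python
import math

PSI_TO_PA = 6894.76  # 1 psi in pascals

def injected_volume_nl(dp_psi, t_s, capillary_id_m, capillary_len_m, eta=1.0e-3):
    """Estimate the pressure-injected sample volume (nL) via Hagen-Poiseuille.

    dp_psi: applied pressure; t_s: injection time in seconds;
    eta: sample viscosity in Pa*s (water assumed).
    """
    dp = dp_psi * PSI_TO_PA
    r = capillary_id_m / 2.0
    flow = dp * math.pi * r ** 4 / (8.0 * eta * capillary_len_m)  # m^3/s
    return flow * t_s * 1e12  # 1 m^3 = 1e12 nL

# 5 psi for 20 s on a 1 m x 50 um i.d. capillary gives ~106 nL,
# consistent with the ~100 nL stated above.
print(injected_volume_nl(5.0, 20.0, 50e-6, 1.0))
```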
Data Analysis
The complex-sample data were analyzed using Xcalibur software (Thermo Fisher Scientific) to obtain the intensities and migration times of the proteins. For the final figures, the electropherograms were exported from Xcalibur and formatted using Adobe Illustrator.
Proteoform identification and quantification were performed on the yeast protein RAW files using the TopPIC (Top-down mass spectrometry-based Proteoform Identification and Characterization) pipeline. 36 In the first step, the RAW files were converted into mzML files using the msconvert tool. 37 Spectral deconvolution, which converts precursor and fragment isotope clusters into monoisotopic masses, and proteoform feature detection were then performed using TopFD (Top-down mass spectrometry Feature Detection, version 1.5.6). 38 The resulting mass spectra and proteoform feature information were stored in msalign and text files, respectively. The database search was performed using TopPIC (version 1.5.6) against the UniProt yeast proteome database (UP000002311, 6060 entries, version 11/14/2022) concatenated with a shuffled decoy database of the same size. The maximum number of unexpected mass shifts was one. The mass error tolerances for precursors and fragments were 15 parts per million (ppm). The maximum unknown mass shift was 500 Da. To estimate the false discovery rates (FDRs) of the proteoform identifications, the target-decoy approach was used, and proteoform identifications were filtered at a 1% FDR at both the proteoform-spectrum-match (PrSM) level and the proteoform level. 39,40 The lists of identified proteoforms from all CZE-MS/MS runs are given in Supporting Information I. The TopDiff (Top-down mass spectrometry-based identification of Differentially expressed proteoforms, version 1.5.6) software was used to perform label-free quantification of the identified proteoforms with default settings. 41 The MS RAW files were deposited to the ProteomeXchange Consortium via the PRIDE 42 partner repository with the data set identifier PXD046651.
Capillary Cleanup
To remove proteins adsorbed on the capillary inner wall, the capillary was cleaned periodically by flushing successively with 0.5% NH4OH for 10 min at 30 psi, H2O for 10 min at 20 psi, and the BGE (5% acetic acid) for 10 min at 20 psi.
■ RESULTS AND DISCUSSION
For the first time, we studied the long-term reproducibility of CZE-MS/MS for TDP of a complex proteome sample, a yeast cell lysate, and developed an effective procedure for cleaning the inner wall of LPA-coated capillaries for reproducible CZE-MS/MS measurements of proteoforms. Figure 1A shows the experimental design of this project. Yeast cells were lysed by homogenization and sonication. The proteoform extract was analyzed by dynamic pH junction-based CZE-MS/MS 26 after a simple buffer exchange with a 10 kDa cutoff centrifugal filter unit. The yeast cell lysate was diluted to 1 mg/mL with 100 mM ammonium bicarbonate (pH 8) for CZE-MS/MS. Finally, the TopPIC software developed by Liu's group was used for the database search to identify and quantify proteoforms. Figure 1B shows the cleanup procedure used to remove the proteoforms adsorbed on the LPA polymer coating of the capillary inner wall.
Reproducibility of CZE-MS/MS for Top-Down Proteomics of a Complex Sample
CZE-MS/MS with a fresh LPA-coated capillary produced reproducible measurements of the yeast cell lysate, as evidenced by the example electropherograms and the numbers of proteoform identifications from roughly the first 10 runs, Figures 2A and 3A. As we kept running the yeast cell lysate, we observed that the proteoform peaks broadened gradually and the proteoform intensities decreased accordingly, Figure 2A. The peak width of one proteoform doubled in run 14 compared with run 1, and the proteoform intensity decreased by roughly a factor of 2. For runs 16, 22, and 23, the peak width of the example proteoform tripled, and the proteoform intensity was only 20% of that in run 1. The numbers of proteoform and protein IDs decreased markedly from run 10 to run 24, as shown in Figure 3A,B.
We suspected that this phenomenon was due to proteoform adsorption onto the LPA polymer coating of the capillary inner wall. As more and more CZE-MS/MS runs are performed, proteoforms are gradually adsorbed onto the capillary wall, and the adsorbed proteoforms can significantly affect the CZE separation. Proteoforms on the capillary inner wall are positively charged under the acidic BGE of 5% (v/v) acetic acid (pH 2.4), potentially generating a low reversed electroosmotic flow (EOF) in the capillary. The reversed EOF slows the migration of proteoforms in the capillary and increases the chance of peak broadening due to longitudinal diffusion and dispersion. 43 The reversed EOF could also affect the performance of the dynamic pH junction stacking because it could impede the migration of hydrogen ions from the BGE vial into the separation capillary for sample-zone titration. Figure S1B shows an example electropherogram of the yeast cell lysate after more than 30 continuous CZE-MS/MS runs without capillary cleanup. Once we cleaned the capillary inner wall using a procedure involving flushing with 0.5% ammonium hydroxide, water, and the BGE, the separation profile and the number of proteoform IDs recovered to nearly the original condition, Figure S1A,C. These data demonstrate that the cleanup method removes the adsorbed proteoforms efficiently.
After the first and second capillary cleanups, we observed the same behavior as with the fresh capillary: the numbers of proteoform and protein IDs declined as the runs continued, Figure 3A,B. Interestingly, after the third cleanup, the capillary inner-wall condition became more stable, evidenced by the relatively more consistent numbers of proteoform and protein IDs (Figure 3A,B) as well as more reproducible proteoform separations (Figure 2B). Our data suggest that, to achieve reproducible top-down MS measurements of a complex proteome sample by CZE-MS/MS, the experiment can be performed either with a fresh LPA-coated capillary (Phase I) or with an LPA-coated capillary after sufficient protein adsorption and capillary cleanup with 0.5% ammonium hydroxide (Phase II). The Phase II condition provided reproducible CZE-MS/MS measurements for more than 23 runs.
We further studied the pairwise overlap of identified proteoforms for the Phase I (runs 1−10) and Phase II (runs 40−62) conditions, Figure 3C. The medians of the proteoform overlap between any two CZE-MS/MS runs in Phase I and Phase II are both between 70 and 75%, which is comparable to CZE-MS/MS data in the literature. 44
This documents that CZE-MS/MS under both conditions can repeatedly identify the same proteoforms from the yeast cell lysate. The small variations in the identified proteoforms are most likely due to the randomness of data-dependent acquisition (DDA).
To investigate the quantitative reproducibility of CZE-MS/MS under both the Phase I and Phase II conditions, we studied the pairwise proteoform-intensity correlation coefficients for runs 1−10 and 40−62, Figure 3D. Label-free quantification of proteoforms was performed with the TopDiff software. 41 The intensities of the overlapping proteoforms between any two runs were used to compute Pearson linear correlation coefficients. The median for the Phase I runs is about 0.95, and the correlation coefficients have a narrow distribution, indicating high quantitative reproducibility. The median for the Phase II runs is about 0.85, indicating reasonable quantitative reproducibility. The much lower Pearson correlation coefficients in the Phase II runs are most likely due to the drastically lower proteoform intensities in those runs, as shown in Figure 2A,B.
To further confirm the suitability of CZE-MS/MS in the Phase II condition for accurate label-free quantification of proteoforms in a complex sample, after the 62 CZE-MS/MS runs of the yeast cell lysate we performed CZE-MS/MS analyses of a 3-fold diluted yeast cell lysate in quintuplicate, Figure 4. The CZE-MS/MS produced reproducible measurements of the original and diluted yeast cell lysates in terms of the separation profiles, the number of proteoform IDs (1204 ± 49 for the original vs 753 ± 33 for the diluted sample, relative standard deviations (RSDs) of 4%, n = 5), and the normalized level (NL) intensities (RSDs of 8−9%, n = 5), Figure 4A,B. The average NL intensity of the diluted sample is about 3 times lower than that of the original sample (4.7E7 vs 1.4E7), in good agreement with the dilution factor of 3 and demonstrating that CZE-MS/MS in the Phase II condition performs well for the relative quantification of proteoforms. We further analyzed the distribution of the proteoform intensity ratios between the original and diluted samples, Figure 4C; the median of the ratios is close to the theoretical ratio of 3. The number of matched fragment ions from the original sample is consistently higher than that from the diluted sample, most likely due to the much higher proteoform intensities, as shown in Figure 4D. The majority of the identified proteoforms have more than 10 matched fragment ions for both the original and diluted samples, indicating reasonably high confidence in the proteoform IDs.
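The pairwise overlap and log2-intensity correlations reported in Figure 3C,D can be sketched as follows. The dict-based input format and the overlap denominator (the smaller of the two runs) are our assumptions; the paper does not specify its exact convention.

```python
import numpy as np

def pairwise_reproducibility(run_a, run_b):
    """Overlap (%) and Pearson r of log2 intensities for proteoforms shared by two runs.

    run_a, run_b: dicts mapping proteoform IDs to intensities (hypothetical format).
    """
    shared = sorted(set(run_a) & set(run_b))
    overlap = 100.0 * len(shared) / min(len(run_a), len(run_b))
    log_a = np.log2([run_a[p] for p in shared])
    log_b = np.log2([run_b[p] for p in shared])
    r = np.corrcoef(log_a, log_b)[0, 1]
    return overlap, r

# Example with two toy runs sharing two of three proteoforms.
run1 = {"P1": 1.0e7, "P2": 5.0e6, "P3": 2.0e6}
run2 = {"P1": 9.0e6, "P2": 6.0e6, "P4": 1.0e6}
print(pairwise_reproducibility(run1, run2))
```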
The results presented in this study are critically important for CZE-MS/MS-based top-down proteomics of complex samples. First, the data document that CZE-MS/MS using one LPA-coated capillary can produce high-quality top-down proteomics data of a complex proteome sample for at least 78 h (67 runs at 70 min per run), indicating the high robustness of the system. Second, the study provides rich experimental data that can be extremely useful for developing a better understanding of CZE-MS for proteoform separation and characterization. Third, the results demonstrate that CZE-MS/MS with an appropriate operational procedure (i.e., capillary cleanup) can generate highly reproducible separation and identification of proteoforms in a complex sample across dozens of runs. CZE-MS/MS is thus ready for important biological applications to discover potentially critical proteoforms in biological processes and diseases in a quantitative manner. Fourth, the data also highlight some remaining challenges of CZE-MS/MS for large-scale top-down proteomics studies and point out some important directions for future work. For example, more effort is needed to create more consistent capillary inner-wall chemistry during CZE-MS/MS runs, which will eventually make CZE-MS/MS a powerful and highly reproducible technique for large-scale top-down proteomics studies.
Correlation of Experimental and Predicted Electrophoretic Mobility of Proteoforms under Different CZE-MS/MS Conditions
We have shown previously that the electrophoretic mobility (μ_ef) of proteoforms in CZE can be predicted well using a simple semiempirical model. 21,45 The experimental and predicted μ_ef of proteoforms have high linear correlation coefficients. This feature is critically useful for validating proteoform IDs and PTMs (e.g., phosphorylation). Here, we have multiple different CZE-MS/MS conditions: Phase I (runs 1−10), Phase II (runs 40−62), and the transition period between them (runs 11−39). We ask how those CZE-MS/MS conditions influence the correlation between the experimental and predicted μ_ef of proteoforms. The experimental μ_ef was calculated as
μ_ef = L² / [t_M (30 − 2)]   (1)
where L is the capillary length in cm, t_M is the migration time in seconds, and 30 and 2 are the separation voltage and electrospray voltage in kilovolts, respectively. For the predicted μ_ef (cm²·kV⁻¹·s⁻¹), we utilized eq 2,
where M and Q represent the molecular mass and charge number of each proteoform, respectively. We took M directly from the database search results. We obtained Q by counting the number of lysine, arginine, and histidine residues in the proteoform sequence and adding 1 for the N-terminus. Only proteoforms containing no PTMs and proteoforms carrying N-terminal acetylation or phosphorylation were used for this study. As shown in Figure 5A−C (top panels), strong linear correlations between the experimental and predicted μ_ef were observed for proteoforms without any PTMs (R² = 0.96). As shown in the middle panels, when the proteoforms with N-terminal acetylation or phosphorylation are included, those modified proteoforms fall off the main trend and have lower experimental μ_ef than the corresponding unmodified proteoforms. The reduction of the experimental μ_ef is due to the reduction of the charge (Q) by one caused by the N-terminal acetylation or phosphorylation, considering the acidic BGE of the CZE (i.e., 5% acetic acid, pH 2.4). After reducing the estimated net charge Q by one in the μ_ef prediction, we achieved strong linear correlations (R² = 0.95−0.96) for the unmodified proteoforms together with the proteoforms carrying N-terminal acetylation or phosphorylation, Figure 5A−C (bottom panels). These results suggest that the proteoforms identified in this study have high confidence because of the strong linear correlations between the experimental and predicted μ_ef. In addition, the data indicate that the different CZE-MS/MS conditions do not significantly affect the correlations between the experimental and predicted μ_ef. We note that the experimental μ_ef of the proteoforms becomes lower from run 5 (≥0.15, A) to run 52 (≥0.1, C), which is due to the much longer migration times of the proteoforms in run 52 compared with run 5, Figure 2.
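The charge-counting rule and the experimental-mobility calculation described above are simple enough to script. The sketch below uses eq 1 as reconstructed earlier (the effective voltage is taken as the 30 kV separation voltage minus the ~2 kV ESI voltage, which is our reading of the where-clause); the charge estimate counts only K, R, H and the N-terminus, reduced by one per N-terminal acetylation or phosphorylation, as in the text.

```python
def estimated_charge(sequence: str, n_neutralizing_mods: int = 0) -> int:
    """Charge Q under the acidic BGE: K, R, H residues plus the N-terminus,
    reduced by one per N-terminal acetylation or phosphorylation."""
    q = sum(sequence.count(aa) for aa in "KRH") + 1
    return q - n_neutralizing_mods

def experimental_mobility(length_cm, t_migration_s, v_sep_kv=30.0, v_esi_kv=2.0):
    """Experimental mobility (cm^2 kV^-1 s^-1), eq 1 as reconstructed above."""
    return length_cm ** 2 / (t_migration_s * (v_sep_kv - v_esi_kv))

# A proteoform migrating at 1800 s in the 100 cm capillary:
print(experimental_mobility(100.0, 1800.0))  # ~0.198, in line with the >=0.15 values of run 5
```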
We have some recommendations about using CZE-MS for quantitative top-down proteomics. For label-free quantification, we should not combine the Phase I and Phase II conditions because of the dramatic shifts in migration time, which make data alignment across CZE-MS runs challenging for relative quantification. If a small-scale label-free quantification is performed, for example, comparing two samples with only about 10 CZE-MS runs or fewer, then the Phase I condition will be ideal. If a large-scale study is needed, for example, comparing multiple samples with more than 20 CZE-MS runs, then the Phase II condition should be considered. Alternatively, stable isotopic labeling techniques (e.g., tandem mass tags46) can be employed. In this case, we may not need to worry about the Phase I or Phase II condition because the relative quantification is performed based on the data within the same CZE-MS runs.

Lists of identified proteoforms from all CZE-MS/MS runs (XLSX). Electropherograms of a yeast cell lysate by CZE-MS/MS in three instances (PDF).

Figure 1. (A) Schematic of the experimental design of sample preparation, CZE-MS/MS analysis, and database search. (B) Schematic of the capillary inner-wall cleanup procedure using NH4OH. The figure was created using BioRender and is used here with permission.

Figure 2. Electropherograms of a yeast cell lysate after analysis by CZE-MS/MS. (A) Example runs during the first 23 CZE-MS/MS measurements. (B) Six examples of CZE-MS/MS runs during the 40th to 62nd measurements.

Figure 3. Summary of the identified proteoforms and proteins from 62 CZE-MS/MS runs. (A) The number of proteoform IDs as a function of the run number. (B) The number of protein IDs as a function of the run number. The trends in the number of proteoform IDs and the times of capillary cleanup are marked. (C) Boxplots of pairwise proteoform overlaps for runs 1−10 and 40−62. (D) Boxplots of pairwise Pearson correlation coefficients of proteoform intensity for runs 1−10 and 40−62. Log2-transformed proteoform intensities were used to generate the Pearson correlation coefficients.

Figure 4. Comparisons of the original and diluted yeast cell lysate data from CZE-MS/MS analyses. (A) Base peak electropherograms of the original yeast cell lysate after CZE-MS/MS analyses in quintuplicate. (B) Base peak electropherograms of the 3-times-diluted yeast cell lysate after CZE-MS/MS analyses in quintuplicate. (C) Boxplot of the intensity ratio of overlapped proteoforms between the original and diluted yeast cell lysates. (D) Boxplots of the number of matched fragment ions of identified proteoforms from the original and diluted yeast samples.

Figure 5. Linear correlations between predicted μef and experimental μef of proteoforms from the yeast cell lysate identified in CZE-MS/MS runs 5 (A), 25 (B), and 52 (C). The top panels show the correlations for proteoforms without any PTMs. The middle panels show the correlations for all proteoforms without PTMs and with N-terminal acetylation or phosphorylation; the charge Q of those proteoforms was not corrected. The bottom panels show the correlations for the same proteoforms as the middle panels but with the charge Q corrected: for one N-terminal acetylation or phosphorylation, Q was reduced by one.
5,803
2024-02-28T00:00:00.000
[ "Chemistry", "Biology" ]
Interconnecting Carbon Fibers with the In-situ Electrochemically Exfoliated Graphene as Advanced Binder-free Electrode Materials for Flexible Supercapacitor

Flexible energy storage devices are in high demand for various applications. Carbon cloth (CC), woven from carbon fibers (CFs), is typically used as the electrode or current collector for flexible devices. The low surface area of CC and the presence of big gaps (ca. micro-size) between individual CFs lead to poor performance. Herein, we interconnect individual CFs through in-situ exfoliated graphene with high surface area, produced by the electrochemical intercalation method. The interconnected CFs are used as both the current collector and the electrode material for flexible supercapacitors, in which the in-situ exfoliated graphene acts as the active material and conductive “binder”. The in-situ electrochemical intercalation technique ensures a low contact resistance between the electrode (graphene) and the current collector (carbon cloth) with enhanced conductivity. The as-prepared electrode materials show significantly improved performance for flexible supercapacitors.

A fast-growing market for electronic devices and the development of hybrid electric vehicles result in an ever-increasing demand for environmentally friendly and low-cost energy devices [1][2][3][4][5][6]. Supercapacitors, also known as ultracapacitors or electric double-layer capacitors, store charge by the separation of charges in a Helmholtz double layer at the interface between the surface of a conductive electrode and an electrolyte. Thus, due to their simple charge storage mechanism, supercapacitors are able to store and deliver energy at relatively high rates. Currently, carbon-based electrode materials ranging from activated carbon to carbon nanotubes and graphene are the most commonly utilized in supercapacitors because of their excellent physical and chemical properties [7][8][9]. In the development of supercapacitors, proper control over the specific surface area and optimized contact between the electrode materials and the current collectors are crucial to ensure good supercapacitor performance in terms of both power delivery rate and specific capacitance. With the rapidly growing demand for personal electronics with small, thin, light-weight, flexible, and even roll-up characteristics, more and more attention has been devoted to flexible energy storage systems, including flexible supercapacitors for these electronic devices.10 For a flexible supercapacitor, the vital component is a flexible electrode with favourable mechanical strength and large capacitance. It is still a challenging task to fabricate a supercapacitor electrode with the advanced characteristics of light weight, flexibility, high conductivity, and high surface area. In general, flexible electrodes can be prepared by fabricating free-standing films of the active materials, or by depositing active materials on a flexible substrate [11][12][13][14]. Typically, carbon nanotube- or graphene-based composite thin films prepared by vacuum filtration have been well developed for flexible supercapacitors.15 However, the free-standing thin films developed this way showed poor mechanical strength, which would hinder the potential application of such supercapacitors in demanding environments. On the other hand, researchers have recently developed a variety of approaches to deposit/load active electrode materials onto flexible current collectors. For example,
Shi et al.5 coated cotton paper with carbon nanotubes by immersing the paper into carbon nanotube solutions. Cui et al.16 fabricated flexible supercapacitors by directly drawing graphite on cellulose paper. Previously, we also developed an electrophoretic deposition method to load porous graphene and porous graphene/carbon nanotube hybrids onto carbon cloth as the electrode materials of flexible supercapacitors.11,12 However, supported electrode materials fabricated this way usually have a high contact resistance between the electrode materials and the current collectors (substrates), which would hinder the performance of the flexible supercapacitors. Carbon cloth, which consists of mechanically flexible carbon fibers woven in the form of cloth, could be an attractive electrode for flexible supercapacitors because of its good electrical conductivity, chemical stability, flexibility, and high porosity. However, carbon cloth shows only a very low surface area due to the large size of the carbon fibers (around 10 μm). During the production of carbon cloth, big gaps (ca. micro-size) between individual carbon fibers are inevitably generated, as observed in the scanning electron microscopy (SEM) images (Figure S1, Supporting Information), which would significantly reduce the area-normalized capacitance when carbon cloth is used as the electrode of supercapacitors. In this work, for the first time, we utilized an electrochemical cation intercalation method to exfoliate graphene in situ from the carbon fibers of carbon cloth. The as-exfoliated graphene interconnected the individual carbon fibers, which showed significantly increased surface area. The electrochemical cation intercalation method used here is a non-oxidative production route to few-layer graphene,7 which avoids the reduction of oxidized graphene otherwise required before use in supercapacitors. The interconnecting of carbon fibers by graphene (as a conductive "binder") would enhance the conductivity of the composites. The carbon fibers interconnected by graphene showed significantly improved specific capacitance as binder-free electrode materials for flexible supercapacitors in terms of area-normalized capacitance.

Results

The electrochemical cation intercalation of carbon cloth was conducted using a three-electrode system with carbon cloth as the working electrode, Pt mesh as the counter electrode, Ag/AgClO4 as the reference electrode, and tetramethylammonium perchlorate (TMAClO4) as the electrolyte. We performed cyclic voltammetry (CV) scanning to define a proper potential for the chronoamperometric (CA) mode used to start the electrochemical cation intercalation process. As shown by the CV curves in Fig. 1A, scanning from 0 to −4 V resulted in a clear cathodic current (starting from around −2 V) associated with the intercalation of cations into the graphite of the carbon fibers.17,18 At more negative potentials, the current is much higher, indicating better intercalation efficiency.17 Initially, we conducted the CA run at the most negative constant potential of −4 V for 10000 s. At this potential, graphene was successfully exfoliated from the graphite of the carbon fibers, but most of it detached from the carbon fibers into the electrolyte and precipitated, as shown by the digital photograph in Figure S2. For the final use in flexible supercapacitors, it is required to keep the exfoliated graphene in the matrix of carbon fibers. Therefore, the potential of −4 V is too negative and too strong for this purpose. Subsequently, a milder potential of −2.5 V was chosen for the CA run (Fig.
1B) to exfoliate graphene from the carbon fibers. After the intercalation process of 10000 s, no apparent precipitates were observed, indicating that the as-exfoliated graphene was well retained in the matrix of carbon fibers. SEM images were collected for the exfoliated carbon cloth (denoted as Ex-CC) obtained at the potential of −2.5 V. Different from the pristine carbon cloth (SEM images shown in Figure S1), in which the carbon fibers were individually distributed and showed big gaps of around 1−4 μm, the carbon fibers in Ex-CC were interconnected by the in-situ exfoliated graphene, as shown in Figs 1C,D. As observed in Figs 1C,D, the exfoliated graphene seems to act as a conductive "binder" to interconnect the individual carbon fibers. The conductive "binder" (graphene) linked with the carbon fibers could effectively enhance the conductivity of the composites due to the high conductivity of graphene. Transmission electron microscopy (TEM) and atomic force microscopy (AFM) images were used to identify the structural information of the as-obtained materials. The TEM and AFM images, as shown in Figure S3, show the typical graphene structure. The graphene-interconnected carbon fibers would contribute more available charging sites per unit area due to the high surface area of graphene, thus leading to a higher area-normalized capacitance when used in supercapacitors. Based on the SEM observations, graphene was successfully exfoliated and acted as an interlinker to interconnect the individual carbon fibers. Graphene is well known for its high surface area; therefore, it is expected that the carbon fibers interconnected by graphene, that is, Ex-CC, would show a much higher surface area than pristine CC. We performed Brunauer−Emmett−Teller (BET) testing for Ex-CC as well as for CC for comparison. Figure 2 shows the nitrogen adsorption-desorption isotherms of CC and Ex-CC. As can be seen, CC gave a type I isotherm characterized by a plateau that is nearly horizontal to the P/P0 axis, indicating the microporous nature of the carbon fibers in CC.19 For Ex-CC, the type IV isotherm with pronounced adsorption at low and medium relative pressures indicates the existence of a large number of mesopores and micropores created by the as-exfoliated graphene in Ex-CC. The hysteresis loop in the isotherms of Ex-CC indicates that Ex-CC is porous. The total pore volume of Ex-CC is 0.424 cm3/g, much higher than that of CC (0.011 cm3/g). After the electrochemical cation intercalation, Ex-CC exhibited a much higher surface area (68.5 m2/g) than CC without intercalation (11.5 m2/g). The enhancement of the surface area by the in-situ interconnected graphene is thus clear. The electronic properties of Ex-CC and CC were investigated by Raman spectroscopy, which is an excellent tool for investigating the electronic structure and defect concentration in graphene [20][21][22][23]. As can be seen from the Raman spectra in Figure S4, the D band and G band were located around 1330 and 1580 cm−1, respectively. It has been found that the G band arises from the bond stretching of sp2-bonded C−C pairs, while the D band is associated with sp3 defect sites. In the Raman spectra of carbon-based materials, the ID/IG ratio is usually used as an indicator of the defect level. It can be seen from Figure S4 that Ex-CC showed a slightly higher ID/IG ratio than pristine CC.
For Ex-CC, the graphite in the carbon fibers was exfoliated to graphene, resulting in more exposed edge defects and thus a higher D band intensity in the Raman spectrum. In order to monitor the change of the C bond configuration, fine-scan C 1s spectra were collected from both CC and Ex-CC, as shown in Fig. 3. For CC, as a typical carbon material, the C 1s XPS peak could be fitted with two peaks, located around 284.6 and 285.3 eV, assigned to sp2 and sp3 C 1s, respectively.24 After the electrochemical intercalation, the sp3 C sub-peak of Ex-CC increased relative to that of the pristine CC, indicating more edge defects exposed after exfoliation, consistent with the Raman observations discussed above. It should be pointed out that no obvious oxygen-containing species were observed for Ex-CC, indicating that the electrochemical cation intercalation is a non-oxidative route, which preserves the highly conductive properties of graphene. Based on the above physical and chemical characterizations, it is expected that Ex-CC would have attractive electrochemical performance in supercapacitors. Since the high-surface-area Ex-CC originates from CC and preserves the well-defined cloth structure with good mechanical strength, Ex-CC could be an excellent candidate as an advanced binder-free electrode for flexible supercapacitors. Symmetric flexible supercapacitors were constructed using two Ex-CC samples as the positive and negative electrodes, a Whatman filter membrane as the separator, and 1.0 M H2SO4 solution as the electrolyte. For comparison, pristine CCs were also assembled into a symmetric flexible supercapacitor. The area of the devices is 2 × 2 cm2. CV measurements were first carried out to observe the electrochemical behaviour of the Ex-CC- and CC-based flexible supercapacitors. Figure 4A shows the CV curves of the two flexible supercapacitors using Ex-CC and CC as electrodes, from which the remarkable difference in the electrochemical behaviour and properties of the two electrodes can be easily recognized. The CV curve of the Ex-CC electrode is close to the ideal rectangular shape, indicating a smaller internal resistance in the electrode, while CC shows a very poor rectangular shape. Although carbon fiber is a highly conductive substrate, the carbon fibers in CC have poor affinity with electrolytes due to their inert surface. After the electrochemical cation intercalation, the graphite in the fibers expanded to graphene with more edge defects exposed, which may increase the affinity with the electrolyte through capillary interaction. Thus, Ex-CC has a stronger electrochemical affinity toward the electrolyte. On the other hand, for Ex-CC, the interconnecting of carbon fibers by the in-situ exfoliated graphene would enhance the conductivity of the composites for use as electrode materials. From the CV curves, it can be observed that Ex-CC showed a much higher current than CC, which can be attributed to the unique structure of the carbon fibers interconnected through the in-situ exfoliated graphene. In order to further investigate the performance of the two electrodes, galvanostatic charge-discharge (CD) experiments were carried out with the same voltage windows as for the CV analysis above. As shown in the CD curves (Fig. 4B), the discharging time of Ex-CC was significantly longer than that of CC, indicating that Ex-CC offers a much larger capacitance, which agrees well with the results obtained from the CV testing.
Moreover, in the galvanostatic charge-discharge, the IR drop of Ex-CC is much smaller than that of CC. The IR drop is caused by the equivalent series resistance (ESR), which includes the electrode resistance and the electrolyte resistance. The smaller IR drop of Ex-CC is attributed to the varied pore structure and the high conductivity of the in-situ exfoliated graphene. The specific capacitance in this work was calculated from the CD curves. In addition, the in-situ exfoliated graphene filled the gaps between individual carbon fibers in the Ex-CC samples; therefore, it is interesting to investigate the contribution of the in-situ exfoliated graphene to the area-normalized capacitance. Hence, in this work, the specific capacitances are reported in "mF cm−2". According to the calculation, the area-normalized capacitance of Ex-CC at a discharge current of 3 mA is 64.5 mF cm−2, much higher than that of CC (17.1 mF cm−2). The higher area-normalized capacitance of Ex-CC may be attributed to the presence of the in-situ exfoliated graphene with high surface area and high conductivity. Furthermore, we investigated the durability of the Ex-CC electrode using continuous charge/discharge cycles at a constant current load of 5 mA. Figure S5 shows the capacitance retention of Ex-CC as a function of cycle number. It can be clearly seen that Ex-CC shows good cycling behaviour in supercapacitors and remains stable under the electrochemical operating conditions. Finally, we examined the electrochemical performance of the as-fabricated flexible supercapacitor based on Ex-CC electrodes under bending conditions. Figure S6 shows the CV curves of the Ex-CC-based flexible supercapacitor device before and after bending; it can be observed that the electrochemical performance of the device does not significantly change under the bending conditions, indicating that the as-fabricated supercapacitors are highly flexible.25

Discussion

Based on the above analysis, it can be seen that the individual carbon fibers in the carbon cloth can be efficiently interconnected by the electrochemically exfoliated graphene. The morphology characterizations demonstrate a structure well suited to supercapacitor applications because of the larger number of exposed active sites for charge storage. On the other hand, the enhanced capacitance demonstrated by the electrochemical testing may also be attributed to the increased conductivity of the composite. The exfoliated graphene could significantly decrease the contact resistance between individual carbon fibers by linking them together. In summary, we successfully interconnected the carbon fibers of CC with graphene through the in-situ electrochemical exfoliation method. The electrochemical exfoliation generates graphene without detachment from the carbon fibers under mild exfoliation conditions and preserves the high conductivity of graphene without the formation of oxygen-containing species. The exfoliated graphene, which remains in the matrix of carbon fibers, acted as a conductive "binder" to enhance the conductivity and to increase the surface area of the composites [26][27][28]. The as-obtained Ex-CC was used as an advanced binder-free electrode material for flexible supercapacitors, showing significantly enhanced supercapacitor performance compared to CC in terms of area-normalized capacitance, due to the high surface area and high conductivity of the graphene in the cloth matrix.
We also demonstrated that the Ex-CC-based supercapacitors are very flexible, showing potential for applications in the field of flexible electronic devices. Therefore, the strategy developed in this work could have a significant impact on the development of electrode materials for flexible energy storage systems.

Methods

Preparation of Ex-CC. The electrochemical exfoliation of carbon cloth was conducted in a three-electrode system with carbon cloth as the working electrode, Pt mesh as the counter electrode, and Ag/AgClO4 as the reference electrode, with 0.1 M TMAClO4 in NMP as the electrolyte. First, cyclic voltammetry (CV) was performed at a scan rate of 20 mV/s. The potentials observed in the CV were used to define the potential set in the chronoamperometric mode to control the mild intercalation process. Subsequently, −2.5 V was chosen for the chronoamperometric mode to realize the mild electrochemical exfoliation for 10000 s. Following the electrochemical exfoliation, the as-obtained Ex-CC was washed thoroughly with acetone.

Assembly of flexible supercapacitors. To construct the Ex-CC-based flexible supercapacitor device, two pieces of Ex-CC electrodes with a size of 2 × 2 cm2 were used as the positive electrode and the negative electrode, respectively. A Whatman membrane was used as the separator, and 1 M aqueous H2SO4 was used as the electrolyte. For comparison, CC-based supercapacitors were also assembled in a similar way.

Electrochemical measurements. Cyclic voltammetry and galvanostatic charge/discharge tests of the assembled two-electrode supercapacitors were carried out using an Autolab potentiostat/galvanostat. CV measurements were conducted in the applied voltage window of 0−1 V. Galvanostatic charge/discharge tests were operated at a constant charge/discharge current of 3 mA within an applied voltage window of 0−1 V. The specific capacitance was calculated from the galvanostatic charge/discharge curves and normalized by the surface area of the devices (4 cm2).
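As a minimal sketch of this capacitance calculation (under the common assumption that the device capacitance follows from the discharge branch as C = I·Δt/ΔV before normalizing by the device area; the function name and the example numbers are hypothetical):

```python
def area_normalized_capacitance(i_discharge_A: float,
                                t_discharge_s: float,
                                delta_v_V: float,
                                area_cm2: float = 4.0) -> float:
    """Device capacitance from a galvanostatic discharge, per unit area.

    C = I * dt / dV (in farads), normalized by the device area and reported
    in mF cm^-2 as in the main text; dV should exclude the initial IR drop.
    """
    c_farads = i_discharge_A * t_discharge_s / delta_v_V
    return 1000.0 * c_farads / area_cm2  # mF cm^-2


# Hypothetical example: a 3 mA discharge over 86 s across a 1 V window
# reproduces roughly the 64.5 mF cm^-2 reported for Ex-CC.
print(area_normalized_capacitance(3e-3, 86.0, 1.0))  # ~64.5
```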
4,062.8
2015-07-07T00:00:00.000
[ "Engineering", "Materials Science" ]
Impact of Crosstalk on Signal Integrity of TSVs in 3D Integrated Circuits

Article history: Received 02 November 2017; Accepted 24 December 2017; Online 30 January 2018.

Through-Silicon-Vias (TSVs) are utilized for high-density 3D integration, but they induce crosstalk problems and impact signal integrity. This paper focuses on TSV crosstalk characterization in 3D integrated circuits, where several TSV physical and environmental configurations are investigated. In particular, this work shows a detailed study of the influence of signal-ground TSV locations, distances, and structural configurations on crosstalk. Embedded 3D testing circuits are also presented to evaluate the coupling effects between adjacent TSVs, such as crosstalk-induced delay and glitches, for different crosstalk modes. Additionally, a 3D parallel Ring Oscillator testing structure is proposed to provide an indicator of the crosstalk coupling strength between adjacent TSVs. Simulations are conducted using a 3D electromagnetic field solver (HFSS) from Ansoft Corporation and a Spice-like simulator (ADS) from Keysight Technologies, based on the MIT 0.15 μm 3DFDSOI process technology.

Introduction

3D interconnect is a promising technology [1]-[4] that uses Through-Silicon-Vias (TSVs) to connect vertically stacked semiconductor chips with the shortest possible paths, which means the lowest inductance and conduction loss, for both signals and power supplies. In spite of these benefits, signal integrity issues in TSVs have become a major challenge in 3D designs [5][6]. The goal of TSV (3D Via) development is to achieve high chip density; therefore, the density of the 3D Vias is also high. In this environment, a crosstalk problem appears between two adjacent signal 3D Vias (aggressor and victim). Studies show that the coupling problem is not negligible in TSVs because of their relatively large diameter and small pitch, which results in non-negligible TSV-to-TSV coupling that significantly degrades 3D circuit performance. Hence, it becomes essential to precisely model and evaluate the electrical characteristics of TSVs [7]-[9] in order to analyze the signal integrity (SI) and crosstalk of adjacent TSVs under various structures and configurations. In this paper, the electrical characteristics of 3D interconnect, based on our previous work [10], are presented to characterize the signal integrity effects of 3D crosstalk for different TSV placements and configurations. 3D Vias based on the 0.15 μm 3DFDSOI process from MIT Lincoln Lab [11] are used as the device under test (DUT) for crosstalk characterization, where a 3D full-wave simulator (HFSS from Ansoft Corporation) is used to extract and predict the electrical characteristics of TSVs in the frequency domain (S-parameters), and a Spice-like simulator (ADS from Keysight Technologies) is used to evaluate the TSV transient response in the time domain (eye diagram). Additionally, embedded 3D testing applications are proposed to characterize the TSV signal integrity effects and the impact of TSVs on 3D circuit performance after fabrication. A 3D circuit test is presented to evaluate the coupling effects between adjacent TSVs, such as induced delay and glitches [12][13], for different crosstalk modes. Additionally, a consecutively triggered parallel Ring Oscillator (RO) testing structure is proposed to provide a crosstalk coupling indicator between adjacent TSVs.
The paper is organized as follows: Section 2 discusses the 3D full-wave modeling for TSVs and the simulation setup. A detailed study of crosstalk for different physical and environmental TSV configurations is given in Section 3. 3D crosstalk embedded testing applications are given in Section 4. Section 5 concludes the paper.

A 3D Full Wave Modeling for TSVs

In order to evaluate the electrical characteristics of a TSV depending on structural parameters such as Via pitch, Via height, and Via size, the 3D interconnect based on the MIT 0.15 µm 3DFDSOI technology is used as the DUT to model and characterize crosstalk in different testing configurations. The vertical connection in this technology is slightly different from the standard Through-Silicon-Via (TSV): it is a square-shaped via made from tungsten and fully surrounded by oxide; thus, it is simply called a 3D Via. The 3D Via pitch (the distance between the centers of two 3D Vias) is around 3.325 µm, the Via height is 7.34 µm, and the Via size is 1.25 µm × 1.25 µm. The physical size of the 3D Via after fabrication is estimated to be around 2 µm for the top dimension and 1 µm for the bottom dimension. The 3D Via was simulated using a 3D full-wave simulator (HFSS from Ansoft Corporation), which generates the S-parameters of the structural model of the Via, and a Spice-type simulator (ADS from Keysight Corporation) to predict the electrical characteristics of 3D Vias in the time domain (eye diagram). Figure 1 presents the structure of a pair of TSVs in the HFSS simulator. Usually, an interconnection line is characterized using S-parameters. S11 and S21 are the reflection and transmission coefficients, respectively, which are typical characteristics of an interconnection. The evaluated S11 and S21 magnitudes for the electrical characteristics of a 3D Via based on the default parameters are shown in Figure 2. The transmitted data stream through the 3D Via was simulated with the S-parameters evaluated by the 3D full-wave simulator (HFSS). The eye diagram of the transmitted data stream was evaluated for a 10^7 − 1 pseudo-random bit sequence (PRBS) using the Spice-type circuit simulator (ADS). The source for the simulation has a 1.5 Vp-p amplitude and a 50 Ω source termination. The 3D Via is terminated by a shunt-connected 50 Ω resistor and a 1 pF capacitor. The eye diagrams of the 2 Gbps and 10 Gbps PRBSs are shown in Figure 3. All PRBSs were assumed to have 10% rise and fall times.

The Influence of 3D Via Locations and Distances on Crosstalk

Crosstalk is evaluated depending on the distance between the two signal Vias and the locations of the two GND Vias, as shown in Figure 4. Distances of 4 µm, 8 µm, 16 µm, and 32 µm between the two signal Vias, based on the SGSG configuration, have been simulated. As expected, the larger the distance between the two signal 3D Vias, the lower the crosstalk level, as shown in Figure 5. The effect of the distance of the GND Vias with respect to the signal Via on the crosstalk is also evaluated, as shown in Figure 6, for four distances: 4 µm, 8 µm, 16 µm, and 32 µm. The results show an increase in the magnitude of the crosstalk as the distance of the reference (GND) Via increases. Figure 7 also shows that the crosstalk magnitude of the SGGS configuration (i.e., with the locations of the signal and ground Vias crossed) is smaller than that of the SGSG configuration when comparing cases with the same distance. The difference between the two cases is almost 10 dB.
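As an aside on the eye-diagram evaluation described earlier in this section, the sketch below folds a PRBS-driven NRZ waveform into eye traces; it is a generic, self-contained illustration rather than the HFSS/ADS flow used above, and the PRBS-7 generator, the 1.5 V amplitude, and the moving-average filter standing in for the 10% rise/fall time are all illustrative assumptions.

```python
import numpy as np


def prbs7(n_bits: int) -> np.ndarray:
    """PRBS from a 7-bit LFSR (x^7 + x^6 + 1), period 2^7 - 1 bits."""
    state = 0x7F
    bits = np.empty(n_bits, dtype=float)
    for i in range(n_bits):
        new = ((state >> 6) ^ (state >> 5)) & 1
        state = ((state << 1) | new) & 0x7F
        bits[i] = new
    return bits


def eye_diagram(bits, samples_per_bit=64, rise_frac=0.1):
    """Fold a band-limited NRZ waveform into 2-UI eye traces.

    A simple moving-average filter approximates the 10% rise/fall time;
    a real analysis would instead convolve with the channel response
    derived from the extracted S-parameters.
    """
    wave = np.repeat(bits, samples_per_bit)
    k = max(1, int(rise_frac * samples_per_bit))
    wave = np.convolve(wave, np.ones(k) / k, mode="same")
    n_ui = len(wave) // (2 * samples_per_bit)
    return wave[: n_ui * 2 * samples_per_bit].reshape(n_ui, 2 * samples_per_bit)


traces = eye_diagram(1.5 * prbs7(2**7 - 1))  # 1.5 Vp-p source, one PRBS period
print(traces.shape)                          # one row per 2-UI eye trace
```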
The SGGS versus SGSG comparison above is a very interesting point for 3D designers to keep in mind, because simply changing the role of a 3D Via can reduce the crosstalk magnitude, especially in high-frequency applications, where the crosstalk problem is critical to obtaining the maximum system performance.

3D Via Crosstalk in Structures with Different Configurations

The geometry of the return current path may be one of the most effective ways to influence the crosstalk between a signal and a victim Via. Four different configurations have been investigated: (1) two Via pairs in a straight line (Figure 8(a)); (2) two Via pairs placed opposite to each other (Figure 8(b)); (3) a signal Via with two reference Vias placed opposite to a victim Via with two ground Vias (Figure 8(c)); (4) a signal and a victim Via, each with three reference Vias as a return current path (Figure 8(d)). In configuration 2, the signal Via has a slightly lower inductance than it would in configuration 1 because the second Via is close enough to the signal Via to have a slight impact on its inductance. Configuration 4 has the lowest inductance because it has the most well-defined return current path. This lowering of the inductance also lowers the near-end crosstalk, as shown in Figure 9. As predicted, the highest crosstalk comes from configuration 1; only slightly lower is the crosstalk from configuration 2. There is then a significant decrease in crosstalk when the two extra reference Vias are added in configuration 3, and a further slight decrease when the third reference Via is added to the victim and signal Vias in configuration 4. The reduction in crosstalk from adding additional reference Vias is almost 4 dB.

Figure 9. Crosstalk for the four different configurations.

Until this point, only 3D Vias isolated except for their ground Vias have been considered. Potential coupling at the discontinuity between the 3D Via and the transmission line may be an important effect to consider. In Figure 10, a new Ansoft HFSS model is presented that accounts for the discontinuity between the 3D Via and the transmission line. Figure 11 shows that, according to the full-wave simulations, there is no measurable difference in the near-end crosstalk as a result of the discontinuity between the transmission lines and the 3D Vias: the difference between the two simulations is very small over the entire frequency range. However, the discontinuity between the transmission lines and the 3D Vias will increase the far-end crosstalk. On the other hand, the far-end crosstalk is much smaller and less significant than the near-end crosstalk, and it never exceeds −50 dB in the simulations.

3D Testing Circuit for Crosstalk Induced-Delay and Glitches

In this embedded test application, the coupling effects between adjacent TSVs, such as induced delay and glitches, can be investigated for different crosstalk modes. As shown in Figure 12, high-speed signals can be fired through three adjacent TSVs at each tier using a multi-edge delay generator circuit. Mux and tristate circuits are used to control which signals are active from which tier. The complementary signals are also generated from the delay generator to cover different crosstalk modes. In order to study the effect of phase shifting the aggressor signal on crosstalk induced-delay cancellation, the multi-edge delay generator is used to finely control the delay between adjacent signals.
Figure 13. Different crosstalk patterns.

Figure 14 shows simulated results for the crosstalk-induced delay with different patterns using ADS and a standard 0.25 μm CMOS process. The middle graph is the victim line signal with no activity on either aggressor line. The furthest right and left graphs are the −2X and +2X cases, respectively, which cause the worst-case induced-delay effect. This induced delay can be mitigated by phase shifting the aggressor signals using the multi-edge delay generator. Figure 15 shows an example of the crosstalk induced-delay cancellation effect after phase shifting the aggressor line by 0.8 ns for the +1X case. As graphed, the induced delay due to crosstalk is almost cancelled and the signal aligns again with the 0X case (i.e., without crosstalk).

Ring Oscillators 3D Crosstalk Test

In this test, a consecutively triggered parallel Ring Oscillator (RO) structure running at the same frequency is used to characterize the crosstalk effect between TSVs. Figure 16 shows four triggered oscillators; two oscillators are crosstalk-coupled and the other two are crosstalk-free. The proposed parallel RO structure creates a delta phase shift between consecutively triggered oscillators, which is equal to the time difference between the delay buffer chain and the oscillation time period of the triggered oscillators. 3D crosstalk detection can be achieved by observing the frequency of the crosstalk-coupled oscillators, which differs from the frequency of the crosstalk-free oscillators. Interestingly, the two crosstalk-coupled triggered oscillators have a shorter oscillation time delay (i.e., a higher frequency) than the crosstalk-free oscillators, and the phase difference between the two crosstalk-coupled oscillators diminishes due to the coupling effect, as shown in Figure 18. Phase detection at the output of the crosstalk-coupled triggered oscillators can be used as an indicator of strong coupling between TSVs. On the other hand, the edges of the crosstalk-free oscillators remain separated by a deterministic phase shift dictated by the time difference between the delay buffer chain and the ring oscillation time period.

Conclusion

In this paper, we presented the signal integrity effects of crosstalk in 3D stacked ICs. A detailed study of TSV electrical modeling and characterization using the HFSS and ADS simulators, for frequency- and time-domain analysis respectively, was presented. Simulations were conducted based on the 0.15 µm 3DFDSOI process technology from MIT Lincoln Lab, presenting the influence of 3D Via distances, locations, and structural configurations on crosstalk. The study shows that increasing the distance between signal 3D Vias and decreasing the distance to the ground Vias can significantly mitigate the effect of 3D crosstalk. In addition, adding more reference Vias and creating well-defined return current paths have the greatest impact on mitigating crosstalk. Furthermore, it was shown that the discontinuity between a 3D Via and a transmission line has a negligible impact on the near-end crosstalk (NEXT); the far-end crosstalk (FEXT) might increase, but with a less significant impact. Furthermore, a 3D testing circuit application based on a multi-edge signal generator placed at different 3D stacked tiers was studied to evaluate the effect of crosstalk-induced delay and glitches. Additionally, a cross-coupled parallel RO structure was presented to evaluate the crosstalk coupling strength compared to an RO structure with crosstalk-free TSVs.
3,050.6
2018-01-01T00:00:00.000
[ "Physics" ]
Bayes linear analysis of risks in sequential optimal design problems

In a statistical or physical model, it is often the case that a set of design inputs must be selected in order to perform an experiment to collect data with which to update beliefs about a set of model parameters; frequently, the model also depends on a set of external variables which are unknown before the experiment is carried out, but which cannot be controlled. Sequential optimal design problems are concerned with selecting these design inputs in stages (at different points in time), such that the chosen design is optimal with respect to the set of possible outcomes of all future experiments which might be carried out. Such problems are computationally expensive. We consider the calculations which must be performed in order to solve a sequential design problem, and we propose a framework using Bayes linear emulators to approximate all difficult calculations which arise; these emulators are designed so that we can easily approximate expectations of the risk by integrating the emulator directly, and so that we can efficiently search the design input space for settings which may be optimal. We also consider how the structure of the design calculation can be exploited to improve the quality of the fitted emulators. Our framework is demonstrated through application to a simple linear modelling problem, and to a more complex airborne sensing problem, in which a sequence of aircraft flight paths must be designed so as to collect data which are informative for the locations of ground-based gas sources.

Introduction

Scientific research increasingly relies on the specification and analysis of models which are intended to recreate the properties of a particular natural or man-made system. These models can vary greatly in complexity: in some instances, model predictions for a system are simple and easy to evaluate (for example, the atmospheric dispersion model discussed in Pasquill (1971)), whereas in others, simply evaluating the model to generate a single prediction may be a time-consuming, non-trivial task (for example, the climate model considered in Williamson et al. (2013)). Despite the diversity of fields in which modelling is undertaken and the range of different complexity levels, modellers often share a number of common goals. Frequently, one of these goals is to infer certain parameters of a model using data collected from the system that the model is designed to represent (see, for example, Kennedy and O'Hagan (2001)); these parameters may be of direct interest themselves, or they may be of interest simply because we wish to calibrate the model so that it makes better predictions for unobserved states of the system. Generally, the modeller may also control some of the model inputs governing the observation process (for example, the spatial locations at which observations of the real climate are made). Additionally, there may be inputs to the model which cannot be controlled when making observations on the real system (for example, the wind conditions at the point at which observations on the real climate are made), but which must be accounted for when making inferences from the data. Once data have been collected from the system, decisions about the system must be made using the information provided by the model specification and the observed data; subsequently, under particular outcomes, these decisions will have known consequences.
The general framework for a Bayesian decision analysis is presented in detail by, for example, Smith (2010) and Lindley (1972); Randell et al. (2010) perform such an analysis for a model describing a large offshore structure, where maintenance decisions must be made about individual components whose characteristics have a complex covariance structure. Sometimes, the experiments can be performed sequentially; that is, there is the opportunity to perform a sequence of experiments to learn about the system, and the benefit that could be obtained by continuing to experiment must be weighed against the cost of doing so. DeGroot (1970) provides an introduction to sequential decision-making, and Williamson and Goldstein (2012) provide an example in which a climate model must be used to choose a sensible CO2 abatement policy at the present time and at fixed points in the future. Combining the model and decision problem specifications, the question of design arises naturally: given that some of the model inputs governing the measurement process may be controlled, how should these be selected so as to maximise the expected benefit of the observations? For non-sequential decision problems, optimal experimental design choices for common, simple scenarios are reviewed in Chaloner and Verdinelli (1995), and more complex, non-linear problems are considered in Ford et al. (1989). If a more complex model or loss function is specified, or for sequential problems, there is usually no analytic solution to the design calculations, and the resulting problem usually presents a computational challenge; Muller et al. (2007) provide a simulation procedure for sequential design problems with simple forward models, which works by discretizing the design space and sampling the possible experimental outcomes from the model. In this article, we develop an approximation framework which provides decision support for the sequential design problem; our procedure is designed to be able to cope with problems that have large numbers of stages, as well as problems in which the system model is an expensive function that may only be evaluated at a handful of parameter settings. Any approximation to the sequential design calculations will introduce numerical uncertainty; the procedure that we present is designed to track this uncertainty throughout the calculation, allowing the user to make an informed decision about whether to select an experiment subject to these uncertainties, or to carry out further analysis which may reveal more about the risks involved. Jones et al. (2016) proposed a similar framework to handle non-sequential design problems for expensive models. The remainder of this article is laid out as follows: in Section 2, we introduce a notation for the sequential design problem, and present the standard backward induction algorithm for its solution. Then, in Section 3, we propose a framework which approximates the backward induction calculation using Bayes linear emulators. In Section 4, we consider an application to a simple linear model, and in Section 5, we consider a more complex application in atmospheric dispersion modelling. In Section 6, we discuss our results and propose avenues for future research. Additional details regarding the framework and the examples are provided in the supplementary material (Jones et al., 2018); sections in the supplement are labelled S1, S2, etc.
Sequential optimal design

In this section, we introduce a notation for the general problem, and set out the Bayesian optimal experimental design framework in full.

Problem definition and notation

The general problem is this: we hold prior beliefs about a system we are studying in the form of a model which is a function of a number of input parameters. For each setting of its inputs (within some allowed range), the model can be run to generate a prediction for a set of system attributes (which we will refer to as the model outputs). We wish to use data from the system to update beliefs about some model parameters, before using these updated beliefs to make decisions; sets of observations may be taken sequentially, and after each has been observed, we have the option of either measuring the next set, or using current beliefs to make an immediate decision. In what follows, 'stage j' refers to the point in time at which (j − 1) sets of observations have been made, and where we must consider whether to take the jth set. We assume that a maximum of n experiments can be performed, and denote the jth set of available observations by z_j = {z_j1, ..., z_jn_zj}, j = 1, ..., n. We denote the collection of observations collected up to and including stage j by z_[j] = {z_1, ..., z_j}. We assume that the model inputs can be divided into three classes:

• Model parameters: these are the parameters about which we wish to learn. They are denoted by q = {q_1, ..., q_nq}.
• Design inputs: we may select these, and they control the behaviour of the experiment which is performed. We denote the set of design inputs affecting the jth set of observations by d_j = {d_j1, ..., d_jn_dj} (for d_j ∈ D_j), and we denote the collection up to and including stage j by d_[j] = {d_1, ..., d_j}.
• External inputs: we cannot control these, but they affect predictions for the system, though we are not interested in using the data z_j to learn about them. We denote the set of external inputs affecting the jth observation by w_j = {w_j1, ..., w_jn_wj} (for w_j ∈ W_j), and we denote the collection up to and including stage j by w_[j] = {w_1, ..., w_j}.

At each stage j, we proceed as follows: first, the design inputs d_j must be selected; then, the external inputs w_j become known; finally, the experimental data z_j are observed. After collecting the experimental data, we must either make an immediate decision, with no possibility of further sampling, or select a design d_{j+1} for the next experiment before collecting these observations using this configuration. Note that the numbers of observations, design inputs and external inputs (n_zj, n_dj and n_wj) need not be the same, either within a particular stage or between stages; for example, in the atmospheric dispersion example presented in Section 5, a set of 5 design inputs and 2 external inputs affects the characteristics of a set of 100 observations to be collected at each stage.

Decision problem

To determine the value of any set of observations, and therefore to choose between making an immediate decision and paying for another set of observations, we must specify the decision problem that we will solve using our beliefs about the model parameters q after j sets of observations have been made.
Within a probabilistic Bayesian framework, our beliefs about q after the jth experiment are summarised through the posterior distribution

p(q | z_[j], w_[j], d_[j]) = p(z_[j] | q, w_[j], d_[j]) p(q | w_[j], d_[j]) / p(z_[j] | w_[j], d_[j]) = p(z_[j] | q, w_[j], d_[j]) p(q) / p(z_[j] | w_[j], d_[j])    (1)

where we specify the conditional distribution p(z_[j] | q, w_[j], d_[j]) for the observations z_[j] given the model parameters, and p(q) specifies our prior beliefs about q, which we assume do not depend on {w_[j], d_[j]}. For the decision problem, we specify the following components:

• a space A_j of possible actions a_j = {a_j1, ..., a_jn_aj} which might be taken at stage j;
• a loss function L_j(a_j, q) which describes (in utility units) the cost of taking action a_j at stage j (having terminated sampling) and then realising model parameters q;
• a function c_j(d_j) which describes the cost (in utility units) of selecting design d_j for the experiment at stage j.

Based on this specification, we can evaluate our risk (expected loss) from making an immediate decision at stage j which is optimal against our current beliefs as described by (1):

ρ_j^trm(z_[j], w_[j], d_[j]) = min_{a_j ∈ A_j} E[L_j(a_j, q) | z_[j], w_[j], d_[j]]    (2)

where ρ_j^trm is referred to as the 'terminal risk' or the 'risk from an optimal terminal decision' at stage j. We denote the optimal decision at stage j, which minimises (2), by a*_j.

Fig 1: Graphical representation of the design procedure, showing quantities in the order that we choose, observe or compute them. Square nodes represent design parameters which we select, circular nodes represent random quantities which we observe when experimenting, and non-bordered nodes are risks which we compute.

We would now like to know: given our prior beliefs about the system, the costs of the observations which might be made, and the consequences of the decisions which might be taken, how many sets of observations should be collected, and at what settings of the design parameters? This is a problem in Bayesian sequential optimal experimental design, which can be solved using backward induction.

Extensive form - backward induction

The backward induction algorithm is a well-known technique for solving decision and design problems; for an introduction to the algorithm, see, for example, DeGroot (1970). The algorithm begins at the final stage of the problem, where it is not possible to collect further observations, and then works back through the stages, deciding for each setting of the model inputs whether it is optimal to continue experimenting or to stop and make an immediate decision. We iterate the following steps for j = n, (n − 1), ..., 1:

• We compute the overall risk, denoted by ρ_j, assuming that we act optimally at all future stages. At the final stage (j = n), no further observations are possible, and so the overall risk is equal to the terminal risk:

ρ_n(z_[n], w_[n], d_[n]) = ρ_n^trm(z_[n], w_[n], d_[n])    (3)

For all other stages (1 ≤ j < n), we compare the risk from an immediate decision with that from future experimentation:

ρ_j(z_[j], w_[j], d_[j]) = min{ρ_j^trm(z_[j], w_[j], d_[j]), ρ*_{j+1}(z_[j], w_[j], d_[j])}    (4)

The risk ρ*_{j+1} from optimal future experimentation is the output from the (j + 1)th iteration of this procedure, defined in (6).

• At the point when we must choose between an immediate decision and further experimentation, the experimental outcomes {z_j, w_j} are unknown, and so we compute the expectation ρ̄_j of the risk ρ_j with respect to our current beliefs:

ρ̄_j(d_j; z_[j−1], w_[j−1], d_[j−1]) = E_{z_j, w_j}[ρ_j(z_[j], w_[j], d_[j])] = E_{w_j}[E_{z_j | w_j}[ρ_j(z_[j], w_[j], d_[j])]]    (5)

where, in the second equality, we have relied upon the assumption that we do not use the z_j to learn about the w_j.
• We find the optimal design d*_j for the jth experiment, as a function of the risk inputs {z_[j−1], w_[j−1], d_[j−1]} at the previous stages, by minimising ρ̄_j over d_j, taking account of the cost c_j(d_j) of experimentation:

d*_j(z_[j−1], w_[j−1], d_[j−1]) = argmin_{d_j ∈ D_j} {ρ̄_j(d_j; z_[j−1], w_[j−1], d_[j−1]) + c_j(d_j)}

The minimum risk ρ*_j is then

ρ*_j(z_[j−1], w_[j−1], d_[j−1]) = ρ̄_j(d*_j; z_[j−1], w_[j−1], d_[j−1]) + c_j(d*_j)    (6)

Note that when j = 1, the conditioning sets {z_[0], w_[0], d_[0]} are empty, so that ρ̄_1, d*_1 and ρ*_1 are functions of d_1 alone.

A graphical representation of the relationship between the designs, observables and risks is provided in Figure 1; the backward induction procedure is written in pseudo-code in Algorithm 1. If we can perform these calculations, then the output from this algorithm is the optimal design d*_1 for the experiment at the first stage, and the corresponding optimal risk ρ*_1. To decide how to proceed, we compare this risk (from an optimal future procedure) with the risk ρ_0^trm from making an optimal decision under our prior beliefs (before experimentation). If ρ*_1 < ρ_0^trm, then it is optimal to perform the first experiment (at d*_1), and then to assess the benefit of the second experiment against an immediate decision under our beliefs after the first experiment; otherwise, it is optimal to make an immediate decision and to cease sampling. More generally, if we have collected data up to the kth experiment, then we assess the optimal course of action by comparing ρ*_{k+1} with ρ_k^trm. While the calculations (2), (4), (5) and (6) are simple to express, they generally represent a large computational challenge. It is not uncommon to find a problem in which the terminal risk (2) cannot be computed without recourse to numerical integration and optimisation methods; the intractability of this calculation in turn rules out a closed-form expression at any of the other steps of the procedure. Even in the situation where this calculation can be performed directly, numerical methods will generally be required by the time we must perform either (5) or (6). For previous discussions of the computational challenges involved in sequential decision and design problems, see, for example, Muller et al. (2007) or Williamson and Goldstein (2012); simple, discrete examples in which the backward induction can be performed exactly are discussed in Smith (2010) and Berger (1980). If there is a computationally feasible way to approximate these calculations, however, the potential gains are large: designing the observations that we make now to take into account their effect on data that we might collect in the future improves the overall quality of the information that we can collect, and the backward induction framework introduces the possibility of stopping once we have enough information to perform the task at hand, thus potentially saving the cost of unnecessary observations. A sequential procedure is guaranteed to have a risk which is no greater than that of the procedure in which we simply collect all of the observations before making a decision (DeGroot, 1970), and the potential benefits from such a procedure may be large. As discussed by Huan and Marzouk (2016), common strategies for approximating the full sequential design calculation where this is computationally infeasible include 'batch' design, in which designs for the experiments are all selected upfront, and 'greedy' or 'myopic' design, in which the optimal design at each stage is selected without considering the possibility of further experiments. Both strategies may lead to the selection of sub-optimal designs, as they do not account for information which may be available from data collected in the future.
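To make the structure of Algorithm 1 concrete, the sketch below runs the backward induction for a toy, fully discretized problem: a handful of parameter values with known prior weights, binary observations from a made-up likelihood, a squared-error terminal loss, and no external inputs w_j. Every numerical choice here is a hypothetical illustration of the recursion in (2)-(6), not the emulator-based procedure developed in this paper.

```python
import numpy as np

# Hypothetical discretization: parameter grid with prior weights,
# candidate designs (reused at every stage), and binary outcomes.
q_vals = np.array([-1.0, 0.0, 1.0])
prior = np.array([0.25, 0.5, 0.25])
designs = [0.2, 0.5, 0.8]
n_stages = 2


def lik(z, q, d):
    """p(z | q, d) for a binary outcome z in {0, 1}; a made-up model."""
    p1 = 1.0 / (1.0 + np.exp(-(q + d)))
    return p1 if z == 1 else 1.0 - p1


def posterior(history):
    """Weights over q_vals given (design, outcome) pairs, via Bayes' rule (eq 1)."""
    w = prior.copy()
    for d, z in history:
        w = w * np.array([lik(z, q, d) for q in q_vals])
    return w / w.sum()


def terminal_risk(history):
    """rho_j^trm (eq 2) for squared-error loss: the posterior variance."""
    w = posterior(history)
    a_star = float(w @ q_vals)            # optimal action = posterior mean
    return float(w @ (q_vals - a_star) ** 2)


def optimal_risk(j, history, cost=0.05):
    """rho_j^* (eq 6): risk of an optimally designed j-th experiment."""
    if j > n_stages:
        return np.inf                     # no further experiments allowed
    w = posterior(history)
    best = np.inf
    for d in designs:
        rbar = 0.0                        # expected risk, eq (5)
        for z in (0, 1):
            pz = float(w @ [lik(z, q, d) for q in q_vals])
            rho = min(terminal_risk(history + [(d, z)]),
                      optimal_risk(j + 1, history + [(d, z)]))  # eq (4)
            rbar += pz * rho
        best = min(best, rbar + cost)     # minimise over d_j, eq (6)
    return best


# Experiment only if the optimal sequential risk beats deciding now,
# i.e. the comparison of rho_1^* with rho_0^trm described above.
print("prior terminal risk:", terminal_risk([]))
print("optimal sequential risk:", optimal_risk(1, []))
```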
Previous work on approximating sequential design problems is presented by Huan and Marzouk (2016), who formulate the problem as a general dynamic programming procedure, and then use an iterative procedure based on linear regression models to approximate the design calculations. Their approximation uses a 'one-step lookahead' approximation to the full procedure, in which, at stage j, only the results of the jth and (j + 1)th experiments are considered. Drovandi et al. (2013) use the 'myopic' approximation in conjunction with a sequential Monte Carlo algorithm to find approximate solutions to the sequential design problem. Drovandi et al. (2014) adopt a similar approach which additionally incorporates model uncertainty. Section 4.2 of the article by Huan and Marzouk considers an example which illustrates the drawbacks of using a myopic approximation in a sequential design problem. A vehicle is to be used to measure concentrations of a contaminant at a sequence of locations; the observation times are fixed, and so the vehicle is constrained in how far it can travel before it needs to make the next observation. A myopic design strategy chases after the next available measurement at the cost of potentially reducing the quality of the information available from future experiments. A fully sequential strategy, however, balances the expected quality of the immediately available measurement with the expected quality of future measurements, and so prevents the vehicle from moving to regions too far away from those where good-quality measurements might be made in forthcoming experiments. The procedure that we present in Section 3 is based on the backward induction Algorithm 1, which provides the basis for a more natural approximation, since the information from all future experiments n, (n − 1), ..., (j + 1) is accounted for in the risk function ρ_j at stage j. In addition, we use a more flexible class of models to approximate the risk functions at each stage, and outline a strategy for choosing basis and covariance functions which approximate the risks well, while retaining tractability.

Approximation of the design calculation

In this section, we detail the procedure that we will use to approximate the general algorithm described in Section 2.2. It is designed to follow the backward induction procedure as closely as possible, using Bayes linear emulators to approximate calculations which cannot be performed analytically. The analysis proceeds in waves, in a similar way to the history matching procedure of Vernon et al. (2010) and the non-sequential design procedure of Jones et al. (2016): at the first wave, we model the backward induction calculations over the whole design space, and we search the model input space to rule out designs which are unlikely to be optimal; then, in subsequent waves, we re-fit our risk models in those parts of the design space which have not yet been ruled out, allowing us to build up a more accurate picture of the behaviour of the risk and the structure of the design space in these regions. In Section 3.1, we provide an overview of our approximation procedure; each step is explained more fully in Sections 3.2 to 3.6. Further detail is provided in Section S2 of the supplementary material.

Overview of the procedure

Throughout the remainder of the article, we use a superscript (i), i = 1, 2, ..., to index the current wave of the algorithm.
The steps that we perform for each wave are the same, but the approximating emulators are re-focused on those parts of the design space which we believe may contain the optimal design; this point is discussed further in Section S2.1. At each wave i = 1, 2, . . . , we iterate the following steps for j = n, (n − 1), . . . , 1: Emulate the risk We fit a model which approximates the risk surface at stage j. Our model for the risk ρ_j (defined in (4)) is denoted by r^(i)_j, and consists of a regression surface and a residual component, r^(i)_j = Σ_p β^(i)_jp g^(i)_jp + u^(i)_j, (7) where {g^(i)_j1, . . . , g^(i)_jn_g} is a known set of basis functions, the β^(i)_jp are corresponding unknown weights, and u^(i)_j is a zero-mean correlated residual process. We specify our prior beliefs about the uncertain components of the model, and combine these prior beliefs with a set of evaluations of the risk to create a second-order emulator. The details of this procedure are discussed further in Section 3.2; an introduction to Bayes linear methods and to second-order emulation is provided in Section S1. For an illustration of the risk emulation procedure, see Section 4.2 and Section S3.1.1 of the supplementary material. Compute the expected risk We derive a model for the expected risk surface at stage j by integrating our model for the risk. Our model r̄^(i)_j for the expected risk ρ̄_j (defined in (5)) is obtained by integrating r^(i)_j over the distributions of the quantities which are unobserved at stage j; the computation of r̄^(i)_j is discussed in Section 3.3. The computation of the expected risk for a simple example is considered in Section 4.2 and in Section S3.1.2 of the supplementary material. Characterise the candidate design space We eliminate parts of the design space which we deem unlikely to be optimal. Our approximation to the optimal risk ρ*_j (defined in (6)) is denoted by s^(i)_j. The value of d*_j is unknown; we represent our uncertainty about the optimal design setting by sampling candidate designs d̂_j from within a candidate design space D^(i)_j which could plausibly contain the optimal design. Our strategy for characterising this space and for characterising our uncertainty about s^(i)_j is discussed in Section 3.4.
Algorithm 2 Approximation to the backward induction procedure.
1: for waves i = 1, 2, . . . do
2: for stages j = n, (n − 1), . . . , 1 do
3: Emulate the risk ρ_j
4: Approximate the expected risk ρ̄_j
5: Characterise the candidate design space D^(i)_j and the optimal risk s^(i)_j
6: end for
7: end for
This procedure is applied to a simple example in Section 4.2, with further details provided in Section S3.1.3 of the supplementary material. Modelling the risk When stage j of the algorithm is reached, the risk ρ_j (equation (4)) is unknown, so our first task is to fit a model as an approximation. We choose to use a second-order emulator, as this is a flexible model which will simplify the calculations that we need to perform in Sections 3.3 and 3.4. An introduction to second-order emulation is provided in Section S1.2. The general form of the model that we use is given in equation (7). To fit the emulator, we begin by selecting the regression basis functions g^(i)_jp. In order to fit the emulator, we generate evaluations of the risk function. At wave i, we denote the set of N^(i)_j risk values that we use to fit the model by {R^(i)_jk}. At the final stage (j = n), each R^(i)_jk is an evaluation of the terminal risk ρ^trm_n (corresponding to the definition (3)). For all other stages (j < n), R^(i)_jk is generated by comparing the terminal risk ρ^trm_j with s^(i)_{j+1}, our approximation to the risk from an optimal decision at stage (j + 1) (corresponding to the definition (4)); the characterisation of s^(i)_{j+1} is discussed in Section 3.4. 
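For concreteness, here is a minimal sketch of the Bayes linear adjustment that underlies such a second-order emulator. The squared exponential residual covariance, the prior mean function and the numerical settings are illustrative assumptions, not the paper's specific choices.

import numpy as np

# Minimal Bayes linear (second-order) emulator sketch. Inputs are 1-d
# numpy arrays; prior_mean is a callable giving the regression surface.

def sq_exp_cov(x, y, sig2=1.0, ell=0.3):
    """Covariance of the residual process u(.) between inputs x and y."""
    return sig2 * np.exp(-0.5 * ((x[:, None] - y[None, :]) / ell) ** 2)

def bl_adjust(x_star, x_data, r_data, prior_mean, noise=1e-6):
    """Adjusted expectation and variance of the risk at x_star:
    E_D[r] = E[r] + Cov(r, D) Var(D)^{-1} (D - E[D])."""
    V = sq_exp_cov(x_data, x_data) + noise * np.eye(len(x_data))
    c = sq_exp_cov(x_star, x_data)
    resid = r_data - prior_mean(x_data)
    mean = prior_mean(x_star) + c @ np.linalg.solve(V, resid)
    var = sq_exp_cov(x_star, x_star) - c @ np.linalg.solve(V, c.T)
    return mean, var

# e.g. m, v = bl_adjust(np.linspace(-1, 1, 50), x_obs, r_obs, lambda x: 0 * x)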
Due to uncertainty introduced through approximations to the risk, we are generally not able to evaluate the R^(i)_jk exactly; instead, we assess the moments of the R^(i)_jk by sampling, and we fit the emulator to the mean values, using the covariances to characterise the measurement error structure. This issue is discussed further in Section S2.1. Once the characteristics of the risk evaluations have been assessed, we can compute adjusted expectations and variances for the risk at any new input settings (z_[j−1], w_[j−1], d_[j]), as detailed in Section S1.2.1. Approximating the expected risk We use our model r^(i)_j to compute an approximation r̄^(i)_j to the expected risk ρ̄_j (equation (8)). As outlined in, for example, O'Hagan (1991) and Rasmussen and Ghahramani (2002), the characteristics of the expectation of a stochastic process can be derived by integrating the characteristics of the process directly; in this instance, the expectation of r̄^(i)_j is obtained by integrating our adjusted expectation for r^(i)_j (computed in Section 3.2), and the covariance between evaluations of r̄^(i)_j is obtained by integrating our adjusted covariance for the risk r^(i)_j. These calculations are performed for a general emulator in Section S1.2. In order to evaluate the above expressions, we must be able to compute expectations of both the basis and covariance functions with respect to the distributions p(z_j | . . .) and p(w_j | . . .); in practice, this either means that we must choose particular types of covariance functions and probability distributions in order to ensure integrability of the product, or that we must numerically compute the required integrals. This point is discussed further in Section S2.2. Characterising the candidate design space We now characterise our approximation s^(i)_j (equation (9)) to the risk ρ*_j from an optimal design at stage j, which will then be used as an input to the (j − 1) th stage of the algorithm (equation (7), Section 3.2), or to select a design for the j th experiment (Section 3.5). We do this using a sampling procedure, which interrogates our fitted emulator at a space-filling set of trial design inputs, and then selects the design which minimises the risk over this trial design set. Designs which are selected in this manner are referred to as 'candidate designs' and denoted by d̂_j, and the subset of the full design space identified through this procedure is referred to as the 'candidate design space' and is denoted by D^(i)_j; the details of this sampling procedure are discussed in Section S2.3. As discussed in e.g. Hennig and Schuler (2012), Adler (1981) and Jones et al. (1998), exactly characterising the extrema of stochastic processes is a challenging and open problem. The approach adopted here efficiently generates a conservative estimate of our uncertainty about the minimum. Stopping Assume that, at wave i, the procedure has been run back to stage j = k, and that the first (k − 1) experiments have already been designed and run; we must now decide whether to perform the k th experiment at a design which we believe may be optimal. If we knew the risk function exactly, this choice would be simple: we would choose to experiment if ρ*_k < ρ^trm_{k−1}, and otherwise make an immediate decision a*_{k−1} (see Section 2.2 and Algorithm 1). However, since the true risks are unknown, we must instead make a choice which takes account of our uncertainties about them. First, we discuss the selection of a design at which we would perform the experiment; then we consider what we should do given this choice of design; lastly, we consider the benefit that we may obtain from running further waves of the procedure. Choosing a design First, we must choose a design for the k th experiment. 
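When the basis and covariance integrals are not tractable in closed form, the expected risk can be approximated numerically; the sketch below is a simple Monte Carlo version, with emulator_mean and sample_z as hypothetical stand-ins for the adjusted expectation of the risk and the sampling distribution p(z_j | · ).

import numpy as np

# Sketch: approximate rbar(d) = E_z[ r(d, z) ] by Monte Carlo when the
# basis/covariance integrals are not available in closed form.

def expected_risk_mc(d, emulator_mean, sample_z, n_samples=500, rng=None):
    rng = np.random.default_rng(rng)
    zs = sample_z(d, n_samples, rng)                       # draws from p(z | d)
    vals = np.array([emulator_mean(d, z) for z in zs])
    return vals.mean(), vals.std(ddof=1) / np.sqrt(n_samples)  # estimate + MC error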
We denote the chosen design by d̂_k, and select d̂_k to minimise our expected risk plus cost, d̂_k = argmin_{d_k} {E[r̄^(i)_k(d_k)] + c_k(d_k)}. This minimum is identified either through use of a suitable numerical optimisation procedure, or through interrogation of the mean surface at a large, space-filling set of design inputs. Choosing a course of action Having fixed d̂_k, we must now determine whether we believe that this experiment should be carried out; we do this based on a comparison of our expectation for the risk from this experiment with the risk from an immediate decision. If E[r̄^(i)_k(d̂_k)] + c_k(d̂_k) < ρ^trm_{k−1}, then we choose to carry out the k th experiment at this design setting; otherwise, we opt for an immediate decision based on our beliefs p(q | z_[k−1], d_[k−1]). Assessing the value of further waves Having fully determined our course of action based upon our current beliefs about the risk function, we also wish to make a judgement about the value of running further waves of the approximation procedure (Algorithm 2) to learn more about the risk. To help us make this judgement, we can compute the expected value of perfect information (EVPI) about the risk. If we knew the risk function r̄^(i)_k exactly, and we also knew the optimal design setting d*_k, then the risk from an optimal course of action would be min{ρ^trm_{k−1}, r̄^(i)_k(d*_k) + c_k(d*_k)}. Suppose that, after having chosen some d̂_k as the design setting for the next experiment, we discover that d*_k is in fact the true optimal design for the k th experiment; in this situation, our expectation for the loss incurred by choosing to experiment at d̂_k rather than at d*_k defines the EVPI v^(i)_{k−1}, namely the difference between the risk of our chosen course of action and our expectation of the minimum above. The expectation of the second term is approximated by sampling candidate designs d̂_k as outlined in Section 3.4 and Algorithm 3. The EVPI v^(i)_{k−1} constitutes an upper bound on the amount that we should be willing to pay to learn about the risk from the k th experiment; we therefore decide whether to carry out a further wave of analysis by comparing v^(i)_{k−1} with the resource cost (set-up, computer time etc.) that would be incurred by performing another wave of the procedure (Algorithm 2). This comparison requires a judgement on the part of the user. We should certainly not pay more than v^(i)_{k−1} for further analysis, since we are sure that we will not gain this much; however, paying c^wv_{i+1} for wave (i + 1) will not necessarily result in a correspondingly valuable reduction in our uncertainty about the risk. The actual reduction in the risk will depend on the characteristics of the problem under study, and the quality of the emulators that we can fit. It may be possible, for some problems, to make simple judgements about the reduction in uncertainty that we might achieve (in the style of Goldstein and Seheult (2008)) and then compare the resulting EVPI with the current value; this is beyond the scope of the current work. Input selection for the next wave If we decide using EVPI (Section 3.5) that we expect to gain from further analysis, then we may choose to run another wave of the algorithm (indexed by (i + 1)). At this wave, we re-emulate the risk functions inside the candidate design spaces identified by the emulators for the previous wave. At the first wave, we fit our emulators over the whole of the design space, allowing us to build up a picture of risk behaviour across the space, and to make an initial identification of designs which can be ruled out as unlikely to minimise the risk. 
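As a concrete illustration, a minimal sampling-based EVPI estimate might look as follows. Here risk_mean_at is a hypothetical callable returning the emulator's expected risk-plus-cost at a design, and the candidate designs play the role of draws of the unknown optimiser d*_k.

import numpy as np

# Sketch: estimate the EVPI about the risk by sampling candidate optimal
# designs from the candidate design space (Section 3.4, Algorithm 3).

def evpi(rho_trm, d_hat, candidate_designs, risk_mean_at):
    current = min(rho_trm, risk_mean_at(d_hat))        # risk of our current plan
    with_info = np.mean([min(rho_trm, risk_mean_at(d)) for d in candidate_designs])
    return max(0.0, current - with_info)               # EVPI is non-negative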
At later waves, we can then focus our modelling efforts on those regions of the space which we have not yet ruled out, building up a more accurate picture of risk behaviour in these regions and potentially allowing us to rule out more parts of the design space. This approach is similar to history matching: see, for example, Vernon et al. (2010). When generating the risk data for these new emulators, we should be careful to focus only on those parts of the design space which are still interesting. This issue is discussed further in Section S2.4. Example: Linear model We first illustrate the method described in Section 3 through application to a simple Bayesian linear model. In Section 4.1, we specify our model linking the model parameters and design inputs to the data, and specify the decision problem which we will solve; then, in Section 4.2, we run the approximate backward induction algorithm, and interpret the results. Model and decision problem We assume that a scalar observation z_j is available at each stage j, and that these observations are related to the model parameters and design inputs as z_j = h(d_j)^T q + ε_j, where h(d_j) = (h_1(d_j), . . . , h_{n_h}(d_j))^T is a vector of n_h basis functions, and we assume that there are no external parameters w_j affecting the outcome at any stage. We assume that q = (q_1, . . . , q_{n_h})^T has a multivariate Gaussian prior distribution (q ∼ N(μ_q, V_q)), and that the errors ε_j are independent, with zero-mean Gaussian distributions (ε_j ∼ N(0, v_j)). We specify that our losses at all stages depend on the value of a new observation ẑ(q) = h(d̂)^T q + ε̂ at a known location d̂, through the weighted quadratic loss l(ẑ)(a_j − ẑ)^2 (12), where l(ẑ) is a known weight function. Using this loss function, the risk from an optimal terminal decision at stage j is ρ^trm_j = min_{a_j} E[l(ẑ)(a_j − ẑ)^2 | z_[j], d_[j]]. (13) Due to the Gaussian specifications for the prior and the error structure, the distribution p(ẑ | z_[j], d_[j]) is also a Gaussian distribution; we find that ẑ | z_[j], d_[j] ∼ N(μ_ẑ(z_[j], d_[j]), V_ẑ(d_[j])), where H(d_[j]) is the design matrix created by stacking the vectors h(d_j)^T as rows, and μ̂_q(z_[j], d_[j]) and V̂_q(d_[j]) are the parameters of the posterior Gaussian distribution p(q | z_[j], d_[j]). If we differentiate the integral from (13) with respect to a_j and set it to zero, we find that the optimal decision is a*_j = E[l(ẑ) ẑ] / E[l(ẑ)] for E[l(ẑ)] > 0, where expectations are taken with respect to p(ẑ | z_[j], d_[j]), and that ρ^trm_j = E[l(ẑ) ẑ^2] − E[l(ẑ) ẑ]^2 / E[l(ẑ)]. If we choose a polynomial expression as the weighting function, the Gaussian form of the predictive distribution means that the risk can be computed in closed form using expressions for the non-central moments of a univariate Gaussian; due to the simplicity of this specification, then, we can compute the terminal risk at any stage in closed form, without having to resort to numerical integration or sampling schemes. Running the algorithm We now run Algorithm 2 for a two-stage version of the problem outlined in Section 4.1. For this example, we specify that d_j ∈ [−1, 1] for both stages, and we fix the same two-component basis function vector h(d_j) for both stages. The prior parameters of p(q) are fixed to μ_q = (0, 0)^T and V_q = diag(0.5^2, 0.5^2), and the measurement error variance is chosen to be different at each stage, with v_1 = (0.5)^2 and v_2 = (0.1)^2, so that we have the option of a more accurate measurement at the second stage. The weighting function for the loss (12) is chosen to be l(ẑ) = 1 + ẑ^2, so that l(ẑ) > 0 everywhere. The observation costs are set to be constant, with c_1(d_1) = 0.05 and c_2(d_2) = 0.2, so that the second measurement is also more expensive. 
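The conjugate update underlying this example is standard; below is a minimal sketch. The two-term basis h(d) = (1, d)^T is an illustrative assumption (the paper fixes a two-component basis, but its exact form is not reproduced in the text), and the observed z value in the usage line is purely hypothetical.

import numpy as np

# Sketch of the conjugate update for the Gaussian linear model
# z_j = h(d_j)^T q + eps_j, with q ~ N(mu_q, V_q) and eps_j ~ N(0, v_j).

def h(d):
    return np.array([1.0, d])            # assumption: intercept + linear term

def posterior(mu, V, d, z, v):
    """One-observation Gaussian update of (mu_q, V_q) after seeing z at d."""
    hd = h(d)
    s = hd @ V @ hd + v                  # predictive variance of z
    gain = V @ hd / s
    mu_new = mu + gain * (z - hd @ mu)
    V_new = V - np.outer(gain, hd) @ V
    return mu_new, V_new

# Usage with the example's prior settings (z = 0.3 is hypothetical):
mu_q, V_q = np.zeros(2), np.diag([0.25, 0.25])       # V_q = diag(0.5^2, 0.5^2)
mu1, V1 = posterior(mu_q, V_q, d=-0.018, z=0.3, v=0.5**2)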
First wave At the first wave of the algorithm (i = 1), we fit emulators to the risk over the whole of the design space. Because of the simplicity of this problem, the terminal risks ρ^trm_j are simple to evaluate at both stages; we therefore use these as the basis functions for our emulators r^(1)_j, with the data z_j substituted for its conditional expectation. We use separable squared exponential covariance functions, which ensure that it is easy for us to compute moments of r̄^(1)_j, and we assess the quality of the fit graphically, comparing the true risk with the mean prediction under the model at each stage. These plots show that the true risk lies within three standard deviation error bars of the mean prediction at all points, and that the model captures important aspects of the variation in the risk across the design space. Stopping Based on this first wave of analysis, we assess the optimal course of action under our current beliefs (Section 3.5). First, we interrogate the expected risk at a Latin hypercube of 2000 points, and find that d̂_1 = −0.018, with E_{R^(1)}[r̄^(1)_1(d̂_1)] + c_1(d̂_1) = 0.2520 and Var_{R^(1)}[r̄^(1)_1(d̂_1)] = (0.0006)^2 at this point. The risk from an immediate prior decision is ρ^trm_0 = 0.6345, and so it is clear that it is optimal under current beliefs about the risk to carry out at least the first experiment. The EVPI for the risk is 0.0011; this should be compared to the cost of another wave in order to determine whether further analysis should be performed. In any case, we perform another wave to further illustrate the procedure from Section 3. Second wave At the second wave, we re-fit the emulators within the candidate design space from the first wave. In order to cut down on computation time, we characterise the candidate design space at both stages using simple limits (see Section S2.3). The candidate design space D^(1)_1 is illustrated in Figure 3; a set of 200 candidate design samples are shown alongside the emulator r̄^(1)_1 from which they are drawn. For our emulators, we use the same basis and covariance functions as at the first wave. Further details of the fit at this wave are provided in Section S3.2. Stopping We repeat the assessment from Section 3.5 using the emulators from the second wave. First, we interrogate the mean surface at a Latin hypercube of 2000 points, fixing d̂_1 = 0.103, at which point the adjusted variance of the risk is Var_{R^(2)}[r̄^(2)_1(d̂_1)] = (0.0002)^2. After this wave, the EVPI for the risk is 0.0001; a set of 200 samples from the candidate design space are shown in Figure 4, alongside the emulators r̄^(i)_1 for waves i = 1, 2. Using this plot and the reduced EVPI value, we see that we have reduced our uncertainty about the risk at the second wave; it is becoming clear that the risk is rather flat in this region, so the candidate design space D^(2)_1 is not much smaller than the space D^(1)_1 from the first wave. Example: Airborne sensing problem We now apply the procedure outlined in Section 3 to a more complex problem. We consider an atmospheric dispersion problem, in which our goal is to infer the emission rates of a set of ground-based gas sources using concentration measurements collected along a sequence of flight paths. We would like to plan the sequence of flights in such a way that we obtain the 'best information' about possible sources of gas (according to some loss function). In Section 5.1, we outline the model that we use for this problem, and in Section 5.2, we specify the components of the design problem that we will solve. 
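The interrogation step used here is straightforward to implement. The following sketch, using SciPy's quasi-Monte Carlo module and a toy risk function in place of the fitted emulator mean, illustrates the pattern of searching a Latin hypercube of trial designs for the minimiser.

import numpy as np
from scipy.stats import qmc

# Sketch: interrogate the emulated expected risk at a Latin hypercube of
# trial designs and return the minimising design and its risk value.

def interrogate(expected_risk_mean, lower, upper, n=2000, seed=0):
    sampler = qmc.LatinHypercube(d=len(lower), seed=seed)
    trial = qmc.scale(sampler.random(n), lower, upper)   # scale to design box
    vals = np.apply_along_axis(expected_risk_mean, 1, trial)
    return trial[np.argmin(vals)], vals.min()

# e.g. d_hat, risk = interrogate(lambda d: float(d[0] ** 2), [-1.0], [1.0])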
Then, in Section 5.3, we outline the application of the approximation procedure (Algorithm 2) to this problem, and discuss the results produced. Model specification A commonly-used model for an atmospheric dispersion problem is the stationary Gaussian plume (see, for example, Hirst et al. (2012), Stockie (2011) or Jones et al. (2016)); this is a simple solution to the advection-diffusion equation (which gives a more general description of atmospheric dispersion), obtained under a number of simplifying assumptions, which describes the steady-state concentration downwind of a source under a wind direction which is constant over a suitable time-scale. We denote the location of an individual measurement by x = (x_x, x_y, x_h)^T and the location of a single source by c = (c_x, c_y, c_h)^T; projecting the source–observation vector onto the wind direction vector w = (w_x, w_y) gives the wind-projected distance ω. In terms of this wind-projected distance, the contribution made by a source located at c with emission rate ψ to the measurement made at x is given by a(ω, σ)ψ, where a is the Gaussian plume coupling coefficient, computed from ω and the plume standard deviations σ = (σ_y, σ_h)^T. These horizontal and vertical plume standard deviations depend on the downwind distance from source to measurement as σ_y = ω tan(γ_y) and σ_h = ω tan(γ_h), where γ = (γ_y, γ_h)^T are horizontal and vertical plume opening angles (measured in degrees), which can be estimated from atmospheric data, and are treated as known for the purposes of this analysis. In this example, we design for a multi-source problem; the concentration contribution from a set of n_s sources located at {c_k}^{n_s}_{k=1} with emission rates {ψ_k}^{n_s}_{k=1} to a measurement at x under wind field w is simply the sum of the individual source contributions, y(x, w) = Σ^{n_s}_{k=1} a(ω_k, σ_k) ψ_k, (14) where y is measured in parts per volume (ppv). In Section 5.2, we combine this model with a function describing the flight path to obtain the full model specification. Design problem Flight parametrisation We are interested in inferring the emission rates of a grid of ground-based sources within a rectangular survey area. The sensor which we will use to collect concentration measurements is to be mounted on an aircraft, which will be flown at some altitude over the survey area. We assume that the sensor will make observations at a fixed rate during the course of a flight, and that flight paths will be pre-planned according to some low-dimensional parametrisation. We specify that each flight will consist of n_fl regularly-spaced, parallel transects of a given length. Each flight path is completely determined by five parameters (denoted d_x, d_y, d_h, d_w and d_d below). We specify that n_ob measurements will be made on each transect, and that all transects are perpendicular to the wind direction. We specify a deterministic map between the five design parameters and the vector of locations of the individual concentration measurements, which we denote by x_p = t_p(d). Combining this mapping with the concentration model (14), our model for the data z_j observed at stage j of the problem is z_j = y(t_p(d_j), w_j) + ε_j, (15) where d_j = {d_jx, d_jy, d_jh, d_jw, d_jd} is the five-dimensional design input setting at the j th stage, w_j = {w_jx, w_jy} is the two-dimensional wind field input (the external parameter set) at stage j, and ε_j is an (n_fl × n_ob)-dimensional vector of uncorrelated measurement errors. In general, if there were further systematic effects in the concentration profile not related to the source contributions (e.g. a smoothly-varying background concentration), we would include additional terms in equation (15). 
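The text above does not reproduce the coupling-coefficient formula itself, so the sketch below uses the textbook Gaussian plume form with ground reflection as an assumption; the opening-angle defaults are likewise purely illustrative.

import numpy as np

# Sketch of a Gaussian plume forward model. The coupling coefficient below
# is the textbook ground-reflection form, stated as an assumption (the
# paper's exact expression is not reproduced in the text).

def coupling(x, c, wind, gamma_y=5.0, gamma_h=5.0):
    dx, dy = x[0] - c[0], x[1] - c[1]
    wnorm = np.hypot(*wind)                              # wind speed
    omega = (dx * wind[0] + dy * wind[1]) / wnorm        # downwind distance
    if omega <= 0:
        return 0.0                                       # no upwind contribution
    cross = (-dx * wind[1] + dy * wind[0]) / wnorm       # crosswind offset
    sy = omega * np.tan(np.radians(gamma_y))             # plume spread (horizontal)
    sh = omega * np.tan(np.radians(gamma_h))             # plume spread (vertical)
    vert = (np.exp(-0.5 * ((x[2] - c[2]) / sh) ** 2)
            + np.exp(-0.5 * ((x[2] + c[2]) / sh) ** 2))  # ground reflection
    return np.exp(-0.5 * (cross / sy) ** 2) * vert / (2 * np.pi * sy * sh * wnorm)

def concentration(x, sources, rates, wind):
    """Summed contribution of all sources to a measurement at x (cf. eq. (14))."""
    return sum(coupling(x, c, wind) * psi for c, psi in zip(sources, rates))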
We assume that the wind parameters are independently uniformly distributed at each stage, over stage-specific ranges (all units are metres/second); these prior distributions are chosen to give an example in which the prevailing wind direction is different at all stages. When designing for data collection over a real region, suitable prior distributions could be constructed from historic measurements of the wind field, or from wind data collected immediately prior to the measurement campaign. Model and decision problem We specify the following loss function for all stages of the problem, L(a, q) = C + l Σ^{n_s}_{k=1} (a_k − ψ_k)^2, where q = {ψ_1, . . . , ψ_{n_s}} is the set of emission rates for the grid of sources, C is a baseline cost, and l is a scalar multiplier for the quadratic component. Under this loss function specification, the risk from an optimal terminal decision is ρ^trm_j = C + l Σ^{n_s}_{k=1} Var[q_k | z_[j], w_[j], d_[j]]. (16) In this example, we choose to make a prior second-order specification for all of the components of the model (15), and we characterise the posterior distribution (1) in terms of our adjusted moments at each stage; under this assumption, the conditional variances in equation (16) are simply equivalent to the adjusted variances Var_{z_[j]}[q_k | w_[j], d_[j]]. For further discussion of this point, see Section S4.1 of the supplementary material. The cost of the flight path is also assumed to be constant across stages, and consists of a constant setup cost for each flight, and a cost per unit distance flown: c_j(d_j) = c_set + c_dst × (total distance flown), where c_set is the constant setup cost, c_dst is the cost per unit distance, and x_0 = (x_0x, x_0y, 0)^T is the location of the airport from which the aircraft takes off. Figure 5 illustrates an inference carried out using flights parametrised in this way; Figures 5(a) to 5(c) show the expected concentration measurements for a grid of points in (x, y) space, for an emission rate of ψ_k = 1 at both sources (locations indicated by magenta markers), under the wind conditions indicated by the black arrows, with observations of this concentration field made at the black markers. Running the algorithm In this section, we outline the application of the approximate sequential design Algorithm 2 to this problem, and discuss its results. Further details related to this section are provided in Section S4 of the supplementary material. First wave At the first wave of the algorithm, we fit our emulators over the whole of the design space. At each stage, the emulator is fitted to the risk using the procedure outlined in Section 3.2 and Section S2.1. In all models, the regression basis consists of an intercept term and an approximation to the risk based on a comparison between the current terminal risk and the risk from a good design at the next stage, and the correlation function is chosen to have a squared exponential form. Further details of the modelling choices made in this example are given in Section S4.2. Figure 6 shows a set of 100 samples from the candidate design space at stage j = 1 after wave i = 1 of the algorithm. We see that our modelling of the risk function has restricted the ranges of settings of all components of d_1 which appear in our candidate design space; this restriction is perhaps greatest in the (d_1x, d_1y) plane. Stopping After the first wave, we assess the optimal course of action under our current beliefs (Section 3.5). First, we interrogate the expected risk (including the design cost) at a Latin hypercube of 2000 points and fix d̂_1 as the minimising design; comparing the resulting expected risk with the risk from an immediate decision shows that it is optimal under current beliefs to conduct at least the first experiment. 
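A sketch of this cost structure follows; the list of transect waypoints is a hypothetical stand-in for the paper's deterministic map t_p(d), and the inclusion of a return leg to the airport is an assumption.

import numpy as np

# Sketch of the stage cost: a fixed setup cost plus a per-distance cost over
# the path flown (airport -> transect waypoints -> airport; the return leg
# is an assumption).

def flight_cost(waypoints, x0, c_set=1.0, c_dst=0.01):
    path = np.vstack([x0, waypoints, x0])
    legs = np.linalg.norm(np.diff(path, axis=0), axis=1)
    return c_set + c_dst * legs.sum()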
Based on a set of 100 samples from the candidate design space, we assess that the EVPI for the risk is 0.62; in practice, whether we choose to perform further analysis on this basis will depend on the cost of our computational resources relative to the risk from the experiment. In any case, we perform another wave to further illustrate aspects of the procedure from Section 3. Second wave At the second wave of the algorithm, we re-fit another sequence of emulators in the candidate design spaces at each stage (see Section 3.4). Figure 7 shows a set of candidate designs for each of the 3 stages of the problem, sampled according to the procedure in Algorithm 3 from the candidate design spaces D^(1)_j at each stage j = 1, 2, 3; designing within the candidate design space at each stage restricts the candidate design space at each subsequent wave. In this figure, red contours enclose regions with higher densities of sampled points, and blue contours enclose regions with lower densities. From this, we see that the candidate design spaces are strongly restricted in (d_x, d_y) space at all stages; on this basis, we decide to generate designs for the second wave of the procedure by approximating the candidate design space in (d_x, d_y) using simple limits. For further discussion of this point, see Section S2.4. Our emulator fitting procedure at this wave is the same as that at the first wave. Further information specific to the procedure at this wave is provided in Section S4.2. Figure 8 shows the expectation of the risk E_{R^(2)}[r̄^(2)_1(d̂_1)] + c_1(d̂_1) at a set of 100 candidate designs d̂_1, sampled as in Algorithm 3; this should be compared with the equivalent set of samples from the candidate design space at the first wave shown in Figure 6. We see that after this wave, the candidate design space has roughly the same shape in the (d_1x, d_1y) plane, and that we have started to see greater restrictions placed on the settings of d_1h, d_1w and d_1d that appear in our candidate design space. The adjusted standard deviation of the risk at the selected design after this wave is 0.35, and the EVPI after this wave is 0.04, so the second wave of analysis has resulted in a reduction of our uncertainty about the risk. Discussion In this study, we considered the Bayesian optimal design problem in the situation where the data is available from a series of experiments (with associated costs), and where there is the option after each to evaluate the expected benefit from the remaining series of experiments and to either stop and use the data to make decisions, or to collect the next set of observations. We outlined the backward induction procedure that is used to solve such problems, and explained the computational issues that this algorithm presents in the general case. An approximating framework was proposed which uses Bayes linear emulators to perform some of the difficult calculations; these emulators capture a large amount of the structure of the problem, and allow us to use various existing tools to track the uncertainty in the calculation through the stages of the numerical procedure. This approximation proves beneficial in application to both a simple linear model example, and to a more complex atmospheric dispersion modelling problem. In the work reported here, we have considered problems with up to 3 potential future experiments. This involved hours of computation on a reasonably powerful laptop. In general, we might want to consider problems with many more stages in the backward induction calculation. 
For problems with reasonably high-dimensional collections of data, design inputs and external inputs at each stage, computational complexity may become prohibitive. There is a trade-off between the per-stage dimensionality of the design problem to be solved and the number of stages that can be handled using our approximation: for problems with lower-dimensional input spaces, we can potentially consider a larger number of future experiments; whereas for problems with higher-dimensional input spaces, we may be restricted to a smaller number of stages. This work suggests several interesting areas for possible future research: first, it is often the case that the data collected on the system during the course of a sequential sampling scheme is used not only to make inferences about the parameters of the model, but also to motivate improvements to the model. The experimentation plan may specify that model development is to take place after the completion of all stages, or improvements may be planned between experimental stages. Goldstein and Rougier (2009) introduced a framework which links improved versions of a model to the current implementation and to the system under study; if we were to replace the model specification in Section 3 with this framework, then the approximate backward induction procedure could be modified in order to generate designs which would take into account both the availability of future observations, and the possibility of future model development. Assessing our uncertainty about the risk at its minimum is the most difficult task which we must perform as part of our backward induction approximation. The procedure presented in Section 3.4 works well for low-dimensional problems where the risk function is relatively smooth; however, in higher-dimensional problems, or where there are multiple, disconnected regions of the design space which could minimise the risk, it becomes more difficult to use. Additionally, the procedure is sensitive to the size of the trial design used, and it is computationally difficult to draw enough samples to assess the variability in the minimum risk for each input setting. Characterising the distribution of the minimum of a stochastic process is an active research area (see, for example, Hennig and Schuler (2012) or Adler (1981)); theoretical guarantees for the behaviour of the sampling scheme from Section 3.4, or the development of an alternative technique which does have such guarantees, would increase confidence in our ability to properly track uncertainties across the stages for a greater range of sequential design problems.
12,593
2018-12-31T00:00:00.000
[ "Mathematics" ]
Microstructure and Oxidation Behavior of CrAl Laser-Coated Zircaloy-4 Alloy Laser coating of a CrAl layer on Zircaloy-4 alloy was carried out for the surface protection of the Zr substrate at high temperatures, and its microstructural and thermal stability were investigated. Significant mixing of the CrAl coating metal with the Zr substrate occurred during the laser surface treatment, and a rapidly solidified microstructure was obtained. A considerable degree of diffusion of solute atoms and some intermetallic compounds were observed to occur when the coated specimen was heated at a high temperature. Oxidation appears to proceed preferentially in the Zr-rich region rather than the Cr-rich region, and the incorporation of Zr into the CrAl coating layer deteriorates the oxidation resistance because of the formation of thermally unstable Zr oxides. Introduction Zirconium alloys have been widely used as nuclear materials because of their high chemical stability under the normal operating conditions of a pressurized or boiling water reactor, low absorption cross-section of thermal neutrons, and fairly good mechanical properties. However, Zr alloys are vulnerable to the high temperature oxidation that can occur in the case of accidents. Since explosive hydrogen is formed by rapid zirconium oxidation, a reduced oxidation rate of Zr alloys at high temperatures is necessary to improve the accident tolerance [1][2][3][4]. As a short-term solution to the problem, protective coating through an efficient and economical method such as laser coating and thermal spray can be considered [2,4]. It has been previously reported that a laser coating of chromium on Zircaloy-4 cladding tube could enhance the high-temperature oxidation resistance significantly [2]. The diffusion of oxygen into the Zr substrate was observed to be effectively restricted by the Cr coating layer during the oxidation. When a coated alloy is exposed to a high temperature, significant diffusion and microstructural variation may occur at the coating/substrate interface, even for a short time. In the case of laser-treated FeCrAl coating on Mo alloy, an interfacial reaction was found to occur at high temperature [5]. Meanwhile, it was reported that Cr-Al composite coatings were very effective in enhancing the oxidation resistance of metals at high temperatures [6]. Since Al2O3 and Cr2O3 can form on the surface of Cr-Al coated alloys, the high temperature oxidation resistance is expected to increase even further. Although the superiority of Cr-Al laser coating has already been demonstrated, research on the stability of the microstructure at high temperature is not yet sufficient. In the case of Cr-coated Zr alloy, comparatively few phases can be formed between the Cr coating layer and the Zr substrate according to the Zr-Cr binary phase diagram [7]. However, in the case of CrAl-coated Zr alloy, more complicated interfacial reactions would occur during solidification and when it is exposed to high temperatures. Therefore, in the present research, CrAl laser-coated Zr alloys were exposed to high temperatures, and then their microstructural variation and oxidation behavior were investigated. 
Materials and Methods Zircaloy-4 alloy (Zr-1.38%Sn-0.2%Fe-0.1%Cr, wt.%) sheets were used as the substrate, and a CrAl (30 wt.% Al) coating layer with an average thickness of about 300 µm was deposited on the surface of the Zr alloy through a laser coating process. A photograph of the laser equipment for coating is shown in Figure 1. The laser coating was carried out by using a continuous wave (CW) diode laser (wavelength of 1062 nm) with a maximum power of 300 W (PF-1500F model; HBL Co., Daejeon, Korea) and a power supply (Pwp14Y04K model; Yesystem Co., Daejeon, Korea). Coating process variables such as laser power, powder injection speed, specimen moving velocity, and gas flow speed were adjusted based on previous research results [8]. The applied power for the laser treatment ranged up to 300 W, and the scanning speed was 14 mm/s. To prevent any oxidation during the process, an inert gas (Ar) was continuously blown onto the melted surface of the specimen. The mean size of the CrAl alloy powders used as the raw material for coating was 90 µm. As previously mentioned, when used for nuclear fuel claddings or the like, the coated alloy can be accidentally exposed to a high temperature for a long time, and it is presumed that the atmosphere is mainly water vapor. For ease of experiment, firstly, microstructural variations of the laser-coated specimens were investigated under argon or air atmosphere at 1100 °C for different holding times. Then, an oxidation test of the CrAl and CrAlZr alloys was conducted in a steam atmosphere at 1200 °C for 1 h. Cr-30 wt.% Al and Cr-30 wt.% Al-20 wt.% Zr alloy specimens were prepared through a vacuum arc remelting process to directly investigate the characteristics of the coating layers without the Zr substrate. Microstructural analyses were performed using SEM (JEOL, Tokyo, Japan) equipped with an energy dispersive X-ray spectrometer (EDS, JEOL, Tokyo, Japan), and an X-ray diffractometer (XRD, Rigaku, Tokyo, Japan). 
Microstructure of CrAl Laser-Coated Zr Alloy Figure 2 shows SEM micrographs of the CrAl laser coating layers on Zircaloy-4 alloy. Some Zr content could be measured in the CrAl coating layer far away from the Zr substrate (sometimes even near the coating surface), implying that significant intermixing of the CrAl coating and the Zr substrate occurred during the laser coating. The light bottom area is the Zr substrate, and the Zr content decreases as the distance from the substrate increases. A Zr-rich region appears between the Zr substrate and the CrAl coating layer. The composition of the Zr-rich part indicates the formation of a solid solution of Cr and Al in Zr. As indicated in Figure 2b, the majority of the Cr-rich coating layer is composed of two rapidly solidified regions: Cr-rich and AlZr(Cr) phases. Namely, dendritic primary Cr and AlCr phases were not observed. Additionally, in the case of the AlZr phase, it shows a dendritic morphology. This discrepancy seems to be because the growth occurred under a rapid solidification condition [10]. 
CrAl Laser-Coated Zr Alloy Exposed to a High Temperature Since the CrAl-coated Zr alloy is intended to be resistant at high temperatures, the coated specimen was isothermally heated in an inert atmosphere at 1100 °C for different times. Figure 3 indicates that inter-diffusion among the phases in the coating layers apparently occurred after 2 h. The diffusion of aluminum appears to be significant, so that an aluminum content can be detected in the Zr substrate. Generally, three distinct parts are observed: the Zr substrate, a Zr-rich area, and a Cr-rich area (the majority of the coating layer). An intermediate area between the Zr-rich and Cr-rich areas may be counted, but it was excluded as it can be regarded as a part of the Zr-rich area. As indicated in Figure 4, the isothermal heating clarified that the microstructure of the Zr-rich region near the substrate is composed of two regions: Zr-rich and Cr-rich. The Zr-rich area is the Zr phase, and the Cr-rich area is postulated to be CrZr plus AlZr phases. Figure 5 also shows that the Cr-rich area is composed of three distinct phases. The main phase is Cr, containing about 40 at.% Al, which is near the maximum solubility limit for Cr at 1100 °C. The others are Al-rich phases that include either a relatively large or a very limited Zr content. According to the Pandat prediction and the literature [9,11,12], the Al-rich phases with large Zr contents seem to be Al3Zr. Meanwhile, the Al-rich phase with a very low Zr content should be Al8Cr5. The microstructure of the coated specimens isothermally heated for 10 h was also investigated. However, the coated specimens maintained for 10 h showed similar microstructural characteristics to those of the specimens held for 2 h. It is believed that the microstructure of the coating layer could be converted into a near-equilibrium structure even with a holding time of just 2 h, since 1100 °C is a very high temperature. 
The chemical composition distribution as a function of depth for the CrAl-coated specimen exposed to high temperature air is shown in Figure 6. As in Figure 3, three distinct parts are generally observed in the specimen, and it is clear that oxidation proceeded only a little, and only at the surface. If cracks are formed in the coating layer, they are undoubtedly undesirable, because the protective coating may become detached from the substrate. Vertically-formed cracks are supposed to be more detrimental to oxidation resistance, since oxygen ions can move easily through the cracks into the substrate. The formation of Zr oxides was observed on the Zr substrate near a vertical crack, as shown in Figure 7. However, a significant oxygen content was measured only at the top surface of the CrAl coating layer in the sound region, and this suggests that the CrAl coating layer was effective in delaying the high temperature oxidation. It also appears that Al and Zr oxidize more preferentially than Cr in the coating layer. Unlike Al oxides, Zr oxides are not protective against oxidation toward the matrix at high temperatures [1,2]. Therefore, the mixing between the CrAl coating layer and the Zr substrate should be carefully controlled to minimize the Zr content at the top of the surface coating layer. 
Oxidation Behavior of CrAl Laser-Coated Zr Alloy at High Temperature To clearly compare the oxidation resistance of the CrAl coating layer with that of the Zr-incorporated CrAl layer, Cr-30 wt.% Al and Cr-30 wt.% Al-20 wt.% Zr alloy cast specimens were fabricated by vacuum arc remelting, and an oxidation test in steam at 1200 °C for 1 h was carried out. In the case of an accident, the temperature of nuclear cladding can increase extremely, and the environment is expected to remain a steam atmosphere. The corrosion resistance under an atmosphere containing moisture can be quite different from that under dry air. Although Si and SiO2 are highly corrosion-resistant materials, they were quickly dissolved in a pressurized water condition at 360 °C and 18.9 MPa [2]. As shown in Figure 8, the coating layer without Zr possesses remarkably higher oxidation resistance than the Zr-mixed layer. Namely, a much higher weight gain was observed for the Zr-containing alloy as compared to the CrAl alloy without Zr. The oxidation behavior of the ZrCr30Al specimen (Zr alloy containing 30 wt.% Cr and 20 wt.% Al) was also compared for reference. Even though Cr and Al are contained in large amounts, it can be confirmed that the Zr alloy is seriously oxidized in a high temperature steam atmosphere. 
Figure 9 indicates that a stable Al2O3 phase is abundantly observed on the surface of the Cr-30%Al alloy. It is worth mentioning that a Cr2O3 phase was not found in that specimen. Although both Cr2O3 and Al2O3 phases are generally stable, the Al2O3 phase is believed to be more stable, resulting in a continuous external Al2O3 layer. This phenomenon has been known as transient oxidation [13]. If the content of aluminum in the coating layer is insufficient, it is considered that Cr2O3 is observed on the coating surface. Meanwhile, a significant amount of ZrO2 and Al2O3 phases was found in the case of the Zr-added alloy. Since the Zr oxide is not protective, the existence of a surface ZrO2 layer should be responsible for the comparatively lower oxidation resistance. 
Conclusions It was found that significant mixing between the CrAl layer and the Zr substrate and the formation of a rapidly solidified microstructure occurred during the laser surface coating process. Inter-diffusion among the solidified phases took place when the coated specimens were isothermally heated at 1100 °C, and resulted in the formation of equilibrium phases after just 2 h. Since Zr is easily oxidized and Zr oxides are not protective against oxidation, the Zr content at the top of the coating layer should be minimized to avoid deteriorated oxidation resistance.
Figure 1. Appearance of the laser equipment for coating.
Figure 3. SEM-EDS analyses of laser-coated Zircaloy-4 substrate alloy after the isothermal heating at 1100 °C for 2 h in inert atmosphere: (a) bright image; (b) backscattered image.
Figure 4. SEM-EDS analyses of the Zr-rich area of CrAl-coated Zircaloy-4 alloy after the isothermal heating at 1100 °C for 2 h in inert atmosphere.
Figure 5. SEM-EDS analyses of the CrAl coating layer on Zircaloy-4 alloy after the isothermal heating at 1100 °C for 2 h in inert atmosphere.
Figure 6. SEM micrographs with EDS profiles of the oxidized CrAl coating layer on Zircaloy-4 after isothermal heating at 1100 °C for 10 min in air.
Figure 7. SEM-EDS analyses of laser-coated Zircaloy-4 substrate alloy after isothermal heating at 1100 °C for 30 min in air: (a) near the substrate; (b) top surface of the coating layer.
Figure 8. Oxidation behavior of Cr-30%Al cast alloys with and without Zr after the steam oxidation test at 1200 °C for 1 h (Zr alloy containing Cr and Al is also compared for reference). 
Figure 9. Oxide phases observed on the surfaces of the Cr-30%Al and Cr-30%Al-20%Zr alloys after the steam oxidation test (Al2O3 on the Zr-free alloy; ZrO2 and Al2O3 on the Zr-added alloy).
Three-Dimensional Printing Assisted Laparoscopic Partial Nephrectomy vs. Conventional Nephrectomy in Patients With Complex Renal Tumor: A Systematic Review and Meta-Analysis

Objective: The purpose of this meta-analysis was to systematically assess the influence of three-dimensional (3D) printing technology in laparoscopic partial nephrectomy (LPN) of complex renal tumors. Methods: A systematic literature review was performed in June 2020 using the Web of Science, PubMed, Embase, the Cochrane Library, the China National Knowledge Infrastructure (CNKI), and the Wanfang databases to identify relevant studies. Data on operation time, warm ischemic time, intraoperative blood loss, positive surgical margin, reduction in estimated glomerular filtration rate (eGFR), and complications (including artery embolization, hematoma, urinary fistula, transfusion, hematuria, intraoperative bleeding, and fever) were extracted. Two reviewers independently assessed the quality of all included studies, and the eligible studies were analyzed using the Stata 12.1 software. A subgroup analysis was performed stratifying patients according to the complexity of the tumor, the surgery type, or the nephrometry score. Results: One randomized controlled trial (RCT), two prospective controlled studies (PCS), and seven retrospective comparative studies (RCS) were analyzed, involving a total of 647 patients. Our meta-analysis showed that there were significant differences in operation time, warm ischemic time, intraoperative blood loss, reduction in eGFR, and complications between the LPN with 3D-preoperative assessment (LPN-3DPA) and LPN with conventional 2D preoperative assessment (LPN-C2DPA) groups. Positive surgical margin did not differ significantly. Conclusion: The LPN-3DPA group showed shorter operation and warm ischemic times, less intraoperative blood loss, a smaller reduction in eGFR, and fewer complications for patients with complex renal tumors. Therefore, LPN assisted by three-dimensional printing technology should be a preferable treatment for complex renal tumors when compared with conventional LPN. However, further large-scale RCTs are needed to confirm these findings.

INTRODUCTION

With the advancement and widespread use of imaging technology, low-stage and small renal tumors are being detected more often in recent years, which has partly contributed to the dramatically increased incidence of renal tumors (1). Currently, there are three methods for partial nephrectomy: open surgery, laparoscopy, and robot-assisted laparoscopy. Since partial nephrectomy (PN) achieves an equivalent oncological prognosis and a lower incidence of adverse outcomes in comparison with radical nephrectomy (2), PN has gradually been recognized as a standard treatment for patients with clinically localized renal cell carcinoma with tumor size <4 cm and stage T1a, based on the renal cell carcinoma guidelines (3). In recent years, with the advancement of technology, the application of robot-assisted surgery systems has become a popular trend in the field of urologic surgery. The robotic surgery system has highly flexible robotic arms and manipulators as well as a high-definition three-dimensional (3D) operating system; it is characterized by accurate operation, fine anatomy, and clear vision, enabling surgeons to perform highly precise operations.
Previously, surgeons defined pathologies and planned the surgical approach using a conventional two-dimensional (2D) monitor projecting X-ray, computed tomography, and magnetic resonance image scans. However, for more complex lesions, including invisible feeding arteries and hilar or endophytic masses, conventional 2D preoperative assessment neither provides a sense of perspective nor facilitates these procedures (4). In recent decades, with the more widespread application of 3D printing technology in the medical field, doctors can obtain physical anatomical models based on patients' imaging data for preoperative assessment. In addition, a 3D-printed model can be used to study complex cases, to simulate and practice operations, to teach students, and to educate patients (5). It is unclear whether or not patients with complex renal tumors benefit from a 3D-preoperative assessment. Recently, several studies have directly compared the surgical and oncological outcomes of laparoscopic partial nephrectomy (LPN) with 3D-preoperative assessment (LPN-3DPA) vs. LPN with conventional 2D preoperative assessment (LPN-C2DPA) for complex renal tumors, but to date conclusions remain inconsistent. Therefore, it is necessary to conduct a systematic review and meta-analysis of the evidence to evaluate the efficacy and safety of LPN-3DPA in order to draw a more definitive and meaningful conclusion relative to its application. (Abbreviations: LPN, laparoscopic partial nephrectomy; 3D, three-dimensional; 2D, two-dimensional; LPN-3DPA, LPN with 3D-preoperative assessment; LPN-C2DPA, LPN with conventional 2D-preoperative assessment; RCT, randomized controlled trial; PCS, prospective comparative studies; RCS, retrospective comparative studies; non-RCT, non-randomized controlled trial.)

Study Design

Article selection proceeded according to the search strategy based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (6).

Participants, Interventions, and Comparator

Patients aged >16 years with complex renal tumor confirmed by pathology were enrolled in this study.

Outcomes

The following parameters were analyzed to determine the advantages of 3D-preoperative assessment: (1) perioperative parameters, including operation time, warm ischemic time, and intraoperative blood loss; (2) clinical outcomes, including positive margins and reduction in estimated glomerular filtration rate (eGFR); and (3) complications.

Search Strategy

Two authors independently and systematically searched the electronic literature databases. The search was performed in June 2020 in the Web of Science, PubMed, Embase, the Cochrane Library, the China National Knowledge Infrastructure (CNKI), and the Wanfang databases to identify relevant studies. No regional or language restrictions were applied. The following MeSH terms and text words were used in the search engines: "laparoscopic partial nephrectomy," "3D," "3 dimensional," "three dimensions," "three-dimension," "three dimensional," and "printing." In addition, the cited references of all selected articles were further assessed for potentially relevant papers.

Eligibility Criteria

A study was included in this meta-analysis if (1) it was a randomized controlled trial (RCT) or a non-RCT; (2) it compared 3D printing-assisted LPN vs.
conventional LPN for renal tumors; (3) 3D printing technology was used only for preoperative preparation; and (4) the study recorded at least one of the following outcomes for LPN groups with both 3D printing preoperative assessment and conventional 2D preoperative assessment: operation time, warm ischemic time, intraoperative blood loss, reduction in eGFR, positive surgical margin, or complications. Exclusion criteria were as follows: (1) case reports, letters, conference abstracts, review articles, or meta-analyses; (2) duplicated publications from the same author or organization; (3) studies lacking sufficient data for extraction; (4) lack of a nephrometry score or other evidence to assess the complexity of the tumor.

Selection of Studies

The selection of included studies was conducted independently by two authors based on the PRISMA flow diagram, and the search results were imported into the software EndNote X9.3.3 (Thomson Corporation, USA). First, we screened the titles and abstracts and excluded the duplicated and apparently irrelevant references. Then, the full texts of the remaining potential studies were downloaded and reviewed to exclude those that did not meet our inclusion criteria. Finally, all disagreements were resolved by a third independent author until a consensus was reached.

Data Extraction

Data were extracted and summarized from the included studies by two authors independently, and the consistency between them was checked by the third author. The extracted items were the following: (1) general study information, including the first author, year of publication, study type, patients enrolled, age, sex, body mass index, tumor size, RENAL score, and PADUA score; (2) perioperative parameters, including operation time, warm ischemic time, and intraoperative blood loss; (3) clinical outcomes, including positive margins and reduction in eGFR; (4) complications, including artery embolization, hematoma, urinary leakage, transfusion, hematuria, intraoperative bleeding, and fever. Continuous data were extracted as the mean, SD (standard deviation), and sample size. Dichotomous data were recorded as the number of events and the number of non-events.

Assessment of Study Quality

The methodological quality of the studies was assessed using the Risk of Bias Tool recommended by the Cochrane Collaboration for RCTs (7) and the Newcastle-Ottawa Scale (NOS) for non-RCTs (8).

Statistical Analyses

Data from the included studies were collected, and Stata 12.1 (StataCorp LP, College Station, TX, USA) was applied for the meta-analysis. Statistical heterogeneity was assessed using the I² statistic. The fixed-effects model was applied if no significant heterogeneity was detected or the statistical heterogeneity was low (I² ≤ 50%); otherwise, a random-effects model was used (I² > 50%). For heterogeneous data, subgroup analysis was performed to identify possible sources of heterogeneity. Subgroup analysis was performed by stratifying by complexity of the tumor, surgery type, or nephrometry score. Sensitivity analysis was conducted by consecutively omitting one single study at a time to evaluate the reliability of the pooled results. The standardized mean difference (SMD) was used for continuous outcomes and the odds ratio (OR) for dichotomous outcomes, both with 95% confidence intervals (CI). For studies presenting continuous data as means and range, standard deviations (SDs) were estimated using the technique described by Hozo et al. (9).
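To make the pooling rules just described concrete, below is a minimal Python sketch of the standard inverse-variance machinery: fixed-effect pooling, Cochran's Q and the I² statistic used to choose the model, and a DerSimonian-Laird random-effects fallback when I² > 50%. This illustrates the textbook formulas only, not the authors' actual Stata session; the function name `pool` and the numeric inputs in the last line are invented placeholders, not data from the included studies.

```python
import math

def pool(effects, variances, z=1.96):
    """Pool per-study effect sizes (e.g., SMDs or log-ORs) with their variances."""
    w = [1.0 / v for v in variances]                               # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)    # fixed-effect estimate
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0          # I^2 in percent
    if i2 <= 50.0:                                                 # low heterogeneity
        est, var, model = fixed, 1.0 / sum(w), "fixed"
    else:                                                          # DerSimonian-Laird tau^2
        tau2 = max(0.0, (q - df) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))
        wr = [1.0 / (v + tau2) for v in variances]
        est = sum(wi * yi for wi, yi in zip(wr, effects)) / sum(wr)
        var, model = 1.0 / sum(wr), "random"
    half = z * math.sqrt(var)                                      # 95% CI half-width
    return model, round(i2, 1), est, (est - half, est + half)

# Hozo et al. (2005): for a study reporting only a median m and range [a, b],
# mean ~ (a + 2*m + b) / 4 and, for moderate sample sizes, SD ~ (b - a) / 4;
# such estimates would be computed per study before calling pool().
print(pool([-0.40, -0.60, -0.30], [0.04, 0.05, 0.06]))
```

With the placeholder inputs above, Q is below its degrees of freedom, so I² is 0% and the fixed-effects branch is used, mirroring the decision rule stated in this section.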
Funnel plots and Begg's test were applied to assess publication bias, and p < 0.1 was defined as indicating significant publication bias (10). The trim-and-fill computation was used to estimate the effect of publication bias on the interpretation of the results.

Included Studies

A total of 496 candidate publications were identified through the Web of Science (n = 218), PubMed (n = 61), Embase (n = 128), Cochrane Library (n = 5), CNKI (n = 43), and Wanfang (n = 41) databases. After excluding the duplicate studies, 287 articles were screened for relevance on the basis of the title and abstract. Of the 19 articles that were deemed to meet the inclusion criteria based on the content of titles and abstracts, 9 were excluded because there was no control group in the papers or for other reasons (details are shown in Figure 1). The remaining 10 studies were included in the meta-analysis.

Characteristics and Qualifications of Included Studies

The basic characteristics of all 10 included studies (11-20) are summarized in Table 1. All studies were published between 2017 and 2020. Trial sample sizes ranged from 20 to 127 patients, for a total of 647 patients with renal tumors enrolled in our meta-analysis: 322 in the experimental group and 325 in the control group. The risk-of-bias assessment of the RCT is presented in Table 2. The Newcastle-Ottawa Scale was used to assess the risk of bias of the retrospective comparative studies (RCS) and prospective controlled studies (PCS), and total scores of 5-9 indicated that a study was at low risk of bias (Table 3). (Table 3 note: risk of bias was assessed with the Newcastle-Ottawa Scale; "*" denotes a score of 1 and "**" a score of 2; the total score of this scale is 9. A higher overall score corresponds to a lower risk of bias; a total score of 5 or less indicates a high risk of bias.)

Warm Ischemia Time

Five studies (11, 13-16) provided data on the warm ischemia time. The pooled meta-analysis results showed that the use of 3D-preoperative assessment shortened the warm ischemia time.

Complications

The pooled results showed that the LPN-3DPA group had a lower incidence of overall complications than the LPN-C2DPA group (OR = 0.57; 95% CI = 0.37-0.89; I² = 0.0%, P = 0.602; Figure 2F). Reported complications included artery embolization, hematoma, urinary fistula, transfusion, hematuria, intraoperative bleeding, and fever. Additionally, to avoid complications being over-reported, we counted the incidence of each of the above complications separately. The pooled results for each single complication indicated a trend toward a lower incidence of each complication in the LPN-3DPA group, although this failed to reach a statistically significant difference (Figure 3A).

Subgroup Analysis

In view of the high level of heterogeneity, we conducted a subgroup analysis in which the studies were categorized into subgroups according to the complexity of the tumor and surgery type or the nephrometry score. In terms of operative time, high heterogeneity was found in the mixed tumor group and the complex tumor group (Figure 3B).
But when the subgroup analysis was stratified by type of surgery or nephrometry score, the heterogeneity was low (SMD = −0.10; 95% CI = −0.70 to −0.14; I² = 23.7%, P = 0.252; Figure 3C) in the studies with robotic surgery or the PADUA nephrometry score, whereas the heterogeneity was high (SMD = −0.53; 95% CI = −0.88 to −0.19; I² = 69.7%, P = 0.002; Figure 3C) in the studies with non-robotic surgery or the RENAL nephrometry score. After the study by Fan et al. (11) was excluded, the overall heterogeneity declined dramatically (SMD = −0.44; 95% CI = −0.62 to −0.27; I² = 47.9%, P = 0.052; Figure 3D). In terms of the warm ischemic time, because the studies with robotic surgery or the PADUA nephrometry score did not record the warm ischemic time, we conducted the subgroup analysis only by complexity of the tumor. The results showed that lower heterogeneity was found in the mixed tumor group (SMD = −0.49; 95% CI = −0.83 to −0.16; I² = 49.8%, P = 0.113; Figure 3E) compared with the overall group (SMD = −0.97; 95% CI = −1.64 to −0.30; I² = 88.3%, P = 0.000; Figure 3E). To reduce the heterogeneity of the pooled result, we consecutively omitted the included studies one by one. After Wang et al.'s study (14) was excluded, the heterogeneity relative to the warm ischemic time declined significantly (SMD = −0.45; 95% CI = −0.67 to −0.23; I² = 49.8%, P = 0.113; Figure 3F).

Sensitivity Analysis and Publication Bias

We conducted a sensitivity analysis to examine the effect of any single study on the collective results by consecutively omitting each study. Due to the different follow-up periods underlying the reduction in eGFR, we could not evaluate its robustness. For the other remaining outcomes, except for complications, statistical robustness was evaluated by other methods, as shown in Figure 5. Publication bias was evaluated using funnel plots and Begg's test. Funnel plots are shown in Figure 6. Using Begg's test, no obvious publication bias was found regarding warm ischemic time (p = 0.462, Figure 7A), intraoperative blood loss (p = 0.107, Figure 7B), reduction in eGFR (p = 0.707, Figure 7C), or complications (p = 0.711, Figure 7D). Obvious publication bias was found regarding operation time (p = 0.012, Figure 7E) and positive surgical margin (p = 0.089, Figure 7F), but further analysis with the trim-and-fill test revealed that this publication bias did not impact the initial estimates (no trimming was performed; the data were unchanged).

DISCUSSION

Renal cell carcinoma is the most common solid lesion in the kidney, constituting ∼3% of all cancers, with the highest incidence in Western countries (21). Surgery is the only curative treatment for localized renal cell cancer. During PN, in order to obtain a clear operative field and precise surgical closure of the collecting system, surgeons must clamp the renal pedicle to interrupt renal blood flow during the procedure, especially for renal hilar tumors or those with deep parenchymal invasion. The longer the clamping time of the renal pedicle, the greater the impairment of renal function. Warm ischemia time is one of the most important predictors of renal function preservation after LPN. All efforts should be made to shorten the warm ischemia time as much as possible, especially when planning to perform LPN for complex renal tumors (22). Recently, with the advantages of the surgical robotic system, it has become possible for urologists to perform a meticulous microdissection of the renal arterial branches feeding the tumor during surgery (23).
In addition, several useful scoring systems such as the R.E.N.A.L. (24) and P.A.D.U.A. (25) nephrometry scores have been used to assess the complexity of the tumor. With the rapid development of 3D printing technology in recent years, 3D printing is not only widely applied in industries such as traditional manufacturing and electronics, but has also gained much interest in the medical field (26). The process of 3D printing involves making a 3D anatomical model via layer-by-layer printing (27). The procedure of 3D printing in medical practice includes the design of the 3D model based on medical imaging data with computer modeling software; the 3D model is then cut into slices, and the model is printed layer by layer (26). Through the 3D anatomical model, surgeons can not only precisely identify the location of tumors and the direction of the tumor-specific arterial branches and quantify the size of the renal defect, but can also predict the position of the blood vessels and collecting system, which might be damaged during the surgical resection of the tumor (28). With the combination of the abovementioned benefits, 3D printing technology can help surgeons make more meticulous preoperative preparations, as well as a rational choice of operative approach to minimize damage to the surrounding tissue. In PN, the term Trifecta, indicating negative margins, no complications, and maximal preservation of renal function, is used to evaluate the success of a procedure to some extent, which is the ultimate goal for urologists (15). In recent years, an increasing number of urologists have been applying 3D printing technology to the preoperative assessment of complex renal tumors. Thus, reports regarding the advantages of 3D-preoperative assessment for the treatment of complex renal tumors have emerged, but these benefits have not been confirmed by evidence-based science. In order to draw a definitive conclusion, we conducted this systematic review and meta-analysis to evaluate the safety and effectiveness of LPN-3DPA. In our review, we found that patients treated by LPN-3DPA had a shorter operation time and warm ischemia time and less intraoperative blood loss, although heterogeneity existed. To facilitate the meta-analysis and to minimize heterogeneity, we excluded the study by Fan et al. (11) in the subgroup analysis for the evaluation of operation time, because this study accounted for the major source of heterogeneity. After reading this article in detail, we identified two explanations for the long operative time. The first was that the operation time reflected the difficulty of the procedure, the technique used by the operator, and surgical experience. The second was that in three cases described in the article the surgical modality was switched, which obviously increased the operative time. As for the analyses of warm ischemia time and intraoperative blood loss, the study by Wang et al. (14) showed high heterogeneity in the sensitivity analyses. Similarly, we excluded it from the analysis of warm ischemia time and intraoperative blood loss. After careful assessment of the study, we concluded that it was a poorly planned retrospective comparative study, which was the reason for the apparent biases. Nonetheless, given the differences in the surgeons' skills, operating conditions, and scope of application of 3D technology, the heterogeneity between studies could not be completely eliminated.
Kidney injury caused by prolonged warm ischemia is an important cause of post-operative acute kidney injury and chronic kidney disease. How to minimize warm ischemic injury in PN and maximize the protection of renal function has always been a focus of national and international experts and scholars. In our meta-analysis, although the various studies had different follow-up times for post-operative renal function, the pooled results indicated that the LPN-3DPA group experienced less renal function impairment than the LPN-C2DPA group. With regard to complications, the incidence of serious complications has dropped to about 3 and 3.2% in the LPN and robot-assisted partial nephrectomy groups, respectively (29). In the overall meta-analysis, we found that there were fewer complications in the LPN-3DPA group. Conversely, we did not find any significant differences between the LPN-3DPA and LPN-C2DPA groups in the positive surgical margin, mainly due to the small sample size; besides, it has been reported that the positive margin rate of LPN is very low (only 0.7-4%) (30). Therefore, according to the above meta-analysis, it is clear that the application of a 3D-preoperative assessment not only can speed up the operational procedure but also benefits the prognosis of patients with complex renal tumors. To the best of our knowledge, this meta-analysis is the first to systematically evaluate the safety and effectiveness of LPN-3DPA in renal tumor patients, and the 10 studies included in the meta-analysis strictly adhered to our inclusion and exclusion criteria with high methodological quality. Therefore, the results of the meta-analysis are generally reliable. However, there were some limitations in our study. First, only 10 trials met the inclusion criteria after searching the various databases, and the included studies were small in sample size. The statistical power to detect differences in the outcomes was limited. Furthermore, three studies from the Chinese literature were included, which will not be accessible to non-Chinese researchers. Second, most of the studies included in this meta-analysis were retrospective comparative studies, which were more likely to have been subject to various biases and high heterogeneity. Third, 3D printing was generally used to assist LPN in patients with complex renal tumors, and the nephrometry score was used to evaluate the complexity of the renal tumors. In theory, the higher the nephrometry score, the greater the benefit of 3D printing-assisted LPN. However, due to the lack of original nephrometry-score data for each patient in the included literature and the small sample sizes of the studies, we could not address this issue in this analysis. Finally, as small study populations were included in our analysis, we believe that further results from high-quality trials and more rigorous, large-scale, long-term follow-up in RCTs should be provided to update this study.

CONCLUSION

Overall, for LPN performed in patients with renal tumors, 3D printing technology can help surgeons obtain more comprehensive information and provide theoretical guidance preoperatively. In our meta-analysis, LPN under the guidance of 3D printing technology was superior to conventional LPN in terms of operation time, warm ischemia time, intraoperative blood loss, complications, as well as reduction in eGFR.
However, the heterogeneity and small sample size in our current study may hamper our meta-analysis, so more RCTs are needed to further confirm the benefits of combining LPN with 3D printing techniques for the treatment of renal tumors.

DATA AVAILABILITY STATEMENT

All datasets generated for this study are included in the article/Supplementary Material.

AUTHOR CONTRIBUTIONS

YJ, HZ, and JC designed and conceived the research. YJ and HZ searched the database and analyzed the data. YJ, HZ, JC, and ZZ wrote the draft. All authors reviewed the manuscript and approved the final manuscript.

FUNDING

This study was supported by the National Natural Science Foundation of China 81770705 (to HC).
Research Progress in Rare Earth-Doped Perovskite Manganite Oxide Nanostructures

Perovskite manganites exhibit a broad range of structural, electronic, and magnetic properties and have been widely investigated since the discovery of the colossal magnetoresistance effect in 1994. As compared to the parent perovskite manganite oxides, rare earth-doped perovskite manganite oxides with a chemical composition of LnxA1-xMnO3 (where Ln represents rare earth metal elements such as La, Pr, and Nd, and A is a divalent alkaline earth metal element such as Ca, Sr, or Ba) exhibit much more diverse electrical properties, because the rare earth doping changes the valence states of manganese, which play a core role in the transport properties. Both the technological importance and the need to understand the fundamental mechanisms behind the unusual magnetic and transport properties attract enormous attention. Nowadays, with the rapid development of electronic devices toward integration and miniaturization, the feature sizes of microelectronic devices based on rare earth-doped perovskite manganites are down-scaled into nanoscale dimensions. At the nanoscale, various finite-size effects in rare earth-doped perovskite manganite oxide nanostructures lead to further interesting novel properties of this system. In recent years, much progress has been achieved on rare earth-doped perovskite manganite oxide nanostructures after considerable experimental and theoretical efforts. This paper gives an overview of the state of the art in studies on the fabrication, structural characterization, physical properties, and functional applications of rare earth-doped perovskite manganite oxide nanostructures. Our review starts with a short introduction of the research history and the remarkable discoveries in rare earth-doped perovskite manganites. In the second part, different methods for fabricating rare earth-doped perovskite manganite oxide nanostructures are summarized. Next, the structural characterization and multifunctional properties of rare earth-doped perovskite manganite oxide nanostructures are reviewed in depth. In the following, potential applications of rare earth-doped perovskite manganite oxide nanostructures in the fields of magnetic memory devices and magnetic sensors, spintronic devices, solid oxide fuel cells, magnetic refrigeration, biomedicine, and catalysts are highlighted. Finally, this review concludes with some perspectives and challenges for future research on rare earth-doped perovskite manganite oxide nanostructures.

Introduction

Perovskite manganites refer to a family of manganese compounds with a general composition of AMnO3, where A = La, Ca, Ba, Sr, Pb, Nd, or Pr, which crystallize in the perovskite structure named after the mineral CaTiO3. Depending on the composition, they exhibit various magnetic and electric phenomena such as ferromagnetism, antiferromagnetism, and charge and orbital ordering. These properties have potential applications in the fields of sensors and spintronic devices. The early studies of perovskite manganites began in 1950, first performed by Jonker and Van Santen [1]. They found that changing the proportion of Mn4+ by introducing divalent alkaline earth metal elements (e.g., Ca, Sr, Ba) at different doping ratios into LaMnO3 could lead to changes in the Curie temperature (T_C) and the saturation magnetization.
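As a brief worked illustration of the doping-valence link stated above (a standard charge-neutrality argument, not a result taken from any particular reference cited here): with trivalent Ln3+, divalent A2+, and O2-, electrical neutrality in LnxA1-xMnO3 fixes the average formal valence q_Mn of manganese:

```latex
% Charge neutrality in Ln_x A_{1-x} MnO_3 with Ln^{3+}, A^{2+}, O^{2-}:
3x + 2(1 - x) + q_{\mathrm{Mn}} = 2 \times 3
\quad\Longrightarrow\quad
q_{\mathrm{Mn}} = 4 - x .
```

This corresponds to a formal mixed valence Mn3+ (fraction x) and Mn4+ (fraction 1 - x); for example, x = 0.7 in La0.7Sr0.3MnO3 gives 70% Mn3+ and 30% Mn4+, which is the mixed-valence state underlying the transport properties discussed in this review.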
Since then, the term "manganites" has been adopted to refer to these compounds containing trivalent as well as tetravalent manganese. One year later, Zener [2] proposed a "double exchange" (DE) mechanism to explain the unusual correlation between magnetism and electrical conduction reported by Jonker and Van Santen. Building on Zener's theoretical studies, the DE mechanism was further developed in more detail [3-5]. At the same time, experimental research was also carried out. As compared to the parent perovskite manganite oxides, rare earth-doped perovskite manganite oxides with a chemical composition of LnxA1-xMnO3 (where Ln represents rare earth metal elements such as La, Pr, and Nd, and A is a divalent alkaline earth metal element such as Ca, Sr, or Ba) exhibit much more diverse electrical properties, because the rare earth doping changes the valence states of manganese, which play a core role in the transport properties. For example, La-doped SrMnO3 (La0.7Sr0.3MnO3) is a ferromagnetic (FM) metal, whereas SrMnO3 is an antiferromagnetic (AFM) insulator. Wollan and Koehler [6] found a series of rare earth-doped perovskite manganite oxides LnxCa1-xMnO3 featuring FM and AFM properties depending upon the relative manganese ion content (Mn3+ and Mn4+). In 1994, Jin et al. [7] first reported the colossal magnetoresistance (CMR) effect in perovskite La0.67Ca0.33MnO3 thin films grown on LaAlO3 substrates by laser ablation, where a several-tesla magnetic field could induce a 1000-fold change in the resistance of the epitaxial La0.67Ca0.33MnO3 thin film. Since that time, perovskite manganites have again become the focus of great interest, and both theory and experiment have advanced further. In 1995, Millis et al. [8] pointed out that the phenomena observed experimentally, such as the sharp drop in resistivity just below T_C, cannot be accounted for by double exchange alone. Before long, Millis et al. [9] indicated that the essential physics of manganites is dominated by the interplay between the electron-phonon coupling arising from the Jahn-Teller effect [10] and the double exchange mechanism. Later, this newer theory as well as the Jahn-Teller effect were adopted and discussed [11,12]. In order to explain the novel physical transport properties more reasonably, many theoretical models have been proposed in recent years, such as the one-orbital model (which is simple but incomplete) and the two-orbital model (which is essential to explain the notorious orbital-order tendency in Mn oxides) [13]. From 1998 to 1999, Dagotto and his collaborators [14,15] developed a theory of phase separation in which phase segregation tendencies appear in manganites. Gradually, the phase separation theory was verified and recognized as the mainstream theory describing the perovskite manganese oxides [16,17]. Rare earth-doped perovskite manganite oxides belong to the group of highly correlated systems, which display a wide spectrum of novel properties, including the CMR effect, the metal-insulator (M-I) transition, electronic phase separation (EPS), and complex structural phases in their phase diagrams, due to the complex interactions among the spin, charge, orbital, and lattice degrees of freedom. Both the technological importance and the need to understand the fundamental mechanisms behind the unusual magnetic and transport properties attract enormous attention.
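To make the double exchange picture invoked above explicit, the standard Anderson-Hasegawa result (a textbook formula, quoted here for orientation rather than taken from the works cited above) relates the effective hopping amplitude of an e_g electron between neighbouring Mn sites to the angle θ_ij between their localized core spins:

```latex
t_{ij}^{\mathrm{eff}} = t\,\cos\!\left(\frac{\theta_{ij}}{2}\right),
\qquad 0 \le \theta_{ij} \le \pi .
```

Ferromagnetic alignment (θ_ij = 0) maximizes the carriers' kinetic-energy gain and hence the conductivity, while antiparallel spins (θ_ij = π) suppress hopping; this is the qualitative origin of the magnetism-conduction correlation that the DE mechanism was introduced to explain.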
Nowadays, with the rapid development of electronic devices towards integration and miniaturization, the feature sizes of microelectronic devices based on rare earth-doped perovskite manganites are down-scaled into nanoscale dimensions. At the nanoscale, various finite-size effects in rare earth-doped perovskite manganite oxide nanostructures (e.g., zero-dimensional (0D), one-dimensional (1D), and two-dimensional (2D) nanostructures) lead to further interesting novel properties of this system. In the past two decades, research on rare earth-doped perovskite manganite oxide nanostructures has achieved much progress after considerable experimental and theoretical efforts. In this paper, an overview of the state of the art in rare earth-doped perovskite manganite oxide nanostructures is presented, covering fabrication, structural characterization, properties, and functional applications. Owing to the tremendous research efforts and space limitations, it would be impossible to provide a complete overview of all the existing topical literature, and we therefore limit ourselves to selected but representative results. Wherever possible, readers are referred to the review articles, books, and/or chapters in which selected sub-topics on rare earth-doped perovskite manganite oxide nanostructures are discussed in full detail. Also, this review article seeks to present the topic not only from the viewpoint of fabrication methods but also tries to motivate interest in these special compounds from the perspective of structural characterization, physical properties, and functional applications in the fields of microelectronic, magnetic, and spintronic devices, solid oxide fuel cells, magnetic refrigeration, biomedicine, and catalysts. This overview ends with some perspectives and challenges for future research on rare earth-doped perovskite manganite oxide nanostructures.

Synthesis Methods of Rare Earth-Doped Perovskite Manganite Oxide Nanostructures

Rare Earth-Doped Perovskite Manganite Oxide Nanoparticles

Molten Salt Synthesis

The molten salt synthesis (MSS) method is a simple, versatile, and environmentally friendly approach, which is widely used to synthesize high-purity, nanoscale inorganic oxides with controllable compositions and morphologies. In this approach, an inorganic molten salt serves as the reaction medium to enhance the reaction rate and to reduce the reaction temperature of the reactant oxides [18]. Owing to the short diffusion distances and large mobilities of the reactant oxides in the molten salts, the whole solid-state reaction is easily carried out at moderate temperatures (600-800°C) in a short dwell time (less than one hour). Besides the low formation temperature, molten salts also help to stabilize the specific morphology of the final products. In addition, the morphology of the final products can be well controlled by adjusting the MSS processing parameters (e.g., the types and quantities of the molten salts used, the different reactant oxides, the annealing temperature and dwell time, and the heating/cooling rates). In recent years, the MSS method has been successfully used to synthesize rare earth-doped perovskite manganite oxide nanoparticles. For example, Luo et al. [19] synthesized La0.7Sr0.3MnO3 (LSMO) nanoparticles, where the corresponding metal nitrates, including Mn(NO3)2, were used as starting materials and KNO3 was used as the molten salt.
By controlling the molar ratio of KNO3 to metal nitrates and the reaction temperature, they obtained LSMO particles with average grain sizes modulated from 20 to 50 nm. A significantly enhanced magnetoresistance was observed in these nanosized LSMO powders, especially at low temperature. Tian et al. [20] developed a facile molten salt synthetic route to synthesize La1-xSrxMnO3 (x = 0, 0.3, 0.5, 0.7) nanoparticles, where the eutectic NaNO3-KNO3 mixture was used as the molten salt and the nitrates of La, Mn, and Sr were used as reagents. The average grain sizes of the La1-xSrxMnO3 (x = 0, 0.3, 0.5, 0.7) particles were about 20, 20, 19, and 25 nm, respectively. Later, by the same method, Tian et al. [21] also synthesized La0.67Sr0.33MnO2.91 nanoparticles with particle sizes in the range of 20-60 nm. Xia et al. [22] also synthesized single-crystalline La1-xCaxMnO3 (LCMO with x = 0.3 and 0.5) nanoparticles by the MSS method, where the eutectic NaNO3-KNO3 mixture was used as the molten salt. By using NaNO2 as the molten salt, Kačenka et al. [23] synthesized La1-xSrxMnO3 (x = 0.18-0.37) nanoparticles, which were rather well separated as compared with those synthesized by the sol-gel route. Similarly, a series of single-phase La1-xSrxMnO3 (x = 0.25-0.47) nanoparticles with an average size of ~50 nm were also synthesized [24].

Mechanochemical Processing

As an effective, economical, and versatile way of synthesizing ultrafine powders, mechanochemical processing (MCP) makes use of chemical reactions activated mechanically by high-energy ball milling. Muroi et al. [25] carried out the pioneering work on the synthesis of perovskite manganites by MCP, where the starting materials were LaCl3, CaCl2, and MnCl2, and Na2CO3 was used as the molten salt. They were mixed in an appropriate ratio and reacted chemically to form La0.7Ca0.3MnO3 powders with particle sizes in the range of 20 nm-1.0 μm. Following a similar method, Spasojevic et al. [26] synthesized La0.7Ca0.3MnO3 nanoparticles with an average size of 9 nm by high-energy ball milling in a single-step process. By the mechanical alloying method, Li et al. [27] also synthesized La2/3Ca1/3MnO3 powders with a grain size of ~18 nm. In other work, Manh's group carried out a series of studies on the synthesis of La0.7Ca0.3MnO3 nanoparticles by reactive milling methods [28-32]. They found that the as-synthesized La0.7Ca0.3MnO3 nanoparticles exhibited superparamagnetic behavior with a blocking temperature that was reduced as the milling time increased from 8 to 16 h [28]. Besides the La0.7Ca0.3MnO3 nanoparticles, La0.7Sr0.3MnO3 nanoparticles were also synthesized by reactive milling methods under different milling times [30,31]. Recently, La0.7Ca0.3MnO3 nanoparticles with particle sizes of 21-43 nm were also synthesized by reactive milling and thermal processing methods [32].

Wet Chemical Routes

Sol-Gel Process

The sol-gel process is a popular method for the synthesis of multicomponent metal oxides such as perovskite oxide materials. This process involves the formation of a sol by dissolving metal alkoxide, metal-organic, or inorganic metal salt precursors in a suitable solvent, subsequently drying the gel, followed by calcination and sintering at high temperatures to form perovskite oxide materials. Ravi et al.
[33] used a modified sol-gel method to synthesize LSMO nanoparticles, where oxalic acid was used as the chelating agent, oleic acid as the surfactant in a polyacrylic acid matrix, and metal nitrates as the starting materials. The xerogel was heated at 100°C and dried in atmosphere to obtain powders. These powders were then ground and annealed at temperatures from 500 to 800°C for 4 h to obtain LSMO nanoparticles with different particle sizes. Similarly, Pr1/2Sr1/2MnO3 [34], La0.6Pb0.4MnO3 [35], Nd0.5Sr0.5MnO3 [36], La1-xCaxMnO3 [37], Ln0.67Sr0.33MnO3 (Ln = La, Pr, and Nd) [38], and Pr-doped La0.67Ca0.33MnO3 nanoparticles [39] were also synthesized by this method. Their particle sizes can be well controlled by the annealing temperatures. Sarkar et al. [40] adopted a sol-gel-based polymeric precursor (polyol) route to synthesize Pr0.5Ca0.5MnO3 nanoparticles with particle sizes down to 10 nm. In their work, the polymer ethylene glycol was used to form a close network of metal ions in the precursor solution, which assists the reaction and enables phase formation at relatively low temperatures.

Co-precipitation Method

The co-precipitation process involves the separation of a solid containing various ionic species from a solution phase. It is a very rare situation for a quantitative and simultaneous precipitation of all the cations to occur without segregation of any particular constituents in the precipitates, so as to form a completely mixed-metal precursor. This results from the different solubilities of the various precipitating phases, especially in the case of a solution containing more than one metal ion. Normally, this problem can be mitigated by introducing precipitating agents (such as oxalates, tartrates, and citrates) that render the cations insoluble. Dyakonov et al. [41] synthesized (La0.7Sr0.3)0.9Mn1.1O3 manganite nanoparticles by this method, where a mixture of stoichiometric amounts of high-purity Mn3O4, La2O3, and SrCO3 powders was dissolved in diluted nitric acid. This solution was evaporated and dried, and then fired at 500°C to decompose the nitrates. The dry remainder was thoroughly ground again and annealed at temperatures from 800 to 950°C for 20 h in air, followed by slow cooling down to room temperature. The resulting product was repeatedly ground, and nanopowders with average particle sizes of 40, 75, and 100 nm were obtained. Pang et al. [42] also synthesized La0.7Sr0.3MnO3 nanoparticles by a sonication-assisted co-precipitation method. Similarly, La0.5Ca0.5MnO3 nanopowders with different average sizes (13, 18, and 26 nm) were obtained after annealing at 700, 800, and 900°C, respectively [43]. By using an improved chemical co-precipitation method, Zi et al. [44] synthesized La0.7Sr0.3MnO3 nanoparticles with particle sizes in the range of 50-200 nm.

(Microwave-)Hydrothermal Process

The hydrothermal process involves heating an aqueous suspension of insoluble salts in an autoclave at a moderate temperature and pressure so that the crystallization of a desired phase takes place. Hydrothermal synthesis is a powerful method for the preparation of very fine and homogeneous perovskite powders with a narrow size distribution and spherical morphology. Sin et al. [45] reported the synthesis of single-crystalline La1-xSrxMnO3 nanoparticles by a hydrothermal route in the presence of the surfactant cetyltrimethylammonium bromide (CTAB).
Analytical-grade KMnO4, MnCl2·4H2O, LaCl3·7H2O, and SrCl2·6H2O were used as starting materials. The chemical reactions were carried out in 10 ml Teflon-lined stainless steel autoclaves, where added KOH maintained a proper alkalinity. The CTAB powder was then mixed with the above solution containing the metal ions and agitated vigorously to obtain a homogeneous black solution. The reaction mixture was placed in the autoclaves and heated at 240°C under autogenous pressure for 1 day. The obtained product was filtered off and washed with ethanol and deionized water to remove the residual CTAB, potassium ions, and chloride ions. The final product was dried at 80°C for 2 h to yield a small quantity of black powder. Urban et al. [46] also synthesized single-crystalline La1-xBaxMnO3 (x = 0.3, 0.5, and 0.6) nanocubes with sizes of 50-100 nm. Deng et al. [47] reported the synthesis of La1-xSrxMO3-δ (M = Co, Mn; x = 0, 0.4) particles by using a modified strategy of citric acid coupled with hydrothermal treatment [48]. They found that Sr doping led to a decrease in the amount of over-stoichiometric oxygen and also caused the Mn4+ concentration to increase, consequently improving the redox ability of the catalysts. Microwave-hydrothermal (M-H) synthesis is a modified approach that incorporates microwave heating techniques into the hydrothermal synthesis procedure. Microwave heating can greatly increase the reaction and crystallization rates and enhance fabrication efficiency. Recently, this method has been used to synthesize rare earth-doped perovskite manganite oxide nanostructures. Ifrah et al. [49] reported the microwave-assisted hydrothermal synthesis of La0.8Ag0.2MnO3+δ nanoparticles, which were homogeneous with a crystallite size of 70 nm. Moreover, the La0.8Ag0.2MnO3+δ nanoparticles were excellent for methane catalytic combustion. Anwar et al. [50] reported the microwave-assisted hydrothermal synthesis of La0.67Sr0.33MnO3 nanoparticles, which had a rod-like morphology with an average crystallite size of 11 nm.

Pyrophoric Reaction Process

The pyrophoric reaction process involves thermolysis of aqueous precursor solutions of coordinated metal compounds of organic amines and acids via the formation of mesoporous carbon precursors and their calcination at high temperatures (800°C). Its principle is to atomistically disperse the complex metal ions in the polymeric network provided by an organic coordinating agent, i.e., triethanolamine, during the pyrolysis of the excess reagents. During the pyrolysis of the precursor solution, the metal ions or their salts form nanoclusters, which are embedded in the resulting matrix of mesoporous carbon. Slow volatilization of the mesoporous carbon in the precursor material through low-temperature (500-800°C) air oxidation, aided by the catalytic effect of in situ metal ions, favors the formation of metal-oxide nanocrystals. The advantages of this method in preparing oxide nanoparticles are the high purity of the products, small particle sizes with a narrow particle size distribution, good compositional control, and chemical homogeneity of the final products. The authors of [57] reported the synthesis of a series of Sr-doped lanthanum manganites by a simple solution combustion technique. La0.6Sr0.4MnO3 nanoparticles with different particle sizes were also synthesized by the nitrate-complex auto-ignition method [58].
Thermal Decomposition Synthesis

Thermal decomposition synthesis is a fast, simple, and cost-effective route for the preparation of metal oxide and complex oxide nanoparticles. Monodisperse magnetic nanocrystals with smaller sizes can essentially be synthesized through the thermal decomposition of organometallic compounds in high-boiling organic solvents containing stabilizing surfactants. In principle, the ratios of the starting reagents, including the organometallic compounds, surfactant, and solvent, are the decisive parameters for the control of the size and morphology of the magnetic nanoparticles. The reaction temperature and time as well as the aging period may also be crucial for the precise control of size and morphology [59]. The method is simple and convenient in operation and low in cost with a high direct yield; all volatile components volatilize, avoiding the problem of carbon impurities. Recently, Huang et al. [60] synthesized La0.7Sr0.3MnO3 particles via the thermal decomposition of metal complexes by using ethylenediaminetetraacetic acid as a complexing agent. Daengsakul's group [61-63] also synthesized La1-xSrxMnO3 nanoparticles via the thermal decomposition method by using acetate salts of La, Sr, and Mn as starting materials. To control the sizes of the La1-xSrxMnO3 nanoparticles, thermal decomposition of the precursors was carried out at different temperatures. Similarly, La1-xSrxMnO3 nanoparticles (0 ≤ x ≤ 0.5) were synthesized via a simple thermal decomposition method by using acetate salts of La, Sr, and Mn as starting materials in aqueous solution [62]. All the prepared La1-xSrxMnO3 (x ≤ 0.3) nanoparticles had a perovskite structure, with a transformation from cubic to rhombohedral when the thermal decomposition temperature was over 900°C, while the others retained the cubic structure.

Other Methods

Moradi et al. [64] reported the synthesis of La0.8Sr0.2MnO3 nanoparticles with different particle sizes by a microwave irradiation process. Hintze et al. [65] prepared La1-xSrxMnO3 nanoparticles via a reverse micelle microemulsion based on CTAB as a surfactant.

Preparation Methods for 1D Rare Earth-Doped Perovskite Manganite Oxide Nanostructures

Recently, 1D perovskite manganite nanostructures such as nanowires, nanorods, nanotubes, nanofibers, and nanobelts have received much attention due to their unique features as compared with other low-dimensional systems such as 0D perovskite manganite nanostructures (or quantum dots) and 2D perovskite nanostructures (or quantum wells). The two-dimensional quantum confinement, combined with one unconfined direction for the transport of carriers, allows 1D perovskite manganite nanostructures to exhibit novel electrical transport and magnetic properties that differ significantly from those of their polycrystalline counterparts, owing to the nanosized dimensions. Besides, they also offer a good system in which to investigate intrinsic size effects on physical properties. Understanding these behaviors at the nanoscale is important for developing a new generation of revolutionary electronic nanodevices. However, there are numerous challenges in the fabrication and synthesis of these nanostructures with well-controlled dimensions, uniform sizes, phase purity, and homogeneous chemical compositions.
Since structural control is the key step in controlling properties and device performance, many physical techniques and chemical synthesis approaches have recently been developed to understand and thereby control the nucleation and growth processes. In the past decade, significant progress has been made in the synthesis of 1D rare earth-doped perovskite manganite oxide nanostructures. The most commonly adopted techniques for realizing 1D rare earth-doped perovskite manganite oxide nanostructures are "bottom-up" routes (such as template-based synthesis, hydro/solvothermal synthesis, molten salt synthesis, solution-based metal-organic decomposition, and electrospinning) and "top-down" approaches (such as focused ion beam (FIB) milling and nanoimprint lithography (NIL) techniques). Basically, the synthesis routes to 1D rare earth-doped perovskite manganite oxide nanostructures can be divided into two categories: (i) template-free synthesis and (ii) template-assisted synthesis, which are briefly delineated in the following.

Template-Free Synthesis

To date, several template-free methods such as hydro/solvothermal synthesis, the MSS method, and the electrospinning process have been used to synthesize 1D rare earth-doped perovskite manganite oxide nanostructures. For example, single-crystalline perovskite manganite La0.5Ca0.5MnO3 nanowires with an orthorhombic structure were synthesized by a hydrothermal method [66]. These nanowires grew along the [100] direction and had a uniform diameter (~80 nm) with lengths ranging from several to several tens of micrometers. Similarly, single-crystalline La0.5Sr0.5MnO3, La0.5Ba0.5MnO3, and Pr0.5Ca0.5MnO3 nanowires with a cubic structure were also synthesized by the hydrothermal method [67-69]. In the Pr0.5Ca0.5MnO3 nanowires, the charge-ordering transition was suppressed and a ferromagnetic phase was observed, whereas the antiferromagnetic transition disappeared [69]. Datta et al. [70] also synthesized single-crystalline La0.5Sr0.5MnO3 nanowires with a diameter of ~50 nm and a length of up to 10.0 μm. It was found that these La0.5Sr0.5MnO3 nanowires had a ferromagnetic-paramagnetic (FM-PM) transition temperature (Curie temperature, T_C) at around 325 K, close to the bulk value (~330 K) of the single crystal. This indicates that the functional behavior is retained even after the diameter of the nanowires is reduced down to 45 nm. Electrical transport measurements on a single nanowire demonstrate that the nanowires exhibit insulating behavior within the measured temperature range from 5 to 310 K, which is similar to the bulk system. As a simple, one-step, and effective method, the electrospinning technique is also used to synthesize inorganic and hybrid compound nanofibers [71,72]. In addition, the fiber sizes can be easily controlled by changing the electrospinning parameters, such as the applied potential, precursor concentrations, and the viscosity and flow rate of the solution [73,74]. Good examples are the La0.67Sr0.33MnO3 nanowires with diameters in the range of 80-300 nm and lengths of 200 μm synthesized by Jugdersuren et al. [75] and the La0.75Sr0.25MnO3 nanofibers synthesized by Huang et al. [76]. In addition, multicomponent LaxSr1-xCo0.1Mn0.9O3-δ (0.3 ≤ x ≤ 1) and La0.33Pr0.34Ca0.33MnO3 nanofibers have also been synthesized by the electrospinning method, which can be used as cathode materials in next-generation high-performance supercapacitors and in phase-separation nanodevices, respectively [77,78].
Rare earth-doped perovskite manganite oxide nanorods have also been synthesized by template-free methods such as hydrothermal synthesis. For example, La0.65Sr0.3MnO3 nanorods were successfully synthesized through a simple hydrothermal reaction followed by calcination at 850°C for 2 h in air. Small nanorods with diameters in the range of 80-120 nm tend to connect with each other, forming long rods with lengths of a few hundred nanometers to a few microns [79]. Nano-sized La0.7Ca0.3MnO3 manganites with rod-like morphologies were also obtained via the hydrothermal method in the presence of two mineralizers, sodium hydroxide (NaOH) and potassium hydroxide (KOH), at different alkalinity conditions (10, 15, and 20 M) [80].

Template-Assisted Methods

The template-assisted method uses pre-existing 1D nanostructures (e.g., nanoporous silicon, polycarbonate membranes, and anodic aluminium oxide (AAO) membranes) as templates, which are filled with suitable polymeric precursors. The solution contained within the template is heat-treated to form the perovskite manganite oxide material, and the template is subsequently removed by chemical etching or calcination. The synthesis of 1D perovskite manganite oxide nanostructures through the template-assisted method offers the following advantages: (a) the structure of the nanoarrays follows the structure of the template, (b) the channels of the template control the dimensions of the materials, (c) the pore walls of the template prevent aggregation of the material, and (d) large amounts of nanowires or nanotubes can be mass-produced. Among the commonly used template-assisted methods, the sol-gel template method combined with AAO templates is the most popular one, and it has been widely used to fabricate highly ordered perovskite manganite oxide nanostructures such as La0.8Ca0.2MnO3 nanowires with a nearly uniform diameter of about 30 nm [81] and an ordered array of La0.67Ca0.33MnO3 nanowires with diameters of 60-70 nm and lengths of tens of microns [82]. Following the success of this method, perovskite oxide nanowires of La0.6Sr0.4CoO3 and La0.825Sr0.175MnO3 with a diameter of 50 nm and lengths of up to tens of microns were also synthesized with a polycrystalline perovskite structure [83]. Ordered arrays of La0.67Sr0.33MnO3 nanowires with diameters of 60-70 nm and lengths of up to tens of microns were prepared using a simple sol-gel process combined with nanoporous alumina as the template [84]. Optical lithography has also been used to fabricate (La5/8-0.3Pr0.3)Ca3/8MnO3 (LPCMO) wires starting from a single-crystalline LPCMO film epitaxially grown on a LaAlO3 (100) substrate [85]. As the width of the wires is decreased, the resistivity of the LPCMO wires exhibits giant and ultrasharp steps upon varying the temperature and magnetic field in the vicinity of the M-I transition. The origin of the ultrasharp transitions can be ascribed to the effect of spatial confinement on percolative transport in manganites. Han et al. [86] fabricated MgO/La0.67Ca0.33MnO3 core-shell nanowires with an inner MgO core about 20 nm in diameter and a La0.67Ca0.33MnO3 shell layer around 10 nm in thickness. Here, the vertically aligned single-crystalline MgO nanowires act as excellent templates for the epitaxial deposition of the desired transition metal oxides and lead to high-quality core-shell nanowires.
Besides perovskite manganite oxide nanowires, perovskite manganite oxide nanotubes have also been fabricated by using a sol-gel template-based method. Curiale et al. [87] synthesized perovskite rare earth manganite oxide nanotubes of several compositions, including La0.325Pr0.300Ca0.375MnO3. The walls of these nanotubes are composed of magnetic nanograins whose sizes are less than the critical size for multidomain formation in manganites. As a consequence, each particle that constitutes the nanotube walls is a single magnetic domain. Highly ordered perovskite manganite La2/3Ca1/3MnO3 nanotube arrays (with a uniform diameter of 80 nm) were also successfully synthesized by a simple and rapid process combining AAO template-assisted synthesis with microwave irradiation [88]. This method offers a quick hands-on route to produce nanotube arrays at relatively low temperatures. Rare earth manganese oxide nanotubes with a nominal composition of La0.325Pr0.30Ca0.375MnO3 (800 nm external diameter, 4 μm length, and wall thickness below 100 nm) were synthesized by pore wetting of porous polycarbonate templates with the liquid precursor, followed by microwave irradiation and a further calcination at 800°C (a two-stage thermal treatment) [89]. The walls of these nanotubes were found to be formed by small crystals of approximately 20 nm. Perovskite La0.59Ca0.41CoO3 nanotubes prepared by a sol-gel template method can be used as catalysts in the air electrode for oxygen evolution, demonstrating superior catalytic activity and durability in comparison with electrodes made of nanoparticles [90]. This indicates a promising application of La0.59Ca0.41CoO3 nanotubes as electrocatalysts for air electrodes in fuel cells and rechargeable metal-air batteries. Perovskite Sm0.6Sr0.4MnO3 nanotubes with a diameter of 200 nm were also prepared by a sol-gel template method; their walls are composed of nanoparticles with a diameter of 25 nm [91]. However, in these processes, the templates are usually dipped into the sols directly, with capillary action as the only driving force. In the case of a higher-concentration sol, filling the pores becomes much more difficult, especially for templates with small pore diameters, while a sol with a lower concentration usually results in serious shrinkage and cracking of the porous templates during the annealing process. Therefore, the synthesis of rare earth-doped perovskite manganite nanotubes of high crystalline quality by the template-assisted method remains challenging.

Synthesis Methods for 2D Rare Earth-Doped Perovskite Manganite Oxide Nanostructures

2D rare earth-doped perovskite manganite oxide nanostructures include perovskite manganite oxide thin films, nanodot arrays, nanosheets, nanoplates, and nanowalls, which exhibit interesting physical properties due to the complex interplay of spin, charge, orbital, and lattice degrees of freedom. They have promising applications in the fields of high-density memory and storage, sensors, and spintronic devices. Therefore, in the past few years, several methods have been developed to fabricate 2D rare earth-doped perovskite manganite oxide nanostructures [92-94].
For clarity, this section is divided into three subsections: current work on rare earth-doped perovskite manganite oxide thin films and/or multilayers, 2D rare earth-doped perovskite manganite oxide nanostructures based on planar structures, and rare earth-doped perovskite manganite oxide nanosheets.

Rare Earth-Doped Perovskite Manganite Oxide Thin Films or Multilayers

The growth of rare earth-doped perovskite manganite oxide thin films or multilayers proceeds by converting the starting materials into atoms, molecules, or ions in a gaseous state, which are then deposited onto the surface of a clean substrate. The methods used to convert the starting materials into atomic, molecular, or ionized states are diverse, and include physical vapor deposition (PVD) methods such as pulsed laser deposition (PLD), vacuum vapor deposition, and RF magnetron sputtering, chemical methods such as chemical solution deposition (CSD), chemical vapor deposition (CVD), and metalorganic chemical vapor deposition (MOCVD), as well as molecular beam epitaxy (MBE). In the following sections, the most widely used techniques, namely PLD, CSD, CVD and MOCVD, and MBE, will be briefly introduced.

Pulsed Laser Deposition

PLD is a thin film deposition technique in which a thin film is grown by the ablation of one or more targets illuminated by a focused pulsed laser beam [95]. In this method, a high-power pulsed laser beam is focused inside a vacuum chamber to strike a target of the material that is to be deposited. The PLD process can generally be divided into the following four stages [96]: the interaction of the laser radiation with the target, the dynamics of the ablated materials, the deposition of the ablated materials onto the substrate, and the nucleation and growth of a thin film on the substrate surface. PLD has several attractive features, including the stoichiometric transfer of material from the target, the generation of energetic species, the hyperthermal reaction between the ablated cations and molecular oxygen in the ablation plasma, and compatibility with background pressures ranging from ultra-high vacuum (UHV) to 100 Pa. Among these, the most characteristic feature of the PLD process is the ability to realize a stoichiometric transfer of the ablated material from a multi-cation target for many materials, achieving a film composition that is almost identical to that of the target, even when the target has a complex stoichiometry. Moreover, the ability to easily vary the deposition rate is one of the principal advantages of PLD over other physical vapor deposition methods such as sputtering. By controlling the growth conditions (e.g., the substrate temperature, chamber pressure, laser fluence, and target-to-substrate distance), many perovskite manganite oxide thin films or multilayers can be grown for high-performance electrical, magnetic, and optical devices. For example, Lawler et al. [97] grew La 1-x Ca x MnO 3 thin films by PLD, which were ferromagnetic for 0.2 ≤ x ≤ 0.5 with T C ≈ 250 K. Harzheim et al. [98] also grew La 0.66 Ba 0.33 MnO 3 films (with thicknesses ranging from 5 to 250 nm) by PLD; their CMR effects depend upon the thickness of the epitaxial thin films deposited on MgO (100) and SrTiO 3 (STO) (100). A giant magnetoresistance near room temperature was observed in ferromagnetic La 1-x Sr x MnO 3 films (0.16 ≤ x ≤ 0.33) grown on (100) SrTiO 3 substrates by PLD [99].
Atomically defined epitaxy of La 0.6 Sr 0.4 MnO 3 thin films with an MnO 2 atomic layer as the terminating layer was also achieved by the PLD method. A film as thin as 4 nm still shows a clear magnetic transition at T C = 240 K, semimetallic conduction below T C , and a novel magnetoresistive behavior down to the lowest temperature. Other rare earth-doped perovskite manganite oxide thin films such as La 0.6 Pb 0.4 MnO 3 [100], Nd 0.7 Sr 0.3 MnO z [101], Sm 1-x Sr x MnO 3 [102], and Pr 0.5 Ca 0.5 MnO 3 [103] were also deposited in situ at different temperatures and oxygen partial pressures by the PLD process. To check the effects of strain in charge-ordered epitaxial Pr 1-x Ca x MnO 3 (x = 0.5, 0.6) thin films deposited on LaAlO 3 (LAO) and SrTiO 3 (STO) substrates, Haghiri-Gosnet et al. [104] carried out Raman studies of Pr 1-x Ca x MnO 3 films with different thicknesses. They found that the A g (2) mode (related to the tilting angle of the MnO 6 octahedra) was highly sensitive to local changes and distortions in the lattice caused by variations in temperature, doping, and epitaxial strain. Dhakal et al. [105] performed the epitaxial growth of (La 1-y Pr y ) 0.67 Ca 0.33 MnO 3 (LPCMO) (with y = 0.4, 0.5, and 0.6) thin films on NdGaO 3 (NGO) (110) and STO (100) substrates by PLD, and the effect of spatial confinement on EPS in La 0.325 Pr 0.3 Ca 0.375 MnO 3 single-crystalline disks with diameters in the range of 500 nm-20 μm (fabricated from epitaxial LPCMO thin films by electron beam lithography) was investigated by Shao et al. [106]. It was found that the EPS state remains the ground state in disks with diameters of 800 nm or larger, whereas it vanishes in the 500-nm-diameter disks, whose size is distinctly smaller than the characteristic length scale of the EPS domains. In the 500-nm-diameter disks, only the ferromagnetic phase was observed at all temperatures below the Curie temperature T C , indicating that the system was in a single-phase state rather than an EPS state. Kurij et al. [107] reported all-oxide magnetic tunnel junctions with a semiconducting barrier, formed by half-metallic ferromagnetic La 0.7 Sr 0.3 MnO 3 (20 nm) electrodes, grown in situ by pulsed laser deposition on TiO 2 single-terminated, (100)-oriented STO substrates. The Nb:STO barrier thickness in the junctions varied from 1.8 to 3.0 nm, and an additional 10-nm-thick La 0.7 Sr 0.3 MnO 3 layer helped to avoid Ru diffusion into the barrier. It was found that tunnel junctions with an Nb:STO barrier exhibit an enhanced quality with a reduced number of defects, resulting in improved reproducibility, large TMR ratios between 100 and 350% at temperatures of 20 to 100 K, and also a three-orders-of-magnitude improvement in the low-frequency noise level. These results open the way to all-oxide sensors for magnetometry applications. Xu et al. [108] reported the epitaxial growth of La 0.7 Sr 0.3 MnO 3 /SrRu 1-x Ti x O 3 (SR 1-x T x O) superlattices on (001)-oriented (LaAlO 3 ) 0.3 (SrAl 0.5 Ta 0.5 O 3 ) 0.7 (LSAT) and (001)-oriented NGO single crystal substrates by PLD. Good reviews on the epitaxial growth of perovskite oxide thin films and superlattices can be found in the literature [92][93][94].

Chemical Solution Deposition

CSD is also known as solution growth, controlled or arrested precipitation, etc.
Chemical deposition of perovskite thin films results from a moderately slow chemical reaction that leads to the formation of a thin solid layer on the immersed substrate surface at the expense of chemical reactions between the aqueous precursor solutions [109][110][111]. In this method, when cationic and anionic solutions are mixed together and the ionic product exceeds or equals the solubility product, precipitation occurs as ions combine on the substrate and in the solution to form nuclei. Perovskite manganite oxide thin films can be grown on either metallic or nonmetallic substrates by dipping them in appropriate solutions of metal salts without the application of any electric field. Deposition may occur by a homogeneous chemical reaction, usually the reduction of metal ions, in a solution by a reducing agent. The growth rate and the degree of crystallinity depend upon the temperature of the solution. This method has many advantages, such as large-area thin film deposition, deposition at low temperature, and avoidance of oxidation or corrosion of metallic substrates [112]. To date, many perovskite manganite oxide thin films or multilayers have been synthesized by the CSD method. Hasenkox et al. [113] reported a flexible CSD method for the preparation of magnetoresistive La 1-x (Ca,Sr) x MnO 3 thin films based completely on metal propionates. Tanaka et al. [114] also grew (La,Sr)MnO 3 thin films on STO (100) single crystal substrates by the CSD method. Solanki et al. [115] measured the transport and magnetotransport properties of La 0.7 Pb 0.3 MnO 3 thin films grown on single crystal LAO (100) substrates by the CSD technique. The structural, surface, and electrical properties of La 0.7 Ca 0.3 MnO 3 and La 0.7 Sr 0.3 MnO 3 thin films deposited on (100)-oriented LAO single crystal substrates by the CSD technique were also investigated [116,117]. Pr-doped La 0.8-x Pr 0.2 Sr x MnO 3 (x = 0.1, 0.2, and 0.3) thin films were also grown on STO (100) single crystal substrates by the CSD method [118]. Details about the growth of perovskite manganite oxide thin films by the CSD method can be found in the reviews by Schwartz [111] and Zhang et al. [119].

CVD and MOCVD

CVD is one of the most popular routes to synthesize perovskite oxide functional nanomaterials. It is often used to prepare high-quality, high-performance thin films on large-area wafers or complex patterned substrates. The key difference from CSD is that instead of solutions, gaseous precursors are deposited onto the substrate. Thus, CVD requires precursors with high vapor pressure, and often the substrate must be heated to a particular temperature to facilitate the deposition reaction as well as the motion of adatoms [120]. In the CVD process, the film composition and structure are rather sensitive to the substrate temperature, the precursor delivery ratio, and the vaporizer temperature. CVD processes have the advantages of high deposition rate and low deposition temperature. Compared with the CSD process, they offer much better control over the morphology, crystal structure, and orientation, and as a result are often used to prepare epitaxial perovskite oxide thin films [121][122][123]. Herrero et al. [124] reported the growth of perovskite manganite La 1-x A x MnO 3 (A = Ca, Sr) thin films by a modified CVD process.
When metal-organic compounds are used as precursors, the process is generally referred to as MOCVD, a popular CVD variant commonly used in Si technologies and electronic device fabrication for the synthesis of thin films and coatings. This technique offers several potential advantages over other physical deposition processes, such as (i) a high degree of control over stoichiometry, crystallinity, and uniformity; (ii) versatile composition control; and (iii) the ability to coat complex shapes and large areas. Depending upon the processing conditions, different MOCVD variants are available, for example, low-pressure MOCVD, atmospheric pressure MOCVD, direct liquid injection MOCVD, and plasma-enhanced MOCVD [125]. In direct liquid injection MOCVD, microdroplets of the precursor solution, controlled by a computer, are injected into the evaporator system. These droplets are produced by a high-speed electro-valve. The frequency and duration of the injection can be adjusted so as to achieve the appropriate growth rate for each deposited material. Therefore, the final film stoichiometry can be precisely controlled by adjusting the respective concentrations of the precursors in the liquid precursor source. To date, MOCVD has been successfully used for the growth of perovskite manganite oxide thin films or multilayers such as La 1-x Sr x MnO 3 [126] and Pr 1-x Ca x MnO 3 [127], and perovskite oxide superlattices such as (La 0.7 Sr 0.3 MnO 3 /SrTiO 3 ) 15 [128].

Molecular Beam Epitaxy

The molecular beam epitaxy (MBE) growth of thin films may be thought of as atomic spray painting, in which alternately shuttered elemental sources are employed to control the cation stoichiometry precisely, thus producing perovskite oxide thin films of exceptional quality. The flux of each atomic or molecular beam is controlled by the temperature (and thus vapor pressure) of the effusion cell in which each species is contained. The duration of spray is individually controlled for each beam by shutters, which control not only the open time (and thus dose) but also the sequence in which species reach the growth surface. By controlling the shutters and the temperature of the evaporant (which control dose and flux, respectively), the layering sequence of the desired structure can be customized. This is the premier technique for the synthesis of layered oxides with layering control down to the atomic level [94]. Reutler et al. [129] reported the growth of La 2/3 Ca 1/3 MnO 3 films by laser molecular beam epitaxy on (001)-oriented STO and NGO single-crystal substrates. The film thickness was 200 nm for the films on STO and 40 nm for the films on NGO. Werner et al. [130] reported resistance versus magnetic field measurements for a La 0.65 Sr 0.35 MnO 3 /SrTiO 3 /La 0.65 Sr 0.35 MnO 3 tunnel junction grown by MBE, which showed a large field window of extremely high TMR at low temperatures. Peng et al. [131] systematically studied the dead-layer behavior of La 0.67 Sr 0.33 MnO 3 (LSMO)/STO heterostructures grown by ozone-assisted molecular beam epitaxy (OMBE). They found that the low kinetic energy of the atomic beams could reduce extrinsic defects to the lowest level, and that the composition was easily tuned at the single-atomic-layer level. Matou et al. [132] also reported a reduction of the dead layer in MBE-grown La 0.67 Sr 0.33 MnO 3 films.
Top-Down Methods

In recent years, 2D rare earth-doped perovskite manganite oxide nanostructures based on planar structures, such as nanoplates [133] or lamellae [134] and lateral arrays of nanodots [135] or nanowires [85], have been fabricated. Different "top-down" approaches such as electron beam lithography (EBL) and nanoimprint lithography (NIL) have been used for the geometrical patterning of 2D perovskite manganite nanostructures. EBL is another nanofabrication technique in rapid development. Guo et al. [140] grew La 0.67 Ca 0.33 MnO 3 films with a thickness of ~100 nm on STO (100) substrates by the PLD technique, and fabricated La 0.67 Ca 0.33 MnO 3 microbridges with different widths (e.g., 1.5 μm, 1 μm, and 0.50 μm) via EBL. Beekman et al. [141] also grew thin La 0.7 Ca 0.3 MnO 3 films (with thicknesses ranging from 20 to 70 nm) on STO (001) substrates by DC sputtering; they then fabricated La 0.7 Ca 0.3 MnO 3 microbridges with a width of 5 μm by using EBL and Ar + etching.

Bottom-Up Methods

Besides the top-down methods, bottom-up methods such as template-assisted synthesis are also used to fabricate 2D perovskite manganite oxide nanostructures based on lateral arrays of nanodots. Template-assisted "bottom-up" synthetic approaches provide a route to 2D geometrical ordering of perovskite manganite nanostructures with narrow size distributions. Nanosphere lithography (NSL) has been demonstrated as a versatile template-based method for generating 2D perovskite manganite nanostructures [142]. In NSL, the spacing and size of the periodically arranged nanostructures can be readily controlled by using polymer spheres with different diameters and/or by changing the amount of material deposited. For example, Liu et al. [143] prepared two-dimensional oxide nanoconstriction arrays via NSL. They deposited a drop of an aqueous suspension of SiO 2 microspheres, with a diameter of 1.5 μm, onto a STO (100) substrate. These microspheres self-assembled during the drying process and finally formed a hexagonally ordered monolayer. Then, a reactive ion etching process was performed to reduce the sizes of the microspheres. Subsequently, the substrate was placed in a PLD chamber for the deposition of La 0.67 Sr 0.33 MnO 3 , after which the sample was transferred into a furnace and annealed at 750°C. After removing the microspheres, a La 0.67 Sr 0.33 MnO 3 nanoconstriction array was obtained.

Synthesis Methods for 3D Rare Earth-Doped Perovskite Manganite Oxide Nanostructures

Basically, there are two approaches for fabricating 3D perovskite-type oxide nanostructures: "bottom-up" and "top-down." "Bottom-up" processing refers to the synthesis of nanostructures starting at the atomic or molecular level. Solution-based routes (e.g., sol-gel based chemical solution deposition, templating, solution phase decomposition, and hydro/solvothermal synthesis) are the most commonly employed "bottom-up" approaches for synthesizing 3D perovskite-type oxide nanostructures (i.e., vertically aligned nanowires, rods, or tubes). "Top-down" processing, e.g., focused ion beam (FIB) milling and some lithographic methods such as NIL, consists of carving away at a bulk material to create coherently and continuously ordered nanosized structures. Recently, 3D perovskite manganite oxide nanostructures have been prepared by the 3D nano-template PLD method. The basic concept of this method is an inclined substrate deposition onto the side surfaces of a 3D nano-patterned substrate, i.e., a 3D nano-template, as schematically shown in Fig. 1 [144].
At first, template wall structures are patterned on the substrate by the NIL technique using an organic resist (blue region) (Fig. 1a). The target material, i.e., a metal oxide, is then deposited onto the side surfaces of the template patterns by PLD (Fig. 1b). After lift-off of the templates and etching of the residual bottom film (Fig. 1c, d), self-standing metal oxide nano-wall wire arrays are obtained (Fig. 1e, f). Due to the right-angle side surface, the 3D nano-template acts as a shape and position reference. The deposited material starts to grow at the side surface (interface) of the 3D nano-template while replicating its shape. Therefore, the formation of nanostructures beyond the resolution limits of top-down methods is realized. Recently, precisely size-controlled and crystalline (La 0.275 Pr 0.35 Ca 0.375 )MnO 3 nanoboxes were fabricated on a MgO (001) substrate using this method [145]. In this process (see Fig. 2), the wall width of the nanoboxes was successfully controlled in a range from 160 nm down to 30 nm by changing the deposition time, as shown in Fig. 2c. These (La 0.275 Pr 0.35 Ca 0.375 )MnO 3 nanoboxes exhibited the insulator-metal transition at a higher temperature than the corresponding film, which indicates that well-aligned, reliably prepared, and highly integrated CMR manganite 3D nanoboxes can provide a way to tune the physical properties of the CMR oxides (Fig. 2, reproduced with permission of [145]). The 3D nano-template PLD technique can thus be used to fabricate various perovskite manganite oxide nanostructures.

Introduction

Structural characterizations of rare earth-doped perovskite manganite oxide nanostructures are conducted to investigate their crystal structures, chemical compositions, and morphologies. The crystal structures are usually characterized by X-ray diffraction (XRD), Raman spectroscopy, Fourier-transform infrared spectroscopy (FTIR), field-emission scanning electron microscopy (FE-SEM), transmission electron microscopy (TEM), high-resolution TEM (HRTEM), and selected area electron diffraction (SAED). The chemical compositions are usually examined by energy dispersive X-ray spectroscopy (EDS), electron energy loss spectroscopy (EELS), and X-ray photoelectron spectroscopy (XPS). The chemical bonding and chemical structure of the prepared rare earth-doped perovskite manganite oxide nanostructures can be examined by XPS, EELS, FTIR, and Raman spectra. The morphologies are usually characterized by atomic force microscopy (AFM), scanning electron microscopy (SEM), and TEM. This section provides a brief review of the microstructural characterizations of rare earth-doped perovskite manganite oxide nanostructures.

Rare Earth-Doped Perovskite Manganite Oxide Nanoparticles

To date, many rare earth-doped perovskite manganite oxide nanoparticles have been synthesized by physical or chemical methods. Their physical and chemical properties depend upon the phase structures, morphologies, chemical compositions, and grain size distributions of the nanoparticles, as well as their thermal history during the synthesis process [146]. XRD is often used to identify the phase structure and the relative fractions of different phases in the prepared nanomaterials. In addition, some structural parameters such as particle size, lattice parameters (a, b, and c), lattice volume, and theoretical density can be derived from the XRD data.
XRD is also used to optimize the preparation conditions of rare earth-doped perovskite manganite oxide nanoparticles [147][148][149]. For example, Sayagués et al. [147] synthesized La 1-x Sr x MnO 3±δ (0 ≤ x ≤ 1) nanoparticles by a mechanochemical synthesis method under different conditions (e.g., different substitutions of La by Sr; various milling times; heat treatment at 1000°C under static air), and the XRD patterns of these samples are shown in Fig. 3 (the inset shows an enlargement of the highest maxima; reproduced with permission of [147]). Figure 3a shows the XRD patterns of the La 1-x Sr x MnO 3±δ (x = 0.25) nanoparticles synthesized with different milling times. It was clearly observed that the solid-state reaction during the mechanochemical synthesis progressed significantly after 15 min of milling, and after 30 min it was almost finished. After only 45 min, no reactant peaks were detected and the solid-state reaction seemed to be complete. To ensure full conversion, the mechanochemical synthesis of the nanoparticles was then carried out with 60 min of ball milling. Figure 3b shows the XRD patterns of the La 1-x Sr x MnO 3±δ (0 ≤ x ≤ 1) nanoparticles with different substitutions of La by Sr obtained by mechanochemical synthesis. All the nanoparticles crystallized in a single phase with pseudo-cubic symmetry and perovskite structure. The right-shift of the XRD reflections in 2θ was ascribed to the substitution of La by Sr. Figure 3c displays the XRD patterns of the La 1-x Sr x MnO 3±δ (0 ≤ x ≤ 1) nanoparticles heat treated at 1000°C under static air. Higher crystallinity and well-defined symmetry were clearly observed. Similarly, the XRD reflections shifted to smaller d-spacings with increasing Sr substitution from x = 0.0 to x = 0.75 (see the inset). In the samples with x = 0.0 and x = 0.25, the strongest XRD reflections were clearly split, demonstrating a structure very similar to La 0.95 Mn 0.95 O 3 (JCPDS No. 01-085-1838) with a rhombohedral cell (R3c space group) calculated by Van Roosmalen et al. [150]. However, in the samples with x = 0.50, 0.75, 0.80, 0.85, and 0.90, the splitting of the strongest XRD reflections was not observed, which could be ascribed either to different symmetries or to different lattice parameters with the same symmetry. The structural parameters of the synthesized samples in the La 1-x Sr x MnO 3±δ (0 ≤ x ≤ 1) system were calculated by assuming a rhombohedral symmetry or a cubic structure. The results showed a better fit when rhombohedral symmetry (R3c space group) was used for the samples with 0 ≤ x ≤ 0.90. However, when the x value is equal to 1.0 (SrMnO 3 ), another perovskite structure with hexagonal symmetry and the P6 3 /mmc space group (No. 194) was observed. It was found that the volume of the unit cell decreased with increasing x, which is due to the formation of Mn 4+ as La 3+ (136 pm) is substituted by Sr 2+ (144 pm) in the cationic subcell to keep electronic neutrality. This is consistent with the ionic radius of Mn 4+ (53 pm) being smaller than that of Mn 3+ (65 pm), and indicates that the manganese ionic radius is actually the determinant of the unit cell volume. Moreover, the appearance of the Mn 4+ ion, whose content increases with increasing strontium content, reduces the Jahn-Teller effect that is favored by the Mn 3+ cation.
Therefore, the absence of splitting of the XRD peaks at higher x values can be understood in terms of the higher symmetry of the structure. In order to investigate the changes of crystallization and symmetry in the milled samples (with pseudo-cubic symmetry) after the annealing process (rhombohedral symmetry), XRD measurements as a function of temperature from 30 to 1100°C (on heating and cooling) under air atmosphere were performed. The results are shown in Fig. 4. With increasing temperature, the crystallization process can be observed, and at 1100°C a small diffraction peak at 2θ ≈ 35° (marked with an asterisk) appears, which could be due to the formation of an orthorhombic phase [151]. As the temperature is lowered to 800°C, the small peak still exists, and below this temperature it disappears. Below 500°C, some reflections start to split (see the inset) and a small peak appears before 2θ = 40° (marked with a cross), indicating the formation of the rhombohedral phase. The above results demonstrate that the rhombohedral phase is stable at low temperature, which can be explained in terms of oxygen composition. The orthorhombic phase is stable at high temperature (1100°C), and its ability to accommodate oxygen in the structure is smaller than that of the rhombohedral one, which stabilizes below 500°C with an oxygen composition of La 0.75 Sr 0.25 MnO 3.11 . The average crystallite size (D) was calculated from the X-ray line broadening of the (110) diffraction peak using the Scherrer equation (a short numerical sketch is given below); it was about 20 nm, close to the values obtained from SEM and TEM images.

The preparation conditions (e.g., annealing temperature and time, and synthesis method) greatly affect the morphology and surface characteristics of rare earth-doped perovskite manganite oxide nanoparticles, as revealed by SEM and TEM [19,61,147,152]. Figure 5 shows representative SEM images of some milled and heated samples [147]. It was observed that all the milled samples with pseudo-cubic perovskite structure had a similar microstructure characterized by aggregates of small particles. As expected, the heated samples were composed of larger faceted particles, very similar in shape, as can be seen for the H1 and H2 samples with the same rhombohedral symmetry; however, the H8 sample with hexagonal symmetry exhibited very round particles of smaller size. Representative TEM and SAED results of the milled and heated samples are shown in Fig. 6. The TEM image of the M1 sample (x = 0.0) (shown in Fig. 6a) shows quite large particles formed in fact by agglomerated small crystallites in the nanometer size range, as evidenced by the presence of rings in the SAED pattern. All the rings can be indexed in the pseudo-cubic structure (Pm-3m). TEM images of the H1 sample (Fig. 6b) and the H3 sample (Fig. 6c) also showed the presence of aggregates, but formed by submicrometric crystallites of several hundred nanometers, as observed in the enlargements of two of these crystals. The corresponding SAED patterns were taken along the [001], [211], and [210] zone axes. All the diffraction spots can be indexed in the rhombohedral structure (R-3c). The TEM image of the H8 sample (x = 1.0) shown in Fig. 6d displays crystals of different sizes, and its SAED pattern taken along the [201] zone axis can be indexed in the hexagonal structure (P6 3 /mmc), matching well with the XRD data.
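Returning to the Scherrer estimate referenced above, a minimal numerical sketch is given below (the peak position and FWHM are assumed values for illustration, not data from [147]; instrumental and strain broadening are neglected):

```python
import numpy as np

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)).

    beta is the peak FWHM in radians; the default wavelength is Cu K-alpha.
    """
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return K * wavelength_nm / (beta * np.cos(theta))

# Assumed (110) peak position and width; with these numbers the estimate
# comes out near the ~20 nm value quoted in the text.
print(f"D ~ {scherrer_size(32.5, 0.45):.1f} nm")
```

A Williamson-Hall analysis over several reflections would be needed to separate size and strain contributions to the broadening.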
Tian et al. [20] also synthesized a series of crystalline La 1-x Sr x MnO 3 nanoparticles with an average particle size of ~20 nm and good dispersion by the MSS method. These La 1-x Sr x MnO 3 nanoparticles are well dispersed in water to form a clear solution and do not precipitate even after standing for several weeks. The chemical bonding and structural information of rare earth-doped perovskite manganite oxide nanoparticles can be revealed via FTIR and Raman spectra. For example, a main absorption band around 524 cm −1 was observed in the FTIR spectra of the La 0.7 -based manganite nanoparticles reported in [39]. The Raman peak around 224 cm −1 can be assigned as A g (2), which is related to the tilting of the MnO 6 octahedron, whereas the Raman peak around 425 cm −1 is related to the Jahn-Teller type modes of the MnO 6 octahedron [154]. The Raman peak around 680 cm −1 can be assigned as B 2g (1), which is related to the symmetric stretching vibration mode of oxygen in the MnO 6 octahedron [154]. With increasing Pr-doping concentration (x) up to x = 0.4, the Raman peak around 680 cm −1 disappeared. This was ascribed to the increased orthorhombic distortion in the LPCMO nanoparticles at high Pr-doping concentrations, leading to a much weaker symmetric stretching vibration of oxygen in the MnO 6 octahedron [39]. XPS is a surface-sensitive technique, which provides information on the surface elemental composition and surface chemistry of a material. The surface compositions of rare earth-doped perovskite manganite oxide nanoparticles can be identified via XPS [21,39,154]. For example, Fig. 7 shows the Mn 2p3/2 and O 1s XPS spectra of LaMnO 3.15 (LMO) and La 0.67 Sr 0.33 MnO 3 (LSMO) nanoparticles synthesized by the MSS method, which are effective catalysts for the combustion of volatile organic compounds [21]. It is observed in Fig. 7a that, for each sample, an asymmetrical Mn 2p3/2 peak located at 642.2 eV could be resolved into two components with binding energies of 641.5 eV and 642.9 eV, respectively. The former component can be assigned to Mn 3+ ions, whereas the latter is assigned to Mn 4+ [155,156]. After the partial substitution of Sr 2+ for La 3+ , the intensities of the O α and O γ signals decreased whereas the O β signal increased, indicating an enhancement in the amount of adsorbed oxygen species. Therefore, more structural defects such as oxygen vacancies contribute to the enhanced catalytic performance of the La 0.67 Sr 0.33 MnO 3 nanoparticles.

1D Rare Earth-Doped Perovskite Manganite Oxide Nanostructures

The exciting developments in 1D perovskite manganite nanostructures must be effectively supported by a variety of structural characterization tools, because characterization provides invaluable information on the various microstructural, crystallographic, and atomic features, which can shed light on the unique properties exhibited by these fascinating materials. XRD is used for crystal structure analysis, from which some structural parameters can be obtained. For example, Arabi et al. [80] synthesized La 0.7 Ca 0.3 MnO 3 nanorods by the hydrothermal method under different conditions (e.g., different mineralization agents such as KOH and NaOH, and various alkalinity conditions (10, 15, and 20 M)). Figure 8a shows the XRD patterns of the La 0.7 Ca 0.3 MnO 3 nanorods synthesized in the presence of the two mineralization agents (KOH and NaOH) at various concentrations, namely K10, K15, K20, N10, N15, and N20, respectively.
It was found that all six samples crystallized in an orthorhombic structure with the space group Pnma, according to the diffraction peaks. A typical Rietveld refinement analysis of the sample N10 is displayed in Fig. 8b, indicating a good agreement between the observed and calculated profiles, with no detectable secondary phase.  The FE-SEM micrographs confirmed the rod-like morphology of all the obtained samples.

Fig. 8 a Room temperature XRD patterns of La 0.7 Ca 0.3 MnO 3 manganite nanorods synthesized via the hydrothermal method with two mineralizers, namely sodium hydroxide (NaOH) and potassium hydroxide (KOH), in different alkalinity conditions (10, 15, and 20 M). b Room temperature XRD pattern (red symbols) and Rietveld profile (black line) for the sample N10. N (or K) denotes the NaOH (or KOH) mineralizer, and 10 the NaOH (or KOH) concentration. Reproduced with permission of [80]

Datta et al. [70] reported the template-free synthesis of single-crystalline La 0.5 Sr 0.5 MnO 3 nanowires by the hydrothermal method. The XRD pattern (see inset in Fig. 9a) demonstrated that these nanowires crystallized in a tetragonal structure with the space group I4/mcm. The diameters and lengths of these nanowires were about 20-50 nm and 1-10 μm, as revealed by the SEM image (Fig. 9a) and the TEM image of a single nanowire (Fig. 9b). The single-crystalline nature of the nanowires was confirmed by the SAED pattern and HRTEM image (see insets in Fig. 9b). Lattice fringes with a spacing of 0.311 nm were clearly resolved in the HRTEM image, corresponding to the interplanar distance of the (102) planes. The EDS data collected from the nanowire demonstrated that the atomic percentage ratio (La:Sr):Mn:O was approximately 1:1:3, close to the desired composition. The valence state of Mn in the nanowires was also quantitatively determined by EELS, giving about 3.5, very close to the bulk value. Similar work was also carried out to determine the Mn valence in La 0.7 Ca 0.3 MnO 3 , La 0.5 Ca 0.5 MnO 3 , and La 0.7 Sr 0.3 MnO 3 nanowires synthesized by the hydrothermal method [157]. In addition, single-crystalline perovskite manganite La 0.5 Ba 0.5 MnO 3 and La 0.5 Sr 0.5 MnO 3 nanowires were also synthesized by a hydrothermal method at low temperature [158]. They have a uniform width along their entire length, with typical widths in the range of 30-150 nm for La 0.5 Ba 0.5 MnO 3 and 50-400 nm for La 0.5 Sr 0.5 MnO 3 . These nanowires grow along the [110] direction, and their surfaces are clean without any sheathing amorphous phase. By the composite-hydroxide-mediated method, Wang et al. [159] synthesized BaMnO 3 nanorods with diameters of 20-50 nm and lengths of 150-250 nm, which belong to a hexagonal structure with lattice parameters a = 0.5699 nm and c = 0.4817 nm. By a template-assisted method, Li et al. [160] also synthesized La 0.33 Pr 0.34 Ca 0.33 MnO 3 /MgO core-shell nanowires with diameters of about tens of nanometers in two steps. Similarly, by using AAO membranes (pore size ~300 nm, thickness ~100 μm) as templates, perovskite manganite La 0.75 Ca 0.25 MnO 3 nanotubes with an average diameter of 160 nm and lengths up to tens of micrometers were fabricated by laser-induced plasma filling [161]. The XRD pattern of the synthesized La 0.75 Ca 0.25 MnO 3 nanotubes is shown in Fig. 10a, where all the diffraction peaks can be indexed perfectly to the standard monoclinic perovskite structure of bulk La 0.8 Ca 0.2 MnO 3 (JCPDS no. 44-1040), and no second phase was detectable.
This indicated that a well-crystallized perovskite-type phase was successfully transferred from the target to the nanotubes via the PLD method. The composition of the as-prepared La 0.75 Ca 0.25 MnO 3 nanotubes was determined by EDS analysis, and the result matches well with the target. A representative SEM image of the La 0.75 Ca 0.25 MnO 3 nanotube array is shown in Fig. 10b, which reveals a uniform fluffy feature with an average length of 50 μm. The cross-sectional TEM image of the La 0.75 Ca 0.25 MnO 3 nanotubes is shown in Fig. 10c, where the maximum wall thickness observed was about 20 nm. This thin-walled feature determines the poor mechanical strength of the nanotubes; hence, ultrasonic processing was avoided during the dispersion of the nanotubes. The length of the nanotubes can be controlled by the amount of deposition, from several to tens of micrometers. It is also noticed that a nanowire-like structure with a diameter of ca. 10 nm is observed in the TEM image (Fig. 10d), which may originate from either the broken walls of nanotubes or the curling of nanotubes during the annealing process. The uniform distribution of the elements in the wall of an individual nanotube was also confirmed by EDS element mapping.

2D Rare Earth-Doped Perovskite Manganite Oxide Nanostructures

Perovskite manganite oxides have been a particularly appealing hunting ground for both condensed matter physics and practical device applications due to their physical properties, such as the high degree of spin polarization, CMR, spontaneous charge/spin/orbital orderings, and so on [13,[162][163][164][165][166]. In thin films of rare earth perovskite manganites, RE 1-x M x MnO 3 (RE = rare earth, M = Ca, Sr, Ba) with mixed-valence perovskite structure, the transport properties are highly dependent upon the deposition technique, the processing conditions, and the substrate used. Among all perovskite manganite thin films, La 1-x Sr x MnO 3 has been the most widely investigated system due to its intrinsic magnetoresistance properties, electric-field tunable M-I transitions, half-metallic band structure, and the highest Curie temperature (T C = 369 K for x = 0.33). To date, several deposition methods have been used for their growth. For example, PLD and CVD are versatile techniques that can be used both for the growth of epitaxial and polycrystalline films [167,168], whereas RF magnetron sputtering and wet chemical processes are used principally for polycrystalline films [169][170][171][172]. In contrast, MBE and atomic layer deposition (ALD) are mainly used for epitaxial films and superlattice structures. For example, polycrystalline perovskite manganite La 1-x Sr x MnO 3 films with x = 0.15, 0.33, and 0.40 were deposited onto silicon (100) substrates by PLD in an 80/20 (Ar/O 2 ) atmosphere at room temperature [173]. After deposition, the films were air annealed at 900°C for 1 h to obtain the desired crystalline phase. Several groups have epitaxially grown La 0.67 Sr 0.33 MnO 3 thin films on different single-crystal substrates, including STO (cubic), LAO (pseudocubic), NGO (orthorhombic), and MgO (cubic) [174][175][176][177]. Due to the small lattice mismatch between La 0.67 Sr 0.33 MnO 3 and these substrates (except for MgO), the La 0.67 Sr 0.33 MnO 3 films are single crystalline, in perfect epitaxy with the substrate (a rough numerical estimate of these mismatches is sketched below).
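As a rough check of the mismatch comparison made above, the in-plane mismatch f = (a_sub − a_film)/a_film can be estimated from nominal (pseudo)cubic lattice parameters; the values below are approximate literature numbers quoted for illustration only:

```python
# Nominal (pseudo)cubic lattice parameters in Angstrom; illustrative values.
a_film = 3.87  # La0.67Sr0.33MnO3, pseudocubic, approximate

substrates = {"STO": 3.905, "LAO": 3.79, "NGO": 3.86, "MgO": 4.212}

for name, a_sub in substrates.items():
    f = (a_sub - a_film) / a_film * 100.0
    # MgO stands out as the large-mismatch case, consistent with the text.
    print(f"{name:>4s}: f = {f:+5.2f} %")
```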
Consistent with this small mismatch, neither interfacial dislocations nor secondary-phase inclusions are observed at the film/substrate interface, as can be further identified in the cross-sectional HRTEM image [176]. Since the lattice strain is not easily released, the La 0.67 Sr 0.33 MnO 3 films can remain strained over a considerable thickness. For example, the films are still single crystalline, without dislocations or an intermediate layer, over the whole thickness up to 120-130 nm [176,178], or the films are divided into two regions, with a strained bottom part and a relaxed top layer separated by an intrinsic interface containing a dislocation network [179]. These different microstructures can be ascribed to strongly different growth parameters, such as the deposition temperature, the target-to-substrate distance, the oxygen pressure, and the laser fluence. To make full use of the functionalities of epitaxial La 0.67 Sr 0.33 MnO 3 thin films in future devices such as sensors, data storage media, and IR detectors, the integration of the functional oxides onto conventional semiconductor substrates is essential [180,181]. However, the direct epitaxial growth of La 0.67 Sr 0.33 MnO 3 thin films and functional complex oxides on Si substrates still needs further development due to the dissimilarities of these materials in chemical reactivity, structural parameters, and thermal stability [182]. Vila-Fungueiriño et al. [183] reported the high-quality epitaxial growth of La 0.67 Sr 0.33 MnO 3 thin films on Si substrates with an epitaxial STO buffer layer by a combination of CSD and MBE methods. Figure 11 displays the STEM image of the atomic and chemical structure of the epitaxial La 0.67 Sr 0.33 MnO 3 (LSMO PAD )/STO MBE /Si heterostructure. Atomic-resolution Z-contrast images of the LSMO/STO/Si interface confirm an optimal epitaxial growth of LSMO ultra-thin films with perfect crystalline coherence on the STO/Si buffer layer. EELS measurements with atomic resolution (Figure 11, right) show that cationic intermixing is restricted to the first two unit cells, in agreement with the sharp contrast observed in the Z-contrast image. To modulate the magnetic and transport properties of epitaxial La 1-x Sr x MnO 3 thin films, superlattice structures such as (LSMO/STO) n were grown on LAO substrates by MOCVD [184]. The synchrotron XRD pattern of the (LSMO/STO) n superlattice is shown in Fig. 12a, which demonstrates well-resolved satellite peaks characteristic of the superlattice period, indicating good coherence over the stacking. Figure 12b shows a cross-sectional HRTEM image of the (LSMO/STO) n superlattice, revealing sharp interfaces between individual layers. The perfect interfaces between adjacent layers extend over a very large scale. It is observed that the T C value decreases sharply when the thickness of the LSMO layer is decreased below 4 nm (see Fig. 12c) [163]. The dependence of T C on the LSMO thickness can be understood from the 2D scaling law proposed by Fisher and Barber [185], which gives a typical two-dimensional thickness of about four monolayers (t 2D ∼ 1.5-2 nm); a hedged numerical sketch of this scaling form is given below. Besides the (LSMO/STO) n superlattices, (La 0.7 Sr 0.3 MnO 3 (LSMO)) m /(SrRu 1-x Ti x O 3 (SRTO)) n (x < 0.3) superlattices were also reported by Xu et al. [108]. For clarity, the samples are written as [m/n] N (the numbers m and n in square brackets denote the thicknesses of LSMO and SRTO in nanometers, respectively, and the subscript N denotes the number of periods, i.e., the repeat number of LSMO).
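The finite-size scaling attributed above to Fisher and Barber is often written for films as [T C (∞) − T C (t)]/T C (∞) = (t 0 /t)^λ. The sketch below merely evaluates this assumed form; T C (∞), t 0 , and λ are illustrative parameters, not fitted values from [185] or [163]:

```python
def tc_of_thickness(t_nm, tc_inf=360.0, t0_nm=1.5, lam=1.0):
    """Finite-size suppression of Tc with film thickness t (assumed parameters)."""
    return tc_inf * (1.0 - (t0_nm / t_nm) ** lam)

for t in (2.0, 4.0, 10.0, 40.0):
    # Tc collapses quickly once t approaches a few unit cells, mirroring the
    # sharp decrease below ~4 nm described in the text.
    print(f"t = {t:5.1f} nm -> Tc ~ {tc_of_thickness(t):5.1f} K")
```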
Figure 13 shows a STEM image of the [1.2/2.4] 10 sample grown on an NGO substrate, indicating the high quality of the superlattices grown on NGO. 2D perovskite manganite nanosheets have the advantage of direct implementation as potential building blocks for next-generation nanodevices due to their extended 2D network with rich electronic and magnetic properties. Recently, Sadhu and Bhattacharyya [186] synthesized pure-phase perovskite Pr 0.7 Ca 0.3 MnO 3 manganite nanosheets (PNS1) and Pr 0.51 Ca 0.49 MnO 3 manganite nanosheets (PNS2) via a "beakerless" pressure synthesis route. XRD Rietveld refinement patterns revealed that the PNS1 and PNS2 perovskite manganite nanosheets crystallized in the orthorhombic phase with the Pnma space group. The TEM image of four stacked sheets (i-iv) of the representative PNS1 sample is shown in Fig. 14a, and the SAED pattern is shown as an inset in Fig. 14a, indicating the characteristic reflections of the orthorhombic phase. Lattice fringes with a spacing of 0.272 nm are observed in the HRTEM image (Fig. 14b), corresponding to the (112) plane reflection. Figure 14c displays the FE-SEM image of PNS1 nanosheets with thicknesses of 10-14 nm. In addition, the nanosheet surface spans 500-600 nm, and on average 10-12 nanosheets remain stacked together. EDS data on ten different nanosheets provided the homogeneity profile of the samples (Fig. 14d, e). The nanosheet morphology (Fig. 14f), lattice fringes with FFT (Fig. 14g), and the EDS pattern (Fig. 14h) of the PNS2 samples were found to be similar to those of PNS1. The EDS results matched well with those of ICP-MS, the Ca 2+ /Pr 3+ atomic ratio being 30.0 ± 0.5 atom % for PNS1 and 49.0 ± 0.5 atom % for PNS2, respectively. 2D rare earth-doped perovskite manganite oxide nanostructures such as La 0.67 Ca 0.33 MnO 3 nanobridges have also been fabricated by the FIB method from epitaxial La 0.67 Ca 0.33 MnO 3 thin films [187]. Figure 15 shows schematic diagrams of the fabrication process of the La 0.67 Ca 0.33 MnO 3 nanobridges, which involves the epitaxial growth of a La 0.67 Ca 0.33 MnO 3 film by the PLD method or MOCVD, nanobridge fabrication by FIB (or EBL), and four-electrode construction for physical measurements. For physical property measurements, the electric connection between the sample and the instrument can be achieved in different ways, such as wire bonding, indium, and silver paint [188].

3D Rare Earth-Doped Perovskite Manganite Oxide Nanostructures

3D nanostructured perovskite manganite materials such as branched nanorods or nanoforests have attracted extensive research attention due to their unique 3D nature. By making full use of the vertical and horizontal dimensions, perovskite manganite oxide 3D nanostructures exhibit many fascinating physical and chemical properties due to their highly enhanced interfacial area and stability, as compared to one-dimensional (1D) nanowire arrays [193][194][195][196][197]. Recently, 3D nanostructures have been successfully constructed by interlayering La 0.67 Ca 0.33 MnO 3 (LCMO)-CeO 2 -based epitaxial vertically aligned nanocomposite thin films with pure CeO 2 (or LSMO) layers, epitaxially grown on SrTiO 3 (001) substrates via a PLD method [198]. These 3D strained framework structures combine both the lateral strain from the layered structures and the vertical strain in the vertically aligned nanocomposite, and thus achieve maximized strain tuning in LSMO. A schematic diagram illustrating the design of such 3D nanostructures is shown in Fig. 17.
These structures create 3D interconnected CeO 2 or LSMO framework microstructures within the thin films and provide a versatile tool to achieve 3D strain tuning. The structural characterizations of these 3D nanostructures are shown in Fig. 18 [198]. Clearly, the CeO 2 nanopillars with a large aspect ratio are vertically aligned and well distributed in the LSMO matrix, and the sharp phase boundaries suggest the well-separated growth of the two phases. Thus, a well-defined 3D interconnected LSMO frame is clearly achieved within the dense films. More importantly, by varying the type of the interlayers (e.g., CeO 2 or LSMO) and the number of interlayers from 1 to 3, such 3D framework nanostructures effectively tune the electrical transport properties of LSMO, e.g., from a 3D insulating CeO 2 framework with integrated magnetic tunnel junction structures to a 3D conducting LSMO framework, where the MR peak values have been tuned systematically to a record high of 66% at 56 K, with enhanced MR properties at temperatures above room temperature (~325 K). This new 3D-framed design provides a novel approach for maximizing film strain, enhancing strain-driven functionalities, and manipulating the electrical transport properties effectively.

Fig. 17 Schematic illustration of the 2-phase heterogeneous microstructure evolution of the thin films: from the vertically aligned nanocomposite (VAN) C0/L0 to 3D CeO 2 -framed thin films C1-C3 and 3D La 0.7 Sr 0.3 MnO 3 (LSMO)-framed thin films L1-L3. The 3D framed microstructure is achieved by alternating growth of the single phase and the VANs in a multilayered fashion. This design combines the lateral strain introduced by the multilayered thin film and the vertical strain from interfacial coupling in the VANs, creates 3D interconnected CeO 2 or LSMO framework microstructures within the thin films, and provides a versatile tool to achieve 3D strain tuning. The unit cells and phase of LSMO are in green, and the unit cells and phase of CeO 2 are in red. The single-layer LSMO-CeO 2 VAN thin films are named C0 or L0, without LSMO or CeO 2 interlayers. 3D CeO 2 -interlayered thin films with 1, 2, and 3 interlayers inserted in the VAN structures are named C1, C2, and C3, respectively. Similarly, 3D LSMO-interlayered thin films with 1, 2, and 3 interlayers inserted in the VAN are named L1, L2, and L3, respectively. Reproduced with permission of [198]

Fig. 18 a Cross-sectional TEM image of the VAN thin film C0 and b its corresponding STEM image at low magnification. c Cross-sectional and d plan-view HRTEM images of sample C0. In the HRTEM image of (c), "C" in yellow points out the CeO 2 nanopillars and "L" in green points out the La 0.7 Sr 0.3 MnO 3 (LSMO) matrix. Clearly, the CeO 2 nanopillars with a large aspect ratio are vertically aligned and well distributed in the LSMO matrix, and the sharp phase boundaries suggest the well-separated growth of the two phases. Cross-sectional TEM images of the thin films e-g C1-C3 and h-j L1-L3, showing the microstructures of 3D interconnected CeO 2 and LSMO frames embedded within the thin films, respectively. Reproduced with permission of [198]

Rare Earth-Doped Perovskite Manganite Oxide Nanoparticles

Magnetic Properties

The superconducting quantum interference device (SQUID) magnetometer is the most powerful, sensitive, and widely used instrument for magnetic characterization in materials science. This device works on the principle of quantum interference produced using Josephson junctions. This measurement system is used for d.c. magnetization and M vs. H measurements of the samples. For d.c. magnetization, a small external field is applied and χ is measured as a function of temperature at constant applied field. For M-H measurements, the magnetization is measured at a constant temperature while the magnetic field is varied up to a certain value of positive and negative applied field. The most common unit for the magnetic moment is the emu, and the natural unit of the magnetization is thus emu/g or emu/cm 3 . If one can estimate the number of atoms in the sample, then one can also calculate the magnetic moment per atom in μ B . Perovskite manganite oxide nanoparticles of La 1-x Sr x MnO 3 are among the most attractive rare earth-doped perovskite manganites, exhibiting a metallic nature, a large bandwidth, and a Curie temperature (T C ) as high as 300-370 K [199]. Their magnetic properties are influenced by many factors; the key ones include the chemical composition, the type and degree of defectiveness of the crystal lattice, the particle size and morphology, and the interactions between the particles and the surrounding matrix and/or the neighboring particles. By changing the nanoparticle size, shape, composition, and structure, one can control, to an extent, the magnetic characteristics of the nanoparticles. For example, Baaziz et al. [148] synthesized La 0.9 Sr 0.1 MnO 3 nanoparticles by the citrate-gel method and annealed them at 600°C (H6), 800°C (H8), 1000°C (H10), and 1200°C (H12), respectively. Their magnetization (M) versus temperature (T) curves measured under an applied magnetic field of 500 Oe are shown in Fig. 19a. The T C was obtained from the inflection point of dM/dT as a function of temperature for all samples (inset (i) of Fig. 19a). As shown in the M-T curves, all samples exhibit a PM to FM transition at T C upon cooling. The dependence of T C upon the particle size (d) is shown in inset (ii) of Fig. 19a, where it is clearly observed that T C decreases from 250 to 210 K as the particle size increases from 45 to 95 nm. The decrease of the Curie temperature with increasing particle size can be ascribed to the strain effects of the grains induced by the distortion at grain boundaries and to the orthorhombic strains caused by the strong J-T coupling. It was also found that the saturation magnetization (M sat ) of the La 0.9 Sr 0.1 MnO 3 nanoparticles increased with increasing particle size, as shown in Fig. 19b. The reduction of M sat in the small particles may be attributed to the loss of long-range ferromagnetic (FM) order in the smaller particles, since the surface contribution is larger in this case. This can be explained in terms of a core-shell model developed for nanoparticles [58,200,201], where ideally the core retains the bulk-like physical properties, while the outer shell (with thickness t) can be considered as a disordered magnetic system whose magnetization may be considered to be zero in the absence of a magnetic field (see the inset of Fig. 19b). This shell is called the dead layer, and it does not have any spontaneous magnetization. As the particle size decreases, the shell thickness t increases, which enhances the inter-core separation between two neighboring particles, resulting in a decrease in the magnetic exchange energy.
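The core-shell (dead-layer) picture just described can be made quantitative with a minimal sketch: assuming M sat (d) ≈ M 0 (1 − 6t/d) for a spherical particle with a dead shell of thickness t, M sat is linear in 1/d and t can be read off from a straight-line fit. The data points and molar mass below are synthetic/approximate, for illustration only:

```python
import numpy as np

d = np.array([45.0, 60.0, 80.0, 95.0])      # particle diameters (nm), synthetic
msat = np.array([48.0, 54.0, 58.0, 60.0])   # saturation magnetization (emu/g), synthetic

# Msat = M0 - (6*M0*t) * (1/d): a straight line in 1/d.
slope, intercept = np.polyfit(1.0 / d, msat, 1)
M0 = intercept
t_dead = -slope / (6.0 * M0)
print(f"M0 ~ {M0:.1f} emu/g, dead-layer thickness t ~ {t_dead:.2f} nm")

# Unit conversion mentioned earlier: emu/g -> Bohr magnetons per formula unit,
# using N_A * mu_B ~ 5585 emu/mol and an approximate molar mass.
molar_mass = 237.0  # g/mol, rough value assumed for La0.9Sr0.1MnO3
print(f"M0 ~ {M0 * molar_mass / 5585.0:.2f} mu_B per formula unit")
```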
This explains the reduction in the saturation magnetization with decreasing particle size. In order to confirm this, the M sat value versus the inverse of the crystallite size (1/d) is plotted in Fig. 19b, which reveals a quasi-linear relationship between M sat and 1/d. Similarly, a particle size effect on the magnetic properties of La 0.8 Sr 0.2 MnO 3 nanoparticles synthesized by the microwave irradiation process was also observed [64]. Besides the particle size and the synthesis process, the magnetic properties of La 1-x Sr x MnO 3 nanoparticles also depend upon the Sr-doping level. Tian et al. [20] synthesized LPCMO nanoparticles with Pr-doping concentrations of 0.1, 0.2, 0.3, and 0.4, respectively, whose magnetic ordering weakened with increasing Pr-doping concentration. This is ascribed to the weakening of the double-exchange interactions in the LPCMO nanoparticles due to the narrower bandwidth and the reduced mobility of the e g electrons. Similarly, in La 1-x Ba x MnO 3 (x = 0.3, 0.5, and 0.6) nanocubes synthesized via hydrothermal methods, the low-temperature saturation magnetization also decreased with increasing Ba content [46]. However, in Ca 1-x Sm x MnO 3 (CSM, x = 0.0-0.20) nanoparticles, the T C value first decreased abruptly with increasing Sm-doping concentration up to 0.05; with a further increase in the doping level, it monotonically increased and approached a plateau above x = 0.1, as shown in Fig. 20 [202]. The decrease in the magnetic transition temperature demonstrates that the strength of the super-exchange interaction is reduced due to the dilution of the Mn 4+ lattice by Mn 3+ spins, whereas at moderate and larger doping levels, the magnetic behavior of the nanograins is dominated by double-exchange Mn 3+ -O-Mn 4+ interactions and strong inter/intragrain coupling [202]. The M sat also increased linearly from x = 0 to 0.03, then increased abruptly in the 0.03-0.05 doping range, approaching a plateau above x = 0.1 (see Fig. 20). As another popular system, the magnetic properties of Ca-doped lanthanum manganite nanoparticles with different Ca-doping levels were investigated by several groups [203][204][205][206]. The reduction of M sat with decreasing particle size is attributed to the increase of the magnetic dead layer, and the finite-size effect causes the decrease of T C with decreasing particle size. To obtain information on the dynamical properties of magnetic nanoparticles, the ac magnetic susceptibility is measured on cooling or heating the nanoparticle samples. The ac susceptibility (χ ac ) has two components: one is in phase (χ′) with the excitation, while the other is a dissipative out-of-phase (χ″) component. Figure 21 shows the ac susceptibility of La 0.8 Sr 0.2 MnO 3 nanoparticles (with a particle size of 20 nm) versus temperature at an applied magnetic field of 1 mT and different frequencies (33.3, 111, 333.3, and 1000 Hz) [64]. In χ′(T) and χ″(T), a frequency-dependent peak near T b = 237 K (the blocking/freezing temperature) was observed, which shifted to higher temperature with increasing frequency. The frequency dependence of the ac magnetic susceptibility and the appearance of an irreversibility temperature between the field-cooling (FC) and zero-field-cooling (ZFC) magnetization curves are signatures of a superparamagnetic/spin-glass (SPM/SG) regime in both interacting and non-interacting nanoparticles [207,208].
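A hedged sketch of how such a frequency-dependent peak T b (f) can be analyzed with the Vogel-Fulcher law τ = τ 0 exp[E a /k B (T b − T 0 )] (one of the three models discussed next) is given below. The (f, T b ) pairs are synthetic, loosely inspired by the ~237 K peak of [64], and τ 0 is fixed rather than fitted, for robustness:

```python
import numpy as np
from scipy.optimize import curve_fit

kB = 8.617e-5   # Boltzmann constant (eV/K)
TAU0 = 1e-10    # attempt time (s), fixed by assumption

f = np.array([33.3, 111.0, 333.3, 1000.0])   # drive frequencies (Hz)
Tb = np.array([236.2, 236.9, 237.6, 238.4])  # synthetic blocking temperatures (K)

def vogel_fulcher(freq, Ea, T0):
    """Invert tau = TAU0*exp(Ea/(kB*(Tb - T0))) to Tb(f), with tau = 1/(2*pi*f)."""
    tau = 1.0 / (2.0 * np.pi * freq)
    return T0 + Ea / (kB * np.log(tau / TAU0))

(Ea, T0), _ = curve_fit(vogel_fulcher, f, Tb, p0=(0.05, 220.0))
print(f"Ea ~ {Ea:.3f} eV, T0 ~ {T0:.1f} K (tau0 fixed at {TAU0:g} s)")
```

A finite T 0 signals interparticle interactions; T 0 → 0 recovers the Néel-Brown law for non-interacting particles.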
Similar phenomena were also reported for other perovskite manganite nanoparticles, such as spin-glass or super-spin-glass behavior in La 0.67 Sr 0.33 MnO 3 [209] and La 0.6 Sr 0.4 MnO 3 nanoparticles [210], and SPM behavior in La 2/3 Sr 1/3 MnO 3 nanoparticles [211]. To reveal the dynamic behavior of the magnetic nanoparticles and the nature of the T b peak (SPM or SG) in the La 0.8 Sr 0.2 MnO 3 nanoparticles calcined at 600°C for 3 h, three well-known phenomenological models (the Néel-Brown model, the Vogel-Fulcher model, and the critical slowing down model) have been used to fit the experimental ac susceptibility data of the sample. The best fit, obtained with the critical slowing down model, indicates that there exists a strong interaction between the LSMO magnetic nanoparticles. However, for La 0.67 Sr 0.33 MnO 3 nanoparticles (with an average particle size of 16 nm) prepared by the sol-gel method, Rostamnejadi et al. [212] found that the experimental ac susceptibility data were best fitted by the Vogel-Fulcher model, whereas fitting the experimental data with the Néel-Brown and critical slowing down models gave unphysical values for the relaxation time. In addition, the unusually large value of the dynamic critical exponent and the small value of the relaxation time constant obtained from the fit with the critical slowing down model indicate that a spin-glass phase transition does not take place in this system of nanoparticles.

Magnetocaloric Properties

Recently, the large magnetocaloric effect (MCE) in perovskite manganites has been widely studied [213,214]. The MCE originates from the heating or cooling of a magnetic material due to the application of a magnetic field, and is characterized by the magnetic entropy change. The magnetic entropy change (ΔS) can be estimated from the M(H) curves by using the Maxwell relation

ΔS(T, H) = ∫ 0 H (∂M/∂T) H′ dH′,

where M is the magnetization, H is the magnetic field, and T is the temperature. The relative cooling power (RCP) proposed by Gschneidner et al. [215] is also an important parameter for selecting potential substances for magnetic refrigeration; it describes the refrigeration capacity of a magnetic refrigerant. It is evaluated using the relation

RCP = |ΔS max | × δT FWHM ,

where δT FWHM is the full width at half maximum of the -ΔS(T) curve. Wang et al. [38] investigated the magnetocaloric effect in Ln 0.67 Sr 0.33 MnO 3 (Ln = La, Pr, and Nd) nanoparticles prepared by the sol-gel method. Figure 22a-c shows the temperature dependence of −ΔS(T) under different applied field changes from 1 to 5 T for the LaSrMnO 3 , PrSrMnO 3 , and NdSrMnO 3 nanoparticles, respectively.

Fig. 21 Ac susceptibility versus temperature for the La 0.8 Sr 0.2 MnO 3 sample (particle size of 20 nm) at different frequencies. The inset shows the imaginary part as a function of temperature at different frequencies. Reproduced with permission of [64]

Under a field change from 0 to 5 T, the maximum values of the isothermal entropy change are found to be 2.49, 1.94, and 0.93 J/kg K for the samples with Ln = La, Pr, and Nd, respectively, and the corresponding values of the RCP reach 225, 265, and 246 J/kg. The RCP as a function of magnetic field is presented in Fig. 22d. It is seen that the RCP increases almost linearly as the field increases. These results suggest that these nanoparticles could be useful for magnetic refrigeration over a broad temperature range.
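The Maxwell-relation estimate and the RCP defined above can be sketched numerically as follows; the M(H, T) surface is a synthetic toy model (a tanh-smeared transition near an assumed T C ), so the printed numbers are illustrative only:

```python
import numpy as np

T = np.linspace(250.0, 330.0, 41)   # temperatures (K)
H = np.linspace(0.0, 5.0, 51)       # field (T)
Tc = 290.0                          # assumed transition temperature

# Toy magnetization surface M(T, H) in emu/g with a smeared PM-FM transition.
M = 80.0 * 0.5 * (1.0 - np.tanh((T[:, None] - Tc) / 15.0)) * np.tanh(2.0 * H[None, :])

dMdT = np.gradient(M, T, axis=0)    # (dM/dT)_H on the grid
dH = H[1] - H[0]                    # uniform field step
# -dS(T) = -integral_0^Hmax (dM/dT) dH, trapezoidal rule over the field axis.
neg_dS = -np.sum(0.5 * (dMdT[:, 1:] + dMdT[:, :-1]), axis=1) * dH

i = np.argmax(neg_dS)
mask = neg_dS >= neg_dS[i] / 2.0
dT_fwhm = T[mask][-1] - T[mask][0]  # crude FWHM read off the grid
print(f"-dS_max ~ {neg_dS[i]:.2f} (toy units) at T = {T[i]:.0f} K; "
      f"RCP ~ {neg_dS[i] * dT_fwhm:.0f}")
```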
Transport Properties
Transport property measurements of the manganite materials were carried out using the standard four-terminal method on a Quantum Design PPMS system. At a constant applied field, the resistance was measured as a function of temperature. Kumar et al. [154] synthesized (La0.6Pr0.4)0.65Ca0.35MnO3 nanoparticles via a sol-gel route at different sintering temperatures and measured their electrical transport properties. The electrical resistivity (ρ) as a function of temperature for the (La0.6Pr0.4)0.65Ca0.35MnO3 nanoparticles sintered at 600°C, 800°C, and 1000°C is shown in Fig. 23. The (La0.6Pr0.4)0.65Ca0.35MnO3 system shows insulating behavior at higher temperatures, owing to the development of charge-ordered states in the nanocrystalline system, and starts to behave as a metal at lower temperatures, where the double-exchange interaction plays a dominant role in the transport behavior of the system.
Fig. 22 Isothermal entropy changes as a function of temperature with field changes of 1, 2, 3, 4, and 5 T, a for LaSrMnO3, b for PrSrMnO3, and c for NdSrMnO3, respectively. d Relative cooling power (RCP) as a function of magnetic field for the Ln0.67Sr0.33MnO3 (Ln = La, Pr, and Nd) nanocrystalline samples. Reproduced with permission of [38]
Fig. 23 Variation of resistivity with temperature for the (La0.6Pr0.4)0.65Ca0.35MnO3 nanoparticles sintered at 600°C, 800°C, and 1000°C, respectively. Reproduced with permission of [154]
The insulator-metal transition temperature (T_IM) and the resistivity (ρ) of the nanoparticles depend on the sintering temperature of the system (i.e., on the particle size). With increasing sintering temperature, the particle (grain) size increases; the grain-boundary contribution is therefore reduced, and the charge carriers in the nanocrystalline system experience less scattering from the grain boundaries. This also strengthens the double-exchange mechanism, so the system shows the M-I transition at higher temperatures and its resistivity decreases significantly. Zi et al. [44] prepared La0.7Sr0.3MnO3 nanoparticles by a simple chemical co-precipitation route. To study the magnetoresistance (MR) effect, the magnetic field dependence of the MR ratio at 10 K and 300 K, obtained by sweeping the applied magnetic field from -20 to 20 kOe, is shown in Fig. 24 (reproduced with permission of [44]). MR is defined as

MR(%) = (ρ_H - ρ_0)/ρ_0 × 100,

where ρ_H and ρ_0 refer to the resistivity under the applied field and at zero field, respectively. The MR drops abruptly with increasing field in the low-field region, which is called low-field magnetoresistance (LFMR). The LFMR values at 10 and 300 K are 22.3% and 2.9% at 5 kOe, respectively. Because of the small coercive field, the alignment of the magnetization in each LSMO grain with the applied magnetic field occurs in the low-field region. In the comparatively high-field region above 5 kOe, the MR decreases linearly with the applied field, but with a much reduced slope. The high-field MR (HFMR) ratios at 10 and 300 K are 29.2% and 6.5% at 20 kOe, respectively. The HFMR can be attributed to the non-collinear spins at the LSMO grain boundaries. To investigate the particle-size effect on the transport properties of La0.7Sr0.3MnO3 (LSMO) nanoparticles, Navin and Kurchania [216] synthesized LSMO nanoparticles with particle sizes of 20 nm (LSMO-1), 23 nm (LSMO-2), and 26 nm (LSMO-3), respectively.
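Given a measured ρ(H) sweep, the MR ratio defined above and the LFMR/HFMR values quoted at specific fields can be read off directly. A minimal sketch follows; the ρ(H) curve is synthetic (an exponential low-field drop plus a gentle high-field slope, loosely mimicking the grain-boundary behavior described in the text), so the printed percentages are illustrative only.

```python
# Compute MR(H) = (rho_H - rho_0)/rho_0 * 100% from a field sweep and read
# off the LFMR (5 kOe) and HFMR (20 kOe) magnitudes. Synthetic data only.
import numpy as np

H = np.linspace(0.0, 20.0, 201)   # applied field (kOe)
rho0 = 1.0                        # zero-field resistivity (arb. units)
rho = rho0 * (1.0 - 0.20 * (1.0 - np.exp(-H / 2.0))  # sharp low-field drop
              - 0.004 * H)                            # slow high-field slope

mr = (rho - rho0) / rho0 * 100.0  # negative for CMR materials

def mr_at(field_koe):
    """Magnitude of MR at the requested field, via linear interpolation."""
    return abs(np.interp(field_koe, H, mr))

print(f"LFMR(5 kOe)  ~ {mr_at(5.0):.1f}%")
print(f"HFMR(20 kOe) ~ {mr_at(20.0):.1f}%")
```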
Figure 25 shows the temperature dependence of the resistivity of these samples, measured with (H = 1 T) and without a magnetic field in the temperature range of 10-300 K. The resistivity values of the LSMO-1 nanoparticles (Fig. 25a) were higher than those of the LSMO-2 (Fig. 25b) and LSMO-3 (Fig. 25c) nanoparticles, which is ascribed to the smaller particle size of the LSMO-1 sample. As the particle size becomes smaller, more grain boundaries in the sample act as scattering centers for the charge carriers, resulting in a larger resistivity. In addition, the resistivity of all samples decreases under an external magnetic field of 1 T: the applied field increases the spin ordering and reduces the localization of the charge, which lowers the resistivity. The LFMR property is related to the spin-dependent scattering or spin-dependent tunneling of the conduction electrons near the interfaces and grain boundaries. Figure 26a shows the temperature dependence of the MR of the samples LSMO-1, LSMO-2, and LSMO-3 at an applied magnetic field of 1 T; their MR values increase monotonically with decreasing temperature. The LFMR at 1 T and 10 K for the samples LSMO-1, LSMO-2, and LSMO-3 was 32.3%, 28.4%, and 25.1%, respectively. Evidently, the LFMR is enhanced with decreasing particle size. Figure 26b shows the normalized resistivity (ρ_H/ρ_0) as a function of applied magnetic field at 10 K and 300 K, where ρ_0 and ρ_H are the resistivities without and with magnetic field, respectively. A sharp drop in the resistivity in the low-field region was again observed, and the resistivity does not saturate up to a magnetic field of 4 T. The effects of the doping level on the electrical transport properties of perovskite manganite nanoparticles were also investigated. Thombare et al. [217] reported the electrical properties of Nd1-xSrxMnO3-δ (NSMO, 0.3 ≤ x ≤ 0.7) nanoparticles synthesized by a glycine-assisted auto-combustion method. Figure 27a shows the resistivity of NSMO in the temperature range 5-300 K at zero magnetic field; all samples show high resistivity. The resistivity increases slightly with the Sr concentration; on cooling, the rise in resistivity is gentle down to 100 K and becomes steeper below that. No M-I transition is observed in these NSMO nanoparticles without an applied magnetic field. Under an applied magnetic field (H) of 8 T, however, the M-I transition temperature (T_P) is clearly observed around 12-48 K, as shown in Fig. 27b. These transition temperatures are lower than in the bulk counterpart, which may be due to the formation of small ferromagnetic clusters that suffice for the magnetic contribution but forbid conduction [218].

Optical Properties
The optical study of perovskite manganites has shown, interestingly, that their optical properties are controlled by the electronic structure of the perovskites. Kumar et al. [154] synthesized (La0.6Pr0.4)0.65Ca0.35MnO3 nanoparticles; to investigate their optical absorbance and evaluate the optical band gap, ultraviolet-visible (UV-Vis) spectroscopy measurements were carried out, and the obtained UV-Vis spectra are shown in Fig. 28a (reproduced with permission of [154]). There is a sharp absorption edge around 308 nm, in the ultraviolet region.
The optical absorption edge can be analyzed using the relation [219]

αhν = B(hν - E_g)^n,

where E_g is the band gap energy, hν is the photon energy, and α is the absorption coefficient, which depends on the optical absorbance (A) and the thickness (d); n equals 1/2 for a direct transition or 2 for an indirect transition. The variation of (αhν)² with photon energy (hν) for the (La0.6Pr0.4)0.65Ca0.35MnO3 nanoparticles post-annealed at 600°C, 800°C, and 1000°C is plotted in Fig. 28b. (αhν)² varies linearly over a very wide range of photon energy, indicating direct transitions in these systems. The intercepts of these plots on the energy axis give the energy band gaps, determined to be 3.52, 3.46, and 3.42 eV for the nanoparticles post-annealed at 600°C, 800°C, and 1000°C, respectively. These direct band gaps fall in the range of wide-band-gap semiconductors. The decrease of the band gap (red-shift) with increasing post-annealing temperature can be attributed to the increased particle size. Negi et al. [220] also investigated the optical properties of GdMnO3 nanoparticles synthesized by a modified sol-gel route. The room-temperature optical absorption spectrum of the GdMnO3 nanoparticles, measured in the range of 200-600 nm, clearly shows that the absorbance is low in the range of 380-600 nm; low absorbance across the visible region is an essential condition for nonlinear optical applications [221]. An extrapolation of the linear region of a plot of (αhν)² versus photon energy (hν) gives an optical band gap of ∼2.9 eV. Wang and Fan [223] reported the magnetic properties of electron-doped Ca0.82La0.18MnO3 nanowires and nanoparticles and compared them with their bulk counterpart. The Ca0.82La0.18MnO3 bulk exhibits a strong charge-ordering (CO) peak at T_CO = 132 K followed by an AFM ground state, whereas the CO peak becomes weak in the nanowires (T_CO = 124 K) and disappears in the nanoparticles, which instead exhibit ferromagnetism with T_C = 165 K. Chandra et al. [224] also reported the magnetic properties of single-crystalline La0.5-based manganite nanowires.
Fig. 29 The inset of (e) shows a magnified view of χ′(T) for more frequencies. Reproduced with permission of [224]
As the temperature is lowered from 340 K, the nanowires exhibit a PM to FM transition at T_C ∼ 315 K, followed by a peak at T_N ∼ 210 K associated with the onset of the FM-AFM transition. As the applied magnetic field is increased, the irreversibility temperature shifts to lower temperatures, as shown in Fig. 29b-d. Figure 29e, f demonstrates the temperature dependence of the real (χ′) and imaginary (χ″) parts of the ac susceptibility in the temperature range of 10-340 K, respectively. The χ′(T) curves show a maximum at T_N with no frequency dependence and a kink at T_L. The χ″(T) curves, which give insight into the magnetic loss behavior, show a peak at T_C, a broad shoulder at T_N, and a kink at T_L. The χ′(T) peak shifts to higher temperature as the frequency is increased, consistent with previously reported results [225,226].

Magnetocaloric properties
Kumaresavanji et al. [227] reported the MCE in La0.7Ca0.3MnO3 nanotube arrays, synthesized by a template-assisted sol-gel method, at temperatures ranging from 179 to 293 K and under magnetic fields up to 5 T.
The temperature dependence of -ΔS_M at different fields for the nanotube arrays and the bulk is plotted in Fig. 30a, b. Compared with the bulk counterpart (4.8 J/kg K), the magnitude of ΔS_M is smaller for the nanotube arrays (1.9 J/kg K). In addition, the -ΔS_M(T) curves of the bulk sample show a narrow peak at 258 K, which becomes broader and shifts to lower temperature for the nanotube arrays. The refrigerant capacity (RC) is another important parameter for selecting potential substances for magnetic refrigeration; it describes the amount of heat transferred between the hot and cold sinks in one ideal refrigeration cycle. It is evaluated using the relation

RC = ∫_{T_1}^{T_2} |ΔS_M(T)| dT,

where T_1 and T_2 are the temperatures of the cold and hot reservoirs, respectively, which correspond to the full width at half maximum (δT_FWHM) of the ΔS_M curves. The calculated δT_FWHM and RC values for the nanotube arrays and the bulk samples are depicted in Fig. 30c. The RC values vary linearly with H in both cases, and the RC value of the bulk sample is considerably larger than that of the nanotube arrays. However, the δT_FWHM values of the nanotube arrays are nearly 54% larger than those observed for the bulk sample. The temperature dependence of the ΔS_M curves of the nanotube arrays and the bulk at a field of 5 T is compared in Fig. 30d (reproduced with permission of [227]); the shaded part represents the δT_FWHM of the nanotube arrays, which is nearly three times larger than that of their bulk counterpart. From this figure one can see how the nanotube arrays provide an expanded working temperature range compared to the bulk. Even though the nanotube arrays present a broader ΔS_M curve, their ΔS_M value is lower than that of the bulk sample. Nevertheless, the higher surface-to-volume ratio, together with the hollow structure and the broader ΔS_M peaks, indicates that manganite nanotubes could be suitable materials for magnetic refrigeration in nano-electromechanical systems. The magnetocaloric properties of La0.6Ca0.4MnO3 nanotubes with a diameter of 280 nm and a wall thickness of 10 nm were also reported by Andrade et al. [228]. It was found that the decrease of ΔS is commonly accompanied by a broadening of the ΔS curve. The RCP of nanoparticles decreases with decreasing particle size, but the nanoparticles still possess a larger cooling power than nanotubes of the same compound, owing to the broadening of the magnetic transition observed in these samples. It is thus important to notice that the reduced maximum value of ΔS observed for nanosystems is often accompanied by a broad magnetic entropy change.

Transport Properties
Lei et al. [229] synthesized single-crystalline MgO/La0.67Ca0.33MnO3 core-shell nanowires (MgO core ∼20 nm in diameter, La0.67Ca0.33MnO3 shell ∼10 nm thick) by depositing epitaxial La0.67Ca0.33MnO3 sheaths onto MgO nanowire templates using the PLD technique. Transport investigations were carried out by measuring the four-probe resistance of individual core-shell nanowires, as shown in Fig. 31. The SEM image of a typical device, with a 5-μm-long nanowire and four uniformly distributed electrodes, is shown in Fig. 31a. The four-probe resistance of an MgO/La0.67Ca0.33MnO3 nanowire was recorded as a function of temperature under two different magnetic fields (0 and 1 T), as shown in Fig. 31b. The M-I transition occurred at ∼140 K under zero magnetic field, and the transition temperature shifted to ∼160 K when a magnetic field of 1 T was applied normal to the device substrate.
This M-I transition and the field-induced shift of the transition temperature strongly suggest a correlation between the ferromagnetism and the metallicity, which is ascribed to the double-exchange mechanism. A corresponding M-I transition associated with the FM to PM transition also occurred in the MgO/La0.67Sr0.33MnO3 core-shell nanowires, with T_MI ∼ 240 K at H = 0 shifting to ∼250 K under a perpendicular magnetic field of 1 T (Fig. 31c). In addition, MR measurements were performed on both the MgO/La0.67Ca0.33MnO3 (inset of Fig. 31b) and the MgO/La0.67Sr0.33MnO3 (inset of Fig. 31c) core-shell nanowires at their transition temperatures by sweeping the perpendicular magnetic field between ±2.0 T. A sizable MR was obtained for La0.67Ca0.33MnO3 (inset of Fig. 31b), and an MR of 12% was achieved at T = 240 K and H = 2.0 T for La0.67Sr0.33MnO3 (inset of Fig. 31c).

Optical Properties
Arabi et al. [80] synthesized La0.7Ca0.3MnO3 nanorods by a hydrothermal method under different conditions (different mineralization agents, KOH and NaOH, and various alkalinity conditions: 10, 15, and 20 M). The UV-Vis absorption spectra of all the La0.7Ca0.3MnO3 nanorods are shown in Fig. 32a, where three clear peaks appear in the optical response of these nanorods. The first peak is observed around 220 nm (5.6 eV) for all samples, a strong absorption peak appears at wavelengths of about 325-380 nm (3.8-3.3 eV), and a third peak appears around 950 nm (1.3 eV) in all samples, as shown in the inset of Fig. 32a. The decrease and broadening of the absorption peaks for the N-series samples is related to size reduction. Figure 32b shows the curves of (αhν)² versus hν; the intercepts of these plots on the hν axis provide the optical band gaps.
Fig. 32 The variation of (αhν)² versus hν (photon energy) for the different samples (N10, N15, N20, K10, K15, and K20). N (or K) denotes the NaOH (or KOH) mineralizer, and 10 (15, 20) the NaOH (or KOH) concentration. Reproduced with permission of [80]
Figure 33 shows the transition from the EPS state to a single ferromagnetic metallic (FMM) state, with the corresponding phase maps presented in Fig. 33b-d. In the color scale, contrast below zero (red or black) represents the FMM phase, while contrast above zero (green or blue) represents the non-ferromagnetic phase. All the disks show distinct features of the EPS state (i.e., the coexistence of the FMM and charge-order insulating (COI) phases), except for the 500 nm disk. The typical length scale of the EPS domains is around a micrometer. It was also found that the portion of the FMM phase increases with decreasing temperature. Huijben et al. [231] grew ultrathin La0.7Sr0.3MnO3 films with thicknesses from 3 to 70 unit cells on STO substrates by the PLD method; their magnetic properties are shown in Fig. 34. Figure 34a shows the M-H loops of all samples. The saturation magnetization (M_S) increases with increasing film thickness up to 13 unit cells (~48 Å), whereas the coercive field (H_C) decreases. The M-T curves for all the films with different thicknesses are displayed in Fig. 34b, from which the Curie temperature T_C is determined. The thickness dependence of H_C and T_C is shown in Fig. 34c: H_C and T_C are nearly constant for thicknesses down to 13 unit cells, while further reduction of the film thickness results in a dramatic change in the magnetic properties, although the films remain ferromagnetic down to three unit cells (~12 Å).
Fig. 34 a Magnetic hysteresis loops measured at 10 K; the diamagnetic contribution to the magnetization (not shown) has been attributed to the substrate and subtracted. b Temperature dependence of the magnetization measured at 100 Oe; all samples were field-cooled at 1 T from 360 K along the [100] direction before the measurements were performed. c Layer thickness dependence of the coercive field H_C and the Curie temperature T_C. Reproduced with permission of [231]
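A common way to read T_C off M-T curves like those in Fig. 34b is the inflection-point criterion, taking T_C as the temperature where dM/dT is most negative. The short sketch below works under that assumption, with a synthetic M(T) curve standing in for real data; whether the authors of [231] used this exact criterion is not stated in the text.

```python
# Estimate the Curie temperature from an M(T) curve as the temperature of
# steepest descent (minimum of dM/dT). Synthetic data for illustration only.
import numpy as np

T = np.linspace(200.0, 360.0, 161)                      # temperature (K)
Tc_true = 315.0                                         # hypothetical T_C
M = 25.0 * (1.0 - np.tanh((T - Tc_true) / 8.0))         # smooth FM-to-PM step

dM_dT = np.gradient(M, T)                               # numerical derivative
Tc_est = T[np.argmin(dM_dT)]                            # steepest drop

print(f"Estimated T_C ~ {Tc_est:.1f} K")
```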
Magnetocaloric Properties
Debnath et al. [232] reported the magnetocaloric properties of an epitaxial La0.8Ca0.2MnO3/LaAlO3 thin film grown by the PLD method. The magnetic entropy changes in the La0.8Ca0.2MnO3 thin film for different magnetic field directions are shown in Fig. 35a-c, and summarized in Fig. 35d. Large RCP values (1000 mJ/cm³ for the ab plane and 780 mJ/cm³ for the c-direction) are obtained, higher than those observed in other perovskite manganites and rare-earth alloys [213,233,234]. Such a high entropy change and high RCP with no noticeable hysteresis loss make epitaxial La0.8Ca0.2MnO3 films attractive for magnetic refrigeration over large usable temperature ranges. Giri et al. [235] deposited epitaxial Sm0.55Sr0.45MnO3 thin films [106]; the magnetization measured at 10 K with increasing and decreasing magnetic fields for the films grown on STO shows very little field-hysteretic loss, a desirable characteristic for magnetic refrigeration. This low-field, large magnetic entropy change in the thin film is mainly due to the rapid change of magnetization near the transition temperature in the easy magnetization plane. The specific heat (C_P) data of the Sm0.55Sr0.45MnO3 film grown on the LAO substrate are shown in Fig. 36d and clearly exhibit a lambda-shaped anomaly close to T_C, which arises from the second-order magnetic phase transition. The peak temperature of C_P of the film matches well with the T_C determined from dc magnetization measurements. The relative cooling power (RCP) is usually calculated for both regimes (near T_C and around T_P), and several methods have been used to calculate it. In the first method, RCP-1 is calculated as the product of the maximum peak value |ΔS_M^max| and the full width at half maximum δT_FWHM, i.e., RCP-1 = |ΔS_M^max| × δT_FWHM. In the second method, RCP-2 is estimated as the maximum value (area) of the product |ΔS_M| × ΔT under the |ΔS_M| vs. T curve. The ΔS_M versus T curve of the Sm0.55Sr0.45MnO3 film grown on STO at a magnetic field of 6 T is shown in Fig. 36e; the larger rectangular sketch and the shaded area correspond to RCP-1 and RCP-2, respectively. The inset of Fig. 36e shows RCP-2 as a function of magnetic field for three different films, and Fig. 36f demonstrates RCP-1 as a function of magnetic field for the same three films. For both temperature regimes, the RCP increases with increasing magnetic field, and notably the RCP values are significantly larger around T_C than around T_P. A material with a higher RCP in the same refrigeration cycle is therefore preferred, as it transports a greater amount of heat in an ideal refrigeration cycle. The epitaxial Sm0.55Sr0.45MnO3 films grown on STO exhibit a large MCE modulated by lattice strain; their larger |ΔS_M| and enhanced RCP with almost zero hysteresis loss make them promising for magnetic refrigeration, providing an alternative route in the search for energy-efficient magnetic refrigerators.

Transport Properties
Chen et al. [236] investigated the transport properties of (La1-xPrx)0.67Ca0.33MnO3 (0 ≤ x ≤ 0.35) films (with thicknesses from 9 to 60 nm) grown on NGO(110) substrates by PLD. Their temperature coefficient of resistance (TCR, defined as (dρ/dT)/ρ, where ρ is the resistivity and T the temperature) is shown in Fig. 37a; the inset shows the corresponding ρ-T curves.
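The TCR just defined, (dρ/dT)/ρ, is straightforward to evaluate numerically from a measured ρ-T curve, and its peak locates the steep resistivity change near the M-I transition. A minimal sketch with a synthetic ρ(T) curve follows; the peak value and position depend entirely on the assumed data, not on the films of [236].

```python
# Compute the temperature coefficient of resistance TCR = (1/rho) * drho/dT
# from a rho(T) curve and locate its peak. Synthetic data for illustration.
import numpy as np

T = np.linspace(80.0, 300.0, 221)                 # temperature (K)
T_mi = 190.0                                      # hypothetical transition temperature
# Hypothetical rho(T) with a steep rise near T_mi plus a weak background
rho = 0.5 + 0.002 * T + 4.0 / (1.0 + np.exp(-(T - T_mi) / 6.0))

tcr = np.gradient(rho, T) / rho * 100.0           # TCR in % per K
i = np.argmax(np.abs(tcr))                        # sharpest relative change

print(f"Peak |TCR| ~ {abs(tcr[i]):.1f} % K^-1 at T ~ {T[i]:.0f} K")
```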
The doping-level-dependent T_C and TCR peak values are shown in Fig. 37b. The monotonic reduction of T_C accompanies the increasing Pr doping. The TCR value depends strongly on the Pr-doping level x, reaching a maximum value of 88.3% K⁻¹. Substitution of Sr2+ at the La-site (x = 0.3) enhances the e_g electron bandwidth owing to the larger size of the Sr2+ ion, promoting the motion of more itinerant electrons between Mn3+ and Mn4+, which in turn suppresses the resistivity and enhances T_P (the metal-insulator/semiconductor transition temperature). In addition, with the substitution of Sr2+ (x) at the La-site, the average grain size increases and the grain-boundary density decreases, suppressing the scattering of e_g electrons and thereby increasing T_P. The thickness-dependent transport properties of La0.7Pb0.3MnO3 manganite films grown on LAO(100) single-crystal substrates by the CSD technique have also been reported [115]. Figure 38a shows the ρ-T data of all the La0.7Pb0.3MnO3/LAO films under zero applied field. The resistivity ρ decreases and T_P increases up to 269 K (for a thickness of 350 nm) with increasing film thickness, which is ascribed to relaxation of the total (in-plane and out-of-plane) strain. Figure 38b shows the low-temperature MR behavior of films of various thicknesses and of the bulk (MR isotherms at 5 K). The bulk exhibits an MR of 38% at 9 T, while the film shows an MR of ~42% at room temperature. In addition, as the film thickness increases, the MR value at 5 K increases from 5% for the 150 nm film to 18% for the 350 nm film. This observation suggests a thickness-dependent microstructural effect on the transport and MR behavior of the La0.7Pb0.3MnO3/LAO films at low temperature under high fields. Thickness-dependent transport properties of La0.7Sr0.3MnO3 thin films have also been reported [116,117,237-239].

Optical Properties
Cesaria et al. [240] reported the optical response of 200-nm-thick La0.7Sr0.3MnO3-δ films deposited by PLD on amorphous silica substrates at nearly 600°C under different oxygen pressures (0.1, 0.5, 1, 5, and 10 Pa). A blue-shift of the transmittance-curve edge was observed as p(O2) was increased from 1 to 10 Pa, which is ascribed to changes in the oxygen nonstoichiometry of the films, leading to larger Mn4+/Mn3+ ratios at higher oxygen pressure. To understand the optical response of the deposited films in depth, the nature (direct or indirect) of the optical transitions in the films was investigated by plotting (Eα(E))ⁿ versus the photon energy E for n = 1/2 and 2, where α(E) is the absorption coefficient; n equals 1/2 for an indirect transition process and 2 for a direct transition process. The plots of (Eα(E))² versus energy E for all the deposited films exhibit linear regions, as shown in Fig. 39a, from which the direct energy gap values (E_g) of the thin films can be estimated by extrapolating the linear regions of the graphs to the energy axis.
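This extrapolation of the linear region of a direct-gap Tauc plot to the energy axis, used both here and for the nanoparticles above, can be automated once a fitting window inside the linear absorption edge is chosen. The sketch below assumes the direct-gap form (αhν)² ∝ (hν - E_g) and a synthetic, slightly noisy spectrum; the fitting window is a user choice, not something prescribed by [240].

```python
# Direct-gap Tauc analysis: fit the linear part of (alpha*h*nu)^2 vs h*nu and
# extrapolate to zero to estimate E_g. Synthetic spectrum for illustration.
import numpy as np

E = np.linspace(2.5, 4.5, 200)                     # photon energy h*nu (eV)
Eg_true = 3.46                                     # hypothetical band gap
tauc = np.where(E > Eg_true, 5.0 * (E - Eg_true), 0.0)        # ideal (alpha*h*nu)^2
tauc += np.random.default_rng(0).normal(0.0, 0.02, E.size)    # measurement noise

# Fit only a window safely inside the linear absorption-edge region
window = (E > 3.6) & (E < 4.2)
slope, intercept = np.polyfit(E[window], tauc[window], 1)
Eg_est = -intercept / slope                        # x-axis intercept

print(f"Estimated direct band gap E_g ~ {Eg_est:.2f} eV")
```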
Among the allowed direct transitions, the lowest-energy one, at nearly 1.0 eV, was observed only for the films grown under oxygen pressures of 0.1 Pa and 0.5 Pa. It can be assigned to electronic excitations Mn3+(e_g^1) → Mn3+(e_g^2) from a bound state (owing to the lattice distortion around the Mn3+ ion) into a final state also bound by lattice distortions. The highest observed transition appeared at nearly 3.5 eV. Other transitions occurred at intermediate energies for all the examined values of p(O2): optical transitions of at least 2.45 eV were observed, moving to higher energy as p(O2) was increased. The films grown under an oxygen pressure of 10 Pa exhibited further transitions (at 3.04 eV and 3.50 eV), which can be assigned to transitions from O 2p states to the higher-energy Mn3+(e_g^2) band. In the plots of (Eα(E))^1/2 versus energy E for the films grown under oxygen pressures of 10 Pa and 0.5 Pa, linear regions were also observed, as shown in Fig. 39b; these may be assigned to indirect (phonon-assisted) transitions, i.e., phonon absorption and phonon emission, or may be indicative of an amorphous nature. The occurrence of two adjacent linear dependences can be interpreted as corresponding to phonon absorption and phonon emission processes leading to an indirect band gap. Indirect transitions are assigned to the films grown under oxygen pressures of 10 Pa, 5 Pa, and 1 Pa, while an amorphous nature is detected in the films grown under 0.5 Pa and 0.1 Pa. Tanguturi et al. [241] reported the optical properties of Nd0.7Sr0.3MnO3 films grown on amorphous SiO2 substrates. In the absorption coefficient spectra of the as-deposited and annealed films, a broad peak was observed in the region hν < 2 eV, beyond which the spectra rose rapidly up to around 4 eV; beyond 4 eV, no appreciable change was observed. The energy band gaps of the films, determined by plotting (αE)² as a function of energy E, were 2.98 eV for the as-deposited film and 2.64 eV for the annealed film. The amorphous film therefore exhibits a larger band gap than the crystalline one; such a large band gap in the amorphous phase is a known phenomenon [242].

3D Rare Earth-Doped Perovskite Manganite Oxide Nanostructures
To date, only limited work on 3D rare earth-doped perovskite manganite oxide nanostructures is available. Here, one example of a 3D rare earth-doped perovskite manganite oxide nanostructure is demonstrated: a structure constructed by interlayering La0.7Sr0.3MnO3 (LSMO)-CeO2 vertically aligned nanocomposite (VAN) thin films with pure CeO2 (or LSMO) layers [198]. These 3D strained-framework nanostructures combine both the lateral strain from the layered structures and the vertical strain in the VAN, thus maximizing the 3D strain states in the systems and manipulating the electron transport paths. For example, in the 3D nanostructured LSMO-CeO2 VAN systems, the electrical transport can be tuned effectively from a 3D insulating CeO2 framework with integrated magnetic-tunnel-junction structures to a 3D conducting LSMO framework by varying the type of interlayer (CeO2 or LSMO) and the number of interlayers from 1 to 3 [198]. Figure 40 shows the transport properties of these 3D framed nanostructures (reproduced with permission of [198]). The temperature-dependent resistance (R-T) curves at zero field are shown in Fig. 40a for samples C0-C3, and Fig. 40b depicts the temperature dependence of the MR (%) in the 3D-framed nanostructures C0-C3.
In Fig. 40a, the resistance decreases with increasing temperature, indicating typical semiconductor behavior in C0-C3, owing to the large portion of CeO2 introduced into the nanostructures (CeO2:LSMO ≥ 1:1 in C0-C3). The MR (%) of the films C0-C3 first increases and then decreases as the temperature rises from low temperature to room temperature, so an MR peak is observed around 50 K. The 3D CeO2 frameworks also enhance the overall MR properties: the MR peak value increases from 40% (C0) to 51% (C3) and 57% (C2), and is maximized at 66% (C1). This enhancement can be ascribed to the 3D CeO2 framework not only tailoring the out-of-plane strain of the LSMO phase but also building a 3D tunneling framework for electron transport. The relatively lower MR (%) in the C2 and C3 samples compared to C1 is possibly related to the surface roughness observed in both samples, where the 3D insulating framework might not be effective in the top layers. In contrast to the C1-C3 samples, metallic behavior is observed in the L1-L3 samples with a 3D LSMO framework, as shown in Fig. 40c. The resistance increases gradually from 10 to 350 K, with an M-I transition temperature (T_MI) at ~325 K. This metallic behavior is associated with the high LSMO content of L1-L3 and the 3D interconnected conductive LSMO frames built into the composite films. Meanwhile, the resistance of the composite films L1-L3 decreases over the entire temperature range as more lateral LSMO interlayers are inserted: the LSMO interlayers interconnect with the vertical LSMO domains to form a conductive 3D frame in the film, so the tunneling MR effect is effectively reduced. Figure 40d demonstrates the temperature dependence of the MR for the nanocomposite thin films L0-L3, with the M-I transition temperature (T_MI) marked for samples L1-L3. The L1-L3 structures enable higher MR values at higher temperatures, e.g., 13% at 316 K in sample L2, a dramatic improvement compared to C0-C3 and to previous reports at higher temperatures (e.g., near room temperature). Based on the above observations, it is clear that the LSMO/CeO2/LSMO magnetic tunneling junctions (MTJ) and their geometrical arrangement in the 3D framework nanostructures are very important for enhancing the low-field MR properties. In the C1-C3 samples, effective vertical and lateral MTJ structures are integrated into the system by incorporating CeO2 interlayers in the VAN; such 3D insulating frameworks effectively maximize the 3D magnetic tunneling effect and lead to a record-high MR% in LSMO-based systems. This 3D strain-framework concept opens a new avenue to maximize film strain beyond the initial critical thickness and can be applied to many other material systems with strain-enabled functionalities beyond magnetotransport.

Applications of Rare Earth-Doped Perovskite Manganite Oxide Nanostructures
Rare Earth-Doped Perovskite Manganite Oxide Nanoparticles
Magnetic Refrigeration
The magnetocaloric effect (MCE) makes magnetic materials attractive for potential applications in magnetic refrigeration. Compared with conventional gas-compression technology, magnetic refrigeration offers many advantages, such as the absence of gases or hazardous chemicals, low energy consumption, and low capital cost [243-245].
Mahato et al. [246] synthesized La0.7Te0.3MnO3 nanoparticles with an average particle size of 52 nm, in which a large magnetic entropy change of 12.5 J kg⁻¹ K⁻¹ was obtained near T_C for a field change of 50 kOe, supporting their application in magnetic refrigeration. Yang et al. [247] reported that La0.7Ca0.3MnO3 nanoparticles with average sizes of 30 and 50 nm show maximum magnetic entropy changes at a 15 kOe applied field of 1.01 and 1.20 J kg⁻¹ K⁻¹, respectively, indicating that La0.7Ca0.3MnO3 nanoparticles could be considered potential candidates for room-temperature magnetic refrigeration. Further studies of the MCE in manganite nanoparticles were reported by Phan et al. [248].

Biomedical Applications
Magnetic nanoparticles offer some attractive possibilities in biomedicine. First, they have controllable sizes ranging from a few nanometers up to tens of nanometers, which places them at dimensions smaller than or comparable to those of a cell (10-100 μm), a virus (20-450 nm), a protein (5-50 nm), or a gene (2 nm wide and 10-100 nm long). This means that they can "get close" to a biological entity of interest. Indeed, they can be coated with biological molecules to make them interact with or bind to a biological entity, thereby providing a controllable means of "tagging" or addressing it. Second, the nanoparticles are magnetic, which means that they obey Coulomb's law and can be manipulated by an external magnetic field gradient. This "action at a distance," combined with the intrinsic penetrability of magnetic fields into human tissue, opens up many applications involving the transport and/or immobilization of magnetic nanoparticles, or of magnetically tagged biological entities. In this way, they can be made to deliver a package, such as an anticancer drug or a cohort of radionuclide atoms, to a targeted region of the body, such as a tumour. Third, magnetic nanoparticles can be made to respond resonantly to a time-varying magnetic field, with advantageous results related to the transfer of energy from the exciting field to the nanoparticle. For example, the particles can be made to heat up, which leads to their use as hyperthermia agents, delivering toxic amounts of thermal energy to targeted bodies such as tumours, or as chemotherapy and radiotherapy enhancement agents, where a moderate degree of tissue warming results in more effective malignant cell destruction. These and many other potential applications are made available in biomedicine as a result of the special physical properties of magnetic nanoparticles [250]. Bhayani et al. [251] reported, for the first time, the immobilization of commonly used biocompatible molecules, namely bovine serum albumin and dextran, on La1-xSrxMnO3 nanoparticles. Such bioconjugated nanoparticles have tremendous potential applications, especially in the field of biomedicine. Daengsakul et al. [61,62] reported the cytotoxicity of La1-xSrxMnO3 nanoparticles with x = 0, 0.1, 0.2, 0.3, 0.4, and 0.5 evaluated with NIH 3T3 cells; the results showed that the La1-xSrxMnO3 nanoparticles were not toxic to the cells, which is promising for medical applications. Similar toxicity studies were performed by Zhang et al. with a view to safe biomedical applications [252]. Magnetic resonance imaging (MRI) represents a powerful imaging method commonly utilized in clinical practice.
The method offers excellent spatial resolution, which is very suitable not only for the examination of human bodies but also for detailed anatomical studies of animal models in vivo in biological research. On the other hand, the sensitivities of other techniques, such as optical methods, single-photon emission computed tomography, or positron emission tomography, are much higher. The design and synthesis of so-called dual or multimodal probes is therefore an important field. The combination of both respective approaches in a single dual probe, e.g., magnetic nanoparticles tagged with fluorescent moieties, establishes a very useful method for bioimaging. Moreover, fluorescent magnetic nanoparticles are promising materials for other medical applications in which the same tool might be used either for diagnostics or for therapy, such as magnetic hyperthermia and optically guided surgery. The positioning of the magnetic cores with an external magnetic field could also be used in cell micromanipulation [253-255]. Kačenka et al. [256] reported the potential of magnetic nanoparticles based on the La0.75Sr0.25MnO3 perovskite manganite for MRI. Fluorescent magnetic nanoparticles based on a La0.75Sr0.25MnO3 core coated with a two-ply silica layer were synthesized and thoroughly characterized in order to prepare a novel dual MRI/fluorescence probe with enhanced colloidal and chemical stability; viability tests show that the complete particles are suitable for biological studies. In recent years, magnetic nanoparticles have also been used in magnetic hyperthermia, which refers to the introduction of ferromagnetic or superparamagnetic particles into tumor tissue: placed in an alternating magnetic field, the nanoparticles generate heat that can be used to treat cancer. La1-xSrxMnO3 nanoparticles for hyperthermia applications have been studied in detail [257-260].

Catalysts
Research in environmental catalysis has continuously evolved over the last two decades, owing to the necessity of obtaining worthwhile solutions to environmental pollution problems. The development of innovative environmental catalysts is a crucial factor in establishing new sustainable manufacturing technologies. Rare earth perovskite manganites attract notable attention from researchers due to their high catalytic activity in numerous redox reactions [261,262]. Oxygen electrocatalysis is one of the key processes limiting the efficiency of energy-conversion devices such as fuel cells, electrolysers, and metal-air batteries. In particular, the oxygen reduction reaction (ORR) is commonly associated with slow kinetics, requiring high overpotentials and high catalyst loadings. Celorrio et al. [263] reported the effect of tellurium (Te) doping on the electrocatalytic activity toward the ORR of La1-xTexMnO3 nanoparticles with average diameters in the range of 40-68 nm. Carbon monoxide is a colorless, odorless, and tasteless gas that is slightly lighter than air; it is toxic to humans and animals when encountered at higher concentrations. The catalytic oxidation of CO is utilized in various applications, e.g., indoor air cleaning, CO gas sensors, CO2 lasers, and automotive exhaust treatment. The structural and catalytic properties of La1-x(Sr or Bi)xMnO3 samples with x = 0.0, 0.2, or 0.4 for CO oxidation have been investigated [264].
Volatile organic compounds (VOCs), emitted from many industrial processes and transportation activities, are considered major contributors to atmospheric pollution and are dangerous to human health [265]. From an economic point of view, catalytic combustion is one of the most attractive technologies for destroying VOC emissions compared with incineration. Blasin-Aubé et al. [266] reported that the La0.8Sr0.2MnO3+x perovskite-type catalyst is highly active in the oxidative destruction of VOCs, especially oxygenated compounds. The possibility of enhancing catalytic activity in NO reduction has also been studied [267]. Ran et al. synthesized Ce-doped PrMnO3 catalysts and investigated the effect of cerium doping on their catalytic properties. Their results showed that, for the Ce-doped series with lower doping ratios, most of the Ce4+ ions were introduced into the A-site to form perovskite-type oxides together with some additional ceria. The oxidation state of manganese was more easily affected by the addition of cerium, and more vacancies might arise at the A-site owing to the structural limits of the oxide. The high catalytic activity in NO reduction might be caused by the presence of oxygen vacancies and the relative ease of oxygen removal; in addition, ceria can adsorb oxygen to sustain the reduction of NO.

Solid Oxide Fuel Cells
Solid oxide fuel cells (SOFCs) have attracted great interest as a potentially economical, clean, and efficient means of producing electricity in a variety of commercial and industrial applications. Their major advantages include high efficiency, potential for cogeneration, modular construction, and very low pollutant emissions. Lanthanum manganite-based oxides, e.g., La1-xCaxMnO3 and La1-xSrxMnO3, are promising cathode materials because of their high electrical conductivity and good compatibility with yttria-stabilized zirconia (YSZ). For example, a nano-sized (La0.85Sr0.15)0.9MnO3 and Y0.15Zr0.85O1.92 (LSM-YSZ) composite, 100-200 nm in diameter, was co-synthesized by a glycine-nitrate process (GNP) [268]. Alternating-current impedance measurements revealed that the co-synthesized LSM-YSZ electrode shows lower polarization resistance and activation energy than a physically mixed LSM-YSZ electrode. This electrochemical improvement was attributed to the increase in the three-phase boundary and the good dispersion of the LSM and YSZ phases within the composite. Lay et al. [269] synthesized the Ce-doped La/Sr chromo-manganite series (CexLa0.75-xSr0.25Cr0.5Mn0.5O3 with x = 0, 0.10, 0.25, and 0.375) as potential SOFC anode or solid oxide electrolyzer cell (SOEC) cathode materials. All these materials are stable under both the elaboration and operating conditions of an SOFC anode, as well as under cathodic steam-electrolysis conditions in an SOEC. In addition, the possibility of using A2-xA′xMO4 (A = Pr, Sm; A′ = Sr; M = Mn, Ni; x = 0.3, 0.6) as an SOFC cathode was investigated by Nie et al. [270]. Besides being used as SOFC cathodes, rare earth-doped perovskite manganite oxides (LnxA1-xMnO3) also show high potential as redox materials for solar thermochemical fuel production via thermochemical H2O/CO2 splitting [271]. Substituted lanthanum manganite perovskite was reported to be one of the most suitable candidates in the perovskite family, owing to its unique redox properties [272].
To further improve the redox properties of these materials, Nair and Abanades [273] performed a systematic study of the effects of the synthesis method on the redox efficiency and performance stability for CO2 splitting. They synthesized single-phase LaxSr1-xMnO3 (LSMO) by various routes, such as solid-state reaction, the Pechini process, glycine combustion, and glucose-assisted methods, and found that the materials synthesized by the Pechini method exhibited the highest reactivity in the series, with a stable CO production of ~260 μmol g⁻¹ achieved at x = 0.5. They also found that substitutions of Y/Ca/Ba at the A-site and Al/Fe at the B-site in (La,Sr)MnO3 did not enhance the redox cycling capability compared with LSMO. Sr was observed to be the best A-site substituent, and the presence of the Mn cation alone at the B-site was the most suitable option for promoting CO2-splitting activity. Furthermore, the addition of promotional agents and sintering inhibitors such as MgO and CeO2, without altering the La0.5Sr0.5MnO3 composition, could improve the CO2-splitting activity. For an overview of recent progress on solar thermochemical water splitting for hydrogen generation, we refer to other review articles [274-276].

1D Rare Earth-Doped Perovskite Manganite Oxide Nanostructures
Catalysts
In the catalytic combustion of methane, perovskite manganite nanoparticles generally lose their activity owing to severe sintering under such high-throughput, high-temperature reaction conditions. Consequently, the design of highly reactive and stable catalysts has been an interesting research direction in heterogeneous catalysis. Recent results indicate that the catalytic properties of such catalysts can be improved by controlling their morphology and structure. For example, SrCO3 nanowires showed higher activity for ethanol oxidation than the corresponding nanoparticles [277], and CeO2 nanorods were more reactive for CO oxidation than the corresponding nanoparticles [278]. However, the catalytic properties of 1D rare earth-doped perovskite manganites have scarcely been reported. Teng [279] reported the hydrothermal synthesis of La0.5Sr0.5MnO3 nanowires and investigated their stability and activity for methane combustion. The results showed that, after calcination for a long time, the nanowires exhibited higher stability than La0.5Sr0.5MnO3 nanoparticles, and the nanowire catalyst maintained a higher catalytic activity for methane combustion. The photocatalytic activity of La1-xCaxMnO3 (x ≈ 0.3) nanowires synthesized by a hydrothermal method was also investigated by Arabi et al. [230]; the results revealed that La0.68Ca0.32MnO3 nanowires exhibit sufficient photocatalytic activity for the degradation of methylene blue solution under visible-light irradiation.

Solid Oxide Fuel Cells
Up to now, various approaches have been suggested for fabricating LSM/YSZ composite cathodes for SOFCs. Several studies have shown that the electrode microstructure (i.e., particle size, pore size, and porosity) strongly influences the area-specific resistance (ASR) [280-282]. Da and Baus synthesized La0.65Sr0.3MnO3 (LSM) nanorods through a simple hydrothermal reaction; notably, the ASR values in this work are substantially lower than most of those previously reported in the literature [268,283-288].
They attributed the promising performance of the nanostructured LSM cathodes to the optimized microstructure, i.e., high surface area, small grain size, and good inter-granular connectivity, which makes them potential candidates for intermediate-temperature SOFC applications. In addition, nanotube-structured composite cathodes have been investigated [283]. La0.8Sr0.2MnO3-δ/Zr0.92Y0.08O2 (LSM/YSZ) composite nanotubes were co-synthesized by a pore-wetting technique as a cathode material for SOFCs. The as-prepared nanostructured composite cathode shows low ASR values of 0.17, 0.25, 0.39, and 0.52 Ω cm² at 850, 800, 750, and 700°C, respectively, mainly owing to the small grain size, homogeneous particle distribution, and fine pore structure of the material.

Magnetic Memory Devices
The elaboration of submicron MR read heads and high-sensitivity elements of non-volatile memories (MRAM) relies on patterning processes commonly used in the semiconductor industry. The planar processes for thin-film patterning are based on two main steps: (i) pattern definition in a photon- or electron-sensitive polymer (resist) by lithography, and (ii) transfer of these nanostructures into the manganite film using dry etching [163]. Conventional UV lithography is traditionally used to produce patterns larger than one micron in size; patterning at dimensions below 50 nm, however, requires high-resolution techniques such as scanning electron beam lithography (SEBL), X-ray lithography (XRL), or NI [289,290]. At present, the ultimate resolution limits of SEBL and XRL are well known [291,292], and the electron-sensitive PMMA (polymethylmethacrylate) resist allows replication below 20 nm. After nanolithography, the pattern transfer can be achieved by direct etching with the resist as a mask, or by a metallic lift-off process followed by etching. The lift-off process is the preferred method for manganite etching, since these CMR oxides are very hard materials compared with metals. A magnetic domain wall separates two oppositely polarized magnetic regions, and a number of data storage schemes based on domain walls in magnetic nanowires have been proposed [293,294]. In racetrack memory, each magnetic domain wall represents a data bit [293]. During the write operation, the domain wall is moved by an external magnetic field or by spin-transfer torque [295-297]. To read a bit, GMR- or TMR-type devices are used to detect the stray field of the domain wall. To utilize such a scheme, it is critical to create domain walls controllably; magnetic nanowires with artificial pinning centers, such as notches [293], bent conduits [298], and narrow rings [299], can serve this purpose. In perovskite manganite nanostructures, various types of domain patterns, such as stripes [300], bubbles [301], and checkerboards [302], have been reported. For example, Wu et al. [303] reported perpendicular stripe magnetic domains in La0.7Sr0.3MnO3 nanodots. Takamura et al. [304] reported flower-shaped, flux-closure, and vortex domain structures in patterned manganites created by Ar+ ion milling. Mathews et al. [305] reported the successful fabrication of La0.67Sr0.33MnO3 nanowires on NdGaO3 substrates using interference lithography, demonstrating that not only the shape anisotropy but also the substrate-induced anisotropy plays an important role in determining the magnetic easy axis in these manganite nanostructures.
Despite the challenges of controlling magnetic domain walls in perovskite manganite oxide nanowires, several groups have reported current-induced domain wall motion in perovskite manganite oxides such as La0.7Sr0.3MnO3 and La0.67Ba0.33MnO3-δ. Using FIB milling, Ruotolo et al. [306] and Céspedes et al. [139] patterned La0.7Sr0.3MnO3 into nanowires containing notches as domain wall pinning centers; MR measurements confirmed current-induced domain wall depinning with a critical current density of 10¹¹ A/m². Liu et al. [307] reported a current-dependent low-field MR effect in La0.67Sr0.33MnO3 nanowires with constrictions, which they ascribed to the spin-polarized bias current. In a similar constricted La0.67Ba0.33MnO3-δ nanowire, Pallecchi et al. [308] observed magnetic-field- and DC-bias-current-dependent asymmetric resistance hysteresis, which was also connected to the spin-transfer-torque effect. Surprisingly, the threshold current was found to be in the range of 10⁷-10⁸ A/m², much smaller than the typical current (10¹¹ A/m²) needed to move domain walls in metals [309]. A number of possibilities, such as stronger spin torque due to half-metallicity, Joule heating assistance, and spin wave excitation, may contribute to this drastic reduction in the threshold current.

Spintronic Devices
For perovskite manganites, the large MR and the great tunability of CMR oxides are promising for magnetic recording, spin-valve devices, and magnetic tunneling junctions [310-314]. However, several obstacles face perovskite manganites in nanodevice applications. First, the spin polarization of manganites decays rapidly with temperature. Second, the defect chemistry and the stoichiometry-property correlation in perovskite manganites are quite complex [315,316]. Third, the physical properties of interfaces in manganite-based devices remain elusive [317,318]. Finally, there is an urgent need to develop suitable device processing techniques. Spintronic devices exhibit prominent advantages, such as nonvolatility, increased processing speed, higher integration densities, and reduced power consumption. The significance and value of spintronics lie in the active control of carrier spin, enabling the development of new devices and technologies, for example, spin transistors, spin-FETs, spin-LEDs, spin-RTDs, spin filters, spin modulators, and reprogrammable gate circuits. Nanowires have been successfully incorporated into nanoelectronics [319], and they are naturally envisaged as ideal building blocks for nanoscale spintronics. For perovskite manganite oxides, nanowires and related heterostructures can potentially enhance the Curie temperature [82] and the low-field MR [75,320]. Another promising research topic is anisotropic magnetoresistance (AMR), which results from the spin-orbit interaction and has important applications in magnetic field detection and data storage [321]. Significant magnetic anisotropy has been observed in perovskite manganite oxide nanowires [322]. In addition, the morphology of a nanowire can be modified by annealing or by growth on engineered substrates [323], which can significantly affect its properties. It is expected that more efforts will be devoted to exploring the anisotropic transport properties of perovskite manganite oxide nanowires and to gaining a deeper understanding of low-dimensional spin-dependent transport.
Spin valves based on perovskite manganite oxide nanowires have also been reported. Gaucher et al. [324] successfully fabricated La2/3Sr1/3MnO3 nanowires with widths down to 65 nm by combining electron beam lithography and ion beam etching, and showed that the electronic transport properties of these perovskite manganite oxide nanowires are comparable to those of unpatterned films.

2D Rare Earth-Doped Perovskite Manganite Oxide Nanostructures
Magnetic Memory Devices
Typically, micrometer-sized La0.7Sr0.3MnO3 dots with smooth edges and surfaces are etched into the corresponding films by such a lift-off process. Ruzmetov et al. [135] fabricated regular arrays of epitaxial perovskite La2/3Sr1/3MnO3 magnetic nanodots by PLD combined with electron beam lithography and argon-ion exposure. The dots are less than 100 nm in diameter and about 37 nm in height. These perovskite magnetic nanodots maintain their crystallinity, epitaxial structure, and ferromagnetic properties after the fabrication process, making them promising for massive magnetic memory devices. Liu et al. [325] demonstrated a new programmable metallization cell based on amorphous La0.79Sr0.21MnO3 thin films for nonvolatile memory applications. The schematic diagram of the metallization cell is shown in Fig. 41; the amorphous La0.79Sr0.21MnO3 thin films are deposited on Pt/Ti/SiO2/Si substrates by rf magnetron sputtering.
Fig. 41 Schematic of the Ag/La0.79Sr0.21MnO3 (a-LSMO)/Pt memory device. Reproduced with permission of [325]
The Ag/amorphous La0.79Sr0.21MnO3/Pt cell exhibited reversible bipolar resistive switching with an R_OFF/R_ON ratio above 10², stable write/erase endurance (> 10² cycles) with a resistance ratio over 10², and stable retention for over 10⁴ s. Such a sandwiched device may be a promising candidate for future nonvolatile memory applications. Hoffman et al. [326] also tested non-volatile memories using an electric-field-induced M-I transition in PbZr0.2Ti0.8O3/La1-xSrxMnO3 (PZT/LSMO), PZT/La1-xCaxMnO3 (PZT/LCMO), and PZT/La1-xSrxCoO3 (PZT/LSCO) devices. To study the switching speed of these Mott-transition field-effect devices, they fabricated a series of devices whose room-temperature RC time constants varied from 80 ns to 20 μs, and found that the circuit RC time constant limited the switching speed down to 80 ns, offering the opportunity for faster operation through device scaling. Room-temperature retention characteristics show a slow relaxation, with more than 75% of the initial polarization maintained after 21 days. These Mott-transition field-effect devices have promising potential for future non-volatile memory applications.

Spintronic Devices
Spin valves may be the most influential spintronic devices and have already found applications in the magnetic data storage industry. Their basic working principle is the GMR effect [327,328], in which the resistance of an FM/NM (non-magnetic)/FM multilayer depends on the relative alignment of the two FM layers. Most GMR research focuses on transition metals, but spin valves have also been realized in oxides, in particular FM manganites [329,330]. A related concept is the magnetic tunnel junction (MTJ) [331,332], which also has vast applications in nonvolatile magnetic memory devices. The basic difference between a spin valve and an MTJ lies in the middle spacer layer, which must be insulating for an MTJ, whereas for a spin valve this layer is conducting.
A number of efforts have been devoted to creating oxide-based MTJ devices, especially because of the 100% spin polarization of several FM oxides, such as La1-xSrxMnO3 with x ~ 0.33, CrO2, and Fe3O4. Lu et al. [333] and Sun et al. [334] first fabricated all-oxide MTJ devices with La0.67Sr0.33MnO3 and SrTiO3 as the FM and insulating layers, respectively. Subsequently, a record tunneling magnetoresistance (TMR) ratio of 1850% was reported in 2003 by Bowen et al. [335]. Despite these promising results, a major issue for oxide-based MTJ devices is that the working temperature is often below room temperature, generally ascribed to degraded interfaces [336,337].

Magnetic Sensors
The application of perovskite manganite CMR thin films to room-temperature magnetic sensors has been considered. Compared with magnetoresistive sensors using permalloy films, the field coefficient of resistance (dR/dH)/R is much smaller, typically about (10⁻²-10⁻¹)% per mT. However, such sensors can operate over a wide field range, and their characteristics should be maintained at submicron lateral sizes, since the CMR mechanism does not involve large-scale entities such as magnetic domains and walls. By means of magnetic flux concentration with soft ferrite poles, the field coefficient of resistance can be increased up to 4% per mT [338]. Other applications of the CMR of La0.67Sr0.33MnO3 thin films, to position sensors and contact-less potentiometers, are presently being investigated [339]; the basic idea is to exploit the large resistance variation induced by the stray field of a permanent magnet made of Sm-Co or Nd-Fe-B alloys. In recent years, there has been growing demand for sensitive yet inexpensive infrared detectors for a variety of civilian, industrial, and defense applications, such as thermal imaging, security systems and surveillance, night vision, biomedical imaging, fire detection, and environmental monitoring. A material for bolometric applications should possess a high temperature coefficient of resistivity (TCR), which enables the small temperature variations caused by absorbed infrared radiation (IR) to generate a significant voltage drop across the bolometer. The high TCR of the recently discovered CMR manganese oxides in the vicinity of the metal-to-semiconductor phase transition makes them suitable for thermometer and bolometer applications. For example, Lisauskas et al. [340] reported on epitaxial, submicron-thick perovskite manganite La0.7(Pb0.63...)MnO3 films for such bolometric applications.

Solid Oxide Fuel Cells
Lussier et al. [342] identified a mechanism whereby the strain at an interface is accommodated by modifying the chemical structure of the SOFC material, improving the lattice mismatch and distributing the strain energy over a larger volume (thickness); they concentrated on two particular manganite compounds, La2/3Ca1/3MnO3 and La1/2Sr1/2MnO3 thin films.

3D Rare Earth-Doped Perovskite Manganite Oxide Nanostructures
Recently, several 3D rare earth-doped perovskite manganite oxide nanostructures, such as 3D (La0.275Pr0.35Ca0.375)MnO3 nanobox array structures [145] and 3D strained LSMO-CeO2 VAN nanostructures [198], fabricated by the PLD technique, have been reported.
It was found that the 3D (La0.275Pr0.35Ca0.375)MnO3 nanobox array structures exhibited an insulator-metal transition at a higher temperature than that of the corresponding thin film, which provides a new way to tune the physical properties of CMR oxide 3D nanostructures. This enables 3D (La0.275Pr0.35Ca0.375)MnO3 nanobox array structures to find promising applications in oxide nanoelectronics by making full use of the huge electronic/spintronic phase transition. The 3D framework of LSMO-CeO2 VAN nanostructures combines not only the lateral strains from the layered structures but also the vertical strain from the VAN, and thus maximizes the 3D strain states in the system, controlling the electron transport paths. This new 3D framed design provides a novel approach to maximizing film strain, enhancing strain-driven functionalities, and manipulating the electrical transport properties effectively. At present, the applications of 3D rare earth-doped perovskite manganite oxide nanostructures in the fields of oxide nanoelectronics, spintronics, and solar energy conversion are still in their infancy; thus, many problems remain unsolved and technical challenges lie ahead. In this direction, there is a long way to go before the commercialization of rare earth-doped perovskite manganite oxide nanostructures. Conclusions and Perspectives In this work, we have discussed the recent advances in the fabrication, structural characterization, physical properties, and functional applications of rare earth-doped perovskite manganite oxide nanostructures. We hope to have captured the excitement surrounding the development of rare earth-doped perovskite manganite oxide nanostructures for microelectronic, magnetic, and spintronic nanodevices, and to provide some useful guidelines for future research. In spite of the great progress made in the past two decades, considerable effort is still required to realize the practical applications of rare earth-doped perovskite manganite oxide nanostructures in the next generation of oxide nanoelectronics. Many fascinating physical properties of rare earth-doped perovskite manganite oxide nanostructures originate from the interactions among the spin, charge, orbital, and lattice degrees of freedom, yet there is still a long way to go toward a full understanding of the mechanisms of these interactions. It is expected that in the coming years further progress will be achieved in experimental and theoretical investigations of rare earth-doped perovskite manganite oxide nanostructures. We believe that this review of the recent advances in rare earth-doped perovskite manganite oxide nanostructures will motivate future research and applications not only in the field of oxide nanoelectronics, but also in the energy and biomedical fields.
Emergent agent causation
In this paper I argue that many scholars involved in the contemporary free will debates have underappreciated the philosophical appeal of agent causation because the resources of contemporary emergentism have not been adequately introduced into the discussion. Whereas I agree that agent causation's main problem has to do with its intelligibility, particularly with respect to the issue of how substances can be causally relevant, I argue that the notion of substance causation can be clearly articulated from an emergentist framework. According to my proposal, a free agent is a causally powerful substance that emerges in an anomic way from her constitutive mental events, downwardly constraining, selecting and, in this way, having control over them. As we shall see, this particular concept of agent causation not only makes sense of the deep insight behind agent libertarianism, but also provides us with the resources to solve some of the main objections that have been raised against it. It is true that I cannot develop here a complete defense of the evidential credentials of emergentism. Still, even if the considerations that follow do not serve to convince detractors of agent-causal libertarian accounts of free will, they do suggest that libertarian agent causation is more promising than is typically acknowledged. Introduction Incompatibilists have argued that the freedom required for moral responsibility is incompatible with the causal determination of action by factors beyond the agent's control. As a consequence, they affirm that indeterminism is a necessary condition for the control that the agent requires for being an ultimate source of her actions, for its being up to her whether she does one thing or another on some occasions (Ginet, 1990; Kane, 1996; Franklin, 2018). Some authors, however, have answered that although the kind of free will at issue is incompatible with determinism, it also seems to be incompatible with indeterminism. They contend that a merely indeterministic causal production of an action by the agent's reasons can constitute a purely random effect that, as a matter of good or bad luck, is produced without the required agential control (van Inwagen, 1983; Mele, 1999; Levy, 2011; Pereboom, 2014). To solve this problem, the agent-causal libertarian claims that this scenario lacks the agent's direct causal control over her action. So, she proposes that we introduce the agent as a cause, not merely as a collection of events, but rather as a fundamental substance (Chisholm, 1964; Clarke, 2003; O'Connor, 2009, 2011; Steward, 2012; Pereboom, 2014, ch. 4). The idea of the agent as a cause that resolves the indeterminacy promises to vindicate the possibility of free will and the requisite control in an indeterministic world. Although we can agree that introducing this possibility can be the main appeal in favor of agent causation, it is possible to appeal (as some authors do) to agent causation for further reasons. For instance, one can affirm that our folk understanding of ourselves as agents not only presupposes a libertarian perspective on free will and moral responsibility (Jackson, 1998; Vargas, 2013, ch. 1; Pereboom, 2014), but that the idea of agent causation also does justice to the notion of ourselves as particulars, which happen to be more than collections of mental states and reasons (Velleman, 1992; Hornsby, 2004; Steward, 2012).
We can also say that the agent-causal theory is appealing because it captures the way we experience our own activity: it does not seem to us that we are caused to act by the reasons for our actions, but that we produce our actions in the light of those reasons, so we could have, in an unconditional sense, acted differently (Taylor, 1966; O'Connor, 1995). In addition, one might find support for the idea in the well-accepted ontological principle according to which all (concrete) existences must have the power to cause and intervene (see, for instance, Alexander, 1920; Kim, 1992; Armstrong, 1997; Fodor, 2003; Shoemaker, 2007). In view of it, one might argue that the agent and her mental events should be understood as distinct and, therefore, that a real agent in our physical world, if there is such a thing, must have some causal power distinct from that of her mental events and reasons. In spite of its intuitive and philosophical appeal, agent causation has been considered by some philosophers to be empirically implausible and even internally incoherent. P.F. Strawson, for instance, denounces its libertarian assumptions as "obscure and panicky metaphysics" (1962, p. 25). In a similar line, and on the assumption that only events can be causally relevant, John Searle complains about the very idea of agent causation; according to him, to speak about the agent as a cause "makes no sense" because it "is worse than mistaken philosophy"; it is just "bad English" (2001, p. 82). As will become evident, I agree with these philosophers that agent causation's first and biggest problem has to do with its intelligibility; unlike them, however, I think the difficulty can be overcome. The main problem of agent causation is the issue of how substances, as opposed to the events in which they participate, can be of causal relevance. Many philosophical and scientific traditions believe that wherever some object is cited as a cause, there is some feature or property of the object, or some event involving the object, that is doing the causal work. So it would seem that there can be no literal sense to the idea of the object having a causal relevance different from or beyond that of its constitutive events. The paper aims to vindicate the notion of a free agent as a substance with causal powers that go beyond those of her constitutive events. The text is divided into two parts. In the first (sections § 2-5), I articulate and clarify the notion of a free agent as an anomically emergent substance: a substance who synchronically emerges from her mental events, and diachronically exercises her causal power by constraining the nomological possibilities of her subsequent mental events in a way that is not previously fixed by any law of nature. This requires saying what an emergent substance is, how an emergent substance can have causal powers, and why it can be considered free. The second part of the paper (sections § 6-8) is devoted to showing how the concept of agent causation, as developed here, can be deployed to solve some of the main objections raised against agent-causal libertarian accounts of free will. These objections concern the causal and explanatory interaction between the agent and her reasons, the problem of luck under the assumption of indeterminism, and the empirical adequacy of the theory with respect to contemporary scientific knowledge. I finally draw some salient conclusions. What is an emergent substance? I contend that free agents are anomically emergent substances.
The first thing that we have to do is to understand what an emergent substance is. Emergentism refers to the idea that some entities of our world (properties, events, substances, and so on) are fundamental, in the sense of being non-reducible to, not completely grounded in, and still dependent on other things (see, for instance, Morgan, 1923; Broad, 1925; Barnes, 2012; Gillett, 2016; Morales, 2018; O'Connor, 2021; J. Wilson, 2021). One motivation for the view is the appealing thought that some substances or systems have properties that are not merely a function of the properties of their parts, so they cannot be reconstructed or explained only as a mathematical product of the latter. It's because such systems are not a mere function of their parts that they are fundamental; and it's because they are constituted by their parts that they are dependent on them. 1 On my account, a substance is a persistent structure or organization of global states and events at some region of space-time. 2 Substances, according to this view, are of two kinds: some of them are reducible to (nothing but) their parts, others are not. Which substances are emergent is essentially an empirical question. In brief, it depends on whether their causal relevance and dynamics are or are not a function of their proper temporal parts (their events), as exhibited when those parts work in isolation or compose different substances (see Broad, 1925, p. 61; Kim, 1999, pp. 13-14, 2009a). And it does not matter whether we are concerned with linear or non-linear mathematical functions (see Silberstein & McGeever, 1999, and J. Wilson, 2013): while a reducible substance is a (linear or non-linear mathematical) function of its events, an emergent substance isn't. Let me explain this idea. A pure material object, such as a table, is a substance reducible to its states and events, insofar as its causal power and dynamics are nothing more than the result 3 of its global properties: its global mass, volume, density, and so on. If we fix the values of these global properties, the dynamics and powers of the table will be fixed as well. In turn, as we know, these global properties are reducible to the properties of the parts of the table, its molecules, just because they are a pure function of these lower level properties. The global mass of the table is mathematically determined by the masses of the table's molecules. In general, when dealing with pure material objects, reduction both of substances to their global properties and of global properties to their parts' properties tends to be the rule. 4 Seen this way, the question of whether a substance is emergent or not has, in fact, two parts, one corresponding to the question about the emergent or reducible character of the substance (the system) from its global properties (states and events), and the other concerning the emergent or reducible character of its global properties from its parts' properties.
This means that, in every case, we have four possible scenarios: (a) a substance could be doubly reductive, being a reductive system (a function of its global events) and having only reductive global events (functions of the properties of its components); (b) it could be a reductive substance and still have emergent global properties and events (not reducible to the properties of its components); or, vice versa, (c) it could be an emergent substance (not a pure function of, and so not reducible to, its persistent and changing properties and events) and still be constituted only by reductive global properties and events (functions of the properties of its components); and, finally, (d) it could be doubly emergent, being an emergent system and having (at least some) emergent global properties and events. 5
2 There are several conceptions of events and of their connections with and differences from states, processes, and the like (see, for instance, Kim, 1976; Davidson, 1980; J. Bennett, 1988; Shoemaker, 2007; Casati & Varzi, 2020). For the present purposes these differences will not be relevant. In general terms, we will take events, states, processes, and the like as property instantiations.
3 As it is called, a mere resultant (Mill, 1843, p. 428), aggregative (Broad, 1925, p. 77), or compositional (McLaughlin, 1992).
4 Although there are exceptions; see section § 8.
5 All these scenarios are empirical possibilities that cannot be excluded only by definitional or merely ontological assumptions. As Kim reminds us, "There are no free lunches in philosophy any more than in real life" (1998, p. 30). In arguing that free agents are anomically emergent substances, I claim that they can be emergent either in the (c) or the (d) senses, having in mind that senses (a) and (b) are still open to different varieties of reductionist and compatibilist accounts.
Substance causation as downward causation The very notion of substance causation is the idea that substances cause things to happen. According to what I mentioned before, one of the reasons why some philosophers have not found the notion of agent causation intelligible is that they have had problems seeing how substances, as opposed to events, can participate as causal relata. My argument is that they have been skeptical about this idea because they have not examined the notion of a substance as a higher level persistent structure of global events that can have a downward causal and dynamical relevance on its own subsequent constitutive events. Appealing to the general idea of causes as difference makers (Sartorio, 2005; Beebee, Hitchcock & Price, 2017), the notion of downward causation can be characterized in terms of a higher level entity (property, event, substance) making a difference at a lower level, that is, causing (or increasing the objective probability of; see Hitchcock, 2016) the instantiation or appearance of a lower level entity. Now, we can understand downward causation as constituted by two necessary and sufficient conditions (see Morales, 2018, pp. 158-160): (i) a necessary causal under-determinacy given at the lower levels at the moment of the emergence of the higher level entity; and (ii) the emergence of the higher level entity (with higher level causal powers) which diachronically narrows, constrains (Kelso, 1995; Schröder, 1998; Juarrero, 2009), and selects (Campbell, 1974; Popper, 1978; Van Gulick, 1993; Steward, 2012) the subsequent lower level courses of events, making a difference on them. 6
6 Jaegwon Kim has shown the relevance of the concept of downward causation in discussions about mental causation, physicalism, and the unity of the sciences. Through what has been called the supervenience argument, he has argued that (the occurrence or instantiation of) a higher level entity can cause another higher entity only if the first can cause the lower level basis of the second. 7 It follows that whenever we find emergent, non-reducible, higher level causation we also find the occurrence of downward causation (Kim, 2009a, p. 40; see also McLaughlin, 1992, p. 51). 6 It is important to note the subtle distinction between the notions of under-determinacy (or under-determination) and indeterminacy. Although indeterminacy is commonly and simply articulated as implying a (less that 1) fixed objective probability in the interaction of different entities, the issue is that in order to genuinely emerge a higher causal power, the objective probability of the lower level causal chains (of 1 or less than 1, that is, deterministic or non-deterministic) cannot be completely fixed at the same lower level (and therefore, that probability must be lower-level under-determined: limited but not fixed), leaving the possibility for a higher causal addition. Otherwise, even if causation were indeterministic, there would be a causal closure at the lower level that would make it impossible for higher causal powers to emerge (see Morales 2018, ch. 5). 7 On the assumption that we are physicalists or at least believe that the higher level entities and causes can only be instantiated in (and so dependent on) lower level (ultimately micro-physical) mechanisms, as emergentism does. It is important to note that whether fundamental macro, higher level, or emergent causation, along together with its consequent downward causation obtain in our world is an empirical question, one to which I will return below. If this kind of phenomenon constitutes a fact, there must be multiple levels of organization with their own causal influences that end up complementing one another. The higher level laws and causal influences would not contradict, change, or violate, but complement the lower ones (Anderson, 1972, p. 222;Campbell, 1974, p. 180;Van Gulick, 1993, p. 252;Gell-Mann, 1994, p. 112;Dretske, 2004, p. 167); the reason is that the latter would under-determine, that is, leave open different possibilities for the lower level chains of events that would be further constrained by the higher causal factors. I have articulated the notion of a substance as the (reductive or emergent) persistent structure and organization of its own global events. As such, the concept of an emergent substance is that of a substance or system with irreducible causal powers and dynamics that synchronically emerges from its constitutive events, and diachronically makes a direct difference through the downward causal influence it exerts over its own subsequent states and events (through a kind of internal causation or selfdetermination). 8 In this sense, as a kind of downward causation, the causal influence of the emergent substance over its events is the causal constraint and selection that it imposes over the causally under-determined possibilities of its subsequent global events. The substance fixes, with this, the development of its own determinations, giving rise to its self-determination. In the case of agent causation, as I shall now argue, it is the kind of intentional self-determination that we call free will. 
Agent causes as free causes According to libertarian accounts, at least on certain occasions people can be genuinely free agents. This means that sometimes they can be sources of their actions, as opposed to mere witnesses or bystanders of them. As it is frequently put, with respect to at least some of their actions it is up to them whether they do them or not (Ginet, 2007; Steward, 2012; Clarke, 2020). Some philosophers have tried to capture this idea by referring to free agents as "uncaused causes," the type of substances or things that cannot be an effect of something else (Clarke, 2003; Ginet, 2007; Clarke, Capes & Swenson, 2021); while others emphasize the uncaused nature of the free action (O'Connor, 1995, 2011; O'Connor & Ross, 2004; Botham, 2008). 9 But putting things in this way doesn't help to clarify the subtle idea at the heart of libertarianism. Instead, I will argue that the important point is to realize that there can be causal relations that are nomologically grounded and others that are not (see Tooley, 1990, 1997; Pereboom, 2014), 10 and to reconstruct the notion of an agent whose actions are up to her in terms of substances that are anomically emergent. To see why we need to introduce the idea of anomic emergence, consider the model of a layered reality as discussed above. As we ascend to higher levels of causal constraints, one could argue, the world becomes more causally determined, with an emergent but completely causally and nomologically determined world at the limit. 11 In a world of this sort there would be no room for libertarian agency. 12 Even if there were emergent substances in it (i.e. persistent non-reducible organizations of different and changing events), their causal powers and dynamics could end up being non-reductively but nomologically determined by the conjunction of lower level and emergent laws. 13 And a determined world, whether it is reducible or emergent, cannot be a libertarian world. And not only emergence but also indeterminism is by itself insufficient for libertarian agency, a central point that is connected with the luck objection that will be analyzed in section § 7. Just as an emergent agent could be completely predetermined by the conjunction of intra-ordinal and emergent laws, she could be indeterministic and still her probabilistic dynamics could be completely fixed and governed by pre-established natural (probabilistic intra-ordinal and emergent) laws. If this were the case, the agent as a substance (as a persistent non-reducible organization of her reasons and mental states) would be an emergent probabilistic result of the preceding causal factors in conjunction with the laws of nature, and she wouldn't have direct causal control over her actions, nor be a genuine (libertarian) source of them, just because the (indeterministic) objective probabilities of her actions would be nomologically fixed by preceding causal factors even before her birth, that is, by factors whose efficacy she does not control. As in the compatibilist scenario, she would not be able to contribute anything to her actions beyond what is already set before she acts, becoming a pure development of the probabilities stipulated beforehand. In fact, this situation would have the same practical results as those of the incompatibilist reductionist agent causalists (Kane, 1996, 2007; Balaguer, 2009), according to which the indeterministic causation of the agent is reduced to the indeterministic causation of her constitutive reasons and mental states, namely: her actions would be nothing more than a (non-deterministic) causal outcome of their causal antecedents in accordance with the laws of nature.
8 A kind of causation that has been denominated immanent as opposed to transeunt causation (Chisholm, 1964).
9 Although my perspective coincides in general aspects with that of Timothy O'Connor, who has articulated the most well-developed application of emergentism to agent causation and free will so far, I think it may be interesting and clarifying to note how our proposals substantially differ. In this regard, for instance, O'Connor's theory - but not mine -: (1) tends to be dualist rather than physicalist, differing from the main purposes and the more standard articulations of emergentism since its appearance (as we examined in sections § 2 and § 3). (2) His perspective is hesitant as to whether or not emergent entities (nomologically) supervene on their physical bases (see this section § 4 below). (3) As far as I can see, O'Connor doesn't develop an explicit articulation of the meaning of the emergence of a substance (and not just of its causal powers) from its constitutive events (our sections § 1 and § 2). (4) Although accepting that reasons structure the causal capacity of the agent, making her "objectively likely to act" (O'Connor & Ross, 2004, p. 251), O'Connor systematically argues for a non-causal account of the role of reasons in the explanation of actions; (5) an assertion that is connected with the idea that what agent causalists usually take as the agent's first basic and primary free action (the agent's causing an event) is an uncaused occurrence because, in order to be free, there can be neither a sufficient causal condition for it nor for its objective probability (I partially agree with this; see sections § 4, § 6, and § 7, particularly footnotes 19 and 22). Now, (6) even affirming that the agent's control must be not merely probabilistic but "at will", O'Connor claims that agent causation does not need to be anomic (see this section § 4 below). As a consequence of this, and as I will argue from now on as one of his agent-causal theory's worst results, finally, (7) O'Connor accepts that the causal power of the agent should coincide with the probabilistic microphysical (or even special sciences') laws in order to be coherent with them (see sections § 6, § 7 and § 8). I have indicated in each case the sections in which I articulate my perspective in disagreement with O'Connor's. I thank an anonymous reviewer for suggesting this clarification.
10 Immanuel Kant (1781/1987) contends that this is at least a prima facie conceptual possibility. See also Hoefer, 2016.
11 Some of the so-called classic British Emergentists defended the idea that our world was emergent but completely determined. See, for instance, Mill, 1843, p. 247; Ryan, 1970, p. 104; McLaughlin, 1992, p. 73; and F. Wilson, 1998, p. 205.
12 That is, the kind of agency that implies the strongest sort of control in action required for the core sense of moral responsibility at issue in the free will debate, which is called basic desert (Strawson, 1994; Fischer, 2007; Pereboom, 2014).
13 C.D. Broad called them trans-ordinal laws, that is, a posteriori principles that synchronically link the lower level interactions with the emergent level(s), those laws that complement the corresponding intra-ordinal laws operating within each level (1925, pp. 78-9).
While in one case the actions of the non-emergent agent (who is reduced to her constitutive events, her reasons and mental states) remain completely governed by the event-like indeterministic laws that apply to the lower physical and intermediate psychological levels, the actions of the emergent agent also remain completely governed, but now by the event-like indeterministic laws together with emergent or trans-ordinal laws that apply to the indeterministic dynamics and causal powers of the agent as a substance. As many authors argue against the reductionist position, wherein no action could be truly free, the same must be said against the non-deterministic emergentist but completely nomologically governed agent; that is to say, in the words of O'Connor and Ross, "the ultimacy of the agent's control is compromised, [because her actions become nothing more than] a product (albeit an indeterministic one) of other factors whose efficacy [s]he does not control." (2004, p. 250). My proposal, then, is to reconstruct the core idea behind libertarian agency in terms of an anomic or non-nomologically governed substance: a causally relevant structure that anomically emerges from the under-determination 14 of her mental events as reasons, desires, and emotions, and who exercises her causal powers by downwardly constraining and determining her subsequent mental states as decisions (which, in turn, cause her bodily actions) in a way that is not previously fixed by any (intra-ordinal or emergent) law of nature. 15 This is what grants her the kind of control in virtue of which she can be considered free: the objective probabilities of her actions are nomologically (physically, neurobiologically, psychologically, and even socially) under-determined, and not necessitated by anything other than her. In this respect, her actions are ultimately up to her. To further clarify this concept, let me briefly list and differentiate the possibilities that the emergentist theory accepts as empirical options. First, we can have a reductionist and compatibilist conception of the agent wherein (i) the agent is reduced to (is nothing more than) her mental states, (ii) such mental states (as reasons) deterministically cause her actions, so (iii) her actions cannot be free libertarian actions (see, for instance, Nelkin, 2011, ch. 4; Markosian, 2012; Pereboom, 2015; Clarke, 2019). Secondly, we can have a non-reductive, emergentist but compatibilist conception of the agent wherein (i) the agent isn't reduced to her mental states (she is a persistent non-reducible organization of them), (ii) both such mental states and such an emergent agent are deterministic results of previous events acting in accordance with the lower-level and emergent laws and, in turn, nomologically and deterministically produce her actions, so (iii) her actions cannot be free libertarian actions. 16 The third option is a reductionist but incompatibilist conception of the agent (see Kane, 1996, 2007; Balaguer, 2009) wherein (i) the agent is reduced to her mental states, (ii) such mental states (as reasons) are both causal outcomes of previous events and non-deterministic causes of her actions in accordance with the laws of nature, so (iii) the indeterministic objective probabilities of her actions would be nomologically fixed by preceding causal factors whose efficacy she does not control and, therefore, (iv) her actions cannot be free libertarian actions. 17
In the fourth place, we can have a non-reductive, emergentist and incompatibilist conception of the agent wherein (i) the agent isn't reduced to her mental states, (ii) such mental states (as reasons), the emergent agent, her emergent causal powers, and the particular ways she exercises those causal powers are both causal outcomes of previous events acting in accordance with the lower level and emergent laws of nature, and non-deterministic but completely nomologically governed causes of her actions; so (iii) the indeterministic objective probabilities of her actions would be fixed by causal factors whose efficacy she does not control (by preceding events in accordance with the lower level and emergent laws) and, in consequence, (iv) her actions couldn't be free libertarian actions. And this is the reason why even the emergentist agent-causal proposals articulated so far have failed to clearly show how a free action is really up to the agent, and why they have again fallen prey to objections such as the luck and disappearing agent objections. In our fifth and final possibility, the emergentist response is that the emergent agent must be anomic, meaning that the particular ways in which she exercises her emergent causal powers are under-determined, limited, but not fixed by any intra-ordinal or emergent law of nature, so it is only up to her how she selects her psychological possibilities, how she acts, how she decides. From this perspective, (i) the agent isn't reduced to her mental states, (ii) she emerges from the causal and nomological under-determination of her mental states according to a general trans-ordinal law with the following form: whenever the same lower level components are related in the same way, an anomic agent should synchronically emerge with the causal power to diachronically select her lower level options in a way that is not determined by any law of nature. 18 To put it in other terms, the objective probability for the appearance of the anomic agent is fixed by the different laws of nature, but the objective probability of the anomic agent for selecting (downwardly causing) her subsequent mental states as decisions is not fixed by anything other than herself. 19
16 In order to understand the emergentist proposal it is important to recognize and highlight that these first two options articulate a coherent (compatibilist, so not libertarian, reductive or non-reductive) notion of the agent and of her causing her actions that, for instance, is compatible with the agent-causalist Richard Taylor when he says that "What is entailed by [his] concept of agency, according to which [wo]men are the initiators of their own acts, is that for anything to count as an act there must be an essential reference to an agent as the cause of that act, whether [s]he is, in the usual sense, caused to perform it or not." (Taylor, 1966, pp. 114-15). The question is whether all our actions are produced in such a manner, so that they couldn't be free in the way required for us to be morally responsible for them in the basic desert sense.
17 To further analyze this conclusion see section § 7 about the luck objection, and also the disappearing agent objection in, for instance, Pereboom, 2014, and Clarke, 2019.
So, unlike the four empirical options already seen, (iii) (the objective probability of) her actions will not be fixed by causal factors whose efficacy she does not control (such as preceding events and the laws of nature); and therefore, (iv) some of her actions can be performed "at will," that is, as free libertarian actions. I have said that the objective probability for the appearance of the anomic agent is fixed by the different laws of nature, but that the objective probability of the anomic agent for causing her decisions is nomologically under-determined and not fixed by anything other than herself. In a very simplified way, let us suppose that the relevant circumstances C at t_1 nomologically and (for simplicity) deterministically cause the agent Alice to have a moral reason R_M, an egoistic reason R_E, and a sentimental reason R_S at t_2, and that if there were nothing more than event-causal powers at issue, these reasons would have the objective non-deterministic probability of 0.5 to cause her moral decision D_M, 0.3 for causing her egoistic decision D_E, and 0.2 for her sentimental decision D_S at t_3. If this were the case, then Pr(Alice's decisions at t_3 | Alice at t_2) = Pr(Alice's decisions at t_3 | circumstances C at t_1), because the existence (supervenience, event-like complete grounding) of Alice at t_2 wouldn't introduce any change in the causal probabilities that C has fixed; and we can even suppose that C was settled before her birth. Let us now say that the relevant circumstances C at t_1 nomologically and deterministically cause Alice to have R_M, R_E, and R_S at t_2, from which Alice synchronically emerges. Here we find two different options corresponding to our fourth and fifth empirical possibilities: the emergence of Alice and her causal powers can be nomologically fixed (by trans-ordinal or emergent laws) or they can be anomically emergent. Let us take the former option and let us suppose that the trans-ordinal laws fix the emergence of Alice with the causal power to downwardly select her decisions and so to cause, at t_3, D_M with the objective probability of 0.7, D_E with the probability of 0.2, and D_S with 0.1. But if this were the case, we would have the same result as in the reductionist scenario: Pr(Alice's decisions at t_3 | Alice at t_2) = Pr(Alice's decisions at t_3 | circumstances C at t_1). Given that the laws of nature are established from the beginning and that they are something that Alice cannot change or manipulate, the emergence of Alice at t_2 wouldn't introduce any change in the causal probabilities that C fixes. According to this scenario, Alice's actions would be fixed (to the extent that they are fixed, that is, non-deterministically) by causal factors whose efficacy she does not control (by preceding events in accordance with the laws of nature), depriving her of the ability to somehow contribute to the determination of her actions. As a consequence, as we have already pointed out, this articulation is subject to objections such as the luck and disappearing agent objections, and (given the possibility of establishing or manipulating C) the manipulation argument (Pereboom, 2014, ch. 4). But the scenario changes with the introduction of the anomic agent. So let us say that the relevant circumstances C at t_1 nomologically and deterministically cause Alice to have R_M, R_E, and R_S at t_2, from which Alice synchronically and anomically emerges. Given that her anomic causal power implies the nomological under-determination of the probabilities of her decisions, these are established neither by C nor by any other event or circumstance in accordance with the laws of nature, so she can (in virtue of this causal power) fix them "at will," by herself, independently of any other condition. According to this scenario, although we can say that Pr(Alice's decisions at t_3 | circumstances C at t_1) could be projected as if there were nothing more than event-causal powers at issue (because, as O'Connor (2000, p. 115) says, "these choices are at times even brought about event-causally, while we simply monitor the result and retain the capacity to agent-causally redirect things as need be"), such a probability is objectively under-determined, that is, nomologically limited but not established at t_1. The specific probability of Alice's decisions at t_3 will be anomically established only at t_2 by the agent herself insofar as she is the anomically emergent organization (of the causal contribution) of her reasons. That is to say, given that the agent synchronically emerges as the complex and irreducible organization of her mental states and reasons, she emerges as the irreducible organization of the causal power and contribution of the latter. In this way, her causal power depends on, but is neither directly nor reductively determined by, the causal power of her reasons. So the agent's power synchronically emerges as a power to cause her subsequent decisions with certain probabilities (neither directly nor reductively determined by the probabilities that her reasons have to cause them). Insofar as this agent causal power is anomic, it is a power to cause her subsequent decisions with certain probabilities which are only up to her. This means that the probabilities of the agent's power to cause her subsequent decisions are not necessarily deterministic (or indeterministic), so such a power can emerge with a distribution of different probabilities for her different possibilities. As in our above example, the agent can emerge with the causal power to downwardly select her decisions and so to cause D_M with the objective probability of 0.7, D_E with the probability of 0.2, and D_S with 0.1. But only insofar as this agent power, with specific probabilities for causing her subsequent decisions, is anomic, it is (non-causally, emergently) determined only by herself. 20 We can find some epistemic consequences that follow from this picture. Given that the anomic agent emerges from her lower level psychological constituents according to a trans-ordinal law, and that we could have predictions of these constituents, we could have a posteriori predictions of the appearance of anomic agents. But, given the anomic nature of the agent's causal power over her subsequent psychological dynamics, all the available information (about past events, her psychological, biological, and physical conditions, and the ordinal and trans-ordinal laws of nature which may be implicated) will be insufficient to know how she will causally constrain her psychological possibilities and, so, the decisions that she will make. This will be known only retrospectively, a consequence which follows from a substantive reading of the agent as the ultimate source of her free actions, conferring on her moral responsibility in the basic desert sense. 21
18 A law that should apply in nomologically similar worlds that will include anomic emergent agents.
19 And, in this precise sense, as several agent causalists argue, the agent's primary free action (the agent's anomically and downwardly causing her decision with a specific objective probability) is uncaused (see, e.g., O'Connor 2000, 2011; O'Connor and Ross 2004; and Botham 2008); in our articulation, it is not determined by anything other than the agent herself and, a fortiori, it is not determined by preceding causal factors in accordance with the laws of nature. In footnote 9 I said that I partially agree with O'Connor about this issue, and this is why: although in the aforementioned precise sense I think that the agent's primary free action is uncaused, nonetheless, we have just referred to another sense in which it is caused: the objective (even non-deterministic) probability for the appearance of the anomic agent is fixed (determined) by the different laws of nature; and the objective probability of the anomic agent for causing her decision (although not fixed) is under-determined by the same laws; so, by our characterization of a cause in section § 3 (as a difference maker, which increases the objective probability of the instantiation or appearance of other entities), this primary free action has been caused: its objective probability has been increased by the preceding causal factors (in particular, by the agent's reasons) in accordance with the laws of nature. For the cited agent causalists, these causal antecedents are merely causal contributors or influences, but not causal producers or determinants, and I agree; but I add: causal influences that increase the probability of the primary free action. But beyond this, the important point to realize is the reason why such an action cannot be "completely" caused, that is, because of its anomic nature. I thank an anonymous reviewer for raising the question about this uncaused nature and the analysis of the nomic and anomic relations at issue.
20 I owe the articulation of the last paragraphs to the comments of an anonymous reviewer.
21 This epistemological consequence is clearly foreseen by what Chisholm calls a Kantian as opposed to a Hobbesian approach (1964, p. 12). See also Steward, 2012, pp. 168-9.
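The contrast between the fourth and fifth scenarios can also be laid out schematically. The following toy sketch in Python uses only the probability values from the Alice example above; the representational choices (dictionaries for distributions, a free parameter for the anomic case) are purely illustrative assumptions of mine. In the nomic-emergent case the distribution over D_M, D_E, and D_S is a fixed function of the circumstances C, so conditioning on Alice at t_2 adds nothing new; in the anomic case the laws only require that what the agent supplies be a genuine probability distribution.

```python
# Probabilities from the Alice example; the modelling is illustrative only.
EVENT_CAUSAL = {"D_M": 0.5, "D_E": 0.3, "D_S": 0.2}    # if only events acted
NOMIC_EMERGENT = {"D_M": 0.7, "D_E": 0.2, "D_S": 0.1}  # fixed by trans-ordinal laws

def nomic_agent_distribution(circumstances_c):
    # Same output for the same C: Pr(decision | Alice at t_2) collapses into
    # Pr(decision | C at t_1); Alice's emergence adds nothing she controls.
    return NOMIC_EMERGENT

def anomic_agent_distribution(circumstances_c, chosen):
    # The laws leave the distribution under-determined; the agent herself
    # supplies it ("at will"), subject only to its being a distribution.
    assert abs(sum(chosen.values()) - 1.0) < 1e-9
    return chosen

print(nomic_agent_distribution("C"))                                  # fixed before t_2
print(anomic_agent_distribution("C", {"D_M": 1.0, "D_E": 0.0, "D_S": 0.0}))
```

The sketch makes the structural point visible: in the nomic case the function from C to the distribution is total and law-given, whereas in the anomic case there is a degree of freedom that nothing below the agent fills in.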
So far I have developed the articulation of the concept of a libertarian agent who freely causes her actions in terms of an anomically emergent agent. In the sections that follow I will explain the workings of this kind of causation in further detail, responding to some criticisms. Taking stock Traditionally, critics of agent causation have claimed that its main problem has to do with its intelligibility as a solution to the problem of free will, particularly regarding the issue of how substances, as opposed to the events in which they participate, can be of causal relevance in the light of an indeterministic picture of the world. I have articulated the concept of a substance as either the reducible or the emergent organization of some global events, and I have explained that which of these two ways it should be understood depends on whether or not it is a function of them. Given that emergent causation is conceptually tied to downward causation, as a first result we reached the idea that emergent substance causation is the downward causation that the substance exerts over its subsequent under-determined constitutive events. But I argued that emergent substance causation (added to compatibilist requirements for freedom) is not sufficient for free agent causation, because it can be completely determined by preceding causal factors in accordance with the conjunction of intra-level and inter-level or emergent laws of nature. And the same applies to an indeterministic emergent agent, because the non-deterministic probabilities of her actions could be nomologically fixed by preceding causal factors (whose efficacy she does not control) even before her birth. So I proposed that the true concept of a free agent is that of an anomically emergent substance, a substance that exercises her causal powers by downwardly constraining her (nomologically under-determined) subsequent mental states (as decisions) in a way that is not determined by any law of nature, in such a way that such causal constraint is only up to her. It should be clear that I am trying to offer neither a priori nor empirical arguments to show that we in fact are, or aren't, free agents. Rather, I want to show that the idea of agent causation is intelligible, that an a priori objection can be answered, and that if we have this answer we can be confident that it at least makes sense to ask some remaining questions. To see why this is so important, I am going to show how, with this clarity, we can solve some central problems that have been raised against the view. The causal integration of the agent and her reasons In addition to the problem of its intelligibility, other concerns have been raised with respect to the idea of agent causation as grounding libertarian free will. In what follows, I want to discuss these concerns. Whereas I cannot claim to present decisive replies to them, I can show how having a clearer idea of the notion of agent causation puts libertarians in a better position towards answering them. Let us start with the consequences of the agent causal account in the light of a causal theory of action. Donald Davidson (1980) already told us, and most theorists agree, that the reasons for an action are the reasons that cause the action. 22
Given that actions for which agents are morally responsible are normally rational actions, these should be caused by reasons rather than by agents insofar as they are rational; and, on the agent causal proposal, caused by agents rather than by reasons insofar as they are morally responsible free actions. This apparent dilemma poses a challenge to the agent causal account of morally responsible action. Following our previous example, let us suppose that Alice anomically emerges from her reasons R_M, R_E, and R_S with a power to downwardly constrain her possible decisions D_M, D_E, and D_S. The anomic emergentist picture is an integrated account that gives relevance to the different implicated causal factors, in particular, both to the anomic causal agent and to her reasons and motivations. It is precisely because of the causal relevance of her motivations R_M, R_E, and R_S that Alice is left with only three of her possible decisions: D_M, D_E, and D_S; so when Alice decides, her final decision needs to be explained (although only partially explained) as an outcome of her motivational structure; in this sense, her reasons are causally and explanatorily necessary but insufficient for her decision. As we have done before, we can suppose that if there were nothing more than event-causal powers at issue, Alice's psychological states would have the objective probability of 0.5 to cause D_M, the probability of 0.3 for D_E, and 0.2 for D_S. 23 We can also say that she could have several other reasons and motivations to choose between D_M, D_E, and D_S, but that the aforementioned R_M, R_E, and R_S are those that on this occasion are doing the causal work, that is, constituting the motivational scenario that will cause one of her decisions. 24
22 So far I have assumed this "traditional" causalist conception of action, taking intentions, choices, and/or decisions as plausibly the most direct and basic actions of agents. Nonetheless, this can mislead us about primary free actions. So I agree with agent-causalists like O'Connor (2000), O'Connor and Ross (2004), and Botham (2008) that the agent's causing an event (e.g., the agent's causing a decision) is the foremost candidate for the agent's primary free action, and not the agent's caused event (the decision) as such. This is so because, following our emergentist articulation, a free action minimally requires that the agent anomically and downwardly causes her subsequent mental state, her decision, in the sense that if that decision were differently produced, it wouldn't be a free action (for further detailed reasons see Botham 2008, pp. 90ff, p. 122, pp. 132ff). This means that when we say that the agent is the genuine source of, causally necessitates, and causally controls (and so is morally responsible for), e.g., her free decision, as both common sense and the agent causalist want to say, we imply that such a decision is free only in virtue of being the effect-constituent of her primary free action: her anomically causing it (meeting what Botham (2008, p. 93) calls Whenceness, a principle of origination/sourcehood that is required by acting freely: being the underived/ultimate originator of an essential element/part of her primary free action). I thank an anonymous reviewer for suggesting the clarification of this issue. In what follows I show how the agent has causal control over her decisions even though her reasons are also causally relevant for them.
Now, if Alice is an anomic agent, she will have a similar motivational structure that leaves open her three possible decisions, but from which she emerges as a higher level organization with the anomic causal power to downwardly constrain, manipulate, and establish one of her possible decisions as its ultimate source. Given the motivational probabilistic set-up of her mental events and reasons, it is probable but not necessary (depending on Alice's nature, abilities, and circumstances) that making the decision D_M will be easier for her than the decision D_E, and that she will have to strive much harder to decide D_S. But causing one of these decisions is the element that is only up to her qua agent as a whole, rather than up to the various aspects of her motivational structure. Now suppose that Alice causes D_M. In this case, what can we say about the real cause of her decision? Is it Alice's moral reason R_M for deciding D_M, or Alice as an agent, that caused it? As I have argued before, on this account there is no contradiction but complementarity between the two kinds of causes: it isn't any event other than R_M which finally causes D_M, 25 but it only does so in virtue of, and because, Alice by herself constrains and selects over her (reasons') possible outcomes, determining D_M over the other ones. And what about her other possible decisions? Suppose now that Alice downwardly constrains and selects over her possible outcomes, determining D_E. We still have to say that both Alice as the agent and her selfish reason R_E are complementary causes of her decision.
23 If this were the case, as we have said, such a probabilistic result would be a function of the causal relevance of Alice's mental states and reasons when these are isolated or composing other systems (agents); see section § 2. This could also be an anomic scenario wherein, quoting again O'Connor, "these choices are at times even brought about event-causally, while we simply monitor the result and retain the capacity to agent-causally redirect things as need be." (2000, p. 115).
24 So suppose that Alice has additional reasons for causing her decisions D_M, D_E, and D_S. For instance, suppose that she also has R_M2 for causing D_M, and that she finally causes D_M. In this case, how can we say that R_M and not R_M2 (or R_M together with R_M2) is (are) doing the causal work of producing D_M? The emergentist answer (like that of other proposals, under the aforementioned principle that each concrete reality must have a unique causal power) is that different reasons can have different causal powers that can coincide in certain respects but must differ in others. Thus, although in certain circumstances R_M, R_M2 (and even R_M together with R_M2) can cause the same result, namely, the decision D_M, their mediated consequences must be different, causally affecting different actions and mental states that follow D_M. We can say that the causal powers that R_M bestows on the agent are different from those that R_M2 bestows on her, and that such differences must become evident from the partially coincidental causation of D_M. I thank an anonymous reviewer for suggesting the clarification of this issue.
25 Remember that her egoistic and sentimental reasons R_E and R_S would cause D_E and D_S, respectively, but not D_M.
This is because, although it has a low nomologically projectable probability of 0.3 for causing D_E, the only motivational or psychological factor that is doing the work of causing this decision is her reason R_E, but it is doing so only because Alice selected it over her other nomologically and psychologically under-determined possibilities. This is the meaning of the idea that we perform our actions in the light of our reasons and motivations: our acting is causally constrained but not completely determined by them. We (as anomically emergent structures, systems, substances) are in the end the ones who constrain and select among the nomologically under-determined possible outcomes of our motivations and reasons; that is to say, we are in the end the ones who select and so determine our actions. Still, we have to notice that the causal relevance of our motivations and reasons is essential: the anomic downward control that the agent imposes on their possible outcomes is only a power of their emergent organization or structure and, therefore, can only exist while these motivations and reasons take place. 26 Meeting the luck objection The luck objection is another central worry that has been raised against libertarian accounts of free will. Recall that contemporary agent causal libertarianism proposes to introduce the agent as a substance cause with the purpose of solving this problem. Several authors have argued that in fact it does not help at all (see, for instance, van Inwagen, 1983; Haji, 2004; Mele, 2005, 2006; and Clarke, 2019). In short, the objection is that a scenario that takes into account all the mental events and reasons of an agent, and still portrays her as objectively indeterministic in her having different probabilities to cause different decisions, seems to imply neither factual nor nomological elements that can account for the selection of one of these decisions over the others. As a consequence, there are no grounds for the required causal control that the agent should have over them, making her election just a matter of (good or bad) luck, and depriving her of any responsibility for them. To begin, we can differentiate two general senses of luck and randomness at issue. 27 We have a principal sense for the free will debates referring to actions and happenings which are not under the agent's control: for an action to be a non-lucky outcome of the agent, the action must happen as a result of the agent's causal influence and control. And we have a secondary sense of luck that picks up the idea of non-deterministic, probabilistic causation: in order to overcome it, the agent must secure just one course of action and, with it, prevent any other possible chance. We will see that although the anomic agent's actions can be "lucky" in the second, probabilistic sense (given that, as we have explained, the different probabilities of her actions are only up to her), they still are under her causal control thanks to her anomic causal power, which can act either probabilistically or deterministically.
26 It is important to highlight that the causal determination of the anomic agent is not necessarily conscious or rational. Plausibly, we have to admit that many of our free actions can be made through unconscious elections which are based on non-rational emotions, affections, and biases. See, for instance, Doris, 2017.
27 Kevin Timpe (2013) has developed a similar distinction.
This in turn will show us that an objectively indeterministic scenario is compatible with the kind of control that is at issue in the free will and moral responsibility debate and, so, cannot by itself be used to articulate a sustainable objection to the anomic agent proposal. In the end, the problem is not whether the agent causes her decision deterministically or non-deterministically but whether she finally introduces a real contribution beyond that of the causal conditions of the world that she cannot control (such as her past events in accordance with the laws of nature). I have said that the causal power of a free agent is an anomic power, meaning that the (either deterministic or non-deterministic) objective probability for constraining her mental events and reasons is not determined by any natural law, neither lower level nor emergent. As I have explained, this means that, in our example, Alice by herself (insofar as she anomically emerges as the higher level organization of the causal contribution of her reasons) fixes the objective probabilities of her mental states and reasons for causing one of her possible decisions D_M, D_E, or D_S. Let us suppose that Alice establishes her psychology in such a way that her reasons R_M, R_E, and R_S have the objective probability of 0.5 for causing D_M, the probability of 0.3 for causing D_E, and the probability of 0.2 for causing D_S. Now suppose that through her determination of this probabilistic set-up, Alice causes her decision D_M. Would we say that, because Alice left causally open her three possible decisions D_M, D_E, or D_S, Alice's causing D_M was a matter of luck? In accordance with the second sense of luck, Alice is "lucky" (under the definition), but in accordance with the first, primary sense, she's not. She's "lucky" because she causes her decision D_M only in a probabilistic way, but in the end she is not really lucky, because she has causal control over D_M insofar as she by herself anomically determined the probabilities of her three psychologically possible decisions; she could have selected another psychological set-up with a different distribution of the probabilities and, with it, she could have either increased (up to its fullest value 1.0) or decreased (and even cancelled out) the probability of the occurrence of D_M. As we have said, no factors other than herself determine this kind of control over her decisions, in particular, no preceding causal factors in conjunction with the laws of nature. This shows that there is nothing problematic in itself about an indeterministic scenario, wherein the agent can determine her psychological structure as having certain probabilities for causing her subsequent mental states and decisions either in a non-deterministic or in a completely deterministic way. The real point is whether she has that power. And that's a factual and empirical issue, the worldly issue of whether she is an anomically emergent agent. We can make explicit that, even maintaining exactly the same past events, Alice has the power to fix different probabilities to cause any of her decisions. But she can do so for two important reasons: (i) these probabilities are nomologically under-determined, that is, from within the nomological constrictions that her reasons impose on such probabilities, she, as their higher level organization, fixes their specific values. And (ii) this fixing is anomic: it isn't necessitated by anything other than herself; it is ultimately only up to her.
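The idea that the probabilities are limited but not fixed can likewise be put schematically. In the toy sketch below the nomological constraints are modelled, purely hypothetically, as intervals within which each decision's probability must fall (nothing in the argument fixes how such bounds would actually be quantified); any distribution respecting the intervals is admissible, and which one obtains is supplied by the agent herself.

```python
# Hypothetical nomological bounds: "limited but not fixed". The laws
# constrain each decision's probability to an interval; they do not
# select a point within it. The specific intervals are invented.
BOUNDS = {"D_M": (0.0, 1.0), "D_E": (0.0, 0.6), "D_S": (0.0, 0.4)}

def agent_fixes(probabilities):
    """Check that the agent's self-given distribution respects the
    nomological bounds; within them, nothing but her determines it."""
    assert abs(sum(probabilities.values()) - 1.0) < 1e-9
    for decision, p in probabilities.items():
        lo, hi = BOUNDS[decision]
        assert lo <= p <= hi, f"{decision} violates its nomological bound"
    return probabilities

# Alice's own setting from the example ...
print(agent_fixes({"D_M": 0.5, "D_E": 0.3, "D_S": 0.2}))
# ... which she could have raised to certainty, cancelling the others:
print(agent_fixes({"D_M": 1.0, "D_E": 0.0, "D_S": 0.0}))
```

Read this way, the reply to the luck objection is that the residual chance in the outcome is itself something the agent settled, not something settled for her.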
In this sense, the answer to the luck objection is not based on the fact that the agent ensures the appropriate connection between her reasons and her decisions (this connection is ensured by the laws of nature, although in an underdetermined way; namely, it is actualized (selected) depending on the anomic action of the agent); rather, the solution is based on the anomic nature of the agent. Now, what does determine the agent's exercise of her anomic agent-causal power in one way or another, with certain probabilities rather than others? Nothing but her; that exercise is only up to her (in the sense explained above), granting her the agential control necessary for having moral responsibility in the basic desert sense.

The empirical adequacy of agent causation

Some authors have argued that although agent causation could turn out to be coherent, it still has to face insurmountable empirical problems: specifically, problems about the very peculiar ways the world and its laws would need to be structured in order to accommodate it and make it feasible (Vargas, 2013, ch. 2; Pereboom, 2014, ch. 3; Clarke, Capes & Swenson, 2021). Pereboom, for instance, argues that fundamental agent causation may be the best alternative for libertarians to pursue, but that there are good reasons to doubt its empirical credentials. He contends that the agent causalist faces one of two unwelcome possibilities. She must accept an unexplainable, wild coincidence between the outcomes of the agent's causal powers and those expected from what we suppose our best physical theories would propose on the basis of purely microphysical laws. Or she must accept highly dubious contraventions of the microphysical laws that govern the small-scale elements constituting our world.

My answer is that there cannot be massive coincidences between the anomic agent's causal effects and those expected by our best (micro)physical sciences (even adding all the special sciences) on the basis of their natural laws, because then the anomic character of the agent would have no significance. But neither is the agent's anomic causal power committed to contraventions of the laws of the microphysical or special sciences, because such a causal power functions only as a constraint that emerges from the under-determinacy of these natural laws and, therefore, can only exist while they take place in such a way. Let us specify in a little more detail.

The problem of wild coincidences starts from a non-deterministic interpretation of quantum mechanics stating that our physical world is governed by laws that are fundamentally probabilistic or statistical: physical laws that, although insufficient to determine it, allow for the action of the causal power of the anomic agent, as we have seen in our emergentist articulation. Some agent causalists have argued that the causal power of the agent should conform with what the probabilistic microphysical laws dictate in order to be coherent with them (see, for instance, Clarke, 2003, p. 181; and O'Connor, 2003, p. 309). But Pereboom disagrees, arguing that a credible agent-causal theory should affirm that the agent's causal power must be distinct from the causal powers of her constitutive events and that, in consequence, "we would expect the decisions of the agent cause to diverge in the long run from the frequency of choices that would be extremely likely on the basis of these events alone" (2014, p. 67). He also contends that if
we nevertheless found conformity, we would have good reason to believe that the agent-causal power was not of a different sort from the causal powers of the events after all […] Or else, this conformity would be a wild coincidence, which we would not expect and would have no explanation. (Pereboom, 2014, p. 67)

I think that Pereboom's point is powerful. Furthermore, we can add that this kind of wild coincidence only invites agent-causal reductionism or epiphenomenalism on the basis of Kim's well-known exclusion argument. As Kim puts it, "the problem of causal exclusion is to answer this question: Given that every physical event that has a cause has a [sufficient] physical cause, how is a mental [or an agent] cause also possible?" (1998, p. 38, original italics). Certainly, the premise of the rationale is the principle of the causal closure of the physical domain: if a physical event has a cause at t, it has a sufficient physical cause at t (Kim, 2009a, p. 38). Complete coincidence between agent causation and event causation naturally invites the idea that every event has a sufficient event as its cause. From this, the exclusion argument runs and leaves us with only two options: reductionism or epiphenomenalism. If we agree with many authors that epiphenomenalism is wrong, absurd, or even incoherent (see, for instance, Silberstein, 2001, p. 84; Kim, 2005, p. 70; McLaughlin, 2006, p. 40), the general result is the reduction of agent causation.28 But if we accept epiphenomenalism as a viable articulation of agent causation, we have to face the disappearing agent objection: the objective probabilities of the agent's actions would be completely fixed as causal outcomes of her preceding events in accordance with the laws of nature, so she would have neither the agential control necessary for moral responsibility in the basic desert sense, nor would she deserve to be blamed or praised in a retributive way. Complete coincidence between the causal consequences of agent causation and event causation thus leads us either to reduce or to eliminate the agent's causal power. But anomic agent causation cannot, by its very essence, completely coincide with event causation in accordance with the laws of nature, because it works only by constraining, selecting, and making a difference among the under-determined possibilities that these laws display and, so, by bringing about courses of action different from those projected by the action of the latter alone. So either there is coincidence and nomological reduction/elimination, or there is no coincidence and anomic emergence.29

Let us now examine the second horn of the dilemma, according to which the theory has to accept implausible contraventions of the laws that govern our microphysical world. As Pereboom states,

On O'Connor's (2009) emergentist account of agent causation, the agent-causal power is a higher-level power that strongly emerges from a wholly microphysical constitution by virtue of the organization of the constituents. That is, the exercise or activation of this higher-level power can result in contraventions of the microphysical laws that can ideally be discovered without taking into account any higher-level properties, henceforth the ordinary laws. (2014, p. 68)

28 As we have seen, the variants of reductive agent causation can be compatibilist (Nelkin, 2011, ch. 4; Markosian, 2012; Pereboom, 2015; Clarke, 2019) and incompatibilist (Kane, 1996, 2007; Balaguer, 2009).
Here we may note that the issue about the agent-causal power does not differ from a general question concerning any emergent causal power, and it is for this reason that Pereboom, following O'Connor's example, illustrates the issue through the connection between an arguably emergent causal power of a protein molecule and its causal dynamics at the microphysical level (Pereboom, 2014, p. 68). This shows that Pereboom's worry concerns the general notion of ontologically emergent (non-agent) causal powers, namely, the claim that they would have to contravene the microphysical laws from which they emerge. We have explained that emergent causal powers function only as higher-level constraints that emerge from the under-determinacy of lower-level (e.g., microphysical) natural laws and, therefore, can only exist while the latter take place in such a way. As we have highlighted, the emergent (in particular, the agent's anomic) causal powers complement, and do not contradict, change, or violate, the under-determined dynamics of the lower-level causal and nomological factors.

What kind of empirical evidence can we have of this? One of the commonly recurring examples that seem to show the falsity of microphysical reduction (complete microphysical grounding) and the appearance of further emergent higher-level causal constraints is the phenomenon of quantum entanglement (Maudlin, 1998; Silberstein & McGeever, 1999; Papineau, 2008; Ismael & Schaffer, 2020). But this does not seem to be an isolated phenomenon. Within the scope of physical science itself and of its interaction with chemistry, we find numerous examples of (non-quantum) irreducible holistic properties (see Anderson, 1972; Leggett, 1987; Gell-Mann, 1994; Cartwright, 1997; Hendry, 2006; Kistler, 2006; Hoffmann, 2007). And there is also evidence that the failure of microphysical reduction goes beyond the physical and chemical scopes. For instance, there is a growing consensus based on empirical evidence that biological properties cannot be explained completely on the basis of their underlying chemical processes (Campbell, 1974; Rothschild, 2006; Wimsatt, 2007; Davies, 2012; Dupré, 2021). And the empirical results available regarding the interaction between mental and neural properties at least suggest a failure of reductive explanation between these domains (Van Gulick, 1993; Velmans, 2002; Scott, 2007; Juarrero, 2009). If this kind of evidence ends up being correct, which is a completely empirical issue, then emergence and downward causation should be, as William Wimsatt thinks (2007, p. 175), much more common than normally supposed. The emergent causal laws and powers would then complement, and would not contradict, change, or contravene, the lower-level (such as microphysical) laws and powers from which they emerge.

But what about anomically emergent agent causation? We have explained how this kind of special, anomic causation can only emerge on the basis of the underdetermined dynamics of our mental events and reasons, in such a way that, even with all the scientific knowledge about the laws that govern us, it would be impossible, as is nowadays the case, to predict the particular ways in which we evolve, transform ourselves, and transform our world. Maybe only then could we say, in a determinate sense, as we daily believe and hope, that we are (partial but) real constructors of our own destiny.
Conclusion

We have seen that agent causation's main traditional problem, that of its intelligibility, can be solved by understanding non-reducible substances as emergent organizations of global events, which downwardly constrain and select among those events and, in this way, have control over them. But we explained that for free agent causation we need more than emergent substance causation, so that its true concept is that of an anomically emergent substance whose causal power is not determined by any law of nature; its exercise is only up to her. This conceptual framework provided us with the resources to face some of the main objections against the view, though these resources cannot by themselves show that we are actually anomic agent causes of our acts. What we have argued is something more modest: agent causation is plausible. It is consistent with, and indeed continuous with, credible metaphysical and scientific pictures. If that is right, we have good reason to take seriously the possibility that we are, in a strictly literal sense, the ultimate and irreducible causes of our own actions.

The author has no conflicts of interest to declare.
15,552.6
2023-04-01T00:00:00.000
[ "Philosophy" ]
Controllable Unsupervised Snow Synthesis by Latent Style Space Manipulation

In the field of intelligent vehicle technology, there is a high dependence on images captured under challenging conditions to develop robust perception algorithms. However, acquiring these images can be both time-consuming and dangerous. To address this issue, unpaired image-to-image translation models offer a solution by synthesizing samples of the desired domain, thus eliminating the reliance on ground truth supervision. However, the current methods predominantly focus on single projections rather than multiple solutions, not to mention controlling the direction of generation, which leaves scope for enhancement. In this study, we propose a generative adversarial network (GAN)-based model, which incorporates both a style encoder and a content encoder, specifically designed to extract relevant information from an image. Further, we employ a decoder to reconstruct an image using these encoded features, while ensuring that the generated output remains within a permissible range by applying a self-regression module to constrain the style latent space. By modifying the hyperparameters, we can generate controllable outputs with specific style codes. We evaluate the performance of our model by generating snow scenes on the Cityscapes and the EuroCity Persons datasets. The results reveal the effectiveness of our proposed methodology, thereby reinforcing the benefits of our approach in the ongoing evolution of intelligent vehicle technology.

Introduction

Intelligent vehicles and other advanced mobile agents are engineered to navigate through a spectrum of adverse weather conditions. This poses a formidable challenge to perception algorithms [1]. To enhance the robustness of these algorithms, a prevalent strategy involves augmenting the training dataset [2][3][4]. However, operating vehicles under such severe conditions contravenes road safety regulations, and data acquisition, in this case, becomes significantly time-consuming.

A viable alternative to traditional data collection is to synthesize weather effects on existing public benchmarks [5][6][7][8][9]. Conventionally, this is accomplished by modeling the impact of weather effects, such as fog, rain, and snow, as a function [10]. The derived function is subsequently applied to images or videos, thus simulating the desired weather conditions. This method facilitates the creation of diverse datasets, which serve as valuable resources for training and testing various perception algorithms, encompassing object detection, intention estimation, trajectory prediction, etc.

In light of the widespread adoption of deep learning methodologies, several researchers have begun considering the use of physically synthesized datasets as sources to train more universally applicable convolutional neural networks (CNNs) or generative adversarial networks (GANs) [11,12]. These datasets furnish diverse and realistic training samples, without the need for actual data collection in perilous weather conditions. Still, the efficacy of this approach depends highly on the accuracy of the weather effect model and the quality of the synthesized images or videos.
A recently proposed concept involves utilizing unpaired image-to-image translation models. This type of model is capable of learning how to map visual features from a source domain to a target domain without one-to-one correspondence [13][14][15]. A prominent example in this domain is the CycleGAN architecture [14], designed to generate images based on GANs that are virtually indistinguishable from real photographs. The key innovation introduced through CycleGAN is the implementation of a cycle consistency loss. This loss encourages the mapping of an image from one domain to the other and back again to reproduce the original image, and vice versa (a minimal sketch of this loss is given at the end of this introduction). Consequently, the model is able to learn a mapping between two image collections, effectively capturing correspondences between higher-level appearance structures.

In our previous research [16], we implemented a CycleGAN-based model to synthesize realistic snow on driving scene images. We used the Cityscapes and EuroCity Persons datasets as source domains, while a self-captured snow collection functioned as the target. The image generation performance was assessed using a variety of image quality metrics. Benefiting from semantic information, each sample was effectively transformed into convincing snow scenes, while maintaining the integrity of the original image's structure and texture. However, the adopted method yielded a single output conditioned on the given input image, which does not fully leverage the inherent multimodality of the mapping between two visual domains. This limitation overlooks the potential diversity of snow scenes that may be present.

In the present study, we present a novel framework, controllable unsupervised snow synthesis (CUSS), devised to overcome the limitations inherent in existing snow synthesis methodologies. The reason we focus on snow is that snow can drastically reduce visibility, often more than rain or haze, and accumulating snow will cover the road surface and points of interest. The novelty of this work stems from the presumption that the snow representation can be decomposed into a texture-invariant content code and a snow-specific style code. Further, in the middle of the training process, we explore the latent space via a self-regression module. The module linearly interpolates between the style code of the clear domain and that of the snow domain. After training, we can adjust the style code with a hyperparameter that theoretically controls the size of the snow, as shown in Figure 1. This strategy facilitates a more comprehensive capture of the full distribution of potential outputs, marking a significant advancement within the domain. The key contributions of our research are as follows:

• Content and style disentanglement. The CUSS model employs an architectural framework comprising a content encoder and a style encoder. In order to separate the content and style latent spaces, we introduce a supplementary content discriminator that distinguishes the content codes of clear and snow images.

• Multimodal controllable output generation. The CUSS model allows for the generation of multiple and diverse outputs based on a single input image through the sampling of distinct style codes. Moreover, the incorporation of a self-regression module facilitates the linear interpolation of the style code, thereby enabling manual adjustment of the generated size of snow.
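As a point of reference for what follows, here is a minimal sketch of the cycle consistency loss in the CycleGAN style, assuming PyTorch; the two generator callables are illustrative placeholders, not the paper's code.

import torch.nn.functional as F

def cycle_consistency_loss(x_clear, x_snow, g_clear2snow, g_snow2clear):
    # round trips: clear -> snow -> clear and snow -> clear -> snow
    x_clear_cyc = g_snow2clear(g_clear2snow(x_clear))
    x_snow_cyc = g_clear2snow(g_snow2clear(x_snow))
    # L1 (Manhattan) distance between inputs and their round-trip reconstructions
    return F.l1_loss(x_clear_cyc, x_clear) + F.l1_loss(x_snow_cyc, x_snow)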
Related Work

2.1. Weather Generation

Despite extensive research on improving visibility during inclement weather, there has been limited attention given to incorporating artificial weather effects into existing driving datasets. Our literature review delves into the current techniques for synthesizing weather, particularly those leveraging deep learning. Utilizing a generative model for weather simulation could offer a versatile instrument for generating authentic conditions to evaluate and enhance AI-driven systems.

Fog and Rain

The fog generation process, as detailed in a study conducted by Christos et al. [6], comprises two primary steps. It is initialized with the estimation of a transmission map and atmospheric light based on a clear scene image. Following this, depth denoising and completion techniques are implemented to enhance the accuracy of the depth map. This refined depth map is subsequently employed to simulate fog, culminating in the generation of synthetic fog images that closely resemble real-world foggy scenarios. Zhang et al. [17] introduce a technique to generate haze images from clear ones, employing a network that includes two encoders with shared feature extraction layers. The essence of their approach lies in separating the style feature, used exclusively for haze synthesis, from the content feature that conveys consistent semantic information. The authors control a parameter α to synthesize multi-density haze images, with the networks learning to distinguish between thick and thin haze.

For rain generation, Garg and Nayar [7] propose an adaptive image-based rendering technique that utilizes a database of rain streak appearances, requiring only the positions and attributes of light sources and a rough depth map, to add photorealistic rain to a single image or a recorded video with moving objects and sources. Venceslas et al. [18] discuss a real-time rendering algorithm for the realistic representation of fog in animations. The algorithm works by rendering fog for each pixel of the screen. The authors also introduce the concept of equipotential functions, which allow for the creation of complex shapes of fog.

Snow

Liu et al. [8] suggest a snow synthesis method that utilizes base masks representing different snow particle sizes: small, medium, and large. The snow synthesis process involves overlaying these base masks with images and introducing random factors such as snow brightness and random cropping to increase variation.

Ohlsson et al. [19] present a real-time method for rendering accumulated snow, which involves determining which regions should receive snow based on surface inclination and exposure to the sky, and then rendering the snow convincingly at those locations. The rendering process uses a Phong illumination model and a noise function to perturb the surface normal, creating a realistic snow cover appearance and an illusion of snow depth around occlusion boundaries. The exposure function is implemented using a depth buffer, similar to shadow mapping, and the depth map is sampled multiple times to calculate a fractional value for occlusion, creating a smooth transition between snow-covered and non-snow-covered areas.

Alexey et al.
[20] introduce an innovative approach to snow simulation through a user-adjustable elastoplastic constitutive model paired with a hybrid material point method (MPM). The MPM employs a consistent Cartesian grid to facilitate self-collision and fracture automatically. Moreover, it utilizes a grid-centric semi-implicit integration scheme that is not reliant on the count of Lagrangian particles. This technique adeptly simulates diverse snow behaviors, especially the intricate dynamics of dense and wet snow, and incorporates rendering methods for a true-to-life visual depiction of snow.

As an evaluation work, Thomas et al. [5] take multiple cutting-edge image-to-image (I2I) translation models for comparison, with CycleGAN [14] as the baseline. The models used include UNIT [21] and MUNIT [22]. This work attempts to generate all kinds of bad-weather images; the main focus is snow scenes. The authors believe that the identity loss, calculated as the Manhattan distance between input and reconstructed images, plays an essential role in the translation process. Therefore, they train the model several times, each time specifying a different weight for this loss. In addition, to make up for the shortage of current datasets, images retrieved from the image search engine Flickr are fed to their model.

Our previous work [16] presents a novel method for synthesizing realistic snow images on driving datasets using cycle-consistent adversarial networks. We introduce a multimodality module that uses a segmentation map to accurately generate snow according to the different regions of an image. We also propose a deep supervision module that adds extra side outputs to the discriminator, improving the network's learning of discriminative features. The model is evaluated using the same loss functions as CycleGAN [14]. The evaluation results on the Cityscapes and EuroCity Persons datasets show that the model outperforms other methods in generating realistic snow images.

From the above, generative models such as GANs are able to generate scenes under a variety of challenging conditions and make the output convincing. Models based on cycle consistency are able to generate images from the target domain without paired data. However, these models can only produce one output for one input, and the degree of style transfer cannot be controlled. This work aims to further explore the latent space of the extracted features and make the model produce diverse results.

Unpaired Image-to-Image Translation

Image-to-image (I2I) translation focuses on learning the mapping between two domains [23,24]. This involves capturing correspondences between higher-level appearance structures. The goal is to transform an image from a source domain to a target domain while preserving the underlying structure or context. Unpaired I2I translation further improves the training process, as it does not require paired input-output examples [13][14][15]. Instead, it assumes that there is some underlying relationship between the two domains and seeks to learn that relationship. This approach is particularly useful when image pairs are unavailable or the sensing environment is dangerous, as with driving scenes under challenging conditions.

Latent Space Constraint

With the success of unpaired I2I translation, researchers are now directing their attention to generating a more diverse range of outputs. This is achieved by a latent space constraint. Zhu et al.
[25] introduce the BicycleGAN model, designed to enhance image-to-image translation by producing diverse and realistic outcomes using a concise latent vector; this model utilizes a combined technique that ensures a one-to-one consistency between latent encodings and output modes to avoid mode collapse. In their research, they explored various methods of incorporating the latent code into the generator and observed similar performance levels, while also investigating the balance between result diversity and sampling complexity by adjusting the latent code's dimensionality.

Lee et al. [26] suggest a technique that projects input images into a joint content space and domain-distinct attribute spaces. The content encoders relay the mutual details shared across domains to a shared content space, and the attribute encoders relay the domain-unique data to a specific attribute space. To handle datasets without pairs, they introduce a cross-cycle consistency loss leveraging the separate representations.

Huang et al. [22] showcase the multimodal unsupervised image-to-image translation (MUNIT) structure, which differentiates image representation into a universal content code and a domain-centered style code, and incorporates two autoencoders, competitive objectives, and two-way reconstruction objectives to produce a range of results from a single source image. Furthermore, the model introduces the concept of style-enhanced cycle consistency, ensuring that the original image is recovered when converted to a target domain and reverted using its initial style.

Choi et al. [27] discuss a unified model called StarGAN that handles I2I translations across multiple domains. The generator takes an input image and a target domain label to generate a fake image. This target domain label is represented as a binary or one-hot vector for categorical attributes. The generator focuses on the explicitly given label and ignores unspecified labels by setting zero vectors, enabling the model to generate high-quality images.

Liu et al. [28] introduce a technique named the unified feature disentanglement network (UFDN), designed for self-supervised feature decomposition. They utilize a variational autoencoder (VAE) structure to achieve disentangled representations across various data domains. The encoder accepts an image, processes its representation, and then merges it with the domain vector. These combined data are then used by the generator to recreate the image.

Inspired by the above work, we introduce a content and style representation into our previous snow synthesis framework [16]. For intuition, we divide the translation step into three parts: encoding, translation, and decoding. The encoding network encodes the input image into one style code and one content code. By swapping and adjusting the style code generated by the style encoder, we can obtain diverse but high-quality outputs. In addition, we interpolate the style code between the clear and snow domains to obtain gradually increasing snow effects. This is achieved by disentangling the latent space.
Controllable Unsupervised Snow Synthesis

To synthesize realistic snow on the driving datasets, we focus on GANs with cycle consistency. The goal is to learn the mapping between the snow domain and the clear-weather domain. In our previous unpaired I2I methods [29,30], two generators are employed to transfer images into the expected domain. Two corresponding discriminators are employed to differentiate real images from fake images. The cycle consistency ensures that translated images can be reconstructed into the original input images.

Recently, when researchers use similar methods for weather removal or synthesis, they follow the assumption that weather images can be decomposed into a content partition and a weather partition [17,31,32]. The partition could be in any mathematical format, such as vectors or tensors. In general image translation tasks, the weather partition corresponds to the style representation. This technique disentangles the translation process and preserves the structural features of the background. Therefore, we follow this assumption and split the generator into three networks: a style encoder, a content encoder, and a decoder (a schematic sketch follows below).

In the field of representation learning, incomplete disentanglement is the more prevalent setting. This concept suggests that images from varying domains share a content representation space, while the style representation space remains unique to each domain. This idea is also known as the shared latent space assumption. In our task, the style is related to snow, and different classifications detail the attributes of the weather events that produce snow.

Intuitively, the content codes and style codes should be disjoint in the representation space. To better achieve representation disentanglement, we apply a content discriminator to distinguish the domain membership of the encoded content features. The goal is to force the content encoders to generate features that cannot be identified, which means the content code does not contain style details.

Further, in order to make the size of the synthesized snow controllable, we need to explore the space of the style partition S. Inspired by the work of Zhang et al. [17], we transform the snow domain into a continuous space by associating the style code vectors with a linear manipulation. With the help of the content discriminator, the style code will not contain information on image attributes. Ideally, an interpolated style code should represent an intermediate snow density.

Fundamental Basis

To illustrate the framework of controllable unsupervised snow synthesis (CUSS) in an intuitive way, suppose that x_1 ∈ X_1 and x_2 ∈ X_2 are images from the clear domain and the snow domain, respectively. Statistically, the images belong to two marginal distributions, p(x_1) and p(x_2). The joint distribution p(x_1, x_2) is inaccessible due to a lack of paired data. The goal is to learn an I2I translation model that can estimate the two conditionals p(x_{1→2} | x_1) and p(x_{2→1} | x_2), where x_{1→2} is a sample of synthesized snow images and x_{2→1} is a sample of synthesized clear images (recovered from real snow samples). In general, the synthesis outputs do not fall into a single mode; there are multiple solutions to the transformation problem.
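To make the three-network split concrete, the following is a schematic PyTorch sketch of a content encoder, a style encoder, and a decoder. Layer sizes are illustrative assumptions rather than the paper's exact architecture, and the simple channel-wise modulation in the decoder merely stands in for the adaptive instance normalization such models typically use.

import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    # image -> spatial content code c
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 7, 1, 3), nn.InstanceNorm2d(64), nn.ReLU(True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.InstanceNorm2d(128), nn.ReLU(True),
            nn.Conv2d(128, 256, 4, 2, 1), nn.InstanceNorm2d(256), nn.ReLU(True))
    def forward(self, x):
        return self.net(x)

class StyleEncoder(nn.Module):
    # image -> compact style vector s
    def __init__(self, style_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 7, 1, 3), nn.ReLU(True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(True),
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(128, style_dim, 1))
    def forward(self, x):
        return self.net(x).flatten(1)

class Decoder(nn.Module):
    # (c, s) -> image; s modulates c channel-wise before upsampling
    def __init__(self, style_dim=8):
        super().__init__()
        self.affine = nn.Linear(style_dim, 256)
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.Conv2d(256, 128, 5, 1, 2), nn.ReLU(True),
            nn.Upsample(scale_factor=2), nn.Conv2d(128, 3, 5, 1, 2), nn.Tanh())
    def forward(self, c, s):
        gamma = self.affine(s).unsqueeze(-1).unsqueeze(-1)
        return self.up(c * gamma)

For a batch x of shape (N, 3, 256, 256), c = ContentEncoder()(x) and s = StyleEncoder()(x) can be recombined via Decoder()(c, s).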
To obtain other possible solutions, we adopt the partially shared latent space assumption from MUNIT [22] to produce diverse snow effects. This theory posits that each image x_i ∈ X_i originates from a content latent code c_i, shared across both domains, and a unique style latent code s_i tied to its respective domain. For snow synthesis, a matching pair of clear and snow images (x_1, x_2) from the joint distribution is created by x_1 = F_1(c_1, s_1) and x_2 = F_2(c_2, s_2), with F_1, F_2 the underlying generators and E_1, E_2 their inverse encoders.

The structure of the CUSS model is depicted in Figure 2. As displayed in Figure 2a, our conversion model has an encoder E_1 and a decoder F_1 for the clear domain X_1, and an encoder E_2 and a decoder F_2 for the snow domain X_2. Each image fed into an encoder is converted into a content code c and a style code s, represented as (c, s) = E(x). The translation between images occurs by interchanging encoder-decoder pairs, as depicted in Figure 2b. For instance, to transform a clear image x_1 ∈ X_1 to X_2, we first capture its content latent code c_1 = E^c_1(x_1) and draw a style latent code s_2 from the normal distribution q(s_2) ~ N(0, I). Then, we employ F_2 to generate the final snow image x_{1→2} = F_2(c_1, s_2).

In earlier research [16], we harnessed the cycle consistency loss [14], measured by the L1 norm of the difference from the input image. This aimed to deter the secondary generator from producing arbitrary target-domain images. However, Huang et al. [22] demonstrated that if cycle consistency is imposed, the translation model becomes deterministic. As a result, we integrated a style-enhanced cycle consistency in the image-style joint spaces, which aligns better with multimodal image conversion. As illustrated in Figure 2c, we derive the content code c_{1→2} and style code s_{1→2} from the synthetic snow image x_{1→2}. We then feed the content code c_{1→2} together with the original clear style code s_1 to the clear decoder F_1. The resulting image is named the cycle clear image x_{1→2→1}. The idea behind style-enhanced cycle consistency is that by translating an image to a target domain and then back with the original style, we should retrieve the initial image. We do not apply explicit loss measures to enforce this style-enhanced cycle consistency, but it is implied by the bidirectional reconstruction loss. We show the pseudo-code of CUSS in Algorithm 1.

Algorithm 1: Controllable Unsupervised Snow Synthesis (CUSS)
Input: unpaired clear and snow image sets X_1 and X_2 (in that order)
Output: trained encoders E_1, E_2 and decoders F_1, F_2
while not converged do
  for each data pair (x_1, x_2) in data_loader do
    Get content codes and style codes of the input images: (c_1, s_1) = E_1(x_1), (c_2, s_2) = E_2(x_2)
    Generate fake images: x_{1→2} = F_2(c_1, s^n_2), x_{2→1} = F_1(c_2, s^n_1), where s^n_i is a style code sampled from the normal distribution
    Generate reconstructed images: x_{1→1} = F_1(c_1, s_1), x_{2→2} = F_2(c_2, s_2)
    Get content codes and style codes of the fake images: (c_{12}, s_{12}) = E_2(x_{1→2}), (c_{21}, s_{21}) = E_1(x_{2→1})
    Generate cycle-translation images: x_{1→2→1} = F_1(c_{12}, s_1), x_{2→1→2} = F_2(c_{21}, s_2)
    Update the generators E_1, E_2, F_1, and F_2 (and the discriminators)
  end for
end while
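The translation and cycle steps in Algorithm 1 can be written compactly as follows; this is a hedged sketch assuming PyTorch, with the encoder and decoder callables passed in as placeholders.

import torch

def translate_and_cycle(x1, enc_c1, enc_s1, enc_c2, dec1, dec2, style_dim=8):
    # encode the clear image: (c, s) = E(x)
    c1, s1 = enc_c1(x1), enc_s1(x1)
    # draw a snow style code from the prior q(s_2) ~ N(0, I)
    s2 = torch.randn(x1.size(0), style_dim, device=x1.device)
    # decode into the snow domain: x_{1->2} = F_2(c_1, s_2)
    x_12 = dec2(c1, s2)
    # re-encode the synthetic snow image and translate back to the clear
    # domain with the original style s_1 (style-enhanced cycle)
    c12 = enc_c2(x_12)
    x_121 = dec1(c12, s1)
    return x_12, x_121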
Figure 2. The architectural design of the proposed controllable unsupervised snow synthesis (CUSS) network is outlined as follows. The solid arrows show the forward process of the generators. The dashed arrows show the input to the discriminators. CUSS comprises two encoders, which assume the responsibility of encoding images from the clear and snow domains, yielding a content code and a style code, respectively. Additionally, CUSS incorporates two decoders, which accept a content code and a style code as input, subsequently generating synthetic images pertaining to the target domain. Moreover, there exist two discriminators, whose purpose is to discern images originating from each domain, alongside a content discriminator, which endeavors to discriminate between content codes. (a) illustrates the fundamental pipeline, wherein the encoded images ought to be recoverable using identical codes. Conversely, (b) portrays the process of translation accomplished by substituting the style code with a randomly sampled one. Lastly, (c) exemplifies the translation process's cycle consistency, whereby the translated synthetic images ought to revert to the original input, using the initially extracted style code in conjunction with their own content code.

Disentanglement of Content and Style

A disentangled representation captures the underlying structure of the data so that individual factors can be modified independently without affecting others. The goal is to achieve complete disentanglement, where both content and style features are extracted independently. To achieve this, a content discriminator D_c is used to remove style information from the content feature. At the same time, we use self-supervised style coding to reduce content information in the style feature.

To enhance the content encoder, we employ the content feature discriminator proposed by Lee et al. [26]. Initially, the content encoders extract content codes, denoted as c_1 and c_2, from the respective inputs x_1 and x_2. The content discriminator D_c takes these content codes as input and classifies their source domain. One objective of the content encoder is then to deceive D_c with indistinguishable features. As a result, the content encoder and the discriminator refine each other through adversarial training. Once equilibrium is reached, the extracted content features no longer retain any stylistic information of the image.

When this game of generators and discriminators stabilizes at a Nash equilibrium [33], it becomes impossible for D_c to ascertain the image domain of a content feature, implying an absence of snow details in the content feature. A successful separation of style from content is achieved when the content encoder exclusively captures the image's content characteristics.
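A minimal sketch of this content-adversarial game, assuming PyTorch; the binary cross-entropy formulation, in which the encoders are trained to drive D_c toward maximal uncertainty, follows the DRIT-style objective, and all names are illustrative.

import torch
import torch.nn.functional as F

def d_content_loss(d_c, c1, c2):
    # train D_c to classify content codes: label 0 = clear, label 1 = snow
    z1, z2 = d_c(c1.detach()), d_c(c2.detach())
    return (F.binary_cross_entropy_with_logits(z1, torch.zeros_like(z1))
            + F.binary_cross_entropy_with_logits(z2, torch.ones_like(z2)))

def e_content_loss(d_c, c1, c2):
    # train the content encoders so that D_c ends up maximally uncertain (p = 0.5)
    p1, p2 = torch.sigmoid(d_c(c1)), torch.sigmoid(d_c(c2))
    return (F.binary_cross_entropy(p1, torch.full_like(p1, 0.5))
            + F.binary_cross_entropy(p2, torch.full_like(p2, 0.5)))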
It has been shown that utilizing the content discriminator can prevent content codes from containing style details [26]. Naturally, the next step is to remove content details from the style codes. The purpose is to make the generation more stable and unaffected by other factors. Consequently, we implement self-supervised style coding to remove any excess content details from the style codes, as illustrated in Figure 3. Our methodology involves extracting style codes from two distinct domains. By using linear interpolation between these domains, guided by the parameter k, we are able to generate a range of snow sizes. A higher value of k gives greater significance to the snow dimensions. We then merge the content and interpolated style features to create a snow scene image with a specific snow density determined by the aforementioned parameter k. Throughout the training process, we use a randomly selected k value from the interval [0, 1] to derive a novel style code that represents an intermediate level of snow density. This newly generated style code serves as a self-supervised pseudo-label, effectively guiding the updating process of the style encoder.

Using a nonlinear function f to denote the style encoder, we initially perform interpolation on two style codes, one from the clear domain and the other from the snow domain, to acquire s_k = (1 − k) s_1 + k s_2, as shown in Equation (1). In the scope of our problem, the need to disentangle the style feature from the content feature ensures that operations on the style code are consistent with those on the input image.

According to the derivation in Zhang's work [17], because s_k is calculated by linear projection of s_1 and s_2, it should contain snow detail that also stands in a linear relation to x_1 and x_2. Using s_k and the content code c_1 of a clear input, a new snow image x_k can be generated. We then encode x_k again to obtain its style code, which is supervised by s_k itself; the loss function is defined in Equation (2). Even though the function f is nonlinear, we continually generate x_k and optimize the encoder with s_k in the training phase to maintain a linearly consistent relationship between input images and the corresponding style codes.

In the early stages, the style code will contain extra content detail because of latent space entanglement. At every forward iteration, the extra details become separated from the style codes. In effect, the style encoder learns to identify content details and ignore them. In the absence of manually assigned labels, the process relies on s_k as a self-generated label to guide the updates of the networks. The desired situation is that the style code generates snow according to the object distribution at every distance and does not decrease the information density of traffic-sign areas.

Due to the stochastic choice of k at each forward iteration, the encoder is compelled to project the snow-related detail into a linear space. As a result, we can make linear adjustments to style codes to generate images with varying snow densities. For example, we can specify a k value representing k × 100% of the snow density of the input snow image. Then we extract the style code and content code of the input clear image. After that, we feed the content code and the interpolated style code to the decoder to obtain the output.
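The self-regression step can be sketched as follows, assuming PyTorch; the L1 form of the regression penalty is an assumption standing in for Equation (2), and the function names are placeholders.

import torch
import torch.nn.functional as F

def style_regression_loss(enc_style, dec_snow, c1, s1, s2):
    k = torch.rand(()).item()                # k drawn uniformly from [0, 1]
    s_k = (1.0 - k) * s1 + k * s2            # linearly interpolated style code
    x_k = dec_snow(c1, s_k)                  # intermediate-density snow image
    s_k_hat = enc_style(x_k)                 # re-encode the synthetic image
    return F.l1_loss(s_k_hat, s_k.detach())  # supervise with s_k as a pseudo-label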
The factor k governs the snow density. Since s_2 originates from the baseline snow, it can be scaled up or down using k to yield a background-invariant image featuring different levels of snow density, as depicted in Equation (3).

Loss Function

The comprehensive loss function discussed in this paper comprises several components: the adversarial loss L_adv, the image reconstruction identity loss L_id, the style reconstruction loss L^s_recon, the content reconstruction loss L^c_recon, the style regression loss L_regre, the cycle consistency loss L_cc, and the content loss L_cont. The overall objective function is formulated as the weighted sum of these individual loss components:

L = L_adv + λ_id L_id + λ_s L^s_recon + λ_c L^c_recon + λ_regre L_regre + λ_cc L_cc + λ_cont L_cont.

Here, L_adv = L_{D_1} + L_{D_2}, where L_{D_1} and L_{D_2} represent the adversarial losses in the clear and snow domains, respectively. The various λ terms act as the model's hyperparameters, modulating the significance of each loss component.

Adversarial Loss

Adversarial loss is employed in both the clear and snow domains to enhance the realism of the generated images. In the domain of clear images, the adversarial loss L_{D_1} takes the standard GAN form: D_1 serves the purpose of differentiating real clear images from their synthesized counterparts, striving to maximize the loss function, while F_1 aims to reduce the loss in order to make the generated clear images appear more authentic. Likewise, L_{D_2} is defined for the snow domain. We consider both adversarial losses to have equal impact and straightforwardly sum them up to compose the ultimate adversarial loss.

Identity Loss and Latent Space Reconstruction Loss

When provided with a snow image and a clear image, the encoder-decoder pairs are required to recreate the input image from its own content code and style code. As such, the disparity between the reassembled image and the initial image serves as the reconstruction loss, adding additional constraints to the encoders. Additionally, we aim for the decoded images to have content and style features that closely resemble those of the original images. As a result, we define corresponding losses for the reconstruction of the content code and the style code. It is important to note that we treat the reconstruction loss of a style code as falling under the same umbrella as the self-supervised style coding loss. We sum these two up, applying the same weight to both, to arrive at the final style coding loss.

Cross-Cycle Consistency Loss

Our model incorporates the cross-cycle consistency loss, as referenced in [26], to facilitate the learning of domain mappings. For the generated snow image x_{1→2}, its corresponding clear image x_1 can be recovered through a desnowing transformation. The cross-cycle consistency loss constrains the scope of the generated image while maintaining the background information of the input images. The Manhattan distance between the cyclically reconstructed image and the original image serves as the measure for this cross-cycle consistency loss. The image conversion process, which involves converting the clear image to the snow image and the other way around, proceeds as in Equation (12). The reverse translation operation, which entails reconstructing the original input from the generated image, is outlined in Equation (13). The cross-cycle consistency loss for both the snow and clear image domains is then the sum of the Manhattan distances between each cyclically reconstructed image and its original.
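The composition of the overall objective can be expressed schematically as below; the dictionary keys and example weight values are illustrative, not the paper's settings.

def total_loss(L, lam):
    # L: individual loss terms computed elsewhere; lam: weighting hyperparameters
    return (L["adv"]
            + lam["id"] * L["id"]
            + lam["s_recon"] * L["s_recon"]
            + lam["c_recon"] * L["c_recon"]
            + lam["regre"] * L["regre"]
            + lam["cc"] * L["cc"]
            + lam["cont"] * L["cont"])

For instance, lam = {"id": 10.0, "s_recon": 1.0, "c_recon": 1.0, "regre": 1.0, "cc": 10.0, "cont": 1.0} would weight the image-level terms more heavily, a common (but here assumed) choice in I2I training.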
Experiments

To validate the efficacy of the approach described in this paper, this section delves into the influence of the various modules and loss functions on the generated outcomes. It also benchmarks these outcomes against existing methods through both quantitative and qualitative metrics. Initially, we provide an overview of the datasets used and the implementation specifics of the approach. Subsequently, we offer an in-depth examination of our model, comparing it with current methodologies in the field. We further support the model's effectiveness by showcasing visualizations of intermediate outcomes and conducting a generalization analysis. The final portion of this section focuses on ablation studies to scrutinize the model's components. All testing and experimentation are performed on an NVIDIA RTX A6000 GPU equipped with 24 GB of memory.

Datasets

4.1.1. Urban Sceneries: Cityscapes

The Cityscapes collection [34] consists of 5000 detailed images showcasing urban landscapes, predominantly captured in various German cities during daylight hours. This dataset, which primarily focuses on street vistas, intersections, and vehicular scenes, has gained significant popularity for training and evaluating machine vision systems. We use all images as the clear source to train the model, as the number of images is close to that of our snow set.

European Urban Scenes: EuroCity Persons

The EuroCity Persons collection [35] comprises a vast array of photographs depicting pedestrians, cyclists, and other moving figures within city traffic scenarios. These images were captured from a mobile vehicle across 31 cities in 12 European nations. Each image in this collection is accompanied by a comprehensive set of precise annotations, including bounding boxes around pedestrians and cyclists, as well as additional information such as direction, visibility, and potential obstructions. This extensive collection is further divided into separate segments for daylight and nighttime scenes, encompassing a grand total of over 47,300 images. To maintain consistency with the snow dataset, we handpicked 5921 snapshots from the daytime training segment.

Snow Condition Driving Dataset

In order to provide an authentic and realistic benchmark for learning, we recorded a comprehensive video during adverse snowfall conditions. Employing a high-resolution camera positioned behind the windshield of the vehicle, we captured footage at a frame rate of 120 frames per second. From this extensive collection, we carefully selected the highest-quality images and resized them to a resolution of 960 × 540. Some examples are shown in Figure 4. As a result, our curated snow dataset [30] comprises a total of 6814 meticulously curated photographs. The number of retained images is kept close to that of Cityscapes because GAN training is prone to problems such as mode collapse, which leads to training failure.

Implementation Details

Our proposed model's network comprises two encoders, two decoders, two discriminators, and a content discriminator. Among the encoders, one is designed for style and the other for content. Their structure aligns with what is described in [26]. Breaking it down:

• The content discriminator has five convolutional layers.
• The style encoder includes an initial residual layer, two downsampling layers, and one adaptive average pooling layer.
• The content encoder features an initial residual layer, two downsampling layers, and four residual blocks.
• Each decoder is made up of four residual blocks and two upsampling layers. The decoders employ adaptive instance normalization, while the encoders use standard instance normalization.
• All the discriminators take specific image patches of the same resolution as input, which is inspired by Demir's work [36]. This structure includes five convolutional layers.

For training, we implement minibatch stochastic gradient descent with a batch size of 12 and the Adam optimization technique (parameters: β_1 = 0.5, β_2 = 0.999). We initiate training with a learning rate of 0.0001, reducing it linearly from the 100th epoch. In the training phase, the input is cropped to a 256 × 256 resolution. The weight of each loss function is set as a fixed hyperparameter during training.
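The stated optimization setup can be sketched as follows, assuming PyTorch. The placeholder modules and the 200-epoch horizon are assumptions for illustration; the text above fixes only the batch size, the Adam parameters, the initial learning rate, and the start of the linear decay.

import torch
import torch.nn as nn

# placeholders for the encoders and decoders described above
modules = nn.ModuleList([nn.Conv2d(3, 3, 3, padding=1) for _ in range(4)])

opt = torch.optim.Adam(modules.parameters(), lr=1e-4, betas=(0.5, 0.999))

def lr_factor(epoch, start=100, total=200):
    # constant learning rate until `start`, then linear decay to zero at `total`
    return 1.0 if epoch < start else max(0.0, (total - epoch) / (total - start))

sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda=lr_factor)

for epoch in range(200):
    # ... one training epoch over 256 x 256 crops with batch size 12 ...
    sched.step()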
Performance Assessment

In this section, we present a comprehensive analysis comparing the outputs of CUSS with current state-of-the-art (SOTA) I2I translation methods. We delve deeply into the impact of disentanglement and conclude with a meticulous examination of the uniqueness and significance of each module based on ablation studies.

Assessment Criteria

Initially, we evaluate the quality of image synthesis using traditional computer vision measures, namely, the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). Additionally, we employ metrics that specifically focus on the depth and perceptual features of the images, including the Fréchet inception distance (FID) [33], the LPIPS distance [37], and the VGG distance [38].

PSNR serves as a reliable objective measure for images, quantifying the discrepancy between corresponding pixel values. Higher PSNR values indicate reduced distortion in the generated images. SSIM, on the other hand, assesses the similarity between two images by considering their luminance, contrast, and structure. An SSIM value of 1 indicates that the two compared images are identical in terms of structural information and quality, while a value of 0 indicates that the images are entirely different in these respects.

FID calculates the Fréchet distance between two sets of images based on the features extracted by the Inception network [39]. It provides a measure of similarity between the generated images and their respective benchmarks. A lower FID value suggests that the generated images closely resemble the benchmark images.

LPIPS, or learned perceptual image patch similarity, is a metric used to evaluate perceptual differences between images [25]. Unlike traditional metrics, such as the mean squared error (MSE), which measure pixel-level differences or structural similarities, LPIPS employs deep learning to better align with human visual perception. In essence, LPIPS offers a more perceptually meaningful measure of image similarity, especially useful in tasks like image synthesis, where the objective is not just to reproduce pixel-accurate outputs but to generate outputs that are perceptually indistinguishable or pleasing to humans.

VGG distance refers to a perceptual loss metric based on the VGG network, which was originally designed for image classification tasks [38]. Similar to LPIPS, it is used to measure the difference between two images in a feature space. The activations from one or more layers of the VGG network capture higher-level content and texture information about the images.

Qualitative Results

We generate snow images of varying snow sizes by adjusting the previously discussed parameter and contrast our proposed model against the leading state-of-the-art methods. In our experimental setting, we take the Cityscapes dataset and the EuroCity Persons dataset as the clear source domains. These two datasets contain thousands of road scenarios under different urban and weather conditions. There are also various road users, such as pedestrians, cyclists, and moving vehicles.

Figure 5 displays images with varying amounts of snow. It is important to note that we assume that the k value of the input clear image is 0, while that of the input snow image is 1. We then adjust the snow feature within the range of 0 to 1. The differences in snow density across images with distinct parameter values demonstrate our success in differentiating the generated snow through manipulation. As an illustration, as the value increases, objects at the far end of the image become less distinguishable. Due to the impact of the style feature coding, the style encoders can distinguish between large and small snowflakes.

The results of the qualitative comparison are shown in Figure 6. This experiment compares the generated snow images with mainstream I2I translation methods (CycleGAN [14], CUT [40], MUNIT [22], and DRIT [26]). The first two models use single projections, while the last two can produce diverse outcomes. For a more accurate comparison, we consistently use ResNet [41] as the backbone for the generator in all methods. The training data use Cityscapes [34] and EuroCity Persons [35] as clear sources and the self-captured snow set as the target source. The methods used for comparison all require no paired data.

The qualitative comparison shows that models such as CUT [40], MUNIT [22], and DRIT [26] mainly exhibit three primary defects. First, after translating images to represent snowy scenes, the original colors are often distorted, diminishing the natural appearance of the scene. Second, these models sometimes introduce artifacts that were not present in the original image, leading to inconsistencies and jarring visual outcomes. Lastly, they inadequately handle the far end and sky regions, resulting in uneven or unrealistic snow representation in these areas. In contrast, the method proposed in this paper offers several advantages. Our approach naturally integrates snow, ensuring that its boundaries fade out seamlessly across the image, providing an authentic representation in both the foreground and background. By distinguishing between snow style and actual image content, our method is able to capture and reproduce the intrinsic properties of snow, resulting in a synthesis that feels genuine and consistent throughout the image. Moreover, while other models might render trees or other objects as if they were buried under unnatural snow formations, our technique retains the original structure and detail, providing a more balanced and realistic representation.

Quantitative Results

As reported in other I2I works, the constraint of cycle consistency is strong, so the ability to generate diverse output is suppressed. However, the output image retains a high similarity to the original image, which explains why CycleGAN [14] achieves the best SSIM value, as shown in Tables 1 and 2.
Compared with other methods, CUSS combines the content discriminator and style code manipulation, and both turn out to be effective for high-quality synthesis. Therefore, CUSS achieves better results on those metrics. CUT is the only method that does not employ any form of cycle-generation pipeline, instead using contrastive learning. The data used in our experiment cannot satisfy the requirement of a large batch size, which prevents full use of the contrastive loss. The results of CUSS show that our method remains effective even when the data are not abundant.

To understand the individual contributions of the different components of CUSS, we conduct an ablation study with respect to the loss functions. Since the loss functions reflect the direction of model optimization, we not only validate the new modules of the content discriminator and self-supervised style coding but also test the improvement from reconstructing the image, the style code, and the content code. In Table 3, we observe that each component is crucial to the CUSS model, as reflected in the decrease of the metrics when it is removed. We generate images with three sets of k values (0.3, 0.6, 1). Smaller k values indicate a smaller size of the synthesized snow, i.e., closer to the input clear image. The results show that the model produces the best-quality output at the smallest k values.

I2I translation methods such as CycleGAN, which use the principle of cycle consistency, produce deterministic outputs. For a given clear image, such a method will produce the same translated snow image every time. To produce diverse outputs, researchers manipulate the latent space of the extracted image features by dividing them into style codes and content codes. In our experiments, we found that latent space manipulation inevitably splits the translation network into two or more parts. This leads to performance degradation. In this work, the solution is to use a content discriminator to distinguish the content codes from the different domains. With the requirement of generating an indistinguishable content code, the encoder can achieve better disentangled representations.

Once the disentangled style code is obtained, operations on it are reflected in the output snow images [42]. Therefore, we interpolate the style codes of the input clear image and the snow image. Since the input snow image represents the maximum snow size, we can control the degree of the snow effects. Note that we cannot obtain snow effects stronger than those of the input snow image.

The controllable output is of high quality and reasonable. The scenes are gradually covered with stronger snow effects. However, the generated snow does not adapt to individual objects: the snow covering the trees and the snow covering the buildings should be different, i.e., the snow effects should change appearance according to the scene content. However, they look similar in Figure 5. To improve CUSS, we need to consider semantic information in the latent space.

Our method basically belongs to domain translation, which learns knowledge from the source domain and transfers it to the target domain. In the case of snow generation, the output will only contain snow effects similar to those of the input snowy images. To obtain a greater variety of snow, as in real snow scenes, we could add more snowy datasets with different snow shapes and sizes. However, this can lead to mode collapse if there are too many modes in the GAN training process. In addition, the data should be collected in the same region to avoid large domain gaps, such as those between Asian driving scenes and European driving scenes.
Conclusions
The study presents an innovative approach to unsupervised snow synthesis, wherein a controllable method is introduced that incorporates latent space manipulation. To effectively separate the features of snow style and content, an additional content discriminator is incorporated along with a self-regression style coding module. To transition smoothly from clear to snow-affected images, a partial style cycle consistency loss is employed to refine the latent representation space. Furthermore, comparative analyses are conducted to understand the impact of each loss component or module within the model on the outcomes. When subjected to quantitative and qualitative evaluation against various techniques on the Cityscapes and EuroCity Persons datasets, our approach consistently produces diverse and high-quality traffic scenes under snowy conditions. Moving forward, our future research can be classified into two distinct paths: • Expanding the proposed technique to tackle generation tasks in more demanding driving conditions, such as heavy rain, dense fog, nighttime, and strong light; • Delving deeper into the relationship between generative methods and latent space manipulation for I2I translation tasks by integrating existing insights from self-supervised and contrastive learning methodologies.
Figure 1. Examples of the output produced by the proposed snow synthesis model. In the upper row, the dimensions of the snow are progressively enlarged, an effect accomplished through the interpolation of style codes. Conversely, the images in the lower row are generated using randomly sampled style codes that follow a Gaussian distribution.
Figure 3. Snow sizing through self-regression style coding. Our methodology involves extracting style codes from two distinct domains. By using linear interpolation between these domains, guided by the parameter k, we are able to generate a range of snow sizes. A higher value of k gives greater significance to the snow dimensions. We then merge the content and interpolated style features to create a snow scene image with a specific snow density determined by the aforementioned parameter k. Throughout the training process, we use a randomly selected k value from the interval [0, 1] to derive a novel style code that represents an intermediate level of snow density. This newly generated style code serves as a self-supervised pseudo-label, effectively guiding the updating process of the style encoder.
Figure 4. A collection of self-captured videos depicting urban driving amidst intense snowfall [30]. The images are carefully screened. The footage includes various road users, including cyclists, automobiles, buses, and pedestrians.
Figure 5. Synthesis of multidensity results on the EuroCity Persons dataset by adjusting the parameter k. From the left to the right column, objects such as vehicles, people, and trees are covered with snowflakes and haze that gradually increase in size.
Figure 6. Comparisons between the synthesized snow images produced by our method and SOTA unsupervised image translation methods. In particular, CUT deviates from the utilization of cycle consistency and the associated loss, as observed in CycleGAN. Conversely, the remaining models, such as CUSS (controllable unsupervised snow synthesis), incorporate a form of partial style cycle consistency.
(From the algorithm listing: s n i denotes a style code sampled from a normal distribution.)
Table 1.
A comparison on the Cityscapes dataset is made between SOTA image translation techniques through numerical evaluation. We generate images with k = 1. The metrics used for evaluation include d_VGG and d_LPIPS, which represent the VGG and LPIPS feature distances, respectively. Down arrows indicate that lower values of d_VGG and FID correspond to more favorable experimental outcomes; up arrows indicate that higher values of the other metrics suggest superior results. Optimal values are denoted in bold.
Table 2. A comparison on the EuroCity Persons dataset is made between SOTA image translation techniques through numerical evaluation. We generate images with k = 1. The same image quality metrics are used. Down arrows mean lower metric values are better, and up arrows mean higher values are better. The best results are shown in bold.
Table 3. Results from quantitative model comparisons after eliminating various loss factors are presented. We generate images with three sets of k values. We examine the impact of the content discriminator, cross-cycle consistency loss, reconstruction losses, and style regression loss. Down arrows mean lower d_VGG and FID values signify improved experimental performance, while up arrows mean that higher values for the remaining metrics indicate better results. Figures in bold highlight the best values.
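As a rough illustration of how the perceptual distances in these tables are typically computed, the sketch below uses the publicly available lpips package with a VGG backbone. The image tensors are random stand-ins for a generated snow image and a reference, and the evaluation protocol (pairing, resolution, aggregation over the test set) is an assumption rather than the paper's published pipeline.

```python
import torch
import lpips  # pip install lpips; learned perceptual distance, used here for d_LPIPS

# VGG backbone, i.e. the same network family that also underlies a d_VGG feature distance.
metric = lpips.LPIPS(net='vgg')

# Hypothetical tensors in the (N, 3, H, W) layout the lpips package expects,
# with pixel values scaled to [-1, 1].
img_a = torch.rand(1, 3, 256, 256) * 2 - 1
img_b = torch.rand(1, 3, 256, 256) * 2 - 1

with torch.no_grad():
    d = metric(img_a, img_b)  # larger = more perceptually different
print(float(d))
```

Whether a lower or higher value is "better" depends on what is being measured: fidelity to a reference favors low distances, while output diversity across style codes favors high ones, matching the arrow conventions in the captions above.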
10,554.6
2023-10-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Active waveguide Bragg lasers via conformal contact PDMS stamps
Lasing is observed in Bragg lasers formed through conformal contact of a patterned PDMS stamp with a plain active film, spin-coated on glass. The thresholds, output efficiencies, and spectral characteristics are compared to those of standard substrate-patterned gratings and are discussed in relation to the coupling coefficient κ. Thresholds of distributed feedback (DFB) lasers are found to be highly sensitive to grating duty cycle, for both PDMS-air and substrate–film lasers. Overall, laser thresholds of PDMS-air (PA) DFB lasers are found to be significantly higher than those of substrate–film (SF) lasers, which is attributed to an approximately three-fold reduction of optical confinement in the grating region. Slope output efficiencies are found to be comparatively higher in PA lasers relative to SF lasers for both DFB and DBR configurations, which is attributed to several competing factors. The PDMS can be removed from the surface of the active film repeatedly, and conformal contact is limited mainly by particle build-up on the PDMS surface. The proposed PA system is expected to be useful in rapid laser metrology of new gain materials and in practical applications of optically pumped lasers.
… achieve lasing. Due to the low surface energy of PDMS, damage to the active film is minimal, and the PDMS can be peeled on/off repeatedly with no deleterious effect on lasing performance. The longevity of the sample is determined primarily by dust/particle accumulation on the PDMS surface. For material systems suffering from photo-degradation, the laser can be recovered by moving the stamp to a different location on the active film. Additionally, the system can be useful for the qualification of gain/lasing performance in materials and films without expensive/repetitive fabrication steps beyond the initial stamp fabrication. Here, the performance of the proposed PA Bragg lasers is compared to standard substrate-film (SF) (Fig. 1d) Bragg lasers, with respect to lasing thresholds and output efficiencies.
Results/Discussion
In this study, F8 0.9 BT 0.1 (ADS233YE) was used for its commercial availability and broadband gain spectrum 16. The latter is important to minimize variability in lasing thresholds due to changes in effective refractive index (n_eff) away from the peak of the gain spectrum between PA and SF structures. The native active film thickness was fixed at 180 nm for all samples; we find this is a sufficient compromise to obtain an appreciable pump-mode overlap and moderate optical confinement 17. The film thickness was low enough that only the TE0 mode propagates with substantial confinement. Pure 2nd order (2O) lasers are commonly used due to less stringent fabrication requirements than 1st order lasers, and ease of metrology since the laser emission is vertically outcoupled. However, 1st order lasers tend to yield lower thresholds, as the maximum theoretical diffraction efficiency for feedback is stronger than in 2nd order lasers for optimized duty cycles γ = a/Λ, where a is the grating linewidth and Λ is the periodicity, as illustrated in Fig. 1a,d [18][19][20].
To retain both high feedback strength and vertical out-coupling, lasers made from 1st order gratings with a 2nd order out-coupler (1O) have been used 21. Here, 2O DFB, 1O DFB, and DBR lasers are explored for both PA and SF structures. For 1O DFB lasers, 40 periods of 2nd order gratings were placed between two 1st order gratings (Fig. 1e). For 1O DBR lasers, 500 1st order periods are used for both mirrors; this was sufficient to achieve full reflection of the waveguided light, while 40 2nd order periods were placed on one mirror facet for outcoupling (Fig. 1f). Gratings of periodicity Λ = 366, 183 nm were chosen for 2nd/1st order gratings to match the Bragg condition, mλ0 = 2 n_eff Λ, for wavelength λ0 ~ 565 nm (near the peak of the gain bandwidth based on amplified spontaneous emission (ASE) 16), corresponding to n_eff ~ 1.53. All cavity lengths were fixed to approximately 200 µm, including 2O DFB lasers, and the excitation stripe is shaped to ~ 200 µm × 50 µm, as described in Fig. S7, to match the cavity dimension, and shown with a zoom lens image (Fig. 2a).
Figure 1. (a) Schematic for PDMS-air (PA) grating, (b) pictogram of lasing from PA sample (pump beam filtered), (c) SEM of 60% γ 1st and 2nd order PDMS gratings, (d) schematic for substrate-film (SF) grating, (e) 1O DFB laser with 40 periods of a 2nd order grating out-coupler between 1st order gratings, (f) 1O DBR laser with 1st order gratings and 40 periods of 2nd order grating out-coupler defined on a single mirror facet.
Observation of lasing in PDMS-air laser. Scattered laser radiation in Fig. 2a could be observed from the grating above threshold, and the outline of the 2nd order out-coupler is visible; however, the vertically outcoupled emission was not observed, as the image is taken at oblique incidence. Magnified (~ 3.8×, by comparing physical stripe length and spectrograph image) 0th order diffraction (reflection mode, fully open entrance slit) spectrograph images of the emission for a 1O DFB below and above threshold are illustrated in Fig. 2b,c respectively. The vertically out-coupled emission from the 2nd order section is discernible from the background emission of the excitation stripe below threshold. Above threshold, emission is predominantly localized in the 2nd order section and spans ~ 5 pixels (65 µm), and similarly for 1O DBR samples (Fig. S9a-d). All spectra were taken with a 50 µm entrance slit; therefore, we expect that most of the lasing emission (~ 77%) was captured with 1O DFB and DBR samples. Normalized spectra for base film emission are given in Fig. 2d, along with spectra for a 60% γ PA 1O DFB below and above threshold. Below threshold, two sharp peaks close to the minimum spectral resolution of the spectrograph (~ 0.7 nm) were observed, along with a dip in the spectral intensity between the two peaks at 563 nm. The spectral position of this dip is relatively close to that of the fundamental TE0 mode (numerically calculated n_eff ~ 1.52, depending on grating γ, as shown in Fig. S10); thus we assign it to the photonic stopband of the TE0 mode. Also shown is the lasing peak for the low energy longitudinal mode, though we find that the oscillating mode can begin at the high energy peak and will lase at both modes with higher pump fluence (Fig. S11a). This is expected, as there is no mode threshold discrimination process in 1st order DFB lasers, unlike 2O DFB lasers [22][23][24]. In 1st order lasers, the threshold gain for each mode nearest the stopband is equally likely to lase in the absence of a defect/phase-shifting element 24.
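A quick numerical check of the Bragg condition quoted above recovers the chosen grating periods. The effective index used below is an illustrative value close to the reported n_eff ~ 1.53; the sketch is for orientation only.

```python
# Bragg condition m * lambda_0 = 2 * n_eff * Lambda, solved for the grating period Lambda.
def bragg_period(m: int, wavelength_nm: float, n_eff: float) -> float:
    return m * wavelength_nm / (2.0 * n_eff)

lambda_0 = 565.0   # nm, near the gain peak identified from ASE
n_eff = 1.54       # approximate TE0 effective index (assumed here for illustration)

for m in (1, 2):
    print(f"m = {m}: period = {bragg_period(m, lambda_0, n_eff):.0f} nm")
# -> roughly 183 nm and 367 nm, within a nanometre of the 183/366 nm gratings used.
```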
For 2O DFB lasers, mode discrimination is provided by differential radiation loss of either mode 25. Nevertheless, we do see some level of discrimination between repeated samples for the 1O DFB laser. This is attributed to a multitude of factors including differential loss/gain at different wavelengths, altered grating phase due to the 2nd order grating out-coupler, or small reflections from adjacent gratings 26.
Fluence-dependent lasing output and slope output efficiency in PA laser. Fluence-dependent spectra of a 60% γ PA 1O DFB sample are illustrated in Fig. 3a; a super-linear growth of the low energy band-edge feature was observed with increasing fluence, indicating the onset of lasing. Similar trends in intensity growth were observed in 2O DFB and 1O DBR lasers (Fig. S12). Growth of the integrated spectral intensity near the lasing peak (± 10 nm) with fluence is shown in Fig. 3b for a typical set of 60% γ PA 1O DFB, 1O DBR and 2O DFB samples. The lowest thresholds were observed in 1O DBR samples, corresponding also to the highest output efficiency, followed by the 1O DFB sample. The highest threshold belongs to the 2O DFB laser, with the lowest apparent slope efficiency. However, the low slope efficiency of the 2O DFB laser can be mainly attributed to its large spatial emission area (Fig. S9f), resulting in a large proportion of light that was not collected by the 50 µm spectrograph entrance slit. For 2O DFB lasers, the higher relative thresholds can be partially attributed to reduced feedback, as later discussed, and to an increase in outcoupling loss. For 1O DFB and DBR lasers, since the two operate by different physical mechanisms, direct comparisons of thresholds are difficult. For DBR lasers, the active gain medium is separate from the periodic element. The Bragg reflectors act as mirrors, the DBR lasers behave as spectrally selective Fabry-Perot lasers, and lasing occurs within the stopband, where the reflectivity is the highest. In DFB lasers, the gain medium is integrated with the periodic element, and feedback occurs via periodic reflection of counter-propagating waves at the band-edges.
Spectral and lasing properties as a function of duty cycle for PA and SF lasers. To further explore the discrepancy in thresholds, we look at the general expression derived from coupled-mode theory for the coupling coefficient of a pure index-coupled laser, assuming a perfectly square-periodic profile 11,19,20:
κ = k0 Γ_g (n2^2 - n1^2) sin(πmγ) / (2 n_eff m π) ≈ π n_eff Δν / c.    (1)
Here, k0 = 2π/λ0, where λ0 is the propagation wavelength in free space, n2, n1 are the refractive indices of the grating materials (SF/PA), Γ_g is the modal confinement in the grating region, n_eff is the effective refractive index, m is the Bragg order, a is the grating linewidth, and Δν is the longitudinal mode spacing at the photonic band edges. We make a point here that the equation is derived with a perturbative approach assuming that the refractive index contrast between n2 and n1 is small compared to n_eff. Therefore, quantitatively, it is not directly applicable to DFB lasers comprised of solution-processed materials, where the index contrast is typically high and the active layer refractive index is low. Nevertheless, we instead use Eq. (1) to qualitatively predict the behaviour of the PA samples in comparison with standard SF samples.
Note that γ used in this context refers to the initial design pattern dimensions of the positive e-beam resist for lithography, and not the exact physical ratio of the linewidth to the grating periodicity. This is because the linewidths will depend on e-beam exposure dosage and other practical fabrication factors. The PDMS used in the PA sample was moulded from the same SiO2 gratings used in SF samples. For SF samples, the grating corrugations were smoothened out such that the topology of the film surface was only modulated by at most 10 nm (Fig. S13, S14). This results in an optical confinement modulation of ~ 0.23 (Fig. S10a), assuming the active film thickness is 130 nm in the grating troughs and 180 nm in the trenches. We can therefore expect a large gain-coupling contribution from the periodic modulation in confinement for DFBs with the SF samples, in addition to the index coupling. In comparison, the optical confinement in the active film for PA samples is virtually unchanged (Fig. S10b), since there is no modulation in active film thickness.
Figure 4a-d shows representative experimental spectra for SF and PA 1O and 2O DFB samples of 30, 45 and 60% γ above and below lasing threshold. The stopband widths Δν are annotated in energy units and were used to estimate the coupling coefficients according to Eq. (1). For PA 2O, stopbands are clearly observed for 30 and 60% γ; however, for 45% γ, the dip was less prominent, with the stopband width noticeably narrower, and similarly for SF 2O in Fig. 4b. Additionally, lasing was not observed in the 45% γ PA sample at the highest fluences before the film was ablated; otherwise, lasing was observed on either side of the stopband. The observations are consistent with the sin(πmγ) term in Eq. (1) for m = 2. Close to γ = 0.5, the coupling coefficient is null (κ = 0), so little or no coupling is expected close to this γ, while κ is at its highest at 25 and 75% γ. Practically, deviation from a perfectly square profile will result in an incomplete null of κ 26. Conversely, for 50% γ in 1O samples (m = 1), κ reaches its maximum value, and departure from 50% γ results in a relatively slowly decreasing κ, as illustrated in the sketch after this paragraph.
Additional peaks and dips on either side of the main stopband for 1O samples were observed, with the dips more prominent for SF samples, shown in Fig. 4c,d, particularly for 45% γ PA 1O DFB and 45, 60% γ SF 1O DFB. We preclude the possibility of higher order TE modes and TM modes based on mode-solver calculations of n_eff and the predicted spectral position from the Bragg equation (the TM0 spectral feature is assigned in Fig. S15). The symmetric distribution of the peaks away from the main stopband suggests these may be the sidemodes found in typical Bragg structures 26. Inspection of Fig. 4d suggests that the dips form directly from the band-edge peaks. At 30% γ, no obvious dips are present, and the band-edge mode intensity is skewed to the high-wavelength edge. However, for 45% and 60% γ, new dips appear to emerge from both band-edge peaks (the transition is more clearly observed in Fig. S16a,e), and the intensity of the high-wavelength band-edge peak decreases relative to the low-wavelength edge. Moreover, the assignment of these dips to sidemodes would suggest that the main stopband width decreases as γ hovers around 50%, where κ is expected to reach its maximum.
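The duty-cycle dependence described here can be visualized with a short numerical sketch of Eq. (1) as written above. The material parameters are illustrative values for this paper's PA configuration, and, as cautioned in the text, the formula is used qualitatively rather than quantitatively.

```python
import numpy as np

def kappa(gamma, m, n2, n1, n_eff, lambda0_nm, gamma_conf):
    """Coupling coefficient for a square index grating from coupled-mode theory:
    kappa = k0 * Gamma_g * (n2**2 - n1**2) * sin(pi*m*gamma) / (2 * n_eff * m * pi)."""
    k0 = 2 * np.pi / (lambda0_nm * 1e-7)  # free-space wavenumber in 1/cm
    return k0 * gamma_conf * (n2**2 - n1**2) * np.sin(np.pi * m * gamma) / (2 * n_eff * m * np.pi)

g = np.linspace(0.05, 0.95, 19)
# PDMS-air gratings: n2 = 1.43, n1 = 1.0, illustrative Gamma_g = 0.07.
kappa_1O = kappa(g, 1, 1.43, 1.0, 1.53, 565.0, 0.07)   # 1st order
kappa_2O = kappa(g, 2, 1.43, 1.0, 1.53, 565.0, 0.07)   # 2nd order
# |kappa_2O| vanishes at gamma = 0.5 and peaks near 0.25/0.75, while |kappa_1O|
# peaks at gamma = 0.5 and falls off slowly, matching the trends in Fig. 4e.
```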
For 45% γ PA 1O DFBs, lasing still occurs within the main stopband; however, for SF 1O samples, lasing appears to occur in the high-wavelength side-dip/band. Assuming that the new dips split directly from the main band-edge modes, the centres of the side-dips were used to calculate κ. Where the position of the dips is ambiguous, as with 45% γ PA 1O, the main stopband was used to calculate the width, noting some underestimation of κ values. Even if calculated with the side bands, we find consistently lower Δν for PA compared to SF for both 1O and 2O DFB samples, and thus correspondingly lower κ for all γ, shown in Fig. 4e. Minima/maxima in both PA and SF 2O DFBs were observed at around 50% and 25/75% γ respectively, agreeing relatively well with Eq. (1). A less discernible trend was observed for the 1O samples close to 50% γ; however, we attribute this mainly to the ambiguity in spectral positions of the Bragg dips and fabrication limitations of PDMS replication for γ at the extremities. Overall, the appearance of the side-dips appears to correlate with high coupling coefficients in 1O but not 2O lasers; however, the origin of these features is currently unknown.
We observe that trends in DFB lasing thresholds follow trends in κ closely, as shown in Fig. 4f; that is, lower thresholds for higher κ. Thresholds were obtained by averaging over at least 3 test samples. The lowest thresholds obtained were 0.63 µJ cm−2 for 55% γ 1O SF DFB samples and 1.01 µJ cm−2 for 75% γ 2O SF DFB. The threshold could be further reduced to 0.45 µJ cm−2 in 1O SF DFBs by replacing the 2nd order grating of the same γ with a 75% γ 2nd order grating (Fig. S17b). On the other hand, the highest thresholds obtained were 23.5 µJ cm−2 for 45% γ 2O SF, and > 300 µJ cm−2 for 45% γ 2O PA (threshold not reached before film ablation). The results show that a poorly optimized γ could raise the lasing threshold by over an order of magnitude for 2O lasers. Improved performance of 1st order lasers with 2nd order out-couplers in previous reports 21 can therefore be attributed, at least in part, to unoptimized grating duty cycles.
Contrary to previous reports of lower lasing thresholds in surface-corrugated lasers relative to SF lasers by Quintana et al. 15, significantly higher thresholds were observed here in the former. We suspect that this is partially due to the different excitation and resonator lengths used. In our work, the excitation stripe and resonator lengths are exactly matched to 200 µm. We have shown that the thresholds can be further reduced by increasing the total feedback with longer resonators and, correspondingly, longer stripe lengths (Fig. S18), in line with theoretical predictions 11. The reduction in threshold tapers off with higher cavity length; however, the saturation occurs later for PA lasers than for SF lasers due to the lower κ. For example, an approximately 2.7-fold reduction in threshold for PA 2O lasers was found when increasing the cavity length from 200 to 400 µm, whereas only a 1.2-fold reduction was observed in SF 2O lasers. In the work by Quintana et al., the holographically patterned gratings presumably encompass a larger area than the excitation stripe length used (1100 µm). We expect that with these large resonator/excitation lengths, the thresholds are relatively saturated. Nevertheless, we find that even at longer cavity lengths, thresholds in PA lasers are consistently higher than in SF lasers.
Instead, we attribute the higher lasing thresholds of PA lasers for all γ primarily to an approximately threefold reduction in Γ_g (Γ_g ~ 0.2 for SF compared to ~ 0.07 for PA, depending on γ, calculated as shown in Fig. S19 and given in Table S1), and correspondingly, a reduction in κ. In contrast, the SF lasers by Quintana et al. used dye-doped polystyrene (refractive index ~ 1.59 at the lasing wavelength) matrices as the active layer with DCG/SiO2 (refractive index 1.55/1.46) gratings, resulting in significantly lower index modulation (1.59-1.55/1.59-1.46) compared to DCG-air active-layer surface corrugation lasers. The reduction of Γ_g in PA lasers appears to outweigh both any increase in κ due to the higher grating index contrast (1.43-1 compared to 1.7-1.46) and any lowering of thresholds due to higher confinement in the active layer relative to SF lasers. However, for the SF lasers, differences in thresholds may also be attributed to contributions from gain-coupling from the modulation in confinement. Γ_g can be increased by reducing the active layer thickness and/or refractive index, thereby increasing the overlap of the evanescent portion of the mode. However, this would also result in a decrease in confinement in the active layer. In previous work, the confinement around the upper active film region could be increased by depositing a low-loss, high dielectric constant material atop the active film 27. In this case, the overall confinement in the active film would increase only if the film thickness were kept thin.
For DBR lasers, the thresholds for PA and SF lasers were comparable, implying that the thresholds are not strongly correlated with κ. We attribute this to a combination of low waveguide loss (~ 11 cm−1, as determined in Fig. S20) and complete reflection from the mirrors. Although lower κ may increase the penetration depth into the DBR mirrors, assuming the loss upon round-trip reflection remains relatively unchanged, it would not significantly alter laser feedback for the same gain.
Slope output efficiency of PA and SF lasers. Measured slope efficiencies for 30 and 60% γ 1O DFB and DBR lasers are given in Table 1. In comparing both PA/SF 1O DFB and DBR lasers, significantly higher slope outputs were found in the corresponding DBRs. We attribute this to the fact that the grating is continuous along the length of the DFB cavity, so there is a decrease in intensity of the resonator mode along the length of the laser due to continuous back-reflection, whereas in DBRs, reflection only occurs at the mirror facets. We observe higher slope outputs in 60% γ lasers compared to 30% γ lasers for PA samples. The higher output is consistent with a higher overlap of the optical mode with the gratings at higher PDMS fill factors (confinement 0.064 with 30% γ compared to 0.077 with 60% γ, Table S1) and the reduced grating height of low γ PDMS. Additionally, as previously mentioned, the outcoupling from 2nd order Bragg gratings occurs via first order diffraction; we therefore expect the output efficiencies to correlate with first order coupling coefficients, that is, higher outcoupling closer to 50% γ, which is consistent with the higher slope output with 60% γ gratings relative to 30% γ. For SF lasers, the discrepancy in output was less discernible. In SF DFB lasers, the lower slope output is consistent with lower outcoupling loss and thus lower lasing thresholds, while for DBR lasers the slope output remains comparable within the error margin.
We find substantially higher output efficiency in PA lasers compared to SF for 30 and 60% γ. A similar increase in efficiency for top-layer gratings was found by Quintana et al. 15 in comparing DCG-air (index 1.55-1) gratings defined above the active layer and standard SF/DCG-film gratings with a dye-doped polystyrene active layer (index 1.59-1.46 or 1.59-1.55). They found a 3/20-fold increase in slope efficiency compared to SF/DCG-film lasers, respectively, and attributed this primarily to increased grating efficiency due to an increased index contrast. However, several other factors ultimately contribute to the magnitude of the radiated power output from the lasers, as demonstrated in the analysis of grating-coupled radiation in GaAs waveguides and lasers by Streifer et al. 28. They find a complex dependence of radiative output on grating height, duty cycle, index contrast, grating period, and the refractive indices of layers adjacent to the grating layer. It is therefore difficult to attribute changes in slope efficiencies to a single parameter. Numeric calculations may be warranted to predict the optimal geometries and obtain the highest outputs.
Conclusions and perspectives
Lasing was successfully achieved by conformal contact of a composite PDMS stamp patterned with Bragg gratings to an active layer (F8 0.9 BT 0.1). In this way, the active gain medium is decoupled from the resonator. The stamp could be repeatedly removed from the active-layer surface to recover lasing after degradation, with repeat usage limited predominantly by particle build-up on the stamp surface. Although the stamp tends to peel off after initial contact (after several hours/days), we expect applying a small amount of pressure can help sustain contact with the active layer surface. The emission behaviour of 1st order DFB and DBR lasers with 2nd order out-couplers (1O DFB and DBR) and pure 2nd order DFBs (2O) was explored. PDMS-air (PA) grating lasers showed higher thresholds than substrate-film (SF) lasers for a given duty cycle. These higher thresholds are attributed predominantly to an approximately threefold reduction of confinement in the grating region. Similar thresholds between PA and SF were observed for DBR lasers. This is attributed to low loss and complete reflection in the 1st order mirrors comprising the 1O DBRs. We find slightly lower thresholds in DBRs relative to corresponding DFB lasers in the PA samples but the opposite trend in SF samples. Slope outputs were explored for 30 and 60% γ 1O DBR and DFB lasers, where higher outputs were found for PA lasers compared to their SF counterparts. Further study is required to determine the origin of this behaviour. Improvements to the PA structure can potentially be made by adjusting the grating heights, as previous reports have shown 13,[28][29][30]. The grating height would, however, be fundamentally limited by the aspect ratio to which the PDMS can be made before pattern collapse. This can be somewhat overcome by increasing the rigidity of the PDMS, at the cost of increased brittleness. Additionally, as mentioned previously, an increase in laser cavity length for DFB lasers decreases the threshold at the cost of increased fabrication time. Overall, we expect the proposed PA system can help accelerate screening of suitable lasing materials without increased patterning/fabrication cost.
The system also opens prospects for potential practical application of optically pumped lasers, where lasing can be replenished after degradation upon spatial translation of the PDMS across an active film.
SiO2 substrate grating master fabrication. For all DBR and 1O DFB lasers, 40 periods of 2nd order Bragg gratings were used to outcouple light vertically. For 1O DFB lasers, the 2nd order section was placed in the middle of the 1st order gratings, as illustrated in Fig. 1e, where the number of 1st order periods was chosen to produce a resonator of roughly the desired length. In 1O DBR lasers, the 2nd order out-coupler is placed at the cavity edge with 500 periods of 1st order gratings comprising the rest of the mirror, while the mirror on the other side of the cavity is comprised only of 500 periods of a 1st order grating (Fig. 1f). The feedback in 2nd order gratings is accomplished via 2nd order diffraction, while the light is diffracted out via 1st order diffraction. For 1st order gratings, feedback is accomplished via 1st order diffraction. Double side-polished fused silica chips (20 × 20 mm2) were cleaned in an ultrasonic bath with acetone and IPA. Between acetone/IPA rinses, the chips were physically rubbed by hand with a microfibre cloth and subsequently rinsed with the respective solvents before being blown dry with N2. The samples were then treated with low RF power O2 plasma (RF: 50 W, O2: 50 sccm, pressure: 20 mTorr) for 3 min, then CHF3/O2 plasma for 1.5 min (RF: 125 W, CHF3: 45 sccm, O2: 1.5 sccm, pressure: 20 mTorr), and finally another O2 plasma step for 3 min (RF: 50 W, O2: 50 sccm, pressure: 20 mTorr). The purpose of these steps was to descum the surface, smoothen the polished surface to improve adhesion of the e-beam resist, and finally plasma-clean to remove any passivation polymer formed by the CHF3 plasma. We find extensive line collapse during the resist development process if the smoothening step is omitted.
Substrate-film sample preparation. For the substrate-film sample, the chips were baked at 180 °C for 5 min before F8 0.9 BT 0.1 in toluene (25 mg/mL) was spin-coated as is at 2000 rpm to yield a film of ~ 180 nm (without gratings) and was used as is, without annealing (annealing above the glass transition temperature drastically raises the lasing threshold). The F8 0.9 BT 0.1 solution was prepared in a N2-filled glovebox but spun in ambient conditions.
PDMS composite stamp fabrication 31. The SiO2 etched samples were used as a master for PDMS replication. The chips were baked at 180 °C for 10 min before being placed in an evacuated desiccator with 7 µL TCOFS on a separate holder for 1 h. A droplet of deionized water was used to test hydrophobicity, and the sample was rinsed with IPA afterwards to clean the surface, as a milky film tends to deposit on the chips during the TCOFS coating process. To prepare h-PDMS, a mixture of 0.791 g (7-8% vinylmethylsiloxane)-(dimethylsiloxane) with 7 µL platinum divinyltetramethyldisiloxane catalyst and 24 µL 2,4,6,8-tetramethyltetravinylcyclotetrasiloxane modulator was made. To this, 230 µL (25-30% methylhydrosiloxane)-dimethylsiloxane copolymer, hydride terminated, was added along with 540 µL toluene. Toluene is used to provide better moulding of the mixture into the trenches of the patterned nanostructures 32. The mixture was then quickly degassed with a vacuum desiccator, poured over the master chip, and spun at 1000 rpm for 60 s. The sample was left to rest for 1 h in ambient conditions before being baked in an oven at 60 °C for 10 min.
To prepare the soft PDMS, Sylgard 184 base was mixed with its curing agent in a 9:1 weight ratio, stirred thoroughly, and degassed in vacuum; the mixture was poured over the h-PDMS covered chips in a petri dish and subsequently degassed in vacuum again. The resulting samples were then cured in an oven at 70 °C for 5 h, cooled, and left resting in ambient conditions for more than 12 h before the sample was removed from the petri dish and the master chip removed with a scalpel and then peeled off. The resultant stamp was cut at the edges to remove any large protrusions that might prevent conformal contact with the lasing active film.
PDMS-air laser preparation. A 30 × 30 mm2 fused silica chip was cleaned following the steps in the SF/SiO2 master fabrication, including the plasma cleaning steps. The chip was baked at 180 °C for 5 min before F8 0.9 BT 0.1 in toluene (25 mg/mL) was spin-coated at 2000 rpm to yield a film of ~ 180 nm and was used as is, without annealing. The F8 0.9 BT 0.1 solution was prepared in a N2-filled glovebox but spun in ambient conditions. The PDMS stamp was placed atop the film and gently pressed until conformal contact was made.
Optical characterization. Lasing measurements were carried out using the output from a diode-pumped, actively Q-switched, frequency-tripled Nd:YVO4 (1.1 ns) laser (Picolo MOPA, Innolas) at 355 nm. The repetition rate was changed between different samples depending on the signal obtained; for higher output signals, the repetition rate was decreased to prevent saturation of the camera while running in continuous acquisition mode. However, signals are all scaled to a 10-pulse signal for slope output efficiency measurements. The fabricated samples were mounted on an xyz stage and were excited at normal incidence with a ~ 200 µm × 50 µm stripe formed by a set of optics (Fig. S7). The pump light was filtered out via a long-pass filter, and the output emission was collected at normal incidence, directed with a set of mirrors, and focused onto the entrance slit of a spectrograph comprised of an Acton 2150i spectrometer (150 mm focal length) and an sCMOS camera (PCO edge 3.1). For zero order diffraction measurements, the entrance slit was fully opened, while for lasing and spectral characterization, the slit was set to 50 µm, resulting in a spectral resolution of ~ 0.7 nm.
Data availability
The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.
7,277.2
2022-12-23T00:00:00.000
[ "Physics" ]
Insight into treatment of HIV infection from viral dynamics models
Summary
The odds of living a long and healthy life with HIV infection have dramatically improved with the advent of combination antiretroviral therapy. Along with the early development and clinical trials of these drugs, a new field of research emerged called viral dynamics, which uses mathematical models to interpret and predict the time-course of viral levels during infection and how they are altered by treatment. In this review, we summarize the contributions that virus dynamics models have made to understanding the pathophysiology of infection and to designing effective therapies. This includes studies of the multiphasic decay of viral load when antiretroviral therapy is given, the evolution of drug resistance, the long-term persistence of latently infected cells, and the rebound of viremia when drugs are stopped. We additionally discuss new work applying viral dynamics models to new classes of investigational treatment for HIV, including latency-reversing agents and immunotherapy.
| INTRODUCTION
HIV, the causative agent of AIDS, infects nearly 40 million people worldwide 1 and represents one of the highest overall global burdens of disease. 2 After an estimated entry into the human population in the early 20th century, 3 … viral loads peak during acute infection and then settle over a period of a few weeks to a "setpoint" typically between 10^3 and 10^6 c/mL, where they can remain relatively stable for many years. 9 During this time, viral populations diversify and diverge from the strains that founded infection, 10 often displaying population genetic signs of strong selection. 11,12 CD4+ T cells slowly decrease over the course of chronic infection and eventually become so low (<200 cells/µL blood) that opportunistic infections occur and the individual is classified as having AIDS. Early in the epidemic, these characteristic trends inspired the use of mathematical models to understand these dynamics and help generate ideas about how to treat the infection.
Mathematical models are sets of equations or rules that describe how different entities in a system interact and change over time. 15 Different models may consider dynamics at very different scales, from individual molecules to cells to people to countries. Most commonly, models are formulated as systems of nonlinear differential equations or as sets of stochastic reactions constituting a Markov process. Roughly speaking, the use of models in biology can be divided into two cases. In one scenario, models may be constructed with the goal of explaining patterns that are observed in existing data, perhaps to generate and compare hypotheses about the mechanisms that lead to the observed data or to estimate values of particular model parameters. While this approach has the advantage of allowing direct comparison of models with data, it has the downside that it is generally possible to create a model that reproduces observed data, but this does not mean that the model is correct or useful. Alternatively, models may be constructed in the absence of directly related data, by starting from a basic mechanistic understanding of the biological processes involved and choosing only the processes considered most critical to the outcome. Values for reaction rates can ideally be taken from direct measurement of individual steps in the process.
Constructing such a model is a formal way of integrating often disparate data into a single framework, and it can be used to predict the outcomes of studies that have not yet been conducted based on the optimal use of prior information. Ideally, models can be developed and refined by iterating between these two approaches. In this paper, we will review some examples of how mathematical models have improved our understanding of HIV treatment, including both successes and failures. The models we will discuss are commonly called "viral dynamics" models and track levels of virus and immune cells over time within individual infected people or animals (and thus are often referred to as "within-host" models). A large body of other work that will not be discussed here uses "between-host" models to describe how HIV spreads between individuals in a population. The first half of the paper will focus on antiretroviral drugs, which are still the only approved drugs for treating HIV. The second half of the paper will discuss investigational therapies being tested with the hope that they may one day replace combination antiretroviral therapy (ART) by permanently curing the infection. Many other excellent reviews of viral dynamic modeling of HIV exist in the literature. Here, we do not attempt to cover the entire field but rather to detail some topics we personally have studied or feel are illustrative examples of these methods.
| Basic viral dynamics model
When fully suppressive therapy is started, the basic model predicts a decay of viral load of the form
V(t) = V0 [c e^(−d_I t) − d_I e^(−c t)] / (c − d_I),
where V0 is the viral load at the time of therapy initiation. Thus, the decay dynamics only depend on the lifespans of free virus and infected cells: viral load will decay with the slower of these two rates after a shoulder phase approximately equal to the length of the shorter lifespan. Since the lifespan of free virus is estimated to be around 1/c ~ 1 h, 26 but the observed decay rate is around 1/day, we must have c > d_I and d_I ~ 1/day. When this decay was first observed and interpreted in the context of this model, [29][30][31] it was very surprising that virus-producing cells had such a short lifespan. This lifespan implies that many new cells must be infected each day to maintain setpoint viremia (estimates of d_I I at setpoint follow from the model in Figure 2 19); these numbers allow for a tremendous amount of diversity to be generated, explaining the rapid rates of evolution observed.
Despite these and many other insights into HIV infection that have come from the viral dynamics model, it is important to note that the model does make a number of unrealistic assumptions. For example, this model assumes that cells start producing virus immediately upon being infected, whereas in reality a cell must pass through multiple stages of the viral lifecycle before infectious virions are released. Additions to this model include this time delay, [32][33][34] which has many interesting effects, but most importantly changes the relationship between the early viral growth rate and estimates of R0. 7 CD4+ T cells obey very simplified dynamics in these equations, but are actually governed by more complicated homeostatic mechanisms that increase cell proliferation when numbers get low. 35,36 While CD4+ T cell levels can decline dramatically during chronic infection, generally only activated cells are highly susceptible to infection, and only a very small fraction of them are infected at any given time (around 1/1000). 37,38
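For readers who want to experiment with these dynamics, the following sketch integrates the standard target-cell-limited viral dynamics model with SciPy and compares the on-therapy decay to the closed-form biphasic solution quoted above. All rate values are illustrative order-of-magnitude choices in the spirit of the modeling literature, not fits to any particular dataset.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rates (per day), chosen only so the model has a plausible setpoint.
lam, d_T = 1e4, 0.01        # target-cell supply (cells/mL/day) and death rate
beta = 1e-7                 # infection rate constant (mL/virion/day)
d_I, p, c = 1.0, 1e3, 23.0  # infected-cell death, virion production, virion clearance

def rhs(t, y, eps):
    """Basic model; eps is treatment efficacy (eps = 1 blocks all new infection)."""
    T, I, V = y
    return [lam - d_T*T - (1 - eps)*beta*T*V,
            (1 - eps)*beta*T*V - d_I*I,
            p*I - c*V]

# Pre-therapy steady state (eps = 0), then 14 days of fully effective therapy (eps = 1).
T0 = c*d_I/(beta*p); I0 = (lam - d_T*T0)/d_I; V0 = p*I0/c
sol = solve_ivp(rhs, (0, 14), [T0, I0, V0], args=(1.0,), dense_output=True)
t = np.linspace(0, 14, 100)
V = sol.sol(t)[2]
# Closed-form biphasic decay for eps = 1, as in the equation above:
V_exact = V0*(c*np.exp(-d_I*t) - d_I*np.exp(-c*t))/(c - d_I)
```

After the short shoulder set by the ~1 h virion lifespan (1/c), the numerical and analytic curves both decay at the slower infected-cell death rate d_I, which is why the first-phase slope measures infected-cell turnover rather than drug potency.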
Including more of these details can improve the agreement between model predictions and observed CD4 counts but still cannot explain the entire progression to AIDS. 39 Infected cells and free virus are not generally cleared at a constant rate throughout infection, because they are targeted and cleared by adaptive immune responses that expand in response to infection. Many models of antiviral immunity have been developed to explain different features of infection. 12,19,40,41 Inclusion of immune system effects is needed to reproduce the large drops from peak viremia to setpoint 42,43 and explain patterns of viral evolution (eg, References 40,44,45).
When treatment reduces R0 < 1 in this model, the simplest forms of the model predict that infection will eventually be completely cleared. However, early studies demonstrated that no matter how long antiretroviral therapy is given and plasma viral levels remain undetectable by standard clinical assays, the infection always returns once therapy is stopped. 46,47 This was found to be due to the presence of a "latent reservoir" of integrated proviral genomes in resting memory CD4+ T cells. These latent genomes are not transcribed into mRNA and translated into protein to complete the viral lifecycle, owing to the quiescent state of these cells. 48 However, upon cellular activation, transcription and translation can resume. Latently infected cells can persist despite decades of therapy, 49,50 and reactivate later to restart infection. [51][52][53] Consequently, antiretroviral therapy is not curative and currently must be taken for life. Models that include viral latency are now common in studies of both antiretroviral therapy and new curative strategies (Reference 54 and discussed in a later section). Interestingly, many of these more complicated facets of infection can actually be inferred from looking more closely at viral load decay kinetics.
Further insight has been gained by comparing viral load decay curves during treatment with and without the integrase inhibitor (II) class of drug. Early on after this class was introduced, it was noticed that viral loads became suppressed faster than with reverse-transcriptase inhibitor (RTI) or protease inhibitor (PI) therapy. This was initially taken as evidence that these drugs were more efficacious, but for the reasons detailed above (Figure 2, lack of dependence of decay curves on drug efficacy), modelers cautioned against this interpretation and hypothesized that the altered kinetics may be due to the later stage in the lifecycle at which the integrase inhibitor class acts. [63][64][65] Recent work by Cardozo et al. 58 used densely sampled longitudinal viral load data 66 during treatment with either (a) 3 RTIs + 1 PI, (b) 1 II, or (c) 2 RTIs + 1 II to compare various models fit to the decay curves. Based on the various alterations in kinetics seen with the II (first-phase viral decay separating into two phases, (1a) and (1b), and second-phase decay occurring later and slower), they identified the model that fit the data best without unnecessary complexity. They concluded that the virus infects two distinct cell subsets, one with a fast rate of integration and another with a slow rate, but that once integration occurs, production of virions proceeds at similar rates in each subset. Additionally, their results suggest that the decay curves can only be explained if integrase inhibitors are not 100% effective even at the high concentrations administered, so that some integration proceeds slowly even in the presence of the drug.
This agrees with direct measurements of drug efficacy in ex vivo assays (discussed in the next section), 67 and could be due to the ability of HIV genes to be expressed at low levels from unintegrated viral DNA. In Figure 3, we show the infection model that has emerged from these combined studies and the decay curves that are produced under different treatment regimes.
| How efficacious are antiretroviral drugs?
HIV drugs rapidly reduce viral loads, but they do not eliminate all of … These results were surprising for a few reasons. First, HIV drug efficacy (as measured by older assays) was previously only reported in terms of the IC50, but inhibition at the higher concentrations that are required clinically is highly dependent on the slope as well (Figure 4). The total viral inhibition at clinical drug levels calculated from these assays is higher in drug combinations recommended for first-line treatment and in those that outperform others in head-to-head randomized clinical trials. 67
In situations where drug levels are suboptimal, viral replication can occur and drug-resistant variants can arise. 74 Resistance is not an all-or-nothing phenomenon, and most mutations confer only partial resistance. To quantify the degree of resistance, viruses can be generated in the laboratory with specific suspected drug resistance mutations and then subjected to the same dose-response curve measurements described above. 75 Overall, the dose-response curve shifts in three possible ways for each resistant strain. In the absence of drug, mutant strains tended to have lower infection rates than wildtype strains. This "cost of resistance" is well documented in many systems and occurs because of compromises in the function of viral proteins that occur when they undergo amino acid changes to avoid drug effects. [76][77][78][79][80] This fitness cost shifts the entire dose-response curve down (Figure 4C).
| How does antiretroviral efficacy and adherence influence treatment outcomes?
Dose-response curves tell us how much infection is instantaneously … Actively infected cells transition into latent infection at rate γ, a is the rate at which latently infected cells reactivate, and d_L is the death rate of latently infected cells. Q is a matrix that includes information both on the mutation rate and the genetic structure of the population; ie, Q_ij is the probability that a cell initially infected by a virion of genotype i ends up carrying genotype j due to mutation during the reverse transcription process. The rates governing latently infected cells tend to be much smaller than those for activated cells or virus (eg, d_L, γ, a ≪ d_I, d_T). Even without dynamically simulating such a model, important insight can be gained on potential treatment outcomes just from the relative dose-dependence of mutant and wildtype viral fitness 69,89 (Figure 4C,D). These proxies are significantly better than simply measuring time-averaged drug concentration, which misses the highly nonlinear relationship between drug levels and viral fitness. However, they still have limited predictive power, since they ignore the fact that resistant strains do not always exist but instead must be generated stochastically via mutation before being available to be selected, and can go extinct if outcompeted temporarily. 69 Consequently, the specific time-course of drug levels can influence outcomes.
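The role of the slope parameter can be made concrete with the median-effect (Hill-type) dose-response form that underlies these assays. The concentrations and slope values below are illustrative, and "IIP" denotes the instantaneous inhibitory potential, the log10 of the fold-inhibition at a given drug level.

```python
import numpy as np

def fraction_unaffected(D, IC50, m):
    """Median-effect dose-response: fraction of infection events not blocked by the drug.
    m is the slope (Hill coefficient); slopes differ substantially between drug classes."""
    return 1.0 / (1.0 + (D / IC50) ** m)

D = 100.0  # drug concentration, expressed as a multiple of the IC50
for m in (1.0, 2.0, 3.0):
    f = fraction_unaffected(D, 1.0, m)
    print(f"slope m = {m}: IIP = {np.log10(1.0 / f):.1f} logs of inhibition")
# At 100 x IC50, m = 1 yields ~2 logs of inhibition while m = 3 yields ~6 logs:
# at clinical concentrations the slope, not just the IC50, controls efficacy.
```

The same function also makes the resistance trade-offs visible: multiplying IC50 models a resistance shift, while multiplying the no-drug infection rate by a factor below one models the fitness cost that shifts the whole curve down.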
More predictive models of viral dynamics under drug treatment can be created by (a) moving from differential equations, which assume populations can be arbitrarily small and all processes occur … Beyond the overall adherence level, more detailed characteristics of the drug time-course can influence treatment outcomes. Wahl and Nowak 89 showed that resistant strains are more likely to flourish when drug doses are taken more evenly, as opposed to in a more "clumped" fashion, even when the total fraction of doses taken is the same (assuming that resistant strains always exist). When drugs are given in combination, the overlap between missed doses, which can differ depending on whether the drugs are packaged together in a "combo-pill" or allowed to be taken separately, can determine whether or not a drug combination is "resistance-proof". 69 Long-acting therapy, which is taken much less frequently than current daily dosing thanks to extended half-life formulations, is currently under development, 101 and there are worries it may be more prone to resistance development in the presence of missed doses. Models can be used to explore this possibility; preliminary investigation of a once-weekly formulation of the drugs dolutegravir and raltegravir suggests failure rates should be similar to daily pills with similar average drug concentrations. 102 The periodic highs and lows of drug levels during regular therapy can also promote resistance in an unexpected way. For example, viral populations may be able to evolve the ability to "synchronize" their lifecycle with the drug period, so that they only undergo a particular lifecycle stage when the drug level blocking it is at its lowest, and thereby avoid the drug effect. 34 Whether this effect is responsible for any clinical resistance patterns for HIV is still unknown.
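As a minimal sketch of the kind of pharmacokinetic forcing these adherence models use, the following simulation draws each scheduled dose as an independent coin flip and lets the drug decay exponentially between doses. The half-life, dose size, and adherence level are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def drug_levels(n_days, adherence, half_life_days, dose=1.0, steps_per_day=24):
    """Drug concentration under imperfect adherence: each scheduled daily dose is
    taken independently with probability `adherence`, with first-order elimination
    in between. Units are arbitrary (one 'dose' of concentration per pill)."""
    k = np.log(2) / (half_life_days * steps_per_day)   # per-step decay rate
    C = np.zeros(n_days * steps_per_day)
    for t in range(1, C.size):
        C[t] = C[t - 1] * np.exp(-k)
        if t % steps_per_day == 0 and rng.random() < adherence:
            C[t] += dose
    return C

C = drug_levels(n_days=60, adherence=0.7, half_life_days=1.0)
print(f"fraction of time below 10% of a unit dose: {(C < 0.1).mean():.2f}")
```

Feeding such a concentration trace through a dose-response curve into a stochastic infection model is what lets these frameworks distinguish "clumped" from evenly spaced missed doses, and daily pills from long-acting formulations with the same average concentration.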
| MODELING NOVEL THERAPIES TO PERTURB LATENT INFECTION OR BOOST IMMUNE RESPONSES
There are two basic ideas for how this could be accomplished. One approach, often called a "sterilizing cure", is to purge the body of enough residual latently infected cells that the chance that infection will be rekindled when treatment is stopped is extremely low. Another approach, often called a "functional cure", is to equip the body with the ability to control the infection, rendering small amounts of virus released from reservoirs inconsequential. 105 As was the case for antiretroviral therapy, mathematical models are being used to predict how and when these therapies would work, interpret their outcomes in trials, and help guide drug development efforts (see related reviews 54,106) (Figure 6).
FIGURE 6 Schematic of the barriers to HIV cure and conceptual approaches to cure. Combination ART rapidly suppresses viral loads (solid red) to below clinical detection limits, but low-level viremia released from long-lived latently infected cells continues. Whenever therapy is stopped, viral load rebounds (solid red). "Sterilizing cure" approaches aim to reduce or completely clear the latent reservoir, or render cells in it incapable of reactivating (possible infection scenario shown in bottom red dotted line). "Functional cure" approaches aim to equip the body with the ability to control reactivating infection before full-blown rebound occurs (effectively by reducing R0 < 1) (three possible control scenarios shown in red dotted lines).
| What maintains the latent reservoir and how can we reduce or clear it?
One branch of HIV cure research is focusing on developing therapeutics that can perturb the latent reservoir, ideally reducing its size or activity such that the risk of latently infected cells reactivating and rekindling infection when ART is stopped is removed. 107 In imagining such therapies, researchers have sought to better understand the processes that maintain a nearly stable population of latent cells despite decades of treatment and extremely low levels of detectable virus. The latent reservoir persists mainly as proviruses integrated into the genomes of infected resting memory CD4+ T cells. The frequency of these latently infected cells is around 1 per million cells 53,108,109 (depending on the particular assay used and the requirement for virus functionality), and its size decays with a half-life of 44 months on average. 49,50 The majority of evidence supports the conclusion that this reservoir is maintained by the underlying dynamics of these cells, and not by ongoing viral replication, which could otherwise lead to continual reservoir seeding despite antiretroviral therapy (References 54,110,111). While it was originally believed by many that latently infected cells must be intrinsically long-lived, since cell division was expected to reactivate viral expression and lead to eventual cell death, a series of studies over the past few years have convincingly demonstrated that cells in the reservoir can proliferate while remaining latently infected (References 110,112). These studies have identified multiple latently infected cells, even in small samples, with virus integrated into identical sites [113][114][115] in the genome, or with sequence-identical virus, [116][117][118][119] two findings that would be exceedingly unlikely to occur in two independent infection events and that likely reflect division of infected cells.
The first class of drugs to be investigated to target latent infection was the so-called "latency-reversing agents". The rationale for these drugs is to increase the rate at which HIV expression is restarted in latently infected cells. If these drugs are given along with antiretroviral therapy, then the reactivated cells will release virus, but the released virus will not be able to spread infection to other cells. Eventually, the productively infected cells should die, either by viral cytopathic effects or by cytotoxic immune responses. 120 Now that the role of proliferation in maintaining the reservoir has been established, there is renewed interest in developing "antiproliferative" therapies for HIV, which would reduce the ability of latently infected cells to self-renew. Mathematical models have been developed to predict how effective these treatment strategies are likely to be. 121,122 Two recent papers used a similar approach, which we will summarize here. If it is assumed …
(Figure caption, partially recovered) Red and blue lines are for alternate parameter sets. (B) Hypothetical therapy that increases the activation rate (a) of latently infected cells during ART. When pretherapy a is varied (to 10a* or a*/10), p is kept constant at p* but d is adjusted to keep δ the same. (C) Hypothetical therapy that decreases the proliferation rate (p) of latently infected cells during ART. When pretherapy p is varied (to 10p* or p*/10), a is kept constant at a* but d is adjusted to keep δ the same. (D/E) Comparison of the relative magnitude of dynamic rates for the corresponding scenarios in the figure above. The height of each bar is proportional to the log10 of the value of the rate. The bar above the horizontal axis represents the process that contributes to reservoir increase ("gain rate", p), whereas bars below are processes that contribute to reservoir decay ("loss rates", a, d).
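A back-of-the-envelope calculation based on the 44-month half-life quoted above shows why unassisted reservoir decay is far too slow for a sterilizing cure, and what a hypothetical therapy that multiplied the net decay rate would change. The fold-speedup below is a made-up illustration, not an estimate for any actual latency-reversing or antiproliferative agent.

```python
import numpy as np

t_half = 44.0               # months: measured reservoir half-life on ART
delta = np.log(2) / t_half  # net decay rate (per month); in the model above, delta = a + d - p

def years_to_reduce(log10_reduction, fold_speedup=1.0):
    """Years for the reservoir to fall by `log10_reduction` logs if a therapy
    multiplies the net decay rate by `fold_speedup` (hypothetical)."""
    return log10_reduction * np.log(10) / (delta * fold_speedup) / 12.0

for logs in (2, 4, 6):
    print(f"{logs}-log reduction: {years_to_reduce(logs):.0f} y at the natural rate, "
          f"{years_to_reduce(logs, fold_speedup=10):.1f} y with a 10x faster net decay")
```

At the natural rate, even a 2-log reduction takes roughly a quarter of a century, which is why candidate interventions must change the balance of reactivation, death, and proliferation by orders of magnitude rather than incrementally.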
Latency-reversing agents have had some success in increasing HIV gene expression but have not impacted reservoir size, [124][125][126] perhaps because of their lack of specificity for the HIV promoter, posttranscriptional blocks, and lack of recognition of reactivated cells by cytotoxic immune responses. Antiproliferative therapies are still at an early stage, but it will likely be difficult to find compounds that substantially reduce division of infected cells without being overtly immunosuppressive or triggering compensatory mechanisms that maintain cell population sizes. The differential-equation-based model above can give estimates for the expected decay rate of the latent reservoir, but to achieve cure, the probability that at least one cell remaining in the reservoir reactivates and restarts high-level infection before dying must be made vanishingly small. To estimate these odds, a stochastic model is needed. An example of this type of calculation is given in Hill et al. 123 Like the above calculation, the exact relationship between reservoir size and probability of cure predicted from the stochastic model is highly dependent on estimates of the underlying parameter values. | What can viral dynamics tell us about the mechanism of action of new immunotherapies? Another approach to treat and ideally cure HIV infection involves immunotherapies, which perturb the immune response to infection, either by boosting antiviral immune responses or by reversing infection-induced immune suppression. 127 There are many types of immunotherapeutic agents, ranging from small molecules that act on immune signaling pathways, to biologics like broadly neutralizing antibodies, checkpoint inhibitors, or vaccines, to cell therapies including chimeric antigen receptor T cells. These drugs are being examined alone or in combination with ART for their ability to promote either sterilizing or functional cures for HIV. Even in the few trials that have already been conducted, such as the recent studies by Caskey et al., 128 mathematical models are helping to understand the mechanism of these therapies. A few earlier studies conducted this type of "structured" or "analytic" treatment interruption and provided proof-of-principle for using rebound as a measure of preinterruption infection status. In the AUTOVAC study, individuals on long-term suppressive ART underwent a series of consecutive treatment interruptions. 46 During each interruption, viral loads rebounded, and once levels passed a threshold, therapy was restarted for three months before another interruption. This study found that in the second and third interruptions, the rate of exponential increase in viral load was decreased compared to the first interruption (doubling time increased from 1.4 to 1.9 days), whereas the inferred initial level of viremia from which rebound started, which is directly related to the "reservoir" size and exit rate, was higher (by ~10-fold). These findings suggest that during later interruptions, the immune response may have been boosted compared to the first, which would be expected to slow the growth rate of the rebounding virus.
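A back-of-the-envelope sketch shows how the two AUTOVAC observations, a slower doubling time and a ~10-fold higher inferred starting viremia, trade off in the time until rebound becomes detectable. The initial viremia V0 and the detection threshold below are hypothetical round numbers chosen only for illustration:

```python
import numpy as np

# Exponential rebound V(t) = V0 * exp(g t); a doubling time T2 gives g = ln2 / T2.
# Doubling times (1.4 -> 1.9 days) and the ~10-fold higher starting viremia are the
# AUTOVAC observations quoted above; V0 and v_detect are made-up round numbers.
def time_to_detection(v0, t_double, v_detect=50.0):
    g = np.log(2) / t_double            # exponential growth rate per day
    return np.log(v_detect / v0) / g    # days until V(t) reaches v_detect

first = time_to_detection(v0=0.01, t_double=1.4)
later = time_to_detection(v0=0.10, t_double=1.9)  # ~10-fold higher start, slower growth
print(f"first interruption:  {first:.1f} days to detection")
print(f"later interruptions: {later:.1f} days to detection")
```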
FIGURE 8 Modeling viral rebound following ART and immunotherapy. (A) Design of a study in which two novel immunotherapies, a TLR7 agonist and a therapeutic vaccine (Ad26/MVA), were administered during ART treatment of SIV-infected rhesus macaques, followed by a treatment interruption. 130 The time-course of viral loads for one example animal is shown. (B) A mathematical model of viral dynamics augmented to include an antiviral immune response that is stimulated in a viral-load-dependent way. (C) Example time-courses of viral load for one animal from each treatment group, along with fits to the model. Each animal was fit to the model individually in a Bayesian framework (with six estimated parameters), and maximum a posteriori values for each parameter were used to plot the results. (D) Group mean values (for 8-9 animals per group) and standard deviations of two parameters that displayed significant variation between groups. Although rebound after long-term ART is generally assumed to arise from reactivated latently infected cells, it is unlikely that these short interruptions substantially increased the reservoir size compared to everything that was seeded before initial therapy. 131 In a separate macaque study, 136 therapy was started at a range of times between 3 days and 2 weeks after infection, and then after 6 months treatment was withdrawn. All animals experienced viral rebound, but the kinetics differed between groups. We would expect that animals starting treatment earlier would have smaller latent reservoir sizes (less opportunity for seeding) and weaker antiviral immune responses. Both experimental assays and fitting viral dynamics models to rebound trajectories supported these hypotheses: very early initiation of therapy led to the steepest increase in viremia during rebound but the longest delay until the first detectable viral load, which are the predicted effects of a lower rate of reservoir exit and an increased effective viral fitness (e.g., Figure 8). Neither very early therapy initiation nor repeated treatment interruption is an effective or scalable intervention, but these studies do provide a proof-of-concept that viral rebound kinetics reflect preinterruption interventions, and they have informed the analysis of two recent preclinical immunotherapy studies. The main drug of interest in these studies was an agonist of Toll-like receptor 7 (TLR7), which is involved in the innate immune system response to viral infections. In the first study, the TLR7 agonist was given to SIV-infected macaques during suppressive ART, and later all treatments were stopped. 137 Most animals rebounded in both treatment (TLR7 + ART) and control (ART only) groups, and mathematical modeling of rebound kinetics showed that rebound trajectories were altered slightly in groups receiving the TLR7 agonist, in a way that suggested a partial reduction in the latent reservoir along with alterations to target cell levels and antiviral immune responses. 137 Consistent with these suggestions, many animals experienced transient increases in viral load during TLR7-agonist administration, despite ART, suggesting that this therapy had an unexpected latency-reversing effect, and two of the thirteen animals in the intervention group never had detectable viremia after therapy cessation. In a follow-up study, 130 the TLR7 agonist was tested along with a therapeutic vaccine product (both given during ART). In some animals treated with the vaccine, with or without the TLR7 agonist, viremia rebounded rapidly to high levels but was then controlled to very low or completely undetectable levels.
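As discussed next, explaining this rebound-then-control pattern requires augmenting the basic viral dynamics model with an immune compartment. The sketch below shows one minimal way to do this, adding an effector population E that expands in a viral-load-dependent way and kills infected cells. Every parameter value here is hypothetical and chosen only so that the system can exhibit rebound followed by control; none are the fitted values from the cited study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Target cells T, infected cells I, free virus V, immune effectors E.
# Effectors expand in proportion to E*V (antigen-driven) and kill infected cells.
def rhs(t, y, lam=100.0, dT=0.1, beta=2e-4, dI=1.0, pV=100.0, c=10.0,
        w=5e-4, dE=0.05, k=0.5):
    T, I, V, E = y
    dTdt = lam - dT * T - beta * T * V
    dIdt = beta * T * V - dI * I - k * E * I   # killing by effectors
    dVdt = pV * I - c * V
    dEdt = w * E * V - dE * E                  # viral-load-dependent expansion
    return [dTdt, dIdt, dVdt, dEdt]

# Rebound from a tiny amount of reactivated virus, with a primed effector pool.
sol = solve_ivp(rhs, (0.0, 200.0), [1000.0, 0.0, 1e-3, 1.0], max_step=0.1)
V = sol.y[2]
print(f"peak viral load ~{V.max():.0f}, final viral load ~{V[-1]:.2e}")
```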
Such rebound-then-control dynamics are never produced by the basic viral dynamics model, which always leads to chronic infection. Alternative models were therefore explored to explain the observations. A model that includes a population of cells belonging to the adaptive immune response, which expand in response to viral antigen and act to reduce infection, could explain the kinetics, and it allowed for estimates of the relative contributions of reductions in the latent reservoir versus enhanced immunity to the altered kinetics. 130 Overall, the modeling analysis suggested that the role of the vaccine was not in boosting clearance of latently infected cells prior to therapy interruption, but in creating an effective primed population of immune cells that does not exist in animals treated only with ART. While these models have provided insight into treatment interruption trials as a way to evaluate HIV cure strategies, there is significant room for improvement in future studies. A major limitation is the lack of detailed longitudinal data on the levels and functionality of a panel of components of the immune response, which would allow modelers to conduct more formal hypothesis testing about potential mechanisms. The models used to explain these data are completely deterministic, whereas reactivation from latency, especially following reservoir-reducing interventions, may be highly stochastic. 123,138 They also track only a single strain of virus, but it is possible that fitness differences between the multiple strains that exit the reservoir and contribute to rebound, or new strains that arise via mutation early in rebound, contribute to viral and immunologic dynamics. For example, the number of antigenically distinct strains that reactivate may impact the chance of immune control. Another limitation is the uncertainty about the time it takes antiretroviral therapy to effectively "wash out" of the system after the last dose is taken. Hence, the relative contributions of drug washout, waiting time to latent cell reactivation, and time for infection to grow to the detection limit are hard to separate, which limits the quantitative interpretation of reservoir reactivation rates estimated from models. Closer connections between modelers and experimentalists in the early-stage design of HIV cure trials will help ensure that mathematical models can be as informative as possible. | CONCLUSIONS Mathematical models have been used to understand the dynamics of HIV within individual patients ever since the infection was first identified. These "viral dynamics" models have provided many important insights. ACKNOWLEDGEMENTS We thank Alan Perelson and Fabian Cardozo for helpful discussions and feedback on the paper. This work was supported by NIH grants (DP5OD019851, P01AI131385, P01AI131365, and 5P30AI060354-15), and a Bill & Melinda Gates Foundation award (OPP1148627).
6,668.2
2018-08-11T00:00:00.000
[ "Medicine", "Mathematics" ]
Reconfigurable edge-state engineering in graphene using LaAlO$_3$/SrTiO$_3$ nanostructures The properties of graphene depend sensitively on doping with respect to the charge-neutrality point (CNP). Tuning the CNP usually requires electrical gating or chemical doping. Here, we describe a technique to reversibly control the CNP in graphene with nanoscale precision, utilizing LaAlO$_3$/SrTiO$_3$ (LAO/STO) heterostructures and conductive atomic force microscope (c-AFM) lithography. The local electron density and resulting conductivity of the LAO/STO interface can be patterned with a conductive AFM tip, and placed within two nanometers of an active graphene device. The proximal LAO/STO nanostructures shift the position of the graphene CNP by ~$10^{12}$ cm$^{-2}$, and are also gateable. Here we use this effect to create reconfigurable edge states in graphene, which are probed using the quantum Hall effect. Quantized resistance plateaus at $h/e^2$ and $h/3e^2$ are observed in a split Hall device, demonstrating edge transport along the c-AFM-written edge that depends on the polarity of both the magnetic field and the direction of the currents. This technique can be readily extended to other device geometries. Graphene has proved to be a powerful and versatile platform for studying condensed matter phenomena due to the unique honeycomb crystal structure and Dirac fermion behavior of its electrons. The unique Dirac cone band structure makes it possible to tune the carrier density continuously between electrons and holes. This duality of carriers results in many exotic properties of graphene, such as Klein tunneling, [3][4][5][6] edge state mixing, 7-10 and recently the "wedding cake" structure of quantum Hall states. 11 Central to many of these experimental findings is the ability to control the charge neutrality point (CNP) by electrical gating. Another well-studied two-dimensional electronic system is the LaAlO3/SrTiO3 (LAO/STO) heterostructure, which supports a high-mobility 2D electron layer 12 with a wide range of additional properties including magnetism, 13 tunable spin-orbit coupling, [14][15][16] superconductivity, 17 and BEC-like superconductivity. 18 The two-dimensional electron gas (2DEG) at the interface is globally tunable with a back-gate voltage and locally tunable from the top LAO surface using a conductive atomic force microscope (c-AFM) tip, when the LAO thickness is close to the critical thickness of ~3 unit cells. 1,19,20 Using c-AFM lithography, a wide range of devices can be fabricated at the LAO/STO interface, such as a single-electron transistor, 21 a broadband terahertz source and detector, 22,48 a one-dimensional interference device, 23,24 and an electron waveguide. 25 This technique can also be applied to other complex oxide heterostructures. 26 There have been efforts to locally control the CNP of graphene on silicon or hexagonal boron nitride (h-BN) substrates using AFM 27 or STM. 28 However, those doping techniques are either non-reversible or can only be performed in ultra-high vacuum and at low temperature, which limits their applications. In this work, we demonstrate how local control over the metal-insulator transition in LAO/STO can be used to reversibly pattern interacting edge channels in a proximal graphene layer under ambient conditions. The graphene used in this work is grown by chemical vapor deposition (CVD) on oxygen-free electronic-grade copper flattened with a diamond turning machine.
29 Then, graphene is coated with the perfluoropolymer Hyflon AD60 and transferred onto the LAO/STO surface with the wet-transfer technique. 30 Graphene is patterned into Hall bars by standard photolithography. Hyflon is removed from graphene with FC-40 after patterning. Particles and contaminants on graphene from wet transfer and photolithography are brushed away using a contact-mode AFM scan sequence. After cleaning, the 4 Å atomic steps of the LAO surface underneath graphene are clearly resolvable. 30 The quality of the graphene is similar to that of other samples prepared by similar methods, with mobility µ > 10 000 cm^2 V^-1 s^-1 at 2 K. 30 Figures 1(a) and 1(b) illustrate the c-AFM writing setup. Graphene is scanned with a conductive doped-silicon tip in contact mode with a contact force of 15-20 nN and a scanning speed between 1 µm/s and 10 µm/s. The bias voltage applied to the tip is set to +17 V (for creating a conductive LAO/STO interface) or −5 V (for restoring an insulating LAO/STO interface while avoiding damage to graphene 31 ). After each raster scan of the graphene area, the CNP of the graphene in the written region is shifted. The mechanism for shifting the CNP is believed to be essentially the same as for tuning the LAO/STO interface without graphene. 2,30,32 Under ambient conditions, when a positive voltage is applied to the tip while graphene is grounded, water molecules adsorbed on the graphene surface are dissociated into protons, which are transferred through the graphene and mediate the metal-insulator transition in the LAO/STO while contributing to a shift in the chemical potential in the graphene layer. 2,[32][33][34] The CNP can be further shifted by dynamically changing the electron density in the LAO/STO layer. STO has a high dielectric permittivity at low temperature (ε_r ≈ 10 000), 35 which enables the graphene to be easily tuned with a back-gate voltage V_bg applied to the bottom of the LAO/STO substrate [Fig. 1(b)]. However, this gating method is subject to significant hysteresis 36,37 [see Fig. S1(a), inset], and hence the back-gate voltage is not a reliable indicator of the doping level with respect to the CNP. In addition, the c-AFM lithography itself will dope the graphene, even when the back-gate voltage is held fixed. For these reasons, we rely on the four-terminal resistance of the graphene to monitor the carrier density change in situ during the c-AFM writing process, which takes place under the condition V_bg = 0 V (more details are discussed in the supplementary material). Once the c-AFM writing is finished, the sample is immediately stored in vacuum and cooled to cryogenic temperatures, where the writing is known to persist indefinitely. 2,32 To directly illustrate the effect of c-AFM writing, we scan half of the graphene device with V_tip = +17 V, as shown in Fig. 1(c). The graphene resistance is then measured as a function of back-gate voltage at T = 2 K. Figure 1(d) shows a control measurement where the resistance is measured before c-AFM scanning. The peak at V_bg = 5 V clearly indicates the CNP. Figure 1(e) is measured after the c-AFM writing shown in Fig. 1(c), and two peaks can be observed. The additional peak on the left-hand side is attributed to the c-AFM writing. The graphene doping from the positively biased c-AFM tip is reversible. After the c-AFM writing and the change in four-terminal resistance are observed, a scan with V_tip = −5 V on the c-AFM tip will partially remove the previous writing effect.
Scans with negative V_tip need to be carefully conducted, and the c-AFM tip should be connected in series with a 1 GΩ resistor, due to the fact that graphene can be oxidized as the anode. 31,38 Also, graphene has to be detached from measurement leads or grounds so that there is no significant current flowing through the graphene. 31 The carrier density in the LAO/STO-doped graphene device is quantified using the Hall effect. As shown in the inset of Fig. 2(a), a graphene/LAO/STO device is prepared with one graphene Hall cross (Hall B) scanned in contact mode with the c-AFM tip biased at +17 V 15 times. A second Hall cross device (Hall A) is measured as a control, where no c-AFM lithography is performed. An electrical gate connected to the back of the 1 mm thick STO substrate is used to adjust the overall carrier density of the graphene device. Magnetotransport experiments are performed at T = 2 K, in an out-of-plane magnetic field (B = 1 T), in order to determine the carrier densities of the two regions. A shift of Δn = 7 × 10^11 cm^-2 is observed, with the patterned area being more n-type. Because Hall device B is locally gated positively, the CNP is shifted to a lower V_bg value (green curve). The carrier densities in both regions can be tuned by the back-gate up to 1 × 10^13 cm^-2 at V_bg = −10 V, in part due to the large dielectric constant of STO (ε_r ≈ 10 000) at 2 K. 35,36 The right ends of the curves are less linear and tend to saturate, due to the shielding effect of the 2DEG at the LAO/STO interface induced by a high positive back-gate voltage. For V_bg < 5 V, the interface of LAO/STO outside the previously written area is insulating, so the back-gate voltage will not be shielded. At sufficiently large magnetic fields, graphene exhibits quantized Hall resistance R_H = h/[(4n + 2)e^2] (n = 0, 1, 2, …) and vanishing longitudinal resistance, as a result of the non-trivial Berry phase and the fourfold degeneracy from electron spin and valley pseudo-spin. [39][40][41] When the two adjacent regions have different Landau level filling factors, for example in a p-n junction in the quantum Hall regime, 7,10 the mixing and equilibration of edge states will produce a non-zero longitudinal resistance, which follows the Landauer-Büttiker formalism. 42,43 In our sample, the Δn = 7 × 10^11 cm^-2 carrier density difference between the two sides is enough to keep them at adjacent Landau level filling factors. Consequently, these two regions have different edge-channel occupancies. As shown in Fig. 2(b), when both regions have the same polarity, the channels present in both regions travel across both regions, while the ones from the higher filling factor circulate only in one region. The longitudinal resistances R_xx1 and R_xx2 measured from the top and bottom of the sample can be described using the Landauer-Büttiker formalism 7,8,10,27,[42][43][44][45][46][47] (details of the derivations can be found in the supplementary material): in the unipolar case, one side shows R_xx = (h/e^2)(1/|ν_2| − 1/|ν_1|) while the other shows zero, where ν_1 and ν_2 are the filling factors of the two regions, equal to ±2, ±6, …. In the case of opposite polarity on the two sides, the device becomes a p-n junction and the current flows in opposite directions on each side; the longitudinal resistances R_xx1 and R_xx2 then become zero on one side and (h/e^2)(1/|ν_1| + 1/|ν_2|) on the other. Figures 3(a) and 3(b) show R_xx1 and R_xx2 in +7 T and −7 T magnetic fields.
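These plateau values can be checked with a few lines of code. The formulas and function names below are ours, and only the magnitudes are computed, since the assignment of the nonzero plateau to R_xx1 versus R_xx2 depends on the field direction:

```python
from fractions import Fraction

# Longitudinal resistance in units of h/e^2 for two quantum Hall regions with
# filling factors nu1 and nu2, per the Landauer-Buttiker expressions above.
def r_xx_unipolar(nu1, nu2):
    """Same carrier polarity on both sides: plateau at |1/|nu2| - 1/|nu1||."""
    return abs(Fraction(1, abs(nu2)) - Fraction(1, abs(nu1)))

def r_xx_bipolar(nu1, nu2):
    """Opposite polarities (p-n junction): plateau at 1/|nu1| + 1/|nu2|."""
    return Fraction(1, abs(nu1)) + Fraction(1, abs(nu2))

print(r_xx_unipolar(-6, -2))  # 1/3 -> plateau at h/(3e^2)
print(r_xx_bipolar(-2, +2))   # 1   -> plateau at h/e^2
```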
When the back gate is swept from −10 V to +10 V, the carrier type in the two regions transits from unipolar (ν_1 = −6, ν_2 = −2) to bipolar (ν_1 = −2, ν_2 = +2) and then unipolar (ν_1 = +2, ν_2 = +6) again. As shown in Fig. 3(a), when the back-gate voltage V_bg is between −2 V and +6 V, the resistance R_xx1 transitions from h/3e^2 to 0 and then to h/3e^2, while R_xx2 transitions from 0 to h/e^2 and then to 0, as predicted by the Landauer-Büttiker formalism. When the magnetic field is reversed, the quantization of R_xx1 and R_xx2 is switched, because of the reversal of the current directions. Figures 3(c) and 3(d) show the swapping of quantization between R_xx1 and R_xx2 when the magnetic field is reversed. FIGURE 3 Longitudinal resistances R_xx1 and R_xx2 [measured as in Figs. 2(b) and 2(c)] in magnetic fields of +7 T and −7 T. In (a), R_xx1 shows a plateau at h/3e^2 when the two regions are unipolar with filling factors |ν| = 2 and 6, respectively. R_xx2 shows a plateau at h/e^2 when the two regions are bipolar with filling factors ν = +2 and −2. The plateaus of the two curves are anti-symmetric with respect to the magnetic field. When the direction of the field is reversed, the resistance values and features are swapped between R_xx1 and R_xx2. (c) and (d) show that the edge-state mixing is well developed at |B| > 2 T. These results are consistent with the graphene edge-state mixing reported in the literature. 9,27 The values of the resistance plateaus are quite close to the theoretical values when the magnetic field is higher than 2 T, suggesting well-defined edge-state equilibrium. The quantization features experience a negligible change over the course of the measurement (>10 h), indicating that the graphene doping is stable in vacuum, similar to the c-AFM writing on bare LAO/STO. 32 In summary, we developed a reversible, spatially controllable graphene doping technique using c-AFM tips on LAO/STO substrates. Graphene edge-state mixing in the quantum Hall regime can be observed with the c-AFM writing. In the future, this technique can be used to locally dope high-mobility graphene with feature sizes as small as 20 nm 2 and create a new family of reconfigurable graphene metamaterials. See the supplementary material for the details of the hysteresis from back-gate voltage sweeps, the resistance and carrier density of graphene as functions of the back-gate voltage, the graphene resistance change during the c-AFM writing process, and the derivations of the longitudinal resistances for edge-state mixing.
2,980.6
2018-11-05T00:00:00.000
[ "Physics", "Materials Science", "Engineering" ]
AN EXISTENCE AND UNIQUENESS OF THE WEAK SOLUTION OF THE DIRICHLET PROBLEM WITH THE DATA IN MORREY SPACES. Let n − 2 < λ < n, let f be a function in the Morrey space L^{1,λ}(Ω), and consider the Dirichlet problem Lu = f in Ω, u = 0 on ∂Ω, where Ω is a bounded open subset of ℝ^n, n ≥ 3, and L is a divergence-form elliptic operator. In this paper, we prove the existence and uniqueness of the weak solution of this Dirichlet problem by directly using the Lax-Milgram lemma and a weighted estimate in Morrey spaces. Let Ω be a bounded and open subset of ℝ^n, and let L^{p,λ}(Ω) denote the Morrey space for 1 ≤ p < ∞ and 0 ≤ λ ≤ n. These Morrey spaces were introduced by C. B. Morrey [1] and still attract the attention of many researchers, who investigate their inclusion properties or applications in partial differential equations [2,3,4,5,6,7,8]. Let f ∈ L^{1,λ}(Ω). In this paper, we investigate the existence and uniqueness of the weak solution to the equation (4), where L is defined by (1) and its coefficients satisfy a certain condition. Eq. (4) is called the Dirichlet problem. Notice that the result in [9] was generalized by the same authors in [12]. In [9,10,11], the authors used a representation of the weak solution, which involves the Green function [13], and proved that this representation satisfies (5) in order to show the existence of the weak solution of (4). Cirmi et al. [14] proved that the weak solution of (4) exists and is unique, and that its gradient belongs to some Morrey space, under the assumption f ∈ L^{1,λ}(Ω) for n − 2 < λ < n. The proof of the existence and uniqueness of the weak solution given by Cirmi et al. used an approximation method. Assuming f ∈ L^{1,λ}(Ω) for n − 2 < λ < n, in this paper we give a direct proof of the existence and uniqueness of the weak solution of the Dirichlet problem (4). Our method uses a functional analysis tool, i.e., the Lax-Milgram lemma, combined with weighted embeddings in Morrey and Sobolev spaces. RESEARCH METHODS The constant C = C(a, b, …), which appears throughout this paper, denotes dependence on the parameters a, b, …; the value of this constant may vary from line to line whenever it appears in the theorems or proofs. Our method relies on a functional analysis tool, that is, the Lax-Milgram lemma, which we state in this section. We start by writing down some properties related to the Lax-Milgram lemma. Next we state two theorems, needed later, concerning estimates for functions in W_0^{1,2}(Ω). The first theorem is Poincaré's inequality (see [15] for its proof) and the second is a subrepresentation formula (see [16] for its proof). We close this section by stating a theorem, slightly modified from [17], which holds for every element of the relevant space. RESULTS AND DISCUSSION To start our discussion, we prove that the bilinear mapping B defined by (6) is continuous and coercive. Lemma 2. The mapping B defined by (6) is continuous and coercive. Proof. Let u ∈ W_0^{1,2}(Ω). We first prove the coercivity property. By using (3) and then Poincaré's inequality, we obtain the coercivity estimate with a positive constant C. Now we prove the continuity property. Let u, v ∈ W_0^{1,2}(Ω). Note that, according to (2), we may apply Hölder's inequality, and the continuity estimate follows. This completes the proof. We need the theorem below to prove that the functional F defined by (7) is a bounded linear functional. This theorem gives a weighted estimate in Morrey spaces where the weight lies in a Sobolev space. A proof of this theorem was given in [11]; however, that proof was not complete, and here we give a complete proof. Combining the stated estimates, the theorem is proved. From Theorem 4, we obtain the following corollary.
where the positive constant C depends, in particular, on ‖f‖_{L^{1,λ}(Ω)}. This means that F is also bounded, and the proof is complete.
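For the reader's convenience, the standard statement of the Lax-Milgram lemma used throughout is reproduced below. The notation (bilinear mapping B, functional F) follows the text, while the constants c1, c2 are ours:

```latex
% Assumes \usepackage{amsmath,amsthm} and \newtheorem{lemma}{Lemma} in the preamble.
\begin{lemma}[Lax--Milgram]
Let $H$ be a Hilbert space and let $B \colon H \times H \to \mathbb{R}$ be a
bilinear mapping for which there exist constants $c_1, c_2 > 0$ such that
\[
  |B(u,v)| \le c_1 \|u\|_H \|v\|_H
  \quad\text{and}\quad
  B(u,u) \ge c_2 \|u\|_H^2
  \qquad \text{for all } u, v \in H .
\]
Then for every bounded linear functional $F \colon H \to \mathbb{R}$ there
exists a unique $u \in H$ such that $B(u,v) = F(v)$ for all $v \in H$.
\end{lemma}
```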
931
2022-09-01T00:00:00.000
[ "Mathematics" ]
MSPEDTI: Prediction of Drug–Target Interactions via Molecular Structure with Protein Evolutionary Information Simple Summary Drug discovery is the process of identifying potential new compounds through biological, chemical, and pharmacological means. Billions of dollars are spent each year on research aimed at discovering, designing, and developing new drugs for a wide range of diseases. However, the research and development of new drugs remain time-consuming and sometimes difficult to complete. With the development of new experimental techniques, huge amounts of data are generated at different stages of drug development. Biomedical research, especially in the field of drug discovery, is currently undergoing a major shift towards "big data" applications of artificial intelligence technologies. Therefore, a key challenge for future drug discovery research is the development of robust artificial-intelligence-based predictive tools for drug–target interactions (DTIs) that can study biomedical problems from multiple perspectives. In this study, a deep-learning-based prediction model for DTIs was designed by combining information on drug structure and protein evolution to provide theoretical support for drug research. Abstract The key to new drug discovery and development is first and foremost the search for the molecular targets of drugs, thus advancing drug discovery and drug repositioning. However, traditional experimental determination of drug–target interactions (DTIs) is a costly, lengthy, high-risk, and low-success-rate undertaking. Therefore, more and more pharmaceutical companies are trying to use computational technologies to screen existing drug molecules and mine new drugs, thereby accelerating new drug development. In the current study, we designed a deep learning computational model, MSPEDTI, based on molecular structure and protein evolutionary information to predict potential DTIs. The model first fuses protein evolutionary information and drug structure information, then applies a deep learning convolutional neural network (CNN) to mine their hidden features, and finally predicts the associated DTIs accurately with an extreme learning machine (ELM). In cross-validation experiments, MSPEDTI achieved 94.19%, 90.95%, 87.95%, and 86.11% prediction accuracy on the gold-standard datasets enzymes, ion channels, G-protein-coupled receptors (GPCRs), and nuclear receptors, respectively. MSPEDTI showed its competitive ability in ablation experiments and in comparisons with previous excellent methods. Additionally, 7 of 10 potential DTIs predicted by MSPEDTI were substantiated by a classical database. These excellent outcomes demonstrate the ability of MSPEDTI to provide reliable drug candidate targets and strongly facilitate the development of drug repositioning and drug development. Introduction Drug research is a problem of global importance. In the past few decades, the drug-targeted therapy strategy has achieved great success [1,2]. Finding specific drugs for targets is the focus of pharmaceutical research and development, which has made an indelible contribution to human health [3]. However, the rate of new drug development has been declining in recent years, and the cost of research and development has been rising [4]. The main reason for this is that the early screening of a large number of drug candidates still relies mainly on time-consuming and labor-intensive experimental methods, and the later discovery of unsatisfactory efficacy or toxic side effects of drugs leads to the failure of development.
Therefore, efficient and high-throughput computational techniques in the early stages of drug research can play an important role in finding targets and saving costs in early development [5][6][7][8]. With the rapid development of bioinformatics, many achievements have been made using computational and simulation approaches to predict DTIs. Quantitative structure–activity relationship (QSAR) modeling utilizes the physicochemical properties or structural parameters of a molecule to quantitatively study the interaction between small molecules and biological macromolecules by mathematical means. Casañola-Marti et al. proposed a QSAR model for predicting anti-tyrosinase activity and demonstrated the effectiveness of the model in subsequent in vitro experiments, which greatly accelerated the biochemical discovery of skin disease treatments [9]. Kar et al. proposed a QSAR-based approach to predict the carcinogenicity of drug compounds, identifying key factors in carcinogenicity by analyzing the contribution of molecular fragments to carcinogenicity [10]. Molecular docking (MD) is a computational simulation method for studying the optimal binding sites between drug molecules and target proteins by structural matching and energy matching, and for predicting their binding patterns and affinity [11]. Wallach et al. proposed a model to normalize docking scores through a virtually generated decoy set, which avoids the variability due to changes in physical properties when identifying active compounds in large screening libraries, thereby extending the applicability of the model [12]. Recently, computational methods for predicting DTIs based on protein target sequences have achieved excellent results and are favored by researchers for their use of reliable, high-quality characterization information enriched from raw data to ensure the accuracy of prediction results [13][14][15][16][17][18]. For instance, Lan et al. proposed a PUDT model combining protein target sequences and drug compound structures, which greatly improved the accuracy of DTI prediction using a weighted SVM classifier [19]. Cao et al. aimed to predict DTIs by using an extended structure–activity relationship method at the genome-scale level; in subsequent experiments, this approach achieved good results [20]. In the present study, we combined protein sequence evolution with drug structure information to propose a deep learning model, MSPEDTI, to predict hidden DTIs. Concretely, MSPEDTI first fuses protein sequence information characterized by the Position-Specific Scoring Matrix (PSSM) and drug structure information characterized by molecular fingerprinting, and then automatically extracts them into continuous, low-dimensional, information-rich features using a deep learning CNN, thus avoiding the disadvantages of manual features such as tediousness, sparsity, and high dimensionality. Finally, the ELM classifier is used to accurately determine whether drug–target pairs are associated or not. On the gold-standard datasets, we evaluated MSPEDTI using the five-fold cross-validation (5CV) approach. Compared with other previous methods, MSPEDTI was able to learn valid biological characteristics for predicting DTIs and showed better performance. The robustness of MSPEDTI is also demonstrated by the experimental results of the case study, which show that it can provide effective candidate targets for new drug research. The supporting data used in this study can be downloaded from https://github.com/look0012/MSPEDTI (accessed on 1 April 2022).
Gold-Standard Datasets In the present study, we implemented the MSPEDTI model using the gold-standard datasets enzyme, GPCR, ion channel, and nuclear receptor, which were collated by Yamanishi et al. [21] from the BRENDA [22], KEGG [23,24], SuperTarget [25], and DrugBank [26] databases. After removing redundant information, the numbers of DTI pairs contained in these datasets are 2926, 635, 1467, and 90, respectively. All of these pairs are used as the positive dataset. Table 1 presents the statistical information for these gold-standard datasets. The corresponding negative dataset construction process is as follows: firstly, all drug–target interaction pairs are separated into their drug and target components; secondly, these drugs and targets are recombined into candidate DTI pairs, and the known interacting pairs are removed; finally, drug–target pairs are randomly selected from the remainder to construct the negative dataset, which is the same size as the positive dataset. Drug Structure Characterization We employed molecular fingerprints in this study to characterize the drug structures for the purpose of numerical conversion. The design idea of fingerprints is to characterize the molecular structure in the form of a dictionary collection of molecular fragments, which converts a drug molecule into a binary vector by determining whether certain fragments, i.e., molecular substructures, are present in the molecule. It first divides the molecular structure to obtain structural fragments, then encodes these fragments as numbers according to certain rules, each corresponding to one bit of a binary string, and finally combines them as a whole (the binary string) as a characterization of the molecular structure. At present, the commonly used molecular fingerprints are the FP4 fingerprint, MACCS fingerprint, Estate fingerprint, and PubChem fingerprint, with 307, 166, 79, and 881 molecular structure fragments, respectively. In this experiment, molecular fingerprints from the PubChem database were selected to characterize the drug structures of the DTIs. The drug molecule is decomposed into 881 substructures in this descriptor. Given a drug, each bit is encoded as 1 or 0 depending on whether the corresponding molecular substructure is present. The fingerprint is encoded in Base64 on the PubChem website, together with a textual description of its binary layout, available for download from https://pubchem.ncbi.nlm.nih.gov/ (accessed on 1 January 2018). Target Protein Characterization In the experiments, the Position-Specific Scoring Matrix (PSSM) was used to numerically characterize the target protein. The PSSM can effectively describe the evolutionary information of protein amino acids, and it is commonly used in protein secondary structure prediction [27], protein binding site prediction [28], disordered region prediction [29], and distantly related protein detection [30,31]. The PSSM is an H × 20 matrix, where H is the length of the protein and 20 is the number of amino acid types. The PSSM, Pssm = {Θ_{i,j} : i = 1, …, H; j = 1, …, 20}, is defined so that the matrix element Θ_{i,j} indicates the probability that the i-th residue of the protein mutates to the j-th type of amino acid during the evolutionary process. In the implementation, we utilized Position-Specific Iterated BLAST (PSI-BLAST) [32] to calculate the PSSM by searching against the SwissProt database.
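The fingerprint construction amounts to filling an 881-dimensional binary vector. A minimal sketch is given below; the substructure indices are made up, since in practice they come from decoding the Base64 string downloaded from PubChem:

```python
import numpy as np

N_BITS = 881  # number of substructure keys in the PubChem fingerprint

def fingerprint_vector(present_substructures):
    """Binary drug descriptor: bit i is 1 iff substructure i occurs in the molecule.

    `present_substructures` would normally come from decoding the Base64
    fingerprint downloaded from PubChem; the indices used below are invented.
    """
    v = np.zeros(N_BITS, dtype=np.uint8)
    v[list(present_substructures)] = 1
    return v

drug = fingerprint_vector({0, 12, 255, 880})   # hypothetical substructure hits
print(drug.sum(), "substructures present out of", N_BITS)
```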
We followed the previous study, setting the iterations parameter and e-value of the PSI-BLAST tool to 3 and 0.001, respectively, to obtain highly homologous sequences in the experiment. The database and tool are available for download from http://blast.ncbi.nlm.nih.gov/Blast.cgi (accessed on 18 March 2002). Feature Extraction In the MSPEDTI model, the convolutional neural network (CNN) algorithm of deep learning is used to extract the hidden features of the protein. Deep learning can learn the intrinsic patterns and levels of representation of sample data, thus enabling machines to have analytical learning capabilities similar to those of humans. As one of the representative algorithms of deep learning, a CNN is able to classify input information in a translation-invariant manner through its hierarchical structure, thus deeply mining the essential features of the data. Therefore, we introduced it into MSPEDTI to greatly strengthen the model's prediction capability. A CNN is a feedforward neural network with artificial neurons that respond to a portion of the surrounding units in the coverage area, comprising convolutional, pooling, sampling, fully connected, input, and output layers. With its special structure of local weight sharing, the CNN has unique advantages in feature extraction, and its layout is closer to the actual biological neural network. Weight sharing reduces the complexity of the network; in particular, multidimensional input vectors can be fed directly into the network, which avoids the complexity of data reconstruction in the process of feature extraction and classification. The structure diagram of the CNN is shown in Figure 1. Assuming that C_i is the feature map of the i-th layer, it can be described as C_i = g(C_{i−1} · W_i + b_i). Here, the operator · indicates the convolution operation, b_i indicates the offset vector, W_i indicates the weight matrix of the i-th layer convolution kernel, and g(x) indicates the activation function. The subsampling layer follows the convolutional layer and samples the feature map according to specific rules; schematically, C_i = subsampling(C_{i−1}). After multiple convolution and sampling steps, the features are classified by the fully connected layer to yield the data distribution Γ of the original input. Fundamentally, a CNN can be regarded as a mathematical model that uses multilevel dimensional transformations to transform the original data C_0 into a new feature representation Γ. Here, Γ represents the feature representation, p_i indicates the i-th label class, and C_0 represents the original data. Minimizing the loss function H(W, b) is the ultimate goal of CNN training. To combat overfitting, CNNs typically control the fitting strength using a parameter θ and adjust the loss function L(W, b) by adding a regularizing norm. CNNs normally update their layer parameters (W, b) layer by layer by gradient descent in the training phase, with backpropagation controlled by the learning rate ε.
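To make the convolution and subsampling operations concrete, here is a minimal numpy sketch of one untrained convolution-plus-pooling stage applied to a PSSM-shaped input. The kernel size, pooling rule, and random weights are illustrative stand-ins; a real CNN would learn W_i and b_i by gradient descent as described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(x, w, b):
    """One 'valid' convolution feature map, C_i = g(C_{i-1} . W_i + b_i), with g = ReLU."""
    kh, kw = w.shape
    h, wid = x.shape
    out = np.empty((h - kh + 1, wid - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w) + b
    return np.maximum(out, 0.0)   # ReLU activation

def mean_pool(x, s=2):
    """Subsampling layer: average over non-overlapping s-by-s blocks."""
    h, w = (x.shape[0] // s) * s, (x.shape[1] // s) * s
    return x[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

# A PSSM-like input: 200 residues x 20 amino-acid columns (random stand-in data).
pssm = rng.normal(size=(200, 20))
feat = mean_pool(conv2d_valid(pssm, rng.normal(size=(3, 3)), 0.1))
print("extracted feature map shape:", feat.shape)   # (99, 9)
```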
Classification Prediction The extreme learning machine (ELM) [33] is employed by MSPEDTI as a classifier to predict potentially associated DTIs. The ELM is a simple and effective learning algorithm for single-hidden-layer feedforward neural networks, which does not need to adjust the input weights of the network or the biases of the hidden units during execution and produces a unique optimal solution, so it has the advantages of fast learning and good generalization performance. Given L labeled input samples (X_j, P_j), an ELM consisting of N hidden neurons can be formulated as ∑_{i=1}^{N} V_i g(W_i · X_j + b_i) = O_j, j = 1, …, L, where g indicates the activation function, V_i indicates the output weight matrix, W_i = [w_{i1}, w_{i2}, …, w_{iL}]^T stands for the input weight matrix, W_i · X_j stands for the inner product of W_i and X_j, and b_i stands for the offset of the i-th neuron. To realize the minimization of the output error, i.e., the training goal ∑_{j=1}^{L} ‖O_j − P_j‖ = 0, the ELM needs to optimize its hyperparameters. In matrix form the equation can be simplified to SV = P. Here, V denotes the output weights, P the expected output, and S the hidden-layer output matrix. To gain optimal performance, we want the ELM to acquire Ŵ_i, b̂_i, and V̂_i such that ‖S(Ŵ_i, b̂_i) V̂ − P‖ = min_{W, b, V} ‖S(W_i, b_i) V − P‖. This equates to minimizing the loss function E = ∑_{j=1}^{L} ‖O_j − P_j‖^2. By the principle of the ELM algorithm, once the input weights W_i and the offsets b_i of the hidden layer are fixed, the ELM uniquely determines its hidden-layer output matrix. Therefore, the training problem of the ELM is transformed into the problem of solving the linear system SV = P, which has a minimum-norm least-squares solution that is unique.
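A compact numpy sketch of ELM training as just described, with random input weights and offsets, hidden-layer matrix S, and output weights V obtained by solving SV = P in the least-squares sense, is given below on toy data; the feature dimension, hidden-layer size, and label rule are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_train(X, P, n_hidden=64):
    """Single-hidden-layer ELM: random input weights, output weights by least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # input weights (never trained)
    b = rng.normal(size=n_hidden)                 # hidden-unit offsets
    S = np.tanh(X @ W + b)                        # hidden-layer output matrix
    V, *_ = np.linalg.lstsq(S, P, rcond=None)     # solve S V = P in least squares
    return W, b, V

def elm_predict(X, W, b, V):
    return np.tanh(X @ W + b) @ V

# Toy drug-target pairs: random feature vectors with a separable label rule.
X = rng.normal(size=(200, 10))
P = (X[:, 0] + X[:, 1] > 0).astype(float)
W, b, V = elm_train(X, P)
acc = np.mean((elm_predict(X, W, b, V) > 0.5) == (P > 0.5))
print(f"training accuracy: {acc:.2f}")
```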
Evaluation Indicators We measured the performance of MSPEDTI using evaluation indicators calculated by the five-fold cross-validation (5CV) method. The 5CV approach first splits the whole dataset D into five subsets D_1, …, D_5, which are roughly equal in size and do not intersect each other. When testing subset D_i, the remaining subsets D − D_i are fed into the classifier as the training set. This operation is looped until all subsets have been tested. The performance of MSPEDTI was evaluated by the average results and deviations of the five experiments. Several evaluation indicators are calculated through 5CV; in their standard form, Acc. = (TP + TN)/(TP + TN + FP + FN), Sen. = TP/(TP + FN), Prec. = TP/(TP + FP), and MCC = (TP × TN − FP × FN)/√((TP + FP)(TP + FN)(TN + FP)(TN + FN)), where TP means true positive, TN means true negative, FP means false positive, and FN means false negative. Additionally, we plotted the receiver operating characteristic (ROC) curve generated by 5CV and calculated its area under the curve (AUC) [34,35]. The ROC curve is an essential metric for assessing the comprehensive performance of the model; it visualizes the trade-off between specificity and sensitivity graphically. A set of specificities and sensitivities is computed by setting multiple thresholds for the continuous prediction score, and the curve is then plotted using 1 − specificity as abscissa and sensitivity as ordinate. Assessment of Performance The gold-standard datasets enzymes, ion channels, GPCRs, and nuclear receptors were used to measure the capabilities of MSPEDTI in the experiment. The detailed outcomes of 5CV obtained by MSPEDTI on these datasets are listed in Tables 2-5, respectively. From these tables, it is possible to observe that MSPEDTI accomplished satisfactory prediction accuracy, with values of 94.19%, 90.95%, 87.95%, and 86.11%, and standard deviations of 0.41%, 1.10%, 1.51%, and 4.39%, respectively. On the enzyme dataset, the accuracy of all five MSPEDTI folds was at least 93.85%, with the highest result reaching 94.87%; the five fold-wise accuracies were 94.87%, 94.27%, 93.85%, 94.02%, and 93.94%. MSPEDTI achieved good results of 88.51%, 81.95%, 76.41%, and 72.46% on MCC, which was used to measure classification performance, with standard deviations of 0.89%, 2.24%, 2.88%, and 8.97%, respectively. On the comprehensive performance assessment index AUC, MSPEDTI attained 94.37%, 90.88%, 88.02%, and 86.63%, with standard deviations of 0.59%, 0.97%, 2.88%, and 4.77%, respectively. Additionally, MSPEDTI also yielded satisfactory outcomes in terms of sensitivity and precision. The ROC curves produced by MSPEDTI for 5CV on the four gold-standard datasets are shown in Figures 2-5. Comparison of Different Descriptor Models To estimate the impact of feature descriptors on MSPEDTI performance, we compared it with a two-dimensional principal component analysis (2DPCA) descriptor model. 2DPCA is an advanced version of the principal component analysis algorithm [36] that does not need to convert raw data into one-dimensional vectors, which is equivalent to removing the correlation of the row vectors or column vectors of the matrix. So, it can directly calculate the covariance matrix of the training samples and has the advantage of calculating the feature vectors quickly. To validate the representation capability of the features extracted by the CNN, we compared it with the 2DPCA descriptor on the ion channel dataset. In the interest of fairness, the other modules in MSPEDTI were kept unchanged, and only the feature extraction module was replaced. The 5CV results produced by the two descriptor models on the ion channel dataset are shown in Table 6, in which it can be observed that the MSPEDTI-generated results are higher than those of the 2DPCA descriptor model. The experimental outcomes of the comparison indicate that the CNN algorithm extracts features better than the 2DPCA algorithm in our model. Figure 6 shows the ROC curve plotted on the ion channel dataset using the 2DPCA descriptor method.
Comparison with Different Classifier Models To validate whether the classifier helps to improve the performance of MSPEDTI, we compared it with an SVM classifier model on the same dataset. The learning strategy of the SVM is to maximize the sample margin, thus converting the training task into the solution of a convex quadratic programming problem [37,38]. Similar to the ablation experiments for the descriptor model, in the comparison of classifier models we only replaced the ELM classifier with the SVM classifier and left the other modules unchanged. Table 7 presents the 5CV experimental outcomes of the MSPEDTI and SVM classifier models on the ion channel dataset. It is possible to observe from the table that the SVM classifier model performs well, with accuracy, AUC, MCC, precision, and sensitivity of 86.48%, 86.64%, 73.05%, 83.86%, and 89.05%, respectively. However, compared with the ELM classifier, there are still some gaps, and the values of the above evaluation criteria are lower by 4.47%, 1.26%, 7.90%, 8.90%, and 4.24%, respectively. These results indicate that the ELM classifier is indeed helpful in improving the prediction performance of MSPEDTI. Figure 7 shows the ROC curve plotted on the ion channel dataset using the SVM classifier model. Comparison with Previous Approaches We compared MSPEDTI with previous methods on the gold-standard datasets to assess its ability to predict DTIs in a more intuitive way. Here, we picked the metric AUC, which best reflects the overall comprehensive capability of the model, as the evaluation criterion.
The AUC values resulting from these previous methods, including Yamanishi [4], DBSI [39], KBMF2K [40], Temerinac-Ott [41], NLCS [42], WNN-GIP [43], SIMCOMP [42], and NetCBP [44], are aggregated in Table 8. It can be observed from the table that MSPEDTI yielded optimal results over the previous methods in all four gold-standard datasets. This suggests that the strategy of combining the CNN algorithm with the ELM classifier used by MSPEDTI can greatly enhance the ability to predict DTIs. Case Studies To further verify MSPEDTI's ability to predict new pairs, we trained it using all available data and predicted unknown DTIs with the trained model. We searched the SuperTarget database [25] for the 10 highest-ranked predicted DTI pairs. SuperTarget is a publicly available classic database that stores information about DTIs; it currently collects 332,828 DTIs. Table 9 lists the top ten DTIs with the highest prediction scores, from which we can see that seven potential DTIs were validated in the SuperTarget database. These outcomes indicate that MSPEDTI has outstanding capabilities in predicting new DTIs. Notably, although the remaining three DTIs were not found in the current database, interactions between these pairs remain possible. Discussion Accurate identification of the target protein of a drug can improve the efficacy of the drug and reduce side effects, thereby improving people's health. In the current study, we presented a model, MSPEDTI, to predict DTIs on the basis of protein evolution and molecular structure. The model takes full advantage of protein evolutionary information and drug molecular information and uses a deep learning algorithm to mine the deep associations between them. The experimental outcomes on the four gold-standard datasets revealed that the MSPEDTI model has outstanding performance. However, there are still some shortcomings in our method: firstly, the number of DTIs known at present is still relatively small, so the model cannot be trained fully; secondly, the parameters of the deep learning algorithm used in the model need to be further optimized to avoid overfitting in some cases; finally, how to integrate more biological information into the model is still worth further study. Conclusions In the present work, we designed a deep learning model, MSPEDTI, for predicting DTIs on the basis of drug structure and protein evolutionary information. The model deeply excavates hidden features in protein evolutionary information with a CNN, combines them with drug molecular fingerprint features, and uses an ELM to efficiently predict potential DTIs. On the gold-standard datasets enzymes, GPCRs, ion channels, and nuclear receptors, the model attained good 5CV results. To evaluate whether the modules used by MSPEDTI contribute to boosting model performance, we implemented ablation experiments comparing them with other descriptor and classifier models. Furthermore, 7 of the 10 DTIs predicted by MSPEDTI were substantiated in authoritative databases. The exceptional results mentioned above indicate that MSPEDTI has an outstanding ability to predict DTIs and can provide reliable candidate targets for drug research. In the next step of our research, we will try to optimize the deep learning feature extraction method to mine more useful information from the raw data.
5,881.4
2022-05-01T00:00:00.000
[ "Biology" ]
Numerical Schemes and Monte Carlo Method for Black and Scholes Partial Differential Equation: A Comparative Note This paper comparatively investigates some iterative methods and the Monte Carlo simulation technique for the dynamics underlying the celebrated Black and Scholes (BS) model. In particular we attempt to answer the question: 'How many Monte Carlo replications can yield prices, for plain vanilla type European derivatives on a stock, which are similar to those obtained by solving the BS PDE using iterative numerical schemes?' We confine ourselves to three frequently used iterative schemes: Successive Over-Relaxation (SOR), Gauss-Seidel (GS), and Jacobi (JC). This information, together with information on the differences in time requirements, will help gauge the analogous trade-offs for pricing complex (exotic) derivatives, for which there are no analytic pricing formulas. Introduction Modern option pricing techniques are often considered among the most mathematically complex of all applied areas of finance. A vanilla option (a normal call or put option) is a financial instrument that gives the holder the right, but not the obligation, to buy or sell an underlying asset at a predetermined price within a given time frame, and it is generally traded on an exchange such as the Chicago Board Options Exchange. In this paper we deal with two approaches to derivative pricing in the Black and Scholes world. One approach is preferred by people in the computational mathematics area, and the other falls within the purview of people from the stochastic finance area. The first approach concerns iterative methods to solve the Black-Scholes (BS) type partial differential equation in order to price different financial products traded on some underlying, which is often an index or an individual stock. The second approach concerns the so-called Monte Carlo (MC) simulation for pricing various financial products. This approach replicates random phenomena which are hypothesized to capture the reality. While the numerical methods are algorithmic and can be applied to most PDEs, the Monte Carlo methods are time-consuming and depend upon the number of replications considered in order to price the particular financial product under consideration. We shed some light on the trade-offs between the approaches for basic vanilla type derivative pricing. We revisit different iterative methods used in numerically solving the BS PDE in order to obtain the fair price of vanilla options. We consider the Jacobi (JC), Gauss-Seidel (GS), and Successive Over-Relaxation (SOR) algorithms with backward substitution to price plain vanilla put options. The iterative methods are studied in great detail by Burden [3], Isaacson [7], and Atkinson [1]; their stochastic versions are explored by Mao [9] and Kloeden [10]. We observe that, except for some differences in time requirements, the prices obtained are very similar. We revisit the application of the Monte Carlo (MC) simulation technique to generate asset price paths governed by the BS stochastic differential equation (SDE). We admit that MC simulation and its applications in finance and insurance are a huge area; we just get some flavor by delving as little as is required to simulate paths of the BS SDE. We price options using these simulated paths and provide reasonable evidence to answer the fundamental question 'how many paths in MC pricing can provide option prices similar to those obtained by iteratively solving the BS PDE under different numerical schemes?'
We discuss the trade-offs associated with both approaches. While we try different numbers of replications under the MC method, we observe overall that, apart from a somewhat larger time requirement, the MC method is much more powerful than the iterative numerical schemes. Other useful applications of Monte Carlo methods in finance are well documented in the literature; see e.g. Jackel [8], Dowd [4], Glasserman [11]. We identify the number of replications which can price a particular type of option while ensuring that the prices are always better than those obtained by the iterative schemes. Comparisons for illustrative options are presented through tables showing the trade-offs between the approaches. The paper is structured as follows: Section 2 briefly revisits the BS PDE and BS pricing. Section 3 is about option pricing under MC simulation. We briefly discuss the iterative methods under consideration in Section 4. Section 5 shows how one can price options using the iterative methods. We conduct the numerical implementation and discuss our findings in Section 6. Section 7 concludes the paper. BS PDE The Black-Scholes equation is a partial differential equation which describes the evolution of an option price over time. The key idea behind the equation is that one can perfectly hedge the option by buying and selling the underlying asset and consequently "eliminate risk". This hedge, in turn, implies that there is only one right price for the option, as returned by the Black-Scholes formula. We consider the analytic BS pricing formula because we will compare the analytic prices both with numerical prices (coming from solving the BS PDE using iterative methods) and with prices obtained by applying MC simulation to the BS dynamics. For a European style option with pay-off V, the celebrated BS equation is (see Black and Scholes [2]):
$$\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + r S \frac{\partial V}{\partial S} - rV = 0.$$
The values of the call option C(S, t) and the put option P(S, t) for a non-dividend-paying underlying stock in the Black-Scholes paradigm are:
$$C(S,t) = S\,N(d_1) - K e^{-r(T-t)} N(d_2), \qquad P(S,t) = K e^{-r(T-t)} N(-d_2) - S\,N(-d_1). \qquad (1)$$
Here the price of the put option is based on put-call parity. In (1):
$$d_1 = \frac{\ln(S/K) + (r + \sigma^2/2)(T-t)}{\sigma\sqrt{T-t}}, \qquad d_2 = d_1 - \sigma\sqrt{T-t}. \qquad (2)$$
For both option pricing formulas in (1), N(.) is the cumulative distribution function of the standard normal distribution, (T - t) is the time to maturity, S is the stock price of the underlying asset, K is the strike price, σ is the volatility of returns of the underlying asset and r is the annual risk-free rate. Option Pricing Using Monte Carlo Simulation Monte Carlo simulation is used in finance to value and analyze instruments, portfolios and investments by simulating the various sources of uncertainty which affect their values, and then determining their average values over the range of resultant outcomes. This is usually done by stochastically simulating the price paths of the underlying on which the derivatives are traded. For option pricing, the technique involves the following operations, see Hull [6], Higham [5], Jackel [8]: (1) generate several thousand possible (but random) price paths for the underlying via simulation; (2) calculate the associated exercise value (i.e. "payoff") of the option for each path; (3) average these payoffs, the result being the value of the option; (4) discount this value to the present value. Usually a large number of asset paths are generated that evolve according to the equation
$$\delta S = \mu S\,\delta t + \sigma S\,\delta W, \qquad (3)$$
where S is the asset value, δS and δt are incremental changes in asset value and time respectively, σ is the volatility, μ is the rate of return and W is a Wiener process. Because of the δW term, S itself is also a random variable.
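To make the pricing formulas (1)-(2) and the MC steps (1)-(4) concrete, here is a minimal Python sketch; the language and all parameter values are our illustrative choices, and the strike K = 5 is a hypothetical value, since the strikes behind the paper's tables are not restated here. Rather than stepping the discretised SDE (3), the MC part uses the fact that for a European payoff only the terminal asset value matters, and under BS dynamics that value is exactly lognormal.

```python
import numpy as np
from scipy.stats import norm

def bs_put(S0, K, r, sigma, T):
    """Closed-form Black-Scholes put price, equations (1)-(2)."""
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return K * np.exp(-r * T) * norm.cdf(-d2) - S0 * norm.cdf(-d1)

def mc_put_price(S0, K, r, sigma, T, n_paths, seed=0):
    """Monte Carlo put price via steps (1)-(4): simulate terminal prices,
    compute payoffs, average, then discount."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n_paths)           # one Wiener draw per path
    # Under the risk-neutral measure the drift is the risk-free rate r.
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    payoff = np.maximum(K - ST, 0.0)           # put payoff on each path
    return np.exp(-r * T) * payoff.mean()      # discounted average

# Illustrative parameters as in Section 6; K = 5 is an assumption.
ref = bs_put(S0=5, K=5, r=0.06, sigma=0.3, T=1)
for k in range(4, 11):                         # 2^4 ... 2^10 replications
    mc = mc_put_price(S0=5, K=5, r=0.06, sigma=0.3, T=1, n_paths=2**k)
    print(f"2^{k:<2d} paths: MC = {mc:.4f}  |MC - BS| = {abs(mc - ref):.4f}")
```

Replacing the one-step draw with a small-δt Euler loop over (3) reproduces the path-by-path procedure literally, at a higher cost; the absolute errors in the loop above shrink at the usual 1/√n Monte Carlo rate.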
Iterative Methods An iterative method is a mathematical procedure that generates a sequence of improved approximations to the solution for a class of problems. In the process of finding the solution of a system of equations, an iterative method uses an initial guess to generate successive approximations to the solution. We show the application of three frequently used iterative techniques, namely Jacobi (JC), Gauss-Seidel (GS) and Successive Over-Relaxation (SOR), which produce numerical solutions of the BS PDE. In general we solve a system $Ax = b$, where $A$ is a matrix and $x$ and $b$ are column vectors. See Atkinson [1], Isaacson [7]. The Jacobi (JC) method is the simplest of the three iterative methods we consider. It uses the $k$-th approximation to determine the $(k+1)$-th approximation. The GS method (or the method of successive displacement) is an iterative method, used to solve a linear system of equations, which is a simple modification of the Jacobi method. Though it can be applied to any matrix with non-zero elements on the diagonal, convergence is only guaranteed if the matrix is either diagonally dominant or symmetric and positive definite. The GS iterative method uses the most recent approximations, instead of only the $k$-th values, to determine the $(k+1)$-th value. See Atkinson [1], Isaacson [7]. The last iterative method we consider, SOR, is a variant of the GS method for solving a linear system of equations. The SOR iterative method modifies the GS update with a relaxation parameter $\omega$. Among the three iterative methods, SOR gives results fastest, as it converges faster than the other two. An algorithmic view of the three iterative methods, for $i = 1, 2, \ldots, n$ and $k = 0, 1, 2, \ldots$, is as follows. Jacobi method:
$$x_i^{(k+1)} = \frac{1}{a_{ii}}\Big(b_i - \sum_{j \neq i} a_{ij}\,x_j^{(k)}\Big).$$
Gauss-Seidel method:
$$x_i^{(k+1)} = \frac{1}{a_{ii}}\Big(b_i - \sum_{j < i} a_{ij}\,x_j^{(k+1)} - \sum_{j > i} a_{ij}\,x_j^{(k)}\Big).$$
SOR method:
$$x_i^{(k+1)} = (1-\omega)\,x_i^{(k)} + \frac{\omega}{a_{ii}}\Big(b_i - \sum_{j < i} a_{ij}\,x_j^{(k+1)} - \sum_{j > i} a_{ij}\,x_j^{(k)}\Big).$$
Pricing Options Using Iterative Methods We use an implicit Euler characterization of the finite difference method for our illustration. $S_{\max}$ is a maximum value for $S$ that we must choose sufficiently large (of course, a discretization cannot allow S to assume infinity!). The boundary conditions for a European put option are $V(0, t) = K e^{-r(T-t)}$ and $V(S_{\max}, t) \approx 0$, together with the terminal condition $V(S, T) = \max(K - S, 0)$. In mesh-grid notation, $V_i^m = V(i\,\delta S, m\,\delta t)$. For a given payoff at expiry, our problem is to solve the Black-Scholes PDE backwards in time from expiry to the present time (t = 0). If all three derivative terms are approximated with implicit Euler finite differences, the discretisation is given by
$$\frac{V_i^{m+1} - V_i^{m}}{\delta t} + \frac{1}{2}\sigma^2 S_i^2\, \frac{V_{i+1}^{m} - 2V_i^{m} + V_{i-1}^{m}}{\delta S^2} + r S_i\, \frac{V_{i+1}^{m} - V_{i-1}^{m}}{2\,\delta S} - r V_i^{m} = 0.$$
This uses a backward difference to approximate the time derivative (since we are solving backward in time from expiry) and a central difference to approximate the first derivative in asset value. The second derivative term is approximated by the usual second difference. Numerical Implementation We produce the results for put options; one can carry out a similar analysis for call options using put-call parity. We replicate various numbers of paths to price each option and find that the MC method is more powerful than the numerical schemes even with a relatively small number of replications per option. We compare the Monte Carlo simulation with the three iterative methods using the illustrative parameter values: value of the underlying S = 5, volatility σ = 0.3, risk-free interest rate r = 0.06, time to expiry T = 1, and number of time and asset mesh points N = 500. Of course, in a particular context these parameter sets could be quite different, and some parameters might affect the numerical schemes and/or the MC method as well.
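The following sketch shows, under stated assumptions, how the implicit discretisation above and an SOR sweep fit together for a European put; the grid sizes, the relaxation parameter ω = 1.2 and the tolerance are illustrative choices of ours, not the paper's exact settings. Setting ω = 1 recovers the Gauss-Seidel method.

```python
import numpy as np

def implicit_put_sor(K, r, sigma, T, S_max, N, M_t, omega=1.2, tol=1e-8):
    """Implicit-Euler finite differences for the BS PDE (European put),
    with the tridiagonal system at each time step solved by SOR.
    Boundary conditions: V(0,t) = K e^{-r(T-t)}, V(S_max,t) = 0."""
    dS, dt = S_max / N, T / M_t
    i = np.arange(1, N)                                # interior nodes, S_i = i*dS
    a = -0.5 * dt * (sigma**2 * i**2 - r * i)          # coefficient of V_{i-1}
    b = 1.0 + dt * (sigma**2 * i**2 + r)               # coefficient of V_i
    c = -0.5 * dt * (sigma**2 * i**2 + r * i)          # coefficient of V_{i+1}
    V = np.maximum(K - np.arange(N + 1) * dS, 0.0)     # payoff at expiry
    for m in range(1, M_t + 1):                        # march from expiry to t = 0
        tau = m * dt                                   # time to expiry at new level
        rhs = V[1:N].copy()
        rhs[0] -= a[0] * K * np.exp(-r * tau)          # fold left boundary into rhs
        x = V[1:N].copy()                              # warm start from previous step
        for _ in range(10_000):                        # SOR sweeps until converged
            err = 0.0
            for j in range(N - 1):
                left = x[j - 1] if j > 0 else 0.0      # boundary already in rhs
                right = x[j + 1] if j < N - 2 else 0.0 # V(S_max) = 0
                new = (1 - omega) * x[j] + omega * (rhs[j] - a[j] * left - c[j] * right) / b[j]
                err = max(err, abs(new - x[j]))
                x[j] = new
            if err < tol:
                break
        V[1:N] = x
        V[0] = K * np.exp(-r * tau)
        V[N] = 0.0
    return V                                           # grid of prices at t = 0
```

The coefficients follow directly from the discretisation above after collecting the unknown time level on the left-hand side; for these signs the system is diagonally dominant, which supports the convergence of the JC, GS and SOR sweeps.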
However, we ignored that issue in this research. We compare each of the numerical schemes with different numbers of replications in MC pricing. We then choose $2^4$ and $2^{10}$ replications to draw the graphs comparing the iterative methods with the MC method. Figure 2 shows the trade-offs between the number of replications and the corresponding price differences for each of the numerical schemes and the MC method. From the results in Table 1 we can say that there is hardly any difference between the SOR, GS and JC iterative methods for European option pricing. We get slightly better prices with the SOR method compared to the GS and JC methods; however, the difference is really negligible. Nonetheless, a slight preference can go to SOR among the numerical methods. Our main concern is how many paths can ensure that MC prices are in close proximity to the numerical prices. We find that as the number of paths increases from $2^4$ to $2^{10}$, the MC prices gradually approach the numerical prices. It is quite possible to get MC prices with as few as $2^4$ replications which are well comparable with the prices obtained by any of the numerical schemes. More importantly, we observe that for around $2^{10}$ replications the MC prices are in very close proximity to the prices coming from any of the numerical schemes. Furthermore, comparing with the analytic BS prices, it is clear that with more replications the MC prices can be made increasingly closer to those obtained with the analytic BS formula, though only at the expense of additional computational time. With respect to equally comparable pricing, however, the numerical iterative prices are obtained much faster than the MC prices, as is evident from Table 1 and Table 2. Though reported only for an illustrative option type, these observations hold for other maturities as well as other option types (e.g. European calls). Conclusions In this paper we studied iterative methods and the Monte Carlo simulation method for the dynamics underlying the celebrated Black and Scholes (BS) model, in order to examine the trade-offs between the approaches. We tried various numbers of replications under the MC method in order to see how simulation-based pricing fares against iterative pricing. Our observation is that if one is comfortable with a slightly larger time requirement, then the MC prices are much better than the iterative prices. In particular, we observe that around $2^{10}$ replications can often ensure that MC prices are better than iterative prices, though one can obtain comparable prices with far fewer replications. We observe this for both European put and call options. In future work we plan to investigate other derivatives and the effects of other determinants on such trade-offs.
Biodetoxification and Protective Properties of Probiotics Probiotic consumption is recognized as being generally safe and correlates with multiple and valuable health benefits. However, the mechanism by which it helps detoxify the body, and its anti-carcinogenic and antimutagenic potential, are less discussed. It is widely known that globalization and mass food production/cultivation make it impossible to keep all possible risks under control. Scientists associate the multitude of present-day diseases with these risks, which threaten the population's safety in terms of food. This review aims to explore whether the use of probiotics may be a safe, economically viable, and versatile tool in biodetoxification, despite the numerous risks associated with food and the limited possibility of evaluating the contaminants. Based on scientific data, this paper focuses on the aspects mentioned above and demonstrates the possible risks of probiotics, as well as their anti-carcinogenic and antimutagenic potential. After reviewing the probiotic capacity to react with pathogens, fungal infection, mycotoxins, acrylamide toxicity, benzopyrene, and heavy metals, we can conclude that specific probiotic strains and probiotic combinations bring significant health outcomes. Furthermore, the biodetoxification maximization process can be performed using probiotic-bioactive compound associations. Introduction Food is vital for human health, delivering energy and nutrients, and plays crucial roles in the human body: tissues, growth and development of organs, normal function, and metabolism [1,2]. Besides nutrients, food can carry traces of different toxins (naturally occurring, or by-products of food processing or storage), usually at non-detectable levels and below no-observed-adverse-effect levels. Food toxins, including fungi (yeasts and molds), industrial waste contaminants such as heavy metals (arsenic (As), cadmium (Cd), mercury (Hg), and lead (Pb)), acrylamide, and benzopyrene, increase the risk of dysbiosis, mutagenesis, and carcinogenesis [3][4][5]. Food and feed contamination is almost impossible to avoid entirely. Instead, the adoption of various measures to detoxify contaminated food and feed is more feasible and necessary. Several techniques (physical, chemical, and biological) have been studied to detoxify and mitigate hazards affecting the population's health and to significantly diminish the economic damage caused by these toxins in food and feed. These methods act by destroying or modifying the toxin's molecular structure, resulting in the toxin's low accessibility to the digestive system [1,3,4,6]. Toxic chemical biodetoxification could be associated with the gut microbiota, which is essential for maintaining intestinal integrity in the longer term. Overall, the gut microbiota might also be crucial for in vivo biodetoxification. In the past decades, probiotics have raised interest due to their comprehensive properties, not only in the digestive system but also in in vivo biodetoxification [7][8][9]. This review aims to demonstrate how probiotics, among other actions, increase mucosal immunity (by increasing IgA-producing cells) and reduce pathogen numbers and/or their gene expression [15,30]. In their study, Barouei et al. show how mucin secretion is sustained by downregulation of plasma IFN-γ and haptoglobin in the presence of B. animalis subsp. lactis BB-12 and Propionibacterium jensenii 702 in concentrations of 3 × 10^9 and 8.0 × 10^8 CFU/mL, respectively [31]. Yang et al. (2012)
demonstrate the protective effect of yogurt probiotics (L. acidophilus, B. lactis, L. bulgaricus, and Streptococcus thermophilus) in Helicobacter pylori infection by restoring the affected Bifidobacterium in the gut microflora and by increasing the serum IgA titer (low in H. pylori infection) [32]. In most cases, the protective effect of probiotics in various pathologies is based on multiple modes of action. Probiotic Safety Issues It is widely accepted that the most used probiotic strains are safe for usage [16,21]. These strains received the status "qualified presumption of safety." The safety assessment should include the type of microorganism being used, the method of administration, the exposure levels, and the consumer's health status. While probiotics are commonly acknowledged as safe for healthy subjects, some evidence emphasizes the contrary for certain groups with unique risks [33]. Nevertheless, the potential benefits of probiotics compensate for the potential risks when considering the long term. Probiotic species may have a natural origin or could be genetically engineered (tailored probiotics) for a specific effect (i.e., expressing a specific protein, biomaterial delivery, or annihilating infectious pathogens to combat infectious and metabolic diseases), so their impact on human health may differ, or their mechanisms of action may vary [34]. The FAO research group reports that probiotic action in patients with special medical status could be associated with four specific forms of side effects and risks (Figure 1): "(1) systemic infections; (2) deleterious metabolic activities; (3) excessive immune stimulation in susceptible individuals; and (4) gene transfer" [35]. Probiotic functional foods or supplements on the market may contain a single bacterial species or a mix. Several products containing probiotics, including milk, infant formula, cheese, drinks, and dietary supplements, are marketed in classical or novel forms worldwide. This results in the large-scale ingestion of probiotic cells and significant interaction with various gut microbes at high densities. Thus, any antibiotic-resistance gene carried by probiotic cells may be relocated to gut microorganisms, including pathogens [36]. Therefore, we should consider the risk of ingesting antibiotic-resistance genes or antibiotic-resistant bacteria. The genes in Bacillus probiotics indicate a potential health risk due to the production of their toxins, and they harbor various antimicrobial-resistance genes [37]. Probiotics could have adverse effects if used inappropriately or if they do not meet the required standards [38]. Indeed, their application in preventing, ameliorating, or treating some diseases is essential, but knowing and facing the other side is crucial. In specific cases, the administration of probiotics and probiotic mixes in high-risk populations may result in health complications [39]. Therefore, we conclude that probiotic supplements can be effective in different age groups of consumers and should be wisely selected. Furthermore, it is prudent to take precautions when administering probiotics. Food Contaminants and Their Impact on Human Health In specific cases, food can be hazardous to one's health, causing disease and death. Approximately two million people die annually, including children, due to contaminated foods full of harmful chemical (heavy metals, acrylamide, the polycyclic aromatic hydrocarbon benzopyrene) or biological (microorganisms, pathogens, fungi: molds and yeasts) compounds [40,41]. Everything we eat that is not harmful to animals and humans is labeled "safe food".
An organization in every country is responsible for food safety, regulating additives and their concentrations permitted in food [41]. Toxic compounds may be naturally occurring or by-products resulting from processing, storage, or cooking [42]. It is difficult to test for intoxications because most foods cannot be tested for every possible toxic compound. To accurately detect unknown and known contaminants, it is necessary to run several follow-up cases of intoxication [40]. Currently, the authorities need to be more concerned about food safety because globalization, easy traveling, and rapid food habits are changing. The illnesses caused by pathogens, toxins, and other contaminations in food (Figure 2) pose a real health risk to humans and animals. Analyses and control measures mean a significant budget loss for the food industry [41].
A constant preoccupation of the food safety authorities is the exposure level of the population or of specific groups of people, resulting in regulations that set the maximum level of exposure allowed. Several studies discuss the toxicokinetic and toxicodynamic interactions of toxic compounds with the human body [43]. These studies reveal various detoxification strategies for reducing, annihilating, or converting toxic compounds. These strategies can be classified into physical (peeling, heat, ultraviolet light, ionizing radiation, and solution absorption), chemical (chlorination, and the use of oxidant and hydrolytic substances), and biological (inside the body or in food products, using enzymes or probiotics). Because physical and chemical methods have disadvantages associated with nutritional degradation, inefficiency for some toxins, secondary contaminants, and consumer acceptance and concerns, and because chemical technology needs to be reduced and replaced with highly sensitive, specific, and environment-friendly methods, biodetoxification using probiotics is proposed [6,44]. Heavy Metals Human daily activities release heavy metals into the soil, air, and water. The most studied and best-known damage produced by heavy metals is induced oxidative stress, resulting in cellular damage. Each heavy metal has its own free-radical generation mechanism, targeting proteins involved in apoptosis, cell-cycle regulation, growth and differentiation, DNA methylation, and DNA repair, materializing in carcinogenesis. Some heavy metals may have a neurotoxic impact induced by mechanisms such as reducing neurotransmitters or accumulating in the mitochondria of neurons, which disrupts adenosine triphosphate (ATP) synthesis [6,45]. Overwhelming metal contamination is a critical issue within the food industry, which undermines human health. The most common heavy-metal contaminants are arsenic (As), cadmium (Cd), copper (Cu), mercury (Hg), nickel (Ni), lead (Pb), and zinc (Zn) [41]. Heavy metals entering the body increase the risk of developing cardiovascular, kidney, and neurological diseases [6,45]. A more toxic form of Hg is methylmercury, a strong neurotoxin, which affects the human central nervous system [46]. Pb is mutagenic and teratogenic, and it can negatively affect the nervous system, interfering with the synthesis of hemoglobin, damaging kidney function, and reducing semen quality [47]. Cd can cause various diseases, such as cardiovascular, liver, and reproductive-system disorders, osteomalacia, and lung and renal cancer [48]. Among the detection methods for heavy metals, the most utilized are atomic absorption spectrometry, atomic fluorescence spectrometry, and spectrophotometry; lately, due to the demand for real-time detection, electrochemical biosensors have also been used [6]. Bacterial biomass can also remove heavy metals from aqueous solutions [46]. Acrylamide Foods subjected to heat treatments (roasting and baking) undergo several unwanted changes, such as lipid oxidation, protein denaturation, vitamin degradation, and the formation of compounds harmful to the human body [49].
Acrylamide is mainly found in carbohydrate-rich bakery products (bread, biscuits, cookies, and cereal-based baby foods), French fries, chips, coffee, and meat preparations subjected to strong heat treatments. Small, almost undetectable quantities of acrylamide can also be found in packaged foodstuffs due to its migration from packaging materials in direct contact with the product [49,50]. It is formed in reactions between asparagine and reducing sugars (glucose, glyoxal, glycerol, and 2-deoxyglucose) [51]. The most conclusive detection method is mass spectrometry combined with capillary electrophoresis or gas or liquid chromatography, especially high-performance liquid chromatography [50]. After ingestion, the human and animal bodies absorb and accumulate it in various organs, such as the heart, brain, liver, thymus, kidneys, muscle tissue, skin, and testes. The main pathway of acrylamide metabolism involves conversion to glycidamide and its conjugation to glutathione [49,50]. Studies have shown that acrylamide can cause genetic and reproductive toxicity, neurotoxicity, carcinogenicity, oxidative stress, and changes in genetic material (Khorshidian et al., 2020). When ingested doses are higher than recommended (100 mg/kg), it can cause acute toxic effects, and lethal effects when it exceeds 150 mg/kg [49]. The capacity of probiotics to reduce the damage produced by acrylamide ingestion is associated with their antioxidant activity [52]. Polycyclic Aromatic Hydrocarbons-Benzo[a]Pyrene The primary source of polycyclic aromatic hydrocarbons (PAHs) is the incomplete combustion of materials such as coal, oil, and wood. PAHs are toxic to aquatic life, birds, and soil. They are absorbed by mammals through various routes (e.g., inhalation) and by plants through the roots, which afterward translocate them to other parts of the plant. The most toxic member of the environmental pollutant PAH family is benzo[a]pyrene (B[a]P). B[a]P absorption has pro-inflammatory effects and can induce tumors (gastrointestinal, bladder, and lung cancers), reproduction disorders, mutagenesis, developmental disturbances, and immunity deficiency [53][54][55]. Studies have established that contamination with B[a]P is inevitable, being caused by polluted water, soil exposure, and food consumption. Due to its low water solubility, B[a]P is recalcitrant to microbial degradation [54]. Fungi-Molds and Yeasts and Their Mycotoxins Probiotics are studied to improve food security and human health through their inhibitory action on fungi-yeasts and molds. Around 5-10% of the world's food system is affected by fungal impairment, causing carcinogenesis through the produced mycotoxins. As a result, many acids, including acetic, propionic, sorbic, lactic, and benzoic acids, are used in food preservation. Concerns are raised because yeasts and molds have developed resistance to antibiotics, preservatives, and sanitizing agents, demanding a better alternative [56][57][58]. Among contaminants, mycotoxins are probably the biggest threat to human health due to their high carcinogenicity. In addition, mycotoxins formed by certain kinds of fungi can cause acute poisoning and a significant deficit in the immune system [59,60]. The interaction between mycotoxins and probiotic cells is influenced by the integrity of the cell wall, which is responsible for the absorption capacity [61]. Biodetoxification Activity of Probiotics Producers, authorities, and consumers face food safety-related challenges.
The population is exposed to fungal, mycotoxin and virus infections, chemicals (acrylamide, benzopyrene, heavy metals), and mutagenic and carcinogenic compounds. The need for viable, generally accepted, and applicable detoxification methods is sustained not only by the economic damage but by the danger to human well-being in general [37,41]. Biodetoxification may be an intrinsic phenomenon, relying mainly on the enzymatic system and human microbiota, but it can also be an external, controlled, and directed phenomenon that ensures food safety before the contaminated food product is ingested [4]. Probiotics can bind mutagens and carcinogens, such as aflatoxins [4]. Further, we discuss the most commonly used probiotic genera reported as being able to biodegrade, absorb, or physically adhere to different toxic compounds or to pathogenic microorganisms frequently incriminated in foodborne diseases. Lactobacillus (LAB) Genera and Their Biodetoxification Capacity Lactobacillus is the most known and used probiotic genus [16,53]. Gram-positive and non-spore-forming lactic acid bacteria (LAB) can be used to initiate lactic acid fermentation, converting glucose to lactic acid [16]. During fermentation, lactose-to-lactate conversion reduces the danger of carcinogenic and mutagenic chemicals [62]. According to recent research, LAB can also remove pollutants from food through various metabolic activities (Table 1). Fermentation, antibiosis, and the capacity of the microbial cell wall to attach to the toxin are factors in these microorganisms' decontaminating action [57]. The antimicrobial qualities of LAB can effectively limit the growth of other pathogenic microorganisms and fungi [54]. Yeast and lactic acid bacteria (LAB) mycotoxin detoxification involves binding aflatoxins [63]. LAB reduce AFM1 (aflatoxin M1) and can potentially decrease toxins in yogurt to a concentration safe for consumption (below 0.05 µg/kg). Research has proved the capacity of L. acidophilus to bind AFB1 and AFM1 in cow's milk [64]. A simulated gastrointestinal model sustained these results by proving the ability of L. acidophilus and L. casei (~10 log CFU/mL) to bind AFB1 (aflatoxin B1); in contrast, however, it also underlined a reduced binding capacity in the presence of milk. The authors of the study concluded that micronutrients present in milk have a protective ('covering') effect on the mycotoxin [65]. Another study revealed that L. kefiri FR7 can reduce the growth of Aspergillus flavus and A. carbonarius and their mycotoxin production capacity [57]. Wu and his colleagues examined the prevention of colorectal carcinoma induced by B[a]P. They administered polymethoxyflavone (PMF), an anticancer agent found in citrus peels, to mice with induced colorectal tumorigenesis. The result indicated that oral administration of PMF blocked B[a]P-induced colon tumorigenesis [66]. A practical explanation is gut microbiota modulation by prebiotic-like compounds. In 2021, a group of researchers proved that L. acidophilus NCFM (1 × 10^10 CFU/mL), among five bacterial strains (L. plantarum 121, Leuconostoc mesenteroides DM1-2, L. acidophilus NCFM, L. paralimentarius 412, and L. pentosus ML32), has the best capacity to bind B[a]P, with pH (optimal pH 6) being the parameter that influenced its binding yield the most, ahead of incubation time, temperature, and strain concentration [53]. Madreseh et al. proved that L.
fermentum 1744 (ATCC 14931) (1 × 10^9 CFU/mL) may significantly reduce heavy metal (Pb, Zn, Ni, and Cd) absorption and accumulation in living organisms (rainbow trout). The best results were obtained for encapsulated probiotics in the presence of lactulose (10 g/kg feed) [45]. The probiotic's ability to reduce heavy metals, as well as the toxic effects of heavy metals in vitro and in vivo, is related to its binding ability due to the numerous negatively charged functional groups found in the probiotic cell wall [67], the modulation of genes over-expressed upon exposure to heavy metals [6], and an enhancement in the fecal excretion of ingested heavy metals [68]. To sum up, two hypotheses are attributed to the probiotic detoxification action. The first mechanism consists of the physical connection between the probiotic and the contaminant. The second operates when probiotic strains can mitigate the carcinogenic danger through their metabolism. The cell wall of probiotics is primarily composed of peptidoglycan, with glycan chains consisting of alternating N-acetylglucosamine and N-acetylmuramic acid linked by β-1,4 bonds [74]. Factors affecting the binding of contaminants by LAB are associated with the proper selection of strains with a high capacity to eliminate the food contaminant [53,54,69]. We can state that the growth phase, incubation time, pH, contaminant concentration, and contaminant characteristics significantly affect probiotics' binding/antimicrobial properties, and the binding ability may also be related to the dose [54,[75][76][77]. All studies showed that the detoxification rate is influenced by the contaminant and probiotic cell concentrations, exposure time, pH, temperature, and nutrient presence [4,65,78,79]. Bifidobacteria Genera and Their Biodetoxification Capacity The genus Bifidobacterium includes Gram-positive, non-motile, non-spore-forming, Y- or V-shaped, anaerobic bacteria that produce lactic and acetic acids without producing CO2. The Bifidobacterium growth temperature is around 36-38 °C, with optimum pH values ranging from 6.5 to 7. Amino acids, thiamin, and riboflavin can all be synthesized by Bifidobacterium [80,81]. Antibiotic resistance is a feature of select LAB, which have been widely employed to manufacture probiotic-fermented food preparations. Short-chain fatty acids (SCFAs) interact with the host cell and gut microbiota as significant products of substrate fermentation in the gut [16]. The pH of the gut is lowered by these two acids, notably in the cecum and ascending colon. Many dangerous bacteria are suppressed in a low-pH environment; hence Bifidobacterium's capacity to raise the acidity of the gut likely plays a role in its probiotic benefits [81]. Studies have indicated the binding or physical absorption of toxins by Bifidobacterium [42,55,70] (Table 2). A research paper discovered that having B. lactis HN019 in the diet can boost natural immunity. In macrophage cell lines, live or heat-killed Bifidobacterium and Lactobacillus species and certain of their cellular constituents can increase the generation of nitric oxide, hydrogen peroxide, and the cytokines tumor necrosis factor-α and interleukin-6 [80]. The binding ability of protoplasts and cell-free extracts of three strains was determined in another study, revealing that the cell membrane was not the primary binding site and that B[a]P is not eliminated by metabolism.
These observations highlight the relevance of cell wall preservation in B[a]P binding and support the existence of a cell wall-related physical phenomenon, as opposed to metabolic breakdown [55]. Bifidobacterium's potential detoxification mechanisms are therefore linked to its ability to bind toxic compounds due to the presence of peptidoglycan and polysaccharides in the cell wall [82]. As in the Lactobacillus case, the incubation time and viability (cell wall integrity) strongly affect the biodetoxification ability [55,82]. Probiotic Yeasts and Their Biodetoxification Capacity Within the genus Saccharomyces, the species S. cerevisiae is the most widely used for baking, alcoholic fermentation, and nutritional supplements for people and animals. Due to their inclusion in the Generally Recognized as Safe group, these species, together with LAB, offer an appropriate starting point for finding strategies to decrease food and human exposure to chemical contaminants [69,73,86,87]. Some yeast species (Table 3) have been used as biocontrol agents to prevent mycotoxin-producing filamentous fungi from growing on crops, food, and feed [86]. These species might help preserve agricultural goods and decrease mycotoxin contamination. In different technological processes, yeasts can be used for their direct inhibitory impact on pollutants, particularly mold toxin generation, independent of their growth-inhibiting effect [69]. Several yeast species' cell walls can also bind mycotoxins from agricultural goods, successfully sanitizing them. Mycotoxicosis in cattle is also treated using probiotic yeasts or feeds containing yeast cell walls or other ingredients. Yeasts are also known to have additional beneficial properties, such as breaking poisons down into less harmful or even non-toxic forms [72]. Yeasts and their biotechnologically important enzymes may be sensitive to particular mycotoxins, posing a serious challenge to the biotechnological field; unfortunately, yeast-mycotoxin interactions have been seriously understudied [70,86,88]. Filamentous fungal development and/or the expression of genes involved in mycotoxin production can be limited by yeast-generated metabolites. The main volatile organic compound generated by Pichia anomala (a fungal biocontrol agent), 2-phenylethanol (2-PE), has been shown to hinder spore germination and toxin production; in other words, biosynthesis was suppressed [87]. The yeast capacity to bind ochratoxin A increased during fermentation with two Saccharomyces strains upon the addition of anthocyanin [86]. Yeast cell integrity seems to be the most important factor influencing the efficacy of yeast-related biodetoxification, namely cell surface area, volume, cell wall thickness, and the presence of O-H/N-H bonds of proteins, polysaccharides, and 1,3-β-glucan in the yeast cell walls [87,89,90]. Parameters such as yeast exposure time, yeast concentration, initial toxin concentration, and temperature are mentioned in the scientific literature as influencing factors in the biodetoxification process [87,91]. In contrast to Lactobacillus, the capability of yeast to bind mycotoxins is not significantly influenced by cell viability. The main condition for inactive yeast cells was cell wall wholeness; destroyed yeast cells showed almost 50% less biodetoxification capacity [90]. PAT: patulin; OTA: ochratoxin A; AFB1, AFB2, AFG1, AFG2, AFM1: aflatoxins B1, B2, G1, G2, M1; As: arsenic; ↑: increase.
Other Probiotics or Promising Probiotic Candidates and Their Biodetoxification Capacity Several probiotic species and promising probiotic candidates (Table 4) have different biodetoxification activities. Probiotics may use several mechanisms (such as epoxidation, hydroxylation, dehydrogenation, and reduction) or metabolites (antimicrobial proteins) for toxin degradation [88,93]. Bacitracin A, for example, is a non-ribosomal peptide antibiotic produced by Bacillus licheniformis strain HN-5 with high antibacterial activity, potent against Gram-positive and Gram-negative bacteria. Bacillus spp. are rod-shaped, Gram-positive, endospore-forming organisms that can be obligate aerobes or facultative anaerobes. The bacitracin synthetase gene cluster in B. licheniformis consists of the bacABC operon and bacT, which encode a non-ribosomal peptide synthetase and a thioesterase, respectively. Commercially, B. licheniformis is utilized in the manufacture of bacitracin, an antibiotic extensively used in animal feed. The processes behind bacitracin's ability to reduce infectious illnesses in animals have previously been studied [94]. Often utilized in producing industrial enzymes, including amylase and protease, Bacillus licheniformis is a common bacterium found in soil and waste organic material. According to a prior study, several strains of B. licheniformis have great promise as probiotics or nutrition supplements for humans [93]. After 36 h of incubation, B. licheniformis CK1 reduced ZEN by 95.8% in Lactobacillus broth by degradation of the mycotoxin (the HPLC chromatogram of the B. licheniformis CK1 cell wall revealed no ZEN). The authors believe that the extracellular xylanase, cellulase, and protease produced by B. licheniformis CK1 are responsible for the degradation. According to the data, ZEN at a concentration of 2 ppm was not harmful to B. licheniformis [95]. Less commonly used probiotic strains, such as Pediococcus acidilactici RC005 and P. pentosaceus RC006, absorbed between 26% and 34% of aflatoxin M1 from milk, from concentrations of approximately 30 and 34 ng/mL. The authors also discussed the desorption phenomena observed in 100% of the tested yeast strains [88]. Future studies need to substantiate the biodetoxification capacities of the less-studied probiotic genera (other than Lactobacillus, Bifidobacterium, and Saccharomyces) and elucidate their mechanisms of action. Probiotic Antimutagenic Activity It has been proven that genotoxic substances and antibiotics created in the human body can induce genetic mutations and carcinogenesis [99]. As a remedy, it is recommended to use antimutagens to prevent genetic mutations transmitted by some foods, cancers, or tumors. Antimutagens are substances that can reduce the occurrence of mutations at the cellular level, acting on DNA replication and repair [100]. Antimutagens use chemical or enzymatic pathways to annihilate the action of mutagens. The autochthonous microflora in the human GI tract is frequently exposed to genotoxic compounds. Some bacteria in the gut can efficiently bind mutagenic pyrolysates and decrease their mutagenicity. Bifidobacteria are among the most significant bacteria in the human gut with this effect [34]. They are used as probiotic dietary supplements. It was also demonstrated that probiotics could act as immunomodulators by influencing the gut-associated lymphoid tissue distributed throughout the GI tract [101].
Additionally, the literature reports that probiotics can produce butyric and acetic acids with antimutagenic activity (they can fight chemical mutagens or promutagens). These properties are associated with the consumption of viable probiotic cells able to colonize the gut. Compounds that diminish the effects of a mutagen are classified as desmutagens or bioantimutagens. Desmutagens act chemically or enzymatically by inhibiting the activity of the mutagens, whereas bioantimutagens act on DNA replication and inhibit the effects of the mutagen [102]. Another aspect of probiotics and their antimutagenic effect is how they can be introduced into the human diet in an effective manner. A key characteristic of probiotics present in functional foods is viability. Among these types of functional products is yogurt. Yogurt is an excellent matrix for probiotic delivery, with the mention that it should contain a minimum of 10^6 CFU/g of probiotics at the time of use. Several factors, such as pH, water activity, oxygen, strain type, and other strains, influence this [11,16]. The adverse effects of probiotics can be minimized by different strategies, such as microencapsulation of probiotics, the addition of enzymes, and prebiotics [103]. DNA alteration and carcinogenesis may be induced by the increase in mutagens and promutagens in the system [102]. Scientists have proven that butyric and acetic acids of probiotic origin have broad antimutagenic activity. Thus, GI disorders may be reduced using probiotics, which can avert the hazard of DNA genotoxins. Probiotics act as immunomodulators by influencing the gut-associated lymphoid tissue distributed throughout the GI tract. To have a positive impact on human health, probiotic cells need to be able to colonize the intestine. L. acidophilus and Bifidobacterium spp. are probiotic bacteria whose fermentation products provide antimutagenic and anti-carcinogenic activities. It has been reported that carcinogen-activating enzymes, such as nitroreductase, β-glucuronidase, and azoreductase, are inactivated by L. acidophilus or have their activity reduced [104]. By fermenting milk with different Lactobacillus strains to obtain yogurt, more peptides are formed, presenting various bioactive compounds. These compounds have positive effects on consumer health, namely antimutagenic and antioxidative effects. Simultaneously, bioactive compounds are used to create functional foods and to increase the shelf life of some foods through the antioxidant effect [105]. L. paracasei subsp. tolerans JG22 also has a positive effect on the control of compounds that can express mutagenesis. Several valuable characteristics of this strain have been proven, namely high resistance to an acidic environment (pH 2.0) and bile salts (0.5%), resistance to different antibiotics, and an adequate ability to colonize the gut [102]. The authors conclude that L. paracasei subsp. tolerans JG22 is an excellent probiotic to be included in functional foods to prevent colon mutagenesis or tumorigenesis [102,106]. Therefore, only viable probiotic cells can inhibit or bind mutagens. Anti-Carcinogenic Effect of Probiotics Cancer is a pathology caused by multiple triggers. Our World in Data reports cancer as the second leading cause of death worldwide [107].
Food carcinogens formed in foods cooked at high temperatures, inadequately stored, or contaminated from raw materials (heterocyclic amines (HCA), polycyclic aromatic hydrocarbons (PAH), mycotoxins (aflatoxins), N-nitroso compounds, acrylamide, and heavy metals) increase the potential risk factors for cancer [61,100]. In the GI tract, probiotics bind and degrade carcinogenic compounds [108,109]. The cell wall of probiotics may be an essential factor in binding free toxins in the intestine [104]. Factors such as genetic predisposition, personal diet, lifestyle, physical activity, obesity, type 2 diabetes, abusive alcohol use, inflammation, and smoking significantly influence carcinogenesis [110]. Several studies have confirmed that some opportunistic microorganisms, such as Bacteroides fragilis, Fusobacterium nucleatum, Helicobacter hepaticus, Streptococcus bovis, and E. coli, may be indicative of different types of cancer [7]. There are several pathways attributed to the anti-carcinogenic effect of different probiotics (Table 5). Among these, the most cited are the alteration and deactivation of carcinogens or mutagens; decreasing the pH of the gut environment, thereby regulating the gut microflora and suppressing the growth of carcinogenic microbiota; immunomodulatory properties (such as increased peripheral immunoglobulin production, stimulation of IgA secretion, and decreased pro-inflammatory cytokine production); modulation of apoptosis (through SCFA production and glutathione transferase activity stimulation); sustaining cancer cell differentiation (through butyric acid action); inhibition of the tyrosine kinase signaling pathway; and DNA protection from oxidation [19,80,111,112]. Cancer cell proliferation is inhibited by probiotic action, which makes the cells more susceptible to apoptosis [8]. These mechanisms involve activation of pro-caspases, decreasing the anti-apoptotic Bcl-2 protein, and increasing the sensitivity of pro-apoptotic Bax proteins. The scientific literature reveals that living or dead probiotic cells, their components (cell wall, peptidoglycan, and cytoplasmic fraction), or their metabolites (exopolysaccharides, SCFAs) can produce substantial antiproliferative effects in cancer cell lines [81,113]. Figure 3 describes the probiotic mechanisms reported in the scientific literature as responsible for their anti-carcinogenic activity. ↓: decrease/downregulating; ↑: increase/upregulating; CRC: colorectal cancer; IL-6: cytokine associated with poor prognosis in advanced cancers; SeCys: selenocysteine; SeMet: L-selenomethionine; CFU: colony-forming units; SCFA: short-chain fatty acid; Bcl-2: B-cell lymphoma 2, with a role in apoptosis regulation; Bax gene: modulates apoptosis. Conclusions and Perspectives This paper focuses on the protective and biodetoxification capacities of different probiotic strains. Probiotics are popular for their role in different pathologies, mostly intestine-related ones.
Their impact on gut microorganisms is crucial because they can positively (i.e., biodetoxification from mycotoxins, fungi, acrylamide, metals, and viruses; reduced pro-inflammatory responses; antimutagenic and anti-carcinogenic activities) and negatively (i.e., transfer of antibiotic resistance) modulate human health. Considering that consumers respond differently to probiotics according to age, genetic characteristics of gut bacteria, diet, antibiotic use, and environmental cues, precautions are necessary before their use; for this reason, they should be recommended only by health care personnel/clinicians, while more caution is warranted regarding their market distribution. The biodetoxification mechanisms of probiotics belonging to Lactobacillus, Bifidobacterium, Saccharomyces, and other more or less popular genera (Bacillus, Enterococcus, Escherichia, Streptomyces, Pediococcus) are proven to be influenced by factors specific to (i) the bacterial genus and strain; (ii) the environment; and (iii) the toxin. Probiotics belonging to Lactobacillus are the most studied and are most often correlated with the ability to bind toxins (mycotoxins, heavy metals, etc.) to the cell wall. Reactive functional groups and compounds present in the cell wall, such as proteins, peptidoglycan, and polysaccharides (and 1,3-β-glucan for the yeast cell wall), are recognized as responsible for the probiotic binding capacity. The differences between strains in relation to toxin absorption and binding probably arise from the diversity in cell wall structures and bacterial cell membranes. Other probiotic biodetoxification pathways are correlated with probiotic metabolites, the co-cultivation of different probiotics or different probiotic/compound formulations (i.e., lactulose), gene expression, and sustained fecal excretion. Because probiotics decrease toxin absorption and reduce toxin toxicity, they are correlated with strong anti-carcinogenic and anti-mutagenic action. Based on a thorough review of the capacity of probiotics to react with pathogens, fungal infection, mycotoxins, acrylamide toxicity, benzopyrene, and heavy metals, we conclude that specific probiotic strains and combinations offer significant health outcomes and positively impact in vitro and in vivo detoxification processes. Despite the many publications on the biodetoxification properties of probiotics, their practical application in the detoxification of food and/or feed has been limited. To increase this utilization, we conclude that the specific mechanistic pathways should be elucidated, the toxicity of degradation products should also be studied, and there should be safety regulation on the use of probiotic strains in food matrices and in vivo systems. Conflicts of Interest: The authors declare no conflict of interest.
Charged particles moving around a spherically symmetric dilatonic black hole For the static spherically symmetric dilatonic black hole described by the Gibbons-Maeda-Garfinkle-Horowitz-Strominger geometry, we analyze the timelike trajectories of electrically charged test particles. Both the case of an electric black hole and that of a magnetic one are considered. Finally, we obtain the solution to the Klein-Gordon equation in terms of confluent Heun functions and the corresponding energy spectrum. Special attention is given to the role of the dilaton parameter. Introduction In recent years, different types of dilatonic black hole (BH) solutions have been used for testing theories of gravity by means of black hole shadows [1]. Working in the context of low-energy heterotic string theory compactified to four dimensions, a spherically symmetric static solution describing a charged black hole in the presence of a scalar field was found by Garfinkle, Horowitz and Strominger [2], while the same solution was initially derived by Gibbons and Maeda in 1988 [3]. This metric is now known as the GMGHS solution and, in the last thirty years, it has received considerable attention from the physics community. Besides the BH's gravitational mass M and electromagnetic charge Q, the general solution contains the asymptotic value of the scalar dilaton field, $\phi_0$, while the scalar field has a dilatonic 'secondary hair' $D \propto \frac{Q^2}{M}\,e^{\phi_0}$ [2], which is also conserved, and this has important physical consequences. In our work we shall consider for simplicity the asymptotic value of the dilaton to be $\phi_0 = 0$, such that the dilaton scalar charge is proportional to the dilaton parameter $r_0 = Q^2/M$. The rotating version of the GMGHS solution was found by Sen [4] in the context of the so-called EMDA theory. The EMDA theory, which is obtained from the compactification of the ten-dimensional heterotic string theory on a six-dimensional torus, contains, besides the metric, a dilaton, a Maxwell field and a pseudoscalar axionic field. This rotating solution is characterized by the mass parameter M, the electric charge Q and the angular momentum per unit mass, a. In the absence of rotation, when a = 0, it reduces to the static spherically symmetric GMGHS solution, with the dilatonic parameter $r_0 = Q^2/M$. On the other hand, if the electric charge vanishes, the Kerr-Sen solution reduces to the vacuum Kerr solution. As a low-energy effective field theory of the heterotic string theory, the EMDA theory has attracted a lot of attention in recent years. It is valuable to investigate the role of such a theory in astrophysical observations, as an alternative to usual General Relativity [5].
Recently, the range of the dilaton parameter $r_0 = Q^2/M$ has been estimated by monitoring the geodesic motion of stars in our Galactic Center. Among the short-period stars, the S2 star, with a 16-year orbit around Sagittarius A* (Sgr A*), has been seen as a very attractive target. In 2020, the GRAVITY Collaboration reported the first detection of the General Relativity Schwarzschild precession in its orbit [6]. Using these results, in [7] the dilaton parameter was constrained to $r_0 \lesssim 0.066M$. In [8], a preferred value of $r_0 \approx 0.2M$ was determined using the optical continuum spectrum of quasars. Furthermore, from observations of the shadow diameters of M87* and Sgr A*, in [9] the dilaton parameter was constrained to the interval $0.1M \lesssim r_0 \lesssim 0.4M$. More recently, in [10] a more stringent constraint $r_0 < 0.011M$ was found using X-ray reflection spectroscopy observations. Further studies of the EMDA theory include the investigation of accretion processes in black holes [11] - [13], the use of a Kerr-Sen black hole as a particle accelerator for spinning particles [14], studies of exact solutions of the Klein-Gordon equation in the Kerr-Sen background [15], [16], as well as the connection between the quasinormal modes (QNMs) and the black hole shadow of a rotating Kerr-Sen black hole [17], [18]. From a theoretical point of view, the null and timelike trajectories of uncharged particles around the GMGHS black hole have been extensively worked out. There is a rich literature dealing with this subject; see for example [19] - [22]. However, the case of charged particles, to which the first part of the present work is dedicated, is much more complicated. In our work we make use of a Lagrangean approach to derive the corresponding equations of motion both in the electrically charged case and for the magnetically charged black hole. For the purely electric potential, our results can be compared to those derived by Villanueva and Olivares, who solved the equations of motion using the Hamilton-Jacobi method [23]. In this context, one would expect an excessively large rate of orbital precession due to the stronger attraction between the test particle and the central source. In the magnetic case, our results can be compared to those obtained for instance in [24]. While the full equations of motion for charged test particles in the magnetic case can be integrated exactly, using our Lagrangean approach one can easily show that the motion of electrically charged particles in the presence of a magnetically charged black hole is confined to so-called Poincaré cones of various angles. In particular, using our method it is easy to determine the characteristics of the Poincaré cones to which the motion of electrically charged particles is bound and their dependence on the physical quantities describing the GMGHS geometry and the charged test particle. We also investigate the existence of circular orbits located at the intersection of a Poincaré cone with a sphere. On the other hand, it is well known that perturbations of a black hole spacetime can be investigated by considering relativistic particles evolving in the corresponding manifold. That is why, in the second part of the paper, we discuss the Klein-Gordon equation for charged particles moving in the background of a GMGHS black hole. For uncharged particles, the Klein-Gordon equation was investigated in [26], where a relation between null geodesics and quasinormal mode frequencies in the eikonal approximation was found.
In the case considered in the present paper, the Klein-Gordon equation for charged particles contains additional terms due to the interaction with the electric or magnetic fields. The solution to the radial equation is obtained in terms of confluent Heun functions [27]-[30]. The so-called resonant frequencies are essential characteristics of a black hole, and they can be derived by imposing the requirement that the Heun function reduces to a polynomial [30].

In general, the Heun functions are the unique local Frobenius solutions to a second-order linear ordinary differential equation of Fuchsian type with four regular singular points. Once two of the singularities coalesce, one obtains the so-called confluent Heun functions, with two regular and one irregular singularities. Their main advantage is that they provide analytical solutions to a large number of problems encountered in theoretical and applied sciences. That is why, in the last two decades, there has been a rising number of articles on general or confluent Heun functions, in view of their wide range of applications. However, especially because of their singularities, there are unsolved problems related to their normalization, series expansions and integration techniques, and many specialists still rely on approximate methods.

In relation to exact solutions of the Klein-Gordon equation, the theory of Heun functions provides a direct way of obtaining the quasinormal frequencies, by imposing that the Heun functions take a polynomial form. One may then use this result to compute the temperature at the black hole's horizon and the corresponding emission rate. The results obtained in this way agree with those derived using the standard WKB method [31].

The present paper is organized as follows. In the next section we introduce the GMGHS geometry. Note that this geometry can be sourced by an electromagnetic field using either an electric ansatz (in which case the black hole is electrically charged) or a magnetic ansatz (in which case the black hole carries a magnetic charge); the motion of electrically charged test particles has different characteristics in each case. In Section 2.1 we consider the electrically charged black hole and study the effective potential and the timelike trajectories of charged particles in this background. In Section 2.2 we consider the case of a magnetically charged black hole, in which the motion of a charged particle is confined to Poincaré cones of various angles. In Section 2.3 we address the case of circular motion on the Poincaré cones; the corresponding trajectories of constant r are the intersection of a sphere with the Poincaré cone. In Section 3 we deal with the Klein-Gordon equation and its analytical solution, with special attention given to the energy spectrum and its dependence on the dilatonic parameter. The last section is dedicated to conclusions and avenues for further work.
2 Charged particles in the GMGHS geometry

The GMGHS geometry

Let us start with the static spherically symmetric charged dilatonic black hole line element known as the Gibbons-Maeda-Garfinkle-Horowitz-Strominger (GMGHS) solution:

ds² = −f(r) dt² + f(r)⁻¹ dr² + R(r)² (dθ² + sin²θ dφ²),   (1)

where

f(r) = 1 − 2M/r

and

R(r)² = r (r − r_0), with r_0 = Q²/M.

In the above expressions, M and Q are the mass and charge of the black hole, and the dilaton field is given by:

e^{2φ} = 1 − r_0/r.

This geometry is sourced by a dilaton field φ, which is coupled to the electromagnetic field in the action. Note that, for simplicity, we have chosen in the above expressions the asymptotic value of the dilaton field as φ_0 = 0 [2]. One can use the magnetic ansatz for the electromagnetic potential, A_μ = (0, 0, 0, −Q_m cos θ), in which case the black hole carries a magnetic charge. In the electrically charged case one has A_μ = (−Q/r, 0, 0, 0), and the black hole is endowed with an electric charge. In both cases the black hole geometry remains the same as in (1), although in the electric case the dilaton field changes its sign. This charged black hole has a regular event horizon at r_h = 2M, identical to the Schwarzschild one, while the theoretical range of the parameter r_0 is 0 < r_0 < 2M. In recent years, however, there have been efforts to put experimental bounds on r_0 using astrophysical observations [8]. Recently, within a fully relativistic orbital model for the S2 star in the Galactic Center of the Milky Way, it was shown that improved astrometric precision can narrow down the dilaton parameter; moreover, by taking into account the information about the orbital precession, the upper limit on r_0 has been set at r_0 ≤ 1.6M [7].

If one now considers a charged particle moving in the field generated by the electromagnetic potential A_μ, the motion is described by the Lagrangian:

L = (1/2) g_{μν} ẋ^μ ẋ^ν + q A_μ ẋ^μ,

where q = e/m is the specific charge of the test particle with charge e and mass m.

The electrically charged black hole

In what follows, for the line element (1), we shall analyze how the value of the black hole's charge affects the trajectories of charged particles. We focus first on the case of an electric black hole, with the electromagnetic tensor F_{μν} generated by the four-vector potential (5); the magnetic ansatz will be discussed later.

Compared to the motion of uncharged particles [19]-[21], for a particle of charge q and unit mass moving along timelike trajectories the situation is more complicated, since the Lagrangian contains an additional coupling term. For the cyclic coordinates t and φ, one can define two conserved quantities, the energy and the angular momentum:

E = f ṫ + qQ/r,   L = R² φ̇.

Following the usual procedure, we substitute these into the normalization condition g_{μν} ẋ^μ ẋ^ν = −1, where dots denote derivatives with respect to the proper time τ. Similarly to the Schwarzschild case, the trajectories are planar, and therefore one may consider particles moving in the equatorial plane (θ = π/2). One finds the important relation

ṙ² = (E − V₊)(E − V₋),   (10)

where

V_±(r) = qQ/r ± √( f(r) (1 + L²/R(r)²) ).   (11)

On the event horizon r_h = 2M, both expressions for the electric potential (11) reduce to V_h = qQ/(2M), which can be either positive or negative, depending on the sign of qQ. For large values of the radial coordinate, these potentials tend to ±1.
Since the negative branch V₋ has no classical interpretation, being associated with antiparticles in the framework of quantum field theory, from now on we shall consider the positive potential V₊ given in (11) as the effective potential in which the particle evolves. The first term in V₊ represents the Coulomb interaction of the particle of charge q with the charged black hole, while the second term corresponds to the neutral-particle case. Depending on the sign of qQ, the Coulomb contribution to the potential can be either attractive or repulsive. In terms of rescaled quantities, the effective potential takes the form (12).

Figure 1: The effective potential given in (12), against the rescaled radial coordinate x = r/(2M). The numerical values are Q = 0.3, q = −4, L = 4.2. The horizontal yellow, red and green lines correspond to the particle's energy, with V_max = 1.11.

The potential (12) is represented in Figure 1, and one may notice that it allows different types of trajectories for the specified values of the black hole's charge and the particle's energy and angular momentum. Thus, for 1 < E < V_max, the particle whose energy is represented by the red horizontal line has unbounded trajectories. There are two turning points, denoted by x_1 and x_2 (from left to right), solutions of the equation E = V_eff. The particle starting from x_0 > x_2 reaches the turning point x_2 and goes back to infinity, while the particle coming from 1 < x_0 < x_1 is attracted into the black hole.

The particle with E = V_max (the green horizontal line) has an unstable circular orbit. Depending on the starting point x_0, the particle eventually tends to the singularity (left panel of Figure 2) or escapes to infinity (right panel of Figure 2).

One may notice in Figure 1 that there is a region which allows periodic bounded trajectories, for energies V_min < E < 1, represented by the horizontal yellow line. The parametric plot in the equatorial plane of such a closed trajectory is given in Figure 3. For E = V_min, the particle undergoes stable circular motion. The innermost stable circular orbits (ISCO) around the black hole for charged particles have been studied recently in [32], using a numerical approach. A detailed discussion of the radial and angular motions, with different types of trajectories, can be found in [23]; for the Kepler-like orbits, the precession angles were also considered there.

The left panel of Figure 4 is a graphical representation of the effective potential (12) for the angular momentum L = 1.7. Note that this value is less than L/M = √12, below which, in the Schwarzschild case (represented by the red plot), all trajectories end up inside the black hole. The blue, green and black plots correspond to Q = 0.3, Q = 0.5 and Q = 0.7, respectively. One may notice that, as Q increases, the maximum of the potential grows strongly and moves closer to the horizon. Thus, a particle with q > 0 and E < V_max approaching this region from large distances will not fall into the black hole.

On the other hand, similarly to the Schwarzschild BH, the green and black plots do not show a minimum of the potential. However, it is important to identify a finite domain where particles move on stable bound orbits, neither falling into the black hole nor escaping to infinity. In this respect, for Q = 0.3, i.e. r_0 = 0.36M, a small potential well is formed, as can be noticed in the right panel of Figure 4. Thus, r_0 = 0.36M is a physically important value of the dilaton parameter, for which particles with V_min < E < 1 can be trapped on bounded periodic orbits. This value is well below the theoretical limit r_0 = 2M and agrees with the conclusions of [7]. One may therefore conclude that the dilaton black hole charge Q has a strong influence on the shape of the potential: there are values of the dilaton parameter for which particles move on stable circular orbits or are trapped on bounded trajectories.
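The extrema of V₊ that organize this discussion are easy to locate numerically. The following minimal Python sketch assumes the form of V₊ reconstructed in (11), with f(r) = 1 − 2M/r and R(r)² = r(r − r_0); in units 2M = 1, with the parameter values quoted in Figure 1, it reproduces V_max ≈ 1.11 near x ≈ 2, and the same scan locates the minimum V_min that bounds the well of Figure 4 for other parameter sets.

```python
import numpy as np

# Effective potential for a charged particle around the electric GMGHS
# black hole, assuming the form given in (11):
#   V_+(r) = qQ/r + sqrt(f(r) * (1 + L^2/R(r)^2)),
# with f(r) = 1 - 2M/r, R(r)^2 = r * (r - r0), r0 = Q^2/M.
def v_plus(r, M, Q, q, L):
    r0 = Q**2 / M
    f = 1.0 - 2.0 * M / r
    R2 = r * (r - r0)
    return q * Q / r + np.sqrt(f * (1.0 + L**2 / R2))

# Units 2M = 1 (so M = 0.5); Q, q, L as quoted in Figure 1.
M, Q, q, L = 0.5, 0.3, -4.0, 4.2
r = np.linspace(2.0 * M * (1.0 + 1e-6), 60.0 * M, 400_000)
V = v_plus(r, M, Q, q, L)

# Locate local extrema through sign changes of the numerical derivative.
dV = np.gradient(V, r)
for i in np.where(np.diff(np.sign(dV)) != 0)[0]:
    kind = "max" if dV[i] > 0 else "min"
    print(f"local {kind} at x = r/(2M) = {r[i] / (2 * M):.3f}, V = {V[i]:.4f}")
```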
The magnetic ansatz

In the case of the magnetic black hole of charge Q_m, with the magnetic field generated by the vector potential component A_φ = −Q_m cos θ, the corresponding Lagrangian describing the motion of an electrically charged particle of unit mass and charge q acquires the magnetic coupling term. Since the coordinates t and φ are cyclic, the conserved energy becomes E = f ṫ, while the conservation of the total angular momentum along the Oz axis corresponds to the constant L_m in (15).

Generically, one expects the motion of an electrically charged particle in the field of a magnetic monopole to be confined to so-called Poincaré cones [25]. This is due to the SO(3) symmetry of the system, which is reflected in the geometry even though the Lagrangian describing the motion does not exhibit it manifestly. Indeed, the black hole geometry (1) admits three spacelike Killing vectors, which can be used to define the components of the orbital angular momentum L⃗ in (17), as well as a nontrivial second-order Killing tensor, from which the Carter constant (19) is constructed. The Carter constant is related to the square of the magnitude of the orbital angular momentum, K = L².

If the motion were geodesic, all of these quantities would be conserved along the motion. In our case, however, the motion is non-geodesic, due to the electromagnetic forces acting on the electrically charged particle in the magnetic field of the monopole. Nonetheless, even though the components of the orbital angular momentum (17) are not conserved during the motion, the Carter constant (19) is, as we shall see below.

To this end, one should note that the electrically charged particle carries an angular momentum S⃗ of constant magnitude, proportional to −qQ_m, directed along the radial direction connecting it to the magnetic monopole at the origin: S⃗ = −qQ_m r̂. Defining the total angular momentum J⃗ = S⃗ + L⃗, one can check directly that all of its components are conserved along the motion, which means that the vector J⃗ is constant. To verify this, one has to use the explicit equations of motion derived from (14) for the θ and φ variables. Note also that the constant L_m in (15) corresponds directly to the z-component of the total angular momentum, J_z = L_m.

Using now the relation J⃗ · r̂ = −qQ_m, one sees that the vector r⃗ describes a cone whose axis is along J⃗, with opening angle χ = 2α, where α is given by (22) below. A second relation points to the existence of another cone, of constant angle β, on which the orbital angular momentum L⃗ precesses around J⃗.
Since |J⃗| and |S⃗| are constant, the magnitude L = |L⃗| is constant as well, and hence the Carter constant in (19) is conserved, as expected. Since r̂ is orthogonal to L⃗, it follows that α + β = π/2. Thus, the particle's trajectory lies on a three-dimensional cone whose opening angle χ = 2α is completely determined by the total angular momentum and the charge combination qQ_m:

tan α = L/(qQ_m).   (22)

Finally, the angle between the direction of the total angular momentum J⃗ and the Oz axis is fixed by the conserved ratio J_z/|J⃗|.

Returning now to the characteristics of the motion along the angular directions, by substituting into the Carter constant (19) one obtains an important relation for the θ-motion. Its right-hand side must be a positive quantity, which confines θ between the two values θ_1 and θ_2 given in (27). The conditions cos θ_{1,2} ∈ [−1, 1] lead to the range of the angular momentum given in (29).

Finally, using the relations (24) and (25) in the normalization condition g_{μν} ẋ^μ ẋ^ν = −1, one obtains the equation (30) describing the r-motion. This has the same form as the one for uncharged particles with K = J_z², whose trajectories, in the equatorial plane, have been investigated by many authors, for example in [19] (see also [33-35]). The difference is that, in our case, the charged particle with angular momentum in the range (29) follows a trajectory which lies on a three-dimensional Poincaré cone, as we have seen above.

On the other hand, since K = L² and J² = L² + (qQ_m)², the relations in (27) can be re-expressed in terms of L and qQ_m. These define the angles of two cones which, in general, are not symmetric with respect to the xOy plane, and the motion of the test particle is confined between them.

The periodic bounded trajectory of a particle whose orbital momentum is in the allowed range (29), and whose energy is in the classical region V ≤ E < 1, can be obtained numerically, using Maple or Mathematica. In this respect, we start with the Lagrangian (14) and derive the system of Euler-Lagrange equations (31), where a dot denotes the derivative with respect to τ and a prime the derivative with respect to r. The equations in (31) were solved numerically with the Maple software, implementing a 4th-order Runge-Kutta algorithm; as a consistency check, we used the first integral of motion given by (30).

In Figure 5 we represent the potential defined in (30), as a function of the rescaled coordinate x = r/(2M), for several values of a = Q_m². The black dashed plot corresponds to the Schwarzschild BH (a = 0). As the charge Q_m increases, the maximum of the effective potential also increases, and the curves approach the value V = 1 for large values of the radial coordinate. A particle with energy E = √0.95, represented by the black horizontal line, follows a periodic bounded orbit on a cone whose opening angle depends on the value of Q_m. Thus, as Q_m decreases, the cone's angle becomes wider, as can be seen in Figure 6. One may easily check that, for Q_m → 0, one recovers the usual Schwarzschild case, with the periodic bounded orbit in the xOy plane.
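The cone geometry encoded in (22) is easy to evaluate. The short Python sketch below computes the opening angle χ = 2α and the complementary angle β = π/2 − α from L and qQ_m, using J² = L² + (qQ_m)²; the parameter values echo Figure 6 (with 2M = 1 and K = L² = 4, so L = 2), and they reproduce the widening of the cone as Q_m decreases.

```python
import numpy as np

# Poincare-cone geometry for an electrically charged particle in the
# magnetic GMGHS background, using tan(alpha) = L/(q*Qm) from (22) and
# J^2 = L^2 + (q*Qm)^2. Values echo Figure 6 (2M = 1, K = L^2 = 4).
q, L = 3.0 * np.sqrt(2.0), 2.0             # K = 4  =>  L = 2
for Qm in (np.sqrt(0.32), np.sqrt(0.08)):
    sQ = q * Qm                            # |S| = |q * Qm|
    J = np.hypot(L, sQ)                    # magnitude of the total momentum
    alpha = np.arctan2(L, sQ)              # half-opening angle of the cone
    beta = np.pi / 2 - alpha               # cone swept by L around J
    print(f"Qm = {Qm:.3f}: J = {J:.3f}, chi = {np.degrees(2 * alpha):.1f} deg, "
          f"beta = {np.degrees(beta):.1f} deg")
```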
Circular motion on the Poincaré cone

At the end of this section, let us briefly discuss the trajectories of constant r, which may be called circular orbits. The meaning of the term 'circular motion' used here was explained in [34]: a constant-r trajectory lies on a sphere and, at the same time, on the Poincaré cone, and the intersection of the two is, again, a circle. According to relation (21), by changing the angular momentum one is able to change both the opening angle and the orientation of the cone on which the circular orbit is situated.

As is well known, a circular orbit of radius r_c must satisfy the usual conditions E² = V(r_c) and V′(r_c) = 0, where V(r) is the potential defined in (30). These determine the energy of the corresponding particle and the Carter constant K = L² as functions of r_c. Imposing that these quantities be positive, i.e.

2 r_c² − (6M + r_0) r_c + 4M r_0 > 0,

we obtain the allowed range of the circular-orbit radius, r_c > r_*. The minimum value r_* depends on the dilaton parameter r_0 and takes values in the range r_* ∈ (2M, 3M). Thus, for r_0 = 0 we recover the unstable circular orbit of the Schwarzschild black hole, r_c = 3M; as r_0 increases, r_* decreases, reaching r_* = 2M for r_0 = 2M. For the potential with a = Q_m² = 0.125, represented in Figure 5 by the blue line, there is an unstable circular orbit and a stable one, corresponding to the maximum and minimum values of the potential; these trajectories are represented in Figure 7. One may conclude that the radius of the circular orbit and its allowed range depend on the dilaton parameter. As Q_m decreases, the circular orbit lies on a cone of wider and wider angle, until it reaches the xOy plane for Q_m → 0.
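The lower bound r_* follows directly from the quadratic inequality above. A minimal numerical check (in units M = 1) reproduces the limits quoted in the text, r_* = 3M for r_0 = 0 and r_* → 2M as r_0 → 2M.

```python
import numpy as np

# Lower bound r_* of the circular-orbit range on the Poincare cone, taken
# as the larger root of 2 r^2 - (6M + r0) r + 4 M r0 = 0 (units M = 1).
def r_star(r0, M=1.0):
    b, c = -(6.0 * M + r0), 4.0 * M * r0
    return (-b + np.sqrt(b * b - 8.0 * c)) / 4.0

for r0 in (0.0, 0.36, 1.0, 2.0):   # r0 up to its theoretical limit 2M
    print(f"r0 = {r0:.2f} M  ->  r_* = {r_star(r0):.4f} M")
# Expected: 3.0000 for r0 = 0 (Schwarzschild), decreasing to 2.0000 at r0 = 2M.
```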
The analytical solution in the electric case

For a charged particle characterized by specific charge q > 0 and mass m_0, the Klein-Gordon equation has the general form (33), where the gauge derivatives contain the electromagnetic potential component A_t given in (5). With the separation of variables

Φ = e^{−iωt} R(r) Y_l^m(θ, φ),

where Y_l^m are the usual spherical harmonics, one obtains the radial equation (35), whose solutions can be found using Maple. Up to integration constants, they are given by the confluent Heun functions [27], as in (36), with k² = ω² − m_0², and the Heun function's parameters (37) involve the quantity

a = r_0/(4M) = Q²/(4M²),

which lies in the physical range a ∈ [0, 0.5]. In what follows, we shall consider the value β = −2i(ω − qQ); the solution with β = +2i(ω − qQ) corresponds to ω → −ω and q → −q.

The absolute value of the radial function (36) is represented in Figure 8, as a function of the rescaled coordinate x = r/(2M), for different values of the dilaton parameter a. The probability density is given by the square modulus of the confluent Heun function. One may notice that |R|² = 1 on the horizon; it exhibits a series of decreasing local maxima and vanishes for large r. The black hole's charge affects the parameters and the variable of the Heun function: the maxima become higher as Q increases, and they are shifted to larger radial distances.

In order to study the scattering of the charged scalar field in the background of this dilatonic black hole, one has to bring the radial equation to a Schrödinger-like form. Thus, with the change of function R(r) = F(r)/p and the use of the tortoise coordinate, the radial equation (35) takes the Schrödinger-like form (38). In the eikonal approximation, i.e. for large values of l, and for m_0 = 1, this turns into a simple expression whose potential has the same form as the one in (10); it depends on the particle's charge and angular momentum, as well as on the black hole's parameters M and Q.

Energy spectrum

The confluent Heun function takes a polynomial form when its parameters satisfy the condition (39) [27,28], and the general solution of (33) can then be written as in (40), where R_nl is given in (36). For the parameters (37), this condition leads to the cubic equation (41) for the complex energy ω = ω_R + iω_I. Among the three roots of (41) we choose, for each value of n, the complex one with negative imaginary part, so that the function (40) decreases exponentially as t → ∞. The real part ω_R and the absolute value of the imaginary part ω_I are represented in Figure 9 for different values of n. The imaginary parts take almost equally spaced numerical values, which increase with n; these are projected on the red vertical line.

In the massless case, the radial function corresponding to the uncharged boson has the same expression as in (36), with k = ω, while the Heun function's parameters take the simple form (42). If one imposes the relation (39), the confluent Heun functions acquire a polynomial form. Using the expressions of the parameters (42) in the condition (39), we get the same energy quantization law as in the Schwarzschild case [30], namely (43).

The magnetic ansatz

For the magnetic black hole, we use the electromagnetic potential component (13) in the expression of the gauge derivative, D_φ = ∂_φ + iqQ_m cos θ, which makes the Klein-Gordon equation (33) explicit. With the separation of variables one obtains a system of decoupled equations: the azimuthal equation is satisfied by hypergeometric functions, while the solutions of the radial equation can be written in terms of confluent Heun functions, with the same variable as in (36) and the parameters (37) written for qQ = 0. The relation (41) then becomes the same as the one corresponding to a massive particle in the Schwarzschild black hole [30].
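The root-selection step just described is mechanical and easy to automate. In the sketch below the cubic coefficients are purely illustrative placeholders (the actual ones follow from the parameters (37) and the polynomial condition (39) for each overtone number n); only the selection logic is the point.

```python
import numpy as np

# Selection of the physical quasinormal root: among the three roots of the
# cubic (41) in omega, keep the one with the most negative imaginary part,
# so that exp(-i*omega*t) decays as t -> infinity. The coefficients below
# are hypothetical stand-ins, ordered from the highest degree downwards.
def decaying_root(coeffs):
    roots = np.roots(coeffs)
    return roots[np.argmin(roots.imag)]   # most negative Im(omega)

omega = decaying_root([1.0, -0.5 + 0.2j, 0.3, -0.1])   # illustrative cubic
print(f"omega_R = {omega.real:.4f}, omega_I = {omega.imag:.4f}")
```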
Conclusions

In the present work we considered the background geometry of the GMGHS black hole. This geometry is a solution of the low-energy field description of heterotic string theory compactified down to four dimensions on a six-dimensional torus. It is described by the black hole mass M, the black hole charge Q (in the electric or the magnetic case) and the asymptotic value of the dilaton field φ_0; for simplicity, we have considered a vanishing asymptotic value, φ_0 = 0. Note that there also exists a 'secondary hair', described by the dilaton charge D, which in this case is proportional to the dilaton parameter r_0 = Q²/M. Since this black hole is charged, the motion of electrically charged particles has new interesting features compared to that of uncharged test particles, due to the extra interaction between the particle and the electromagnetic field of the background.

In our work we used a Lagrangian approach to derive the corresponding equations of motion, both for the electrically charged case and for the magnetically charged black hole. In the electric case our results can be compared to those derived by Villanueva and Olivares, who solved the equations of motion using the Hamilton-Jacobi method [23]. For charged particles moving in an electric field, the effective potential (11) is represented in Figure 1. The shape of the potential depends on the model's parameters and leads to different types of trajectories, as can be noticed in Figures 2 and 3. Using the potential represented in Figure 4, we have shown that bounded periodic trajectories are obtained for the preferred value Q = 0.3, i.e. r_0 = 0.36M. This conclusion agrees with the experimental bounds on r_0 imposed by astrophysical observations: for example, the optical continuum spectrum of quasars provided r_0 ≈ 0.2M as a preferred value [8], while the measured shadow diameters of M87* and Sgr A* suggested the allowed interval r_0 ∈ [0.1M, 0.4M] [9].

The case of a magnetically charged black hole was addressed in Section 2.2, where we described the main characteristics of the motion of an electrically charged particle in this background. We showed directly the conservation of the total angular momentum J⃗, which defines the direction of the axis of the Poincaré cone on which the motion of the particle takes place. In particular, the angular momentum component J_z of the charged particle must lie in the range (29); as Q_m increases, this range widens. The values θ_1 and θ_2 in relation (27), which correspond to the turning points θ̇ = 0, define the angles of two cones that bound the motion of test particles. One may notice that for nonzero J_z and Q_m the two cones are not symmetric with respect to the equatorial plane.

The particle's trajectory is not only constrained by these two cones, but also lies on a three-dimensional Poincaré cone. As can be seen in equation (21), the opening angle of this cone depends on the angular momentum and on the magnetic charge Q_m. As the black hole's charge decreases, the cone's angle becomes wider and the cone much flatter, as can be noticed in Figure 6.

In the magnetic case our results can be compared with those obtained, for instance, in [24]. While the full equations of motion for charged test particles in the magnetic case can be integrated exactly in terms of Weierstrass's special functions, within our Lagrangian approach one can easily show that the motion of electrically charged particles in the presence of a magnetically charged black hole is confined to Poincaré cones of various angles. With our method it is easy to determine the characteristics of the Poincaré cones to which the motion is bound, and their dependence on the physical quantities describing the GMGHS geometry and the charged test particle. We also investigated the existence of circular orbits located at the intersection of a Poincaré cone with a sphere.
The second part of our paper is dedicated to finding exact solutions of the charged Klein-Gordon equation in the GMGHS background. The role of the scalar field has been extensively investigated in different contexts, and analytical solutions to the Klein-Gordon equation are particularly important [36]. In the Einstein-Maxwell-dilaton theory, the scattering and absorption of a scalar field impinging on a charged black hole have been discussed in [37]. Recently, Heun-type solutions of the wave equation of scalar particles for different types of black holes have been derived in [38,39].

In our work, we solved the Klein-Gordon equation in the GMGHS background described by the line element (1) and expressed its analytical solution in terms of confluent Heun functions. As can be noticed in Figure 8, the value of the black hole's charge strongly affects the behavior of the radial amplitude function: as Q increases, the maxima become more prominent and move to larger r-values.

Even though the radial equation (35) and its solutions are the same as those derived in [40], the energy spectrum and its dependence on the model's parameters are quite different, the conclusions in [40] being based on a quartic equation. In our case, we derived the cubic equation (41) and, among its three solutions, we chose, for each value of n, the complex root with ω_I < 0, so that the function (40) decreases exponentially as t → ∞.

The presence of the dilaton has a strong influence on the quasinormal spectrum, and we are particularly interested in the imaginary part of ω, which characterizes the decay rate of the oscillations. As can be noticed in Figure 9, ω_I takes almost equally spaced numerical values, which increase with increasing n; these values are projected on the vertical thin red line. The sign of the test particle's charge q affects only the real part of ω, which changes sign, while the imaginary part remains the same.

For massless bosons, the situation is similar to the Schwarzschild case [30], although the parameters and the variable of the confluent Heun function are affected by the black hole's charge; the equally spaced energy spectrum (43) remains the same as for the Schwarzschild black hole. For fermions, a similar analysis has been performed in [41].

For a direct transition from the Klein-Gordon equation to the classical limit of particles moving along timelike geodesics, we have shown that, in the eikonal limit, the effective potential in (10) has the same form as the one appearing in the Schrödinger-like equation (38).

As an avenue for further work, it would be interesting to extend the above analysis to the more general case of the charged dilatonic black hole with an arbitrary dilatonic coupling constant. In particular, for a specific value of the coupling constant one recovers the black hole geometry obtained by Kaluza-Klein compactification from the corresponding five-dimensional black hole, and it might be of interest to see how the black hole properties in five dimensions are related to those of the four-dimensional Kaluza-Klein compactification [42]. Work on these issues is in progress and will be reported elsewhere.

Conflicts of Interest. The authors declare that they have no conflicts of interest. Data Availability. No data were used to support this study.
Figure 2: Left panel: unstable circular orbit for the particle with E = V_max = 1.11, represented by the green horizontal line in Figure 1; the initial coordinate is x_0 = 1.9580. Right panel: unstable circular orbit for the particle with E = V_max = 1.11 and starting point x_0 = 1.9583. The blue circle is the horizon.

Figure 6: Orbits on cones for the potential represented in Figure 5. The solid sphere represents the horizon. The left figure is for Q_m = √0.32 and the right one for Q_m = √0.08. The other numerical values are: 2M = 1, K = 4, E = √0.95, q = 3√2, J_z = 1.5.
Preference Construction Processes for Renewable Energies: Assessing the Influence of Sustainability Information and Decision Support Methods

Sustainability information and decision support can be two important driving forces for making sustainable transitions in society. However, not enough knowledge is available on the effectiveness of these two factors. Here, we conducted an experimental study to test the hypotheses that the acquisition of sustainability information and the use of decision support methods consistently construct preferences for renewable power generation technologies that use solar power, wind power, small-scale hydroelectric power, geothermal power, wood biomass, or biogas as energy sources. The sustainability information was prepared using a renewable energy-focused input-output model of Japan and covered life cycle greenhouse gas emissions, electricity generation costs, and job creation. We measured rank-ordered preferences in the following four steps in experimental workshops conducted for municipal officials: provision of (1) energy-source names; (2) sustainability information; (3) an additional explanation of public value; and (4) knowledge and techniques concerning multi-attribute value functions. The degree of change in preference orders was evaluated using Spearman's rank correlation coefficient, and the consistency of rank-ordered preferences among participants was determined using the maximum eigenvalue of the coefficient matrix. The results show that: (1) individual preferences changed substantially in response to the sustainability information and the decision support method; and (2) the rank-ordered preferences became more consistent during the preference construction processes. These results indicate that the provision of sustainability information, coupled with decision support methods, is effective for decision making regarding renewable energies.

Introduction

Explicit appraisal of the three dimensions of sustainability (environment, society, and economy) is the key to transitioning toward a renewable energy-based society [1,2]. For example, environmental problems such as global warming and biodiversity degradation are indispensable criteria for the successful introduction of renewable energy into society. Furthermore, the social dimension has recently gained importance in the assessment of energy technologies [3]; a typical example is labor issues, such as the use of child labor, which are not limited to developing countries. Finally, economic sustainability should not be neglected in establishing a sustainable society, because profitable technologies for utilizing renewables are the foundation of a sustainable transition.

In operationalizing the assessment of sustainability, a decision support framework plays an important role [4]. The recent extension of life cycle assessment (LCA) into life cycle sustainability assessment (LCSA) [5] provides a good opportunity to understand the importance of the decision support framework. Although the relationship between decision analysis and LCA has already been discussed [6-8], the use of LCSA necessitates indicator-based assessment covering the environmental, social, and economic pillars throughout entire supply chains. The plurality of indicators as development goals is important in designing regional energy strategies, as is the plurality of alternative scenarios under multiple stakeholders [9].
However, there is limited evidence on whether the provision or acquisition of sustainability indicators actually changes people's behavior. Clarifying the influence of sustainability indicators on energy decision making is important in establishing democratic and science-based policymaking processes. Likewise, there is limited evidence on whether the use of a decision support framework changes people's behavior. If simple provision or acquisition is not enough to improve energy decisions, it becomes necessary to apply decision support methods so that the sustainability indicators are used efficiently.

Therefore, we analyzed how people's decisions on renewable energies change in association with their knowledge of the consequences of using renewable energy (whether they acquired sustainability information) and their knowledge of decision support (whether they studied how to use sustainability information).

Hypotheses in Preference Construction Processes for Renewable Energies

The Japanese public, for example, is not necessarily familiar with energy problems, and it (including experts in energy sciences and politics) tends to exhibit unstable preferences in its energy decisions. Therefore, the perspectives provided by preference construction theory [10] are crucial in designing regional renewable energy scenarios. Preference construction has recently attracted much attention in a wide variety of research fields, including group processes, consumer choices, economic theory, and environmental valuation, because people, in general, construct preferences rather than revealing well-articulated, pre-stored, and retrievable preferences [10-14]. In this study, we focused our attention on preference construction in decisions made regarding renewable energies.

Despite its importance, not enough research on preference construction has been conducted in practical settings. One reason for this is that in LCA, preferences are provided within the life cycle impact assessment (LCIA) as predetermined weights, which have already been prepared by LCIA methodology developers. LCA practitioners simply use these weights, although a typology based on sociology (cultural theory) is used in some LCIA methods (Eco-indicator 99 and ReCiPe) to cope with the different views of stakeholders [15]; this provides a striking contrast to decision analysis approaches such as structured decision making (SDM) [16].

We analyzed two types of changes in preference construction processes. The first is change caused by the provision of information, which is information acquisition from the perspective of decision makers; the problem here is how the provision of sustainability information changes rank-ordered preferences regarding renewable energies. The second is change caused by the use of decision support methods; here, we concentrated on how the use of multiple criteria decision analysis based on value functions changes the rank-ordered preferences for renewable energies. Furthermore, we analyzed the changes in the consistency of rank-ordered preferences among participants during the preference construction processes.
Rank-Ordered Preferences in Individual Participants

People, in general, do not possess enough knowledge about renewable energies; instead, they make decisions using intuitive impressions empirically formulated from the limited information available to them. This corresponds to the situations in which classical descriptive decision analysis studied heuristics and biases in human judgment [17], and it is the reason why deliberative polling combines information provision with group discussions [18]. Hence, acquiring sustainability information about a wide range of renewable power generation technologies will reconstruct participants' understanding of the forms of renewable energy and change their rank-ordered preferences.

Hypothesis 1. Provision of sustainability information changes rank-ordered preferences about renewable energy.

Sustainability information includes, in many cases, abstract indicators related to the environmental, social, and economic pillars, and people have difficulty understanding the meaning of these indicators. The difficulties can be related to the dichotomy between public and private. Hence, acquiring a supplemental explanation of sustainability information from another perspective (public and private values) will change rank-ordered preferences.

Hypothesis 2. Provision of a supplemental explanation of sustainability information from a public-private perspective changes rank-ordered preferences regarding renewable energy.

Although information provision is expected to change rank-ordered preferences, the procedures used to construct preferences remain heuristic. In general, when decisions involve complex preferences, major uncertainties, and important consequences, relying entirely on intuition is not a useful process [19]. Hence, the use of methods to analyze and support decisions (formalized ways to construct preferences) will change rank-ordered preferences. It is known that the application of even simple linear scoring rules improves decisions [20].

Hypothesis 3. The use of decision support methods changes rank-ordered preferences for renewable energy.

Consistency of Preferences among Participants

We focused our attention in this study on individual decision making, and thus did not analyze group processes. That is, direct interactions among plural decision makers and their impacts on preferences, as well as group decision support methods such as decision conferencing [21], were outside the scope of this study. However, the provision of sustainability information and the use of decision support methods will change participants' rank-ordered preferences in a coherent manner, and as a result, the differences in rank-ordered preferences among participants could become smaller.

Hypothesis 4. Rank-ordered preferences concerning renewable energy become more consistent during the preference construction process, with the provision of sustainability information and the use of decision support methods.

Preference Elicitation through Organizing Workshops

As a method to gather preferential data in hypothetical but realistic situations, we organized workshops to elicit the preferences of respondents in the city of Tsuru (Yamanashi Prefecture) on 6, 13, and 14 November 2014. Municipal officials who had found employment one or two years previously were selected as participants (respondents, decision makers). The number of participants was 18 and the number of effective samples was 14, because not all participants were able to attend all of the workshops.
Among the 14 participants (effective samples), 6 began working in their current office in 2013 and the other 8 started in 2014. The average ages of the former and latter groups were 26.2 and 24.6 years, respectively. Three participants were female. Four participants worked for the Policy Formulation Division, which was responsible for organizing the workshop on the municipal office side; overall, the participants worked in nine different divisions of the city government.

Rank-Ordering of Power Generation Technologies Using Renewable Energy Sources

The example decision problem used in the workshop had six alternatives and three criteria. Participants were asked to rank-order the alternatives, rather than to choose the most preferable one. Rank-ordering was based on multiple criteria decision analysis and on two types of cards: one simply provided the names of the power generation technologies, and the other provided the attribute values (profiles) of the technologies.

The rank-ordered preferences of participants were determined for power generation technologies using renewable energy sources: solar power, wind power, small-scale hydroelectric power, geothermal power, wood biomass, and biogas, each of which is important in current Japanese energy policies [22].

The criteria by which the alternative power generation technologies were evaluated were (1) life cycle greenhouse gas emissions (kg-CO2); (2) life cycle job creation (person-day); and (3) electricity generation costs (Yen). Each criterion was measured per the amount of electricity used by an average household annually (4.8 MWh) and corresponded to an environmental, social, or economic aspect of sustainability.

The 6 x 3 evaluation matrix shown in Table 1 was tentatively prepared for use in the workshop, using a renewable energy-focused input-output model of Japan; the detailed calculation structure and more refined numerical values are provided in the final version of the model [22]. During the workshop, we referred to the contents of the table as sustainability information. Note: each criterion was measured per the amount of electricity used by an average household annually (4.8 MWh).

Decision Support Method

We applied multiple criteria decision analysis based on multi-attribute value functions, which is commonly explained in textbooks on decision analysis [16,19,23-25] and has already been discussed within the context of LCA [6-8]. For the sake of simplicity, and because of the limited time for the workshop, we assumed that all single-attribute value functions were linear, and in addition, we set the attribute ranges beforehand (Table 1). Therefore, participants in the workshop could concentrate on the weighting procedure.

Weight elicitation without taking into account the specific ranges of the attribute values is the most common mistake in the application of multi-attribute value functions [19,26] and is not good preference construction practice [13]. Therefore, we used a weighting procedure with appropriate range sensitivity of attribute weights, in which the swing weighting method and the indifference method based on difference value measurement were combined. The detailed weighting procedures are explained in Section 3.4 and Appendix A.

Four-Step Procedure in the Workshop

We conducted rank-ordering in four steps (Table 2).
Step 0. Participants intuitively rank-ordered the cards bearing the names of the alternatives and related illustrations. An example of these cards is shown in Figure 1.

Step 1. Participants rank-ordered the cards with sustainability information (the attribute value for each criterion), after attending a lecture on renewable energy technologies. An example of a card with sustainability information is shown in Figure 2. In the lecture, after an outline of energy consumption and the characteristics of renewable energy technologies, the meanings of life cycle CO2 emissions, life cycle job creation, and life cycle electricity generation costs were explained. At the same time, the cards with sustainability information were shown on the screen in the lecture room.

Step 2. Participants rank-ordered the cards with sustainability information once again, after receiving a supplemental explanation of the differences in evaluation criteria from the perspective of public and private values. The explanation illustrated examples of public and private values in renewable energies and the balancing of public and private values.

Step 3. Weighting was conducted using the forms for weight elicitation and sticky tags, after a lecture on multiple criteria decision analysis and an exercise on the weighting procedures. The lecture covered the difference between alternatives and criteria, the characteristics of the selection problem for power generation technologies, the concept of trade-offs and weighting, the calculation procedure for overall values, and the problems with weighting procedures that make no reference to attribute ranges.
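To make the calculation procedure for overall values in Step 3 concrete, the following Python sketch scores alternatives with linear single-attribute value functions and normalized swing weights. The ranges for job creation (0-10 person-day) and generation costs (100,000-200,000 Yen) follow the worked example in Appendix A; the CO2 range, the weights, and the three attribute profiles are hypothetical stand-ins, since the numbers of Table 1 are not reproduced here.

```python
# A minimal sketch of the overall-value calculation described in Step 3:
# linear single-attribute value functions over fixed ranges, combined with
# normalized swing weights. Ranges for "jobs" and "cost" follow Appendix A;
# everything else (CO2 range, weights, profiles) is a hypothetical stand-in
# for Table 1.
criteria = {  # name: (worst, best) endpoints of the attribute range
    "CO2":  (4000.0, 0.0),           # kg-CO2 per 4.8 MWh, hypothetical range
    "jobs": (0.0, 10.0),             # person-day per 4.8 MWh
    "cost": (200_000.0, 100_000.0),  # Yen per 4.8 MWh
}
weights = {"CO2": 40.0, "jobs": 60.0, "cost": 100.0}  # swing weights (0-100)

def value(x, worst, best):
    """Linear value function: 0 at the worst endpoint, 1 at the best."""
    return (x - worst) / (best - worst)

def overall(profile):
    total = sum(weights.values())
    return sum(w * value(profile[c], *criteria[c])
               for c, w in weights.items()) / total

alternatives = {  # hypothetical attribute profiles
    "solar":       {"CO2": 800.0, "jobs": 6.0, "cost": 180_000.0},
    "wind":        {"CO2": 500.0, "jobs": 3.0, "cost": 140_000.0},
    "small hydro": {"CO2": 300.0, "jobs": 5.0, "cost": 160_000.0},
}
ranking = sorted(alternatives, key=lambda a: overall(alternatives[a]), reverse=True)
for rank, name in enumerate(ranking, start=1):
    print(f"{rank}. {name:12s} overall value = {overall(alternatives[name]):.3f}")
```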
Weighting Procedure

In the weighting procedure in Step 3, we used swing weighting and trade-off weighting (an indifference method) [27]. Both weighting methods take attribute ranges into account explicitly, and thus their range sensitivity is higher than that of weighting methods without reference to attribute ranges, such as direct weighting using simple scoring rules [28,29]. However, there are pros and cons to both methods: swing weighting is suitable for establishing the overall structure, although it is less sensitive to the range, while weighting based on the concept of indifference is more sensitive, although understanding the concept of indifference is time-consuming. This is why the two methods were used in a complementary way.

Step W1. Write the names of the evaluation criteria, the units, and the attribute ranges on two types of sticky tags: a pentagonal arrow type, used in Step W2, and a rectangular type, used in Step W3.

Step W2. Put the sticky tags on the 0-100 scale to measure weights. This step is equivalent to swing weighting [16,19,25,27,30]. The most important criterion is assigned 100 as the reference.

Step W3. Check the result of the weighting in Step W2 using the concept of indifference (trade-off weighting) [27,31]. Go back to Step W2 and revise the weighting if necessary.

Step W4. Calculate the overall values and determine the rank-orders. Go back to Step W2 if the result is unsatisfactory.

The details of this procedure are shown in Appendix A.

Degree of Deviation between Rank-Ordered Preferences

To measure the degree of deviation between rank-ordered preferences for the alternatives, we used the Spearman metric, which is equivalent to the Euclidean metric applied to rank vectors. We normalized the metric between 0 and 1 in this study: if the deviation between Steps i and i + 1 in Table 2 is 1, the deviation is at its maximum; if it is 0, the two rank-ordered preferences are identical.

Measure of Consistency among Participants

As an indicator of the consistency of rank-ordered preferences among participants, we used the maximum eigenvalue of the matrix of Spearman's rank correlation coefficients. The larger the maximum eigenvalue, the more consistent the preferences among the participants. In addition, to cope with small-sample inference, we used nonparametric bootstrapping (10,000 iterations) to estimate the uncertainty (standard errors) in the eigenvalues.
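The deviation measure just defined is straightforward to compute. A minimal sketch follows, assuming the normalization is the squared Euclidean distance between rank vectors divided by its maximum, (n^3 - n)/3, which gives 0 for identical orders and 1 for completely reversed ones; the rank orders are hypothetical stand-ins for the supplementary tables.

```python
import numpy as np

# Normalized Spearman deviation between two rank vectors (no ties): the
# squared Euclidean distance between the rank vectors, scaled by its
# maximum (n^3 - n)/3, so identical orders give 0 and reversed orders 1.
def spearman_deviation(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    n = len(a)
    return np.sum((a - b) ** 2) / ((n**3 - n) / 3.0)

# Hypothetical rank orders of the six energy sources at two adjacent steps
# for three participants (stand-ins for Supplementary Tables S1-S2).
step0 = [[1, 2, 3, 4, 5, 6], [2, 1, 4, 3, 6, 5], [1, 3, 2, 6, 5, 4]]
step1 = [[3, 1, 2, 6, 4, 5], [2, 3, 1, 4, 6, 5], [1, 2, 3, 6, 4, 5]]
dev = [spearman_deviation(a, b) for a, b in zip(step0, step1)]
print(f"mean deviation = {np.mean(dev):.3f}, sd = {np.std(dev, ddof=1):.3f}")
```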
Changes in Rank-Ordered Preferences of Individual Participants

The degree of preference deviation between adjacent steps is illustrated for each participant in Figure 3, and the averages and standard deviations between adjacent steps are shown in Table 3. Preference orders of participants at Steps 0, 1, 2, and 3 are presented in Supplementary Tables S1-S4. From Step 0 to Step 1, the rank-ordered preferences of most participants changed greatly. In other words, preference orders for renewable energy sources, which had been formulated unconsciously, changed as a result of the provision (acquisition) of sustainability information. The average degree of preference deviation was half of a complete change. We therefore judged that Hypothesis 1 was confirmed.

Table 3. Averages of preference deviation between two adjacent steps.

Degree of deviation ¹      Step 0 to Step 1   Step 1 to Step 2   Step 2 to Step 3
Average                         0.496              0.045              0.216
Standard deviation              0.242              0.052              0.217

¹ The Spearman metric was normalized between 0 and 1 to show the degree of deviation between the two rank-ordered preferences.

In contrast, the changes from Step 1 to Step 2 were small: the average degree of deviation was 0.045, with a maximum deviation of only 0.143. In addition, the differences among participants in the changes between these steps were also small, as indicated by the small standard deviation. Therefore, the provision of a supplemental explanation of the evaluation criteria from the perspective of public and private interests did not cause major changes in rank-ordered preferences, and we judged that Hypothesis 2 was not confirmed.

The changes from Step 2 to Step 3, which were caused by the use of the decision support method, were significant for participants 6 to 14, but not for participants 1 to 5. On average, the degree of change was almost half of that between Step 0 and Step 1 (information provision). We therefore judged that Hypothesis 3 was confirmed.

The details of the changes between Step 2 and Step 3 for participants 1-5 are illustrated in Figure 4. During the session, these participants practiced the weighting procedure several times and experienced, using the spreadsheet, how the rank-orders changed with the weights for the three criteria. As a result, only 0-2 pairs in their rank-orders changed, as shown in Figure 4, and the degree of deviation between the two steps was small.
Increased Preference Consistency

Changes in the preference consistency among participants, measured by the maximum eigenvalue of the correlation coefficient matrices, are illustrated in Figure 5. The preference consistency among participants improved from Step 0 to Step 1 and from Step 2 to Step 3, in correspondence with the changes in rank-ordered preferences. We therefore judged that Hypothesis 4 was confirmed.
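As a pointer to how the consistency trend in Figure 5 can be reproduced, the sketch below computes the consistency measure for one step together with its bootstrap standard error; the rank matrix is a hypothetical stand-in for the 14 participants' orders.

```python
import numpy as np

# Consistency among participants at one step: the maximum eigenvalue of the
# Spearman rank-correlation matrix (Pearson correlation of rank vectors,
# since there are no ties), with a nonparametric bootstrap over participants
# (10,000 resamples, as in the text) for its standard error. The rank matrix
# (participants x alternatives) is hypothetical.
rng = np.random.default_rng(0)
ranks = np.array([[1, 2, 3, 4, 5, 6],
                  [2, 1, 3, 4, 6, 5],
                  [1, 3, 2, 5, 4, 6],
                  [2, 1, 4, 3, 5, 6]], dtype=float)

def max_eig(r):
    return np.linalg.eigvalsh(np.corrcoef(r)).max()

boot = [max_eig(ranks[rng.integers(0, len(ranks), len(ranks))])
        for _ in range(10_000)]
print(f"max eigenvalue = {max_eig(ranks):.3f} (bootstrap SE = {np.std(boot):.3f})")
```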
Changes Caused by Information Provision

The results of this study indicate that participant preferences were constructed through the acquisition of sustainability information and the use of a decision support method. This has the following implications. First, in policymaking processes, we have to recognize that the intentions of residents, which are used to reflect residents' preferences in policymaking, are changeable. The simple questionnaires typically used to survey preferences about, for example, renewable energy sources only reveal temporary and shallow preferences. Therefore, information provision has to be coupled with such questionnaires to increase the participants' understanding. This is consistent with recent findings on deliberative polling, which showed that deliberators changed their views significantly on immigration, climate change, and the EU [32], and it implies that public engagement prior to the initiation of renewable energy projects is very important [33].

However, there is a caveat. Although we provided supplemental information on public value to make the problem more understandable, most participants did not change their rank-ordered preferences based on this information. There may be several reasons for this. One possibility is that there was a conflict in understanding and framing between the sustainability framework (the triangle of environment, society, and economy) and the public-private contrast; in other words, an inconsistent duplication of evaluation criteria did not change the participants' preferences. Another possibility is that participants already had implicit preferences on public-private issues, so that the additional public-value perspective was redundant.

Changes Caused by the Use of Decision Support Methods

Although our results suggest that information provision (acquisition) is important in constructing preferences, the preferences changed further because of the use of the decision support method. That is, not only understanding numerical sustainability information intuitively, but also using formalized decision support techniques to process that information, helped the decision makers formulate appropriate preferences. Therefore, recommendations to use sustainability information in policymaking processes regarding renewable energy sources need to be coupled with instructions on how to process the information consistently. It is also important to note the educational implication of applying decision support methods: even when rank-ordered preferences do not change, applying the methods gives participants an opportunity to reflect on whether their preferences are suitable.
Increased Consistency during Preference Construction

One of the important results of our study was that, in parallel with the stepwise changes in the preference construction process of each participant, consistency among individuals' preferences improved. This means that information provision and decision support positively affected consensus building, although this study was limited to individual decision making. Because group discussions are expected to improve preference consistency further, the simultaneous use of decision support methods and group discussions is promising for real-world policymaking processes regarding renewable energy.

Supporting Decisions through Preference Elicitation

The results of this study imply that, compared with the use of predetermined weighting factors for integrating several environmental indicators (impact categories) as practiced in LCIA, applying decision support methods is also useful in assessing and designing, for example, future energy options based on renewables. A crucial point is that the application of decision support methods can make preference construction processes explicit, and it is therefore suitable for participatory frameworks for renewable energy problems, in which a wide variety of participants, evaluation criteria, and future energy options are encompassed.

Conclusions

This study revealed that the provision of sustainability information and the use of a decision support method changed participants' preference orders regarding renewable energy sources. Furthermore, the preference orders among participants became more consistent during the workshops, which can be considered preference construction processes. Although these results have important implications for establishing participatory frameworks for better decision making concerning renewable energies, further studies are necessary to generalize our results. For example, this study is based on a small sample of municipal officials; attention to other stakeholders, as well as an increased sample size, would be necessary.

Although we analyzed a predetermined problem (a sorting problem with three criteria and six alternatives) in order to determine the influence of the provision of sustainability information and the use of decision support methods, a study on how to structure the problem (problem structuring [34]) may be the key to setting up participatory frameworks. This would include two important topics. The first is how to invent effective criteria, and includes reconsideration of life cycle sustainability assessment. Although the calculation of numerical values for the three criteria involves complicated processes, our attention to the wide range of sustainability criteria in sustainability assessment was limited. Therefore, future studies on how to identify and create better decision criteria are important. The second topic is how to develop creative alternatives. The combination of plural energy options and the definition of alternatives using geographic information will be important. Problem structuring in the real world inevitably involves group processes. We have to be explicit about how to involve stakeholders and how to support group decisions.
Supplementary Materials: The following are available online at www.mdpi.com/2071-1050/8/11/1114/s1, Table S1: Preference orders of participants at Step 0, Table S2: Preference orders of participants at Step 1, Table S3: Preference orders of participants at Step 2, Table S4: Preference orders of participants at Step 3.

Step W2. Put sticky tags on the 0-100 scale to measure weights. An example is shown in Figure A2. The weighting is practiced with reference to the attribute ranges; thus, it is equivalent to swing weighting. The most important criterion, "Generation costs" in the example, was set to 100 as the reference.

Step W3. Check the result of the weighting at Step W2 using the concept of indifference, as shown in Figure A3. Because the ratio of importance between "Generation costs" and "Job creation" is 100:80, the range 0-10 for "Job creation" is indifferent to the range 200,000-120,000 (80% of the original range) for "Generation costs". In this case, it is presumed that the decision maker thinks that the former (the range 0-10 for "Job creation") is indifferent to the range 200,000-140,000 (60% of the original range) for "Generation costs", as shown in Figure A4. Then, the decision maker goes back to the 0-100 scale and moves the sticky tag "Job creation" to the position "60", as illustrated in Figure A5.

Step W4. Calculate the overall values and determine the rank-orders. An Excel spreadsheet was used for summation, division, rank-ordering, and recording of the calculation results. If the decision maker inputs weights for the three criteria, then he/she obtains overall values for each alternative, and the rank-order of alternatives, after clicking the calculation button. The decision maker goes back to Step W2 if the result is unsatisfactory to him/her. The spreadsheet can be used to record these trials by clicking the record button. The converged values are the weights of the decision maker.
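Step W4's spreadsheet arithmetic is a standard weighted additive value model. The sketch below is a minimal re-implementation under stated assumptions: the criterion names, attribute values and weights are illustrative placeholders (the real evaluation matrix is Table 1), and each attribute is normalised linearly over its range, which is the value function that swing weighting presupposes.

```python
import numpy as np

# Hypothetical evaluation matrix for six alternatives on three criteria;
# "cost" is better when lower, "jobs" and "co2_saved" when higher.
names = ["solar", "wind", "hydro", "geothermal", "biomass", "waste"]
cost      = np.array([180_000, 150_000, 130_000, 160_000, 190_000, 200_000])
jobs      = np.array([6, 4, 2, 3, 8, 5])
co2_saved = np.array([30, 45, 50, 40, 20, 25])

def normalise(x, higher_is_better=True):
    # Linear 0-1 value function over the attribute range.
    v = (x - x.min()) / (x.max() - x.min())
    return v if higher_is_better else 1 - v

values = np.column_stack([
    normalise(cost, higher_is_better=False),
    normalise(jobs),
    normalise(co2_saved),
])

# Weights read off the 0-100 sticky-tag scale (Steps W2/W3), e.g. 100/60/70.
w = np.array([100.0, 60.0, 70.0])
w /= w.sum()

overall = values @ w                 # Step W4: weighted additive overall values
order = np.argsort(-overall)         # rank-order, best first
for rank, idx in enumerate(order, start=1):
    print(f"{rank}. {names[idx]}  overall = {overall[idx]:.3f}")
```

Re-running this with different weight vectors reproduces the "try weights, inspect rank-order, go back to Step W2" loop the participants practiced on the spreadsheet.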
Figure 1. An example (solar power generation) card with illustration used in Step 0. The back sides were blank. Japanese descriptions were used on the actual cards.
Figure 2. An example (solar power generation) card with sustainability information used in Step 1. Japanese descriptions were used on the actual cards.
Figure 3. Participants' preferences between the steps. The degree of deviation between two adjacent steps is shown on the vertical axis. The numbers on the horizontal axis signify each of the 14 respondents.
Figure 4. Rank-orders of the preferences for Step 3 and the differences between Step 2 and Step 3. Circles and arrows illustrate exchanges in the rank-orders.
Figure 5. Changes in preference consistency: the maximum eigenvalues for the Spearman's rank-order correlation coefficient matrix (14 × 14). Nonparametric bootstrap means and standard errors (10,000 samples) are depicted.
Figure A2. An example of weighting using sticky tags on the 0-100 scale.
Figure A3. An example of checking the weighting at Step W2 using the concept of indifference.
Figure A4. An example of checking the weighting at Step W2 using the concept of indifference. A revised judgment from Figure A3.
Figure A5. An example of weighting using sticky tags on the 0-100 scale. A revision based on the judgment shown in Figure A4. The dashed lines are used to distinguish the original judgment and the revised judgments.
Table 1. The evaluation matrix used in the workshop.
Table 2. Four steps in the workshop.

Step | Day     | Information Provided                     | Task for Decision Makers (Respondents)
0    | Nov. 6  | Names of alternatives (technologies)     | Rank-ordering of alternatives (cards)
1    | Nov. 6  | Attribute values for alternatives        | Rank-ordering of alternatives (cards)
2    | Nov. 13 | Explanation of public value              | Rank-ordering of alternatives (cards)
3    | Nov. 13 | Explanation of decision support method   | Weighting of the three attributes
8,072.4
2016-11-01T00:00:00.000
[ "Environmental Science", "Economics" ]
Cell-Specific Cre Strains For Genetic Manipulation in Salivary Glands

The secretory acinar cells of the salivary gland are essential for saliva secretion, but are also the cell type preferentially lost following radiation treatment for head and neck cancer. The source of replacement acinar cells is currently a matter of debate. There is evidence for the presence of adult stem cells located within specific ductal regions of the salivary glands, but our laboratory recently demonstrated that differentiated acinar cells are maintained without significant stem cell contribution. To enable further investigation of salivary gland cell lineages and their origins, we generated three cell-specific Cre driver mouse strains. For genetic manipulation in acinar cells, an inducible Cre recombinase (Cre-ER) was targeted to the prolactin-induced protein (Pip) gene locus. Targeting of the Dcpp1 gene, encoding demilune cell and parotid protein, labels intercalated duct cells, a putative site of salivary gland stem cells, and serous demilune cells of the sublingual gland. Duct cell-specific Cre expression was attempted by targeting the inducible Cre to the Tcfcp2l1 gene locus. Using the R26Tomato Red reporter mouse, we demonstrate that these strains direct inducible, cell-specific expression. Genetic tracing of acinar cells using PipGCE supports the recent finding that differentiated acinar cells clonally expand. Moreover, tracing of intercalated duct cells expressing DcppGCE confirms evidence of duct cell proliferation, but further analysis is required to establish that renewal of secretory acinar cells is dependent on stem cells within these ducts.

Introduction

The salivary glands are responsible for the secretion of saliva, which is essential for oral health. The major cellular components of the salivary glands are the secretory acinar cells (reviewed in [1]), which are arranged in clusters. The acinar cells secrete primary saliva into the small, intercalated ducts, which are linked to striated ducts. Eventually, the saliva is conducted through the ductal tree to the large excretory ducts, which empty into the oral cavity (Fig 1). A decrease in saliva secretion leads to the condition known as xerostomia and results in debilitating health problems. Saliva secretion is severely reduced by radiation therapy to treat head and neck cancers, and as a consequence of the autoimmune disease known as Sjögren's syndrome. In both cases, the underlying cause is an irreversible loss of the acinar cells [2]. Thus, repair or regeneration of the salivary glands is primarily concentrated on replacement of the secretory cells. Current strategies to accomplish this are focused on the use of putative adult stem cells [3][4][5]. The prevailing view is that stem cells are localized to the small intercalated, and large excretory, ducts in the salivary gland (reviewed in [6]) (see Fig 1). However, evidence of their differentiation into acinar cells has not yet been directly demonstrated. Furthermore, in a study to directly determine the source of newly generated acinar cells, we found that there is little stem cell contribution to acinar cell renewal in adult salivary glands [7]. In order to further investigate the role of each cell type in salivary gland homeostasis, we have generated three cell-specific inducible Cre recombinase mouse strains. The prolactin-induced protein (Pip) is a secretory glycoprotein produced by serous cells of the mouse and human salivary glands [8][9][10].
The Pip gene locus was targeted to generate a Cre-driver active in acinar cells. To label intercalated duct cells, the presumptive site of stem cells in the parotid gland, the demilune cell and parotid protein gene (Dcpp1) [11], one of three linked Dcpp-related genes on mouse chromosome 17 [12], was targeted. Dcpp1 is also a marker of serous demilune cells in the sublingual gland [13]. Each acinus of the sublingual gland is comprised of mucous-secreting cells and one or two serous cells, distinguished by their expression of Dcpp1 (see Fig 1) [11,14,15], and of Sox2, a stem cell marker [7,16]. A third Cre-driver strain was generated to label duct cells by targeting the Tcfcp2l1 gene locus, which encodes a transcription factor specifically expressed in duct cells of the developing kidney and all three major salivary glands [17,18]. Although this line does show Cre activation in duct cells, ectopic expression in acinar cells may limit its usefulness in lineage tracing studies.

Pip GCE labels acinar cells in the submandibular gland

The Pip gene (gene ID 18716) was targeted by homologous recombination with a fusion cassette encoding GFP and CreER T2 (GCE) [19] to remove the coding sequences from Exon 1 and place the GCE cassette under the control of Pip regulatory sequences (Fig 2A). To determine the pattern of GCE expression, Pip GCE/+ heterozygote males were crossed with females from the Gt(ROSA)26Sor tm9(CAG-tdTomato)Hze /J reporter strain, hereafter referred to as R26 TdT. Double heterozygous Pip GCE/+; R26 TdT/+ animals (3 weeks old) were administered tamoxifen by gavage for 3 consecutive days. Tissues were harvested after a 3-day chase, and frozen sections were examined for RFP fluorescence. Labeled cells were detected specifically in the submandibular gland (SMG) (Fig 2B). In contrast to the endogenous expression of Pip in parotid, sublingual and lacrimal glands (S1 Fig) [20], there was no evidence of Cre activation in these tissues (Fig 2C-2E). To ascertain the cell type expressing Pip GCE, sections of SMG were co-stained with antibody to Nkcc1, which labels acinar cell membranes (Fig 2F). All tomato red-positive cells are co-localized with Nkcc1, indicating that they are acinar cells.

Fig 2. (A) Pip genomic structure and restriction map is shown at the top. White box represents the non-coding exon sequences and filled boxes, the coding sequences. Thick bars show the sequences used to generate the homologous arms in the targeting vector. Gray box represents the 3' external probe used for Southern blotting. Arrows indicate positions of genotyping PCR primers (An3' and PipR). (B-E) Analysis of Cre expression in mice after 3 days of tamoxifen treatment, followed by a 3-day chase. (B) Frozen sections were prepared from submandibular gland (SMG); activation of Cre results in expression of the Tomato red reporter (TdT) (red); Scale bar = 50 μm. No Cre activity is detected in (C) parotid (Par), (D) lacrimal gland (Lac) or (E) sublingual gland (SLG). Nuclei are stained with DAPI (blue). Scale bars = 25 μm. (F) Section from Pip GCE/+; R26 TdT/+ SMG at 3 days after tamoxifen treatment. Single labeled acinar cells (red) co-localize with antibody to Nkcc1 (green). Scale bar = 50 μm. (G) Section from Pip GCE/+; R26 TdT/+ SMG at P9, isolated 3 days after tamoxifen administration. Positively labeled acinar cells are red. Nuclei are stained with DAPI. Scale bar = 25 μm. (H) Section from Pip GCE/+; R26 TdT/+ SMG at 3 months after tamoxifen treatment, co-stained with antibody to Nkcc1 (green) to label acinar cells. Labeled acinar cells have expanded into clones (red).
Scale bar = 50 μm. (I) Section from Pip GCE/+; R26 TdT/+ SMG after a 3-month chase shows expansion of labeled acinar cells into clones (arrowheads). 3d, 3 days chase; 3mos, 3 month chase; d, duct; Scale bar = 50 μm.

Pip GCE expression was not detected in duct cells. No expression of R26 TdT was detected in the absence of tamoxifen (data not shown). The expression of Pip is initiated by embryonic day 14 (E14), and marks proacinar cells in the developing SMG [21]. However, administration of tamoxifen to pregnant females on 2 consecutive days failed to induce Pip GCE activity in embryonic SMG after a 2-day chase when analyzed at E15.5 or E17.5 (data not shown). In contrast, tamoxifen administered by gavage on postnatal days 4 (P4) through P6 labeled a large number of acinar cells in the SMG by P9 (Fig 2G), although not in parotid, lacrimal or sublingual glands (data not shown). Pip expression is limited to apocrine glands of the eye, ear canal, and reproductive organs [22]. In agreement, we detected no activation of Pip GCE in kidney, lung, pancreas, prostate, or ovary (data not shown) following tamoxifen administration. The Pip GCE allele therefore represents a specific Cre driver for genetic manipulation in the SMG.

We have recently reported that differentiated acinar cells in the adult salivary glands continue to proliferate, and are maintained through self-duplication [7]. As the Pip GCE allele drives tightly controlled, inducible Cre expression in postnatal and adult SMG acinar cells, we examined whether this system can also be used to follow clonal expansion. Single acinar cells were genetically labeled in heterozygous Pip GCE; R26 TdT/+ mice (3 weeks old) by administering tamoxifen for 3 consecutive days (Fig 2F). After a chase period of 3 months, labeled acinar cells are present in clusters, evidence of clonal expansion through self-duplication (Fig 2H and 2I), as described [7]. Thus, the Pip GCE can be used to genetically label or modify an expanding population of secretory acinar cells in the SMG.

Dcpp1 GCE labels sublingual serous demilune cells and parotid gland intercalated duct cells

The GCE fusion cassette [19] was inserted into the Dcpp1 gene (gene ID 13184) at the initiation site in Exon 2 through homologous recombination (Fig 3A). To assess the Cre expression pattern in this line, Dcpp1 GCE heterozygote males were mated with females from the R26 TdT reporter strain. Tamoxifen was administered by gavage to 3-week-old Dcpp1 GCE/+; R26 TdT/+ mice on 3 consecutive days, and tissues were analyzed after a 3-day chase. In the sublingual gland (SLG), the tomato red reporter was activated in single cells (Fig 3B). Both cell morphology and co-staining with antibody to Nkcc1 indicate that the labeled cells are serous demilunes (Fig 3C and 3D), as expected based on endogenous Dcpp1 expression (see Fig 1 and S2 Fig). No expression was detected in mucous acinar cells. The availability of Dcpp1 GCE as a molecular tag for the serous acinar cell type will be useful for defining the specific role of serous demilune cells in the SLG. Analysis of parotid glands after a 3-day chase also showed activation of Dcpp1 GCE in a scattered cell population (Fig 3E). In the parotid gland, Dcpp1 is exclusively expressed in intercalated duct cells (see Fig 1 and S2 Fig) [11].
Higher magnification, as well as absence of colocalization with antibody to Nkcc1, was used to confirm that tomato red-positive cells are intercalated duct cells (Fig 3F and 3G). Thus, the Dcpp1 GCE allele faithfully recapitulates the expression pattern of the endogenous Dcpp1 gene.

Intercalated ducts in the parotid gland have long been thought to be the site of salivary gland stem cells [23][24][25][26]. Short-term lineage tracing demonstrated progenitor activity in intercalated ducts [27], but lineage tracing from intercalated duct cells into acinar cells has not been reported. We used Dcpp1 GCE; R26 TdT/+ mice to trace the intercalated duct cells over time. Tamoxifen was administered by gavage to 4-week-old mice for 3 consecutive days. After a 3-month chase, analysis of the parotid glands showed an increased number of labeled cells (Fig 3H; compare with Fig 3E). Most of the TdT-positive cells remain within intercalated ducts (Fig 3I, arrows). However, there were some TdT-positive cells that co-localized with Nkcc1 antibody, suggesting that they may be acinar cells (Fig 3I, arrowheads). Given the widely held view that the intercalated ducts harbor stem cells, further characterization of these double-labeled cells will clearly be interesting, and underscores the potential utility of this Cre line. While the question of stem cells remains open, the low number of double-labeled acinar cells after a 3-month chase is in agreement with our recent conclusion [7] that the intercalated ducts do not make a significant contribution to replenishment of acinar cells under normal homeostatic conditions.

Tcfcp2l1 GCE drives ectopic expression of Cre in salivary gland acinar as well as duct cells

The Tcfcp2l1 gene (gene ID 81879) was targeted with the GCE fusion cassette at the initiation codon in Exon 1 (Fig 4A). The Cre expression pattern was investigated by mating Tcf GCE with the R26 TdT/+ reporter strain. Tamoxifen was administered by gavage to 3-week-old mice. After only one day and a single administration of tamoxifen, tomato red reporter expression was detected in the SMG and SLG (Fig 4B and 4D). As expected, many labeled cells were in the ducts. However, Cre activation was also observed in acinar cells outside the ducts in both glands (Fig 4B and 4D; arrows). Tamoxifen treatment for 3 days resulted in significantly more labeled cells, showing that the labeling of acinar cells is dependent on tamoxifen induction (Fig 4C and 4E). When these cells were analyzed after 1 month, the labeled acinar cells had expanded into multicellular clones (data not shown), as would be expected based on our recent demonstration that acinar cells are maintained by self-duplication [7]. Tamoxifen activation of Tcf GCE also induced reporter expression in both duct and acinar cells of the parotid glands (Fig 4F), and in the lacrimal glands, which produce tear secretions at the eye (Fig 4G). Tcfcp2l1 gene expression is initiated by E13.5 in development, and is required for the maturation of kidney and salivary gland duct cells [18]. To induce activation of Tcf GCE in embryos, tamoxifen was administered intraperitoneally to pregnant females at E13.5, and sections were analyzed at E15.5. However, we found no evidence of Tomato red reporter expression in the embryonic salivary glands (data not shown). The pattern of Tcf GCE expression was also analyzed in other tissues.
In agreement with published expression patterns for the Tcfcp2l1 gene [17,18], tamoxifen did activate Tcf GCE in single bronchial cells of the adult lung, and in ducts of the kidney cortex (Fig 4H and 4I). We note that in all trials the use of the Gt(ROSA)26Sor tm1Sor (R26 LacZ) reporter yielded only limited detectable expression following tamoxifen induction (data not shown) and is not recommended with these GCE strains. Furthermore, although the GCE cassette comprises a GFP and CreER T2 in-frame fusion, we were unable to detect GFP fluorescence or GFP signal with antibody in any of the 3 strains (data not shown).

Discussion

An understanding of salivary gland biology is essential for the development of therapies to treat salivary gland dysfunction, or to harness the glands for expression and secretion of heterologous proteins for therapeutic use. The parotid, submandibular and sublingual salivary glands bear morphological, developmental and molecular similarities, but also have distinct cellular compositions and secretory products. A better understanding of the biology of these glands will come only with the ability to manipulate or modify salivary gland gene expression. We report the generation of three mouse strains driving expression of tamoxifen-inducible CreER T2 for genetic manipulation in the salivary glands. Knock-in targeting of the Pip gene drives Cre expression specifically in the secretory acinar cells of the SMG. Targeting of the Dcpp1 gene yields Cre expression specifically in the serous demilune cells of the sublingual gland, as well as in parotid gland intercalated duct cells, the putative site of salivary gland stem cells. Targeting of the Tcfcp2l1 gene directs Cre expression to duct cells in all three salivary gland types, as well as the lacrimal gland ducts, but also drives unexpected ectopic expression in acinar cells. Although selection of these genes was also based on their patterns of expression during embryonic salivary gland development, none of the three strains has thus far demonstrated evidence of prenatal activation of the GCE cassette.

In contrast to the widely held dogma that salivary gland homeostasis is dependent on stem cells, we have recently reported that maintenance of differentiated acinar cells in the postnatal salivary glands is accomplished through self-duplication [7]. Genetic cell labeling using the Pip GCE allele also demonstrates that acinar cells continue to divide and expand clonally in the adult gland. In agreement with a recent report that intercalated duct cells harbor proliferative progenitor cells [27], genetic tracing in Dcpp1 GCE/+; R26 TdT/+ mice for 3 months showed evidence of duct cell expansion within the intercalated ducts of the parotid gland. However, there was little evidence of acinar cell replenishment from the labeled intercalated duct cells. This suggests that acinar cells are not generally replaced by stem cells located within the intercalated ducts. Taken together, cell tracing using the Pip GCE and Dcpp1 GCE strains supports a revised model of acinar cell homeostasis [28], which does not strictly depend on stem cells.

The reason for the unexpected ectopic expression of the Tcf GCE in acinar cells is not clear. In a previously published report, insertion of the βgeo gene trap construct into Exon 2 of the Tcfcp2l1 gene showed expression that was limited to duct cells [18].
In our targeting construct, the GCE cassette was inserted into Exon 1 at the initiation site (Fig 4A), suggesting that downstream sequences may be required for duct cell-specific expression. Although activation of the reporter in both duct and acinar cells by Tcf GCE could be interpreted as evidence of a stem cell, we consider this unlikely. First, the ectopic labeling of acinar cells is detected within 24 hours of tamoxifen administration, a short time window for tracing of acinar cells putatively derived from a ductal stem cell. Second, in comparison to one day (Fig 4B and 4D), gavage on 3 consecutive days resulted in extensive acinar cell labeling (Fig 4C and 4E), suggesting that the increased number of labeled acinar cells is dependent on tamoxifen dose. Third, the high number of labeled acinar cells generated after the short 3-day chase is inconsistent with recent reports demonstrating a lack of significant duct cell contribution to acinar cell replacement under normal homeostatic conditions [7,27].

We anticipate that the Cre driver strains described here will facilitate numerous types of investigations into salivary gland biology, including a more detailed analysis of each specific cell type, to provide further insight into their lineage relationships. These alleles can also be used to analyze knockout phenotypes, as the GCE cassette has been inserted at the start codon to yield a null allele in all three genes. Recently, it was discovered that Tcfcp2l1 plays a central role in sustaining ES cell pluripotency through the LIF/Stat3 signaling pathway [29,30]. The Tcfcp2l1 transcription factor can reprogram post-implantation epiblast stem cells to ES cells. Although we have not investigated this, the Tcf GCE allele might prove useful for early embryonic studies. In humans, PIP has been used as a diagnostic marker for breast cancer [31]. Although it has not yet been established whether the Pip gene is also activated in mouse breast cancer models, the availability of the Pip GCE driver may provide a tool for such studies. Finally, these Cre drivers can also be used to target floxed genes, or to activate ectopic gene expression in a cell-specific manner. We expect that these alleles, which are freely available to the general scientific community, will be valuable tools for genetic manipulation in the salivary glands.

Generation of GCE knock-in mice

The Pip GCE (MGI:5661584), Tcfcp2l1 GCE (MGI:5662395) and Dcpp1 GCE (MGI:5661581) mouse strains were produced in the Transgenic Facility at the University of Rochester. Genomic sequences for each targeted gene were isolated from BAC clones (ordered from Children's Hospital Oakland Research Institute) by PCR amplification. The Pip GCE targeting construct was generated by inserting the diphtheria toxin A gene (DTA, for negative selection), a 4.4 kb 5' homologous arm containing the 5' UTR of Exon 1, and a 4.7 kb 3' homologous arm including Exons 2, 3 and 4 into pBluescript SKII(+). The eGFP-CreERT2 (GCE) fragment [19], with an SV40 polyadenylation site and Neo cassette, was inserted immediately downstream of the translational initiation codon ATG. The Pip GCE knock-in construct removed the coding sequences from Exon 1 and placed the GCE gene under the control of Pip regulatory sequences. The Tcfcp2l1 GCE and Dcpp1 GCE targeting constructs were generated in a similar manner. Briefly, a 3.2 kb 5' homologous arm containing the 5' UTR of Exon 1 and a 5.5 kb 3' homologous arm containing Exon 2 were used for the Tcfcp2l1 GCE targeting construct.
A 3 kb 5' homologous arm containing the 5' UTR of Exon 1, and a 6.9 kb 3' homologous arm including Exons 2, 3 and 4, were used for the Dcpp1 GCE targeting construct. Both GCE knock-in constructs removed the coding sequences downstream from the initiation codon and placed the GCE gene under the respective regulatory sequences.

To generate Pip GCE knock-in mice, the Pip GCE targeting construct was linearized with NotI and electroporated into 129S6-C57BL/6J embryonic stem (ES) cells. Two targeted ES clones were identified by Southern blotting using an external 3' probe (Fig 2A, gray box) and were injected into C57BL/6J blastocysts to generate mouse chimeras. Chimeras were mated with C57BL/6J mice to generate heterozygous Pip GCE/+ mice. A PCR method was used to genotype mice generated from subsequent breeding of Pip GCE/+ heterozygotes. The PCR primers used to identify the GCE knock-in allele are An3': 5'-CCA CAC CTC CCC CTG AAC CTG-3' and PipR: 5'-GCT CTC ATT CTC AGA GAC TCC TG-3'.

To generate Dcpp1 GCE knock-in mice, the Dcpp1 GCE targeting construct (Fig 3A) was linearized with AscI and electroporated into 129S6-C57BL/6J ES cells. Nine correctly targeted ES clones were identified by 5' long-range PCR using an external 5' primer and a GCE internal primer; two of the clones were injected into C57BL/6J blastocysts to generate mouse chimeras. Chimeras were mated with C57BL/6J mice to generate heterozygous Dcpp1 GCE/+ mice.

To generate Tcfcp2l1 GCE knock-in mice, the Tcfcp2l1 GCE targeting construct was linearized with AscI and electroporated into 129S6-C57BL/6J ES cells. Ten targeted ES clones were identified by Southern blotting using an external 5' probe (Fig 4A, shaded box). Two were injected into C57BL/6J blastocysts to generate mouse chimeras. Chimeras were mated with C57BL/6J mice to generate heterozygous Tcfcp2l1 GCE/+ mice. The PCR primers used to identify the GCE knock-in allele are An3': 5'-CCA CAC CTC CCC CTG AAC CTG-3' and TcfcpR: 5'-TGC AGC GCA GAC CTG CT-3'.

The neomycin gene cassette was removed from each targeted allele by crossing with the Actin-Flippase mouse strain (Jackson Laboratory) [32]. All three strains were subsequently backcrossed onto a C57BL/6 background. Mice were maintained on a 12-hour light/dark cycle in a one-way, pathogen-free facility at the University of Rochester Medical Center. Food and water were provided ad libitum. This study was carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The protocol was approved by the University Committee on Animal Resources at the University of Rochester Medical Center (protocol: 101362).

Tamoxifen administration

Tamoxifen (156738, MP Biomedicals) was dissolved at 20 mg/ml or 40 mg/ml in corn oil (Sigma) and administered by intraperitoneal injection (i.p.) at 0.075 mg/g body weight, or by oral gavage at 0.25 mg/g body weight (adults and neonatal pups), on 3 consecutive days. To induce Cre in embryos, 0.125 mg/g body weight of tamoxifen was administered to the pregnant female by i.p. for 1 or 2 consecutive days. Tissues were harvested 3 days after tamoxifen administration unless otherwise indicated.

Imaging

Imaging was done using an Olympus iX81 microscope, Hamamatsu CCD camera, and MetaMorph software. Adobe Illustrator® CS6 and Photoshop® CS5 (Adobe Systems Incorporated, San Jose, CA) were used to compile illustrations and to perform image adjustments.
All changes in contrast and brightness were applied to the entire image.
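As a worked example of the dosing arithmetic in the tamoxifen protocol above, the sketch below converts the reported per-gram doses and stock concentrations into per-animal amounts and injection volumes. The stock concentrations and mg/g doses come from the Methods; the body weights are illustrative assumptions, not values stated in the text.

```python
# Dose math for the tamoxifen regimens described in the Methods.
def dose_volume(body_weight_g, dose_mg_per_g, stock_mg_per_ml):
    dose_mg = body_weight_g * dose_mg_per_g       # total tamoxifen per animal
    return dose_mg, dose_mg / stock_mg_per_ml     # volume of corn-oil stock

regimens = [
    # (label, dose mg/g from text, stock mg/ml from text, assumed weight g)
    ("i.p. injection (adult)",  0.075, 20, 25.0),
    ("oral gavage (adult)",     0.25,  40, 25.0),  # 40 mg/ml keeps volume small
    ("i.p. (pregnant female)",  0.125, 20, 30.0),
]
for label, dose, stock, weight in regimens:
    mg, ml = dose_volume(weight, dose, stock)
    print(f"{label}: {mg:.2f} mg tamoxifen in {ml:.3f} ml corn oil")
```

Run as-is, this shows, for instance, that a 25 g adult dosed by gavage at 0.25 mg/g receives 6.25 mg of tamoxifen, i.e. about 0.16 ml of the 40 mg/ml stock.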
5,405.4
2016-01-11T00:00:00.000
[ "Biology", "Medicine" ]
The effect of hydrophobic (silane) treatment on concrete durability characteristics

Hydrophobic (silane) impregnation represents a cost-effective way to increase the durability of concrete structures in cases where insufficient design cover quality and depth have been achieved. The water-repellent product lines the internal capillary pore structure and provides a water-repellent concrete surface. Thus, the risk of reinforcement corrosion initiation and subsequent deterioration can be reduced, as the ingress of water-dissolved aggressive species (chlorides) is minimised or prevented. The purpose of this study was to investigate the effect of silane impregnation on durability indicators, including penetrability tests and chloride ingress (bulk diffusion). The results indicate that silane impregnation reduces capillary absorption and the conductivity of chloride ions. Similarly, chloride ingress in the treated concrete mixes was suppressed.

Introduction

Cover concrete represents the primary barrier against the ingress of aggressive agents towards the reinforcing steel, and several design codes define cover depths according to particular environmental classes [1]. The thickness and quality of this zone are largely dependent on on-site quality control and curing conditions respectively [2]. As the modern construction industry is under the perpetual constraints of time and money, quality control is often neglected on site, resulting in sometimes poor execution and outcome of works. Hence, the design cover depth and quality are not achieved, due to improper placing, compaction and curing of in-situ concrete. In this respect, considerable research has been undertaken to identify suitable solutions to avoid premature deterioration and extend the service life of reinforced concrete. Surface treatment represents a preventative measure to protect new and existing structures from environmental attack and reduce the risk of associated reinforcement corrosion. The aim of surface treatment is to reduce the concrete cover's penetrability to aggressive substances. Hydrophobic impregnation (penetrant pore liner) is one type of surface treatment that has the ability to reduce the capillary absorption of water containing dissolved deleterious species (chlorides) and thus delay the initiation of rebar corrosion [1,3].

Hydrophobic impregnations are products that are typically applied on the surface of a concrete substrate (as an invisible film) to reduce the uptake of water and dissolved aggressive species. The hydrophobic agent is applied by spraying or brushing, depending on its viscosity. The material is transported into the pore structure through capillary action. Cream-based hydrophobic treatments have a longer drying time relative to liquid systems, and this usually results in superior penetration efficiency, and thus higher protective potential. The main advantage of hydrophobic treatment is that it provides a water-repellent surface without affecting the appearance of the concrete and does not hinder the movement of water vapour in and out of the concrete [4,5].
Fig. 1. Silane-based cream [6].

Liquid water is rapidly transported in non-saturated pores by capillary action, and the rate of absorption is a function of the surface tension, the density and viscosity of the liquid, the contact angle between the liquid and the pore walls, and the pore size opening. In normal concrete, the contact angle (θ) is low (<90°) because of molecular attraction between the cement paste and water (hydrophilic behaviour). A drop of water will typically spread flat on the surface, followed by capillary rise in the pore structure, resulting in the suction of water.

The use of a hydrophobic impregnation weakens the molecular attraction between water and concrete; as silane molecules cover the capillary walls, they become devoid of ionic electrical charges, and polar molecules such as water are no longer attracted to the concrete surface. The contact angle is thus increased (>90°), resulting in a spherical droplet shape and subsequent negative capillary rise (the level of the liquid in the pores is lower than that of the surrounding liquid; Figure 2) [1,7].

Fig. 2. Interaction between water and concrete surface for untreated and treated surfaces [1].

Silane, siloxane or a mixture of these two components are typically used as hydrophobic impregnation products. Silane molecules are smaller (1 × 10^-6 to 1.5 × 10^-6 mm diameter) relative to siloxane molecules (1.5 × 10^-6 to 7.5 × 10^-6 mm diameter) and hence, in general, a greater penetration depth is obtained with the use of pure silane products [8]. However, a smaller molecular size correlates with higher volatility. Pure silane is hence used mostly in a gel consistency, as this enables the application of thick layers of water-repellent product on vertical surfaces without slumping or sagging [6,8].
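The contact-angle argument can be quantified with Jurin's law for capillary rise, h = 2γcosθ/(ρgr), which the paper does not state explicitly but which follows from the quantities it lists (surface tension, density, contact angle, pore radius). A minimal sketch, using water properties at room temperature and an illustrative 1 μm pore:

```python
import math

def capillary_rise(radius_m, contact_angle_deg,
                   surface_tension=0.072,   # N/m, water at ~25 C
                   density=1000.0,          # kg/m^3
                   g=9.81):                 # m/s^2
    # Jurin's law: h = 2*gamma*cos(theta) / (rho*g*r).
    # cos(theta) > 0 (hydrophilic, theta < 90 deg) gives suction;
    # cos(theta) < 0 (silane-treated, theta > 90 deg) gives a negative
    # rise, i.e. water is repelled from the pore.
    return (2 * surface_tension * math.cos(math.radians(contact_angle_deg))
            / (density * g * radius_m))

for theta in (30, 60, 110, 130):            # illustrative contact angles
    h = capillary_rise(1e-6, theta)          # 1 micron capillary pore
    print(f"theta = {theta:3d} deg -> rise = {h:+.2f} m")
```

The sign flip at θ = 90° is exactly the untreated/treated contrast sketched in Figure 2: the same pore that sucks water metres upward when hydrophilic pushes it out once the silane film raises the contact angle above 90°.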
Methodology

2.1 Mix design

Two water-to-binder ratios (w/b 0.45 and w/b 0.60) and four binder types were selected (CEM I 52.5N, fly ash (FA), ground granulated Corex slag (GGCS) and CEM III/B 42.5N). Hence, a total of 8 concrete mixes were used. The concrete specimens were demoulded 24 hours after casting, wrapped in plastic sheeting and placed in an environmental room (23 ± 2 °C temperature and 63 ± 2% relative humidity). After 7 days, the plastic sheeting was removed, and the specimens were air cured under the aforementioned controlled environmental conditions until the age of 56 days. The mix design is shown in Table 1.

2.2 Silane treatment

Silane treatment was performed at the age of 28 days by applying Sikagard®-706 Thixo [6] (a silane-based water-repellent impregnation cream) at a consumption rate of 400 g/m², the fixed rate used for all mixes. The treated specimens were then placed in an environmental room (maintained at a temperature of 23 ± 2 °C and relative humidity of 63 ± 2%) until the age of 56 days.

The hydrophobic (silane) impregnation depth was measured 4 weeks after treatment. The indirect tensile splitting test was performed on two silane-treated concrete cubes (100 mm), and water was sprayed on the internal surface. The hydrophobised part of the concrete repelled any water, while the untreated part showed darker coloration due to water absorption (Figure 3). A Vernier caliper was used to take the measurements. The method used to measure the silane penetration depth follows the recommendations of BS EN 1504-2 [13].

Fig. 3. Silane penetration depth measurement.

Bulk diffusion tests were carried out in accordance with ASTM C1556 (2004) [14], starting at a sample age of 56 days. Six test specimens (3 treated/3 untreated) were used per mix. After the chloride exposure period, specimens were removed from the salt solution and cut into slices at suitable increments. These slices were pulverized and milled into approximately 10 g powder samples. A potentiometric titrator was used to determine the acid-soluble chloride ion content, in accordance with ASTM C1152 (2012) [15].

Cut surfaces

Durability Index tests (OPI, WSI and CCI) were performed on cut surfaces for mix characterisation purposes. All the concrete mixes (Mix 1-8) achieved OPI values between 9.90 and 10.67, WSI values ranging from 4.2 to 7.8 mm/hr^0.5, and CCI values between 0.16 and 1.09 mS/cm (Table 2). The OPI decreased while WSI and CCI values generally increased with a higher w/b, and this was attributed to an increase in the porosity of the cement paste microstructure and a larger number of interconnections between the pores, which act as channels of flow in the cement paste [16]. This allowed greater permeation of oxygen gas, capillary absorption of liquids and migration of chloride ions respectively.

Uncut surfaces

The effect of the hydrophobic treatment on DI values was assessed on uncut (formwork) surfaces, to represent site conditions. According to the results, silane-treated concrete recorded lower WSI (Figure 4) and CCI values (Figure 5) relative to untreated concrete. Hydrophobic (silane) impregnation chemically modifies the near-surface zone of the concrete and reduces the capillary uptake of water. As the silane molecules bond to and cover the capillary walls, the latter become devoid of ionic electrical charges, and polar molecules such as water are no longer attracted to the concrete surface [4,7,17]. Migration of chloride ions was minimised as the capillary pores within the silane-impregnated layer are less saturated relative to untreated concrete [18,19].

Silane penetration depth

As expected, the silane penetration depth increased with a higher w/b ratio (Figure 6). This was attributed to the higher capillary porosity of the cement paste microstructure, which allowed deeper penetration of the water-repellent product [5,20]. The effect of binder type for the w/b 0.60 mixes was unclear due to the overlapping of error bars, but for the w/b 0.45 mixes, the inclusion of FA and GGBS (CEM III/B) increased and reduced the penetration depth respectively. It must be noted that a fixed silane consumption rate (400 g/m²) was used for all the concrete mixes (Mix 1-8). In the case of in-situ structures, preliminary trials should be carried out to find the consumption rate required to achieve sufficient penetration depth (typically 5-6 mm) [5]. The silane penetration depth (mm) was also found to be highly related to the Oxygen Permeability Index (OPI, log scale), as shown in Figure 7. Note that the measured OPI values ranged from about 9.90 to 10.67, representing the negative log of the coefficient of permeability. Due to the logarithmic scale, a concrete with an OPI value of 9.90 is about 6 times more permeable (10^(10.67-9.90) ≈ 5.9) than a concrete with an OPI value of 10.67. The good correlation between OPI and silane penetration depth is explained by the fact that the penetration depth of the product is a function of the overall quality (interconnectedness, tortuosity of the capillary pore structure) of the near-surface concrete, and the OPI test evaluates these properties [21].

Bulk diffusion

The detailed bulk diffusion results are contained in [22]. An example of typical chloride ingress profiles is given in Figure 8. In general, silane treatment reduced the surface chloride concentration, and the effect was most pronounced in the FA, GGCS, and CEM III/B mixes. Similarly, the silane-treated concrete had lower apparent chloride diffusion coefficients relative to the untreated concrete. As the chloride penetration and content are reduced within the near-surface zone, the supply of chloride ions that can diffuse deeper into the concrete is smaller. Diffusion of chlorides is also significantly slowed down as the capillary pores are less saturated in the silane-impregnated (treated) concrete [18,19].
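The apparent chloride diffusion coefficients discussed above are obtained in ASTM C1556 by fitting the error-function solution of Fick's second law, C(x,t) = Ci + (Cs − Ci)·erfc(x/(2√(Da·t))), to the measured chloride profile. A minimal sketch of such a fit, using an illustrative profile rather than the study's data (those are reported in [22]):

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

# Hypothetical acid-soluble chloride profile (% by mass of concrete); real
# profiles for each mix are given in [22] and exemplified in Figure 8.
depth_mm = np.array([2.5, 7.5, 12.5, 17.5, 22.5, 27.5])
chloride = np.array([0.52, 0.34, 0.19, 0.10, 0.05, 0.03])
t = 90 * 24 * 3600.0          # assumed 90-day exposure, in seconds
c_i = 0.01                    # assumed initial (background) chloride content

def fick_profile(x_mm, c_s, d_a):
    # Error-function solution of Fick's second law used by ASTM C1556;
    # c_s is the surface concentration, d_a the apparent diffusion
    # coefficient in m^2/s.
    x = x_mm / 1000.0
    return c_i + (c_s - c_i) * (1 - erf(x / (2 * np.sqrt(d_a * t))))

(c_s, d_a), _ = curve_fit(fick_profile, depth_mm, chloride, p0=(0.6, 1e-11))
print(f"surface concentration Cs = {c_s:.2f} %, apparent Da = {d_a:.2e} m^2/s")
```

Fitting treated and untreated profiles separately with this function is what makes the "lower Cs and lower Da after silane treatment" comparison quantitative.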
Conclusions

- The silane penetration depth was strongly dependent on the quality (porosity) of the near-surface zone, as deeper penetration was observed in the higher w/b concrete mixes. A near-linear correlation between Oxygen Permeability Index values and silane penetration was recorded, indicating that the OPI test is an excellent method to assess the likely penetration depth of the product.

- Silane impregnation improved the transport properties (lowered sorptivity and conductivity) of the concrete mixes, indicating a significant decrease in penetrability. Similarly, in relation to chloride ingress, chloride surface concentrations and chloride penetration depths were in general reduced for all treated concrete mixes; chloride ingress in treated concrete was considerably lower compared to untreated concrete.

- For practical applications, the results indicate that the durability of reinforced concrete structures in marine environments (splash/spray zone, airborne exposure), regardless of the binder type, can effectively be improved using hydrophobic impregnation, assuming proper surface preparation and application methods.

Table 1. Mix designs, slump values and compressive strengths.

Table 2. Durability Index test results (cut surfaces).
2,500.4
2018-01-01T00:00:00.000
[ "Materials Science" ]
The Noticing Function of Output in Acquisition of Rhetorical Structure of Contrast Paragraphs of Iranian EFL University Students

This article is an attempt to contribute to the growing body of research investigating the noticing function of output (cf. Swain 1995 in Izumi/Bigelow 2000: 239), and more specifically the use of output-fronted activities that might prompt FL learners to notice their linguistic problems, in order to facilitate their acquisition of the rhetorical structure of contrast paragraphs in English. Three groups of EFL learners participated in the study. Two groups (the experimental group and comparison group 1) were required to initially produce a paragraph (output 1); they then received a model contrast paragraph to underline, and finally they were asked to produce a contrast paragraph (output 2). For the experimental group, the topic to write on was a contrast topic, whereas the comparison group was to write on a non-contrast topic. The third group (the pre-emptive comparison group 2) received the teacher's deductive instruction and explanation of contrast paragraphs in English, followed by an output task to produce a contrast-related paragraph. The results indicated a considerable effect of output-fronted activities on learners' noticing of the targeted structures and forms. In addition, the output-first-then-input activities were found to be much more effective than pre-emptive input activities.

Introduction

Questions that have remained central for psycholinguists since second language acquisition emerged in the 1970s as a field of inquiry in its own right centre around such issues as what cognitive processes underlie success and failure in learners' attempts to master the linguistic patterning of a second language (L2), how general mechanisms of memory and attention are involved in second language acquisition, and how they contribute to language acquisition. In recent years, a number of researchers have attempted to present fully elaborated, cognitively oriented frameworks for thinking about SLA (e.g., Gass 1997, Johnson 1996, Skehan 1998, Van Patten 1993). These works build upon earlier efforts to bring an information-processing orientation to the SLA field (e.g., McLaughlin 1987, McLaughlin/Rossman/McLeod 1983, McLeod/McLaughlin 1986), and they draw upon theories of attention, memory, and skill to be found in both the SLA and general cognitive psychology literatures (cf. Segalowitz/Lightbown 1999). Specifically speaking, the questions which have served as departure points for second language acquisition research and pedagogy over the past two decades have been whether the process of acquiring an L2 is a conscious or subconscious process, and whether consciousness is necessary at all in the process of internalization of information. Different positions have been identified. According to Krashen's Acquisition Hypothesis (1983), acquisition takes place subconsciously; however, for Schmidt (Noticing Hypothesis, 1994) acquisition is largely a conscious process (cf. Izumi/Bigelow 2000: 240). Tomlin/Villa (1994) claim that acquisition is in part conscious and in part subconscious (cf. Hulstijn/Schmidt 1994: 7). The role of attention has thus recently preoccupied SLA researchers, psycholinguists, and cognitive psychologists (McLaughlin/Heredia 1996; Sharwood Smith 1993; Long 1991; Ellis 1994; Schmidt 1994, 2001), who have sought to theorize how input changes into intake.
In addition to exposure to input and the requirements for input to change into intake, equally important is researchers' recognition of the role of output in the process of second language acquisition. Schmidt (1992) has stated the need for learners to engage with language in their own output, which is similarly developmental, so that by readily calling on a rich linguistic repertoire they can progressively 'automatize' their knowledge. As with new intake, the learners' early efforts to output new forms are likely to require conscious attention, since the ease with which competent users call on language in their output is something which is only gradually accomplished (cf. Hulstijn/Schmidt 1994: 1-2). Because of this, and also because such output often requires the selection of a more complex and challenging form over a simpler paraphrase, it is sometimes known as 'pushed output' (cf. Swain 1985 in Izumi/Bigelow 2000: 244). According to Izumi (2002), pushed output, by virtue of producing utterances, can place the learner in an ideal position to make a cognitive comparison between the IL and TL forms. In short, pushed output can induce the learners to process the output effectively for their greater interlanguage development. Output may lead to greater metalinguistic awareness. In the process of striving to produce output that their interlocutors will understand, learners may pay particular attention to form, and may notice a gap between what they want to say and what they can say, leading them to recognize what they do not know, or know only partially (cf. Swain 1995 in Izumi/Bigelow 2000: 244).

2 Theoretical Background

Attention and Second Language Acquisition Research

In fact, in classical psychology, attention and consciousness are often viewed as two sides of the same coin. As Carr/Curran (1994) point out, "if you are conscious of something, then you are attending to it... and if you are attending to something, then you are conscious of it" (cited in Al-Hejin 2002). Moreover, everyday use of the term conscious has a variety of overlapping meanings such as awake, aware, and deliberate. The reason for this overlap, as the following discussion will illustrate, is that these concepts are inherently connected, and one concept often entails the other.

Schmidt (1994) identifies four dimensions of the concept of consciousness. The first is intention, which refers to the learner's deliberateness in attending to the stimulus. Intention is often associated with intentional versus incidental learning. The second dimension of consciousness is attention, which basically refers to the detection of a stimulus. The third aspect of consciousness is awareness, which refers to the learner's knowledge or subjective experience that s/he is detecting a stimulus. Awareness is often associated with explicit versus implicit learning, since learners may or may not be aware that they have acquired a new structure (e.g., children generally seem unaware of the complex syntactic rules they acquire). The fourth dimension of consciousness is control, which refers to the extent to which the language learner's output is controlled, requiring considerable mental processing effort, or spontaneous, requiring little mental processing effort (cf. Hulstijn/Schmidt 1994: 5-11).

Another group of second language acquisition researchers (Tomlin/Villa 1994) claim that detection, that is, attention without conscious awareness or noticing, is a key process in second language acquisition (cf. Bärenfänger et al.
2002: 1). Tomlin/Villa (1996) assert that detected information can be registered in memory and dissociated from awareness. Sharwood Smith (1993) intended to facilitate the learner's selection process of input by increasing the perceptual saliency of specific targeted forms in the input. This process would appear to engage the learner's attention as a selective process, as it involves directing the learner's focal attention to a specific form from an array of verbal or written forms. Another point Sharwood Smith (1993) emphasizes in his rationale for Input Enhancement is the possibility of increasing the saliency of a selected form in order to promote the restructuring of the learner's developing interlanguage system. This would seem to involve not only the process of selective attention, but also the way in which the form is subsequently processed by working (short-term) memory and long-term memory (cf. Sharwood Smith 1994: 178-180).

Comprehensible Output Hypothesis

It seems to be universally accepted that SLA is dependent on input (cf. Krashen 1985 in Shehadah 2003: 155-157). The earlier studies of input examined what is available to language learners and what part of input is relevant in language learning. The former issue led to studies of modified speech such as caretaker talk, foreigner talk, and teacher talk. The latter was investigated in terms of the comprehensible input hypothesis. Krashen (1982, 1985) claims that humans learn a language only by receiving enough comprehensible input, which is called the Input Hypothesis. What is crucial in language development is i + 1, or input that contains structures of the learner's next level. That is, the input learners are exposed to must be a little beyond the learners' existing level to prompt acquisition (cf. Shehadah 2003: 155-157).
In the literature of second language acquisition research, research on noticing in L2 acquisition has largely focused on input, and little attention has been paid to the role of output in facilitating language acquisition. In a seminal article, Swain (1985) argued that comprehensible input may not be sufficient for successful second language acquisition, but that opportunities for nonnative speakers to produce comprehensible output are also necessary. Swain based her conclusions on findings from studies she conducted in immersion contexts in Canada. She found that although immersion students were provided with a rich source of comprehensible input, their interlanguage performance was still off-target; that is, they were clearly identifiable as nonnative speakers or writers. In particular, Swain found that the expressive performance of these students was far weaker than that of same-aged native speakers of French. For example, they evidenced less knowledge and control of complex grammar, less precision in their overall use of vocabulary and morphosyntax, and lower accuracy in pronunciation. Thus, Swain claimed that understanding new forms is not enough and that learners must also be given the opportunity to produce them. She proposed a hypothesis relating to the second language learner's production, comparable to Krashen's comprehensible input hypothesis, termed the Comprehensible Output (CO) Hypothesis for SLA. Swain argued that comprehensible output is output that extends the linguistic repertoire of the learner as he or she attempts to create precisely and appropriately the meaning desired. She argued further that the role of learner production is independent in many ways of the role of comprehensible input, claiming that the CO hypothesis is also a necessary mechanism which aids SLA in many ways. In a nutshell, Swain's (1985) CO hypothesis predicts that we acquire language when there is a communicative breakdown and we are "pushed to use alternative means to get across the message precisely, coherently, and appropriately" (cited in Krashen 1998: 179).

Since the output hypothesis was first proposed, Swain has refined her hypothesis and specified the following four functions of output. First, output provides opportunities for developing automaticity in language use. This is the fluency function. In order to develop speedy access to extant L2 knowledge for fluent productive performance, learners need opportunities to use their knowledge in meaningful contexts, and this naturally requires output. The second function of output is a hypothesis-testing function. Producing output is one way of testing one's hypotheses about the target language. Learners can judge the comprehensibility and linguistic well-formedness of their interlanguage utterances against feedback obtained from their interlocutors. Third, output has a meta-linguistic function. Swain (1995) claims that "as learners reflect upon their own TL use, their output serves a metalinguistic function, enabling them to control and internalize linguistic knowledge" (cited in Izumi/Bigelow 2000: 245). In other words, output processes enable learners not only to reveal their hypotheses, but also to reflect on them using language. Reflection on language may deepen the learners' awareness of forms, rules, and form-function relationships if the context of production is communicative in nature. Fourth, output has a noticing function: in attempting to produce the target language, learners may notice a gap between what they want to say and what they can say (cf. Swain 1995 in Izumi/Bigelow 2000: 244). The recognition of problems may then prompt the learners to attend to the relevant information in the input, which will trigger their IL development.
Evidence from research that supports some of the functions of the output hypothesis suggests that output might indeed be beneficial for SLA. Izumi/Bigelow (2000) compared an experimental group, which received input via written exposure to the target form and engaged in output tasks, to a comparison group, which received the same exposure to the target form but did not engage in output. With only one exception, in which the experimental group outperformed the comparison group, there were no statistically significant differences between groups on any measure. The general lack of difference between groups was attributed to task demands rather than the learning conditions. Following a similar study design, but one that reduced task demands, Izumi/Bigelow (2000) compared four experimental groups (composed of combinations of output and input enhancement conditions) and a control group. The output groups engaged in a text reconstruction task, whereas the control groups answered extension questions based on the text. Results indicated that participants in the output groups used the target form in the reconstruction tasks and outperformed non-output and control groups on posttest measures (cf. Morgan-Short/Bowden 2006: 38).

Horibe (2002) conducted a study which compared two instructional treatment conditions (input only and input + output) to examine the effects of opportunities for output on the acquisition of the target forms, which were several syntactic structures. The subjects' thought processes in spoken output were elicited in think-aloud protocol interviews. Study participants were 31 college students in a Japanese course in 3 intact classes: input only (input group), input and output (output group), and no instruction (control group). The results indicated no statistically significant difference between the input group and the output group in terms of the acquisition rates of the target forms (cf. Lluna-Mateu 2006: 17).

A study by Nobuyoshi/Ellis (1993) provided data showing that comprehensible output results in actual improvement. In their study, six adult EFL students in Japan were asked to participate in a jigsaw task with their teacher in which they described actions in pictures that, they were told, had occurred the previous weekend or previous day. Nobuyoshi/Ellis (1993) concluded that their study provided "some support for the claim that 'pushing' learners to improve the accuracy of their production results not only in immediate improved performance but also in gains in accuracy over time" (cited in Krashen 1998: 178).

To sum up the literature, the majority of the studies mentioned focused on output and its role in acquisition, and little attention was paid to the noticing function of output. In fact, it appears that the missing point is the noticing nature of output, which facilitates the learning process. Considering this gap, the present study attempted to investigate the issue of output in terms of its noticing effect in the acquisition of specific targeted forms and items.
Research Questions and Hypotheses

Considering the issues mentioned in the literature, the present research follows up on Izumi and Bigelow's (2000) study in an attempt to shed light on the learners' psycholinguistic processes involved in output-fronted activities. Izumi and Bigelow's study focused on the acquisition of one specific type of conditionals, while the focus of this study was on the acquisition of the rhetorical structure or text structure of a particular type of expository text, namely contrast paragraphs in English. The study outlined in this paper was designed to provide answers to the following questions:

Question 1: Do output-first-then-input activities promote learners' noticing of the rhetorical structures of contrast paragraphs?
Question 2: Do output-fronted activities result in the immediate improvement of production of the target rhetorical structures?
Question 3: Do the output-first group learners outperform the non-output-first group learners?

Taking these questions into account, the following hypotheses were formulated:

Hypothesis 1: Producing output will enhance greater noticing of the target rhetorical structures contained in the input.
Hypothesis 2: The noticing function of output will significantly affect acquisition of the rhetorical structures of contrast paragraphs in English.
Hypothesis 3: The output-first group learners will outperform the non-output-first group learners (pre-emptive group learners) concerning the acquisition of targeted forms.

Participants

For the purpose of the present study, there were initially 75 participants, but 12 of them had to be excluded from the data analyses, since they failed to complete the Output 2 parts; therefore, the data analyzed come from 63 subjects. The participants ranged in age from the early twenties to 25. 14% (N = 9) were male and 85% (N = 54) were female. All study participants were L1 Persian speakers studying English as a Foreign Language enrolled at the same university (Qom Azad University, Iran). All were second-year students taking part in an obligatory writing course entitled "Advanced Writing", the stated purpose of which was to teach paragraph writing in English. Placement of the subjects into the classes was based on the enrollment procedures of the university and the students' passing the preparatory grammar courses. The students, from 3 intact classes randomly selected from 6 available classes, had to have been present for all phases of the experiment: the pretest, the Output 1 session, the instructional treatment, and the Output 2 session.

To verify that the participants in each of the classes were homogeneous in terms of proficiency, they were administered a Cambridge English Placement Test (CEPT). The mean scores of each group on the test were analyzed. The results of a least significant difference (LSD) test, with alpha set at .05, indicated no statistically significant difference among the CEPT scores of the three classes. Thus the test confirmed that the students within the groups were at the same level of English proficiency. The participants did not receive funding for participating in the study. It was at the discretion of their respective instructors to decide whether they would be given extra credit.
Research Design and Procedure

To examine the research hypotheses, a comparison group design was employed. Three groups were established: one experimental group and two comparison groups (see Figure 1 for the overall research design). The groups were three intact classes randomly selected from among 6 available classes, and their homogeneity was verified using the LSD analysis. All the subjects in the experimental group and comparison group 1 were required to produce an output first, except comparison group 2 (the non-output or pre-emptive group), whose learners started with their instructor's input. All the participants in the experimental and comparison groups were thoroughly informed of the procedures to be followed throughout the study prior to the tasks.

The Rhetorical Structure of Contrast

In the present study, the target forms to be focused on were the rhetorical structures used in typical contrast paragraphs in English. Following Taboada and Mann's (2006) Rhetorical Structure Theory (RST) and an examination of paragraph-writing and essay-writing textbooks in English (in particular Refining Composition Skills by Smalley/Ruetten/Kozyrev (2001: 17-175) and Paragraph Development by Arnaudet/Barrett (1981: 140-143)), the most frequently used rhetorical forms in English contrast paragraphs were selected, including prepositions (different from, unlike, in contrast to, contrary to, as opposed to).

Experimental Group

As illustrated in Figure 1, all the participants in the experimental group were given an opportunity to produce a paragraph. To meet the requirements of the study, the topic was selected by the researchers to function as a prompt eliciting the desired rhetorical structures frequently used in English contrast paragraphs: "The Differences between Men and Women in Iran." The researchers selected this topic on the basis of their personal teaching experience in actual paragraph writing and essay writing courses at universities. Students in those classes are ordinarily observed to feel at ease with this topic, perhaps because no specific background or prior schemata are required to develop it. No limitation was announced concerning the length of the paragraphs, but the participants were told to name at least three differences. The maximum time allotted to complete the task was 30 minutes, but nearly all the participants finished in less than the allotted time.

Having completed their first paragraphs, the participants were asked to submit their papers to the instructor, at this phase one of the researchers. Next, the researcher distributed among the participants a model contrast paragraph written by a native speaker (see the Materials section for details). The model paragraph contained a variety of contrast forms which are typically used in academic contrast paragraphs. The participants in the experimental group were told to read the paragraph carefully and underline the parts of the input that would help in their second writing attempt. They were required to underline everything they thought would help them in their rewriting task, from punctuation to a whole sentence. Again, for this phase of reading and underlining the input model paragraph, no time limit was set. The participants completed the task in approximately 10 minutes on average.

In the third phase of the treatment, the participants were required to write their second output. Here again, they were to rewrite on the same topic assigned formerly: "The Differences between Men and Women in Iran." To produce their second output, the participants were again given as much time as they demanded. Moreover, they were asked to mention at least three differences to fulfill the length requirement of their output. They were also told that misspellings would not be penalized. They completed their second output in approximately 30 minutes on average.
Comparison Group 1

As shown in Figure 1, all 19 subjects taking part in the study as Comparison Group 1 began with their Output 1. In fact, they were required to write a paragraph. In this case, however, the topic of the paragraph was different from the topic assigned to the experimental group: "The Characteristics of Good Students." No limitation was announced concerning the length of their paragraphs. The rest of the procedure used with Comparison Group 1 was exactly the same as the procedure applied to the Experimental Group: they were presented with the same model contrast paragraph to read carefully and underline, and they were asked to produce their second output using the same topic used with the experimental group.

Comparison Group 2: Pre-emptive Input

Within the category of incidental focus on form, Ellis/Basturkmen/Loewen (2001) distinguished between pre-emptive and reactive techniques. In pre-emptive focus on form, the teacher draws the learner's attention to a form before a problem arises. The teacher briefly treats language as an object and may or may not use meta-linguistic terminology. Following Ellis, in Comparison Group 2 the study procedure started with the teacher's explicit teaching of contrast paragraphs. In fact, using the deductive method, the teacher started with the definition of contrast paragraphs. Then, applying syntactic terminology, she enumerated the types of contrast structures frequently used in English contrast paragraphs. A model sentence followed each explanation of the pattern. The explanations and the model sentences were mainly selected from the students' textbook, Paragraph Development by Arnaudet/Barrett (1981: 140-143). After explaining the contrast structures accompanied by model sentences, the instructor presented the learners with a complete model contrast paragraph from their textbook and analyzed the contrast structures already explained.

When the instructor's teaching was finished, she handed out blank sheets with a topic. The participants were required to write a contrast paragraph on a topic similar to the experimental group's topic: "The Differences between Men and Women in Iran". The allotted time to complete the output was 30 minutes (on average, the participants completed it in approximately 30 minutes), and, like the participants in the experimental group, they were told to explain at least three differences in their paragraphs. Furthermore, they were free to ask about unknown words in English, and misspellings were not penalized.
Materials and Data Collection

Two types of materials were used to collect the necessary data for the purposes of the study: the participants' outputs, which comprised their written paragraphs, and the model contrast paragraph, which was selected to be underlined by the subjects. The model contrast paragraph was a paragraph of approximately 200 words written by a native speaker to contrast Arizona and Rhode Island. The researchers selected this model paragraph from among many candidate paragraphs mostly due to its conformity with what could be called a typical academic contrast paragraph: it had a clear topic, coherence, cohesion, and supporting ideas (see Appendix A for the model contrast paragraph). Moreover, the model paragraph was rich in contrast rhetorical structure: in each sentence of the paragraph, learners could notice contrast structures and lexemes. Consequently, the model paragraph appeared to be in congruity with Sharwood Smith's input enhancement model. The second comparison group (the pre-emptive input group) received a lesson on contrast from their textbook. The lesson included an explanation of the contrast paragraph, a model contrast paragraph, numerous contrast structures and patterns, and some exercises.

Data Analysis

To examine the research hypotheses, the data were collected using a two-fold instrumentation procedure: the participants' written outputs during the experimentation and their underlining. To score the participants' written output throughout the study, a scoring module was designed (see Appendix B for the scoring module). According to Norris/Ortega (2003), "an interpretation is warranted when researchers can demonstrate that a measure has provided trustworthy evidence about the construct it was intended to measure" (cited in Doughty/Long 2003: 722). They mentioned two major threats to construct validity in measurement: construct underrepresentation and construct-irrelevant variance. For the purposes of this study, since we focused on the acquisition of the rhetorical structure of an academic contrast paragraph, contrast-related items were defined as follows: 1) topic sentence, involving topic existence and topic effectiveness; 2) topic development, involving clarity of expression of ideas and overall effectiveness of the whole paragraph; and 3) contrast-related structures and items, involving the number of error-free T-units, unique contrast lexemes, punctuation, coordinate conjunctions, predicate structures, and sentence connectors. In the present study, the use of the above-mentioned structures and items by subjects in their outputs was taken as indicative of their learning of the structures.

Each participant's production score was computed as follows: the first section of the module consisted of items arranged to meet the requirements of organization and coherence needed in the rhetorical structure of contrast paragraphs. For this purpose, an ordinal three-point Likert scale was designed to assess the topic, cohesion, and coherence of each paragraph. To score the second section, the frequency of use of each of the contrast-related items was computed for each participant's output; one point was assigned for each item used by the subjects. At first, two of the researchers scored two papers collaboratively to arrive at the desired conformity in scoring, and then all the produced outputs were scored by the two researchers separately.
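As a rough illustration of the frequency-based second section of the scoring module, the following sketch counts occurrences of contrast-related items in a paragraph and assigns one point per occurrence. The item lists and the matching logic are hypothetical placeholders, not the actual scoring module from Appendix B.

```python
import re

# Hypothetical item lists; the actual inventory is defined in Appendix B.
CONTRAST_PREPOSITIONS = ["different from", "unlike", "in contrast to",
                         "contrary to", "as opposed to"]
SENTENCE_CONNECTORS = ["however", "on the other hand", "in contrast",
                       "whereas", "while"]

def frequency_score(paragraph: str) -> int:
    """Assign one point per occurrence of each contrast-related item."""
    text = paragraph.lower()
    score = 0
    for item in CONTRAST_PREPOSITIONS + SENTENCE_CONNECTORS:
        # Count non-overlapping occurrences of the item as a whole phrase.
        score += len(re.findall(r"\b" + re.escape(item) + r"\b", text))
    return score

sample = ("Unlike men, women in Iran ... In contrast to men, ... "
          "However, both groups ...")
print(frequency_score(sample))  # one point per matched item
```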
In order to assess the participants' noticing of the target rhetorical structures used in contrast paragraphs, the participants in the experimental group and the first comparison group were asked to underline the parts of the passage they thought necessary for their reproduction. According to Izumi/Bigelow (2000), using underlining as a measure of noticing is advantageous because it can be considered an on-line measure tapping the participants' attentional processes in the real time of the task. They believe such measures "have an advantage over postexposure measures of noticing because they allow more direct access to learners' ongoing internal processes and minimize possible memory loss." In addition, Schmidt's (1994) notion of noticing, formulated in his Noticing Hypothesis, was also tapped, since underlining was expected to engage at least a minimal amount of awareness. All the participants were first familiarized with the underlining procedure by the researcher (cf. Izumi/Bigelow 2000: 250).

To verify the homogeneity of the intact classes randomly selected for the purposes of the research, the LSD test was used as a measure of homogeneity based on the CEPT. Due to the nature of the study, we used non-parametric tests of difference. For the experimental group, we used the Wilcoxon signed-rank test to examine the effect of noticing on the production of the learners. The Mann-Whitney U test was applied to examine the statistical significance of differences between the experimental group and the comparison groups. The Kruskal-Wallis (H) test was used to compare the groups involved in the research, with the alpha level set at .05 in all tests.

Results

The initial statistical analysis of the research results provided a comprehensive picture of the groups and their related task types under study. Because the application of normality tests showed no normal distribution of the data, and since the samples were small-sized classes, we used the median and interquartile range as measures of central tendency and dispersion. The medians for the three groups under study (experimental group, comparison group 1, comparison group 2) are displayed in Table 1.

Hypothesis 1

In response to the first question of the present research, hypothesis 1 predicted that output-fronted tasks would enhance greater noticing of the target rhetorical structures contained in the input. Following the Noticing Hypothesis requirement of noticing as focal attention on the part of the learner, the noticing function of output (noticing the gap between what a person wants to say and what s/he can say), and Izumi and Bigelow's (2000) use of underlining as an on-line measure of noticing, the participants' underlining of the model passage was analyzed to address the noticing issue as an on-line measure. The EG, having produced their Output 1 on contrast, received a model contrast paragraph to underline. Similarly, the CG1 received the same model paragraph after producing a paragraph on a different, non-contrast topic. Table 2 shows the statistics of the EG and CG underlining task. The main supposition based on hypothesis 1 was whether underlining of the contrast-related words increased after the related output-fronted activities. Table 2 shows that the median of the experimental group (Mdn = .800) exceeds the control group's median (Mdn = .500). To examine whether the difference between the medians obtained by the EG and CG was significant, we applied the Mann-Whitney test (see Tables 3a & b for the test statistics). The statistics showed that a
significant difference emerged between the two groups.

Hypothesis 2

According to hypothesis 2, the noticing function of output will significantly affect acquisition of the rhetorical structures of contrast expository texts. In other words, the prediction based on the second hypothesis was that, having noticed the input in terms of specific structures, learners would acquire the target structures, and consequently their production might improve. In this phase, the EG participants were required to produce a contrast-related paragraph (Output 1); then they were presented with a model contrast paragraph enriched with contrast-related structures and words, and they were required to rewrite their first output (Output 2). The results of the outputs produced by the EG participants are displayed in Table 4. As Table 4 shows, the median of Output 2 for the EG participants (Mdn = 13.00) exceeded that of the participants' Output 1 (Mdn = .900). The Wilcoxon signed-rank test was applied to find out whether any statistically significant difference existed between the outputs produced by the EG participants. The test found a significant difference between the outputs produced by the participants in the EG.

Hypothesis 3

In this phase of the study, the third hypothesis was tested: the output-first group learners (EG) outperform the non-output-first, pre-emptive group learners concerning the acquisition of targeted forms. In fact, the prediction was that the group producing a comparison-related output first would exceed the non-output-first group that only received the teacher's input (the pre-emptive comparison group). For this purpose, the CG2 (the pre-emptive comparison group) initially received the input in the form of explicit explanation of contrast paragraphs by the teacher in English, followed by examples. Having received the pre-emptive input, the CG2 participants were required to produce a paragraph on contrast (Output). The test statistics reveal that the difference between the two groups was statistically significant (asymp. sig. 2-tailed = .000). The higher median of the EG and the statistically significant difference might be taken to imply that the specific experimental condition of the EG contributed to the outperformance of its participants in the study.

Discussion

The main research question motivating this study was to investigate whether or not noticing would produce significant acquisition of knowledge of the rhetorical structure of an academic contrast paragraph in English. To this end, the participants were divided into three groups: EG (contrast Output 1 - input - contrast Output 2), CG1 (non-contrast Output 1 - input - contrast Output 2), and CG2 (pre-emptive input - contrast output). Table 1 displays the general statistics of the three groups participating in the study.
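To make the statistical procedure described above concrete, here is a minimal sketch of the non-parametric tests using scipy. The score vectors are illustrative stand-ins, not the study's data (which are reported in Tables 1-7).

```python
from scipy.stats import mannwhitneyu, wilcoxon, kruskal

# Illustrative score vectors only.
eg_underlining  = [0.85, 0.80, 0.75, 0.90, 0.80]   # experimental group
cg1_underlining = [0.50, 0.45, 0.60, 0.50, 0.55]   # comparison group 1

# Hypothesis 1: two independent groups -> Mann-Whitney U test (two-tailed).
u_stat, p_u = mannwhitneyu(eg_underlining, cg1_underlining,
                           alternative="two-sided")

# Hypothesis 2: paired outputs of the same group -> Wilcoxon signed-rank test.
eg_output1 = [1.0, 0.5, 1.5, 0.0, 1.0]
eg_output2 = [13.0, 12.0, 14.0, 11.0, 13.5]
w_stat, p_w = wilcoxon(eg_output1, eg_output2)

# Comparing all three groups' outputs at once -> Kruskal-Wallis H test.
cg1_output = [12.0, 11.5, 13.0, 12.5, 12.0]
cg2_output = [8.0, 7.5, 9.0, 8.5, 8.0]
h_stat, p_h = kruskal(eg_output2, cg1_output, cg2_output)

print(f"Mann-Whitney U: U={u_stat}, p={p_u:.3f}")
print(f"Wilcoxon signed-rank: W={w_stat}, p={p_w:.3f}")
print(f"Kruskal-Wallis H: H={h_stat:.2f}, p={p_h:.3f}")
```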
The first research question investigated whether output-first-then-input activities promote learners' noticing of the rhetorical structures of contrast paragraphs. The results of the non-parametric Mann-Whitney U test for two unrelated sample groups give strong support for a positive answer to this critical research question. A significant difference was found between the EG and the CG participants' underlining tasks (see Tables 3a & b for details). The result might be taken to imply that the differing experimental conditions of the EG contributed significantly to the extent of attention paid to, and finally noticing of, the target rhetorical structures. In addition, Table 2 displays that the IQR of the EG participants (IQR = .15) was much smaller than that of the CG (.32). This indicates that individual variation in noticing the target rhetorical structures was considerably smaller for the EG than for the CG; in other words, the EG participants' attention was much less dispersed than the CG participants'. The lower IQR for the EG participants might be attributed to the task they performed preceding the underlining task: producing a contrast-related paragraph. Taking the noticing function of output into account, it might be claimed that the participants' Output 1 functioned as a noticing booster, and consequently increased learners' attention to the target structures. In fact, as a result of their Output 1, the learners recognized the gap in their knowledge, and, for the gap to be bridged, they probably noticed the input they received immediately following their output.

The test result might be taken to imply that the amount of noticing of the EG participants increased due to the gap of knowledge in the learners' interlanguage system. The EG participants' Output 1 was used as a prompt to raise the learners' noticing of their lack of the necessary knowledge of contrast text structure and the linguistic forms needed. The finding related to the first research hypothesis might be taken to confirm Swain's argumentation for the noticing function of output in language acquisition. Swain (1995) claimed that learners' output might function to help learners recognize the gap between what they want to say and what they can say. Having noticed the gap, learners might internalize the specific structures in the input (cf. Izumi 2002: 545-546).

The second hypothesis of the study examined whether the noticing function of output would significantly affect acquisition of the rhetorical structures of contrast paragraphs. In fact, the prediction was that noticing promotes learners' internalization of targeted structures in the input. To this end, the EG participants' Output 2 was compared with their Output 1 to investigate the difference between them. The results of the non-parametric Wilcoxon signed-rank test for two related samples revealed a statistically significant difference (asymp. sig. 2-tailed = .002, p < .05). It might be concluded that the noticing resulting from the participants' recognized gap in their interlanguage system contributed to better internalization of some targeted rhetorical structures in the input. However, no significant difference was found between the EG Output 2 and the CG1 output.
The supposition made by the third hypothesis was that the output-first group learners would outperform the non-output-first group learners (pre-emptive group learners) concerning the acquisition of targeted forms. In other words, the hypothesis predicted that output-fronted activities result in higher performance than the pre-emptive input activities traditionally applied in paragraph writing classes. To test the claim, the EG learners' Output 2 was compared with that of the CG2. Applying the Mann-Whitney U test, hypothesis 3 was supported in that the EG participants, with contrast Output 1 - input - Output 2, displayed a significant gain in their accurate use of the target rhetorical structures (asymp. sig. 2-tailed = .000, p < .05). In addition, the median score of the EG was the highest (Mdn = 13) and the lowest median score belonged to the CG2 (Mdn = 8), which received no output-fronted activity. It might be argued that since the participants in the pre-emptive comparison group could not recognize the gap in their interlanguage system, they could not raise their awareness of the target structures in the input, which itself resulted in a lower gain in the structures of contrast. Moreover, it might be concluded that the kind of teacher-generated noticing in pre-emptive activities is less effective than the learner-generated noticing which occurs as a result of learners recognizing the gap of knowledge in output-fronted activities. This finding was partially consistent with Izumi and Bigelow's (2000) study: "learners come to notice their linguistic problems when trying to produce language, which then prompts them to notice the gap between their IL form and the target form upon receiving relevant input".

Conclusion

This study attempted to explore a highly consequential but neglected aspect of classroom teaching of paragraph writing: the effect of output-fronted activities. The questions addressed in the present research might be answered in ways that support the centrality of noticing as one possible requirement for the acquisition of the rhetorical structures of contrast paragraphs in English. In addition, this study supports the use of output-first-then-input activities to enhance learners' uptake of needed structures and forms in the input. The findings of the present study might thus support Schmidt's Noticing Hypothesis and Swain's Comprehensible Output Hypothesis.

Finally, some methodological concerns are in order. Firstly, to tap learners' noticing in the study, we followed Izumi/Bigelow (2000) in using the underlining of parts of the input. In the literature, some criticisms have been put forward concerning the precision and accuracy of underlining as an on-line measure of noticing. Considering these asserted shortcomings, the present results related to underlining as a measure of noticing should be interpreted cautiously. Secondly, due to logistic considerations, the maximum length of experimentation time for each of the groups participating in the study was around 90 minutes. Clearly, within such a brief period of experimentation and treatment, participants may have been constrained from performing at their utmost ability. In addition, problems in the selection of the participants and the number of participants in each group may negatively affect the validity of the results; consequently, care must be taken in generalizing the research results.
Table 1: Median scores of the three groups. As displayed in the table, the median score of the experimental group's Output 2 considerably increased in comparison with the same group's Output 1 median score. The lowest median score was obtained by comparison group 2, which received no output-fronted activity and was a typical traditional paragraph writing class.
Table 3a: Mean ranks for the EG and CG underlining.
Table 6: EG and CG2 median scores. Table 6 displays the statistics related to the EG and CG2 outputs and indicates that the EG median is much higher than that of the CG2. We applied the Mann-Whitney test to examine whether the difference between the medians obtained by the EG and CG2 was significant (see Tables 7a & b for the test statistics).
Table 7a: The EG & CG2 ranks (CG2 output).
Table 7b: Mann-Whitney test statistics.
Topologically twisted SUSY gauge theory, gauge-Bethe correspondence and quantum cohomology

We calculate the partition function and correlation functions in A-twisted 2d $\mathcal{N} = (2,2)$ U(N) gauge theories and topologically twisted 3d $\mathcal{N} = 2$ U(N) gauge theories containing an adjoint chiral multiplet, with particular choices of R-charges and magnetic fluxes for flavor symmetries. According to the Gauge-Bethe correspondence, they correspond to the Heisenberg XXX 1/2 and XXZ 1/2 spin chain models, respectively. We identify the partition function with the inverse of the norm of the Bethe eigenstate. Correlation functions are identified with coefficients of the expectation value of the Baxter Q-operator. In addition, we consider correlation functions of 2d $\mathcal{N} = (2,2)^*$ theories and their relations to the equivariant integration of equivariant quantum cohomology classes of the cotangent bundle of Grassmann manifolds and to the equivariant quantum cohomology ring. Also, we study the twisted chiral ring relations of supersymmetric Wilson loops in 3d $\mathcal{N} = 2^*$ theories and the Bethe subalgebra of the XXZ 1/2 spin chain models.

Introduction

The Gauge-Bethe correspondence states that quantum integrable models correspond to supersymmetric gauge theories. The XXX Heisenberg spin chain model was considered as one of the primary examples of the Gauge-Bethe correspondence in the original papers [1,2]. It was argued there that the condition for supersymmetric vacua of the 2d N = (4,4) U(N) gauge theory softly broken by the mass of the adjoint chiral multiplet, usually called the 2d N = (2,2)* U(N) gauge theory, is naturally identified with the Bethe ansatz equation for the XXX 1/2 spin chain model. Also, the twisted superpotential was identified with the Yang-Yang potential.

In this paper, we study 2d N = (2,2) and 3d N = 2 theories containing an adjoint chiral multiplet with two different choices of R-charges and background magnetic fluxes but with the same gauge group and matter content. We calculate partition functions of A-twisted 2d N = (2,2) theories on S^2 and partition functions of topologically twisted 3d N = 2 theories on S^1 × S^2 with all the equivariant parameters associated to flavor symmetries turned on, but with the equivariant parameter associated to the rotational symmetry of S^2 turned off. We match them with the inverse of the norm of Bethe eigenstates by choosing particular R-charges and background fluxes for flavor symmetries.
The gauge invariant operators form a twisted chiral ring, and their expectation values provide the coefficients of the expectation value of the Baxter Q-operator. Thus, with a proper choice of coefficients, the expectation values of gauge invariant operators provide the expectation values of conserved charges of the corresponding spin chain model.

We also calculate correlation functions of the A-twisted 2d N = (2,2)* theory whose target space (in the nonlinear sigma model limit) is the cotangent bundle of the Grassmannian for several examples. We calculate the equivariant integration by using the results in [9], where it was shown that the Bethe subalgebra of the XXX spin chain model is isomorphic to the equivariant quantum cohomology ring (they considered general partial flag manifolds, of which the Grassmannian is a special case), and we check that the result is consistent with correlation functions of the A-twisted 2d N = (2,2)* theory and also with the Seiberg-like duality. It was shown in [10] that the Bethe subalgebra of the XXZ spin chain model is given by certain generators and relations analogous to the equivariant quantum cohomology ring in [9] (the Bethe subalgebra of the XXZ spin chain model was conjectured to be identical to the equivariant quantum K-theory ring [10]). With the Gauge-Bethe correspondence in mind, we see that the Wilson loop algebra agrees with the Bethe subalgebra of the XXZ 1/2 model by checking several examples. Also, we consider the Seiberg-like duality of the 3d N = 2* theory in the context of the Bethe subalgebra of the XXZ 1/2 model. In the final section, we conclude with a summary of our results and discuss some future directions.

2 The gauge-Bethe correspondence and the Bethe norm

Given a 2d N = (2,2) gauge theory, the condition for the supersymmetric vacua is given by

$$\exp\left(2\pi i\,\frac{\partial W_{\rm eff}(\sigma)}{\partial\sigma_a}\right) = 1\,, \qquad (2.1)$$

where $W_{\rm eff}(\sigma)$ is the effective twisted superpotential. According to the Gauge-Bethe correspondence [1,2,11], it is identified with the Bethe ansatz equation of a certain integrable model. Also, the twisted superpotential $W_{\rm eff}(\sigma)$ of 2d N = (2,2) theories corresponds to the Yang-Yang potential of the corresponding integrable model. For the isotropic SU(2) Heisenberg XXX 1/2 spin chain model, and similarly for the anisotropic XXZ 1/2 spin chain model, where a spin-1/2 degree of freedom of SU(2) is attached to each site, the twisted mass parameters for flavor symmetries are related to parameters for the displacement of lattice sites with respect to the symmetric round lattice configuration.

In this section, we relate the norm of the Bethe eigenstates of the XXX 1/2 and the XXZ 1/2 spin chain models to the partition function of a certain topologically twisted 2d N = (2,2) and 3d N = 2 theory, respectively. We also discuss coefficients of the expectation value of the Baxter Q-operator and conserved charges in terms of correlation functions.

2.1 The norm of the Bethe eigenstate in the XXX 1/2 and the XXZ 1/2 spin chain model

We are interested in the inhomogeneous XXX 1/2 and XXZ 1/2 spin chain models with M lattice sites. The monodromy matrix T(λ) of the XXX 1/2 and the XXZ 1/2 model takes the form of a 2 × 2 matrix acting on the 2-dimensional auxiliary space V, where λ is a spectral parameter. Therefore the transfer matrix τ, which is given by the trace of the monodromy matrix, is τ(λ) = A(λ) + D(λ).
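To make the vacuum condition (2.1) and its Bethe-ansatz interpretation concrete, the following sketch numerically solves a one-magnon Bethe-ansatz-type equation. The equation used is the textbook form of the inhomogeneous XXX 1/2 Bethe ansatz equation with quasi-periodic boundary condition, taken as an assumption here; the paper's own conventions (which appear in display equations not reproduced above) may differ by rescalings.

```python
import numpy as np

# Inhomogeneities nu_i, quasi-periodicity phase theta, and the standard
# auxiliary parameter c = 1 are illustrative assumptions.
M = 3
nu = np.array([0.1, -0.2, 0.3])
theta = 0.7
c = 1.0

# One-magnon equation: prod_i (l - nu_i + ic/2) = e^{i theta} prod_i (l - nu_i - ic/2).
# Rewrite as P(l) = 0 and find all roots of the polynomial P.
p_plus = np.poly(nu - 1j * c / 2)   # coefficients of prod_i (l - (nu_i - ic/2))
p_minus = np.poly(nu + 1j * c / 2)  # coefficients of prod_i (l - (nu_i + ic/2))
P = p_plus - np.exp(1j * theta) * p_minus
roots = np.roots(P)

# Verify each root satisfies the original equation.
for lam in roots:
    lhs = np.prod(lam - nu + 1j * c / 2) / np.prod(lam - nu - 1j * c / 2)
    print(lam, abs(lhs - np.exp(1j * theta)))
```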
With the quasi-periodic boundary condition relating $S_{M+1}$ to $S_1$, where $S_a = \frac{1}{2}\sigma^{(a)}$ are the generators at the a-th site and σ are the Pauli matrices, the transfer matrix is given by $A(\lambda) + e^{i\vartheta} D(\lambda)$ [1,14]. The R-matrix of the XXX 1/2 model depends on an auxiliary parameter c, while that of the XXZ 1/2 model depends on η, which is related to the anisotropy parameter $\Delta = \cos 2\eta$, $0 < 2\eta \leq \pi$. The R-matrix satisfies the Yang-Baxter equation on the auxiliary spaces, where $T_a$ acts on the auxiliary space $V_a$, and this provides the commutation relations of the matrix elements of the monodromy matrix T(λ). Also, from (2.8) and the trace identities, one can show that the transfer matrix τ(λ) commutes with the Hamiltonian. Therefore τ(λ) is a generating function of conserved charges; since they commute, eigenfunctions of the transfer matrix are also eigenfunctions of the Hamiltonian.

The pseudo-vacuum |0⟩ is an eigenstate of A(λ) and D(λ), $A(\lambda)|0\rangle = a(\lambda)|0\rangle$ and $D(\lambda)|0\rangle = d(\lambda)|0\rangle$, where a(λ) and d(λ) are called the vacuum eigenvalues. For the Heisenberg spin chain model, the pseudo-vacuum |0⟩ is given by the state with all spins up or all spins down.

The Bethe eigenstate. Consider the state obtained by acting with the operators $B(\lambda_a)$ on the pseudo-vacuum, $|\Psi_N(\lambda)\rangle = \prod_{a=1}^{N} B(\lambda_a)|0\rangle$, where N is the number of particles or excitations. This state becomes an eigenvector of the transfer matrix when the spectral parameters λ_a satisfy the Bethe ansatz equation, and the eigenvector is then called the Bethe eigenstate. The dual vector $\langle\Psi_N(\lambda)|$ is defined analogously. The vacuum eigenvalues a(λ) and d(λ) of the inhomogeneous XXX 1/2 and XXZ 1/2 models are products over the inhomogeneities, and the Bethe ansatz equations are (2.15) and (2.16), respectively, for the quasi-periodic boundary condition.

The norm of the Bethe eigenstate for the XXX 1/2 model. The norm of the Bethe eigenstate is given by the determinant formula of [15]. For the inhomogeneous XXX 1/2 spin chain model this can be evaluated explicitly, cf. (2.21), and the inverse of the norm of the Bethe eigenstate takes the form (2.22). Here P_XXX is the set of independent solutions (λ) := (λ_1, · · · , λ_N) of the Bethe ansatz equation (2.15) with the quasi-periodic boundary condition.

The norm of the Bethe eigenstate for the XXZ 1/2 model. The norm of the Bethe eigenstate for the XXZ 1/2 model [15] can be obtained similarly to the XXX 1/2 case, cf. (2.24). In terms of φ′, the inverse of the norm of the Bethe eigenstate takes an analogous form, where P_XXZ is the set of independent solutions (λ) := (λ_1, · · · , λ_N) of the Bethe ansatz equation (2.16) with the quasi-periodic boundary condition.

The partition function of the A-type topologically twisted theory can be calculated by using the formula in [3]. In the following calculation, we turn off the background value of the graviphoton associated to S^2. The one-loop contributions from the chiral, anti-chiral, and adjoint chiral multiplets, together with the one-loop contribution from the vector multiplet, combine into the factor $Z^{\text{1-loop}}_{\text{total}}(k)$ below. We denote a constant configuration of the scalar in the vector multiplet as σ = diag(σ_1, · · · , σ_{N_c}). The partition function of A-twisted gauged linear sigma models on S^2 is then given by a contour integral over σ, summed over the fluxes k; the choice of the contour is specified by the Jeffrey-Kirwan residue prescription, which depends on the choice of the covector η ∈ R^{N_c}.
The parameter q is the exponential of the complexified FI parameter, $q := \exp(2\pi i\tau) = \exp\big(2\pi i\,(\tfrac{\theta}{2\pi} + i\xi)\big)$. Choosing the covector η, for example, to be η = (−1, · · · , −1), the one-loop determinants of the anti-chiral multiplets and the adjoint chiral multiplet contribute to the Jeffrey-Kirwan residues. Poles from the anti-chiral multiplets exist when $k_a < n_i - \frac{1}{2}l - \frac{r}{2} + 1$. Summing first over $k_a < K$ for a sufficiently large positive integer K, the partition function is expressed in terms of the effective twisted superpotential $W_{\rm eff}$. Due to the factor $\exp(2\pi i\,\partial_{\sigma_a} W_{\rm eff})^K$ with large K in the numerator, there are no poles at $-\sigma_a + m_{y_i} - \frac{1}{2}m_z = 0$ and $\sigma_a - \sigma_b + m_z = 0$, and only the poles at $\exp(2\pi i\,\partial_{\sigma_a} W_{\rm eff}) - 1 = 0$ contribute. Then the dependence on K disappears and we obtain a sum over the set

$$P_{2d} := \{(\sigma_1, \cdots, \sigma_{N_c})\ |\ \exp(2\pi i\,\partial_{\sigma_a} W_{\rm eff}) = 1 \ \text{for all}\ a = 1, \ldots, N_c\}/S_{N_c}\,. \qquad (2.37)$$

In P_2d, we identify solutions which are the same up to Weyl permutations S_{N_c} of (σ_1, · · · , σ_{N_c}). The condition for supersymmetric vacua, $\exp(2\pi i\,\partial_{\sigma_a} W_{\rm eff}) = 1$, reproduces the Bethe ansatz equation, and the partition function of the A-twisted 2d N = (2,2) gauge theory and the inverse of the norm of the Bethe eigenstate (2.22) agree up to an overall factor. This type of relation was first studied for the U(N)/U(N) gauged WZW model on genus-g Riemann surfaces Σ_g in [16], where the corresponding integrable model is the phase model. See also [17][18][19][20].

For the match above we can choose, for example, all background fluxes and R-charges to be zero and not include the superpotential $\bar{Q}\Phi Q$ in the theory. The canonical assignment of the R-charge for the superpotential $\bar{Q}\Phi Q$ is not allowed if we want to match the A-twisted partition function with the inverse of the norm of the Bethe eigenstate: summing the three conditions in (2.43), and using the fact that the flavor symmetries are SU(N_f) rather than U(N_f), we have $r_1 + r_2 + R = 0$, so the canonical assignment $r_1 = r_2 = 0$ and $R = 2$ is excluded. Note also that, for the same matter content, the Bethe ansatz equation is the same whatever the R-charges and background magnetic fluxes are.

Correlation functions, the Baxter Q-operator, and conserved charges. We have identified the partition function with the inverse of the norm of the Bethe eigenstate. We can also consider correlation functions of the A-twisted 2d N = (2,2) theory discussed in section 2.2 in the context of the Gauge-Bethe correspondence. In the A-twisted 2d N = (2,2) theory, correlation functions of gauge invariant operators O(σ) are given by the same residue formula with O(σ) inserted; this can also be written in the form (2.47). The operator O(σ) is a gauge invariant polynomial in the Cartan of the scalar component σ of the vector multiplet, i.e. a symmetric function of σ_a, a = 1, · · · , N_c. Thus it can be written in terms of the elementary symmetric polynomials. If we define the polynomial

$$Q(x) := \prod_{a=1}^{N_c} (x - \sigma_a)\,, \qquad (2.48)$$

then the coefficient of $x^{N_c - l}$ provides (up to sign) the l-th elementary symmetric polynomial of the σ_a. Meanwhile, in integrable models there is a fundamental quantity known as the Baxter Q-operator Q(x), whose eigenvalue is precisely of the form (2.48) with N_c identified with the number of particles N and σ_a with the spectral parameters λ_a. Thus, we see that the expectation value of the Baxter Q-operator provides the generating function of correlation functions of gauge invariant operators in the 2d N = (2,2) theory of section 2.2.
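As a quick symbolic check of the relation between Q(x) and the elementary symmetric polynomials, here is a minimal sympy sketch (the rank N_c = 3 is illustrative):

```python
import sympy as sp
from sympy.utilities.iterables import subsets

# Verify that the coefficient of x^(Nc - l) in Q(x) = prod_a (x - sigma_a)
# equals (-1)^l times the l-th elementary symmetric polynomial e_l(sigma).
Nc = 3
x = sp.symbols('x')
sigma = sp.symbols('sigma1:%d' % (Nc + 1))

Q = sp.expand(sp.prod(x - s for s in sigma))
poly = sp.Poly(Q, x)

for l in range(Nc + 1):
    coeff = poly.coeff_monomial(x**(Nc - l))
    e_l = sum(sp.prod(c) for c in subsets(sigma, l))  # elementary symmetric
    assert sp.simplify(coeff - (-1)**l * e_l) == 0

print(Q)  # x**3 - e1*x**2 + e2*x - e3 in expanded form
```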
The eigenvalue of the transfer matrix τ(μ) for the XXX 1/2 model is given by (2.50); therefore the eigenvalue θ(μ, {λ_a}) is expressed in terms of symmetric polynomials of the λ_a. As discussed above, the eigenvalue of the transfer matrix is a generating function of mutually commuting conserved charges (or Hamiltonians). Accordingly, we can identify the expectation values of conserved charges of the XXX 1/2 spin chain model with the twisted GLSM correlators with appropriate coefficients.

Correlation functions in the 3d N = 2 theory and the XXZ 1/2 spin chain model

We consider topologically twisted 3d N = 2 U(N_c) gauge theories with an adjoint chiral multiplet Φ and N_f chiral and anti-chiral multiplets $Q^a_i$, $\bar{Q}^i_a$, a = 1, . . . , N_c, i = 1, . . . , N_f, respectively, where we use the same notation as in the 2d case. There are flavor symmetries $SU(N_f)_Q$, $SU(N_f)_{\bar{Q}}$, and $U(1)_D$. In addition, there is a $U(1)_T$ topological symmetry in three dimensions. The matter content and charge assignments are specified in table 2, and we denote the fugacities and magnetic fluxes of the Cartan part of the global symmetries accordingly. The topologically twisted index of the 3d N = 2 theory is then given by a Jeffrey-Kirwan residue formula analogous to the 2d case, where x_a is a constant value of the Wilson loop for the a-th diagonal U(1) of the gauge group U(N_c). We take, for example, η = (−1, −1, . . . , −1) to choose a contour so that it picks up poles from the anti-chiral multiplets and the adjoint chiral multiplet. Poles exist when $m_a < n_i - \frac{l}{2} - \frac{r}{2} + 1$, and we resum over $m_a < K$ for a sufficiently large positive integer K. Summing over all fluxes with $m_a < K$ in (2.53), we obtain an expression involving $B_a(x)$. Due to the $(\zeta e^{iB_a(x)})^K$ factor in the numerator with a sufficiently large K, the poles at $x_a = 0$, $1 - x_a^{-1} y_i z^{-1/2} = 0$, and $x_a - z x_b = 0$ do not contribute, and the only relevant poles come from $\zeta e^{iB_a(x)} = 1$ for all a. We denote the set of solutions of this equation, with solutions related by Weyl permutations S_{N_c} of (x_1, · · · , x_{N_c}) identified, analogously to the 2d case; the contour integral then becomes a sum over this set, (2.60). Also, upon (2.57), the condition for supersymmetric vacua, $\zeta e^{iB_a(x)} = 1$, is exactly the same as the Bethe ansatz equation for the XXZ 1/2 spin chain (2.16). If we choose R-charges and magnetic fluxes in such a way that the required conditions hold, then the 3d topologically twisted index (2.58) and the inverse of the norm of the Bethe eigenstate of the XXZ 1/2 spin chain model agree up to overall constants.

We can also consider correlation functions and conserved charges in the 3d N = 2 theory and the XXZ 1/2 spin chain model as in section 2.2. The eigenvalue Q(u) of the Baxter Q-operator in the XXZ 1/2 model takes a product form analogous to (2.48). Meanwhile, the Wilson loop in 3d N = 2 theories is given by the Schur polynomial $s_Y(x_1, \ldots, x_{N_c})$, where Y is the Young diagram for the representation R of U(N_c). When R is a totally antisymmetric representation, Y = 1^r, r = 1, . . . , N_c, the Schur polynomial is given by the elementary symmetric polynomials, $s_{1^r}(x_1, \ldots, x_{N_c}) = e_r(x_1, \ldots, x_{N_c})$. Therefore, with the identifications (2.57), the expectation value of Wilson loop operators is proportional to the coefficients of the eigenvalue of the Baxter Q-operator. Also, as the eigenvalue of the transfer matrix τ(μ) for the XXZ 1/2 model is given by (2.50) with (2.14), we can identify the expectation values of conserved charges of the XXZ 1/2 model with the expectation values of Wilson loops with appropriate coefficients.
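As a small consistency check of the statement $s_{1^r} = e_r$, the following sympy sketch computes the Schur polynomial for the column Young diagram $Y = 1^r$ from the standard bialternant formula and compares it with the elementary symmetric polynomial; the number of variables and the range of r are illustrative.

```python
import sympy as sp
from sympy.utilities.iterables import subsets

def schur_column(xs, r):
    """Schur polynomial s_{1^r}(x) via the bialternant (ratio of determinants)."""
    n = len(xs)
    lam = [1] * r + [0] * (n - r)   # column partition (1, ..., 1, 0, ..., 0)
    num = sp.Matrix(n, n, lambda i, j: xs[i] ** (lam[j] + n - 1 - j))
    den = sp.Matrix(n, n, lambda i, j: xs[i] ** (n - 1 - j))  # Vandermonde
    return sp.cancel(num.det() / den.det())

def elementary(xs, r):
    return sum(sp.prod(c) for c in subsets(xs, r))

xs = sp.symbols('x1:4')  # three variables, for illustration
for r in range(1, 4):
    assert sp.expand(schur_column(xs, r) - elementary(xs, r)) == 0
print("s_{1^r} = e_r verified for N_c = 3")
```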
Equivariant quantum cohomology, GLSM, and integrable model

In the previous section, we studied the relation between the A-twisted N = (2,2) GLSM and the XXX 1/2 spin chain. It was shown in [21] that integrations of cohomology classes of toric Fano manifolds can be interpreted as correlation functions of σ in the corresponding A-twisted N = (2,2) GLSM, where the cup product of cohomology classes is deformed by using three-point Gromov-Witten invariants (the quantum cup product). We may expect that such a relation holds for the N = (4,4) GLSM, where the target space is a hyperKähler manifold. We turn on all the possible twisted mass parameters, including the one for the N = (2,2) adjoint chiral multiplet. In this section, we consider correlation functions of the A-twisted N = (2,2)* GLSM on S^2 and study their relation to the equivariant quantum cohomology of the cotangent bundle of the Grassmannian.

Equivariant quantum cohomology and equivariant integration

Firstly, we summarize the equivariant quantum cohomology of the cotangent bundle of the Grassmannian T*Gr(r, n) [9]. The Grassmannian Gr(r, n) is specified by chains of subspaces $0 \subset F_1 \subset F_2 = \mathbb{C}^n$ with $\dim F_1 = r$. We consider the cotangent bundle T*Gr(r, n) of the Grassmannian Gr(r, n), and we sometimes denote (λ_1, λ_2) := (r, n − r) below. There is a torus action (C*)^n ⊂ GL_n(C) on C^n, and accordingly on Gr(r, n). In addition, there is also a C* action on the fiber direction of T*Gr(r, n). With these actions, one can consider a GL_n(C) × C* equivariant cohomology ring. The Chern roots of the bundles on Gr(r, n) with fibers F_1 and F_2/F_1 are denoted by Γ_1 = {γ_{1,1}, · · · , γ_{1,λ_1}} and Γ_2 = {γ_{2,1}, · · · , γ_{2,λ_2}}, respectively. Also, the Chern roots corresponding to the factors of the (C*)^n action and the C* action are denoted by z = {z_1; · · · ; z_n} and h, respectively. The equivariant cohomology ring is then given by a quotient by an ideal I of the ring of polynomials invariant under S_n, S_{λ_1} and S_{λ_2}, which denote the symmetrization of the variables {z_1, · · · , z_n}, {γ_{1,1}, · · · , γ_{1,λ_1}} and {γ_{2,1}, · · · , γ_{2,λ_2}}, respectively. The ideal I is generated by the n coefficients of a degree n − 1 polynomial in u.

The equivariant quantum cohomology ring of the cotangent bundle of the Grassmannian is given by the analogous quotient over C[q], the ring of formal series in the quantum parameter q. The ideal I_q is generated by the n coefficients p_l of the polynomial $\sum_{l=1}^{n} p_l(z, \Gamma, h, q)\, u^{n-l}$ (3.5). The coefficients p_l are degree-l polynomials in each of Γ and z, and are invariant under the action of S_n × S_{λ_1} × S_{λ_2}. Meanwhile, in [22] the Yangian acting on the equivariant cohomology was constructed, and the equivariant quantum cohomology ring was identified with the Bethe subalgebra of the integrable model; the cotangent bundle of the Grassmannian is a typical example of [22].

The equivariant integration of a cohomology class [f(Γ, z, h)] ∈ H*_{GL_n(C)×C*}(T*Gr(r, n); C) is calculated by the fixed-point formula (3.6), where I_r is a subset of I = {1, · · · , n} with |I_r| = r and I_{n−r} is the complement of I_r in I. The factor f(z_{I_r}, z; h) in the numerator is defined by the substitution Γ = (Γ_1, Γ_2) → (z_{I_r}, z_{I_{n−r}}) in f(Γ, z, h). The summation in (3.6) runs over all possible subsets I_r ⊂ I with fixed r. In section 3.2 we calculate the equivariant integration of elements [f(Γ, z, h; q)] of the equivariant quantum cohomology ring QH*_{GL_n(C)×C*}(T*Gr(r, n); C) for several examples by using the formula (3.6) and check that they match with the corresponding GLSM correlators.
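The following sketch implements the combinatorial skeleton of the fixed-point formula (3.6): it enumerates the subsets I_r, substitutes Γ → (z_{I_r}, z_{I_{n−r}}) into f, and sums with a weight factor. The explicit equivariant weight at each fixed point is given in [9] and is not reproduced in the text above, so it is left here as a hypothetical placeholder `weight`.

```python
import sympy as sp
from itertools import combinations

def equivariant_integral(f, z, h, r, weight):
    """Skeleton of the fixed-point sum (3.6) over r-element subsets I_r.

    f(gamma1, gamma2, z, h): the cohomology class, built as a sympy expression.
    weight(I_r, z, h): the fixed-point weight; its explicit form (the
    equivariant Euler factor of [9]) is a placeholder assumption here.
    """
    n = len(z)
    total = 0
    for I_r in combinations(range(n), r):
        I_c = tuple(i for i in range(n) if i not in I_r)
        gamma1 = [z[i] for i in I_r]     # substitution Gamma_1 -> z_{I_r}
        gamma2 = [z[i] for i in I_c]     # substitution Gamma_2 -> z_{I_{n-r}}
        total += f(gamma1, gamma2, z, h) / weight(I_r, z, h)
    return sp.simplify(total)

# Illustrative call with a trivial weight (placeholder only).
z = sp.symbols('z1:4'); h = sp.symbols('h')
f = lambda g1, g2, z, h: sum(g1)         # e.g. the class sum_a gamma_{1,a}
print(equivariant_integral(f, z, h, 1, lambda I, z, h: 1))
```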
More specifically, given a ring element, we reduce its degree by using the ideal I_q whenever possible and then apply the formula (3.6) to the resulting ring element, which depends on the parameter q in general. The GLSM correlation function of the operator corresponding to the original ring element (before reduction) is expected to match the result of the equivariant integration obtained in the way just described.

Correlation functions of A-twisted GLSM and equivariant integration of equivariant quantum cohomology

We study the relation between correlation functions of the A-twisted 2d N = (2,2)* GLSM and the equivariant integration of equivariant quantum cohomology classes in the cotangent bundle of the Grassmannian. The gauge group and the matter content are the same as in section 2.2, but we choose R-charges different from the previous case, in such a way that we now have the superpotential $W = \bar{Q}\Phi Q$. In the positive FI-parameter region, the target space of the non-linear sigma model limit of the theory is T*Gr(N_c, N_f), where the base space Gr(N_c, N_f) is parametrized by $Q^b_i$. On the other hand, in the negative FI-parameter region, the target is again T*Gr(N_c, N_f), but the base space is parametrized by $\bar{Q}^i_a$. The superpotential breaks the two SU(N_f) flavor factors to the diagonal SU(N_f). We turn off all the background fluxes for the flavor symmetry groups. The twisted mass parameters for the SU(N_f) flavor symmetry are denoted by m_i and the twisted mass parameter for the U(1)_D flavor symmetry by m_z.

The correlation function of a gauge invariant operator O(σ) constructed from σ = diag(σ_1, · · · , σ_{N_c}) is given by the Jeffrey-Kirwan residue formula (3.8). Here we take the charge vector in the Jeffrey-Kirwan residue formula appropriate to the region Re q < 1. The residues are then evaluated at the poles $(\sigma_a - m_i - \frac{m_z}{2})^{-(k_a+1)}$, and it is easy to show that poles coming from $(\sigma_a - \sigma_b + m_z)^{-(k_a - k_b - 1)}$ do not contribute to the residues. The overall sign ambiguity will be fixed below. Evaluating the vacuum equations, we obtain two equations, (3.10) and (3.11); dividing (3.10) by (3.11), we get (3.12). Upon the identifications (3.13), these match the corresponding relations on the cohomology side.

Quantum cohomology of CP^{n−1} and correlation functions of the A-twisted GLSM. We briefly recall the well-known relation between the N = (2,2) U(1) GLSM with n charge +1 chiral multiplets and the quantum cohomology of CP^{n−1}. This GLSM flows to the N = (2,2) non-linear sigma model with target space CP^{n−1} [21,23]. The quantum cohomology of CP^{n−1} is generated by γ_{1,1} with the relation $\gamma_{1,1}^n - q = 0$ (3.14). The equivariant integration of $\gamma_{1,1}^l \in QH^*(\mathbb{CP}^{n-1}; \mathbb{C})$, which we denote by $\langle\gamma_{1,1}^l\rangle_{\mathbb{CP}^{n-1}}$, is obtained as follows. If a < n, $\langle\gamma_{1,1}^a\rangle_{\mathbb{CP}^{n-1}}$ is the same as the integral of the cohomology class $\gamma_{1,1}^a \in H^*(\mathbb{CP}^{n-1}; \mathbb{C})$, which is 1 for a = n − 1 and 0 otherwise. For $\langle\gamma_{1,1}^{mn+a}\rangle_{\mathbb{CP}^{n-1}}$ with a < n, we reduce the degree by using the relation $\gamma_{1,1}^n - q = 0$ to $\gamma_{1,1}^{mn+a} = q^m \gamma_{1,1}^a$ and obtain $\langle\gamma_{1,1}^{mn+a}\rangle_{\mathbb{CP}^{n-1}} = q^m \langle\gamma_{1,1}^a\rangle_{\mathbb{CP}^{n-1}}$. On the other hand, the expectation value of σ^l is obtained by supersymmetric localization. We perform a similar calculation for the cotangent bundle of the Grassmannian.

T*CP^{n−1}. We would like to relate the expectation value of σ^l in the GLSM to the equivariant integration of equivariant quantum cohomology classes when the target space is T*CP^{n−1}. From (3.21), the equivariant integration of $\gamma_{1,1}^l$ is given by (3.22). The correlation function $\langle\sigma^l\rangle^{A\text{-twist}}_{N_c=1,N_f=2}$ is expected to be related to the equivariant integral on T*CP^1 via the identification of parameters (3.13). We can check this explicitly.

• T*CP^1. For example, when l ≤ 1, we computed the equivariant integration $\langle\gamma_{1,1}^l\rangle_{T^*\mathbb{CP}^1}$ for several orders of q and saw that there are no q corrections, cf. (3.27). Here we fixed the overall sign in order to have agreement with the equivariant integration $\langle 1\rangle_{T^*\mathbb{CP}^1}$. Therefore, we checked (3.23) in this case. We also computed $\langle\sigma^l\rangle^{A\text{-twist}}_{N_c=1,N_f=2}$ perturbatively and $\langle\gamma_{1,1}^l\rangle_{T^*\mathbb{CP}^1}$ exactly by using (3.22) for l = 2, 3, 4, 5, and checked the agreement (3.23).

• T*CP^{n−1}. We expect that the expectation value of σ^l agrees with the integration of $\gamma_{1,1}^l \in QH_{GL_n(\mathbb{C})\times\mathbb{C}^*}(T^*\mathbb{CP}^{n-1}; \mathbb{C})$. From the ideal, we obtain the relation (3.31), which is the same as the twisted chiral ring relation of the corresponding GLSM via (3.13). From (3.31), $\gamma_{1,1}^l$ with l > n − 1 is uniquely expressed in terms of lower powers, cf. (3.32). With the identification σ = γ_{1,1}, we expect that $\langle\sigma^l\rangle^{A\text{-twist}}_{N_c=1,N_f=n}$ agrees with the equivariant integration of the equivariant quantum cohomology class $\gamma_{1,1}^l$. We checked this for n = 3, 4 with several higher powers of σ and found agreement.

We have also checked (3.34) and (3.35) for k + l ≤ 3 perturbatively; the detailed calculation of the reduction (3.35) is given in appendix A as an example. We also checked the cases of k + l = 4 and some of k + l = 5 for T*Gr(2, 5) and found agreement. We expect to have agreement for general r ≤ n − r.
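The reduce-then-integrate procedure is easiest to see for CP^{n−1}; here is a minimal sympy sketch using the relation $\gamma^n = q$ and the classical integral $\langle\gamma^a\rangle = \delta_{a,n-1}$ stated above.

```python
import sympy as sp

q = sp.symbols('q')

def qh_integral_cpn(l, n):
    """<gamma^l> on CP^{n-1}: reduce gamma^l by gamma^n = q, then integrate.

    Classical integration gives <gamma^a> = 1 if a = n-1 and 0 otherwise;
    the relation gamma^n - q = 0 gives gamma^{mn+a} = q^m gamma^a.
    """
    m, a = divmod(l, n)
    return q**m if a == n - 1 else 0

# Examples on CP^2 (n = 3): only exponents congruent to 2 mod 3 survive.
for l in range(8):
    print(l, qh_integral_cpn(l, 3))   # 0, 0, 1, 0, 0, q, 0, 0
```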
For example, when l ≤ 1, the equivariant integration γ l 1,1 T * CP 1 gives for several orders of q and see that there are no q corrections, . (3.27) Here we fixed the overall sign in order to have an agreement with the equivariant integration T * CP 1 [1]. Therefore, we checked that We also computed σ l Nc=1,N f =2 A-twist perturbatively and γ l 1,1 T * CP 1 exactly by using (3.22) for l = 2, 3, 4, 5, and checked agreement (3.23). • T * CP n−1 . We expect that the expectation value of σ l agrees with the integration of γ l 1,1 ∈ QH GLn(C)×C * (T * CP n−1 ; C), From the ideal, we obtain the following relation JHEP02(2019)052 This relation is the same as the twisted chiral ring relation of the corresponding GLSM via (3.13). From (3.31), γ l 1,1 with l > n − 1 is uniquely expressed as With the identification σ = γ 1,1 , we expect that σ l Nc=1,N f =n A-twist agrees with the equivariant integration of the equivariant quantum cohomology class γ l 1,1 , We also checked this for n = 3, 4 with several higher powers of σ and found agreement. (3.36) We have checked (3.34) and (3.35) for k + l ≤ 3 perturbatively. The detailed calculation of the reduction (3.35) is available in appendix A as an example. We also checked the cases of k + l = 4 and some of k + l = 5 for T * Gr(2, 5) and found agreement. We expect to have agreement for general r ≤ n − r. T * Gr(r, n) with r > n − r and the Seiberg-like duality From the ideal p 1 = 0 for the equivariant quantum cohomology of T * Gr(r, n), we have JHEP02(2019)052 where λ 1 = r and λ 2 = n − r. In section 3.2.2, we expected, for example, r a=1 σ a Nc=r,N f =n A-twist = T * Gr(r,n) r a=1 γ 1,a for r ≤ n − r , (3.38) i.e. r a=1 γ 1,a T * Gr(r,n) does not have any q corrections and can be computed by using (3.6). From the relation (3.37), it is expected that r a=1 γ 1,a T * Gr(r,n) with r > n − r receives q corrections and differs from the result directly obtained by the classical equivariant integration (3.6). In order to calculate the equivariant integration properly for the case r > n − r, it is useful to study the isomorphism T * Gr(r, n) ≃ T * Gr(n − r, n), which corresponds to the Seiberg-like duality [24] between A-twisted N = (2, 2) * GLSM's with gauge groups U(r) and U(n − r). For this purpose, we consider the relation between ideals of the equivariant quantum cohomology of T * Gr(r, n) and T * Gr(n − r, n). The latter is given by where we use tilde to distinguish the notations for QH * GLn(C)×C * (T * Gr(r, n); C). The ideal I q is generated by n polynomialsp l defined by The ideals of the quantum cohomology of T * Gr(r, n) and of T * Gr(n − r, n) are the same upon the following parameter identification 8 When equivariant parameters are turned off, γ 1,a andγ 2,a are exchanged with each other under T * Gr(r, n) ↔ T * Gr(n − r, n). This is consistent with the fact that vector bundles with fibers F 1 and F 2 /F 1 are exchanged vice versa under Gr(r, n) ↔ Gr(n − r, n). Next we identify the variables in QH * GLn(C)×C * (T * Gr(n−r, n); C) with those in U(n−r) GLSM. By substituting u =γ 1,c andγ 1,c + h into n l=1p l (z,Γ,h,q)u n−l = 0, we obtaiñ With the identifications We begin with the simplest case, which corresponds to the partition function. From (3.6), we obtain T * Gr(r,n) [1] = T * Gr(n−r,n) [1] (3.45) and this implies 1 We computed each side of (3.46) for (N c , N f ) = (1, 3), (1,4), (2,5) in several orders of q and checked the agreement. 
Next, with the identification (3.41), we have the corresponding relation for the generators. There is another way of making the identification, but in view of the Seiberg-like duality the above identification is the more appropriate one.

4 Wilson loops in the 3d N = 2* theory and the Bethe subalgebra of the XXZ_{1/2} model

In the previous section, we saw that the twisted chiral ring relation of the GLSM agrees with the equivariant quantum cohomology ring of the cotangent bundle of the Grassmannian, which corresponds to the Bethe subalgebra of the XXX_{1/2} spin chain model. Therefore we can perform similar calculations and checks for the S^1 uplift of the twisted chiral ring relation of the 3d N = 2* theory and the Bethe subalgebra of the XXZ_{1/2} spin chain model [10]. The S^1 uplift of the twisted chiral ring of the 3d N = 2* theory on S^1 × S^2 is generated by Wilson loops wrapped on S^1. In the 3d N = 2* theory, which is obtained by the adjoint mass deformation of the 3d N = 4 theory, there is a superpotential analogous to the 2d one (Table 4 lists the matter contents of the 3d N = 2* theory). Here, we turn off all the background magnetic fluxes for flavor symmetries. The expectation value of supersymmetric Wilson loops in the representation R is then given by the localization formula, where we absorbed (−1)^{N_c−1} into the definition of the fugacity ζ for U(1)_T. Here the fugacity for the SU(N_f) flavor symmetry is denoted by y_i and the one for the U(1)_D flavor symmetry by z. When the representation R is the l-th antisymmetric representation A_l, Tr_{A_l}(x) is given by the l-th elementary symmetric polynomial of x = diag(x_1, ..., x_{N_c}). Note that any product of supersymmetric Wilson loops is a symmetric function of x, which is therefore also expressed in terms of the elementary symmetric polynomials.

Bethe ansatz equation and match of parameters. We consider the identification between the generators of K_q and the variables in the topologically twisted 3d N = 2* supersymmetric theory by deriving the Bethe ansatz equation from (4.5). By substituting u = γ_{1,a} and γ_{1,a} h^{−1} into P(Γ, z, h, q) = 0, we obtain, respectively, (4.6) and (4.7). Dividing (4.6) by (4.7), we get the Bethe ansatz equations (4.8), which coincide with the SUSY vacua condition ζ e^{iB_a} = 1 of the 3d N = 2* theory under the identifications (4.9).

Abelian cases. From (4.8), which is equivalent to ζ e^{iB_a} = 1, we expect that the supersymmetric Wilson loop W = x for U(1) gauge theories satisfies (4.10) with the parameter identification (4.9). Also by using (4.8), the higher order correlation functions ⟨W^l⟩_{N_c=1,N_f} for l ≥ n are expressed in terms of ⟨W^k⟩, k = 0, 1, ..., n − 1, as in (4.11). In the 2d N = (2, 2)* theory with N_f = n flavors, we found that ⟨σ^l⟩^{A-twist}_{N_c=1,N_f=n} with l ≤ n − 1 has no q corrections. There is a similar property in the 3d N = 2* theory. For 0 ≤ l ≤ n − 1, ⟨W^l⟩_{N_c=1,N_f=n} does not have ζ corrections and is given by the zero magnetic charge sector, (4.12). We have checked (4.10) and (4.12) for N_f = 2, 3, 4 in several orders of ζ.

Non-Abelian cases. In two dimensions, we observed that the partition function ⟨1⟩_{A-twist} does not receive any q corrections and is given by the residues at the zero magnetic charge sector. Similarly, we observed that the partition function (index) of the topologically twisted 3d N = 2* theory on S^1 × S^2 does not receive ζ corrections either and is given by (4.13). In two dimensions, ⟨1⟩^{A-twist}_{N_c=r,N_f=n} has a geometrical interpretation as the equivariant integration of [1] ∈ H*(T*Gr(r, n); C). The index (4.13) also has a geometrical interpretation.
If we identify the 3d parameters z_i and h as z_i = e^{z_i} and h = e^{−h}, respectively, then (4.13) is the sinh uplift of the equivariant integration, which can be interpreted as the equivariant Dirac index. For r = 2 ≤ n − 2, we also observe that the expectation values of x^{±1}_a satisfy (4.14). However, the properties of the correlation functions ⟨(x_1 + x_2)^2⟩, ⟨(x_1 + x_2)(x_1 x_2)⟩, and ⟨(x_1 x_2)^2⟩ are different from the 2d case. In the 2d N = (2, 2)* theory with N_c ≤ N_f − N_c, we expected that correlation functions of symmetric polynomials of σ, ⟨∏_{a=1}^{N_c} e_{l_a}(σ)⟩ with Σ_{a=1}^{N_c} l_a ≤ N_c, have no q dependence. This may be because the degree of the polynomial cannot be reduced to a lower degree in the polynomial ring by the ideal. For example, ⟨(σ_1 + σ_2)^l (σ_1 σ_2)^k⟩_{N_c=2,N_f=4} with k + l = 2 agrees with the residues at the zero magnetic flux sector and has no q dependence. On the other hand, if we eliminate γ_{2,1} + γ_{2,2} and γ_{2,1} γ_{2,2} from the ideal of K_q, we obtain (4.15) and (4.16). Then we find that the degree of ⟨(x_1 + x_2)^l (x_1 x_2)^k⟩_{N_c=2,N_f=4} with k + l = 2 is reduced by the above equations and that ⟨(x_1 + x_2)^l (x_1 x_2)^k⟩ has ζ dependence. We have checked that (4.15) and (4.16) hold for several orders of ζ in terms of expectation values of Wilson loops. With the identifications

λ̃_1 = λ_2, λ̃_2 = λ_1, γ̃_{1,a} = γ_{2,a} h, γ̃_{2,a} = γ_{1,a} h^{−1}, z̃_i = z_i, h̃ = h, q̃ = q^{−1}, (4.18)

(4.17) is identical to P(Γ, z, h, q) for (λ_1, λ_2) = (r, n − r). Thus, with (4.18), the Bethe subalgebra for (λ_1, λ_2) = (r, n − r) and the one for (λ̃_1, λ̃_2) = (n − r, r) are isomorphic. By substituting u = γ̃_{1,a} and γ̃_{1,a} h̃^{−1} into (4.17), we obtain the dual Bethe ansatz equations, which are again the same as the SUSY vacua condition ζ̃ e^{iB̃_a} = 1 of the U(n − r) gauge theory under the identifications (4.20). From (4.9), (4.18) and (4.20), we have maps between the parameters of the U(r) and U(n − r) 3d N = 2* gauge theories. So from now on, we do not distinguish y_i, z, and z_i from ỹ_i, z̃, and z̃_i, respectively. From (4.13), we then have equality at the level of the partition function (or index). We have checked this for several N_f and N_c. For the fundamental representation, we consider the coefficient of u^{−n+1} in P = 0, which can be rewritten by using the relations between the two sets of parameters, (4.9), (4.18) and (4.20). Therefore, this indicates that the Wilson loop in the fundamental representation, W_F = Σ_{a=1}^{N_c} x_a, in the U(N_c) gauge theory with N_c > N_f − N_c is expressed through W̃_F = Σ_{a=1}^{n−r} x̃_a, the Wilson loop in the fundamental representation of the U(N_f − N_c) gauge theory. When calculating the index, the evaluation of the l.h.s. in the region ζ < 1 (resp. ζ > 1) means that the r.h.s. is evaluated in the region ζ̃ = ζ^{−1} > 1 (resp. ζ̃ = ζ^{−1} < 1), where the negative (resp. positive) magnetic fluxes contribute to the Jeffrey-Kirwan residue operations. We evaluated the l.h.s. and the r.h.s. separately and found agreement for several r and n. Next we consider the second antisymmetric representation. We eliminate e_1(γ_1) from the coefficient of u^{−2} in P = 0. Then we obtain a relation which can be written in terms of x_a and x̃_a. Therefore, this suggests that the expectation value of the second antisymmetric representation, W_{A_2} = Σ_{a<b} x_a x_b, is given by (4.28) at N_c = r, N_f = n, with the dual fugacity fixed in terms of 2^{−N_c} ζ and h = z^{−1} = z̃^{−1}. We checked this for several N_c and N_f. In a similar way, we can obtain the Seiberg-like duality for Wilson loops in other representations from the ideal, with the identification of parameters (4.9), (4.18), and (4.20).
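For concreteness, here is a small sketch (ours) of the Wilson-loop characters used throughout this section: Tr_{A_l}(x) is the l-th elementary symmetric polynomial of the eigenvalues, so the fundamental and the second antisymmetric loops of the U(2) examples above are e_1 and e_2.

```python
# Characters of supersymmetric Wilson loops in antisymmetric representations:
# Tr_{A_l}(x) = e_l(x_1, ..., x_Nc), the l-th elementary symmetric polynomial.
import sympy as sp
from itertools import combinations

def wilson_antisym(xs, l):
    """l-th elementary symmetric polynomial of the eigenvalues xs."""
    return sum(sp.prod(c) for c in combinations(xs, l))

x1, x2 = sp.symbols('x1 x2')
print(wilson_antisym([x1, x2], 1))  # x1 + x2 : fundamental, W_F
print(wilson_antisym([x1, x2], 2))  # x1*x2   : second antisymmetric, W_{A_2}
```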
Also, as in the 2d case, we can obtain the S^1-uplift of the twisted chiral rings by eliminating the symmetric polynomials of γ_{2,a} in (4.5), with the identification of parameters (2.57).

5 Conclusion and future directions

In this paper, we discussed the relation between the partition function of the A-twisted 2d N = (2, 2) theory (resp. the topologically twisted 3d N = 2 gauge theory) and the inverse of the norm of the Bethe eigenstate for the XXX_{1/2} (resp. XXZ_{1/2}) spin chain model, with a particular choice of R-charges and background magnetic fluxes for flavor symmetries on the gauge theory side. Coefficients of the expectation value of the Baxter Q-operator and the conserved charges were understood in terms of correlation functions in gauge theories. We also studied the relation between correlation functions in the A-twisted 2d N = (2, 2)* U(N_c) gauge theories and the equivariant integration of equivariant quantum cohomology classes of the cotangent bundle of the Grassmannian. We calculated each of them for several examples, checked that they agree, and expect that the relation holds in general. For the case N_c > N_f − N_c, we used the isomorphism of Grassmannians to calculate the equivariant integration, where this isomorphism corresponds to the Seiberg-like duality on the GLSM side. As the twisted chiral ring of the 2d N = (2, 2)* theory is identified with the Bethe subalgebra of the XXX_{1/2} spin chain model, we were able to make a similar identification for the 3d N = 2* theory. We calculated correlation functions of Wilson loops and checked that they agree with the Bethe subalgebra of the XXZ_{1/2} spin chain model. There are several interesting directions. Firstly, it would be interesting to find the analogue of the equivariant integration in equivariant quantum K-theory and match it with the correlation functions of Wilson loops in the topologically twisted 3d N = 2* theory. Another interesting direction is to study relations between the Bethe ansatz and finite-dimensional commutative Frobenius algebras. In [26], a finite-dimensional commutative Frobenius algebra was constructed in terms of the Bethe ansatz for the q-boson model. It is known that a finite-dimensional commutative Frobenius algebra is essentially the same thing as a 2d topological quantum field theory (TQFT), and the 2d partition function on a genus g Riemann surface Σ_g corresponding to the q-boson can be written as [17]

Z(Σ_g) = Σ_{(λ)∈P_{q-boson}} ⟨Ψ(λ)|Ψ(λ)⟩^{g−1}. (5.1)

Here |Ψ(λ)⟩ is the eigenvector of the q-boson model determined by the Bethe root (λ). We obtained the same type of formula for the XXX_{1/2} spin chain model, where the corresponding TQFT is the topologically twisted 2d N = (2, 2) theory, and also for the XXZ_{1/2} model, which corresponds to the 3d N = 2 theory with the partial topological twist along S^2. By using recent results [27, 28], the partition functions of the 2d N = (2, 2) and the 3d N = 2 theories studied in this paper can be generalized to Riemann surfaces of genus g as sums of the same form, Z(Σ_g) = Σ_{(λ)} ⟨Ψ(λ)|Ψ(λ)⟩^{g−1}, over the corresponding Bethe states. These formulas are similar to the q-boson case and imply that there exist finite-dimensional commutative Frobenius algebras associated with the Bethe ansatz for the XXX_{1/2} and also the XXZ_{1/2} spin chain models. It would be interesting to construct these Frobenius algebras in terms of the XXX_{1/2} and the XXZ_{1/2} spin chain models.

Open Access.
This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
From Periods to Anabelian Geometry and Quantum Amplitudes

Periods are algebraic integrals, extending the class of algebraic numbers, and playing a central, dual role in modern Mathematical-Physics: scattering amplitudes and coefficients of the de Rham isomorphism. The Theory of Periods in Mathematics, with their appearance as scattering amplitudes in Physics, is discussed in connection with the Theory of Motives, which in turn is related to Conformal Field Theory (CFT) and Topological Quantum Field Theory (TQFT) on the physics side. There are three main contributions. First, building a bridge between the Theory of Algebraic Numbers and the Theory of Periods will help guide the developments of the latter. This suggests a relation between the Betti-de Rham theory of periods and Grothendieck's Anabelian Geometry, towards perhaps an algebraic analog of the Hurwitz Theorem, relating the algebraic de Rham cohomology and the algebraic fundamental group, both pioneered by A. Grothendieck. Second, a homotopy-homology refinement of the Theory of Periods will help explain the connections with quantum amplitudes. The novel approach of Yves Andre to Motives, via representations of categories of diagrams, relates from a physical point of view to generalized TQFTs. Finally, the known "universality" of Galois Theory, as how symmetries "grow", controlling the structure of the objects of study, is discussed in relation to the above several areas of research, together with ensuing further insight into the Mathematical-Physics symbiosis. To better understand and investigate the Kontsevich-Zagier conjecture on abstract periods, the article ponders the case of algebraic Riemann Surfaces representable by Belyi maps. A reformulation of the cohomology of cyclic groups as a discrete analog of de Rham cohomology, together with the Arithmetic Galois Theory, will provide a purely algebraic toy-model of the said algebraic homology/homotopy group theory of Grothendieck as part of Anabelian Geometry. The corresponding Platonic Trinity 5,7,11/TOI/E678 leads to connections with the ADE-correspondence and beyond, e.g. Theory of Everything (TOE) and ADEX-Theory. In perspective of the "Ultimate Physics Theory", quantizing "everything", i.e. cyclotomic quantum phase and finite Platonic-Hurwitz geometry of qubits/baryons, could perhaps be "The Eightfold (Petrie polygon) Way" to finally understand what quark flavors and fermion generations really are.

Introduction

Periods are a new class of numbers represented by algebraic integrals, extending the class of algebraic numbers and pervasive in applications, notably appearing as scattering quantum amplitudes. Their study was stimulated by the relatively recent work of Kontsevich and Zagier [1], following the programmatic paper relating them with Motives and Deformation Quantization in [2]. The author's preliminary efforts to understand the "core" of this network of ideas, including [3] [4], with a focus on the relation between Periods, as coefficients of the period isomorphism, and scattering amplitudes [5]-[10], led to the present viewpoint regarding the relation to Motives.

The Main Ideas: An Overview

There are three key ideas and directions of development, as principal contributions within this article.
First, the study of the relation between the Theory of Algebraic Numbers and the Theory of Periods provides a foundational bridge, which will help guide the developments of the latter. A starting point is to conduct a study of Riemann surfaces over Q that admit a Belyi map, and to understand the relations between abstract periods and motives via Dessins D'Enfants (§2.2.2). Second, the article emphasises the need for a homotopy-homology refinement of the Theory of Periods, which helps explain the connections with quantum amplitudes. Recall that the Feynman Integrals representing these are related to Chen Iterated Integrals, which historically played the role of a homotopical de Rham Theory, and which, in a similar way to the prototypical Hurwitz Theorem, should tentatively be related to the algebraic de Rham cohomology, hence to the Theory of Periods. The homotopical aspects are clearly present in Grothendieck's developments of the algebraic de Rham cohomology theory, establishing the theory of an algebraic fundamental group, towards a "general Galois Theory" as part of the so-called Anabelian Geometry [11]. This latter connection will allow one to understand, for example, the novel approach of Yves Andre to Motives [12], via representations of categories of diagrams, from the physical point of view of a generalized Topological Quantum Field Theory, in the sense of Atiyah [13], as generalized by the present author [14]. As a hint for now, Chow motives can be envisioned as "embedded cobordisms", and the associated integrals, the periods as de Rham isomorphism coefficients, as the numerical "shadow" of a TQFT amplitude. This analogy will be pursued and documented elsewhere. Finally, the known "universality" of Galois Theory, as how symmetries "grow", controlling the structure of the objects of study, is discussed in relation to the above several areas of research, together with ensuing further insight into the Mathematical-Physics symbiosis. A simple and elementary example of this universality is Arithmetic Galois Theory, in the context of finite abelian groups, formulated in the language of Category Theory [15] [16]. Its relation with a discrete version of algebraic de Rham cohomology, as suggested in [17] from the cohomological side, deserves, and will be, the subject of a separate article. The ideas and specific problems to be studied to make this correspondence more precise are presented below. Further speculations and questions, regarding important related problems in Physics, notably the corresponding Platonic Trinity 5,7,11/TOI/E678 leading to connections with the ADE-correspondence and beyond, e.g. TOEs and ADEX-Theory [18], are included for future reference. The appearance of Platonic groups of symmetries should not be surprising at this stage; the interest of physicists in finite groups of symmetry as vertical gauge groups dates from the 1950s. In perspective of the "Ultimate Physics Theory", quantizing "everything", i.e. cyclotomic quantum phase and finite Platonic-Hurwitz geometry of qubits/baryons, could perhaps be "The Eightfold (Petrie polygon) Way" to finally understand what quark flavors and fermion generations really are.

Nomenclature

To help the reader skim through the paper, a list of the main mathematical objects involved, and the associated notation used, is briefly explained below. • The abstract periods of an algebraic variety X are denoted by Per(X); γ is a singular chain and D = ∂γ a divisor.
The corresponding pair is denoted accordingly.

On Kontsevich Conjecture Regarding Abstract Periods

The Kontsevich conjecture on abstract periods is related to Grothendieck's Conjecture [19], and by considering the special case of Riemann Surfaces representable by Belyi maps [20], which started Grothendieck on his long march on Galois Theory [21] [22], it is expected to obtain additional insight and perhaps a proof in a special case. Its deep relation to Galois Theory [23] is remarkable, hence with algebraic numbers, as "classical periods", as well as with a plethora of other "essential ingredients".

Trends and State-of-the-Art of Research on Periods

An update of the research on periods was in order, highlighting some points to be investigated.

Period Identities, Hilbert's 3rd Problem and Dehn Invariants

In the concrete direction for studying periods, the work of Juan Viu-Sos, for example [25], reduces periods ∫_D ω to geometric volumes of semialgebraic sets (see also [26]), in an analysis in the sense of measure theory, to better understand the limit process from finitely-additive to σ-additive. In some sense this is going "back-in-time" to Lebesgue and Borel, but notably under the guidance of Hilbert (3rd Problem). It is notable, and worth investigating, how Dehn surgery and invariants enter the picture. The reason is that Dehn surgery provides (perhaps) an alternative description of how to build/glue a manifold, capturing Betti homology in a homotopical way. Does this refine the homological period isomorphism? Additional references to be investigated are listed in [26], notably those presenting work by Waldschmidt and Yoshinaga. The difficulty of this line of investigation lies in the "forgetting of structure" when going from a categorical point of view to a "Cauchy/numerical-methods/algorithmic" approach to the real numbers.

Why Belyi Ramified Covers

It is natural to look at Belyi ramified covers of the Riemann sphere with a Mobius homological mark-up (SL_2(C)/"conformal group base point"): this (Belyi's Theorem) was Grothendieck's "turning moment" (letter to Faltings). Ayoub's geometric conjecture (loc. cit. §5, Th. 40, p. 7) seems to compare the algebraic fundamental group of a field extension and the relative motivic group. The takeaway from this is the relevance of the homologic/homotopic algebraic de Rham/Chen framework, probably subject to a version of a "Hurwitz Theorem" [4].

Is There a Ramification Theory of Periods?

In view of the tight connections between algebraic numbers and periods, it is worth strengthening the analogy: "Is there a Ramification Theory for Periods?".

The Category of Ramified Belyi Covers

Consider Belyi maps for Riemann surfaces defined over the rationals, in analogy to covering maps and their deck transformations, or field extensions and Galois groups. One then considers the torsor of the associated subgroupoid, and a pair of adjoint functors, playing the role of a Galois Connection, in order to derive the "absolute theory" at the Category Theory level, as a "tool-box". It may lead to connections between motives and the theory of generalized cohomology theories (P. Hilton [29]), via triples and spectra.

The Relation with KZ-Moves

Linearity and Stokes Theorem are captured by considering the period isomorphism. The "change of variables" (diffeo/biholomorphic) is built into the formalism of differential forms. Hence, it seems that the essential part of the KZ-moves, modulo the torsor structure due to equivalence via isomorphisms, is the way the period isomorphism behaves under a ramified cover.
For covering maps this would correspond to the lattice structure of the fundamental group of the base space, via its universal covering map. On the other hand a differential form, e.g. a 1-form in our case, defines a monodromy, and therefore a ramified cover via path-integral lifting. How all these relate to periods remains to be seen…

Prime Decomposition and Ramification of Dessins D'Enfant?

The ramification process (and theory) should parallel the ramification of primes under field extensions, with integrals as amplitudes. The case of a higher number of ramification points probably corresponds to families of periods indexed by parameters.

Homological vs. Homotopical

An investigation of the Conjecture will start from understanding the relation between these periods and the "discrete DATA" (Dessins D'Enfants), as a homotopical analog of the Hodge structure characterizing the Betti-de Rham homological period isomorphism. The pertaining goal is to identify a more tangible combinatorial structure that corresponds to periods, is invariant under a different kind of "moves" (e.g. Pachner moves, Reidemeister moves, chord diagram relations etc.), and allows for a correspondence with the 3 moves of Kontsevich's Conjecture. A byproduct of the study would be a better understanding of the relation between "homological" and "homotopical" periods, as intuitively corresponding to the "abelian vs. anabelian" case. Indeed, Galois groups controlling algebraic numbers, as special cases of algebraic fundamental groups, are special cases of periods, controlled by the algebraic de Rham cohomology. But these two algebraic theories should be related by an analog of the Hurwitz Theorem, as intuitively "hoped" by the present author in the IHES talk [4].

A Study of Periods of Elliptic Curves

Following Polya's advice, "If you can't solve a problem, there is an easier one you can't solve; find it!" [31], this could be the specific study which could yield a better understanding of the basic concepts and of the relations between them. In this case, the periods are related by the Legendre relation.

Periods of the Klein Quartic and Belyi, Galois, Gauss Etc.

Klein's quartic is a very good example to see how the geometry, with its group theory aspects, relates to the algebra (Galois action) and the analytic side (the Jacobian) [37].

From Periods to Anabelian Geometry

This suggests a relation between the theory of homological periods and Grothendieck's Anabelian Geometry, towards perhaps an algebraic analog of the Hurwitz Theorem, relating the algebraic de Rham cohomology and the algebraic fundamental group, both pioneered by A. Grothendieck in Esquisse d'un Programme [11]. Indeed, as early as during the previous visit, the author suggested a "Hurwitz Theorem" larger framework surrounding the theory of abstract periods, motives and Galois Theory, as presented in [4]. One would view γ as a cobordism, ω as a propagator and ∫_γ ω as an amplitude (work/circulation); π_1(X) as a groupoid makes sense, and a "physical form" of the Hurwitz Theorem seems to "refine" the period isomorphism. According to the philosophy of Anabelian Geometry, "What is being represented here as space, with this algebraic fundamental group?" (Compare with 1-forms defining connections whose monodromy defines a representation of the fundamental group.)

A Physics Interpretation of Periods, and Montonen-Olive/T-Duality

Periods ∫_γ ω can be interpreted along the following lines; we will be specific: the 1D-case of Riemann surfaces.

Closed vs. Open Periods; Electric vs. Magnetic
There are relations between charges, and the Riemann-Roch Theorem restricts the possible dynamics.

Helmholtz/Hodge and "Maxwell's Equations"

The Hodge duality corresponds to the Helmholtz Decomposition. It hints at the fact that it reflects the structure of the group of symmetries of the space X (the "gauge group"): translations/grad, rotations/curl, similarities/divergence. The local group is conformal, with its polar decomposition; the "global aspects" are captured by the fundamental group π_1(X) (the "Galois Group"). Correspondingly there is an underlying "gauge theory" with connection 1-form ω, and hence a Montonen-Olive Duality between "electric" and "magnetic", which in String Theory corresponds to T-Duality. The point is that the physical interpretation complements a "purely Grothendieck" approach to understanding periods, paving the road towards understanding Feynman Integrals and, more importantly, intrinsic scattering amplitudes (to be made precise in view of the new methods for computing MHV amplitudes [41]).

… And Quantum Physics Amplitudes (beyond Veneziano)

There seem to be good prospects of better understanding the role of the absolute Galois group in the physics context of scattering amplitudes and Multiple Zeta Values, with their incarnation as Chen integrals on moduli spaces, as studied by Francis Brown, since the latter are a homotopical analog of de Rham Theory: Quantum Amplitudes ↔ Integrals on Moduli Spaces. The fact that maximally helicity violating (MHV) 3-point amplitudes resemble the cross-ratios on the Riemann sphere (essentially the unique Lorentz invariant), with logarithm a hyperbolic metric, suggests that these "structure constants" have an arithmetic origin.

Arithmetic Galois Theory and Anabelian Geometry

Specifically, the author's reformulation of the cohomology of cyclic groups as a discrete analog of de Rham cohomology [17], and the associated analog of the period isomorphism, will be related to the arithmetic Galois Theory [16], again as a discrete, purely algebraic toy-model of the said algebraic homology/homotopy group theory of Grothendieck. It will allow an elementary investigation of the main concepts defining periods and the algebraic fundamental group, together with their conceptual relation to algebraic numbers and Galois groups.

Research Ramifications to TOEs and ADEX-Theory

The research will be placed in the larger context of the ADE-correspondence. The applications to the ADE-correspondence, and beyond, e.g. to ADEX-Theory [18], are an exciting R&D opportunity to perhaps finally understand what quark flavors and generations really are.

Role of the Exceptional Lie Algebra

Note the role played by the exceptional Lie algebras in this finite qubit framework.

Conclusions

The main result of the author's research on the presented topics is rather a network of conceptual connections relating the Theory of Periods, Anabelian Geometry and scattering amplitudes on the physics side, leading to a more focused plan of study. More specifically, the main thread in this line of reasoning is to study algebraic de Rham cohomology and algebraic fundamental groups together, in order to understand why Feynman integrals (or scattering amplitudes in general, independent of a particular method of computation) are related to "homological periods" (the algebraic de Rham isomorphism) on one hand, but are related to Chen iterated integrals on the Number Theory side, which form a homotopical de Rham Theory, hence to be studied in the algebraic context of Anabelian geometry.
One useful strategy is to investigate the relation between the theory of algebraic numbers and that of periods, with Riemann surfaces having a Belyi map as a starting point. This leads to the rich area of graphs embedded on surfaces, i.e. Dessins D'Enfant, as a sort of generalization of a lattice embedded in a vector space, and of Hodge structures controlling how Betti homology "sits" inside de Rham cohomology. A study of the category of morphisms of Belyi maps, which capture not only the ramification data, but also divisor data and homotopy groups (Anabelian Galois groups), could in principle help clarify the role of the "change of variables" Kontsevich-Zagier relation, beyond the torsor due to isomorphism equivalence. The analogy with the theory of decomposition of primes, corresponding to the structure of the Galois group (inertia, decomposition and degree), supports the belief that such a study could yield new results and a better understanding of the structure of the "absolute" ring of periods. At the "elementary" level of Algebraic Number Theory, the representation point of view used to study the arithmetic Galois Theory (short exact sequences of Abelian groups and their Aut(Ab) symmetries), functorially corresponding to (algebraic) Galois Theory, can be thought of as an analog of covering spaces and deck transformations. This provides another example of "Anabelian Geometry" "a la Grothendieck", together with, and corresponding to, the "main example" of algebraic fundamental groups, namely the Galois groups. These homotopical aspects of Galois Theory in the non-commutative case will be studied elsewhere, together with the homological aspects, via the relation between the discrete de Rham cohomology [17] and the algebraic de Rham cohomology of Grothendieck [3]. Specific problems, refining the general research suggested by the above considerations, will be addressed elsewhere: 1) The discrete analog of de Rham cohomology, and its connection with the Theory of Periods, via the algebraic de Rham cohomology. 2) Arithmetic Galois Theory and its functorial connection with the classical Galois Theory of field extensions. In the light of (1) above, (2) is expected to provide additional insight into the theory of periods and the period isomorphism, as homological analogues of algebraic numbers and their associated Galois groups. 3) The connection with the Theory of Motives claimed above can be addressed by viewing Chow cycles as embedded cobordisms, allowing one to establish an analogy, at least, with TQFTs and providing support to the recent approach to motives by Y. Andre, via categories of representations of diagrams (analogous to quiver representations, yet not necessarily finite). This direction is clearly consistent with the Physics side of the picture, from Feynman path integrals and Feynman diagrams, to the quark line diagrams of the Standard Model, with its mathematical counterpart, the Turaev ribbon calculus in modular categories. At a more concrete level, applications to physics are proposed via the special case of Riemann Surfaces with Platonic tessellations, and the study of the role of the Hurwitz surfaces, i.e. those with maximal symmetry. For example, the Klein quartic is instrumental in String Theory. It is explained how a detailed study could be a bridge between the Standard Model and String Theory, via a qubit model (Hopf bundle) interpretation.
There are of course rich connections with the ADE-correspondence, Klein singularities and orbifolds, RS as crepant resolutions etc. [45], and also with TOEs (e.g. Lisi's [46]) and ADEX-Theory [18], emphasizing the roles of the exceptional Lie groups in fundamental physics and, perhaps more generally, the conceptual unity of Mathematical-Physics.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.
1/x power-law in a close proximity of the Bak-Tang-Wiesenfeld sandpile

A cellular automaton constructed by Bak, Tang, and Wiesenfeld (BTW) in 1987 to explain the 1/f noise was recognized by the community for the theoretical foundations of self-organized criticality (SOC). Their conceptual work gave rise to various scientific areas in statistical physics, mathematics, and applied fields. The BTW core principles are based on steady slow loading and an instant huge stress-release. Advanced models, extensively developed far beyond the foundations for 34 years to successfully explain SOC in real-life processes, have still failed to generate truncated 1/x probability distributions. This is achieved here by returning to the original BTW model and establishing that its potential is larger than the state of the art expects. We establish that clustering of the events in space and time, together with the core principles revealed by BTW, leads to an approximately 1/x power-law in the size-frequency distribution of model events.

Bak, Tang, and Wiesenfeld wrote "we believe that the new concept of self-organized criticality can be taken much further and might be the underlying concept for temporal and spatial scaling in a wide class of dissipative systems with extended degrees of freedom" [1]. SOC systems evolve to a critical state characterized by power-laws without parameter tuning. The absence of adjustable parameters such as the temperature or magnetization distinguishes SOC systems from systems which generate critical dynamics at a phase transition. As BTW projected, a huge number of real systems and processes exhibiting SOC were exposed [2-5]. Nevertheless, the power-law exponents usually depend on the features of the sub-systems (e.g., seismic faults, geographical regions, forest fires, and stellar flares [6-8]), thus leaving open the question regarding the extent to which the underlying systems are self-organized. The BTW model is defined on a square lattice that contains integers interpreted as grains. Initially, all lattice cells contain fewer than 4 grains. At each time moment a grain is added to a randomly chosen lattice cell. If the resulting number of grains is still less than 4, nothing more happens at this time moment. Otherwise, the overloaded cell transfers 4 grains in such a way that all adjacent cells (their number is 4 inside the lattice) receive 1 grain. Grains are lost off the edge of the lattice, since the boundary cells do not have 4 nearest neighbors. As a result of the transfer, other cells can become overloaded. The transfers continue while there are overloaded cells. The sequence of transfers occurring at a single time moment forms an avalanche, the size of which is the number of transfers. For any initial distribution of grains over the lattice, the system attains a critical state characterized by the power-law size-frequency relationship of the avalanches with the exponent τ ≈ 1.20 [9]. Modeling of real-life systems characterized by power-laws at the critical state can potentially be performed with modifications of the original BTW model that involve various ways of stress propagation, including directed transportation, quenched disorder, and remote transfers [10-13], and that implement the BTW mechanism on different spaces, including fractals and networks [14-17].
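As a reference point for the extensions developed below, the original BTW rules just described can be simulated in a few lines. The following Python sketch (ours, not the authors' code) adds one grain per time moment and records avalanche sizes, whose histogram should display the power-law segment with τ ≈ 1.20 after a transient.

```python
# Minimal BTW sandpile on an L x L lattice: add a grain to a random cell;
# a cell holding 4 or more grains topples, passing one grain to each nearest
# neighbour; grains crossing the boundary are lost. The avalanche size is
# the number of topplings triggered by one added grain.
import numpy as np

def btw_step(h, rng):
    L = h.shape[0]
    h[rng.integers(L), rng.integers(L)] += 1
    size = 0
    while True:
        unstable = np.argwhere(h >= 4)
        if unstable.size == 0:
            return size
        for a, b in unstable:
            h[a, b] -= 4
            size += 1
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                na, nb = a + da, b + db
                if 0 <= na < L and 0 <= nb < L:  # boundary grains dissipate
                    h[na, nb] += 1

rng = np.random.default_rng(0)
h = np.zeros((64, 64), dtype=int)
sizes = [btw_step(h, rng) for _ in range(50_000)]  # drop a transient before fitting
```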
The value of the exponent τ characterizing the power-law segment x^{−τ} of the size-frequency relationship has been obtained numerically for various models; rigorous proofs have been obtained for some of them [9]. Changes in the details of the steady loading or the transport mechanism conserve the exponent τ ≈ 1.20, known for the BTW sandpile [18], for its deterministic isotropic modifications [19]. A turn to stochastic transport in isotropic sandpiles switches the exponent to τ ≈ 1.27 [19-21]. The nature of self-organized criticality is captured by the independence of the power-laws from model details and the existence of just a few exponents within a broad class of isotropic sandpiles on the square lattice. This imposing feature of the isotropic sandpiles nevertheless reduces the range of their direct applications to real systems, because the latter exhibit various power-law exponents. The purpose of this paper is an extension of the BTW mechanism that allows one to tune the power-law exponent and that belongs to a "narrow neighborhood" of the original BTW sandpile, thus compromising between a partial departure from self-organization and keeping the mechanism standing behind it. With applications in mind, we weaken the complete separation of the slow and quick time scales, understood as an idealization of the BTW mechanism, and combine events that are close in space and time into mega-events. Our design of the isotropic BTW mechanism on the square lattice will lead to a ∼1/x size-frequency relationship.

Model. At each time moment, grains are added to N cells of the lattice (the original model corresponds to N = 1), and the heights h_i of the loaded cells are increased. If none of them attains the threshold H, nothing more occurs at this time moment. If at least a single height attains the threshold H, the grain transport starts: unstable cells pass H grains equally to their neighbors. Formally, for any i with h_i = H, the update rules (2), (3) apply. As the number of the neighbors |N_i| is 4 for an inner cell i and less than 4 for a boundary cell, the grain transfer (2), (3) is conservative inside the lattice and dissipative at the boundary. Let us say that each unstable cell generates an avalanche. If n unstable cells {i_1, . . . , i_n}, n ≤ N, appear as a result of the grain adding at the time t, then n avalanches a_{i_1,t}, . . . , a_{i_n,t} occur at t. At the beginning, each avalanche a_{i_k,t}, k = 1, . . . , n, "propagates" to a single cell, namely, the origin i_k that generates the avalanche. The size s_{i_k,t} of each avalanche is set to 0 at this moment. The unstable cells i_1, . . . , i_n and their neighbors update the heights in line with (2), (3) simultaneously. The size s_{i_k,t} of the avalanche a_{i_k,t} is increased from 0 to 1. The updates can induce instability in other cells. New unstable cells are associated with just those avalanches that propagate to them. In other words, if an unstable cell j obtained a grain from a cell j′ associated with the avalanche a_{i_k,t}, then j is also associated with a_{i_k,t}. If two (or more) avalanches propagate to j (i.e., pass a grain to j) simultaneously, then the choice of the avalanche to be assigned to j is performed at random. Each update induced by the instability of a cell associated with the avalanche a_{i_k,t} results in the rise of its size s_{i_k,t} by 1, k = 1, . . . , n. The updates ruled by (2) and (3) occur while there are unstable cells. As soon as h_i < H for all cells i, the next time moment begins. Note that a cell can attain the threshold H several times within a single time moment. The correspondence to the avalanche is determined when the cell becomes unstable.
The result of the determination can differ from case to case.

Mega-avalanches and their size. We note that the above dynamics extends the original BTW model with N = 1 to the case of N > 1. The extension results in several avalanches spreading simultaneously. Resolving this ambiguity, we merge the avalanches that are close in space and time into mega-avalanches and focus on the probability distribution of the mega-avalanches. A mega-avalanche consists of a single avalanche if this avalanche is not merged with another avalanche. The proximity between the avalanches is established through the comparison of the Manhattan distance ρ (the sum of the absolute differences of the Cartesian coordinates) with an appropriate function φ of the avalanches' sizes. To formalize the rule, we introduce the indicator function 1{condition} that attains 1 if the condition holds and 0 otherwise. Let U ∼ Uni(0, 1) be a uniform [0, 1] random variable. Then inequality (4) underlies the merging of a_{i_1,t_1} and a_{i_2,t_2}, where p ∈ [0, 1], T ≥ 0, C′ > 0, and d > 0 are the parameters. We fix C′ = 0.025 and d = 0.33, taking them from a range of affordable values. The specific choice affects the other parameters that result in the scale-free distribution of the mega-avalanches. With T = 0 and p = 0, (4) becomes the purely deterministic spatial criterion (5). Therefore, the first term in (4) controls the deterministic merging of the avalanches, specifying a monotone increasing function of the sizes, φ(s_1, s_2) = C′L(s^d_{i_1,t_1} + s^d_{i_2,t_2}), that has to exceed the distance ρ between the avalanche origins in order to secure the coalescence. The switch to positive values of p admits the random merging of (possibly small) avalanches located anywhere, with intensity p. In general, the second term in (4) means that remote instabilities can occasionally cause one another. Positive integer values of T allow coalescing the avalanches observed at subsequent time moments. As we will see, a gradual increase in T from zero is required rather than a jump to 1. This leads us to fractional values T ∈ (0, 1) and to the probabilistic nature of the inequality |t_1 − t_2| ≤ T. This inequality is claimed to hold with certainty if t_1 = t_2 and with probability T if |t_1 − t_2| = 1. If the avalanches a_{i_1,t_1}, . . . , a_{i_k,t_k}, k ≥ 1, form the mega-avalanche a, then the size s = size(a) of a is the sum of the corresponding sizes: s = s_{i_1,t_1} + . . . + s_{i_k,t_k}. The origin of the mega-avalanche is the weighted average of the origins of the contributing avalanches, where the weights are proportional to the sizes.

Probability distribution of the mega-avalanches. Let f_L(s) be the empirical probability density of the mega-avalanche sizes on the L × L lattice, and let F_L(s) be obtained from f_L(s) by summation over exponentially growing bins. If f_L(s) follows a power-law 1/x^τ, so does F_L(s), but the exponent is τ − 1 instead of τ. Gathering the points of f_L(s) within the exponentially growing bins into F_L(s), we obtain the relevant pattern of the power-law segment up to the abrupt bend down in the log-log scale, Fig. 1. The scaling s → s/L² normalizes the right endpoint of the power-law segment, Fig. 1. All four graphs of Fig. 1 (the right part is omitted to highlight the power-law segment) follow an almost flat step corresponding to s^{1−τ}, τ ≈ 1, that turns into a quick decay at the right. The power-law segments collapse after the transformation of the axes: s → s/L², F_L → F_L log L (Fig. 1b).
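Before turning to the finite-size behaviour, here is a compact sketch (ours) of the merging rule just specified. The exact algebraic form of inequality (4) is not reproduced in this text, so the combination below, spatial-or-random merging within the probabilistic time window, is our paraphrase of its verbal description, with C′ = 0.025 and d = 0.33 as fixed above.

```python
# Decide whether two avalanches coalesce into one mega-avalanche.
# o1, o2: origins (lattice coordinates); s1, s2: sizes; t1, t2: time moments.
import numpy as np

def merge(o1, s1, t1, o2, s2, t2, L, C=0.025, d=0.33, T=0.05, p=0.19, rng=None):
    rng = rng or np.random.default_rng()
    dt = abs(t1 - t2)
    # time window: certain for t1 == t2, probability T for |t1 - t2| == 1
    if not (dt == 0 or (dt == 1 and rng.random() < T)):
        return False
    rho = abs(o1[0] - o2[0]) + abs(o1[1] - o2[1])   # Manhattan distance
    spatial = rho <= C * L * (s1**d + s2**d)        # deterministic criterion (5)
    return spatial or rng.random() < p              # random remote coalescence
```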
The fact that the transformation s → s/L² of the horizontal axis, normalizing the right endpoint of the power-law segment, does not allow the tails to collapse is inherited from the BTW sandpile (because of its multifractal scaling [22]). The logarithmic extra-loading N ∼ log L conserves the density of the grains at its critical level (graphs not shown). The deterministic merging of the avalanches into the mega-avalanches creates two power-law parts of F_L(s). The left part extends to the size of approximately 3000 for all values of L (as the blue curve does in Fig. 2), but the right endpoint of the second power-law part scales as L² and its slope becomes steeper as L increases (not illustrated here). The introduction of the time clustering with the parameter T > 0 makes the right power-law part flatter in the log-log scale (the orange curve in Fig. 2). The contraction of the gap between two consecutive values of T shown in Fig. 1 by a factor of approximately 1.5 suggests that T saturates at ≈ 0.05 as L → ∞. Interestingly, the changes in the exponent of the right power-law part preserve the existence of the power-law at the left but alter its slope. The return to the flat part of F_L(s) is achieved with the random merging, through the adjustment of the parameter p. The choice of p = 0.19 works for all graphs constructed with the different values of L. Thus, our merging is expected to lead to T ≈ 0.05, p ≈ 0.19, and N ∼ log L as L goes to infinity.

Discussion. We insist that our approach differs in principle from the two following simple constructions: the summation of independent power-law random variables, and the merging of avalanches which are adjacent in time in the original BTW model. The first construction leads to a probability density which is concave in the log-log scale, tending to the power function at the right part of the graph (the gray curve in Fig. 3). The second construction can be defined through the coalescence of the avalanches occurring during T subsequent time moments. The uncertainty with fractional values of T is resolved with a probabilistic rule (say, if T = 2.5 and a_t is not merged with a_{t−1}, then the avalanches a_t, a_{t+1}, and a_{t+2} are combined with certainty, whereas the avalanche a_{t+3} is added with the probability of 0.5). This modification of the BTW model preserves a power-law segment that does not extend to the right with the growth of the system. The power-law part of F_L(s) constructed for the different values of L collapses after the normalization of the size by the lattice area, Fig. 3. Our paper gives evidence that the 1/x power-law is feasible with isotropic extensions of the BTW sandpile (Fig. 1). The extension is constructed with the stress accumulation, proportional to log L, and the coalescence of the avalanches propagating closely in space and time. Such a coalescence is known, for example, in seismology, as the stress accumulation and the earthquakes themselves, occurring in the slow and quick time respectively, are not completely separated [23]. An additional stress accumulation without the construction of mega-avalanches does not lead to the 1/x power-law. While the BTW critical density is conserved, the size-frequency relationship of the avalanches (not to be confused with the mega-avalanches) follows the power-law segment found with the original BTW sandpile. An excessive loading ruins the critical state, destroying the power-law segment.
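The passage from the noisy density f_L(s) to the binned F_L(s) used in Figs. 1-3 is plain logarithmic binning; a minimal sketch (ours):

```python
# Sum the empirical probabilities over exponentially growing bins, which
# shifts the apparent power-law exponent from -tau to -tau + 1 (see Methods).
import numpy as np

def log_binned_distribution(sizes, n_bins=40):
    sizes = np.asarray(sizes, dtype=float)
    sizes = sizes[sizes > 0]
    edges = np.logspace(0, np.log10(sizes.max() + 1), n_bins + 1)
    counts, _ = np.histogram(sizes, bins=edges)
    centers = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centers
    return centers, counts / counts.sum()       # empirical F_L over the bins
```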
The construction of the mega-avalanches, performed in the paper with equation (4), can likely be designed in various ways. Nevertheless, when ignoring the spatio-temporal clustering, e.g., assigning p = 1 in (4) and merging all avalanches occurring at the same time moment, one also destroys criticality. We argue that the spatio-temporal correlations in the BTW-like models, exposed for the BTW sandpile in [24], underlie the possibility to end up with the 1/x power-law. Chen et al. [13] used this correlation, allowing grains to pass to remote distances with a certain probability P_c, and obtained power-law exponents that are located above 1.20 and controlled by P_c (the values of the exponents in the thermodynamic limit of their model are not clear, as they simulated the model on the 50 × 50 lattice). A full description of the possibilities which result in the 1/x power-law, and the choice of their "best" version, remain a daunting challenge. The minor deviations from the BTW model proposed here through the parameter domain preserve the critical density of the grains and the power-law size-frequency relationship for the mega-avalanches over the majority of feasible sizes (Fig. 2). The adjustment of the parameters pulls the exponent τ towards 1 (through a weak time clustering, parameter T) and corrects the slope of the restricted left part to fit the whole power-law segment (with the random coalescence in space, parameter p). Thus, our approach does not require any tuning of the dissipation-to-loading ratio, as in attempts to relate self-organized criticality to phase transition modeling [25], but controls the universality class of the sandpile and might lead to adjustable power-law exponents in a neighborhood of 1. Furthermore, the horizon of the avalanche grouping in time, given by T, acts as the level of noise in the system that (see the arguments of [26]) increases the power-law exponent. Eventually, a BTW-like sandpile with control of the power-law exponent would improve our understanding of real-life SOC phenomena.

Methods. We have sampled the data for the empirical functions f_L(s) and F_L(s) over 5 · 10^5 subsequent time moments for all lattices. Sampling is performed after a transient period, to let the system reach the steady state and eliminate the dependence on the initial conditions. The summation of the probability density f_L(s) over exponentially increasing bins increases the exponent of the power-law from −τ to −τ + 1, as the sum follows the corresponding integral of f_L for large s. In contrast to F_L(s), the graph of f_L(s) is too noisy at the right to illustrate the full power-law segment (Fig. 4 exhibits f_L(s) found with L = 1024 and L = 8192). The logarithmic correction of the vertical axis required for the collapse of the power-laws in Fig. 1b is caused by the proximity of the probability density to a 1/s segment, the power-law scaling of the right endpoint s* of this segment, and a fast decay of F_L to the right of s*. The integration of the density f_L(s) = C_L/s up to the right endpoint s* then results in the estimate C_L · c log L ≈ 1, which implies C_L ∼ 1/log L.
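The closing normalization estimate can be checked symbolically; a one-line sketch (ours), integrating the density C_L/s up to the right endpoint s* ∼ L²:

```python
# If f_L(s) = C/s on [1, L^2], normalization forces C * log(L^2) = 2*C*log(L) = 1.
import sympy as sp

C, s, L = sp.symbols('C s L', positive=True)
norm = sp.integrate(C / s, (s, 1, L**2))   # -> C*log(L**2) = 2*C*log(L)
print(sp.solve(sp.Eq(norm, 1), C))         # -> [1/log(L**2)], i.e. C_L ~ 1/log(L)
```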
Capitalization and profitability: applicability of capital theories in BRICS banking sector

The interrelationship between capitalization and profitability in the banking sector of the BRICS countries is studied with reference to five existing capital theories with the help of the ARDL and VECM/VAR models. These models are applied in the panel and individual settings to BRICS banking sector data from 2000 to 2020 to examine the presence of the capital theories in the BRICS banking sectors. The study's long-run empirical findings support the signalling and the bankruptcy cost hypotheses for the BRICS panel, Brazil, Russia, and India. Capitalization appears to have a detrimental effect on profitability in China and South Africa, upholding the agency argument. Profitability appears to have a considerable positive long-run influence on capitalization, which is consistent with Myers and Majluf's (J Financ Econ 13:187-221, 1984) pecking order model for the BRICS panel and Brazil. Profitability has a detrimental influence on capitalization in India and South Africa, corroborating the Modigliani and Miller (Am Econ Rev 48:261-297, 1958) and Miller (J Financ 32:1151-1168, 1977) notion. Although lower significance is observed in most circumstances, the results of the short-run estimation are comparable to those of the long-run estimation. Both short-run and long-run evaluations of the capital-profitability link help in designing "macroprudential" policies, which demonstrates the significance of our research.

Introduction

Capitalization decisions are important to the success of modern institutions. Banks are expected to follow rigorous international and national standards in this connection. The aim of bank capital requirements is to ensure the stability and solvency of the banking system in any country. By implementing several Basel Accords, regulators change capital requirements according to economic situations and adjust them from time to time [20]. Capital adequacy defends against negative shocks and enhances the possibility of better earnings and profitability [3, 16, 35]. The capitalization-profitability nexus can be examined under the following hypotheses: the signalling hypothesis, the bankruptcy cost hypothesis, the agency hypothesis, the pecking order hypothesis, and the Modigliani and Miller hypotheses, together with the general theory of the cost of capital and capital structure (the Brusov-Filatova-Orekhova (BFO) theory) [9, 10, 11]. According to the signalling theory, increasing the capital of a bank conveys to the market favourable information about the bank's prospects and profitability, which eventually increases the bank's business and leads to better profitability [13, 14, 15]. A well-capitalized bank, according to the bankruptcy cost hypothesis, does not rely on borrowing and has low credit and bankruptcy costs. This protects the bank from bankruptcy while simultaneously increasing profitability. Some researchers, however, supported the agency theory and claimed that a negative association exists between capitalization and profitability. They argued that equity is a costly source of funding due to high agency costs and the higher returns required by shareholders, which affect profitability [6, 19]. According to the agency theory, a greater capital ratio raises the agency cost, which limits managers' capacity to put more effort into creating shareholder value, resulting in poorer bank profitability.
Some researchers endorse the pecking order theory, including Annor, Obeng, and Nti [4], Mili, Sahut, Trimeche, and Teulon [30], Abusharba, Triyuwono, Ismail, and Rahman [1], Konishi and Yasuda [26], Saunders and Wilson [39], and Keeley [24]. They argued that a profitable corporation could easily keep regulatory capital as needed. Internal funds, according to the pecking order theory, are the least information-intensive source of funding; hence, a more prosperous corporation may retain earnings to finance known investment prospects, resulting in better capital ratios. Berger and Patti [7] and Williams [43] investigated hypotheses of reverse causation from profitability to capital and supported the Modigliani-Miller theory. According to their findings, profitable banks prefer less equity capital and more leverage, because increased efficiency reduces the cost of insolvency and financial turmoil (a substitution effect). The Modigliani-Miller model assumes that, in the presence of tax, a corporation can opt for higher debt financing because it will reduce the overall cost of capital due to tax advantages. But increased use of debt increases the risk of insolvency in the business. However, if a bank is constantly earning profit, it can opt for more debt and lower capital. Modigliani and Miller proposed that more prosperous corporations may opt to keep lower capital ratios, so that a negative relationship exists [31, 32]. Modigliani and Miller's proposition is supported by various research works undertaken in numerous industrialized and emerging nations [2, 8, 29]. The Brusov-Filatova-Orekhova (BFO) theory (the general theory of the cost of capital and capital structure) characterizes enterprises of any age. According to the BFO theory, the assumption of corporate perpetuity in Modigliani and Miller's proposition leads to an underestimation of the weighted average cost of capital, the cost of equity, and firm capitalization. The Modigliani-Miller theory was expanded by the BFO theory, which developed a quantitative theory for evaluating essential parts of a company's financial activities over a short period of time. The BFO theory allows derived conclusions to be applied in the real economy to firms with limited lifetimes, introduces a time component into the theory, and permits the assessment of the conditions of companies with arbitrary lifetimes (or arbitrary age). We did not examine the BFO theory in this study, since banks are always focused on the long term and are not supposed to have an arbitrary life. The interrelationship of capitalization and profitability is a contentious issue, and the available literature presents contradictory findings across many industries and situations, necessitating more research in this field. This study's contribution and novelty can be seen in numerous areas. This study investigated five main capitalization and profitability hypotheses (the signalling hypothesis, the bankruptcy cost hypothesis, the agency hypothesis, the pecking order hypothesis, and the hypothesis of Modigliani and Miller) that have not yet been jointly tested empirically in the existing literature, contributing to the study's distinctiveness. This study investigated the interrelationship of capitalization and profitability across the BRICS states, where no earlier study has been conducted. The banking industry has aided the exceptional financial development of several emerging nations, notably the BRICS countries, which have seen significant economic upheavals in recent decades.
To maintain a well-capitalized position, most countries, including the BRICS, require banks to hold the needed minimum capital. In terms of methodology, this study utilises two alternative capitalization measurements and two profitability measurements to provide precise findings on the banks' capitalization-profitability nexus. We looked at two capital indicators: bank capital to total assets (CR) and bank regulatory capital to risk-weighted assets (CAR). We also used two profitability indicators to assess a bank's profitability: return on equity (ROE) and return on assets (ROA). We investigate the interrelationship between capitalization and profitability in the BRICS nations from 2000 to 2020, utilizing yearly data for the BRICS countries in the panel and individual settings. This study contributes to the existing range of evidence on the capitalization-profitability nexus by utilizing a variety of ideas, samples, procedures, time periods, and conditions. This study's empirical findings are drawn on the basis of the widely accepted approach of the VECM/VAR Granger causality test and ARDL estimation, which delivers consistent and robust results. We anticipate that the results of our research will assist policymakers in making capitalization and profitability choices. The long-run empirical findings of the study corroborate the signalling and the bankruptcy cost hypotheses for the BRICS panel, Brazil, Russia, and India, implying a favourable influence of capitalization on profitability. Capitalization appears to have a significant adverse influence on profitability in China and South Africa, lending credence to the agency hypothesis, which claims that capitalization has a detrimental effect on profit. Profitability appears to have a significant positive long-run influence on capitalization, agreeing with the pecking order argument of Myers and Majluf [34] for the BRICS panel and Brazil, that increased profitability may support higher capital ratios since earnings are a source of capital. Profitability has a detrimental influence on capitalization in India and South Africa, supporting the conclusions of Modigliani and Miller [32] and Miller (1977). In Russia and China, profitability has no bearing on capitalization. Although the significance is smaller in most situations, the short-run estimation results are comparable to the long-run ones. We may also utilize our findings to make policy recommendations. The findings are relevant for BRICS bank regulators who are attempting to adjust capital requirements, and help them in designing "macroprudential" regulations, because our findings uphold that banks may enhance their profitability by increasing their capital ratios, and vice versa.

Literature review

Many nations have implemented the Basel capital requirements, recognizing the necessity of capital adequacy. However, some researchers are still conflicted on whether capitalization adds to banks' financial well-being. The signalling and the bankruptcy cost hypotheses were proposed by Berger [6] as major explanations for capitalization's positive influence on bank profitability. According to Berger [6], increased equity in a bank communicates favourable information about the firm's prospects and profitability to the market. According to the bankruptcy cost theory, a bank with a high capital ratio does not rely on borrowed funds, which leads to lower bankruptcy costs and ultimately boosts profitability.
According to Dietrich and Wanzenried [12], banks with adequate capital ratios are profitable and stable during market crises and rely less on borrowed funds. Almaqtari et al. [3] showed that banks may survive the negative impacts of increased non-performing loans caused by imprudent lending during inflationary times by strengthening their equity. Furthermore, they emphasised that a large quantity of regulatory capital suggests trustworthiness, which lowers borrowing costs. Belaid et al. [5] provided evidence that increasing the regulatory capital ratio lowers the chances of loan defaults. Pasiouras and Kosmidou [35] and Goddard et al. [16] identified a beneficial influence of capitalization on profitability in European banks. In addition, Berger [6] confirmed previous evidence of a positive influence of bank capitalization on profitability in the USA. García-Herrero et al. [19] argue that banks in developing markets should hold more equity because it protects depositors in adverse macroeconomic scenarios by offering greater resilience to financial crises. The capital ratio, according to Zarrouk et al. [44], has a beneficial effect on the profits of 51 lending corporations in the MENA area. In contrast, according to Jensen and Meckling [22], agency theory holds that a greater capital ratio raises agency costs and diminishes profit. A high capital ratio may also make banks more conservative, causing them to pass up profitable opportunities [16]. According to Martins et al. [28], a high capital ratio negatively affected the profits of 108 banks in the United Kingdom, Germany, and the USA. Tan and Floros [41] showed an association between a greater capital ratio and worse profitability in 101 Chinese banks. Increased capitalization in China's banking system, according to the authors, accompanies decreasing profit margins. The studies on this topic are extensive, and several have found an inverse relation between banks' capital and performance worldwide (see, for example, [12,16]). Another group of researchers has examined the effect of profit on capitalization. According to the pecking order theory, internal funds are the least information-intensive source of funding; hence, a more prosperous corporation may retain earnings to finance known investment prospects, resulting in better capital ratios [34]. Annor, Obeng, and Nti [4] investigated the drivers of capital decisions in a sample of Ghanaian commercial banks and discovered that ROA is favourably related to the capital ratio. Raising ROA enhances capital sufficiency while also allowing for the pursuit of riskier but more profitable activities. Banks are fully aware that raising their risk level increases the possibility of failure but yields higher returns; hence, banks strive to increase their capital base so that they may take on greater risks [38,40]. When studying the determinants of the capital mix in the Indonesian Islamic banking sector, Abusharba, Triyuwono, Ismail, and Rahman [1] discovered that profitability has a positive link with capital. This showed that as earnings increase, Islamic banks may have a higher motivation to protect their owners' money. Berger and Patti [7] and Williams [43] investigated hypotheses of reverse causation from profitability to capital. According to their findings, profitable banks prefer less equity capital because increased efficiency reduces the cost of insolvency and financial turmoil (a substitution effect).
Gropp and Heider [18] explore the determinants of leverage for major US and UK banks from 1991 to 2004. They include ROA and ROE interacted with an indicator equal to 1 if the bank is close to its regulatory capital requirement. A more affluent bank may decide to keep a smaller precautionary buffer, knowing that it can rely on its reserves to reach the necessary levels in the future. The BRICS banking industry has contributed to these countries' remarkable financial development and has experienced substantial changes in banking laws, such as capital requirements, liquidity requirements, licensing standards, foreign bank presence restrictions, and solvency considerations. Even so, no study has thoroughly examined capitalization and profitability in the BRICS. Khan, Akhtar, and Akram [25] discovered that banks in the BRICS faced more constraints than banks in G7 countries in terms of licensing, capital sufficiency, admission of international banks, and supervision of banking operations. Mugova [33] used a GMM model to examine the influence of financial development on the growth of BRICS listed enterprises and discovered that financial development improves access to external funding and allows firms to alter their capital structure. Using panel data from 2007 to 2014, Hossain, Rahman, and Sadique [20] investigated the influence of Basel III on the Z-score of banks in BRICS economies. The findings revealed that increased capital adequacy and leverage were linked to increased BRICS bank resilience. Using GMM estimates, Jabra and Mighri [21] investigated the link between bank capital, risk, and profitability in the BRICS banking industry. The findings revealed that capital had a positive influence on profit but a negative one on risk. However, these studies did not focus on the particular BRICS nations and did not investigate capitalization and profitability in the context of theories. Our analysis differs from others in that we focused on individual nations as well as the overall BRICS panel.

Econometric modelling and data description

Because capitalization and profitability are inextricably linked, our model explored the interrelationship between capitalization and profitability for the BRICS nations as a whole and for each BRICS country. General equations are used to empirically evaluate the long- and short-run interaction between capitalization and profitability in the panel and individual-country settings; a generic reconstruction of Eqs. 1a-2b is given below. The relationship in Eq. 1a for the BRICS panel and Eq. 1b for individual countries might be positive or negative. Equations 1a and 1b with a positive regression coefficient reflect the signalling and bankruptcy cost hypotheses, since higher capital gives a positive signal about the position of banks and decreases bankruptcy costs [6]. Equations 1a and 1b with a negative regression coefficient indicate the agency theory, because a greater capital ratio raises agency costs, reducing profitability. Profitability may have a bearing on capitalization in either a favourable or an unfavourable way. As a result, the BRICS panel's sign in Eq. 2a and the individual states' sign in Eq. 2b may be positive or negative. The pecking order theory is represented by Eqs. 2a and 2b with a positive regression coefficient. Equations 2a and 2b with a negative coefficient indicate the Modigliani and Miller hypothesis, which proposed that more profitable banks may want to maintain lower capital ratios.
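The displayed equations did not survive text extraction. A plausible generic reconstruction, based only on the surrounding description (profitability regressed on capitalization in Eqs. 1a/1b, and the reverse in Eqs. 2a/2b, with country index i and time index t), is the following; the exact ARDL specification used by the authors may differ:

```latex
% Hypothetical reconstruction of the lost display equations (assumed form).
\begin{align}
\text{PROF}_{it} &= \alpha_i + \beta\,\text{CAP}_{it} + \varepsilon_{it}, \tag{1a}\\
\text{PROF}_{t}  &= \alpha + \beta\,\text{CAP}_{t} + \varepsilon_{t},    \tag{1b}\\
\text{CAP}_{it}  &= \gamma_i + \delta\,\text{PROF}_{it} + u_{it},        \tag{2a}\\
\text{CAP}_{t}   &= \gamma + \delta\,\text{PROF}_{t} + u_{t},            \tag{2b}
\end{align}
```

where PROF is a profitability measure (ROA or ROE), CAP is a capitalization measure (CAR or CR), and the signs of the coefficients β and δ discriminate between the competing hypotheses as described above.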
Two variables are used to quantify capitalization: the capital adequacy ratio (CAR) and the capital ratio (CR). CAR is the percentage of total regulatory capital in proportion to risk-weighted assets [42]. It is a regulatory measure based on the Basel principles, aimed at monitoring and improving the equity standing of banking organizations. CR is the percentage of bank equity and reserves to total assets. Equity and reserves comprise all owner contributions, undistributed profits, all kinds of reserves, contingencies, and value revisions. Assets encompass all balance-sheet assets. The bank's return on assets (ROA) and return on equity (ROE) are taken as profitability measures, both of which are widely used to assess the profitability of banks. ROA is the proportion of after-tax net income to total assets of a commercial bank, whereas ROE is the proportion of a commercial bank's after-tax net income to equity, measured yearly. The study empirically examined existing capital theories against the backdrop of the BRICS banking sector in panel and individual settings. The analysis utilizes annual data on the four capitalization and profitability variables discussed above from 2000 to 2020. The data are extracted from the global financial development indicators provided by the World Bank.

Methodology and estimation procedure

Unit root test

The stationarity of the profitability and capitalization measures in the BRICS is assessed using the unit root test devised for panel data by Levin, Lin, and Chu (LLC). The null hypothesis states that every series has a unit root, i.e., the series are not stationary, as opposed to the alternative hypothesis, which states that no series has a unit root, i.e., the series are stationary. The test statistic is asymptotically distributed as standard normal [27]. The Augmented Dickey-Fuller (ADF) test is used to assess the stationarity of the variables in the individual BRICS nations, using the null hypothesis of a non-stationary series against the alternative hypothesis of a stationary series.

ARDL cointegration test

The next stage is to test for cointegration, after verifying that the series in our panel and member nations are integrated of mixed order, i.e., stationary at different levels. For that purpose, we use the ARDL bounds technique of Pesaran et al. [36] to investigate the long-term interaction effect. The null hypothesis of ARDL bounds testing is that cointegration between the variables does not exist, while the alternative hypothesis is that cointegration between the variables exists. If the F statistic of the ARDL bounds test exceeds the upper critical value, the null hypothesis may be rejected; if it falls below the lower critical value, the null hypothesis cannot be rejected; if it lies between the two critical values, the test is inconclusive. Following the validation of cointegration, the conditional ARDL long-run model for capitalization and profitability is calculated in the second step. This entails utilizing the Schwarz information criterion (SIC) to determine the lag order of the ARDL models. In the third and last stage, the error correction model (ECM) is estimated using the long-run estimates to derive the short-run dynamic parameters. The method is suitable for three reasons. First, unlike other cointegration techniques such as Johansen [23], the bounds test is simple.
The Johansen [23] technique necessitates that all variables are integrated of the same order (I(1)), i.e., stationary at the same level, or else its predictive validity is compromised. The ARDL technique succeeds whether the model's regressors are I(0) or I(1); however, the procedure fails for I(2) series. Second, the ARDL test is substantially more efficient for small samples and datasets, such as those utilized in our study. Third, the ARDL model yields both short-run and long-run estimates.

Panel ARDL model

The panel ARDL pooled mean group (PMG) estimator is used to identify the long- and short-term interactions between capitalization and profitability. Traditional estimating approaches do not allow for the examination of variable adjustments to both short- and long-term equilibrium conditions. The panel ARDL PMG estimator is required for limiting heterogeneity in the variable interactions while integrating the influence of the independent variables [37]. The three most often used estimating methods of panel ARDL are the pooled mean group (PMG), the mean group (MG), and the dynamic fixed effects (DFE). We used the Hausman test, which allows a choice between the MG and the PMG on one side and the PMG and the DFE on the other (results of the Hausman test are available on request). The Hausman test shows that PMG is the more consistent and efficient estimator for our analysis.

ARDL diagnostic tests

The robustness of the ARDL findings is ensured through diagnostic and stability testing. The Breusch-Godfrey serial correlation LM test, the Breusch-Pagan-Godfrey heteroskedasticity test (or the White test), and the Jarque-Bera normality test are some of the techniques employed in this context. In addition, the Ramsey RESET test is used to assess the model's functional form and stability. Table 4 summarises the diagnostic statistics of the ARDL model. These statistics demonstrate the absence of serial correlation and heteroskedasticity in our model. The Ramsey RESET and Jarque-Bera statistics were used to test the stability and normality of the derived model, and they demonstrate that the model is stable and the data are normally distributed.

VECM/VAR Granger causality

The direction of causation after the cointegration test is determined using Granger causality analysis. Once the cointegration test indicates a long-run association, Granger-type causality may be verified by adding a single-period lagged error correction term to the model [17]; hence, the vector error correction model (VECM) is appropriate. If no cointegration between the variables is observed, a vector autoregression (VAR) is appropriate. We employed VAR/VECM Granger causality in both panel and individual settings because in some cases cointegration exists, while in others it does not (see Tables 2 and 3).

Empirical results and discussion

The stationarity of the series is studied before proceeding with the ARDL and the VECM/VAR Granger causality tests in the panel and individual contexts. The ADF (Augmented Dickey-Fuller) and LLC (Levin, Lin, and Chu) tests are used to evaluate the order of integration, i.e., the stationarity characteristics, of the series. Table 1 presents the outcomes of the ADF estimations for the individual countries as well as the LLC models for the panel of BRICS countries. The test statistics indicated that the series in our panel and individual countries are integrated of mixed order, i.e., stationary at different levels; hence, the next stage is to check for cointegration using ARDL models.
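Before turning to the results, the testing sequence described above (unit root tests, information-criterion-based lag selection, and Granger causality) can be made concrete with the following Python sketch using statsmodels. The file name, column names, and lag choices are hypothetical; this is a minimal illustration of the procedure, not the authors' estimation code.

```python
# Minimal sketch of the unit root / lag selection / Granger causality sequence
# described above. The data layout (file and column names) is hypothetical.
import pandas as pd
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

df = pd.read_csv("brics_bank_ratios.csv", index_col="year")  # assumed annual data, 2000-2020

# Step 1: ADF unit root tests on levels and first differences (H0: unit root).
for col in ["ROA", "CAR"]:
    for name, series in [("level", df[col]), ("diff", df[col].diff())]:
        stat, pvalue, *_ = adfuller(series.dropna(), autolag="AIC")
        print(f"{col} {name}: ADF={stat:.3f}, p={pvalue:.3f}")

# Step 2 (statsmodels >= 0.13): information-criterion-based ARDL lag selection.
from statsmodels.tsa.ardl import ardl_select_order
sel = ardl_select_order(df["ROA"], maxlag=2, exog=df[["CAR"]], maxorder=2, ic="bic")
print(sel.model.fit().summary())

# Step 3: Granger causality in both directions. The convention of the function
# is to test whether the second column Granger-causes the first.
grangercausalitytests(df[["ROA", "CAR"]].dropna(), maxlag=2)  # CAR -> ROA?
grangercausalitytests(df[["CAR", "ROA"]].dropna(), maxlag=2)  # ROA -> CAR?
```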
The ARDL bounds results based on the F statistic, as given in Table 2, present significant evidence of cointegration between the variables for models 1-6 in Brazil, models 5-8 in Russia, all models in India, models 6-8 in China, and all models in South Africa except models 2, 4, and 8. After the bounds test indicated long-run cointegration for the individual countries, we constructed Tables 3 (ARDL model) and 5 (VECM/VAR Granger causality model) to represent the variables' short-run and long-run interactions. Table 3 shows that CAR and CR have a considerable positive effect on profitability for the BRICS, Brazil, Russia, and India, corroborating the signalling and bankruptcy cost theories, which presume a beneficial impact of capitalization on profit. This means that when capitalization rises, bank profitability rises as well. This might be because a bank's capital adequacy provides the market with a positive signal about the bank's prospects and profitability, and because a bank with adequate capital relies less on borrowed funds, which reduces bankruptcy costs. CAR and CR have a considerable negative influence on profitability in China and South Africa, which validates the agency theory, which claims that capitalization has a detrimental effect on profit. Banking institutions with a greater capital ratio incur higher agency costs and operate more cautiously, perhaps missing out on growth opportunities. When analysing the influence of profitability on capitalization, the results show that ROE and ROA have a considerable and favourable impact on CAR and CR in the BRICS and Brazil in all models from 5 to 8, as presented in Table 3. This supports the pecking order theory, which assumes that increased profitability may lead to better capital ratios since earnings are a funding source. Both profitability indicators have a detrimental influence on capitalization in India and South Africa across all models from 5 to 8 in Table 3, confirming the applicability of the Modigliani and Miller theory in these countries. However, the impact of ROE on capitalization in South Africa is not statistically significant. Profitability does not have any influence on capitalization in Russia and China. Table 3 also includes the findings of the short-run estimation. The signs of the long-run coefficients persist in the short run. As a consequence, the short-run estimation within the ARDL framework also corroborated the positive influence of profitability on capitalization in the BRICS, Brazil, and Russia, as shown in models 5-8 of Table 3. Profitability negatively affected capitalization in India and South Africa. Capitalization (CAR and CR) has no statistically significant association with profitability (ROA and ROE) in the short run across all models in Table 3; however, the positive value of the regression coefficient suggests that capitalization has a favourable short-term impact on profitability. Diagnostic and stability estimations are used to confirm the robustness of the ARDL results. Table 4 summarises the diagnostic test findings for the ARDL model. These findings revealed the absence of serial correlation and heteroskedasticity in our estimated model. The Ramsey RESET test and Jarque-Bera test statistics were used to assess the results' stability and normality. The testing revealed that the estimated model is stable and that the data are normally distributed. The reliability and validity of the ARDL estimations were confirmed by all of the diagnostic tests.
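As an illustration, the diagnostic battery reported in Table 4 maps onto standard statsmodels routines, as sketched below. The OLS fit is only a stand-in for the estimated ARDL equation, and the file and column names remain hypothetical; this mirrors, rather than reproduces, the authors' Table 4.

```python
# Sketch of the diagnostic tests named above (Breusch-Godfrey, Breusch-Pagan,
# Jarque-Bera, Ramsey RESET), applied to a simple OLS stand-in for the ARDL model.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey, het_breuschpagan, linear_reset
from statsmodels.stats.stattools import jarque_bera

df = pd.read_csv("brics_bank_ratios.csv", index_col="year").dropna()  # assumed data
res = sm.OLS(df["ROA"], sm.add_constant(df[["CAR"]])).fit()

lm_stat, lm_p, _, _ = acorr_breusch_godfrey(res, nlags=2)          # serial correlation
bp_stat, bp_p, _, _ = het_breuschpagan(res.resid, res.model.exog)  # heteroskedasticity
jb_stat, jb_p, skew, kurt = jarque_bera(res.resid)                 # normality
reset = linear_reset(res, power=2, use_f=True)                     # functional form

print(f"BG p={lm_p:.3f}, BP p={bp_p:.3f}, JB p={jb_p:.3f}, RESET p={reset.pvalue:.3f}")
```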
VECM/VAR Granger causality is employed to assess the causal association between the capitalization and profitability variables. Table 5 shows the causal connection between CAR and ROA, CAR and ROE, CR and ROA, and CR and ROE using the VECM and VAR models. The findings in Table 5 (VECM/VAR) are in line with the ARDL results presented in Table 3. In many cases, there is evidence of a long-run Granger causal connection between the variables, since a negative lagged error correction coefficient is found. The long-run estimation results within the VECM framework indicated the existence of a bidirectional causal connection between profitability and capitalization for the BRICS in the panel estimation and for Brazil, India, and South Africa in the individual estimations. For Russia and China, a unidirectional association is observed from capitalization to profitability. In the short term, we observe that unidirectional causality runs from profitability to capitalization.

Conclusions and policy implications

To sustain the banking sector's solvency, banking institutions in the BRICS must maintain adequate capital. However, a prosperous entity can readily maintain regulatory capital as needed. Several studies have been undertaken to explore the effect of capitalization on profitability and vice versa. There are several hypotheses on the interrelationship between capitalization and profitability, namely the signalling, bankruptcy cost, agency, and pecking order hypotheses, and the hypothesis of Modigliani and Miller. This research intends to add to the current literature by investigating the capitalization-profitability nexus in the banking sector of the five emerging BRICS countries in both panel and individual settings from 2000 to 2020. The ARDL and Granger causality tests are used to study the interrelationship between capitalization and profitability in the five BRICS nations. The long-term empirical findings of the study validate the signalling and bankruptcy cost hypotheses for the BRICS, Brazil, Russia, and India, all of which imply a favourable impact of capitalization on profitability. Capitalization has a considerable negative effect on profitability in China and South Africa, which lends credence to the agency hypothesis, which argues that capitalization has a major negative impact on profitability. Profitability positively influences capitalization in the long run for the BRICS and Brazil, supporting the pecking order concept that increased profitability may support higher capital levels, since earnings are a funding source. Profitability has a detrimental influence on capitalization in India and South Africa, validating the premise of Modigliani and Miller. In Russia and China, profitability has no bearing on capitalization. The short-run estimation findings are in line with the long-run results; however, the significance is lower in most cases. We also utilize our findings to make policy recommendations. First, our findings are beneficial for BRICS bank regulators in deciding capital adequacy norms. The short- and long-run implications of capital for profitability are crucial for the formulation of so-called "macroprudential" strategies. Regulators should keep monitoring all banks' minimum capital requirements to enhance strength and viability, should enforce strict compliance for every bank, and should not let banks deviate from maintaining the minimum capital.
Our findings suggest that banks can boost their profitability by raising their capital ratios. Second, the study found that higher capitalization can impair the banking sector's profitability in some circumstances. Hence, before imposing any stated regulatory capitalization criteria, authorities should consider that capital amounts above a particular level might impair the banking industry's profitability. Third, this study also showed that banks with higher profitability can easily maintain adequate capital. Therefore, regulators should consider bank profitability before imposing any statutory capitalization ratios. Banks with higher profits can retain earnings to finance their investment opportunities rather than holding capital ratios beyond the required level. This study also has several limitations. First, owing to a lack of essential data beyond 2020, the research period runs from 2000 to 2020. Second, our study emphasises the banking system of the BRICS countries, but future research might provide similar evidence from other countries as well. Third, this study has not examined the general theory of the cost of capital and capital structure (the BFO theory).
Experimental Analysis of Continuous Beams Made of Self-Compacting Concrete (SCC) Strengthened with Fiber Reinforced Polymer (FRP) Materials

Strengthening of concrete structures is applied as a solution for various deterioration problems in civil engineering practice. This also applies to structures made of self-compacting concrete (SCC), which is increasingly in use, but there is a lack of research in this field. This paper presents an experimental analysis of the flexural behavior of reinforced concrete (RC) continuous beams made of SCC, strengthened with fiber reinforced polymer (FRP) materials (glass (GFRP) and carbon (CFRP) bars, CFRP laminates), by the use of near surface mounted (NSM) and externally bonded (EB) methods. Six two-span continuous beams of a total length of 3200 mm, with a span between supports of 1500 mm and a 120/200 mm cross section, were subjected to short-term load and tested. The displacements of the beams and the strains in the concrete, steel reinforcement, FRP bars and tapes were recorded until failure under a monotonically increasing load. The ultimate load capacities of the strengthened beams were enhanced by 22% to 82% compared to the unstrengthened control beam. The ductility of the beams strengthened with GFRP bars was satisfactory, while the ductility of the beams strengthened with CFRP bars and tapes was very small, so the failure modes of these beams were brittle.

Introduction

Strengthening of civil engineering infrastructure has gained significant attention due to deterioration problems of structures and the need to meet up-to-date design requirements [1]. One of the basic factors causing the unsatisfactory condition of the existing infrastructure is corrosion of the reinforcing steel in concrete, which damages the concrete, reduces the reinforcing steel section and, in some cases, causes failure of the construction [2]. Taking into consideration the existing concrete infrastructure both in Europe and worldwide, there is great interest in research in the field of strengthening of concrete structures. In addition, the most common reasons for strengthening existing structures are damage to structures due to earthquakes, changes in the purpose of structures and the introduction of additional loads. Strengthening of RC structures can be achieved in several ways: by reducing static influences (by changing the static system or by subsequent external prestressing), increasing the load-bearing capacity of the cross section, changing the state of stress, etc. Increasing the load-bearing capacity of the cross-section is the most common type of strengthening of RC structures and can be achieved by increasing the dimensions of the concrete cross-section, adding steel reinforcement or adding reinforcement made of composite materials, such as FRP materials. As their name suggests, FRP materials are made up of high-strength fibers embedded in a polymer matrix. Depending on whether aramid, carbon or glass fibers are used for reinforcing the composite material, there are AFRP (Aramid Fiber Reinforced Polymer), CFRP (Carbon Fiber Reinforced Polymer) and GFRP (Glass Fiber Reinforced Polymer) materials [3]. Although there is considerable research on the strengthening of reinforced concrete beams, the majority of papers discuss the behavior of simply supported beams strengthened with FRP laminates. In the experimental research presented in the literature, the behavior of RC continuous beams made of conventional concrete strengthened with FRP reinforcement was compared to an unstrengthened control beam.
The conclusions of such studies, some of which are listed below, indicate both increases in the bearing capacity and reductions in the deformations of strengthened beams. Existing studies on the strengthening of continuous RC beams indicate that strengthening the negative moment zone alone has a significant effect on increasing the bearing capacity of such beams [4]. According to Akbarzadeh et al. [5], although many in situ RC beams are continuous constructions, there has been very limited research on the behavior of such beams with externally applied FRP laminates. For example, research related to the behavior of continuous RC beams strengthened by CFRP (strips, sheets or laminates) is presented in [6][7][8][9], while [10] deals with strengthening using glass FRP strips by the EB method. In [8] and [9], continuous RC T-beams are investigated. In addition to the papers that deal with experimental research on strengthened beam girders, there is a significant number of papers in which methods for modeling strengthened RC beams using FEM analysis are proposed. Numerical analysis using Abaqus software of RC beams strengthened with hybrid FRP sheets is presented in [11,12]. The hybrid FRP sheets were prestressed and externally bonded to the concrete surface. While [11] deals with the flexural behavior of the beams under the action of live and dead load, Ref. [12] deals with the fatigue properties of the examined beams. Good agreement of the numerically obtained results with the experimental data was observed in both papers. Zhang et al. [13] developed a viscoelastic solution for the interface stress distribution in a strengthened RC beam; the solution was validated by FEM analysis. The two basic methods most commonly used in strengthening RC beams with FRP material are: strengthening by gluing laminates of FRP material onto the surface of concrete beams (the EB method), and strengthening by mounting bars or narrow strips of FRP material in grooves made in the concrete cover (the NSM method). While most available research suggests that the NSM method, in particular the NSM method with strips, is able to take advantage of the higher strength of an FRP material compared to the EB method, several comparative studies have indicated that NSM is an uneconomical solution [14][15][16]. The main disadvantage of the NSM method compared to the EB method is the price. The increase in price is due to the larger volume of labor required to cut the grooves in which the reinforcement is mounted, as well as the larger amount of epoxy adhesive necessary for filling the grooves compared to the EB method. Since the use of self-compacting concrete started (Japan, 1980s), its implementation has been increasing, and development and research have intensified in the last several years. The term self-compacting concrete (SCC) refers to a high-performance concrete which does not require additional vibrating during placement. It is capable of fully filling the formwork exclusively under the action of gravity, even in the presence of a considerable amount of reinforcement, while simultaneously retaining its consistency without any segregation [17][18][19][20][21][22][23]. When it comes to the application of SCC for structural elements, almost all research conducted so far relates to simply supported beams, except for a few studies on two-span continuous beams. In [24], the beams were examined both for bending and shear.
Their behavior under short-term static loading corresponds to the behavior of beams made of conventional vibrated concrete (VC), and the conditions defined by the regulations for VC structures are met as well. An experimental study of the flexural behavior of two-span continuous beams made of SCC is presented in [25], where the varied parameter is the percentage of tensile reinforcement. Abaqus/Standard software is used in that study to develop an adequate nonlinear numerical model. The main objective of this paper is to analyze the flexural performance of RC continuous beams made of SCC strengthened with FRP materials, as there is a lack of such research. Experimental tests of six two-span continuous beams of a total length of 3200 mm, with a span between supports of 1500 mm and a 120/200 mm cross section, made of SCC, were performed. The influence of the type of FRP material, the position of the reinforcement and the method of strengthening on the ultimate load capacity, as well as on the failure mode, of continuous beams made of SCC was analyzed. Since most research is focused on the behavior of beams made of conventional RC strengthened with FRP reinforcement, this research gives an insight into the flexural performance of continuous SCC beams strengthened with FRP reinforcement, which is very important bearing in mind the increasing use of SCC in civil engineering.

Test Specimens

Six continuous beams were made of SCC with natural aggregate in accordance with the program of experimental testing, in which the geometrical characteristics of the test specimens, the method of their production, the arrangement and type of measuring instruments, as well as the test procedure had been defined. The dimensioning of the continuous beams was carried out in accordance with the Eurocode (EC) 2 regulation [26]. The beams have the same percentage of longitudinal and transverse reinforcement, shown in Figure 1. The tested beams have a rectangular cross-section with dimensions b/h = 120/200 mm, a total length of 3200 mm and a beam span of 1500 mm, and are reinforced with B500B reinforcement and made of SCC of designed class C30/37 [26].

Mechanical Properties of SCC Concrete

The concrete mixtures tested within the experimental research were made using CEM I 42.5 R cement, whose characteristics are presented in Table 1. The rock flour used in the experiment as fine aggregate was obtained by grinding limestone with a specific mass of 2.692 g/cm3 and a void share according to Rigden of 25.4% (supplier's data). For the concrete mixtures, the particle size distribution was designed based on SRPS U.M1.057:1984, which defines the standard particle size distribution curves A-D. The particle size distribution of the aggregate used for making the SCC is presented in Figure 2. The composition of the designed concrete mixture is presented in Table 2. The additive MC PowerFlow 1102 was used in the experiment. This is a superplasticizer, a modified polycarboxylate with a density of 1.06 kg/dm3. The additive dosage was 0.5% in relation to the mass of the powdery components (cement and rock flour). The experimental examination of the characteristics of the SCC was carried out in two phases: (1) examination of the characteristics of the fresh concrete mass, and (2) examination of the mechanical characteristics of the hardened SCC.
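To illustrate the dosing rule just stated (the superplasticizer is batched at 0.5% of the mass of the powdery components), a minimal Python sketch follows. The component masses are placeholders, not the values of Table 2.

```python
# Illustrative mix-proportioning helper: the superplasticizer dose is 0.5% of
# the mass of the powdery components (cement + rock flour), as stated above.
# The component masses below are placeholders, not the paper's Table 2 values.
def plasticizer_dose_kg(cement_kg: float, rock_flour_kg: float, dosage_pct: float = 0.5) -> float:
    """Additive mass for a batch, given the powdery component masses."""
    return (cement_kg + rock_flour_kg) * dosage_pct / 100.0

print(f"dose = {plasticizer_dose_kg(380.0, 180.0):.2f} kg per batch (placeholder masses)")
```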
One of the primary methods used for testing self-compacting concrete in the fresh state is the slump-flow test (Figure 3a), which tests the consistency of the fresh concrete and checks one of the key properties of SCC: workability, i.e., fluidity. The results obtained by testing the fresh concrete are shown in Table 3. Based on the recommendations defined by EFNARC [27] and EN 206-9:2010 [28], the designed mixture is classified as SF1. On the basis of the measured slump time of the designed mix, it is concluded that it belongs to class VS2. Determination of the compressive strength of the concrete was carried out on cubes with sides of 150 mm, after 2, 7, 14 and 28 days, in accordance with EN 206-1. The tensile splitting strength of the concrete was tested on cylinders measuring 150/300 mm and was determined after 28 days, in accordance with EN 12390-6. The determination of the tangent and secant moduli of elasticity was performed in accordance with the standard EC 2 [26]. The obtained results are provided in Table 4. The steel reinforcement was not tested; instead, the characteristics provided by the manufacturer were adopted, while for the FRP reinforcement, the characteristics obtained by testing in the Laboratory of the Faculty of Mechanical Engineering at the University of Nis (Figure 5) were adopted [29]. The testing was performed according to the recommendations of the American Concrete Institute (ACI) [30]. The mechanical properties of the steel and FRP reinforcement are shown in Table 5. The fiber volume fraction in the CFRP bar is 71%, while the fiber volume fraction in the CFRP tape is 63.3% (suppliers' data). The mechanical properties of the used epoxy adhesive are shown in Table 6. The application of the FRP tapes proceeded as follows:
• The surfaces of the beam were sound and without irregularities, so there was no need for additional processing before installing the FRP tapes.
• The surfaces of the beam were cleaned to remove dust, after which the surfaces were impregnated with primer.
• The FRP tape was also cleaned with an appropriate cleaner. The solvent was evaporated and the surface of the tapes was completely dry before the application of the adhesive.
• The two-component epoxy adhesive was applied in a thickness of approximately 1 mm to both the FRP tape and the surface of the RC beam.
• A roller was used to expel air bubbles and remove excess epoxy adhesive on both sides of the FRP tapes.

Variants of Strengthening of Tested Beams

Strengthening of the RC beams was performed on a total of five beams, whereby the position, type and method of mounting of the FRP reinforcement were varied (Figure 6).

Experiment Setup and the Procedure of Load Application

Testing of the continuous RC beams was performed in the Mechatronics laboratory of the Faculty of Mechanical Engineering of the University of Nis. The measuring equipment and assisting personnel are from the Laboratory for Structural Testing of the Faculty of Civil Engineering and Architecture of the University of Nis. The load was applied to the beam in the form of two concentrated forces acting in the middle of both spans at an axial distance of 1500 mm (Figure 1). The load transfer was realized via steel plates 100 mm wide and steel rollers Ø30 mm placed between them. The support of the beam girder was also realized via steel contact plates 100 mm wide, with the simulation of one fixed and two movable bearings. The load was applied with a deflection increase rate of 0.02 mm/s (1.2 mm/min).
During the test, the following data were recorded:
• vertical displacements (deflections),
• normal strains in the concrete, steel and FRP reinforcement.

To monitor the displacements, the following were used: LVDTs (Linear Variable Displacement Transducers) W50, which measured deflections, and an LVDT W20 as the measuring element of a dilatometer designed for measuring strains in the tensioned zone of the concrete (Figure 7). Measurement of the strains in the steel and FRP reinforcement, as well as on the top compressed surface of the concrete, was performed with strain gauges (SG). Strain gauges with an electrical resistance of 120 Ω were used, with a base length of 6 mm on the steel and FRP reinforcement and 50 mm on the concrete (Figure 8). The gluing of the gauges was done with a special adhesive, whereby the strain gauges mounted on the steel and FRP reinforcement were additionally protected, since they were embedded in concrete or epoxy glue. The positions of the strain gauges and the LVDT dilatometer in Sections I, II and III are presented in Table 7. An electronic dynamometer (HBM U2A) was used for measuring the force (load), with a measuring range up to 100 kN and an accuracy of 0.5%. For recording signals and converting mechanical parameters into electrical ones, the multi-channel measuring-acquisition systems MGCplus and SPIDER 8 were used. The converters were connected via computer acquisition systems to calibrate the instruments and to read and record the data. The electronic instruments were read automatically every second (quasi-dynamically), while recording and processing of the data was performed via the CATMAN software package.

Results and Discussion

The values of the measured quantities (displacements and local deformations) due to the effect of the static load were read quasi-dynamically (every second), and the obtained values were then processed in Excel and Catman. The output results are presented in the form of diagrams, from which it is possible to most clearly observe the measured parameters (deflections and strains) as a function of the applied load.

Analysis of Recorded Deflections

Special attention in this research is paid to the analysis of the deflections of the continuous beams. The disposition of the control beam, before and after deformation, is shown in Figure 9. A comparative analysis of the deflections of the strengthened beams was performed in relation to the unstrengthened control beam, and the dependence curves between the load and the deflection of the section in the middle of the span are shown in Figure 10. On the curves, in general, characteristic zones can be observed during load application:
• the zone before the emergence of cracks, where the relation between load and displacement is linear;
• the zone after the emergence of cracks and before the yield of the steel, crushing of the concrete or debonding of the strengthening system, where the relationship between load and displacement is non-linear; and
• the zone after the yield of the steel, crushing of the concrete or debonding of the strengthening system, until the cross-section failure.

The behavior of the tested beams until the emergence of the first cracks is almost identical, after which the strengthened beams exhibit higher rigidity and higher load-bearing capacity. With the B-EC beam, the dependence curve is generally linear even after the appearance of cracks, which indicates small deformability but also the higher rigidity of this beam.
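To make the deflection-based measures used below concrete, the following minimal Python sketch computes the capacity increase over the control beam and a ductility index, assuming the common definition of the ductility index as the ratio of ultimate to yield midspan deflection. The deflection values are placeholders, not the measured data; the ultimate loads of 100 kN and 182 kN are taken from the text below.

```python
# Illustrative post-processing of load-deflection results, assuming the common
# definition of the ductility index ID = delta_u / delta_y (ultimate over yield
# midspan deflection). Deflections below are placeholders, not measured values.
def capacity_increase(f_ultimate_kn: float, f_control_kn: float) -> float:
    """Percentage increase in ultimate load over the control beam."""
    return 100.0 * (f_ultimate_kn - f_control_kn) / f_control_kn

def ductility_index(delta_u_mm: float, delta_y_mm: float) -> float:
    """Ductility index as the ultimate-to-yield deflection ratio."""
    return delta_u_mm / delta_y_mm

print(f"B-C capacity increase: {capacity_increase(182.0, 100.0):.0f}%")  # -> 82%
print(f"example ID: {ductility_index(24.0, 6.0):.1f}")  # placeholder deflections -> 4.0
```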
Figure 11a shows the comparison of the maximum load capacities of the tested beams. The load-bearing capacity of all strengthened beams was increased:
• For beams strengthened with GFRP bars by the NSM method, the increase in load-bearing capacity was 22-67% depending on the position of the reinforcement, with the largest increase in load-bearing capacity for beam B-G3, where the strengthening is both above the middle support and in both spans of the beam.
• When beams are strengthened with CFRP bars by the NSM method, both above the middle support and in both spans of the continuous beam (beam B-C), the increase in load-bearing capacity is 82%.
• For beams strengthened with CFRP tapes of the same axial stiffness as the CFRP bars, by the EB method (beam B-EC), the increase of the load-bearing capacity is lower than for the B-C beam, and it amounts to 50%.

Figure 11b shows the maximum deflections at the midspans at the moment of failure of the tested beams. It can be noticed that the maximum deflections of beams B-G1 and B-G2 increase with increasing failure force, while in the other strengthened beams the deformability is lower, as shown by one of the most important parameters, the ductility index. The ductility index of the beams strengthened with GFRP reinforcement (B-G1, B-G2 and B-G3) has satisfactory values (ID = 4 ÷ 5), while in the beams strengthened with CFRP reinforcement the ductility is very small (ID = 1.5). The beam strengthened with CFRP tapes (B-EC) did not show any ductility, and the maximum deflection of this beam is the lowest compared to the other beams.

Analysis of Normal Strain in the Steel Reinforcement

Load-strain dependence diagrams for the tensioned steel reinforcement of the tested beams are presented in Figure 12a (mid-span cross-section) and Figure 12b (cross-section above the support). It can be seen from the diagrams that the strains in the tensioned steel reinforcement are negligibly small until the emergence of cracks in the concrete, after which they increase nonlinearly until the yield point of the steel reinforcement. The last phase, from the yield of the steel reinforcement to failure, has an even more pronounced nonlinearity. There is a noticeable increase of the load at which the yield of the steel reinforcement occurs in the strengthened beams in relation to the control beam, for which this load is Fy ≈ 80 kN. In the case of beams strengthened with GFRP reinforcement, the increase in the load at which the reinforcement steel yields is 14% for beam B-G1, 70% for beam B-G2 and 86% for beam B-G3. In the case of the beam strengthened with CFRP bars by the NSM method, an increase of 100% in the load at which the steel reinforcement yields was achieved in relation to the unstrengthened control beam. In the case of the beam strengthened with CFRP tapes by the EB method, yielding of the tensioned steel reinforcement was not observed. Figure 13 shows the comparison of the yielding loads of the tested beams.

Analysis of Normal Strains in the FRP Reinforcement

Load-strain dependence diagrams for the FRP reinforcement of the tested beams are presented in Figure 14a (mid-span cross-section) and Figure 14b (cross-section above the support). When comparing the maximum measured strains in the FRP reinforcement with the ultimate tensile strains of the GFRP and CFRP reinforcement, it was observed that the GFRP reinforcement is maximally utilized in beams B-G1 and B-G2.
For the other beams, the utilization of the additional FRP reinforcement is not full, and it is the lowest in the case of the CFRP tape reinforcement, where the utilization is approximately 40%.

Analysis of Normal Strains in the Concrete

Load-strain dependence diagrams for the concrete of the tested beams are presented in Figure 15a (mid-span cross-section) and Figure 15b (cross-section above the support). From the presented diagrams, it can be observed that the strains in the concrete are consistent with those in the tensioned steel reinforcement until the onset of the first cracks in the concrete. After that, the strains are nonlinear, and the nonlinearity is more pronounced after the yield of the steel reinforcement. Figure 16 shows the failure mode of the control beam. The failure of the beam is caused by the formation of a plastic hinge, i.e., by reaching the ultimate strain of the steel reinforcement in tension, first in the cross-section above the middle support. When the ultimate strain of the steel reinforcement in tension is then reached in the cross section in the span, a second plastic hinge is formed, completing the failure mechanism of the beam. The maximum realized force was 100 kN, after which the deformations increase without an increase in force. In the case of beam girder B-G1, which is strengthened with a GFRP bar above the middle support, the cross section in the span fails first, reaching the ultimate strain of the steel reinforcement, which is followed by concrete crushing (Figure 17), forming a plastic hinge in that cross section. The formation of the failure mechanism occurs due to crushing of the concrete in the cross-section over the middle support, without debonding of the FRP reinforcement system, which indicates the complete utilization of this system. The load-bearing capacity of this beam is 122 kN, and exceptional deformability and failure without major damage to the beam were observed. In Figure 18, the failure shape of the beam strengthened with GFRP reinforcement in the zone of positive moments (bottom side of the beam), beam B-G2, is shown. A failure of the section above the middle support was first observed, due to reaching the limit strain of the steel reinforcement in tension (Figure 18a). The formation of the failure mechanism occurs due to the failure of the cross section in the span, caused by separation of the concrete at the level of the longitudinal steel reinforcement (Figure 18b). Crushing of the concrete was observed in the zone of force application (Figure 18b). Transverse cracks in the concrete on the bottom side of the beam intersect the slot with the epoxy filling. The load-bearing capacity of this beam is 148 kN, the deformability is good and the failure occurs with damage to the beam (Figure 18b). The fracture of beam B-G3 is caused by debonding at the epoxy-concrete interface, as a result of which the concrete separates at the level of the protective layer (Figure 19a). The load-bearing capacity of this beam is high (Fu = 167 kN), but the failure is sudden, with a loud crack and considerable damage to the beam. Crushing of the concrete was observed in the zone of force application, as well as transverse cracks in the concrete, both on the bottom and on the top side of the beam, intersecting the groove with the epoxy infill (Figure 19b). The deformability is lower compared to the other beams in which strengthening was accomplished using GFRP reinforcement.
Figure 20 shows the failure shape of the beam strengthened with CFRP reinforcement in the zone of negative moments above the middle support (top side of the beam) and in the zone of positive moments (bottom side of the beam), caused by incremental application of the test load until failure. It is observed that the failure occurs by debonding of the epoxy and concrete, as a result of which the concrete separates at the level of the protective layer (Figure 20a). The load-bearing capacity of this beam is the highest (Fu = 182 kN) in comparison with the other tested beams, but the deformability is extremely low, so that failure occurs suddenly, with considerable damage to the beam. Failure of the beam strengthened by the EB method with CFRP tapes (beam B-EC) occurs much earlier than in the beam strengthened with CFRP bars by the NSM method (beam B-C). In general, the behavior of these beams is similar until the onset of tape separation in the B-EC beam. Failure (fracture) occurs due to the sudden separation of the tape caused by the loss of bond at the joint of the concrete and epoxy adhesive (Figure 21a), after which the protective layer of concrete separates (Figure 21b). Failure due to debonding of the reinforcement system is the most common failure of beams strengthened by the EB method, and it is also the most prominent deficiency of this method. Debonding failure can be prevented by various anchoring systems such as anchor spikes, U-anchors, transverse wrapping, plate anchors, bolted angles and other miscellaneous systems [32]. The fracture load was 150 kN (180 kN for beam B-C), and the deformability of this beam is very low, with an almost linear relationship between the load and the deflection of the beam.

Conclusions

Composite materials are certainly materials of the future, but also of the present. The application of fiber polymer materials provides great opportunities in the repair and strengthening of RC structures, thus extending their service life [33]. Experimental testing of six two-span continuous beams made of self-compacting concrete strengthened with fiber reinforced polymer (FRP) materials was done in this research. The influence of the method of strengthening, the type of FRP material and the position of the strengthening on the flexural behavior of the examined beams was analyzed. Based on all of the above, the general conclusions can be summarized:
1. The use of FRP materials in the strengthening of RC continuous beams, even with a small amount of additional reinforcement, can considerably increase their load-bearing capacity. An increase of 22-82% in the load-bearing capacity of the strengthened beams compared to the control beam was achieved.
2. The highest increase of the load-bearing capacity was in the beams strengthened both in the negative moment zone above the middle support and in the positive moment zones in the spans, and it ranges within 50-82% depending on the type of FRP reinforcement (glass or carbon) and the method of strengthening (EB or NSM).
3. Strengthening with CFRP materials results in a higher failure load, but the price of these materials is considerably higher compared to GFRP materials, so in each individual case it should be assessed which material is more adequate.
4. The use of FRP materials also increased the yielding load, which was in the range from 14% to 100% above the yielding load of the control beam.
5. With the increase in the amount of FRP reinforcement, there is a noticeable decrease in the deformability, or ductility, of the strengthened beams, which is one of the main disadvantages of using FRP reinforcement.
6. The failure modes of the strengthened beams with a higher percentage of FRP reinforcement were brittle, with separation of the concrete protective layer and before fully utilizing the tensile strength of the FRP material. The lowest efficiency was observed in the beam strengthened by the EB method, due to the separation of the tapes.
7. Debonding of the FRP system in beams strengthened in the same manner occurred earlier with the EB method, so the efficiency and utilization of the tensile strength of the FRP reinforcement are higher with the NSM method.
8. The basic disadvantages of the NSM method compared to the EB method are the groove size needed for installing the FRP bars and the higher price, which lead to the conclusion that the assessment of which method is more adequate should be made on a case-by-case basis.

In further experimental research, the use of narrow strips instead of bars in the NSM method can be analyzed, given the smaller groove size required in the concrete. Further, a numerical model of the examined continuous beams using the finite element method should be made, validated and verified against the experimentally obtained results.

Conflicts of Interest: The authors declare no conflict of interest.
Technology of Polymer Microtips’ Manufacturing on the Ends of Multi-Mode Optical Fibers

The technology of polymer microtips’ manufacturing on the ends of selected multi-mode fibers is reported. The study’s key element is an extended description of the influence of the technology parameters on the shape of these 3D microstructures. Basic technology parameters such as the spectral characteristics of the light source, the monomer mixture type, the optical power, and the exposure time were taken under consideration. Depending on those parameters, different shapes, sizes, and surface structures of microtips were obtained. The spectral characteristics of the light and the optical power delivered to a monomer drop were identified as the most important parameters for the formation of the desired 3D shape of the microtip. The presented experimental results are the basis for further studies directed toward the application of these micro-elements in the fields of optical measurements and sensor technology.

Introduction

The photopolymerization phenomenon has been used in various fields of science and technology because of its unique advantages, such as low cost, a fast chain reaction, time efficiency, ambient-temperature processing, and the possibility of making desirable micrometer-sized 3D structures. To give only a few examples of industrial implementation, it has found applications in surface fabrication, particle preparation, and continuous flow technology [1]. These materials are present as adhesives, coatings, photoresponsive gels, and photoresists dedicated to microlithography and nanolithography in microelectronics, optoelectronics, holographic data storage, etc. [2]. The application of the photopolymerization phenomenon in optical fiber technology as a new method of producing micrometer-sized polymeric structures at the ends of selected optical fibers has been reported. Previously, these polymeric structures, named microtips, were fabricated on single-mode fibers (SMFs) [3,4], including the standard telecommunication fiber (SMF-28e+) [5], an SMF at 488 nm (SM-450) [6], as well as large mode area photonic crystal fibers (LMA-10) [7]. Their possible applications in microscopy required an investigation of the beam outgoing from the SMF and its distribution in the far-field region [3][4][5], as well as an analysis of the refractive index distribution in this type of optical element [8]. However, a microtip can also be considered as an optical fiber refractive index sensor transducer. Designs known from the literature have used optical fiber transducers based on Bragg gratings [9,10], long-period gratings [10,11], plasmonic effects in standard and microstructured optical fibers [10], or even micro-interferometers [10,12]. In the authors’ studies regarding microtips on multi-mode fibers (MMFs), including a polymer MMF (GIPOF-62) [6] and a silica-based MMF (GIF625) [13], the preliminary results have shown linear changes of the return losses when the refractive index (RI) around the microtip was changed within the range of 1.3-1.5. The measured dynamic range of these losses was at the level of 28 dB [13,14]. In this way, it was shown that a microtip manufactured at the end of an MMF gives a higher back-reflected signal than a microtip on an SMF. Moreover, microtips produced on optical fibers with a large core diameter have a larger adhesion surface between the microtip base and the fiber end face, which reduces the probability of delamination. The two above advantages are the main reason for the investigation of such structures.
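The dependence of the back-reflected signal on the surrounding refractive index can be rationalized, to first order, by normal-incidence Fresnel reflection at the tip-medium interface, as the Python sketch below illustrates. The tip refractive index n_tip = 1.56 is an assumed representative value (the measured polymer RIs are given later, in Table 2), and this is a first-order sketch, not the authors' sensor model.

```python
import math

# First-order Fresnel estimate of the power reflected at the microtip/medium
# interface at normal incidence. n_tip = 1.56 is an assumed polymer RI.
def return_loss_db(n_tip: float, n_ext: float) -> float:
    """Return loss (dB) of the Fresnel reflection at a tip/medium interface."""
    r = (n_tip - n_ext) / (n_tip + n_ext)  # amplitude reflection coefficient
    return -10.0 * math.log10(r * r)       # power return loss in dB

for n_ext in (1.30, 1.40, 1.50):           # external RI range probed in [13,14]
    print(f"n_ext={n_ext:.2f}: return loss = {return_loss_db(1.56, n_ext):.1f} dB")
```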
In this paper, an extended study presents the technology of polymer microtips' manufacturing on MMFs, with a detailed description of the experimental conditions, as well as an extended analysis of the influence of the technology parameters on the microtips' shaping. The main technology parameters selected during the investigation were: the composition of the monomer mixtures, the spectral characteristics of the used light sources, the MMF types, the deposited mixture quantity, the position of the fiber, and the energy absorbed by the monomer material during photopolymerization. The last one has been investigated as the composition of two technical parameters, i.e., the optical power (P) and the exposure time (t). The geometries of the manufactured elements were analyzed and compared for different variations of the above-mentioned parameters. Finally, conclusions with a qualitative research commentary and a proposal for potential applications in optical fiber sensor technology are presented.

Technology of Microtips on Multi-Mode Optical Fibers

The microtips' manufacturing procedure consists of two steps. The first step is the application of a liquid monomer mixture as a drop at the end of a cleaved optical fiber [6,13,14], or the immersion of the optical fiber into a cuvette with this mixture. In the second step, the light propagating in the optical fiber core cures the photopolymer, so that it becomes a hardened 3D polymer microstructure. The main elements of the technology are: the photopolymerization ability of the monomer mixture, the radiation parameters of the light source, and the type of the applied optical fiber. Proper selection of these parameters significantly influences the microtip shape. Therefore, the next section of the paper is dedicated to a detailed description of the impact of these parameters on the geometry of these types of micro-optic elements.

Monomer Mixtures and Optical Properties of Polymers

From the huge number of monomers that could be the basis for the microtips' production, those that are photopolymerizable were selected. Many tests were carried out, after which two types of multifunctional acrylate monomers met all the requirements. In each of the tested mixtures, 3-functional pentaerythritol triacrylate (PETA; Sigma-Aldrich, St. Louis, MO, USA) or 2-functional tricyclodecanedimethanol diacrylate (TCDMA; Sigma-Aldrich, St. Louis, MO, USA) was used, and various additives, depending on the light source, were used in the experiment. Two types of photo-initiating systems (PISs) were used; the mixture was cured with UV (ultraviolet) or VIS (visible) radiation. The UV-curable mixture contained only two compounds, i.e., a monomer and a photo-initiator. The ranges of possible percentage compositions of the mixtures are presented in Table 1. In this mixture, the PIS 2,2-dimethoxy-2-phenylacetophenone (DMPAP) was used, which belongs to the α-dialkoxyacetophenone photo-initiator class [14]. The VIS-curable mixture needs three compounds, i.e., a monomer, a sensitizer dye, and a co-initiator. In this mixture, Eosin Y disodium salt and methyldiethanolamine (MDEA) were used as the sensitizer and co-initiator, respectively. Eosin Y is photosensitive in the spectral range from 450 nm to 550 nm and allows triggering of the photopolymerization process with the used VIS light source [15]. Both above-mentioned chemical compounds were purchased from Sigma-Aldrich. The application of different spectral ranges of the light sources and various compositions of the monomer mixtures allows obtaining polymers with different refractive indices (RI) [1,16].
Table 2 presents the measured RIs of the above-mentioned polymers. All prepared materials were measured on an Abbe refractometer, with the uncertainty defined at the level of 2σ, where σ is the standard deviation of the RI measurements. In all cases, the RI of the prepared mixtures increases after polymerization. Compared with silica glass, whose RI is around 1.4607 at 530 nm and around 1.4745 at 365 nm, both polymers have higher RIs. Moreover, the microtip can have an anisotropic structure with a higher RI in the center and a lower RI in the outer section. However, as was shown by tomographic examinations [8,17], this difference reaches a value of around 0.0007, and the assumption that the RI distribution is homogeneous is acceptable.
Light Sources' Parameters
In previous papers, the results obtained by separately using coherent light sources [3,13] or a UV LED [14] have been presented. In this paper, the influence on the microtip geometry of such source parameters as the full width at half maximum (FWHM) and the central wavelength is investigated. As the light sources, a coherent laser at a wavelength of 532 nm and two broadband UV and VIS LEDs were used and compared. The spectral characteristics of the light sources used in all experiments are presented in Figure 1.
In Figure 2a,b, examples of intensity patterns in the near field of the fiber output for the standard gradient-index MMF with a 62.5 µm core are shown for illumination by the above-mentioned VIS light sources, whereas in Figure 2c,d, a pair of microtips based on the PETA monomer manufactured by using these sources is presented. The surface shape of the manufactured microtips reflects the modal characteristics of the optical fiber illuminated by the given source. The broadband LED gives a uniform, Gaussian-like intensity distribution, so the microtip has smooth edges and a smooth apex, while the intensity speckle pattern of the narrowband laser is imprinted in the 3D polymer structure. Both microtips have quasi-trapezoidal cross-sections, wider at the bottom (base) and narrower at the top.
Although the broadband source is key for obtaining a smooth microtip surface [14], the other source parameter, the central wavelength, also influences the general microtip shape. Figure 3 shows SEM images of the microtips manufactured by using the UV (Figure 3a) and VIS (Figure 3b) LEDs, based on the PETA monomer and obtained on the same type of MMF as in the previous test. Depending on whether the UV or the VIS LED was used, the microtip cross-section shape is quasi-rectangular or quasi-trapezoidal, respectively. Additionally, the microtip manufactured with UV light has a rounded apex, while the other has a flat one. This aspect will be discussed in more detail in Section 2.4 and summarized in Section 3.
Selected Multi-Mode Optical Fibers
As was shown in the previous section, the optical fiber influences the microtip's shape due to its modal characteristics.
Therefore, different types of optical fibers were selected to optimize the procedure of microtip manufacturing. In this paper, a gradient-index MMF with a 62.5 µm core diameter and three step-index MMFs with 50 µm, 105 µm, and 200 µm core diameters were used. Based on the previous studies, these MMFs were selected in terms of their reflective properties [13,14]. In Table 3, the main parameters of the used MMFs are presented. They are divided into categories related to the core and cladding diameters, the numerical aperture (NA), and the RI profile distribution. All selected optical fibers were purchased from ThorLabs, and their specific product names are given. In Figure 4, selected microtips obtained using the UV LED and both PETA and TCDMA monomer mixtures at the ends of the MMFs from Table 3 are presented. Each micro-element was prepared at various optical powers and exposure times to show the possibility of shaping this type of optical microstructure. As demonstrated, all of them have smooth surfaces, but their shapes significantly differ.
The microtip on the 50 µm step-index MMF in Figure 4a has a quasi-rectangular cross-section with a flat apex. The microtip on the 62.5 µm gradient-index MMF (Figure 4b) is quasi-rectangular with a rounded apex, similar to the microtip in Figure 3a. Increasing the core size of the step-index optical fiber to 105 µm resulted in a change of the shape to quasi-rounded with high curvature (Figure 4c). A further increase of the core size for the same RI profile formed a microtip with a quasi-rectangular cross-section and a conical apex (Figure 4d). The above results showed the possibility of shaping microtips by choosing suitable optical fibers.
Technical Parameters of the Manufacturing Process
The technical parameters of the manufacturing process are: the position of the fiber, the amount of optical energy absorbed by the mixture, and the amount of the mixture, which can be assessed as the droplet size. The monomer mixture was applied by drop deposition at the end of a cleaved optical fiber, following the method previously described [4]. Immersing the optical fiber into a cuvette with the monomer mixture does not give positive results. The shape of the liquid drop deposited at the MMF's end depends on the surface tension forces defined by the optical fiber diameter and the amount and viscosity of the monomer mixture [1]. Figure 5 shows two pairs of optical microscope images of the optical fiber with deposited monomer drops (Figure 5a,c) and formed microtips for the same optical power and different exposure times (Figure 5b,d). The applied drops of the mixture (Figure 5a,c) have the same height and the same shape because the surface tension forces form a rounded shape on the end face of the fiber. The photopolymerization process creates a 3D polymer microstructure in the form of a microtip, and the height of the microtip varies depending on the energy absorbed by the mixture, which in this example is proportional to the exposure time. In the first case (Figure 5b), the short exposure time produces a 20 µm-high trapezoidal microtip, and in the second case (Figure 5d) the microtip is rectangular, 39 µm high, equal to the size of the initial drop. The exposure time, for a given optical power of the source, should be long enough to polymerize the entire volume of the applied drop. This results from the fact that the chain process of photopolymerization occurs while the mixture is illuminated with the VIS source defined in Section 2.1; after the radiation stops, the polymerization process stops as well. As noted in [18], the cured part of the mixture becomes an extension of the optical fiber core and acts as a waveguide, illuminating and curing subsequent layers of the liquid polymer. The exposure time does not affect the shape of the edge or the top of the microtip, but it determines its height. Keeping the same optical power, the resulting microtips have the same height as the liquid drop if the exposure time is long enough for polymerization along the entire length of the drop. If the exposure time is too short, not all of the liquid mixture is polymerized, and the microtip is lower than the initial drop. For the tested exposure times (1 s, 10 s, 20 s, 30 s, 60 s, 120 s), it was found that 60 s is enough to cure the entire height of the drop. A reduced exposure time means less energy delivered to the system, so the manufactured microtip is lower than one produced with a longer exposure.
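Because the absorbed optical energy is simply the product of the optical power and the exposure time, the parameter grid used here maps directly to delivered energies. The following minimal Python sketch (our illustration; the individual power values within the stated 5 µW-40 µW range are assumptions) tabulates this relation:

```python
# Delivered optical energy E = P * t for an assumed parameter grid.
# 1 uW * 1 s = 1 uJ, so the units work out directly.
powers_uW = [5, 10, 20, 40]            # example powers within the 5-40 uW range
times_s = [1, 10, 20, 30, 60, 120]     # exposure times tested in the paper

for P in powers_uW:
    for t in times_s:
        E_uJ = P * t                   # energy absorbed by the drop, in uJ
        print(f"P = {P:2d} uW, t = {t:3d} s -> E = {E_uJ:4d} uJ")
```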
Microtips produced on the gradient-index MMF had a slightly rounded apex with a large radius of curvature. The curvature is proportional to the curvature of the liquid drop before photopolymerization, and the microtip's apex radius is always smaller than the drop's radius (see Figure 6). This discrepancy results from the fact that the polymerization process runs only in a certain area of the drop, and the apex shape is determined by the mode distribution of the light beam at the optical fiber output. In addition, the material shrinks when it changes its state from liquid to solid. The experimental results showed that, for optical powers within the range of 5 µW-40 µW and exposure times from 1 s to 60 s, the average difference between the curvature radii of the microtip's apex and of the drop is about 26 µm for the VIS light sources and about 36 µm for the UV LED. It was previously noted [19] that elements produced on an SMF's end face had a curvature radius greater than the fiber core diameter, and that the microtip curvature radius increased with the exposure time. For the MMF used here, in contrast, the microtip's apex curvature radius can be greater or smaller than 62.5 µm, depending on the optical power.
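The curvature radii quoted above are obtained by fitting approximation circles to the drop and apex contours seen in the microscope images (Figure 6). One standard way to perform such a fit is an algebraic least-squares (Kåsa) circle fit; the sketch below is our illustration of the idea, not necessarily the authors' procedure, and the contour points are hypothetical:

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit.

    points: (N, 2) array of (x, y) contour coordinates, e.g. digitised
    from the drop or apex edge in a microscope image (units: um).
    Returns the centre (a, b) and the radius r.
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Rewrite (x - a)^2 + (y - b)^2 = r^2 as x^2 + y^2 = 2ax + 2by + c,
    # which is linear in the unknowns (a, b, c) with c = r^2 - a^2 - b^2.
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return (a, b), np.sqrt(c + a**2 + b**2)

# Toy check: noisy points on an arc of radius 60 um
rng = np.random.default_rng(0)
t = np.linspace(0.3, 2.8, 40)
arc = np.column_stack([60 * np.cos(t), 60 * np.sin(t)]) + rng.normal(0, 0.5, (40, 2))
centre, radius = fit_circle(arc)
print(f"fitted radius: {radius:.1f} um")   # close to 60.0
```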
Moreover, no relationship was found between the exposure time and the microtips' curvature radius. In Figure 6, microscope images of the liquid drop and the microtip with the approximation circles are presented. The difference between the curvature of the drop and that of the microtip's apex in these figures is around 32.5 µm. With careful control of the mixture amount, the microtip height is similar every time. In Table 4, the average microtips' heights with their uncertainties for the gradient-index MMF with a 62.5 µm core diameter and both monomer mixtures (PETA, TCDMA) are presented. Moreover, the research results have shown that the highest microtips fabricated on MMFs with core diameters of 50 µm, 62.5 µm, 105 µm, and 200 µm have heights of about 31 µm, 39 µm, 29 µm, and 63 µm, respectively. The position of the optical fiber was always the same, i.e., the MMF's end was directed vertically downward. However, it is worth noting that no differences were found in the microtip's creation for various fiber orientations in space. What is most important here is that the adhesion force between the fiber and the polymer drop dominates, while the gravity force is of secondary importance.
Table 4. Average heights of microtips with their uncertainties for the optical power range from P = 5 µW to P = 40 µW and exposure times from t = 1 s to t = 60 s.
During the experiment, it was found that the absorbed optical energy is a more important parameter. This energy is defined by the optical power and the exposure time. Depending on this parameter, microtips have different base sizes [13,14]. Moreover, it has been noticed that the microtip base diameter increases with the optical power. The rate of this change depends on the monomer mixture composition and the spectral characteristics of the source. In Figure 7, the results for microtips based on mixtures with the PETA (black dots) and TCDMA (red dots) monomers, manufactured with the VIS laser, are presented. SEM images of the microtips allow measuring the size of their bases. Analysis of the data for the PETA-based microtips (black) indicates that, theoretically, the microtip base covers the entire core of the MMF at an optical power of about 30 µW (black curve approximation), while the experimental value is around 40 µW. For the mixture with the TCDMA monomer, the approximated optical power of 150 µW should be enough to form a microtip with a base diameter similar to the core diameter of the used MMF (red curve), while the value obtained in the experiment (red dots) was 300 µW.
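The "curve approximation" powers quoted above can be reproduced in principle by fitting a smooth monotonic trend to the measured base diameters and inverting it at the core diameter. The sketch below is purely illustrative: the data points are hypothetical and the logarithmic trend is an assumed functional form, not the fit actually used for Figure 7:

```python
import numpy as np

# Hypothetical SEM base-diameter readings d(P): P in uW, d in um.
P = np.array([5.0, 10.0, 20.0, 40.0])
d = np.array([28.0, 39.0, 48.0, 62.0])

# Assume a logarithmic trend d = a*ln(P) + b as one possible smooth,
# monotonic approximation of the curve in Figure 7.
a, b = np.polyfit(np.log(P), d, 1)

core = 62.5  # core diameter of the gradient-index MMF, um
# Invert the fit: power at which the base would cover the whole core.
P_cover = np.exp((core - b) / a)
print(f"extrapolated power for a full-core base: {P_cover:.0f} uW")
```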
Summary of Microtips' Geometry Shaping on the MMF
In the previous section, the main parameters of the microtips' manufacturing technology were identified and described. A graphical summary of the microtip shaping possibilities on the selected MMFs is shown in Figure 8. The evolution of the shape of the microtips' cross-sections is presented in the form of sketches. Each sketch relates to a selected optical power range, for each tested MMF separately. Evidently, the shape is connected with the type of light source used, as well as with the technical parameters of the MMF. The microtips produced with VIS light (right column, Figure 8b,d) had cross-sections that can be approximated as trapezoidal, and the microtips produced with UV light (left column, Figure 8a,c) had rectangular cross-sections. The reason may lie in the light transmission and the different refraction angles at the interface between the MMF and the applied drop of the monomer mixture; it is a transition from a material with a lower refractive index to a material with a higher refractive index (Table 2). As a result, various cross-section shapes were obtained. The microtip formed on the step-index MMF with a 50 µm core diameter by UV light has a hemispherical shape, and its cross-section is semi-circular at a low optical power (P = 2 µW). With the increase of the optical power, it tends to a rectangular shape (Figure 8a). The microtip on the same MMF but manufactured with VIS light has a quasi-rectangular cross-section and evolves with an increase of the optical power to a trapezoidal shape (Figure 8b). The micro-element formed on the gradient-index MMF with a 62.5 µm core diameter (Figure 8c) has a semi-circular cross-section with a large radius of curvature, comparable to a plano-convex lens. Higher values of the optical power change its cross-section to rectangular. The microtip formed on this type of MMF using VIS light changes from a trapezoidal to a semi-circular cross-section with the increase of the optical power (Figure 8d). The microtip formed on the step-index MMF with a 105 µm core diameter has a similar semi-circular cross-section for both light sources, and with increasing optical power, the manufactured microtips grow larger (Figure 8e,f). Finally, the microtip on the step-index MMF with a 200 µm core diameter created by using both types of light sources has an axicon-like or semi-circular cross-section; the conical top obtained at the starting optical power becomes hemispherical as the power increases (Figure 8g,h). It should be noted that all the above-described optical elements have similar geometries regardless of the type of monomer used in the mixture.
One of the most spectacular side effects of the photopolymerization process on the MMF is a polymer flange formed on the outer side of the optical fiber (Figure 9). It is created by the polymerization of the monomer mixture deposited on the side of the MMF by light back-reflected at the interface of the cleaved optical fiber and the microtip. The polymer flange produced on the MMF reflects the intensity pattern of the modal characteristics of the used optical fiber. The polymer flange has a grooved surface and an oval cross-section.
Conclusions
From a technological point of view, the geometry of polymer microtips at the end of an MMF depends on the monomer mixture composition, the light source's parameters, including the amount of delivered energy and the spectral characteristics, the type of optical fiber used, as well as the size of the liquid drop and the optical fiber position. Microtips manufactured on the chosen MMFs had a large reflective surface and better optical properties compared to those manufactured on SMFs. However, the fiber modal structure, together with the light source used, has an influence on the manufactured microtips, an effect that is not observed for SMFs. Therefore, a light source with a broad spectrum is preferred: when the beam has a wider spectrum, more modes are propagated inside the fiber, and thus the microtip surface becomes smooth. The mixtures used varied depending on the added monomer and the initiator type. By changing the mixture's composition, changes in its RI could be observed. The two main technical parameters of the manufacturing process are the optical power and the exposure time. They determine the amount of optical energy delivered to the monomer drop. It was noticed that the microtip base diameter increases with the increase of the optical power, while the exposure time influences the microtip height. The adhesion forces between the optical fiber end and the polymer drop are greater than the gravity forces, so the optical fiber position does not significantly affect the creation of the microtip. Based on the presented analysis, an optimal manufacturing process for the microtips was obtained. These efforts have been made to gather knowledge on how to plan the strategy of forming such 3D microstructures in order to reach the required optical properties. All the above experimental results are considered from the applications' point of view. Measurements of the reflection and transmission properties of the manufactured polymer microtips allow us to validate the process and indicate how to select its parameters to obtain the proper geometry of this element. Reflection measurements were previously performed and have led to testing such optical elements as transducers for optical fiber RI sensors. Transmission properties have been pre-characterized, and for the tested microtips, the output intensity patterns are comparable to divergent light. The microtip increases the numerical aperture and can be considered as an illuminating point source dedicated to optical measurements. However, these properties will be studied in the future.
Acoustic Analyses of L1 and L2 Vowel Interactions in Mandarin–Cantonese Late Bilinguals
While the focus of bilingual research is frequently on simultaneous or early bilingualism, the interactions between late bilinguals' first language (L1) and second language (L2) have rarely been studied previously. To fill this research gap, the aim of the current study was to investigate the production of vowels in the L1 Mandarin and L2 Cantonese of Mandarin–Cantonese late bilinguals in Hong Kong. A production experiment was conducted with 22 Mandarin–Cantonese bilinguals, as well as with 20 native Mandarin speakers and 21 native Cantonese speakers. Acoustic analyses, including the formants of, and the Euclidean distances between, the vowels were performed. Both vowel category assimilation and dissimilation were noted in the Mandarin–Cantonese bilinguals' L1 and L2 vowel systems, suggesting interactions between the bilinguals' L1 and L2 vowel categories. In general, the findings are in line with the hypotheses of the Speech Learning Model and its revised version, which state that L1–L2 phonetic interactions are inevitable, as there is a common phonetic space for storing the L1 and L2 phonetic categories, and that learners always have the ability to adapt their phonetic space. Future studies should refine the data elicitation method, increase the sample size and include more language pairs to better understand L1 and L2 phonetic interactions.
Introduction
While most investigations of bilinguals' speech have focused on the characteristics of either their first language (L1) or their second language (L2), recent studies have suggested potential interactions between bilingual speakers' L1 and L2 phonetic systems [1,2]. Late bilinguals are the ideal population for studying L1 and L2 speech interactions because they start to learn their L2 after puberty and naturally avoid any maturational constraints related to language acquisition. This study explored the interactions of Mandarin and Cantonese monophthongs in Mandarin–Cantonese late bilinguals via acoustic analyses to provide a better understanding of the issue of bidirectional influence.
Interactions between L1 and L2 Speech in Late Bilinguals
The term 'late bilinguals' refers to people who commence learning their L2 after having fully acquired their L1. Investigations of late bilinguals' speech development have mainly focused on their acquisition of L2 speech, based on the assumption that the L1 of bilinguals does not alter. However, more recent studies have shown that the L2 of late bilinguals interferes with the full-fledged L1, which, in turn, causes the L1 of bilinguals to differ from the L1 of monolinguals [3,4]. This study focuses on the speech development of late bilinguals, who are usually immigrants who have relocated to a new environment in which their L2 is the dominant language, and in which their L1 is no longer used or is used less frequently [5,6]. Starting with the same level of L1 competence, late bilinguals do not have the potential maturational constraints of child and adolescent language development [7], and are therefore an ideal population for investigating L1 and L2 interactions.
Studies of the L1 and L2 in bilingual language development have developed in two separate directions. Until very recently, only a few studies had touched upon the issue of L1 and L2 interactions and confirmed the bidirectional influences of the L1 and L2 [8,9]. To the best of our knowledge, there are only three studies that have examined L1 and L2 speech interactions [1,2,10]. Ref. [10] tested the production of voice-onset-time (VOT) by late Dutch–German bilinguals, while [1] investigated the perception of VOT by English–Spanish bilinguals. Both studies suggest interactions between the bilinguals' L1 and L2, but it is unclear whether similar findings can be observed when vowels are considered, which is the gap the current study aimed to fill. Refs. [1,10] interpreted their results as providing supporting evidence for Flege's Speech Learning Model (SLM), as introduced below.
Theoretical Framework
The L1 and L2 speech interactions can be accounted for by the SLM [11] and its revised version, the Revised Speech Learning Model (SLM-r) [12], according to which there is a common phonetic space in a bilingual speaker's mind. This space stores the phonetic categories of both the L1 and the L2; the L1 and L2 categories can thus mutually influence each other, in the process of category assimilation or category dissimilation.
The category assimilation hypothesis (CAH) in the SLM claims that, in the common space, an L2 sound that is perceived as being similar to an L1 sound does not form a new category and is understood as a variant of the L1 sound at an allophonic level; that is, a cross-linguistic equivalence between the two sounds has been established. In this case, the phonemic variants in interdialectal contact are called diaphones [13], and the CAH advocates that only one single phonetic category is used to process the two linked diaphones. This mapping of diaphones will eventually give rise to a new merged category in the mental representation of a bilingual, which will be realised differently from either the L1 sound or the L2 sound in production; this phenomenon has been documented in several studies. For example, ref. [3] examined the post-vocalic /r/ of German–English bilinguals and discovered an influence from the L2, resulting in the assimilation of the consonant pair, which lends support to the CAH.
The SLM also postulates the category dissimilation hypothesis (CDH). A new category will be established if an L2 sound is absent from the L1 system, which will make the combined phonetic space more crowded; as a result, the phonemes will tend to disperse to compensate, so that the phonetic contrast can be maintained. When category dissimilation operates, neither the newly established L2 category nor the closest L1 category will be identical to the categories of monolinguals, and consequently, both categories may shift away from their original phonetic space. Support for the CDH can be found in [14], which reported that Spanish–Catalan bilinguals have developed two categories to accommodate the mid-back vowels of the two languages, respectively.
Moreover, the SLM and SLM-r posit that the capacity for speech learning remains intact over the lifespan. This claim is particularly relevant to, and can be tested by, the current study: if it holds, there must be L1 and L2 speech interactions even for late learners. The current study also extends the SLM and SLM-r to explore any L2-induced change(s) in the bilinguals' L1.
The Current Study
Although Mandarin and Cantonese are two varieties (dialects) of Chinese, they have different phonological systems and are mutually unintelligible [15]. While there is no consensus regarding the number or distribution of monophthongs in these two varieties [16][17][18][19][20], it is acknowledged that Mandarin and Cantonese share three peripheral vowel pairs, namely /a/, /i/ and /u/ [21], which were chosen as the target vowels to be examined in this study.
Thus far, investigations of bilingual speakers' vowel systems have largely concerned bilingual children (e.g., [22,23]). It remains to be explored whether there are interactions between late bilinguals' L1 and L2 vowel systems, given that late bilinguals are mature learners with a full-fledged L1, which is very different from the case of bilingual children. The current study aimed to investigate the production of monophthongs in the L1 Mandarin and the L2 Cantonese of Mandarin–Cantonese bilinguals in Hong Kong (henceforth 'bilinguals') to answer the following research questions:
(1) Are the L1 Mandarin vowels of the bilinguals influenced by Cantonese after years of immersion? (a) Have the F1 and F2 in the bilinguals' Mandarin undergone assimilation, namely, become more similar to their Cantonese counterparts and more deviant from those of the monolingual Mandarin speakers? (b) Have the F1 and F2 in the bilinguals' Mandarin undergone dissimilation, namely, become more deviant from their Cantonese counterparts and even shifted away from those of the monolingual Mandarin speakers?
(2) Have the bilinguals reached native-like competence in L2 Cantonese in terms of vowel production? (a) Have the F1 and F2 in the bilinguals' Cantonese undergone assimilation, namely, become more similar to their Mandarin counterparts and more deviant from those of the native Cantonese speakers? (b) Have the F1 and F2 in the bilinguals' Cantonese undergone dissimilation, namely, become more deviant from their Mandarin counterparts and even shifted away from those of the native Cantonese speakers?
(3) Are there any interactions between the bilinguals' L1 and L2 vowels?
Informants
Three groups of informants were recruited to participate in this study via online advertisements. In the pre-screening stage, all potential informants were first invited to provide background information pertaining to their language use, based on which we invited eligible informants. The bilingual group consisted of 22 new immigrants (19 females, 3 males; aged 30.14 ± 4.30) who spoke Mandarin as their L1 and had been exposed to Cantonese since their arrival in Hong Kong. The bilinguals had all arrived in Hong Kong after puberty (average age: 22.73 ± 4.21), and their average length of residence was 7.41 ± 3.11 years. To assess their language profiles in Cantonese and Mandarin, the bilinguals completed a language background questionnaire prior to the recording session. The questionnaire was an adapted version of the Bilingual Language Profile [24], which was used to collect information about the participants' language history, language use, language proficiency and language attitudes, and the results were converted into scores for each subsection. The scores showed that the participants were fluent Cantonese speakers, although they were more dominant in Mandarin at the time of the experiment. The Mandarin baseline group consisted of 20 native speakers of Mandarin (11 females, 9 males; aged 24.75 ± 3.65), who were born and raised in Mandarin-speaking regions and had little exposure to Cantonese. There were 21 native speakers of Cantonese in the Cantonese baseline group (10 females, 11 males; aged 20.78 ± 2.56); they were born and brought up in Hong Kong, where Cantonese is the native and dominant language. No participant had any history of speech, hearing, or language difficulties.
Materials and Procedures
The vowels /a/, /i/ and /u/ were chosen as the target vowels for the following reasons. Firstly, as these three vowels are shared by and are commonly used in Mandarin and Cantonese, it was possible to use them to conduct cross-linguistic comparisons. Moreover, these vowels are peripheral vowels, which are ideal for measuring the vowel space of the informants. In addition, all three vowels are monophthongs, whose subtle differences are easier to capture than those of more complicated diphthongs or triphthongs. The vowels were embedded in either the first or the second syllable of disyllabic words. Because tones have been shown to influence the production of vowels [25], we restricted our target stimuli to the first tone (T1) in Mandarin and Cantonese to minimise the coarticulation effect [26]. The words appeared in the subject or object position in different sentences. For the native speakers of Cantonese and Mandarin, each vowel appeared 15 times. For the bilinguals, the vowels appeared ten times in each language. In total, 3165 vowel tokens were collected: 3 vowels × 15 times × 20 Mandarin speakers + 3 vowels × 15 times × 21 Cantonese speakers + 3 vowels × 10 times × 22 bilinguals × 2 languages.
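As a quick check of the token arithmetic, the total follows directly from the design (a one-line illustration):

```python
# 3 vowels x 15 repetitions for each native speaker; 3 vowels x 10
# repetitions in each of two languages for each bilingual.
total = 3 * 15 * 20 + 3 * 15 * 21 + 3 * 10 * 22 * 2
print(total)  # 3165
```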
To make the data collection setting more naturalistic, the speech data were collected in dialogues between the experimenter and the participants, wherein the experimenter always asked the precursor questions, and the participants were instructed to answer the questions naturally using the provided sentences. An example of the Cantonese questions and answers is presented in (1) below, with the target syllable in the answer underlined ('gaa1').
The Mandarin and Cantonese speakers attended the recording session for their respective language, while the bilinguals were recorded in both Mandarin and Cantonese. Since exactly the same materials were used in each language, it was possible to compare the vowel production by bilingual and native speakers directly.
This project was approved by the Human Research Ethics Committee of Hong Kong Shue Yan University (Reference number: HREC 22-05 (M12)). All the participants gave their written informed consent prior to the recording sessions.
Data Analysis
To process the data, the vowel portions of the target syllables were manually segmented by trained phoneticians, and the values of the first and second formants (F1 and F2) were extracted at the midpoint of each vowel with a script in Praat [27]. Individual differences among speakers are large, and to eliminate the potential effect of inter-speaker variation on our data analysis, we adopted Lobanov's approach [28] to normalise each speaker's F1 and F2 values individually, using Equation (1):

$$F_n^{[V]N} = \frac{F_n^{[V]} - \mathrm{MEAN}_n}{S_n}, \qquad (1)$$

where $F_n^{[V]N}$ is the normalised value of the $n$th formant for the vowel $V$, $F_n^{[V]}$ stands for the original formant value measured in Hz, and $\mathrm{MEAN}_n$ and $S_n$ are the mean and standard deviation (SD) of the $n$th formant of the target speaker, respectively. To make the F1 and F2 values comparable to the findings of previous studies [23,29], the normalised formant values were then rescaled to Hz following [30] with Equations (2) and (3) below:

$$F'_1 = 250 + 500\,\frac{F_1^N - F_{1\mathrm{MIN}}^N}{F_{1\mathrm{MAX}}^N - F_{1\mathrm{MIN}}^N}, \qquad (2)$$

$$F'_2 = 850 + 1400\,\frac{F_2^N - F_{2\mathrm{MIN}}^N}{F_{2\mathrm{MAX}}^N - F_{2\mathrm{MIN}}^N}, \qquad (3)$$

where $F'_i$ is the rescaled formant, $F_i^N$ is the Lobanov-normalised formant value, and $F_{i\mathrm{MIN}}^N$ and $F_{i\mathrm{MAX}}^N$ are the minimum and maximum values of $F_i^N$, respectively, across the dataset of the target speaker.
Next, the rescaled F1 and F2 values were analysed with linear mixed-effects modelling using the 'lme4' package [31] in R [32,33], with the formant values (F1 or F2) as the dependent variables, vowel and speaker group (or language) as the fixed effects, and speaker and repetition as the random effects.
In addition, to measure the relative difference within each proposed pair accurately, we calculated the Euclidean distances between the Mandarin and Cantonese monophthongs based on the rescaled F1 and F2 values with Equation (4) [34]:

$$s = \sqrt{(F2_m - F2_c)^2 + (F1_m - F1_c)^2}, \qquad (4)$$

where $s$ is the distance between two points in a two-dimensional Euclidean vowel space defined by F2 on the x axis and F1 on the y axis, and $m$ and $c$ each represent a specific monophthong in Mandarin and Cantonese, respectively. For the native speakers, the average F1 and F2 values for each vowel were used to calculate the Euclidean distances. For the bilinguals, the distances between the vowel pairs were calculated for each speaker.
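As a concrete illustration of the pipeline in Equations (1)-(4), the following Python sketch (our illustration; the authors worked with Praat and R, and the input values here are hypothetical) normalises one speaker's raw formants and computes a vowel-pair distance. The rescaling constants follow the convention of [30] as reconstructed above:

```python
import numpy as np

def lobanov(F):
    """Equation (1): z-score a speaker's formant values (in Hz)."""
    F = np.asarray(F, dtype=float)
    return (F - F.mean()) / F.std(ddof=1)

def rescale(FN, lo, span):
    """Equations (2)-(3): min-max rescaling of normalised values back to
    a Hz-like range (lo=250, span=500 for F1; lo=850, span=1400 for F2)."""
    FN = np.asarray(FN, dtype=float)
    return lo + span * (FN - FN.min()) / (FN.max() - FN.min())

def euclidean(f1_m, f2_m, f1_c, f2_c):
    """Equation (4): distance between a Mandarin and a Cantonese vowel."""
    return np.hypot(f1_m - f1_c, f2_m - f2_c)

# Hypothetical midpoint formants (Hz) for one bilingual speaker:
# tokens 0-2 are Mandarin /a/, /i/, /u/; tokens 3-5 the Cantonese pairs.
F1_raw = np.array([850.0, 320.0, 360.0, 840.0, 310.0, 370.0])
F2_raw = np.array([1300.0, 2400.0, 800.0, 1350.0, 2350.0, 780.0])

F1 = rescale(lobanov(F1_raw), 250, 500)
F2 = rescale(lobanov(F2_raw), 850, 1400)
print(f"/a/ pair distance: {euclidean(F1[0], F2[0], F1[3], F2[3]):.1f}")
```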
Results
In this section, we will first present an overview of the vowel production data and then report the statistical analyses of the F1 and F2 values of the vowels produced by native speakers in Section 3.1.1. The comparisons of native speakers and bilinguals for each language are presented separately in Sections 3.1.2 and 3.1.3 (Questions 1 and 2). Next, we compare the F1 and F2 of the vowels in the bilinguals' Mandarin and Cantonese in Section 3.1.4 (Question 3), which is followed by an interim summary of the F1 and F2 values. Finally, the Euclidean distances between the vowels are calculated and compared in Section 3.2.
F1 and F2 of the Vowels
An overview of the vowel production in Mandarin and Cantonese by the bilingual and monolingual speakers is plotted in Figure 1, in which the vowel letters represent the average F1 and F2 values of each vowel and the circles indicate approximately 67% of the vowel ellipses for each vowel category.
Vowel Production by Native Speakers
We first fitted models for the F1 and F2 of the vowels produced by native speakers of Mandarin and Cantonese. There was a main effect of vowel (χ²(2) = 2834, p < 0.001) but no main effect of language (χ²(1) = 1.46, p = 0.226) on F1, suggesting that the three vowels /a/, /i/ and /u/ were distinguishable in height, and that native speakers of Mandarin and Cantonese showed no height difference when producing these three pairs of vowels. For the F2 values, there were main effects of vowel (χ²(2) = 2265.8, p < 0.001) and language (χ²(1) = 48.316, p < 0.001), as well as an interaction of vowel and language (χ²(2) = 36.023, p < 0.001). Specifically, the vowels within a pair differed in the degree of backness: /i/ of both languages overlapped in backness (p = 0.115); /a/ was more back in Mandarin (p < 0.001) and /u/ was more back in Cantonese (p < 0.001).
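The χ² statistics reported in this section are consistent with likelihood-ratio tests of nested mixed models. A hedged Python sketch of the same style of comparison is given below (the authors used lme4 in R; here `formants_long.csv` is a hypothetical long-format table with columns F1, vowel, language and speaker, and the random-effects structure is simplified to a speaker intercept only):

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("formants_long.csv")  # hypothetical input file

# Nested models fitted by maximum likelihood (reml=False) so that the
# log-likelihoods are comparable across fixed-effects structures.
full = smf.mixedlm("F1 ~ vowel * language", df, groups=df["speaker"]).fit(reml=False)
no_int = smf.mixedlm("F1 ~ vowel + language", df, groups=df["speaker"]).fit(reml=False)

# Likelihood-ratio test for the vowel-by-language interaction:
chi2 = 2 * (full.llf - no_int.llf)
p = stats.chi2.sf(chi2, df=2)  # 2 df: (3 vowels - 1) x (2 languages - 1)
print(f"chi2(2) = {chi2:.1f}, p = {p:.3g}")
```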
Mandarin Vowel Production by Native Speakers and Bilinguals
Next, we compared the F1 and F2 of the Mandarin vowels produced by native speakers and bilinguals. With regard to the F1 values of the vowels in Mandarin, there were main effects of vowel (χ²(2) = 3348.2, p < 0.001) and speaker group (χ²(1) = 4.426, p = 0.035), as well as a two-way interaction of vowel and speaker group (χ²(2) = 54.933, p < 0.001). A post hoc analysis of the group effect showed that the native speakers of Mandarin had larger F1 values than the bilinguals, suggesting that the vowels produced by the native speakers were generally lower than those produced by the bilinguals. For the specific vowels, /a/ and /u/ were significantly lower for the native speakers (ps < 0.001), while the bilinguals and the native speakers had comparable F1 values for /i/ (p = 0.092).
For the F2 values of the vowels in Mandarin, there were main effects of vowel (χ²(2) = 2010.3, p < 0.001) and speaker group (χ²(1) = 6.853, p = 0.009), as well as a two-way interaction of vowel and speaker group (χ²(2) = 141.01, p < 0.001). Post hoc tests showed that the vowels /a/ and /u/ produced by native speakers were more back than those produced by the bilinguals (ps < 0.001). Conversely, the vowel /i/ was more back in the bilinguals' Mandarin (p < 0.001).
Cantonese Vowel Production by Native Speakers and Bilinguals
This subsection presents the analyses of the F1 and F2 of the Cantonese vowels produced by the native speakers and the bilinguals. For the F1 values of the vowels in Cantonese, there was a main effect of vowel (χ²(2) = 3677.5, p < 0.001) and a two-way interaction of vowel and speaker group (χ²(2) = 48.571, p < 0.001). A post hoc analysis showed that the native speakers of Cantonese had larger F1 values for /a/ and /u/ but smaller F1 values for /i/ compared to the bilinguals (ps < 0.001).
Vowel Production by Bilinguals
Lastly, we fitted models for the F1 and F2 of the Mandarin and Cantonese vowels produced by the bilinguals. For the F1 values, there was a main effect of vowel (χ²(2) = 2955.2, p < 0.001) but no effect of language (χ²(1) = 1.085, p = 0.298). The two-way interaction between vowel and language reached significance (χ²(2) = 21.619, p < 0.001). The vowels /a/ and /i/ shared similar F1 values in the bilinguals' Mandarin and Cantonese, but the vowel /u/ was produced higher in Mandarin than in Cantonese (p < 0.001).
For the F2 values, there were main effects of vowel (χ²(2) = 1330.8, p < 0.001) and language (χ²(1) = 38.829, p < 0.001), as well as a two-way interaction of vowel and language (χ²(2) = 44.186, p < 0.001). The post hoc analyses revealed no differences in the F2 values of the vowel /a/ between the two languages. However, the vowels /i/ and /u/ had higher F2 values in Mandarin than in Cantonese (ps < 0.001), suggesting that they were more back in the bilinguals' Cantonese.
Interim Summary
Table 1 provides a summary of the F1 and F2 statistics reported in this section. The native speakers did not show any difference in the formants of the vowel /i/, but the vowel /a/ was more back and the vowel /u/ was more front in Mandarin. The bilinguals differed from the native speakers in all three vowels, suggesting cross-linguistic influences from both Mandarin and Cantonese.
Euclidean Distances of the Vowels
As the native speakers of Mandarin and Cantonese only produced vowels in their respective language, it was impossible to compare the Euclidean distances of the vowels produced by each speaker directly. Instead, we calculated the Euclidean distances of the vowels produced by the native speakers based on the average F1 and F2 values of each speaker group. For the bilinguals, we calculated the distance of each vowel pair based on the average F1 and F2 values of the informant's Mandarin and Cantonese and listed the average distances and SDs.
The Euclidean distances of the vowels are presented in Table 2. For the native speakers, the distance between the Cantonese /i/ and the Mandarin /i/ was the smallest, but the bilinguals showed a much larger distance between their Cantonese /i/ and their Mandarin /i/. Both the native speakers and the bilinguals exhibited the largest distance for the vowel /u/. With regard to the vowel /a/, both groups showed a moderate distance.
Discussion
This study investigated the production of L1 and L2 vowels by Mandarin–Cantonese bilinguals in Hong Kong and addressed three research questions: (1) Are the L1 Mandarin vowels of the bilinguals influenced by Cantonese after years of immersion? (2) Have the bilinguals reached native-like competence in L2 Cantonese in terms of vowel production? (3) Are there any interactions between the bilinguals' L1 and L2 vowels?
With regard to the first research question, concerning Mandarin vowel production, the data suggested differences between the Mandarin vowels produced by the bilinguals and those produced by the native speakers. Specifically, in the bilinguals' production, the front vowel /i/ became more back, the back vowel /u/ became more front, and the low vowel /a/ became higher, suggesting a more crowded Mandarin vowel space for the bilinguals. In addition, the vowel /a/ was also more front for the bilinguals, moving it further from the same vowel produced by the native speakers. According to our analysis of the native speakers' production, the vowel /a/ was more back in Mandarin than in Cantonese. It is possible that, due to their extensive exposure to Cantonese, the bilinguals shifted their way of producing the Mandarin vowel /a/ towards that of the Cantonese vowel /a/, given the difference in the backness of the vowel /a/ in Cantonese and in Mandarin. In this case, the Mandarin /a/ was assimilated to the Cantonese /a/ in the bilinguals' phonetic space, resulting in only one merged category representing these two vowels.
Next, for the second research question, on Cantonese vowel acquisition, we demonstrated that the bilinguals had not successfully acquired the Cantonese vowels through immersion in the language. In the bilinguals' production, the vowel /i/ was more back and lower, and both the vowel /a/ and the vowel /u/ were higher compared to the vowels produced by the native Cantonese speakers. Note that the bilinguals were advanced Cantonese learners and their average length of residence in a Cantonese-speaking region was 7.41 years at the time of the recording. Despite their exposure to the target language, they had not yet succeeded in producing native-like vowel formants. This might be explained by the maturational constraints [35] or age effects [36] involved in language learning. It has been advocated that one should start to acquire an L2 as early as possible in order to attain native competence in the L2. The target group in this study consisted of late Mandarin–Cantonese bilinguals who had started to learn Cantonese after puberty (the average age of acquisition was 22.73), which may have prevented them from becoming fully successful learners of L2 Cantonese. However, as there is a lack of research on early Mandarin–Cantonese bilinguals' acquisition of Cantonese vowels, more data should be obtained before we can provide support for the claims regarding maturational constraints or age effects. Another possible direction for future research could be to investigate whether the L2 Cantonese speech was accented, given the observed differences in the vowel formants, which would contribute to our understanding of the source of foreign accent in L2 speech and the relationship between accentedness and acoustic distances [37].
As shown above, the vowels in the bilinguals' L1 Mandarin and L2 Cantonese generally deviated from the vowels in the corresponding native language. Our final research question concerned interactions between the L1 and L2 vowel systems, the evidence for which was abundant, because both category assimilation and dissimilation could be identified in the bilinguals' vowel production. The bilinguals' Cantonese /a/ showed no difference in backness compared to the vowel /a/ produced by the native Cantonese speakers, suggesting that the bilinguals appeared to have successfully formed an /a/ category in their L2 Cantonese. As suggested in [12], such L2 category formation is not an easy task for L2 speakers, because there is already a full-fledged L1 category in place, and the quality and quantity of L2 input usually vary. Moreover, in the bilinguals' production, the Mandarin /a/ was more front than the same vowel produced by the native Mandarin speakers; that is, the Mandarin /a/ produced by the bilinguals had shifted away from the native norms. Furthermore, in the bilinguals' own production of the vowel /a/, no distinction between Mandarin and Cantonese was made, indicating that the two vowel categories had merged into one. The question that then arises concerns which factors may have contributed to the bilinguals' L2 category formation and L1–L2 category merging. For the vowel /i/, the native speakers of Mandarin and Cantonese did not show any differences in the F1 or F2, and the Euclidean distance between the Mandarin /i/ and the Cantonese /i/ was extremely small. With regard to the bilinguals, their production of /i/ differed from the /i/ production of the two native groups, and their own Mandarin /i/ and Cantonese /i/ productions also differed from each other. As shown in Figure 1, the /i/ tokens produced
The findings of this study must be seen in light of some limitations. Firstly, in terms of the elicitation method, the vowels were produced in sentences as responses to precursor questions asked by the experimenter. It has been demonstrated that different elicitation methods influence how vowels are produced, leading to varying degrees of individual differences [38]. The issue of phonetic accommodation is also worth noting, because there is recent evidence that both the L1 and the L2 undergo phonetic accommodation [39], so the L1 and L2 vowels produced in this study may have been affected by the experimenter's vowels. Future studies could consider using the more traditional read-speech approach, which would reflect the participants' actual production. Secondly, the collected data were conversational in nature, and the phenomenon of vowel reduction is not uncommon in naturalistic speech [40,41], which may have had some negative impact on the quality of the vowel production. It is, therefore, necessary for future studies to include words in isolation [23,30] or to place the target words in focus position [42,43], so as to elicit vowels that are uttered more clearly. In addition, as a pilot study exploring L1 and L2 vowel interactions, this study had a relatively small sample size, with 3,165 tokens of three peripheral vowels in Cantonese and Mandarin. To gain a better understanding of L1 and L2 vowel trajectories, it would be useful to consider more vowels and to include other language pairs that are more typologically different.

Conclusions

In summary, this study investigated the vowel production of Mandarin-Cantonese bilinguals and revealed vowel category assimilation and dissimilation in the participants' L1 and L2 vowels, thus indicating interactions between their L1 and L2 vowel systems. In general, the findings are in line with the hypotheses of SLM and SLM-r in that L1-L2 phonetic interactions are inevitable, because there is a common phonetic space for storing the L1 and L2 phonetic categories, and learners always have the ability to adapt their phonetic space. Future studies should refine the data elicitation method, increase the sample size and include more language pairs to better understand L1 and L2 phonetic interactions.

Informed Consent Statement: Informed consent was obtained from all informants involved in the study.
Figure 1. F1 and F2 of the vowels produced by native speakers and bilinguals. (A,B) represent the Mandarin and Cantonese vowel production of the native speakers, respectively, while (C,D) show the Mandarin and Cantonese vowels produced by the immigrants. The circles indicate the 67% vowel ellipses for each vowel category.

Funding: The work described in this paper was partially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project Number: UGC/FDS15/H15/22) and an ASA International Student Grant from the Acoustical Society of America.

Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki, and approved by the Human Research Ethics Committee of Hong Kong Shue Yan University (Reference number: HREC 22-05 (M12); date of approval: 1 June 2022).

Table 1. Summary of the F1 and F2 statistics.

Table 2. Euclidean distances of the vowels produced by native speakers and bilinguals.
Depicting changes in land surface cover at Al-Hassa oasis of Saudi Arabia using remote sensing and GIS techniques

This study assessed the spatial and temporal variations of land cover in the agricultural areas of the Al-Hassa oasis, Kingdom of Saudi Arabia (KSA). A change detection technique was applied in order to classify variations among different surface cover aspects during three successive stages between 1985 and 2017 (i.e., 1985 to 1999 (14 years), 1999 to 2013 (14 years), and 2013 to 2017 (4 years)), using two scenarios. During the first stage, significant urban sprawl (i.e., 3,200 ha) occurred on bare lands within the old oasis, while only 590 ha of the oasis's vegetation area was occupied by urban cover. However, the final stage revealed rapid urban development (1,270 ha by 2017) within the oasis's vegetation region, while no urban sprawl occurred on bare lands (area of 1,900 ha, same as that in 1999-2013). Vegetation cover of around 1,000 ha changed to the bare soil class, in addition to the areas that were occupied by the urban class (1,700 ha in total). The study provides quantitative information on the influence of urban development on the spatial changes in vegetation cover of the oasis, especially during recent decades.

Introduction

Globally, approximately 1.2 million km2 of forests and woodlands and 5.6 million km2 of grassland and pasture areas have been transformed into other land use types within the last three centuries, as stated by Ramankutty and Foley [1]. Significant portions of the land surface have been transformed by humans: 10 to 15% is currently occupied by agricultural schemes or urban-industrial areas, and 6 to 8% has been converted into pasture lands [2]. Such alterations in land use cause significant impacts on the Earth's climate. Understanding how changes in land use affect land degradation requires a good understanding of the active human-environment interfaces related to land use change [3]. During the last decade, several methods to evaluate land cover changes have been proposed. These methods generate predictive models for land-use and land-cover (LULC) change. Land cover changes can be observed by comparing sequential land-cover maps. However, assessing the fine-scale changes of land-cover types involves studying landscapes where the surface characteristics vary at seasonal and inter-annual scales in space and time [4]. Satellites provide detailed information on biophysical surface characteristics such as biomass, vegetation cover, and landscape heterogeneity. Multi-temporal analyses of these characteristics, their spatial pattern, and their seasonal progression have led to a detailed understanding of land-cover change. A wide field of view from satellite sensors has revealed patterns of periodic disparities in land surface characteristics caused not only by change in land use but also by climatic variability. The global urban population has been increasing more rapidly than rural populations, especially in developing countries. Built-up areas occupy up to 3% of the Earth's land surface [5,6]. Studies have estimated that 1 to 2 million hectares of cropland are being converted every year in developing countries in order to satisfy the demands of infrastructure, housing, industry, and others [7]. Dhaka city in Bangladesh, for instance, witnessed a substantial intensification of built-up areas from 1975 to 2003, during which the built-up area increased by 10,554 ha, an average of about 400 ha year-1.
This tremendous expansion was attributed to a blend of environmental, geographical and socio-economic factors [8]. The city of Ajmer in India is another example of major urban sprawl, where expansion of the built-up areas reached 32 ha year-1 during 1977 to 2002, as stated by Jat [9]. LULC change is considered a key factor influencing global environmental change, particularly in arid and semi-arid regions where land and water resources are inadequate. Saudi Arabia has witnessed intense change over the last 30 years because of economic growth driven by the rise of the petroleum industry and a rapid increase of the urban population [10]. In a study of LULC in three major cities of KSA (Riyadh, Jeddah and Makkah) [11], the analysis revealed that urban area was the most altered surface cover, and that most of the surface converted to urban came from bare soil during the period from 1985 to 2014. Although a massive decrease in agricultural lands was observed in that analysis, most of the change in agricultural lands was to bare soils due to dwindling water resources. In contrast, in the Al-Hassa oasis of the Kingdom of Saudi Arabia, a tremendous increase in population has resulted in the creation of new surface features over the last few decades [12][13][14], with noticeable changes in the vegetation cover. The local authorities have tried to increase agricultural efficiency by increasing the areas under agriculture [15]. However, a lack of understanding, insufficient planning, and agricultural mishandling, apart from urban growth, have led to drastic changes in LULC [16]. This has caused a further degradation of the desert environment [17]. Abdelatti [14] indicated that urban growth in the Al-Hassa oasis decreased the area under cultivation from 33% in 2009 to 25% in 2017. The authors also affirmed that such urban growth in the area, without sound planning in future, would have negative implications for the local environment and the social life of the residents. However, no studies have been conducted to assess the extent of green cover that has been replaced by urban sprawl in the old oasis, versus the new extensions of green cover that have been established by the government. Remote sensing (RS) and geographical information system (GIS) techniques have proven to be useful tools to depict spatial and temporal changes in land cover at the Al-Hassa oasis [18][19][20][21][22]. In summary, understanding the nature of changes in surface cover, besides quantifying the losses from cultivated lands at the Al-Hassa oasis, is of great importance for the restoration and future rehabilitation of agricultural activities. Since the economic history of the Al-Hassa oasis is tightly associated with agricultural practices, with the oasis (in its old geometry) producing a considerable share of the dates in the Kingdom of Saudi Arabia (KSA), this study aimed to depict the spatial variations in the oasis's green cover, using two scenarios, in relation to urban sprawl over the past 32 years. Scenario (i) included the old oasis alongside the surrounding cities, the irrigation discharge lakes, and the newly embedded agricultural areas over the southern part of the oasis (i.e., the new oasis). In this scenario, we studied the quantitative share of the new agricultural areas that extended beyond the old oasis and were aimed at compensating for the degraded agricultural land inside it.
Scenario (ii) was applied over the old oasis only, in order to examine the actual change in vegetation cover (i.e., degradation) within this oasis with respect to the other classes of surface cover throughout the assessment period (i.e., the last 30 years). In this scenario, the environmental conditions, urban sprawl, water source degradation, and the population's social activities were assumed to have an influence on LULC changes. However, human and social impacts on the oasis were found to be the major factors.

Study area

This study was conducted at the Al-Hassa (i.e., Al-Ahsa) oasis, KSA. This area is considered the largest agricultural oasis in KSA and is probably the largest irrigated oasis globally [23]. This "L"-shaped oasis (Fig 1) is located about 45 km inland from the west coast of the Arabian Gulf, 150 km south-west of Dammam city and 320 km east of Riyadh city, the capital of KSA. It is situated at altitudes ranging from 160 m in the west to 130 m in the east above mean sea level. Within the oasis, there are 10 towns and 60 villages [12]. Data from a recent study by Abdelatti [14] showed that the population of the Al-Ahsa oasis increased from 445,000 in 1992 to 768,000 in 2016. According to the population census of 2010, the total population of the main cities (Hofuf and Mubarraz) was 660,788, which constituted 61.89% of the total province population and 16% of the total population of the eastern region; this was estimated to reach 768,500 by the end of 2016. The number of housing units in the province in 2017 was 149,905, representing 24.2% of the total units (618,628) in the eastern region (Statistics 2010). By 1994, around 4.4% of the buildings were made from mud and wood; by 2014, 70% of the houses had been converted into concrete and cement structures, 25% were made of bricks, 5% were made of stone, and there were no mud and wood houses [24]. The study area has a very gentle topography with little relief and a few surrounding ridges [25]. Active and mobile sand dunes characterize its surface, as the majority of the northern, eastern, and southern boundaries of Al-Hassa are located in the Al-Jafurah desert. Sand movement/drift, estimated at 3 m3/m, occurs from the north-west and north [24]. The sands surrounding the oasis are mobile in nature and have for many centuries been encroaching upon cultivated areas and endangering the oasis. This encroachment has been tackled with measures such as dune containment and tree plantations (about three million new plants). Economically, agriculture has been the major source of livelihood for the population. Agriculture in the oasis depends on the supply of water from numerous springs and underground sources. It has been reported by Rahman [26] that Al-Hassa is an important agricultural area for the eastern region of Saudi Arabia. The cultivated area consists of about 180 km2 of palm trees and oasis gardens [25]. The total area under cultivation within the oasis is approximately 80 km2, of which 92% is occupied by date palm [27]. Two regions of the Al-Hassa oasis were considered in this study. The first consists of the old oasis, historically well known for its groundwater abundance [28][29][30][31], which encouraged agricultural activities in the region. This part covers an area of 20,000 ha, of which about 8,200 ha is cultivated with various fruits, vegetables, and field crops. The main crop is the date palm, with an estimated 3 million trees covering 70% or more of the total cultivated area [32,33].
The soil of this old oasis is also fairly fertile and productive [34]. The suitable water and soil conditions in this area thus encouraged the Saudi government to launch an irrigation and drainage project in 1971, which was considered one of the most advanced water projects established in the country [35]. The project was based on a study produced by WAKUTI [28], with the aim of sustaining agricultural activities in the old oasis as well as extending its cultivated area to encompass its total area. This cultivated area consisted of about 25,000 small farms [35]. In the early 1970s, the Saudi government also established another agricultural zone in addition to the old Al-Hassa oasis. This new area (i.e., the new oasis) is located in the Al-Ghwaibah area, south-east of the old oasis. It consisted of several farms that were distributed to the citizens. These new farms were larger than those in the old oasis, each covering an area of 5 ha or more. The soil of this new area was affected by high salinity and calcium carbonate contents but was low in organic matter [36]. This area also lacked drainage systems, in contrast to the old oasis, where a drainage network was available that culminated in two evaporation lakes forming prominent water bodies. The first lake (Al-Uyon) is located in the north-east, while the second (Al-Asfer) is located to the east of the old oasis. The Al-Hassa oasis is classified as having a hyper-arid climate dominated by severely hot and dry conditions, causing the pan-evaporation (2000 mm yr-1) to significantly exceed the annual rainfall (80-90 mm), with average temperatures fluctuating between 38°C in summer and 15°C in winter [37].

Data collection and processing

As the oasis has witnessed drastic change at the environmental, social, ecological, and demographic levels throughout the last 30 years, an area of about 1,500 km2 over the Al-Hassa oasis, including its cities and surrounding suburbs, was masked and selected for change detection analysis, as shown in Fig 1. Four cloud-free satellite images from the Landsat series were acquired for the assessment period (1985 to 2017), with a spatial resolution of 30 m (Table 1). These spatial data sets were obtained from the archives of the USGS Earth Explorer website (http://earthexplorer.usgs.gov/) and calibrated using the data-specific utilities of the ENVI (Ver. 5.3) software, where the image's digital numbers were transformed into spectral radiance (Lλ). Subsequently, reflectance images were generated from the radiance pixels. Atmospheric correction tools such as dark object removal, haze removal, and cloud masking were used to correct the sensor radiance for atmospheric effects using the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) module. FLAASH is a physics-based method for atmospheric correction that employs temporal and spatial metadata to develop a radiative transfer model using MODTRAN 4 [38]. Image enhancement and linear histogram stretching were also performed. Exo-atmospheric reflectance (reflectance above the atmosphere) was computed using the published post-launch gain values in ENVI; the gain is multiplied by the pixel value to scale it into physically meaningful units of radiance: Radiance = DN × gain + offset, where gain and offset are the sensor-specific calibration coefficients. The Lλ was calculated using the calibration coefficients from the metadata of the acquired image.
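As a rough sketch of this calibration step, the snippet below applies the linear DN-to-radiance scaling quoted above and then converts radiance to top-of-atmosphere reflectance using the standard Landsat handbook formula; the gain, offset, ESUN and solar-elevation values are illustrative placeholders, not the coefficients actually read from the scene metadata.

```python
import numpy as np

def dn_to_radiance(dn, gain, offset):
    """Scale raw digital numbers to at-sensor spectral radiance: L = DN * gain + offset."""
    return dn.astype(np.float64) * gain + offset

def radiance_to_toa_reflectance(radiance, esun, d_au, sun_elev_deg):
    """Convert at-sensor radiance to exo-atmospheric (TOA) reflectance:
    rho = pi * L * d^2 / (ESUN * cos(solar zenith))."""
    theta_s = np.deg2rad(90.0 - sun_elev_deg)   # solar zenith angle
    return (np.pi * radiance * d_au ** 2) / (esun * np.cos(theta_s))

# Illustrative values only -- real gain/offset/ESUN/sun elevation come from the metadata.
dn = np.array([[60, 75], [90, 120]], dtype=np.uint8)
radiance = dn_to_radiance(dn, gain=0.766, offset=-2.29)
reflectance = radiance_to_toa_reflectance(radiance, esun=1554.0, d_au=1.0, sun_elev_deg=55.0)
```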
Reflectance values of the images were then determined from the obtained radiance values. In the pre-processing stage, image enhancement was conducted in order to improve the contrast between features in the images and to improve the visual interpretation of surface features. This involved manipulating the range of input digital values to create a new range of output values. To account for possible atmospheric attenuation, the FLAASH model was applied; it has been found capable of producing highly precise surface reflectance values, although it requires significant user input [38]. In scenario (i), a subset of 35 km by 50 km was masked over the entire Al-Hassa area. Scenario (ii), on the other hand, was represented by a subset mask of 10 km by 20 km, confined around the old oasis boundaries. Hence, only four surface cover classes occupied this area, namely: the vegetation cover (represented by the date palm trees), the urban area, the bare lands, and the sand dunes.

Image classification

The acquired images were processed using supervised classification, and five basic class types were determined in scenario (i), namely: vegetation cover, urban area, bare lands, sand dunes, and water bodies. The water bodies class was not included in scenario (ii), as no water body was located within the borders of that scenario. Training and testing sites were selected visually from the images, assisted by a high spatial resolution basemap (0.6 m) provided by the ArcGIS (10.5) software. An image processing software system (ENVI 5.3) was then utilized to produce a statistical description of the reflectance for every information class. This phase is usually known as "signature analysis"; it characterizes the mean and range of reflectance in each band, as well as the variances and covariances across all bands. Once the statistical characterization of each information class had been obtained, the images were classified by analyzing the reflectance of every pixel and assigning it to the class whose signature it most closely matched. The "maximum likelihood" classifier was applied in this study. This is a supervised classification technique derived from Bayes' theorem, which employs a discriminant function to assign each pixel to the class with the highest likelihood [39]. This classifier is considered to provide better outcomes than other types, such as the parallelepiped and minimum distance classifiers; however, it is significantly slower due to the extra calculations involved. Training and testing sites of the five classes (developed in the classification scheme) were digitized as areas of interest (AOI), producing the five identified regions based on their spectral signatures. The assessment of classification accuracy was then carried out using the testing points, which were extracted randomly (using the randomizer in ENVI 5.3) from all the points, with 40% of the points used for testing and 60% for training.
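To make the classification step concrete, the following is a minimal sketch of a Gaussian maximum likelihood classifier of the kind described above; it is not the ENVI implementation, and the equal-prior assumption, function names and array shapes are choices made here for illustration.

```python
import numpy as np

def train_ml_classifier(training_samples):
    """training_samples: dict mapping class name -> (n_pixels, n_bands) array
    of training reflectances. Returns per-class Gaussian signature statistics."""
    stats = {}
    for name, X in training_samples.items():
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        sign, log_det = np.linalg.slogdet(cov)
        stats[name] = (mu, np.linalg.inv(cov), log_det)
    return stats

def classify_ml(pixels, stats):
    """Assign each pixel (n_pixels, n_bands) to the class maximizing the Gaussian
    log-likelihood discriminant g_k(x) = -0.5 * (ln|C_k| + (x - mu_k)' C_k^-1 (x - mu_k)),
    assuming equal prior probabilities for all classes."""
    names = list(stats)
    scores = np.empty((pixels.shape[0], len(names)))
    for j, name in enumerate(names):
        mu, inv_cov, log_det = stats[name]
        d = pixels - mu
        mahalanobis = np.einsum('ij,jk,ik->i', d, inv_cov, d)
        scores[:, j] = -0.5 * (log_det + mahalanobis)
    return np.array(names)[scores.argmax(axis=1)]
```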
Accuracy assessment

A confusion matrix (also known as an error matrix) is typically used as a numerical technique for portraying the accuracy of a classified image. It is set out in a tabular form that illustrates the correspondence between the result of the classification process and a reference image. In order to generate the confusion matrix, ground truth data, such as field observations documented with a GPS, map information, or a digitized image, are needed. The kappa coefficient is an important measure of classification agreement. When the kappa coefficient is 0, there is no similarity between the classified image and the reference image; if the value equals 1, the classified image and the reference image are completely identical. Thus, a higher kappa coefficient indicates a more accurate classification [40]. In order to assess the accuracy of the classification, classification errors were identified, namely errors of omission and commission. For a given class, errors of commission occur when the classification process allocates to that class pixels that actually do not belong to it; the extent of commission errors is summarized by the user's accuracy, in which the number of correctly identified pixels of a class is divided by the total number of pixels assigned to that class. Errors of omission, on the other hand, arise when pixels that actually belong to one class are classified as some other class; the extent of omission errors is summarized by the producer's accuracy, in which the number of correctly identified pixels of a class is divided by the total number of reference pixels of that class [40].

Change detection

In the post-classification process, the image differencing technique was applied to each pair of classified images. This technique uses change detection statistics to provide a detailed tabulation of changes between the two classified images. The statistical report includes a class-for-class image difference. The analysis focuses primarily on the initial-state classification changes: for each initial-state class, the analysis identifies the classes into which the corresponding pixels changed in the final-state image [41]. ENVI 5.3 can report changes as pixel counts, percentages, and areas. In addition, a special type of mask image (classification masks) can also be produced so as to provide a spatial context for the tabular report. The class masks are classification images with class colors matching the final-state image, making it easy to identify not only where the changes have occurred but also the class into which the pixels have changed (assisting catalog of ENVI 5.3). The flow chart shown below (Fig 2) represents the procedure followed for satellite (Landsat series) image acquisition, preprocessing, classification, and the application of change detection techniques. Samples for training and testing were masked using a high spatial resolution base map (0.6 m) provided with the software packages within ArcGIS 10.5. Verification of the distinguished ground locations was achieved through the local knowledge of the authors in visually interpreting features on the base map as well as on the processed images. Table 2 shows the resulting confusion matrix, obtained using the pre-delineated ground truth region of interest (ROI) tools in order to compute the classification accuracy metrics. ENVI 5.3 was utilized for this purpose; the table columns represent the percentages of the true (ground truth) classes, whereas the rows signify the percentages of the classifier's predictions. This analysis was done for scenario (ii) only, because the dynamic nature of the surface cover at the old oasis, observed in the form of urban development, made it the focal point of the study.
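A minimal sketch of how these accuracy metrics can be derived from a confusion matrix is given below; it assumes a pixel-count matrix whose columns correspond to the reference (ground truth) classes and whose rows correspond to the predicted classes, matching the table convention described above.

```python
import numpy as np

def accuracy_metrics(cm):
    """cm[i, j]: number of pixels of reference class j predicted as class i."""
    total = cm.sum()
    overall = np.trace(cm) / total
    producers = np.diag(cm) / cm.sum(axis=0)   # 1 - omission error, per reference class
    users = np.diag(cm) / cm.sum(axis=1)       # 1 - commission error, per mapped class
    # Kappa: observed agreement relative to the agreement expected by chance,
    # with chance agreement estimated from the row and column marginals.
    chance = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2
    kappa = (overall - chance) / (1.0 - chance)
    return overall, producers, users, kappa

# Toy 2-class example (e.g., vegetation vs. urban), pixel counts:
cm = np.array([[95.0, 3.0],
               [5.0, 97.0]])
print(accuracy_metrics(cm))
```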
Confusion matrix

The overall accuracies of the surface cover classification were found to be 97.6%, 100%, 97.8%, and 98.7% for the urban area, vegetation cover, bare soil, and sand dune classes, respectively. This indicates a high similarity between the classified and reference data, especially for the year 1999. During the error evaluation, producer and user accuracies above 94% were achieved, with a kappa coefficient of more than 0.96 for all classified images.

Classification statistics

The summary statistics of the areas of each surface cover class (ha) under scenarios (i) and (ii) throughout the analyzed periods (i.e., 1985, 1999, 2013, and 2017) are presented in Tables 3 and 4, respectively. The range value (ha) for each class was also produced as the difference between the early state (1985) and the later state (2017), in order to reveal the final state of each class. The resulting ranges showed that the urban area class produced the highest change in surface cover (347.29%). Scenario (i) shows that the sand dunes class was the largest and most dominant among the classes (Table 3). Its area ranged from 125,997.12 to 114,475.68 ha between 1985 and 2017, with a noticeable fluctuation during the analyzed period, though with no apparent trend. Regardless of this fluctuation over the whole period, the classified maps showed that the area of this class declined by nearly 11,521.38 ha (i.e., -9.14%). In addition, scenario (i) shows that the area of the bare land class was the second highest, followed by the vegetation class (Table 3). The water bodies class, represented by the agricultural drainage water evaporation lakes, occupied only a small portion of the surface cover of the new oasis (Table 3). However, it exhibited a noticeable increase (~100%) in area, rising from 601.38 ha to 1,235.61 ha between 1985 and 2017, and reached its maximum value in 1999 (1,508.94 ha). Finally, it is worth mentioning from scenario (i) of the new oasis that the sand dunes and bare lands classes were the most dominant in terms of area, which reflects the geographical identity of the area; together these two classes represented 87% of the total area (Table 3). Although scenario (ii) was applied in order to examine the actual change in vegetation cover within the old oasis (only) with respect to the other classes of surface cover, both the sand dunes and bare soil classes covered most of the old oasis surface (73.81%) (Table 4). The area of vegetation cover in scenario (ii) was the third largest among the classes (Table 4). It showed a loss in area from 9,095 ha in 1985 to 8,472 ha in 2017 (-6.85%). This trend is opposite to that of scenario (i), where there was an increase of +17.44% over the same period (Table 3). This implies that the actual loss in the vegetation class occurred in the old oasis of Al-Hassa. The loss in this class in scenario (ii), however, corresponded with a huge gain in the area of the urban class, which increased from 4,427 ha in 1985 to 10,654 ha in 2017, gaining about 6,218 ha (+136.39%) (Table 4). The urban class showed an increasing trend throughout the study period (1985 to 2017) in both scenarios. A major part of this sprawl occurred in the new oasis, as verified from Figs 2 and 3. This increase reflects the continuous growth of the population and their endeavor to settle within the green spots, alongside some other social and economic considerations [12,14,42].
The percentage spatial change in each class is presented in Fig 5A for scenario (i) and Fig 5B for scenario (ii). The primary Y-axis (on the left-hand side) represents the percentage scale for the vegetation, urban area, and water bodies classes, while the secondary Y-axis (on the right-hand side) shows the percentage scale for the bare land and sand dunes classes. From both figures, it can be observed that there were drastic changes in the urban area class throughout the yearly time series, while the vegetation cover class showed some increase during 1985 to 1999 and then started decreasing, particularly in the old oasis. Also, an acute decrease in the area of sand dunes can be noticed during the year 1999, which could be attributed to the land conservation practices that were applied to the north of the old oasis [23,25,43]. Referring to the percentages shown in Fig 5B, it can be concluded that the urban area class underwent a drastic change, as 16% of the entire oasis land was occupied by this class by 2017, compared to 6% in 1985. Also, a slow expansion occurred in vegetation cover during 1985 to 1999 in scenario (i). Yet a continuous decrease in the vegetation cover area, estimated at 4% of the total oasis area, was noticed during 1999 to 2017, with a simultaneous expansion of the urban area within the boundary limits of the old oasis (Fig 3). However, a major portion of the urban area expansion could not be included in the relative changes, as this area was located outside the designated oasis boundary limits. Thus, the change detection technique was used to quantitatively assess the factors affecting vegetation area losses.

Change detection

In order to assess the quantitative gain/loss of area in the old oasis, the study area from scenario (ii) was used for detecting the changes in vegetation cover. Hence, the study focused on detecting changes in the urban area and vegetation cover classes using image differencing. The change was computed by applying a segmental subtraction for each pair of classified images representing specific dates, as sketched below. Three successive periods were used for the change analysis: 1985 to 1999 (14 years), 1999 to 2013 (14 years), and 2013 to 2017 (4 years).
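The class-for-class tabulation underlying this kind of post-classification comparison can be sketched as follows; this is a generic cross-tabulation of two classified rasters, not the ENVI routine itself, and the 30 m pixel size is the Landsat value quoted earlier.

```python
import numpy as np

def change_matrix(initial, final, n_classes):
    """Class-for-class change statistics between two classified images.
    Entry [i, j] counts the pixels of initial-state class i that belong
    to class j in the final-state image."""
    pairs = initial.ravel().astype(np.int64) * n_classes + final.ravel()
    counts = np.bincount(pairs, minlength=n_classes * n_classes)
    return counts.reshape(n_classes, n_classes)

def to_hectares(count_matrix, pixel_size_m=30.0):
    """Convert pixel counts to hectares (one 30 m Landsat pixel = 0.09 ha)."""
    return count_matrix * (pixel_size_m ** 2) / 10_000.0

# Toy example with class codes 0..3 (e.g., vegetation, urban, bare, dunes):
img_1985 = np.array([[0, 0, 2], [2, 3, 1]])
img_1999 = np.array([[0, 1, 1], [2, 3, 1]])
print(to_hectares(change_matrix(img_1985, img_1999, n_classes=4)))
```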
The values of the gain/loss in area (in ha) are given for the urban area (Fig 6A) and the vegetation cover (Fig 6B). From Fig 6A, it can be seen that during the first period (1985 to 1999), almost all urban sprawl occurred upon the bare land class that used to be vacant within the oasis, estimated at 3,200 ha, whilst only 590 ha of the oasis vegetation area was occupied by the urban class, in addition to a few hectares (87 ha) of sand dunes that were converted into urban areas. The second period (1999 to 2013) showed a gradual decrease in urban sprawl over bare lands, where 1,900 ha of bare lands were occupied by urban areas. However, the final period (2013 to 2017) witnessed a different pattern of change, in which urban development took place at a rapid pace (1,270 ha) over the oasis's vegetation area, while no increase in urban sprawl took place over bare lands (1,900 ha, the same as during 1999 to 2013). As shown in Fig 6B, the analysis of the oasis vegetation cover showed that a significant increase in vegetation cover occurred over the bare lands during 1985 to 1999, estimated at 1,560 ha, compared to 176 ha over sandy soils. However, no noticeable increase in vegetation cover occurred during 1999 to 2017, except for a few hectares (80 ha) over the sand dunes. Instead, the vegetation cover class lost an area of around 1,000 ha to the bare soil class in total, in addition to the areas that were occupied by the urban class (1,700 ha in total), as indicated in Fig 6A.

Discussion

In spite of the spatial/temporal variation in the bare land and sand dunes classes, the analysis showed that bare land and sand dunes were the dominant land cover classes in both scenarios, the old oasis (40.66% and 33.15%, respectively) and the new one (32.42% and 54.35%, respectively). This concurs with the geographic location of the oasis, which is surrounded by deserts featuring shifting sand dunes [23] that present a long-lasting ingress of sand onto cultivated fields [13,31,43], despite the efforts devoted by the Saudi government to control the movement of these dunes toward the oasis [43]. The dominance of the sand dunes class in the Al-Hassa oasis was previously reported by Salih [25], who also indicated that sand dunes are the dominant land cover in the oasis, occupying up to 70% of the area, compared to the other classes, including water bodies, Sabkha, bare soil, urban, and agriculture. This is also supported by other studies indicating that the Al-Hassa oasis is located in a desert area featuring shifting sand dunes [23,25,43]. Furthermore, the results showed that a big portion of the urban sprawl (3,200 ha) during the first stage (1985-1999) occurred over the bare land class that used to be vacant within the old oasis, while only 590 ha of the oasis's vegetation area was occupied by the urban class. This is in line with the findings of the study in [11] of three example locations in the Kingdom of Saudi Arabia (Riyadh, Jeddah and Makkah). The final period, however, witnessed a different direction of change, where 1,270 ha of the urban class took over the oasis's vegetation area with no urban sprawl upon bare soils (1,900 ha, same as 1999-2013), unlike most of the studies conducted in the surrounding regions. This could be attributed to social activities, which highlight the cultural landscape with components of natural heritage in the agricultural practices. These new surface features in the form of urban components, caused by the tremendous increase in population (accompanied by a lack of bare lands within the old oasis) during the last few decades [12][13][14], appeared at the expense of the vegetation cover. This finding agrees with a regional LULC analysis conducted in Morocco [44], which stated that, because of anthropogenic activities like urban sprawl, overgrazing, and degradation of the forest sector in the Béni-Mellal District during 2002 to 2016, the agricultural and forest surface covers decreased by 40.64% and 53.85%, respectively. In contrast, there was some expansion in the green area during 1985 to 1999, which might be due to the inclusion of cultivated projects located in the northern part of the old oasis. Abdelatti [14] emphasized the threat of urban growth in the Al-Hassa oasis to the local environment. The authors also argued that such urban growth in the area, without sound planning in future, would impose negative implications on the local environment and social life. Urbanization has indirectly affected agriculture as a consequence of development in the KSA. The exploitation of oil has resulted in an increase in immigration to urban areas, principally in the Eastern Province.
Although the oil business has created direct employment opportunities at the oilfields, unintended employment opportunities have also arisen in cities all over the country. The study by Al Jabr [43] showed that the high rate of urban development was due to the rapid rise in immigration to the urban areas, in addition to the growth of their own populations. Further, the economic growth of the old cities offered various non-agricultural jobs that paid higher salaries than those of the agricultural sector; hence, people shifted away from employment in the traditional agricultural sectors. Furthermore, towns and villages (including the oasis) have been aided by the government in terms of housing loans and other related types of assistance. The interaction between climatic and LULC factors has had a vital influence on ecosystem progression, as stated in previous studies [45,46], indicating that changes in LULC have been a result of the direct environmental influence of economic liberalization and globalization.

Anthropogenic impact assessment

The main cause of urban sprawl in the Al-Hassa oasis can ultimately be summarized as economic factors, where changes in LULC have arisen from the response of the population to the new economic situation. Hence, the urban-vegetation relationship is important, as the development of agricultural practices in the oasis has converted the agriculture-based nature of the oasis into a cultural landscape with components of natural heritage, causing a severe invasion of urban features into the green land cover class. These factors had a direct influence on land management. Environmental/climatic conditions should also be taken into account, as the interaction between climatic and LULC factors has a significant influence on ecosystem progression, as stated by Zeng [45]. Barbier [46] indicated that economic liberalization and globalization have had a major and direct influence on changes in LULC. Alghannam [26] showed, from a study conducted in the Al-Hassa oasis, that increasing vegetation cover is an effective way to cool urban areas, save energy, and improve the urban environment. Supporting this, several studies have revealed that strong and uneven urban growth increases land surface temperatures (LST) in newly urbanized areas [47,48]. Buyadi [49] also suggested that different LULC types have different LSTs, which are significantly influenced by the vegetation cover. LULC is a consequence of human activities engaged with global environmental changes, as noted by Erb [50], who also proposed that land use is a prime component of the interactions between society and nature that lead to changes in terrestrial ecosystems. Similar findings regarding LULC in the Al-Hassa oasis were also reported by other researchers [14,18,51]. It is possible that the demographic changes caused by the expansion of urbanization will eventually lead to a degradation of the fertility of the land [52]. This study found that the LULC change brought about by the population growth in the Al-Hassa oasis would have negative impacts on the climate of the area. Further, the study revealed that the new expansions within the new oasis will not be able to compensate for the environmental equilibrium that has been lost in the old oasis.

Conclusion

Remote sensing and GIS techniques were found to be useful tools to depict the spatial and temporal changes in land cover at the Al-Hassa oasis.
This is consistent with the findings of many other studies, which emphasize that understanding the nature of surface cover changes, besides quantifying the losses from cultivated lands, is of great importance for the restoration and future rehabilitation of agriculture. The economic impact resulting from vegetation degradation was not within the scope of this study, due to the lack of relevant data. Likewise, changes in the hydrogeological conditions of both areas throughout the study period, and their effect on the biosystem and the oasis microclimate, were not included as forcing elements in the oasis's demography. Rather, the study was intended to highlight and quantify the influence of urban sprawl on the green cover, which was well observed in the obtained results. The following conclusions are inferred from the study:

• A change detection technique was applied in order to classify variations among different surface cover aspects during three successive stages between 1985 and 2017, using two scenarios.

• During the first stage, significant urban sprawl (i.e., 3,200 ha) occurred on bare lands within the old oasis, while only 590 ha of the oasis's vegetation area was occupied by urban cover.

• Unlike the first stage, the final stage revealed rapid urban development (1,270 ha by 2017) within the oasis's vegetation region, while no urban sprawl occurred on bare lands (area of 1,900 ha, same as that in 1999-2013).

• The study provides quantitative information on the influence of urban development on the spatial changes in vegetation cover of the oasis, especially during recent decades.

Supporting information S1
Indirect unitarity violation entangled with matter effects in reactor antineutrino oscillations

If the finite but tiny masses of the three active neutrinos are generated via the canonical seesaw mechanism with three heavy sterile neutrinos, the 3×3 Pontecorvo-Maki-Nakagawa-Sakata neutrino mixing matrix V will not be exactly unitary. This kind of indirect unitarity violation can be probed in a precision reactor antineutrino oscillation experiment, but it may be entangled with terrestrial matter effects as both of them are very small. We calculate the probability of ν̄e → ν̄e oscillations in a good analytical approximation, and find that, besides the zero-distance effect, the effect of unitarity violation is always smaller than matter effects, and their entanglement does not appear until the next-to-leading-order oscillating terms are taken into account. Given a 20-kiloton JUNO-like liquid scintillator detector, we reaffirm that terrestrial matter effects should not be neglected but indirect unitarity violation makes no difference, and demonstrate that the experimental sensitivities to the neutrino mass ordering and a precision measurement of θ12 and ∆21 ≡ m2² − m1² are robust.

Introduction

Experimental neutrino physics is entering the era of precision measurements, in which some fundamental questions about the properties of massive neutrinos will hopefully be answered. One of the burning issues is whether there exist some extra (sterile) neutrino species which do not directly participate in the standard weak interactions. Such hypothetical neutrinos are well motivated in the canonical (type-I) seesaw mechanism [1,2,3,4,5,6,7], which works at a high energy scale far above the electroweak symmetry breaking scale: it can naturally generate finite but tiny Majorana masses for the standard-model neutrinos (i.e., the mass eigenstates ν1, ν2 and ν3 corresponding to the flavor eigenstates νe, νµ and ντ) and interpret the observed matter-antimatter asymmetry of the Universe via the canonical leptogenesis mechanism [8]. Assuming the existence of three heavy sterile neutrinos in this seesaw picture, one may write out the standard weak charged-current interactions in terms of the mass eigenstates of the three charged leptons and six neutrinos as follows:

−Lcc = (g/√2) ( ē µ̄ τ̄ )L γ^µ [ V (ν1, ν2, ν3)ᵀL + R (ν4, ν5, ν6)ᵀL ] W⁻µ + h.c. ,

where ν4, ν5 and ν6 stand for the three heavy-neutrino mass eigenstates, V is the 3×3 Pontecorvo-Maki-Nakagawa-Sakata (PMNS) flavor mixing matrix [14,15], and R is a 3×3 matrix describing the strength of flavor mixing between (e, µ, τ) and (ν4, ν5, ν6). Because V V† = 1 − RR† holds [16], where 1 denotes the identity matrix, the PMNS matrix V is not exactly unitary. Following the full angle-phase parametrization of the whole 6×6 neutrino mixing matrix advocated in Refs. [17,18], and taking account of the fact that all the mixing angles appearing in R must be very small, it is convenient to express V as V = (1 − κ)U, in which U is unitary and

κij = ŝ*i4 ŝj4 + ŝ*i5 ŝj5 + ŝ*i6 ŝj6 for i ≥ j = 1, 2, 3.

Here the notations cij ≡ cos θij, sij ≡ sin θij and ŝij ≡ sij exp(iδij) have been used, where θij and δij are the rotation and phase angles, respectively.
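As a quick numerical illustration of this parametrization (a sketch written for this text, not taken from the paper), the snippet below builds κ from assumed active-sterile angles and phases and checks that angles at the 7.5° level indeed give |κij| of about 0.05, the conservative bound quoted in the next paragraph.

```python
import numpy as np

def kappa_matrix(theta_deg, delta_rad):
    """kappa_ij = sum_k shat*_ik shat_jk over k = 4, 5, 6, with
    shat_ik = sin(theta_ik) * exp(1j * delta_ik); defined for i >= j,
    so the upper triangle is set to zero in this parametrization."""
    s_hat = np.sin(np.deg2rad(theta_deg)) * np.exp(1j * delta_rad)  # rows i = 1..3, cols k = 4..6
    return np.tril(np.conj(s_hat) @ s_hat.T)

# All nine active-sterile angles at the conservative upper limit of 7.5 degrees:
theta = np.full((3, 3), 7.5)
delta = np.zeros((3, 3))
kappa = kappa_matrix(theta, delta)
print(np.abs(kappa).max())   # ~0.051, i.e. |kappa_ij| of order 0.05
```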
It is obvious that nonzero κij arise from the small mixing between the light and heavy neutrino states described by θij (for i = 1, 2, 3 and j = 4, 5, 6), and therefore they measure the deviation of V from U, i.e., the effect of indirect unitarity violation (UV) caused by the heavy degrees of freedom which do not directly take part in the low-energy lepton-flavor-violating processes, such as neutrino oscillations. The current limits on the indirect UV effect can be found in Refs. [19,20,21,22], where the elements of |V V†| = |(1 − κ)(1 − κ†)| are constrained from the electroweak precision observables, low-energy weak measurements and the neutrino oscillation data. A typical and conservative expectation is that the magnitude of κij should be smaller than 0.05, which indicates that the active-sterile mixing angles θij (for i = 1, 2, 3 and j = 4, 5, 6) can be taken as large as 7.5°. So far a lot of attention has been paid to possible effects of indirect UV in the accelerator-based long-baseline neutrino oscillation experiments [19,23,24,25,26,27,28,29,30], and limited attention has also been given to this kind of effect in reactor-based antineutrino oscillation experiments [31,32,33]. It is already known that the UV-induced "zero-distance effect" must appear in the "disappearance" oscillation probability P(να → να) (for α = e, µ, τ) [31,34], for example, but extracting this small effect is extremely difficult even with a near detector, because the uncertainties associated with the reactor antineutrino flux are expected to be overwhelming, considering the reactor antineutrino anomaly and the spectral features of the reactor antineutrino fluxes at around 5 MeV. In this case one may wonder whether the oscillating terms of P(ν̄e → ν̄e) can provide some information about the indirect UV or not. As pointed out in Refs. [36,37,38], terrestrial matter effects should not be neglected in a JUNO-like reactor antineutrino oscillation experiment with the baseline length L ≃ 53 km [35], since their strength is essentially comparable with the experimental sensitivity to the neutrino mass ordering. Two natural and meaningful questions turn out to be: (a) how the indirect UV effect is entangled with matter effects in ν̄e → ν̄e oscillations; (b) whether they can be distinguished from each other. The main purpose of the present work is just to answer these two questions. The remaining parts of this paper are organized as follows. In Section 2 we derive the analytical expression of P(ν̄e → ν̄e) by including both indirect UV and terrestrial matter effects and making a good approximation for the antineutrino beam energy of a few MeV [39,40]. Section 3 is devoted to some numerical simulations based on the setup of a JUNO-like detector in order to answer the above two questions. We find that the indirect UV effect is always smaller than terrestrial matter effects, and that their entanglement does not appear until the next-to-leading-order oscillating terms are taken into account. We summarize our main results in Section 4 with two concluding remarks: (a) indirect UV makes no difference in the JUNO-like experiment; (b) such an experiment's sensitivities to the neutrino mass ordering and a precision measurement of θ12 and ∆21 ≡ m2² − m1² are robust.

2 Analytical approximations of P(ν̄e → ν̄e)

Of course, the three heavy sterile neutrinos are kinematically forbidden to take part in neutrino oscillations in any realistic accelerator- or reactor-based experiments.
Given the indirect UV effect hidden in the PMNS matrix V, the effective Hamiltonian describing the propagation of the antineutrino mass eigenstates in matter with a constant density profile can be written in the form of Eq. (4), where Ei ≃ E + mi²/(2E), with E and mi being the beam energy and the masses of the antineutrinos respectively (for i = 1, 2, 3), GF denotes the Fermi constant, and Ne and Nn stand respectively for the electron and neutron densities in matter. It is clear that the neutral-current-induced coherent forward scattering effect (described by Nn) becomes trivial and negligible if V is exactly unitary. Now this effect, together with the charged-current-induced coherent forward scattering effect (described by Ne and only sensitive to the e-flavored neutrinos and antineutrinos), constitutes the terrestrial matter effect and can thus modify the behavior of antineutrino oscillations. Note that in Eq. (4) and throughout this paper we denote all the quantities in matter with tilde hats as the counterparts of the corresponding vacuum quantities in the indirect UV framework. We begin with the useful formula of the matter-modified antineutrino oscillation probability P(ν̄e → ν̄e) derived by Kimura, Takamura and Yokomakura (KTY) [41,42] and take account of the indirect UV effect [43], arriving at Eq. (5), where ∆Ẽjk ≡ Ẽj − Ẽk, L denotes the baseline length and X̃ ee j ≡ (V*W̃)ej (VW̃*)ej (for j, k = 1, 2, 3), with Ẽi being the eigenvalues of H̃ and W̃ the unitary matrix which diagonalizes H̃ (i.e., W̃†H̃W̃ = Diag{Ẽ1, Ẽ2, Ẽ3}). To be explicit, X̃ ee j can be written as in Eq. (6), in which the quantities Ñij and Ỹ ee k = (V*H̃^(k−1)Vᵀ)ee enter. Since the X̃ ee j are real and ∆Ẽij = ∆̃ij/(2E), with ∆̃ij ≡ m̃i² − m̃j², the expression of P(ν̄e → ν̄e) in Eq. (5) can be rewritten as in Eq. (8), where X̂ ee i ≡ X̃ ee i/(V V†)ee (for i = 1, 2, 3), and F̃ij = 1267 × ∆̃ij L/E, with ∆̃ij in units of eV², L in units of km and E in units of MeV (for ij = 21, 31, 32). It is easy to check that X̂ ee 1 + X̂ ee 2 + X̂ ee 3 = 1 holds. In the absence of both UV and matter effects, one is therefore left with X̂ ee i = |Uei|², depending only on θ12 and θ13. The above equations tell us that once the eigenvalues Ẽi are figured out, it will be straightforward to obtain the explicit expression of P(ν̄e → ν̄e). Since the antineutrino beam energy E is only around a few MeV, one may calculate the eigenvalues of H̃ by expanding them in terms of the small parameters built from the vacuum mass-squared differences ∆ij ≡ mi² − mj² (for ij = 21, 31, 32) and the small elements of κ. It is certainly a very good approximation to take Ne ≃ Nn in reality, so β ≃ 2γ = A/∆31, with A ≡ 2√2 GF Ne E being a common matter parameter. Given A ∼ 1.52 × 10⁻⁴ eV² Ye (ρ/(g/cm³))(E/GeV) ≃ 1.98 × 10⁻⁴ eV² (E/GeV) for ρ ≃ 2.6 g/cm³ and E ∼ 4 MeV in reactor antineutrino experiments, β and γ are actually much smaller than α in magnitude; here the "±" signs of ∆31 stand for the normal mass ordering (NMO) and inverted mass ordering (IMO) of the three neutrinos, respectively. It is clear that β ∼ γ ∼ O(α²) holds. As for the small UV parameters, we take κ11 ∼ κ22 ∼ κ33 ∼ κ21 ∼ κ31 ∼ κ32 ∼ O(α) as a reasonable assumption [19]. Now the effective Hamiltonian in Eq. (4) can be expressed in terms of a dimensionless matrix Ω that contains both UV and matter effects. By making some analytical approximations, one may first calculate the eigenvalues of Ω and then figure out the eigenvalues of H̃.
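As a back-of-the-envelope check of these magnitudes (a sketch for this text; it assumes α ≡ ∆21/∆31 and β = A/∆31, consistent with the relations quoted above), one can verify that β and γ are indeed of order α²:

```python
# Order-of-magnitude check of the small expansion parameters.
# Assumes alpha = Delta21/Delta31 and beta = A/Delta31, with the numerical
# A-coefficient quoted in the text for rho = 2.6 g/cm^3.
d21 = 7.56e-5               # eV^2
d31 = 2.55e-3               # eV^2 (NMO; flip the sign for IMO)
E_GeV = 4e-3                # typical reactor antineutrino energy, 4 MeV
A = 1.98e-4 * E_GeV         # eV^2

alpha = d21 / d31           # ~3.0e-2
beta = A / d31              # ~3.1e-4, i.e. O(alpha^2)
gamma = beta / 2.0          # since beta ~ 2*gamma for N_e ~ N_n
print(alpha, beta, gamma, alpha**2)
```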
After a straightforward but tedious exercise, we arrive at the expressions of the eigenvalues λi of Ω in matter given in Eq. (13), where the quantities ξi (for i = 1, 2, 3) measure the effect of indirect UV. One can see that in the ξi the six UV parameters κij are all entangled with the two matter parameters β and γ, implying that switching off the terrestrial matter effects will automatically remove the indirect UV effect from λi. This important observation tells us that it will be much harder to probe indirect UV in a low-energy oscillation experiment, because the latter involves much smaller terrestrial matter effects. Note that ξ3 is more suppressed in magnitude than ξ1 and ξ2, but it cannot be ignored in the expressions of λ1 and λ2, since the combination ξ3/α should be comparable with the ξ1 term in Eq. (13). With the help of Eq. (13), the eigenvalues of H̃ can be directly obtained, as given in Eq. (15), and the three effective neutrino mass-squared differences ∆̃ij defined below Eq. (7) follow; one can see that ∆̃21 = ∆̃31 − ∆̃32 holds to the accuracy of the approximations made above. For simplicity, we are going to use H̃′ = H̃ − Ẽ1·1 to calculate the probability of ν̄e → ν̄e oscillations in the following, since such a shift of H̃ does not affect any physics under discussion. The results of Ỹ ee i and Ñij (for i, j = 1, 2, 3) are listed in the Appendix. Then X̃ ee i can be explicitly figured out with the help of Eq. (6). As a result, we arrive at the analytical approximations of the X̂ ee i defined below Eq. (8), given in Eq. (16). The explicit expression of P(ν̄e → ν̄e) can therefore be obtained from Eq. (8) with the help of Eq. (16). However, we prefer a different form of P(ν̄e → ν̄e) whose oscillation terms depend on ∆̃21 and ∆̃* ≡ ∆̃31 + ∆̃32 [38], because ∆̃* is sensitive to the neutrino mass ordering in a more transparent way. According to Eqs. (2) and (15), we obtain Eq. (18), where ∆̃21 and ∆̃* are the matter counterparts of ∆21 and ∆* ≡ ∆31 + ∆32 in vacuum, β ≃ 2γ has been used, and the quantities ξ′i are defined in Eq. (19). Different from the ξi (for i = 1, 2, 3, 4), the ξ′i are purely UV parameters. Such a treatment will allow one to see the UV effect in P(ν̄e → ν̄e) more clearly. In Figure 1 we present a numerical illustration of the ξ′i, obtained by inputting the 3σ ranges of the neutrino oscillation parameters for the NMO case [46] and choosing reasonable ranges of the UV parameters (i.e., θij ≤ 7.5° and δij ∈ [0, 2π) for i = 1, 2, 3 and j = 4, 5, 6).

Figure 1: An illustration of the ξ′i given in Eq. (19), obtained by inputting the 3σ ranges of the six neutrino oscillation parameters (i.e., ∆21, ∆31, θ12, θ13, θ23 and δ13) for the NMO case [46] and choosing the UV parameters in the ranges θij ≤ 7.5° and δij ∈ [0, 2π) (for i = 1, 2, 3 and j = 4, 5, 6).

It is obvious that the magnitudes of the ξ′i are either of the same order as α or much smaller. Since the allowed ranges of |ξ′i| in the IMO case are very similar to those in the NMO case, they are not shown here. Now let us focus on the probability of ν̄e → ν̄e oscillations. In vacuum we have the elegant expression P(ν̄e → ν̄e) = 1 − P0 − P* with [38]

P0 = sin²2θ12 cos⁴θ13 sin²F21 , P* = (1/2) sin²2θ13 (1 − cos F21 cos F* + cos 2θ12 sin F21 sin F*) ,

in which the term proportional to sin F* is sensitive to the neutrino mass ordering. In matter with the UV effect, the expression of P(ν̄e → ν̄e) shown in Eq. (8) can analogously be rewritten as P(ν̄e → ν̄e) = 1 − P̃0 − P̃*, where P̃0 represents the ∆̃21-triggered oscillation and P̃* stands for the ∆̃*-triggered oscillation.
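A small numerical sketch of this vacuum expression is given below; it evaluates P(ν̄e → ν̄e) = 1 − P0 − P* using the mathematically equivalent sin²F31/sin²F32 form of P*, with the phase convention Fij = 1267 ∆ij L/E (∆ in eV², L in km, E in MeV) defined earlier. The default parameter values are illustrative best-fit numbers, and the function name is ours.

```python
import numpy as np

def pee_vacuum(E_MeV, L_km=52.5, d21=7.56e-5, d31=2.55e-3,
               s2_12=0.321, s2_13=0.022):
    """Vacuum survival probability of reactor antineutrinos, P = 1 - P0 - P*."""
    d32 = d31 - d21
    F = lambda dm2: 1267.0 * dm2 * L_km / E_MeV      # oscillation phase
    c2_13 = 1.0 - s2_13
    sin2_2t12 = 4.0 * s2_12 * (1.0 - s2_12)
    sin2_2t13 = 4.0 * s2_13 * c2_13
    P0 = sin2_2t12 * c2_13**2 * np.sin(F(d21))**2
    Pstar = sin2_2t13 * ((1.0 - s2_12) * np.sin(F(d31))**2
                         + s2_12 * np.sin(F(d32))**2)
    return 1.0 - P0 - Pstar

E = np.linspace(1.8, 8.0, 500)   # reactor antineutrino energies in MeV
P = pee_vacuum(E)                # survival probabilities along the spectrum
```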
Taking account of Eqs. (2), (8), (15), (16) and (18), we first define the quantities in Eqs. (22) and (23); one can see that Eqs. (22) and (23) correspond to the matter- and UV-induced corrections to the P̃0 and P̃* terms, respectively. Considering the smallness of sin θ13, Eq. (23) can be simplified to some extent, as given in Eq. (24). Note that the above analytical approximations are valid for both the NMO and IMO cases, but can only be applied to antineutrino oscillations. For the neutrino case, one ought to make the replacements β → −β and γ → −γ. Some discussions are in order.

• In the presence of indirect UV, our main analytical results for P(ν̄e → ν̄e) are summarized in Eqs. (21), (22) and (24). We have done the expansions up to O(α²) in our calculations, in which A/∆21 ∼ 1267AL/E ∼ 10⁻² ∼ O(α) is taken into account. The leading-order oscillation terms P M1 0 and P M1 * are consistent with those obtained in Ref. [38], where the UV effect was not considered. In contrast, P M2 0, P M2 *, P UV 0 and P UV * appear as the next-to-leading-order oscillation terms of P(ν̄e → ν̄e). Among these four new terms, P M2 0 and P M2 * describe the fine terrestrial matter effects, and the other two characterize the comparable or much smaller indirect UV effect.

• One can see that the UV effect is always smaller than terrestrial matter effects, and their entanglement does not appear until the next-to-leading-order oscillating terms are taken into account. As for the UV-induced terms, P UV 0 is modulated by the ∆21-driven oscillation, while P UV * is the oscillation term related to ∆* and might therefore affect the determination of the neutrino mass ordering. Since both of them appear as next-to-leading-order terms compared with P M1 0 and P M1 *, however, their effects must be strongly suppressed.

• In this paper, we focus only on the indirect UV effect, in which the masses of the sterile neutrinos are larger than the electroweak interaction scale. There is also another type, the direct UV effect, in which the sterile neutrinos can be produced and directly participate in the neutrino propagation process. Different from the indirect UV effect considered here, sterile neutrinos in the direct UV framework will contribute additional terms to the neutrino oscillation probability. If the corresponding oscillatory behavior can be observed, it will be tested or constrained in short-baseline oscillations [9,10,11,12,13] for a mass-squared difference at around 1 eV² and at the JUNO-like experiment for a mass-squared difference from 10⁻⁵ eV² to 10⁻¹ eV² [35]. If these additional oscillations are averaged out, the situation will be similar to the indirect UV effect, but with an additional constant term appearing, as shown in Refs. [33,44]. According to Ref. [45], the limit on the corresponding active-sterile mixing will be relatively weaker in comparison to the indirect UV effect.

Numerical simulations

In this section we shall first estimate the orders of magnitude of the oscillation terms associated with the UV and terrestrial matter effects using a JUNO-like detector, and then illustrate whether and how they can affect the neutrino mass ordering determination and the precision measurements of ∆21 and θ12.
In our calculation the best-fit values of the six active neutrino oscillation parameters are taken from a global analysis of current three-flavor oscillation experiments [46], with Δ_21 ≃ 7.56 × 10⁻⁵ eV², sin²θ_12 ≃ 0.321, Δ_* ≃ 5.024 × 10⁻³ eV², sin²θ_13 ≃ 0.022, sin²θ_23 ≃ 0.430 and δ ≃ 252° for the NMO case, and with Δ_21 ≃ 7.56 × 10⁻⁵ eV², sin²θ_12 ≃ 0.321, Δ_* ≃ −5.056 × 10⁻³ eV², sin²θ_13 ≃ 0.021, sin²θ_23 ≃ 0.596 and δ ≃ 259° for the IMO case. The averaged terrestrial matter density along the reactor antineutrino trajectory is taken as ρ ≃ 2.6 g/cm³ [47]. To illustrate the UV effect, we typically take θ_14 [...]. Following Tab. 1 of Ref. [48], we take a total thermal power of 36 GW_th and a weighted baseline of 52.5 km. We assume a nominal running time of six years and 300 effective days per year in our numerical simulations. All the statistical and systematic setups are the same as those in Ref. [38], where one can find all the simulation details. The only exception is that here we have enlarged the flux normalization uncertainty to 10% in order to accommodate the reactor antineutrino anomaly and the UV-induced zero-distance effect. In Figure 2 we illustrate the numerical orders of magnitude of the matter-induced and UV-induced corrections to the oscillation probability, where the first and second rows are for the absolute and relative differences of the matter-induced correction, respectively, and the third and fourth rows are for the absolute and relative differences of the UV-induced correction, respectively. The left and right panels show the NMO and IMO cases, respectively. For illustration, we define the absolute errors induced by the UV and matter effects as follows, where P(ν_e → ν_e, κ = 0) denotes P(ν_e → ν_e) in Eq. (8) with κ = 0, 0 meaning that all the elements of κ are zero (i.e., the UV effect is turned off), and P(ν_e → ν_e, A = 0) stands for P(ν_e → ν_e) with A = 0 (i.e., back to the case in vacuum). Compared to the left panel of Fig. 1 in Ref. [38], here the absolute difference ΔP_M is defined in a generic framework with three active neutrinos and three heavy sterile neutrinos, and it includes the interference terms of the UV and matter potential parameters. The solid and dashed lines show the exact numerical calculation and the analytical approximations in Eqs. (21), (22) and (24), respectively. From the first and second rows, we can observe that the absolute and relative orders of magnitude of the matter-induced corrections can reach the levels of 0.6% and 4%, respectively, consistent with those in Ref. [38] without the UV effect. On the other hand, the absolute and relative orders of magnitude of the UV-induced corrections are at most 0.02% and 0.1%, according to the third and fourth rows. This is because the UV effect is always entangled with matter effects and appears at next-to-leading order. The same conclusion can be drawn from Figure 3, where the individual terms of the expansion in Eqs. (21), (22) and (24) are illustrated. The upper panels are for the leading oscillation terms P^{M1}_0 and P^{M1}_*, and the four next-to-leading terms are illustrated in the lower panels. The left and right panels show the NMO and IMO cases, respectively.
To show how the UV-induced corrections depend on the standard oscillation and UV parameters, we illustrate scatter plots of the UV-induced corrections in Figure 4, obtained by varying the six oscillation parameters (Δ_21, Δ_31, θ_12, θ_13, θ_23, δ_13) within their 3σ ranges for the NMO case, and the UV parameters θ_ij and δ_ij (for i = 1, 2, 3 and j = 4, 5, 6) in the ranges [0, 7.5°] and [0, 360°], respectively. The left and right panels show the exact numerical calculation and the analytical approximations, respectively. We conclude that the absolute magnitudes of the UV-induced corrections remain below 0.05%. Figure 2: Numerical orders of magnitude of the matter-induced and UV-induced corrections to the oscillation probability, where the first (third) and second (fourth) rows are for the absolute and relative differences of the matter-induced (UV-induced) correction, respectively. The left and right panels are shown for the NMO and IMO cases, respectively. The solid and dashed lines are shown for the exact numerical calculations and the analytical approximations, respectively. In Figure 5 we illustrate the terrestrial matter (left panel) and UV (right panel) effects on the neutrino mass ordering sensitivity in the generic framework of three active neutrinos and three heavy sterile neutrinos. In each panel the vertical distance between the black and red lines is defined as the sensitivity to the mass ordering (i.e., Δχ² = |χ²_min(NMO) − χ²_min(IMO)|, where the least-squares function χ² is defined as in Eq. (20) of Ref. [38] and χ²_min is the minimum of χ² after marginalization over all the oscillation and pull parameters). The solid lines are for the case considering both the matter and UV effects, and the dashed lines are for the scenario of neglecting the matter effects (left panel) or neglecting the UV effect (right panel). Note that the red dashed line in the right panel has been horizontally shifted by −0.35 × 10⁻⁵ eV² to avoid overlap of the curves. In the left panel, the inclusion of terrestrial matter effects reduces Δχ² by 0.61, from 9.89 to 9.28. This conclusion is consistent with that in Ref. [38] for the three-neutrino mixing case (Δχ² reduced by 0.64, from 10.28 to 9.64). The absolute value of Δχ² is reduced mainly because the true three-neutrino oscillation parameters have been changed to those in Ref. [46]. The Δχ² reduction of 0.61 is non-negligible because it is comparable with other systematic uncertainties. On the other hand, one can observe from the right panel that the inclusion of the UV effect only changes Δχ² from 9.31 to 9.28, i.e., a reduction of Δχ² ≃ 0.03, which is much smaller than that from terrestrial matter effects. By randomly sampling the UV parameters θ_ij and δ_ij (for i = 1, 2, 3 and j = 4, 5, 6) in the ranges [0, 7.5°] and [0, 360°], respectively, we find that the variation of Δχ² stays well within ±0.04 around 9.28, which demonstrates the robustness of the mass ordering measurement against the possible UV effect in the JUNO experiment. Next we discuss the UV and terrestrial matter effects in the precision measurements of θ_12 and Δ_21. In Figure 6 we illustrate the fitting results for θ_12 and Δ_21, where both the matter and UV effects are included in the measured neutrino spectrum, but the matter effects (left panel) or the UV corrections (right panel) are neglected in the predicted neutrino spectrum.
The red stars and blue circles are the true values and the best-fit values of θ_12 and Δ_21, respectively. From the left panel, for the case of neglecting matter effects, one can observe that the best-fit values of θ_12 and Δ_21 deviate by around 2.0σ and 0.7σ from their true values, with parameter precisions of 0.63% and 0.29%, respectively. The levels of deviation of the fitted θ_12 and Δ_21 are similar to those obtained in Ref. [38], where the three-flavor oscillation framework is considered. Thus terrestrial matter effects are important for future precision spectral measurements of reactor antineutrino oscillations. Regarding the case of neglecting the UV effect, as shown in the right panel, the deviations of the best-fit values of θ_12 and Δ_21 are within 0.1σ, with parameter precisions of 0.60% and 0.27%, respectively. The parameter accuracies in the left panel are a little worse because an additional marginalization has been performed over the UV parameters in the same regions as in Figure 4. Therefore the precision measurements of θ_12 and Δ_21 in the generic framework of three active neutrinos and three heavy sterile neutrinos turn out to be rather robust over the reasonable UV parameter space. Before finishing this section, we remark on the indirect UV effect in accelerator neutrino experiments. Different from the oscillation channel ν_e → ν_e relevant to the reactor antineutrino experiments discussed here, the indirect UV effect in long-baseline accelerator neutrino experiments may be significant, because the terrestrial matter effect becomes larger and its entanglement with the indirect UV effect is then also non-negligible. The additional mixing angles and CP-violating phases will induce a multiple-parameter degeneracy problem, and the sensitivities to the neutrino mass ordering, leptonic CP violation and the θ_23 octant will be largely affected [23,24,25,26,27,28,29,30]. Taking the DUNE experiment as an example, the discovery potential for maximal CP violation would be degraded from 6σ to 3.7σ for seven years of nominal running if the indirect UV effect is considered [30]. A robust way to remove the parameter degeneracy and obtain better sensitivities to the three-neutrino oscillation and new-physics effects is the combination of accelerator neutrino experiments with different baselines, different neutrino energies and different oscillation channels [30]. Summary We have examined whether a JUNO-like reactor antineutrino oscillation experiment can be used to probe the indirect UV effect caused by small corrections of heavy sterile neutrinos to the 3 × 3 PMNS matrix. In this regard we have paid particular attention to how such an effect is entangled with terrestrial matter effects in ν_e → ν_e oscillations. After deriving the oscillation probability in a good analytical approximation for antineutrino beam energies of a few MeV, we have performed numerical simulations based on the setup of a 20-kiloton JUNO-like liquid scintillator detector. We find that the indirect UV effect is always smaller than terrestrial matter effects, and their entanglement does not appear until the next-to-leading-order oscillating terms are taken into account. Two immediate conclusions follow: (a) the indirect UV effect makes essentially no difference in the JUNO-like experiment; and (b) such an experiment's sensitivity to the neutrino mass ordering and its precision measurements of θ_12 and Δ_21 are robust.
Although the indirect UV effect is too small to be accessible in the JUNO-like reactor-based antineutrino oscillation experiment, it may be probed or constrained in some accelerator-based long-baseline neutrino oscillation experiments. In either case terrestrial matter effects should be carefully studied, so as to make them distinguishable from the fundamental new physics effects.
7,119
2018-02-14T00:00:00.000
[ "Physics" ]
Network Anomaly Intrusion Detection Based on Deep Learning Approach The prevalence of internet usage leads to diverse internet traffic, which may contain information about various types of internet attacks. In recent years, many researchers have applied deep learning technology to intrusion detection systems and obtained fairly strong recognition results. However, most experiments have used old datasets, so they could not reflect the latest attack information. In this paper, the current CSE-CIC-IDS2018 dataset and standard evaluation metrics are employed to evaluate the proposed mechanism. After preprocessing the dataset, six models—deep neural network (DNN), convolutional neural network (CNN), recurrent neural network (RNN), long short-term memory (LSTM), CNN + RNN and CNN + LSTM—were constructed to judge whether network traffic comprised a malicious attack. In addition, multi-classification experiments were conducted to sort traffic into benign traffic and six categories of malicious attacks: BruteForce, denial-of-service (DoS), Web Attacks, Infiltration, Botnet, and distributed denial-of-service (DDoS). Each model showed high accuracy in the various experiments, and their multi-class classification accuracies were above 98%. Compared with the intrusion detection systems (IDSs) of other papers, the proposed models effectively improve the detection performance. Moreover, the inference time for the combinations CNN + RNN and CNN + LSTM is longer than that of the individual DNN, RNN and CNN. Therefore, the DNN, RNN and CNN are preferable to CNN + RNN and CNN + LSTM when considering the implementation of the algorithm in an IDS device. Introduction Due to the vigorous development of technologies such as the Internet of Things (IoT), cloud computing, and 5G communication, many applications have made the Internet pervasive. The popularization of the Internet has occurred in parallel with an increasing number of hackers' attack strategies. According to Acronis' Cyber Threat Report [1], the main attack methods in 2021 were phishing, ransomware, and cryptocurrency-related attacks. These attacks penetrate networks through system vulnerabilities and send large amounts of malicious information by email. Cryptocurrencies, in particular, have been attracting hackers who use malicious software to steal digital assets, owing to the high number of investors in recent years [2]. In the future, there will be more attacks on automated transactions. Therefore, strengthening network security to prevent disasters amid the simultaneous development of the digital world and network attacks has become an important issue. There are many ways to prevent hacker intrusion. In addition to a firewall as the first line of defense, the second line of defense is an intrusion detection system, which is used to monitor network traffic for abnormal behavior. An IDS collects a large amount of malicious attack data in advance and compares observed behavior patterns with the attack characteristics in its database to determine whether they constitute intrusions, enabling an effective defense against new ransomware. Deep learning involves a neural network with a multi-level architecture, which differs from a machine learning network in that it can learn and process features by itself and then generate changes in feature values from the architecture.
Deep learning, with its automatic feature engineering, is an efficient method for dealing with the rapid rise of big data; appropriate combinations of neurons and layers should be designed to extract important features and make judgments over large amounts of data. In [3,4], the authors listed many papers on the application of deep learning to network attack detection. Therefore, it is suitable to use deep learning to implement an IDS. Whether deep learning can be successfully applied to IDS depends strongly on the network intrusion detection datasets used to train the models. Accordingly, research has been conducted on publicly available network intrusion detection datasets; the most commonly researched datasets have been KDD Cup 1999 (KDD99) and NSL-KDD [5]. The network traffic in these two datasets was sufficient for detecting intrusion-spreading viruses, but today's attack methods have diversified, so these datasets are outdated and unreliable [6]. The CSE-CIC-IDS2018 dataset is derived from real network traffic data, and it can be applied to actual network detection through deep learning methods [7,8]. The lack of data volume and feature types in the old datasets has led to an inability to prevent current damage trends, so we used the latest network intrusion detection dataset, CSE-CIC-IDS2018, for our experiments. This allows us to evaluate the capability of deep learning methods to work in real networks. In this paper, we used the CSE-CIC-IDS2018 dataset for intrusion detection experiments. Because a large amount of data may contain repeated values, we also focused on data processing. In addition, we applied the DNN, CNN, RNN, LSTM, CNN + RNN and CNN + LSTM models to detect network attacks. Finally, binary classification and multi-class classification tasks were performed to judge whether traffic is a malicious attack. The main contributions of this paper are summarized as follows. 1. This paper uses an NVIDIA GPU to accelerate the training procedure. We used the complete CSE-CIC-IDS2018 dataset to reflect current network traffic conditions in our experiments, with a focus on data preprocessing, to provide comprehensive test results. We adopted the DNN, CNN, RNN, LSTM, CNN + RNN and CNN + LSTM models to handle binary and multi-class classification tasks. When using the proposed data preprocessing methods and systematically tuning the hyperparameters of all six models, the accuracy of all models was found to be above 98%. Compared with the IDSs of other papers, the proposed models effectively improve the detection performance. 2. Along with the empirical demonstration, the inference time for the combinations CNN + RNN and CNN + LSTM is longer than that of the individual DNN, RNN and CNN. When considering the implementation of the algorithm in an IDS device, we conclude that the individual DNN, RNN and CNN are better than CNN + RNN and CNN + LSTM. The remainder of this paper is organized as follows. Related work is discussed in Section 2. In Section 3, we illustrate the methodology. In Section 4, we present experimental results, and Section 5 presents the conclusions. Related Work Under the current trends of internet popularization and the continuous growth of hacking, many researchers have applied deep learning methods to the field of network security to detect new and complex types of network attacks more effectively. In deep learning networks, multiple layers of nonlinear transformations automatically process information.
If large amounts of data are processed by a deep learning method, the neural network learning characteristics of the multi-layer structure can be effectively utilized to obtain more accurate results. The existing deep learning approaches are introduced below, and a summary of them is shown in Table 1. In Ref. [9], Xiao et al. applied a CNN for IDS to extract the features of dimensionality-reduced data. The authors of Ref. [10] proposed a wireless network intrusion detection method based on an improved CNN. In Ref. [11], Lin et al. proposed an LSTM + AM (attention mechanism) model to enhance the recognition ability of the network. The LSTM method has memory characteristics that can capture historical network traffic. In a hierarchical neural network structure, LSTM can effectively combine current data with previously learned features to achieve better classification results. The essence of deep learning is to imitate the operation of the human neural network. The structure of LSTM is similar to the memory ability of the human brain, and AM is similar to the attention mechanism of the brain. In Ref. [12], Karatas et al. used the CSE-CIC-IDS2018 dataset and applied six machine learning algorithms, namely adaptive boosting (AdaBoost), decision tree (DT), random forest (RF), K-nearest neighbors (KNN), gradient boosting (GB) and linear discriminant analysis (LDA). In order to solve the problem of an unbalanced number of attack types, the synthetic minority oversampling technique (SMOTE) can be used to synthesize new samples and improve the detection efficiency for minority classes. In Ref. [15], Jiang et al. applied an LSTM-RNN to implement a multichannel attack detection system. The authors proposed an end-to-end framework that integrated data preprocessing, feature extraction, training and detection. In Ref. [16], the author proposes a security protection platform for the control plane of a [...]. The methodology of our network intrusion detection model is shown in Figure 1. The diagram is divided into two parts: the first is the data preprocessing area, and the second is the training and evaluation area. Before model training, it was necessary to further understand the network traffic of the experimental dataset and the characteristics of the data. CSE-CIC-IDS2018 Dataset This paper used the CSE-CIC-IDS2018 dataset [22] for experimental evaluation. The CSE-CIC-IDS2018 dataset was established by the Canadian government's Communications Security Establishment (CSE) [23] and the Canadian Institute for Cybersecurity (CIC) [24] in cooperation with Amazon Web Services (AWS) [25]. It is the latest, most comprehensive and largest-scale dataset among the publicly available intrusion detection datasets on the internet. The CSE-CIC-IDS2018 dataset is a ten-day dataset comprising data collected through the network topology of authentic network attacks, and it stores benign traffic and attack traffic in the CSV file format. The dataset has a total of 10 files with a total size of 6.41 GB [22]. The total number of records in CSE-CIC-IDS2018 is 16,233,002 [7]. Due to this huge number and the presence of redundant data, the official dataset does not provide divided training and testing samples. So far, studies have presented inconsistent results regarding the total amount of data used and the data processing methods applied. For example, the authors of [26] randomly selected 40,000 benign records (out of a total of 13,484,708 benign traffic records) and 20,000 attack records to conduct experiments.
The authors of [6] used nine of the ten files for their experiments. In this study, we used the entire dataset for experimental evaluation. The dataset records a series of packets, including 83 data characteristics such as duration, number of packets and number of bytes. In the CSE-CIC-IDS2018 dataset, the last item of each sample is a label that indicates whether the network traffic is of the benign or attack type. The attack types are divided into six categories, for a total of 14 kinds of attacks, as shown in Table 2. Data Preprocessing Since the total number of records was large, the data could have contained features or outliers that would not be helpful for training. Without proper preprocessing, the trained model would not be able to identify the various intrusion attacks. To this end, this study focused on data preprocessing, including data merging, data cleaning, data transformation and splitting, and numerical standardization. For feature extraction in deep learning, the number of layers must be set, and the larger the number of layers, the larger the processing scale of a feature. According to Anaconda's 2020 survey of data scientists working in deep learning [27], nearly 50% of the time in the deep learning field is spent on feature engineering, including data cleaning, data conversion and text cleaning. In data analysis there is a famous saying, "Garbage In, Garbage Out", meaning that erroneous or meaningless input data yield output of the same nature. Therefore, before model training, the preprocessing of the data must be conducted.
Unprocessed raw datasets usually come from diverse sources, which means that the data may contain many non-numeric formats that cannot be read by computers, as well as missing values and noise. After these problems are resolved, high-quality data are obtained and then input into a model. Training is then conducted to achieve results with a low number of false positives and the best possible accuracy. The data preprocessing in this study comprised data merging, data cleaning (non-attack data, feature removal, outliers and duplicate values), data transformation, the splitting of training and test sets, and numerical standardization. After all of these preprocessing steps, the total number of records was reduced from 16,233,002 to 10,114,753. Each preprocessing step is explained in detail in the following sections. Data Merging The CSE-CIC-IDS2018 dataset has ten CSV files, and each file contains benign traffic and different attack traffic. The attack data in file No. 1 are brute force attacks; Nos. 2 and 3 are denial-of-service attacks; Nos. 4 and 5 are website attacks; Nos. 6 and 7 are penetration attacks; No. 8 is a botnet attack; Nos. 9 and 10 are distributed denial-of-service attacks. The ten files must be combined into one file before loading the data for processing. Data Cleaning It is necessary to process the string data and errors that are not helpful for the training process or cannot be handled through numerical operations. The objective of this processing was to delete meaningless data features, outliers and repetitive data. Before deleting meaningless data features, the non-attack data labels, which are not officially described as benign or attack traffic, were first deleted. Next, the meaningless data features were deleted. First, we found that the six features Timestamp, Flow ID, Src IP, Src Port, Dst IP and Dst Port had no effect on the attack classification of network traffic, so they were removed. If all values of a feature are 0, it provides no discrimination during training. Here, it was found that the eight features Bwd PSH Flags, Bwd URG Flags, Fwd Byts/b Avg, Fwd Pkts/b Avg, Fwd Blk Rate Avg, Bwd Byts/b Avg, Bwd Pkts/b Avg and Bwd Blk Rate Avg contained only values of 0, so they were deleted. The second data cleaning stage was the processing of outliers. There were two types of outliers in the dataset: Not a Number (NaN) and Infinity (Inf). There are several methods to deal with NaN values, such as filling with the average value, filling with the mode, or deletion. It was found that there were abnormal values in the Flow Byts/s and Flow Pkts/s features. We used mode filling instead of average filling to avoid changing the original data values, as averaged values may be affected by other outliers. The last step was to delete the repetitive data. The number of deletions in each stage and the total number of records after cleaning are shown in Table 3. The symbol (-) means no deleted data. Figure 2 shows the benign and attack labels of the dataset, which are all in the text format.
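To make the merging and cleaning steps concrete, here is a minimal Python sketch using the glob and pandas tools mentioned later in Section 4; the file paths are hypothetical, and the all-zero-column scan stands in for the eight columns listed above.

```python
import glob
import numpy as np
import pandas as pd

# 1) Merge the ten CSV files into a single frame (paths are hypothetical).
files = sorted(glob.glob("CSE-CIC-IDS2018/*.csv"))
df = pd.concat((pd.read_csv(f, low_memory=False) for f in files),
               ignore_index=True)

# 2) Drop identifier-like features that carry no attack information.
id_cols = ["Timestamp", "Flow ID", "Src IP", "Src Port", "Dst IP", "Dst Port"]
df = df.drop(columns=[c for c in id_cols if c in df.columns])

# 3) Drop all-zero features (the paper lists eight such columns).
zero_cols = [c for c in df.columns if c != "Label"
             and (pd.to_numeric(df[c], errors="coerce") == 0).all()]
df = df.drop(columns=zero_cols)

# 4) Replace Inf with NaN, then fill NaN in the two affected rate features
#    with the column mode, as described above.
df = df.replace([np.inf, -np.inf], np.nan)
for c in ("Flow Byts/s", "Flow Pkts/s"):
    if c in df.columns:
        df[c] = df[c].fillna(df[c].mode().iloc[0])

# 5) Finally, drop duplicate rows.
df = df.drop_duplicates().reset_index(drop=True)
```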
Since a computer cannot understand non-numeric data, the labels must be converted into a numerical format so that a model can read them for training. In this paper, a classification index was adopted, and the attack data were encoded for binary and multi-class categories. The binary classification assigned each label an integer of 0 or 1 for the benign and attack samples, respectively. For the multi-class classification, benign samples were assigned a value of 0, and the remaining six attack categories were encoded as BruteForce (1), DoS (2), Web Attack (3), Infiltration (4), Botnet (5) and DDoS (6). Because the dataset does not provide training and test samples, we adopted the holdout method for splitting. This method divides a dataset into a training-validation set and a testing set according to a set ratio; the division ratio has no uniform requirement and is set entirely by experience. In this experiment, 80% and 20% of the dataset were used as the training-validation set and the testing data, respectively. This division allowed the model to generalize. Moreover, 80% and 20% of the training-validation dataset were used as the training set and the validation set, respectively. The numbers of categories and the proportions of the experimental training-validation data and testing data are shown in Tables 4 and 5. Numerical Normalization The data range of each feature in the original dataset is different. We used the standardization method to scale each feature so that its mean becomes 0 and its standard deviation (SD) becomes 1, ensuring that the data approximately conform to a normal distribution and improving the convergence speed and accuracy of the model. The equation is shown in (1), z = (x − µ)/σ, where x is the original value to be standardized, µ is the mean of the feature, and σ is the standard deviation of the feature. After the data were standardized, large intervals between values still existed. Therefore, the natural logarithmic transformation was used to narrow the numerical range; beforehand, the eight features with negative values were shifted to solve the problem of negative numbers, which cannot be passed to a logarithm. Finally, so that data containing 0 values could still be transformed logarithmically, we applied log_e(1 + x), where x is the value to be converted and cannot be less than 0. Deep Learning Models This paper used the DNN, CNN, RNN, LSTM, CNN + RNN and CNN + LSTM models for the experiments. The two combined models, CNN + RNN and CNN + LSTM, used a CNN because we hoped to combine its strong feature extraction capabilities with time-series properties to achieve efficient classification results. In addition to the input and output layers, a deep learning model contains a neural network with several hidden layers. However, deep learning is by no means accomplished simply by stacking multiple layers of neural networks. Sometimes a network structure with a small number of layers, combined with dropout and batch normalization, can also achieve good results. At present, no research has defined a formula to calculate the optimal number of neural network layers and neurons. Too many neurons may lead to overfitting, in which the model fits the training data too closely; if a network is too large, it cannot cope with the learning process; if a network is too small, it will cause underfitting, which means that the degree of learning is insufficient.
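Before turning to the individual architectures, the label transformation, holdout split and normalization just described can be sketched as follows. This is a hedged sketch: the mapping of the 14 raw labels into the six categories is abbreviated to the category names, and the random seed and the per-feature shift rule are our assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# df is the cleaned frame from the previous sketch. The 14 raw attack labels
# must first be grouped into the six categories; the names below are schematic.
attack_map = {"Benign": 0, "BruteForce": 1, "DoS": 2, "Web Attack": 3,
              "Infiltration": 4, "Botnet": 5, "DDoS": 6}
y_multi = df["Label"].map(attack_map)
y_binary = (y_multi > 0).astype(int)          # binary: 0 = benign, 1 = attack

X = df.drop(columns=["Label"]).astype(float).values

# Holdout split: 80/20 into training-validation and test, then 80/20 again.
X_tv, X_test, y_tv, y_test = train_test_split(X, y_multi.values,
                                              test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_tv, y_tv,
                                                  test_size=0.2, random_state=42)

# Standardization z = (x - mu) / sigma, fitted on the training set only.
scaler = StandardScaler().fit(X_train)
X_train, X_val, X_test = (scaler.transform(a) for a in (X_train, X_val, X_test))

# Shift so all features are non-negative, then narrow the range with log(1 + x).
shift = X_train.min(axis=0)
squash = lambda a: np.log1p(np.maximum(a - shift, 0.0))
X_train, X_val, X_test = (squash(a) for a in (X_train, X_val, X_test))
```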
When designing the neural network architecture, we continuously tested the results of various combinations and finally chose an appropriate number of neurons. In this paper, we tested various combinations of neural network node numbers, learning rates and activation functions. The number of nodes in a neural network is proportional to the number of parameters. The learning rate directly affects the weight updates during backpropagation, which consequently affects the convergence of the model. The learning rate also affects the learning speed and the time required for training, and it should be determined according to the size of the dataset. In this paper, we set the learning rate range from 0.01 to 0.5 for our experiments, and we designed various combinations of neural networks, from shallow to deep, to find suitable models and architectures, as shown in Table 6. The hidden layers were set to comprise 1 to 5 layers. The total numbers of neurons in the hidden layers were set to 256, 512 and 768. Table 7 shows the DNN architecture used in this paper, which consisted of five hidden layers. Layers 0~1 are the input layer and the first hidden layer. Layers 3~4, 6~7, 9~10 and 12~13 are all hidden layers. Layers 16~17 are output layers. The number of parameters of a DNN layer was calculated as (number of input features × number of nodes) + bias values. The number of features after data processing was 70; that is, the first 69 items were data features and the last item was the label. The number of first-layer parameters was 4480 (shown in Table 6), calculated as 69 × 64 + 64. The function of the bias value is to excite a neuron and make the next neuron more receptive to data, so the number of bias values equals the number of nodes. The remaining layers were calculated in the same way. In the training phase, overfitting often occurs and reduces the generalization ability of a model. Therefore, in this experiment, batch normalization (BN) [28] and dropout layers were added between the hidden layers. BN can speed up the training process and prevent overfitting. Dropout randomly drops neurons in each layer at a given ratio. Both are effective in preventing neurons from over-relying on certain local features. Our purpose in using BN was to renormalize the data in each batch to a distribution with a mean of 0 and a standard deviation of 1. BN was applied to each batch in the training phase, and then two elements that control the size of the values, namely scaling and offset, were added. Through the normalization process during training, values with a more even distribution could be obtained, which further improved the convergence speed of the model. BN has four calculation parameters per node, namely mean, standard deviation, scaling and offset, and these four parameters were applied to all data. Table 6 shows the parameter count of the BN layer, calculated as 64 × 4 = 256 parameters. The number of parameters of the dropout layer in Table 6 is 0, because the dropout layer's role is only to drop neurons. The number of parameters of the DNN model is shown in Table 8. Table 9 shows the CNN architecture used in this paper. The architecture consisted of five convolutional layers. Layers 0~1 comprised the input layer and the first convolutional layer.
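Before continuing with the CNN layer details, the DNN block just specified (Dense → BN → Dropout, repeated) can be written as a minimal Keras sketch that reproduces the quoted parameter counts. The activation, dropout rate and optimizer are our assumptions, with the learning rate chosen inside the paper's 0.01–0.5 range.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_dnn(n_features=69, n_classes=7, width=64, n_blocks=5, drop=0.2):
    model = models.Sequential()
    model.add(layers.Input(shape=(n_features,)))
    for _ in range(n_blocks):                      # Dense -> BN -> Dropout blocks
        model.add(layers.Dense(width, activation="relu"))
        model.add(layers.BatchNormalization())     # 4 * width = 256 parameters
        model.add(layers.Dropout(drop))            # no trainable parameters
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

build_dnn().summary()   # first Dense layer reports 69 * 64 + 64 = 4480 params
```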
Layers 2~3, 4~5, 6~7 and 8~9 were all convolutional layers, and layers 13~14 were output layers. In the operation of a convolutional layer, the filters and the kernel are used to compute over the input data according to the stride movement. In Table 8, the number of filters in the convolutional layer is 32. The kernel size is the window size of the convolution kernel, and its value was set to 2 × 1. The number of layer parameters was calculated as number of filters × (filter height × filter width × input channels) + bias values. The number of first-layer parameters was 96 (shown in Table 8), calculated as 32 × (2 × 1 × 1) + 32. The remaining layers, up to the output layers, were calculated in the same way. In the CNN architecture, the BN and dropout layers were added only before the output layer, because the max pooling layers could already effectively prevent overfitting. The feature map obtained after convolution could thus be distilled, focusing on important data and reducing meaningless noise. The output dimension of each max pooling layer was half of the output dimension of the preceding convolutional layer, and this reduction in the number of parameters still retained the important characteristics. The number of parameters of the CNN model is shown in Table 10. Table 11 shows the RNN architecture used in this paper, which consisted of five recurrent layers. Layers 0~1 were the input layer and the first recurrent layer. Layers 3~4, 6~7, 9~10 and 12~13 were all recurrent layers. Layers 15~16 were output layers. The operation mode of the RNN is different from that of a DNN: it has an inner loop structure. The number of layer parameters was calculated as (number of input features × number of nodes) + (number of nodes × number of nodes) + bias values. The number of first-layer parameters was 8576 (shown in Table 10), calculated as (69 × 64) + (64 × 64) + 64. In the second layer, the number of input features becomes 64, the output size of the previous layer, so the number of parameters is 8256, calculated as (64 × 64) + (64 × 64) + 64. The remaining hidden layers were calculated in the same way. In the RNN architecture, the placement of the BN and dropout layers was the same as in the DNN in order to effectively avoid overfitting. The number of parameters of the RNN model is shown in Table 12. Table 13 shows the LSTM architecture used in this paper, which consisted of five LSTM layers. Layers 0~1 were the input layer and the first LSTM layer. Layers 3~4, 6~7, 9~10 and 12~13 were all LSTM layers. Layers 15~16 were output layers. The LSTM structure uses forget gates, input gates, update gates and output gates to determine whether data are added to the memory, mitigating the lack of long-term memory. These four control gates have four sets of parameters. The number of parameters of an LSTM layer was therefore calculated as 4 × [(number of input features × number of nodes) + (number of nodes × number of nodes) + bias values]. The number of first-layer parameters was 34,304 (shown in Table 12), calculated as 4 × [(69 × 64) + (64 × 64) + 64]. The input feature number of the second layer becomes 64, the output size of the previous layer. The number of second-layer parameters was 33,024, calculated as 4 × [(64 × 64) + (64 × 64) + 64]. The remaining hidden layers were calculated in the same way.
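The per-layer parameter formulas quoted in these subsections can be checked mechanically with a few helper functions; the last two assertions anticipate the hybrid CNN + RNN and CNN + LSTM layers described in the next subsection.

```python
# Parameter-count helpers for the formulas quoted above.
def dense_params(n_in, n_out):
    return n_in * n_out + n_out            # weights + biases

def conv_params(filters, kh, kw, c_in):
    return filters * (kh * kw * c_in) + filters

def rnn_params(n_in, units):
    return n_in * units + units * units + units

def lstm_params(n_in, units):
    return 4 * rnn_params(n_in, units)     # four gates, each an RNN-sized block

assert dense_params(69, 64) == 4480        # first DNN layer
assert conv_params(32, 2, 1, 1) == 96      # first CNN layer
assert rnn_params(69, 64) == 8576          # first RNN layer
assert rnn_params(64, 64) == 8256          # second RNN layer
assert lstm_params(69, 64) == 34304        # first LSTM layer
assert lstm_params(64, 64) == 33024        # second LSTM layer
assert rnn_params(128, 64) == 12352        # first recurrent layer after the CNN
assert lstm_params(128, 64) == 49408       # first LSTM layer after the CNN
```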
In the LSTM architecture, the placement of the BN and dropout layers was consistent with the DNN and RNN. The number of parameters of the LSTM model is shown in Table 14. Table 14 shows the CNN + RNN architecture used in this paper, which consisted of three convolutional layers and five recurrent layers. Layers 0~1 were the input layer and the first convolutional layer. Layers 2~3 and 4~5 were convolutional layers, and layers 8~9, 11~12, 14~15, 17~18 and 20~21 were all recurrent layers. Layers 23~24 were output layers. The number of parameters of CNN + RNN was calculated as number of filters × (filter height × filter width × input channels) + bias values for the convolutional layers and (number of input features × number of nodes) + (number of nodes × number of nodes) + bias values for the recurrent layers. The number of parameters of the first convolutional layer was 32 × (2 × 1 × 1) + 32 = 96. The number of parameters of the first recurrent layer was (128 × 64) + (64 × 64) + 64 = 12,352, as shown in Table 15. The number of parameters of the CNN + RNN model is shown in Table 16. 3.3.6. CNN + LSTM LSTM networks have time-series characteristics that aid the detection of benign and attack traffic sequences. Combining LSTM with the feature extraction characteristics of a CNN can effectively improve the identification ability, indicating that the hybrid model can achieve a higher accuracy of network traffic classification. Table 17 shows the CNN + LSTM architecture used in this paper, which consisted of three convolutional layers and five LSTM layers. Layers 0~1 were the input layer and the first convolutional layer. Layers 2~3 and 4~5 were convolutional layers, and layers 8~9, 11~12, 14~15, 17~18 and 20~21 were LSTM layers. Layers 23~24 were output layers. The number of parameters of CNN + LSTM was calculated as number of filters × (filter height × filter width × input channels) + bias values for the convolutional layers and 4 × [(number of input features × number of nodes) + (number of nodes × number of nodes) + bias values] for the LSTM layers. The number of parameters of the first LSTM layer was 4 × [(128 × 64) + (64 × 64) + 64] = 49,408. The number of parameters of the CNN + LSTM model is shown in Table 18. Evaluation Metrics This paper used four elements to count the correct and misjudged predictions of the experimental models: (1) true positive (TP), the number of correctly classified benign samples; (2) false positive (FP), the number of attack samples incorrectly predicted as benign; (3) true negative (TN), the number of correctly classified attack samples; and (4) false negative (FN), the number of benign samples incorrectly predicted as attacks. With these four elements, four evaluation indicators can be calculated to evaluate the performance of the experimental models [29]: accuracy, precision, recall and F1-score. Accuracy represents the ratio of correctly classified samples. Precision represents the fraction of samples predicted as benign that are truly benign. Recall indicates the fraction of truly benign samples that are correctly identified. F1-score is the harmonic mean of precision and recall, which is an overall indicator of the performance of the classification model. The corresponding equations are (2)~(5). Experimental Environment The experimental environment used the VCP-AI computing platform of Taipei Tech, along with GPU computing resources to speed up neural network processing. The detailed specifications and training speed of the environment are shown in Table 19.
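Returning to Eqs. (2)~(5), the four indicators can be written compactly under the paper's convention that benign is the positive class; the counts in the usage line are illustrative only.

```python
def metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall and F1 from the four counting elements."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

print(metrics(tp=950, fp=30, tn=900, fn=20))  # illustrative counts only
```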
The experimental development language was Python. We used the glob tool [30] to obtain a file list in a serial manner for subsequent data merging. When the amount of data is large, the Pandas [31], NumPy [32] and scikit-learn tools can be used to perform efficient data processing and analysis. Results and Analysis In this section, the multi-class and binary classification experimental results of the six neural network models—DNN, CNN, RNN, LSTM, CNN + RNN and CNN + LSTM—are shown and discussed. Evaluation of Multi-Class Classification The best multi-class classification accuracy of the DNN was 98.83%. The best multi-class classification accuracy of the CNN was 98.83%. The multi-class classification accuracies of the RNN and LSTM were 98.80% and 98.83%, respectively, as shown in Table 20. The multi-class classification accuracy of each model reached 98%. Table 20 also lists the inference time required to produce each output. The enhancement in multi-class classification accuracy is mostly in the range of 0.01–0.05% for the combinations CNN + RNN and CNN + LSTM compared with the individual DNN, RNN, CNN and LSTM methods. In addition, the inference time for the combinations CNN + RNN and CNN + LSTM is longer than that of the individual DNN, RNN and CNN. When considering the implementation of the algorithm in an IDS device, since the IDS is installed in a data center where the data processing rate is high, the inference time needs to be short. For this reason, the authors of [33] proposed a DNN-based network intrusion detection system that can detect cyber attacks in real time in an IoT network. Therefore, the combined CNN + RNN and CNN + LSTM techniques are not encouraging compared with the individual techniques. Table 20 lists the accuracy and inference time of the six models. When considering that the deep learning model is to be implemented in an actual IDS device, the user can choose the best model that meets the inference time requirements. As shown in Tables 21–26, the DNN showed good results in the evaluation metrics for the benign samples and the six attack categories. For Infiltration, due to the small number of samples, the model could not learn effectively. The precision, recall and F1-score evaluation metrics of the DNN model for the Infiltration category were all 0%. The precision, recall and F1-score evaluation metrics of the CNN model for the Infiltration category were 52.23%, 2.48% and 4.73%, respectively. It can be seen that, with a small number of samples, the CNN could identify attacks better than the DNN. The precision, recall and F1-score of the RNN model for Infiltration were 47.06%, 2.11% and 3.99%, respectively. The LSTM model was better than the RNN model, with its recall and F1-score slightly increased, by 1.47% and 1.29%, respectively. However, the CNN was better than the LSTM and RNN models. The CNN + RNN and CNN models showed the same results in the Infiltration category. CNN + LSTM was the best of all models at identifying the Infiltration category, with its recall and F1-score increased by 0.73% and 0.58%, respectively. We also analyzed the BruteForce and Web Attack categories, which have few samples. All the models showed poor results for Web Attack. The precision, recall and F1-score evaluation metrics of the DNN model for Web Attack were 100%, 37.50% and 54.55%, respectively, which were still not as good as those of the CNN, but better than those of the RNN and LSTM.
The precision of CNN + RNN in the BruteForce category was 100%, which was the highest among all methods. Regarding DoS, Botnet and DDoS (attacks commonly used by hackers today), this experiment showed that all models obtained good results. The precision, recall and F1-score for the DoS category were all as high as 98~99%. In the Botnet attack category, the DNN, CNN, RNN and LSTM models all achieved a recall of more than 99%. The precision of CNN + RNN was 100%. DDoS had a large number of samples, but the obtained results were worse than those for DoS and Botnet, with an average of 98% for each method. The precision, recall and F1-score of the six models were all about 98%. CNN + LSTM was the best of all the studied models. Table 27 summarizes the best results of each model for multi-class classification. In addition to the four evaluation metrics, the training parameters and inference time of each model are also listed. The DNN and CNN achieved high accuracy with deep networks. The RNN needed only one shallow layer to reach an accuracy of 98.80% in multi-class classification, and its inference time was also shorter. The LSTM model could reach a 98.83% classification accuracy with one shallow layer. In this experiment, the feature extraction characteristics of the CNN were applied to the time-series RNN and LSTM models. In multi-class classification, both achieved an accuracy of 98.84%, improvements of 0.04% and 0.01%, respectively. Most of the classification tasks tested in the literature are related to multi-class classification, and due to the huge amount of data in the studied dataset, there is no common quantitative standard for the training and testing datasets. Table 28 lists a comprehensive comparison of our experimental results with the related literature based on the CSE-CIC-IDS2018 dataset. Regarding the accuracy index, the DNN in this paper showed a value 1.55% higher than that reported in the literature [5] in multi-class and binary classification. Compared with the CNN methods of [10] and [5], our accuracy was 7.32% and 1.44% higher, respectively. Compared with the RNN method of [5], our accuracy was 1.51% higher. Compared with the TCN + LSTM method of [17], our accuracy was 2.64% and 1.06% higher in multi-class and binary classification, respectively. The accuracy results of the CNN + RNN and CNN + LSTM models in our experiment were also significantly higher than those of the aforementioned papers. Although the accuracy of [12] is 99.7%, higher than ours, that paper does not use the full dataset for training and testing, so it cannot be directly compared with our results. Compared with [5,6,9,10,17], the proposed models effectively improve the detection performance. Table 29 summarizes the best results of each model for binary classification. In addition to the four evaluation indicators, the training parameters and inference time of each model are also listed. The inference time for multi-class classification was longer than that for binary classification, because the number of attack classes in the output layer is larger, so the judgment required more time for processing and analyzing all the attack data. The DNN achieved high accuracy with a deep network. The CNN achieved the highest accuracy under the same structure for binary classification. The RNN needed up to five layers to reach an accuracy of 98.82% in binary classification, and its inference time was also shorter. The LSTM model could reach a 98.83% classification accuracy with a similar number of layers.
In this experiment, the feature extraction characteristics of the CNN were applied to the time-series RNN and LSTM models. In binary classification, CNN + RNN and CNN + LSTM demonstrated accuracy levels of 98.84% and 98.85%, respectively, both improved by 0.02% compared with the RNN and LSTM models. Conclusions In this study, after data preprocessing using data cleaning, data transformation and splitting, and numerical normalization, the DNN, CNN, RNN, LSTM, CNN + RNN and CNN + LSTM models were used for the binary and multi-class classification of the CSE-CIC-IDS2018 dataset, and the accuracy of all models was found to reach more than 98%. In multi-class classification, the highest accuracy of 98.84% was obtained by the CNN + RNN and CNN + LSTM models. Compared with the IDSs of other papers, the proposed models effectively improve the detection performance. However, these accuracy improvements are minimal and come at the cost of much longer inference times for the CNN + RNN and CNN + LSTM combinations compared with the individual DNN, RNN, CNN and LSTM methods. When considering implementation in an IDS device, a deep learning structure with a shorter inference time is preferred. Because the accuracy of the individual DNN, RNN, CNN and LSTM models was found to reach more than 98%, they are more suitable than CNN + RNN and CNN + LSTM for realizing an IDS device. In the future, we will study the feasibility of lightweight DNN, RNN, CNN and LSTM models.
9,115.4
2023-02-01T00:00:00.000
[ "Computer Science" ]
An Improved Nonlinear Grey Bernoulli Model Based on the Whale Optimization Algorithm and Its Application In order to improve the prediction performance of the existing nonlinear grey Bernoulli model and extend its applicable range, an improved nonlinear grey Bernoulli model is presented by using a grey modeling technique and optimization methods. First, the traditional whitening equation of the nonlinear grey Bernoulli model is transformed into its linear formulation. Second, improved structural parameters of the model are proposed to eliminate the inherent error caused by the jump from the differential equation to the difference equation. As a result, an improved nonlinear grey Bernoulli model is obtained. Finally, the structural parameters of the model are calculated by the whale optimization algorithm. The numerical results of several examples show that the presented model's prediction accuracy is higher than that of the existing models and that the proposed model is more suitable for these practical cases. Introduction Professor Deng [1] originally proposed the grey system theory to address uncertain systems with partially known and partially unknown information. As a crucial branch of the grey system theory, grey prediction has been widely used to address numerous real-world problems owing to its effectiveness, for example in electricity prediction [2][3][4], energy prediction [5,6], and tourism prediction [7]. A common characteristic of these models is that they do not require a large number of observations (not fewer than 4). The approach has attracted considerable interest from researchers because it is difficult, even impossible, to collect enough data to build traditional models, including linear [8] or nonlinear regression models [9], the autoregressive integrated moving average model [10] and its extended versions [11], support vector machines [12], and artificial neural networks [13]. Generally speaking, the development of a discipline also benefits from practical applications. In the past three decades, various grey models have emerged rapidly in response to practical applications. For example, Xie and Liu [14] investigated the discrete grey model and analyzed its connection with the traditional grey model. Wu et al. [15] investigated the grey model with fractional-order accumulation, which made the grey model more flexible. For the purpose of considering the effects of related factors on the behavioral system, Tien [16] initially proposed a novel grey model called GM(1, n), in which the "n" stands for the n − 1 driving variables. More recently, Wang et al. [17] presented a data-grouping-approach-based grey modeling method to predict quarterly hydropower production in China. Subsequently, they proposed a seasonal grey model based on accumulation operators for forecasting the seasonal electricity consumption of China [18]. Zeng et al. [19] predicted sequences of ternary interval numbers using a novel multivariable grey model. Ma et al. [20] raised a conformable fractional grey system model; they also investigated a novel fractional time-delayed grey model with the grey wolf optimizer [21]. A large number of related research studies continue to emerge. Zeng et al. [22] presented a new-structure grey Verhulst model for predicting China's tight gas production. In that model, they deduced the time-response function and an initial-value optimization method. The same year, they proposed another new-structure model [...]. In recent years, metaheuristic algorithms have been used in grey models for finding optimal parameter solutions. Zhang et al.
[34] optimized the background-value weighting coefficients of the grey model using the genetic algorithm. In [35], a multiobjective grey wolf optimizer was used to optimize the kernel-based nonlinear extension of the Arps decline model to ensure both prediction stability and accuracy. Wu et al. [36] used the particle swarm optimization algorithm to search for optimal system parameters of the nonlinear grey Bernoulli model. This study focuses on improving the nonlinear grey Bernoulli model, which was initially proposed by Chen [37] and is abbreviated as NGBM(1, 1). As is known, NGBM(1, 1) has been widely used in many problems with nonlinear characteristics and has been extended to general versions [38]. However, there is still room to improve its accuracy. The root cause of the loss of information in the conversion of the grey differential equation to the grey difference equation is discussed in the paper [39]. Following the idea of Ma et al. [7], the model parameters of the NGBM(1, 1) model are optimized to better match these two equations and reduce the prediction error. The main contributions of this paper are as follows: (1) the grey differential equation is transformed into linear form rather than sharing the same form as the traditional NGBM(1, 1) model; (2) the optimized parameters are constructed and the whale optimization algorithm (WOA) is used to search for the optimal power index; (3) three cases are employed to verify the effectiveness of INGBM(1, 1). The rest of this paper is organized as follows: Section 2 briefly describes the NGBM(1, 1) model and obtains the "linear" solution to the NGBM(1, 1) model. In Section 3, the NGBM(1, 1) model with improved parameters is deduced in detail. Section 4 provides two real-world examples to validate the effectiveness of the proposed model. Section 5 applies INGBM(1, 1) to predict the number of R&D institutions of higher education in China to reveal the forecasting ability of INGBM(1, 1), and the main conclusions are listed in the final section. Description of the Nonlinear Grey Bernoulli Model The nonlinear grey Bernoulli model (NGBM(1, 1)), originally proposed by Chen [37], has wide applications, especially in solving nonlinear problems. However, this model still has some drawbacks that impair the prediction accuracy of NGBM(1, 1). This section analyzes the root reason and proposes a novel method to reduce the modeling bias. First, a brief description of NGBM(1, 1) is introduced. Additionally, a "linear" solution to the whitening equation of NGBM(1, 1) is proposed to make the parameter optimization simpler. The Traditional Solution to the Nonlinear Grey Bernoulli Model. Assume X^(0) to be a nonnegative series; then the first-order accumulative generating operation (1-AGO) series is defined accordingly, and the associated differential equation is called the whitening equation of the nonlinear grey Bernoulli model, in which n, regarded as the power index, cannot be equal to one. With the two-point trapezoidal formula, the discrete difference equation can be written in the form of equation (3), where z^(1)(k) represents the background value, obtained as the average of two consecutive 1-AGO values. The model parameters can be estimated by the least-squares method. Therefore, the solution to equation (3) with x^(1)(1) = x^(0)(1) is given by equation (7). Using the first-order inverse accumulative generating operation (1-IAGO), the simulated values of X^(0), denoted X̂^(0), are obtained. 2.2. The "Linear" Solution to the Nonlinear Grey Bernoulli Model.
2.2. The "Linear" Solution to the Nonlinear Grey Bernoulli Model. This section transforms the whitening equation of the nonlinear grey Bernoulli model (NGBM (1, 1)) into a linear formulation rather than solving the whitening equation directly; that is, it does not share the same pattern as the traditional grey model. The detailed computational process is as follows. Analogously to Section 2.1, both sides of the whitening equation are multiplied by [x^(1)(t)]^(−c), which gives [x^(1)(t)]^(−c) dx^(1)(t)/dt + a [x^(1)(t)]^(1−c) = b. Setting y^(1)(t) = [x^(1)(t)]^(1−c), equation (10) can thereby be written as dy^(1)(t)/dt + a(1 − c) y^(1)(t) = b(1 − c), which is called the linearization of the NGBM (1, 1) model. Moreover, the two-point trapezoidal formula readily yields the discrete form y^(1)(k) − y^(1)(k − 1) + a(1 − c) z_y^(1)(k) = b(1 − c), where z_y^(1)(k) = 0.5 (y^(1)(k) + y^(1)(k − 1)); the parameters can then be estimated by the least-squares method. After estimating the model parameters, the whitening equation (11) is solved: multiplying both sides of equation (11) by the integrating factor e^(a(1−c)t) and integrating both sides over the interval [1, t] yields the time-response function, and applying 1-AGO and 1-IAGO gives the simulated values.

The solution of the NGBM (1, 1) model, whether in the linear or the nonlinear form, is essentially approximate, because the conversion between equations (11) and (12) is based on the two-point trapezoidal formula, itself an approximation. This "misplaced replacement" of the model parameters causes the following: (i) the grey difference equation does not match the grey differential equation, because the model parameters have different meanings in the two equations; (ii) the resulting prediction model is unsatisfactory in most situations. This indicates that the performance of the NGBM (1, 1) model must be improved; in other words, the model parameters should be optimized to better match equations (11) and (12) and to increase the forecasting ability of the NGBM (1, 1) model.

3. Parameter Optimization of the Nonlinear Grey Bernoulli Model

The whitening-equation parameters a and b and the power index c are the key parameters of the nonlinear grey Bernoulli model. In this section, these parameters are calculated.

3.1. Whitening Equation Parameter Calculation. The optimized counterparts of a and b are denoted p and q for simplicity. The optimized parameters are substituted into the time-response function, giving equation (18). Equation (18) is then substituted into the left-hand side of equation (4), giving equation (19). According to equation (4), the left-hand side L(t) should equal the right-hand side R(t); that is, L(t) − R(t) = 0. It is easy to see that Parts 1 and 2 in equation (20) both vanish; hence the optimized parameters p and q can be estimated. It is then reasonable to expect that the optimized parameters better match the differential equation to the difference equation and reduce the prediction error. For simplicity, the NGBM (1, 1) with the improved parameters is abbreviated INGBM (1, 1) in this study.

3.2. Power Index Estimation Based on the Whale Optimization Algorithm. In the above derivation, the power index c is assumed to be known. In practice, however, the power index varies from one situation to another and requires flexible adjustment to the given dataset. To solve this problem, an intelligent algorithm, the whale optimization algorithm (WOA for short), is employed to determine the power index automatically.
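As a companion to the derivation above, here is a minimal Python sketch of the linearized fit, assuming the substitution y^(1) = (x^(1))^(1−c) and the discrete form just given; the function and variable names are ours.

import numpy as np

def ngbm_linear_fit(x0, c):
    # Estimate a, b from the linearized whitening equation
    # dy/dt + a(1-c) y = b(1-c), with y^(1) = (x^(1))^(1-c).
    x0 = np.asarray(x0, dtype=float)
    y1 = np.cumsum(x0) ** (1.0 - c)
    zy = 0.5 * (y1[1:] + y1[:-1])             # background values of y^(1)
    # (y(k) - y(k-1)) / (1-c) = -a z_y(k) + b  -> ordinary least squares
    B = np.column_stack([-zy, np.ones_like(zy)])
    a, b = np.linalg.lstsq(B, np.diff(y1) / (1.0 - c), rcond=None)[0]
    return a, b

a, b = ngbm_linear_fit([2.1, 2.4, 2.9, 3.3, 3.9, 4.6], c=0.3)
print(a, b)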
Based on the humpback whale's hunting behavior of locating prey and encircling it, Mirjalili and Lewis designed the WOA [40]. In this optimizer, the current best candidate solution (search agent) is assumed to be the target prey or to lie near the optimum. Once the best search agent is defined, the other search agents update their positions towards it. (i) In the encircling behavior, the agents update their positions by equation (22), where t represents the current iteration, X*(t) is the current best agent, and D = (D_1, D_2, ..., D_d), j = 1, 2, ..., d, denotes the distance of the individual whale from the current best search agent in the j-th spatial dimension. In particular, the coefficient vectors A and C are defined as A = 2a·r − a and C = 2r, where r is a random number generated from [0, 1] and a is a convergence factor that decreases linearly from 2 to 0, that is, a = 2(1 − t/t_max). (ii) A spiral equation between the positions of whale and prey is also designed to mimic the helix-shaped movement of humpback whales, equation (27), where D′ = |X*(t) − X(t)| is the distance of the i-th whale to the prey, b is a constant fixing the shape of the logarithmic spiral, and l is a random number in [−1, 1]. (iii) In addition, humpback whales also search for prey in a random way according to the positions of each other; this behavior is written as equation (29), where X_rand is a random position chosen from the current population.

For clarity, the detailed steps of the WOA-based algorithm for finding the optimal c are listed as follows (a code sketch of the optimizer is given after Case 1 below). Step 1: set the algorithm parameters N, dim, and t_max. Step 2: randomly initialize the population of search agents. Step 3: calculate the fitness of each search agent f(X_i). Step 4: determine the current best search agent. Step 5: generate a random number p in [0, 1]; if p ≥ 0.5, update the position of the current search agent by equation (27); if p < 0.5 and |A| ≥ 1, update it by equation (29); if p < 0.5 and |A| < 1, update it by equation (22). Step 6: return to Step 3 until the optimal value c is found. Note that the fitness function f(X_i) is, as usual, defined as an objective function, here the MAPE shown in the next section. Moreover, the flowchart of the INGBM (1, 1) model is graphed in Figure 1.

4. Validation of the Nonlinear Grey Bernoulli Model

This section provides two examples to demonstrate the efficacy of the proposed model in comparison with four competing models: GM (1, 1), DGM (1, 1), NGBM (1, 1), and ONGBM (1, 1). Additionally, to evaluate the prediction accuracy of these grey models, the mean absolute percentage error (MAPE) and the root mean square error (RMSE) are applied to measure the level of prediction performance; they are defined as MAPE = (1/m) Σ_k |x̂^(0)(k) − x^(0)(k)| / x^(0)(k) × 100% and RMSE = [(1/m) Σ_k (x̂^(0)(k) − x^(0)(k))²]^(1/2). The grades of prediction performance for MAPE proposed by Lewis [41] are listed in Table 1.

Case 1. Forecasting an education-in-practice-intensive university: the example from [42] is used to test the efficacy and applicability of the grey models. The data points 1 to 7 are used to build the different grey models, and the final data point is used to test the prediction accuracies of these models. Accordingly, the five models' parameters are listed in Table 2, and the search for the parameter values of the proposed model by WOA is graphed in Figure 2. Consequently, the simulation and prediction results are shown in Table 3.
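Before turning to Case 2, the WOA update rules described in Section 3.2 can be summarized in code. The sketch below minimizes a generic fitness function (for INGBM (1, 1), the fitness would be the MAPE as a function of the power index c); the equation numbers in the comments refer to the update rules cited in the steps, and the default parameter values are ours.

import numpy as np

def woa_minimize(fitness, dim=1, bounds=(-2.0, 2.0), n_agents=100, t_max=100, b=1.0, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_agents, dim))        # Step 2: initialize agents
    fit = np.array([fitness(x) for x in X])              # Step 3: evaluate fitness
    best, best_fit = X[fit.argmin()].copy(), fit.min()   # Step 4: best agent
    for t in range(t_max):
        a = 2.0 * (1.0 - t / t_max)                      # convergence factor: 2 -> 0
        for i in range(n_agents):
            A = 2.0 * a * rng.random(dim) - a
            C = 2.0 * rng.random(dim)
            if rng.random() >= 0.5:                      # spiral update, Eq. (27)
                l = rng.uniform(-1.0, 1.0)
                D = np.abs(best - X[i])
                X[i] = D * np.exp(b * l) * np.cos(2 * np.pi * l) + best
            elif np.all(np.abs(A) >= 1.0):               # random search, Eq. (29)
                X_rand = X[rng.integers(n_agents)]
                X[i] = X_rand - A * np.abs(C * X_rand - X[i])
            else:                                        # encircling prey, Eq. (22)
                X[i] = best - A * np.abs(C * best - X[i])
        X = np.clip(X, lo, hi)
        fit = np.array([fitness(x) for x in X])          # Step 6: back to Step 3
        if fit.min() < best_fit:
            best_fit, best = fit.min(), X[fit.argmin()].copy()
    return best, best_fit

# Example: recover the minimum of a one-dimensional test function.
c_opt, f_opt = woa_minimize(lambda x: (x[0] - 0.3) ** 2)
print(c_opt, f_opt)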
Case 2. Forecasting subway passengers: the data sets of the example from [43] are broken down into two groups: the data from 2005 to 2012 are used to build the five grey models, and the remaining data are used to test the prediction accuracies of these models. First of all, the parameter values of the five grey models are computed in Table 4. Moreover, the track of the search for the optimal nonlinear parameter of the INGBM (1, 1) model by WOA is graphed in Figure 3. Furthermore, the simulation and prediction results are shown in Table 5. (Note: ζ represents the weighting parameter of the background value and is generally taken as 0.5; in ONGBM (1, 1), however, it is recommended to search for its optimal value. In addition, β_1 and β_2 are the parameters of DGM (1, 1) in this case.)

From Tables 1-5, the following conclusions can be drawn. (1) In Case 1, judged against the MAPE criteria listed in Table 1, all these models make effective predictions because of their low MAPE values, and the proposed model has a smaller value, indicating higher accuracy. As is known, a favorable predictor performs well in the simulation period and retains satisfactory accuracy in the verification period; here the proposed model is again better than the other grey models because of its lower MAPE value in the prediction period. In this case, the fitting and prediction errors of all the models are small, which shows that no overfitting has occurred. Moreover, the nonlinear models (NGBM (1, 1), ONGBM (1, 1), and INGBM (1, 1)) perform better than the linear models (GM (1, 1) and DGM (1, 1)), which shows that the nonlinear grey models can well capture the nonlinear characteristics of the data.

Regarding cost-effectiveness, the grey model is a kind of small-sample model, so its time consumption is usually very small. For example, in Case 1, the time costs of GM (1, 1), DGM (1, 1), NGBM (1, 1), and INGBM (1, 1) are 0.1638 s, 0.1489 s, 0.1744 s, and 0.1862 s, respectively; all are below 1 s and within the allowable range. In summary, the INGBM (1, 1) model enhances the prediction accuracy of the traditional NGBM (1, 1) model by optimizing the model parameters. Next, the proposed model is applied to a practical application.
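The two accuracy metrics used throughout the comparisons are straightforward; a minimal Python sketch follows (the percentage convention for MAPE matches the values quoted in the cases).

import numpy as np

def mape(actual, predicted):
    # Mean absolute percentage error, in percent.
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((predicted - actual) / actual))

def rmse(actual, predicted):
    # Root mean square error.
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.sqrt(np.mean((predicted - actual) ** 2)))

print(mape([100, 110, 121], [101, 108, 125]),
      rmse([100, 110, 121], [101, 108, 125]))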
5. Application

Universities play an irreplaceable role in building a country strong in science and technology in China: as the core institutions for cultivating talent and achieving technological innovation, they shoulder important responsibilities and missions in the National Innovation System. As expected, the number of R&D institutions of higher education has increased quickly in the past few years. Accurately forecasting the number of R&D institutions of higher education will provide a reference for the Ministry of Education of the People's Republic of China and the government to make better plans and strategies in advance. However, the effects of related factors on the number of R&D institutions of higher education are quite uncertain, and reliable observations are limited because of China's rapid development, which implies that traditional models (e.g., regression analysis) are not suitable for this case, with its small sample size and uncertain factors. The proposed model, INGBM (1, 1), is therefore clearly more suitable for this case with few observations. Empirically, the data, collected from the National Bureau of Statistics of the People's Republic of China and listed in Table 6, are divided into two groups: the data from 2011 to 2016 are used to build the five prediction models, and the others are used to assess the accuracy of these models.

As in Cases 1 and 2, all the parameters of these models are computed and listed in Table 7. Moreover, the track of the power index c found by WOA is exhibited in Figure 4. As a consequence, the simulated and predicted results are shown in Table 8.

In this case, ignoring the first item of the predicted results, the RMSE values (see Figure 5) of the five grey models are 0.14, 0.14, 0.04, 0.04, and 0.03 for simulation and 0.87, 0.85, 0.39, and 0.40 for prediction, respectively. Moreover, the MAPE values (see Figure 6) of these models are 1.17%, 1.18%, 0.28%, 0.27%, and 0.25% for simulation, and 5.54%, 5.46%, 2.32%, 2.39%, and 1.72% for prediction, respectively. Therefore, in the simulation period, the proposed model outperforms the other grey models with the lowest RMSE value of 0.03 and a MAPE value of 0.27%; the ONGBM (1, 1) model follows, with a relatively low MAPE value of 0.28%. As mentioned in [44], a proper forecasting method that performs excellently in simulation should also do well in the prediction stage. Observing Table 8, it is easy to see that the proposed model is again better than the other grey models because of its lower RMSE value of 0.40 and MAPE value of 1.72%. Interestingly, ONGBM (1, 1) is the second best, its MAPE value being only a bit higher than that of INGBM (1, 1), which implies that the NGBM (1, 1) improved through optimization of the background value can be regarded as an alternative model for predicting the number of R&D institutions of higher education in this paper. In this case, the prediction and fitting errors of all models are small, which shows that no overfitting occurred in the modeling. At the same time, the prediction performance of the nonlinear grey models is better than that of the linear models, which shows that the nonlinear grey models can effectively capture the nonlinear characteristics of the data. Finally, the improved model has the highest accuracy, which indicates that our improvement strategy is effective.
In order to further verify the advantages of WOA, three other kinds of intelligent optimizer, the grey wolf optimizer (GWO) [45], the particle swarm optimizer (PSO) [46], and the ant lion optimizer (ALO) [47], are used for comparison. These four algorithms are all excellent optimizers with their own characteristics and advantages. The population size of each of the four algorithms is set to 100, and the number of search iterations to 100. The population is initialized 100 times to compare the final MAPE with the corresponding nonlinear parameters and to calculate the average time. For the four optimization algorithms, the MAPE and the corresponding nonlinear parameters after 30 runs are shown in Figure 7, and the time consumption is shown in Table 9. It can be seen from Figure 7 and Table 9 that the operation of WOA is relatively stable, and the running time of WOA, 9.9931 s, is relatively small. Overall, WOA is a reasonable choice of optimizer.

6. Conclusion

This paper aims to further improve the prediction accuracy of the nonlinear grey Bernoulli model (NGBM (1, 1)); as a result, the nonlinear grey Bernoulli model with improved parameters, abbreviated INGBM (1, 1), is proposed. This study does not share the same differential equation as the traditional NGBM (1, 1) model; instead, the differential equation is transformed into a linear form. Besides, considering that the "misplaced replacement" is the root cause of the mismatch when converting the differential equation into the difference equation, the model parameters are optimized to better match these two equations and reduce the prediction error. In particular, the whale optimization algorithm is used to determine the optimal power index of INGBM (1, 1) automatically. Three examples are employed to validate the proposed model's effectiveness in comparison with commonly used grey models. In all cases, the proposed model outperforms the other grey models, implying that the INGBM (1, 1) model can effectively solve nonlinear problems with small sample sizes and provide valuable information for the decision-makers concerned to make strategies in advance. Although INGBM (1, 1) works very well, some limitations need to be overcome in future work: (1) although the model performs well, overfitting may occur in some special cases; (2) more accurate parameter values could be obtained by combining multiple optimizers.

Figure 2: The track of searching for the optimal power index by WOA.
Figure 3: The track of searching for the optimal power index by WOA.
Figure 4: The track of searching for the optimal power index by WOA.
Table 1: The criteria for MAPE proposed by Lewis.
Table 2: Parameter values for five grey models.
Table 3: Simulated and predicted results by different grey models.
Table 4: Parameter values for five grey models.
Table 5: Simulated and predicted results by different grey models.
Table 6: The number of R&D institutions of higher education from 2011 to 2018.
Table 7: Parameter values for five grey models.
Table 8: Simulated and predicted performance by five grey models using raw data of the number of R&D institutions of China's higher education.
Table 9: Average time cost of four optimizers.
5,170.2
2021-03-10T00:00:00.000
[ "Engineering", "Mathematics", "Environmental Science" ]
Ageing leads to nonspecific antimicrobial peptide responses in Drosophila melanogaster

Evolutionary theory predicts a late-life decline in the force of natural selection, possibly leading to late-life deregulation of the immune system. A potential outcome of such immune deregulation is the inability to produce specific immunity against target pathogens. We tested this possibility by infecting multiple Drosophila melanogaster lines (with bacterial pathogens) across age-groups, where either individual or different combinations of Imd- and Toll-inducible antimicrobial peptides (AMPs) were deleted using CRISPR gene editing. We show a high degree of non-redundancy and pathogen-specificity of AMPs in young flies: in some cases, even a single AMP could confer complete resistance. In contrast, ageing led to a complete loss of such specificity, warranting the action of multiple AMPs across the Imd- and Toll-pathways during infections. Moreover, the use of diverse AMPs either had no survival benefit or even carried survival costs post-infection. These features were also sexually dimorphic: females expressed a larger repertoire of AMPs than males, but extracted equivalent survival benefits. Finally, the age-specific expansion of the AMP pool was associated with downregulation of negative regulators of the Imd-pathway and potential damage to renal function, features of poorly regulated immunity. Overall, we establish ageing as an important driver of nonspecific AMP responses, across sexes and bacterial infections.

We also used '∆AMPs' flies, in which independent mutations were recombined into a background lacking 10 inducible AMPs. However, we note that the impact of ∆AMPs could be due to AMPs having specific effects or to the combinatorial action of multiple co-expressed AMPs. To tease apart these effects, we also included various combined mutants in which different sets of AMPs were deleted, tested as 'young' and 'old' adults, respectively. We transferred the adults to fresh food vials every 3 days during the entire experimental window. By screening the single mutants along with the combined genotypes, we were able to compare the changes in specific immunity as a function of age.

II. Assay for post-infection survival. Flies were infected by pricking with a pin dipped into a bacterial suspension made from a 5 mL overnight culture (optical density of 0.95, measured at 600 nm) of either Providencia rettgeri or Pseudomonas entomophila, adjusted to an OD of 0.1 and 0.05, respectively (see SI methods for details). In total, we infected 160-280 flies/sex/infection treatment/bacterial pathogen/age-group/fly genotype and then held them in food vials in groups of 20 individuals (for each treatment, sex, age-group, and pathogen type, we thus had 8-14 replicate food vials). We carried out sham infections with a pin dipped in sterile phosphate-buffered saline (1X PBS). We then recorded survival every 4 hours for 5 days. Due to the logistical challenges of handling a large number of flies, we infected each sex and age-group with P. rettgeri (or P. entomophila) separately, in multiple batches, where they were handled as: (i) groups AB, BC, AC; (ii) groups A, B & C; (iii) Imd-responsive and (iv) Toll-responsive single mutants for P. rettgeri; or (i) groups AB, BC, AC, A, B & C; (ii) Imd-responsive and (iii) Toll-responsive single mutants for P. entomophila.
Every time, we also assayed iso-w1118 flies as a control to facilitate a meaningful comparison across the different batches. Therefore, although sexes and age-groups for each mutant were not directly comparable, their relative effects with respect to the control iso-w1118 were estimated across sexes, age-groups and pathogen types. Note that we compared each mutant separately with iso-w1118 flies, since we only wanted to capture their changes in infection susceptibility relative to control flies. For each batch of flies, across pathogen types, sexes and age-groups, we analysed the survival data with a mixed-effects Cox model, using the R package 'coxme' (Therneau, 2015). We specified the model as: survival ~ fly line (individual AMP mutant line vs iso-w1118) + (1|food vial), with fly line as a fixed effect and replicate food vials as a random effect. Since none of the fly lines showed any mortality after sham infection, we were able to quantify the susceptibility of each infected mutant line (AMP knockouts) with respect to the control flies (iso-w1118 group) as the estimated hazard ratio of infected AMP mutants versus control flies (hazard ratio = rate of deaths occurring in infected AMP mutants / rate of deaths occurring in the iso-w1118 group). A hazard ratio significantly greater than one indicated a higher risk of mortality in the AMP mutant individuals.

Note that the above experimental design allowed us to repeat the assay for post-infection survival of young and old iso-w1118 flies infected with P. rettgeri (or P. entomophila) in 4 (or 3) independently replicated experiments. We thus estimated the effects of ageing on their post-infection survival, using a mixed-effects Cox model specified as: survival ~ age + (1|food vial), with age as a fixed effect and food vials as a random effect.

III. Assay for bacterial clearance. Mortality of control flies (iso-w1118) injected with the experimental infection dose began around 24 hours and 20 hours after infection with P. rettgeri and P. entomophila, respectively (Fig. S2A). We therefore used these time-points to estimate the bacterial load across the age-groups as a measure of pathogen-clearance ability across AMP genotypes (see SI methods for the detailed protocol). We homogenized flies in groups of 6 in sterile PBS (n = 8-15 replicate groups/sex/treatment/age-group/fly line), followed by plating them on Luria agar. Due to logistical challenges with the large number of experimental flies, we handled each sex, age-group and pathogen type separately and in multiple batches, as described above. Also, similar to the post-infection survival data, we were only interested in comparing the changes in bacterial load of each mutant line relative to the control iso-w1118 flies across experimental groups. We thus analysed the bacterial load data of each mutant genotype against iso-w1118 flies separately across age-groups, sexes and pathogen types. Since the residuals of the bacterial load data were non-normally distributed (confirmed using the Shapiro-Wilk test), we log-transformed the data, but the residuals remained non-normal. Subsequently, we analysed the log-transformed data using a generalised linear model best fitted to a gamma distribution, with fly line (i.e., control iso-w1118 line vs individual AMP knockout line) as a fixed effect.
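The hazard-ratio analysis was carried out in R with 'coxme'; for readers who prefer Python, the sketch below approximates it with the lifelines package, using vial-clustered robust standard errors in place of the (1|food vial) random effect (an approximation, not an equivalent model). The file and column names are hypothetical.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical columns: hours until death or censoring, event flag,
# line (0 = iso-w1118 control, 1 = AMP mutant), replicate vial id.
df = pd.read_csv("survival_data.csv")

cph = CoxPHFitter()
cph.fit(df[["time_h", "dead", "line", "vial"]],
        duration_col="time_h", event_col="dead",
        cluster_col="vial")                 # robust SEs grouped by food vial

hr = np.exp(cph.params_["line"])            # >1: mutants die faster than controls
print(hr)
print(cph.summary[["coef", "exp(coef)", "p"]])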
IV. Assay for Malpighian tubule activity, as a proxy for immunopathological damage. Malpighian tubules (MTs), the fluid-transporting excretory epithelium of all insects, are prone to increased immunopathology following an immune activation, owing to their position in the body and the fact that they cannot be protected by an impermeable membrane because of their functional requirements (Dow et al., 1994; Khan et al., 2017). Previous experiments have shown that the risk of such immunopathological damage can increase further with ageing in the mealworm beetle Tenebrio molitor (Khan et al., 2017). It is possible that the nonspecific AMP responses with ageing in Drosophila were also associated with increased immunopathological damage to MTs. We thus estimated the fluid-transporting capacity of functional MTs dissected from experimental females at 4 hours after immune challenge with 0.1 OD P. rettgeri (n = 12-20 females/infection treatment/age-group), using a modified 'oil drop' technique as outlined in previous studies (Dow et al., 1994; Li et al., 2020) (also see SI methods). This method provides a functional estimate of their physiological capacity by assaying the ability to transport saline across the active cell wall into the tubule lumen. The volume of the secreted saline droplet is negatively correlated with the level of immunopathological damage to MTs. Since we collected the flies across the age-groups on different days, we analysed the MT activity data as a function of infection status for each age-group separately, using a generalized linear mixed model best fitted to a quasibinomial distribution.

V. Gene expression assay. Finally, we note that the transcription of negative regulators of the Imd-pathway, such as pirk and caudal, is important to ensure an appropriate level of immune response following infection with Gram-negative bacterial pathogens, thereby avoiding immunopathological effects (also see SI methods). We analysed the gene expression data using ANOVA (see SI methods, section iv, for details).

Results

I. Ageing leads to an expansion of the required AMP repertoire against P. rettgeri infection. To gain a broad understanding of how AMP specificity changes with age, we first tested mutants lacking different groups of AMPs from either the Imd- (e.g., group B) or the Toll-pathway (e.g., group C) (pathway-specific), or combined mutants lacking pathway-specific groups in different combinations (e.g., group AB, BC or AC) (see Fig. S1 for a description of the mutants). Young males lacking group-AB and -BC AMPs were highly susceptible to P. rettgeri infection (Fig. 1A; Table S2), and this was generally associated with 10-100-fold increased bacterial loads in these mutants relative to the iso-w1118 control (Fig. 1B; Table S3). Subsequent assays with pathway-specific (i.e., Imd- or Toll-pathway) AMP combinations (group A, B or C) confirmed that such effects were primarily driven by the Imd-regulated group-B AMPs shared between the AB and BC combinations (Fig. 1C; Fig. S3; Table S2), and were equally driven by increased bacterial load (Fig. 1D; Table S3). We found a comparable pattern in young females as well, except that flies lacking the group-BC combination of AMPs were not negatively affected by infection (Fig. 1E, 1F, 1G, 1H; Fig. S4; Table S2, S3).
In contrast to young flies, most of the pathway-specific or combined mutants became highly susceptible to P. rettgeri infection with age, except females of the group-A mutant lacking Def. This suggests a possible sexually dimorphic effect of Defensin in P. rettgeri infection, which appears to be important for males but not for females (Fig. 1C, 1G; Fig. S3, S4; Table S2). The increased susceptibility (Table S2) and increased bacterial growth (Fig. 1D, 1H; Table S3) in these mutants clearly indicated that other AMPs, responsive to Gram-positive bacteria (e.g., Def) or fungal pathogens (e.g., Mtk, Drs), might be needed as well.

II. Dpt-specificity against P. rettgeri infection is sex-specific and disappears with age. Next, we decided to test the role of the individual AMPs deleted in the pathway-specific or compound mutants across age-groups and sexes. Interestingly, Dpt provided complete protection against P. rettgeri only in young males, but not in females or older males (Fig. 2A, 2E; Table S5). This was verified by using fly lines where DptA and DptB are reintroduced on an AMP-deficient background (∆AMPs+Dpt). Dpt reintroduction could fully restore survival to that of wild-type flies only in young males, and this was associated with a decrease in CFUs compared to the Dpt deletion mutant (Fig. 2B; Table S6). However, reintroduction of functional DptA and DptB (∆AMPs+Dpt) in young or old females did not result in lower CFUs (Fig. 2F; Table S6), and these flies remained highly susceptible to P. rettgeri (Fig. 2E; Table S2). Young (or old) females also showed increased bacterial loads and associated higher mortality (Table S5, S6). Older ∆AMPs+Dpt males, on the other hand, could limit the bacterial burden to levels as low as those of the control iso-w1118 flies (Fig. 2B; Table S6), but still showed very high post-infection mortality (Fig. 2A; Table S5). These results from older males thus suggested that the ability to clear pathogens might not always translate into an improved ability to survive after infection (Fig. 2A, 2B; Table S5, S6).

Why did females always require AMPs other than Dpt after P. rettgeri infection? Although the mechanisms behind the sex-specific expansion of the AMP repertoire are unknown, a possible explanation is that females show an inherently lower expression level of Dpt relative to males, though this is not yet experimentally validated. Also, both males and females showed a further extension to a Toll-responsive AMP repertoire with ageing, in addition to the role of Def (included in group A) described above in older flies. Taken together, these results describe ageing as a major driver behind the loss of specificity of AMP responses.

Additionally, we also note that a few other mutations, such as the deletion of Dro and Dro-Att, which otherwise had no effect on the survival of P. rettgeri-infected young males, caused a significant increase in bacterial load (Fig. 2A, 2B; Table S5, S6). Together, these results not only underscored the multifaceted role of AMPs, but also provided functional resolution at the level of single AMPs such as Dpt, which, in addition to playing the canonical role in resisting the infection, also aided in withstanding the effects of increased pathogen growth caused by the dysfunction of other AMPs (Fig. 2A, 2B; Table S5, S6).

III. Expansion of the required AMP repertoire does not improve, and even reduces, survival in both older males and females infected with P. entomophila.
To test whether the age-related loss of AMP specificity was specific to P. rettgeri, or also occurred with other bacterial pathogens, we repeated the infections with P. entomophila (Table S7); older flies died faster than young flies (old vs young: 4-fold vs 2-fold; Fig. 3A, 3C). In contrast to younger flies, where only the group-B, -AB and -BC mutants were susceptible to P. entomophila infection, all the other pathway-specific or combined mutants of older males and females were also highly sensitive to infection (Fig. 3A, 3C; Table S7). The single-AMP mutants of older flies likewise showed increased mortality (Table S9), though it is striking that this increased mortality was not associated with increased microbe loads relative to iso-w1118 in this case (Fig. 4D, 4H; Table S10). Overall, this is comparable to P. rettgeri infection, where a potential crosstalk between the Toll and Imd immune-signalling pathways has already been implicated with ageing (Fig. 2; Table S5, S6). Also, the broad similarity between the age-specific expansion and cross-reactivity of the AMP repertoire against two different pathogens indicated the possibility that non-specificity can indeed be a generalised feature of an ageing immune system. Moreover, the increased mortality in older flies infected with P. entomophila, despite the involvement of a higher number of AMPs, was perhaps an indication of their exacerbated cytotoxic effects with age (Badinloo et al., 2018).

IV. Ageing-induced expansion of the required AMP repertoire was associated with features of poorly regulated immunity.

However, regardless of sex and pathogen, ageing led to a more drastic expansion of the AMP repertoire: instead of deploying only the canonical expression of Imd-responsive AMPs to counter Gram-negative bacterial infections, older males and females also used AMPs from the Toll pathway. Surprisingly, despite using more diverse AMPs, this late-life expansion either did not confer any survival benefit (during P. rettgeri infection in older males) or was associated with survival costs (after P. entomophila infection). We thus speculate that the nonspecific use of AMPs with ageing was unnecessary, perhaps indicating an immune system failing to control over-activation.

We thank SERB-DST India for the grant supplement (No. ECR/2017/003370) to I. Khan. (See Table S1 for the qPCR primers used in this study.) For each cDNA sample, across the genes of interest, we had two technical replicates.
3,596.6
2022-08-20T00:00:00.000
[ "Biology" ]
Matrix Analysis of the Hexagonal Model and Its Applications in the Global Mean First-Passage Time of Random Walks

Recent advances in graph-structured learning have demonstrated promising results on the graph classification task. However, making such methods scalable to huge graphs with millions of nodes and edges remains challenging due to their high temporal complexity. In this paper, using the decomposition theorem for the Laplacian polynomial and the characteristic polynomial, we establish an explicit closed-form formula for the global mean first-passage time (GMFPT) of the hexagonal model. Our method is based on the concept of the GMFPT, which represents the expected number of steps of a walk starting from a given vertex. The GMFPT is a crucial metric for estimating transport speed for random walks on complex networks. Through extensive matrix analysis, we show that obtaining the GMFPT via spectra provides an easy calculation for large networks.

Graph classification has applications in areas such as text organization, predicting chemical toxicity, and categorizing public buildings in human interactions. Though the permutation of indices and the encoding's runtime effectiveness are hurdles in graph classification, for a simple graph of low order it is easy to construct an adjacency matrix and check the properties of the graph [14]. As a result, the best encoding strategy for simple, finite graph classification is one invariant over node permutations. The adjacency-matrix strategy is also convenient in graph neural networks [32]. Prevailing graph classification methods frequently necessitate an adjacency-based assessment of the structures or rely entirely on algebraic and spectral signatures, which are difficult to calculate; appropriate representation approaches are therefore needed to encode the structure of the graph concisely and efficiently. However, transformation invariance, scalability, and the runtime efficiency of the encoding remain hurdles in graph classification. Because graph vertices lack an inherent order, numerous adjacency matrices can represent the same graph; consequently, the optimal encoding method for graph classification is one that is invariant under permutations of the vertices. Spectral quantities have recently been used to quantify the robustness of networks in distributed networked control systems based on noisy data. In fact, such quantities have many comparable descriptions, such as the spectra of graphs, and they can be used to extract graph representations.

It is well known that hexagonal systems play a significant part in theoretical chemistry, since they are the natural graph representations of benzenoid hydrocarbons [11]. As a result, hexagonal systems have received a lot of attention. Kennedy and Quintas investigated the enumeration of perfect matchings in an arbitrary hexagonal chain model [17]. In [10] and [22], the authors determined the Wiener index (resp. edge-Szeged index) of a hexagonal model. Li et al. [33] studied the normalized Laplacian of penta-graphene with applications. For further studies on the Laplacian and normalized Laplacian, we refer to [34] and [35].
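As a concrete illustration of the adjacency/Laplacian machinery used below, here is a minimal Python sketch that builds a small honeycomb patch and extracts its Laplacian spectrum; networkx's built-in hexagonal lattice generator is our stand-in for the hexagonal models discussed in the paper.

import networkx as nx
import numpy as np

G = nx.hexagonal_lattice_graph(1, 3)          # a small patch of three hexagons
A = nx.to_numpy_array(G)                      # adjacency matrix A(N)
L = np.diag(A.sum(axis=1)) - A                # Laplacian L(N) = D(N) - A(N)
mu = np.sort(np.linalg.eigvalsh(L))           # Laplacian spectrum

print(G.number_of_nodes(), G.number_of_edges())
print(mu[:2])   # mu_1 = 0, and mu_2 > 0 exactly when the network is connected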
Reference [18] provided a comprehensive explanation of the characteristic polynomial of a hexagonal model. In [28], an explicit closed-form formula for the sum of the resistance distances of a hexagonal chain was obtained with the help of the Laplacian spectrum.

II. PRELIMINARIES

The networks in this paper are simple, undirected, finite and connected. Let N = (U_N, E_N) be a network, where U_N denotes its node set and E_N its link set. We denote the order of N by n = |U_N| and its size by |E_N|. For further notation, we refer to [1], [4], [24], and [25]. Let A(N) denote the adjacency matrix of N, whose entry (i, j) is 1 if and only if ij ∈ E_N, and 0 otherwise. Define the Laplacian matrix of N as L(N) = D(N) − A(N), where D(N) is the diagonal matrix of node degrees. Let μ_1 < μ_2 ⩽ · · · ⩽ μ_n be the eigenvalues of L(N). Then μ_1 = 0, and μ_2 > 0 if and only if N is a connected network. For further studies on L(N), we refer to the interesting papers [13], [20], [21]. The fact that λ_1 = 0 is well known in spectral graph theory, and λ_2 > 0 when the graph G is assumed to be connected. We denote the spectrum of L(G) by Sp(G) = {λ_1, λ_2, . . . , λ_n}; for more details on L(G), we suggest [13]. The distance between nodes i and j of a graph G is defined as the length of a shortest i-j path in G [5]. The characteristic polynomial of an n × n matrix M is φ(M) = det(tI − M), where I denotes the n × n identity matrix. An automorphism of a network G is a permutation of the nodes of G that maps links to links; it may be written as a product of disjoint 1-cycles and transpositions, that is, σ = (t_1)(t_2) · · · (t_m)(l_1, q_1)(l_2, q_2) · · · (l_k, q_k).

The following lemmas are well-known consequences of the matrix-tree theorem. Lemma 2.2 ([16], [19]): Let G be an n-vertex connected graph with Laplacian eigenvalues 0 = μ_1 < μ_2 ⩽ · · · ⩽ μ_n; then the sum of the resistance distances of G satisfies Σ_{i<j} R_ij(G) = n Σ_{k=2}^{n} 1/μ_k. Lemma 2.3 ([26]): For the cycle C_n on n vertices, Σ_{i<j} R_ij(C_n) = (n³ − n)/12.

Labelling the vertices of HM_n as described in Fig. 1, an automorphism of HM_n is given by g, and the submatrices ξ_11 and ξ_12 are constructed accordingly. Assume that the eigenvalues of the matrix S are μ_j, j = 1, 2, . . . , 2n; note that μ_j > 0 for all j. In view of Lemma 2.1, the Laplacian spectrum of T_n follows, with α_2n ≠ 0.

III. THE MFPT OF HM_n AND IMPORTANT LEMMAS

Given a graph G, the first-passage time from node i to node j is the number of steps a random walk starting at i requires to reach j for the first time, and the mean first-passage time (MFPT) F_ij is its expected value. The MFPT is a vital quantity for approximating the transport speed of random walks on graphs [15], [31]. The GMFPT, denoted ⟨F(G)⟩, measures the search efficiency of the walk and is obtained by averaging F_ij over the |V_G| possible starting nodes and the |V_G| − 1 possible end points [12], that is, ⟨F(G)⟩ = (1/(|V_G|(|V_G| − 1))) Σ_{i ≠ j} F_ij, with the understanding that |V_G| ≠ 1. By [9], the commute time C_ij between the vertices i and j is exactly 2|E_G| r_ij, i.e., F_ij + F_ji = 2|E_G| r_ij.

Lemma 3.1: Let B be the 2n × 2n matrix given below, and let χ_i and χ'_i be two submatrices of B. Put η_i := det χ_i and η'_i := det χ'_i, with η_0 = 1 and η'_0 = 1. Then formulas (5) and (6) hold for 0 ⩽ i ⩽ 2n. Proof: We first show (5). It is straightforward to check that η_1 = 4, η_2 = 7, η_3 = 24. For 3 ⩽ i ⩽ 2n, we expand det χ_i along its last row. For 0 ⩽ i ⩽ n, set c_i = η_{2i}, and for 0 ⩽ i ⩽ n − 1, set d_i = η_{2i+1}, with c_0 = 1 and d_0 = 4.
For i ⩾ 1, the determinant expansions give the pair of recurrences (7). From the first equation in (7), one can express d_i in terms of the c_i; substituting the resulting values of d_{i−1} and d_i into the second equation of (7) shows that η_i fulfils the four-term recurrence (8), whose characteristic equation is r⁴ = 6r² − 1. Its roots are r = ±(1 + √2) and r = ±(√2 − 1), since r² = 3 ± 2√2 and 3 + 2√2 = (1 + √2)², 3 − 2√2 = (√2 − 1)². Hence the general solution of (8) is η_i = ζ_1 (1 + √2)^i + ζ_2 (−1 − √2)^i + ζ_3 (√2 − 1)^i + ζ_4 (1 − √2)^i. Together with the initial conditions in (8), this gives a linear system whose unique solution determines ζ_1, ζ_2, ζ_3, ζ_4; substituting them into (9) yields the result. Along parallel lines, it is straightforward to obtain (6), which is omitted here.

Next, we claim that the coefficient of r^{2n−1} in F(r) is the same as the coefficient of r^{2n−1} in μ_1(r)μ_2(r); similarly, the coefficient of r^{2n−3} in F(r) is the same as the coefficient of r^{2n−3} in μ_1(r)μ_2(r). Hence, in order to determine −α_{2n−1}, it suffices to determine the coefficients of r^{2n−1} and r^{2n−3} in F(r). □ Notice that 1 + √2 and 1 − √2 are the roots of r² − 2r − 1, so a partial-fraction decomposition of the form (13), with a, b, c, d real numbers, may be assumed. Comparing both sides of (13) gives a = −c = 3/(16√2) and b = d = 0. Hence the coefficient of r^{2n−1} follows, and through parallel calculations the remaining coefficients, including the one involving the factor raised to the power 2n + 1, are obtained by direct calculation; in view of (3.5), the stated expression results.

Proof: Note that |V_{HM_n}| = 4n. By Lemma 2.2, one obtains formula (15), where θ_i (1 ⩽ i ⩽ 2n − 1) and μ_j (1 ⩽ j ⩽ 2n) are the eigenvalues of the matrices R and S, respectively. On the one hand, in view of Lemma 2.2 we have (16). On the other hand, μ_1, μ_2, . . . , μ_2n are the roots of det(rI_{2n} − S) = r^{2n} + α_1 r^{2n−1} + · · · + α_{2n−1} r + α_{2n} = 0, so by Vieta's theorem Σ_{j=1}^{2n} 1/μ_j = −α_{2n−1}/α_{2n}, which gives (17). Together, (16) and (17) imply the result immediately. □ In order to obtain the resistance-distance sum of HM_n, it therefore suffices to determine α_{2n−1} and det S in (15). Based on Lemmas 3.1-3.5, we obtain Lemma 3.6: Let HM_n be a zig-zag polyhex nanotube with n hexagons; then the closed-form expression for the resistance-distance sum holds.

IV. PROOF OF THE THEOREM

Note that |E_{HM_n}| = 5n and |V_{HM_n}| = 4n. From (3) and (4), the closed-form formula for the GMFPT of HM_n follows, which gives the desired result.

V. NUMERICAL RESULTS AND DISCUSSION

In this section, using Matlab, we give some graphical interpretations of the relation between the number of hexagons n and ⟨F(HM_n)⟩. We also investigated ⟨F(HM_n)⟩ for n = 3 and n = 4. For simplicity, write M_g = ⟨F(HM_n)⟩. Figure 2 shows that M_g increases as we increase the number of hexagons n. Figure 3 shows that M_g increases for both n = 3 and n = 4, but the slope of M_g for n = 4 is larger than for n = 3; the spectral computation of the GMFPT thus remains efficient for large node counts. Hence, we have developed a unified strategy for obtaining the scaling properties of ⟨F(HM_n)⟩ and carried out a systematic study of the GMFPT. Since spectra are crucial in determining the scaling of the GMFPT, we used a closed-form formula for the GMFPT over all pairs of nodes. Finally, examining the GMFPT of the network, we found that as the number of hexagons grows, so does the GMFPT, demonstrating a direct relationship between the hexagon count and this network invariant. The GMFPT between source and target captures search efficiency when many random walks are weighted equally. As demonstrated in Fig. 2 and Fig. 3, the GMFPT computation works efficiently for large networks.
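The spectral route to the GMFPT used above is easy to check numerically. Below is a minimal Python sketch, assuming the standard relations F_ij + F_ji = 2|E| R_ij and Σ_{i<j} R_ij = n Σ_{k≥2} 1/μ_k (Lemma 2.2); networkx's honeycomb generator stands in for HM_n.

import networkx as nx
import numpy as np

def gmfpt_spectral(G):
    # <F(G)> = (2|E| / (n - 1)) * sum_{k>=2} 1/mu_k
    n, m = G.number_of_nodes(), G.number_of_edges()
    L = nx.laplacian_matrix(G).toarray().astype(float)
    mu = np.sort(np.linalg.eigvalsh(L))
    return 2.0 * m / (n - 1) * float(np.sum(1.0 / mu[1:]))

def gmfpt_resistance(G):
    # Cross-check via resistance distances from the Laplacian pseudoinverse.
    n, m = G.number_of_nodes(), G.number_of_edges()
    Lp = np.linalg.pinv(nx.laplacian_matrix(G).toarray().astype(float))
    d = np.diag(Lp)
    R = d[:, None] + d[None, :] - 2.0 * Lp    # R_ij = Lp_ii + Lp_jj - 2 Lp_ij
    return m * R.sum() / (n * (n - 1))

G = nx.hexagonal_lattice_graph(1, 4)
print(gmfpt_spectral(G), gmfpt_resistance(G))   # the two values agree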
VI. CONCLUDING REMARKS

In this contribution, we obtained the GMFPT of HM_n. Note that Carmona, Encinas and Mitjana studied the resistance distances of ladder-like graphs [7]. Very recently, Barrett, Evans and Francis [2] studied the effective resistances in straight linear 2-trees (i.e., linear triangle chains) and some related problems. It would be quite interesting to study the effective resistances of the Möbius hexagonal ring and of the Möbius pentagonal ring; we plan to do so in the near future.

Declarations. Conflicts of interest/Competing interests: the authors declare no conflict of interest or competing interests. Data availability: not applicable. Code availability: not applicable. Authors' contributions: all authors contributed equally to this work.
3,012.8
2023-01-01T00:00:00.000
[ "Computer Science" ]
Testing the $\chi_{c1}\, p$ composite nature of the $P_c(4450)$

Making use of a recently proposed formalism, we analyze the composite nature of the $P_c(4450)$ resonance observed by LHCb. We show that the present data suggest that this state is almost entirely made of a $\chi_{c1}$ and a proton, due to the close proximity to this threshold. This also suppresses the decay modes into other, lighter channels, represented in our study by $J/\Psi p$. We further argue that this is very similar to the case of the scalar meson $f_0(980)$, which is located close to the $K\bar K$ threshold and has a suppressed decay into the lighter $\pi\pi$ channel.

Introduction

A clear peak has been observed in the invariant mass distribution of the J/Ψp subsystem in the three-body decay Λ_b → K⁻ J/Ψp in Ref. [1]. This signal is interpreted as a new pentaquark resonance, the P_c(4450). These results have spurred a plethora of theoretical investigations, see Refs. [2-11]. Clearly, it is still necessary to obtain further experimental information to confirm that it is actually a resonance and not an anomalous-threshold singularity on top of the χ_c1 p branch-point threshold [7]. Other proposals, as in Ref. [4], consider the P_c(4450) as a composite of much heavier channels, some of them having a large width. Here, we further explore the ideas first given in Ref. [7], as the extremely close proximity of the mass of the P_c(4450) to the χ_c1 p threshold can hardly be accidental. Independently of its true nature, and to simplify the argumentation, we employ the term "resonance" for the peak P_c(4450) unveiled in Ref. [1].

We proceed by analogy with the J^PC = 0^++ resonance f_0(980), which couples mainly to two channels, a lighter one (ππ) and a heavier one (K̄K), with the latter threshold almost coinciding with the resonance mass. The lighter channel is the one that drives the relatively small width of the f_0(980), in the sense that it is responsible for this resonance developing a width, although the resonance owes its origin to the nearby K̄K threshold [12]. In our present case of the P_c(4450) resonance, the J/Ψp state is assumed to play the role that the ππ channel plays for the f_0(980), because, despite the resonance having plenty of phase space to decay into this channel, the P_c(4450) width is rather small. This indicates that the coupling to this channel is suppressed. Following this line of reasoning, the χ_c1 p channel is assumed to play the role analogous to that of K̄K for the f_0(980), because its threshold lies almost on top of the mass of the P_c(4450), dragging the mass of the resonance towards the threshold through its large coupling, and furthermore providing the main contribution to the resonance composition. In the following, we work out the consequences of such a scenario making use of the formalism developed in Ref. [13]. This allows us to calculate the compositeness coefficients for the involved channels and also the partial decay widths. As we will show, the present data can be well described under the assumption that the P_c(4450) is a χ_c1 p composite, either a resonance or an anomalous-threshold singularity; how to exclude the latter scenario was already discussed in Ref. [7]. In particular, a precise measurement of the partial widths would be a clear test of this composite-nature scenario for the P_c(4450).
Compositeness condition and the width of the P_c(4450)

As stated above, we consider a two-channel scenario, where the extreme proximity of the P_c(4450) to the second channel, χ_c1 p, suggests that the resonance is in fact a composite of this latter state, suppressing in this way the otherwise large width into the lighter channel J/Ψp. To further investigate this possibility, we make use of the recent work of Ref. [13], which established a well-defined procedure to interpret the compositeness of a resonance in a standard probabilistic way, starting from the compositeness relation [14-19]. The final result is a simple prescription consisting in changing the phase of every coupling separately, such that the compositeness coefficient of the corresponding channel is a positive real number. In this way, if the original couplings are denoted by γ_i, the procedure of Ref. [13] requires transforming them as in Eq. (1). This change results from the determination of the physically suitable unitary transformation of the S-matrix. Each of these unitary transformations implies a new compositeness relation, all of them being associated with the same resonance projection operator A. A detailed account of this theory is developed in Ref. [13]. In this way, the compositeness coefficients X_i transform as in Eq. (2), where X^f_i is the final compositeness coefficient for channel i, representing the weight of this two-body channel in the resonance state. In that equation, s_P is the pole position of the resonance and G_i(s) is the unitarity scalar loop function for channel i, which requires a subtraction constant; see e.g. Refs. [20, 21] for explicit expressions of this function. Since only the derivative of G_i enters |X_i|, this coefficient is independent of the subtraction constant. The derivative of G_i(s) is a well-defined function of s and of the precise values of the masses of the particles involved, as it corresponds to a convergent three-point one-loop function.

Our basic criterion is to impose that the P_c(4450) is a composite mainly of χ_c1 p, with some contribution from J/Ψp. Indeed, if we assumed that the latter channel were the main contribution to the composition of the resonance (in the following we label by 1 the channel J/Ψp and by 2 the channel χ_c1 p, in order of increasing thresholds), then from the requirement in Eq. (3) it would follow Eq. (4). We determine s_P from the values of the mass, m_P, and width, Γ_P, of the P_c(4450) obtained in Ref. [1], as s_P = (m_P − i Γ_P/2)². Performing this simple exercise, one would then obtain a huge and completely unrealistic width, Eq. (6), where we denote by q_i(s) the three-momentum of channel i at s (the total energy squared in the center-of-mass frame). Note that the threshold of J/Ψp is around 400 MeV below the nominal mass of the resonance P_c(4450), so that there is plenty of phase space for this decay.¹ This is similar to the case of the f_0(980) resonance, where the ππ channel is required to couple weakly to the f_0(980), since otherwise the width of the latter would be huge. Let us now proceed to reach quantitative conclusions within our working assumption, consisting in the analogy between, on the one hand, the channels ππ, K̄K and J/Ψp, χ_c1 p, in order, and, on the other hand, the resonances f_0(980) and P_c(4450). Our main equations stem from imposing saturation of the compositeness relation and of the width of the P_c(4450) by the channels considered, J/Ψp and χ_c1 p, respectively.
These conditions imply two equations, Eqs. (7) and (8), in that order. The second term on the right-hand side (rhs) of Eq. (8) is the partial decay width of the P_c(4450) into χ_c1 p and, since the threshold of this channel is almost on top of the mass of the resonance, it should be calculated taking the finite width of the resonance into account. For that we follow the procedure of Ref. [12] and introduce a Lorentzian mass distribution around the mass of the resonance due to its width. In this way, the formula for the partial decay width is recast as in the last term on the rhs of Eq. (8), with the smearing given by Eq. (9). (¹Because of this, one should take s_P on the unphysical Riemann sheet for this channel when evaluating the derivative of G_1 at s_P.) In practical terms, we restrict ourselves to the resonance region, so we integrate in this equation only up to m_P + 2Γ_P, as in Ref. [12]; otherwise, the tail of the integrand takes too long to converge. We have checked that, in this way, once the resonance mass is above the χ_c1 p threshold within the one-sigma interval allowed by the error in the mass provided by Ref. [1] (we add the statistical and systematic errors in quadrature), the standard formula for the decay width (used in the first term on the rhs of Eq. (8) for the partial decay width into J/Ψp) and the one based on Eq. (9) give consistent results. By using this procedure, moreover, we avoid a vanishing decay width into χ_c1 p once m_P lies below the χ_c1 p threshold, which would be an unphysical result.

Equations (7) and (8) are valid for any partial wave, or combination of partial waves, in which each of the states J/Ψp or χ_c1 p could be involved. For the partial decay widths in Eq. (8), higher powers of the three-momentum are reabsorbed into the residues squared, which is why the resulting expression looks like the typical one for an S-wave decay. On the other hand, independently of the quantum numbers characterizing a given partial wave, e.g. in the ℓSJ basis (orbital angular momentum, total spin and total angular momentum, respectively), the threshold is always the same, being fixed by the particle masses. As a result, the derivative of G_i(s_P) in Eq. (7) and the phase-space factors in Eq. (8) do not depend on the specific partial wave, and each |γ_i|² then represents the sum of the residues squared over the partial waves involved.

Regarding the pole position s_P of the P_c(4450), the fact that the χ_c1 p threshold is so close to its mass makes it necessary to specify the Riemann sheet on which s_P lies. In the following, we discuss our results distinguishing whether the pole is located on the 2nd or on the 3rd Riemann sheet. On the former, Im q_1 < 0 and Im q_2 > 0, while on the latter, Im q_1 < 0 and Im q_2 < 0. These are the two Riemann sheets that connect continuously with the physical axis below and above the χ_c1 p threshold, respectively. Notice that dG_2(s_P)/ds in Eq. (7) depends on the Riemann sheet chosen for the pole position s_P. Nevertheless, our results are rather stable under the change of sheet, because the calculation of |γ_2|² from Eq. (7) depends mainly on |∂G_2(s_P)/∂s|⁻¹, which varies little under the change of sheet since the χ_c1 p threshold lies very close to the mass of the relatively narrow resonance P_c(4450). For the same reason, it is also necessary to solve Eqs. (7) and (8) taking into account the error bars on the mass and width of the P_c(4450) from Ref. [1].
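Since Eqs. (7)-(9) are not reproduced here, the following Python sketch only illustrates the smearing procedure just described, under the common assumption of an S-wave two-body width Γ = |γ|² q/(8π s); that width formula, the truncated Lorentzian normalization, and the numerical inputs (PDG masses, LHCb P_c(4450) parameters) are our assumptions, not the paper's exact expressions.

import numpy as np
from scipy.integrate import quad

def q_cm(s, m1, m2):
    # CM three-momentum of a two-body channel at total energy squared s.
    lam = (s - (m1 + m2) ** 2) * (s - (m1 - m2) ** 2)   # Kallen function
    return np.sqrt(max(lam, 0.0)) / (2.0 * np.sqrt(s))

def smeared_width(gamma_sq, m1, m2, mP, GP):
    # Partial width smeared with a Lorentzian mass distribution around mP,
    # integrated from threshold up to mP + 2*GP as described in the text.
    lo, hi = m1 + m2, mP + 2.0 * GP
    lorentz = lambda m: (GP / (2.0 * np.pi)) / ((m - mP) ** 2 + GP ** 2 / 4.0)
    norm, _ = quad(lorentz, lo, hi)
    width = lambda m: gamma_sq * q_cm(m * m, m1, m2) / (8.0 * np.pi * m * m)
    val, _ = quad(lambda m: lorentz(m) * width(m), lo, hi)
    return val / norm

# GeV units: chi_c1 and proton masses; P_c(4450) mass and width from LHCb.
m_chi, m_prot = 3.5107, 0.9383
mP, GP = 4.4498, 0.039
print(smeared_width(gamma_sq=1.0, m1=m_chi, m2=m_prot, mP=mP, GP=GP))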
Table 1: The couplings obtained from solving Eqs. (7) and (8) are shown in columns 2, 3. We also give the average partial decay widths Γ_i (columns 4, 5) and the resulting compositeness coefficients X^f_i (columns 6, 7). Below each quantity we provide its corresponding error. The Riemann sheet (RS) on which s_P lies is given in the first column.

We present in Table 1 the results of solving Eqs. (7) and (8): the first column shows the Riemann sheet, and then the couplings, partial decay widths and final compositeness coefficients are given. As argued above, the channel χ_c1 p couples much more strongly to the P_c(4450) than the lighter one; otherwise it would be an extremely wide resonance. The heavier channel is also by far the largest component in our composite picture of the P_c(4450) resonance. Regarding the partial widths, we observe that χ_c1 p has a larger partial decay width than J/Ψp. This is a rather robust prediction of our model: even if we reduced the weight of the two considered channels in the compositeness relation to 0.8 instead of 1 on the lhs of Eq. (7), the partial decay width into χ_c1 p would still be twice that into J/Ψp. This is shown in the second and third rows of Table 2. For definiteness, we give all the results in this table on the 2nd Riemann sheet, since they are rather stable under the change to the 3rd Riemann sheet, similarly to the results shown above in Table 1. This observation can also be turned around: if the partial decay width of the P_c(4450) into either of the two channels were measured, while still assuming that the total width is the sum of these two partial decay widths, Eq. (8), one could find out whether the compositeness relation for the P_c(4450) is saturated by the two channels J/Ψp and χ_c1 p. This is illustrated in the last two rows of Table 2, where now the lhs of Eq. (7) is set to 0.4. As we can see, only when the P_c(4450) has other large contributions to its composition, beyond the two-body states J/Ψp and χ_c1 p, is the partial decay width into the former channel the largest.

Table 2: Results when the lhs of Eq. (7) is equal to x (first column) and the pole lies on the 2nd Riemann sheet. For further notation, see Table 1.

3 Summary

In this work, we have analyzed the composite nature of the P_c(4450) resonance measured by LHCb, in a two-channel framework. The first, lower-mass channel is J/Ψp, and the heavier one is χ_c1 p, whose threshold lies extremely close to the mass of the resonance [7]. We employ the present data on the relatively small width and on the mass of the resonance to conclude that, within our assumption, the P_c(4450) is almost entirely a χ_c1 p resonance, coupling much more strongly to this channel than to J/Ψp, so that the former also has clearly the largest partial decay width. As first noted here, this is very similar to the scalar meson f_0(980), which sits very close to the K̄K threshold because of its strong coupling to this channel; its coupling to ππ is much smaller, which explains its suppressed width, even though this lighter channel has plenty of phase space available. We have shown that this two-channel composite nature of the P_c(4450) can be tested by precisely measuring the partial widths into these two channels.
Detecting Cognitive Impairment Status Using Keystroke Patterns and Physical Activity Data among the Older Adults: A Machine Learning Approach

Cognitive impairment has a significantly negative impact on global healthcare and the community. Preserving cognition and mental retention among older adults becomes increasingly difficult with aging. Early detection of cognitive impairment can reduce its most significant impact, the progression of extended disease to permanent mental damage. This paper aims to develop a machine learning model to detect and differentiate cognitive impairment categories, namely severe, moderate, mild, and normal, by analyzing neurophysical and physical data. Keystroke dynamics and a smartwatch have been used to extract individuals' neurophysical and physical data, respectively. An advanced ensemble learning algorithm named Gradient Boosting Machine (GBM) is proposed to classify the cognitive severity level (absence, mild, moderate, and severe) based on Standardised Mini-Mental State Examination (SMMSE) questionnaire scores. The statistical method Pearson's correlation and the wrapper feature selection technique have been used to analyze and select the best features, and the proposed GBM algorithm has then been applied to those features, yielding an accuracy of more than 94%. This paper adds a new dimension to the state of the art by using neurophysical and physical data together to predict cognitive impairment.

Introduction

Cognitive impairment, also known as a neurocognitive disorder, is a loss of cognitive function that has destructive effects on people and the community alike. People with this condition have problems with perception, attention, and memory, which are essential building blocks of human cognition; the condition is also linked to psychiatric disorders (e.g., depression, insomnia, psychotic symptoms) [1][2][3] and even physical diseases, such as diabetes mellitus (DM) and cardiovascular diseases [4]. People with cognitive impairment also experience a diminished quality of life [5]. Cognitive impairment can cause many psychological symptoms in patients [6], and its devastating consequences may increase the risk of dementia [7]. A study has shown that about 30-40% of cases with cognitive impairment subsequently progress to dementia [8]. The total assessed cost of dementia was US$818 billion in 2015, corresponding to 1.09% of worldwide gross domestic product [9]. The economic burden and pathological complexities among those with cognitive impairment are undoubtedly even more crucial [10]. Researchers have estimated that by 2030 about 75 million people will have dementia, and this contingency will cost the community US$2 trillion [11]. Early detection of cognitive impairment status supports a sufferer by allowing them to plan for the future and receive early treatment [12][13][14]. At present, an ideal approach to contain or limit this overwhelming course is to identify at-risk individuals and start intervention early [15]. Many researchers have explored neurobiological, hereditary, EEG-signal, and neuroimaging biomarkers for cognitive impairment diagnosis, especially in Alzheimer's disease [15,16] and also dementia [17]. Magnetic resonance imaging (MRI) [18] and neuroimaging techniques have been broadly used to detect cognitive impairment [19][20][21]. Many AI-inspired approaches have been proposed, yet without a quantitative analysis of their performance.
AI approaches using machine learning, artificial neural networks, and deep learning show significant improvements in impairment detection but still face challenging issues. We have proposed an advanced ensemble learning algorithm named Gradient Boosting Machine (GBM) to detect cognitive impairment among older adults. Data obtained from the smartwatch and the keystroke application were preprocessed and analyzed through Pearson's correlations. Then, the wrapper feature selection technique was used to select the best features. The algorithms to experiment with were chosen by observing the distribution (standard deviation, outliers, etc.) of our dataset. The selected features have been trained and tested with the proposed algorithms to determine the best prediction results. Our proposed method highlights the following: (1) We propose a combination of physical and neurophysical data to detect cognitive impairment levels. (2) A customized conventional machine learning technique is used to detect cognitive impairment, and its classification performance is compared with other models. (3) The accuracy of this quantitative analysis of detecting cognitive impairment is higher. (4) In particular, our proposed method has better accuracy in predicting mild cognitive impairment (MCI) than previous work. The healthcare services area is perhaps the leading domain for AI applications. It is quite possibly the most complex field [22] and may be the most challenging, particularly in the areas of diagnosis and prediction [23]. Given that early intervention can reduce cognitive deterioration, that current cognitive assessments can be ineffective, and that technology use among older adults is growing, our proposed methodology can help older adults lead healthier, more independent lives.

Related Works

There is much ongoing research on the prediction of cognitive impairment using simple-to-deep learning algorithms. An artificial neural network (ANN) algorithm has been used to distinguish cognitive state using multicenter neuropsychological test data with excellent accuracy [24]. Reference [24] was confined to neuropsychological tools for diagnosing cognitive impairment. Random forest survival analysis and semiparametric survival analysis (Cox proportional hazards) were used in combination to evaluate the relative significance of 52 predictors of cognitive impairment and dementia [25]. Reference [25] was time-consuming research with some limitations. One is that the predictive relations rest on correlational analysis, which is implicitly bidirectional. The other is that the cognitive outcome calculations were based on a success index for self-respondents and a ranking measure for proxy respondents rather than on clinical diagnosis. Artificial Intelligence (AI) approaches, including supervised and unsupervised machine learning (ML), deep learning, and natural language processing, have been applied to cognitive impairment; Ref. [26] provides a conceptual overview of this topic, emphasizing the features explored. More effective methods have been tested for monitoring cognitive function using keystrokes [27] and linguistic characteristics with IT [28]. Some limitations remain to be solved, such as security concerns about providing personal data. The "Panoramix" suite of six serious digital games ("Episodix," "Attentix," "Semantix," "Workix," "Procedurix," and "Gnosix") has provided score datasets that were analyzed with several well-known ML algorithms (SVM, CART, and LR) to detect cognitive impairment [29].
However, it may give biased results when targeting older adults. Based on the b test's accuracy, a model has been developed to detect the malingering of cognitive symptoms, predicting malingerers of mild cognitive impairment [30]. This research was based on the medical symptoms of patient datasets. The applicability of these models has spread in different directions [31,32]. Magnetic resonance imaging (MRI) [33], in combination with multiplex neural networks [34], and resting-state functional magnetic resonance imaging (rs-fMRI), in combination with graph theory [35], have been used to separate healthy brains from progressive mild cognitive impairment (pMCI) in the diagnosis of AD and MCI. These studies were based on functional data: when applied to functional data from groups of healthy control subjects and MCI and AD patients, AD and MCI could be identified as induced causes in the brain network. Based on cognitive neuroscience researchers' datasets of abnormal activity routines, a novel hybrid statistical-symbolic technique can detect cognitive impairment [36]. That study achieved promising results, but the recognition method was based only on nonprobabilistic rules that strictly determine the detection of an abnormal behavior from a user-defined set of observations. Besides, based on routine primary care patient datasets, conventional statistical methods and modern machine learning algorithms have been used to develop a risk score [37] to determine how people may develop dementia [38]. A few studies have published systematic [39], quantitative, and critical reviews [40] of the prediction of cognitive impairment and dementia using different machine learning techniques. A few studies have also developed machine learning algorithms to detect cognitive impairment based only on authorized clinical questionnaire datasets [41,42].

Materials and Method

This study aims to develop a model for classifying cognitive impairment levels using keystroke patterns and physical activity information. Figure 1 represents a flowchart describing the development of the whole system, which consists of four phases. In the data collection phase (Figure 1(a)), three types of data (keystroke patterns, physical activity, and SMMSE scores) have been collected: keystroke pattern data, as neurophysical data, are collected from a custom-developed Android application; regular physical activity data are collected from smartwatches; and SMMSE data are collected in a questionnaire session. After extracting features, feature analysis has been performed to determine the correlations among features and then select the highly correlated ones, as shown in Figure 1(b). After analyzing the dataset features, a machine learning algorithm has been chosen, as shown in the machine learning approach phase (Figure 1(c)). The result analysis phase demonstrates the relation between the feature outputs and the SMMSE score using the regression model, and shows the validation using 10-fold cross-validation (Figure 1(d)). Participants' mental health status in terms of cognitive impairment has been assessed using the twelve-item Standardised Mini-Mental State Examination (SMMSE). The British Columbia Ministry of Health validates this SMMSE approach, and the questionnaire can also be found on their website [43]. Several studies [44][45][46][47] have used these questions for related cognitive issues.
For this study, we also selected these questions, and 33 participants were asked the SMMSE questions to generate the SMMSE score. This score represents the actual value of cognitive impairment used to label the participant for group selection. There are 26 males and seven females, with ages ranging between 50 and 65. They were followed up for up to 6 months. In this study, the participants' cognitive impairment levels have been categorized into four types based on the SMMSE score: normal (SMMSE score ≥ 25), mild (21 ≤ SMMSE score ≤ 24), moderate (10 ≤ SMMSE score ≤ 21), and severe (SMMSE score ≤ 9). Table 1 represents the distribution of the cognitive impairment scores based on the SMMSE. SMMSE scores were collected from participants every day. Some data were excluded because of insufficient information. Table 2 represents a sample of our datasets.

Data Collection. We have collected the datasets from a Bangladesh research organization. The study's motive was to detect cognitive impairment via keyboard stroke patterns and the activities participants performed every day, such as sleeping, walking, etc. The study's data were collected using smart environment technologies, including Android applications and wearable smartwatches. On a given day, a participant came to the research center's smart apartment and performed the keyboard stroke pattern activity, and this neurophysical data was recorded. Physical data were collected from the smartwatch they wore all day long. The SMMSE score was generated through a questionnaire session. Participants were assigned identifiers during the study; the identifiers were randomized before the data were made available for research.

Data Preprocessing. The SMMSE score was taken at the beginning of the study and represents the participant's cognitive impairment severity. The presented study explores how each extracted feature correlates with cognitive impairment symptoms and can differentiate participants with cognitive impairment. The questionnaire score was estimated by applying a linear regression model [48] to the extracted features (a small sketch of this step is given at the end of this subsection). The standard linear regression model can be represented as

E_smmse,i = α_0 + α_1 f_1 + α_2 f_2 + ... + α_n f_n, (1)

where E_smmse,i is the estimated SMMSE score for the i-th participant, f_1, ..., f_n are the n features, and α_0, α_1, α_2, ..., α_n are the coefficients of the linear regression model. Lasso regularization [49] was used to minimize the error between the estimated score and the actual SMMSE score. Lasso regularization prevents the regression coefficients from becoming too large, and it performed well in the model since the features were highly correlated:

E_lr = Σ_i (y_i − ŷ_i)² + λ Σ_j |α_j|, (2)

where the first part of equation (2) represents the residual sum of squares, the second part the sum of the absolute values of the coefficient magnitudes, and λ denotes the amount of shrinkage.

Data Augmentation. Class imbalance often damages a predictive model's performance, because machine learning algorithms focus more on detecting the larger classes. Our dataset has class imbalance problems, which suggests that the predictive models could detect the minority class poorly. We have tried to mitigate the class imbalance problem by augmenting 10% of our datasets' data using the Conditional Tabular GAN (CTGAN) [50] algorithm with high fidelity. CTGAN is a GAN designed to synthesize tabular data, proposed in 2019 by the same authors as TGAN [51].
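As a concrete sketch of the score-estimation step described above (a minimal illustration with hypothetical data, not the study's dataset; the severity thresholds follow the bands quoted in the text):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Hypothetical feature matrix: rows = participant-days, columns = extracted
# features (e.g. typing times, error counts, steps); y = actual SMMSE scores.
X = rng.normal(size=(120, 11))
y = np.clip(24 + X @ rng.normal(size=11) + rng.normal(scale=2, size=120), 0, 30)

# Lasso keeps the coefficients small, as in Eq. (2); lambda is `alpha` here.
model = Lasso(alpha=0.1).fit(X, y)
y_hat = model.predict(X)

def severity(score):
    """Map an SMMSE score to the four severity bands used in the study."""
    if score >= 25:
        return "normal"
    if score >= 21:
        return "mild"
    if score >= 10:
        return "moderate"
    return "severe"

labels = [severity(s) for s in y_hat]
rmsd = np.sqrt(np.mean((y - y_hat) ** 2))   # RMSD between estimate and truth
print(f"RMSD = {rmsd:.3f}, first labels: {labels[:5]}")
```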
As shown in Figure 2, the statistical descriptions of the original and the augmented data are compared: the mean, standard deviation, minimum, and maximum are almost the same in both, indicating that the distribution of the dataset is preserved by the augmentation.

Feature Extraction. A total of 11 features have been extracted from the participants' neurophysical behavior and physical activity pattern information: four neurophysical behavior features from our developed application and another seven physical activity features from the wearable devices, as shown in Table 3.

Feature Subgroup. Our analysis has shown that the features of working and nonworking days are related. We have therefore divided our extracted features into three subgroups: (i) baseline, (ii) weekdays, and (iii) weekend days. In the corresponding equation, E_n represents the feature subgroup and n is the number of days in the subgroup: baseline, n = 7; weekdays (Sunday to Thursday), n = 5; and weekend days (Friday and Saturday), n = 2. D_i represents the features of the i-th day.

Feature Selection. Feature selection is a strategy to choose optimal features from datasets. This technique improves model performance and reduces complexity and computational cost. It can also improve accuracy, reduce overfitting, speed up training, improve data visualization, and increase the explainability of the model. In this study, we have used the Pearson correlation coefficient [53] to analyze the features:

r = Σ_i (x_i − x̄)(y_i − ȳ) / sqrt( Σ_i (x_i − x̄)² · Σ_i (y_i − ȳ)² ),

where r is the correlation coefficient, x_i are the values of the x-variable in a sample, x̄ is the mean of the values of the x-variable, y_i are the values of the y-variable, and ȳ is the mean of the values of the y-variable. Using Pearson's correlation, we can compute an r value for each feature to rank the significant features of the dataset; this r value varies between −1 and 1. Figure 3 shows the correlations of all the features with each other. The p value also plays a significant role: Figure 4 shows the feature correlations in terms of p values. If the p value of a feature is less than 0.05 and close to 0, that feature is considered significant. As shown in Figures 3 and 4, some features are highly correlated while others are less correlated, indicating that some features will be significant for our model and some will not. Then, the wrapper feature selection method [54] has been used to select the model's best features (a small sketch of this analysis pipeline is given below). A regression and a classification algorithm have been used to evaluate the selected features' performance after 10-fold cross-validation of the data. For the regression model, features have been selected using the root mean square deviation (RMSD) of the SMMSE score estimation.
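A minimal sketch of the correlation analysis and wrapper selection described above (hypothetical feature table; `SequentialFeatureSelector` stands in for the wrapper method, which the text does not pin to a specific implementation):

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 11))          # 11 extracted features (hypothetical)
y = X[:, 0] * 3 + X[:, 4] - 2 * X[:, 7] + rng.normal(scale=0.5, size=120)

# Rank features by Pearson r and p value against the SMMSE target.
for j in range(X.shape[1]):
    r, p = pearsonr(X[:, j], y)
    if p < 0.05:
        print(f"feature {j}: r = {r:+.2f}, p = {p:.3g} (significant)")

# Wrapper selection: greedily add features while a regressor's CV score improves.
selector = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=7, direction="forward", cv=10
)
selector.fit(X, y)
print("selected feature indices:", np.flatnonzero(selector.get_support()))
```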
Methods. Cognitive impairment was classified into four categories (Table 3), and to evaluate the classification performance we focus mainly on supervised learning. Two well-known classification approaches, Ensemble Learning (EL) and the Support Vector Machine (SVM), were considered to detect the users' cognitive impairment.

Gradient Boosting Machine (GBM). The Gradient Boosting Machine (GBM) [55] is an advanced Ensemble Learning (EL) algorithm, a supervised machine learning method for regression and classification problems. It generates a prediction model as an ensemble of weak learners, commonly decision trees; the resulting algorithm is called gradient boosted trees and usually outperforms random forests. We repeatedly add models until either the error becomes zero or we reach the stopping criterion, a limit on the number of models built; the method then combines them, allowing the optimization of an arbitrary differentiable loss function. The GBM working procedure is given step by step in the block diagram of Figure 5 (build a model and make predictions on the given data; calculate the error and set this error as the target; build a model on the errors and make predictions; update the predictions of model 1; and so on). In a nutshell, we build a first model with features x and target y, named H_0, a function of x and y. Then we build the next model on the error of the previous one, repeating this up to the n-th model, as shown in Figure 6. H_0 gives some predictions and generates an error e_0 via the function F_0(X), as shown in equation (4). The next model adds the newly predicted errors e_1 to F_0(X), creating a new function F_1(X), as shown in equation (5). Similarly, we build the next models, as shown in equation (6), up to the n-th model, and the final expression is given in equation (7). In equation (7), F_{n−1}(X) is the prediction of the previous model, to which some newly predicted errors are added; finally, we are left with some residual error e_n. So, at every step, we model the errors, which helps us reduce the overall error, the goal being e_n = 0. Each model tries to boost the performance of the ensemble. We add a coefficient γ, whose proper value will be decided using the gradient descent technique:

F_n(X) = F_{n−1}(X) + γ_n H_n(X, e_{n−1}) + e_n. (7)

The generalized equation is shown in equation (8),

F_{n+1}(X) = F_n(X) + γ_n H(X, e_n), (8)

where F_n(X) collects all the previous models, γ_n is the coefficient, and H(X, e_n) is the current working model, with X the features and e_n the model's error. Diving deeper into equation (9), to understand the loss function and calculate γ_n, we consider the squared loss shown in equation (10), L = (y − y′)², where y is the actual value and y′ is the predicted value of the last model, so that the squared difference is the loss. In our case the target is y, and y′ can be considered the updated prediction of the last model, so we can replace y′ with F_n(X). Using gradient descent techniques, we differentiate equation (10) with respect to F_n(X), obtaining equations (11)-(12); multiplying both sides by −1 to simplify, we get equation (13),

−dL/dF_n(X) ∝ (y − F_n(X)).

The right-hand side of this equation is precisely the error we have been discussing: the error e_n is (y − F_n(X)), so e_n can also be identified with the left-hand side, and the current model can be written as H(X, −dL/dF_n(X)), leading to the final equation (14). Now the aim is to minimize the overall loss function.
So, the overall loss is the sum of the losses from all the models built so far, as shown in equation (15),

LOSS = L(y, F(X)) + γ_n L(H(X, −dL/dF_n(X))). (15)

The first part of the overall loss is fixed, since these are the predictions already generated by the models built previously, so it cannot be changed. The second part of the equation contains the loss of the current model, which cannot be changed either; but we can still change the gamma value. We therefore need to select a value of gamma such that the overall loss is minimized, and this value is selected using a gradient descent process. The idea is to minimize the overall loss by deciding the right value of gamma for each model: each new model again carries a coefficient γ_n, and we select the value that makes the overall loss minimal. For this we focus on a special case of the gradient boosting model, the Gradient Boosting Decision Tree (GBDT), in which each of the models H(X, −dL/dF_n(X)) is a tree. An interesting aspect of GBDT is that the gamma value is calculated at every leaf level: as illustrated in Figure 7, each leaf of the tree has its own gamma value.

Support Vector Machine (SVM). The Support Vector Machine (SVM) [56] is a very popular and widely used machine learning algorithm for classification and regression [57]. It builds a model that is as simple as possible, so that it can easily be investigated numerically. SVM takes comparatively little computing effort to find a hyperplane in an n-dimensional space (n being the number of features) that distinctly classifies the data points. The current research utilized a Sequential Minimal Optimization (SMO) algorithm with a polynomial kernel to optimize the SVM classifier model; SVM was considered for its ability to deal with overfitting on high-dimensional data.

Tuning the Model. Our model has been tuned with some hyperparameters, set to customized values to improve performance (a training sketch with these settings is given after this subsection). We have set "alpha" to 1.0 and "criterion" to friedman_mse. The "n_estimators" parameter is set to 32, which creates 32 decision trees within the GBM. The "learning_rate" is set to 0.1 and determines each tree's impact on the outcome. The "random_state" is set to 96; it is a random number seed, so that the same random numbers are generated every time. The "colsample_bytree" parameter is set to 0.7, which performs random feature selection per tree. The "max_depth" is set to 6; it is a stopping criterion, i.e. the maximum depth to which a tree can grow.
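The following sketch (illustrative only, with hypothetical data) mirrors the residual-fitting recursion of equations (4)-(9) with shallow regression trees, reusing the hyperparameter values quoted above. Note that `colsample_bytree` is an XGBoost-style option with no exact counterpart in this plain loop, so the per-tree feature subsampling is written out explicitly.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(96)           # "random_state" = 96
X = rng.normal(size=(200, 11))
y = np.sin(X[:, 0]) + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=200)

n_estimators, learning_rate, max_depth = 32, 0.1, 6
colsample = 0.7                            # fraction of features per tree

F = np.full_like(y, y.mean())              # F_0: initial prediction
trees, feats = [], []
for _ in range(n_estimators):
    e = y - F                              # e_n = y - F_n(X): current residual
    cols = rng.choice(X.shape[1], size=int(colsample * X.shape[1]),
                      replace=False)       # random feature subset for this tree
    h = DecisionTreeRegressor(max_depth=max_depth,
                              criterion="friedman_mse").fit(X[:, cols], e)
    F = F + learning_rate * h.predict(X[:, cols])   # F_{n+1} = F_n + gamma * H
    trees.append(h); feats.append(cols)

print("training MSE:", np.mean((y - F) ** 2))
```

Here the constant `learning_rate` plays the role of the gamma coefficient; in the GBDT variant described above, a separate gamma would instead be fitted at each leaf.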
Model Evaluation and Validation. We have used conventional machine learning algorithms to analyze our participants' neurophysical conditions in this study. Accuracy, precision, recall, F1-score, and the ROC curve are employed as evaluation metrics in our experiments. We divided our dataset into two parts: two-thirds of the data for the training process and the remaining one-third for the testing process. To validate the model, we applied 10-fold cross-validation with a 5 x 2 approach on the dataset: first, the dataset was divided randomly into two halves; second, one part was employed in training and the other in testing, and the same procedure was then repeated with the roles exchanged. This was done five times. Finally, we averaged the results, generated a projected score, and compared it with the actual score. This cross-validation procedure has the advantage that all data are used for both validation and training. We present a graph comparing the generated score with the actual score in the Results section. The root mean square deviation (RMSD) has been used to quantify the error between the estimated score Ê_esc and the actual score E_smmse; the RMSD value defines the model performance and is calculated as

RMSD = sqrt( (1/N) Σ_i (Ê_esc,i − E_smmse,i)² ).

SMMSE Score Prediction. From the participants' neurophysical behavior and physical activity patterns, 11 features have been extracted and divided into three subgroups. The linear regression model has been used to estimate each feature's corresponding cognitive impairment score. According to the cognitive impairment level scores, four groups have been categorized: normal (SMMSE score ≥ 25), mild (21 ≤ SMMSE score ≤ 24), moderate (10 ≤ SMMSE score ≤ 21), and severe (SMMSE score ≤ 9). Each subgroup's feature data distribution has a relationship with cognitive impairment symptoms. It has also been found that seven features have a high correlation with cognitive impairment symptoms, as their p values are less than 0.05 and close to 0, as discussed in Section 3.5. These seven features are total time (TT), error number of words (ENW), average time (AVG), absolute energy (AE), quality sleeping time (QST), walking steps (WS), and heart pulse data (HPD). Table 4 represents the relationship between the SMMSE score estimated by the regression model and the actual score, evaluated using the RMSD. The error has been minimized using the lasso regularization method, as discussed in Section 3.2.1. Each of the feature results shown in Table 4 is calculated using leave-one-out cross-validation. A subset has been selected using the wrapper feature selection method among the 33 features from the three subgroups (base, weekday, and weekend), and this technique yields the lowest RMSD of 3.125. This value demonstrates that the predicted SMMSE score has strong correlations with the actual SMMSE score. The data distribution analysis in Figure 8 demonstrates that the features are widely distributed, with high standard deviations and many outliers. A rule-based algorithm like decision trees or ensemble learning should work efficiently for this kind of feature dataset. For this reason we chose the Gradient Boosting Machine (GBM), an ensemble learning algorithm; to evaluate and justify our selection, we have also experimented with a distance-based algorithm, the Support Vector Machine (SVM). Table 5 reports the overall accuracy of the models used: the Gradient Boosting Machine (GBM) has the highest accuracy, 94.8%.

Cognitive Impairment Level Detection. For the four cognitive levels, (i) normal, (ii) mild, (iii) moderate, and (iv) severe, the classification results are shown in Table 6, where the classification performance is demonstrated through the results of each classifier as well as each individual class. From Table 6 one can readily identify the best algorithm, as the precision, recall, F1-score, and accuracy of each classifier are given. The accuracy results show that GBM generally performs excellently compared with the other classification algorithm. In the "normal" class, the SVM accuracy looks slightly better, but GBM performed well on all four cognitive impairment levels.
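A sketch of the 5 x 2 cross-validation and per-class metrics described above (hypothetical data; `GradientBoostingClassifier` is used as a stand-in for the tuned GBM):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(96)
X = rng.normal(size=(400, 7))              # 7 selected features (hypothetical)
y = rng.integers(0, 4, size=400)           # 0..3: normal..severe (toy labels)

scores = []
for rep in range(5):                       # 5 x 2 cross-validation
    A_X, B_X, A_y, B_y = train_test_split(X, y, test_size=0.5, random_state=rep)
    for tr_X, tr_y, te_X, te_y in ((A_X, A_y, B_X, B_y), (B_X, B_y, A_X, A_y)):
        clf = GradientBoostingClassifier(n_estimators=32, learning_rate=0.1,
                                         max_depth=6, random_state=96)
        clf.fit(tr_X, tr_y)
        scores.append(accuracy_score(te_y, clf.predict(te_X)))

print(f"mean accuracy over 10 folds: {np.mean(scores):.3f}")
print(classification_report(te_y, clf.predict(te_X),
      target_names=["normal", "mild", "moderate", "severe"]))
```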
The performance of the GBM classifier in terms of the receiver operating characteristic (ROC) curve is shown in Figure 9. Taking the "normal" cognitive impairment class as the negative test sample, the ROC curves reach maximum true positive rates of approximately (i) 99% for mild cognitive impairment, (ii) 96% for severe cognitive impairment, and (iii) 94% for moderate cognitive impairment. In Figure 10, we show 15 random days of data from our participants for every cognitive impairment class; this figure demonstrates and validates the accuracy of the model. The mild (21 ≤ SMMSE score ≤ 24), moderate (10 ≤ SMMSE score ≤ 21), and severe (SMMSE score ≤ 9) levels each have a range, and if the predicted value falls within the range we count it as a true prediction, otherwise as a false prediction.

Discussion

This study used keystroke pattern data and smart wearable device data to extract information about our participants' neurophysical and physical behavior patterns. We used 10-fold cross-validation with a 5 x 2 approach to validate our model. The model can detect four different cognitive impairment levels (normal, mild, moderate, and severe) with 94.8% accuracy. This accuracy is higher than that of a previous study, which recorded an accuracy rate of 86% [58]; that study, however, focused on predicting dementia and mild cognitive impairment. The features extracted from our developed application and the wearable device data show a strong correlation with the SMMSE score in the regression model. The study by Vizer and Sears [28] was based on the keystroke and linguistic features of typed text to detect cognitive impairment, and researchers such as Sofi et al. [59] performed a meta-analysis on physical activity. In the present study, we have combined keystroke pattern behavior with our participants' physical activity to detect cognitive impairment. In our study, using Pearson's correlation for feature analysis and the wrapper method for feature selection worked very well, yielding high classification performance for each cognitive impairment level (normal, mild, moderate, and severe).

Figure 9 (caption): The ROC curve demonstrates the classification performance for each cognitive impairment level. The yellow curve (mild cognitive impairment) shows the highest performance, the red curve (severe cognitive impairment) a slightly lower one, and the blue curve (moderate cognitive impairment) the lowest.

To evaluate the classification performance, we have used two popular classification algorithms, GBM and SVM, and GBM shows better performance and higher accuracy at every cognitive level.

Limitations. Although the model used in this research predicted the cognitive impairment level with high accuracy, there are some limitations in interpreting the results. This research did not assess a clinically cognitively impaired population, because the sample only comprised older adults. The assessment used to evaluate cognitive impairment was a self-report scale, the SMMSE, rather than a clinical evaluation. Typing errors, taking a long time to complete sentences, or being unable to remember words might be critical factors for cognitive impairment; however, noncognitively impaired people might have the same problems for several other reasons. Besides, some participants may not have followed our instructions carefully, which would introduce errors into the data.
For instance, participants may have answered with misconceptions or may not always have worn the smartwatch.

Conclusions

Machine learning (ML) innovation holds noteworthy promise for changing how we diagnose and treat patients with neurocognitive disorders. There exists an enormous assortment of potential features that in combination can comprehensively describe the biopsychosocial determinants of a unique individual and consequently enable a more personalized understanding of cognitive decline. The performance and potential clinical utility of ML algorithms for detecting, diagnosing, and predicting cognitive decline using these features will keep improving as we leverage multifeature information on massive datasets. Setting up guidelines for research involving AI applications in medical services will be essential to guarantee the quality of results and clinicians' commitment, besides allowing patients and their caregivers to contribute their expertise to refining AI algorithms. This study demonstrated the capability to passively detect cognitive impairment symptoms by monitoring daily physical activities and keystroke patterns. Given that the detection of the cognitive impairment level does not depend on traditional self-report psychometric instruments, such a method may improve the identification of cognitive impairment. Early detection of these changes can allow for interventions that lessen, delay, or prevent the related functional impairments. Therefore, more effective techniques that support the early detection of cognitive changes, especially solutions that continuously leverage normal daily activities, could significantly impact older adults' health and independence. Given the connection between the cognitive processes required to use technology and those affected by cognitive impairment and stress, this line of work investigates the keystroke and physical attributes of spontaneously composed text as a potential methodology for tracking cognitive changes. This methodology has several advantages over conventional techniques for monitoring cognitive function: the proposed model can examine the totality of the data, not just data from specific stages, and it is unobtrusive, assembling baseline information for analysis and diagnosis as well as continuous information for everyday monitoring.

Data Availability

The datasets in this study were collected from users as part of this study and can be shared upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding the publication of this paper.
$c$-function and central charge of the sine-Gordon model from the non-perturbative renormalization group flow

In this paper we study the $c$-function of the sine-Gordon model taking explicitly into account the periodicity of the interaction potential. The integration of the $c$-function along trajectories of the non-perturbative renormalization group flow gives access to the central charges of the model in the fixed points. The results at vanishing frequency $\beta^2$, where the periodicity does not play a role, are retrieved and the independence on the cutoff regulator for small frequencies is discussed. Our findings show that the central charge obtained integrating the trajectories starting from the repulsive low-frequencies fixed points ($\beta^2<8\pi$) to the infrared limit is in good quantitative agreement with the expected $\Delta c=1$ result. The behavior of the $c$-function in the other parts of the flow diagram is also discussed. Finally, we point out that also including higher harmonics in the renormalization group treatment at the level of local potential approximation is not sufficient to give reasonable results, even if the periodicity is taken into account. Rather, incorporating the wave-function renormalization (i.e. going beyond local potential approximation) is crucial to get sensible results even when a single frequency is used.
I. INTRODUCTION

Statistical field theory has undergone a remarkable development in the last decades, with the systematic improvement of powerful theoretical tools for the study of critical phenomena and phase transitions [1]. A key role among these methods is played by the renormalization group (RG) approach: RG makes it possible to treat statistical physics models by studying the behavior of the transformations bringing the microscopic variables into macroscopic ones [2,3]. Using RG one can obtain not only a qualitative picture of the phase diagram and fixed points, but also accurate quantitative estimates of critical properties such as critical exponents and universal quantities, even though very few cases are known where the RG procedure can be carried out exactly and the method itself offers few possibilities to obtain exact results. On the other hand, the scale invariance exhibited by systems at criticality may give rise to invariance under the larger group of conformal transformations [4], acting locally as scale transformations [5]. The conformal group in d spatial dimensions (for d ≠ 2) has a number of independent generators equal to (d + 1)(d + 2)/2, while for d = 2 the conformal group is infinite-dimensional [5]. The occurrence and consequences of conformal invariance for 2-dimensional field theories have been deeply investigated and exploited to obtain a variety of exact results [1,5] and a systematic understanding of phase transitions in two dimensions. A bridge between conformal field theory (CFT) techniques and the RG description of field theories is provided in two dimensions by the c-theorem. Far from fixed points, Zamolodchikov's c-theorem [6] can be used to get information on the scale-dependence of the model. In particular, the theorem states that it is always possible to construct a function of the couplings, the so-called c-function, which monotonically decreases when evaluated along the trajectory of the RG flow. Furthermore, at the fixed points this function assumes the same value as the central charge of the corresponding CFT [1]. Although the c-theorem is by now a classical result, the determination of the c-function is not straightforward and its computation far from fixed points is non-trivial even for very well known models, so that methods such as form factor perturbation theory, the truncated conformal space approach and conformal perturbation theory have been developed [1]. In d = 2 an expression for the c-function has been obtained in the framework of form factor perturbation theory [7] for theories away from criticality, and it has been applied to the sinh-Gordon model [8]. The sinh-Gordon model is a massive integrable scalar theory, with no phase transitions; in [8] one finds ∆c = 1 for the sinh-Gordon theory. In a recent result [9], the analytical continuation of the sinh-Gordon S-matrix produces a roaming phenomenon exhibiting ∆c = 1 and multiple plateaus of the c-function. The analytic continuation β → iβ of the sinh-Gordon model leads to the well-known sine-Gordon (SG) model, with a periodic self-interaction of the form cos(βφ). The SG model has the unique feature of possessing a whole line of interacting fixed points with coupling- (temperature-) dependent critical exponents. It is in the same universality class as the 2-dimensional Coulomb gas [10] and the 2-dimensional XY model [11], making it one of the most relevant and studied 2-dimensional models, with applications ranging from the study of the Kosterlitz-Thouless transition [11] to quantum integrability [12] and bosonization [13]. In particular, for the SG model a ubiquitous issue is how to deal with the periodicity of the field [14], which unveils itself and plays a crucial role for β ≠ 0. Given the importance of the SG as a paradigmatic 2-dimensional model, the determination of the c-function from the non-perturbative RG flow is a challenging goal, in particular to clarify the role played by the periodicity of the field for β ≠ 0. From the RG point of view, the determination of the behavior of the c-function is a challenging task requiring a general non-perturbative knowledge of the RG flow.
Recently [15], an expression for Zamolodchikov's c-function has been derived for 2-dimensional models in the Functional RG (FRG) framework [16][17][18]: resorting to an approximation well established and studied in the FRG, the Local Potential Approximation (LPA), an approximated and concretely computable RG flow equation for the c-function was also written down [15]. Using this expression, known results were recovered for scalar models on some special trajectories of the Ising and SG models. For the SG model, with a Lagrangian proportional to cos(βφ), the determination and integration of the c-function was carried out in the β → 0 limit, as a massive deformation of the Gaussian fixed point [15]. Motivated by these results, both for the Ising and SG models and for general 2-dimensional models, it would be highly desirable to have a complete description of the c-function on general RG trajectories. In the present paper we present the first numerical calculation of the c-function on the whole RG flow phase diagram of the SG model. The goal is to determine the behavior of the c-function, and the presence of known results (namely, ∆c = 1) helps to assess the validity of our approach along the different flows. We also complete the description initiated in [15], moving to more complex trajectories and showing that these cases are not a straightforward generalization of the known results. We finally discuss the dependence of these results on the approximation scheme used to compute the FRG equations.

II. FUNCTIONAL RENORMALIZATION GROUP METHOD

In this section, we briefly summarize the FRG approach [16][17][18]. Starting from the usual concepts of Wilson's renormalization group (RG), it is possible to derive an exact flow equation for the effective action of any quantum field theory. This flow equation is commonly written in the form

k∂_k Γ_k[ϕ] = (1/2) Tr [ (Γ_k^{(2)}[ϕ] + R_k)^{-1} k∂_k R_k ], (1)

where Γ_k[ϕ] is the effective action and Γ_k^{(2)}[ϕ] denotes the second functional derivative of the effective action. The trace Tr stands for an integration over all the degrees of freedom of the field φ, while R_k is a regulator function depending on the mode of the field and on the running scale k. When the running scale goes to zero, k → 0, the scale-dependent effective action Γ_{k=0}[φ] is the exact effective action of the considered quantum field theory. Usually equation (1) is treated in momentum space; the trace then stands for a momentum integration and the regulator R_k is a smooth function which freezes all the modes with momentum smaller than the scale k. The exact FRG equation (1) is an equation for functionals, and it is thus handled by truncations. Truncated RG flows depend on the choice of the regulator function R_k, i.e. on the renormalization scheme. Regulator functions have been discussed in the literature by introducing their dimensionless form R_k(p²) = p² r(y), with y = p²/k², where r(y) is dimensionless. Various types of regulator functions can be chosen (a general family of regulator functions [19] has been shown to reproduce the regulators used so far by suitable choices of its parameters). In this work we consider two regulators, the power-law type [20], Eq. (3a), and the Litim (or optimized) type [21], Eq. (3b); their standard forms are recalled in the sketch below. Let us note that the so-called mass cutoff regulator, which is used in [15], is identical to r_pow(y) with b = 1.
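For reference, the dimensionless regulator forms dropped from the extracted text can be restored from the standard literature conventions (our reading of Refs. [20, 21]; the d = 2 LPA equation below is the standard Litim-regulator result and is quoted only as an illustration of the kind of flow equation referred to as Eq. (4)):

```latex
% Power-law and Litim (optimized) regulators, R_k(p^2) = p^2 r(y), y = p^2/k^2:
r_{\mathrm{pow}}(y) = \frac{1}{y^{b}}, \qquad
r_{\mathrm{opt}}(y) = \Big(\frac{1}{y} - 1\Big)\,\Theta(1 - y).

% For b = 1 the power-law form gives R_k = k^2, the mass cutoff.
% With the Litim regulator, the d = 2 LPA flow of the dimensionless
% potential \tilde V_k = k^{-2} V_k reduces to
k\,\partial_k \tilde V_k(\varphi)
  = -2\,\tilde V_k(\varphi)
  + \frac{1}{4\pi}\,\frac{1}{1 + \tilde V_k''(\varphi)}\,.
```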
One of the commonly used systematic approximations is the truncated derivative expansion, where the action is expanded in powers of the derivative of the field [16],

Γ_k[ϕ] = ∫ d^d x [ (Z_k/2)(∂_µϕ)² + V_k(ϕ) + ... ].

In LPA the higher derivative terms are neglected and the wave-function renormalization is set equal to a constant, i.e. Z_k ≡ 1. In this case (1) reduces to a partial differential equation for the dimensionless blocked potential (Ṽ_k = k^{-2}V_k), Eq. (4), whose d = 2 form for the Litim regulator was recalled above.

III. THE c-FUNCTION IN THE FRAMEWORK OF FUNCTIONAL RENORMALIZATION GROUP

An expression for the c-function in the FRG was recently developed in [15]. In this section we outline this derivation, reviewing the main results used in the next sections. Let us start by considering an effective action Γ[φ, g] for a single field φ in curved space, with metric g_µν. We can study the behavior of this effective action under transformations of the field and the metric: the field transforms with its conformal weight (zero for a scalar field), while the background metric g_µν always has conformal weight 2. From the requirement that the effective action must be invariant under the Weyl transformation (5)-(6), we obtain an expression, Eq. (7), for a conformal field theory (CFT) in curved space [15]: S_CFT[φ, g] is the curved-space generalization of the standard CFT action, recovered in the flat-space case g_µν = δ_µν, c is the central charge of our theory, and S_P[g] is the Polyakov action term, which is necessary to maintain the Weyl invariance of the effective average action in curved space. To obtain FRG equations one has to add an infra-red (IR) cutoff term ∆S_k[φ, g] to the ultra-violet (UV) action of the theory, Eq. (8), where ∆ is the spatial Laplacian operator. This is a mass term which depends both on the momentum of the excitations and on a cutoff scale k: its effect is to freeze the excitations with momentum q ≪ k, while leaving the excitations at q > k almost untouched. The result of this modification of the UV action is to generate, after integrating over the field variable, a scale-dependent effective action Γ_k[φ, g] which describes our theory at scale k. When the scale k is sent to zero the cutoff term in the UV action vanishes and Γ_k[φ, g] is the exact effective average action of the theory. The generalization of (7) in the presence of the cutoff terms is Eq. (9), where c_k is now the scale-dependent c-function and the dots stand for geometrical terms which do not depend on the field. We now consider the case of a flat metric with a dilaton background, g_µν = e^{2τ}δ_µν. Using the standard path integral formalism for the effective action we can write Eq. (10), where S_UV[φ, g] is some UV action, c_UV is the value of the c-function in the UV (which can be equal to the central charge of some CFT if we start the flow from a conformally invariant theory), and χ is the fluctuation field. The notation Dχ_{d.b.} stands for an integration over the fluctuation field χ in the curved space of the dilaton background [15]. We can further manipulate the latter expression by moving c_UV to the l.h.s. The Polyakov action in the dilaton background case assumes the form of Eq. (12), where τ is the dilaton field, ∆ is the Laplacian operator and the integral is over an implicit spatial variable. Substituting the latter expression into (11) we obtain Eq. (13). In order to recover the usual flat-metric integration we perform a Weyl transformation (5) on the fields φ and χ, after which the integration measure is in flat space.
Finally, differentiating the previous expression with respect to the logarithm of the FRG scale, we find that the flow of the c-function, ∂_t c_k, can be extracted from the flow of the cutoff action (8) by taking the coefficient of the τ∆τ term; after some manipulation this becomes Eq. (16) [15]. Eq. (16) shows that the c-function flow is proportional to the coefficient of the τ∆τ term in the expansion of the propagator flow k∂_k G_k[τ]; this flow has to be computed taking into account only the k-dependence of the regulator function, i.e. as in Eq. (17). This equation describes the exact flow of the c-function in the FRG framework. Since it is not in general possible to solve equation (1) exactly, equation (16) too needs to be projected onto a simplified theory space in order to be computed numerically. In Ref. [15] an explicit expression for the flow equation of the c-function in the LPA scheme, Eq. (18), has been derived with the mass cutoff, with the dimensionless blocked potential Ṽ_k(ϕ) evaluated at its running minimum ϕ = ϕ_{0,k} (i.e. the solution of Ṽ'_k(ϕ) = 0). We observe that an explicit expression for the c-function beyond LPA is not available in the literature. It should be noticed that, while (4) is valid for any regulator (cutoff) function, the expression for the c-function (18) has been obtained by using the mass cutoff, i.e. (3a) with b = 1; other cutoff choices proved to be apparently very difficult to investigate. In the following, we will argue that while the expression (18) is sufficient to obtain a qualitative (and almost quantitative) picture of the c-function phase diagram, the usage of other regulator functions is necessary to achieve full consistency. Where possible, we will check the cutoff dependence of our numerical results.

IV. RG STUDY OF THE SINE-GORDON MODEL

The SG scalar field theory is defined by the Euclidean action for d = 2,

S = ∫ d²x [ (1/2)(∂_µϕ)² + u cos(βϕ) ], (19)

where β and u are the dimensional couplings. Since we are interested in the FRG study of the SG model, which is periodic in the field variable, the symmetry of the action under the shift of the field by a period length A [22] is to be preserved by the blocking, and the potential Ṽ_k(ϕ) must be periodic with period length A. It is actually obvious that the blocking, i.e. the transformation given by replacing the derivative with respect to the scale k by a finite difference in (4), preserves the periodicity of the potential [22,23]. In LPA one should look for the solution of (4) among periodic functions, which requires the use of a Fourier expansion. When considering a single Fourier mode, the scale-dependent blocked potential reads

V_k(ϕ) = u_k cos(βϕ), (21)

where β is scale-independent. In the mass cutoff case, i.e. the power-law regulator (3a) with b = 1, one can derive [24] the flow equation for the Fourier amplitude of (21) from Eq. (4), Eq. (22) (see Eq. (21) of [24] for vanishing mass). Similarly, using the optimized regulator (3b) gives Eq. (23).

B. The FRG equation for the SG model for scale-dependent frequency. A very simple, but still sensible, modification of the ansatz (19) is the inclusion of a scale-dependent frequency which, in order to explicitly preserve periodicity, should rather be considered as a running wave-function renormalization. The ansatz then becomes Eq. (24), where the local potential (25) contains a single Fourier mode and the notation has been introduced via the rescaling of the field ϕ → ϕ/β_k in (19), so that z_k plays the role of a field-independent wave-function renormalization. Then Eq.
(1) leads to the evolution equations (27)-(28), with D_k = 1/(z_k p² + R_k + V_k'') and P_0 = (2π)^{-1} ∫_0^{2π} dϕ the projection onto the field-independent subspace. The scale k covers the momentum interval from the UV cutoff Λ to zero. It is important to stress that Eqs. (27)-(28) are directly obtained using power-law cutoff functions. One may expect these equations to continue to be valid for a general cutoff provided that R_k → z_k R_k [16]. This substitution has been tested for O(N) models, but its validity has not yet been discussed in the literature for the SG model. Inserting the ansatz (25) into Eqs. (27) and (28), the RG flow equations for the coupling constants can be written as Eqs. (29)-(30) [25], with P_k = z_k p² + R_k. In general, the momentum integrals have to be performed numerically; however, in some cases analytical results are available. Indeed, by using the power-law regulator (3a) with b = 1, the momentum integrals can be performed [24] and the RG flow equations read as Eq. (31), with the dimensionless coupling ũ = k^{-2}u. By using the replacements (32a)-(32b) and keeping the frequency scale-independent (∂_k z_k = 0, i.e. ∂_k β²_k = 0), one recovers the corresponding LPA equation.

V. THE c-FUNCTION OF THE SINE-GORDON MODEL FOR VANISHING FREQUENCY

In this section we discuss the case β → 0. We start by summarizing the results obtained for the c-function of the SG model in [15]. The ansatz considered in [15] is

V_k(ϕ) = u_k cos(β_k ϕ), (33)

where the frequency β_k is assumed to be scale-dependent. If one directly substitutes (33) into the RG Eq. (4), the l.h.s. of (4) generates non-periodic terms due to the scale-dependence of β_k. Thus, the periodicity of the model is not preserved, and one can instead use the Taylor expansion (34) of the original periodic model. In this case, (33) is treated as a truncated Ising model and the RG equations for the coupling constants read as Eqs. (35)-(36). The disadvantage of the scale-dependent frequency is that the periodicity of the model is violated, changing the known phase structure of the SG model. However, the authors of [15] were interested in the massive deformation of the Gaussian fixed point, which is at β = 0 and u = 0, so one has to take the limit β → 0 where the Taylor expansion represents a good approximation for the original SG model. Indeed, in the limit β → 0, the RG Eqs. (35), (36) reduce to Eqs. (37)-(38); similar flow equations for the couplings m̃²_k and β_k were given in [15]. The solution for the c-function based on (33) is in agreement with the known exact result, i.e. at the Gaussian UV fixed point c_UV = 1 and in the IR limit c_IR = 0, so that the exact result in the case of the massive deformation of the Gaussian fixed point is ∆c = 1 (∆c = c_UV − c_IR). The numerical solution [15] gives ∆c = 0.998, in almost perfect agreement with the exact result. Although the numerical result obtained for the c-function in [15] is more than satisfactory, due to the Taylor expansion the SG theory is considered as an Ising-type model. Thus, the RG study of the c-function starting from the Gaussian fixed point in the Taylor-expanded SG model is essentially the same as that of the deformation of the Ising Gaussian fixed point, so it does not represent an independent check of (18). Indeed, inserting (37) into (18) using the ansatz (33), one finds Eq. (39), which is identical to Eq. (5.3) of [15] (with a = 1), obtained for the massive deformation of the Gaussian fixed point in the Ising model; it can also be derived from Eq. (5.19) of [15] in the limit β² → 0.
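To illustrate the role of the critical frequency that organizes the flow diagram discussed below, the following sketch (ours, not from [15] or [24]) integrates the linearized LPA flow of the Fourier amplitude, dũ/dt = (β²/4π − 2)ũ with t = ln(k/Λ). This linearization encodes the scaling dimension of cos(βϕ) and reproduces the Kosterlitz-Thouless-like change of relevance at β² = 8π; the full equations (22)-(23) add nonlinear terms on top of it.

```python
import numpy as np

def flow_u(beta2, u0=0.01, t_final=-8.0, n=4000):
    """Integrate du/dt = (beta^2/(4*pi) - 2) * u from t = 0 (k = Lambda)
    towards the IR (t < 0) with a simple RK4 stepper."""
    eig = beta2 / (4.0 * np.pi) - 2.0   # RG eigenvalue of cos(beta*phi)
    f = lambda u: eig * u
    dt = t_final / n
    u = u0
    for _ in range(n):
        k1 = f(u); k2 = f(u + 0.5 * dt * k1)
        k3 = f(u + 0.5 * dt * k2); k4 = f(u + dt * k3)
        u += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return u

for beta2 in (4.0 * np.pi, 8.0 * np.pi, 12.0 * np.pi):
    print(f"beta^2/pi = {beta2/np.pi:4.1f}: u(IR) = {flow_u(beta2):.4g}")
# beta^2 < 8*pi: u grows towards the IR (relevant perturbation);
# beta^2 > 8*pi: u dies out (irrelevant); beta^2 = 8*pi: marginal.
```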
Therefore, it is a relevant question whether one can reproduce the numerical results obtained for the c-function (with the same accuracy) if the SG model is treated with scale-independent frequency (21), or beyond LPA, via the rescaling of the field (25). Moreover, Ref. [15] treats only massive deformations of non-interacting UV fixed points, so that on such trajectories only the mass coupling runs. Nevertheless, the c-theorem should hold on all trajectories, even when more couplings are present. Our aim is to demonstrate that the derivation of [15] is valid even in these more general cases, but, due to the truncation approach, the approximated FRG phase diagram does not fulfill the requirements of the c-theorem exactly and, therefore, only approximate results are possible.

VI. c-FUNCTION OF THE SINE-GORDON MODEL ON THE WHOLE FLOW DIAGRAM

In this section we study the c-function of the SG model on the whole phase diagram, considering both the scale-independent frequency and the treatment with a running wave-function renormalization.

A. Scale-independent frequency case

The definition of the SG model used in this work, i.e. (19), differs from (33) because the frequency parameter is assumed to be scale-independent in LPA. The running of β can only be achieved beyond LPA, by incorporating a wave-function renormalization and using a rescaling of the field variable which gives z_k = 1/β²_k. Let us first discuss the results of LPA. Equations (22) and (23) have the same qualitative solution. In Fig. 1 we show the phase structure obtained by solving (23). The RG trajectories are straight lines because in LPA the frequency parameter of (19) is scale-independent. Above (below) the critical frequency β²_c = 8π, the line of IR fixed points is at ũ_IR = 0 (ũ_IR ≠ 0). For β² < 8π the IR value of the Fourier amplitude depends on the particular value of β²; one thus finds different IR effective theories, i.e. the corresponding CFT depends on the frequency too. The scaling of the c-function is the one expected from the c-theorem: it is a decreasing function of the scale k which is constant in the UV and IR limits, see Fig. 1. Due to the approximation of scale-independent frequency β, here the IR value of the c-function depends on the particular initial condition for β². Then, when we start on the line of Gaussian fixed points (c = 1), in the symmetric phase, the flow evolves towards an IR fixed point; but at this approximation level we have different IR fixed points, all at different ũ values, and consequently the ∆c values differ from the exact one. The exact result ∆c = 1 is obtained only in the β → 0 limit. We notice that Eq. (22), where the mass cutoff was used, has very poor convergence properties and the flow obtained from it stops at some finite scale, so that the deep IR values of the c-function cannot be reached (dashed lines in Fig. 2). The use of the Litim cutoff RG Eq. (23) improves the convergence of the RG flow, but the IR results for the c-function are very far from the expected ∆c = 1, which can be recovered only in the vanishing frequency limit. The inclusion of higher harmonics in (25) (inset in Fig. 2) does not improve this result either. It should be noted that Eq. (18) is strictly valid only in the mass cutoff case; however, in Fig. 2 we used Eq. (18) even in the optimized cutoff case. This inconsistency cannot be regarded as the cause of the unsatisfactory results obtained in the large-β cases; indeed, we expect a very small dependence of the flow trajectories on the cutoff choice.
The weak dependence on the regulator is evident from the comparison of the mass and Litim regulator results for the c-function trajectories in Fig. 2, which are very similar, at least in the region where no convergence problems are found. This similarity justifies the use of the mass cutoff result (18) together with the RG flow Eqs. (23) obtained with the optimized (Litim) regulator. We also computed the c-function flow for other cutoff functions, namely the power-law one with b = 2 (solid lines in Fig. 2) and the exponential one (dashed lines). Apart from the mass cutoff, all the others converge to the IR fixed point. The conclusion is that there is no pronounced dependence of the findings on the cutoff scheme, and that the constant-frequency case is not sufficient to recover the correct behavior of the c-function.

We observe that the lack of convergence found in the mass cutoff case is not present in the small-frequency limit analyzed in [15]. Indeed, expanding the flow equations (22) and (23) we get Eq. (40), which is valid for vanishing frequency and is independent of the particular choice of the regulator function, i.e. it is the same for the mass and Litim cutoffs. Substituting (40) into (18) and using (21), one obtains the corresponding equation for the c-function of the SG model, where the identification m̃²_k = ũ_k β² is used. The scale dependence of the c-function in this case is identical to that of the massive deformation of the Gaussian fixed point, and the corresponding RG trajectory is indicated by the green line in Fig. 3. It is important to note that for finite frequencies β² ≠ 0 the Taylor-expanded potential (34) cannot be used to determine the c-function, since it violates the periodicity of the model. In this case only Eqs. (22) or (23) can produce reliable results. In order to improve the LPA result for the c-function of the SG model without violating its periodicity, one has to incorporate a scale-dependent frequency, i.e. a wave-function renormalization (we refer to this approximation as z+LPA), as discussed in the next subsection.

B. The scale-dependent wave-function renormalization

The inclusion of the running wave-function renormalization changes the whole picture of the SG phase diagram, with all the ũ ≠ 0 fixed points collapsing into a single (β_k = 0, ũ = 1) fixed point, as expected from the exact CFT solution. The phase diagram obtained at this approximation level is sketched in Fig. 3, where we highlight three different regions. The quantity Δc is strictly well defined only in region I, where we start from a Gaussian fixed point (c_UV = 1) and end up at a massive IR fixed point (c_IR = 0). The massive IR fixed point, related to the degeneracy of the blocked action, is an important feature of the exact RG flow [26][27][28][29] and was considered in SG-type models [25,30,31]. In region II the trajectories end at the Gaussian fixed points (c = 1) but come from infinity, where no actual fixed point is present. This is because our ansatz (24) does not contain any operator that could generate a fixed point at c > 1, so the trajectories ending at c = 1 are forced to start at infinity. Thus, Δc is not defined in this region. Region III contains those trajectories which start at β = ∞ but end at the massive IR fixed point at c = 0; in this case too, Δc is not well defined. In the following we discuss in detail the results of region I, where all trajectories should give Δc = 1.
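Schematically, the Δc quoted for a given trajectory is obtained by integrating the c-function flow alongside the coupling flow from the UV down to the IR. A minimal driver of this computation is sketched below; since neither Eq. (18) nor the coupling flows are reproduced in this text, both enter as user-supplied callbacks, which are placeholders of ours and not the actual equations.

```python
def delta_c(flow_rhs, dc_dt, y0, c_uv=1.0, t_final=-15.0, n_steps=30000):
    """Integrate the couplings y(t) and the c-function c(t) along one RG
    trajectory with explicit Euler steps, returning Delta c = c_UV - c_IR.
    flow_rhs(t, y) and dc_dt(t, y, c) are user-supplied callbacks standing
    in for the coupling flow (Eqs. (22)/(23)) and the c-flow (Eq. (18))."""
    dt = t_final / n_steps
    y, c, t = list(y0), c_uv, 0.0
    for _ in range(n_steps):
        dy = flow_rhs(t, y)
        c += dt * dc_dt(t, y, c)
        y = [yi + dt * dyi for yi, dyi in zip(y, dy)]
        t += dt
    return c_uv - c
```

In region I one would start from initial conditions on the Gaussian line and expect the returned value to approach 1; the deviations quoted below measure the truncation error.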
We shall ignore region II, where Δc is not defined, and only briefly discuss region III. The presence of the wave-function renormalization is necessary to obtain the qualitatively correct flow diagram for the SG model. Note that Eq. (18) has been derived only for a scale-independent kinetic term, and the derivation of an equivalent expression in the case of running wave-function renormalization appears far more demanding than the calculation sketched in this paper. However, it is still possible to get a sensible result using the mapping between the running wave-function renormalization and running-frequency β_k cases (as shown in Eqs. (32a) and (32b); the rescaling behind this mapping is spelled out below), finally obtaining Eq. (44). In other words, Eq. (18) is valid only at the LPA level, but it can still be applied to the z+LPA scheme since, thanks to the mapping of Eqs. (32a) and (32b), the z+LPA ansatz can be mapped into an LPA one.

We will then directly use the ansatz (42), with no wave-function renormalization present in the kinetic term. This ansatz is equivalent to ansatz (24) if we rescale the field and use the relations (32a) and (32b), with the running frequency playing the role of a wave-function renormalization. Ansatz (42) is not suited to studying the SG model when full periodicity has to be preserved: indeed, when we substitute it into equation (27), symmetry-breaking terms appear. The same happens when we substitute it into Eq. (18). In the latter case, however, the symmetry-breaking terms are not dangerous, since we have to evaluate the expression at the minimum of the potential, where all the symmetry-breaking terms vanish. Proceeding in this way we obtain expression (43), in which no inconsistency is present. We still cannot use expression (43) as it stands, since we cannot write a flow for β_k due to the non-periodic terms. To avoid these difficulties we rewrite expression (43) using the inverse transformation of (32a) and (32b), obtaining Eq. (44). The last expression is fully coherent and represents the flow of the c-function in the presence of a running wave-function renormalization in the SG model; it is worth noting that the transformations (32a) and (32b) gave us the possibility to derive expression (43) from Eq. (18), which was obtained in [15] in the absence of wave-function renormalization.

In the limit β²_{k=Λ} → 0, the IR result for the c-function (see the inset of Fig. 4) tends to zero, which implies that in this limit the difference Δc → 1. The numerical result found in this case reaches the accuracy 1 ≥ Δc ≥ 0.99 of the scale-independent frequency solution (33), but now the periodicity of the SG model is fully preserved (which was not the case in [15]). It should also be noted that the accurate results of Fig. 4 could not be obtained in the mass cutoff framework ((3a) with b = 1), which does not allow the flow to converge; our findings were obtained with the smoother b = 2 cutoff.

[Figure caption: the inset shows the results obtained for c_IR as a function of β_{k=Λ}; these results show lower accuracy in the large-frequency limit, while they become practically exact in the limit β_{k=Λ} → 0, in agreement with [15].]

This discrepancy shows that the flow obtained by the approximate FRG procedure cannot satisfy the exact CFT requirements for the c-function. The discrepancy between the exact value Δc = 1 and the actual results obtained by the FRG approach can be used to quantify the error committed by the truncation ansatz in the description of the exact RG trajectories.
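For the reader's convenience, the field rescaling underlying this mapping can be spelled out. The display below is our reconstruction of the logic of Eqs. (32a)-(32b), which are not reproduced in this text: rescaling the field at each scale k trades a running frequency for a wave-function renormalization.

```latex
% Reconstruction (ours) of the rescaling behind Eqs. (32a)-(32b):
% a running frequency is traded for a wave-function renormalization.
\Gamma_k[\varphi] = \int d^2x\, \Big[ \tfrac{1}{2}\,(\partial_\mu \varphi)^2
    + u_k \cos(\beta_k \varphi) \Big]
\;\xrightarrow{\;\varphi \,=\, \chi/\beta_k\;}\;
\Gamma_k[\chi] = \int d^2x\, \Big[ \tfrac{z_k}{2}\,(\partial_\mu \chi)^2
    + u_k \cos\chi \Big],
\qquad z_k = \frac{1}{\beta_k^2}.
```

In this form the periodicity of the rescaled field χ is fixed to 2π, and all the scale dependence of the frequency is carried by z_k, consistent with the statement above that the running of β is achieved through the wave-function renormalization.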
We observe that the results of Fig. 4 (main panel and inset) are obtained using the power-law regulator with b = 2. The same computation appears to be considerably more difficult using general cutoff functions, including the exponential one. Let us note that for vanishing frequency the RG flow equations become regulator-independent, and the c-function value tends to the exact result Δc = 1. This justifies the accuracy obtained in [15], even though the mass cutoff was used and the periodicity violated.

Finally, we show the results for region III. As discussed in the description of Fig. 3, trajectories in region III of the SG flow diagram should not have a well-defined value of the c-function, since those trajectories start at β_{k=Λ} = ∞, where no real fixed point is present. However, the numerical results obtained for those trajectories (Fig. 5) are not far from Δc = 1, because they receive most of their contribution in the region where they approach the "master trajectory" separatrix of region I, i.e. the thick blue line in Fig. 3, which we know to have Δc ≈ 1, while the portions of the trajectories close to region II give almost zero contribution. The results of region III are also consistent with the findings of region II (not shown), where Δc ≈ 0 for all initial conditions.

[Fig. 5 caption: for the trajectories of region III shown in Fig. 3, the result is approximately Δc = 1, due to the fact that in region III most of the contribution to the c-function comes from the part of the trajectories very close to the "master trajectory" separatrix of region I (the thick blue line in Fig. 3).]

VII. CONCLUSIONS

In this paper we provided an estimate of the c-function over the RG trajectories of the sine-Gordon (SG) model in the whole parameter space. Using this result we showed that the numerical functional RG study of the SG model with scale-dependent frequency recovers, for β² < 8π (region I of Fig. 3), the exact result Δc = 1 with good quantitative agreement while preserving the periodicity, which is the characteristic symmetry of the model. We also pointed out the dependence of this c-function calculation on the approximation level considered. For β = 0 one retrieves Δc = 1 directly, also in the scale-independent frequency case; for β ≠ 0, again using a scale-independent frequency, we recover this result in the β² → 0 limit, while increasing β² up to 8π within region I the agreement worsens as a result of the approximation used, although it remains reasonably good, as shown in Fig. 1. Retrieving Δc = 1 is the SG counterpart of the computation of Δc for the sinh-Gordon model [8,9]. This result can be understood by noticing that the analytic continuation β → iβ [32] may be expected not to alter the Δc defined in the Zamolodchikov theorem, and that functional RG, even in its crudest approximation, does not spoil this correspondence for Δc, provided that the periodicity of the SG field is correctly taken into account.

We developed a fully coherent expression for the c-function in the case of running frequency, which gives better results in the whole of region I (defined in Fig. 3). These results are compatible with the exact scenario up to an accuracy of 80% in the whole of region I; the accuracy grows to 99% in the small-β region, in agreement with [15], as discussed for Fig. 4. We also noticed that for β² > 0 the use of the mass cutoff, which would be necessary for consistency with expression (18), is not possible due to its bad convergence properties, and different b values or different cutoff types are needed.
It should be noted that, while the numerical results are quite accurate, the exact property that all trajectories of region I should have the same value of Δc is preserved neither by the truncation scheme (24) (results in Fig. 1) nor by (19) (results in Fig. 4). The β² → 0 limit always gives the correct result, even in the roughest truncation, and this result is independent of the cutoff function. By contrast, one needs to go to the running-frequency case to obtain reliable results for β² ≫ 0. Even when full periodicity in the field is maintained, the z+LPA truncation scheme is not sufficient to recover exact results for the c-function. Indeed, the quantity computed using expression (18) satisfies two requirements of Zamolodchikov's c-theorem, namely (1) ∂_t c_t ≥ 0 along the flow lines and (2) ∂_t c_t = 0 at the fixed points, but it fails to reproduce the exact central charge of the SG theory.

The final result of our calculation also depends on the chosen cutoff function and, as already mentioned, it was not possible to use the same cutoff scheme for both the coupling flow equations and the c-function flow (18). We do not expect these issues to be responsible for the error in the fixed-point value of the c-function. Modifications of the cutoff scheme in LPA calculations have a small influence on the results (around 5%), and we may expect this property to be maintained at the z+LPA level, where calculations with different cutoff functions were not possible. The main source of deviation from the exact result Δc = 1 is then probably the z+LPA truncation itself. We are not able to identify whether this deviation is due only to the approximation in the c-function flow or rather to the description of the fixed point given by z+LPA, which does not reproduce the exact central charge. Certainly the c-function flow at the z+LPA level not only violates the exact fixed-point value of the c-function, but is also unable to produce trajectory-independent results, as shown in Fig. 4. This scenario is not consistent with a unique central-charge value at the SG fixed point, and it is then impossible to extract any information about this quantity from this approach. In this perspective, it would be interesting to have an independent method to calculate the fixed-point central charge at a given truncation level.

Obviously, the reproduction of Zamolodchikov's c-theorem should improve as the truncation level is increased. However, it has been shown that, at the LPA level, the addition of further harmonics in the potential does not improve the results presented, while the introduction of the running frequency is crucial to achieve consistency of the phase diagram. This situation is peculiar to the LPA truncation level. Beyond z+LPA we expect the most relevant corrections to come from higher harmonics in the potential, and only small variations are expected from the introduction of higher field derivatives in (24).

Finally, we remark that the trajectories of the other two regions do not have a definite Δc value; however, while region II gives Δc ≈ 0, region III has Δc values close to the ones obtained in region I (Fig. 5), due to the fact that all the trajectories in this region merge with the "master trajectory" separatrix of region I in the k → 0 limit. We conclude by observing that, in our opinion, a relevant future extension of this work would be the study of the c-function for the Ising model and for minimal conformal models in general.
Our work also points to the possible investigation, with RG techniques, of the analytic continuation relating the sine- and sinh-Gordon models. In this respect, we think it would be worthwhile to systematically study models interpolating between these two celebrated cases, both to highlight the roaming phenomenon for integrable interpolations and to bring out the critical properties of non-integrable interpolations.
9,258.2
2015-07-17T00:00:00.000
[ "Physics" ]
Tolerising cellular therapies: what is their promise for autoimmune disease?

The current management of autoimmunity involves the administration of immunosuppressive drugs coupled to symptomatic and functional interventions such as anti-inflammatory therapies and hormone replacement. Given the chronic nature of autoimmunity, however, the ideal therapeutic strategy would be to reinduce self-tolerance before significant tissue damage has accrued. Defects in, or defective regulation of, key immune cells such as regulatory T cells have been documented in several types of human autoimmunity. Consequently, it has been suggested that the administration of ex vivo generated, tolerogenic immune cell populations could provide a tractable therapeutic strategy. Several potentially tolerogenic cellular therapies have been developed in recent years; concurrent advances in cell manufacturing technologies promise scalable, affordable interventions if safety and efficacy can be demonstrated. These therapies include mesenchymal stromal cells, tolerogenic dendritic cells and regulatory T cells. Each has advantages and disadvantages, particularly in terms of the requirement for a bespoke versus an 'off-the-shelf' treatment, but also their suitability in particular clinical scenarios. In this review, we examine the current evidence for these three types of cellular therapy, in the context of a broader discussion around potential development pathway(s) and their likely future role. A brief overview of preclinical data is followed by a comprehensive discussion of human data.

Introduction

The complexity of immune tolerance mechanisms presents abundant opportunities for its breakdown, leading to the development of autoimmunity. In most cases, the precise pathogenesis of autoimmunity remains unknown, but the genetic polymorphisms that underpin, for example, rheumatoid arthritis (RA) indicate that antigen presentation, cytokine dysregulation and the regulation of lymphocyte activation all play key roles.
Furthermore, the clustering of different autoimmune diseases within families attests to common genetic predisposition and pathogenic mechanisms. However, for most autoimmune diseases the provoking autoantigen(s) have not been defined and, critically, the predilection for the joint in RA versus the brain in multiple sclerosis (MS) versus the pancreas in diabetes mellitus remains enigmatic. Ultimately, the immune system can be viewed as a delicate balance of activation versus tolerance, with multiple mechanisms acting to maintain homeostasis.

Historically, management of autoimmune disorders involved managing end-organ manifestations, such as insulin replacement in diabetes and control of pain and inflammation in conditions such as RA (table 1). During the second half of the 20th century, the discovery of glucocorticoids and, subsequently, immunosuppressant medications enabled modification of the autoreactive process, with reduced tissue damage and even improved life expectancy in diseases such as systemic lupus erythematosus (SLE). The 21st century has seen the biologics revolution, with potent, targeted therapies that neutralise key proinflammatory cytokines or interfere with lymphocytes themselves. And, most recently, potent synthetic signalling pathway inhibitors are providing a further means to modulate immune reactivity. 1 Nonetheless, current management options rarely lead to cure, or drug-free remission, and most patients require long-term maintenance therapy to control disease manifestations. For example, in RA, approximately 30% of patients achieve sustained remission, but 50% of these will flare if treatment is discontinued. The proportion that flares is usually higher once patients have moved on to more potent biological therapies. 2

Because immunosuppressants downregulate the normal adaptive immune system, it is not surprising that several of the therapies in table 1 are associated with an enhanced infection risk, including opportunistic infections, and the development of malignancy. This is in addition to disease comorbidities and drug-specific side effects, for example with chronic glucocorticoids. In extreme cases, haematopoietic stem cell transplantation has been used to treat autoimmunity but, with rare exceptions, this intervention has not proved curative. 3 4 The holy grail of treatment for autoimmunity would be the reinstatement of immune tolerance. So-called therapeutic tolerance induction offers the opportunity to 'reset' the diseased immune system to a state of immune tolerance, theoretically providing for long-term, drug-free remission. 5 While multiple strategies have proven effective in animal models of autoimmunity and transplantation, translation to the clinic has been slow. Multiple explanations have been offered, relating to disease stage, the therapeutics employed and the need for better biomarkers of tolerance, among others. Nonetheless, because of the slow progress with therapeutics that target the immune system, such as biologic drugs and peptides, recent strategies have focused on the use of tolerogenic cells themselves.

Tolerogenic cell types

In recent years, investigators have turned their attention to the ex vivo expansion or differentiation of 'tolerogenic' immune cells, followed by their adoptive transfer, as a potential route to therapeutic tolerance induction.
To a large degree, these strategies have been catalysed by advances in bio-manufacturing in general, with robust and scalable processes leading to the efficient manufacture of advanced cellular therapies. 6

[Table 1 note: For several therapies, particularly DMARDs, the precise mode of action is not known. Immunomodulation denotes that the treatment has a specific and defined effect on the immune system. DMARDs, disease-modifying anti-rheumatic drugs; MTX, methotrexate.]

To date, three main types of tolerogenic cell have been the focus of therapeutic strategies in humans.

Mesenchymal stromal cells

Mesenchymal stromal cells (MSCs) are spindle-shaped, plastic-adherent, progenitor cells of mesenchymal tissues with multipotent differentiation capacity. 7 MSCs can modulate innate and adaptive immune cells including dendritic cells (DC), natural killer (NK) cells, macrophages, B-lymphocytes and T-lymphocytes. This occurs via both cell-cell contact and paracrine interactions through several soluble mediators, including indoleamine-2,3-dioxygenase (IDO), prostaglandin E2 and transforming growth factor β. [8][9][10] These and other mechanisms are summarised in figure 1. By definition, MSCs can differentiate into bone, chondrocytes and adipose tissue in vitro; they are phenotypically positive for CD105, CD73 and CD90 and negative for the haematopoietic markers CD45, CD34, CD14, CD11b, CD3 and CD19. 7 11 They do not express class II MHC molecules unless stimulated by interferons 7 and lack costimulatory molecules such as CD40, CD80 and CD86. Exposure to the proinflammatory cytokines IFN-γ, TNF and IL-1β 10 and activation by exogenous/endogenous danger signals such as bacterial products and heat shock proteins through Toll-like receptor 3 (TLR3) 'license' MSCs to become immunosuppressive 12 ; in contrast, activation through TLR4 confers a proinflammatory signature and, under some conditions, TLR3 signals may do the same. 12 13 The immunomodulatory functions of MSCs include their ability to inhibit T cell proliferation and promote T cell differentiation into regulatory T cells (Tregs), 14 to inhibit the CD4+ T cell-induced differentiation of B cells into plasma cells, and to directly inhibit B cell proliferation, differentiation and chemotaxis. 15 Although MSCs reside in most postnatal organs and tissues, 16 they are readily harvested from bone marrow, adipose tissues, umbilical cord blood and Wharton's jelly (figure 2).

Tolerogenic dendritic cells (tolDC)

DCs are best recognised for their antigen-presenting functions in driving immune responses against pathogens and tumour cells. However, DC also play crucial roles in co-ordinating central and peripheral tolerance processes, such that absent or deficient DC associate with an increased tendency to develop autoimmunity. 17 18 Furthermore, in autoimmunity, DC are skewed to a proinflammatory state, producing more proinflammatory cytokines and leading to activation and differentiation of autoreactive T cells. 19 Immature DC are usually regarded as tolerogenic, whereas mature DC can exert either tolerogenic or immunogenic functions depending on signals received during maturation from the microenvironment and invading pathogens. For instance, bacterial lipopolysaccharides induce immunogenic maturation of DC by upregulating surface MHC complexes and T cell costimulatory molecules (CD80, CD86), 20 21 while schistosomal lysophosphatidylserine, anti-inflammatory cytokines (eg, IL-10) and glucocorticoids induce a tolerogenic phenotype. 18
Tolerogenic dendritic cells (tolDC) induce peripheral tolerance by induction of anergy and deletion of T cells, 22 blockade of T cell expansion 23 and induction of regulatory T cells (Tregs). 24 25 Tregs in turn induce the regulatory properties of DC (figure 1). These mechanisms have already been reviewed. 26 27

[Figure 1 caption: A schematic representation of the mechanisms of action of tolerogenic cells. MSCs promote the differentiation and survival of Tregs and tolDC. Tregs and tolDC, on the other hand, enjoy a mutual bidirectional positive interaction with each other. Tregs and MSCs inhibit the actions of B cells, effector T cells, macrophages and neutrophils through cell-cell contact (eg, Fas:Fas ligand (FasL)-mediated deletion) and various soluble factors such as TGF-β, IDO, PGE2, IL-10, IL-6 and sHLA-G5. MSCs also act through extracellular vesicles. 8-10 18 TolDC directly inhibit effector T cells through various mechanisms. These include cell-cell ligand-receptor-mediated deletion (for example, Fas:FasL, and PD-L1 and PD-L2 on tolDC with PD-1 receptors on effector T cells) and effector T cell anergy secondary to low expression of the costimulatory molecules CD80/CD86 and CD40 and of proinflammatory cytokines (TNF, IL-12, IL-21 and IL-16) by tolDC. Other mechanisms include soluble anti-inflammatory cytokines such as IL-10, IL-4 and TGF-β. 26 27 TolDC directly promote Tregs and so indirectly inhibit other immunogenic cells through Tregs. Mechanisms include soluble factors such as IL-10, IDO, TGF-β and TSLP, and cell-cell interaction between CTLA-4 and CD80/86. This interaction, in turn, leads to transendocytosis of CD80/86 and further tolerogenic phenotypic 'reinforcement' of tolDC. Tregs also promote tolDC via IL-10 and TGF-β. 26 27 CTLA-4, cytotoxic T-lymphocyte associated protein 4; IDO, indoleamine-2,3-dioxygenase; IL, interleukin; MSCs, mesenchymal stromal cells; PDL, programmed death ligand; PGE2, prostaglandin E2; sHLA, soluble human leucocyte antigen; TGF-β, transforming growth factor beta; tolDC, tolerogenic dendritic cells; TSLP, thymic stromal lymphopoietin.]

Several methods can be used to produce stable tolDC ex vivo, with limited or no capacity to transdifferentiate into immunogenic DC. Common methods include inhibiting the expression of immune-stimulatory molecules (CD80/CD86 and IL-2) [28][29][30] or stimulating constitutive expression of immunosuppressive molecules such as IL-4, IL-10 and CTLA-4, 31-35 through genetic engineering. TolDC can also be produced by exposing differentiating DC ex vivo to drugs such as dexamethasone and vitamin D3, 36 37 or to immunosuppressive cytokines such as IL-10 and TGF-β 38-40 and lipopolysaccharides. 41 These and other methods have been extensively reviewed elsewhere. 42

Regulatory T cells (Tregs)

Tregs are a subset of T cells expressing CD4, CD25 and intracellular Forkhead box P3 (FoxP3) protein that inhibit the functions of effector T cells as well as other immune effector cells, and so are essential for immune tolerance. 43 44 They mediate their effects by producing immunosuppressive cytokines and by cell-to-cell contact, following stimulation via their antigen-specific T cell receptors (TCR). These mechanisms also modulate other immune responses in an antigen-non-specific manner through 'bystander suppression' and 'infectious tolerance'. 45 46 Treg depletion and dysfunction have been implicated in a variety of autoimmune disorders including type 1 diabetes, RA, SLE and, classically, with an
inherited deficiency of FoxP3, immune dysregulation polyendocrinopathy enteropathy X-linked (IPEX) syndrome. 47 48 These findings support the possibility that ex vivo expansion and transfusion of autologous or allogeneic Tregs could provide an effective therapeutic strategy for unwanted immunopathology such as autoimmunity. In the past, the lack of reliable Treg surface markers, and the resultant possibility of simultaneously isolating and transfusing proinflammatory T cells, slowed the development of protocols for Treg isolation and expansion. 5 More recent studies have used the CD4, CD25 and CD127 cell surface markers to isolate CD4+CD127lo/−CD25+ Tregs from blood. 49 50 Other types of regulatory T cells exist, such as T-regulatory type 1 (Tr1) cells, which secrete IL-10. 51 These are a distinct population of regulatory T cells that only transiently express FoxP3, on activation. 52 They coexpress CD49b and LAG-3, and secrete high levels of IL-10 but low amounts of IL-4 and IL-17. Suppression is dependent on IL-10 and TGF-β, and they kill myeloid antigen-presenting cells via granzyme B release.

Migration of tolerogenic cells

MSCs, Tregs and tolDC express a host of homing receptors that are important for their transmigration from the tissue of administration (eg, skin or vascular system) to activation sites (eg, regional lymph nodes) and, ultimately, to the target organs. For instance, FoxP3+ Tregs express CC receptor 7 (CCR7), CCR4, CCR6, CXC receptor 4 (CXCR4) and CXCR5. They also express CD103 (integrin αEβ7), whose ligand is E-cadherin expressed by epithelial cells, and CD62L (L-selectin), whose ligands are the lymph node and mucosal lymphoid tissue endothelial cell addressins CD34, GlyCAM-1 and MAdCAM-1. 53 Activated tolDC express CCR7 and migrate towards CC chemokine ligand 19 (CCL19), 54 underpinning migration to regional lymph nodes. MSCs, on the other hand, express a restricted set of chemokine receptors (CXCR4, CX3CR1, CXCR6, CCR1, CCR7) and show appreciable chemotactic migration in response to the chemokines CXC ligand 12 (CXCL12), CX3CL1, CXCL16, CCL3 and CCL19. 55 MSCs may also exert tolerogenic effects in distant tissues via extracellular vesicles. 10 It is clearly important that migration potential is considered during the generation of cellular therapies.

[Figure 2 caption: Preparation and administration of tolerogenic cellular therapies. This figure describes the process of cellular therapy manufacture and administration. Sources of substrate cells include autologous or allogeneic umbilical cord tissue, bone marrow aspirate and lipo-aspirate for mesenchymal stromal cells, and autologous whole blood for expanded regulatory T cells and tolerogenic dendritic cells. Mononuclear cells are usually extracted by density-gradient centrifugation of whole blood, bone marrow aspirate and digested tissue (lipo-aspirate and umbilical cord tissue), or by leukapheresis (whole blood). Mononuclear cells are then cultured in the appropriate media and culture conditions for the requisite duration or number of passages. Harvested cells can be administered immediately through various routes (subcutaneous, intravenous, intralesional and intrathecal) or cryopreserved for future use.]

Cellular therapies for therapeutic tolerance

What could cellular therapies achieve? Numerous preclinical studies using animal models of autoimmune disorders have shown potent tolerogenic effects of these various immune modulatory cells, although some mechanisms of action remain unclear.
Animal models do not faithfully replicate all mechanisms of human autoimmunity, but positive results have provided the scientific basis to catalyse clinical trials.

Mesenchymal stromal cells (MSCs)

The first preclinical study of MSCs in an autoimmune setting was in experimental autoimmune encephalomyelitis (a model for MS). 56 MSCs were effective in treating the disease and were shown to be strikingly effective if injected before or at the onset of disease. Further studies in experimental MS buttressed this finding [57][58][59][60] and showed that MSCs control disease through inhibition of CD4+ Th17 T cells, 58 generation of CD4+CD25+FoxP3+ Tregs 60 and hepatocyte growth factor production. 59 Therapeutic efficacy was also observed in the MRL/Lpr 61 and NZB/W F1 62 63 mouse models of SLE. MSCs were effective in collagen-induced arthritis, 64 65 Freund's adjuvant-induced arthritis and K/BxN mice with spontaneous erosive arthritis. 66 These studies have been reviewed elsewhere. 10

Results from early clinical trials in MS showed good tolerability and some potential efficacy [67][68][69][70] (table 2A), associated with an increased number of Tregs in the peripheral blood of patients. 67 In the most recent controlled study, 70 13 patients received MSCs while 10 patients received conventional MS treatment. The active treatment group showed a more stable disease course and a transient increase in immunomodulatory cytokines. A placebo-controlled dose-ranging study of mesenchymal-like cells derived from placenta in patients with MS 71 used a distinct type of cell with immunomodulatory and regenerative properties, which does not fully meet ISCT criteria for MSCs (and is therefore not included in table 2A). Their phenotype includes CD10+, CD105+ and CD200+; they are CD34− and, like MSCs, do not express class II HLA or the costimulatory molecules CD80 and CD86. The cells appeared safe and well tolerated in patients with relapsing-remitting MS and secondary progressive MS.

In RA, MSCs were well tolerated and showed preliminary efficacy, with improvements in clinical outcomes when combined with disease-modifying anti-rheumatic drugs (DMARDs). 72 73 In the first placebo-controlled randomised trial of MSCs in RA, 73 40 patients who had failed at least two biological DMARDs received intravenous infusions of adipose-derived MSCs at varying doses, while 7 patients received placebo. Adverse events were few and included fever and respiratory tract infections; however, serious adverse events included a lacunar infarction. Clinical outcomes, especially DAS28-ESR, showed a dose-dependent improvement.

The first case series of MSCs in patients with SLE was published in 2009. 74 Four patients with cyclophosphamide/glucocorticoid-refractory SLE were treated with bone marrow-derived MSCs. After 12-18 months of follow-up, all showed improvement in disease activity, renal function and serological markers. Subsequent studies, mainly by the same group, have confirmed that MSCs are safe in SLE and reported promising results such as improvement in renal function, proteinuria, SLE disease activity indices, anti-dsDNA titre and circulating Tregs. [75][76][77][78][79][80] In the most recent multicentre study, up to 60% of treated patients achieved either a major or a partial clinical response as determined by British Isles Lupus Assessment Group scores. 80 However, a relapse rate of 12.5% at 9 months may warrant repeated infusions of MSCs.
An analysis, by the same group, of four patients with diffuse alveolar haemorrhage in SLE using high-resolution CT showed resolution of the lung pathology after treatment with MSCs. 81

A serious complication of Crohn's disease is perianal fistulae. MSCs have been extensively studied in Crohn's disease for their immunomodulatory properties and for their ability to differentiate into mesodermal tissues with tissue repair capabilities (table 2B). Results in Crohn's disease are encouraging, with patients who received MSCs experiencing significant improvement in fistulae while reporting only minor side effects. [82][83][84][85][86][87][88][89][90] The unprecedented success of MSCs in a recently concluded phase III multicentre clinical study 90 in Crohn's disease, conducted across seven European countries and Israel, implies that MSCs could become a treatment of choice for Crohn's fistulae refractory to conventional treatment. In this study, 212 patients with Crohn's disease-associated fistulae received intralesional injections of either MSCs or placebo. Fifty per cent of the treatment group achieved combined clinical and radiological remission at 24 weeks, compared with 34% of the placebo group, with only minor adverse effects reported. MSCs have also been successfully embedded in an absorbable biomaterial and surgically delivered for the treatment of fistulae associated with Crohn's disease. 91 In this study, 12 patients safely received MSCs embedded in a Gore fistula plug, with a fistula healing rate of 88.3% at 6 months.

MSCs have also been used in several trials to prevent and treat graft-versus-host disease (GVHD). In a multicentre phase II study, 55 patients with steroid-resistant severe acute GVHD received MSCs at a median dose of 1.4×10⁶ cells, obtained either from HLA-identical sibling donors, haploidentical donors or third-party HLA-mismatched donors. Up to 30 patients achieved a complete clinical response, independent of cell source. 92 In a recent phase II study, prophylactic MSCs were successfully used to prevent GVHD following HLA-haploidentical stem cell transplantation. 93

[Table 3 entry (ref 103): Phase I unblinded randomised controlled dose-escalation study of monocyte-derived autologous tolDC. Three cohorts of patients with rheumatoid or other inflammatory arthritis received 1×10⁶, 3×10⁶ or 10×10⁶ cells into an inflamed knee; DC were exposed to synovial fluid during culture as a source of autoantigen, and a fourth (control) cohort received arthroscopic washout alone. Outcomes: a safe and acceptable procedure; feasible to manufacture tolDC from the peripheral blood of patients with arthritis; arthroscopically assessed synovial vascularity and synovitis improved in some patients who received tolDC; first intra-articular administration of tolDC; no consistent immunomodulatory trend in peripheral blood between treatment and control groups; no evidence of DC-induced joint flare (indicating DC stability).]

A potential advantage of MSC therapy over some other tolerogenic therapies is that their lack of MHC class II expression means that they can be derived from either an autologous or an allogeneic source with little or no risk of immune rejection. 10 Thus, cryopreserved allogeneic MSCs could become an 'off-the-shelf' therapy rather than a bespoke therapy requiring preparation at the point of delivery. In tables 2A and 2B, the source of MSCs is indicated for each trial listed.
Tolerogenic dendritic cells (tolDC)

In an early murine experiment, allogeneic DC transfer from diabetic non-obese diabetic (NOD) mice to prediabetic NOD mice prevented the development of diabetes in the latter. 94 The hypothesis was that the diabetic NOD mouse DC contained pancreatic antigens that conferred immunoregulatory properties, possibly by targeting regulatory T cells specific to those antigens. Since then, many preclinical studies have demonstrated that ex vivo generated DC with an anti-inflammatory or tolerogenic phenotype can effectively suppress or 'switch off' autoimmune disorders such as diabetes, 95 96 arthritis, 97 MS, 98 99 autoimmune thyroiditis 100 and myasthenia gravis. 39 In most studies, tolDC were pulsed with antigens to confer specificity: bovine serum albumin for bovine serum albumin-induced arthritis, 97 pancreatic islet lysate for diabetes, 95 encephalitogenic myelin basic protein peptide 68-86 (MBP 68-86) for MS 99 and thyroglobulin for autoimmune thyroiditis. 100 Interaction of autoreactive T cells with such partially mature or 'deviated' DC results in their loss of functionality (anergy), apoptosis or acquisition of regulatory function. The majority of the studies aimed at prevention of autoimmunity by administering tolDC in the pre-disease state (either prophylactically or immediately post-immunisation). 39 95 96 100 However, tolDC also arrested established disease, 39 41 97 with outcomes similar to those of the prophylactic models. 98 These studies have been summarised elsewhere. 42

The first clinical trial of tolDC in a human autoimmune disorder was in type 1 diabetes 101 (table 3). In this study, 10 million autologous DC were safely administered intradermally to patients twice a week for a total of four doses, without serious adverse effects. Two forms of DC were used: immature 'control DC' cultured from monocyte precursors using IL-4 and GM-CSF, and immunosuppressive DC (iDC) genetically manipulated ex vivo to block the expression of the costimulatory molecules CD80/CD86. 101 TolDC were not loaded with autoantigens in this trial. Some therapeutic efficacy was suggested, as some patients showed elevated c-peptide levels post-treatment, indicative of increased endogenous insulin production. In a phase I single-centre study, tolDC were also safely infused intraperitoneally in patients with refractory Crohn's disease and showed some potential efficacy. 102 Other studies of tolDC in autoimmunity are in inflammatory arthritis: the AuToDeCRA study, where autologous tolDC were loaded with autologous synovial fluid as a source of autoantigen, 103 and the Rheumavax study, where autologous tolDC were exposed to citrullinated peptides to confer antigen specificity and administered intradermally to patients with RA. 104 In the phase I AuToDeCRA study, DC were injected arthroscopically into an inflamed knee joint, as a robust test of their stability and safety in an inflamed environment. There was no evidence that the procedure provoked a flare of symptoms. In a study published only as an abstract, recombinant autoantigen-loaded tolDC were administered subcutaneously to patients with RA at doses of 0.5×10⁷ and 1.5×10⁷ cells. Dose-dependent efficacy was reported, especially in autoantigen-positive patients, and autoantibody titres also decreased. 105 Other trials in Crohn's disease, RA and MS are ongoing, and results are yet to be published. 27
A potential advantage of (autoantigen-loaded) tolDC compared with MSCs is their capacity to specifically target autoreactive T cells, without non-specific immune suppression. 103 104 Other similar antigen-specific cells are actively being investigated, especially in transplantation. These include regulatory macrophages (Mregs), 106-108 myeloid-derived suppressor cells 109 and MSC-conditioned monocytes. 110 While other applications remain preclinical, regulatory macrophages have been studied in humans in the context of renal transplantation. In a recent case report, 108 two patients received donor-derived Mregs at doses of 7.1×10⁶ and 8×10⁶ cells/kg intravenously prior to receiving living-donor renal transplants. Both patients were eventually weaned from steroids over 10 weeks, leaving maintenance low-dose tacrolimus. Transfused Mregs were shown to secrete IL-10 and to suppress T cell proliferation by cell-cell contact and IFN-γ-induced IDO activity. 108 Both patients showed increased numbers of circulating Tregs post-transplant and a peripheral blood gene expression profile indicative of tolerance according to the Indices of Tolerance (IOT) research network. 111

Regulatory T cells

'Natural' CD4+CD25+FoxP3+ regulatory T cells (Tregs) play a central role in immune tolerance in health. While the evidence is not always definitive, Treg defects or deficiencies have been implicated in several autoimmune diseases. 47 112 As with MSCs and DCs, considerable effort has therefore been dedicated to developing methodologies to isolate and expand these cells as a potential tolerogenic therapy for autoimmune disease. Isolation uses the cell surface markers CD4, CD25 and usually CD127low. Subsequent expansion generally uses anti-CD3, anti-CD28 and IL-2 (figure 2). The expanded cells can, in theory, be rendered disease-specific by expansion in the presence of relevant autoantigens or by genetic manipulation of TCR expression. 113 Expanded Tregs have been used preclinically to treat murine models of autoimmunity, especially type 1 diabetes, [114][115][116][117][118] and, in some studies, Tregs were expanded with DCs to confer antigen specificity.

In humans, early trials took place in patients with GVHD following bone marrow transplantation. For example, transfusion of HLA partially matched allogeneic umbilical cord blood-derived Tregs at a dose of 0.1-30×10⁵ Tregs/kg, following double umbilical cord blood transplantation, was associated with a reduced incidence of acute GVHD when compared with identically treated controls without Tregs. 119 Tregs have also been used in a phase I study to prevent GVHD by infusing donor-specific ex vivo expanded Tregs prior to haploidentical haematopoietic stem cell transplantation without post-transplantation GVHD prophylaxis. 120 The first description of expanded Treg administration in human autoimmunity was in children with type 1 diabetes. 121 Ten children received intravenous injections of autologous Tregs in two dosing cohorts (10×10⁶ and 20×10⁶ cells/kg) and were followed for 6 months (table 4). A matched control group was used to compare clinical improvement after infusion. The treatment group, on average, had lower insulin requirements at 6 months compared with their matched controls. In an extension of this study, a higher dose of up to 30×10⁶ cells/kg was well tolerated and associated with some clinical improvement after 12 months (reduction in insulin requirement and higher C-peptide levels). 122
In a recent study in adults with newly diagnosed type 1 diabetes, 50 a dose-escalation protocol was used to assess the maximum tolerated dose of Tregs. Patients received intravenous infusions of Tregs up to a target dose of 2.3×10⁹ cells, experiencing no serious adverse effects. In vitro analysis showed that expansion of the Tregs increased both the overall number of cells and their functional activity/potency. In this study, the DNA of the expanded Tregs was labelled with deuterium, allowing in vivo tracking. Up to 25% of transfused Tregs survived in the peripheral blood after 1 year. Furthermore, deuterium did not appear in other lymphocyte populations, suggesting that the expanded Tregs were stable after administration. Autologous Tr1 cells were also well tolerated when administered intravenously in 20 patients with Crohn's disease, with an associated improvement in disease activity. 123

Concerns have been raised about the potential plasticity of Tregs in relation to their reliability as a cellular therapy. Natural Tregs form a relatively small proportion of peripheral blood CD4+ T cells and express no unique surface marker to facilitate their isolation. Nonetheless, enrichment of CD127−/low cells generally suffices to minimise contamination with activated T cells. However, the propensity for expanded Tregs to express IL-17 was noted some years ago, with evidence suggesting that CD4+CD25+FoxP3+ Tregs can undergo transformation to pathogenic Th17 cells after repeated expansion. [124][125][126] These studies demonstrated that epigenetic instability of the FoxP3 and retinoic acid receptor-related orphan receptor (RORC) loci accounted for the potential for Th17 (de-)differentiation. Further investigation demonstrated that both loci were stable in 'naïve' (CD45RA+) Tregs, when compared with memory (CD45RO+) Tregs. 126 127 Therefore, use of CD45RA as an additional marker for Treg isolation should minimise expansion-induced epigenetic instability and produce a more homogeneous tolerogenic Treg population with a low risk of Th17 transformation. In mice, evidence exists for cells that coexpress FoxP3 and RORγT, the murine equivalent of the Th17-lineage-defining marker RORC. 128 Despite a capacity to differentiate into either classical Tregs or Th17 cells, these cells demonstrated a regulatory function in murine diabetes.

The development of Tr1 cells as a therapy is at an earlier stage than regulatory T cell therapy. They can be expanded ex vivo from PBMC or CD4+ T cells. One method, using an IL-10-secreting DC (DC-10), can generate allospecific Tr1 cells for potential use in haematological or solid organ transplantation. An alternative technique generated ovalbumin-specific (ova-specific) Tr1 cells for a phase 1b/2a clinical trial in Crohn's disease. 123

In vivo expansion of regulatory T cells

IL-2 is a key cytokine for T cell activation and proliferation. Furthermore, because natural Tregs express high levels of CD25, the IL-2 receptor alpha chain, they are highly sensitive to stimulation by IL-2. In patients with cancer treated with peptide vaccine 129 and DC-based vaccine immunotherapy, 130 131 administration of IL-2 (with a rationale of expanding effector T cells) actually led to in vivo expansion of Tregs. This led to the theory that IL-2, particularly at low doses, will preferentially expand Tregs, informing preclinical experiments and clinical trials in autoimmunity.
[Table 4 entries: (ref 121) Phase I non-randomised study: 10 children with type 1 diabetes received autologous Tregs intravenously in two dosing cohorts (10×10⁶ and 20×10⁶ cells/kg body weight); a matched control group of 10 children did not receive a placebo. In the extension study, 122 two extra patients were recruited for treatment, and 6 of the total 12 patients received an additional infusion at 6-9 months (either 10×10⁶ or 20×10⁶ cells/kg), making up a total dose of 30×10⁶ cells/kg; here, patients were followed up for 1 year. No serious adverse events; generally, treated children had lower insulin requirements at 6 months compared with matched controls and recorded significantly higher c-peptide levels; the higher dose of 30×10⁶ cells/kg was also safely tolerated and associated with better clinical outcomes (more patients in this group achieved remission at 1 year, with the highest fasting and stimulated c-peptide levels and the lowest HbA1c levels). Tr1 trial: safely tolerated with few adverse events; clinical improvement with a reduction in Crohn's disease activity index and inflammatory bowel disease questionnaire scores; first-in-human study of the use of Tr1 cells for the treatment of autoimmunity; the authors argue that ovalbumin is widely distributed in the GI tract and will activate Tr1 cells. IL-2 vasculitis trial: safe, with no major adverse events; a reduction in cryoglobulinaemia in 90% of patients and improvement in vasculitis in 80%; FoxP3+ Tregs also increased in peripheral blood.]

In a cohort of patients with chronic refractory GVHD, low-dose IL-2 administration (0.3-1×10⁶ IU/m²) increased the Treg:Teff ratio, with improvement in clinical symptoms, and enabled tapering of the steroid dose by a mean of 60%. 132 Similarly, low-dose IL-2 (1-2×10⁵ IU/m²) post-allogeneic SCT in children prevented acute GVHD when compared with those who did not receive low-dose IL-2. 133 Treatment of patients with hepatitis C virus-induced, cryoglobulin-associated vasculitis with IL-2, at a dose of 1.5×10⁶ IU once a day for 5 days followed by 3×10⁶ IU for 5 days in weeks 3, 6 and 9, was associated with clinical improvement in 80% of patients as well as a reduction in cryoglobulinaemia and normalisation of complement levels. 134 In a phase I trial in type 1 diabetes, administration of 2-4 mg/day of rapamycin and 4.5×10⁶ IU IL-2 three times per week for 1 month led to a transient increase in Tregs but a paradoxical worsening of β-cell function, associated with an increase in circulating NK cells and eosinophils. 135 In SLE, a Treg defect associates with disease activity and appears secondary to defective endogenous IL-2 production. 136 Exogenous low-dose IL-2 appears both to reverse the biological defect and to provide a potential therapeutic strategy. [136][137][138]

A common finding in trials of low-dose IL-2 to treat autoimmunity is that the effects are transient, declining once treatment is discontinued. The effects may not be limited to natural Tregs but may also extend to FoxP3+CD8+ T cells, at least in type 1 diabetes. 139 However, an optimum dosing regime is yet to be defined. Results from a recent adaptive dose-finding study in 40 patients with type 1 diabetes suggest that the optimal doses of a single injection of IL-2 inducing 10% and 20% increases in Tregs over 7 days are approximately 0.10×10⁶ IU/m² and 0.5×10⁶ IU/m², respectively. 140
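Since the IL-2 doses in these studies are quoted per unit body surface area, comparing regimens sometimes requires converting to an absolute dose. The sketch below uses the Mosteller formula for body surface area; the function names and example numbers are ours and purely illustrative, not taken from any of the cited trials.

```python
from math import sqrt

def body_surface_area_m2(height_cm: float, weight_kg: float) -> float:
    """Mosteller formula: BSA (m^2) = sqrt(height_cm * weight_kg / 3600)."""
    return sqrt(height_cm * weight_kg / 3600.0)

def absolute_dose_iu(dose_iu_per_m2: float, height_cm: float, weight_kg: float) -> float:
    """Convert a dose quoted in IU/m^2 into a total dose in IU."""
    return dose_iu_per_m2 * body_surface_area_m2(height_cm, weight_kg)

# Illustrative only: the 0.5e6 IU/m^2 upper dose from the dose-finding study,
# for a hypothetical 170 cm, 70 kg adult (BSA ~ 1.82 m^2).
print(f"BSA ~ {body_surface_area_m2(170, 70):.2f} m^2")
print(f"Total dose ~ {absolute_dose_iu(0.5e6, 170, 70):.2e} IU")
```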
The same study also showed that the mean plasma concentration of IL-2 at 90 min post-injection, even at the lowest doses, was higher than the hypothetical Treg-specific therapeutic window determined in vitro (0.015-0.24 IU/mL). This was associated with a dose-dependent transient desensitisation of Tregs (downmodulation of the beta subunit of the IL-2 receptor, CD122) and a decrease in the number of circulating Tregs and other lymphocytes, which improved 2 days after injection. These findings may explain the lack of response seen in some patients who have received daily injections of low-dose IL-2. A follow-on study by the same group investigated the optimum frequency of administration of IL-2 in type 1 diabetes. 141 The results show that the optimum regimen to maintain a steady-state increase in Tregs of 30% and in CD25 expression of 25%, without Teff expansion, was 0.26×10⁶ IU/m² every 3 days. 142 It is unclear at this juncture whether in vivo expansion of Tregs might provide a superior therapeutic option in autoimmunity compared with ex vivo expansion and re-administration. Conceivably, the two modalities could be combined.

Other attempts have been made to expand Tregs in vivo. One method is the administration of autoantigen in incomplete Freund's adjuvant (IFA). In a phase I trial, a single dose of insulin β-chain in IFA was administered intramuscularly to patients with type 1 diabetes. 143 Treatment was well tolerated and appeared to stimulate robust antigen-specific regulatory T cell populations in the treatment arm up to 24 months, although there was no statistically significant difference in mixed-meal-stimulated c-peptide responses compared with the control group. Other methods are the probiotic use of whole helminths or their unfractionated products and the administration of purified excretory/secretory helminth products. In preclinical studies using animal models of RA, MS, Crohn's disease and type 1 diabetes, these induce Tregs (and other regulatory cells) in vivo and prevent autoimmunity. [144][145][146] However, clinical trials are yet to show consistently encouraging results in humans. 145

Where are we now?

Results to date from human clinical trials have shown that cellular therapies are, at minimum, safe and feasible, and therefore worth exploring further in our pursuit of therapeutic tolerance induction. The regenerative properties of MSCs could additionally provide an element of tissue replenishment, repairing some of the damage that inevitably accompanies autoimmunity. However, most of the studies outlined in this review are at the very earliest phases of clinical development. Phase II and, ultimately, phase III studies will be needed to confirm their efficacy. Furthermore, as with any tolerogenic therapy in autoimmunity, clear objectives are required for efficacy trials. In transplantation, 'operational tolerance' is present when immunosuppression can be removed without allograft rejection. The situation is less clear in autoimmunity. Re-establishment of self-tolerance should equate with lifetime drug-free remission, which has been demonstrated in some animal models when tolerogenic cells are administered both prophylactically and therapeutically. 42 95 However, tolerance takes time to develop, and tolerogenic therapies may not reduce symptoms in the short term, necessitating the temporary continuation of more conventional therapies. Furthermore, immunosuppressive drugs and glucocorticoids could potentially interfere with tolerance induction, as previously suggested for calcineurin inhibitors. 147
Careful clinical trial designs will therefore be fundamental for robustly identifying tolerance induction. In the short term, this is likely to require immune monitoring, for example using autoantibody arrays and MHC-peptide tetramers, in order to track and interrogate the quality and quantity of the autoantigen-specific response. 148 149 To date, cellular therapy trials have only occasionally incorporated experimental medicine end-points, for example to measure the longevity of cells or their distribution in vivo, or to determine appropriate dosage. 123 140 It is important that future trials adopt a similar philosophy, both to advance therapeutic development and for ethical reasons.

Other factors to consider during the development of tolerogenic cellular therapies include the route of delivery. For more standard therapeutics, the main decision is usually oral versus parenteral delivery. For cellular therapies, the route has to be parenteral, but the decision is potentially more sophisticated. For example, where might tolDC regulate an aberrant autoimmune response? In the target tissue, the draining lymph nodes, the central lymphoid organs? The route of delivery is likely to influence the therapy's ultimate destination, and treatment development needs to encompass work demonstrating that the cells express appropriate homing receptors. Then there are the more standard developmental questions such as dosage and frequency of administration: a true tolerogenic therapy should only require a single 'course' of treatment but, in a patient with a propensity to autoimmunity, regular re-treatments may be required to keep autoreactivity at bay. Choice of autoantigen is also critical for certain cellular therapies. Lastly, cost-effectiveness has to be demonstrated for any novel treatment. However, the health economics would be very different for a tolerogenic therapy if it could truly avoid the need for chronic immunosuppressive therapy and its complications, not to mention the ravages of autoimmunity-associated tissue damage and comorbidities such as cardiovascular disease.

The costs of isolating and expanding cells for therapy are significant, but collaborations across academic research centres and commercial partners will solve some logistical challenges of clinical-grade manufacture. Such challenges include cell source, cell isolation and expansion techniques, culture media and reagents, potency markers and, where required, genetic manipulation techniques (figure 2). These need to be standardised to ensure reproducibility, because different cell manufacturing techniques will lead to subtle or even unidentified phenotypic differences in the final product. For example, it is unclear whether different types of tolDC, manufactured using distinct techniques, will have significantly different clinical effects. 150 Measurement of potency is therefore a critical step prior to the release and administration of any cellular therapy product. 151 At one point, the costs of cell manufacturing were envisaged to be a potential barrier to the development of immunomodulatory cell therapies. However, with the success of cellular therapeutics such as chimeric antigen receptor T cells for cancer, significant investment has been made in the relevant technologies. For example, closed bioreactors can enable the manufacture of large quantities of GMP-grade cells within a shorter period of time than labour-intensive open culturing in flasks and bags. 152
152 Such technologies are inherently adaptable, and therefore transferable to different types of cellular therapy, 153 helping to achieve cost-effectiveness and reducing batch-to-batch variability. Eventually, and assuming positive results, comparative effectiveness trials across cell types (MSCs, tolDC and Tregs) may be required to determine which products are best suited for different forms and stages of autoimmunity. For example, MSCs, because of their regenerative capacity, may be favoured in conditions such as Crohn's disease and MS where tissue regeneration would be advantageous. In contrast, Tregs may be preferred in diseases with documented evidence of Treg dysfunction, such as type 1 diabetes and SLE, because ex vivo expansion of Tregs can reverse Treg dysfunction. 154 The effects of different cell types are being investigated in transplantation in The ONE Study. 155 In this collaborative study, different immunosuppressive cell populations (tolerogenic macrophages, myeloid-derived suppressor cells, tolDC, monocytes conditioned by MSCs, IL-10-induced DCs and rapamycin-conditioned DCs) are manufactured from the same leukapheresis product, removing one element of variability when comparing these very different therapies. Cells are then studied in different disease contexts to determine the best approach to treatment. It may also prove possible to combine different cells to produce synergistic effects. As tolerance can break down many years before the onset of clinical disease, it is also important to consider the optimal timing of cellular therapies. Detection of preclinical autoimmunity may provide a window of opportunity to treat and cure these diseases with safe interventions before symptom onset and before tissue damage has accrued. Epitope spreading, with broadening of the autoimmune repertoire alongside the non-specific effects of tissue damage, might render therapeutic tolerance induction more difficult in established disease, despite phenomena such as infectious tolerance and linked suppression. 156 Appropriate immune monitoring will be even more important in preclinical disease, as a means to establish benefit in the absence of symptoms or signs. In-depth studies of allograft recipients who have achieved operational tolerance have identified biomarkers that appear specific for the tolerant state. These may be useful for monitoring attempts at tolerance induction prospectively. 157 Conclusion It is an exciting time for tolerogenic cellular therapies. Rapid advances can be expected in the short to medium term, catalysed by progress in manufacturing technologies, advances in the development of immune monitoring techniques and the identification of tolerance biomarkers, alongside an acceptance that earlier treatment may be ethically justified if the therapeutic target is tolerance induction. Whether any, or all, of the cells discussed in this review will ultimately demonstrate robust tolerogenic effects must await formal clinical trials of efficacy; and we should be as certain as we can be that the timing, route and dosing of therapy are optimal before conducting the 'definitive' studies. These are not easy challenges but they are tractable and, currently, there is a large amount of intellectual energy directed at solving them.
9,800
2018-11-02T00:00:00.000
[ "Biology", "Medicine" ]
Microcavities with suspended subwavelength structured mirrors We investigate the optical properties of microcavities with suspended subwavelength structured mirrors, such as high-contrast gratings or two-dimensional photonic crystal slabs, and focus in particular on the regime in which the microcavity free-spectral range is larger than the width of a Fano resonance of the highly reflecting structured mirror. In this unusual regime, the transmission spectrum of the microcavity essentially consists of a single mode, whose linewidth can be significantly narrower than both the Fano resonance linewidth and the linewidth of an equally short cavity without structured mirror. This generic interference effect---occurring in any Fabry-Perot resonator with a strongly wavelength-dependent mirror---can be exploited for realizing small-mode-volume and high-quality-factor microcavities and, if high-mechanical-quality suspended structured thin films are used, for optomechanics and optical sensing applications. Interference effects in structured thin films are widely exploited in photonics to tailor the properties of integrated optical elements. Of particular interest are thin films patterned with subwavelength structures, such as high-contrast gratings [1] or photonic crystals [2]. There, interference between modes propagating through the film and transverse guided modes results in the appearance of Fano resonances [3], which can bring about remarkable optical properties, such as broadband high reflectivity or transmittivity, or the appearance of high-quality-factor resonances. Such features can be exploited for realizing a wide range of integrated photonics components, e.g., optical filters [4], couplers [5], reflectors [6], lasers [7], detectors [8], sensors [9], etc. While such subwavelength structured films inherently display Fabry-Perot-type interferences [10], they are typically integrated with standard optical elements, e.g. in linear Fabry-Perot resonator configurations, in order to enhance their spectral selectivity, detection sensitivity, or the strength of the light-matter interaction [1][2][3]. In this Letter, we investigate microcavities with suspended subwavelength structured mirrors and focus in particular on the regime in which the microcavity free-spectral range is larger than the width of a Fano resonance of the highly reflecting structured mirror. In this unusual regime, the transmission spectrum of the microcavity essentially consists of a single mode, whose linewidth can be significantly narrower than either the Fano resonance linewidth or the linewidth of an equally short cavity without structured mirror. We show, for instance, that few-micron-long resonators with a Fano mirror having a moderately high-Q resonance of a thousand and a finesse of a few hundred at µm wavelengths can display GHz-wide transmission features. This generic interference effect, occurring in any Fabry-Perot resonator with a strongly wavelength-dependent mirror, could thus be exploited to enhance the spectral resolution of microcavities without increasing their mode volume [11]. Furthermore, these remarkably narrow linewidths are shown to be reasonably robust with respect to wavefront curvature and imperfect parallelism effects, so that such microcavities could be realized in practice in a plane-parallel geometry, without resorting to optical elements with short focusing abilities [12][13][14][15].
The combination of ultrashort cavities with low-mass, high-mechanical-quality suspended structured thin films [16][17][18][19][20] would be particularly interesting for optomechanics [19,[21][22][23] and sensing [15,24] applications. Furthermore, realizing such microcavities by integrating the structured mirror in a fiber-optic Fabry-Perot interferometer would be an interesting alternative to fiber sensors with in-fiber embedded Bragg gratings [25]. Idealized 1D model -We start by considering an idealized one-dimensional scattering model, in which the optical resonator consists of two parallel, absorption-free mirrors ( fig. 1): a highly reflecting mirror with amplitude transmission and reflection coefficients r and t, and a "Fano" mirror, whose reflection and transmission coefficients result from the interference between the direct transmission through the slab and a guided transverse mode [26,27], as expressed by eq. (1), where t_d and r_d are the normal-incidence transmission and reflection coefficients, and γ the width of the transverse guided resonance mode with frequency ω_0. This coupled-mode model is known to accurately reproduce the Fano resonances typically observed with subwavelength high-contrast grating (HCG) or photonic crystal structures [2,3], and can thus be used as a simple basis to discuss the physics of Fano mirror resonators. For convenience, we start by modelling the mirrors as infinitely thin 1D scatterers, characterized by their polarizabilities ζ = −ir/t and ζ_g(ω) = −i r_g(ω)/t_g(ω), respectively. We assume that the highly reflecting mirror polarizability ζ does not significantly vary over the frequency range of interest, while the frequency dependence of ζ_g(ω) is prescribed by eq. (1). Denoting by l the cavity length, the overall transmission of the optical resonator is then given by eq. (2). To simplify the discussion we assume that the cavity length l is chosen such that the guided mode resonance frequency ω_0 coincides with one of the bare optical cavity resonance frequencies, satisfying ω_0 = (c/2l)(2πp + arctan(1/ζ)), with p an integer and c the speed of light in vacuum. We also assume r_g and t_g real for simplicity, and take ζ_g(ω_0) ∼ ζ in order to mimic a symmetric cavity. Depending on the ratio of the free spectral range γ_FSR = c/(2l) and the width of the subwavelength structured mirror resonance γ, two regimes can be considered. Typically, "long" cavities and mirrors with relatively broad Fano resonances have γ_FSR ≪ γ and exhibit, within the bandwidth of the Fano resonance, Lorentzian cavity resonances with a linewidth given by the "bare" cavity linewidth κ = γ_FSR/(πζ²) (for ζ ≫ 1). By the "bare" cavity we refer here to a cavity having the same length, but for which both mirrors have a frequency-independent polarizability ζ. This regime is illustrated in Fig. 2a for a ∼ 200 µm-long cavity consisting of a highly reflecting mirror with |r|² = 0.99 and a Fano mirror with an optical resonance around 940 nm having a Q-factor of 20, resulting in a ratio γ = 5γ_FSR. The spectrum indeed exhibits cavity modes with linewidth κ around the Fano mirror resonance, while modes with larger linewidth and lower peak transmission are observed away from ω_0 due to the decrease in reflectivity of the Fano mirror. However, for a short enough cavity, such that γ_FSR ≫ γ, the cavity spectrum consists of one mode only, having a resonance frequency close to ω_0. Remarkably, the linewidth of this mode can be much narrower than both the bare cavity linewidth κ and the Fano mirror resonance width γ.
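The displayed equations (1) and (2) did not survive text extraction. A plausible reconstruction, assuming the standard coupled-mode form for a lossless Fano mirror with real r_d and t_d, and the usual Fabry-Perot summation for two thin scatterers (an assumption based on the quoted definitions, up to sign conventions, not the verbatim originals), reads:

r_g(ω) = [r_d (ω − ω_0) + t_d γ] / (ω − ω_0 + iγ),   t_g(ω) = [t_d (ω − ω_0) − r_d γ] / (ω − ω_0 + iγ),   (1)

T(ω) = | t t_g(ω) e^{iωl/c} / (1 − r r_g(ω) e^{2iωl/c}) |².   (2)

One can check that |r_g|² + |t_g|² = 1 when r_d² + t_d² = 1, and that the transmission zero (unity reflectivity) of the Fano mirror then lies a detuning r_d γ/t_d away from ω_0.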
Assuming ζ_g(ω_0) = ζ ≫ 1, the transmission around resonance can be shown to be approximately given by eq. (3), where δ = (ω − ω_0)/γ and F ∼ ζ⁴ is the coefficient of finesse of the bare cavity. The resulting Fano resonance profile displays a linewidth γ/√F, which is narrower than that of the Fano mirror by a factor √F. This effective narrowing in the presence of the highly reflecting cavity mirror can be understood by realizing that, even though the Fabry-Perot resonator only possesses one resonant mode, constructive interference occurs only for photons whose frequency detuning from the Fano resonance frequency is less than γ after ∼ √F roundtrips. Figure 2b shows the transmission of an 8.5 µm-long cavity with a Fano mirror with Q ∼ 10³, such that γ = γ_FSR/100. The resulting transmission linewidth is ∼ 10 pm, substantially narrower than the bare cavity linewidth of 0.26 nm and the Fano mirror width of 1.04 nm, and its profile is seen to be accurately reproduced by eq. (3). Full 1D model -In order to confirm the findings of the infinitely thin scatterer model we numerically simulate the transmission spectrum of a cavity similar to that of Fig. 2b, consisting of an HCG and a multilayer Bragg mirror as in Fig. 1. We base ourselves on the subwavelength grating structures patterned on suspended silicon nitride films [16,22], but note that similar results could readily be obtained with photonic crystal structures [15,[17][18][19][20]28]. We consider a 200 nm-thick silicon nitride slab (refractive index n = 2.0), in which a high-contrast grating with a period of 779 nm and 705 nm-wide, 50 nm-deep rectangular grating fingers is etched, in combination with a Bragg mirror consisting of 18 alternate layers of SiO2 (n = 1.455) and Ta2O5 (n = 2.041) with π/2 optical phase thickness, and simulate the field propagation using the finite element modeling software COMSOL with periodic Floquet boundary conditions. In this way, we realistically simulate a Fano mirror and a highly reflecting mirror with parameters close to those of Fig. 2b. For a cavity length of 8.59 µm, a ∼ 6 pm-wide (∼ 2 GHz) transmission line is obtained (Fig. 2c), thus supporting the previous analysis. Let us also point out that the numbers chosen in this example for the reflectivity level (99%) and optical quality factor (10³) of the Fano mirrors are quite realistic experimentally [16,19]. Substantially narrower linewidths could in principle be obtained, were one to use even more reflecting mirrors [20,22] and/or ultrahigh-Q Fano resonances [4,9].
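Equation (3) is likewise missing from this extraction; a Lorentzian form consistent with the quoted γ/√F linewidth would be, up to normalization,

T(δ) ≈ 1 / (1 + F δ²),   δ = (ω − ω_0)/γ,   (3)

which has a half-width at half-maximum of δ = 1/√F, i.e., a width γ/√F in frequency. This is an inference from the stated scaling, not the verbatim equation.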
Transverse effects -We now address effects going beyond the 1D scenario and start by investigating the effect of the incoming field transverse wavefront on the microcavity linewidth. For perfectly parallel plane mirrors, taking the wavefront curvature into account sets a fundamental limit to the achievable finesse and linewidth of the resonator. Such wavefront effects are particularly relevant for Fano mirrors whose structured area is relatively small, thus constraining the incoming beam size to avoid diffraction losses and, thereby, increasing the beam divergence inside the microcavity. For the sake of concreteness we assume an incoming Gaussian TEM_00 mode having its waist w_0 at the Fano mirror. We take w_0 to be much smaller than the structured area in order to neglect trivial diffraction effects. Under these assumptions the outgoing field amplitude after the highly reflecting mirror is given by an infinite sum of reflected components, E(r, l) = Σ_n t t_g (r r_g)^n e^{ikz_n} (w_0/w(z_n)) exp[−r²/w(z_n)² − ikr²/(2R(z_n)) + iψ(z_n)], where k = 2π/λ, z_n = (1 + 2n)l, R(z) = z[1 + (z_R/z)²], z_R = πw_0²/λ, w(z) = w_0√(1 + (z/z_R)²) and ψ(z) = arctan(z/z_R) are the standard Gaussian beam parameters. The relative transverse dephasing and reduction in reflectivity, which increase with each roundtrip, will obviously lead to a broadening of the spectrum and reduced cavity transmission, as compared to those observed in the idealized 1D plane-wave model. The normalized cavity transmission T = ∫_0^∞ |E(r, l)|² 2πr dr is plotted in Fig. 3 for the same cavities as in Fig. 2 and for two different waist sizes of 20 and 50 µm. Figure 3a shows that the transmission of the long cavity is substantially broadened and reduced, even for the larger waist of 50 µm, which corresponds to a Rayleigh range of 8 mm. In contrast, the linewidth of the short cavity is only slightly affected by wavefront curvature effects (Fig. 3b). This highlights another practical benefit of ultrashort Fano microcavities, for which ultranarrow transmission lines can be obtained even without resorting to Fano mirrors with focusing abilities [12][13][14][15]. Another transverse effect worth investigating is the sensitivity of the microcavity linewidth with respect to imperfect parallelism between the mirrors. We still assume that the previous Gaussian mode impinges at normal incidence (z-direction) on the Fano mirror, but now the highly reflecting mirror makes an angle ϵ with the Fano mirror plane in the, say, x-direction. Following the approach of Ref. [29], geometrical considerations for the reflected field amplitudes lead to an outgoing field amplitude after the highly reflecting mirror given by E(x, y, l) = Σ_n t t_g (r r_g)^n E_n(x − x_n, y, l + z_n), where E_n(x, y, z) = E_in(x cos(2nϵ), y, z + x sin(2nϵ)) √cos(2nϵ) is the field amplitude having experienced a wavefront tilt by 2nϵ, x_n = (l/tan ϵ)(1/cos(2nϵ) − 1) is the transverse walk-off of the n-th outgoing beam, and z_n = l tan(2nϵ)/tan ϵ is the distance travelled by the n-th outgoing beam with reference to the direct transmission beam (n = 0). E_in is the incoming Gaussian mode function at the Fano mirror. Figure 4 shows the resonance linewidth and resonant transmission levels of the previous 8.5 µm-long cavity with a Q = 900 Fano mirror resonance, for an incoming beam waist of 20 µm and for tilt angles which are achievable by, e.g., assembly of commercial silicon nitride membranes [30]. The linewidth broadening and resonant transmission reduction are consistent with the observation that, for small tilt angles, tilt-induced beam walkoff corrections scale as F(ϵ/θ)², where θ = λ/(πw_0) is the Gaussian beam divergence [30]. For a given degree of parallelism and cavity finesse, there generally exists an optimal waist size which minimizes the linewidth, as too large a beam increases the dephasing due to the tilt-induced beam walkoff, but too small a beam results in stronger wavefront effects. Let us finally note that these effects, in particular for high-finesse cavities, would be mitigated by the use of Fano mirrors with focusing abilities [12][13][14][15].
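A minimal numerical sketch of the wavefront-curvature sum above, in Python; all parameter values, the truncation of the roundtrip sum, and the frozen (frequency-independent) values of r_g and t_g are illustrative assumptions rather than the paper's settings:

import numpy as np

# Illustrative parameters (assumptions, not the paper's exact values)
lam = 940e-9                            # wavelength [m]
l = 8.5e-6                              # cavity length [m]
w0 = 20e-6                              # beam waist at the Fano mirror [m]
r, t = np.sqrt(0.99), np.sqrt(0.01)     # highly reflecting mirror (lossless)
rg, tg = np.sqrt(0.99), np.sqrt(0.01)   # Fano mirror, frozen at one probe frequency

k = 2*np.pi/lam
zR = np.pi*w0**2/lam

def gauss(rad, z):
    # Standard TEM00 parameters: spot size w(z), curvature R(z), Gouy phase psi(z)
    w = w0*np.sqrt(1 + (z/zR)**2)
    R = z*(1 + (zR/z)**2) if z != 0 else np.inf
    psi = np.arctan(z/zR)
    return (w0/w)*np.exp(-rad**2/w**2 - 1j*k*rad**2/(2*R) + 1j*psi)

rad = np.linspace(0.0, 200e-6, 4000)
E = np.zeros_like(rad, dtype=complex)
for n in range(2000):                   # truncated roundtrip sum; (r*rg)**n -> 0
    zn = (1 + 2*n)*l
    E += t*tg*(r*rg)**n*np.exp(1j*k*zn)*gauss(rad, zn)

# Normalized transmission: transmitted power over incident power
P_out = np.trapz(np.abs(E)**2*2*np.pi*rad, rad)
P_in = np.trapz(np.abs(gauss(rad, 0.0))**2*2*np.pi*rad, rad)
print("T =", P_out/P_in)

Scanning the probe frequency, with r_g and t_g made frequency dependent through eq. (1), would reproduce the broadened lineshapes of Fig. 3.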
To conclude, we investigated the generic transmission properties of a linear Fabry-Perot resonator incorporating a Fano mirror with a strongly wavelength-dependent reflectivity, such as can be realized with high-contrast gratings or two-dimensional photonic crystals. We showed in particular that enhanced spectral resolution can be achieved with ultrashort microcavities using suspended structured membranes in a simple parallel-plane geometry and for realistic parameters. Cavities with vibrating Fano mirrors would be particularly interesting as integrated devices for photonics and optomechanical sensors, and, if further combined with electrical actuation [31], for nano-electro-optomechanics [32]. Funding Information -The Velux Foundations.
3,218.6
2018-04-03T00:00:00.000
[ "Physics" ]
METHODOLOGY FOR ASSESSING THE RISK ASSOCIATED WITH INFORMATION AND KNOWLEDGE LOSS MANAGEMENT In practice, there is a massive time lag between data loss and the identification of its cause. The existing techniques perform this task comprehensively, but they consume too much time, so there is a need for fast and reliable methods. The article's purpose is to develop a rapid methodology to assess the risk of information and knowledge loss management. It comprises eight steps and combines a risk-mapping method, modified by assessments based on risk factors and incidents as elements from set theory, with formalization via binary estimates. The methodology includes five significant incident types, caused by company staff, technical problems, software, cybercriminals and viral attacks, and 66 factors influencing company incidents. As a result, a risk map of 9 groups was built for a Ukrainian enterprise. Only two groups, with the minimum number of incidents and low losses, are represented by all five incidents. The defined overall level of each risk group ranges from 0.14 to 0.26, which indicates a low probability of all incidents in a group occurring. In general, the resulting map shows the existence of specific security problems in the company under investigation. The proposed assessment allows us to interpret the level of risk in the company quickly, identify weaknesses in the information security system, and predict future losses. Introduction Today, when information flows and scientific-technological progress are increasing, a company is interested in providing its information security at the highest level. The main reason for this is the risk of information and knowledge loss. Access to information opens the way to the company's financial flows, its documentation, contracts, employees, technologies, products, personal data, etc. Today, companies depend entirely on information and knowledge, so accidental or deliberate loss of any information can have negative consequences for entrepreneurs. These relate not only to the cost of recovering information but also to the financial losses that result from substantial information loss. According to research conducted by the Ponemon Institute and commissioned by IBM Security, the average financial loss from hacking and leakage of information in June 2019 for medium-sized businesses in the world was about $ 3.92 million (Ponemon Institute, 2019). This sum increased by 1.55% from 2018 ($ 3.86 million), by 8.29% from 2017 ($ 3.62 million), and by 12% ($ 3.50 million) over the last five years (Ponemon Institute, 2018, 2014). The leader in this area is the United States, whose companies lost an average of $ 8.19 million in 2019. One can also point out companies in the Middle East ($ 5.97 million), Germany ($ 4.78 million), Canada ($ 4.44 million), and France ($ 4.33 million). Enterprises in India and Brazil received the lowest average losses of $ 1.83 million and $ 1.33 million, respectively (Ponemon Institute, 2019). Analysing industry losses, companies in the health ($ 6.45 million), financial ($ 5.86 million), energy ($ 5.60 million), industrial ($ 5.20 million) and pharma ($ 5.20 million) sectors suffered the most considerable average losses (Ponemon Institute, 2019). According to the Breach Level Index, more than 18 million records are lost every day, which means 214 records every second. In the first half of 2018, the number reached 3,353,172,708 records (Gemalto, 2018).
This means that the situation in the whole world is unfavourable, since financial losses resulting from leaks, break-ins, theft, and other types of information loss tend to increase. Loss of information and knowledge can lead to a loss of the company's reputation and customer trust, since the information may become publicly available. Data of millions of customers of Microsoft have become available on the Internet. The reason was the incorrect setup of an Elasticsearch database. Two hundred fifty million records were publicly available from 05/12/2019 to 31/12/2019 (Riley, 2020). A similar situation occurred in February 2020 at Decathlon, whose customer information was also available online. The reason was the poor security of an Elasticsearch server (Targett, 2020). Also in February 2020, it was reported that hackers stole the data of more than 10.6 million customers of MGM Resorts in 2019 during a hacking attack (Cimpanu, 2020). In 2019, many companies faced the problem of information loss, which concerned not only the personal data of individuals but also banking information such as credit and debit cards. Such famous firms as Mastercard, Wyze, Honda, Toyota, Lexus, Yves Rocher, the financial holding company Capital One, and several Iranian banks have suffered losses. Companies do not only lose customers in such cases; they often have to pay fines. Thus, for the leakage of the data of 9.4 million customers, Cathay Pacific had to pay a fine of about $ 642,000 (Lee, 2020). Unfortunately, there are many cases where companies are obliged to pay fines when they lose their information. Since the problem of information and knowledge loss management is highly relevant, this study addresses the assessment of the risk of information and knowledge loss for companies, because identifying risks enables a company not only to predict probable losses but also to identify security issues. In practice, such techniques as COBRA, RA Software Tool, CRAMM, MethodWare, etc. are used for risk assessment. Their advantages include a comprehensive approach to risk identification, which involves the collection of large amounts of data, specific calculation methods, and adherence to security standards. However, the use of these techniques takes considerable time: companies need an average of 206 days to identify a breach and 73 days to recover (Ponemon Institute, 2019). That is why this study focuses on the development of a rapid methodology that will quickly assess the risk of loss of information and knowledge. Its practical application will reduce time and labour costs. This paper is structured as follows. Section "Literature review" shows different approaches of international scientists to studying the problem of the risk associated with information and knowledge loss management. Section "Risk incidents and risk factors" lists the specific incidents and influencing factors used to assess risk and explains their concepts. Section "Research methodology" presents the developed methodology for the risk assessment, which includes eight stages. Section "Results" demonstrates the results of applying the methodology to one Ukrainian company. Section "Conclusions" contains brief conclusions, limitations, recommendations for implementing a set of measures to reduce the risk of information loss, and directions for further research. Literature review The problems associated with the study of information and knowledge loss risks are quite common in the world.
The main reason is the growing trend in the level of informatization and computerization of society. Scientists are exploring various aspects of information loss in different areas: banks (Aryani & Hussainey, 2017; Limba et al., 2019), entrepreneurship (Vasa et al., 2014; Brahmana & Tan, 2018), the stock market (Leonov et al., 2012), agriculture (Podaras, 2017), and national and global economics (Bilan et al., 2019c; Leonov et al., 2017; Kendiukhov & Tvaronavičienė, 2017). Separately, we can highlight the methodology proposed for systemic risk identification in the banking system of Ukraine (Vasylyeva et al., 2014; Vasa & Angeloska, 2020), which allows the risk of information loss in the process of bank consolidation to be reduced. Also interesting is the risk assessment proposed by Boyko and Roienko (2014) for insurance companies used in suspicious transactions, which changes the approach to maintaining knowledge in the insurance industry. One of the main reasons for information loss is fraud, which is carried out by employees, company management, or external criminals. Morsher et al. (2017) identified that the cause is the unlimited availability of information, especially financial information. To resist this phenomenon, Lyulyov and Shvindina (2017) proposed the Pentagon theory, which can be one of the methods of reducing information leakage from the company. Kostyuchenko et al. (2018) and Leonov et al. (2019) also proposed using monitoring systems to fight fraud that affects information and knowledge loss management. Kollár et al. (2017) developed a transformation model as one of the possible tools to increase the level of information security and reduce the risk of information loss. Another reason for information loss is unintentional errors made by employees due to insufficient experience or a lack of required professional knowledge (Gupta, 2017). Therefore, some researchers emphasize the importance of developing innovative approaches to creating and using training systems in companies to solve this problem (Kolomiiets & Petrushenko, 2017). Many studies note that the problems associated with collecting, processing and storing information securely increase with the growth of scientific and technological progress and the informatization of society and enterprises. These aspects are addressed in research by several groups of authors (Bilan et al., 2019a, 2019b; Hrytsenko et al., 2019; Karaoulanis, 2018). Levchenko et al. (2019) and Lyeonov et al. (2019) raised the issue of information security in banks in the context of anti-money-laundering protection. Along with this, the impact of big data on company informatization and corporate social responsibility is investigated by Hammerström et al. (2019). Creating corporate databases has an important impact on the state of information and knowledge in the company; therefore, effective tools are needed to reduce the risk of their loss under Big Data conditions. To ensure this aspect, Vasyl'eva et al. (2017) proposed using the theory of innovation diffusion, which makes it possible to reduce the risk of data loss during data integration. Another approach is to increase the effectiveness of management methods that affect risks in the activities of companies, including information risks (Grenčíková et al., 2019). Nasr et al. (2019) suggested creating an integrated risk management framework for firms.
Specialists use various techniques and methods to assess risks. Kuzmenko and Bozhenko (2014) considered optimization models of bank risks for a quantitative assessment of market risks. Berzin et al. (2018) used an approach to assessing the risks of business activity based on creating a quadrangle of factors and determining its centre of mass, which allows the likelihood of stability in the level of business activity to be predicted. Dmytrov and Medvid (2017) developed an approach to quantifying indexed information for risk assessment that fits the needs of the National Risk Assessment of Money Laundering and Terrorist Financing Risks. Lazaroiu et al. (2018) proposed measures to maintain data confidentiality in order to comply with the General Data Protection Regulation. In risk research, popular approaches include statistical methods of risk assessment (Hudakova et al., 2018), panel cointegration and causality analysis (Bilan et al., 2020), system dynamics (Jin, 2019), probabilistic methods (Polak, 2019), and econometric methods (Bilan et al., 2019d; Mura et al., 2018). Hudáková and Dvorský (2018) proposed assessing risks depending on the rate of implementation of the risk management process in SMEs. Other researchers suggest using the neural network apparatus (Subeh & Yarovenko, 2017), sectoral analysis (Nocoń & Pyka, 2019), or bifurcation theory (Vasilyeva et al., 2019). The issue of information security and the risk of information loss is widely discussed at international conferences. For example, issues of critical (information) infrastructure protection, addressing significant problems of resilience and societal safety, were presented at the 15th International Conference on Critical Information Infrastructures Security on 2-3 September 2020 in Bristol, UK (University of Bristol, 2020). Scientists discussed the most crucial directions in data security, cyber-espionage, cyber-terrorism, opportunities for risk mitigation, and the use of cloud computing and machine learning to improve resilient data protection at the 19th Annual AusCERT Cyber Security Conference on 15-18 September 2020 in Australia (AusCERT, 2020). Specialists in the field of computer and information security debated security guarantees against arbitrary attacks, biometric backdoors, deep learning, neural networks, genetic testing for detecting breaches in information security, and software development for malware detection at the 5th IEEE European Symposium on Security and Privacy on 7-11 September 2020 (IEEE, 2020). The analysis of the achievements described in these studies shows different areas that need to solve the problem of assessing the risk associated with information and knowledge loss management. There are no universal approaches, but the assessment process must be fast and efficient. Thus, this article focuses on developing a rapid risk assessment methodology. Risk incidents and risk factors The risk of information and knowledge loss is a possible danger, a threat to the company, which leads to the loss of its most valuable resource: information and knowledge. This risk depends on certain conditions (incidents), which can be caused by the actions of company staff, technical problems, software, illegal actions of cybercriminals, or virus attacks. On the other hand, the occurrence of such an incident may be due to various factors. When an employee unknowingly did not save the results of his or her work, the information was lost.
As a result, additional time and resources were necessary for recovery, i.e., the company lost not only information but also money, and such monetary losses are how information loss in a company is usually measured. Based on this example, a specific employee's action is a factor that affected the loss of information, i.e., generated risk. Since the initiator was a person, this factor refers to an incident caused by human actions. To determine the level of information and knowledge loss risk, we identify five incidents (causes) and 66 influencing factors that cause these incidents in the company. 1. "Human Error Incident" (HE), caused by the misconduct of staff. Thus, users' errors and their careless use of computers and software can lead to information loss. According to statistics, the human factor causes about 32% of losses (Karabuto, 2020). The following twelve factors influencing the occurrence of this incident were selected: Intentional deletion of data files or sections of text; Unintentional deletion of data files or parts of the text; Intentional non-saving of information; Unintentional non-saving of information; Overwriting important files; Accidental formatting of the hard drive; Liquid spills; Intentionally making a mistake; Unintentionally making a mistake; Using other users' usernames and passwords; Theft of information by employees; Violation of the rules and procedures for working with data. 2. "Viruses and Malware Incident" (VM), related to virus attacks and antivirus programs. Companies often face a situation where, due to the malfunction of an antivirus program or the appearance of a new virus, a virus enters the system, which leads to information loss and blocks the work of the entire company. About 7% of information loss is due to a VM incident (Karabuto, 2020). To assess the risk, ten factors causing this incident were used: Lack of antivirus updates; Lack of scanning by antivirus; Loss of information due to a virus; Corruption virus; Intentional activation of a virus email by a user; Unintentional activation of a virus email by a user; Intentional disabling of antivirus software; Unintentional disabling of antivirus software; False signal from antivirus; Removal of important information by antivirus. 3. "Technical risk" (TR), resulting from a technical or mechanical malfunction. Equipment failure, mechanical damage, wear of the media, and improper use cause 44% of information and knowledge loss (Karabuto, 2020). This incident includes twelve factors that lead to the malfunctioning of the equipment or its physical destruction, such as: Hard disk mechanical failure; Computer damage due to overheating; Computer damage due to dust accumulation in the computer; Intentional dropping or jostling of a computer; Unintentional dropping or jostling of a computer; Tornadoes, earthquakes, and other natural disasters; Fire; Planned power outage; Unplanned power outage; Intentionally turning off the computer without saving information; Unintentionally turning off the computer without saving information; Conflict between devices. 4. "Criminal risk" (CR), caused by the illegal actions of cybercriminals against a company to steal data or knowledge. Today, this incident occurs in 4% of cases (Karabuto, 2020), but it is difficult to predict because it results from actions that are external to the company. As a rule, criminals are interested in information about the company's financial flows, new technologies and developments.
Theft or spoofing of this information causes incomparably significant losses and possibly the bankruptcy of the company. Often, competitors use this type of crime to harm other companies. Nineteen potential factors causing CR were selected: Logging in with someone else's login; Computer theft; Computer loss; Copying information to removable media; Sending information to an external email address; Information theft; Information substitution; Unauthorized use of administrator rights; Social engineering; DoS attack; Smurf attack; UDP Storm; UDP Bomb; Sniffing; IP Hijack; Dummy DNS Server; IP-Spoofing; Information loss due to encryption/decryption; Hacking of encryption keys. 5. "Software Corruption" (SC), an incident involving the incorrect operation of software in the company. Software accounts for 14% of information and knowledge loss (Karabuto, 2020). It is the result of improper settings of operating and application programs, non-use of job descriptions, violation of license terms, inadequate testing, and errors in the program code. The following thirteen factors influencing the formation of SC were defined: Unexpected or improper software shutdowns; Lack of software updates; Reformatting during system updates; Errors in Windows registers; The program is not responding; Inaccurate removal or installation of the software; Errors in drivers; Calculation errors; Logical errors; Data I/O errors; Data manipulation errors; Compatibility errors; Pairing errors. Each company can expand its own list of factors, but this study highlights the most typical ones. Each risk factor is characterized by the number of cases over some time period and the amount of monetary losses the company has incurred in recovering information and in lost profits. This article provides calculations using the given methodology and information on the number of cases and the amounts of loss per month for the selected factors from one Ukrainian company (the company is not named for reasons of trade secrecy). Research methodology To assess the risk of information and knowledge loss management, it is advisable to use rapid techniques that quickly identify its level. One such method is to build a risk map, which is common in practice because it visually presents the various dangers faced by economic agents. It is built on a plane, one axis of which is the probability of an event occurring, and the other is the amount that the company may lose when the event occurs. Usually, this area is divided into sectors. The company sets the number of sectors depending on the level of risk detail it wants to obtain. Then the subject area is analysed to determine in which sector an event with a given probability and a given level of loss falls. The disadvantage of this risk map is that, in forming it, managers often rely on subjective judgments supported only by their own experience. This mainly concerns the probability, which in practice is often determined by personal judgment. Such an approach is only appropriate for quick decision-making. In this study, we use the risk-map approach but modify the construction process by mathematically determining risk estimates based on factors and incidents as elements of set theory, and by using formalization through binary estimates. Binary estimates were previously used only to identify the operational risk of banks, which formed the basis of the methodology developed for the National Bank of Ukraine (Dmitrov et al., 2010).
Let several incidents, numbered from 1 to k, determine the risk of loss of information and knowledge (this paper deals with five incidents, HE, VM, TR, CR and SC, whose designations are the capital letters of their names according to Chapter 2; i.e., k = 5). The formation of every incident is influenced by several factors (n factors; 66 factors are identified in Chapter 2 of this paper), which we consider in terms of the number of cases that occur in a company and cause information loss, and the amount of financial loss related to the information and knowledge loss. The sets of incidents M_i, i = 1÷k = {g_p, p = 1÷k} (where 1÷k means that the index runs from 1 to k), caused by every factor j for the p-th group of the risk map, can intersect, forming the set M_i, i=1÷k ∩ M_j, j=1÷n, i≠j = {g_pi, p=1÷k = g_pj, p=1÷n} (where 1÷n means that the index runs from 1 to n). Besides, each of the factors causes the formation of only one incident. On this basis, the methodology for assessing the risk of information loss consists of the following steps. Stage 1. It is necessary to build a table of the number of cases that form the five identified incidents (Table 1) and a table of losses related to the realization of these cases in the company's activities (Table 2). Note: n is the maximum number of factors; a_ji is the number of cases of factor j that affect the formation of incident i. Then it is necessary to divide the factors into groups following the principle of risk map construction, i.e., to select them based on the number of cases and the amount of losses. We therefore propose the selection logic of formula (1), where a_pji is the selected value of factor j for incident i corresponding to group p of the risk map (p = 1÷t; t = 4 ∨ t = 9 ∨ t = 25 ∨ …), m is the threshold for the number of cases of factors, which the company establishes independently based on case statistics for previous periods, and h is the limit for the amount of loss, which the company sets itself based on its policies. This can be a sum equal to a percentage of the company's profit or a share of its cash flow that is not significant to the company. Stage 2. Suppose that risk is the probability of an event occurring under unfavourable circumstances. We need to formalize the significance of the factors, i.e., to recalculate the number of cases for the p-group factors using binary properties. If there is a case that causes information loss and, consequently, financial loss, it is a negative phenomenon for the company to deal with, regardless of the number of such cases. Therefore, regardless of the number of cases, the factor will be set to "1", meaning that a case of information and knowledge loss has been realized. If the value is "0", the company has no cases of information loss due to the given factor. We use formula (2) for this formalization. To determine the total number of cases for each incident, considering the data sampled for each group of the risk map, the sum of the binary indicators for the i-th incident is calculated for each group p according to Eq. (3). Note: s_ji is the amount of loss by factor j that affects the formation of incident i. The value of A_pi shows the effect of the factors' impact on a risk incident.
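Since the displayed formulas (1)-(3) did not survive extraction, the following Python sketch encodes only what the prose states: factors are routed to the nine map groups by thresholds on case counts (m) and on losses (h), then reduced to binary indicators and summed per incident. The function names are hypothetical helpers, and the threshold values are placeholders borrowed from the group-6 ranges quoted in the Results (10-100 cases, $ 20,000-70,000), not the actual Table 4 criteria.

import numpy as np

case_edges = (10, 100)           # m: boundaries on the number of cases (placeholder)
loss_edges = (20_000, 70_000)    # h: boundaries on the loss amount, $ (placeholder)

def map_group(cases, loss):
    """Formula (1) as described: route a factor with the given case count and
    loss amount to one of t = 9 risk-map groups."""
    row = sum(cases > e for e in case_edges)   # 0, 1 or 2
    col = sum(loss > e for e in loss_edges)    # 0, 1 or 2
    return 3*row + col + 1                     # group index 1..9

def binary(a):
    """Formula (2): indicator equal to 1 if at least one case occurred."""
    return (np.asarray(a) > 0).astype(int)

def A(a):
    """Eq. (3): sums A_pi of the binary indicators over the factors (rows)
    for each incident (column) within one map group."""
    return binary(a).sum(axis=0)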
If A_pi = 0, there are no cases of factors affecting the i-th risk incident. If A_pi = 1, there is one case of factor impact, which may be accidental; but if A_pi > 1, it can be argued that the company has problems in its security system that have an additional impact on the risk incident. Therefore, two components should be identified to define the level of risk. The first component reflects the basic set of risk incident values, which takes into account only whether or not a factor has influenced the incident, regardless of the number of cases of such impact. The second component reflects the additional impact on the risk incident, which considers the fact that the number of cases for each risk incident may be greater than "1" and also takes into account the impact of losses on the risk incident. The value of the first component is calculated by formula (4), where Z_pi is the basic set of values of risk incidents given by formula (5). The value of the second component is determined in the third stage. Stage 3. The calculated characteristics A_pi reflect the total number of negative cases for each incident. Still, one should take into account that these cases can lead to the loss of different amounts of information and therefore cause various losses to the company. For example, one case involving a "DoS attack" could result in a loss of $1,000,000, while several incidents involving "Liquid spills" could result in a loss of $10,000. Thus, it is necessary to consider the impact of factors not only in terms of the number of cases of their realization but also in terms of the impact of the loss amount on risk incidents as a set. Therefore, it is necessary to adjust the binary values of a_pji using Eq. (6), where a*_pji is the adjusted value of a_pji, and r_pi are weighting coefficients obtained by calculating the loss sums Σ_{j=1}^{n} s_pji and then ranking them from 1 to i: we take the sum of losses for each incident and assign ranks so that the highest sum receives rank "1" and the smallest sum receives rank "i". These adjustments allow us to identify the second component of the risk assessment, which reflects the additional impact on the incident. It is determined as follows. Stage 4. Considering the results of the second and third stages, we determine the number of occurrences of factors that affect the incident, taking into account the basic set of risk incident values and the additional impact on the incident, by the corresponding formula, where B_p is the number of factors' occurrences that affect the incident, taking into account the basic set of risk incident values and the additional impact on the incident, and [ ] denotes the integer part of a number.
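A sketch of the stage 3 loss-rank adjustment and the stage 4 aggregation; because formulas (4)-(8) are not reproduced above, the exact way the ranks weight the indicators and enter B_p below is an assumption consistent with the prose, not the paper's verbatim rule:

import numpy as np

def loss_ranks(s):
    """Rank incidents by their loss sums: largest sum -> rank 1, smallest -> rank k."""
    totals = np.asarray(s).sum(axis=0)             # per-incident loss sums
    ranks = np.empty(len(totals), dtype=int)
    ranks[np.argsort(-totals)] = np.arange(1, len(totals) + 1)
    return ranks

def B(a, s):
    """Stage 4 aggregate B_p: integer part of the rank-weighted indicator sum.
    Dividing the binary indicators by the loss rank is an assumed form of the
    Eq. (6) adjustment."""
    indicators = (np.asarray(a) > 0).astype(int)   # formula (2) again
    return int((indicators / loss_ranks(s)).sum())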
Stage 5. To identify the risk level, it is also necessary to consider the situation in which the company experiences all possible instances of factor impact on risk incidents, i.e., the security service has identified at least one fact of such factor impact for each event. For this purpose, a matrix is constructed (Table 3) whose elements take values equal to "1", meaning that the impact of the i-th factor generates every j-th (j = 1÷k) risk incident. Using this approach, we specify the number of all possible occurrences of factors that affect the incident, considering the additional impact on the incident depending on the loss amount, where B*_pi is the number of all possible occurrences of the factors affecting the incident, taking into account the additional impact on the incident depending on the loss amount; Z_pi is the basic set of values of risk incidents calculated by formula (5); r_pj is the rank of the j-th factor affecting the i-th risk incident, selected depending on the p-th group of the risk map; and [ ] denotes the integer part of a number. Stage 6. At this stage, we calculate the level of risk by formula (9), based on the ratio between the number of cases of factors affecting the incident, considering the basic set of risk incident values and the additional impact on the incident, and the number of all possible cases of factors affecting the incident, taking into account the additional effects on the incident depending on the extent of losses, where R_pi is an assessment of the risk level for each incident, whose value ranges from "0" to "1". A value closer to "1" indicates an increased risk of incident i, meaning the information loss will be significant for the company. If the risk value approaches "0", the incident generates a low level of risk, i.e., the information loss will be negligible or acceptable to the company. Stage 7. At the second-to-last stage, we identify the overall risk level by formula (10) for each of the p-groups that correspond to the information distribution on the risk map, where R_p is the overall level of risk for the p-th group. Stage 8. Finally, the company's risk-loss map is built, displaying the level of risk for each incident depending on the affiliation of the factors to one of the map sectors.
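Stages 5-7 then compare the realised aggregate with its all-ones counterpart: B*_pi can be obtained by applying the B aggregation from the previous sketch to the all-ones matrix of Table 3, i.e., B(np.ones_like(a), s). Since formulas (9) and (10) are not reproduced above, the ratio and the plain averaging below are assumptions consistent with the prose (and with the reported group levels of 0.14-0.26):

def R_incident(B_p, B_star_pi):
    """Formula (9): risk level in [0, 1] as the ratio of realised to maximal
    possible factor occurrences."""
    return B_p / B_star_pi

def R_group(r_values):
    """Formula (10): overall level of a map group; averaging over the incidents
    present in the group is an assumption."""
    return sum(r_values) / len(r_values)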
Results Using the information on the selected 66 factors relating to the cases that are the leading causes of information loss in the company, the data were divided into nine groups for the future risk map. This number was chosen because basic company risk maps have nine sectors. Another reason is that increasing the number of groups requires a larger data sample. Given that the primary information we possess is the number of cases and the amount of loss for each factor, we classified the data into nine groups (Figure 1). Figure 1 shows the classification of factors according to the number of cases and the loss amount. The graph's colour changes with the risk level of each group: a light tone corresponds to a low level, a dark tone to high risk. Groups 1, 2, and 4 form a safe risk zone, in which cases of information loss are rare and the amount of loss is negligible. Groups 3, 5, and 7 form a tolerable risk zone: either such cases occur very often but the amount of loss is small, or the cases are quite rare but generate significant information loss for the company. Groups 6, 8, and 9 form a risk zone, because the losses are significant for the company and such cases occur quite often. The groups are defined according to criteria whose values were selected on the basis of the analysis carried out during the preparation of the study (Table 4). Companies can set their own values for the number of cases and the loss amount used to determine the risk of information and knowledge loss. Figure 1 demonstrates that factors from groups 1, 4, and 6 are the most common; other groups contain single factors. In conclusion, it is necessary to define the level of information and knowledge loss risk. The steps of the proposed methodology help to calculate the risk level for each incident and each group. The risk map in Figure 2 presents the results. Figure 2. The risk map (source: compiled by the author). Note: the X-axis is the loss amount in dollars; the Y-axis is the number of cases. At first, we consider "Criminal Risk", presented in groups 1, 2, 4, and 7 (Figure 2). This means that there are few cases of factors that form this type of risk, since they are related to external interference with the company's information system. Still, the risk of their occurrence is moderate, equal to 0.5 and 0.67. This suggests that the company is likely to have some problems with its cyber defence system, which allows situations where the company loses information and knowledge through external sources, such as scammers, hackers, etc. The inclusion of this category in Group 7 also indicates that, with a certain probability, the company may lose significant amounts of money, which can eventually lead to enormous losses. Therefore, it is worth paying attention to those situations that lead to an increase in "Criminal Risk" in the company. "Software Corruption" occurs in groups 1, 2, 4, 5, 6, and 9 (Figure 2), which indicates the prevalence of this type of risk in cases of information and knowledge loss. This type of risk is particularly critical in groups 5 and 9: cases of information loss caused by the factors that form the "Software Corruption" incident occur in the company with a high degree of probability. The company should review its software usage, setup instructions and protocols, since information in this area may be lost due to incorrect operating system settings and custom software, which distorts information, reduces computer performance, causes time losses, etc. "Human Error" is present in groups 1, 3, 4, 6, 7, and 8 (Figure 2), which indicates a large number of human-initiated information loss cases. The risk in Group 8 (equal to 1.00) is particularly critical, which means that employee actions are highly likely to result in significant information and financial loss. Both "Software Corruption" and "Human Error" can contribute to cybersecurity issues and downtime. As a result, they lead not only to information and knowledge loss but also to financial loss. "Viruses and Malware" occurs in groups 1, 4, 6, and 9. The risk of information loss from this incident is moderate. Its value (0.5) in group 9 may be the result of a virus attack, which indicates atypical impact factors of this incident on information loss. The fact that this incident occurs in Group 9 signals that the company should take additional antivirus protection measures. "Technical Risk" occurs in groups 1, 3, 4, 6, and 7 (Figure 2). The risk level does not exceed 0.50 and in most cases is low despite its prevalence. That is, cases of information loss caused by technical problems occur in the company but do not lead to significant losses. An overall risk level, which shows the likelihood of loss due to the entire set of incidents, was determined for each group (Figure 2). On the whole, one should note that it is insignificant and ranges from 0.14 to 0.26. The probability of risk occurrence is highest for the 6th group.
This value suggests that, with a probability of 0.26, a situation is possible in the studied company in which the factor events are repeated very often (10-100 times) for each incident and lead to significant losses ($ 20,000-70,000). In other situations, it is hardly probable that the factors of all five incidents will influence information loss; however, such conditions are also possible for groups 1 and 4 of Figure 2. Conclusions Thus, the problem of information and knowledge loss management is quite relevant for different companies, because by losing data, the company loses money. Internal and external factors related to user errors, external virus and hacking attacks affect even innovative technology and the newest software and hardware. Therefore, a timely response by the company's management, based on predicting harmful incidents, will reduce losses. The proposed methodology allows a timely and rapid assessment and identification of the risk of information and knowledge loss, both in general and in the context of individual incidents. This approach avoids subjectivity in companies' methods, since it is formed from actual data on the number of cases and the amounts of loss for each factor. There is also real experience of applying such approaches in the operational risk assessment process of banks used by the National Bank of Ukraine. A positive feature is the visual interpretation of the risk of information and knowledge loss in the form of a risk map, which considers the number of cases and losses, presents information by groups of factors, and determines the risk for each incident and the overall level for each group. By analysing such a map, one can identify the problematic places in the company that cause information and knowledge loss. The map's results make it possible to predict the consequences for the company at the obtained risk level. For this purpose, it is advisable to determine the scenarios in which such risk is present and fundamental decisions are taken in the company's information security system, i.e., which scenarios the company will face if the number of cases and the amount of losses increase (or decrease). The proposed methodology can be used by companies regardless of their form of ownership and type of activity. Its main limitation is associated with the criteria for factor classification, for which there are no well-founded measurements. To overcome this disadvantage, the methodology can be refined by developing a statistical estimate of the minimum and maximum boundaries for the number of cases and the allowable amount of losses for each risk group. The next limitation is that the approach proposed in the paper will not replace the set of measures that need to be implemented to reduce the risk of information loss in the company. Companies should conduct regular training sessions to increase the computer literacy of users, especially young and inexperienced employees, to reduce the risk of human factors. It is also necessary to provide employees with information regarding the procedures for dealing with data. It is essential to ensure that users' access rights correspond to their job descriptions. This measure reduces the amount of fraud that staff can commit by holding excessive administrator privileges or passwords. Regular monitoring of user actions will help to detect errors in their work.
To reduce the risk of technical incident factors, more constructive measures should be taken, such as using solid-state drives instead of hard drives; using surge protectors, generators and backup batteries; systematically cleaning computers; keeping devices in specially equipped rooms; and using dust-proof and waterproof enclosures. To reduce the impact of the "Software Corruption" incident, it is necessary to have a secure lock/unlock procedure, to shut down software after each use, to perform several software testing procedures, and to use systematic backup and archiving of information on additional servers or external media. Companies should implement anti-theft software on laptops, regularly update antivirus software and scan files, and verify the access rights and roles of employees in the company's information system to reduce the risk of "Viruses and Malware" and "Criminal Risk" incidents. In the future, we plan to develop the proposed methodology further by considering how the predicted results of implemented measures to prevent situations of information and knowledge loss affect the reduction of the risk level. It is also possible to add the identification of factors depending on the degree of company control over them and the determination of losses for the respective groups. Funding This work is carried out within the taxpayer-funded research projects: No. 0118U003574 "Cybersecurity in the banking frauds enforcement: protection of financial service consumers and the financial and economic security growth in Ukraine", No. 0118U003569 "Modeling and forecasting socio-economic and political road reform map in Ukraine for the transition to the model of modern business", No. 0120U104798 "Quadrocentric recrusive model of Ukrainian unshadow economy to increase its macroeconomic stability" and No. 0120U104810 "Optimization and automation of financial monitoring processes to increase information security of Ukraine".
8,995
2021-02-05T00:00:00.000
[ "Business", "Computer Science" ]
Computational Design of an Electro-Membrane Microfluidic-Diode System This study uses computational design to explore the performance of a novel electro-membrane microfluidic diode consisting of physically conjugated nanoporous and micro-perforated ion-exchange layers. Previously, such structures have been demonstrated to exhibit asymmetric electroosmosis, but the model was unrealistic in several important respects. This numerical study investigates two quantitative measures of performance (linear velocity of net flow and efficiency) as functions of such principal system parameters as perforation size and spacing, the thickness of the nanoporous layer and the zeta potential of the pore surface. All of these dependencies exhibit pronounced maxima, which is of interest for future practical applications. The calculated linear velocities of net flows are in the range of several tens of liters per square meter per hour at realistic applied voltages. The system performance somewhat declines when the perforation size is increased from 2 µm to 128 µm (with a parallel increase of the inter-perforation spacing) but remains quite decent even for the largest perforation size. Such perforations should be relatively easy to generate using inexpensive equipment.

To be asymmetric, electroosmosis (EO) must be non-linear. One of the possible mechanisms of EO non-linearity is related to a concurrent ICP. Ion-exchange (IEX) membranes are well known to have non-linear and asymmetric current-voltage characteristics occurring due to current-induced ICP. However, their electro-osmotic permeability is very low, so in 1D configurations, they do not generate appreciable volume flows. Nevertheless, in microanalysis, for example, in the so-called T-junction configurations (typically using cation-exchange Nafion materials), coupling between ICP and "orthogonal" EO flows gives rise to interesting and useful analyte pre-concentration phenomena [29][30][31][32][33][34]. These systems demonstrate the potential of "non-1D" configurations. Effectively, such a configuration was introduced and explored (using numerical simulations) in a previous study [35], which studied a new kind of electro-membrane material combining current-induced ICP with significant EO flows. This occurred as a result of a physical conjugation of the IEX and nanoporous layers, while volume flows were enabled through micro-perforations in the IEX layers. Such structures featured considerable asymmetries in the rates of volume transfer (at the same current magnitude) depending on the current direction, were easily upscalable with increasing membrane area and, thus, potentially suitable for an electrically driven "power fluidics" (by analogy with power electronics). The previous study [35] considered the simplest model already featuring the behavior of interest. Thus, for instance, and for simplicity, the electrolyte concentration was set right at the external surface of the IEX layer, so perfect stirring of the outside solution was assumed. This assumption is unrealistic, especially for the reversed voltage, when this external interface becomes depleted due to the current passage. In addition, the previous study set a constant electrostatic potential right at the external surface of the ion-exchange layer.
This was performed to avoid explicitly considering the external solution (which would represent a considerable numerical complication), but this condition was actually not compatible with the condition of a given electrolyte concentration, because the condition of the given potential would require putting a reversible electrode right at the external surface, while the condition of a given electrolyte concentration would demand perfect solution stirring right up to the surface (and this would actually be impeded by the presence of any electrode). In addition to that, the previous study considered only opposite signs of fixed charges in the IEX and porous layers, only one perforation size and only one value of the zeta potential. In this study, we will make the model considerably more realistic by relaxing several of those approximations. At the same time, due to this, the model parameters will become too numerous for a systematic parametric study to be feasible. Nonetheless, we will explore some correlations of potential interest for eventual practical applications. In particular, we will consider dependencies of system performance on the size of perforations and the distance between them, on the thickness of the porous layer as well as on the sign and magnitude of the zeta potential of the porous layer. All of these dependencies will reveal more or less pronounced maxima, which can be exploited in the future optimization of practical applications on a case-by-case basis. We will also see that considerable volume fluxes (in the range of several tens of liters per square meter per hour) can be expected at realistic parameter combinations. In this study, we consider only stationary solutions, which implies that the electrical capacity of electrodes is sufficiently large to ensure practically constant voltage drops in the system over times that are much longer than the characteristic relaxation times of concentration changes. This assumption is probably not very realistic, so for future analyses of practical systems, non-stationary simulations will be needed.

Theory We consider a porous layer coated on one side with an IEX "mask" having rather scarce circular perforations (see Figure 1). Strictly speaking, one should specify the pattern of the perforation array and solve 3D problems for volume and ion transfer. For infinite arrays, proceeding from symmetry considerations, one can consider a single perforation while accounting for the existence of the other ones using boundary conditions formulated at the external surface of a 3D domain enclosing every single perforation. Depending on the (regular) pattern, such domains can have triangular, square or hexagonal cross-sections, and their explicit modeling would require the solution of 3D problems. To reduce the 3D problems to 2D, we will use a cylindrical cell model that approximates the polygonal domain surface with a cylinder. Several studies on microelectrodes [36][37][38] have demonstrated good accuracy of such an approach. As shown in Figure 1, our model consists of four layers, namely, a porous layer, an IEX layer, an unstirred layer and a further solution layer (from top to bottom). In the IEX layer, there is a single circular perforation. Similar to the previous study, for simplicity, we will neglect the finite thickness of the IEX "mask" and consider it a geometrical boundary impermeable to volume flow and coions (see below).
For the description of liquid flow in the solution layers beneath the membrane, we use the Stokes and continuity equations

η∆v = ∇p (1)
∇·v = 0 (2)

where v is the vector fluid velocity, η is the dynamic viscosity, p is the hydrostatic pressure, ∆ is the Laplace operator and ∇ is the nabla operator. Pressure-driven flows through porous media are often described by the Darcy law, which postulates that the flow rate is proportional to the negative hydrostatic pressure gradient. However, the use of the Darcy equation gives rise to non-zero slip at non-porous surfaces and to non-physical singularities in the normal velocity at the perforation edge (see, for example, [35]). To avoid such singularities, we will use the so-called Brinkman equation [39], which is a kind of "superposition" of the Darcy and Stokes equations. Strictly speaking, this equation lacks a rigorous physical background. Nonetheless, it correctly describes the limiting cases of Darcy and Stokes flows and is compatible with the no-slip condition at solid surfaces.
The conventional Brinkman equation (for pressure-driven flows) reads as:

(η/γ)∆u − (η/k)u = ∇p (3)

where u is the vector cross-section-averaged fluid velocity, k is the porous-medium hydraulic permeability and γ is its porosity. As mentioned above, the Brinkman equation is a "superposition" of the Stokes and Darcy equations. These two equations use different kinds of velocity. While the Stokes equation operates with the actual fluid velocity, the Darcy equation uses a cross-section-averaged fluid velocity. Dividing the viscosity by the porosity in the "Stokes component" of Equation (3) (the first term on the left-hand side) effectively gives rise to some reduction in velocity proportional to the porosity and, thus, "recalibrates" the velocity to the cross-section-averaged one. As a result, both "components" of the Brinkman equation operate with the same kind of velocity. As demonstrated in the previous study, in the so-called Smoluchowski approximation for the description of EO, the hydrostatic pressure gradient should be modified to include an electrical body force. As a result, the EO Brinkman equation will read as

(η/γ)∆u − (η/k)u = ∇p + β∇ϕ (4)

where ϕ is the electrostatic potential, and the coefficient β is defined as

β = εε0ζγ/k (5)

where εε0 is the liquid dielectric constant, ζ is the zeta potential of the pore surface and γ is the active porosity (including pore tortuosity). Using Equation (5) implies that the pores are sufficiently large compared to the Debye screening length (see below for the applicability of this approximation). Usually, this has to be complemented by the continuity equation ∇·u = 0 (6). In addition, we require equations for the electrostatic potential and salt concentration. As such, we will use Ohm's law and the convection-diffusion equation. They have the same form in both the porous and unstirred layers (recall that we do not explicitly consider the "interior" of the IEX layer), the only difference being that the electric conductivity and diffusion permeability are reduced in the porous layer due to the finite porosity. Thus, in the unstirred layer

I = −αc∇ϕ (7)
J_s = −D∇c + vc (8)

and in the porous layer

I = −α_p c∇ϕ (9)
J_s = −D_p∇c + uc (10)

where I is the vector current density, J_s is the vector salt flux, D is the salt diffusion coefficient and c is the salt concentration. For simplicity, in this study, we consider (1:1) electrolytes with cations and anions of the same mobility (such as in KCl). Therefore, the proportionality coefficient in Equation (7) (Ohm's law) is given simply by

α = 2F²D/(RT) (11)

where F is the Faraday constant, R is the gas constant and T is the absolute temperature. Finally, due to the finite porosity

α_p = γα (12)
D_p = γD (13)

By the salt concentration in the porous layer, we understand the concentration averaged over the pore space. By using Ohm's law, we neglect the so-called streaming currents (convective transfer of electric charge due to the movement of charged pore liquid) as compared to conventional electromigration currents. This is legitimate for sufficiently large pores (compared to the screening length). The equations to solve are those of charge and salt conservation, that is

∇·I = 0 (14)
∇·J_s = 0 (15)

which apply in each of the layers.

External Boundary of the Porous Layer Strictly speaking, there is also an unstirred layer here. However, due to the relatively large pore size, this interface is not current polarized. Therefore, any unstirred layer can be effectively included in the porous layer. The boundary conditions here are zero hydrostatic pressure, zero electrostatic potential and the given salt concentration.
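As a quick plausibility check of the Ohm's-law coefficient α = 2F²D/(RT) written above (our reconstruction for a (1:1) electrolyte with equal ion mobilities, since the original expression was lost in extraction), the short sketch below evaluates the conductivity of a 1 mM KCl solution; the diffusion coefficient is an assumed textbook value.

```python
# Plausibility check of alpha = 2*F^2*D/(R*T) for a (1:1) electrolyte with
# equal ion mobilities (our reconstruction); D is an assumed textbook value.
F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol K)
T = 298.0     # temperature, K
D = 2.0e-9    # KCl salt diffusion coefficient, m^2/s (approximate)

alpha = 2 * F**2 * D / (R * T)   # conductivity per unit concentration
c = 1.0                          # 1 mM expressed as mol/m^3
print(f"conductivity of 1 mM KCl: {alpha * c:.4f} S/m")
# Prints ~0.0150 S/m, close to the tabulated ~0.0147 S/m for 1 mM KCl.
```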
Due to the linearity of the problem in concentration, the concentration can be set at any level. Since we are solving the Brinkman equation, we need an additional hydrodynamic boundary condition at this surface. As such, we used the condition of flow one-dimensionality.

External Surface of the Unstirred Layer Here, we set the same value of the salt concentration and a non-zero applied voltage. Due to deviations from flow one-dimensionality close to the perforations, there are some hydrostatic pressure gradients in the solution beneath the membrane. They can be expected to fade away at some distance from it, giving rise to a constant hydrostatic pressure and a 1D liquid flow. In auxiliary simulations, we checked the dependence of our results on the location of the plane where the hydrostatic pressure was set equal to zero, and we found that there was no dependence at L_s ≅ 400 ÷ 500 µm. Given that there is no applied pressure difference in our system (EO conditions), the hydrostatic pressure at this plane is set to zero. The hydrodynamic equations are solved within the whole layer from −L_s up to the membrane surface. However, the salt-transfer equations are solved only within the unstirred layer.

Cell Boundary Here, we set the condition of zero radial flow (no liquid is entering or leaving the cell) and the condition of perfect slip. These conditions apply both in the porous and unstirred layers. For salt transport, we apply the conditions of zero ion fluxes through the cell surface, which is ensured by zero radial derivatives of the salt concentration and electrostatic potential.

IEX Layer Here, we impose the conditions of zero normal (impermeability) and zero tangential (no-slip) volume flow. For ion transport, we use the condition of impermeability to coions, ∂c/∂z + (F/RT)·c·∂ϕ/∂z = 0. For definiteness, we will consider that the IEX layer is impermeable to cations (and, thus, positively charged). Thus, in the discussion below, negative zeta potentials will correspond to the case of fixed charges of opposite signs and positive zeta potentials will correspond to the case of coincident signs. The case of cation-exchange IEX layers (impermeable to anions) can easily be considered by analogy. In addition to the cation impermeability, we consider that the IEX layer is perfectly permeable to counterions (anions), so their electrochemical potential does not change across the layer. Thus, both the salt concentration and electrostatic potential experience jumps at this interface, whose magnitudes are related by Equation (35).

Exposed Surface of the Porous Layer within Perforations Rigorous hydrodynamic boundary conditions at the interfaces between porous media and free fluids (such as those at the surface of a porous layer exposed within perforations) are a complex theoretical matter, which is additionally complicated by the fact that the Brinkman equation is not rigorous. We resolve this problem by effectively using the Brinkman equation not only in the porous but also in the unstirred layer and by assuming that within the latter the hydraulic permeability is extremely large. In this way, the Comsol Multiphysics software ensures a correct description of hydrodynamics at the interface between the porous layer and the free fluid within perforations without the need to explicitly formulate the boundary conditions.
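To illustrate how the Brinkman equation reconciles Darcy-like flow in the bulk of the porous layer with the no-slip condition invoked above, the sketch below evaluates the closed-form solution of the 1D Brinkman problem between two no-slip walls under a constant driving force. All parameter values are assumptions for illustration, not the simulation inputs of this study.

```python
import numpy as np

# 1D Brinkman flow between no-slip walls at z = 0 and z = H driven by a
# constant force G: (eta/gamma)*u'' - (eta/k)*u = -G, with u(0) = u(H) = 0.
# All values are illustrative assumptions, not this study's inputs.
eta = 1.0e-3    # dynamic viscosity, Pa*s
k = 4.0e-15     # hydraulic permeability, m^2
gamma = 0.4     # porosity (assumed)
H = 100.0e-6    # layer thickness, m
G = 1.0e6       # driving force magnitude, Pa/m

lam = np.sqrt(gamma / k)     # inverse Brinkman screening length, 1/m
u_darcy = k * G / eta        # Darcy velocity reached far from the walls, m/s

z = np.linspace(0.0, H, 1001)
u = u_darcy * (1.0 - np.cosh(lam * (z - H / 2)) / np.cosh(lam * H / 2))

print(f"Brinkman screening length: {1e9 / lam:.0f} nm")   # ~100 nm here
print(f"u at the walls: {u[0]:.1e}, {u[-1]:.1e} m/s (no slip)")
print(f"u at mid-layer: {u[500]:.2e} m/s vs Darcy value {u_darcy:.2e} m/s")
```

Away from the walls the velocity approaches the Darcy value kG/η, while near them it decays over the screening length √(k/γ), which is why the Brinkman description avoids the spurious slip of the pure Darcy law.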
For the salt transport problem at this boundary, we use the conditions of continuity of the salt concentration and electrostatic potential as well as of flux continuity for each of the two ions, which, taking into account the difference in the diffusion coefficients due to the finite porosity, gives rise to the corresponding matching conditions. Numerical simulations were performed using Comsol Multiphysics 6.1 software with the 2-D Axisymmetric Geometry Model in the sub-section "Porous Media and Subsurface Flow" of the section "Fluid Flow", physics interface Brinkman Equation. Close to the perforation and the IEX layer, we used adaptive meshes of the mapped type. For the remaining part of the numerical domain, we used a simple free triangular mesh with a maximum element size of around 1 µm in the vertical direction. In the radial direction, the mesh size was adapted to the cell size using the scale geometry option.

Results and Discussion Our model has a number of parameters. The most important ones are the size of perforations, the distance between them, the thickness of the porous layer, the zeta potential of the pore surface and the applied voltage. We will explore the dependencies of system performance on the more important of these parameters. The other parameters will be fixed at some reasonable values, as briefly explained below in Table 1. The value of hydraulic permeability approximately corresponds to an average size of cylindrical pores of 200 nm. Thus, the pores are much larger than the screening length in the assumed 1 mM KCl solution, so the Smoluchowski approach to the description of electroosmosis is applicable. Ultimately, the effect of net flow is due to a geometrical asymmetry: a porous layer on one side and an unstirred solution layer on the other. The extent of asymmetry becomes smaller when the unstirred layer becomes thicker. Therefore, one can expect a monotone decrease in the net flow rate with increasing unstirred layer thickness, which was confirmed using simulations (not shown). The value of thickness used in the calculations (50 µm) is typical for systems with moderate stirring. Our problem is linear in concentration, so only relative changes in concentration matter. The only output property that depends on the salt concentration is the current density (it is directly proportional to it). For definitiveness, we used the value of 1 mM for the initial concentration. For other concentrations, one should adjust the current density proportionally. The previous study [35] demonstrated that without unstirred layers (and using the Darcy law for the description of flow in the porous layer), the dependencies of flow rate on the applied voltage (including its sign) were strictly linear. The system's non-linearity and asymmetry manifested themselves only in the current-voltage characteristics. Due to their asymmetry, the same amount of electric charge transferred in the two opposite directions gave rise to the transfer of different volumes of liquid and a non-zero net volume transfer over the period.
In this study, we account for unstirred layers and use the Brinkman equation (instead of the Darcy law) primarily to better capture the flow details close to the perforation edge. This gives rise to deviations from the strict linearity of flow vs. applied voltage. As one can see from Figure 2, they are not very pronounced in the case of opposite signs of fixed charges in the porous and IEX layers. On the contrary, the current-voltage characteristics in this case are pronouncedly asymmetrical. Therefore, the net flows are controlled by the current asymmetry as described above. In the case of coincident signs of fixed charges, the situation is more complex, namely, both the flow and current feature considerable asymmetry, while the current asymmetry is less pronounced than in the case of opposite signs. However, both asymmetries work in the same direction given that the currents are larger in magnitude at negative voltages where the flow rates are smaller. Below, we will see that in many cases, this gives rise to somewhat larger net flows and efficiencies (see below for the definitions) than in the case of charges of opposite signs, although this ultimately depends on the system parameters. Overall, both configurations feature comparable performances.
Figure 2 shows that the volume flow has a considerable impact on the current-voltage characteristics (especially at larger voltages). The flow plays a dual role. First, it defines the shape of the quasi-1D salt-concentration profiles away from the interface between the porous and IEX layers. This, in turn, influences the current-induced ICP itself because convection transports salt to or from the polarized interface. Second, the volume transfer causes a solution "injection" or "ejection" through the perforations. Injection denotes flows directed into the porous layer and ejection denotes the oppositely directed flows. The injected solution can have either a higher or lower concentration than the surrounding solution. The injection of a (much) higher concentration has a stronger impact on the concentration in the porous layer (and current density) than the injection of a (much) lower concentration. In turn, the concentration in the injected solution is essentially influenced by the ICP phenomena in the unstirred layer. As discussed above, in some cases, non-zero net flows (under AC conditions) arise primarily owing to a dependence of current density on the sign of the applied voltage (current direction). Due to such current asymmetry, essentially different amounts of electric charge are transferred in the two directions, while the transferred volume is approximately the same. In other words, when the same amount of charge is transferred in the two directions (for example, by adjusting the times of current passage in the opposite directions), there is a net volume transfer. Zero net charge transfer is the defining property of alternating currents. Their principal advantage is that capacitive electrodes (without electrode reactions) can be used. The time of passage of a unit charge is inversely proportional to the current. Therefore, the volume transferred in a given direction per unit charge is proportional to the rate of volume transfer divided by the current. The net transfer (per double unit charge) is the difference in volumes transferred in the opposite directions. The time needed to transfer the double unit charge (the same unit charge in each direction) is equal to the sum of the inverse currents.
Therefore, the rate of net volume transfer (the net volume transferred per unit time) is given by the following equation:

W = (V_1 − V_2)/(1/j_1 + 1/j_2) (40)

where V_1, V_2 and j_1, j_2 are the absolute values of the volumes (per unit charge) and currents transferred in the two directions, respectively. Scaled by the per-perforation area, this gives us the linear velocity (m/s) of the net volume flow (Equation (41)). This will be one of the principal output values considered below. Of course, this definition is also valid in the cases where the flow rate is significantly asymmetrical. Adjusting the times of current passage in the two directions is not the only possible scenario. For example, one could also apply different (in absolute value) voltages of opposite signs for approximately the same times. Nonetheless, for definitiveness, in this study, we will use this simple definition of the net flow rate. Selecting an optimal protocol is the subject matter of application-specific optimization, which is beyond the scope of this study. The numerator in Equation (40) is the net volume transferred per double unit charge (a unit charge is transferred in each direction). In addition to its scaling by the time of transfer of this charge (as in Equation (40)), it can be compared with the volume transferred per unit charge by an ideal 1D electroosmotic flow. The current density in this simple case is proportional to the applied electric field and the solution conductivity, where λ is the bulk solution conductivity (we neglect surface conductance). Taking the ratio, we obtain 8εε0ζ/ηλ for the volume transferred per unit charge. Scaling the net volume transferred per unit charge (half of the numerator in Equation (40)) by this value produces a dimensionless parameter that informs us about the efficiency of the AC EO pump as compared to the maximum possible performance. This will be another principal output property described below. We will see that some trends in its dependencies on the system parameters can be different from those observed for the linear velocity of the net volume flow defined by Equation (41). Accordingly, optimal combinations of parameters can be different depending on whether the net flow rate or the volume transferred per given charge (for example, of a battery) is to be maximized.
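As a concrete illustration of Equations (40) and (41) and of the efficiency definition above, the following sketch evaluates both output quantities for assumed forward/backward transfer data. All numerical values are placeholders for illustration, not simulation results.

```python
# Net AC volume transfer per Equation (40): equal charge is passed in the
# two directions, so the passage times are inversely proportional to the
# currents. All numbers below are illustrative placeholders.

eps = 6.95e-10   # dielectric permittivity of water (eps*eps0), F/m
zeta = 0.03      # zeta potential magnitude, V (assumed)
eta = 1.0e-3     # dynamic viscosity, Pa*s
lam = 0.015      # conductivity of ~1 mM KCl, S/m

def net_flow_rate(V1, V2, j1, j2):
    """Eq. (40). V1, V2: volumes per unit charge in the two directions
    (m^3/C); j1, j2: absolute current densities (A/m^2). Because current
    densities are used, the result is already a linear velocity (m/s)."""
    return (V1 - V2) / (1.0 / j1 + 1.0 / j2)

# Reference volume per unit charge of an ideal 1D EO flow, 8*eps*zeta/(eta*lam)
# as quoted in the text:
v_ref = 8 * eps * zeta / (eta * lam)

V1, V2 = 2.0e-7, 1.2e-7   # m^3/C (assumed asymmetric transfer)
j1, j2 = 25.0, 40.0       # A/m^2 (assumed asymmetric currents)

rate = net_flow_rate(V1, V2, j1, j2)
efficiency = 0.5 * (V1 - V2) / v_ref   # half the numerator of Eq. (40), scaled

print(f"net linear velocity: {rate * 3.6e6:.2f} L/(m^2*h)")  # 1 m/s = 3.6e6 L/(m^2*h)
print(f"efficiency: {efficiency:.4f}")
```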
The direction of electro-osmosis is towards the movement of counterions. In the current-induced concentration polarization of interfaces between electrolyte solutions and IEX media, ion depletion (salt-concentration reduction) occurs at interfaces receiving counterions from the solution. In the opposite case (interfaces releasing counterions), there is ion enrichment (salt-concentration increase). Thus, in our system with two "active" (EO-active and perm-selective IEX) layers, there are four possible configurations: concentration decreased in the porous layer, and flow away from the interface (Figure 3a); concentration increased in the porous layer, and flow towards the interface (Figure 3b); concentration increased in the porous layer, and flow away from the interface (Figure 3c); concentration decreased in the porous layer, and flow towards the interface (Figure 3d). The first two configurations (a,b) correspond to opposite signs of fixed charges in the IEX and porous layers. The last two configurations (c,d) correspond to coincident signs. At typical parameter combinations, the electrical resistance of the porous layer is dominant and controls the current density at a given applied voltage. This resistance is controlled by the salt concentration, so its distribution in the porous layer is of primary importance. This concentration is controlled by two factors. First, the concentration changes induced by the current passage across the interface between the IEX and porous layers. Second, the convective "injection" of a solution through perforations from the nearby unstirred layer. The concentration in this "injected" solution is always changed in the opposite direction from that in the porous layer (i.e., increased when in the porous layer it is decreased and vice versa). Whether there is an injection into or an ejection from the porous layer depends on the relationship between the current direction and the sign of the zeta potential of the pore surface. In turn, depending on the relationship between the sign of perm-selectivity of the IEX layer and the current direction, the injected solution can have either an increased or a reduced concentration. Under the strongly non-linear conditions of this study, these decreases or increases can be quite strong. It is clear that the injection of a high-concentration solution into a dilute one has a much stronger effect on the resulting (after diffusive mixing) concentration in the porous layer than the injection of a dilute solution into a concentrated one. Of course, this picture is oversimplified because the current density is far from homogeneous and very strongly depends on the radial position. This current inhomogeneity can be expected to be more pronounced in the configuration where a more concentrated solution is injected into the porous layer (Figure 3a) because this creates a zone of high conductivity "focusing" the current streamlines (see Figure 4). This zone is connected in parallel to the remaining part having much lower conductivity. Of course, this inhomogeneity is progressively leveled out by diffusion away from the interface.
Below, we will see that the configuration with fixed charges of the same sign in the porous and IEX layers overall seems to show a better performance, especially in terms of efficiency. This may be related to a better radial homogeneity of the concentration (and current) distribution in this case. In this system, perforations in the IEX layer are a central element enabling fluid transfer. At the same time, their properties such as size and spacing seem to be relatively easy to engineer. There are two obvious limiting cases, namely, a very large and a very small spacing. In both cases, the net velocity of the volume flow should tend to zero. At very large spacing, this occurs because a finite volume flow through a perforation is distributed over an ever larger per-perforation area. At very small spacing, the perforations practically overlap, so there is almost no IEX layer and no associated current-induced ICP. The flow rates in both directions are large but strictly symmetrical, so the net volume flow is again zero. From these limiting cases, it follows that the dependencies of the net flow velocity on the inter-perforation spacing should have maxima. This is confirmed by Figure 5 showing the velocity of the net volume flow and the efficiency as functions of the cell radius (directly related to the inter-perforation spacing) for various perforation radii. As expected, all the dependencies have maxima. The most important observation is that the height of these maxima does not decrease very much even when the perforation size is increased by almost two orders of magnitude. Moreover, the optimal cell radii (corresponding to the maxima) increase with the perforation size. In practical terms, this is good news because larger perforations at a larger spacing are definitely easier to generate. Indeed, the optimal spacing increases sub-linearly with the perforation size, so the optimal porosity increases, for example, from about 0.7% in the case of R_h = 1 µm to around 21% for R_h = 64 µm.
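The quoted porosities follow from the cell-model geometry: with one circular perforation of radius R_h per cylindrical cell of radius R_c, the open-area fraction is (R_h/R_c)². A minimal check, with cell radii back-calculated for illustration (our assumptions, not values read off Figure 5):

```python
def perforation_porosity(R_h: float, R_c: float) -> float:
    """Open-area fraction of the IEX layer in the cylindrical cell model:
    one circular perforation of radius R_h per cell of radius R_c."""
    return (R_h / R_c) ** 2

# Assumed optimal cell radii consistent with the porosities quoted above:
print(f"{perforation_porosity(1e-6, 12e-6):.1%}")    # ~0.7% for R_h = 1 um
print(f"{perforation_porosity(64e-6, 140e-6):.1%}")  # ~20.9% for R_h = 64 um
```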
All the curves in Figure 5 look qualitatively similar. Nevertheless, there are some potentially important quantitative differences. Thus, for instance, the maxima at small perforation sizes are somewhat higher in the case of coincident signs of fixed charges, but their height decreases with increasing perforation size faster than in the case of opposite signs. Therefore, coincident signs can be preferable for smaller perforations, and systems with different signs could work better for larger perforations. In terms of efficiency, the trends are similar, but the advantage of coincident signs at smaller perforation sizes is more pronounced. Finally, somewhat larger spacings are optimal in terms of efficiency than in terms of the velocity of the net volume flow. The dependencies shown in Figure 5 were calculated for a relatively large applied voltage magnitude (600 mV). Figure 6 shows similar dependencies obtained for a lower voltage magnitude of 200 mV (they also enable an easier comparison of the cases of opposite and coincident signs). Although the dependencies again are qualitatively similar, the most interesting observation is that the efficiency at the lower voltage is noticeably smaller, especially for the configuration with coincident signs of fixed charges in the porous and IEX layers. Thus, for instance, at the point of maximum for R_h = 1 µm, the efficiency is about 0.107 at ϕ_0 = 600 mV but only around 0.067 at ϕ_0 = 200 mV. A close inspection of Figures 5 and 6 reveals that the net flow velocity at the points of maxima also increases somewhat super-linearly with the applied voltage. This, however, concerns the net velocities and efficiencies at the points of maxima. Their location, in turn, depends on the applied voltage. Figure 7 shows some dependencies on the applied voltage calculated for a fixed combination of perforation and cell sizes. The dependencies of net velocities are initially super-linear and then become sub-linear, so the benefits of increasing voltage depend on its range. The efficiency has maxima located differently depending on the sign and magnitude of the zeta potential. The coincident signs of fixed charges are always beneficial in terms of efficiency but are not always so in terms of the net flow velocity.
In summary, the applied voltage is another parameter to be selected on a case-by-case basis depending on the application and optimization criteria. One should also keep in mind that excessively large voltages can give rise to electrode reactions (e.g., water splitting) and, thus, compromise the capacitive function of the electrodes. The porous layer thickness can probably also be engineered with relative ease. The dependencies of net flow velocity on the thickness can be expected to have maxima because too thin layers have insufficient "internal" hydraulic resistance to be able to pump liquid through the entrance resistance of the perforated IEX layer, while in excessively thick layers, the driving voltage gradients are reduced. In terms of efficiency, the situation can be different because increasing thickness gives rise to decreasing currents. Figure 8 shows that, indeed, the dependencies of net velocity on the porous layer thickness have pronounced maxima at H = 60 ÷ 80 µm, while the maxima for efficiency are much broader and shifted towards larger thicknesses. However, the location of the maxima depends on a number of parameters and has to be determined on a case-by-case basis. The value we used in most of the above simulations (H = 100 µm) is a compromise between the maximum net velocities and efficiencies. The efficiency increases with the thickness within its broader range. Nevertheless, it also decreases starting from certain thicknesses, which is probably caused by a dependence of the ICP phenomena on the thickness. The zeta potential of the pore surface is more difficult to engineer than the "geometrical" parameters, but this may still be possible to some extent through material selection or modification. Generally, convective flows (controlled by the zeta potential) seem to always reduce the ICP and, thus, the system asymmetry. However, both direct and reverse flows increase with the magnitude of the zeta potential. Within a certain range, this overcompensates the losses in the asymmetry, and the net flow velocity increases. Figure 9 shows that this occurs only up to certain values of the zeta potential depending on its sign and the geometrical parameters. Remarkably, the efficiency is highest at very low zeta potentials. This is one more example of the classical trade-off between efficiency and productivity. Again, the optimal values should be selected considering application-specific optimization criteria.
Conclusions and Outlook Using numerical simulations, we have explored a novel electro-membrane microfluidic diode featuring significant (i.e., of practical interest) net volume flows in response to (ultra)low-frequency AC voltages. The system consists of a nanoporous layer and a micro-perforated ion-exchange layer put in series. The dependencies of system performance on model parameters such as the perforation size and spacing, the porous layer thickness and the zeta potential of the pore surface feature maxima, which offers opportunities for system optimization in practical applications, for example, in sports garments with "active" moisture evacuation. In particular, our simulations have revealed that even relatively large perforations (~100 µm) can give rise to a decent performance, which is important in terms of practical implementability and cost. In addition, we have demonstrated that configurations with coincident signs of fixed charges in the ion-exchange and porous layers can feature even better performance than systems with opposite signs as considered previously. This is of interest in view of the typically better properties of cation-exchange materials (as compared to anion-exchange ones) and the reduced fouling of negatively-charged surfaces by natural organic matter (as compared to positively-charged ones). In this study, we considered only stationary solutions, which implies that the electrical capacity of the electrodes was sufficiently large to ensure practically constant voltage drops in the system over times that are much longer than the characteristic relaxation times of concentration changes. This assumption is probably unrealistic, so for the analysis of practical systems, non-stationary simulations will be needed. Explicit inclusion of electrodes will require even more model parameters than used in this study and will have to be performed in conjunction with experimental studies where the values of a part of the parameters can be fixed according to specific systems.
10,913
2023-02-01T00:00:00.000
[ "Physics" ]
Cell Culture-Based Assessment of Toxicity and Therapeutics of Phytochemical Antioxidants Plant-derived natural products are significant resources for drug discovery and development, with appreciable potential for preventing and managing oxidative stress, making them promising candidates in cancer and other disease therapeutics. Their effects have been linked to phytochemicals such as phenolic compounds and their antioxidant activities. The abundance and complexity of these bio-constituents highlight the need for well-defined in vitro characterization and quantification of the plant extracts/preparations that can translate to in vivo effects and, hopefully, to clinical use. This review article seeks to provide relevant information about the applicability of cell-based assays in assessing the cytotoxicity and therapeutic potential of phytochemicals, considering several traditional and current methods.

Introduction Cancer is one of the leading causes of death worldwide. It is the first or second leading cause of death prior to age 70 in 112 of 183 countries and the third or fourth leading cause in a further 23 countries, according to World Health Organization (WHO) estimates in 2019 [1]. The rising prominence of cancer as a leading cause of death, in combination with limited clinical interventions, constrains the impact of treatment on population trends in cancer mortality, even in developed countries [2]. Although a combination of screening and treatment is progressively effective in reducing mortality from some cancers, an expected global cancer burden of 28.4 million cases in 2040, a rise of 47% from 2020 values, necessitates the development of new tools to address the unmet needs in cancer management [1]. Although newer, more specific treatments are showing promising results, they can be expensive, and further research is required to determine how best to use these drugs, as well as the toxicities associated with their use [3]. The most common types of cancer treatments available today are chemotherapy, surgery, and radiotherapy. Chemotherapy is curative in subsets of patients presenting with advanced disease, including Hodgkin's and non-Hodgkin's lymphoma, acute lymphoblastic and acute myelogenous leukemia, germ cell cancer, small cell lung cancer, ovarian cancer, and choriocarcinoma [4,5]. Chemotherapy has also been used as a neoadjuvant therapy to reduce the size of solid tumors before surgical removal, and adjuvant therapy has been used after surgery or radiotherapy, with promising results [4]. However, for some other advanced cancers, including prostate cancer, a curative treatment regimen is yet to be discovered. Scientists are returning to the drawing board to find new therapies or new combinations of therapies to further improve cancer treatment outcomes. Increased reactive oxygen species (ROS) levels have been found in almost all cancers and are thought to play an important role in the initiation and progression of cancers [6]. These highly reactive ions and molecules are produced during the normal metabolism of cells but are present at higher levels in cancer cells due to increased metabolic activity, mitochondrial dysfunction, peroxisome activity, increased cellular receptor signaling, oncogene activity, increased activity of oxidases, cyclooxygenases, lipoxygenases and thymidine phosphorylase, or through crosstalk with infiltrating immune cells [6].
ROS are managed under normal physiological conditions, through detoxification by non-enzymatic molecules such as glutathione, or through antioxidant enzymes, which specifically scavenge different kinds of ROS [6]. With increasing interest in natural products, scientists continue to consider plants, which are natural sources of exogenous antioxidants, as possible sources of effective treatments for different cancers. Medicinal Plants in Cancer Treatment and Management Phytochemicals are classified as primary or secondary metabolites based on their role in plant metabolism [7]. Secondary metabolites are chemically active compounds including alkaloids, anthocyanins, flavonoids, terpenoids, tannins, steroids, saponins, coumarins, phenolics and antioxidants. These are often produced in response to stress, are more complex in structure, and are less widely distributed than the primary metabolites [7,8]. They are pharmacologically active as anti-oxidative, anti-allergic, anti-bacterial, anti-fungal, anti-diabetic, anti-inflammatory and anti-carcinogenic compounds [8][9][10]. It is common for a single plant to produce many secondary metabolites with a wide range of chemical and biological properties, providing a range for bioactive substances [10]. In the last decades, several plants have been confirmed to contain chemo-preventive and therapeutic agents for various cancers [11][12][13][14][15][16][17][18][19][20]. These studies show the effectiveness and synergistic effects of phytochemicals in plant extracts in various diseases [15,21,22]. Researchers have discovered that polyphenols are good antioxidants, capable of neutralizing the destructive reactivity of reactive oxygen/nitrogen species produced as byproducts of metabolism [23]. In addition, epidemiological studies have revealed that polyphenols provide significant protection against development of several chronic conditions such as cardiovascular diseases (CVDs), cancer, diabetes, infections, aging and asthma [23]. Phenolic phytochemicals are the largest category of phytochemicals and the most widely distributed in the plant kingdom [24]. There have been studies to examine the effect of crude plant extracts or fractions containing phenolic compounds on cancer cells to test the hypothesis that potent antioxidants possess anticancer potential. Some of these studies revealed that the efficacy of phenolic compounds in inhibiting cancer activity differs based on the structure of the phenolic compound and its molecular target [25]. Phenolic compounds can directly scavenge free radicals after entering cells and activate several cellular signaling pathways (CSP), including nuclear factor erythroid-2 (NFE2)-related factor 2 (Nrf2)-Kelch-like ECH associated protein 1 (Keap1) complex [26]. When activated, the Nrf2-Keap1 complex induces cellular defense mechanisms, including phase II detoxifying enzymes, phase III transporters, anti-oxidative stress proteins, and other stress-defense molecules that protect normal cells from ROS and reactive metabolites of carcinogenic species [26]. Another CSP that can be activated by phenolics is the mitogen-activated protein kinases (MAPKs) cascade, which helps regulate proliferation, differentiation, stress reduction, and apoptosis in cells [26]. Anticancer activity of phenolic compounds has been studied with the use of crude extracts containing mixtures of phenolic compounds and with isolated phenolic compounds. 
Some examples of crude extracts with reported anticancer activity include: Pandanus amaryllifolius extracts containing gallic acid, cinnamic acid and ferulic acid, with reported in vitro inhibition of breast cancer cell lines [25]; several Teucrium species extracts containing hydroxycinnamic acid derivatives, phenylethanoid glycosides, flavonoid glycosides, and flavonoid aglycones, with reported antiproliferative and proapoptotic activities in HCT-116 colon cancer cell lines [27,28]; Baccharis trimera extracts containing gallic acid, pyrogallol, syringic acid and caffeic acid, with reported suppression of tumor cell colony formation and of proliferation of the SiHa cell line (isolated from a primary cervical squamous cell carcinoma); and Prunus africana extracts, containing atraric and ferulic acids and N-butylbenzene-sulfonamide (NBBS), with a reported antiproliferative effect on prostate cancer cells [15,29]. Over 60% of currently used anti-cancer agents are estimated to be derived from natural sources, such as plants, marine organisms, and microorganisms [15,17,30,31]. A good example of a plant source is Prunus africana (Hook.f.) Kalkman, also known as Pygeum africanum (Hook. f.) and by common names including African cherry, bitter almond, African prune, and red stinkwood [15,17]. Besides its use for timber, it is employed as a medicinal plant, whose leaves, roots and bark are used in traditional medicine in Africa [15,[32][33][34][35]. This is not surprising, since various bioactive substances with anti-inflammatory, anti-cancer, and anti-viral properties have been identified in different members of the genus Prunus [33,[35][36][37]. Many phytochemicals from medicinal plants have been discovered to have significant anticancer properties, and many more are yet to be discovered. Determining Anti-Cancer Potential of Phytochemicals Phytochemicals and their derivative metabolites present in plants have been shown to possess several beneficial effects in humans. Some of the more widely known phytochemicals with anticancer properties include vincristine, vinblastine, camptothecin, bleomycin, and paclitaxel (Taxol), among others [30,38]. Different mechanisms have been proposed for the anticancer effect of various phytochemicals, with some exerting additive and/or synergistic effects with other phytochemicals. Some of the mechanisms that have been identified include selective killing of rapidly dividing cells, targeting of atypically expressed molecular factors, anti-oxidation, modification of cell growth factors, and inhibition of inappropriate angiogenesis and induction of apoptosis [38]. An example is ellagic acid, found in pomegranates, which induces apoptosis in prostate and breast cancer cells and suppresses metastatic processes of many cancer types [38]. Curcumin, found in turmeric, is reported to cause apoptosis in cancer cells without cytotoxic effects on healthy cells via several mechanisms, including the regulation of cell proliferation, cell survival, and caspase activation pathways [26,39]. While many phytochemicals that can serve as anticancer drugs by themselves are yet to be discovered, those already discovered can serve as models for the preparation of more effective formulations by applying methods such as total or combinatorial synthesis, or biosynthetic pathway manipulation [16,30]. This concept was applied to overcome the severe toxicity of earlier formulations of paclitaxel by utilizing an albumin-bound nanoparticle technology, which concentrates the drug in tumors [16].
With new and more effective technologies and a better understanding of cancer biology, phytochemicals and their derivatives are bound to play a pivotal role in cancer chemotherapy. Cancer Cell Lines Cancer cell lines are useful tools because they provide a multifaceted model of the biological mechanisms involved in cancer development and progression. The use of cancer cell lines has improved knowledge of the deregulated genes and signaling pathways involved in cancer progression; cell lines have also been used to define potential molecular markers for cancer screening and prognosis [40]. Numerous cell lines, each with unique properties and characteristics, are currently available for the in vitro study of different types of cancer [41]. Cell lines are easy to handle and to manipulate genetically/epigenetically, using demethylating agents, small interfering ribonucleic acid (siRNA) or expression vectors, and they can be manipulated pharmacologically with cytostatics (cell growth inhibitors). Cell lines are homogeneous, providing identical tumor cells for easier analysis, unlike heterogeneous solid tumors. However, to imitate in vivo tumor characteristics as closely as possible, a cancer cell line panel representative of the heterogeneity observed in primary tumors can be used. Cancer cell lines are pure populations of tumor cells and have a high degree of similarity with the initial tumor. Because of the homogeneity of cell lines, results of experiments using correct conditions are easily reproducible [40]. In addition, there is a substantial number and variety of cancer cell lines available (Table 1). Despite these advantages, some drawbacks of using cancer cell lines include cross-contamination with HeLa cells, genomic instability leading to differences between the original tumor and the specific cell line, changes in the morphology, gene expression, and cellular pathways of cell lines arising from the culture conditions required to maintain them (i.e., culture adaptation), and infections with mycoplasma [40]. Furthermore, it is difficult to establish long-term cancer cell lines for certain types of tumors, including prostate cancer tumors [40,42]. The limited number of cell line models for prostate cancer research stems from the difficulty of propagating prostate cancer cells in vitro for extended periods. Investigators have been able to generate only seven cell lines that were previously available through public cell line repositories, and these do not represent the spectrum of clinical disease. New cell lines, which demonstrate the commonly observed clinical phenotypes, are clearly needed [42]. Since the isolation of the first cell line (HeLa) in the 1950s, a variety of cancer cell lines have been developed for preliminary drug testing [43]. Different cell lines require different media, growth factors, and supplements to remain viable over time, as the constituents of the culture media affect the cell lines. In a study by Kim et al., human breast cancer cells (MDA-MB-231) cultured in minimum essential medium (MEM), Dulbecco's modified Eagle's medium (DMEM), or Roswell Park Memorial Institute (RPMI)-1640 medium, containing different concentrations of fetal bovine serum (FBS) or different sera (equine or bovine), showed significant changes in gene expression [44]. They reported that about 25% of genes were expressed at significantly different levels by cells grown in MEM, DMEM, or RPMI-1640 media, based on genome-wide expression analysis [44].
In another study, lung cancer cells (A549) and hepatocellular cancer cells (HepG2) cultured in Ham's F-12 nutrient mix (F12), RPMI, DMEM, and MEM revealed a significantly increased proliferation rate for A549 cells in DMEM compared to the other media tested, and the lowest rate for both A549 and HepG2 cells in MEM, confirmed by assaying conditioned media for basal ATP levels at 72 h [45]. This underscores the significant effect of growth conditions and/or environment on cells in drug discovery experiments, and the need for specificity to ensure results are reproducible. There are a handful of prostate cancer cell lines in use today, most of which have been established from metastatic deposits [46]. The LNCaP cell line, isolated from a subclavian lymph node metastasis of prostate cancer, maintains several key markers including prostate-specific antigen (PSA), prostate-specific membrane antigen (PSMA) and the androgen receptor (AR) [47]. The LNCaP cell line is androgen sensitive (AS) and expresses AR and PSA mRNA/protein [41,48]. It has a doubling time of 60-72 h, is responsive to TGFα, EGF and IGF-1, which are known to promote cancer development and progression, and has a 50% success rate after xenografting, with a tumor doubling time of 86 h when combined with a Matrigel™ formulation [41]. In another study, LNCaP cells, among others, were injected subcutaneously between the scapulae of pfp−/−/rag2−/− double knock-out mice, resulting in primary tumor growth and pulmonary metastases in 100% of LNCaP-injected mice, and detection of DNA of 266 circulating tumor cells (CTC) per mL of blood and 35 disseminated tumor cells (DTC) per mL of bone marrow after Alu-PCR analysis [49]. Through passage and hormonal manipulation in vivo, the lineage-related LNCaP sublines have resulted in a series of cells that mimic the progression of prostate cancer, from the original AS LNCaP cell line to the androgen-independent (AI) C4-2 and C4-2B cell lines [47,50]. The AI cell line C4-2 reproducibly and consistently follows the metastatic patterns of hormone-refractory prostate cancer by producing lymph node and bone metastases when injected either subcutaneously or orthotopically in either hormonally intact or castrated hosts. This model enables the study of factors that determine the predilection of prostate cancer cells for the skeletal microenvironment [50]. C4-2 cells have a doubling time of about 48 h, are androgen independent, express an androgen receptor, metastasize to lymph nodes, and produce PSA [41,46]. The AI C4-2 cell line differs from its parent AS LNCaP line, with differential expression of 38 genes between the two cell lines (≥2-fold change, 95% CI): 14 of these were expressed at higher levels in LNCaP than in C4-2 cells, while the remaining 24 were expressed at lower levels in LNCaP than in C4-2 cells. In addition, the AI C4-2 cell line is highly tumorigenic and metastatic, including spontaneous metastasis to bone, whereas the AS LNCaP cell line is only weakly tumorigenic and is non-metastatic [47]. Recent Advances in Cell Culture Models for Testing Anticancer Drugs In vitro anti-cancer screening has long been used by researchers as a rapid tool for evaluating natural and synthetic compounds in drug development [53].
To assess preliminary anti-cancer activity in terms of cell viability, the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) and 3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium (MTS) in vitro cytotoxicity assays are considered two of the most economical, reliable, and convenient methods (Figure 1) [53,59]. This reputation rests on their ease of use, accuracy, rapid indication of toxicity, and sensitivity and specificity [59]. Both are in vitro whole-cell toxicity assays that employ colorimetric methods to determine the number of viable cells based on the measurement of mitochondrial dehydrogenase activity, and they differ only in the reagent employed [59]. In the MTT assay, the MTT salt is bio-reduced by dehydrogenase inside living cells, via the succinate-tetrazolium reductase system, to form a colored formazan dye, while a similar bioconversion using the MTS salt, with phenazine ethosulfate as an electron-coupling reagent, occurs in the MTS assay [53,59]. In addition, the MTT assay requires the addition of solubilizing agents to dissolve the insoluble formazan product, while the MTS assay generates a water-soluble formazan product. The quantity of the colored product is directly proportional to the number of live cells in the culture, since only metabolically active cells can reduce the MTT/MTS reagent to formazan [53,59]. The MTT and MTS assays assess the toxicity of a compound to cells but not anti-cancer activity. In addition, the MTT reagent is cytotoxic and subject to interference by chemical compounds such as vitamins A and C, which can lead to an under- or overestimation of cell viability, respectively [60,61]. MTS can be chemically reduced by reducing agents such as gallic acid, and the absorbance measured in the MTS assay is influenced by the incubation time (ideally 1-3 h), the cell type, and the proportion of MTS reagent to cells in culture, hence the cell number [60,62]. Therefore, these factors must be considered in interpreting the results of these tests. The sulforhodamine B (SRB) assay is a rapid, sensitive, and inexpensive method for determining cell growth, utilizing a bright pink anionic dye that binds electrostatically to the basic amino acids of trichloroacetic acid-fixed cells. The protein-bound dye is extracted with Tris base (tris(hydroxymethyl)aminomethane) to quantify the protein content indirectly by spectrophotometry [63]. The endpoint of the SRB assay is non-destructive and stable, does not require time-sensitive measurement, and is comparable with other fluorescence assays [60,64]. However, it is labor intensive, requiring several washing steps [63,65]. The way such colorimetric readouts are reduced to viability figures is sketched below.
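As a concrete illustration of the normalization just described, the following is a minimal Python sketch of how MTT/MTS/SRB-type absorbance readouts are commonly converted to percent viability and summarized as a half-maximal inhibitory concentration (IC50). All absorbance values and concentrations are hypothetical, and the four-parameter logistic model is a common choice assumed here, not a method prescribed by the studies cited in this review.

```python
# Hypothetical colorimetric-assay readout: normalize absorbance to percent
# viability, then fit a dose-response curve to estimate an IC50.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical raw absorbances: blank wells (no cells), untreated controls,
# and wells treated with increasing extract concentrations (ug/mL).
blank = np.array([0.05, 0.06, 0.05])
control = np.array([1.10, 1.05, 1.12])
conc = np.array([1, 5, 10, 50, 100, 500], dtype=float)
treated = np.array([1.02, 0.95, 0.80, 0.45, 0.25, 0.12])

# Percent viability: background-subtracted signal relative to controls.
viability = 100 * (treated - blank.mean()) / (control.mean() - blank.mean())

# Four-parameter logistic (Hill-type) dose-response curve.
def four_pl(x, bottom, top, ic50, hill):
    return bottom + (top - bottom) / (1 + (x / ic50) ** hill)

params, _ = curve_fit(four_pl, conc, viability, p0=[0, 100, 50, 1])
print(f"Estimated IC50: {params[2]:.1f} ug/mL")
```

In practice, replicate wells, plate-effect corrections, and goodness-of-fit checks would be needed before such an IC50 could be reported.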
A known characteristic of cancer cell growth and metastasis is the ability of the cells to escape apoptosis because of mutations in tumor suppressor genes. Induction of apoptosis is thus used as an important indicator of the ability of chemotherapeutic agents to inhibit tumor growth and progression. The acridine orange/ethidium bromide (AO/EB) apoptosis assay is used to study changes in cellular and nuclear morphology and characteristics of apoptosis under a fluorescence microscope [53]. Both AO and EB bind to DNA and RNA by intercalation between adjacent base pairs, but AO stains both live and dead cells, while EB stains dead cells only. Live cells appear green under the microscope; early apoptotic cells have a bright green nucleus due to chromatin condensation and nuclear fragmentation; late apoptotic cells appear orange because they take up EB; and necrotic cells also stain orange but have a normal nuclear morphology [53,66]. After cells are counted under the microscope, an apoptotic index is calculated. The living status of a cell can also be determined by measuring the amount of ATP in the cell, since ATP is necessary for the life and function of all cells, and levels of cytoplasmic ATP decrease in cases of injury and hypoxia. After a cell is lysed, ATP is free to react with luciferin and luciferase, producing high-quantum-yield chemiluminescence that, under optimal conditions, is linearly proportional to the ATP concentration. Compared to the MTT assay, the luciferase-based assay showed higher sensitivity and reproducibility over several days and was able to detect the viability of cells at counts as low as 2000 cells/well, compared with the minimum of 25,000 cells/well required for the MTT assay mentioned above. The ATP assay has been reported as sensitive compared to the MTT and calcein assays used to determine the potency of cytotoxic agents [67]. This high sensitivity allowed for the detection of cytotoxic agent-induced ATP breakdown after incubation periods as short as 1 h, which provides an additional advantage over the MTT assay, which requires approximately 72 h of incubation. A further advantage of the ATP assay is the short measurement time of 15 s per well, compared with the MTT assay, which requires a 1-2 h solubilization step of the formazan before an absorbance measurement [67]. However, the ATP assay cannot differentiate between cytostatic and cytotoxic cellular effects [64]. Many anticancer drugs in use today inactivate target cells by inducing apoptosis [68]. As one of the later steps in apoptosis, DNA fragmentation, a process resulting from the activation of nucleases that break down DNA into small fragments, can be used as a measure of anticancer agent bioactivity [68]. When DNA is broken down, many 3′-hydroxyl ends are exposed. One of the TdT dUTP nick-end labeling (TUNEL) assay methods involves the attachment of the thymidine analog 5-bromo-2′-deoxyuridine 5′-triphosphate (BrdUTP) to these ends, with the help of terminal deoxynucleotidyl transferase (TdT) [68]. After it is incorporated into the DNA, BrdU can be detected by an anti-BrdU antibody using standard immunohistochemical techniques, fluorescence microscopy or flow cytometry [68]. TUNEL assays have been used in the evaluation of many anticancer compounds, including derivatives of betulinic acid and betulin, and 5-fluorouracil [68]. The harsh denaturing conditions necessary for the binding of anti-BrdU cause cell disruption and protein degradation, which is a limitation, particularly if concurrent protein content measurement or molecular analysis is required [64,69]. The arithmetic behind the AO/EB and ATP readouts is sketched below.
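The two readouts just described reduce to simple arithmetic: a count ratio for the AO/EB apoptotic index, and a linear standard curve for ATP. The following minimal Python sketch uses entirely hypothetical counts and luminescence values, purely for illustration.

```python
# Hypothetical AO/EB and ATP readout calculations.
import numpy as np

# AO/EB: cell counts from fluorescence microscopy fields (hypothetical).
live, early_apoptotic, late_apoptotic, necrotic = 412, 58, 31, 14
total = live + early_apoptotic + late_apoptotic + necrotic
apoptotic_index = 100 * (early_apoptotic + late_apoptotic) / total
print(f"Apoptotic index: {apoptotic_index:.1f}%")

# ATP assay: luminescence is linear in ATP concentration, so a least-squares
# line fitted to known standards converts sample readings to ATP amounts.
std_atp_nM = np.array([0, 10, 50, 100, 500], dtype=float)
std_lum = np.array([120, 980, 4900, 9800, 48500], dtype=float)  # relative light units
slope, intercept = np.polyfit(std_atp_nM, std_lum, 1)

sample_lum = 23000.0
sample_atp = (sample_lum - intercept) / slope
print(f"Estimated sample ATP: {sample_atp:.0f} nM")
```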
Although two-dimensional (2D) cell culture systems, involving the growth of a monolayer of cells on a plastic surface, and in vivo animal models were long the standard for drug testing, data from 2D models are often misleading, resulting in difficulties with translational efficacy in vivo [70,71]. This is mostly because, while convenient, 2D systems are overly simplistic representations of the complex in vivo tissue architecture, failing to incorporate the biochemical and biomechanical crosstalk between tumors and the surrounding tumor microenvironment [70,72,73]. The absence of drug transport barriers, extracellular matrix and blood vessels, immune cells, and the gradients of oxygen tension, extracellular pH, nutrients and catabolites normally present in tumor conditions in vivo, coupled with the short-term culture conditions of 2D systems, may select for cytotoxic drugs that prove insufficient in preclinical and clinical settings [70,72,74]. Therefore, in a bid to overcome these limitations and avoid the ethical concerns involved in animal testing, new test systems such as the Boyden chamber, three-dimensional (3D) cultures, microfluidic device systems, and models created using 3D bioprinting were developed [72]. One of the new systems used to model complex in vivo intercellular interactions in vitro is the Boyden chamber, consisting of two chambers containing media and partitioned by a semi-permeable membrane [72,75]. To study cell migration in a Boyden chamber, cells are seeded in the upper chamber and allowed to migrate under the influence of a concentration gradient of chemotactic substances added to the media in the lower chamber [75,76]. Cell migration is assessed by measuring the optical density of labeled cell extracts and corresponds to the effectiveness of the biologically active substance [75] (a minimal sketch of this normalization appears below). The Boyden chamber has been used to assess and compare the invasive activity of spheroids containing only tumor cells and spheroids containing a mixture of tumor and stem cells, showing increased invasion by the heterogeneous spheroids compared with spheroids containing only tumor cells [72,77]. Although easy to use, the Boyden chamber does not allow for direct cell-cell interactions, limiting its ability to fully reproduce in vivo conditions and creating a preference for other evolving methods such as 3D culture and microfluidic systems [75]. An ideal 3D system would mimic a specific solid tumor microenvironment, in which cells are able to replicate and interact with other cells while being able to differentiate [78]. Most 3D systems, however, do not exactly simulate in vivo conditions, although they are more representative than 2D models. Three-dimensional systems may be classified as free-floating anchorage-independent systems, scaffold-based systems, and organoids, which are hybrid 3D culture models composed of spheroids [78]. Regardless of the class of 3D system used, research has shown that cancer cells grown in 3D culture may respond to drug treatment similarly to cancer cells in the native environment [70,72,78]. In addition, differences in apoptotic sensitivity to chemotherapeutic agents between 2D- and 3D-cultured cells have been noted in non-malignant and malignant mammary cell lines [79,80]. In another study, BT-549, BT-474, and T-47D breast cancer cells in 2D culture were less resistant to paclitaxel and doxorubicin than a 3D culture of the same cells [72,81]. Three-dimensional cell culture systems also provide an alternative to suspension cultures, which are necessary for growing poorly adherent cancer cells and non-solid tumor cells such as leukemia [70].
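Since the Boyden-chamber readout mentioned above is essentially an optical-density ratio against a control, a small sketch may help make the normalization concrete. All OD values, the replicate layout, and the name "migration index" are hypothetical assumptions for illustration, not values or terminology taken from the cited studies.

```python
# Hypothetical Boyden-chamber readout: optical density of labeled extracts
# of migrated cells, expressed relative to an untreated control condition.
import numpy as np

od_control = np.array([0.42, 0.45, 0.40])  # migration without chemoattractant/drug
od_treated = np.array([0.71, 0.68, 0.74])  # migration under the tested condition

migration_index = od_treated.mean() / od_control.mean()
print(f"Migration index vs. control: {migration_index:.2f}x")
```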
In addition, the development of organoid cultures has created more ways to carry out high-throughput drug screening using 3D culture, which may facilitate personalized cancer treatments, biomarker discovery, and mechanistic studies of drug resistance [73]. Organoid cultures have been successfully used to model pancreatic ductal adenocarcinoma from patient-derived xenograft tumors and from a patient's prostate cancer bone metastasis [72]. Bioprinting allows for the creation of various models that mimic the processes occurring in the tumor microenvironment (TME) and is a method for constructing complex 3D biological structures. This is achieved by printing a bioink, composed of an extracellular matrix (ECM) or other synthetic substrate together with cells, layer by layer in a computer-designed pattern [72,82]. In this way, systems have been created that mimic the TME of cervical cancer, of triple-negative breast cancer with fibroblasts, and of patient-derived cancer cells combined with fibroblasts and endothelial cells [82]. Three-dimensional bioprinting has been used to highlight the trophic role of stromal or immune cells: breast cancer cells cultured with fibroblasts in spheroids remained viable for over 30 days and were resistant to paclitaxel, unlike homogeneous breast cancer spheroids [72]. Three-dimensional bioprinting also makes it possible to study immune cell behavior in the TME. For example, glioblastoma cells in a 3D bioprinted model were shown to polarize actively recruited macrophages into glioblastoma-associated macrophages, enhancing the proliferation and invasiveness of the glioblastoma cells [72]. Three-dimensional bioprinting can also be used to design systems that simulate aberrant tumor vascularization, both to better understand tumor biology and for in vitro drug testing [72,82]. Microfluidic systems involve the use of small devices designed for cell cultures to mimic perfusion, thus allowing a steady supply of oxygen and nutrients to cells and the removal of wastes [72,78]. They make it possible to control fluid flow, temperature, hydrodynamic and hydraulic pressures, shear, and chemical gradients in vitro to simulate physiological conditions in the TME [72]. The device may be designed with a physical barrier between compartments, or with a non-physical barrier such as a biomimetic extracellular matrix dividing the compartments [78]. Microfluidic systems can be used to simulate a metastatic model of tumors, allowing researchers to study the effects of anti-metastatic drugs on tumor cell migration. For example, a microfluidic system using collagen-Matrigel hydrogel matrices was used to reproduce the microenvironment and experimental conditions needed to study the migration and invasion of H1299 lung adenocarcinoma cells [72]. Different forms of microfluidic systems, such as well-plate, droplet, and continuous-flow microfluidics, are amenable to high-throughput drug screening, making them desirable for anticancer drug screening [83]. Real-Time Assessments of Cell Culture Assays With the establishment of 3D cultures, there is a need to monitor and record the cultures in 3D in real time, over the time needed for progression to occur, namely, in 4D [84]. Real-time image-based analysis of cellular response to drug activity in vitro and in vivo may expedite drug development timelines, decrease costs, provide a better understanding of adaptive responses and increase clinical predictivity when used with relevant model systems [85].
While traditional time-lapse epifluorescence and confocal microscopes provide detailed temporal and spatial analysis of cellular function, they are restricted to one or a few samples per experiment, limiting their application in drug discovery. However, new-generation live-cell imaging microscopes allow for the examination of dynamic cellular processes in response to multiple molecular or pharmacological interventions [85]. These include the IncuCyte™, Cell-IQ™ and BioStation CT™, which are equipped with software to remotely control image acquisition, filter optic configurations and image analysis, and are optimized for long-term kinetic studies across multi-well plates [85]. The standard IncuCyte-FLR™ system can accommodate up to six 384-well plates and can automatically monitor cell growth, cell migration into a wounded monolayer, angiogenesis and apoptosis [85]. In a recent study, the Sartorius IncuCyte® system was used to investigate the killing potential of immune cells on cancer cell lines, tracking living cells labeled with a red fluorescent protein, and cell death through the green fluorescent signal generated when apoptotic pathways are activated [86]. The Cell-IQ® system is a fully integrated incubator with continuous live-cell imaging and an automated analysis platform that combines phase-contrast microscopy and fluorescence image acquisition with an analyzer software package for the quantification of migration image data [85,87]. Processes such as cell attachment, migration velocity, migration direction, neurite outgrowth, vesicle formation, angiogenesis and stem cell differentiation have been documented using the Cell-IQ™ system, which allows for robust kinetic studies of phenotypic responses to drug treatment [85]. The Nikon BioStation CT™ platform is reported to be the first multi-objective fluorescence and phase-contrast microscope combined with automated plate-handling robotics within a cell-culture incubator [85]. The BioStation CT was used to demonstrate reduced spheroid migration velocity and suppressed spheroid fusion in the human breast cancer cell lines BT474 and T47D when exposed to trastuzumab and paclitaxel, corroborated by ATP-quantification cell viability testing [79]. The ability of these systems to document real-time drug responses in cells has the added advantage of making it easier to quantify transient phenotypic responses, optimize time points for endpoint studies, determine accurate dosing and scheduling regimens, identify cancer cell adaptive responses, and facilitate more robust quantitative analysis from fewer specimens [85]. The combination of functions in one place, the ability to maintain steady environmental conditions, and remote control of multiple phases of an experiment are attractive features of these automated live-cell analysis systems.
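As an illustration of the kind of kinetic readout these platforms produce, the following is a minimal Python sketch that fits a logistic growth curve to a confluence time series, the sort of analysis used to compare growth rates between treated and untreated wells. The time points, confluence values, and noise level are hypothetical, and nothing here reflects any specific vendor's software or file format.

```python
# Hypothetical kinetic analysis: fit a logistic growth model to a
# confluence-over-time series exported from a live-cell imager.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Confluence (%) over time: plateau K, growth rate r, midpoint t0."""
    return K / (1 + np.exp(-r * (t - t0)))

t = np.arange(0, 96, 6, dtype=float)  # imaging time points (hours)
rng = np.random.default_rng(1)
confluence = logistic(t, 95, 0.12, 40) + rng.normal(0, 1.5, t.size)  # synthetic data

(K, r, t0), _ = curve_fit(logistic, t, confluence, p0=[90, 0.1, 48])
print(f"Fitted growth rate r = {r:.3f} /h, plateau = {K:.0f}% confluence")
```

Comparing the fitted rate r across treatment conditions is one simple way such kinetic data can be reduced to a drug-response readout.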
Conclusions The increasing prevalence of cancers worldwide has made the development of quicker methods to create, develop, and test new anticancer drugs a necessity. Anticancer bioassays have proven to be powerful tools in the drug discovery process and in preclinical validation. However, caution should be exercised in comparing results from the different assays, as they usually target different mechanisms. The choice of anti-cancer bioassay depends, to a large extent, on the researcher's objective, the target cancer cells and the phytochemical composition of the medicinal plant, the availability and cost of reagents, and the experience of the research team. It is also important that a few anticancer compounds of known potency be included for comparison with potential medicinal plant extracts, regardless of the method used for cytotoxicity screening, for objectivity and relevance. Cell-based anti-cancer bioassays have much to offer prior to animal testing. Because of their lower cost, the investigator has more control over confounding variables, and models can be developed to predict or approximate the phytochemical (i.e., natural drug) effect in animal and, with additional research, human subjects. Funding: This study was supported by a grant from Evans-Allen (project # DELXHMEC2017). Institutional Review Board Statement: Not applicable.
How does digital transformation impact bank performance? Abstract Digital transformation is a keyword that has not only been much mentioned in recent years but is also being strongly pursued by companies. However, the benefits of digital transformation for companies are still an issue that needs to be researched. Therefore, this study is conducted to determine the impact of digital transformation on banks' business results. The research was conducted on Vietnamese joint stock commercial banks listed on the stock exchange. Text analysis of annual reports is used to measure banks' level of digital transformation from 2015 to 2021. The results show that digital transformation has a negative impact on bank performance (through return on assets and return on equity). Furthermore, the study also found a paradoxical situation in which COVID-19 increased the profits of banks. These results provide interesting discussion points related to digital transformation and bank performance. Introduction Companies and countries worldwide are in an era of technological development, and the digitalization trend is being applied in all industries (Zhai et al., 2022). Accordingly, digital transformation has been considered the leading trend of recent years. Regarding the speed of digital transformation of the economy, countries such as the US, China, and Singapore are considered to be leading the digital transformation trend. Therefore, implementing digital transformation becomes an inevitable trend, not only for countries but also for companies. However, whether the implementation of digital transformation succeeds or fails is a matter of consideration. Will digital transformation bring economic benefits to companies, or only create pressure that reduces firm performance? Fitzgerald et al. (2014) report a digital technology dichotomy: directors recognize the advantages of adopting digital transformation but are disappointed with its progress and with the performance gains obtained from it in their companies. Among Vietnamese enterprises, banking is considered one of the industries undergoing the fastest digital transformation. Digital transformation in banking enables many banking transactions to be carried out on computers or mobile phones. The variety of possibilities, the time and cost savings, and the ease of use of these applications consistently win them a share of the business of the usual banking channels (bank branches). In addition, through these applications, customers can have real-time information on the valuation of their investment products, bank promotions, and the expenses they have incurred, and can receive better offers through digital means. These benefits make digital banking more attractive to young people and smartphone owners (Ananda et al., 2020a, 2020b; Panda, 2020; Talbot & Ordonez-Ponce, 2020). There have been several studies on digital transformation in companies and banks. Several studies show a positive impact of digital transformation on performance (Zhai et al., 2022). However, some studies show a relatively slow impact of digital transformation on firm performance: after digital transformation, it takes about five years for any effect on firm performance to appear (Beccalli, 2007; Kriebel & Debener, 2019). Most studies imply a linear relationship between digital transformation and business results. However, some studies show a U-shaped relationship between digital transformation and business results (Guo & Xu, 2021).
There is still much debate about evaluating the impact of digital transformation on the business results of enterprises. It can be seen that there are many different research results in specific contexts. Therefore, specific research is needed on banking development in Vietnam. Digital development has been changing economies around the globe at a rapid pace, and the COVID-19 pandemic has contributed to accelerating this process. In this strong digitization trend, developing digital banking is the inevitable path for Vietnamese banks. Because of this importance, we conduct research to determine the impact of digital transformation on the business results of Vietnamese joint stock commercial banks. This study makes the following contributions: (1) It helps supplement the theory of innovation and the business results of enterprises; digital transformation will change how banks operate amidst increasing competition in the industry. (2) It explores the extent of digital transformation and whether it increases the business results of banks when resources (people, finance, time) are focused on this activity; the study investigates whether this trade-off of resources leads to sustainable development or creates difficulties for banks. (3) From the results of this study, some policy implications are proposed to help banks operate more efficiently when deciding to carry out digital transformation activities. These policy implications will assist banks in making effective decisions regarding digital transformation. Digital transformation There are many definitions of digital transformation (DT). For example, some researchers hold that DT is the use of new technologies, such as mobile devices and digital communication, to improve business operations (Fitzgerald et al., 2014); these activities can include improving operational processes, improving the customer experience of services, or digitizing all work on the basis of machine resources (Fitzgerald et al., 2014). Other studies hold that DT is the application of information technology to production and business activities (Agarwal et al., 2010). In addition, some authors believe that DT is the application of digital technology to innovate business models and create more value for businesses (Kane et al., 2015; Schallmo et al., 2017). In this study, the authors use the concept/perspective of Fitzgerald et al. (2014) and define DT as the use of new digital technology to improve the business operations of enterprises. Regarding objectives and concepts, DT uses digital technology to innovate business models and add value to the company. However, not all businesses implementing DT obtain the desired effect (Zhai et al., 2022). In the United States, as of 2017, up to 50% of business owners reported that their DT efforts had failed (according to a survey by Wipro Digital in 2017). Although businesses are very interested in DT activities, successful implementation is a big issue that needs to be considered in order to obtain the desired results (Kane et al., 2015; Schallmo et al., 2017), given the issues of switching costs, workforce, and time. Conversely, many businesses will also achieve better results when they reduce operating costs (Zhai et al., 2022) and significantly add value to the company (Kane et al., 2015; Schallmo et al., 2017). Regarding the characteristics of DT, according to Bharadwaj (2013) and Nambisan et al. (2019), the outstanding feature of DT is the application of technological techniques to enterprises' business and production activities.
Besides this, Nambisan et al. (2019) also show that DT brings higher efficiency to the company by reducing costs and increasing profit as processes are improved. At the same time, Verhoef et al. (2021) show that DT helps develop business models and create more value for companies. Measuring digital transformation: There are many approaches to measuring and assessing digital transformation (DT). First, DT can be measured through a questionnaire about digital transformation in a business, covering items such as the length of time digital banking has been used in the organization (Verhoef et al., 2021). Additionally, a common method for measuring DT involves textual analysis (Kriebel & Debener, 2019). Specifically, researchers use the frequency of appearance of terms related to digital transformation (Verhoef et al., 2021). Some of the terms commonly used include: digital transformation, digitalization, internet, website, ATM, web, computer, online, information system, IT, information technology, bankcard, virus, digital, e-banking, payment service, hardware, cloud, email, mobile device, server, tablet, password, encryption, smartphone, LAN, wireless, and so on. This study uses the second measurement approach, through text analysis of keywords related to digital transformation in annual reports (a minimal sketch of such a keyword count is given below). Digital transformation for the economy as well as for companies Information technology, or digital transformation, has a positive relationship with economic growth (Hajli et al., 2017). However, according to Hajli et al. (2017), although DT plays a role in the economy, it does not positively affect labor productivity. On the other hand, at the microeconomic level, DT helps businesses improve their business activities (Kaur & Sood, 2017), thereby increasing the contribution of enterprises to GDP in general (Galindo-Martín et al., 2019). Similarly, the study by Ardito et al. (2021) looked at a sample of US small businesses; the findings suggest that DT adoption can enhance innovation. Additionally, Ribeiro-Navarrete et al. (2021) investigated the effect of DT on the financial performance of the service industry; the authors use social media and instruction in digital tools to capture DT and discover that these elements improve business success. Similar research was conducted by Llopis-Albert et al. (2021) on the impact of DT strategy on stakeholder satisfaction and company operating models in the automotive industry. Digital transformation and bank performance The stronger a bank's implementation of digital transformation, the stronger its business strategy. The bank aims to enhance corporate value by integrating digital technology into its operations and processes (Saebi et al., 2017; Zhai et al., 2022). Applying digital transformation will help improve the bank's reputation and attract more customers, which will help the bank compete and strengthen its position (Ardito et al., 2021; Lin & Kunnathur, 2019). At the same time, applying digital technologies will help enhance communication between leaders, employees, and customers, and will help reduce operating costs (Ardito et al., 2021; Lin & Kunnathur, 2019; Zhai et al., 2022). The application of digital transformation will help save time and optimize operational processes, as well as improve risk management. As a result, banks will improve service quality for customers (Boufounou et al., 2022). Reducing operating costs and increasing work efficiency also increase the bank's performance.
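As a concrete illustration of the keyword-frequency measure described above, the following is a minimal Python sketch. The abridged keyword list is drawn from the terms quoted in this section, while the scoring function, its name, and the sample sentence are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical keyword-frequency score for digital transformation (DT):
# count case-insensitive occurrences of DT-related terms in a report's text.
import re

KEYWORDS = [
    "digital transformation", "digitalization", "internet", "website",
    "online", "information technology", "e-banking", "mobile device",
    "cloud", "smartphone", "fintech", "atm",
]

def dt_score(report_text: str) -> int:
    """Total count of DT-related keyword occurrences in the text."""
    text = report_text.lower()
    total = 0
    for kw in KEYWORDS:
        # Word boundaries avoid matching inside unrelated longer words.
        total += len(re.findall(r"\b" + re.escape(kw) + r"\b", text))
    return total

sample = "The bank expanded its e-banking and mobile device services online."
print(dt_score(sample))  # -> 3
```

In a full pipeline, each bank-year annual report would be converted to plain text and scored in this way, yielding the panel variable for the regressions below.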
Therefore, the research hypothesis is put forward as follows: Hypothesis: Digital transformation has a positive impact on bank performance. Method Research model Based on the research objectives and with reference to the previous research of Zhai et al. (2022) and Guo and Xu (2021), the research model can be written as: Performance(i,t) = β0 + β1·DT(i,t) + β2·SIZE(i,t) + β3·LOAN_GROWTH(i,t) + β4·COVID(t) + β5·NPL(i,t) + β6·GGDP(t) + ε(i,t), where Performance is ROA or ROE. The research variables in the model are described in detail as follows. Bank performance: The performance of banks is described through two representative indexes, ROE (return on equity) and ROA (return on assets). These are two common indicators chosen by many researchers as proxies for performance. Digital transformation: Based on text analysis, the study scans the annual reports of each bank. Words related to digital transformation in banking, such as technology, internet, digitization, fintech, ATM, etc., are treated as keywords related to digital transformation. The number of words related to digital transformation therefore represents the level of digital transformation in Vietnamese joint stock commercial banks. Control variables: In this study, the research team uses control variables in the model including bank size based on total assets (SIZE), the loan growth rate (LOAN_GROWTH), the COVID-19 pandemic (COVID), the non-performing loan ratio (NPL), and annual economic growth (GGDP). These are indicators commonly considered to influence a bank's performance. Details of the measurement of the variables in the research model are described in Table 1. Data Given the objective and the description of the variables in the model, the data are collected from the financial statements and annual reports of the joint stock commercial banks listed on the Vietnam Stock Exchange from 2015 to 2021. Data on digital transformation are collected by text analysis of the banks' annual reports: words and phrases related to digital transformation are counted when they appear in the report. All other variables are collected from financial statements via platforms such as Stoxplus (belonging to FiinPro, a reputable financial data provider in Vietnam). The collected data are encoded and entered into STATA software version 15 for calculation and analysis. Data analysis Panel data are used, with the examination of listed banks covering 2015-2021. Depending on the characteristics of the field of research, one of three models can be used for panel data: (1) the pooled OLS model, the simplest model, which does not account for differences between banks and is therefore rarely used; (2) the fixed effects model (FEM), a development of pooled OLS that accounts for differences between banks and allows correlation between the unobserved bank-specific characteristics and the independent variables; and (3) the random effects model (REM), which is comparable to the fixed effects model in allowing for differences between banks but assumes no relationship between these bank-specific effects and the model's independent variables. The Hausman test is used to choose between the fixed and random effects models. If the chosen model does not suffer from the defects discussed below, it can be concluded that the research model is reliable for estimating the impact of digital transformation on bank performance. Otherwise, it is necessary to correct the model's defects when it encounters either of two problems: autocorrelation and heteroskedasticity.
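To make the model-selection step concrete, below is a minimal sketch using Python's linearmodels package rather than the STATA 15 workflow the paper actually uses. The panel layout mirrors the study (banks over 2015-2021), but all data are synthetic and the coefficients are invented; the Hausman statistic is computed by hand from the FEM and REM estimates.

```python
# Hypothetical FEM/REM fit and Hausman test on a synthetic bank panel.
import numpy as np
import pandas as pd
from scipy.stats import chi2
from linearmodels.panel import PanelOLS, RandomEffects

rng = np.random.default_rng(0)
idx = pd.MultiIndex.from_product(
    [[f"bank{i}" for i in range(20)], range(2015, 2022)], names=["bank", "year"]
)
df = pd.DataFrame({
    "DT": rng.poisson(46, len(idx)).astype(float),  # keyword counts
    "SIZE": rng.normal(18, 1, len(idx)),            # log total assets
    "NPL": rng.uniform(0.005, 0.03, len(idx)),      # non-performing loan ratio
}, index=idx)
df["ROA"] = 0.02 - 1e-4 * df["DT"] - 0.2 * df["NPL"] + rng.normal(0, 0.003, len(idx))

exog = df[["DT", "SIZE", "NPL"]]
fe = PanelOLS(df["ROA"], exog, entity_effects=True).fit()
re = RandomEffects(df["ROA"], exog).fit()

# Hausman test: H = (b_FE - b_RE)' [V_FE - V_RE]^(-1) (b_FE - b_RE),
# chi-squared with k degrees of freedom; a small p-value favors FEM.
b = fe.params - re.params
V = fe.cov - re.cov
H = float(b.T @ np.linalg.inv(V) @ b)
print(f"Hausman H = {H:.2f}, p = {chi2.sf(H, len(b)):.3f}")
```

Note that in small samples the covariance difference V_FE − V_RE may fail to be positive definite, and the subsequent autocorrelation, heteroskedasticity, and GMM steps reported in the paper are not reproduced here.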
To address endogeneity, the authors apply a generalized method of moments (GMM) correction. Descriptive statistics The collected data are encoded and entered into STATA 15 software for analysis, and descriptive statistics techniques are first used to obtain preliminary information about the variables. The results of the descriptive analysis show that the mean ROA achieved by banks is 0.09, with a maximum of 0.36 and a minimum of −0.004. The average ROE is 0.115, with a maximum of 0.303 and a minimum of −0.046. The mean digital transformation score of banks is 46.2 (46 words related to digital transformation per annual report), with a maximum of 219 words and a minimum of one word. Keywords related to DT have been collected since 2015, and in the early stages some banks rarely mentioned keywords related to digital transformation, so counts in some reports were very low. Regression The regression analysis was performed for the dependent variables ROA and ROE. The Hausman test results show that the FEM model is more suitable than the REM model for both dependent variables. However, tests reveal both autocorrelation and heteroskedasticity in these two models. Therefore, the study makes corrections through the GMM model, which also helps to address endogeneity issues. The AR(1) indicators meet the requirement, with p-values all less than 0.05; the AR(2) indicators are all greater than 0.05; and the p-value of the Hansen test is also greater than 0.05. Therefore, the GMM models used are reliable. The regression results are described in detail in Tables 3 and 4. The regression analysis with GMM shows that digital transformation has a negative impact on firm performance (ROA and ROE), indicating that digital transformation activities are reducing bank performance. In addition, COVID-19 has a positive impact on bank performance: the pandemic did not harm the banking sector and in fact brought banks more profits. The NPL ratio has a negative effect on ROE (the coefficient is negative and statistically significant at 5%): the higher the NPL ratio, the lower the ROE. LOAN_GROWTH positively impacts ROA and ROE (significant at 5%). Finally, GGDP positively impacts ROA and ROE (significant at 5% and 10%, respectively). The analyses of the impact of digital transformation on bank performance through the two indicators, ROA and ROE, show similar results. From this, it can be seen that the promotion of digital transformation during the research period reduced bank performance. Discussion The research results show a negative impact of digital transformation on ROA and ROE. These results are similar to those of the previous studies by Beccalli (2007) and Kriebel and Debener (2019). This study and that of Kriebel and Debener (2019) are similar both in the measurement of DT (through the frequency of related keywords in the annual report) and in the results (a negative impact of DT on business performance). Kriebel and Debener (2019) conducted their research in the German market and suggested that digital transformation activities require up to five years to become effective. The difficulties in the digital transformation process are related to IT infrastructure issues.
The investment in resources and technological infrastructure is significant, but the business results from investing in digital transformation (DT) may not be immediately apparent (Kriebel & Debener, 2019). Returning to the annual reports, the keywords about digital transformation are often accompanied by adjectives such as 'missing', 'limited' and 'error', which points to problems in deploying and maintaining the DT systems. For a developing country like Vietnam, implementing digital transformation involves a trade-off between the initial investment and the resulting outcomes. It can be argued that Vietnamese banks are experiencing the 'profitability paradox'. This phenomenon suggests that, under competitive pressure, using technologies available to all market participants can increase efficiency but not affect profits (Beccalli, 2007; Kriebel & Debener, 2019). In Vietnam, successful digital transformation requires investing in more technological equipment, innovating transaction protocols, and dealing with the many errors encountered in the implementation process. Although digital transformation is considered a success, banks have had to trade off substantial financial, human and time costs to build the systems and handle the errors that occur during implementation. Therefore, at the stage of partial digital transformation, business operations may improve, but because of the large trade-off costs, profits from digital transformation are not evident in this period. COVID-19 has positively impacted ROA and ROE, indicating that banks generated higher profits during the COVID-19 period. The work-from-home model significantly reduced costs in bank operations, while no disruptions in business operations (credit and non-credit activities) were reported. In addition, the Vietnamese government's support of post-COVID-19 loans increased bank profits. The NPL ratio negatively impacts banking results (both ROA and ROE). It can be seen that credit risk control and management activities are of great significance to banks: an increase in the non-performing loan ratio reduces the profitability of banks. This result is consistent with previous studies such as Ekşi and Doğan (2022), which also show the negative impact of non-performing loans on bank performance. Consistent with Nguyen et al. (2021), loan growth positively impacts ROA and ROE, demonstrating that lending remains a primary source of income for banks. The interest rate differential between deposits and loans, and other lending activities, make lending a crucial activity for banks. Additionally, economic growth positively impacts ROA and ROE, indicating that banks benefit from general economic development. The country's economic growth brings benefits to both individuals and businesses; increased trade and better investment opportunities increase the demand for loans and deposits. Notably, the real estate market's upward trend in Vietnam is closely related to borrowing from banks, resulting in a significant increase in bank profits when the market booms. Conclusion and implications The study presented the content of digital transformation in banks in Vietnam. Digital transformation is the use of new digital technologies to enable business innovations in banking. At the same time, through data analysis assessing the impact of digital transformation on bank performance, the study shows the extent to which digital transformation negatively impacts bank performance (ROA, ROE).
This result indicates that digital transformation decreases profit. From the results of this study, the following implications are given: Theoretical implications The research has shown a significant relationship between digital transformation and bank performance, and that the level of digital transformation implemented by a bank can negatively impact its performance. This finding supports the resource-based theory and the profitability paradox in the banking industry, as banks face competition from the market and pressure to adopt digital transformation. However, limited infrastructure resources have hindered the development of digital transformation, which has resulted in more efficient banking operations but reduced profits, in line with the profitability paradox. Practical implications These research results also help banks devise investment policies for digital transformation that improve performance. The empirical results show that difficulties with IT infrastructure are the main obstacles to banking operations. Therefore, banks need to treat IT systems as an important factor requiring upgrades to better serve digital transformation activities (which are mandatory in the face of increasingly fierce competition from rivals). Digital transformation needs a specific roadmap to ensure stable and sustainable development. In addition, for investors, information about digital transformation can be a signal for their investment decisions. When information about IT systems is good and digital transformation items are stable and increasingly developed, these are signs of increased profitability for the bank. When information about increasing digital transformation does not mention technology infrastructure upgrades, this should still be considered a negative signal for profitability growth. Limitations and future research The study systematized the theory related to digital transformation and identified the impact of digital transformation on the business results of Vietnamese joint stock commercial banks through quantitative data analysis. However, the study still has certain limitations. The study assessed digital transformation based only on text analysis of the frequency with which technology-related words are mentioned, not on measures based on the actual development of applications in the digital transformation process. Given this limitation, the study proposes directions for further research: subsequent studies should use newer and richer metrics to provide more detailed assessments of digital transformation.
The great depression as a global currency crisis: An Argentine perspective Many of the works that have tried to understand the proximate causes of the Great Depression have emphasized the consequences of maintaining the Gold Standard during the interwar period, as its innate inflexibility prevented the use of expansive monetary policies and generated recessionary deflationary processes. Another perspective, both complementary and different, is that offered by new works that consider the Great Depression to be, to some extent, a consequence not so much of the Gold Standard per se, but of the return to redeemability at an overvalued parity after the Great War. The novelty of this new approach is to stress the negative effect of maintaining an unbalanced price for the metal over time. The models that have analyzed the currency crises suffered in recent decades by many Latin American countries help in understanding the path that led the world to the Great Depression, with the convertibility regime applied in Argentina between 1991 and 2001 being particularly relevant. Introduction In economic history, it has been common to attribute a strong responsibility to the gold standard in generating the Great Depression, as in Temin (1989), Eichengreen (1992), and Bernanke (1995). Temin (1989) pointed out that the monetary system imposed a deflationary necessity on the world economy, with negative effects on economic activity. He concluded: 'In fact it was the attempt to preserve the gold standard that produced the Great Depression' (Temin 1989, p. 38). The situation would have been different, Eichengreen (1992) argued, if countries had coordinated their actions to allow expansionary policies. The model Many of the economic crises experienced by Latin American countries have been characterized as currency or exchange rate crises. Several works have analyzed these processes (such as Frankel and Rose 1996; Reinhart et al. 1998; Kaminsky and Reinhart 1999). Some of the variables considered include inflation, the exchange rate, the money supply, the fiscal deficit, and the current account deficit. One of the most paradigmatic cases of exchange rate crisis was the experience and outcome of the convertibility regime applied in Argentina between 1991 and 2001 (Damill et al. 2004; Pérez Caldentey and Vernengo 2007). Some authors have already pointed out the similarity of Argentina's convertibility regime to the gold-exchange standard model implemented almost globally in the interwar period (see Sumner 2015). Unlike the traditional Gold Standard from before the Great War, the gold-exchange standard implemented in the interwar period was one in which the central banks of participant countries held US dollars as their reserves rather than gold bullion. That system 'economized' on gold by offering greater flexibility for credit creation and, consequently, retarding the effects of the 'drain mechanism' built into the traditional Gold Standard, as pointed out by Larry White (1995:115); this is the reason for the perceived similarity. Seen from the perspective of this Latin American case, the new explanations of the Great Depression, centered on the disequilibrium of the value of gold under a system in which the constraints on the production of money were not as stringent as generally thought, seem appropriate. A fixed parity with an increasingly overvalued local currency contains the germ of a future crisis in which the exchange rate is adjusted towards sustainable levels.
The cycle of a currency crisis begins with anchoring the currency to an external parameter such as the dollar. Then, at some point, there is a price increase, as a consequence of continued expansion of liquidity, that leaves the exchange rate in disequilibrium vis-à-vis the parameter. If a country has high reserves of this external anchor, be it a foreign hard currency like the US dollar or gold, and if expectations of maintaining its nominal value are favorable, the disequilibrium can be maintained for a while. As expectations become negative and economic agents threaten to move out of money-substitute instruments and acquire the undervalued asset, the government can sustain the situation by applying a contractionary monetary policy that increases the interest rate, thus making it attractive to hold speculative investments in the local currency. In turn, this generates an economic downturn by discouraging investment due to the increase in financial costs. At the same time, a deflationary trend is generated. At some point, the situation becomes unsustainable, the country begins to lose reserves, and an outflow of capital develops. Finally, governments are forced to move abruptly away from the fixed exchange rate and allow it to depreciate. With the new and higher exchange rate, the expectations of speculators change and capital returns to the country, thus increasing reserves. On the other hand, monetary policy is no longer restrictive, so the interest rate goes down and investment is encouraged. At the same time, prices tend to stabilize and consumption recovers. High interest rates, deflation, and a fall in GDP are the consequences of the disequilibrium in the exchange rate between the local currency and the parameter. These variables reverse their negative tendency once the local currency is devalued, leaving the exchange rate to reveal the equilibrium price of the external parameter. In Tables 1 and 2, it can be seen that, in the cases of Argentina's convertibility crisis and the Great Depression in Britain and the United States, the economic variables (before and after the adjustments in the parameters) reflect this transition quite well. Taking as a pivot the years in which the countries devalued their currencies (1931 for Britain, 1933 for the United States, and 2002 for Argentina), there can be seen in each case the transition from a situation of falling or stagnant reserves, high interest rates, and decreasing output to a new stage of lower financial costs, increasing reserves, and economic recovery. It is noteworthy that in the case of the United States there was not a large drop in reserves held by the government, although one did occur in the gold holdings of private financial institutions. The gold stock followed a similar behaviour: it decreased 3.3% between 1931 and 1933 and increased 150% between 1933 and 1935 (see Bureau of the Census 1949, p. 276). The phases of a currency crisis and Argentina's experience of convertibility In the early 1990s, Argentina became a model for market-oriented economic reforms that seemed to perfectly embody the guidelines of the Washington Consensus. The high inflation, and at times hyperinflation, that had affected it throughout the previous decade had been overcome, and the country began to show healthy growth.
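The overvaluation at the heart of the crisis cycle described above is usually tracked with a bilateral real exchange rate index. The following is a minimal Python sketch under a fixed nominal parity; the price-level series are entirely hypothetical and are not Argentine data.

```python
# Hypothetical real exchange rate (RER) under a fixed nominal parity.
import numpy as np

nominal_rate = 1.0                                    # pesos per dollar, fixed
cpi_domestic = np.array([100, 118, 130, 138, 142.0])  # local prices, base = 100
cpi_foreign = np.array([100, 103, 106, 109, 112.0])   # foreign prices, base = 100

# RER = E * P* / P; falling values mean the local currency is appreciating
# in real terms, i.e. becoming overvalued at the fixed parity.
rer = nominal_rate * cpi_foreign / cpi_domestic
overvaluation = 100 * (rer[0] / rer - 1)
print(np.round(overvaluation, 1))  # % real appreciation relative to year 0
```

With these invented series the index shows a cumulative real appreciation of roughly 27% by the final year, the kind of drift that, in the model above, eventually forces an abrupt exit from the parity.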
The main reforms applied by Carlos Menem's government were: (1) the convertibility plan, which entailed the implementation of a conversion system that gave the peso parity with the dollar, and legal tender status to both currencies; (2) deregulation of the banking system and an end to the political allocation of credit, leaving interest rates to be determined by the market; (3) the liberalization of capital movements; (4) the privatization of public utilities; (5) a sharp reduction in customs fees and the elimination of most non-tariff barriers, alongside the total liberalization of tariffs with Brazil and the other Mercosur countries; and, from 1994, (6) the adoption of a mixed system of pensions by which employees could choose to move to pension funds managed by private managers. Menem's program seemed to herald a new era of high economic growth and low inflation, based on disciplined macroeconomic policies and market-oriented structural reforms. The growth of real GDP, which had been negative on average during the 1980s (falling at an annual rate of 1.01%), rebounded sharply to more than 10% during 1991-1992, the first two years of the stabilization plan, and more than 5% during 1993-1994. Inflation fell to single digits from 1993 onwards. Capital inflows also began to intensify, reflecting renewed confidence in the economy. In 1995, Mexico's 'Tequila crisis' interrupted this good macroeconomic performance, as it led to a reversal of capital flows and a downturn in economic activity. Shortly after the crisis, however, Argentina's monetary situation stabilized, and growth was 5% in 1996 and 8% in 1997 (IMF 2003). The logic of Menem's new monetary regime was that the high institutional and economic costs of an exit gave credibility to the adopted exchange rate anchor. Yet it also meant that, in the case of persistent deficits in the public or external sector, the country would be trapped in a system that, by design, restricted the options available to the authorities. An issue was that the parity chosen at the start of the plan left the peso overvalued in relation to its level throughout the 1980s. Then, as inflation took a while to fall, the peso's overvaluation became higher still, reaching an appreciation of approximately 25% in 1993 (IMF 2003). This situation was facilitated by the enormous flow of capital that entered Argentina as a result of privatizations and the repatriation of funds from abroad. The peso had, in real terms, appreciated by 30 to 40% by 1998, with the consequent loss of competitiveness of Argentina's exports and a fall in the level of activity in some sectors, especially the local manufacturing industry (Darvas 2012). Fernando de la Rúa assumed the Argentine presidency at the end of 1999 with the conviction that it was essential to maintain convertibility, which still had strong popular support. One of his campaign slogans had been 'with me, one peso, one dollar'. The belief that it was vitally important to maintain a historical exchange rate was similar to that adopted by Winston Churchill in 1925 when, as Chancellor of the Exchequer, he reimposed Britain's pre-war parity with gold, as well as by Herbert Hoover, President of the United States between 1929 and 1933, who considered the gold standard to be sacrosanct. Hoover (1952, p. 391) would write in his memoirs: 'A convertible currency is the first economic bulwark of free men. Not only is this a question of economic freedom, but more deeply is it a question of morals.
The moral issue lies in the sacredness of government assurances, promises, and guarantees'. But in Argentina, as in Britain and the United States after 1930, there was growing distrust in the market about how sustainable the parity would be, particularly in 2000, when depositors in peso accounts began withdrawing funds from banks. From the middle of 2001, something similar happened with dollar accounts, as the perception grew that financial institutions could not respond to the demands for currency. The special assistance received from the International Monetary Fund (IMF) failed to stabilize the fiscal situation or calm the market, leading to a sharp increase in the interest rate, which aggravated the economic crisis. At the same time, a deflationary process was generated, with prices falling by 1% during de la Rúa's presidency. Given the acceleration of the withdrawal of bank deposits, the government ordered that the funds be largely immobilized. After the fall of the government at the end of 2001, the dollar deposits were converted to pesos with a 40% premium, which was compensation far lower than the devaluation of around 250%. Bank accounts in dollars were no longer allowed, and foreign currency debts were converted to pesos at the previous parity. Finally, in December 2001, the interim president, Adolfo Rodríguez Saá, suspended the servicing of the foreign debt, which only began to be restructured in 2005, with a substantial reduction of capital. As the Argentine lawyers of Cristina Kirchner's government argued in United States courts, this outcome coincided with the measures taken by Franklin Roosevelt when exiting the gold standard in 1933 (Edwards 2018, Position 180). After the devaluation of the peso, the production and exchange mechanisms that result in economic expansion began to be reestablished. The recovery was driven by several concomitant factors: (1) growing external demand, which absorbed higher volumes at significantly higher international prices than in the previous decade (especially for the raw materials that the country exported); (2) improvements in the internal terms of trade due to a real exchange rate that remained at historically high values after the devaluation (see Fig. 1); and (3) a marked increase in public spending, which had been at low levels prior to the crisis. This evolution of prices and quantities significantly modified the weight of foreign trade in the macroeconomic aggregates. In 1998, exports of goods and services had accounted for 15% of private consumption (in 1993 the figure was 10%), whereas by 2006 the ratio exceeded 40%. The global evolution of the gold standard Between 1870 and 1914, many nations established fixed convertibility into gold, at rates that seem to have been compatible with a situation of market equilibrium. There is no doubt that the demand for gold was increasing due to the growth of the world economy, and that there was a slight global inflationary process that reduced the real value of the metal. Yet, at the same time, gold production was increasing due to the development of new deposits (particularly in South Africa) and due to technological improvements in extraction, as can be seen in the evolution of the global gold supply presented by Rockoff (1984). This situation of stability, and the expectation that it would continue, were reflected in the general lack of speculative attacks against national currencies.
By 1913, metal reserves as a backup for working capital were important, although the coverage was only partial; indeed, it was even lower than what would exist later in the 1920s. Everything would change with the Great War. Given the needs of inflationary war financing, many countries had to abandon convertibility or apply regulations that severely limited it. For the economies included here, the end of convertibility allowed for an inflation of their currencies which resulted in a weighted average increase of almost 140% in their price levels between 1914 and 1920. Even the United States, a country that joined the war only in 1917, suffered a 100% increase in its price level as it inflated its stock of money by, among other things, monetizing the inflows of European gold. This would imply a decrease in the global purchasing power of gold (see Fig. 2). [Fig. 1: The Real Price of Gold, 1913-1935. (a) Argentina's real exchange rate (1913/84 = 100); (b) global real gold price index.] Nonetheless, in those troubled years, the demand for gold would be momentarily reduced by the stagnation of world GDP between 1914 and 1918 and by the suspension of the gold standard. According to Rothbard, one of the main reasons for the Fed to have adopted an inflationary policy during the war, which lasted with short interruptions until 1928 (the years in which Benjamin Strong was the governor of the New York Fed), was to help Britain finance its war effort and later to aid its catastrophic decision to return to the gold-exchange standard at the rate of 1914. In Rothbard's words: The United States inflated its money and credit in order to prevent Britain from losing gold to the United States, a loss which would endanger the new, jerry-built 'gold standard' structure (2005: p. 271). However, it is necessary to go to other sources in order to find precise data about money and credit expansion in the war years. Friedman and Schwartz (1993) show that the wholesale price index in the U.S. rose from 65 in 1914 to 130 in 1918, on a scale in which the wholesale price level of 1926 equals 100; the money stock rose roughly from $15 billion in 1914 to $30 billion in 1918 (Friedman and Schwartz 1993: p. 197). The federal debt increased immensely during the war; of the $32 billion of total expenditures by the federal government from April 1917 to June 1919, no less than $23 billion were funded by borrowing and money creation (1993: p. 216). The Fed could not have come into being at a more opportune moment, given the necessities of war financing. In the words of Friedman and Schwartz: The Federal Reserve became to all intents and purposes the bond-selling window of the Treasury, using its monetary powers almost exclusively to that end. Although no "greenbacks" were printed, the same result was achieved by more […]. Summarizing the monetary impacts of war finance, Friedman and Schwartz state that of the $34 billion in expenses ($32 billion in Federal deficit plus $2 billion in additional Treasury cash balances), 25% was financed by taxes, 70% was borrowed, and 5% was money creation. They also note that, due to the fractional reserve system, the money supply increased by $6.4 billion, or $4.8 billion more than the fiduciary currency issued by the government.
As one of us stated elsewhere (Zelmanovitz 2015), during WWI the Fed proved its utility; now came the cost, in the form of the postwar inflation, and it was big: roughly of the same magnitude as the variation in the money supply accumulated since the beginning of belligerence. From 1914 to 1920, the per-year change in wholesale prices in the U.S. was 15%, the annual change in the money supply was 13%, and the per-year change in "high-powered money" was 12% (1993: p. 208). After the war, governments around the world considered the return to the gold standard, but higher price levels made it difficult to re-establish the old parities. To return to the gold standard, one possible option was to deflate the economy to bring prices closer to their prewar levels. To some extent, this would occur between 1920 and 1922, when the global price level fell by 13%, and in the case of the United States, by 16% (Friedman and Schwartz 1993:197). This decline was, however, insufficient, and further falls seemed impossible to achieve due to the increased inflexibility of labor costs caused by the growing power of unions, which prevented salary cuts (see Keynes 1932b: 186). There was also the danger of encouraging radical political movements that would destabilize or bring down governments, as had been the case in Russia. In 1924, the global real price of gold 2 represented only 63% of its value in 1913, while for the United States it was down to 57%. The value of gold was thus in disequilibrium, particularly due to the growing demand for the metal that resulted from the growth of world GDP by almost 40% between 1919 and 1929. Despite this general situation, after 1925, some countries, notably the United Kingdom and the United States, decided to re-establish a convertibility similar to that existing in 1914. 3

2 In order to understand what is meant by the "real price of gold", it is worthwhile to keep in mind the concept of "Purchasing Power Parity" (PPP). PPP is "the estimation of imbalances in the exchange rate between different currencies assuming that the market price of similar goods in different countries tends to converge. In other words, the exchange rate adjusts so that an identical good in two different countries has the same price when expressed in the same currency. The most famous application of this theory is the "Big Mac Index" produced by The Economist that compares the price of the famous sandwich in different countries to estimate how much one currency is over-valued or devalued in comparison with the US Dollar" (Zelmanovitz, 2015: 406). The "real price" of gold is the exchange rate that would equilibrate the demand for cash balances of a currency redeemable in gold when that rate is perceived to be different from the nominal parity for redeemability established by the government. When the nominal parity is overvalued, such a discrepancy with the real exchange rate can only be reconciled by domestic deflation or by devaluation of the currency. For example, in 1913 the official price of gold in the United States was 20.67 US Dollars per ounce; during the war, convertibility was suspended, and inflation reduced the purchasing power of the US Dollar by about half. Since the nominal parity remained the same, the "real price of gold" in the United States was reduced to 57% of what it had been before the war; that is, with one ounce of gold one could buy in 1919 only about half of the goods one could buy with the same gold before the war.
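The footnote's deflation arithmetic can be made explicit with a short calculation. The sketch below (in Python) normalizes the real gold price to 1913 = 100; the price level of 200 is an illustrative stand-in for "purchasing power reduced by about half", not a figure taken from the paper's data sources.

```python
def real_gold_price_index(nominal_parity, price_level,
                          base_parity=20.67, base_price_level=100.0):
    # Real price of gold relative to the base year (1913 = 100): the
    # nominal parity is deflated by the price level, with both series
    # expressed relative to their 1913 values.
    return 100.0 * (nominal_parity / base_parity) / (price_level / base_price_level)

# US after the war: the nominal parity stayed at 20.67 USD/oz while the
# price level roughly doubled, so the real price of gold falls to about
# half of its 1913 value (the text reports 57% for 1924).
print(real_gold_price_index(20.67, 200.0))  # -> 50.0
```

Applied with the global rather than the US price level, the same construction would yield the global figures quoted above.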
It is noteworthy that, in spite of the attention given to the return to convertibility at the pre-war parity in the UK, that was also the case in the US, where, as seen, the inflation was also significant. It is a fact that both the United Kingdom and the United States returned to convertibility at their pre-war parities. However, while in the UK an extended deflationary period ensued even before the Great Depression, the US had the "roaring 20s." Such disparity presents, a priori, a challenge to the thesis advanced in this article. In order to dispel that challenge, a more nuanced discussion of the gold standard in the U.S. leading up to the Great Depression becomes necessary. Given the complexity of American monetary history in the interwar period, only the most relevant features explaining the different results will be highlighted. The general theme here is that there were in the United States political, economic, and institutional reasons explaining why, in spite of the return to full convertibility in 1919 at an exchange rate that did not reflect the actual purchasing parity of the US Dollar, there was no run on the Dollar at least until the early 1930s. Among those features we may refer to the "safe harbor" nature of gold deposits in the United States, which resulted in a constant inflow of gold into the country, regardless of the exchange rate, from a world dealing with political instability and economic disruption in the form of revolutions and wars in the wake of the Great War. Another element was that the commodity standard in the United States was actually a bimetallic system, since gold and silver were freely coined (Selgin 2013). Yet another element is that in 1913 the Federal Reserve Act converted the U.S. from a decentralized gold system to a managed gold standard. That gave the newly created authority discretion to regulate the money supply, decoupling it from the flows of gold entering or leaving the country, as evidenced by the constant "sterilizations" of gold. The fact is that by the interwar period, the gold-exchange standard was increasingly centered on the US Dollar. As stated by Rothbard: In that way, if U.S. banks inflated their credit, there would be no danger of losing gold abroad, as would happen under a genuine gold standard (2005: p. 219). The geopolitical situation of the United States gave the American authorities room for maneuver that their British counterparts did not have, even if "(I)n the long run, whether the disturbances are monetary or real, the balance of payments under fixed exchange rates return to equilibrium", as noted by Bordo and Schwartz (1999:239) and also in Bordo et al. (1988). The lesson from those events was not that the results from the same causes were different, but that the different context of the American case made it take longer for the entire process to play out. The fact that the US also returned to convertibility at an exchange rate that would not allow for monetary equilibrium, and would ultimately require the implementation of deflationary policies, is usually overlooked in the literature about the causes of the Great Depression, and it is an important part of the argument presented in this paper.

3 There are many speculations about why the UK returned to the Gold Standard in 1925 at the pre-war parity, and much has been said about that. Allegedly, the entire corpus of Keynesian economics may be understood as a response to the ill consequences of that policy, as one may conclude from Keynes's 1925 The Economic Consequences of Mr Churchill (Keynes, 1932a: 244).
A commonly accepted view is that restoring convertibility at the pre-war exchange rate would make whole the investors in war bonds and other long-term obligations of the Crown; that was compatible with tradition (the return to convertibility after the Napoleonic wars was done at the pre-war parity), with morals (debasement since the Middle Ages has been "accepted" in emergencies if, once the emergency has passed, the metal content of the coins is restored), and seen as politically expedient (it would keep the "good credit" of the government). Such discussion, however, goes beyond our purpose in this paper. In fact, the global real price of gold continued to fall in the second half of the 1920s, from 56% of its pre-war value in 1925 to 53% in 1929. The cases of France and Italy were different, since they would restore their monetary systems based on a parity closer to their previous real levels. 4 The low price meant that the volume mined languished, falling by 20% from the level of the first three decades of the century. While South African production remained stagnant, the rest of the world's output showed a sharp decline. In 1929, American extraction would reach its historical minimum (Hirsh 1968, p. 486). On the other hand, the cheapness of the metal was encouraging its industrial and luxury/hoarding use, with the latter taking on significant dimensions in countries like India (Johnson 1998, pp. 46-48). It seems clear that the clash between the real price of gold and the movement of the economy was becoming more and more acute, until it reached its peak during 1928-29. To this effect, the eminent Swedish economist Gustav Cassel, the only one who seems to have fully called out the dangers generated by the low price of the metal, noted in 1928: 'The great problem before us, is how to meet the growing scarcity of gold from increased demand and from diminished supply' (in Irwin 2014, p. 206). Keynes would also note something similar in January 1929, a short time before the crisis began: 'A difficult, and even dangerous, situation is developing [...]. [T]here may not be enough gold in the world to allow the central banks to be comfortable […]' (in Irwin 2014, p. 216). The strong deflation that would be unleashed worldwide between 1929 and 1933 can be considered one of the mechanisms by which the market tried to adjust the real price of gold. Global prices fell by 23% in that period, but this was not a sufficient correction to return to equilibrium. In those final years of the 1920s, financial operators and governments perceived this disequilibrium to be unsustainable, leading to speculative attacks on the gold reserves of some nations. Britain, which had re-established the gold standard in 1925 at its historical pre-war parity, could only defend its scarce reserves by applying high interest rates (Johnson 1998, p. 113). From 1928, the country began to lose its metal stock, although it could momentarily stay in the system thanks to loans and support from France and the United States. Despite this, between June and October of 1929, it would lose almost 23% of its gold holdings. The United States, on the other hand, after starting to lose its reserves, had to initiate a contractive monetary policy in 1928, increasing the discount rate.
The rise in financial costs would eventually help to explode the speculative bubble in the stock market, which sparked a crisis that was transmitted to the rest of the economy through a fall in wealth and consumption and the consequent fall in international trade. 5 On the other hand, between 1929 and 1931, real interest rates increased dramatically due to the sharp deflation. In mid-1931, several governments, led by France, requested that Britain send them gold in exchange for the pounds sterling included in their reserves (see Yeager, 1976:330). Although Britain tried to curb the situation by raising the interest rate, as well as accepting US financial help, the pressure was unsustainable, and its currency had to be devalued in September by 30%, an action that was then imitated by other countries. 6 The United States also suffered a similar drainage in 1931 due to the demand of European banks and investors, as well as the American public (Friedman and Schwartz 1993, p. 316). But this country had high reserves and could momentarily cope with the crisis, although only by applying the largest interest rate increase in its history, which intensified the banking crisis (Friedman and Schwartz 1993, p. 317). A good part of the reserves that various countries held in the United States in financial assets were converted to gold in 1932, in response to the risk of a possible devaluation of the dollar. Said gold was earmarked in the Federal Reserve, waiting to be transferred to the owner countries (see Puxley 1933). At the end of that year and the beginning of 1933, 7 the speculative demand for gold revived, and again the Federal Reserve had to increase the cost of money to try to contain it (Friedman and Schwartz 1993, p. 326). 8 After losing reserves dramatically in the first months of 1933, the newly inaugurated President Roosevelt suspended convertibility in early March. Transactions and payments in gold were prohibited, and all entities and individuals were obliged to hand over their gold holdings to the Federal Reserve at the previous parity. The adjustment of payments by the "gold clauses" of public securities and debts was suspended, and bonds and debts started to be repaid in Dollars at the old parity, which soon was no longer in force. In January 1934, the US Dollar was officially devalued with respect to gold, to 59% of its former gold content, a substantial change that brought the price of the metal closer to its real value of 1910.

4 France reestablished convertibility in 1926 with a price relatively close to its real level of 1913. The country did not, therefore, suffer speculative attacks, while it could lower the interest rate and significantly increase its gold reserves. While the Bank of France owned 7% of world reserves in 1926, it had 27% in 1932 (Irwin, 2012, p. 4). Part of this increase was undoubtedly caused by gold transfers to a nation that it was believed would not abandon convertibility, thus providing greater security to the funds deposited there. The Netherlands also did not suffer from speculative attacks, as it was able to maintain the gold standard until 1936 because the parity of its currency with gold was at a value closer to the equilibrium level.

5 Romer (1993, p. 28), in her survey of the Great Depression from a Keynesian perspective, notes that the high US interest rate policy that started the contraction (characterized by a sharp decline in consumption) was, at least in part, generated by the attempt to stop gold from leaving the country.
By 1935, most countries had dramatically altered the parity value of their currencies: 31 had done so by more than 40% and 5 by more than 30%, while only 12 kept their prices stable (although many of those had already devalued their currency by 1929) (Bank 1935, pp. 8-9). The global real price of gold was back at its 1900 level by 1934, which had been achieved both by increasing the nominal gold price by 31% from 1929 to 1933 and through generalized deflation, which was around 25%. The value of gold seemed to have reached a new equilibrium point, which was accompanied by lower interest rates, the reversal of deflation, some recovery in the level of economic activity, and an increase in official gold reserves. The latter is not entirely surprising, since the increase in the value of gold would soon result in a notable increase in its global production. The increase was marked for both the US and South African mines (Mazumder and Wood 2013, pp. 163-64).

6 Arguably, it is possible to interpret the monetary policy followed by the Fed under the leadership of Benjamin Strong, at the helm of the New York Fed from 1914 until his death in 1928, as one that tried to reconcile the needs of war finance during the Great War, the goal of price stability at home, and support for the return of the international gold standard, then reframed as the gold-exchange standard centered on the US Dollar. That implied support for the return of redeemability in the United Kingdom, which sometimes conflicted with other policy goals. After Strong's death, support for the British pound's convertibility lost some of its importance in the conduct of American monetary policy, which may be perceived in the movements of gold reserves and in the bank rate.

7 A possible objection to this model is why the United States was able to maintain the low value of gold after the British devaluation of 1931. It would have been normal for the United States to lose its gold reserves and for capital to leave it for countries where the value of the metal was closer to its equilibrium level, as was the case in several European countries. The answer is that in Europe there was increasing political risk, which reduced its appeal to foreign investors. In 1932, the French president was killed, in Britain there was a weak coalition government, in Germany the Nazi party had practically acceded to government, and in Spain a coalition government composed of socialists had initiated an agrarian reform and nationalized private companies, while social conflict grew.

8 On capital movements motivated by expectations of a devaluation, see James (1992).

Conclusion As we have seen in this paper, one of the causes of the Great Depression was the monetary disequilibrium brought about by the return to convertibility after the Great War at an undervalued price of gold set by some central governments, such as those of the UK and the US. The tension between the official price at which the metal would be redeemed and its equilibrium level would become unsustainable towards the end of the 1920s, as evidenced by the disparity between the exchange rate and purchasing power parity, pointed out by Yeager (1976:319). This disequilibrium was one of the initial triggers of the crisis, as it led to increases in interest rates to combat speculative attacks on gold, sharp deflation, declining activity, and the beginning of a global tariff war.
When you open a window to exchange gold for paper money at about half the real exchange rate, it is no surprise that whoever has access to such a window will ask for gold. On the flip side, it should be no surprise that if you give paper money with only half of its pre-war real value for any gold that producers bring to your window, gold production will lag. It was the undervalued price given to gold by the UK and US governments, and not the gold standard itself, that bore a strong responsibility for the Great Depression. If, in the early years of the 1920s, the UK and the US had accepted that the real value of gold should be higher (as France did) and had altered the convertibility rate, absorbing, at least in part, the inflation generated during the Great War, the Great Depression could possibly have been avoided. Robert Mundell, in his Nobel Prize Lecture, stressed that world history would have changed if President Hoover had decided to devalue the dollar in time: After a great war, in which inflation has occurred […] a return to the gold standard is only consistent with price stability if the price of gold is increased. Failing that possibility, countries would have fared better had they heeded Keynes' advice to sacrifice the benefits of fixed exchange rates under the gold standard and instead stabilize commodity prices rather than the price of gold […]. Had the price of gold been raised in the late 1920s, or, alternatively, had the major central banks pursued policies of price stability instead of adhering to the gold standard, there would have been no Great Depression, no Nazi revolution, and no World War II. (Mundell 2000, p. 331) A similar counterfactual could be projected for Argentina. If, upon assuming the presidency in 1999, de la Rúa had devalued the currency by perhaps 30%, he might have succeeded in completing his term. There would have been a macroeconomic crisis, but without a catastrophic fall in GDP and unbearable social unrest. Convertibility could have continued (at a different parity), but there would have been no speculative attacks on the currency, and exporters and importers would have received earlier the appropriate price signals to adjust their behaviour. Had the government renegotiated debt payments, the dramatic default that isolated the country from global financial markets for more than a decade would not have occurred. On the other hand, there would not have been the conditions for the emergence of the Kirchners' populist governments, the gigantic increase in public spending and deficit, and, finally, the reappearance of high inflation. From 2004, probably under a more moderate Peronist government, the commodity boom would have allowed Argentina to benefit better from favorable global conditions without dramatically altering the institutional framework and the possibilities for future growth. The costs of postponing the necessary adjustments in crucial economic variables are gigantic: the world's GDP fell 16% between 1929 and 1933, while that of the United States fell by 29%; for Argentina, the reduction was 20% during the unwinding of convertibility. Why has such a simple explanation of the Great Depression not become standard, or at least an alternative to be mentioned in discussions of its origins? It is notable that neither Johnson's (1998) nor Mundell's (2000) works have been incorporated as central to the debate.
Possibly, the reason is historians' and economists' consensus about the need for and effectiveness of expansive monetary and fiscal policies, which ignores that the gold-exchange system of the interwar period did not pose a stringent constraint on the money supply. On the other hand, although the negative impact of the gold standard has been pointed out, this has been only in its restrictive aspect, and no emphasis has been placed on the low price of gold itself. Finally, an Austrian perspective would have been more compatible with this explanation, but its representatives remained trapped in the interpretation based on a supposed over-investment in the 1920s. They have not recognized that the new explanation gives them an excellent example of how an incorrect price fixed by governments can, in the long term, have dire consequences for the economy.

Sources for interest rates and gold reserves: Mitchell (1975), pp. 744-745; League of Nations (1926); League of Nations (1937); Edvinsson et al. (2010); Klovland (2004).
Analytical Expressions for Spring Constants of Capillary Bridges and Snap-in Forces of Hydrophobic Surfaces When a force probe with a small liquid drop adhered to its tip makes contact with a substrate of interest, the normal force right after contact is called the snap-in force. This snap-in force is related to the advancing contact angle or the contact radius at the substrate. Measuring snap-in forces has been proposed as an alternative way to measure the advancing contact angles of surfaces. The snap-in occurs when the distance between the probe surface and the substrate is hS, which is amenable to geometry, assuming the drop was a spherical cap before snap-in. Equilibrium is reached at a distance hE < hS. At equilibrium, the normal force F = 0, and the capillary bridge is a spherical segment, amenable again to geometry. For a small normal displacement Δh = h − hE, the normal force can be approximated with F ≈ −k1Δh or F ≈ −k1Δh − k2Δh², where k1 = −∂F/∂h and k2 = −(1/2)·∂²F/∂h² are the effective linear and quadratic spring constants of the bridge, respectively. Analytical expressions for k1,2 are found using Kenmotsu's parameterization. Fixed contact angle and fixed contact radius conditions give different forms of k1,2. The expressions for k1 found here are simpler, yet equivalent to the earlier derivation by Kusumaatmaja and Lipowsky (2010). Approximate snap-in forces are obtained by setting Δh = hS − hE. These approximate analytical snap-in forces agree with the experimental data from Liimatainen et al. (2017) and with a numerical method based on solving the shape of the interface. In particular, the approximations are most accurate for super liquid-repellent surfaces. For such surfaces, readers may find this new analytical method more convenient than solving the shape of the interface numerically. ■ INTRODUCTION An axisymmetric capillary bridge between two parallel surfaces exerts a normal force on the surfaces. Such capillary bridges are encountered in self-aligning liquid joints, 1−6 capillary grippers, 7−9 capillarity-based switchable adhesive surfaces, 10 granular media, 11 the adhesion of nanoparticles 12 and soft materials, 13 the adhesion and friction of powders and biofibers, 14 the use of capillary bridges as flexible joints, 15 and atomic force microscopy, 16,17 among other applications. These capillary bridges are of particular interest when the normal force is measured to quantify the (super) liquid repellency of the surfaces. 18−20 In a typical force characterization experiment, a force probe with a drop at its tip approaches the substrate of interest, then makes contact with the substrate, and finally retracts from the substrate. The force right after the first contact is called the snap-in force, and smaller snap-in forces have been experimentally 20−22 and theoretically 18,20 shown to correspond to larger advancing contact angles and smaller contact radii. Unlike contact angle measurements, force measurements can remain accurate even when the substrate is super liquid-repellent (e.g., θ > 150°) or when the surface is not flat. 19,20,23 To theoretically relate a force to the contact angle or contact radius, one has to find the shape of the surface, typically numerically. In a direct version of the problem, one computes the shape of the interface for a known geometry (e.g., liquid bridge height, volume, contact radius at the probe, and contact angle at the substrate) and then computes the force, for example, as a sum of capillary and Laplace pressure terms. 18
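As a minimal sketch of how the linearized model from the abstract is used downstream (in Python; k1, k2, hS, and hE stand for values obtained from the case-specific expressions derived later in the paper, so nothing here is a new result):

```python
def snap_in_force(k1, hS, hE, k2=0.0):
    # First- or second-order approximation of the normal force at snap-in:
    # F ~ -k1*dh - k2*dh**2 with dh = hS - hE. Setting k2 = 0 gives the
    # linear spring model; a nonzero k2 gives the quadratic model.
    dh = hS - hE
    return -k1 * dh - k2 * dh**2
```

With k2 = 0 this is the linear spring model; including k2 gives the quadratic model that is compared against the numerical solution in the Results section.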
In the force characterization experiments, we are actually interested in solving an inverse version of the problem: find the geometry (e.g., the contact angle at the substrate, assuming everything else is known) that corresponds to the measured force. There are several methods for finding the shape of the capillary bridge. One numerical method is to solve the Young−Laplace equation with boundary conditions and a volume constraint. 20 An alternative numerical method is to minimize the energy functional using a finite element method 24 or by optimizing a discrete mesh shape, 25,26 one particularly popular option for the latter being the Surface Evolver 25 software. Finally, when gravity is neglected, all solutions of the Young−Laplace equation are constant-mean-curvature surfaces, and the axisymmetric solutions are the Delaunay surfaces: 27 planes, cylinders, spheres, catenoids, nodoids, or unduloids. One parameterization of the Delaunay surfaces was found by Kenmotsu. 28 In principle, the shape of the surface can be found by finding the Kenmotsu parameters for which the constraints (volume, contact angles, or contact radii) are fulfilled. Unfortunately, the Kenmotsu parameterization involves elliptic integrals, so the parameters have to be sought numerically. An analytical method for computing the force, without solving the exact shape of the capillary bridge, would therefore be highly useful. An important special case of the Delaunay surfaces is the spherical-segment-shaped capillary bridge. Such bridges can be handled by simple geometry, and their Kenmotsu parameters are trivial. This case is especially important because a spherical segment is the equilibrium shape (normal force F = 0) of a liquid bridge when gravity is negligible. Furthermore, in many practical applications, the capillary bridges are nearly spherical. We will see that this is the case when computing the snap-in forces of a super liquid-repellent surface (contact angles near 180°) or a pad with a small radius. We use the term pad as a generic term for a circularly patterned substrate on which the liquid completely wets a circular area but is then pinned to the edge of the area. This can be achieved through surface chemistry (a highly wettable area on a highly liquid-repellent background) or surface topography, for example, a protruding pillar on whose edge the drop pins. In this paper, the force−distance relationship of an axisymmetric capillary bridge is analytically linearized at the equilibrium distance using the Kenmotsu parameterization, and this linearized model is used to estimate the snap-in force. The distance at which the bridge is in equilibrium is denoted by hE (Figure 1). For a small normal displacement Δh = h − hE, the normal force F can be approximated with a first-order approximation (eq 1), F ≈ −k1Δh, or a second-order approximation (eq 2), F ≈ −k1Δh − k2Δh², where k1 = −∂F/∂h and k2 = −(1/2)·∂²F/∂h² are, respectively, the effective linear and quadratic spring constants of the capillary bridge. Fixed contact angle and fixed contact radius conditions give different forms of k1,2. Kusumaatmaja & Lipowsky 29 identified three different cases (eqs 46−48 in their paper), and we will stick to their labeling: Case I: both substrates are homogeneous and the contact angles are fixed. The contact angles of the two substrates are not necessarily the same. Case II: the bottom substrate is homogeneous and its contact angle is fixed, while the top substrate is patterned with a pad and its contact radius is fixed.
The roles of the two substrates can of course be chosen freely; we assume that it is the bottom substrate that is homogeneous. Case III: both substrates are patterned with pads of a certain radius, the liquid fully wets the pads, and the contact lines are pinned to the pad edges. In other words, the contact radii on both substrates are fixed, although not necessarily the same. For force characterization experiments, fabricating a force probe surface with a known pad radius is easier than fabricating a force probe surface with a truly homogeneous surface with a stable contact angle, so cases II and III are expected to be more relevant here. The force−distance curves in cases II and III are illustrated in Figure 1a,b, respectively. Nevertheless, for completeness, we will give the spring constants for all three cases. The snap-in occurs at a distance hS, which is amenable to geometry, assuming the drop was a spherical cap before snap-in. Approximate snap-in forces are finally obtained by setting Δh = hS − hE (Figure 1). These approximate analytical snap-in forces agree with the experimental data from Liimatainen et al. 20 and a numerical method based on solving the shape of the interface. Previous Work. Kusumaatmaja & Lipowsky 29 have earlier derived the linear spring constant k1 in all three cases by starting from the energy functional and then considering small perturbations to the equilibrium shape. The expressions for the spring constants found here are simpler yet equivalent to their derivation. Meurisse & Querry 30 and Petkov & Radoev 31 computed approximate forces in case I, with the further assumption that both surfaces have the same contact angle. Meurisse and Querry started from accurate descriptions of Delaunay's surfaces and then approximated the profile curve with a circular arc. Petkov and Radoev used a numerical procedure based on parameterization of the profile curves. Furthermore, Vogel 32 studied capillary bridges in the general case I and, in particular, derived a general condition for the second-order stability of the capillary bridge. Figure 1. Method for calculating the snap-in force analytically for (a) homogeneous substrates, with a constant contact angle; and (b) pads or pillars, with a constant contact radius. Point A is when the tip of the spherical-cap-shaped drop touches the substrate. The corresponding distance hS can be solved using the geometry. Point E is when the capillary bridge is in equilibrium; its shape is a spherical segment and the force is again 0. The corresponding distance hE can be solved using geometry. The accurate snap-in force (point S) can only be solved numerically, but we can analytically linearize the force curve at point E and use the linearized curve to find point S′, which approximates the snap-in force. Escobar & Castillo 19 found the normal force in case II by minimizing an energy functional. Heinrich et al. 33 and Goldman 2 computed the forces in case III. Heinrich simplified the problem by assuming that the profile curve is a circular arc, while Goldman first used numerical methods and then fitted a heuristic equation to the numerical force−distance data. Attard & Miklavcic 16 computed the spring constants of liquid bubbles interacting with spherical particles or probes, which is different in geometry from all three cases considered here.
Nevertheless, their spring constant expression has a very similar form to the ones derived here, including logarithmic and rational parts, and they underlined that all systems must behave as simple springs for sufficiently small perturbations, 16 which is the rationale for the linearization approximation taken here. ■ METHODS In this paper, we make the following assumptions, which are not difficult to fulfill in experimental conditions. (1) Gravity can be neglected. In practice, this means that the capillary bridges must be small compared to the capillary length, ≈2.7 mm for water in an ordinary room environment. (2) The capillary bridges are axially symmetric. In practice, this means that both surfaces have to be parallel in all three cases. Furthermore, in case III, the two pads should be axially aligned to each other, but cases I and II are self-aligning in the sense that they can assume an axially symmetric configuration because of energy minimization. (3) The relative motion of the probe is slow, in the sense that we can ignore the hydrodynamics of the liquid. In other words, for every distance, we can assume that the interface is in equilibrium. (4) The evaporation of the drop is slow compared to the duration of the experiment, so that we can assume that the volume of the drop is constant. On both substrates, we assume either that the contact radius is fixed or that the contact angle is fixed (cases I−III), but importantly, we do not assume that the conditions on both substrates are necessarily the same. Figure 2 shows the geometry and symbols used in this paper. The Supporting Information contains a Maple worksheet that shows all the steps taken in the derivation of the theory. Axisymmetric Constant Mean Curvature Surfaces. Recall that the shape of the capillary interface is governed by the Young−Laplace equation, Δp = 2γH, where H is the mean curvature, Δp is the Laplace pressure, and γ is the surface tension of the liquid. Neglecting gravity, Δp is constant and the surface has a constant mean curvature of Δp/2γ. Axisymmetric constant-mean-curvature surfaces are the Delaunay surfaces: planes, cylinders, spheres, catenoids, nodoids, or unduloids. 27 Kenmotsu 28 parameterized the profile curve (r, z) of the Delaunay surfaces as eqs 4 and 5, where B and H control the shape of the interface, H being the mean curvature of the surface, and s is the curve parameter. The surface is an unduloid when B < 1 and a nodoid when B > 1. When B = 1, the surface is a sphere, and in this case, the radius of the sphere is R = 1/H. For later treatment, we rewrite 4 and 5 by (1) changing sin → cos. This is a matter of preference: it is preferable to have the center of spherical joints at the origin, which simplifies the relation between the contact angles and the integration limits in 5. (2) Setting R = 1/H. We will not consider the case H = 0. The liquid bridge is an arc of the profile curve with s ∈ [α1, α2], so there are a total of four shape parameters that fully describe the capillary bridge: B, R, α1, and α2. Note that R is a simple scaling parameter with a dimension of length, while the rest are nondimensional quantities. Solving the Shape and Force of the Liquid Bridge. In case III (both contact radii fixed), we can find the shape parameters by treating the volume V, the joint height h, and the pad radii r1 and r2 as the independent variables and B, R, α1, and α2 as the dependent variables.
The dependent variables can be found by solving a system of equations (eq 8). Cases I and II can be handled by replacing r1 = r(α1) with −cot θ1 = r′(α1)/z′(α1), or by replacing r2 = r(α2) with cot θ2 = r′(α2)/z′(α2). Once the dependent variables, that is, the shape of the surface, are known, the normal force F can be found as the sum of Laplace and capillary terms. 18 Using Δp = 2γ/R and the relation sin θ = 1/√(1 + r′²/z′²) with 6 and 7, we get 10. This shows that F depends on B and R but does not explicitly depend on α1 or α2. Special Case of Spherical Segments. It was already pointed out that the capillary bridge is a spherical segment when B = 1. Putting B = 1 into 6 and 7, we see that the profile reduces to a circular arc of radius R when −π/2 < s < π/2. Furthermore, from 10 it is clear that when B = 1 the force F = 0, so the equilibrium shape of a liquid bridge is a spherical segment. Finally, for the case of a spherical segment, α1 and α2 are related to the contact angles by 11 and 12. Using simple geometry, 29 the parameters R, α1, and α2 can now be solved depending on the case. This also gives the equilibrium distance hE. Linear Spring Constants. Our next goal is to derive the linear spring constant k1 = −∂F/∂h for the equilibrium case B = 1. We start by noting that k1 must be independent of R. This follows from the fact that the dependent variables uniquely define the surface, so that k1 = k1(γ, B, R, α1, α2). The units of k1 and γ are N/m, the unit of R is m, and B, α1, and α2 are nondimensional quantities, so a dimensional argument 35 can be put forward that k1 is γ times a nondimensional function of B, α1, and α2. Thus, without loss of generality, we can compute the spring constant in the case R = 1. In case III, the spring constant is given by 14 (remembering that F does not explicitly depend on α1 and α2). When B = 1, from 10 we see that ∂F/∂R = 0, so that in this specific case the second term in 14 can be neglected. To find ∂B/∂h, the implicit function theorem and Cramer's rule can be used, where we have used 15 together with 11, 12, and 13; a similar computation gives the denominator. Quadratic Spring Constants. As seen in Figure 1, the first-order approximation 1 underestimates the numerically computed force for both positive and negative Δh. This can be somewhat remedied by using the second-order approximation 2. Again, we use the implicit function theorem to find ∂²F/∂h² and evaluate it at B = 1. In case III, this gives the quadratic spring constant k2 in terms of X = cos θ1 + cos θ2, Y = cos θ1 cos θ2 + 1, and C = ln tan(θ1/2) + ln tan(θ2/2). In cases I and II, the rather lengthy expressions for k2 are given in the Supporting Information. Snap-in Forces. Right before the snap-in, the drop is a spherical cap of height hS, which is given by simple spherical-cap geometry, assuming the drop was bound to the top substrate before the snap-in. Δh = hS − hE and k1,2 can now be put into either 1 or 2 to approximate the snap-in force.
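Since both hE and hS are "amenable to geometry", they can be computed with a few lines of code. The sketch below (Python with SciPy; the parameter values are hypothetical, chosen only to mimic a case III setup with r2 and V similar to the experiments cited later) uses only the standard spherical-cap and spherical-zone volume formulas:

```python
import math
from scipy.optimize import brentq

def cap_height(V, r):
    # Height h of a spherical cap of volume V on a circular base of radius r:
    # V = pi*h*(3*r**2 + h**2)/6 (standard geometry; this gives hS).
    f = lambda h: math.pi * h * (3*r**2 + h**2) / 6 - V
    return brentq(f, 1e-12, (6*V/math.pi)**(1/3) + 3*r)

def segment_height(V, r1, r2):
    # Height h of a spherical segment (zone) of volume V between parallel
    # circles of radii r1 and r2: V = pi*h*(3*r1**2 + 3*r2**2 + h**2)/6.
    f = lambda h: math.pi * h * (3*r1**2 + 3*r2**2 + h**2) / 6 - V
    return brentq(f, 1e-12, (6*V/math.pi)**(1/3) + 3*max(r1, r2))

# Hypothetical case III example (pad radii r1, r2 and volume V assumed):
V, r1, r2 = 1.53e-9, 0.2e-3, 0.5e-3   # m^3, m, m
hS = cap_height(V, r2)                # drop bound to the top pad before snap-in
hE = segment_height(V, r1, r2)        # equilibrium spherical-segment bridge
dh = hS - hE                          # the displacement used in eqs 1 and 2
```

Feeding dh and the case-specific spring constants into 1 or 2 then yields the approximate snap-in force.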
■ RESULTS AND DISCUSSION Recall that our initial aim was to find an analytical method for computing the snap-in forces of capillary bridges, with the purpose of linking the snap-in forces to the geometry of the capillary bridge: the contact angle or the contact radius at the substrate. We now have that analytical method and will compare these analytical approximations to numerical models and experimental data. Comparison to Numerical Modeling. To check the validity of the analytical approximation, the snap-in forces were also computed numerically using MATLAB software. Briefly, the numerical model finds the Kenmotsu parameters of the liquid bridge by numerically solving the system of eq 8. The equilibrium shape, which is incorrect only in its height, is used as the initial guess for the solver. All code to recreate all the figures can be downloaded from Zenodo. 36 Figure 3 shows how the analytical approximation becomes increasingly accurate as either the contact angle is increased in case II or the contact radius is decreased in case III. Clearly, the analytical approximations become indistinguishable from the more accurate numerical solution when the contact radius is decreased or the contact angle is increased. The quadratic approximation is more accurate than the linear approximation for moderate contact angles and contact radii, but eventually diverges faster for small contact angles and large contact radii because of the presence of the quadratic term. To make the comparison of the models more concrete, we must define what we mean by a model being accurate. For example, we can say that an analytical model is accurate when the relative error between the analytical and the numerical model is less than 10%, that is, |log(F_analytical/F_numeric)| < log 1.1. Figure 3 shows that the relative accuracy depends not only on the contact angle or the contact radius but also on the volume. When V/r2³ = 100, the linear model is accurate for contact angles above 159°, while the quadratic model is accurate for contact angles above 108°. On the other hand, when V/r2³ = 5, the linear model is accurate for contact angles above 172° and the quadratic model for contact angles above 143°. Comparison to Experimental Data. For experimental validation of the model, we use published data from Liimatainen et al. 20 Figure 4 compares the experimental, numerical, linear analytic, and quadratic analytic force-distance relationships of a capillary bridge between two surfaces with pads. It shows that for small displacements both analytical models approximate the numerical model, and both agree reasonably well with the experimental curve, yet not perfectly. Finally, Figure 5 compares the analytical, numerical, and experimental snap-in forces for mushroom-shaped pillars with varying radii. In the range of pillars tested, the linear analytical model is almost as good as the numerical model, except perhaps in the case of the largest of the pillars (r1/r2 = 0.8). The quadratic model is indistinguishable from the numerical model. Without this correction, there is no bend in the models and none of the models approximates the experimental data well. Nevertheless, the analytical and numerical models agree perfectly with each other in the small contact radius limit. Summary and Conclusions. In summary, we now have an analytical method for calculating approximately the snap-in forces of liquid drops that captures a body of experimental data. It is therefore a valuable addition to the toolbox of a scientist working on the normal forces of liquid drops. It is worth pointing out that the theory was only compared to experiments for case III because of the unavailability of high-quality experimental data for cases I and II. One of the difficulties of such measurements is that it is difficult to measure optically contact angles beyond 150°. Unfortunately, this super liquid-repellent limit is where one would expect the analytical models to be more accurate, according to Figure 3.
A second potential difficulty is that most studies have focused on water as the liquid, and all substrates with a water contact angle beyond 130° have some kind of micro-/nanotopography. On such roughness, the radial or axial contact line position may be uneven; that is, the axial symmetry assumption may be violated. Finally, we have so far not considered receding contact angles at all, even though it has been argued 18 that the receding contact angle is a more relevant characteristic of a super liquid-repellent surface because it puts an upper bound on the sliding angle. The receding contact angle can, in principle, be obtained from force measurements if the probe is retracted far enough that the contact line de-pins and starts retracting. Measuring the minimum force during retraction or the force right before pull-off (capillary bridge failure) has been proposed as an alternative to measuring receding contact angles. 20 There are difficulties in applying the approach developed in this paper to compute the pull-off forces: (1) the linearized curve cannot be used to find the distance at which the force is at its minimum; (2) Δh is larger during pull-off than during snap-in, so the analytical models introduce more error. These difficulties immediately suggest the following approach: instead of snap-in or pull-off forces, one could measure the slope of the force−distance curve at the equilibrium distances during approach and retraction, respectively. Equation 21 could then be used to relate the slopes to the contact angles at the substrates. Figure 6 shows a simulated experiment and how the contact angles could be extracted from such data. In such an experiment, one should squeeze the bridge enough during the approach. This would guarantee that during retraction, when the equilibrium distance is reached, the contact line has already de-pinned and is receding. Notes. The author declares no competing financial interest. [Figure 5 caption: experimental data from ref 20; the parameters were r2 = 0.5 mm, V = 1.53 μL, γ = 72 mN/m, and hC = 1.2 μm. Note that both scales are logarithmic.] [Figure 6 caption: Simulated experiment demonstrating how both advancing and receding contact angles can be extracted from the spring constants. The force−distance curve was computed numerically, using parameters r2 = 0.5 mm, V = 1.53 μL, γ = 72 mN/m, advancing contact angle θA = 150°, and receding contact angle θR = 120°. To extract the contact angles from the data: (1) find the slopes of the tangent lines at the equilibrium distances (F = 0) during approach and retraction; (2) find a spherical segment that has the prescribed V, r2, and k1, with k1 given by 19. The spherical segment gives θ1.]
Online vibration monitoring system for rotating machinery based on 3-axis MEMS accelerometer This article discusses the design of a real-time vibration monitoring system for rotating machines utilizing 3-axis MEMS accelerometer sensors. The system consists of two main parts, namely a vibration data acquisition system and a data analysis system, built on the principle of wireless data communication. The vibration measurements from each of the three MEMS accelerometer sensors are forwarded by an Arduino Nano microcontroller to the data collection device, an Arduino Uno microcontroller. Likewise, the machine rotating speed data acquired by the TCRT5000 speed sensor are forwarded by an Arduino Pro Mini microcontroller to the same collection device. The data recorded by the collection device are then transmitted to the computer through wireless communication using the Node-MCU device. The LabVIEW software is used to display the machine vibration data, which have previously been processed by the computer in accordance with the parameters desired by the user. The stability of the data transmission over a certain length of time and communication distance is tested to ensure that the measurement results correspond to the real-time vibration conditions. Test results show an average delay time below 200 ms at the farthest distance within the wireless device's signal range. Introduction The development of data acquisition techniques, sensors, and vibration analysis in the manufacturing industry is now required to be more reliable. Vibration is present during operation in almost all types of rotating machines. Rotating machinery includes gearboxes, fans, shafts, motors, compressors, pumps, mixers, dryers, etc. Vibrations that occur in a rotating machine are generally caused by mechanical faults, e.g. mechanical looseness, unbalance, wear, and misalignment. Through vibration analysis techniques, the condition of the machine can be evaluated, so that precautions can be taken before the machine is fatally damaged. Take, for example, an industrial electric motor: factors such as the viscosity of the lubricant, the electric current, motor ventilation, alignment, and motor load are all possible causes of motor failure. These factors produce vibrations that tend to rise in electric motors, or raise motor temperatures to critical levels [1]. Vibration analysis is widely used in predictive maintenance in industries that use rotating machines. This method allows maintenance technicians to detect problems in a machine early. Appropriate vibration analysis can be used to identify damage to rotating machine components. Such component damage can take the form of unbalanced rotors or shafts, bearing failures, shaft misalignment, shaft bending, gear wear, mechanical looseness, and so on. Vibration analysis can provide relevant information about abnormal machine working conditions, which can cause failures that bring the plant to a stop; maintenance and repair actions can thus be taken before failure occurs. By using vibration analysis techniques, one can find the root causes of failure and ways to avoid successive failures. Vibration analysis techniques can also be used to determine the decline in the function of machine components after a long period of use.
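As a concrete illustration of the simplest such technique, the sketch below (Python; the variable names and the block-based processing are assumptions for illustration, not part of the system described in this article) computes an overall RMS vibration level from a block of acceleration samples, a quantity commonly trended over time in predictive maintenance:

```python
import numpy as np

def rms_level(samples):
    # Overall RMS of one block of acceleration samples (e.g. in mm/s^2).
    samples = np.asarray(samples, dtype=float)
    samples = samples - samples.mean()   # remove the static (gravity) offset
    return float(np.sqrt(np.mean(samples**2)))

# Hypothetical usage with one block per axis from a 3-axis sensor:
# level_x, level_y, level_z = rms_level(ax), rms_level(ay), rms_level(az)
# A rising trend in these levels is an early warning of mechanical faults.
```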
Vibration analysis is only possible if the data are sufficiently reliable and accurate, so the device used to collect the data must not only function properly but also suit the actual machine conditions. Three types of devices can be used to measure and monitor vibration: displacement probes, velocity transducers, and accelerometers. Each measuring device has usage limits and specifications that depend on the measurement conditions. An accelerometer is the best device for determining the force of a vibration source; moreover, accelerometers have become widespread thanks to the development of low-cost micro-electro-mechanical-system (MEMS) technology. The main objective of this research is to design a real-time monitoring system for the vibration condition of a rotating machine using MEMS accelerometer sensors with wireless communication. In the implemented system, the ADXL345 3-axis digital MEMS accelerometer is used as the vibration sensing element. It is a small, thin, ultralow-power, 3-axis accelerometer offering high-resolution (13-bit) measurement at up to ±16 g. Digital output data are formatted as 16-bit two's complement and are accessible through either an SPI (3- or 4-wire) or I2C digital interface. The Arduino Nano, a small, complete, breadboard-friendly board based on the ATmega328P, is used to read the sensor. Wireless data transmission is performed with the Node-MCU, built around the ESP8266 Wi-Fi SoC from Espressif Systems and based on the ESP-12 module. An interface passes the sensor data to the LabVIEW software; the graphical interface, alarm indication, and data logging are implemented in LabVIEW on the PC. Three continuous graphs show vibration in mm/s². An important aspect of this article is the analysis of vibration data from the monitored system, in order to display undesirable conditions that might occur and to alert the maintenance team. From the literature review it is known that malfunctions of rotating machine components can be detected through measurements of vibration levels [2]. This can be done with sensor devices that record vibration data and forward them to appropriate software. Several articles have shown that capacitive MEMS accelerometers can be used as vibration measuring devices [3][4][5][6][7]. The use of Arduino microcontroller boards in the development of low-cost machine vibration monitoring systems has also been studied [6][7][8][9]. The development of cable-based vibration monitoring systems, with cables as the data transmission medium, is presented in [4,5]. However, cable-based technology lags far behind wireless communication technology in several respects. The main advantage of wireless technology over cabled transmission is reduced installation cost and effort, since long cable runs from the sensors to the analysis devices are avoided; wireless technology also improves mobility and ease of reconfiguration. Several studies have developed wireless machine vibration monitoring systems using the ZigBee communication protocol [5,10]. In terms of communication coverage and power consumption, wireless personal area networks based on the ZigBee protocol are superior to those based on the Bluetooth protocol [11].
Although ZigBee requires less power than Wi-Fi communication, Wi-Fi is superior in terms of device availability and ease of installation.
Experimental design The experimental design is shown in Figure 1. It displays the design of the overall machine vibration condition monitoring system. The system consists of two major parts: the acquisition of vibration measurements from the MEMS accelerometer sensors, and the measurement data display section, designed in accordance with the initial planning to make it easy for the user to monitor the condition of the machine. Figure 1. Design of vibration monitoring system. Vibration data acquisition unit The vibration data acquisition unit consists of the following components. Vibration sensor. The ADXL345 is a complete 3-axis acceleration measurement system with a selectable measurement range of ±2 g, ±4 g, ±8 g, or ±16 g. Figure 2 shows the functional block diagram of the ADXL345 MEMS accelerometer sensor. It measures both dynamic acceleration resulting from motion or shock and static acceleration, such as gravity, which allows the device to be used as a tilt sensor. The device is a polysilicon surface-micromachined structure built on top of a silicon wafer. Polysilicon springs suspend the structure over the surface of the wafer and provide resistance against forces due to applied acceleration. Deflection of the structure is measured using differential capacitors consisting of independent fixed plates and plates attached to the moving mass. Acceleration deflects the proof mass and unbalances the differential capacitor, resulting in a sensor output whose amplitude is proportional to acceleration. Phase-sensitive demodulation is used to determine the magnitude and polarity of the acceleration. Digital output measurement data are formatted as 16-bit two's complement and are accessible through either an SPI (3- or 4-wire) or I2C digital interface. Rotating speed sensor. The TCRT5000 is a reflective sensor that includes an infrared emitter and a phototransistor in a leaded package which blocks visible light. It has a peak operating distance of about 2.5 mm and an operating range (above 20% relative collector current) from 0.2 mm to 15 mm. Figure 3 shows the test circuit of the TCRT5000. This device is used to measure the speed of the rotating equipment, which provides a frequency reference for the vibration signal. Microcontrollers: Arduino Nano and Arduino Pro Mini. The Arduino Nano processes the digital signal from the vibration sensor: it reads the data from the ADXL345 and prepares them for transmission to the next device. In this research, three Arduino Nanos were used to read the data from the three vibration sensors; in other words, one microcontroller reads one vibration sensor. The Arduino Nano is a small, complete, breadboard-friendly board based on the ATmega328P; the Arduino IDE software is used to program it. Furthermore, an Arduino Pro Mini was used to acquire data from the TCRT5000 rotating speed sensor. Figure 4 shows the configuration of the microcontroller devices used. An Arduino Uno R3 was used to collect all the measurement data from the other microcontrollers. It is powered by a single 9 VDC battery and in turn supplies power to the other devices.
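Since each axis of the ADXL345 is reported as a 16-bit two's-complement word, the raw register bytes must be combined into signed integers and scaled before display. The following Python sketch illustrates that conversion; the 3.9 mg/LSB scale factor assumes the sensor's full-resolution mode (per the ADXL345 datasheet), and the function and variable names are illustrative, not taken from the article's firmware.

    G = 9.80665      # m/s^2 per g
    SCALE = 0.0039   # g per LSB in ADXL345 full-resolution mode (datasheet)

    def to_signed16(lo, hi):
        """Combine low/high register bytes into a signed 16-bit integer
        (two's complement, little-endian as read from DATAX0..DATAZ1)."""
        raw = (hi << 8) | lo
        return raw - 0x10000 if raw & 0x8000 else raw

    def raw_to_ms2(lo, hi):
        """Convert one axis reading to acceleration in m/s^2."""
        return to_signed16(lo, hi) * SCALE * G

    # example: bytes 0x38, 0x01 -> 0x0138 = 312 LSB -> about 11.9 m/s^2
    print(raw_to_ms2(0x38, 0x01))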
Arduino is used for building different types of electronic circuits easily, combining a physical programmable circuit board (the microcontroller) with a piece of software running on a computer, connected via USB. The programming language used on Arduino is essentially a simplified version of C++ that can easily replace thousands of wires with words. The most important component of the Arduino Uno R3 is the ATMEGA328P-PU, an 8-bit microcontroller with up to 32 KB of flash memory. Each of the fourteen digital pins on the Uno can be used as an input or output, using the pinMode, digitalWrite, and digitalRead functions. They operate at 5 V. Each pin can source or sink a maximum of 40 mA and has an internal pull-up resistor (disconnected by default) of 20-50 kOhm. The Uno has six analog inputs, labelled A0 through A5, each of which provides 10 bits of resolution (i.e. 1024 different values). By default they measure from ground to 5 V, although it is possible to change the upper end of their range using the AREF pin and the analogReference function. Figure 4. Arduino Uno R3 device configuration. Node-MCU (ESP-01 Wi-Fi module). The ESP-01 Wi-Fi module is developed by the Ai-Thinker team. Its core processor, the ESP8266, encapsulates in a small module the Tensilica L106, an industry-leading ultralow-power 32-bit MCU with a 16-bit short instruction mode. It supports clock speeds of 80 MHz and 160 MHz, supports an RTOS, integrates Wi-Fi MAC/BB/RF/PA/LNA, and has an on-board antenna. The module supports the standard IEEE 802.11 b/g/n protocol and a complete TCP/IP protocol stack. Users can use the module to add networking to an existing device, or to build a separate network controller. The ESP8266 is a highly integrated wireless SoC, designed for space- and power-constrained mobile platform designers. It provides an unsurpassed ability to embed Wi-Fi capabilities within other systems, or to operate as a standalone application, at the lowest cost and with minimal space requirements.
Vibration monitoring section The vibration monitoring section receives the data wirelessly and then displays the measurement results using LabVIEW software on the computer. This section consists of the following components: 2.2.1. AMICA Node-MCU module. The Node-MCU module is a development board based on the ESP8266 Wi-Fi module. It has a micro-USB slot that can be directly connected to a computer or another USB host device, 15x2 header pins that can be mounted on a breadboard, and a CP2102 USB-to-serial converter. Within the vibration monitoring unit, the AMICA Node-MCU module is configured as a coordinator using the Node-MCU firmware. The baud rate is set to 9600. This device is connected to the computer via a USB serial port interface. The module is USB powered and incorporates a 3.3 V voltage regulator. It receives the data transmitted over the RF link and transfers them serially to the computer. Computer with display and input devices. For the system in this study, a portable computer has been used. It fulfils the requirements needed to run the LabVIEW software and the Arduino IDE 1.5.7 program, and it has USB communication as well as a serial port for the Node-MCU module interface.
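The article implements the PC-side receiver in LabVIEW; as a language-agnostic illustration of the same step, the following Python/pyserial sketch reads the coordinator's serial stream at the stated 9600 baud. The port name and the line format (comma-separated sensor id, three axis values, and speed) are assumptions for illustration, since the actual packet framing is not specified in the article.

    import serial  # pyserial

    # port name is machine-specific; 9600 baud matches the coordinator setting
    with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1) as port:
        while True:
            line = port.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue
            fields = line.split(",")  # assumed framing: sensor_id,ax,ay,az,rpm
            if len(fields) != 5:
                continue  # skip malformed packets
            sensor_id = fields[0]
            ax, ay, az, rpm = map(float, fields[1:])
            print(f"{sensor_id}: x={ax} y={ay} z={az} mm/s^2, speed={rpm} rpm")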
The LabVIEW software has been installed on the computer to develop the display (GUI) for the machine vibration monitoring system. Vibration analysis can be performed on the acquired data through features embedded in this interface. An alarm indication feature has been added as a safety control, and a data recording feature has been added to produce reports that can be used for offline analysis.
Results and discussion In the vibration sensor unit, the ADXL345 and TCRT5000 sensors, the Arduino Nano and Arduino Pro Mini boards, and the ESP-01 Wi-Fi module are powered by the Arduino Uno R3 microcontroller board. A small grinding wheel machine was selected as the vibration source for the experimental monitoring. Two accelerometers are attached to the top and right surfaces of the motor case to measure in the vertical and horizontal directions, respectively. One accelerometer is mounted on the face of the rotor surface to measure vibration in the axial direction. Finally, the rotating speed sensor is attached to the base of the machine to measure the rotation speed of the grinding wheel. The sensor modules are attached tightly to the motor surface using adhesive tape. Proper circuit connections were made and power was supplied to the Arduino Uno R3 board. The coordinator Node-MCU AMICA module was interfaced to the computer. Figure 5 shows the final vibration sensor unit assembly. Figure 6 shows the flowchart of the vibration sensor unit's working principle. In the initial condition, with the machine not yet turned on, the monitoring system displays the results shown in Figure 7: the three vibration sensors and the rotation sensor on the display charts (X, Y, Z) show no significant vibration or rotation. After the machine is turned on, the vibration rises and then stabilizes under certain conditions, as shown in Figures 8 and 9. The display in the LabVIEW application shows the machine vibration conditions in real time. The highest allowable vibration limit is 9.8 mm/s². When a vibration measurement exceeds this predetermined threshold, the alarm indicator turns on and flashes. Recording of the measurement data continues as long as the STOP button on the application display has not been pressed, and the measurement text file is saved to a predetermined location on the local computer's hard drive.
Conclusion In this paper, a real-time vibration monitoring system has been designed and implemented using the ADXL345 MEMS accelerometer sensor with wireless data communication, and a vibration monitoring display has been built using LabVIEW software. The interfacing of the accelerometer sensor with an Arduino-derived microcontroller board has been explained, and wireless communication has been implemented using the Node-MCU module. The measurement results show that the system can display four kinds of data: three machine vibration signals, in the vertical, horizontal, and axial positions on three axes (X, Y, Z), and one machine speed measurement. The log data files stored on the computer's storage media can be used for vibration analysis and as a vibration data history.
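The alarm-and-logging behaviour described above (flag any reading above the 9.8 mm/s² limit and keep recording until STOP) can be summarized compactly. The Python sketch below is illustrative only: the threshold comes from the text, while the sample source, file name, and function name are assumptions standing in for the LabVIEW implementation.

    import csv, time

    LIMIT = 9.8  # highest allowable vibration, mm/s^2 (from the text)

    def monitor(samples, logfile="vibration_log.csv"):
        """Log (x, y, z) samples in mm/s^2 and flag any axis above the limit.
        `samples` is any iterable of (x, y, z) tuples, e.g. parsed from the
        serial stream of the previous sketch."""
        with open(logfile, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["time", "x", "y", "z", "alarm"])
            for x, y, z in samples:
                alarm = max(abs(x), abs(y), abs(z)) > LIMIT
                writer.writerow([time.time(), x, y, z, int(alarm)])
                if alarm:
                    print("ALARM: vibration above", LIMIT, "mm/s^2")

    # example with synthetic data
    monitor([(1.2, 0.8, 0.5), (10.4, 2.0, 1.1)])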
3,581.2
2020-02-01T00:00:00.000
[ "Computer Science" ]
Flexible changes to the Heliothis virescens ascovirus 3h (HvAV-3h) virion components affect pathogenicity against different host larvae species ABSTRACT The pathogenicity of a virus to a specific host species is a consistent and describable ability of the virus to cause infection, but it is shaped by a variety of abiotic and biotic factors. In this investigation, the variations in pathogenicity of Heliothis virescens ascovirus 3h (HvAV-3h) to five noctuid pests were assessed based on mass spectrometry analysis of the virion compositions. Twenty-nine common HvAV-3h proteins were shared across all hosts, and different flexible proteins were identified in the virions from each specific host. Different host proteins were identified as HvAV-3h virion-associated proteins, including various detoxification enzyme proteins. Furthermore, a relatively fixed relationship between viral replication and the changes in host detoxification enzyme activity caused by deficiencies in various viral structural proteins was found in the host larvae using a correlation matrix analysis: the host larval carboxylesterases and cytochrome P450 monooxygenases generally showed highly similar responses to viruses blocked by different structural proteins' antisera and to their effects on viral DNA replication. Different interaction patterns of the virion structural proteins were found in virions produced by different host larvae, and the interactions between Spodoptera litura glutathione S-transferases and viral structural proteins were confirmed. The different host responses after viral infection could be the reason for the changes in viral pathogenicity, while the virus gradually adapted to the different hosts through flexible changes in the virion structures. IMPORTANCE The different pathogenic processes of a virus in different hosts are related to individual host differences, which subject the virus to different survival pressures. Here, we found that the virions of an insect virus, Heliothis virescens ascovirus 3h (HvAV-3h), had different protein compositions when purified from different host larval species. These "adaptive changes" of the virions were analyzed in detail in this study, covering both the differences in the protein composition of the virions and the differences in affinity between the virions and different host proteins. The results reveal the flexible changes by which the virus adapts itself to different hosts. These findings provide new insights that improve our understanding of virus adaptability and of the virulence differentiation caused by this adaptation process.
Reviewer comments: Reviewer #1 (Comments for the Author): Dear Authors, the authors submitted detailed research on "Flexible changes to the Heliothis virescens ascovirus 3h (HvAV-3h) virion components affects pathogenicity against different host larvae species". I appreciate the valuable contributions of the study to the field, exploring the virus-host interaction mysteries between different hosts in one of the lesser-known virus groups, the Ascoviridae. After a thorough review of the work, I have two key suggestions to improve the manuscript: the co-IP assay section, and the presentation of the proteomics data in the supplementary tables.
Co-IP assay elaboration and controls: Please provide a more detailed description of the co-IP assay methodology. Elaborate on the experimental setup, including the specific antibodies used for immunoprecipitation and the detection of interacting proteins. Additionally, clarify the controls used in the assay to ensure the reliability of your results, especially concerning false-positive detection. Describing the negative controls and how they help identify non-specific interactions will strengthen the validity of your findings. Proteomics data and supplementary tables: In the supplementary tables presenting the proteomics data, ensure that the information is comprehensive and well organized. Provide additional relevant details on the identified proteins, such as their functional annotations and potential roles in immune responses. If any of the detected proteins are not unique to your study but are closely related to previously reported data in related virus groups, make a clear comparison to highlight any similarities or differences. This will help readers contextualize your findings within the existing knowledge on these proteins. Overall, these revisions will enhance the clarity and robustness of your study, enabling readers to better understand your experimental approach and the significance of your results. Addressing these suggestions will significantly improve the manuscript's quality and its impact on the scientific community. Reviewer #2 (Comments for the Author): Ascoviruses are a group of large, double-stranded DNA viruses that mainly infect insects of the family Noctuidae. In this manuscript submitted by Yu et al., entitled "Flexible changes to the Heliothis virescens ascovirus 3h (HvAV-3h) virion components affects pathogenicity against different host larvae species", the authors found that HvAV-3h infection had significant effects on the life span of five different lepidopteran insects, although the morphology of the virions purified from the infected insects was similar. Mass spectrometry analysis revealed that the virions produced by the different host insects share 29 common viral proteins but also contain some distinct viral components and host proteins, particularly certain detoxification enzymes (P450, GST). Western blotting and immuno-electron microscopy confirmed the presence of some viral proteins (3H-13, 27, 55, 56, 57, 58, 152) on the purified virions. Blocking/neutralizing the virions with antibodies against those viral proteins reduced viral DNA replication and virus infectivity and, to a certain extent, affected the host detoxification enzyme activities. Overall, the manuscript provides a large body of data; however, the results were not well organized, described, or interpreted. It is to be expected that the components of virions purified from different host insects show subtle differences, but the functional links of those components, particularly of the detoxification enzymes, with the virus infectivity are not clear. Major points: 1. Lines 140-147: the morphology of the purified virions shows no obvious difference. This section shouldn't stand alone; it could be combined with the section "Different host larvae produced HvAV-3h virions had similar......" in lines 148-149. Also, lines 220-233 compare the host proteins associated with the HvAV-3h virion; this section should likewise be combined with the one starting at lines 148-149. Additionally, the description and interpretation of the components of virions purified from different hosts are not clear. 2.
Lines 250-251: they used inhibitors of the detoxification enzymes to evaluate the relationship between virus infectivity and enzyme activity. The inhibitory effect of each inhibitor on the specific insect enzyme is not clear. 3. Lines 262-296: since the antibody blocking/neutralization substantially reduced viral DNA replication and infectivity, it is not conceivable to assess the relationship between the viral structural proteins and host detoxification enzyme activity by using the antibody-blocked/neutralized virus to infect the insect.
Response letter for Spectrum02488-23 Dear editor Clinton Jones, Thank you for your decision letter concerning our manuscript (ID Spectrum02488-23), entitled "Flexible changes to the Heliothis virescens ascovirus 3h (HvAV-3h) virion components affects pathogenicity against different host larvae species", and for your time regarding our revision. We also appreciate all the critical comments from you and the reviewers. We have carefully considered the comments and revised the manuscript accordingly. With these improvements, we hope that the current version meets the Journal's standards for publication. The following is a point-by-point response to all the comments, together with a list of the changes we have made to the manuscript.
Reviewer 1: 1. Co-IP assay elaboration and controls: Please provide a more detailed description of the co-IP assay methodology. Elaborate on the experimental setup, including the specific antibodies used for immunoprecipitation and the detection of interacting proteins. Additionally, clarify the controls used in the assay to ensure the reliability of your results, especially concerning false-positive detection. Describing the negative controls and how they help identify non-specific interactions will strengthen the validity of your findings. Response: Thank you for your suggestion. We have revised the description of the Co-IP method according to your suggestion; please see the revised manuscript. 2. Proteomics data and supplementary tables: In the supplementary tables presenting the proteomics data, ensure that the information is comprehensive and well organized. Provide additional relevant details on the identified proteins, such as their functional annotations and potential roles in immune responses. If any of the detected proteins are not unique to your study but are closely related to previously reported data in related virus groups, make a clear comparison to highlight any similarities or differences. This will help readers contextualize your findings within the existing knowledge on these proteins. Response: Thank you for your suggestion. We have 24 supplementary tables presenting the proteomic data, 12 for the viral proteins and 12 for the host larval proteins, and all of them contain annotation information. Since we did not examine immune responses, such as melanization or other humoral or cellular immunity pathways, we did not revise these tables. Please excuse us for not adding any highlights. Reviewer 2: Comments: Overall, the manuscript provided a large body of data; however, the results were not well organized, described, or interpreted. It is to be expected that the components of virions purified from different host insects show subtle differences, but the functional links of those components, particularly of the detoxification enzymes, with the virus infectivity are not clear. Response: Thank you for your valuable comments. Research on ascoviruses is relatively underdeveloped, and much remains unknown about their pathogenesis and the structure of their virions, which brings great difficulties to our work. In this study, we mainly aim to demonstrate the mutual "adaptation" between the ascovirus and the host by verifying the variability of the virion structure. There are indeed some aspects that cannot yet be explained clearly by this study, and these are what we want to study in the future. At present, we are preparing antibodies against the virion-associated host detoxification enzyme proteins, and we will use these antibodies to further study the relationship between virus infection and host detoxification enzyme proteins and activity. Specific points: 1. Lines 140-147: the morphology of the purified virions shows no obvious difference. This section shouldn't stand alone. It could be combined with the section "Different host larvae produced HvAV-3h virions had similar......"
in lines 148-149. Also, lines 220-233 compare the host proteins associated with the HvAV-3h virion; this section should be combined with the one at lines 148-149. Additionally, the description and interpretation of the components of virions purified from different hosts are not clear. Response: Thank you for your suggestion. We have rearranged the sections in the RESULTS according to your suggestion. The description of the mass spectrometry results for the virions produced by the different hosts has been separated into two parts: the proteins encoded by the virus and the proteins encoded by the host larvae. The sections have been combined into the same subsection as you suggested, and we have added a general description of the host-larva-encoded proteins identified from the MS data. We hope the revised manuscript meets your requirements. 2. Lines 250-251: they used inhibitors of the detoxification enzymes to evaluate the relationship between virus infectivity and enzyme activity. The inhibitory effect of each inhibitor on the specific insect enzyme is not clear. Response: Thank you. PBO, DEM, and TPP are commonly used as inhibitors of insect detoxification enzymes; therefore, we did not measure their inhibitory effects in this study. We have added several references in the M&M section; please see the revised manuscript. 3. Lines 262-296: since the antibody blocking/neutralization substantially reduced viral DNA replication and infectivity, it is not conceivable to assess the relationship between the viral structural proteins and host detoxification enzyme activity by using the antibody-blocked/neutralized virus to infect the insect. Response: As discussed in the DISCUSSION, the reduced viral DNA replication and infectivity might result directly from the reduced invasion by the viruses due to the blocked structural protein; but how the blocked structural protein in the virions affects the invasion of the ascovirus is unknown. From the MS data we can see that the virions may carry host detoxification enzyme proteins, which suggests that these host proteins interact with the virion structural proteins (the virus-encoded ones). These interactions might happen in the late stage of ascovirus infection (the virion assembly stage), but this still indicates that ascovirus infection is associated with the host larval detoxification enzyme activity. On the other hand, as shown in Fig. 5B, the host larval detoxification enzyme activity changed considerably from 3 to 24 hpi, which is the invasion stage of the ascovirus; this indicates that the invasion of HvAV-3h is related to the host larval detoxification enzyme activity. To reveal whether the selected viral structural proteins have any function in stimulating the host larval detoxification enzyme activity, and thereby influence the invasion of the ascovirus, we performed the experiments of Fig. 6A (lines 262-296). 4. Lines 158-160: the proteins associated with the virions purified from each insect species vary remarkably. Response: Many thanks. To identify the virion components, the MS analysis of the virion protein samples purified from each larval species was performed with 3 biological repeats. As you note, there are differences between the 3 repeats. A Venn analysis of the 3 repeats for each insect species was performed, and, to avoid inaccuracies caused by these differences, we used those proteins commonly identified in all 3 repeats. We hope that the conclusions obtained in this way are more reliable. 5.
Lines 226, 228, 231-232: the difference between "P450" and "P450s" is not clear. P450 is a large protein family; which specific member(s) of P450 were detected in the purified virions? Response: Sorry for our carelessness. We have standardized "P450s" to "P450" and "GSTs" to "GST"; please see the revised manuscript. The specific P450 members are provided in the supplementary tables. Too many P450 proteins were identified to list them one by one; for example, 15 P450 proteins (CYP324A6, CYP4L7, CYP6AE97, CYP4S8, CYP6AE10, CYP354A14, CYP6AB61, CYP332A1, CYP6B68, CYP339A1, CYP9A21v4, CYP6AB31, CYP306A1, CYP4M15, CYP321A9) were identified from the virions purified from S. exigua. Furthermore, CarE and GST are also protein families; if we listed the P450 members, we would also have to separate the CarE and GST proteins into their subgroups. We therefore did not add the specific P450 member(s) to the revised manuscript; readers can find the detailed information in the supplementary data. 6. Line 231: "S. frugiperda" should be "S. litura". Response: Sorry for our carelessness. We have corrected the mistake; please see the revised manuscript. We hope the revision meets the requirements of your journal and is more readable.
Your manuscript has been accepted, and I am forwarding it to the ASM Journals Department for publication. You will be notified when your proofs are ready to be viewed. Sincerely, Clinton Jones, Editor, Microbiology Spectrum
4,125.8
2023-11-09T00:00:00.000
[ "Biology", "Environmental Science" ]
The Lengenbach Quarry in Switzerland: Classic Locality for Rare Thallium Sulfosalts † The Lengenbach quarry is a world-famous mineral locality, especially known for its rare and well-crystallized Tl-, Pb-, Ag-, and Cu-bearing sulfosalts. As of June 2018, it is the type locality for 44 different mineral species, making it one of the most prolific localities worldwide. A total of 33 thallium mineral species have been identified, 23 of which are type minerals. A brief description of several thallium species of special interest follows a concise general overview of the thallium mineralization.
Introduction The Lengenbach quarry in the Binn valley, Valais, Switzerland (Figures 1 and 2) is located in Triassic meta-dolomites of the Penninic zone in the Swiss Alps. Metal extraction for economic purposes never occurred in the quarry, but specimen extraction has been carried out continuously since 1958. The quarry is currently operated by the Forschungsgemeinschaft Lengenbach (FGL, literally: Lengenbach Research Association), financed by a group of idealistic collectors and by the local community of Binn. The purpose of the research association is to promote scientific research on the unique minerals of the Lengenbach deposit and of other dolomite localities in the Binn valley. An intermittent, measured specimen extraction during the snow-free summer months is meant, on the one hand, to guarantee the potential for scientific investigations and, on the other, to deliver dolomite material for a publicly accessible dump, serving as an attraction to equally eager tourists and mineral collectors. This brief review gives a glimpse of the current status of mineralogical research with regard to the thallium mineralization at the locality. For further information about the history, geology, and mineral extraction work we recommend References [1,2].
Geochemical Setting and Formation of the Lengenbach Locality The Lengenbach ore body is located within the Penninic Monte Leone nappe, at the northern front and subvertical hinge zone of a large fold. The stratabound mineralization occurs in the stratigraphically uppermost part of the 240 m thick dolomite sequence. The formation of the highly complex mineralization in the Lengenbach deposit is not yet completely understood. While Graeser [3] suggested a late introduction of As, Tl, and Cu into a pre-existing Fe-Pb-Zn mineralization during Alpine metamorphism from the underlying gneissic basement, Hofmann and Knill [4] proposed a pre-Alpine origin of those elements and a subsequent isochemical Alpine metamorphism under upper greenschist to lower amphibolite facies.
According to Hofmann and Knill [4], the distinct mineral associations in the different parts of the Lengenbach dolomite can be understood as the result of slow crystallization processes in two different redox environments. One is based on graphite and/or pyrite-pyrrhotite, leading to zerovalent arsenic. The other, which was essential for the formation of the rare sulfosalts, is controlled by baryte (sulfate)-pyrite (sulfide), leading to trivalent arsenic. Accordingly, the As(III)-rich zone in the central part of the quarry shows an enrichment in baryte and hosts the coveted Tl-Pb-Ag-Cu bearing sulfosalts. Graeser [3] as well as Hofmann and Knill [4] have each proposed a zonation scheme for the different types of mineral assemblages. While the former considers in essence only mineralogical and spatial criteria, the latter rely on geochemistry, and there is no obvious link between the two zonations. However, it is clear that the Tl-rich zone is restricted to the central part of the quarry. While on a broad scale the different bedding-parallel zones containing the different assemblages strike subvertically in an east-west direction (Figure 1), on the more local scale they can be subdivided into ribbons and ellipsoidal lenses that thicken, to a maximum thickness of 0.5 m, and pinch out. The FGL has been working for a few years on three such ribbons in the Tl-rich central zone (Figure 3). They are spaced approximately 1 m apart, measure a maximum of 4 m × 2 m, and are designated, from north to south, as ribbons 1, 1/2, and 2. Structurally, they are ellipsoidal in shape, with a sharp contact to the surrounding, mineral-poor dolomite. The contrast is essentially a mineralogical-geochemical, not a lithological, contrast. Thanks to their high realgar contents, all three ribbons are easily identified in situ. But while ribbon 1 is very rich in thallium species, the very brittle and orpiment-rich dolomite of ribbon 1/2 is poorer in Tl, and the realgar-richest ribbon 2 is almost bare of any thallium species. Table 1 summarizes all species found in ribbon 1; the 18 species containing thallium as one of the main constituents are marked in bold characters.
Overview of Thallium Minerals at Lengenbach As of June 2018, the Lengenbach quarry hosts 160 different mineral species, with sulfides and sulfosalts being the major group, representing 57% of these species (Table 2 and Figure 4). Forty-four minerals are so-called type minerals, as they were found and described for the first time at this locality. Twenty-three of them are thallium minerals. Of the 73 valid mineral species containing essential thallium worldwide (according to mindat.org [36]), the striking number of 33 species (45.2%) has been found at Lengenbach.
On Some Special Thallium Minerals from the Lengenbach Quarry We focus here on a few thallium species of special interest, as a discussion of all thallium sulfosalts would be beyond the scope of this brief note. Hutchinsonite-The First of Its Kind The first thallium mineral from Lengenbach, hutchinsonite, TlPbAs5S9, was found in 1903, when the English expert on Lengenbach minerals, Richard Harrison Solly (1851-1925), during one of his many trips to the remote Binn valley, recognized that the red to greyish-black, often flattened orthorhombic crystals probably belonged to a new species. He briefly described it in 1904 [37], without giving it a name. In 1905, his colleague at the British Museum, G.T. Prior, was able to reveal the presence of 20 wt % Tl in hutchinsonite. This was of "especial interest", as Solly [17] wrote in his subsequent detailed description, in which he named the mineral after Arthur Hutchinson (1866-1937). Prior's discovery of thallium in hutchinsonite was important enough to result in a short note in Nature [38], as it was only the third mineral worldwide (after crookesite and lorándite) to contain thallium as an essential constituent.
Hutchinsonite is the most common thallium species in the deposit and represents the type structure of a family of complex sulfosalts. Its crystals are commonly transparent and prismatic, owing to an elongation parallel to the c axis (see Figures 5-7), or, rarely, more or less isometric. Hutchinsonite may contain antimony, which renders the crystals darker and opaque. The two varieties, prismatic Sb-free and more isometric Sb-bearing hutchinsonite, may be closely associated. The crystals reach 2 mm in size.
Fangite-The Thallium-Richest Well-developed crystals of fangite, Tl3AsS4, were recently identified for the first time [13]; to our knowledge, these are the first microscopically visible crystals of this species worldwide. They are small, deep red, and very shiny (Figure 8). With more than 75 wt % Tl, fangite is the thallium-richest mineral discovered at Lengenbach up to now. A morphological study of the small crystals [26] showed them to display different combinations of crystal forms, even though the prismatic habit is always approximately the same (Figure 9).
Richardsollyite-Honoring a Pioneer In 2015, the FGL extracted two specimens with an unknown mineral from the very Tl-rich dolomite ribbon 1 (Figure 3) in the center of the quarry. Its chemistry, as first determined by energy-dispersive X-ray spectroscopy (EDXS) measurements in two independent institutes, showed the presence of Tl, Pb, As, and S in a simple, yet unknown, ratio of 1:1:1:3. The powder X-ray diagram likewise did not match any known natural or synthetic chemical compound listed in the relevant databases. One year later, Meisser et al. [5] described this mineral as a new species with the name richardsollyite (Figure 10), TlPbAsS3, honoring the aforementioned pioneer of Lengenbach investigations at the dawn of the twentieth century, R.H. Solly. The crystal structure of richardsollyite is new in nature, being previously known only in some synthetic alkali sulfosalts [5].
The New "Sartorites"-From a Species to a Group Sartorite, PbAs2S4, was first described by vom Rath as "scleroclase" in 1864 [40], and shortly after renamed by Dana [41] to honor Wolfgang Sartorius von Waltershausen (1809-1876), a professor of mineralogy in Göttingen, Germany.In 1919, Smith and Solly [42] recognized that sartorite has-despite its simple chemical formula-a quite unique and complex crystal structure: "sartorite appears to rank with the telluride of gold, calaverite, in the peculiarity of its atomic arrangement, since in certain at least of his crystals there exist simultaneously two or even three incongruent space-lattices, which may be supposed derivable from one another by a slight shear."It is quite remarkable that they were able, with the limited methods of their time-in essence only crystal-morphological investigations-to recognize the so-called complex and incommensurate nature of both the calaverite and, partly, sartorite structures. After the introduction of X-ray investigations in crystallography, several different monoclinic super cells (superstructures) were described [43][44][45][46].Berlepsch et al. [46] pointed to the crystallographic consequences of the complex correlated atomic substitution by which substantial amounts of thallium are incorporated into the sartorite structure.They found a so-called 9-fold superstructure for a sartorite with up to 6.5 wt % Tl and discussed this as a "lock-in" structure with a commensurate lattice, for a species that they regarded as usually incommensurate. The true nature of sartorite was finally resolved by Topa et al. [9,11,19].Based on a systematic combination of electron microprobe measurements and crystal structure determinations they showed that "sartorite" actually represents a group of different mineral species with distinct crystal structures and distinct chemical compositions.According to their different superstructures, the new "sartorites" were named by adding a Greek prefix, which corresponds to an integral multiple of the basic "sartorite" substructure with 4.2 Å (Table 4).4.4.The New "Sartorites"-From a Species to a Group Sartorite, PbAs 2 S 4 , was first described by vom Rath as "scleroclase" in 1864 [40], and shortly after renamed by Dana [41] to honor Wolfgang Sartorius von Waltershausen (1809-1876), a professor of mineralogy in Göttingen, Germany.In 1919, Smith and Solly [42] recognized that sartorite has-despite its simple chemical formula-a quite unique and complex crystal structure: "sartorite appears to rank with the telluride of gold, calaverite, in the peculiarity of its atomic arrangement, since in certain at least of his crystals there exist simultaneously two or even three incongruent space-lattices, which may be supposed derivable from one another by a slight shear."It is quite remarkable that they were able, with the limited methods of their time-in essence only crystal-morphological investigations-to recognize the so-called complex and incommensurate nature of both the calaverite and, partly, sartorite structures. After the introduction of X-ray investigations in crystallography, several different monoclinic super cells (superstructures) were described [43][44][45][46].Berlepsch et al. 
[46] pointed to the crystallographic consequences of the complex correlated atomic substitution by which substantial amounts of thallium are incorporated into the sartorite structure. They found a so-called 9-fold superstructure for a sartorite with up to 6.5 wt % Tl and discussed this as a "lock-in" structure with a commensurate lattice, for a species that they regarded as usually incommensurate. The true nature of sartorite was finally resolved by Topa et al. [9,11,19]. Based on a systematic combination of electron microprobe measurements and crystal structure determinations, they showed that "sartorite" actually represents a group of different mineral species with distinct crystal structures and distinct chemical compositions. According to their different superstructures, the new "sartorites" were named by adding a Greek prefix, which corresponds to an integral multiple of the basic "sartorite" substructure of 4.2 Å (Table 4).
The Routhierite-Stalderite Group-Complex Substitutions The routhierite-stalderite series is a group of thallium arsenio-sulfosalts with the generic formula TlMe1Me2As2S6, containing, in addition to Tl, a monovalent metal ion (Me1: Cu+, Ag+) and a bivalent metal ion (Me2: Hg2+, Zn2+, Fe2+). Accordingly, six different combinations, and thus six distinct mineral species, are theoretically possible in this group (Table 5, Figure 12). Five have indeed been found in nature: arsiccioite, routhierite, stalderite, ralphcannonite, and ferrostalderite. The latter four occur in the Lengenbach quarry, which is the type locality for the latter three. Routhierite and arsiccioite are red to dark-red in color, the other three are dark grey with a metallic luster, but they all show the same pseudo-cubic to prismatic morphology (≤1 mm). In the frame of a series of EDXS analyses of Lengenbach samples from the mineralogical collection of the Eidgenössische Technische Hochschule (ETH) in Zurich, we recently identified what to our knowledge are the first completely idiomorphic crystals of routhierite worldwide (Figure 13). Arsiccioite has not yet been found at Lengenbach. However, a single EDXS analysis of a minute crystal indicated a possible dominance of silver and iron, and thus the possible existence of the last unreported member of this group in the quarry.
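Since the group is defined by one monovalent and one divalent metal site, the six theoretically possible members follow directly from the Cartesian product of the two ion lists, as the short Python sketch below illustrates. The name assignments are not reproduced from Table 5 (which is not available here) but inferred from the published ideal formulas of the five known species; consistent with the text, the Ag-Fe member is the one not yet reported.

    from itertools import product

    me1 = ["Cu+", "Ag+"]             # monovalent site
    me2 = ["Hg2+", "Zn2+", "Fe2+"]   # divalent site

    # assignments inferred from the ideal formulas of the known species
    known = {
        ("Cu+", "Hg2+"): "routhierite",
        ("Cu+", "Zn2+"): "stalderite",
        ("Cu+", "Fe2+"): "ferrostalderite",
        ("Ag+", "Hg2+"): "arsiccioite",
        ("Ag+", "Zn2+"): "ralphcannonite",
    }

    for a, b in product(me1, me2):
        name = known.get((a, b), "unreported (predicted)")
        print(f"Tl {a} {b} As2S6 -> {name}")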
Arsiccioite has not yet been found at Lengenbach.However, a single EDXS analysis of a minute crystal indicated a possible dominance of silver and iron, and thus, the possible existence of the last unreported member of this group in the quarry. Chabournéite-Dalnegroite-And Possibly More Dalnegroite was described as a new mineral species from Lengenbach in 2009 [8].It is considered the As-analogue of chabournéite.The latter was also recently identified in one single specimen showing what appears to be the first distinct crystals for the species: They are very similar to the dalnegroite crystals and also show a strong similarity to lengenbachite crystals, which may imply that the two rare species may have been mistaken for lengenbachite in the past (Figure 14).Both dalnegroite and chabournéite are Pb-bearing species.In ribbon 1/2, however, the FGL has recently found a few samples of a Pb-free dalnegroite (Figure 15), the equivalent of the Pb-free chabournéite from Jas Roux, France [47,48].The Pb-free species may well be distinct minerals (Figure 16).Their study needs to be refined and completed. Chabournéite-Dalnegroite-And Possibly More Dalnegroite was described as a new mineral species from Lengenbach in 2009 [8].It is considered the As-analogue of chabournéite.The latter was also recently identified in one single specimen showing what appears to be the first distinct crystals for the species: They are very similar to the dalnegroite crystals and also show a strong similarity to lengenbachite crystals, which may imply that the two rare species may have been mistaken for lengenbachite in the past (Figure 14).Both dalnegroite and chabournéite are Pb-bearing species.In ribbon 1/2, however, the FGL has recently found a few samples of a Pb-free dalnegroite (Figure 15), the equivalent of the Pb-free chabournéite from Jas Roux, France [47,48].The Pb-free species may well be distinct minerals (Figure 16).Their study needs to be refined and completed. Chabournéite-Dalnegroite-And Possibly More Dalnegroite was described as a new mineral species from Lengenbach in 2009 [8].It is considered the As-analogue of chabournéite.The latter was also recently identified in one single specimen showing what appears to be the first distinct crystals for the species: They are very similar to the dalnegroite crystals and also show a strong similarity to lengenbachite crystals, which may imply that the two rare species may have been mistaken for lengenbachite in the past (Figure 14).Both dalnegroite and chabournéite are Pb-bearing species.In ribbon 1/2, however, the FGL has recently found a few samples of a Pb-free dalnegroite (Figure 15), the equivalent of the Pb-free chabournéite from Jas Roux, France [47,48].The Pb-free species may well be distinct minerals (Figure 16).Their study needs to be refined and completed. 
Summary and Perspective

Since 1958, an uninterrupted prospection for and extraction of mineral specimens has been performed in the Lengenbach quarry. Three working collectives have been successively in charge of these purely non-profit operations, aimed at providing interesting specimens to both the science and collector communities. The Arbeitsgemeinschaft Lengenbach (AGL, literally: Lengenbach Working Association) was active from 1958 to 1997 and extracted 28,422 specimens in total. It was followed by the Interessengemeinschaft Lengenbach (IGL, literally: Lengenbach Interest Association), active from 1998 to 2002. The IGL produced 2424 specimens. The Forschungsgemeinschaft Lengenbach (FGL), which was founded in 2003 and celebrates its 15th anniversary in 2018, has been extremely successful. The association could ensure the collaboration of several experts as Preferred Associated Scientists, who performed countless investigations to increase our knowledge of the Lengenbach mineralogy and contributed, with many publications and the description of several new mineral species, to further enhancing Lengenbach's reputation as an eldorado for rare and complex sulfosalts. In these 15 years, 43 new mineral species have been described from the Lengenbach quarry, including 17 new type minerals.

As a consequence of the specimen extraction performed since 2014 in the Tl-rich dolomite ribbon 1, the number of thallium-bearing mineral samples extracted in the quarry has significantly increased recently, as was already the case in the late eighties and early nineties when the AGL also worked in this zone (Figure 17). The FGL plans to continue the mineral prospection and extraction during the next two or three years in this fascinating part of the deposit. Consequently, there is a good chance to find additional rare thallium sulfosalts and, who knows, eventually also new mineral species.
Figure 1. The upper part of the Lengenbach quarry in the Binn valley, view to the east.

Figure 2. Ralph Cannon, technical head of the Forschungsgemeinschaft Lengenbach (FGL) research association, at the entrance of the quarry in front of the concrete hall, where the current mineral-extraction activities are carried out at the lowest dolomite level.

Figure 3. Three realgar-rich ribbons in the central dolomite zone in the lower part of the quarry. Ribbon 1 (red) is very rich in thallium species, ribbon 1/2 (yellow) contains much orpiment in a brittle dolomite and is poorer in thallium, ribbon 2 (orange) is the realgar-richest ribbon, but almost bare of any thallium species. The maximum ribbon thickness is about 0.5 m. Dashed lines show the expected extension of the ribbons below the debris.

Figure 9. Fangite crystals showing different combinations of forms but similar habits. A distinct color is assigned to each crystal form. The Miller indices of the different forms are given without brackets in the FACES drawings [39].

Figure 12. The minerals of the routhierite-stalderite group in a block diagram. The six distinct species are the result of the six possible combinations of monovalent (Ag⁺, Cu⁺) and bivalent metal ions (Zn²⁺, Hg²⁺, Fe²⁺). Modified after Reference [27].

Figure 13. Idiomorphic, pseudo-cubic crystals of routhierite from the mineralogical collection of the Eidgenössische Technische Hochschule (ETH) in Zurich.

Figure 16. Diagram of Jas Roux and Abuta (France, resp. Japan, open diamonds) chabournéite, Monte Arsiccio (Italy, open circles) protochabournéite, and Lengenbach (open squares) dalnegroite, modified after Reference [48], with copyright permission from Mineralogical Association of Canada, 2013. Dalnegroites are located in the As-dominated left part of the diagram, chabournéites and protochabournéites in the right part with Sb-dominance. The Pb content increases along the y-axis. The new Lengenbach samples (analyzed by EDXS) are shown as red squares. Lengenbach chabournéite is located in the Pb-rich part of the diagram (the cross gives the ±1 sigma of all measurements), showing a higher Sb content than the samples from Jas Roux. The nearly Pb-free dalnegroite seems to be clearly isolated from the holotype material of this species (open squares).

Figure 17. Number of thallium-bearing samples officially cataloged by the three working associations since 1958.

Table 1. Species constituting the mineral assemblage of ribbon 1 in the Lengenbach quarry. Tl minerals are marked in bold, modified after reference [5].

Table 2. Mineralogical overview of the Lengenbach quarry.
Estimating Topic Modeling Performance with Sharma–Mittal Entropy

Topic modeling is a popular approach for clustering text documents. However, current tools have a number of unsolved problems such as instability and a lack of criteria for selecting the values of model parameters. In this work, we propose a method to partially solve the problems of optimizing model parameters, simultaneously accounting for semantic stability. Our method is inspired by concepts from statistical physics and is based on Sharma–Mittal entropy. We test our approach on two models: probabilistic Latent Semantic Analysis (pLSA) and Latent Dirichlet Allocation (LDA) with Gibbs sampling, and on two datasets in different languages. We compare our approach against a number of standard metrics, each of which is able to account for just one of the parameters of our interest. We demonstrate that Sharma–Mittal entropy is a convenient tool for selecting both the number of topics and the values of hyper-parameters, simultaneously controlling for semantic stability, which none of the existing metrics can do. Furthermore, we show that concepts from statistical physics can be used to contribute to theory construction for machine learning, a rapidly-developing sphere that currently lacks a consistent theoretical ground.

Introduction

The Internet and, particularly, social networks generate a huge amount of data of different types (such as images, texts, or table data). The amount of collected data becomes comparable to that of physical mesoscopic systems. Correspondingly, it becomes possible to use machine learning methods based on methods of statistical physics to analyze such data. Topic Modeling (TM) is a popular machine learning approach to soft clustering of textual or visual data, the purpose of which is to define the set of hidden distributions in texts or images and to sort the data according to these distributions. To date, a relatively large number of probabilistic topic models with different methods of determining hidden distributions have been developed, and several metrics for measuring the quality of topic modeling results have been formulated and investigated. The lion's share of the research on TM has focused on the use of probabilistic models [1] such as variants of Latent Dirichlet Allocation (LDA) and probabilistic Latent Semantic Analysis (pLSA); therefore, we study and provide numerical experiments for these models. Non-probabilistic algorithms, such as Non-negative Matrix Factorization (NMF), can also be applied to the task of TM [2,3]; however, NMF approaches are less popular due to their inability to produce generative models. Other problems of NMF models were described in [4,5]. At the same time, despite broad usage of probabilistic topic models in different fields of machine learning [6-9], they, too, possess a set of problems limiting their usage for big data analysis. A fundamental problem of probabilistic TM is finding the number of components in the mixture of distributions, since the parameter determining this number has to be set explicitly [10-12]. A similar problem arises for the NMF approach, since the factorization rank has to be chosen [4]. A well-known formulation of probabilistic TM is based on the following assumptions.

1. Let D̄ be a collection of textual documents with D documents and W̄ be a set (dictionary) of all unique words with W elements. Each document d ∈ D̄ is a sequence of terms w_1, ..., w_n from dictionary W̄.
2. It is assumed that there is a finite number of topics, T, and each entry of a word w in document d is associated with some topic t ∈ T̄.
A topic is understood as a set of words that often (in the statistical sense) appear together in a large number of documents.
3. A collection of documents is considered a random and independent sample of triples (w_i; d_i; t_i), i = 1, ..., n, from the discrete distribution p(w; d; t) on a finite probability space W̄ × D̄ × T̄. Words w and documents d are observable variables, and topic t is a latent (hidden) variable.
4. It is assumed that the order of words in documents is unimportant for topic identification (the "bag of words" model). The order of documents in the collection is also not important.

In TM, it is also assumed that the probability p(w|d) of the occurrence of term w in document d can be expressed as a product of probabilities p(w|t) and p(t|d), where p(w|t) is the probability of word w under topic t and p(t|d) is the probability of topic t in document d. According to the formula of total probability and the hypothesis of conditional independence, one obtains the following expression [27]: p(w|d) = ∑_{t∈T̄} p(w|t) p(t|d) ≡ ∑_{t∈T̄} φ_wt θ_td. Thus, constructing a topic model means finding the set of latent topics T̄, i.e., the set of one-dimensional conditional distributions p(w|t) ≡ φ_wt for each topic t, which constitute matrix Φ (distribution of words by topics), and the set of one-dimensional distributions p(t|d) ≡ θ_td for each document d, which form matrix Θ (distribution of documents by topics), based on the observable variables d and w.

One can distinguish three types of models in the literature that allow solving this problem: (1) models based on likelihood maximization; (2) models based on Monte Carlo methods; and (3) models of the hierarchical Dirichlet process. A description of these models and their limitations can be found in Appendix A. In the process of TM, for algorithms based on the Expectation-Maximization (E-M) algorithm (first type) and the Gibbs sampling algorithm (second type), a transition to a strongly non-equilibrium state occurs. The initial distributions of words and documents in matrices Φ and Θ for Gibbs sampling methods are flat; however, in E-M models, the initial distribution is determined by a random number generator. For both types of algorithm, the initial distribution corresponds to the maximum entropy of the topic model. Regardless of the algorithm type and the procedure of initialization, the redistribution of words and documents by topics proceeds so that a significant portion of words (about 95% of all unique words) acquires probabilities close to zero and only about 3-5% receive probabilities above a threshold 1/W [28]. Numerical experiments demonstrate that the number of words with high probabilities depends on the number of topics and the values of model parameters, which allows constructing a theoretical approach for analyzing such dependency from the perspective of statistical physics [29].

The rest of the paper proceeds as follows. Section 2.1 reviews the standard metrics which are used in the field of machine learning, the relationships between these metrics, and their differences. Section 2.2 describes the concept and basic assertions of our new method. Section 2.3 is devoted to the adaptation of Renyi entropy for the analysis of TM results. Sections 2.4 and 2.5 present the adaptation of Sharma-Mittal entropy for the analysis of TM results, leading to a new quality metric in the field of TM. The relations of this new metric to the standard ones are also presented throughout Section 2.
Section 3 shows numerical results of the application of our new metric to the analysis of TM outputs. We demonstrate the results of simulations run on two datasets by using two TM algorithms, namely probabilistic Latent Semantic Analysis (pLSA) and Latent Dirichlet Allocation (LDA) with Gibbs sampling. In Section 3, we also demonstrate the application of several standard metrics to TM results and compare them with our new metric. Section 4 summarizes the functionality of our two-parametric entropy approach and proposes directions for future research. Appendix A contains a short discussion of topic models and a detailed description of the models that were used in numerical experiments. Appendix B contains numerical results on applying another metric, which is called "semantic coherence", to the outputs of TM and demonstrates difficulties when using this metric for tuning model parameters.

Methods for Analyzing the Results of Topic Modeling

The results of TM depend on the parameters of models, such as the "number of topics", hyper-parameters of Dirichlet distributions, or regularization coefficients, since these parameters are included explicitly in the mathematical formulation of the model. In the literature on TM, the most frequently-used metrics for analyzing topic models are the following.

1. Shannon entropy and relative entropy. Shannon entropy is defined according to the following equation [19,30,31]: S = −∑_{i=1}^{n} p_i ln(p_i), where p_i, i = 1, ..., n, are distribution probabilities of a discrete random value with possible values {x_1, ..., x_n}. Relative entropy is defined as follows [32]: KL(p||p̃) = ∑_{i=1}^{n} p_i ln(p_i/p̃_i), i.e., the difference between the cross-entropy of p and p̃ and Shannon entropy. Relative entropy is also known as Kullback-Leibler (KL) divergence. In the field of statistical physics, it was demonstrated that KL divergence is closely related to free energy. In the work [33], it was shown that in the framework of Boltzmann-Gibbs statistics, KL divergence can be expressed as follows: KL(p||p̃) = q(F − F̃), where p is the probability distribution of the system residing in the non-equilibrium state, p̃ is the probability distribution of the system residing in the equilibrium state, q = 1/T, T is the temperature of the system, and F and F̃ are the corresponding free energies. Hence, KL divergence is nothing but the difference between the free energies of off-equilibrium and equilibrium. The difference between free energies is a key characteristic of the entropy approach [29], which is to be discussed further below in Sections 2.2 and 2.3. The variant of KL divergence used in TM is also discussed in Paragraph 3 of this section.

2. Log-likelihood and perplexity: One of the most-used metrics in TM is the log-likelihood, which can be expressed through matrices Φ and Θ in the following way [21,34]: ln(P(D|Φ, Θ)) = ∑_{d=1}^{D} ∑_{w=1}^{W} n_dw ln(∑_{t=1}^{T} φ_wt θ_td), where n_dw is the frequency of word w in document d. A better model will yield higher probabilities of documents, on average [21]. In addition, we would like to mention that the procedure of log-likelihood maximization is a special case of minimizing Kullback-Leibler divergence [35]. Another widely-used metric in machine learning, and in TM particularly, is called perplexity. This metric is related to likelihood and is expressed as: perplexity = exp(−ln(P(D|Φ, Θ)) / ∑_{d=1}^{D} n_d), where n_d is the number of words in document d. Perplexity behaves as a monotone decreasing function [36]. The lower the perplexity score, the better. In general, perplexity can be expressed in terms of cross-entropy as follows: perplexity = 2^entropy or perplexity = e^entropy [37], where "entropy" is cross-entropy.
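As a concrete illustration of these two metrics, the following minimal numpy sketch evaluates the formulas above; the names phi, theta, and counts are our own placeholders for Φ (W × T), Θ (T × D), and the frequency matrix n_dw of a trained model, not part of the original text.

```python
import numpy as np

def log_likelihood(phi, theta, counts):
    """ln P(D|Phi, Theta) = sum_{d,w} n_dw * ln(sum_t phi_wt * theta_td).

    phi:    W x T matrix of p(w|t); theta: T x D matrix of p(t|d);
    counts: W x D matrix of word frequencies n_dw.
    """
    p_wd = phi @ theta                  # W x D matrix of p(w|d)
    mask = counts > 0                   # only observed word-document pairs contribute
    return float(np.sum(counts[mask] * np.log(p_wd[mask])))

def perplexity(phi, theta, counts):
    """perplexity = exp(-ln P(D|Phi, Theta) / total number of tokens)."""
    return float(np.exp(-log_likelihood(phi, theta, counts) / counts.sum()))
```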
The application of perplexity for selecting values of model parameters was discussed in many papers [10,17,21,34,38,39]. In a number of works, it was demonstrated that perplexity behaves as a monotonously-decreasing function of the number of iterations, which is why perplexity has been proposed as a convenient metric for determining the optimal number of iterations in TM [11]. In addition, the authors of [12] used perplexity for searching for the optimal number of topics. However, the use of perplexity and log-likelihood has some limitations, which were demonstrated in [40]. The authors showed that perplexity depends on the size of the vocabulary of the collection for which TM is implemented. The dependence of the perplexity value on the type of topic model and the size of the vocabulary was also demonstrated in [41]. Hence, comparison of topic models for different datasets and in different languages by means of perplexity is complicated. Many numerical experiments described in the literature demonstrate monotone behavior of perplexity as a function of the number of topics. Unlike the task of determining the number of iterations, the task of finding the number of topics is sensitive to this feature, and fulfillment of the latter task appears to be complicated by it. In addition, calculation of perplexity and log-likelihood is extremely time consuming, especially for large text collections.

3. Kullback-Leibler divergence: Another measure that is frequently used in machine learning is the Kullback-Leibler divergence (KL) or relative entropy [32,42,43]. However, in the field of TM, symmetric KL divergence is most commonly used. This measure was proposed by Steyvers and Griffiths [20] for determining the number of stable topics: KL(i, j) = (1/2) ∑_{w=1}^{W} φ_wi ln(φ_wi/φ′_wj) + (1/2) ∑_{w=1}^{W} φ′_wj ln(φ′_wj/φ_wi), where φ and φ′ correspond to topic-word distributions from two different runs; i and j are topics. Therefore, this metric measures dissimilarity between topics i and j. Let us note that KL divergence is calculated for the same words in different topics; thus, the semantic component of topic models is taken into account. This metric can be represented as a matrix of size T × T, where T is the number of topics in the compared topic models. The minimum of KL(i, j) characterizes the measure of similarity between topics i and j. If KL(i, j) ≈ 0, then topics i and j are semantically identical. An algorithm for searching for the number of stable topics for different topic models was implemented [17] based on this measure. In this approach, pair-wise comparisons of all topics of one topic solution with all topics of another topic solution were done. Hence, if a topic is stable from the semantic point of view, then it reproduces regularly for each run of TM. In [16], it was shown that different types of regularization lead to different numbers of stable topics for the same dataset. The disadvantage of this method is that this metric does not allow comparing one topic solution with another as a whole; one can only obtain a set of pair-wise compared word distributions for separate topics. No generalization of this metric for solution-level comparisons has been offered yet.

4. The Jaccard index and entropy distance: Another widely-used metric in the field of machine learning is the Jaccard index, also known as the Jaccard similarity coefficient, which is used for comparing the similarity and diversity of sample sets. The Jaccard coefficient is defined as the cardinality of the intersection of the sample sets divided by the cardinality of the union of the sample sets [23].
Mathematically, it is expressed as follows. Assume that we have two sets X and Y. Then, one can calculate the following values: a is the number of elements of X that are absent in Y; b is the number of elements of Y that are absent in X; c is the number of common elements of X and Y. The Jaccard coefficient is J = c/(a + b + c), where c = |X ∩ Y|, |X ∪ Y| = a + b + c, and | · | is the cardinality of a set. The Jaccard coefficient J = 1 if sets are totally similar and J = 0 if sets are totally different. This coefficient is used in machine learning for the following reasons. Kullback-Leibler divergence characterizes similarity based on the probability distribution. This means that two topics are similar if the word distributions for them have similar values. At the same time, the Jaccard coefficient demonstrates the number of identical words in topics, i.e., it reflects another point of view on the similarity of topics. The combination of the two similarity measures allows for deeper analysis of TM results. In addition, the Jaccard distance is often used, which is defined as [22]: J(X, Y) = 1 − c/(a + b + c). This distance equals zero if sets are identical. The Jaccard distance also plays an important role in computer science, especially in research on "regular language" [44,45], and is related to entropy distance as follows [22]: D_H(X, Y) = 1 − I(X; Y)/H(X, Y), where I(X; Y) is the mutual information of X and Y, and H(X, Y) is the joint entropy of X and Y. In the standard set-theoretic interpretation of information theory, the mutual information corresponds to the intersection of sets X and Y and the joint entropy to the union of X and Y, and hence, the entropy distance corresponds to the Jaccard distance [22]. Correspondingly, if J(X, Y) = 0, then D_H(X, Y) = 0 as well. The paper [22] proposes to use the Jaccard coefficient as a parameter of entropy, but not for TM tasks, while we incorporate it into our two-parametric entropy approach to TM specifically.

5. Semantic coherence: This metric was proposed to measure the interpretability of topics and was demonstrated to correspond to human coherence judgments [17]. Topic coherence can be calculated as follows [17]: C(t; V^(t)) = ∑_{m=2}^{M} ∑_{l=1}^{m−1} ln((D(v_m^(t), v_l^(t)) + 1)/D(v_l^(t))), where V^(t) = (v_1^(t), ..., v_M^(t)) is a list of the M most probable words in topic t, D(v) is the number of documents containing word v, and D(v, v′) is the number of documents where words v and v′ co-occur. The authors of [17] proposed to consider the values M = 5, ..., 20. To obtain a single coherence score of a topic solution, one needs to aggregate the obtained individual topic coherence values. In the literature, one can find that aggregation can be implemented by means of the arithmetic mean, median, geometric mean, harmonic mean, quadratic mean, minimum, and maximum [46]. Coherence can also be used for determining the optimal number of topics; however, in paper [47], it was demonstrated that the coherence score monotonously decreases if the number of topics increases.

6. Relevance: This is a measure that allows users of TM to rank terms in the order of their usefulness for topic interpretation [24]. This measure is similar to a measure proposed in [48], where a term's frequency is combined with the exclusivity of the word (exclusivity is the degree to which a word's occurrences are limited to only a few topics).
The relevance of term w to topic t given a weight parameter λ (0 ≤ λ ≤ 1) can be expressed as: r(w, t|λ) = λ·log(φ_wt) + (1 − λ)·log(φ_wt/p_w), where λ determines the weight given to φ_wt relative to its lift φ_wt/p_w, and p_w is the empirical term probability, which can be calculated as p_w = ∑_d n_dw / ∑_d n_d, with n_dw being a count of how many times the term w appears in document d and n_d being the total term count in document d, namely, n_d = ∑_w n_dw. The authors of [24] proposed to take the default value of λ = 0.6 according to their user study; however, in general, it is not clear how to choose the optimal value of λ for a particular dataset. Furthermore, relevance is a topic-level measure that cannot be generalized for an entire solution, which is why it is not used further in this research.

Minimum Cross-Entropy Principles in Topic Modeling

As was shown above, TM parameter estimation and assessment of semantic stability are separate processes based on several unrelated metrics. Therefore, it is necessary to develop a single approach that would include a number of metrics and would allow solving two problems simultaneously, namely the optimization of both semantic stability and other parameters. Such an approach can be developed on the basis of the cross-entropy minimum principle (minimum of KL divergence). In doing so, this principle can be implemented in two ways: (1) by constructing an entropic metric and searching for the minimum of this metric under variation of different topic model parameters, where TM is conducted using standard algorithms; (2) by creating an algorithm for restoring hidden distributions based on cross-entropy minimization. A version of the TM algorithm close to the second approach was considered in [49], where symmetric KL divergence was added to the model based on log-likelihood maximization. However, this model included regularization using only matrix Θ, and one has to set the regularization coefficient (the parameter called η) explicitly. In our work, we consider only the first approach, i.e., searching for optimal parameters of the topic model based on the entropy metric, which takes into account the distribution of words by topics and the semantic stability of topics under variation of different model parameters. By the "optimal" number of topics for a dataset, we mean the number of topics that corresponds to human judgment.

We propose a method for tuning topic models, which is based on the following assertions [29,50], which create a linkage between TM and statistical physics and reformulate the problem of model parameter optimization in terms of thermodynamics: (1) A collection of documents is considered a mesoscopic information system: a statistical system where the elements are words and the documents number in the millions. Correspondingly, the behavior of such a system can be studied by application of models from statistical physics. (2) The total number of words and documents in the information system under consideration is constant (i.e., the system volume is not changed). (3) A topic is a state (an analogue of spin direction) that each word and document in the collection can take. Here, a word and a document can belong to different topics (spin states) with different probabilities. (4) A solution of topic modeling is a non-equilibrium state of the system. (5) Such an information system is open and exchanges energy with the environment via changing the temperature.
Here, the temperature of the information system is the number of topics, which is a parameter and should be selected by searching for the minimum of KL divergence. (6) Since KL divergence is proportional to the difference of free energies, to measure the degree to which a given system is non-equilibrium, one can use the following expression: Λ_F = F(T) − F_0, where F_0 is the free energy of the initial state (chaos) of the topic model and F(T) is the free energy after TM for a fixed number of topics T [50]. (7) The minimum of Λ_F depends on topic model parameters such as the number of topics and other hyper-parameters. (8) The optimal number of topics and the set of optimal hyper-parameters of the topic model correspond to the situation when the information maximum (in terms of non-classical entropy) is reached. If one does not take semantic stability into account, then the information maximum corresponds to the Renyi entropy minimum [29]. However, in our work, we aim to consider the semantic stability of topics; hence, the information maximum will depend on the semantic component.

It is known that in topic models the sum of the probabilities of all words equals the number of topics: T = ∑_{t=1}^{T} ∑_{w=1}^{W} p_wt, where p_wt ∈ [0, 1] for all w = 1, ..., W; t = 1, ..., T. In the framework of statistical physics, it is common to investigate the distribution of statistical systems by energy levels, where energy is expressed in terms of probability. In accordance with such an approach, we divide the range of probabilities [0, 1] into a fixed number of intervals, determine energy levels corresponding to these intervals, and then seek the number of words belonging to each energy level. Let us note that these values depend on the number of topics and the values of the hyper-parameters of a topic model. Division into intervals is convenient from a computational point of view. If the lengths of such intervals tend to zero, the distribution of words by intervals will tend to the probability density function. However, for simplification, we will consider a two-level system, where the first level corresponds to words with high probabilities and the second level corresponds to words with small probabilities close to zero. Therefore, we introduce the density-of-states function for words with high probabilities under a fixed number of topics and a fixed set of parameters: ρ = N/(WT), where N is the number of words with high probabilities. By high probability, we mean a probability satisfying p > 1/W. The choice of such a level is informed by the fact that the values 1/W are the initial values of matrix Φ for a topic model. The value W·T determines the total number of micro-states of the topic model (the size of matrix Φ) and normalizes the density-of-states function. During the process of TM, the probabilities of words redistribute with respect to the above threshold 1/W. A small part of the words has probabilities higher than the threshold level, while the larger part of the words has probabilities lower than that. The energy of the upper level containing the states with high probabilities is expressed as follows:

E = −ln(T P̃), with P̃ = (1/T) ∑_{t=1}^{T} ∑_{w=1}^{W} p_wt Ω(p_wt − 1/W),   (1)

where the step function Ω(·) is defined by Ω(p_wt − 1/W) = 1 if p_wt ≥ 1/W and Ω(p_wt − 1/W) = 0 if p_wt < 1/W. Therefore, in Equation (1), we sum only the probabilities that are greater than 1/W. The energy of the lower level is expressed analogously, except that the summing occurs for probabilities that are smaller than 1/W.
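The following minimal sketch computes the two-level quantities from a word-topic matrix under our reading of Equation (1) above; the threshold p > 1/W and the normalization by W·T follow the definitions in the text, while the exact treatment of ties at the threshold is our assumption.

```python
import numpy as np

def two_level_quantities(phi):
    """Two-level statistics of a W x T word-topic matrix phi:
    rho     -- density-of-states function, N / (W * T);
    p_high  -- normalized sum of high probabilities (P-tilde);
    energy  -- energy of the upper level, chosen so that E_0 = -ln(T)
               in the initial flat state p_wt = 1/W."""
    W, T = phi.shape
    high = phi > 1.0 / W                # micro-states of the upper level
    N = int(np.count_nonzero(high))     # number of high-probability entries
    rho = N / (W * T)
    p_high = phi[high].sum() / T        # normalized high-probability mass
    energy = -np.log(T * p_high)
    return rho, p_high, energy
```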
A level is characterized by two parameters: (1) the normalized sum of probabilities of the micro-states that lie in the corresponding interval, P̃; (2) the normalized number of micro-states (density-of-states function), ρ, whose probabilities lie in this interval. Let us note that the density-of-states function is sometimes called the statistical weight of a complex system's level. For a two-level system, the main contribution to the entropy and energy of the whole system is made by the states with high probabilities, that is, mainly by the upper level. Respectively, the free energy of the whole system is almost entirely determined by the entropy and the energy of the upper level. The free energy of a statistical system can be expressed through Gibbs-Shannon entropy and the internal energy in the following way [51]: F = E − TS. The entropy of such a system can be expressed through the number of micro-states belonging to the same level [52]: S = ln(N). It follows that the difference of the free energies of the topic model is expressed through P̃ and ρ in the following way:

Λ_F = (E − E_0) − T(S − S_0) = −ln(P̃) − T ln(ρ),   (2)

where E_0 and S_0 are the energy and the entropy of the initial state of the system, with E_0 = −ln(T) and S_0 = ln(WT). Hence, the degree to which a given system is non-equilibrium can be defined as the difference between the two free energies and expressed in terms of the experimentally-determined values ρ and P̃. The values ρ and P̃ were calculated for each topic model under variation of the parameter T and the hyper-parameters, i.e., Λ_F is a function of the number of topics T, the hyper-parameters, and the size of the vocabulary W.

Renyi Entropy of the Topic Model

Using the partition function

Z_q = e^{−q Λ_F} = ρ P̃^q, with q = 1/T [53],   (3)

one can express Renyi entropy in Beck notation through the free energy [54] and through the experimentally-determined values ρ and P̃:

S_q^R = ln(Z_q)/(q − 1) = (q ln(P̃) + ln(ρ))/(q − 1),   (4)

where, again, q = 1/T. The choice of entropy in Beck notation is determined by the following considerations. Firstly, constructing topic models with just one or two topics is meaningless in terms of their informativeness for end users. Correspondingly, the entropy of such a model should be large. Secondly, an excessive increase of the number of topics leads to a flat distribution of words by topics, which, again, should lead to a large value of entropy. Thirdly, both q and Z_q calculated for words with high probabilities are less than one. Correspondingly, if we normalize this value by 1 − q, we will obtain a negative value of Renyi entropy. Taking into account the necessity to have maximum entropies at the boundaries of the range of the number of topics, the normalization coefficient q − 1 should be used. Summing up the advantages of Renyi entropy application to TM, the following can be said. First, since the calculation of Renyi entropy is based on the difference of free energies (i.e., on KL divergence or relative entropy), it is convenient to use Renyi entropy as a measure of the degree to which a given system is in non-equilibrium, and this is what we do in our approach. Second, Renyi entropy, in contrast to Gibbs-Shannon entropy, allows taking into account two different processes: a decrease in Gibbs-Shannon entropy and an increase in internal energy, both of which occur with the growth of the number of topics. The difference between these two processes can have an area of balance where the two processes counterbalance each other. In this area, Renyi entropy reaches its minimum. Third, the search for the Renyi entropy minimum (i.e., minimum of KL divergence) can be convenient for optimizing regularization coefficients in topic modeling.
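Continuing the sketch, Equation (4) reduces to a few lines on top of the two-level quantities; two_level_quantities is the helper defined in the previous sketch, and T > 1 is assumed so that q ≠ 1.

```python
import numpy as np

def renyi_entropy(phi):
    """Renyi entropy of a topic solution, Equation (4):
    S_R = (q * ln(P_tilde) + ln(rho)) / (q - 1), with q = 1 / T.
    Assumes T > 1 and a non-empty upper level (rho > 0)."""
    _, T = phi.shape
    rho, p_high, _ = two_level_quantities(phi)
    q = 1.0 / T
    return (q * np.log(p_high) + np.log(rho)) / (q - 1.0)
```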
As mentioned above, a relative drawback of Renyi entropy here is the impossibility of taking into account the semantic component of topic models, since it is expressed only through the density-of-states function and the energy of the level. However, this drawback can be overcome by using two-parametric Sharma-Mittal entropy, where one of the deformation parameters is taken as q = 1/T and the second deformation parameter corresponds to the semantic component of a topic model.

Sharma-Mittal Entropy in Topic Modeling

Sharma-Mittal two-parametric entropy, proposed in [55], has been discussed in many works [56-58]. The main emphasis in these papers was made on the investigation of its mathematical properties [56,59,60] or the application of this entropy when constructing generalized non-extensive thermodynamics [61]. In the field of machine learning, Sharma-Mittal entropy is used in a few works, for instance, in [62]. Two-parametric Sharma-Mittal entropy can be written as:

S_SM = (1/(1 − r)) [ (∑_i p_i^q)^{(1−r)/(1−q)} − 1 ],   (5)

where r and q are deformation parameters. The essence of the deformation parameters r and q for TM can be determined based on consideration of limit cases. One can show that lim_{r→1} S_SM = S_q^R and lim_{r→0} S_SM = exp(S_q^R) − 1. Since in TM deformation parameter q can be defined through the number of topics (q = 1/T), in order to use Sharma-Mittal entropy for the purposes of TM, one has to define the meaning of parameter r. Let us note that r ∈ [0; 1] according to [55]. In addition, if r → 1, then Sharma-Mittal entropy transforms into Renyi entropy; hence, in this case, the quality of the topic model is defined only by Renyi entropy and deformation parameter q, i.e., by the number of topics. If r → 0, then the value of entropy becomes large, since lim_{r→0} S_SM = exp(S_q^R) − 1. Based on the principle that maximum entropy corresponds to the information minimum, we conclude that the minimum value of parameter r corresponds to the minimum information and maximum entropy. Taking into account that entropy can be parameterized by the Jaccard coefficient and that the semantic distance between two topic solutions can be estimated by the entropy distance, we define r as a parameter responsible for the semantic stability of the topic model under variation of the number of topics or hyper-parameters. Therefore, we set the value of r equal to the value of the Jaccard coefficient (i.e., r := J, where J is the Jaccard coefficient calculated for the sets of the most probable words for each pair of topic solutions). Consequently, 1 − r = J_d(W′, W″) is the entropy distance or Jaccard distance, where W′ and W″ are the sets of the most probable words of the first topic solution and the second topic solution, correspondingly.

Sharma-Mittal Entropy for a Two-Level System

Based on Equations (4) and (5) and the statistical sum (3), the Sharma-Mittal entropy of the topic model in terms of the experimentally-determined values ρ and P̃ can be defined as:

S_SM = (1/(1 − r)) [ (ρ P̃^q)^{(1−r)/(1−q)} − 1 ].   (6)

On the one hand, application of Sharma-Mittal entropy allows estimating the optimal values of topic model parameters, such as hyper-parameters and the number of topics, by means of searching for the minimum entropy, which, in turn, is characterized by the difference of entropies between the initial distribution and the distribution obtained after TM. On the other hand, it allows estimating the contribution of the semantic difference between any two topic solutions that, in turn, is influenced by the values of hyper-parameters and the number of topics.
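A sketch of Equation (6) follows, with r estimated as the Jaccard coefficient of the sets of most probable words of two topic solutions. How exactly these sets are formed is not fixed by the text; taking the top_n words by their maximal probability over topics is our assumption.

```python
import numpy as np

def jaccard_top_words(phi_a, phi_b, top_n=100):
    """Jaccard coefficient of the most-probable-word sets of two solutions."""
    top_a = set(np.argsort(phi_a.max(axis=1))[-top_n:])
    top_b = set(np.argsort(phi_b.max(axis=1))[-top_n:])
    return len(top_a & top_b) / len(top_a | top_b)

def sharma_mittal_entropy(phi, r):
    """Sharma-Mittal entropy of a topic solution, Equation (6):
    S_SM = ((rho * P_tilde**q)**((1 - r) / (1 - q)) - 1) / (1 - r),
    with q = 1/T; assumes T > 1 and r < 1 (r -> 1 is the Renyi limit)."""
    _, T = phi.shape
    rho, p_high, _ = two_level_quantities(phi)   # helper from the sketch above
    q = 1.0 / T
    z_q = rho * p_high ** q                      # statistical sum, Equation (3)
    return (z_q ** ((1.0 - r) / (1.0 - q)) - 1.0) / (1.0 - r)

# Example: r is the Jaccard coefficient of two neighboring solutions, e.g.
# s = sharma_mittal_entropy(phi_t, jaccard_top_words(phi_t, phi_t_prev))
```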
Hence, the optimal values of topic model parameters correspond to the minimum of Sharma-Mittal entropy, and the worst values of parameters correspond to the maximum of the entropy.

Data and Computational Experiments

For our numerical experiments, the following datasets were used:

• Russian dataset (from the Lenta.ru news agency): a publicly-available set of 699,746 news articles in the Russian language dated between 1999 and 2018 from the Lenta.ru online news agency (available at [63]). Each news item was manually assigned to one of ten topic classes by the dataset provider. We considered a class-balanced subset of this dataset, which consisted of 8624 news texts (containing 23,297 unique words); it is available at [64]. Below, we provide statistics on the number of documents with respect to categories (Table 1). Some of these topics are strongly correlated with each other. Therefore, the documents in this dataset can be represented by 7-10 topics.

• English dataset (the well-known "20 Newsgroups" dataset, http://qwone.com/~jason/20Newsgroups/): 15,404 English news articles containing 50,948 unique words. Each of the news items belonged to one or more of 20 topic groups. Since some of these topics can be unified, 14-20 topics can represent the documents of this dataset [65]. This dataset is widely used to test machine learning models.

We conducted our numerical experiments using pLSA and LDA with Gibbs sampling. These models represent two different types of algorithms: the LDA model used here was based on the Gibbs sampling procedure, and the pLSA model was based on the E-M algorithm. A detailed description of these models can be found in Appendix A. Experiments on these models allowed us to estimate the usability of Sharma-Mittal entropy for the two main types of algorithms. Topic modeling was conducted using the following software implementations: the package "BigARTM" (http://bigartm.org) was used for pLSA, and GibbsLDA++ (http://gibbslda.sourceforge.net) for LDA (Gibbs sampling). All source codes were integrated into a single package, "TopicMiner". The choice of the pLSA model was determined by the fact that this model has only one parameter, the number of topics; correspondingly, we can isolate the effect of this parameter on the values of the above metrics.

Figure 1 plots the log-likelihood as a function of the number of topics for both datasets. One can see that increasing the number of topics led to a smooth increase of the log-likelihood. Thus, these curves did not allow determining the optimal number of topics due to the absence of any clear extrema. The difference between these two curves resulted from the different sizes of the vocabularies and the different numbers of documents in the corresponding datasets. Renyi entropy was then calculated according to Equation (4). The exact minimum of Renyi entropy for the Russian dataset was seven and for the English dataset 16. However, as was noted, being an ill-posed problem, topic modeling produced different results on different runs of the same algorithm, which was especially true for pLSA. From previous research [29], it is known that the range of such variation between the runs is approximately ±3 topics. Therefore, it makes more sense to look at the range of the neighboring minima rather than at the exact minimum. It can be seen that the numbers of topics defined by humans, when corrected for inter-topic correlation, lay within the discovered ranges in both datasets, which suggests the language-independent character of this metric (at least for European languages).
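The selection procedure just described can be condensed into a short driver loop; train_topic_model is a hypothetical stand-in for any training routine (pLSA or LDA) that returns a W × T matrix Φ for a given number of topics, and renyi_entropy is the helper sketched earlier.

```python
def select_number_of_topics(train_topic_model, t_range):
    """Scan the number of topics and return the Renyi-entropy minimizer,
    together with the whole entropy curve for inspecting neighboring minima."""
    entropies = {T: renyi_entropy(train_topic_model(T)) for T in t_range}
    best_t = min(entropies, key=entropies.get)
    return best_t, entropies

# Example usage: best_t, curve = select_number_of_topics(train, range(2, 51))
```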
As Renyi entropy does not include an instrument to evaluate the semantic stability of topic models, we calculated Jaccard coefficients under variation of the number of topics. Figure 3 presents a "heat map" of Jaccard coefficients for the dataset in the Russian language. The matrix containing the Jaccard coefficients was symmetric with respect to the main diagonal, and this is the reason why only half of this matrix is depicted. The structure of the "heat map" of the Jaccard index for the English dataset was similar to that for the Russian dataset and can be found in [29]. As Figures 3 and 4 demonstrate, there are areas of sharp decreases in semantic similarity between topic solutions with different numbers of topics. In order to incorporate the "density-of-states" function, the probabilities of words, and semantic similarity under variation of the parameter "number of topics", we calculated Sharma-Mittal entropy according to Equation (6) for the pLSA model on both datasets. Figure 8 shows Sharma-Mittal entropy for the pLSA model in two versions: a 3D picture and its view from above. Together, they show that Sharma-Mittal entropy has areas of minima and maxima, the overall shape of the curve being determined by the number of topics and the local fluctuations resulting from the fluctuations of the Jaccard distance. In practice, however, we propose to consider only two-dimensional versions of this figure (e.g., Figure 6), where the Jaccard index is calculated only for the neighboring solutions. Such plots are easier to interpret, and at the same time, they demonstrate the influence of semantic stability. The exact values of the Sharma-Mittal entropy minimum are the following: T = 20 for the English dataset and T = 7 for the Russian dataset. The horizontal shift of the Sharma-Mittal entropy minimum as compared to the Renyi entropy minimum on the English dataset is an effect of the sharp fall of the Jaccard coefficient observed in the range of 14-16 topics. It follows that application of Sharma-Mittal entropy for models based on the E-M algorithm allows determining the optimal number of topics while involving the semantic stability of topics. Figures that demonstrate the behavior of semantic coherence for these datasets can be found in Appendix B. We do not provide them here since the curves monotonously decrease, with some fluctuations, but without any clear extrema, thus providing no criteria for choosing the topic number.

Results for the LDA with Gibbs Sampling Model

The difference between the pLSA model and the LDA Gibbs sampling model is not only in the application of the Monte Carlo algorithm for determining hidden distributions, but also in the presence of a regularization procedure. The level of regularization in LDA with Gibbs sampling is determined by the hyper-parameters α and β. In our numerical experiments, we used the algorithm of [11], where the hyper-parameters of the LDA model were fixed and did not change from iteration to iteration, since our goal was to analyze the results of the LDA model with respect to different values of the hyper-parameters. Figure 9 plots the log-likelihood for the Russian dataset as a function of T for pLSA and for LDA with different fixed values of α or β. The behavior of the log-likelihood for the English dataset was similar to that for the Russian dataset, and therefore, we do not provide the figure. Using the results of calculations (Figure 9), one can conclude that the log-likelihood metric allows estimating the effect of regularization in the LDA Gibbs sampling model.
Namely, it can be seen that the largest values of the regularization coefficients (blue curve) led to the lowest values of the log-likelihood, while according to [21,34], the optimal topic model should correspond to the maximum log-likelihood. According to our numerical results, the maximum log-likelihood corresponds to the pLSA model, that is, to zero regularization of LDA. Let us note that a similar result was obtained in [66], where, according to human mark-up, pLSA was shown to perform better than LDA (viewed as a regularized pLSA) and than pLSA regularized with decorrelation and sparsing-smoothing approaches for the task of revealing ethnicity-related topics. Figures 10 and 11 plot Renyi entropy as a function of T for different values of α and β for the Russian and English datasets. Calculations demonstrated that application of Renyi entropy and the log-likelihood allows estimating the influence of regularization in TM. Namely, larger regularization coefficients led to higher entropy, i.e., to the model's deterioration. The exact minima of Renyi entropy were the following: (1) Russian dataset: T = 7 for α = 0.1, β = 0.1; T = 9 for α = 0.5, β = 0.1; T = 14 for α = 1, β = 1; (2) English dataset: T = 17 for α = 0.1, β = 0.1; T = 15 for α = 0.5, β = 0.1; T = 13 for α = 1, β = 1. It follows that Renyi entropy is useful for estimating topic model hyper-parameters for different datasets, at least in European languages. In addition, Renyi entropy is less sensitive to the size of the vocabulary, since this metric is normalized with respect to the initial state (chaos). However, as Renyi entropy for the LDA Gibbs sampling and pLSA models does not allow taking semantic stability into account, we further present our results on Sharma-Mittal entropy. These calculations demonstrate that the location of the jumps of Sharma-Mittal entropy, which are related to semantic stability, is almost independent of the regularization coefficients. However, in general, the entropy curves were lifted along the Y axis if the regularization coefficients increased. It follows that for LDA Gibbs sampling, the optimal values of both the α and β coefficients were small. It can be concluded that the results of regularization coefficient selection by means of Sharma-Mittal entropy were similar to those obtained with the log-likelihood and Renyi entropy; however, two-parametric entropy, unlike the other considered metrics, allowed incorporating semantic stability using the Jaccard distance. Under variation of the number of topics and with incorporation of the Jaccard coefficient, Sharma-Mittal entropy represents a three-dimensional structure with a set of local minima, which are determined by the number of topics and by semantic stability. These areas of local minima represent islands of stability. Figures 16 and 17 demonstrate the three-dimensional surfaces of Sharma-Mittal entropy for the Russian and English datasets and their projections onto the horizontal plane OT_1T_2. Numerical results on semantic coherence for LDA with Gibbs sampling can be found in Appendix B (Figures A3 and A4). However, as with pLSA, this metric fell monotonously and did not provide any criteria for the choice of the topic number.

Discussion

In this work, we proposed a new entropy-based approach for the multi-aspect evaluation of the performance of topic models. Our approach was based on two-parametric Sharma-Mittal entropy, that is, a twice-deformed entropy.
We considered the deformation parameter q to be the inverse of the number of topics and the second parameter, r, to be the Jaccard coefficient, so that 1 − r is the entropy distance. Our numerical experiments demonstrated that, firstly, Sharma-Mittal entropy, as well as Renyi entropy, allowed determining the optimal number of topics. Secondly, as the minimum of Sharma-Mittal entropy corresponded to the maximum of the log-likelihood, the former also allowed choosing the optimal values of hyper-parameters. Thirdly, unlike Renyi entropy or the log-likelihood, it allowed optimizing both hyper-parameters and the number of topics, simultaneously accounting for semantic stability. This became possible due to the existence of areas of semantic stability that have been shown to be characterized by low values of Sharma-Mittal entropy. According to our numerical results, the location of such areas did not depend on the hyper-parameters. However, on the whole, larger values of the hyper-parameters in LDA Gibbs sampling led to higher entropy, while small values made the LDA model almost identical to pLSA. This means that new methods of regularization are needed that would not impair TM performance in terms of entropy. We concluded that Sharma-Mittal entropy is an effective metric for the assessment of topic model performance, since it includes the functionality of several metrics.

However, our approach has certain limitations. First of all, topic models have an obvious drawback, which is expressed by the fact that the probabilities of words in topics depend on the number of documents containing these words. This means that if a topic is represented in a small number of documents, then the topic model will assign small probabilities to the words of this topic, and correspondingly, a user will not be able to see this topic. Thus, topic models can detect topics that are represented in many documents and poorly identify topics with a small number of documents. Therefore, Renyi entropy and Sharma-Mittal entropy allow determining the number of those large topics only. Secondly, in our work, Sharma-Mittal entropy was tested only for two European languages, while there are papers on the application of topic models to the Chinese, Japanese, and Arabic languages. Correspondingly, our research should be extended and tested on non-European languages. Thirdly, our metric allowed finding the global minimum when topic modeling was performed in a wide range of the number of topics; however, this process was resource-intensive, and in practice it can be applied to datasets containing up to 100-200 thousand documents. For huge datasets, this metric is not applicable. This problem might be partially solved by means of renormalization, which can be adapted for topic models from statistical physics. Research on the application of renormalization for the fast search of Renyi entropy and Sharma-Mittal entropy minima deserves a separate paper. Fourthly, we would like to note that our method was not embedded in algorithms of topic modeling. Therefore, in future research, the quality metric based on Sharma-Mittal entropy can be used for the development of new topic models. Sharma-Mittal entropy can be embedded in algorithms based on the Gibbs sampling procedure, where walks in the multi-dimensional space of words, hyper-parameters, and the number of topics will be determined by the level of this entropy.
Correspondingly, transitions along different axes of this multi-dimensional space can be guided by the entropy minimization principle. An algorithm similar to the annealing algorithm based on searching for the minimum of Tsallis entropy [67] can be used in this case. However, unlike the algorithm proposed by Tsallis, one can use the deformation parameter q as a parameter that controls the number of components in the mixture of distributions and search for the minimum while changing the number of components. Therefore, the walk in the multi-parameter space can be determined by the direction of the minimum of deformed entropy when changing the dimension of the space. For topic models based on the maximum log-likelihood principle, the sizes of the matrices are included in the model as external parameters, which are selected by the user. Correspondingly, new topic models can be developed in the future by using the principle of deformed logarithm maximization, where one of the deformation parameters corresponds to the sizes of the matrices (namely, the number of topics) and the other parameter corresponds to semantic stability (e.g., the Jaccard index). Note that both parameters here are maximization parameters. A more detailed discussion of these possible directions for research is out of the scope of this paper and can serve as a starting point for new research. Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; nor in the decision to publish the results. Abbreviations The following abbreviations are used in this manuscript: The mathematical formulation of this family of models is based on the fact that the matrix containing the distributions of words over documents, F, is represented as a product of Φ and Θ [15,27], so the problem F = ΦΘ is considered a problem of stochastic matrix decomposition, which is carried out using the Expectation-Maximization (E-M) algorithm. The logarithm of the likelihood is maximized to search for an approximate solution [10]. Let us note that the stochastic matrix decomposition is not unique and is defined only up to a non-degenerate transformation: $F = \Phi\Theta = (\Phi R)(R^{-1}\Theta) = \Phi'\Theta'$ [15]. This means that different topic solutions with the same number of topics can be assigned to the same initial set of words and documents (matrix F). The elements of the matrices Φ and Θ can differ under variation of the transformation matrix R. It follows that the problem of TM is ill-posed. Nowadays, many models with different types of regularization exist. One of the most widely used regularizers found in the literature is a product of conjugate distributions, namely, multinomial and Dirichlet distributions. In this case, the final distribution of words over topics and the distribution of topics over documents are Dirichlet distributions [10], in accordance with the properties of conjugate distributions. Another widely used variant of regularization is the additive regularization developed by Vorontsov [15]. In the framework of this approach, a set of functions characterizing the variant of regularization is added to the logarithm of the likelihood in the maximization problem, and the level of regularization is defined by the value of the regularization coefficient. For example, Gibbs-Shannon entropy or the Kullback-Leibler divergence can play the role of a regularizer.
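To make the ill-posedness noted above concrete, the following small NumPy check illustrates that the same matrix F admits different stochastic factorizations under an invertible mixing matrix R. The particular matrices are arbitrary illustrations.

```python
import numpy as np

# F = Phi @ Theta = (Phi @ R) @ (inv(R) @ Theta) for any invertible R.
# (The new factors are valid topic matrices only if they stay
# non-negative and column-stochastic, which constrains R.)
rng = np.random.default_rng(1)
Phi = rng.random((5, 2)); Phi /= Phi.sum(axis=0)        # p(w|t), 5 words x 2 topics
Theta = rng.random((2, 3)); Theta /= Theta.sum(axis=0)  # p(t|d), 2 topics x 3 docs
R = np.array([[0.9, 0.2], [0.1, 0.8]])                  # an invertible mixing matrix

F1 = Phi @ Theta
F2 = (Phi @ R) @ (np.linalg.inv(R) @ Theta)
print(np.allclose(F1, F2))  # True: the same F has different factorizations
```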
Despite the broad possibilities of additive regularization, this method does not assist in choosing the regularization parameters or in selecting the combination of regularizers [68]. Basically, the selection of regularization coefficients is carried out manually, taking into account perplexity stabilization [68]. In general, the problem of selecting regularizers and their coefficient values is still at the core of research for this type of model. An alternative variant of regularization is a model that takes into account the relations between words. Such information is taken externally (for instance, from the dataset) and is expressed in the form of a covariance matrix C of size W × W, where W is the number of unique words. A significant disadvantage of this type of regularization is the problem of calculating the covariance matrix, since the size of the dictionary of unique words can exceed one million words. However, from our point of view, this type of regularization is potentially promising, since the "word embedding" method is being actively developed in the framework of neural networks. This method allows calculating word co-occurrence matrices, which, in turn, can be incorporated into topic models. Nowadays, there are two hybrids of topic models and the "word embedding" model [69,70]. However, these models also suffer from instability, since "word embedding" algorithms themselves are unstable [71,72]. Let us consider the Probabilistic Latent Semantic Analysis (pLSA) model, which is used in our numerical experiments, in detail. In the framework of this model, the matrices Φ and Θ are determined as described in [27]. The probability of the entire dataset is expressed as $$p(D) = \prod_{d \in D} \prod_{w \in d} p(d, w)^{n(d, w)},$$ where p(d, w) is the joint probability distribution and n(d, w) counts the appearance frequency of the term w in the document d. Note that this model involves a conditional independence assumption, namely, d and w are independent conditioned on the state of the associated latent variable [27]. The estimation of the one-dimensional distributions is based on log-likelihood maximization with linear constraints: $$L(\phi, \theta) = \sum_{d \in D} \sum_{w \in d} n(d, w) \ln \sum_{t \in T} \phi_{wt}\, \theta_{td} \to \max, \qquad \phi_{wt} \ge 0, \ \sum_{w} \phi_{wt} = 1, \qquad \theta_{td} \ge 0, \ \sum_{t} \theta_{td} = 1.$$ The determination of a local maximum of L(φ, θ) is carried out using the Expectation-Maximization (E-M) algorithm. The initial approximations of φ_wt and θ_td are chosen randomly or uniformly before the first iteration. E-step: using Bayes' rule, the conditional probabilities p(t|d, w) are calculated for all t ∈ T and each w ∈ W, d ∈ D [73], namely: $$p(t \mid d, w) = \frac{\phi_{wt}\, \theta_{td}}{\sum_{s \in T} \phi_{ws}\, \theta_{sd}}.$$ M-step: using the conditional probabilities, new approximations of φ_wt, θ_td are estimated, namely: $$\phi_{wt} = \frac{\sum_{d \in D} n(d, w)\, p(t \mid d, w)}{\sum_{w \in W} \sum_{d \in D} n(d, w)\, p(t \mid d, w)}, \qquad \theta_{td} = \frac{\sum_{w \in W} n(d, w)\, p(t \mid d, w)}{\sum_{t \in T} \sum_{w \in W} n(d, w)\, p(t \mid d, w)}.$$ Thus, by alternating the E and M steps in a cycle, p(t|d) and p(w|t) can be estimated. Note that this model has no additional parameters except for "the number of topics", which defines the sizes of the matrices Φ, Θ; a minimal code sketch of this E-M iteration is given below.
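As a compact illustration of the E and M steps above, the following is a minimal NumPy sketch of the pLSA iteration. It is an unoptimized, dense-array illustration of the stated update rules, not the code used in the paper's experiments.

```python
import numpy as np

def plsa_em(F, T, n_iter=100, seed=0):
    """Minimal pLSA E-M sketch for a word-document count matrix F (W x D).

    Returns phi (W x T) approximating p(w|t) and theta (T x D)
    approximating p(t|d), per the update rules above.
    """
    rng = np.random.default_rng(seed)
    W, D = F.shape
    phi = rng.random((W, T)); phi /= phi.sum(axis=0, keepdims=True)
    theta = rng.random((T, D)); theta /= theta.sum(axis=0, keepdims=True)
    for _ in range(n_iter):
        # E-step: p(t|d,w) via Bayes' rule, shape (W, T, D)
        joint = phi[:, :, None] * theta[None, :, :]
        p_t_dw = joint / (joint.sum(axis=1, keepdims=True) + 1e-12)
        # M-step: re-estimate phi and theta from expected counts
        n_wt = (F[:, None, :] * p_t_dw).sum(axis=2)   # (W, T)
        n_td = (F[:, None, :] * p_t_dw).sum(axis=0)   # (T, D)
        phi = n_wt / (n_wt.sum(axis=0, keepdims=True) + 1e-12)
        theta = n_td / (n_td.sum(axis=0, keepdims=True) + 1e-12)
    return phi, theta
```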
Appendix A.2. Models Based on Monte-Carlo Methods This class of models represents a variant of the Potts model adapted for text analysis. Each document of the collection is considered a one-dimensional grid, where a node is a word. The number of spin states is taken as the number of topics. The difference between a topic model and the Potts model is that in TM a large number of documents is used. The probabilities of the distributions of words and documents can be estimated by means of expectations, provided the integrand is known. According to the approach of Blei [10], the probability density functions of multinomial and Dirichlet distributions are used as the integrand. The hidden distributions are determined by means of Gibbs sampling. Due to the presence of a set of local minima and maxima of the integrand, this type of model also possesses a certain degree of instability. One can also introduce regularizers for these types of models, for example, by fixing the assignments of some words to a range of topics [74]. This type of regularization behaves like a process of crystallization, where a layer of words (which are often found together with words from the core) is formed around the core containing the fixed words. Another variant of regularization is a modified Gibbs-sampling procedure, where sampling is implemented not for one word but for several words placed inside a window of fixed size [16]. As demonstrated in experiments, this variant of regularization gives a high level of stability; however, it produces a lot of "garbage topics", which cannot be interpreted. One can claim that the problem of optimizing the regularization procedure for models based on Gibbs sampling and determining the optimal number of topics is not completely solved. Let us consider the Latent Dirichlet Allocation (LDA) model with the Gibbs sampling procedure in detail. LDA is a topic model in which each topic is smoothed by the same regularizer in the form of a Dirichlet function [11]. Following Blei et al. [10], Dirichlet distributions with one-dimensional parameters β and α, respectively, are used in order to simplify the derivation of analytical expressions for the matrices Φ and Θ. In LDA, documents are generated by picking a distribution over topics θ from a Dirichlet distribution with parameter α; the words in the document are then generated by picking a topic t from this distribution and then picking a word from that topic according to the probabilities determined by φ_{·t} [11], where φ_{·t} is drawn from a Dirichlet distribution with parameter β. On this basis, the probability of the i-th word in a given document d is defined as follows [11]: $$p(w_{i,d}) = \sum_{j=1}^{T} p(w_{i,d} \mid z_{i,d} = j)\, p(z_{i,d} = j),$$ where p(w_{i,d} | z_{i,d} = j) is the probability of the word w_i in document d under the j-th topic and p(z_{i,d} = j) is the probability of choosing a word from topic j in the current document d; w_{i,d} is the i-th word in document d; the counter c_{d,j} is the number of words in document d assigned to topic j; the counter c_{w,j} is the number of times word w is assigned to topic j; ∑_{j=1}^{T} c_{d,j} is the total number of words in document d (i.e., the length of document d); and ∑_{w=1}^{W} c_{w,j} is the total number of words assigned to topic j. Correspondingly, φ and θ can be obtained as follows: $$\phi_{w,j} = \frac{c_{w,j} + \beta}{\sum_{w=1}^{W} c_{w,j} + W\beta}, \tag{A1}$$ $$\theta_{d,j} = \frac{c_{d,j} + \alpha}{\sum_{j=1}^{T} c_{d,j} + T\alpha}. \tag{A2}$$ The algorithm of calculation consists of three phases. The first one is the initialization of the matrices, counters, and parameters α and β, in addition to specifying the number of iterations. The counters, which define the initial values of the matrices Φ and Θ, are set as constants. So, the matrices are filled with constants; for example, Φ can be filled with a uniform distribution, where all elements of the matrix are equal to 1/W, with W the number of unique words in the collection of documents. The second phase (the sampling procedure) is an exhaustive pass through all the documents and all words in each document in a cycle. Each word w_i in a given document d is matched with a topic number, which is generated as follows: $$p(z_{i,d} = j \mid \mathbf{z}_{-i}, \mathbf{w}) \propto \frac{c^{-i}_{w,j} + \beta}{\sum_{w=1}^{W} c^{-i}_{w,j} + W\beta} \cdot \frac{c^{-i}_{d,j} + \alpha}{\sum_{j=1}^{T} c^{-i}_{d,j} + T\alpha},$$ where c^{-i}_{d,j} is the number of words from document d assigned to topic j not including the current word w_i, and c^{-i}_{w,j} is the number of instances of word w assigned to topic j not including the current instance i; c^{-i}_{d,j} and c^{-i}_{w,j} are called counters.
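The following is a minimal sketch of this sampling step, directly implementing the conditional above with the counters as defined; the variable names and the random-choice mechanism are illustrative.

```python
import numpy as np

def sample_topic(w, d, c_wj, c_dj, alpha, beta, rng):
    """One collapsed-Gibbs draw of z for word w in document d.

    c_wj: (W, T) word-topic counters; c_dj: (D, T) document-topic counters.
    Both are assumed to already exclude the current word's assignment
    (the 'minus i' counters in the formula above).
    """
    W, T = c_wj.shape
    p = ((c_wj[w] + beta) / (c_wj.sum(axis=0) + W * beta)) * \
        ((c_dj[d] + alpha) / (c_dj[d].sum() + T * alpha))
    p /= p.sum()                 # normalize the unnormalized conditional
    return rng.choice(T, p=p)    # sample the new topic index
```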
In this formula, the probabilities of the current word belonging to the different topics are calculated, and z_i is then sampled according to this distribution. The initial probability of word-topic matching is defined only by 1/W when a uniform distribution is taken as the initial approximation of the matrix Φ. However, after each word is matched to a topic, the values of the counters change and, hence, after a sufficient number of iterations, the counters contain the full statistics of the document collection under study. In the third phase, Φ and Θ are calculated according to Equations (A1) and (A2). Finally, the matrices are ready for manual analysis, where, for sociological analysis, only the most probable words and documents for each topic are considered. Note that the coefficients α and β defining the Dirichlet distributions are parameters of this model, which one has to select. The hyper-parameter β determines whether topics will have more sparse or more uniform distributions over words [21]. The hyper-parameter α determines the level of sparsity of the vectors θ_{·d}. If α = 1, then the Dirichlet distribution turns into the uniform distribution, while small values of α produce more sparse vectors θ_{·d}. Therefore, in general, the hyper-parameters α and β influence the sparsity of the matrices Φ and Θ [34]. The sparsity of the matrices influences, in turn, the number of topics that can appear in a document collection. Consequently, the number of topics may implicitly depend on the values of the hyper-parameters. Work [11] suggests a rule for selecting the hyper-parameters: α = 50/T and β = 0.01, where T is the number of topics. Such values of the parameters have been widely used in different studies [75][76][77]. Appendix A.3. Models Based on Hierarchical Dirichlet Process An alternative approach in TM is the hierarchical model based on Dirichlet processes (HDP) [13]. In paper [78], a two-level version of the hierarchical Dirichlet process (with split-merge operations) based on the Chinese Restaurant Franchise (CRF) is used. According to this paper, the Chinese Restaurant Franchise is associated with a topic model in the following way: a "restaurant" corresponds to a "document"; a "customer" corresponds to a "word"; a "dish" corresponds to a "topic". In this approach, "customers" are partitioned at the group level and "dishes" are partitioned at the top level. The customer partition represents the per-document partition of words; the top-level partition represents the sharing of topics between documents [78]. Let us note that the list of dishes (topics) is the same for all restaurants. Despite the fact that this type of model is referred to in the literature as non-parametric, it has a set of pre-defined parameters, which influence the results and the number of topics. Appendix B. Numerical Results on Semantic Coherence Appendix B.1. PLSA In order to calculate semantic coherence, we considered the 30 top words for each topic of the topic solutions for the two datasets from Section 3. Figure A1 demonstrates the behavior of individual topic coherence for the topic solution with 30 topics for the Russian and English datasets, where topics are ordered in descending order with respect to coherence. It is not obvious how to separate "good" and "bad" topics for the Russian dataset, since the change in topic coherence is nearly smooth. For the English dataset, one can see a dramatic decrease in the region of 25 topics; however, it does not correspond to the mark-up of this dataset.
Figure A2 demonstrates the aggregated semantic coherence for topic solutions with different topic numbers T for the Russian and English datasets. This figure does not allow us to choose the "optimal" number of topics, since the maximum coherence for the Russian dataset corresponds to T = 4, which is not close to the human mark-up. For the English dataset, we observe a number of peaks, which does not assist in selecting the number of topics. It follows that semantic coherence does not allow us to determine a single "optimal" number of topics for the pLSA model. Appendix B.2. LDA with Gibbs Sampling For the calculation of semantic coherence for the LDA models, we again considered the 30 top words for each topic. Figure A3 demonstrates the behavior of individual topic coherences for topic solutions with 30 topics for the Russian and English datasets, where topics are ordered in descending order with respect to coherence. However, it is not obvious how to choose the optimal number of topics, i.e., where to cut the line in order to separate "good" and "bad" topics for the Russian dataset. For the English dataset, we observe a sharp fall at T = 25; however, this number of topics does not correspond to the description of the dataset. Figure A4 demonstrates the aggregated semantic coherence for topic solutions with different topic numbers T for the Russian and English datasets. One can see that the maximum is reached at T = 4 for the Russian dataset, which does not correspond to the human annotation. For the English dataset, one can see a peak at T = 19; however, this peak is not unique, and it is not obvious which one should be chosen. Thus, we have demonstrated the limitations of semantic coherence as a method for selecting the number of topics.
13,993
2019-07-01T00:00:00.000
[ "Computer Science", "Mathematics" ]
Constrained Bayesian optimization for automatic chemical design using variational autoencoders Automatic Chemical Design is a framework for generating novel molecules with optimized properties. Introduction Machine learning in chemical design has shown promise along a number of fronts. In quantitative structure activity relationship (QSAR) modelling, deep learning models have achieved state-of-the-art results in molecular property prediction 1-8 as well as property uncertainty quantification. [9][10][11][12] Progress is also being made in the interpretability and explainability of machine learning solutions to chemical design, a subfield concerned with extracting chemical insight from learned models. 13 The focus of this paper, however, will be on molecule generation, leveraging machine learning to propose novel molecules that optimize a target objective. One existing approach for finding molecules that maximize an application-specific metric involves searching a large library of compounds, either physically or virtually. 14,15 This has the disadvantage that the search is not open-ended; if the molecule is not contained in the library, the search won't find it. A second method involves the use of genetic algorithms. In this approach, a known molecule acts as a seed and a local search is performed over a discrete space of molecules. Although these methods have enjoyed success in producing biologically active compounds, an approach featuring a search over an open-ended, continuous space would be beneficial. The use of geometrical cues such as gradients to guide the search in continuous space, in conjunction with advances in Bayesian optimization methodologies, 16,17 could accelerate both drug 14,18 and materials 19,20 discovery by functioning as a high-throughput virtual screen of unpromising candidates. Recently, Gómez-Bombarelli et al. 21 presented Automatic Chemical Design, a variational autoencoder (VAE) architecture capable of encoding continuous representations of molecules. In continuous latent space, gradient-based optimization is leveraged to find molecules that maximize a design metric. Although a strong proof of concept, Automatic Chemical Design possesses a deficiency insofar as it fails to generate a high proportion of valid molecular structures. The authors hypothesize 21 that molecules selected by Bayesian optimization lie in "dead regions" of the latent space far away from any data that the VAE has seen in training, yielding invalid structures when decoded. The principal contribution of this paper is to present an approach based on constrained Bayesian optimization that generates a high proportion of valid sequences, thus solving the training set mismatch problem for VAE-based Bayesian optimization schemes. SMILES representation SMILES strings 22 are a means of representing molecules as a character sequence. This text-based format facilitates the use of tools from natural language processing for applications such as chemical reaction prediction [23][24][25][26][27][28] and chemical reaction classification. 29 To make the SMILES representation compatible with the VAE architecture, the SMILES strings are in turn converted to one-hot vectors indicating the presence or absence of a particular character within a sequence, as illustrated in Fig. 1. Variational autoencoders Variational autoencoders 30,31 allow us to map molecules m to and from continuous values z in a latent space.
The encoding z is interpreted as a latent variable in a probabilistic generative model over which there is a prior distribution p(z). The probabilistic decoder is defined by the likelihood function p_θ(m|z). The posterior distribution p_θ(z|m) is interpreted as the probabilistic encoder. The parameters of the likelihood p_θ(m|z), as well as the parameters of the approximate posterior distribution q_φ(z|m), are learned by maximizing the evidence lower bound (ELBO) $$\mathcal{L}(\phi, \theta; m) = \mathbb{E}_{q_\phi(z|m)}\big[\log p_\theta(m, z) - \log q_\phi(z|m)\big].$$ Variational autoencoders have been coupled with recurrent neural networks by ref. 32 to encode sentences into a continuous latent space. This approach is followed for the SMILES format both by ref. 21 and here. The SMILES variational autoencoder, together with our constraint function, is shown in Fig. 2. The origin of dead regions in the latent space The approach introduced in this paper aims to solve the problem of dead regions in the latent space of the VAE. It is first, however, important to understand the origin of these dead zones. Three ways in which a dead zone can arise are: (1) Sampling locations that are very unlikely under the prior. This was noted in the original paper on variational autoencoders, 30 where sampling was adjusted through the inverse conditional distribution function of a Gaussian. (2) A latent space dimensionality that is artificially high will yield dead zones in the manifold learned during training. 33 This has been demonstrated to be the case empirically in ref. 34. (3) Inhomogeneous training data; undersampled regions of the data space are liable to yield gaps in the latent space. A schematic illustrating sampling from a dead zone, and the associated effect it has on the generated SMILES strings, is given in Fig. 3. In our case, the Bayesian optimization scheme is decoupled from the VAE and hence has no knowledge of the location of the learned manifold. In many instances the explorative behaviour in the acquisition phase of Bayesian optimization will drive the selection of invalid points lying far away from the learned manifold. Objective functions for Bayesian optimization of molecules Bayesian optimization is performed here in the latent space of the variational autoencoder in order to find molecules that score highly under a specified objective function. We assess molecular quality on composite objectives, for example $$J^{\log P}_{\mathrm{comp}}(z) = \log P(z) - \mathrm{SA}(z) - \text{ring-penalty}(z),$$ with analogous composites based on QED(z). Here z denotes a molecule's latent representation, log P(z) is the water-octanol partition coefficient, QED(z) is the quantitative estimate of drug-likeness 35 and SA(z) is the synthetic accessibility. 36 The ring penalty term is as featured in ref. 21. The "comp" subscript is designed to indicate that the objective function is a composite of standalone metrics. It is important to note that the first objective, a common metric of comparison in this area, is misspecified, as has been pointed out by ref. 37. From a chemical standpoint it is undesirable to maximize the log P score as is being done here. Rather, it is preferable to optimize log P to be in a range that is in accordance with the Lipinski rule of five. 38 We use the penalized log P objective here because, regardless of its relevance for chemistry, it serves as a point of comparison against other methods. Fig. 1 The SMILES representation and one-hot encoding for benzene. For purposes of illustration, only the characters present in benzene are shown in the one-hot encoding. In practice there is a column for each character in the SMILES alphabet.
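As a concrete illustration of the encoding shown in Fig. 1, the following is a minimal sketch of one-hot encoding a SMILES string. The alphabet subset, the padding convention, and the maximum length are illustrative assumptions, not the exact choices of ref. 21.

```python
import numpy as np

# Illustrative character set and length; real SMILES alphabets are larger.
ALPHABET = list("Cc1()=Oo Nn")
CHAR_TO_IDX = {ch: i for i, ch in enumerate(ALPHABET)}
MAX_LEN = 120

def smiles_to_onehot(smiles: str) -> np.ndarray:
    """Encode a SMILES string as a (MAX_LEN, len(ALPHABET)) one-hot matrix,
    padded on the right with the space character."""
    padded = smiles.ljust(MAX_LEN)
    onehot = np.zeros((MAX_LEN, len(ALPHABET)), dtype=np.float32)
    for pos, ch in enumerate(padded):
        onehot[pos, CHAR_TO_IDX[ch]] = 1.0
    return onehot

benzene = smiles_to_onehot("c1ccccc1")  # benzene, as in Fig. 1
```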
Constrained Bayesian optimization of molecules We now describe our extension to the Bayesian optimization procedure followed by ref. 21. Expressed formally, the constrained optimization problem is $$\max_{z} f(z) \quad \text{s.t.} \quad \Pr\big(C(z)\big) \ge 1 - \delta,$$ where f(z) is a black-box objective function, Pr(C(z)) denotes the probability that a Boolean constraint C(z) is satisfied, and 1 − δ is some user-specified minimum confidence that the constraint is satisfied. 39 The constraint is that a latent point must decode successfully a large fraction of the times decoding is attempted. The specific fractions used are provided in the results section. The black-box objective function is noisy because a single latent point may decode to multiple molecules when the model makes a mistake, obtaining different values under the objective. In practice, f(z) is one of the objectives described in Section 2.3. Expected improvement with constraints (EIC) EIC may be thought of as expected improvement (EI) that offers improvement only when a set of constraints is satisfied: 40 $$\mathrm{EIC}(z) = \mathrm{EI}(z) \cdot \Pr\big(C(z)\big).$$ The incumbent solution h in EI(z) may be set, in an analogous way to vanilla expected improvement, 41 as either: (1) the best observation in which all constraints are observed to be satisfied, or (2) the minimum of the posterior mean such that all constraints are satisfied. The latter approach is adopted for the experiments performed in this paper. At the stage in the Bayesian optimization procedure where a feasible point has yet to be located, the form of acquisition function used is that defined by ref. 41, with the intuition being that if the probabilistic constraint is violated everywhere, the acquisition function selects the point having the highest probability of lying within the feasible region. The algorithm ignores the objective until it has located the feasible region. Related work The literature concerning generative models of molecules has exploded since the first work on the topic. 21 Current methods feature molecular representations such as SMILES 42-54 and graphs [55][56][57][58][59][60][61][62][63][64][65][66][67][68][69][70][71][72] and employ reinforcement learning 73-83 as well as generative adversarial networks 84 for the generative process. These methods are well summarized by a number of recent review articles. [85][86][87][88][89] In terms of VAE-based approaches, two popular approaches for incorporating property information into the generative process are Bayesian optimization and conditional variational autoencoders (CVAEs). 90 When generating molecules using CVAEs, the target data y is embedded into the latent space and conditional sampling is performed 47,91 in place of a directed search via Bayesian optimization. In this work we focus solely on VAE-based Bayesian optimization schemes for molecule generation and so we do not benchmark model performance against the aforementioned methods. Principally, we are concerned with highlighting the issue of training set mismatch in VAE-based Bayesian optimization schemes and demonstrating the superior performance of a constrained Bayesian optimization approach.
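Before moving on to the experiments, here is a minimal sketch of the EIC acquisition defined above. It is a generic implementation written for minimization (maximization is handled by negating f), and it is not the authors' code.

```python
import numpy as np
from scipy.stats import norm

def eic(mu, sigma, p_valid, h):
    """Expected improvement with constraints, EIC(z) = EI(z) * Pr(C(z)).

    mu, sigma: GP posterior mean/std of the objective at candidate points;
    p_valid: constraint-satisfaction probability, e.g. from a BNN classifier;
    h: the incumbent solution (the min of the posterior mean over the
    feasible region, as adopted in the paper).
    """
    gamma = (h - mu) / sigma
    ei = sigma * (gamma * norm.cdf(gamma) + norm.pdf(gamma))
    return p_valid * ei

# Before any feasible point has been found, the objective is ignored and
# candidates are ranked by p_valid alone, as described in the text.
```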
Results and discussion Experiment I Drug design. In this section we conduct an empirical test of the hypothesis from ref. 21 that the decoder's lack of efficiency is due to data point collection in "dead regions" of the latent space far from the data on which the VAE was trained. We use the data gathered in this empirical test to construct a binary classification Bayesian neural network (BNN) to serve as a constraint function that outputs the probability of a latent point being valid, the details of which will be discussed in the section on labelling criteria. The BNN implementation is adapted from the MNIST digit classification network of ref. 92 and is trained using black-box alpha divergence minimization. Secondly, we compare the performance of our constrained Bayesian optimization implementation against the original model (baseline) in terms of the numbers of valid, realistic and drug-like molecules generated. We introduce the concept of a realistic molecule, i.e. one that has a SMILES length greater than 5, as a heuristic to gauge whether the decoder has been successful or not. Our definition of drug-like is that a molecule must pass 8 sets of structural alerts or functional group filters from the ChEMBL database. 93 Thirdly, we compare the quality of the molecules produced by constrained Bayesian optimization with those of the baseline model. The code for all experiments has been made publicly available at https://github.com/Ryan-Rhys/Constrained-Bayesian-Optimisation-for-Automatic-Chemical-Design. Implementation. The implementation details of the encoder-decoder network as well as the sparse GP for modelling the objective remain unchanged from ref. 21. For the constrained Bayesian optimization algorithm, the BNN is constructed with 2 hidden layers, each 100 units wide, with ReLU activation functions and a logistic output. The minibatch size is set to 1000 and the network is trained for 5 epochs with a learning rate of 0.0005. 20 iterations of parallel Bayesian optimization are performed using the Kriging-Believer algorithm 94 in all cases. Data is collected in batch sizes of 50. The same training set as ref. 21 is used, namely 249,456 drug-like molecules drawn at random from the ZINC database. 95 Diagnostic experiments and labelling criteria. These experiments were designed to test the hypothesis that points collected by Bayesian optimization lie far away from the training data in latent space. In doing so, they also serve as labelling criteria for the data collected to train the BNN acting as the constraint function. The resulting observations are summarized in Fig. 4. There is a noticeable decrease in the percentage of valid molecules decoded as one moves further away from the training data in latent space. Points collected by Bayesian optimization do the worst in terms of the percentage of valid decodings. This would suggest that these points lie farthest from the training data. The decoder over-generates methane molecules when far away from the data. One hypothesis for why this is the case is that methane is represented as 'C' in the SMILES syntax and is by far the most common character. Hence, far away from the training data, combinations such as 'C' followed by a stop character may have high probability under the distribution over sequences learned by the decoder. Given that methane has far too low a molecular weight to be a suitable drug candidate, a third plot in Fig. 4(c) shows the percentage of decoded molecules such that the molecules are both valid and have a tangible molecular weight. The definition of a tangible molecular weight was interpreted somewhat arbitrarily as a SMILES length of 5 or greater. Henceforth, molecules that are both valid and have a SMILES length greater than 5 will be referred to as realistic.
This denition serves the purpose of determining whether the decoder has been successful or not. As a result of these diagnostic experiments, it was decided that the criteria for labelling latent points to initialize the binary classication neural network for the constraint would be the following: if the latent point decodes into realistic molecules in more than 20% of decode attempts, it should be classied as realistic and non-realistic otherwise. Molecular validity. The BNN for the constraint was initialized with 117 440 positive class points and 117 440 negative class points. The positive points were obtained by running the training data through the decoder assigning them positive labels if they satised the criteria outlined in the previous section. The negative class points were collected by decoding points sampled uniformly at random across the 56 latent dimensions of the design space. Each latent point undergoes 100 decode attempts and the most probable SMILES string is retained. J comp log P (z) is the choice of objective function. The raw validity percentages for constrained and unconstrained Bayesian optimization are given in Table 1. In terms of realistic molecules, the relative performance of constrained Bayesian optimization and unconstrained Bayesian optimization (baseline) 21 is compared in Fig. 5(a). The results show that greater than 80% of the latent points decoded by constrained Bayesian optimization produce realistic molecules compared to less than 5% for unconstrained Bayesian optimization. One must account however, for the fact that the constrained approach may be decoding multiple instances of the same novel molecules. Constrained and unconstrained Bayesian optimization are compared on the metric of the percentage of unique novel molecules produced in Fig. 5(b). One may observe that constrained Bayesian optimization outperforms unconstrained Bayesian optimization in terms of the generation of unique molecules, but not by a large margin. A manual inspection of the SMILES strings collected by the unconstrained optimization approach showed that there were many strings with lengths marginally larger than the cutoff point, which is suggestive of partially decoded molecules. We run a further test of drug-likeness for the unique novel molecules generated by both methods consisting of passing a number of functional group lters consisting of 8 sets of structural alerts from the ChEMBL database. The alerts consisted of the Pan Assay Interference Compounds (PAINS) 96 alert set for nuisance compounds that elude usual reactivity, the NIH MLSMR alert set for excluded functionality lters, the Inpharmatica alert set for unwanted fragments, the Dundee alert set, 97 the BMS alert set, 98 the Pzer Lint procedure alert set 99 and the Glaxo Wellcome alert set. 100 An additional screen dictating that molecules should have a molecular weight between 150-500 daltons was also included. The results are given in Table 2. In the next section we compare the quality of the novel molecules produced as judged by the scores from the black-box objective function. Molecular quality. The results of Fig. 6 indicate that constrained Bayesian optimization is able to generate higher quality molecules relative to unconstrained Bayesian optimization across the three drug-likeness metrics introduced in Section 2.3. 
Over the 5 independent runs, the constrained optimization procedure in every run produced new molecules ranked in the 100th percentile of the distribution over training set scores for the $J^{\log P}_{\mathrm{comp}}(z)$ objective and over the 90th percentile for the remaining objectives. Table 3 gives the percentile that the averaged score of the new molecules found by each process occupies in the distribution over training set scores. The $J^{\log P}_{\mathrm{comp}}(z)$ objective is included as a metric for the generative performance of the models. As noted previously, optimizing it is not beneficial for the purposes of drug design. For the penalised log P objective function, scores for each run are presented in Table 4. The best score obtained from our constrained Bayesian optimization approach is compared against the scores reported by other methods in Table 5. The best molecule under the penalised log P objective obtained from our method is depicted in Fig. 7. Experiment II Combining molecule generation and property prediction. In order to show that the constrained Bayesian optimization approach is extensible beyond the realm of drug design, we trained the model on data from the Harvard Clean Energy Project 19,20 to generate molecules optimized for power conversion efficiency (PCE). In the absence of ground truth values for the PCE of the novel molecules generated, we use the output of a neural network trained to predict PCE as a surrogate. As such, the predictive accuracy of the property prediction model will be a bottleneck for the quality of the generated molecules. Implementation. A Bayesian neural network with 2 hidden layers and 50 ReLU units per layer was trained to predict the PCE of 200,000 molecules drawn at random from the Harvard Clean Energy Project dataset, using 512-bit Morgan circular fingerprints 101 as input features with a bond radius of 2, computed using RDKit. 102 While a larger radius may be appropriate for the prediction of PCE in order to represent conjugation, we are only interested in showing how a property predictor might be incorporated into the automatic chemical design framework and not in optimizing that predictor. The network was trained for 25 epochs with the ADAM optimizer 103 using black-box alpha divergence minimization with an alpha parameter of 5, a learning rate of 0.01, and a batch size of 500. The RMSE on the training set of 200,000 molecules is 0.681 and the RMSE on the test set of 25,000 molecules is 0.999. PCE scores. The results are given in Fig. 8. The averaged score of the new molecules generated lies above the 90th percentile in the distribution over training set scores. Given that the objective function in this instance was learned using a neural network, advances in predicting chemical properties from data 104,105 are liable to yield concomitant improvements in the optimized molecules generated through this approach. Concluding remarks The reformulation of the search procedure in the Automatic Chemical Design model as a constrained Bayesian optimization problem has led to concrete improvements on two fronts: (1) Validity: the number of valid molecules produced by the constrained optimization procedure offers a marked improvement over the original model. (2) Quality: for five independent train/test splits, the scores of the best molecules generated by the constrained optimization procedure consistently ranked above the 90th percentile of the distribution over training set scores for all objectives considered.
These improvements provide strong evidence that constrained Bayesian optimization is a good solution method for the training set mismatch pathology present in the unconstrained approach for molecule generation. More generally, we foresee that constrained Bayesian optimization is a workable solution to the training set mismatch problem in any VAE-based Bayesian optimization scheme. Our code is made publicly available at https://github.com/Ryan-Rhys/Constrained-Bayesian-Optimisation-for-Automatic-Chemical-Design. Further work could feature improvements to the constraint scheme [106][107][108][109][110][111] as well as extensions to model heteroscedastic noise. 112 Fig. 7 The best molecule obtained by constrained Bayesian optimization as judged by the penalised log P objective function score. Fig. 8 The best scores for novel molecules generated by the constrained Bayesian optimization model optimizing for PCE. The results are averaged over 3 separate runs with train/test splits of 90/10. The PCE score is normalized to zero mean and unit variance by the empirical mean and variance of the training set. In terms of objectives for molecule generation, recent work 44,89,91,113,114 has featured a more targeted search for novel compounds. This represents a move towards more industrially relevant objective functions for Bayesian optimization, which should ultimately replace the chemically misspecified objectives, such as the penalized log P score, identified both here and in ref. 37. In addition, efforts at benchmarking generative models of molecules 115,116 should also serve to advance the field. Finally, in terms of improving parallel Bayesian optimization procedures in molecule generation applications, one point of consideration is the relative batch size of collected points compared to the dataset size used to initialize the surrogate model. We suspect that in order to gain benefit from sequential sampling, the batch size should be on the same order of magnitude as the size of the initialization set, as this will induce the uncertainty estimates of the updated surrogate model to change in a tangible manner. Conflicts of interest There are no conflicts to declare.
4,932.2
2019-11-18T00:00:00.000
[ "Chemistry", "Computer Science" ]
Classification of Parkinson's Disease in Patch-Based MRI of Substantia Nigra Parkinson's disease (PD) is a chronic and progressive neurological disease that mostly shakes and compromises the motor system of the human brain. Patients with PD can face resting tremors, loss of balance, bradykinesia, and rigidity problems. Complex patterns of PD, i.e., its relevance to other neurological diseases and minor changes in brain structure, make the diagnosis of this disease a challenge and cause inaccuracy of about 25% in the diagnostics. The research community utilizes different machine learning techniques for diagnosis using handcrafted features. This paper proposes a computer-aided diagnostic system using a convolutional neural network (CNN) to diagnose PD. CNN is one of the most suitable models to extract and learn the essential features of a problem. The dataset is obtained from the Parkinson's Progression Markers Initiative (PPMI), which provides different datasets (benchmarks), such as T2-weighted MRI for PD and healthy controls (HC). The mid slices are collected from each MRI. Further, these slices are registered for alignment. Since PD can be found in the substantia nigra (i.e., the midbrain), the midbrain region of the registered T2-weighted MRI slice is selected using the freehand region of interest technique with a 33 × 33 sized window. Several experiments have been carried out to ensure the validity of the CNN. The standard measures, such as accuracy, sensitivity, specificity, and area under the curve, are used to evaluate the proposed system. The evaluation results show that the CNN provides better accuracy than machine learning techniques such as naive Bayes, decision tree, support vector machine, and artificial neural network. Introduction Parkinson's disease (PD) is one of the brain diseases that occur due to a disorder in the neurological system of the brain. The thalamus is a region in the human brain that contains neurons and has an important role in transmitting sensory information to the brain. Another region of the human brain is the substantia nigra, which contains dopaminergic neurons. Dopamine, a neurotransmitter essential for motor coordination and control, is produced and released by these neurons [1]. Dopamine provides signals to the brain and other parts of the body related to movement and coordination. During Parkinson's disease, dopamine generation decreases and causes neuron death [2]. Parkinson's disease symptoms include shakes, slowness in muscle movement, stiffness, imbalance, or postural instability. There are some other symptoms as well, such as slowness in thinking, voice disorder, fatigue, anxiety, and depression. Sleep may also become disturbed and concentration may be lost [3]. There is no medical lab or medical test to diagnose this disease [4]. Traditionally, medical experts have used past records and neurological investigations. However, this approach is not that accurate, for many reasons, including similar neurodegenerative diseases. It is difficult to diagnose after much of the dopamine has been lost. The correct detection of PD is very important. If a patient is wrongly diagnosed as healthy, the disease becomes worse with time and is difficult to control. Machine learning is widely used in many medical disease diagnoses, like heart disease detection, cancer detection, Alzheimer's disease detection, and many more. Regarding PD, there are many symptoms that can be present in a Parkinson's disease patient. These symptoms or features can be age, voice, brain images, etc.,
in different patterns. So, on the basis of these features, we can classify a patient as having PD if these features or symptoms are present, by using machine learning techniques. In the current era of technology, the trend of making everything automated has begun, and it is reaching medical diagnosis as well. Automation can increase the speed and precision of medical diagnosis. Healthcare professionals can gain from the helpful decision-making assistance that automated technologies can provide. By employing vast amounts of medical information and data, these technologies can assist physicians in making informed decisions, providing likely diagnoses, and prescribing appropriate tests or treatments. This can help in standardizing diagnostic processes, ensuring consistency in evaluations, and reducing diagnostic variability. These tools can aid in the early identification and prevention of sickness. Automation enables scalability and improved accessibility of medical diagnostics. Different automated and semi-automated systems have been developed for disease classification [4][5][6][7][8][9]. In the same way, different researchers have attempted to classify PD by using machine-learning-based techniques. Most of these techniques are support vector machine, neural network, Bayesian learning, decision tree methods, etc. In articles [10][11][12], different machine-learning-based approaches that have been applied to people with PD are discussed. A research work in [13] applied the random forest approach on a dataset adopted from ADRC, which contains voice recordings of people with PD and healthy controls. The simulation results showed a 99.25% accuracy rate. However, this technique was not applied to different features and datasets. Parkinson's Classification Based on Machine Learning (ML) and Deep Learning (DL) Techniques This section is dedicated to recent literature on Parkinson's classification using different machine learning (ML) and deep learning (DL) techniques. Most of the techniques are fully automated, while some are semi-automated. A work in [14] proposed a novel intelligent model using DL techniques that analyzes gait information. In order to build the deep neural network architecture, a 1D convolutional network is used. The model receives 18 one-dimensional signals from foot sensors, which measure vertical ground reaction force (VGRF). The algorithm is tested on Parkinson's detection and prediction of the severity of Parkinson's. The authors claimed an accuracy of 98.7% achieved by the proposed model. In [15], the authors introduced an intelligent system that can detect PD from vowels. The features from the vowels are extracted by using singular value decomposition (SVD) and a minimum average maximum (MAMa) tree. Further, 50 distinctive features are selected using feature selection techniques. For classification purposes, they used a KNN classifier and obtained 92% accuracy. In [16], they presented a CNN-based model for classification of PD and HC from neuromelanin-sensitive magnetic resonance imaging (NMS-MRI). Neuromelanin-sensitive MRI is a medical imaging technique that allows experts to study abnormalities in detail in the substantia nigra pars compacta (SNc). The dataset used in this study comprises the NMS-MRI of 45 subjects in total, where 25 are PD and 35 are HC. The authors claim a superior testing accuracy of 80%.
In [17], the authors proposed a novel intelligent system, where all regions of the brain are covered by a network. Feature vectors are collected from every region of the brain, and random forest is used to select relevant features. Lastly, a support vector machine is applied in order to combine all the features along with the ground truth. This model is trained and tested on the Parkinson's Progression Markers Initiative (PPMI) dataset, including 169 HC and 374 PD subjects. The authors claimed an accuracy of 93%. The article in [18] proposed a machine-learning-based technique to diagnose Parkinson's disease by developing a multilayer feed-forward neural network (MLFNN). They obtained the dataset from the Oxford Parkinson's datasets, which include the voice measurements of 31 subjects, where 21 of them are PD patients, while the rest of the subjects are healthy controls. In total, eight different attributes on the basis of frequency (tremor) are selected. For classification, the k-means algorithm is used. The simulation results showed sensitivity of 83.3%, specificity of 63.6%, and accuracy of 80%. In [19], another model of PD classification was introduced. The dataset used in this study was adopted from the UCI repository. The swarm optimization technique was applied for feature extraction, while naive Bayes was applied for classification. The authors claimed 97.5% accuracy. In [20], the authors used non-motor features for diagnosis of PD. These features are a collection of olfactory loss, sleep behavior disorder, and rapid eye movement (REM). Further, the non-motor features were combined with dopaminergic imaging markers and cerebrospinal fluid measurement features. The dataset used in the experiments was obtained from PPMI, in which 401 were PD subjects while 183 were healthy controls. Boosted tree, SVM, random forest, and Bayes were used for classification purposes. The results showed 96.4% in terms of accuracy with SVM. In the literature, it has been noted that non-motor symptoms, including cognitive decline, trouble sleeping, mood problems, and autonomic dysfunction, may show up in the early stages of Parkinson's disease (PD), even before the appearance of motor symptoms. By considering non-motor traits in addition to motor symptoms, clinical experts can make a more accurate and speedy diagnosis, leading to appropriate treatment and therapy. In addition to Parkinson's disease, other neurological illnesses can also cause non-motor symptoms. The specific pattern and combination of non-motor symptoms can assist in differentiating PD from other disorders to aid in the differential diagnosis process. In PD, non-motor symptoms may manifest before motor ones. In [21], the authors proposed a novel intelligent model for classification of PD. This approach is based on a genetic algorithm (GA)-wavelet kernel (WK)-extreme learning machine (ELM). The neural network was trained by ELM. WK-ELM uses three different parameters, which are adjustable. The ideal values of the parameters are calculated with the support of the genetic algorithm. The authors obtained a 96% accuracy rate with a dataset taken from the UCI library, which contains voice measurements of 31 subjects, where 23 are PD patients. In [22], a CNN model, AlexNet, was presented for classification of PD. The model is trained on 2820 HC and 3296 PD MR images and tested on 705 HC and 824 PD MR images using the transfer learning technique. The PPMI dataset was used in this study. This model achieved 88.9%, 89.30%, and 88.40% results in terms of accuracy, sensitivity, and specificity, respectively.
In [13], the authors performed experiments on Parkinson's and Alzheimer's diseases. A fully automated system was introduced based on different intelligence and deep learning algorithms, such as decision tree, random forest, boosted tree, bagging, and MLP. The dataset used in the research was adopted from the Alzheimer's Disease Research Center (ADRC), which contained a total of 890 subjects' data, where 65% of cases belonged to Alzheimer's, while 40% were PD subjects. According to this paper, alcohol, genes, and age are the main influencing factors regarding AD and PD. According to the authors, an accuracy of 99.25% was achieved with random forest and MLP. A research work in [23] worked on susceptibility weighted imaging (SWI) scans. SWI is a medical imaging technique in MRI. This technique has the capability to visualize susceptibility variations in detail for many issues, like blood iron, with the support of contrast enhancement. SVM is used for the classification of Parkinson's and Parkinsonisms at an isolated level and obtained an accuracy of 86%. A local dataset is used, having 36 subjects' records, where 16 were PD patients and 20 were Parkinsonisms. In [24], the authors worked on three-class classification regarding PD, progressive supranuclear palsy (PSP), and HC. PSP is the advanced stage of PD; its progression is very fast and it is less responsive to medication. The dataset used in this study consists of the MRIs of 84 subjects. The authors applied principal component analysis (PCA) for feature extraction, while SVM was used as a classifier. Their accuracy is about 88% on an average basis. In [25], a multimodel approach on MR images was proposed. In this study, SVM was applied as a classifier. This model obtained results of 86.96%, 92.59%, and 78.95% in terms of accuracy, specificity, and sensitivity, respectively. A local dataset was used in this research, which contained a total of 46 subjects, where 19 belonged to PD and 27 belonged to HC. The authors in [26] worked on TRODAT and SPECT images to detect PD. In this regard, they presented an ANN-based model. Striatal and striatum pixel values were obtained from the images, and these were then fed to the ANN as input. This model obtained an accuracy of 94%. A comprehensive analysis of prior work is presented in Table 1. There are different factors involved in PD patients, like olfactory loss, rapid eye disorder, sleep disturbance, postural unbalancing, cerebrospinal fluid, and dopaminergic imaging. There is a need to consider all these features and apply a classification technique that can correctly diagnose people with PD. CNN has shown state-of-the-art accuracy in a number of biomedical image classifications. Recently, Billones, Ciprian D.
et al. [35] adjusted the parameters of a VGGNet model for Alzheimer's detection and succeeded with 91.85% accuracy. Likewise, [36] obtained an accuracy of 93.16% for cerebral microbleeds in MRI. Due to the high accuracy of CNN with MR images, it is applied here for PD detection and satisfactory results are obtained. The main advantage of the proposed system is that it is a simple convolutional network with limited training parameters; hence, the training time is shorter than for state-of-the-art models. A general limitation of the proposed model is that it deals with Parkinson's disease as a binary classification problem; however, there are some other diseases closely related to Parkinson's, such as Parkinsonism, dementia, and Alzheimer's. It would be good to develop a system that can classify these diseases in a multiclass setting. Figure 1 shows the overall operation of the proposed system. Regarding the order of the remainder of this research paper, Section 2 covers the materials and methods. The results and experiments of the proposed methods are discussed in Section 3. Section 4 is reserved for the discussion regarding the results. Finally, Section 5 concludes the research and also presents the future work. The main contributions of this paper are four-fold: 1. Achieved state-of-the-art mean accuracy, sensitivity, specificity, and area under the curve of 96, 96.87, 95.83, and 94.5 percent, respectively. 2. Dealing with limited data, the model was developed in such a manner that the overfitting problem is reduced. 3. A low-computational-power GPU was used, and satisfactory results were obtained compared to other techniques. 4. Specific patches were extracted from the samples. Pre-Processing The MR images were initially stored in the DICOM format and then converted into JPEG using publicly available software known as DICOM to JPEG. Each subject's data consisted of 45 slices, and only slice number 22 was collected per subject, since this slice provides an accurate image of the substantia nigra in the PD class. The substantia nigra is a structure in the mid-brain area that controls movement and motor coordination. Dopamine is a substance that is produced in this area and is employed as a signal transmitter. It sends signals about movement and coordination to the brain and other parts of the body. A stack was created by combining slice number 22 from all the subjects. To align the images, intensity-based image registration was carried out on the stack using the OpenCV library. Image registration is the procedure of lining up scans of the brain or other pertinent regions taken from people with Parkinson's disease. Using image registration techniques, this alignment establishes the spatial relationship between the pictures, enabling a consistent and uniform analysis. By ensuring that the pictures are in a uniform coordinate system, image registration eliminates variances brought on by changes in patient placement or scanning procedures. The primary objective of image registration was to eliminate unwanted and irrelevant information, which could lead our model to learn unnecessary and redundant features. For obtaining a precise image of the substantia nigra, the mid-brain section was cropped using the freehand region of interest (ROI) technique with a window size of 33 × 33. Freehand ROI was used for cropping because the size of the specific organ varies between patients, and, instead of using fixed ROI cropping, the freehand ROI technique provides better control in cropping the exact position of the organ. A code sketch of this preprocessing pipeline is given below.
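A minimal sketch of these steps follows, assuming OpenCV's ECC-based alignment with a translation motion model; the paper does not specify the exact registration settings, so these choices, the patch position argument, and the interpolation flags are assumptions.

```python
import cv2
import numpy as np

def register_and_crop(slice_img, reference, top_left):
    """Align one slice to a reference slice and extract a 33 x 33 patch.

    top_left: (row, col) of the patch, e.g. as chosen by the freehand ROI
    step described above. Argument list targets OpenCV 4.x.
    """
    ref = reference.astype(np.float32)
    img = slice_img.astype(np.float32)
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    cv2.findTransformECC(ref, img, warp, cv2.MOTION_TRANSLATION,
                         criteria, None, 1)
    aligned = cv2.warpAffine(img, warp, ref.shape[::-1],
                             flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
    r, c = top_left
    return aligned[r:r + 33, c:c + 33]
```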
This cropped patch was the final input to the CNN model. Figure 3 provides a visual representation of the preprocessing steps. Convolutional Neural Network Architecture CNN architectures have been widely used for image-related tasks, such as image recognition and classification. The use of CNNs has effectively improved the performance of many image-related tasks. For example, a deep-CNN-based COVID-19 diagnosis system was proposed in [38]. The authors claimed that deep-CNN-based diagnosis of COVID-19 from sounds like dry cough outperforms other models. In another article, CNN was proposed for the classification of lung diseases [39]. In this article, the authors applied CNN on chest X-ray images and classified lung diseases into five different disease classes. The results of the CNN-based classification were higher than existing methods. The main building blocks in a CNN architecture are convolutional layers, activation functions, feature maps, max pooling, and regularization. The CNN architecture begins with a convolutional layer that accepts the input, uses convolutional kernels to process the spatial information in a local receptive field, and reports an activation value through an activation function. Convolutional layers can be stacked over one another, which enables the CNN to extract and learn features in an increasingly complex hierarchy and produces feature maps. The number of generated feature maps depends on the number of convolutional filters used. The activation function encodes the pixel-level spatial neighborhood activation at the respective pixel location in the feature map. The max pooling layer comes after the feature map layer. The main purposes of using the max pooling layer are to reduce the input dimensionality, reduce the risk of overfitting, and reduce computational costs. The result of the max pooling layer can be given to another convolutional layer to create a hierarchical structure. The final feature maps are fully connected to every neuron of the dense layer. Finally, the softmax function is used as the activation function for classification purposes. The following are the main building blocks of the CNN model. Weights Initialization Proper weight initialization plays a key role in deep learning: it reduces the convergence time and brings stability to the loss function even after thousands of iterations. The Xavier initializer is incorporated in this study, which keeps the activation variance and the back-propagation gradient at controlled levels [40,41]: $$W \sim U\!\left[-\frac{\sqrt{6}}{\sqrt{n_{w} + n_{w+1}}},\ \frac{\sqrt{6}}{\sqrt{n_{w} + n_{w+1}}}\right]. \tag{1}$$ In Equation (1), U is the uniform distribution, n_w is the number of units on the input side of the weight tensor W, and n_{w+1} is that of the output side. Convolution of Kernels Once convolution starts on the image, feature maps are generated; each kernel yields one feature map. A feature map F can be calculated using the equation below: $$F_{m} = \sum_{n \in N} I_{n} * M_{m,n}, \tag{2}$$ where M denotes the kernels (indexed by m), N denotes the input channels (indexed by n), and * is the 2-D convolution.
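To make Equations (1) and (2) concrete, here is a minimal NumPy sketch of Xavier (Glorot) uniform initialization and a single-channel 2-D convolution. As in most deep learning libraries, the "convolution" is implemented as cross-correlation.

```python
import numpy as np

def xavier_uniform(n_in, n_out, shape, rng=np.random.default_rng(0)):
    """Glorot/Xavier uniform initialization, as in Equation (1)."""
    limit = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-limit, limit, size=shape)

def conv2d_single(image, kernel):
    """Valid-mode 2-D 'convolution' (cross-correlation) of one channel
    with one kernel; summing over input channels gives Equation (2)."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```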
Activation Function
Non-linearity is introduced into the system by the activation function. A number of activation functions have been proposed and are still under research. Each activation function has some limitations and is not suitable for every situation: for instance, the sigmoid kills gradients. ReLU obtained better results when compared with the sigmoid and hyperbolic tangent functions, but it suffers from the dying ReLU problem: a large gradient flowing through a ReLU can update the weights such that the unit never activates at any data point again. The other issue with the ReLU activation function is that it ignores gradients smaller than zero. LeakyReLU is an improved form of ReLU and tackles the dying ReLU problem by letting a small negative gradient through. ReLU and LeakyReLU are defined by the formulas restated at the end of this section, where α is the leakiness parameter, which may be a real number between 0 and 1.

Pooling
The dimensionality of the feature map is reduced by pooling; it makes the system insensitive to small changes, such as small intensity and illumination changes. The prominent pooling variants are max pooling, min pooling, and average pooling. Min and max pooling select the features with the minimum and maximum value in the pooling kernel, respectively, while average pooling calculates the average of the features in the pooling kernel and returns the average effect of all features. Max pooling is used in this study; it can be formalized as in the restatement below, where k and l correspond to the spatial positions.

Regularization
The main purpose of regularization is to avoid model overfitting. A number of techniques against overfitting are available; L1 and L2 regularization, global average pooling, global max pooling, and batch normalization are well-known among them. Dropout is another effective regularization technique that randomly switches neurons on and off so that they learn effectively and contribute to the overall output individually. In this paper, we use Dropout, which removes neurons with probability p; the value of p can be any real number between 0 and 1. The working of Dropout can be described by the formula in [42], where y_k is the probable result of unit k, M* is the set of all thinned networks, y_M is the output of network M, and Pr() denotes the probability function.

Fully Connected Layers or Dense Layers
This is the last stage after the convolutional layers. Here, each pixel of the feature maps is treated as a neuron and connected to each neuron in the fully connected layer. A classifier is used for classification at the end of the architecture. Softmax is the most common classifier in deep neural networks. It can be defined using Bayes' theorem [43], where C_k is the target class and C_j is the j-th class, j = 1, 2, 3, …, n; its exponential form is given in [43].

Loss Function
The loss function is used to calculate the compatibility between the given ground-truth label and the predicted values. It can be custom-designed for a particular task. There are many loss functions, depending on the nature of the learning problem, but the most common loss function used in classification tasks is categorical cross-entropy, which is used here as the cost function. It can be formalized as in Equation (9), where c is the actual target class and ĉ is the predicted class.
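Several displayed formulas in this section (the ReLU and LeakyReLU definitions, the max pooling rule, the softmax, and the cross-entropy of Equation (9)) are likewise missing from the extracted text. Their standard forms, given here as assumptions consistent with the surrounding descriptions, are:

```latex
\mathrm{ReLU}(x) = \max(0, x)

\mathrm{LeakyReLU}(x) =
  \begin{cases} x, & x > 0 \\ \alpha x, & x \le 0 \end{cases}
  \qquad 0 < \alpha < 1

% Max pooling over the window R_{i,j} of feature map F, with (k, l)
% the spatial positions inside the window:
P_{i,j} = \max_{(k,\,l) \in R_{i,j}} F_{k,l}

% Softmax over the class scores z_1, \dots, z_n:
\hat{c}_j = \frac{e^{z_j}}{\sum_{i=1}^{n} e^{z_i}}

% Eq. (9): categorical cross-entropy between target c and prediction \hat{c}:
L(c, \hat{c}) = -\sum_{j=1}^{n} c_j \log \hat{c}_j
```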
Proposed Network Architecture
Our proposed model receives an MRI patch as input and eventually labels it as PD or HC. The method takes advantage of a deeper CNN with a small convolutional kernel of size 3 × 3 throughout the network. A smaller convolutional kernel has fewer parameters to estimate and allows learning and generalizing from limited training data; conversely, a larger convolutional kernel has more parameters to estimate, is harder to generalize, and demands high availability of training data. Each convolutional kernel is followed by an advanced activation function (i.e., LeakyReLU), which addresses issues of the simple ReLU, such as dying ReLU, by adjusting negative gradients during back-propagation. The recently developed batch normalization is used before every LeakyReLU layer to improve performance; it has the ability to accelerate the training process of the network. The proposed network takes an input patch of dimension 33 × 33. The first three convolutional layers are followed by a max pooling layer with kernel dimension 3 × 3 and stride 2 × 2. The output of the first max pooling layer consists of 64 feature maps (channels) of dimension 16 × 16. The max pooling layer is used to reduce overall dimensionality, which results in fewer learnable parameters. The output feature maps of the first max pooling layer are forwarded to the next three convolutional layers. The output feature maps of the sixth convolutional layer (i.e., 128 × 16 × 16 in dimension) are forwarded to the second max pooling layer with kernel size 3 × 3 and stride 2 × 2. The output feature maps of this pooling layer have dimensions 128 × 7 × 7. These feature maps are then fully connected to the FC (fully connected) layers. There are two FC layers: the first has 512 neurons, while the second has 256 neurons. An advanced regularization technique, dropout with a rate of 0.1, is used in both FC layers to reduce the risk of network overfitting. At the end of the network, a softmax layer is used to obtain the classification probabilities. Figure 4 shows a graphical representation of the proposed model, while Table 3 shows the architecture along with the parameters used. In the "Type" column of Table 3, conv means convolutional layer and Max-pool means max pooling layer; in the "Inputs" column, the first value is the number of input channels and the next two values are the dimensions of the feature map or patch size.

Results
This section discusses the performance of the network on the Parkinson's dataset.

Performance Measures
Area under the curve (AUC), classification accuracy, sensitivity, and specificity are used to evaluate the performance of the proposed model.

Experimental Setup
An NVIDIA GeForce 940MX GPU, which supports Keras, has been utilized to run the CNN. The Keras Python deep learning API enables the usage of both Theano and TensorFlow; here, the Theano backend and the Sequential model are used for the CNN.
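Combining the architecture description above with the Keras Sequential setup, a minimal sketch of the proposed network is given below. It uses the current tensorflow.keras API rather than the Theano backend of the original work, and the per-layer filter counts (64 and 128), the optimizer, and the dense-layer activations are assumptions consistent with, but not fully specified by, the text.

```python
# Hypothetical sketch of the 33x33-input, six-convolution CNN described above.
from tensorflow import keras
from tensorflow.keras import layers

def build_model(input_shape=(33, 33, 1), n_classes=2, alpha=0.1):
    model = keras.Sequential([keras.Input(shape=input_shape)])
    # Block 1: three 3x3 convolutions; batch norm before every LeakyReLU.
    for _ in range(3):
        model.add(layers.Conv2D(64, 3, padding="same",
                                kernel_initializer="glorot_uniform"))
        model.add(layers.BatchNormalization())
        model.add(layers.LeakyReLU(alpha=alpha))
    model.add(layers.MaxPooling2D(pool_size=3, strides=2))   # -> 16x16x64
    # Block 2: three more 3x3 convolutions with 128 feature maps.
    for _ in range(3):
        model.add(layers.Conv2D(128, 3, padding="same",
                                kernel_initializer="glorot_uniform"))
        model.add(layers.BatchNormalization())
        model.add(layers.LeakyReLU(alpha=alpha))
    model.add(layers.MaxPooling2D(pool_size=3, strides=2))   # -> 7x7x128
    # Two fully connected layers with dropout 0.1, then softmax over 2 classes.
    model.add(layers.Flatten())
    for units in (512, 256):
        model.add(layers.Dense(units))
        model.add(layers.LeakyReLU(alpha=alpha))
        model.add(layers.Dropout(0.1))
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```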
Experiments
In order to evaluate the robustness of the network, a number of experiments have been performed. The proposed model has been applied to the patches as well as to the complete image, and different model parameters have been tested to obtain a robust model. Furthermore, the model has been executed several times to validate its performance, and the accuracy has been recorded after each epoch. The system is evaluated on the training set as well as on the test set. Four of the experiments are elaborated below. Experiments 1 and 2 show the highest and the lowest accuracy, respectively, achieved during training and validation with the same input and network settings. In experiment 3, the last convolutional layer is eliminated in order to reduce the computational cost, while keeping the input the same as in experiments 1 and 2. In experiment 4, the same network settings are maintained, but the network is tested on the full mid-brain image as input rather than ROI patches.

First Experiment
In experiment 1, Figure 5a shows the training and testing accuracy. The x-axis shows the number of epochs, while the y-axis shows accuracy; the green line is validation accuracy and the blue line is training accuracy. The accuracy increases with the number of epochs, and the validation accuracy reached up to 98%. Figure 5b shows the training loss vs. the validation loss; the x-axis represents the number of epochs and the y-axis the loss. Figure 5c shows the ROC curve for the proposed architecture on the test set, with the false positive rate on the x-axis and the true positive rate on the y-axis. In this experiment, the proposed model obtained an AUC of 0.94 on the test set.

Second Experiment
In experiment 2, Figure 6a-c show the results of the same model repeated for 50 epochs on the same input patches, to validate the performance of the model. In this experiment, the training and validation accuracy decreased to 95%, the lowest accuracy obtained by the proposed model. The 3% decrease in accuracy is due to noise in the input.

Third Experiment
In experiment 3, Figure 7a-c show the results of the experiment in which the last convolutional layer is eliminated. This layer is removed to reduce the computational cost, but the accuracy of the model is greatly affected: it drops to 65% on both validation and testing, although the AUC remains at 94% on the test set.

Fourth Experiment
In experiment 4, Figure 8a-c show the results of the experiment in which the model is applied to a full MRI slice instead of patches. The AUC is constant, but the accuracy is reduced to 85%. The comparison of the several experiments shows that the proposed architecture performed better with patches and produced high AUC and accuracy on the validation and test sets.
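The four performance measures used in these experiments can be derived from the confusion matrix and the predicted scores. The following is a minimal sketch with hypothetical labels and scores; scikit-learn is an assumed tool here, as the paper does not state how the metrics were computed.

```python
# Hypothetical sketch: accuracy, sensitivity, specificity, and AUC.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])                  # 1 = PD, 0 = HC
y_score = np.array([0.9, 0.2, 0.8, 0.6, 0.4, 0.1, 0.7, 0.3])  # model outputs
y_pred  = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)          # true positive rate
specificity = tn / (tn + fp)          # true negative rate
auc         = roc_auc_score(y_true, y_score)
print(accuracy, sensitivity, specificity, auc)
```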
Discussion
Numerous experiments have employed various network setups; the network parameters include layer count, input size, and other network features. The accuracy has consistently ranged from 94 to 98 percent. In Table 4, the AUC, sensitivity, specificity, and average classification accuracy are shown. We have also generated the confusion matrix representing the true and false classifications in Table 5. Detection of PD from MR imaging cannot be considered a novel task, since many researchers have attempted to classify PD and HC. Table 1 presents a comprehensive analysis of the prior work. In these studies, different groups of ML techniques are applied, including supervised models, such as SVM [17,24,27,30,32,44], and unsupervised models [28,33]. These models achieved promising results, but their accuracies are variable. In many of the mentioned works, the authors used millions of features from single or multiple modalities with limited datasets using SVM, which creates a hyperplane in the n-dimensional feature space. This strategy can achieve high accuracy, but it carries a risk of overfitting. For comparison, [22] with AlexNet pioneered the novel strategy of using ROI-based patches. In the biological domain, PD is associated with the substantia nigra, and structural changes are more likely in this organ than in the rest of the brain. Providing the network with the image of this specific structure (the substantia nigra) rather than the full image is the key factor in achieving promising results. The performance comparison of our model with other classifiers can be seen in Table 6. The results show that the proposed model surpasses the previous models, while experiment 4 confirms the involvement of the mid-brain (patches) in PD classification.

Conclusions
In conclusion, this paper proposes a customized CAD system that utilizes convolutional neural networks to accurately classify MRI patches into Parkinson's and healthy patterns. The model successfully extracts and learns the patterns from the training samples of the benchmark PPMI dataset, resulting in improved results. The findings demonstrate that the proposed model can autonomously learn accurate features of Parkinson's disease. However, the study highlights the challenge of overfitting when working with a limited dataset. Nevertheless, the proper design and integration of the dropout layer in the model enable effective suppression of the overfitting problem. Overall, the proposed CNN-based model offers a promising approach for the automatic and precise classification of Parkinson's disease, and it has the potential to benefit clinical practice in the future.
Contribution
With the increasing trend of computer-aided diagnosis, it has become feasible to use these technologies for diagnosing complex diseases. Despite limited resources, these technologies are being used along with machine learning approaches for the diagnosis of different diseases in many biomedical research labs. A computer-aided diagnosis system based on a convolutional neural network is presented in this paper. The performance of the model has been analyzed in detail on the basis of accuracy, sensitivity, specificity, and AUC. One of the main objectives of the proposed system is to reduce the incorrect diagnosis of PD and to detect the disease in its early stages in order to improve patients' quality of life. To the best of our knowledge, this is the very first attempt to apply a convolutional neural network to ROIs for the classification of Parkinson's disease. Although there is no cure for the disease itself, treatments are available that help reduce the symptoms for newly diagnosed patients, maintaining quality of life for as long as possible.

Future Work
The proposed network is comparatively simple, with fewer feature maps and layers. More complex features can be learned by a more complex organization of the network; however, a complex network requires a huge amount of data. In the future, we intend to continue working on the problem, as the dataset is being updated with new patient records. Efforts will continue to improve the correct diagnosis of Parkinson's disease.

Figure 2. Slices of an MRI scan of an HC and a PD patient.
Table 1. Summary of literature review results.
Table 2. Details of subjects.
Table 3. Detailed structure of the CNN architecture (Conv. = convolutional layer, Max-Pool. = max pooling layer, FC = fully connected layer).
6,969.2
2023-08-31T00:00:00.000
[ "Computer Science" ]
Energy poverty and the role of institutions: exploring procedural energy justice – Ombudsman in focus

ABSTRACT
This paper aims to explore the role of institutions, and specifically of the Ombudsman, in creating and practicing policies with relevance to energy poverty as a case of procedural energy (in)justice in a European context, while refining procedural energy justice. It is empirically informed by studies of the Austrian energy utility-based Ombudsman and the independent Ombudsman in North Macedonia, countries with a low and a high level of energy poverty, respectively. I highlight the unexplored institutional capacity of the independent Ombudsman to discover hidden institutional energy poverty drivers, and of the utility-based Ombudsman to alleviate energy poverty, and so to contribute to a socially just energy transition. The energy market and the social welfare system are important institutions co-shaping energy poverty, and the energy utility plays an especially relevant role in creating or preventing energy injustices. Procedural energy justice applied to energy poverty is about how institutions treat citizens over access to affordable energy, and how citizens are (dis)empowered by that relationship.

Introduction
The EU-led energy transition has received much-needed academic attention; however, this attention should not focus only on technologies and fuels (Jenkins et al., 2017). At the core of the energy system are its people, who, along with institutions and policies, are part of a specific energy culture (LaBelle, 2020). From an energy justice point of view, everyone is entitled to use affordable, safe, and clean energy (Heffron & McCauley, 2014). However, almost 50 million people across the EU are affected by energy poverty (Thomson & Bouzarovski, 2018), defined as the inability to attain a socially and materially necessitated level of domestic energy services (Bouzarovski & Petrova, 2015). Vulnerability to energy poverty impedes these people's participation in the energy transition process (Bouzarovski & Tirado Herrero, 2017; Sovacool et al., 2019) and raises the question of the inclusiveness of the energy transition (Stojilovska, 2020). More recent energy poverty discussions try to shift the focus away from energy-poor households and put pressure on the policies and institutional path-dependencies which keep the energy-poor locked in (Petrova, 2018). There is an increased interest in the role of institutions and processes that should rectify energy injustices across the energy system (Jenkins et al., 2016) and balance different economic, environmental, and political goals (Heffron et al., 2015). Institutions are defined here as formal institutions: social structures that regulate political, social, or economic human interaction, to be distinguished from simple regulations and arrangements (Lepsius, 2017; North, 1990). Adding to the plurality of actors, activists engage in social movements (Yoon & Saurí, 2019) and emphasize the 'right to energy' concept to demand greater protection of vulnerable consumers from the privatization of the energy sector in Europe (EPSU & EAPN, 2017). While the academic literature is trying to keep up with the citizen-led demands for affordable energy (Frankowski & Tirado Herrero, 2021; Fuller & McCauley, 2016), there is very limited research about the contribution of institutions to conceptualizing energy poverty and exposing hidden energy injustices.
Typically, national governments, energy suppliers, regional or local governments, and in fewer cases NGOs and energy regulators are the relevant institutions offering various measures to alleviate energy poverty, but there is no mention of an Ombudsman among them (EU_Energy_Poverty_Observatory, 2020b). The Ombudsman, typically an independent institution tasked with observing human rights, has rarely been examined as an actor in the energy poverty and energy justice literature, although it can be a powerful actor in detecting legal breaches that inflict injustices on energy-poor households (Hesselman & Herrero, 2020). The Ombudsman refers to an independent public institution tasked with performing a soft control of public administration, preferably of the executive branch of government (Kucsko-Stadlmayer, 2008; Reif, 2004). This article is inspired by the work of the independent Macedonian Ombudsman speaking out on energy injustices inflicted by energy monopolies on (vulnerable) citizens, and by the Austrian utility-based Ombudsman established to address the individual situations of energy consumers unable to pay their energy bills. Thus, this paper aims to explore the role of institutions, and specifically of the Ombudsman, in creating and practicing policies with relevance to energy poverty as a case of procedural energy (in)justice in a European context, while refining procedural energy justice. The Ombudsman in North Macedonia is described as a special professional independent body not belonging to any branch of government; it is tasked to protect citizens' rights (Ombudsman, 2011a), and it is a typical Ombudsman institution. The Austrian one is not a typical Ombudsman but a separate body set up within the Austrian state-owned energy utility Wien Energie. The Wien Energie Ombudsman works with private clients and with clients of social institutions who are in a difficult life situation that prevents them from paying their energy bills (Wien_Energie, 2021). The case studies, North Macedonia and Austria, represent different geographical areas with different levels of energy poverty. Austria belongs to the 'geographical core' with relatively lower levels of energy poverty (Bouzarovski & Tirado Herrero, 2017; Thomson & Snell, 2013), while North Macedonia, a post-socialist country faced with a lack of proper infrastructure, cold climates, and systemic deficiencies in the management of housing, energy, and social welfare, belongs to the Central Eastern Europe region with comparatively higher levels of energy poverty (Bouzarovski, 2014; Thomson & Snell, 2013). By studying the role of the Ombudsman as a protector of vulnerable citizens from energy injustices, I highlight the institutional capacity of 'untypical' stakeholders, such as the Ombudsman and public utilities, to contribute to a socially just energy transition while systematically addressing energy poverty.

Energy poverty as procedural energy justice
I primarily apply the procedural energy justice concept to energy poverty, focusing on the role of formal institutions in detecting, preventing, or creating energy poverty. The conceptual framework is also informed by the right to energy concept, energy citizenship, and broader energy justice discussions. In Graph 1, I present the proposed conceptual framework visually. Graph 1 shows that procedural energy justice is about the relationship between formal institutions and citizens, often vulnerable citizens, over access to affordable energy.
Procedural energy justice applied to energy poverty is about how institutions treat citizens over access to affordable energy, and how citizens are (dis)empowered by that relationship. To complete the conceptual framework, I inform the discussion with the good governance capacity of formal institutions and the agency of (vulnerable) citizens. I also present the impacts of distributive and recognition energy justice on procedural energy justice, and how they can amplify procedural injustices. Lastly, a crucial addition is whether formal institutions treat energy as a commodity or as a basic energy service, and how the policies of formal institutions affect vulnerable citizens regarding access to affordable energy. In the sections below I elaborate on how this conceptual framework was conceived. Procedural energy justice concerns the relationship between institutions and vulnerable citizens over affordable energy. The literature indicates that procedural energy justice is about a fair process (Jenkins et al., 2016) and fair decision-making (Bouzarovski & Simcock, 2017). Energy poverty is a procedural energy injustice when there is a lack of information on energy poverty, energy prices, and solutions; a lack of participation in energy, housing, climate, and fiscal policies; a lack of access to legal rights; and barriers to challenging these rights (Walker & Day, 2012). The inclusion of local knowledge, different levels of governance, and better institutional representation are also features of procedural justice (Jenkins et al., 2016; Walker & Day, 2012). This indicates that the processes of information sharing, participation in decisions, legal remedies, and inclusion of various stakeholders are in focus. Procedural justice is considered a universal type of justice (LaBelle, 2017). However, while these processes are highlighted, the stakeholders involved are neglected, and other processes resulting from the various contacts between institutions and citizens regarding access to affordable energy services are not included. This requires a reinterpretation of energy poverty as procedural energy justice, to show the dynamics between the relevant institutions involved in policy-making, regulation, or energy supply and the vulnerable citizens, and how these citizens benefit or are disadvantaged by this relationship. Energy poverty is a procedural injustice when institutions ignore the needs of the energy-poor and create policies that affect them negatively. Procedural energy justice is more than just fairness and participation; it is about setting up standards for the formal institutions shaping the decisions about energy poverty. Informed by broader energy justice discussions (Sovacool et al., 2017; Sovacool & Dworkin, 2015), procedural energy justice points out that formal institutions need to have a good governance capacity. Energy justice is about achieving transparent and accountable forms of energy decision-making (Sovacool et al., 2017; Sovacool & Dworkin, 2015). The discussions about energy justice and institutions emphasize that government policies need to address the social inequalities resulting from increasing energy prices by focusing on lower-income groups (Schlör et al., 2013). Rawls' theory of justice states that the market creates inequalities; institutions, however, are to rectify the distributive inequalities (Rawls, 1971). Institutions have to be just, and unjust institutions need to be abolished (Rawls, 1971).
On the receiving side of energy services are citizens, whose agency as (vulnerable) citizens is gaining increased attention. Gillard et al. (2017) understand procedural justice as stakeholder engagement in policy and governance, while McCauley et al. (2016) argue that procedural justice is about inclusive stakeholder engagement in a non-discriminatory way and about setting up equitable procedures. NGOs and ordinary people are seen as new forms of governance that can contribute to better detection of vulnerabilities (Fuller & McCauley, 2016; Gillard et al., 2017; Walker et al., 2016). There is criticism against passivizing citizens into consumers only (Lennon et al., 2019; Ryghaug et al., 2018). Citizens are seen not only as new actors but as proactive ones, pointing out injustices and demanding retribution. Restorative justice, imported from criminal law, aims to repair the harm done to people rather than solely focus on punishing the offender (Heffron & McCauley, 2017). Procedural justice is about showing resilience and protesting (McCauley & Heffron, 2018). Climate justice talks about sharing burdens and benefits between countries or individuals (Bulkeley et al., 2013). Procedural energy justice touches upon more general subjects, such as institutions, governance, and policies, while distributive and recognition justice are more narrowly defined. Inspired by the development of the environmental justice concept (Schlosberg, 2004), energy justice has three core elements: distributive, procedural, and recognition justice. According to Walker and Day (2012), distributive energy justice takes a central place, impacting both procedural and recognition justice. Distributive energy justice is about access to energy, affordability, and the quality, security, or safety of energy sources (Heffron & McCauley, 2014; Jenkins et al., 2016; Sovacool & Dworkin, 2015; Walker & Day, 2012). Recognition justice is about the misrecognition of vulnerable groups (Jenkins et al., 2016; Walker & Day, 2012). I argue that distributive and recognition justice impact procedural justice, the former through the market structure and infrastructural path-dependencies (Bouzarovski et al., 2016; Robinson et al., 2018), the latter through the size of the energy poverty problem. Having discussed the performance of institutions and the agency of citizens, I describe the tendency towards humanizing energy justice, which adds further quality to the requirement of procedural energy justice for a fair process. Ideals of the morality of citizens and their well-being are raised by energy justice (Jenkins et al., 2018; McCauley et al., 2019; Sovacool, 2015; Sovacool & Dworkin, 2015). This opens up the discussion about re-shaping the relationship between institutions and citizens over energy, considering that energy services are needed for a normal life. Thus, institutions can treat energy as a commodity (Teschner et al., 2020; Walker, 2015) or as a basic energy service which needs protection from marketization and rising energy prices (Demski et al., 2019). They can go beyond market relations (Walker, 2015). The energy justice concept also questions neo-classical economic thinking, putting forward a just and equitable approach rather than a merely efficient one (Heffron et al., 2015).
This 'system re-thinking' goes along with recent demands at the European level about the right to energy by non-governmental organizations, which demand the prohibition of disconnections for vulnerable consumers and propose special tariffs and public funds for energy efficiency for vulnerable households (EPSU & EAPN, 2017). The right to energy concept inspires the discussion of whether energy can be considered a legal (human or consumer) right or a moral right (Hesselman et al., 2019).

Methodology
The data are based on two case studies, North Macedonia and Austria, following Yin (2003) in that case studies are suited to how or why questions and contemporary phenomena. The main method of data collection is documents, such as written materials, official publications, and reports (Patton, 2002), to complement a case study (Punch, 2005). The documents were collected in the period 2017-2020 to study energy poverty in both countries. The materials from Austria were provided by the Wien Energie Ombudsman, and the materials about North Macedonia are the annual reports of the Ombudsman for the period 2000-2018, which are publicly available on the Ombudsman's website. Additionally, interviews with relevant stakeholders representing the utilities and the public and private organizations involved in studying energy poverty or energy provision in both countries are used, 3 in North Macedonia and 4 in Austria, to explain the wider socio-political environment. The sampling of the interviews was purposive, made in a deliberate way (Punch, 2005) to explore the energy poverty of households and understand its underlying drivers. The selected interviews and documents were analyzed qualitatively. I coded the selected material following the steps of data condensation, data display, and conclusion drawing (Miles et al., 2014), differently for the two cases. The annual reports of the Macedonian Ombudsman were coded by mapping the Ombudsman's contribution to interpreting the legislation; developing new legislation; implementing the legislation; and developing a new understanding of energy poverty. The interview with the Wien Energie Ombudsman representative and the materials provided by this body were coded to understand the reasons for the establishment of the Ombudsman, the definition of its target group, and the work with its target group. I acknowledge the empirical limitations of the sources informing the cases, which rely mostly on the Wien Energie Ombudsman and the annual reports of the Macedonian Ombudsman. I justify this choice since the Austrian Ombudsman offered real inquiries by vulnerable consumers and references from collaborators, while the Macedonian Ombudsman is an independent institution publishing detailed, publicly available annual reports. The interviewees gave written consent for their interviews, except one who gave oral consent.

Case studies
The case studies concern the Macedonian independent Ombudsman and the Austrian energy utility-based Ombudsman, which are unexplored actors in the energy system, although they play a role in exposing energy injustices and addressing energy poverty. I use two cases of different types of Ombudsman in two countries in different contexts: North Macedonia as a case of deeper energy poverty, and Austria as a case of lower energy poverty. The aim is not to directly compare the cases, but to use them as different examples for studying the varieties of an Ombudsman body and its institutional role in exposing or alleviating energy poverty.
I present data about the structure and ownership of the energy markets and the effectiveness of the social welfare systems, since these institutions have been identified by the Ombudsman as increasing or alleviating energy poverty. The three EU-SILC indicators used in Table 1 serve as a guiding threshold for the level of energy poverty (EU_Energy_Poverty_Observatory, 2020a; Thomson & Snell, 2013), showing that North Macedonia is much more affected by energy poverty than Austria. North Macedonia is a post-socialist country with a high share of energy poverty, high levels of income poverty, and housing deprivation (Table 1). Energy poverty there is driven by widespread material deprivation, an inefficient housing stock, and over-dependency on subsidized electricity and fuelwood used in inefficient devices (Stojilovska, 2020). It can be compared to a subsistence-like economy operating at a minimal level of productivity (Stojilovska, 2020); to post-communist energy poverty related to the infrastructure legacies of the centrally planned economy (Bouzarovski & Tirado Herrero, 2017); and to energy deprivation shaped by a lack of gas infrastructure (Bouzarovski, 2018). In Austria, a small minority, between 2 and 9% based on the EU-SILC criteria (Table 1), is affected by energy poverty, and a study by the national energy regulator argues that the share is even lower (Energie_Control_Austria). Income poverty and housing deprivation are comparatively low (Table 1). The Austrian social welfare system is much more effective in poverty reduction than the Macedonian one (Table 1).

Macedonian Ombudsman
The case study about North Macedonia is focused on the work of the independent Ombudsman tasked to protect the constitutional and legal rights of citizens when violated by the public bodies in the country. I analyze its contribution to exposing energy injustices imposed on citizens by the energy monopolies and social protection institutions. The results indicate that citizens suffer from injustices inflicted by the privately owned district heating and electricity monopolies and from the weakness of the social welfare system. This section shows, through representative examples, how the Ombudsman has (1) interpreted the existing legislation; (2) developed new legislation; (3) implemented the existing legislation; and (4) developed new legal and policy understanding. While explaining each of these contributions, I explain how the Ombudsman has discovered the hidden institutional energy poverty drivers in North Macedonia: the district heating market, the electricity market, and the social welfare system. The Ombudsman has voiced its criticism of monopolies by elaborating that the existing legislation was not respected. I explain this through the example of collective electricity disconnections detected in multiple annual reports and described as a misuse of the electricity utility's monopoly position. Over the years, in neighborhoods with a high concentration of non-payers of electricity, the electricity utility would disconnect not only the consumers who were not paying but also those who were (annual reports for 2004-2006, 2008-2009, 2013) (Ombudsman, 2011b). This is because the utility feared physical injury potentially inflicted by dissatisfied consumers if it were to disconnect consumers on the spot (Ombudsman, 2011b). In 2008 alone, 9,510 customers were affected by collective disconnections (Energy_and_Water_Services_Regulatory_Commission, 2009).
The Ombudsman states that collective disconnections threaten basic human rights (Ombudsman, 2011b). The issue with the electricity utility is that it is a private monopoly. It tends to sue consumers with arrears and employs an enforcement agent to collect debts (Interview with a representative of the private electricity utility EVN, 19.05.2017). Many citizens face affordability issues due to high energy bills. As a result of unpaid electricity bills, 73,727 consumers were disconnected from electricity in 2018 (Energy_and_Water_Services_Regulatory_Commission, 2019), which is around 3.5% of the population (Stojilovska, 2020). The Ombudsman contributed to developing missing legislation. I explain this through the obligation for disconnected consumers of district heating in a collective building to pay the basic district heating fee (noted in the annual reports for 2012-2018) (Ombudsman, 2011b). The district heating network is small and exists only in the capital city. There are two district heating companies, each supplying a different part of the capital; the smaller is in public ownership and the larger in private. Until recently, consumers who disconnected from the larger (private) supplier and lived in a collective building had to pay the basic fee for using passive energy. The Ombudsman considered this obligation to pay after being disconnected a breach of consumer rights (Ombudsman, 2011b). A small victory was achieved in 2018, when this obligation was set to be cancelled (Ombudsman, 2011b). The issue with district heating is that its consumers can neither economize on their heating nor control the time of heating and the indoor temperature. The larger (private) district heating company is against individual apartment bills, considering them not economically justified (Interview with a representative of the private district heating company BEG, 25.05.2017), knowing the tendency of citizens to economize on heating. Having understood the link between material deprivation and energy arrears, the Ombudsman, based on existing legislation, demanded greater action regarding the social welfare system. The Ombudsman alerted that the current social protection system does not respond to the needs of citizens at risk; as a result, a family receiving social welfare and living in a poor, illegal dwelling was affected by a fire that killed three children in 2018 (Ombudsman, 2011b). The Ombudsman has stated that social welfare does not help the affected out of poverty and does not enable them to lead a normal life, as they can barely pay for food and clothes, let alone electricity and district heating (annual reports for 2011, 2016-2018) (Ombudsman, 2011b). The country is formally a social state but does not implement this principle in reality (Ombudsman, 2011b). This assessment by the Ombudsman is justified, since social welfare amounts to only 40 EUR per month for households without any income (Interview with a representative of the Platform against poverty, 05.06.2017). There is a 16 EUR monthly energy poverty subsidy for social welfare recipients, granted after the last energy bill has been paid (Official_Gazette, 2018). The Ombudsman has paved the way for a new legal and policy understanding of energy poverty and its human rights implications. In a few annual reports, the Ombudsman raised the issue of public schools without heating, which affects the educational performance and health of children (annual reports for 2014, 2017) (Ombudsman, 2011b).
This means that energy poverty is experienced beyond the household as a unit, in public buildings as well, and it establishes a link between the right to energy and the rights to education and the protection of health.

Austrian Energy Ombudsman
The Austrian Ombudsman located within the utility Wien Energie results from legislation in Austria obliging utilities to develop centers for consumers (Federal_Ministry_Republic_of_Austria, 2019), but also from the observation by the public utility Wien Energie that dialogue with vulnerable consumers needed improvement. I analyze the work of the Wien Energie Ombudsman in supporting vulnerable consumers to pay their energy bills while considering their precarious situation. The results show that the Wien Energie Ombudsman, located in the public utility, serves as a guardian protecting vulnerable consumers, and they show the crucial role of public utilities and related social institutions in supporting consumers in need. This section explores the role of the Wien Energie Ombudsman by explaining (1) the reasons for its establishment, (2) the definition of its target group, and (3) the actual work with its target group. I also briefly explain the energy market and the social welfare system. Wien Energie is a state-owned energy utility supplying district heat, gas, and electricity in Vienna. The Ombudsman reported that being state-owned is crucial for them, as they would not be able to take care of their consumers otherwise: 'We work on the open market, but we work for the citizens' (Interview with Wien Energie Ombudsman representative, 16.03.2017). From April 2011 till March 2017, the Ombudsman processed 17,000 requests from social institutions and clients, referring to 12,000 households, and had 270 networking meetings with private and public social organizations (Wien_Energie_Ombudsman, 2017). The collaborators of the Ombudsman have praised their work. The psychological counseling service for addicts, Dialog, explained: 'The Ombudsperson is an important service for materially underprivileged people which is rare to find in companies.' (Wien_Energie_Ombudsman, 2017). The Austrian state-owned energy supplier in Vienna, Wien Energie, has built up a team to answer the requests of its vulnerable clients. The representative of the Ombudsman explains: 'We have begun in 2011 to build our team because we experienced getting more and more requests from social institutions directed to Wien Energie with special questions, and we could not offer solutions which we give to a typical customer… And we did not have sufficient know-how… Then, it was decided to build a customers' unit here, the Ombudsperson, and to employ social workers.' (Interview with Wien Energie Ombudsman representative, 16.03.2017). One of the key aspects of the success of the Ombudsman is its good networking with other relevant institutions. The cooperation with the social institutions reduces bureaucracy, since the consumer does not have to prove affordability problems to the Ombudsman (Interview with Wien Energie Ombudsman representative, 16.03.2017). The Ombudsman offers payment in installments, reduction of certain costs, such as those for disconnection and warnings, fast reconnection in cases of need, such as for dependents on care, and payment through pre-payment meters (Wien_Energie_Ombudsman, 2017). In some cases, they might prevent a disconnection (Interview with Wien Energie Ombudsman representative, 16.03.2017).
The key achievement of the Wien Energie Ombudsman is the development of the criteria for a 'severe social case'. This notion is broader than energy poverty and denotes a customer who fulfills at least three different sub-criteria drawn from the six main criteria (income, illness, housing situation, family situation, debts, and life crises) listed in Table 2 (Interview with Wien Energie Ombudsman representative, 16.03.2017).

Table 2. Main criteria and their sub-criteria defining a severe social case according to the Wien Energie Ombudsman. Source: (Wien_Energie_Ombudsman, 2017).
Income: (1) persons on the guaranteed minimum income or minimal pension; (2) long-term unemployed persons eligible for support from labor market authorities; (3) owners of a card of the City of Vienna with discounts for persons on minimal income or minimal pension; (4) households whose energy costs are more than 10% of the household income; (5) persons not entitled to the guaranteed minimal income; (6) persons receiving child care support and allowance.
Illness: (1) households with a member receiving attendance allowance; (2) persons on life-support equipment; (3) persons with a disability (certificate of disability); (4) chronically ill persons (e.g. cancer); (5) persons with an appointed special guardian; (6) persons with a psychological illness; (7) persons with a current addiction.
Housing situation: (1) formerly homeless persons or families who have been taken care of by an institution; (2) persons at risk of eviction.
Family situation: (1) single parents with children required to attend school; (2) single mothers-to-be or families-to-be with children required to attend school.
Debts: (1) rent arrears; (2) energy arrears, disconnection, or risk of disconnection from Wien Energie (electricity/heat); (3) persons in debt or under execution (attachment); (4) persons working on paying back debts.
Life crises: (1) separation/divorce or death in the family; (2) domestic violence (restraining order for domestic violence); (3) job loss; (4) persons on probationary service; (5) persons with refugee status; (6) persons with an ongoing asylum procedure.

The work of the Ombudsman complements a strong social welfare system and a liberalized energy market. The social welfare system is developed to accommodate different needs, such as attendance allowance, child allowance, guaranteed minimum income, and heating allowance (Wien_Energie_Ombudsman, 2017). A consumer with arrears can refer to the basic supply for electricity and gas, meaning they will get a contract despite having arrears (Interview with a representative of the university JKU Linz, 23.03.2017; with a representative of the Ministry of Social Affairs, 22.03.2017). The process of disconnection is legally regulated, with a warning period that precedes it (Interview with a representative of the energy regulator E-Control, 23.03.2017; with a representative of the Ministry of Social Affairs, 22.03.2017). To concretely illustrate their work with their clients, the severe social cases, the Wien Energie Ombudsman provided a few real cases of clients' requests and the responses of the Ombudsman. One edited example is given in Table 3. It shows that the Ombudsman does not offer to forgive debts, but works on developing complex and sustainable solutions, in cooperation with the consumer and social institutions, which allow the consumer to pay their bills.

Discussion
I have proposed a new conceptual framework to apply procedural energy justice to energy poverty.
I did this by integrating broader energy justice discussions about the role, responsibility, and features of formal institutions and the agency of citizens, along with the energy citizenship literature; the right to energy discussions about the relationship of institutions to affordable energy; and the impacts of distributive and recognition justice on procedural justice. I place the Ombudsman, represented in this article by a typical independent Ombudsman observing human rights and an untypical one, a special unit within a state-owned energy utility, among the formal institutions, to study their contribution to detecting, preventing, or creating energy poverty. The proposed conceptual framework defines procedural energy justice applied to energy poverty as being about how institutions treat citizens over access to affordable energy, and how citizens are (dis)empowered by that relationship. The energy justice scholarship opens up the discussion on the good governance of institutions (Sovacool et al., 2017; Sovacool & Dworkin, 2015), emphasizing their responsibility and capacity (Rawls, 1971). This adds to the capacity assessment of the relevant formal institutions, such as institutions in energy policy-making, energy regulation, energy supply, and social institutions, which can be just or unjust by rectifying or creating inequalities. The Ombudsman as an independent body can play a crucial role in detecting injustices, and as a body within an energy utility it can contribute to alleviating energy poverty. The proposed conceptual framework highlights the agency of citizens. Energy justice brings out the moral and human aspects of the energy transition (Jenkins et al., 2018; McCauley et al., 2019; Sovacool & Dworkin, 2015) and the questions of retribution and resistance (Heffron & McCauley, 2017; Sovacool et al., 2017). The energy citizenship literature points out that citizens are left disempowered if they are reduced to consumers only (Lennon et al., 2019; Ryghaug et al., 2018). Citizens have multiple roles in society and should not be seen only through their ability to afford their energy bills. They have the right to participate (Walker & Day, 2012) and to enjoy a dignified life. The case studies have shown that the independent Ombudsman found the right to a dignified life of (vulnerable) citizens to be more important than their role as consumers, and that the utility-based Ombudsman created mechanisms to reach out to vulnerable citizens and consider their precarious life situation before demanding that they pay their energy arrears. One of the key aspects of the proposed conceptual framework is the assessment of the impacts of distributive and recognition energy justice on procedural energy justice, and how they amplify procedural (in)justices. Recognition justice, being about identifying who is vulnerable (Jenkins et al., 2016; Walker & Day, 2012), impacts procedural justice through the size of the energy-poor population. In the case of North Macedonia, where material deprivation and energy poverty affect a large share of the population, they magnify the level of ambition needed to tackle energy poverty through institutional policies. Distributive justice, which is about the location of injustices (Jenkins et al., 2016) and unequal access to energy services (Walker & Day, 2012), affects the fairness of the process and decisions through the institutional setup creating (un)equal access to energy. Certain institutions and institutional set-ups are energy poverty drivers.
The energy market structure (monopolized or not), the ownership of utilities (state or private), and the strength of the social welfare system determine the extent to which formal institutions practice policies that affect citizens positively. Lastly, a crucial aspect of procedural energy justice is how formal institutions treat access to energy and how the policies they create affect vulnerable citizens regarding access to affordable energy. Some institutions practice neo-liberal energy policies and treat energy as a commodity that citizens need to be able to afford no matter their personal circumstances. The independent Ombudsman detected that the energy monopolies were unjustly treating citizens only as consumers, even employing enforcement agents to collect energy debts from materially deprived citizens on social welfare assistance. Other institutions go beyond market relations (Walker, 2015) and practice the right to energy concept (EPSU & EAPN, 2017; Hesselman et al., 2019), thus considering affordable energy to be an essential human need. The utility-based Ombudsman has studied the needs of its vulnerable consumers, developed criteria to identify them by considering their entire life situation, proposed measures to secure their energy access, and enabled mechanisms for them to pay their energy arrears. A crucial output of procedural energy justice is whether citizens are protected from energy disconnections. The utility Ombudsman considers energy access one of its main goals, while the independent Ombudsman draws attention to energy monopolies that misused their position to disconnect citizens with arrears and even citizens who regularly paid their bills.

Conclusion
Drawing on procedural energy justice applied to energy poverty, with insights from the more general energy justice, energy citizenship, and right to energy literatures, I have explored the role of two different Ombudsman entities (independent and utility-based) existing in different contexts (North Macedonia with high and Austria with low energy poverty levels) in tackling energy poverty and exposing energy injustices. Through the independent Ombudsman's lens, I have found the hidden institutional energy poverty drivers to be energy monopolies and a weak social welfare system. The utility-based Ombudsman has been shown to support vulnerable consumers in paying back their energy debts while protecting them against disconnections. Against the background of cases with differently developed social welfare systems and different market structures, the Ombudsman's work reveals that these two institutions can alleviate or create energy poverty. The energy utility plays an especially relevant role in creating or preventing energy injustice: in the case of North Macedonia, the utility has been identified as the key institutional driver of energy poverty, as reported by the independent Ombudsman. In Austria, on the other hand, the Ombudsman is located within the utility and plays an emerging role in reducing energy poverty. The article shows that the Ombudsman is a new and under-explored actor in detecting energy poverty (Hesselman & Herrero, 2020). The Macedonian case showed that the independent Ombudsman is a protector of consumer rights and vulnerable consumers, pointing out the private district heating and electricity monopolies and the social welfare system as creators of injustices; however, it has a low impact on rectifying these injustices.
The relevance of the Ombudsman's work in North Macedonia lies in the development of new legislation that improves the level of consumer rights. The Ombudsman has also explicitly stated that human rights are endangered in cases of disconnections and weak social welfare protection, and it hinted that energy poverty can be experienced outside the household, with health and educational implications. It showed that a broader scope of citizens, including regular payers, can be affected by energy injustices, as in the case of collective disconnections. The Austrian utility-based Ombudsman is a result of legislation to protect vulnerable consumers, but also of the public utility's awareness of the need to deal with consumers unable to pay their energy bills. By developing a broad definition of a severe social case, the Austrian Energy Ombudsman has internalized the human rights approach of considering energy services a necessary (human) need. The case studies have tried to show the complex contextual environments in which the Ombudspersons operate. The article acknowledges that no generalization is possible; however, it clearly shows the asymmetry of the energy markets and the social welfare systems in the two countries, which either support or entrap consumers in (un)just policies. On one hand, we have a well-developed welfare state with a public utility leading the fight against the inhuman treatment of energy consumers; on the other, a weak social welfare system where privately owned energy monopolies inflict injustices on citizens and go unpunished. In North Macedonia, the energy supplier sees citizens mainly as energy consumers who need to pay their energy bills, whereas the Austrian energy suppliers have shown that energy affordability challenges are to be solved on the liberalized market by enabling everyone to pay their energy bills while offering them a humane approach. This shows that one of the key institutions is the energy supplier, which can be just or unjust depending on whether it practices the right to energy concept (Hesselman et al., 2019; Walker, 2015). Finally, I have extended the understanding of procedural energy justice. It is more than access to information, participation in decision-making, and access to legal rights (Walker & Day, 2012); it is about the relationship between formal institutions and citizens over affordable energy. Procedural energy justice applied to energy poverty is about how institutions treat citizens over access to affordable energy, and how citizens are (dis)empowered by that relationship. Procedural energy justice demands just institutions and policies to lead a socially just energy transformation. This requires structural reforms (Guyet et al., 2018), a rethinking of the neo-classical economic system (Heffron et al., 2015) which co-produces energy poverty, and a shift of focus from the personal situation of the energy-poor to the policies and institutions which co-shape energy poverty (Petrova, 2018). This article paves the way for further research about the role and contribution of Ombudspersons in detecting energy injustices and alleviating energy poverty. In Europe, a few good examples of Ombudspersons safeguarding energy-poor citizens from energy injustices exist, such as the Belgian, French, Spanish, and UK Ombudspersons. Although they appear in projects and policy work, their role and contribution have rarely been part of academic studies.
Further research would benefit from examining their practical work and their success stories in alleviating energy poverty.
8,624
2021-06-21T00:00:00.000
[ "Engineering" ]
TECHNOLOGY AND ECONOMIC DEVELOPMENT: RETROSPECTIVE: The purpose of this paper is to present the historical link between technological progress and economic development. Furthermore, it aims to present the stages of development of technological progress and its contribution to economic development, particularly in changing economic and social life. I want to emphasize that many researchers of scientific thought have understood technological progress as one of the root causes of the cyclical movement of the capitalist economy, while others seek to quantify the share of technological progress in the rate of GDP growth. This paper also analyzes the importance of technological progress in society and gives a chronological framework for the role and importance of the scientific and technological revolutions.

Introduction
The term technique comes from the Greek word tehne, meaning skill, ability, knowledge, means, and instrument. In recent times, it denotes the accumulated knowledge and experience generated in all areas of society. The term technology comes from two Greek words: tehno and logos (science). It is the science of the sum of knowledge about the procedures and processes used in material production. Later, the term technology was extended in a double sense: first, it covers the sum of knowledge about procedures and processes not only in manufacturing but also in other spheres of social life, and, second, it includes the procedures and processes themselves. Historically, Colin Clark and Jean Fourastié were among the first economic thinkers to emphasize the role of technological progress and its impact on economic progress or development. When discussing technological development, one should think of the changes produced under the influence of the interacting relations between society and technological progress, all of which affect social and economic behavior in everyday life. While the spectrum of opinion represented in these discussions is quite broad, there seem to be some points of consensus, at least within the Western "mainstream" (Bostrom, 2006):
• Technological development will have major impacts on human society.
• These developments will create both problems and opportunities.
• "Turning back" is neither feasible nor desirable.
• There is a need for careful public examination of both the upsides and downsides of new technologies, and for exploration of possible ways of limiting potential harms (including technological, regulatory, intergovernmental, educational, and community-based responses).

Technological progress and economic development
Today, technological progress is a factor in economic growth and development, though its intensity and forms of realization vary across countries. Its main components are:
• Discovery (invention),
• The application of the innovation, and
• Diffusion to other entities.
Besides these components, some researchers emphasize the factor of aging (degradation) of the invention. In scientific thought, three main stages of technological progress are distinguished: manual production (up to the first industrial revolution), the machine-industrial system, and the age of automation. From a different point of view in the discussion of the stages of technological progress, four scientific and technological revolutions are distinguished.
The first industrial revolution was driven by the steam engine, that is, by the replacement of the manufacture system with an industrial one (the industrial age). When it started at the end of the XVIII century, it caused a number of changes in economic and social life: it replaced part of the physical effort with machines, allowed the hiring of female and child labor, increased the universality of workers in the sense that they could be employed in and adapted to different activities, increased unemployment, raised the level of concentration of production, and increased imbalances in the development of economic branches and regions (Stojkov, 2008). The second industrial, or electromechanical, revolution refers to automation. Simply put, automation strengthens national economies. It involves the transfer of employees out of direct manufacturing and into the fields before and after production. Automation changes man and human development not only in the execution of physical operations, but also in the performance of certain mental operations. Also central to the changes of this revolution are electricity and its application in electric motors, the telephone, the telegraph, the automobile, aircraft, and others. The third industrial, or technological, revolution began before World War II and is called the electronic revolution. At the heart of its changes is the transistor, whose application enabled the development of computers and microprocessors. Finally, there is the fourth technological revolution, which began late last century and is also called the information revolution. The key to this revolution is the chip. The chip is directly linked to high technology: information technology, non-informational robots, and other forms, i.e. machines and tools with numerical control. This technological revolution has caused major changes in biotechnology, energy resources, and raw materials. Among other things, it is causing the replacement of national economies with a global economy and a remarkable movement of workers into the services sector. The literature devoted to technological progress depicts three basic types of technological progress: 1. labor-intensive (larger investments in labor rather than capital), 2. capital-intensive (larger capital investments compared with investments in the workforce), and 3. neutral technological progress (an equal increase in investments in capital and labor). This division is made according to the relationship between capital and labor in the realization of production (at the macro level of GDP growth). In practice, several individual indicators are used to measure technological progress. For example, the level of overall (total factor) productivity is used as a synthetic indicator of technological progress, and it is calculated using the Cobb-Douglas production function. Technological progress causes major changes in the other factors of production, particularly labor and capital. It causes significant changes in the area of international competition, where countries that offer high technology have a bigger advantage (Stojkov 2008). Technological progress is also among the major factors that enhance the process of globalization. In this context, the electronic and Internet services revolution is leading to the formation of an Internet economy and contributes to the increasing connectivity of the world economy.
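As an illustration of the growth-accounting idea behind this indicator, the following sketch states the Cobb-Douglas production function and the decomposition derived from it; the notation (A for the technology level, α for the capital share) is the standard textbook convention, not something taken from this paper.

```latex
% Cobb-Douglas production function with technology level A:
%   Y = output, K = capital, L = labor, 0 < \alpha < 1 (capital share)
\[
  Y = A\,K^{\alpha}L^{1-\alpha}
\]
% Taking logs and differentiating with respect to time yields the
% growth-accounting decomposition; the residual \dot{A}/A (the Solow
% residual) is the usual synthetic measure of technological progress:
\[
  \frac{\dot{Y}}{Y} = \frac{\dot{A}}{A}
    + \alpha\,\frac{\dot{K}}{K}
    + (1-\alpha)\,\frac{\dot{L}}{L}
\]
```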
Because the rate of technological development is so important for economic growth and for broader economic development, there have recently been many efforts to incorporate technological development into newer models of economic growth. Special merit here belongs to Robert Solow and Paul Romer. A detailed analysis of the important factors that affect economic growth is presented in the report. In highlighting the role of technological development for economic development, one can easily drift into technological determinism, namely, explaining economic development solely by the benefits of technological progress and innovation and believing that these will solve all the problems of humanity. Technology Transfer This is a very complex and multidimensional concept, for which either very general or partial definitions are given. The general definition states that technology transfer amounts to a transfer of technology and knowledge from the place where they are created to the place where they are to be used. In that sense, the transfer can take place between countries or between regions within a country. More specific definitions add that, besides the transfer of technology and knowledge, the user should have the conditions for creating proprietary technology and for its diffusion. The main purpose of technology transfer is to accelerate technological and economic development. It may be horizontal or vertical and may take place through different channels; according to some authors, the channels can be grouped into three groups (Todorov, 2002). Technological dependency also exists between the leaders of technological development, on the one hand, and the less developed countries, on the other. Countries that are exporters in some areas but importers of equipment and technology in others have a lower level of technological dependency compared with countries that only import equipment and technology (developing countries). Conclusion Many economists and researchers regard technological progress as one of the root causes of the cyclical movement of the capitalist economy; others seek to quantify the share of technological progress in the rate of GDP growth, study the importance of technological progress in the information society, or describe the scientific and technological revolutions. After World War II a separate discipline formed under different names: the policy of technological development, scientific and technological progress, and so on. Policy makers make a key contribution to technological development. Specifically, technological policy should address goals and priorities, determine the subject of technological policy, establish the instruments and means of implementing the objectives, analyse the results, and, finally, propose measures to improve the situation. Its main aims and objectives are often stated as follows: the transition to a higher technological and developmental phase; the provision of sustainable development, in which investment and education are of crucial importance; involvement in the process of globalization; the development of national innovation systems; and a more pronounced orientation towards human potential in relation to capital.
2,104.2
2016-10-01T00:00:00.000
[ "Economics" ]
ToxDBScan: Large-Scale Similarity Screening of Toxicological Databases for Drug Candidates We present a new tool for the hepatocarcinogenicity evaluation of drug candidates in rodents. ToxDBScan is a web tool offering quick and easy similarity screening of new drug candidates against two large-scale public databases, which contain expression profiles for substances with known carcinogenic profiles: TG-GATEs and DrugMatrix. ToxDBScan uses a set similarity score that computes the putative similarity based on similar expression of genes to identify chemicals with similar genotoxic and hepatocarcinogenic potential. We propose using a discretized representation of expression profiles, which uses only information on up- or down-regulation of genes as relevant features. Therefore, only the deregulated genes are required as input. ToxDBScan provides an extensive report on similar compounds, which includes additional information on compounds, differential genes and pathway enrichments. We evaluated ToxDBScan with expression data from 15 chemicals with known hepatocarcinogenic potential and observed a sensitivity of 88%. Based on the identified chemicals, we achieved perfect classification of the independent test set. ToxDBScan is publicly available from the ZBIT Bioinformatics Toolbox. Introduction Developing new drugs is a very cost-intensive process. Estimates of the overall cost of developing a Food and Drug Administration (FDA) approved drug range from $160 million to $1.8 billion, based on success rates of only 12% to 23% for drugs entering the clinical phase [1]. Low success rates in combination with high requirements for approval by the FDA or similar agencies lead to the immense costs per approved drug. Depending on the estimate, between 40 and 65 percent of the total cost is spent during the preclinical phase [1]. Animal studies are required prior to approval for clinical studies. These animal studies are expensive, both in terms of required resources (i.e., animals, researchers, chemicals) and time. While clinical trials are generally more expensive than preclinical trials, the success rate is much lower for preclinical trials. Therefore, a larger number of preclinical trials is required per approved drug, which leads to high costs accumulating in the preclinical phase. Drug candidates with genotoxic effects are identified early in the preclinical phase with genotoxicity assays, e.g., the Ames test [2]. However, carcinogenic effects can also arise irrespective of genotoxic events, e.g., by inhibition of apoptosis or initiation of proliferation [3]. Currently, no approved short-term assays are available for non-genotoxic carcinogenicity. The current gold standard in the preclinical assessment of non-genotoxic carcinogenicity is the two-year rodent assay [4]. During this assay, a group of rodents, typically rats or mice, is treated with the drug candidate at a multiple of the estimated human dosage (see ICH Safety Guidelines S1A-S1C and OECD Test Guideline 451). The treated group is compared with a non-treated control group to identify a potential increase in cancer incidence, e.g., by histopathology. This process is not only cost-intensive, but can also lead to the late discovery of the carcinogenic effects of the drug candidate. These failures late in the preclinical phase contribute to the low success rate and are particularly expensive [5].
The field of toxicogenomics uses computational biology approaches to investigate toxicological questions, such as carcinogenicity prediction for drug candidates. This includes in silico approaches, e.g., quantitative structure-activity relationship (QSAR) models [6], as well as approaches that combine high-throughput methods, such as microarrays or next-generation sequencing, with computational analysis [7]. The combination of short-term rodent assays with machine learning has been shown to be able to predict the outcome of the two-year rodent assay [7]. This toxicogenomics approach uses microarray data obtained from the treated and control animals after one, two or four weeks, or even a longer duration of treatment with the drug candidate. The problems of these studies are the small sample sizes, due to budget or time restrictions, and the large diversity of potential modes of action (MOAs) observed for non-genotoxic carcinogens. Whereas DNA damage response and p53 signaling can be observed for most genotoxic carcinogens, non-genotoxic carcinogens act through several distinct mechanisms, e.g., chronic cell injury, immunosuppression, increased secretion of trophic hormones or altered receptor activity [3]. During the last decade, the problem posed by non-genotoxic carcinogens led to the development of two large databases that are publicly available: Open TG-GATEs [8] and DrugMatrix [9]. These databases allow a more comprehensive analysis of MOAs of non-genotoxic carcinogens. To our knowledge, no tool exists that allows the analysis of both databases. Most studies that investigated toxicogenomics approaches focused on established machine learning methods applied to expression profiles obtained from specific microarrays [7]. Therefore, it is difficult to construct prediction systems for new expression profiles that were obtained by different researchers under different conditions. In this paper, we present ToxDBScan, which provides an easy interface to the information included in these databases. ToxDBScan enables researchers to quickly identify compounds that show similar perturbations at the gene expression level in the liver of male rats. Compatibility across available microarray platforms is provided through the abstraction of array-specific probe set identifiers to gene symbols. In addition, ToxDBScan performs pathway enrichment analyses against the KEGG database [10]. The ToxDBScan web application is freely available from the ZBIT Bioinformatics Toolbox [11]. Because only the up- and down-regulated genes are required as input for the web application, no confidential data needs to be uploaded in order to perform analyses. This allows a quick and easy identification of potentially similar compounds for further mechanistic analysis, assessment of their hepatocarcinogenic potential or mode of action discovery. Gene Fingerprint Extraction Gene fingerprints were extracted for each condition based on the intensity ratio (treated to control animals) with two thresholds: 1.5-fold and two-fold deregulation. Figure 1 shows the distribution of gene fingerprint sizes. At least one deregulated gene was identified for each threshold in all conditions. For the less conservative 1.5-fold deregulation threshold, the gene fingerprint size ranges from 23 to 6525 genes, with a median size of 131 genes. Gene fingerprint sizes were smaller for conditions from TG-GATEs, with a median size of 111 genes, compared to a median size of 603 genes for DrugMatrix conditions.
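As an aside, the extraction step just described is easy to make concrete. The following minimal Python sketch (an illustration under assumed variable names, not the authors' code) turns per-gene log2 intensity ratios into the ternary up/down/unchanged representation used throughout this paper:

```python
import numpy as np

def gene_fingerprint(log2_ratios: np.ndarray, fold_change: float = 1.5) -> np.ndarray:
    """Discretize log2(treated/control) ratios into a ternary fingerprint.

    Returns +1 for genes up-regulated beyond the fold-change cutoff,
    -1 for genes down-regulated beyond it, and 0 otherwise.
    """
    cutoff = np.log2(fold_change)          # e.g. log2(1.5) ~ 0.585
    fp = np.zeros_like(log2_ratios, dtype=int)
    fp[log2_ratios >= cutoff] = 1          # up-regulated
    fp[log2_ratios <= -cutoff] = -1        # down-regulated
    return fp

# Toy usage: 5 genes, one clearly up-regulated, one clearly down-regulated
ratios = np.array([1.2, -0.9, 0.1, 0.0, -0.3])
print(gene_fingerprint(ratios))            # -> [ 1 -1  0  0  0]
```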
For the stricter two-fold deregulation threshold, gene fingerprint sizes range from 5 to 3224 genes, with a median size of 32 genes. Again, gene fingerprint sizes were smaller for TG-GATEs conditions, with a median size of 27 genes, compared to a median size of 152 genes for conditions from DrugMatrix. This difference may be a result of the higher dose levels administered in DrugMatrix experiments. Identification of Similar Conditions For each chemical in the evaluation dataset (see Table 1), we used our similarity score to extract the most similar conditions from the combined TG-GATEs and DrugMatrix databases. The extracted conditions were compared to the evaluation chemicals based on genotoxicity and carcinogenicity information. Ten of the 15 chemicals in the evaluation set are contained in TG-GATEs, DrugMatrix or both. These 10 substances included eight non-genotoxic carcinogens (NGCs), one non-hepatocarcinogen (NC) (nifedipine, NIF) and one genotoxic carcinogen (GC) (nitrosodimethylamine, DMN). For five of these eight NGCs (acetamide (AAA), ethionine (ET), methapyrilene (MP), phenobarbital (PB) and thioacetamide (TAA)), an experiment with the same substance was returned as the best hit (see Figure S1). The remaining three NGCs (cyproterone acetate (CPR), diethylstilbestrol (DES) and Wy-14643 (WY)) were placed second in the returned list of similar experiments, due to the existence of related NGCs for which higher similarity scores were observed (see Figure S1). The identification of similar NGCs also allows a mode of action analysis. For instance, WY was found to be most similar to the chemicals fenofibrate, clofibric acid and clofibrate (see Figure 2a). These are known to activate peroxisome proliferator-activated receptor alpha (PPARα) [13], suggesting a PPARα-related mode of action for WY (as shown by Peraza et al. [13]). Fenofibrate, clofibric acid, WY and clofibrate are also among the most similar chemicals for DHEA (see Figure 2b). This may indicate a PPARα-related mode of action for DHEA, as has previously been shown by Mastrocola et al. [14]. For PBO, several NGCs were identified as the most similar chemicals: omeprazole, hexachlorobenzene, carbamazepine and spironolactone (see Figure 2c). Three of these chemicals, omeprazole, hexachlorobenzene and carbamazepine, are classified as enzyme inducers [15,16], suggesting an enzyme-inducing mode of action for PBO, as demonstrated by Goldstein et al. [17]. Similar results are obtained for other enzyme inducers in the test set, e.g., cyproterone acetate (CPR) and PB. Omeprazole, spironolactone and carbamazepine are found among the most similar compounds for CPR, suggesting enzyme induction as the major MOA, as Schulte-Hermann et al. demonstrated [18]. Carbamazepine and hexachlorobenzene are among the most similar compounds for PB, which again suggests enzyme induction as an MOA, as has been shown by Waxman et al. [19]. Sulfasalazine, which is classified as an enzyme-inducing NGC by Uehara et al. [16], is also among the compounds most similar to PB, but has no associated positive test for hepatocarcinogenicity in the CPDB and is therefore considered an NC. The NGCs TAA, MP and ET are considered hepatotoxic oxidative stressors by Uehara et al. [16]. For TAA, the most similar compound is MP, but with a low similarity compared to the TAA experiments contained in TG-GATEs.
Among the compounds most similar to MP are carbon tetrachloride and TAA, which supports the hepatotoxic MOA, but also the PPARα activator gemfibrozil and the genotoxic compound hydrazine. For ET, the most similar compounds include TAA and MP, as well as carbon tetrachloride, which is also a hepatotoxic oxidative stressor [16]. The genotoxic compound DMN was not recalled, which may be due to the different dosage and duration of treatment (10 mg/kg/day for five days in DrugMatrix vs. 4 mg/kg/day for seven days in the evaluation dataset). However, nitrosodiethylamine, which is chemically very similar to DMN, was identified as the most similar compound for DMN, along with other GCs. For the second genotoxic chemical, C.I. Direct Black (CIDB), the five most similar compounds identified in the databases are all GCs; the highest scoring is acetamidofluorene (see Figure 2d). This evaluation shows that our similarity score allows the identification of similar compounds to provide leads for mechanistic analysis, carcinogenicity evaluation and mode of action detection. Figure 2. Gene expression heat maps of similar compounds. For selected test chemicals, we extracted the most similar chemicals included in either TG-GATEs or DrugMatrix. Each column corresponds to a chemical that was identified as similar. The chemicals are sorted from left to right by descending similarity score. The heat maps show the log2 fold change of 20 selected genes from the gene fingerprints of the test chemical. Genes above the black line are upregulated at least 1.5-fold in the test chemical, and genes below are downregulated, respectively. Genes were selected based on average expression in the identified chemicals. The color bar above the chemical name indicates the hepatocarcinogenicity annotation; the legend is shown in (a). Threshold Selection In order to select an appropriate similarity threshold for the compound fingerprints, we determined for each chemical how many conditions with an equal toxicological class are among the five, 10 and 20 nearest neighbors, i.e., the most similar conditions (see Table 2). The less conservative threshold of 1.5-fold deregulation performs slightly better than the stricter threshold. On average, 4.3 of the five, 8.0 of the 10 and 14.4 of the 20 most similar conditions were treated with a chemical of the same carcinogenicity class. For each chemical in the test set, relative similarity scores S̃ were computed by dividing the observed similarity score for a certain condition by the maximum similarity score. The percentage of conditions annotated with the same carcinogenicity class in the subset of conditions with a relative similarity score higher than 0.8 and 0.7 was computed (see Table 3). Again, a slightly better performance was observed for the less conservative fold change cutoff. On average, 88% of the identified conditions with S̃ > 0.8 were of the same class as the evaluation chemical, while only 80% of the conditions with S̃ > 0.7 had matching classes. Our evaluation with expression profiles from an independent dataset shows that our similarity score allows robust identification of compounds with similar genotoxic and hepatocarcinogenic potential. The identification is possible for chemicals that are already in one or both databases, as well as for compounds that are not included in either of the two databases. Table 2. Percentage of correctly identified conditions. The most similar conditions were extracted for each chemical in the evaluation set.
The percentage of conditions with the same carcinogenicity class among the five, 10 and 20 most similar conditions was calculated, for both the 1.5-fold and the two-fold deregulation threshold:

                             1.5-fold threshold       2-fold threshold
                             top 5  top 10  top 20    top 5  top 10  top 20
Genotoxic carcinogens
  CIDB                        100    100     95       100     90      90
  DMN                          80     70     50       100     80      75
Non-genotoxic carcinogens
  PBO                         100     80     65        80     70      75
  MCA                          60     60     45        80     60      50
  DHEA                        100     90     85       100     90      80
  MP                           80     70     70        80     50      40
  TAA                         100     80     65       100     70      60
  DES                         100    100    100       100    100     100
  WY                          100     90     90       100    100      95
  AAA                          60     50     35        80     60      50
  ET                          100    100    100       100    100      85
  CPR                          80     80     65        60     60      60
  PB                           80     60     45        20     20      25
Non-hepatocarcinogens
  CFX                         100     90     85        60     70      85
  NIF                          60     80     70        60     70      70
Mean                           86     80     71        82     73      70

Table 3. Percentage of correctly identified conditions. The most similar conditions were extracted for each chemical in the evaluation set. The percentage of conditions with the same carcinogenicity class and a relative similarity above 0.8 and 0.7 was calculated. Across all evaluations, the 1.5-fold deregulation threshold led to better results for the similarity search. This may be due to the larger number of genes available for the similarity scoring of the evaluation compounds (median fingerprint size: 269 genes). The smaller fingerprints observed for the higher threshold (see Figure 1) may lack specific genes that are only slightly deregulated. In particular, for NGCs and NCs, the number of deregulated genes is very small when using the two-fold deregulation threshold, with a median fingerprint size of 53 genes. Based on the above evaluations, we propose using the 1.5-fold deregulation threshold and considering conditions with a relative similarity score S̃ > 0.8 as likely to share the same class. Hepatocarcinogenicity Prediction Above, we proposed using an intensity ratio threshold of 1.5-fold deregulation for gene fingerprint extraction and a relative fingerprint similarity of more than 0.8 to identify similar compounds. To assess the viability of these thresholds, we performed a classification of an independent test set. For each chemical, we extracted gene fingerprints using a 1.5-fold deregulation threshold. These were compared to the database using our similarity score. Conditions with a relative similarity score S̃ ≥ 0.8 were considered similar, whereas conditions with S̃ < 0.8 were considered different. To assign the carcinogenicity class, we performed an over-representation test for GCs and NGCs, respectively. We calculated the ratios of GCs (R_GC) and NGCs (R_NGC) among the similar conditions. For each chemical, a random permutation test was performed to assess the significance of the observed GC and NGC percentages. Each permutation test used a gene fingerprint of the same size as that of the test compound, containing genes randomly drawn from the genes available in the database. We performed n = 100,000 repetitions with different randomly drawn gene fingerprints to estimate the distributions of R_GC and R_NGC. The p-value of the over-representation test for GCs was computed as p_GC = N/n, where N is the number of random gene fingerprints that yielded a higher ratio of GCs. Analogously, p_NGC was computed. Each test chemical was classified as a GC if p_GC < 0.05, as an NGC if p_NGC < 0.05, or as an NC if p_GC > 0.05 and p_NGC > 0.05. The results of the over-representation test are shown in Table 4. The correct class was predicted for all 15 chemicals in the evaluation set.
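A minimal Python sketch of the over-representation test just described is given below. It is an illustration rather than the authors' implementation: the in-memory database representation (a list of ternary fingerprints plus a boolean GC annotation array), the helper similarity(), and the reduced number of repetitions are all assumptions made for the example; the paper itself uses n = 100,000 repetitions.

```python
import numpy as np

rng = np.random.default_rng(0)

def similarity(x, y):
    # Ternary Tanimoto-style score: genes deregulated in the same
    # direction in both fingerprints, over the union of deregulated genes.
    match = np.sum((x == y) & (x != 0))
    union = np.count_nonzero(x) + np.count_nonzero(y) - match
    return match / union if union else 0.0

def p_gc(query, db_fps, db_is_gc, n_genes, n_perm=1000, rel_cut=0.8):
    """Permutation p-value for over-representation of genotoxic carcinogens
    (GCs) among conditions whose relative similarity exceeds rel_cut."""
    def gc_ratio(fp):
        scores = np.array([similarity(fp, c) for c in db_fps])
        if scores.max() == 0.0:
            return 0.0
        hits = scores >= rel_cut * scores.max()   # relative similarity cut
        return float(db_is_gc[hits].mean())

    observed = gc_ratio(query)
    k = np.count_nonzero(query)                   # preserve fingerprint size
    exceed = 0
    for _ in range(n_perm):
        rand_fp = np.zeros(n_genes, dtype=int)
        idx = rng.choice(n_genes, size=k, replace=False)
        rand_fp[idx] = rng.choice([-1, 1], size=k)
        if gc_ratio(rand_fp) > observed:          # higher GC ratio than observed
            exceed += 1
    return exceed / n_perm                        # p_GC = N / n

# Toy usage: three database conditions over six genes, first two annotated GC.
db = [np.array(f) for f in ([1, 1, 0, 0, -1, 0],
                            [1, 0, 0, 0, -1, 0],
                            [0, 0, 1, 1, 0, -1])]
is_gc = np.array([True, True, False])
print(p_gc(np.array([1, 1, 0, 0, -1, 0]), db, is_gc, n_genes=6, n_perm=200))
```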
Table 4. Classification results. Similar conditions in TG-GATEs and DrugMatrix were identified by computing the similarity score S and selecting conditions with a relative similarity S̃ > 0.8. Ratios of genotoxic carcinogens (R_GC) and non-genotoxic carcinogens (R_NGC) were computed based on the annotation of the similar conditions. A permutation test (n = 100,000) was performed to assess the significance of the over-representation of GCs (p_GC) and NGCs (p_NGC). If a p-value was significant at α = 0.05, the corresponding class was predicted. If no significant enrichment was found for either of the two classes, the test chemical was predicted to be a non-hepatocarcinogen (NC). Significant p-values are printed in bold font. Web Application ToxDBScan is available as a web application from the ZBIT Bioinformatics Toolbox [11]. The ZBIT Bioinformatics Toolbox runs on a Galaxy Project web server [20][21][22], which provides a user-friendly and sustainable platform for tools used in scientific research. No local installation is required for running the application. ToxDBScan generates an HTML report, which is shown directly inside the ZBIT Bioinformatics Toolbox. This report includes the results of the database scan for similar compounds and enriched KEGG pathways, as well as information on the NGC specificity and information content of the deregulated genes (see Figure 3). This information can be used for a mechanistic analysis of the hepatocarcinogenic potential or for mode of action detection. The gene fingerprint of the query compound can be compared to the gene expression profiles observed under the most similar conditions by means of a heat map. Additional information on the deregulated genes is available from the "Gene analysis" tab at the head of the report. The results of the pathway enrichment analysis against the KEGG database are available from the "Pathway analysis" tab. The "Heat maps" tab shows heat maps of the gene expression in the most similar compounds. Additional information on the database compounds (e.g., CAS number and structure) and KEGG pathways is provided. All reports can be downloaded for further analyses in either tabular format or as a PDF. ToxDBScan requires only the deregulated genes observed in an experiment, which can be provided as official rat gene symbols (as provided by the Rat Genome Database [23]), Entrez IDs [24], Ensembl IDs [25] or UniProt IDs [26]. Therefore, no confidential data, such as the chemical structure, name or experimental details, needs to be uploaded. Discussion We have developed a novel approach for similarity scoring of gene expression profiles and applied it to data from TG-GATEs and DrugMatrix, two large-scale toxicogenomics databases. We evaluated our similarity score with an independent evaluation set of gene expression profiles from experiments not included in TG-GATEs and DrugMatrix. The results indicate that our similarity score is able to robustly identify hepatocarcinogenic compounds with similar modes of action. Furthermore, we demonstrated that an accurate prediction of the carcinogenicity class of the evaluation chemicals was possible. All 15 compounds in the evaluation dataset were assigned to the correct class. The similarity score can be used through a web application to identify compounds with a potentially similar mode of action in TG-GATEs and DrugMatrix. The web application, ToxDBScan, is freely available from the ZBIT Bioinformatics Toolbox [11]. The evaluation dataset included 15 chemicals belonging to three carcinogenicity classes: NGCs, GCs and NCs.
In our evaluation dataset, three major mechanisms are represented: oxidative stress-mediated hepatotoxicity (TAA, MP, ET), PPARα induction (WY, DHEA) and enzyme induction (PB, PBO, CPR) [13,14,[16][17][18]. Our similarity score robustly identified compounds in TG-GATEs and DrugMatrix that act through the same modes of action as these NGCs. Furthermore, GCs in the databases were identified as most similar to the genotoxic evaluation chemicals (CIDB, DMN). This indicates that our similarity score is a useful tool for the identification of compounds in TG-GATEs and DrugMatrix that act through similar mechanisms, thus providing leads for further analysis of the mode of action. To assess whether our similarity score can be used for the identification of the hepatocarcinogenicity of new drug candidates, we evaluated different intensity-ratio cutoffs and relative similarity thresholds. The best results were obtained with a 1.5-fold deregulation threshold for genes and a 0.8 relative similarity threshold. With these parameters, we observed that 88% of the compounds identified as similar have equal hepatocarcinogenic potential. Using these optimal parameters, we performed a classification of the independent evaluation compounds based on the TG-GATEs and DrugMatrix databases. We were able to correctly predict all 15 evaluation chemicals as NGC, GC or NC. This indicates that our similarity score allows the hepatocarcinogenicity evaluation of new compounds based on large databases of compounds with known hepatocarcinogenic potential. ToxDBScan, a freely available web application, was created to allow other researchers to use our similarity score for the identification of similar compounds in TG-GATEs and DrugMatrix. To our knowledge, no other web application is available that offers a similarity search in both TG-GATEs and DrugMatrix. In addition, ToxDBScan is independent of the platform used to identify the deregulated genes, as only the list of up- and down-regulated genes is required to run ToxDBScan. New data can easily be integrated into ToxDBScan to extend the database of expression profiles available for the similarity search. The use of ToxDBScan is not limited to new drug candidates, as demonstrated by the compound CIDB in our evaluation set, which is a genotoxic dye. In summary, ToxDBScan offers a unique similarity scoring method for the two largest toxicogenomics databases and may contribute to the implementation of new approaches for the evaluation of the carcinogenic potential of chemicals. Carcinogenic Potency Database The Carcinogenic Potency Database (CPDB) is a publicly available database that records the outcome of long-term in vivo cancer bioassays performed in several organisms. Currently, it contains the outcomes of 6540 studies on 1547 chemicals. The carcinogenic potential is listed by the observed cancer site. In addition, the outcome of the Ames test, which is based on auxotrophic bacterial strains, is included for many chemicals. For this study, we considered a chemical hepatocarcinogenic if the CPDB contained a positive outcome observed in the liver of male rats, and genotoxic if a positive outcome of the Ames test was recorded. Compounds that were not tested in the CPDB or that have no distinct associated outcome are annotated as unclassified.
Chemicals were classified as genotoxic carcinogens (GC) if they were both hepatocarcinogenic and genotoxic, as non-genotoxic carcinogens (NGC) if they were hepatocarcinogenic but not genotoxic, or as non-hepatocarcinogens (NC) if no positive carcinogenicity test in male rat liver was recorded in the CPDB (see Table S1). Toxicogenomics Project-Genome Assisted Toxicity Evaluation System TG-GATEs is a publicly available toxicogenomics database, which was established by the Japanese government and several Japanese pharmaceutical companies [8,31]. It is available from ArrayExpress through the accession number E-MTAB-800. TG-GATEs contains gene expression profiles from male Sprague-Dawley rat liver and kidney, as well as cultured human and rat hepatocytes, treated with 160 chemicals in either single or repeated dosage settings. For ToxDBScan, all expression profiles from the rat liver were used. Each chemical was administered at three doses and for eight durations, i.e., 3 to 24 h in the single dosage setting and 4, 8, 15 and 29 days in the repeated dosage setting. In total, 3528 combinations of chemical, dosage and duration were performed with three replicates each. Three matched controls were profiled for each condition, leading to 14,143 available gene expression profiles. Through the CPDB and Uehara et al. [16], genotoxicity annotations are available for 123 of the 160 compounds profiled in male rat liver, which translates to 2768 conditions with known hepatocarcinogenic and genotoxic potential (see Table S2). DrugMatrix DrugMatrix is a toxicogenomics database that was acquired by the National Toxicology Program (NTP) and made publicly available through the Gene Expression Omnibus (GEO) [32] under the accession number GSE57822. It contains gene expression profiles sampled from male Sprague-Dawley rat tissue (liver, kidney, heart and thigh muscle) and cultured rat hepatocytes after single and repeated dosage treatment with 376 chemicals, with control samples from male rats kept under identical conditions. Chemicals were administered in different doses and for different durations (ranging from 6 h to 7 days), and each combination of tissue, chemical, dosage and duration was replicated with three animals, leading to 5587 gene expression profiles. In male rat liver, only 200 of the 376 chemicals were profiled, resulting in 654 different combinations of chemical, dosage and duration and 1939 expression profiles. Gene expression was profiled using the Affymetrix Rat Genome 230 2.0 Array. Through the CPDB, hepatocarcinogenicity and genotoxicity annotations are available for 132 of the 200 compounds profiled in male rat liver, which translates to 440 conditions with known hepatocarcinogenic and genotoxic potential (see Table S3). Comparison of TG-GATEs and DrugMatrix Fifty-one chemicals were profiled by both TG-GATEs and DrugMatrix. For the overlapping chemicals, the dose levels used in DrugMatrix were generally higher than those used in TG-GATEs. The dose levels selected for the TG-GATEs repeated dosage experiments were considered to be acceptable for one-month repeated dosing [8]. The DrugMatrix doses were selected based on estimates of the maximum tolerated dose and the fully effective dose, generated from literature research and preliminary dose-finding studies [33]. Data Preprocessing TG-GATEs data were normalized with robust multi-array average (RMA) normalization using the R Bioconductor package affy [34]. RMA-normalized data from DrugMatrix were downloaded from the DrugMatrix FTP server [35].
For all conditions in the two datasets, log2 intensity ratios were calculated for each probe set as the difference between the average log2 intensity observed in treated samples and that in controls. Affymetrix probe set identifiers were mapped to official gene symbols using the Bioconductor package biomaRt for R [36]. The expression values of probe sets mapping to the same gene symbol were averaged. Differentially expressed genes were identified for two commonly used intensity ratio cutoffs, 1.5-fold and 2-fold up- or down-regulation. These gene fingerprints were stored for each condition. Pathway Enrichment Gene symbols were mapped to the corresponding Rattus norvegicus pathways obtained from the KEGG database [10]. For each pathway in the KEGG database, a hypergeometric test was performed to check for significant pathway perturbation. The p-value is computed as

\[
  p = \sum_{k=m}^{\min(n,\,M)} \frac{\binom{M}{k}\binom{N-M}{n-k}}{\binom{N}{n}},
\]

where N is the number of all genes for which gene expression was measured, M is the number of genes in the pathway of interest, n is the number of differentially expressed genes and m is the number of differentially expressed genes that are part of the pathway of interest. The resulting p-values were corrected for multiple hypothesis testing with the Benjamini-Hochberg correction [37]. Similarity Scoring The most commonly used similarity measures for gene expression profiles are the Pearson correlation and the Euclidean distance [32]. However, the number of differentially expressed genes is small compared to the total number of genes that were profiled. This leads to sparse gene fingerprints. Therefore, both methods were deemed not applicable for measuring the similarity of gene fingerprints. In chemoinformatics, fingerprints are used to score the similarity of chemical substances, e.g., the extended connectivity fingerprints (ECFP) [38]. Similarity based on ECFP is computed using the Tanimoto similarity coefficient, which was derived from the Jaccard index (also called the Jaccard similarity coefficient) [39]. The Jaccard index is a similarity measure that is used to define the Jaccard distance, a metric for computing the distance of arbitrary sets. The Jaccard index J is defined as the ratio of the number of elements in the overlap of two sets A and B and the number of elements in the union of the two sets:

\[
  J(A, B) = \frac{|A \cap B|}{|A \cup B|}.
\]

This is equivalent to computing

\[
  J(A, B) = \frac{|A \cap B|}{|A| + |B| - |A \cap B|},
\]

which does not require the union of the sets A and B. The Tanimoto coefficient T is the equivalent of the Jaccard index defined on binary vectors X, Y ∈ {0, 1}^n:

\[
  T(X, Y) = \frac{\sum_{i=1}^{n} x_i y_i}{\sum_{i=1}^{n} x_i^2 + \sum_{i=1}^{n} y_i^2 - \sum_{i=1}^{n} x_i y_i}.
\]

The gene fingerprints used for the similarity scoring are not binary vectors, as they encode information on upregulated genes (represented as 1), downregulated genes (represented as −1) and non-regulated genes (represented as 0). We defined a modified Tanimoto coefficient S, which accounts for this ternary representation. For two gene fingerprints X, Y ∈ {−1, 0, 1}^n, where n is the number of measured genes, the similarity score S is defined as

\[
  S(X, Y) = \frac{\sum_{i=1}^{n} \delta(x_i, y_i)\, x_i^2}{\sum_{i=1}^{n} x_i^2 + \sum_{i=1}^{n} y_i^2 - \sum_{i=1}^{n} \delta(x_i, y_i)\, x_i^2},
\]

where δ(x, y) is the Kronecker delta:

\[
  \delta(x, y) = \begin{cases} 1 & \text{if } x = y, \\ 0 & \text{otherwise.} \end{cases}
\]

The modified similarity score S allows the scoring of gene fingerprints analogously to the scoring of ECFP fingerprints with the Tanimoto coefficient T. If X and Y are binary vectors, the similarity score S and the Tanimoto coefficient T are equal.
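To make the ternary scoring concrete, here is a small Python sketch of the modified Tanimoto coefficient. Note that the closed form of S given above was reconstructed from its stated properties (ternary inputs, Kronecker-delta matching, reduction to the classical Tanimoto coefficient on binary vectors), so both the formula and this code should be read as a sketch consistent with the description rather than the authors' published implementation:

```python
import numpy as np

def ternary_tanimoto(x: np.ndarray, y: np.ndarray) -> float:
    """Modified Tanimoto coefficient for ternary fingerprints in {-1, 0, 1}.

    Counts genes deregulated in the same direction in both fingerprints
    and normalizes by the union of deregulated genes; reduces to the
    classical Tanimoto coefficient when both inputs are binary.
    """
    match = np.sum((x == y) & (x != 0))        # same direction, both nonzero
    union = np.count_nonzero(x) + np.count_nonzero(y) - match
    return float(match) / union if union else 0.0

# Toy usage: 2 matching deregulated genes out of a union of 5
x = np.array([1, -1, 0, 1, 0])
y = np.array([1,  1, 0, 1, -1])
print(ternary_tanimoto(x, y))                  # 2 / (3 + 4 - 2) = 0.4
```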
During the analysis of the scoring schemes, we found that many genes provide little information. This is due to common up- or down-regulation in response to drug administration, regardless of the toxicological outcome. To account for these genes, we further modified the similarity score by introducing a weight vector w. Each gene is assigned a weight depending on its frequency in the database:

\[
  w_g = -\log_{10}\left(\frac{1}{N}\sum_{c \in C} |c_g|\right),
\]

where N is the number of compounds in the database and C is the set of gene fingerprints of the database compounds, i.e., c_g is 1 if gene g is upregulated in compound c, −1 if g is downregulated, and 0 if g is not deregulated. The weight w_g corresponds to the negative decadic logarithm of the probability of observing deregulation of gene g when randomly choosing a compound from the database. This concept is commonly used in information theory, where it is known as the information content or self-information [40]. The final similarity coefficient for scoring the similarity of two gene fingerprints is then the weighted analogue

\[
  S_w(X, Y) = \frac{\sum_{i=1}^{n} w_i\, \delta(x_i, y_i)\, x_i^2}{\sum_{i=1}^{n} w_i x_i^2 + \sum_{i=1}^{n} w_i y_i^2 - \sum_{i=1}^{n} w_i\, \delta(x_i, y_i)\, x_i^2}.
\]

Performance Evaluation To assess the performance of the similarity scoring, we extracted gene fingerprints from a dataset of gene expression profiles not included in either DrugMatrix or TG-GATEs. The evaluation dataset is publicly available from GEO under the accession number GSE53082 [12]. It contains gene expression profiles for two genotoxic carcinogens, 11 non-genotoxic carcinogens and two non-hepatocarcinogens (see Table 1). Ten of the 15 chemicals are included in one or both of TG-GATEs and DrugMatrix. RMA-normalized data were obtained from GEO, and gene fingerprint extraction was performed for each chemical as previously described for TG-GATEs and DrugMatrix. We used our similarity score to extract the most similar conditions in TG-GATEs and DrugMatrix and compared them based on hepatocarcinogenic and genotoxic potential. Enriched KEGG pathways were computed for each chemical. Conclusions We present a new tool for the hepatocarcinogenicity evaluation of drug candidates in rodents. We developed a new similarity scoring method for gene expression profiles that allows robust identification of chemicals with similar hepatocarcinogenic and genotoxic potential. We provide a web application, ToxDBScan, which allows researchers to perform a similarity search against the two largest publicly available databases in toxicogenomics, TG-GATEs and DrugMatrix, using the newly developed similarity score. ToxDBScan is easy to use and allows a very fast identification of chemicals similar to the query. Since only the deregulated genes are required as input, the tool is independent of the specific microarray or sequencing platform used for transcriptomic profiling. We evaluated the newly developed similarity score with 15 compounds from an experiment not contained in either TG-GATEs or DrugMatrix. We found our scoring system to be capable of robustly identifying compounds with similar hepatocarcinogenic and genotoxic potential. To assess the viability of the similarity score, we performed a classification of the chemicals in the evaluation dataset. All 15 chemicals were assigned to their correct carcinogenicity class. ToxDBScan is publicly available from the ZBIT Bioinformatics Toolbox [11].
7,270.2
2014-10-01T00:00:00.000
[ "Biology" ]
Gut-Derived Metabolite, Trimethylamine-N-oxide (TMAO) in Cardio-Metabolic Diseases: Detection, Mechanism, and Potential Therapeutics Trimethylamine N-oxide (TMAO) is a biologically active gut microbiome-derived dietary metabolite. Recent studies have shown that high circulating plasma TMAO levels are closely associated with diseases such as atherosclerosis and hypertension, and with metabolic disorders such as diabetes and hyperlipidemia, contributing to endothelial dysfunction. There is a growing interest in understanding the mechanisms underlying TMAO-induced endothelial dysfunction in cardio-metabolic diseases. Endothelial dysfunction mediated by TMAO is mainly driven by inflammation and oxidative stress, which include: (1) activation of foam cells; (2) upregulation of cytokines and adhesion molecules; (3) increased production of reactive oxygen species (ROS); (4) platelet hyperreactivity; and (5) reduced vascular tone. In this review, we summarize the potential roles of TMAO in inducing endothelial dysfunction and the mechanisms leading to the pathogenesis and progression of associated disease conditions. We also discuss potential therapeutic strategies for the treatment of TMAO-induced endothelial dysfunction in cardio-metabolic diseases. Introduction The endothelium is a monolayer of cells that lines the interior surface of the blood vessel and forms a partially permeable barrier between the vascular tissue and the circulating blood. Blood vessels, comprising endothelial cells and vascular smooth muscle cells (VSMCs), serve essential secretory, synthetic, metabolic, and immunological roles [1]. Under normal physiological conditions, the endothelium regulates vascular homeostasis by modulating vascular tone, platelet adhesion, inflammation, plasmatic coagulation, fibrinolysis, and VSMC proliferation. The generation and release of vasoactive factors by endothelial cells, such as endothelium-derived relaxing factors (EDRFs) and contracting factors (EDCFs), are vital for the maintenance of normal physiological conditions, and disturbances to these factors are known to increase the incidence of endothelial dysfunction [2][3][4]. Endothelial dysfunction, a pathophysiological condition wherein endothelial homeostasis is disrupted, enhances the risk of thrombosis, inflammation, angiospasm, and intraplaque hemorrhage, resulting in atherothrombosis, infarction, and ischemia [1], and contributes to cardio-metabolic diseases, such as atherosclerosis, acute coronary syndromes, hypertension, reproductive disorders, and diabetes [5,6]. Multiple factors trigger endothelial dysfunction, including high blood pressure, cholesterol levels, genetics, and lifestyle practices such as smoking, physical inactivity, and diet. According to the Global Burden of Diseases, Injuries, and Risk Factors Study 2013, dietary risks are among the most significant factors contributing to cardio-metabolic diseases [7]. In recent years, trimethylamine N-oxide (TMAO) was found to be closely associated with cardio-metabolic diseases mediated through endothelial dysfunction. TMAO is a biologically active compound from the class of amine oxides, generated from dietary precursors highly enriched in red meat, fish, and egg yolk [8]. Studies have shown that plasma TMAO levels are elevated in individuals with type II diabetes [9], diastolic dysfunction [10], heart failure [10], atherosclerotic plaque deposition [11,12], and peripheral artery disease (PAD) [13].
Subsequent mechanistic studies revealed that TMAO treatment elevates inflammation and oxidative stress, which trigger cardio-metabolic diseases [14,15]. Given its well-established association with chronic inflammation and the accelerated progression of cardio-metabolic diseases, TMAO has recently gained significant scientific interest as a potential circulating biomarker for predicting cardio-metabolic diseases and chronic kidney disease (CKD) [16]. In this review, we discuss the currently available methods for TMAO detection and its known association with disease conditions. Furthermore, the molecular mechanisms of TMAO-induced endothelial dysfunction in experimental and clinical studies, as well as potential treatment strategies to prevent the progression of diseases triggered by TMAO, are also summarized. TMAO Metabolism, Biosynthesis, and Excretion The biochemical pathways involved in TMAO biosynthesis, metabolism, and excretion, and the processes leading to endothelial dysfunction causing cardiovascular complications, are summarized in Figure 1. Specifically, trimethylamine (TMA) is generated by gut microbes from dietary precursors such as choline, L-carnitine, lecithin, phosphatidylcholine, and betaine [15]. Bacterial strains involved in TMA generation include Anaerococcus hydrogenalis, Clostridium asparagiforme, Clostridium hathewayi, Clostridium sporogenes, Edwardsiella tarda, Escherichia fergusonii, Proteus penneri, and Providencia rettgeri [17]. Interestingly, individuals with cardio-metabolic diseases have an imbalance in the levels of bacteria in the gut. Elevated levels of potentially pathogenic bacteria from the Firmicutes and Proteobacteria phyla are found and are known to be associated with increased levels of inflammation and insulin resistance, resulting in poor metabolism [18]. In contrast, healthy individuals have a greater diversity of gut microbes, present in stable amounts. Beneficial bacteria, namely Bifidobacterium, Lactobacillus, and Faecalibacterium prausnitzii, are present at abundant levels and are associated with improved metabolism and lower levels of inflammation [19]. Most of the TMA formed is rapidly absorbed into the portal circulation [20]. In the liver, a class of hepatic flavin monooxygenase (FMO) enzymes, predominantly FMO3, oxidizes TMA to TMAO [21]. TMAO is distributed homogeneously throughout the body via the systemic circulation, but it may accumulate in higher amounts in certain tissues [22]. In most individuals, about half of the TMAO generated is excreted unmodified within 24 h, through urine (95%), feces (4%), and sweat and breath (less than 1%) [23]. TMAO that is not excreted remains circulating in the plasma, and its levels are remarkably high in patients with type II diabetes, hypertension, heart failure, and coronary heart disease [8]. In summary, these findings indicate that the gut microbiome plays an essential role in the advancement and acceleration of cardio-metabolic diseases. Therefore, understanding the species involved in TMA formation could potentially result in novel therapeutic strategies to lower the risk of these diseases. Moreover, these observations suggest that plasma TMAO levels may serve as a pre-chronic-disease biomarker to assess the health status of an individual. Figure 1. Biochemical pathways involved in the formation of TMAO. TMAO is synthesized from dietary precursors after the action of the gut microbiota and flavin-containing monooxygenases, mainly the FMO3 enzyme in the liver.
Increased plasma TMAO levels are associated with biological pathways that trigger endothelial dysfunction and lead to cardiovascular complications. TMAO Detection and Measurement Methods With the conceptual understanding that TMAO can be considered a potential biomarker for chronic diseases, detection of TMAO in plasma becomes crucial in the preliminary prognosis of several disease conditions. TMAO levels in plasma, feces, and urine samples have been analyzed [24], and commonly used methods for TMAO detection include chromatography techniques such as selective solid-phase extraction, ion chromatography, UPLC-MS/MS, flow injection gas diffusion-ion chromatography, and liquid chromatography-selective ion monitoring [25][26][27][28] (Table 1). These methods are advantageous due to their analytical precision and reproducibility, but they require the expertise of specialized technicians [29], and the process is time-consuming and expensive. Other techniques involve the use of electrochemical tools such as cyclic voltammetry, differential pulse voltammetry, oxygen anti-interference membranes, and microbial electrochemical technology [30][31][32][33]. They are user-friendly and have long operational stability. However, they may be prone to environmental interferences in clinical applications. In summary, there is still a need to develop cheaper, more reliable, and more efficient testing tools to detect TMAO clinically and to identify patients with higher cardiovascular disease (CVD) risks. This will enable clinicians to intervene with the right treatment strategies and prevent the evolution of the condition. FIGD-IC: flow injection gas diffusion-ion chromatography; GC-MS: gas chromatography-mass spectrometry; SPME: solid-phase microextraction; SPE: solid-phase extraction; LC-SIM: liquid chromatography-selective ion monitoring; UPLC-MS/MS: ultraperformance liquid chromatography tandem mass spectrometry; IDA: indicator displacement assay; CV: cyclic voltammetry; DPV: differential pulse voltammetry.
TMAO Level Variations and Disease Conditions Plasma TMAO levels are regulated by several factors such as age, genetics, the gut microbiome, FMO3 activity, and diet [22]. For example, many studies have shown that increasing age influences plasma TMAO levels [38,39]. Furthermore, links between various disease conditions, their progression, and plasma TMAO levels have also been established [8,40], and are summarized in Table 2. Hence, quantification and understanding of plasma TMAO levels in individuals may be essential in the pre-diagnosis of certain specific diseases. However, some findings have inherent limitations due to small sample sizes, uneven gender distribution, and a lack of control groups. In addition, a controversial study has shown that plasma TMAO levels may be an independent risk factor for disease conditions [41]. These differing findings need to be validated by large-scale analyses including a greater number of individuals with a balanced representation of both genders. Endothelial Dysfunction Mediated by TMAO Endothelial dysfunction, often classified as the impairment of endothelium-dependent vasodilation, is associated with oxidative stress and exaggerated activation of inflammatory pathways, which are mediated through foam cell formation, expression of inflammatory cytokines, and generation of adhesion molecules [4,6]. Endothelial dysfunction is known to play key roles in blood clotting, immune response, and vascular tone [4] (modulated via the synthesis and release of various EDRFs and EDCFs by the endothelium [4,6]), and has been reported to contribute to CVD, CKD, and cardio-metabolic diseases such as diabetes. The proposed mechanisms by which TMAO activates endothelial dysfunction and triggers cardio-metabolic complications are summarized in Figure 2. These include a reduction in endothelial cell viability, overproduction of reactive oxygen species (ROS), enhanced vascular inflammation, vascular calcification leading to atherosclerotic plaques, and reduced vascular tone, which will be discussed in detail in the subsequent sections. However, most studies associating TMAO and endothelial dysfunction were performed in rodents or in cell culture [53]. There is a need for clinical data to better understand the molecular mechanisms of TMAO-driven endothelial dysfunction in humans. This is crucial for the development of effective therapeutic interventions to overcome the complications of disease evolution. Figure 2. Proposed mechanisms of action in TMAO-induced cardio-metabolic diseases. Increased circulating levels of TMAO cause various processes within the endothelial cells, contributing to the pathogenesis of endothelial dysfunction and atherosclerosis. Effect of TMAO on Cell Viability Cell viability assays are a common tool to evaluate the direct impact of TMAO exposure on endothelial cells. Despite numerous studies reporting TMAO-induced endothelial dysfunction, the effects of TMAO on endothelial cell viability remain inconsistent. For instance, TMAO (125-1000 µM) treatment for 48 h was shown to increase apoptosis in human aortic endothelial cells (HAEC) [54]. Consistent with this observation, human umbilical vein endothelial cells (HUVEC) showed lower viability after 48 h of TMAO treatment (100 µM or higher) [55]. On the other hand, several studies reported that TMAO has no significant effect on endothelial cell viability.
For example, HUVEC cells treated with 10-100 µM of TMAO for 24 h did not show any changes in cell viability [56]. Similarly, TMAO did not induce any difference in cell viability in other endothelial cell types, such as human endothelial progenitor cells [53]. This observation was consistent with another recent study in which TMAO did not influence cell viability at any time point or concentration in bovine aortic endothelial cells-1 (BAEC-1) treated with 1 µM-10 mM of TMAO for 24-72 h [57]. Collectively, there is controversial evidence regarding the impact of TMAO on cell viability. These contradicting results may be due to the use of different endothelial cell types, the wide range of treatment durations, and the varied TMAO doses, although the concentrations used in these in vitro experiments were usually physiologically relevant to the plasma levels of patients with disease conditions (Table 2). TMAO Enhances Oxidative Stress Oxidative stress is caused by the imbalance between the generation of ROS and the ability of the cells to neutralize these ROS through antioxidant activities [3,58]. Many studies have demonstrated that high TMAO concentrations induce endothelial dysfunction in cultured endothelial cells through oxidative stress [38,59]. Specifically, TMAO has been shown to trigger ROS production through thioredoxin-interacting protein-NOD-, LRR- and pyrin domain-containing protein 3 (TXNIP-NLRP3) signaling.
It was demonstrated that TXNIP-NLRP3 inflammasome complex production was activated in a time- and dose-dependent manner by TMAO [60]. Another pathway responsible for oxidative stress is the Sirtuin 3-superoxide dismutase 2 (SIRT3-SOD2) ROS signaling pathway, which is activated by TMAO in vascular inflammation models [61]. Interestingly, TMAO lowered expression levels of SIRT1 and increased oxidative stress, both in vivo and in vitro, by triggering the p53/p21/retinoblastoma tumor suppressor signaling pathways [62]. In addition, TMAO is correlated with an increase in nicotinamide adenine dinucleotide phosphate (NADPH) oxidase activity, resulting in vascular oxidative stress [63]. Finally, elevated circulating TMAO levels are associated with aging in mice and humans [64], which may deteriorate endothelial cells through senescence and increased ROS generation. TMAO Induces Inflammation Inflammation is a sequence of innate and adaptive immune responses that the body generates upon exposure to harmful stimuli [65]. The inflammatory response, involving migration of immune cells to the damaged region, is the first step. It is followed by repair and regeneration (second step), involving the building of new collagen and the restoration of skin homeostasis [66]. Lastly, remodeling and maturation occur to improve cellular organization as the injured tissue matures. Factors such as the overproduction of inflammatory cytokines, enhanced adhesion, and activation of foam cell formation are part of the inflammatory response [67]. Simultaneously, blood vessels at the inflammatory site narrow, which slows down the blood flow and activates vascular modifications [68], a phenomenon that can cause endothelial dysfunction. Enhanced Cytokine Production TMAO triggers inflammation by increasing the generation of inflammatory cytokines. Inflammatory cytokines (or pro-inflammatory cytokines) are signaling molecules generated by activated macrophages and are important players in inflammation. Some of the major pro-inflammatory cytokines include interleukin 1 beta (IL-1β), tumor necrosis factor-alpha (TNF-α), and IL-6 [69]. TMAO is known to initiate the production of TNF-α and IL-1β [61,70], and in vitro studies confirmed elevated levels of TNF-α production in endothelial cells through activation of the nuclear factor-κB (NF-κB) signaling pathway, which enhances leukocyte adhesion to endothelial walls [14]. This promotes endothelial dysfunction, which may trigger CVD risks such as thrombosis and atherosclerosis [15]. In human trials, a positive relationship was also found between TMAO and IL-1β in patients with angina [53], and in a population of individuals at risk of CVD, a positive correlation was observed between TMAO levels and inflammation [71]. Collectively, data show that elevated plasma TMAO levels contribute to inflammatory and cardio-metabolic risks via the induction of inflammatory cytokines [72,73]. Activation of Adhesion Molecules Relationships between TMAO and adhesion molecules have been established in the evolution of endothelial dysfunction. Expression of vascular cell adhesion protein 1 (VCAM-1) is induced by TMAO in primary rat and human vascular smooth muscle cells (VSMCs) [74], while TMAO-induced VCAM-1 expression is triggered by methylation of the NF-κB p65 subunit in HUVEC [56].
In fact, many studies demonstrated that TMAO-induced NF-κB activation is a significant downstream process that upregulates monocyte adhesion through the upregulation of cellular adhesion molecules such as VCAM-1, but also intercellular adhesion molecule 1 (ICAM-1) and E-selectin, and enhances endothelial dysfunction [14,15]. Moreover, TMAO (10, 50 and 100 µM) is known to activate protein kinase C (PKC) in a dose-dependent manner, which plays a crucial role in upregulating monocyte adhesion [20,75]. In summary, increased TMAO levels, in animal models and in human endothelial cells, contribute to increased adhesion of monocytes and low endothelial self-repair through activation of the PKC, NF-κB, and VCAM-1 signaling pathways [56], resulting in endothelial dysfunction. Elevated Foam Cell Formation Foam cell formation is an indicative feature of the early phase of atherosclerosis progression, which characterizes CVD. Indeed, CVD is distinguished by inflammation-induced atherosclerotic complications resulting from an increase in lipid particle transport to endothelial cells causing foam cell formation [76]. Foam cells (also called lipid-laden macrophages) are a key source of pro-inflammatory phenotypes, as they generate inflammatory mediators such as cytokines, chemokines, and ROS, and play a significant role in activating inflammation at different stages of atherosclerotic progression. Foam cells are formed when immune cells such as macrophages take up large amounts of cholesterol through the absorption of lipoproteins via different transporters, mostly mediated by CD36, SR-A, and LOX-1. They then become overloaded with cholesterol and are unable to process it effectively. This causes these macrophages to transform into foam cells (which store esterified cholesterol and are characterized by their large and frothy appearance), which accumulate in the walls of blood vessels and contribute to atherosclerosis [77][78][79]. Studies have shown that in mouse models, TMAO stimulates macrophage recruitment by promoting their migration and the expression of TNF-α and IL-6 (considered promoters of foam cell formation [79]), as well as ICAM-1 [80]. Moreover, TMAO plays a critical role in the accumulation of ox-LDL in macrophages through the upregulation of multiple scavenger receptors, CD36, lectin-like oxidized low-density lipoprotein receptor-1 (LOX-1), and class A1 scavenger receptors (SR-A1) [77], which contribute to the formation of atherosclerosis by enhancing cholesterol uptake with lipoprotein modifications [11]. This process triggers the transformation of more macrophages into foam cells within the vascular membrane [81]. Other studies demonstrated that dietary choline, a precursor of TMAO, increases foam cell production in ApoE knockout mice [11], extensively used as a model of atherosclerosis. Finally, TMAO promotes the development of foam cells by upregulating macrophage scavenger receptors [11,12,82]. Eventually, foam cell formation modulates lipoprotein metabolism and causes lesions [83]. Plaques with abundant foam cells can rupture, leading to thrombosis and CVD-related events [78]. In summary, there is a mechanistic link between TMAO and elevated foam cell generation resulting in atherosclerosis. TMAO Reduces Vascular Tone Endothelial dysfunction is associated with abnormal changes in vascular tone, which is regulated by the production of at least three vasoactive factors: nitric oxide (NO), prostaglandin I2 (PGI2), and endothelium-derived hyperpolarization (EDH) [3,58,[84][85][86]].
PGI2, one of the prostanoids of arachidonic acid metabolism, is a potent vasodilator that inhibits platelet aggregation, leukocyte adhesion, and VSMC proliferation [87]. NO is produced through the enzymatic conversion of L-arginine to L-citrulline by endothelial NO synthase (eNOS) [86,88]. The vasodilator actions of NO are mediated via the activation of soluble guanylate cyclase, leading to the accumulation of cGMP and the relaxation of smooth muscle cells [89,90]. Lastly, EDH is generated by contact-mediated (myoendothelial gap junctions) and non-contact-mediated mechanisms, which involve the opening of small- and intermediate-conductance calcium-activated potassium channels (SKCa and IKCa) and the subsequent hyperpolarization and relaxation of VSMC. Collectively, the endothelium functions normally through the production of NO, PGI2, and EDH to maintain vascular tone, and an imbalance in these vasoactive factors results in endothelial dysfunction [91][92][93]. Effects of TMAO on NO Bioavailability Studies have shown a link between elevated circulating TMAO levels and reduced eNOS, and therefore reduced NO bioavailability, in the aorta of Fischer-344 rats [38]. This reduction in eNOS seems to result from the upregulation of vascular oxidative stress and inflammation [94]. These data were consistent with another study in which TMAO pre-treatment for 24 h significantly reduced NO production after ATP stimulation in BAEC-1, indicating the potential involvement of TMAO in damaging the endothelium-dependent vasodilatory mechanism [57]. Conversely, in the same study, TMAO pre-treatment for 1 h did not influence intracellular NO release and eNOS phosphorylation in BAEC-1. Other findings demonstrate that eNOS activity remains unchanged in the aorta of rats treated with TMAO and in HAEC pre-incubated with 1 µM of TMAO [57]. These last findings suggest that increased plasma TMAO levels in the near-physiological range are neutral to vascular function. In summary, from these experimental results, the effects of TMAO on NO bioavailability are inconsistent, and it appears that only pharmacological concentrations of TMAO could have a negative effect under normal metabolic conditions. However, underlying metabolic diseases may interfere with TMAO effects, explaining the contradictory data from the different studies. Hence, a TMAO-induced reduction in NO bioavailability may potentially have a stronger effect in altering the vasculature of patients with underlying metabolic disorders, increasing their risk of endothelial dysfunction-driven diseases. Association between TMAO and Hydrogen Sulfide (H2S) H2S and other vasoactive factors are key signaling molecules associated with vasorelaxation, cardio-protection, neuroprotection, and anti-inflammation. H2S is produced in various tissues and plays a significant role in circulatory system homeostasis, including in the heart, blood vessels, and kidneys [95]. H2S also protects against ROS, and its proangiogenic effects can lower blood pressure and heart rate. Studies revealed that a diet enriched in choline reduces plasma H2S levels, which promotes cardiac dysfunction through the cyclic GMP-AMP synthase-stimulator of interferon genes-NOD-like receptor protein 3 (cGAS-STING-NLRP3) inflammasome-mediated pathway in mice [96]. However, this study did not directly measure the association between TMAO and H2S; hence, the direct impact of TMAO on H2S production and its vascular effects warrants further investigation.
Role of Prostanoids in Vasoconstriction Prostanoids, metabolites of arachidonic acid, are dominant lipid mediators that modulate inflammatory responses. They include PGD2, PGE2, PGF2α, PGI2, and thromboxane A2 (TXA2) [88]. PGI2 is the most potent vasodilator prostanoid in the cardiovascular system and lowers the risk of atherosclerotic plaque formation. In mouse models, choline reduces serum PGI2 levels and increases TXA2 production [97]. This causes a vasoconstrictor response and a proatherogenic phenotype, resulting in endothelial cell damage. However, there is a very limited number of studies on the relationship between prostanoids and TMAO in causing endothelial dysfunction. EDH in Endothelial Dysfunction EDH is an essential component in small arteries, and it impacts vascular resistance, blood pressure, and flow distribution [3]. In rats, acute treatment with TMAO specifically impairs acetylcholine-evoked EDH-mediated relaxation in the femoral arteries, indicating that TMAO contributes to the progression of peripheral arterial disease [98]. This observation is consistent with another study in rats in which EDH-type relaxations were selectively disrupted, without interference with NO-induced vasodilation, in isolated mesenteric arteries. Taken together, these data suggest that a reduction in EDH accelerates the process of endothelial dysfunction in various diseases that could be influenced by TMAO levels. TMAO-Enhanced Platelet Hyperreactivity Platelet hyperreactivity is a significant factor in the development of thrombotic environments, resulting in heart attack, ischemic stroke, and severe diabetes complications [99]. High blood pressure, oxidative stress, and upregulated levels of vascular shear stress are conditions that often contribute to platelet hyperreactivity [100]. Under resting conditions, platelets show low intracellular [Ca2+] ([Ca2+]i) as they circulate through healthy vessels [101]. However, at the site of vessel injury, platelets are activated by increased [Ca2+]i, a precursor to thrombus formation [102]. Physiological levels of TMAO enhance submaximal thrombin-induced augmentation of platelet [Ca2+]i in a dose-dependent manner [103]. In addition, the MAPK signaling pathway is a well-established driver of platelet aggregation by collagen [104], and TMAO causes platelet hyper-responsiveness to collagen by promoting the phosphorylation of extracellular signal-regulated kinase (ERK) 1/2 and c-Jun N-terminal kinase (JNK) [105], triggering thrombotic phenotypes [103]. TMAO Triggers Heart Failure As discussed, TMAO increases the risk of atherosclerosis and CVD through different mechanisms. The terminal stage of a variety of CVD complications is heart failure (HF), a well-known cause of disability and death. The pathological mechanisms of HF are very complex, and they initiate cardiac remodeling and inflammatory responses. These processes include apoptosis and extracellular matrix accumulation, consequently causing fibrosis [106]. Animal models, such as rats and mice, have been used to study the effects of TMAO on HF. NLRP3 inflammasome activation by TMAO triggers cardiac hypertrophy and fibrosis through the suppressor of mothers against decapentaplegic 3 (Smad3) signaling pathway [107]. In addition, TMAO triggers oxidative damage, promotes glycogen synthesis, and reduces pyruvate dehydrogenase activity as well as fatty acid β-oxidation in mitochondria. This results in mitochondrial dysfunction and lower cardiac energy production [108].
TMAO Promotes Metabolic Syndrome Metabolic syndrome corresponds to the co-occurrence of disorders including hypertension, obesity, hyperglycemia, and hyperlipidemia, which increase the risk of heart disease, stroke, and type II diabetes, as well as the overall risk of CVD. Some of the major causes of these metabolic disorders are genetics, organ dysfunction, and mitochondrial dysfunction. A high-fat diet with TMAO precursors promotes impaired glucose tolerance and inhibits the hepatic insulin signaling pathway [109]. Indeed, studies have shown that TMAO directly binds to and activates protein kinase R-like endoplasmic reticulum kinase (PERK), causing hyperglycemia [110,111]. In addition, obesity traits are increased in mice treated with TMAO, resulting in a high risk of type II diabetes, mediated by intestinal reverse cholesterol transport and the TMA/FMO3/TMAO pathway [112]. TMAO also promotes metabolic dysfunction through bile acid metabolism. It positively correlates with plasma levels of bile acids and hepatic mRNA expression of cholesterol 7 alpha-hydroxylase (CYP7A1) in mice, which trigger hepatic lipogenesis and hepatic steatosis via the bile acid-mediated hepatic farnesoid X receptor (FXR) signaling pathway [113]. Kidney disease (table excerpt): in humans and in high-fat diet/low-dose streptozotocin-induced diabetic rats, TMAO is associated with increased pro-fibrotic factors TGF-β1, IL-1β and Smad3, increased Smad3 phosphorylation and activation, increased kidney injury molecule-1, activation of the NLRP3 inflammasome, renal inflammation, renal fibrosis, and renal dysfunction, via the NLRP3 inflammasome, transforming growth factor β, and SMAD signaling pathways [117,118]. Potential Treatment Strategies Understanding the involvement of TMAO in various disease conditions has resulted in active research to identify potential therapeutic strategies to reduce serum TMAO levels. As no specific compound directed against TMAO has been identified yet, a direct scavenger targeting TMAO is not available [23,120,121]. Hence, commonly proposed potential treatment strategies target the process of TMA generation, the activity of gut microbes (to lower TMAO production), and the ingestion of natural products to reduce the concentration of TMAO. These therapeutic approaches are outlined in Table 4. Some potential therapeutics involve the inhibition of TMAO-forming enzymes. In mouse models, knockdown of FMO3 (the enzyme which converts TMA to TMAO) has been reported to suppress the expression of FoxO1 (a key protein regulating metabolism) and to prevent the occurrence and progression of metabolic dysfunctions such as hyperglycemia, hyperlipidemia, and atherosclerosis. Consistent with this finding, FMO3 overexpression in mice upregulates the levels of lipids in the plasma and liver, suggesting that FMO3 may be linked to gluconeogenesis and lipogenesis and may play a major role in the regulation of glucose and lipid homeostasis [138]. The drawback of FMO3 inhibition is an accumulation of TMA, which can lead to trimethylaminuria, characterized by a fishy odor, and which induces inflammation. In addition, although FMO3 overexpression is closely associated with the upregulation of TMAO formation [23,82,138,139], TMA is not the only substrate of FMO3. Hence, the inhibition of this enzyme will also lower the metabolism of other substrates such as morphine, propranolol, and tyramine [23], potentially leading to collateral metabolic modifications that may not be beneficial.
Targeting TMA Studies performed in mouse models showed that 3,3-dimethyl-1-butanol (DMB), found in balsamic vinegar, olive oil, grape seed oil, and red wines [21], which inhibits the choline TMA lyase enzyme [21], reduces macrophage foam cell formation and aortic root atherosclerotic lesion development in ApoE knockout mice [22,115]. In obese mice fed a high-fat western diet, DMB treatment does not have any effect on body weight or dyslipidemia, but significantly lowers plasma TMAO levels and prevents cardiac dysfunction. Moreover, DMB successfully prevents the expression of pro-inflammatory cytokines (IL-1β and IL-10) and TNF-α. However, it is unable to completely prevent TMAO formation and does not inhibit the formation of TMA from γ-butyrobetaine [115,125]. Prevention of bacterial TMA formation through competitive inhibition of the bacterial carnitine palmitoyltransferase-1 (CPT-1) has also been observed to be possible with meldonium. Meldonium, known for its anti-atherosclerotic and anti-ischemic properties, is an analogue of carnitine that lowers the generation of TMA from L-carnitine, but not from choline, and improves endothelial function [22,127,140]. Finally, plant sterol esters can reduce the gut microbiota generation of TMA as well as cholesterol accumulation, and eliminate atherogenesis in mice [141]. However, the effects remain unclear in humans. Prebiotics and Probiotics Both prebiotics and probiotics can be used to improve the composition of the gut microbiota and regulate the level of TMAO formation [122]. Prebiotics comprise all kinds of non-digestible food components and are known to trigger the growth and development of beneficial bacteria [142], while probiotics involve the administration of living microbes that can yield beneficial effects on human health when administered in sufficient amounts, as defined by the Food and Agriculture Organization (FAO) of the United Nations [143]. Conversely, some bioactive foods can reduce the abundance of bacteria that convert dietary precursors into TMA. As such, the administration of Lactobacillus paracasei to germ-free mice colonized with a human infant microbiota results in reduced TMA formation, and the use of Lactobacillus and Bifidobacterium lowers the risk of atherosclerosis [123,144]. Other studies have reported the possibility of using methanogenic archaea (e.g., the large group of Methanobacteriales), such as Methanomassiliicoccus luminyensis B10, to metabolize TMA and deplete it [145,146]. In addition, probiotics lower inflammation by triggering anti-inflammatory cytokines and reducing pro-inflammatory cytokines that regulate the NF-κB pathway [147], which is linked to MAPK, pathogen recognition, and inflammatory signaling pathways [148]. Toll-like receptor expression has been shown to be downregulated by probiotics, hence also lowering intestinal inflammation [149]. However, a common limitation of probiotics used to lower TMAO levels, and potentially reduce the risk of atherosclerosis, is that the effect of treatment may vary according to the gut microbiota composition of each individual. Antibiotics Another strategy to lower or eliminate the conversion of dietary precursors into TMA is to target the gut microbiome composition via antibiotics. Antibiotics such as ciprofloxacin and metronidazole effectively suppress TMAO levels in clinical trials [150,151]. However, after one month of antibiotic withdrawal, TMAO levels are detected again [21,124].
Furthermore, the use of antibiotics may induce bacterial resistance or kill beneficial bacteria in addition to harmful ones [124]. Other Therapeutic Alternatives to Lower TMAO Concentration Oral non-absorbent binders have been used to eliminate TMAO and its precursors. The clinically used oral charcoal adsorbent AST-120 eliminates uremic contaminants such as indoxyl sulfate from end-stage renal disease patients [152]. However, this remains an uncertain approach, as none of these adsorbents specifically target TMAO [23,120,121]. Consumption of natural products may also reduce TMAO levels. Specifically, studies showed that resveratrol (a polyphenol with antioxidant activities) modifies the composition of the gut microbiome, reducing the bacteria that promote TMA formation and increasing the beneficial ones [128,153]. Gynostemma pentaphyllum (an herbaceous climbing vine) lowers plasma TMAO levels and increases lecithin levels in rat models [131]. Gancao (the root of Glycyrrhiza uralensis) prevents the rise of TMAO levels when administered with Fuzi (the processed lateral root of Aconitum carmichaelii); however, it does not lower plasma TMAO levels when administered alone [60]. Oolong tea extract and citrus peel polymethoxyflavones target the TMAO formation process and lower vascular inflammation [154]. Other compounds such as berberine (BBR) [134] and trigonelline [135] are natural products known to inhibit the formation of TMAO from TMA by lowering the expression of the FMO3 enzyme. Anti-diabetic medications also have the potential to modify the gut microbiome; similarly, the gut microbiota can modify the effectiveness of diabetic medications. The majority of data indicate that metformin is the most effective drug compared to the other anti-diabetic medications [155]. Interestingly, in db/db mice with type 2 diabetes mellitus, treatment with metformin results in a twofold reduction in TMAO concentration and in the abundance of bacteria associated with the production of TMAO precursors [137]. This study suggested that reducing TMAO concentration with metformin is an effective therapeutic strategy to exert cardiovascular benefits. In addition, some potential anti-obesity drugs such as capsanthin, as well as the lycopene, amaranth, and sorghum red pigments obtained from Lycopersicon esculentum (M.), Amaranthus tricolor, and Sorghum bicolor, respectively, also reduce serum levels of TMAO and increase microbial diversity in mice fed a high-fat diet [129,130]. Another drug, enalapril (an ACE [angiotensin-converting enzyme] inhibitor), tested in rats, increases the excretion of TMAO in the urine; however, the mechanism remains unclear, as it does not target TMAO formation or the modification of the gut microbiota [21]. Despite the promising effects of these products in reducing TMAO levels, studies were only performed in animal models. Hence, there is insufficient evidence to confirm their impact in humans. Concluding Remarks and Future Perspective In conclusion, the gut microbial metabolite TMAO is a significant biomarker of cardio-metabolic diseases. The molecular mechanisms underlying TMAO-induced endothelial dysfunction and the subsequent development of cardio-metabolic diseases are multi-factorial, and primarily involve vascular inflammation and oxidative stress via the MAPK and NF-κB signaling pathways.
Through oxidative stress and inflammation, TMAO triggers other effects such as platelet hyperreactivity and a reduction in vascular tone through the impairment of EDH-mediated relaxation and PGI2 production. While other reported factors, such as cell viability and NO bioavailability, remain controversial, the differences observed may be attributed to the distinct metabolic backgrounds of the models, as well as to study design (cell types, TMAO concentrations, and treatment durations). Future studies should explore the molecular signatures and pathways that contribute to endothelial dysfunction and/or other cardio-metabolic diseases. While most of the current treatment strategies focus on preventing the formation of TMAO, other plausible treatment strategies could focus on targeting key mechanistic pathways that contribute to disease pathology in the various organs. Hence, a better understanding of the underlying molecular mechanisms will lead to the development of new therapeutic agents such as small molecules [156], peptides [157,158] or natural products [159][160][161] with potent vasoprotective effects (e.g., anti-inflammatory properties) to effectively prevent or reverse TMAO-induced endothelial dysfunction and/or other cardio-metabolic diseases. Conflicts of Interest: The authors declare no potential competing interests.
Convex and non-convex regularization methods for spatial point processes intensity estimation
This paper deals with feature selection procedures for spatial point processes intensity estimation. We consider regularized versions of estimating equations based on the Campbell theorem, derived from two classical functions: the Poisson likelihood and the logistic regression likelihood. We provide general conditions on the spatial point processes and on the penalty functions which ensure consistency, sparsity, and asymptotic normality. We discuss the numerical implementation and assess finite-sample properties in a simulation study. Finally, an application to tropical forestry datasets illustrates the use of the proposed methods. Introduction Spatial point pattern data arise in many contexts where interest lies in describing the distribution of an event in space. Some examples include the locations of trees in a forest, gold deposits mapped in a geological survey, stars in a star cluster, animal sightings, locations of some specific cells in the retina, or road accidents (see e.g. Møller and Waagepetersen, 2004; Illian et al., 2008; Baddeley et al., 2015). Interest in methods for analyzing spatial point pattern data is rapidly expanding across many fields of science, notably in ecology, epidemiology, biology, geosciences, astronomy, and econometrics. One of the main interests when analyzing spatial point pattern data is to estimate the intensity, which characterizes the probability that a point (or an event) occurs in an infinitesimal ball around a given location. In practice, the intensity is often assumed to be a parametric function of spatial covariates (e.g. Waagepetersen, 2007; Møller and Waagepetersen, 2007; Waagepetersen, 2008; Waagepetersen and Guan, 2009; Guan and Shen, 2010; Coeurjolly and Møller, 2014). In this paper, we assume that the intensity function ρ is parameterized by a vector β and has the log-linear specification
ρ(u; β) = exp{β⊤ z(u)}, u ∈ D,
where z(u) = {z_1(u), . . . , z_p(u)} are the p spatial covariates measured at coordinate u, β = {β_1, . . . , β_p} is a real p-dimensional parameter, D is the domain of observation, and d represents the dimension of the state space of the spatial point process (usually d = 2, 3). Methods to estimate β when p is moderate are now quite standard. Instead of maximum likelihood estimation, which is computationally expensive (Møller and Waagepetersen, 2004), standard methods are based on estimating equations derived from the Campbell theorem and include the Poisson likelihood (e.g. Waagepetersen, 2007) and logistic regression likelihood methods (e.g. Baddeley et al., 2014) (see Appendix A for details on these methods). An important advantage of such methods is their simple implementation. From a numerical point of view, it has been demonstrated (see e.g. Baddeley et al., 2015) that the Poisson likelihood and the logistic regression likelihood can be efficiently approximated by a generalized linear model (more precisely, a weighted quasi-Poisson regression for the former and a logistic regression for the latter). GLM software can, therefore, be adapted to accurately estimate β. This is exactly what is proposed by the R package spatstat (Baddeley et al., 2015), devoted to the analysis of spatial point patterns. In recent decades, with the advancement of technology and huge investments in data collection, many applications for estimating an intensity function which involves a large number of covariates have rapidly become available (e.g. Hubbell et al., 2005; Renner and Warton, 2013; Thurman et al., 2015).
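To fix ideas before the formal development, fitting such a log-linear intensity is a one-liner in spatstat. The following is a minimal sketch assuming the package's built-in bei rainforest pattern with its elevation and gradient covariates (the same data used later in the paper's simulations and application).

```r
library(spatstat)

# Log-linear intensity rho(u) = exp{beta0 + beta1*elev(u) + beta2*grad(u)},
# fitted via the (unweighted) Poisson likelihood.
fit <- ppm(bei ~ elev + grad, data = bei.extra)
coef(fit)   # estimated intercept and covariate effects
```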
When the intensity is a function of many variables, covariate selection becomes inevitable. Variable selection in the context of spatial point processes is a recent topic. Thurman and Zhu (2014) focus on using the adaptive lasso to select variables for inhomogeneous Poisson point processes. This study is extended to clustered spatial point processes by Thurman et al. (2015), who establish asymptotic properties (consistency, sparsity, and asymptotic normality) of the estimates. Yue and Loh (2015) consider modeling spatial point data with Poisson, pairwise interaction point processes, and Neyman-Scott cluster models, incorporating lasso, adaptive lasso, and elastic net regularization methods. The latter work does not provide any theoretical result. In this paper, we intend to extend the previous papers from a theoretical point of view by considering more methods and more penalties. We propose regularized versions of either the Poisson or the logistic regression likelihood to estimate the intensity of the spatial point process. The penalty functions we consider are either convex or non-convex. We provide general conditions on the characteristics of the spatial point process (finite moments, mixing conditions) and on the penalty function to ensure an oracle property and a central limit theorem. It is also to be noted that our theoretical results hold under less restrictive assumptions on the model and on the asymptotic covariance matrix than the ones required by Thurman et al. (2015) (see Remark 3). Since we outline the link between the criteria we maximize and penalized generalized linear models, our work is mainly based on the pioneering paper by Fan and Li (2001). Our contribution is to exploit and extend this paper: first, the asymptotics we consider are increasing-domain asymptotics, i.e. the domain of observation, say D_n ⊂ R^d, increases to R^d with n (so |D_n|, the volume of D_n, plays the same role as n in the standard literature); second, unlike the work by Fan and Li (2001), which assumes the independence of observations, our results can be applied to spatial point processes which exhibit dependence (e.g. Neyman-Scott processes, log-Gaussian Cox processes). From a numerical point of view, we are led to implement regularization methods for generalized linear models. This is quite straightforward since we only need to combine the spatstat R package with the two R packages implementing penalized estimation for generalized linear models, glmnet (Friedman et al., 2010) and ncvreg (Breheny and Huang, 2011). The rest of the paper is organized as follows. Section 2 gives the necessary background on spatial point processes, details briefly how a parametric intensity function is classically estimated, and formulates the problem we tackle. This section is quite short, but non-expert readers can find more details in Appendices A-D. Our main contribution is to obtain asymptotic properties for various spatial point process models, estimation methods, and penalty functions. These results are detailed in Section 3. Section 4 investigates the finite-sample properties of the proposed methods in a simulation study, followed by an application to tropical forestry datasets in Section 5, and concluded by a discussion in Section 6. Proofs of the main results are postponed to Appendices E-G. Spatial point processes and intensity functions Let X be a spatial point process on R^d. Let D ⊂ R^d be a compact set of Lebesgue measure |D| which will play the role of the observation domain.
We view X as a locally finite random subset of R^d, i.e. the random number of points of X in B, N(B), is almost surely finite whenever B ⊂ R^d is a bounded region. A realization of X in D is thus a set x = {x_1, x_2, . . . , x_m}, where x_i ∈ D and m is the observed number of points in D. Note that m is the realization of a random variable and 0 ≤ m < ∞. Suppose X has intensity function ρ and second-order product density ρ^(2). The Campbell theorem (see e.g. Møller and Waagepetersen, 2004) states that, for any non-negative function k,
E ∑_{u∈X} k(u) = ∫_{R^d} k(u) ρ(u) du,   E ∑^{≠}_{u,v∈X} k(u, v) = ∫_{R^d} ∫_{R^d} k(u, v) ρ^(2)(u, v) du dv,
where ≠ indicates that the sum runs over pairs of distinct points. In particular, the Campbell theorem provides an intuitive interpretation of ρ and ρ^(2). We may interpret ρ(u)du as the probability of occurrence of a point in an infinitesimally small ball with center u and volume du. In the same way, ρ^(2)(u, v)dudv is the probability of observing a pair of distinct points from X occurring jointly in each of two infinitesimally small balls with centers u, v and volumes du, dv. Without entering into details, we can define ρ^(k), the k-th order intensity function (see Møller and Waagepetersen, 2004, for more details). For further background material on spatial point processes, see for example Møller and Waagepetersen (2004); Illian et al. (2008). In order to study whether a point process deviates from independence (i.e., from a Poisson point process), we often consider the pair correlation function given by
g(u, v) = ρ^(2)(u, v) / {ρ(u) ρ(v)}
when both ρ and ρ^(2) exist, with the convention 0/0 = 0. For a Poisson point process (Appendix B.1), we have ρ^(2)(u, v) = ρ(u)ρ(v), so that g(u, v) = 1. If, for example, g(u, v) > 1 (resp. g(u, v) < 1), this indicates that a pair of points is more likely (resp. less likely) to occur at locations u, v than for a Poisson point process with the same intensity function as X. If, for any u, v, g(u, v) depends only on u − v, the point process X is said to be second-order reweighted stationary. Parametric intensity estimation In our study, we assume that the intensity function depends on a vector of parameters β, i.e. ρ(·) = ρ(·; β). As outlined in the introduction, maximum likelihood estimation is almost unfeasible for general spatial point process models. Instead, the Campbell formula provides a nice tool for defining estimating-equation-based methods. These methods are now standard in the context of spatial point processes, but we refer the reader to Appendix A for a more detailed presentation. The standard parametric methods for estimating β are obtained by maximizing the weighted Poisson likelihood (e.g. Guan and Shen, 2010) or the logistic regression likelihood (e.g. Baddeley et al., 2014), given respectively by
ℓ_PL(w; β) = ∑_{u∈X∩D} w(u) log ρ(u; β) − ∫_D w(u) ρ(u; β) du,   (2.3)
ℓ_LRL(w; β) = ∑_{u∈X∩D} w(u) log{ρ(u; β)/(δ(u) + ρ(u; β))} + ∫_D w(u) δ(u) log{δ(u)/(δ(u) + ρ(u; β))} du,   (2.4)
where w(·) is a non-negative weight function depending on the first- and second-order characteristics of X and δ(·) is a non-negative real-valued function. Appendix A recalls the pertinence of (2.3)-(2.4): the Campbell theorem shows that the gradient vectors of (2.3)-(2.4) constitute unbiased estimating equations. The solution obtained by maximizing (2.3) (resp. (2.4)) is called the Poisson estimator (resp. the logistic regression estimator). We refer readers to Appendix A for further details on the weight function w(·) and on the role of the function δ(·). From a numerical point of view, it has been demonstrated that (2.3) and (2.4) can be efficiently approximated by a weighted generalized linear model (more precisely, a weighted quasi-Poisson regression for the former and a logistic regression for the latter). GLM software can therefore be adapted to accurately estimate β.
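To make the GLM connection explicit, here is a minimal sketch of the Berman-Turner device for the Poisson likelihood (2.3) with w ≡ 1, again assuming the bei data shipped with spatstat: data and dummy points are pooled into a quadrature scheme, and a weighted quasi-Poisson regression recovers β.

```r
library(spatstat)

# Berman-Turner quadrature: response 1/w_i at data points, 0 at dummy
# points, with quadrature weights w_i.
Q <- quadscheme(bei)
U <- union.quad(Q)                 # data + dummy locations
w <- w.quad(Q)                     # quadrature weights
y <- is.data(Q) / w                # Berman-Turner responses
df <- data.frame(y = y, w = w,
                 elev = bei.extra$elev[U],
                 grad = bei.extra$grad[U])

# Weighted quasi-Poisson regression approximating (2.3)
fit <- glm(y ~ elev + grad, family = quasipoisson, weights = w, data = df)
coef(fit)   # close to ppm(bei ~ elev + grad, data = bei.extra)
```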
More details about this numerical implementation can be found in Appendices C.1 and C.2, respectively. Regularization techniques Regularization techniques are introduced as alternatives to stepwise selection for variable selection and parameter estimation. In general, a regularization method attempts to maximize the penalized likelihood function
ℓ(θ) − η ∑_{j=1}^p p_{λ_j}(|θ_j|),
where ℓ(θ) is the likelihood function of θ, η is the number of observations, and p_λ(·) is a nonnegative penalty function parameterized by a real number λ ≥ 0. The same general strategy is adopted here in the context of spatial point processes. Let ℓ(w; β) be either the weighted Poisson likelihood function (2.3) or the weighted logistic regression likelihood function (2.4). In a similar way, we define the penalized weighted likelihood function
Q(w; β) = ℓ(w; β) − |D| ∑_{j=1}^p p_{λ_j}(|β_j|),   (2.5)
where |D| is the volume of the observation domain, which plays the same role as the number of observations η in our setting, λ_j is a nonnegative tuning parameter corresponding to β_j for j = 1, . . . , p, and p_λ is a penalty function which we now describe. For any λ ≥ 0, we say that p_λ(·) : R^+ → R is a penalty function if p_λ is a nonnegative function with p_λ(0) = 0. Examples of penalty functions include the ridge, lasso, and elastic net penalties, their adaptive versions, and the non-convex SCAD and MC+ penalties. The first and second derivatives of these functions are given in Table 1. It is to be noticed that p_λ is not differentiable at θ = λ, γλ (resp. θ = γλ) for the SCAD (resp. MC+) penalty. Penalty functions give rise to specific well-known methods, which are summarized in Table 2 (details of some regularization methods); more details can be found in Appendix D. The solution obtained by maximizing (2.5) is called the regularized Poisson or the regularized logistic estimator. From the previous section, the numerical implementation of the maximization of (2.5) can be done using procedures which estimate a penalized weighted generalized linear model. This is now quite standard, for instance in R with packages such as glmnet and ncvreg. More details can be found in Appendix C.3. What is expected from maximizing (2.5) is that the procedure correctly selects the true covariates and that the estimate is consistent and still satisfies a central limit theorem. To obtain such properties when the observation domain increases to R^d, specific conditions on the point process, the covariates, the regularity of the penalty function and, most of all, on the tuning parameters λ_j are required. This is investigated in the next section. Asymptotic theory In this section, we present the asymptotic results for the regularized Poisson estimator when considering X as a d-dimensional point process observed over a sequence of observation domains D = D_n, n = 1, 2, . . ., which expands to R^d as n → ∞. The regularization parameters λ_j = λ_{n,j} for j = 1, . . . , p are now indexed by n. For the sake of conciseness, we do not present the asymptotic results for the regularized logistic estimator. The results are very similar; the main difference lies in conditions (C.6) and (C.7), for which the matrices A_n, B_n, and C_n have a different expression (see Remark 2). So, from now on, we let ℓ_n = ℓ_{n,PL} and Q = Q_n be indexed by n. We define the p × p matrices A_n(w; β_0), B_n(w; β_0), and C_n(w; β_0) by
A_n(w; β_0) = ∫_{D_n} w(u) z(u) z(u)⊤ ρ(u; β_0) du,
B_n(w; β_0) = ∫_{D_n} w(u)^2 z(u) z(u)⊤ ρ(u; β_0) du,
C_n(w; β_0) = ∫_{D_n} ∫_{D_n} w(u) w(v) z(u) z(v)⊤ ρ(u; β_0) ρ(v; β_0) {g(u, v) − 1} du dv.
In what follows, for a square symmetric matrix M_n, ν_min(M_n) denotes the smallest eigenvalue of M_n.
Consider the following conditions (C.1)-(C.8), which are required to derive our asymptotic results: (C.4) there exists an integer t ≥ 1 such that, for k = 2, . . . , 2 + t, the product density ρ^(k) exists and is uniformly bounded; (C.5) for the strong mixing coefficients (3.1), we assume that there exists some …; (C.8) the penalty function p_λ(·) is nonnegative on R^+, satisfies p_λ(0) = 0, and is continuously differentiable on R^+ \ {0}, with derivative p′_λ assumed to be a Lipschitz function on R^+ \ {0}. Furthermore, given (λ_{n,j})_{n≥1}, for j = 1, . . . , s, we assume that there exists a sequence (r̃_{n,j})_{n≥1}, with |D_n|^{1/2} r̃_{n,j} → ∞ as n → ∞, such that, for n sufficiently large, p_{λ_{n,j}} is thrice continuously differentiable in the ball centered at |β_{0j}| with radius r̃_{n,j}, and we assume that the third derivative is uniformly bounded. Under condition (C.8), we define the sequences a_n, b_n and c_n by
a_n = max_{j=1,...,s} p′_{λ_{n,j}}(|β_{0j}|),   b_n = inf_{j=s+1,...,p} inf_{0 < θ ≤ K_1 |D_n|^{−1/2}} p′_{λ_{n,j}}(θ),   c_n = max_{j=1,...,s} p″_{λ_{n,j}}(|β_{0j}|),
where K_1 is any positive constant. These sequences a_n, b_n and c_n, detailed in Table 3 for the different methods considered in this paper, play a central role in our results. Even if this will be discussed later in Section 3.3, we specify right now that we require a_n |D_n|^{1/2} → 0, b_n |D_n|^{1/2} → ∞ and c_n → 0. Main results We state our main results here. Proofs are relegated to Appendices E-G. We first show in Theorem 1 that the regularized Poisson estimator converges in probability and exhibits its rate of convergence. This implies that, if a_n = O(|D_n|^{−1/2}) and c_n = o(1), the regularized Poisson estimator is root-|D_n| consistent. Furthermore, we demonstrate in Theorem 2 that such a root-|D_n| consistent estimator ensures the sparsity of β̂; that is, the estimate will correctly set β̂_2 to zero with probability tending to 1 as n → ∞, and β̂_1 is asymptotically normal. Remark 1. For lasso and adaptive lasso, Π_n = 0. For the other penalties, because c_n = o(1), we have Π_n = o(1). Since A_{n,11}(w; β_0) = O(|D_n|) by conditions (C.2) and (C.3), |D_n| Π_n is asymptotically negligible with respect to A_{n,11}(w; β_0). Remark 2. Theorems 1 and 2 remain true for the regularized logistic estimator if we replace, in the expression of the matrices A_n, B_n, and C_n, w(u) by w(u)δ(u)/(ρ(u; β_0) + δ(u)), u ∈ D_n, and extend condition (C.3) by adding … The proofs of Theorems 1 and 2 for this estimator are slightly different, mainly because, unlike the Poisson likelihood, for which we have ℓ_n^{(2)}(w; β) = −A_n(w; β), the second derivative of the logistic likelihood is random. Despite the additional difficulty, we maintain that no additional assumption is required. Remark 3. We want to highlight here the main theoretical differences with the work by Thurman et al. (2015). First, the methodology and results are available for the logistic regression likelihood. Second, we consider very general penalty functions, while Thurman et al. (2015) only consider the adaptive lasso method. Third, Thurman et al. (2015) assume that |D_n|^{−1} M_n → M as n → ∞, where M_n is A_n, B_n or C_n, and where M, i.e. either A, B or C, is a positive definite matrix. Instead, we assume the sharper condition lim inf_{n→∞} ν_min(|D_n|^{−1} M_n) > 0, where M_n is either A_n or B_n + C_n. The latter point makes the proofs a little more technical. Discussion of the conditions We split the conditions we assume into two categories: conditions (C.1)-(C.7), and condition (C.8) combined with the assumptions on the behavior of the sequences a_n, b_n and c_n. Conditions (C.1)-(C.7) are standard in the literature, see e.g. Coeurjolly and Møller (2014).
Essentially, these assumptions ensure that, when there is no regularization, the estimate β̂ is consistent and satisfies a central limit theorem. To help the reader, we reproduce the comments that can be made on these assumptions. In condition (C.1), the assumption that E contains o in its interior can be made without loss of generality. If instead u is an interior point of E, then condition (C.1) could be modified so that any ball with centre u and radius r > 0 is contained in D_n = nE for all sufficiently large n. Condition (C.3) is quite standard. From conditions (C.2)-(C.5), the matrices A_n(w; β_0), B_n(w; β_0) and C_n(w; β_0) are bounded by |D_n| (see e.g. Coeurjolly and Møller, 2014). As mentioned, conditions (C.1)-(C.6) are used to establish a central limit theorem for |D_n|^{−1/2} ℓ_n^{(1)}(w; β_0), using a general central limit theorem for triangular arrays of nonstationary random fields obtained by Karácsony (2006), which extends the result of Bolthausen (1982), itself extended to nonstationary random fields by Guyon (1995). As pointed out by Coeurjolly and Møller (2014), condition (C.6) is a spatial average assumption. For linear models, it is similar to an assumption like ν_min(n^{−1} X⊤X) being bounded away from zero, where n would play the role of the number of observations and X would represent the design matrix. Conditions (C.6)-(C.7) ensure that the matrix Σ_n(w; β_0) is invertible for sufficiently large n. We refer the reader to e.g. Coeurjolly and Møller (2014), where these conditions are shown to hold for a large class of models, including the Poisson and Cox processes discussed in Appendix B. Condition (C.8) controls the higher-order terms in the Taylor expansion of the penalty function. Roughly speaking, we expect the penalty function to be at least Lipschitz and thrice differentiable in a neighborhood of the true parameter vector. As it is, the condition looks technical; however, it is obviously satisfied for ridge, lasso, and elastic net (and their adaptive versions). According to the choice of λ_n, it is satisfied for SCAD and MC+ when |β_{0j}|, for j = 1, . . . , s, is not equal to γλ_n and/or λ_n. As a consequence of the previous discussion, the main assumptions we require in this paper are the ones related to the sequences a_n, b_n and c_n. We require that a_n |D_n|^{1/2} → 0, b_n |D_n|^{1/2} → ∞ and c_n → 0 as n → ∞ simultaneously. For the ridge regularization method, b_n = 0, which prevents us from applying Theorem 2 to this penalty. For lasso and elastic net, a_n = K_2 b_n for some constant K_2 > 0 (K_2 = 1 for lasso), so the two conditions a_n |D_n|^{1/2} → 0 and b_n |D_n|^{1/2} → ∞ as n → ∞ cannot be satisfied simultaneously. This is different for the adaptive versions, where a compromise can be found by adjusting the λ_{n,j}'s, as well as for the two non-convex penalties SCAD and MC+, for which λ_n can be adjusted. For the regularization methods considered in this paper, the condition c_n → 0 is implied by the condition a_n |D_n|^{1/2} → 0 as n → ∞. Simulation study We conduct a simulation study with three different scenarios, described in Section 4.1, to compare the estimates of the regularized Poisson likelihood (PL) and those of the regularized weighted Poisson likelihood (WPL). We also want to explore the behavior of the estimates using different regularization methods. Empirical findings are presented in Section 4.2. Furthermore, we compare, in Section 4.3, the regularized Poisson and logistic estimators. Simulation set-up The setting is quite similar to that of Waagepetersen (2007) and Thurman et al.
(2015). The spatial domain is D = [0, 1000] × [0, 500]. We center and scale the 201 × 101 pixel images of elevation (x_1) and gradient of elevation (x_2) contained in the bei dataset of the spatstat library in R (R Core Team, 2016), and use them as the two true covariates. In addition, we create three different scenarios to define extra covariates:
Scenario 1. We generate eighteen 201 × 101 pixel images of covariates as standard Gaussian white noise and denote them by x_3, . . . , x_20. We define z(u) = {1, x_1(u), . . . , x_20(u)} as the covariates vector. The regression coefficients for z_3, . . . , z_20 are set to zero.
Scenario 2. First, we generate eighteen 201 × 101 pixel images of covariates as in Scenario 1. Second, we transform them, together with x_1 and x_2, to introduce multicollinearity. In particular, we define z(u) = Ω^{1/2} x(u) for a prescribed covariance matrix Ω with entries (Ω)_{ij}, i, j = 1, . . . , 20, except (Ω)_{12} = (Ω)_{21} = 0, to preserve the correlation between x_1 and x_2. The regression coefficients for z_3, . . . , z_20 are set to zero.
Scenario 3. We consider a more complex situation. We center and scale the thirteen 50 × 25 pixel images of soil nutrient covariates obtained from the study of the tropical forest of Barro Colorado Island (BCI) in central Panama (see Condit, 1998; Hubbell et al., 1999, 2005), convert them to 201 × 101 pixel images to match x_1 and x_2, and use them as the extra covariates. Together with x_1 and x_2, we keep the structure of the covariance matrix to preserve the complexity of the situation. In this setting, we have z(u) = {1, x_1(u), . . . , x_15(u)}. The regression coefficients for z_3, . . . , z_15 are set to zero.
The different maps of the covariates obtained from Scenarios 2 and 3 are depicted in Appendix H. Except for z_3, which has a high correlation with z_2, the extra covariates obtained from Scenario 2 tend to have a constant value (Figure 3). This is completely different from the ones obtained from Scenario 3 (Figure 4). The mean number of points over the domain D, μ, is chosen to be 1600. We set the true intensity function to ρ(u; β_0) = exp{β_0 + β_1 z_1(u) + β_2 z_2(u)}, where β_1 = 2 represents a relatively large effect of elevation, β_2 = 0.75 reflects a relatively small effect of gradient, and β_0 is selected such that each realization has 1600 points on average. Furthermore, we erode the domain D regularly such that, with the same intensity function, the mean number of points over the new domain D_R becomes 400. The erosion is used to observe the convergence of the procedure as the observation domain expands. We consider the default number of dummy points for the Poisson likelihood, denoted by nd^2, as suggested in the spatstat R package, i.e. nd^2 ≈ 4m, where m is the number of points. With these scenarios, we simulate 2000 spatial point patterns from a Thomas point process (see Appendix B.2) using the rThomas function in the spatstat package. We also consider two different values of the κ parameter (κ = 5 × 10^{−4}, κ = 5 × 10^{−5}) as different levels of spatial interaction, and let ω = 20. For each of the four combinations of κ and μ, we fit the intensity to the simulated point pattern realizations. We also fit the oracle model, which only uses the two true covariates. All models are fitted using a modified internal function of spatstat (Baddeley et al., 2015), glmnet (Friedman et al., 2010), and ncvreg (Breheny and Huang, 2011). A modification of the ncvreg R package is required to include the penalized weighted Poisson and logistic likelihoods.
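As a sketch of this simulation set-up, an inhomogeneous Thomas process with a log-linear driving intensity can be generated along the following lines; the intercept value and the use of the raw bei covariates in place of the centred and scaled maps are illustrative assumptions only.

```r
library(spatstat)

z1 <- bei.extra$elev               # stand-in for the scaled elevation map
z2 <- bei.extra$grad               # stand-in for the scaled gradient map
kappa <- 5e-4; omega <- 20         # parent intensity and cluster scale
beta  <- c(-5, 2, 0.75)            # beta0 = -5 is illustrative only

# The Thomas process has intensity kappa * mu(u); choosing mu(u) as below
# yields the target log-linear intensity exp{beta0 + beta1*z1 + beta2*z2}.
mu <- eval.im(exp(beta[1] + beta[2] * z1 + beta[3] * z2) / kappa)
X  <- rThomas(kappa, scale = omega, mu = mu, win = Window(z1))
X$n                                # realized number of points
```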
Simulation results To better understand the behavior of the Thomas processes designed in this study, Figure 1 shows plots of four realizations with different κ and μ. The smaller the value of κ, the tighter the clusters, since there are fewer parents. When μ = 400, i.e. considering the realizations observed on D_R, the mean number of points over the 2000 replications and the corresponding standard deviation are 396 and 47 (resp. 400 and 137) when κ = 5 × 10^{−4} (resp. κ = 5 × 10^{−5}). When μ = 1600, the mean number of points and the standard deviation are 1604 and 174 (resp. 1589 and 529) when κ = 5 × 10^{−4} (resp. κ = 5 × 10^{−5}). Tables 4 and 5 present the selection properties of the estimates using the penalized PL and the penalized WPL methods. Similarly to Bühlmann and Van De Geer (2011), the indices we consider are the true positive rate (TPR), the false positive rate (FPR), and the positive predictive value (PPV). TPR corresponds to the ratio of the number of selected true covariates over the number of true covariates, while FPR corresponds to the ratio of the number of selected noisy covariates over the number of noisy covariates. TPR thus measures how often the model correctly selects both z_1 and z_2, while FPR measures how often the model incorrectly selects among z_3 to z_p (p = 20 for Scenarios 1 and 2 and p = 15 for Scenario 3). PPV corresponds to the ratio of the number of selected true covariates over the total number of selected covariates, and describes how well the model approximates the oracle model in terms of selection. Therefore, we look for methods with a TPR and a PPV close to 100% and an FPR close to 0. Generally, for both the penalized PL and the penalized WPL methods, the best selection properties are obtained for the larger value of κ, which corresponds to weaker spatial dependence. For a more clustered process, indicated by a smaller value of κ, it seems more difficult to select the true covariates. As μ increases from 400 (Table 4) to 1600 (Table 5), the TPR tends to improve, so the model selects both z_1 and z_2 more frequently. Ridge, lasso, and elastic net are the regularization methods that cannot satisfy our theorems. We first emphasize that all covariates are always selected by ridge, so that its rates never change whatever the setting. For the penalized PL with lasso and elastic net regularization, we observe quite large values of FPR, meaning that these methods wrongly keep the noisy covariates rather frequently. When the penalized WPL is applied, we gain a smaller FPR, but suffer from a smaller TPR at the same time. This smaller TPR actually comes from the non-selection of z_2, which has a smaller coefficient than z_1. When we apply adaptive lasso, adaptive elastic net, SCAD, and MC+, we achieve better performance, especially for the FPR, which is closer to zero and automatically improves the PPV. Adaptive elastic net (resp. elastic net) has a slightly larger FPR than adaptive lasso (resp. lasso). Among all the regularization methods considered in this paper, adaptive lasso seems to outperform the others. Considering Scenarios 1 and 2, we observe the best selection properties for the penalized PL combined with adaptive lasso. As the design gets more complex in Scenario 3, the penalized PL suffers from a much larger FPR, indicating that this method may not be able to cope with the more complicated situation. However, when we use the penalized WPL, the properties seem to be more stable across the different designs of the simulation study.
One more advantage of the penalized WPL is that we can remove almost all the extra covariates. It is worth noticing that we may suffer from a smaller TPR when we apply the penalized WPL, but we lose only the less informative covariates. From Tables 4 and 5, when faced with a complex situation, we would recommend the penalized WPL method with the adaptive lasso penalty if the focus is on selection properties. Otherwise, the penalized PL combined with the adaptive lasso penalty is preferable. Tables 6 and 7 give the prediction properties of the estimates in terms of biases, standard deviations (SD), and square roots of mean squared errors (RMSE), criteria we define by
Bias = {∑_{j=1}^p (Ê(β̂_j) − β_j)^2}^{1/2},   SD = {∑_{j=1}^p σ̂_j^2}^{1/2},   RMSE = (Bias^2 + SD^2)^{1/2},
where Ê(β̂_j) and σ̂_j^2 are respectively the empirical mean and variance of the estimates β̂_j, for j = 1, . . . , p, with p = 20 for Scenarios 1 and 2, and p = 15 for Scenario 3. In general, the properties improve with larger values of κ and μ, due to weaker spatial dependence and a larger sample size. For the oracle model, which contains only z_1 and z_2, the WPL estimates are more efficient than the PL estimates, particularly in the more clustered case, agreeing with the findings of Guan and Shen (2010) in the unregularized setting. When the regularization methods are applied, the bias increases in general, especially for the penalized WPL method. The regularized WPL has a larger bias since this method does not select z_2 as frequently. Furthermore, the weighted method seems to introduce extra bias even when no regularization is considered, as in the oracle model. For a weakly clustered process, the SD using the penalized WPL is similar to that of the penalized PL, which may be because the weaker dependence, represented by a larger κ, makes the weight surface w(·) closer to 1; however, a larger RMSE is obtained with the penalized WPL. For the more clustered process, we obtain a smaller SD using the penalized WPL, which explains why in some cases (mainly Scenario 3) the RMSE gets smaller. For the ridge method, the bias is closest to that of the oracle model, but the SD is the largest. Among the regularization methods, adaptive lasso has the best performance in terms of prediction. Considering Scenarios 1 and 2, we obtain the best properties when we apply the penalized PL with the adaptive lasso penalty. As the design gets much more complex in Scenario 3, when we use the penalized PL with adaptive lasso, the SD is doubled and even quadrupled, due to the over-selection of many unimportant covariates. In particular, for the more clustered process, better properties are even obtained by applying the regularized WPL combined with adaptive lasso. From Tables 6 and 7, when the focus is on prediction properties, we would recommend the penalized WPL combined with the adaptive lasso penalty when the observed point pattern is strongly clustered and when the covariates have a complex covariance structure. Otherwise, the penalized PL combined with the adaptive lasso penalty is more favorable. Our recommendations in terms of prediction thus support those made in terms of selection. Logistic regression Our concern here is to compare the regularized Poisson estimator to the regularized logistic estimator for different numbers of dummy points. We recall that the number of dummy points comes up when we discretize the integral terms in (2.3) and (2.4).
We consider three different numbers of dummy points, denoted by nd². With these choices we want to observe the properties in three different situations: (a) nd² < m, (b) nd² ≈ m, and (c) nd² > m, where m is the number of points. In the following, m ≈ 1600 and nd² = 400, 1600, and 6400. Note that the default choice for the Poisson likelihood in spatstat corresponds to case (c). Baddeley et al. (2014) show that for datasets with a very large number of points and for very structured point processes, the logistic likelihood method is clearly preferable, as it requires a smaller number of dummy points to perform quickly and efficiently. We want to investigate a similar comparison when these methods are regularized. We only report the results for κ = 5 × 10⁻⁵ and μ = 1600, and for Scenarios 2 and 3. We use the same selection and prediction indices examined in Section 4.2 and consider only the adaptive lasso method. Table 8 presents selection properties for the regularized Poisson and logistic estimators using adaptive lasso regularization. For the unweighted versions of the procedure, the regularized logistic method outperforms the regularized Poisson method when nd = 20, i.e. when the number of dummy points (nd² = 400) is much smaller than the number of points. When nd² ≈ m or nd² > m, the methods tend to have similar performances. For the weighted versions of the methods, the results do not change much with nd, and the regularized Poisson likelihood slightly outperforms the regularized logistic likelihood. In addition, for Scenario 3, which considers a more complex situation, the methods tend to select the noisy covariates much more frequently. Empirical biases, standard deviations, and square roots of mean squared errors are presented in Table 9. We include all empirical results for the standard Poisson and logistic estimates (i.e. without regularization). Let us first consider the unweighted methods with no regularization. The logistic method clearly has a smaller bias, especially when nd = 20, which explains why in most situations the RMSE is smaller. However, for the weighted methods, although the logistic method has a smaller bias in general, it produces a much larger SD, leading to a larger RMSE in all cases. When we compare the weighted and the unweighted methods for logistic estimates, in general, not only do we fail to reduce the SD, but we also incur a larger bias. When the adaptive lasso regularization is combined with the unweighted methods, we generally preserve the bias and at the same time improve the SD, and hence improve the RMSE. The logistic likelihood method slightly outperforms the Poisson likelihood method. When the weighted methods are considered, we obtain a smaller SD, but a larger bias. For the weighted versions of the Poisson and logistic likelihoods, the results do not change much with nd, and the weighted Poisson method slightly outperforms the weighted logistic method. From Tables 8 and 9, when the number of dummy points can be chosen such that nd² ≈ m or nd² > m, we recommend applying the Poisson likelihood method. When the number of dummy points must be chosen such that nd² < m, the logistic likelihood method is more favorable. Our recommendations regarding weighted versus unweighted methods follow those of Section 4.2.
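In spatstat, the two likelihoods and the dummy-grid size can be compared along the following lines; this is a hedged sketch, with illustrative parameter values and with the caveat that argument names (e.g. scale vs. sigma in rThomas, the nd argument of ppm) may differ across spatstat versions:

```r
# Sketch: Poisson ('mpl') vs logistic ('logi') fits with controlled nd.
library(spatstat)

# Thomas process with ~1600 points on the unit square (illustrative values).
X <- rThomas(kappa = 25, scale = 0.02, mu = 64)

fit_mpl  <- ppm(X ~ 1, method = "mpl",  nd = 40)  # nd^2 = 1600 dummy points
fit_logi <- ppm(X ~ 1, method = "logi", nd = 20)  # nd^2 = 400  dummy points
coef(fit_mpl); coef(fit_logi)
```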
Application to forestry datasets In a 50-hectare region (D = 1,000 m × 500 m) of the tropical moist forest of Barro Colorado Island (BCI) in central Panama, censuses have been carried out in which all free-standing woody stems of at least 10 mm diameter at breast height were identified, tagged, and mapped, resulting in maps of over 350,000 individual trees belonging to more than 300 species (see Condit, 1998; Hubbell et al., 1999, 2005). It is of interest to understand how such a high number of different tree species continues to coexist, profiting from different habitats determined by e.g. topography or soil properties (see e.g. Waagepetersen, 2007; Waagepetersen and Guan, 2009). In particular, our main concern is the selection of covariates among topographical attributes and soil minerals, as well as the estimation of their coefficients. We are particularly interested in analyzing the locations of 3,604 Beilschmiedia pendula Lauraceae (BPL) tree stems. We model the intensity of BPL trees as a log-linear function of two topographical attributes and 13 soil properties as covariates. Figure 2 contains maps of the locations of BPL trees (top left), elevation (top right), slope (bottom left), and concentration of Phosphorus (bottom right). BPL trees seem to appear in greater abundance in areas of high elevation, steep slope, and low concentration of Phosphorus. The covariate maps are depicted in Figure 4. We apply the regularized Poisson and logistic likelihoods, combined with adaptive lasso regularization, to select covariates and estimate parameters. Since we do not deal with a dataset with a very large number of points, we can keep the default number of dummy points for the Poisson likelihood as in the spatstat package, i.e. the number of dummy points is chosen larger than the number of points, so that the method performs quickly and efficiently. It is worth emphasizing that we center and scale the 15 covariates in order to observe which one has the largest effect on the intensity. The results are presented in Table 10: 12 covariates for the Poisson likelihood and 11 for the logistic method are selected out of the 15 covariates using the unweighted methods, while only 5 covariates (for both the Poisson and logistic methods) are selected using the weighted versions. The unweighted methods tend to overfit the model by over-selecting unimportant covariates, while the weighted methods tend to keep out the uninformative covariates. The Poisson and logistic estimates show similar selection and estimation results. First, we find some differences in estimation between the unweighted and the weighted methods, especially for slope and Manganese (Mn), for which the weighted methods give estimates approximately twice as large. Second, we may lose some nonzero covariates when we apply the weighted methods, even though this concerns only covariates with relatively small coefficients. Boron (B) has a high correlation with many of the other covariates, particularly with those that are not selected. This is possibly why Boron, which is selected and may have a non-negligible coefficient in the unweighted methods, is not chosen by the weighted ones. This may also explain why the weighted methods introduce extra bias. However, since the situation appears to be quite close to Scenario 3 of the simulation study, the weighted methods are more favorable in terms of both selection and prediction.
In this application, we do not face any computational problem. Nevertheless, if we had to model a species of trees with many more points, the default value of nd would lead to numerical problems. In such a case, the logistic likelihood would be a good alternative. These results suggest that BPL trees favor living in areas of higher elevation and slope. Further, higher levels of Manganese (Mn) and lower levels of both Phosphorus (P) and Zinc (Zn) concentrations in the soil are associated with a higher appearance of BPL trees. Conclusion and discussion We develop regularized versions of estimating equations based on the Campbell theorem, derived from the Poisson and the logistic regression likelihoods. Our procedure is able to perform covariate selection for modeling the intensity of spatial point processes. Furthermore, our procedure is generally easy to implement in R, since it suffices to combine the spatstat package with the glmnet and ncvreg packages. We study the asymptotic properties of both the regularized weighted Poisson and logistic estimates in terms of consistency, sparsity, and asymptotic normality. We find that, among the regularization methods considered in this paper, adaptive lasso, adaptive elastic net, SCAD, and MC+ are the methods that satisfy our theorems. We consider several scenarios in the simulation study to observe the selection and prediction properties of the estimates. We compare the penalized Poisson likelihood (PL) and the penalized weighted Poisson likelihood (WPL) with different penalty functions. From the results, when we deal with covariates having a complex covariance matrix and when the point pattern looks quite clustered, we recommend applying the penalized WPL combined with adaptive lasso regularization. Otherwise, the regularized PL with the adaptive lasso is preferable. A further and more careful investigation of the choice of the tuning parameters may be needed to improve the selection properties. We note that the bias increases quite significantly when the regularized WPL is applied. When the penalized WPL is considered, a two-step procedure may be needed to improve the prediction properties: (1) use the penalized WPL combined with the adaptive lasso to choose the covariates, then (2) use the selected covariates to obtain the estimates. This post-selection inference procedure has not been investigated in this paper. We also compare the estimates obtained from the Poisson and the logistic likelihoods. When the number of dummy points can be chosen to be either similar to or larger than the number of points, we recommend the use of the Poisson likelihood method. Nevertheless, when the number of dummy points has to be chosen smaller than the number of points, the logistic method is more favorable. Further work could consist in studying the situation where the number of covariates is much larger than the sample size. In such a situation, the coordinate descent algorithm used in this paper may cause some numerical troubles. The Dantzig selector procedure introduced by Candes and Tao (2007) might be a good alternative, as its implementation for linear models (and for generalized linear models) results in a linear program. It would be interesting to bring this approach to the spatial point process setting. Another direction could consist in extending the intensity model itself to get more flexibility, for instance using single-index type models.
Such models have already been proposed for spatial point processes by Fang and Loh (2017) with a moderate number of covariates. Using e.g. Zhu et al. (2011), combining such models with regularization techniques for inhomogeneous spatial point processes seems feasible. Kernel-type regression methods could also be interesting perspectives, and the work by Crawford et al. (2018) could serve as a basis to investigate such methods for the spatial point process feature selection problem.

Appendix A: Parametric intensity estimation

One of the standard ways to fit models to data is by maximizing the likelihood of the model for the data. While the maximum likelihood method is feasible for parametric Poisson point process models (Appendix A.1), computationally intensive Markov chain Monte Carlo (MCMC) methods are needed otherwise (Møller and Waagepetersen, 2004). As MCMC methods are not yet straightforward to implement, estimating equations based on the Campbell theorem have been developed (see e.g. Waagepetersen, 2007; Møller and Waagepetersen, 2007; Waagepetersen, 2008; Guan and Shen, 2010; Baddeley et al., 2014). We review the estimating equations derived from the Poisson likelihood in Appendices A.2-A.3 and from the logistic regression likelihood in Appendix A.4.

A.1. Maximum likelihood estimation

For an inhomogeneous Poisson point process with intensity function ρ parameterized by β, the likelihood function is

L(β) = exp(|D| − ∫_D ρ(u; β) du) ∏_{u ∈ X∩D} ρ(u; β),

and the log-likelihood function of β is

ℓ(β) = Σ_{u ∈ X∩D} log ρ(u; β) − ∫_D ρ(u; β) du,   (A.1)

where we have omitted the constant term ∫_D 1 du = |D|. As the intensity function has the log-linear form (1.1), (A.1) reduces to

ℓ(β) = Σ_{u ∈ X∩D} β⊤z(u) − ∫_D exp(β⊤z(u)) du.

Rathbun and Cressie (1994) show that the maximum likelihood estimator is consistent, asymptotically normal and efficient as the sample region goes to R^d.

A.2. Poisson likelihood

Let β_0 be the true parameter vector. The score function, i.e. the gradient vector of ℓ(β), denoted by ℓ^{(1)}(β), is

ℓ^{(1)}(β) = Σ_{u ∈ X∩D} z(u) − ∫_D z(u) exp(β⊤z(u)) du,

and by applying the Campbell theorem (2.1) to the first term we have E ℓ^{(1)}(β_0) = 0, so the score is an unbiased estimating function. The properties of the Poisson estimator have been carefully studied. Schoenberg (2005) shows that the Poisson estimator is still consistent for a class of spatio-temporal point process models. The asymptotic normality for a fixed observation domain is obtained by Waagepetersen (2007), while Guan and Loh (2007) establish asymptotic normality under an increasing-domain assumption and for suitable mixing point processes. Regarding the parameter ψ (see Appendix B.2), Waagepetersen and Guan (2009) study a two-step procedure to estimate both β and ψ, and they prove that, under certain mixing conditions, the parameter estimates (β̂, ψ̂) enjoy the properties of consistency and asymptotic normality.

A.3. Weighted Poisson likelihood

Although the estimating equation approach derived from the Poisson likelihood is simpler and faster to implement than maximum likelihood estimation, it potentially produces a less efficient estimate than that of maximum likelihood (Waagepetersen, 2007; Guan and Shen, 2010), because information about the interaction of events is ignored. To recover some of this loss of efficiency, Guan and Shen (2010) propose a weighted Poisson log-likelihood function given by

ℓ(w; β) = Σ_{u ∈ X∩D} w(u) log ρ(u; β) − ∫_D w(u) ρ(u; β) du,   (A.2)

where w(·) is a weight surface. Regarding (A.2), we see that a larger weight w(u) makes the observations in the infinitesimal region du more influential. By the Campbell theorem, ℓ^{(1)}(w; β) still defines an unbiased estimating equation. In addition, Guan and Shen (2010) prove that, under some conditions, the parameter estimates are consistent and asymptotically normal.
Guan and Shen (2010) show that a weight surface w(·) minimizing the trace of the asymptotic variance-covariance matrix of the estimates maximizing (A.2) can result in more efficient estimates than the Poisson estimator. In particular, the proposed weight surface is

w(u) = {1 + ρ(u; β) f(u)}⁻¹,  f(u) = ∫_D (g(v − u) − 1) dv,

where g is the pair correlation function. For a Poisson point process, note that f(u) = 0 and hence w(u) = 1, which reduces to maximum likelihood estimation. For general point processes, the weight surface depends on both the intensity function and the pair correlation function, and thus incorporates information on both the inhomogeneity and the dependence of the spatial point process. When clustering is present, so that g(v − u) > 1, then f(u) > 0 and hence the weight decreases with ρ(u). The weight surface is estimated by setting ŵ(u) = {1 + ρ̂(u) f̂(u)}⁻¹. To get the estimate ρ̂(u), β is substituted by the Poisson estimate β̂, that is, ρ̂(u) = ρ(u; β̂). Alternatively, ρ(u) can also be computed nonparametrically by a kernel method. Furthermore, Guan and Shen (2010) suggest approximating f(u) by K(r) − πr², where K(·) is Ripley's K-function, estimated by

K̂(r) = Σ^{≠}_{u,v ∈ X∩D} 1{‖v − u‖ ≤ r} / (ρ̂(u) ρ̂(v) |D ∩ D_{v−u}|).

Subsequent work extends the study by Guan and Shen (2010) and considers more complex estimating equations. Specifically, w(u)z(u) is replaced by a function h(u; β) in the derivative of (A.2) with respect to β. The procedure results in a slightly more efficient estimate than the one obtained from (A.2). However, the computational cost is higher, and since we combine estimating equations and penalization methods (see Section 2.3), we have not considered this extension.

A.4. Logistic regression likelihood

Although the estimating equations discussed in Appendices A.2 and A.3 are unbiased, these methods do not, in general, produce an unbiased estimator in practical implementations. Waagepetersen (2008) and Baddeley et al. (2014) propose another estimating function, which is close to the score of the Poisson log-likelihood but yields a less biased estimator than the Poisson estimates. In addition, their proposed estimating equation is in fact the derivative of a logistic regression likelihood. Following Baddeley et al. (2014), we define the weighted logistic regression log-likelihood function by

ℓ(w; β) = Σ_{u ∈ X∩D} w(u) log( ρ(u; β) / (δ(u) + ρ(u; β)) ) + ∫_D w(u) δ(u) log( δ(u) / (δ(u) + ρ(u; β)) ) du,   (A.3)

where δ(u) is a nonnegative real-valued function. Its role, as well as an explanation of the name 'logistic method', is given in Appendix C.2. Note that the score of (A.3) is an unbiased estimating equation. Waagepetersen (2008) shows asymptotic normality, for Poisson and some clustered point processes, of the estimator obtained from a similar procedure. Furthermore, the methodology and the results are studied by Baddeley et al. (2014) for spatial Gibbs point processes. To determine the optimal weight surface w(·) for the logistic method, we follow Guan and Shen (2010), who minimize the trace of the asymptotic covariance matrix of the estimates. We obtain a weight surface in which ρ(u) and f(u) can be estimated as in Appendix A.3.

Appendix B: Examples of spatial point process models with prescribed intensity function

We discuss spatial point process models specified by a deterministic or random intensity function. In particular, we consider two important model classes, namely Poisson and Cox processes. Poisson point processes serve as a tractable model class for no interaction, i.e. complete spatial randomness, while Cox processes form major classes for clustering or aggregation. For conciseness, we focus on these two classes of models.
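A hedged R sketch of the Guan-Shen weight surface follows; it collapses f(u) to a single constant f₀ = K̂(r₀) − πr₀² for a user-chosen range r₀, which is a simplification of the construction above, and the choice of r₀ is an assumption not prescribed by the paper:

```r
# Sketch: estimated weight surface w(u) = 1 / (1 + rho(u) * f(u)),
# with f approximated by Khat(r0) - pi * r0^2 (see above).
library(spatstat)

weight_surface <- function(X, rho_hat, r0 = 0.1) {
  # rho_hat: pixel image of the fitted intensity; r0: illustrative range.
  Khat <- Kinhom(X, lambda = rho_hat, r = seq(0, r0, length.out = 64))
  f0 <- max(Khat$iso[length(Khat$iso)] - pi * r0^2, 0)
  eval.im(1 / (1 + rho_hat * f0))   # returns a pixel image of w(u)
}
```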
We could also have presented determinantal point processes (e.g. Lavancier et al., 2015), which constitute an interesting class of repulsive point patterns with explicit moments; this has not been further investigated for the sake of brevity. In this paper, we focus on log-linear models of the intensity function given by (1.1).

B.1. Poisson point process

A point process X on D is a Poisson point process with intensity function ρ, assumed to be locally integrable, if the following conditions are satisfied: 1. for any B ⊆ D with 0 ≤ μ(B) < ∞, N(B) ∼ Poisson(μ(B)), where μ(B) = ∫_B ρ(u) du; 2. conditionally on N(B), the points in X ∩ B are i.i.d. with joint density proportional to ρ(u), u ∈ B. A Poisson point process with a log-linear intensity function is also called a modulated Poisson point process (e.g. Møller and Waagepetersen, 2007; Waagepetersen, 2008). In particular, for Poisson point processes, the pair correlation function satisfies g ≡ 1.

B.2. Cox processes

A Cox process is a natural extension of a Poisson point process, obtained by considering the intensity function of the Poisson point process as a realization of a random field. Suppose that Λ = {Λ(u) : u ∈ D} is a nonnegative random field. If the conditional distribution of X given Λ is a Poisson point process on D with intensity function Λ, then X is said to be a Cox process driven by Λ (see e.g. Møller and Waagepetersen, 2004). There are several types of Cox processes. Here we consider two: Neyman-Scott point processes and log Gaussian Cox processes.

Neyman-Scott point processes. Let C be a stationary Poisson process (mother process) with intensity κ > 0. Given C, let X_c, c ∈ C, be independent Poisson processes (offspring processes) with intensity function

ρ_c(u; β) = exp(β⊤z(u)) k(u − c; ω)/κ,

where k is a probability density function, parameterized by ω, determining the distribution of offspring points around the mother points. Then X = ∪_{c∈C} X_c is a special case of an inhomogeneous Neyman-Scott point process with mothers C and offspring X_c, c ∈ C. The point process X is a Cox process driven by Λ(u) = exp(β⊤z(u)) Σ_{c∈C} k(u − c; ω)/κ (e.g. Waagepetersen, 2007; Coeurjolly and Møller, 2014), and we can verify that the intensity function of X is indeed ρ(u; β) = exp(β⊤z(u)). One example of a Neyman-Scott point process is the Thomas process, for which

k(u; ω) = (2πω²)^{−d/2} exp(−‖u‖²/(2ω²))

is the density of N_d(0, ω²I_d). Conditionally on a parent event at location c, children events are normally distributed around c. Smaller values of ω correspond to tighter clusters, and smaller values of κ correspond to fewer parents. The parameter vector ψ = (κ, ω) is referred to as the interaction parameter, as it modulates the spatial interaction (or dependence) among events.

Log Gaussian Cox process. Suppose that log Λ is a Gaussian random field. Given Λ, the point process X follows a Poisson process with intensity function Λ. Then X is said to be a log Gaussian Cox process driven by Λ (Møller and Waagepetersen, 2004). If the random intensity function is written as

Λ(u) = exp(β⊤z(u) + φ(u) − σ²/2),

where φ is a zero-mean stationary Gaussian random field with covariance function c(u, v; ψ) = σ²R(v − u; ζ) depending on the parameter ψ = (σ², ζ) (Møller and Waagepetersen, 2007; Coeurjolly and Møller, 2014), then the intensity function of this log Gaussian Cox process is indeed given by ρ(u; β) = exp(β⊤z(u)); the term −σ²/2 ensures E Λ(u) = ρ(u; β). One example of a correlation function is the exponential form (e.g. Waagepetersen and Guan, 2009)

R(v − u; ζ) = exp(−‖v − u‖/ζ).

Here, ψ = (σ², ζ) constitutes the interaction parameter vector, where σ² is the variance and ζ is the correlation scale parameter.
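Both Cox models can be simulated in spatstat; the sketch below uses illustrative parameter values on the unit square, and argument names (scale vs. sigma in rThomas, the covariance arguments of rLGCP) may differ across spatstat versions:

```r
# Sketch: simulating the two Cox models above (illustrative parameters).
library(spatstat)

# Thomas process: parents ~ Poisson(kappa); offspring ~ N(c, omega^2 I).
X_thomas <- rThomas(kappa = 25, scale = 0.02, mu = 64)  # 'scale' plays omega

# Log Gaussian Cox process with exponential correlation function.
X_lgcp <- rLGCP(model = "exp", mu = 5, var = 0.5, scale = 0.1)

plot(X_thomas); plot(X_lgcp)
```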
Appendix C: Numerical methods

We present numerical aspects in this section. For nonregularized estimation, we consider two approaches: weighted Poisson regression is explained in Appendix C.1, while logistic regression is reviewed in Appendix C.2. The penalized estimation procedure employs a coordinate descent algorithm (Appendix C.3); we separate the use of convex and non-convex penalties in Appendices C.3.1 and C.3.2.

C.1. Weighted Poisson regression

Berman and Turner (1992) develop a numerical quadrature method to approximate maximum likelihood estimation for an inhomogeneous Poisson point process. They approximate the likelihood by a finite sum that has the same analytical form as the weighted likelihood of a generalized linear model with Poisson response. This method is then extended to Gibbs point processes by Baddeley and Turner (2000). Suppose we approximate the integral term in (A.1) by the Riemann sum

∫_D ρ(u; β) du ≈ Σ_{i=1}^{M} v_i ρ(u_i; β),

where u_i, i = 1, …, M, are points in D consisting of the m data points and M − m dummy points. The quadrature weights v_i > 0 are such that Σ_i v_i = |D|. To implement this method, the domain is first partitioned into M rectangular pixels of equal area, denoted by a, and one dummy point is placed at the center of each pixel. Let Δ_i be an indicator of whether u_i is an event of the point process (Δ_i = 1) or a dummy point (Δ_i = 0). Without loss of generality, let u_1, …, u_m be the observed events and u_{m+1}, …, u_M the dummy points. Thus, the Poisson log-likelihood function (A.1) can be approximated and rewritten as

ℓ(β) ≈ Σ_{i=1}^{M} v_i (y_i log ρ(u_i; β) − ρ(u_i; β)),  y_i = Δ_i/v_i,   (C.1)

and its weighted version (A.2) as

ℓ(w; β) ≈ Σ_{i=1}^{M} w_i v_i (y_i log ρ(u_i; β) − ρ(u_i; β)),   (C.2)

where w_i is the value of the weight surface at u_i. The estimate ŵ_i is obtained as suggested by Guan and Shen (2010). The similarity between (C.1) and (C.2) and a weighted Poisson GLM log-likelihood allows us to compute the estimates using software for generalized linear models. This fact is in particular exploited in the ppm function of the spatstat R package (Baddeley and Turner, 2005; Baddeley et al., 2015) with option method="mpl". To make the presentation more general, the number of dummy points is denoted by nd² in the next sections.

C.2. Logistic regression

To perform well, the Berman-Turner approximation often requires a rather large number of dummy points. Hence, fitting such generalized linear models can be computationally intensive, especially when dealing with a large number of points. Moreover, when the unbiased estimating equations are approximated using a deterministic numerical approximation as in Appendix C.1, the resulting estimator is in general no longer unbiased. To achieve an unbiased estimating equation, we estimate (A.3) by

Σ_{u ∈ X∩D} w(u) log( ρ(u; β) / (δ(u) + ρ(u; β)) ) + Σ_{u ∈ 𝒟} w(u) log( δ(u) / (δ(u) + ρ(u; β)) ),   (C.3)

where 𝒟 is a dummy point process independent of X and with intensity function δ. The form (C.3) is related to the estimating equation defined by Baddeley et al. (2014, eq. 7). Moreover, we consider this form since, applying the Campbell theorem to the last term of (C.3), we obtain

∫_D w(u) δ(u) log( δ(u) / (δ(u) + ρ(u; β)) ) du,

which is exactly the last term of (A.3). In addition, conditional on X ∪ 𝒟, (C.3) is the weighted likelihood function for Bernoulli trials y(u) = 1{u ∈ X}, u ∈ X ∪ 𝒟, with

P(y(u) = 1) = ρ(u; β) / (δ(u) + ρ(u; β)).

Precisely, (C.3) is a weighted logistic regression with offset term −log δ. Thus, parameter estimates can be straightforwardly obtained using standard software for generalized linear models. This approach is in fact provided in the spatstat package in R by calling the ppm function with option method="logi" (Baddeley et al., 2014, 2015).
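The Berman-Turner device (C.1) can be reproduced with a plain GLM call; the sketch below uses an illustrative intensity and grid size, and the glm warnings about non-integer responses are expected and harmless in this construction:

```r
# Sketch of the Berman-Turner device: the quadrature approximation (C.1)
# turns Poisson likelihood maximization into a weighted Poisson GLM.
library(spatstat)

X  <- rpoispp(function(x, y) exp(4 + 2 * x))   # inhomogeneous Poisson pattern
Q  <- quadscheme(X, nd = 40)                   # data points + dummy grid
ui <- union.quad(Q)                            # the quadrature points u_i
vi <- w.quad(Q)                                # quadrature weights v_i
yi <- is.data(Q) / vi                          # y_i = Delta_i / v_i

fit <- glm(yi ~ x, weights = vi, family = poisson(),
           data = data.frame(x = ui$x))
coef(fit)  # should be close to the true (4, 2)
```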
In spatstat, the dummy point process 𝒟 generates on average nd² points in D from a Poisson, binomial, or stratified binomial point process. Baddeley et al. (2014) suggest choosing δ(u) = 4m/|D|, where m is the number of points (so nd² = 4m). Furthermore, this choice of δ can be considered as a starting point for a data-driven approach (see Baddeley et al., 2014, for further details).

C.3. Coordinate descent algorithm

The LARS algorithm (Efron et al., 2004) is a remarkably efficient method for computing an entire path of lasso solutions; for linear models, its computational cost is of order O(Mp²), the same order as a least squares fit. The coordinate descent algorithm (Friedman et al., 2007, 2010) appears to be even more competitive for computing regularization paths, with a cost of O(Mp) operations. We therefore adopt cyclical coordinate descent methods, which are fast on large datasets and can take advantage of sparsity. Coordinate descent algorithms optimize a target function with respect to a single parameter at a time, iteratively cycling through all parameters until a convergence criterion is reached. We detail this for some convex and non-convex penalty functions in the next two sections. Here, we only present the coordinate descent algorithm for fitting the penalized weighted Poisson regression; a similar approach is used to fit the penalized weighted logistic regression.

C.3.1. Convex penalty functions

Since ℓ(w; β) given by (C.2) is a concave function of the parameters, the Newton-Raphson algorithm used to maximize the penalized log-likelihood function can be implemented via the iteratively reweighted least squares (IRLS) method. If the current estimate of the parameters is β̃, we construct a quadratic approximation ℓ_Q(w; β) of the weighted Poisson log-likelihood function using a Taylor expansion:

ℓ_Q(w; β) = −(1/2) Σ_{i=1}^{M} ν_i (y*_i − z_i⊤β)² + C(β̃),   (C.4)

where C(β̃) is a constant, and y*_i are the working response values and ν_i the weights,

y*_i = z_i⊤β̃ + (y_i − ρ(u_i; β̃))/ρ(u_i; β̃),  ν_i = w_i v_i ρ(u_i; β̃).

The regularized Poisson linear model works by first identifying a decreasing sequence of values λ ∈ [λ_min, λ_max], starting with the smallest value λ_max for which the entire estimated coefficient vector is zero. For each value of λ, an outer loop computes ℓ_Q(w; β) at β̃. Second, a coordinate descent method is applied to solve the penalized weighted least squares problem

min_β Ω(β),  Ω(β) = −ℓ_Q(w; β) + Σ_{j=1}^{p} p_{λ_j}(|β_j|).   (C.5)

The coordinate descent method is explained as follows. Suppose we have the estimates β̃_l for l ≠ j, l, j = 1, …, p. The method consists in partially optimizing (C.5) with respect to β_j, that is, min_{β_j} Ω(β̃_1, …, β̃_{j−1}, β_j, β̃_{j+1}, …, β̃_p). Friedman et al. (2007) provide the form of the coordinate-wise update for several penalized regression estimators. For instance, the coordinate-wise update for the elastic net, which embraces the ridge and lasso regularizations by setting γ to 0 or 1 respectively, is

β̃_j = S( Σ_{i=1}^{M} ν_i z_ij (y*_i − ỹ^{(j)}_i), λγ ) / ( Σ_{i=1}^{M} ν_i z_ij² + λ(1 − γ) ),   (C.6)

where ỹ^{(j)}_i = β̃_0 + Σ_{l≠j} z_il β̃_l is the fitted value excluding the contribution from covariate z_ij, and S(z, λ) is the soft-thresholding operator with value

S(z, λ) = sign(z)(|z| − λ)_+.   (C.7)

The update (C.6) is repeated for j = 1, …, p until convergence. The coordinate descent algorithm for several convex penalties is implemented in the R package glmnet (Friedman et al., 2010). In (C.6), we can set γ = 0 to implement ridge and γ = 1 for lasso, while 0 < γ < 1 gives the elastic net regularization. For the adaptive lasso, we follow Zou (2006): take γ = 1 and replace λ by λ_j = λ/|β̃_j|^τ, where β̃ is an initial estimate, say β̂(ols) or β̂(ridge), and τ is a positive tuning parameter.
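One full pass of the update (C.6)-(C.7) can be written in a few lines of R; this is a minimal sketch assuming the working responses y*_i and IRLS weights ν_i from the current outer iteration are given, and omitting the intercept for brevity:

```r
# Sketch: one cyclical pass of coordinate descent for the elastic net (C.6),
# with Z the M x p design matrix, ystar the working responses, nu the weights.
soft_threshold <- function(z, lambda) sign(z) * pmax(abs(z) - lambda, 0)

cd_pass <- function(beta, Z, ystar, nu, lambda, gamma) {
  for (j in seq_along(beta)) {
    r_j <- ystar - Z[, -j, drop = FALSE] %*% beta[-j]   # partial residual
    num <- soft_threshold(sum(nu * Z[, j] * r_j), lambda * gamma)
    beta[j] <- num / (sum(nu * Z[, j]^2) + lambda * (1 - gamma))
  }
  beta   # repeat such passes until convergence
}
```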
To avoid the computational burden of choosing τ, we follow Zou (2006, Section 3.4) and Wasserman and Roeder (2009), who also consider τ = 1, and choose λ_j = λ/|β̂_j(ridge)|, where β̂(ridge) is the estimate obtained from ridge regression. Implementing the adaptive elastic net follows along similar lines.

C.3.2. Non-convex penalty functions

Breheny and Huang (2011) investigate the application of the coordinate descent algorithm to fit penalized generalized linear models using SCAD and MC+, for which the penalty is non-convex. Mazumder et al. (2011) also study the coordinate-wise optimization algorithm in linear models, considering more general non-convex penalties. They conclude that, for a known current estimate θ̃, the univariate penalized least squares function Q_u(θ) = ½(θ − θ̃)² + p_λ(|θ|) should be convex to ensure that the coordinate-wise procedure converges to a stationary point. Mazumder et al. (2011) find that this turns out to be the case for the SCAD and MC+ penalties, but that it is not satisfied by the bridge (or power) penalty and by some cases of the log-penalty. Breheny and Huang (2011) derive the solution of the coordinate descent algorithm for SCAD and MC+ in the generalized linear model case, which is implemented in the ncvreg package of R. Let β̃_l, l ≠ j, l, j = 1, …, p, be the current estimates, and suppose we wish to partially optimize (C.5) with respect to β_j. If we define g̃_j = Σ_{i=1}^{M} ν_i z_ij (y*_i − ỹ^{(j)}_i) and η̃_j = Σ_{i=1}^{M} ν_i z_ij², the coordinate-wise update for SCAD is

β̃_j = S(g̃_j, λ)/η̃_j,  if |g̃_j| ≤ λ(η̃_j + 1),
β̃_j = S(g̃_j, γλ/(γ − 1)) / (η̃_j − 1/(γ − 1)),  if λ(η̃_j + 1) < |g̃_j| ≤ λγη̃_j,
β̃_j = g̃_j/η̃_j,  if |g̃_j| > λγη̃_j,

for any γ > max_j(1 + 1/η̃_j). Then, for γ > max_j(1/η̃_j) and the same definitions of g̃_j and η̃_j, the coordinate-wise update for MC+ is

β̃_j = S(g̃_j, λ)/(η̃_j − 1/γ),  if |g̃_j| ≤ λγη̃_j,
β̃_j = g̃_j/η̃_j,  if |g̃_j| > λγη̃_j,

where S(z, λ) is the soft-thresholding operator given by (C.7).

C.4. Selection of the regularization or tuning parameter

It is worth noticing that coordinate descent procedures (and other procedures computing penalized likelihood estimates) rely on the tuning parameter λ, so that the choice of λ is also an important task. The elastic net combines the ℓ1 and ℓ2 penalties; this method is particularly useful when the number of predictors is much larger than the number of observations, since it can select or eliminate strongly correlated predictors together. The lasso procedure suffers from a nonnegligible bias and does not satisfy an oracle property asymptotically (Fan and Li, 2001). Fan and Li (2001) and Zhang (2010), among others, introduce non-convex penalties to get around these drawbacks. The idea is to bridge the gap between ℓ0 and ℓ1 by trying to keep the estimates of nonzero coefficients unbiased while shrinking the less important variables to exactly zero. The rationale behind non-convex penalties such as SCAD and MC+ can also be understood by considering their first derivatives (see Table 1): they start by applying a rate of penalization similar to the lasso, and then continuously relax that penalization until it drops to zero. However, when employing non-convex penalties in regression analysis, the main challenge is often the minimization of the possibly non-convex objective function, when the non-convexity of the penalty is no longer dominated by the convexity of the likelihood function. This issue has been carefully studied. Fan and Li (2001) propose the local quadratic approximation (LQA). Zou and Li (2008) propose a local linear approximation (LLA), which yields an objective function that can be optimized using the least angle regression (LARS) algorithm (Efron et al., 2004). Finally, Breheny and Huang (2011) and Mazumder et al.
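The "lasso-like, then relaxed" behavior of the first derivatives is easy to visualize; below is a short R sketch of the standard SCAD and MC+ derivative forms from Fan and Li (2001) and Zhang (2010), with illustrative default values of γ:

```r
# Sketch: first derivatives of the SCAD and MC+ penalties (cf. Table 1).
scad_deriv <- function(theta, lambda, gamma = 3.7) {
  theta <- abs(theta)
  lambda * (theta <= lambda) +
    pmax(gamma * lambda - theta, 0) / (gamma - 1) * (theta > lambda)
}
mcp_deriv <- function(theta, lambda, gamma = 3) {
  pmax(lambda - abs(theta) / gamma, 0)
}

# Both equal lambda near zero and decay to zero for large coefficients.
curve(scad_deriv(x, 1), 0, 5, ylab = "penalty derivative")
curve(mcp_deriv(x, 1), 0, 5, add = TRUE, lty = 2)
```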
(2011) investigate the application of the coordinate descent algorithm to non-convex penalties. In (2.5), it is worth emphasizing that we allow each direction to have a different regularization parameter. By doing this, the ℓ1 and elastic net penalty functions are extended to the adaptive lasso (e.g. Zou, 2006) and the adaptive elastic net (e.g. Zou and Zhang, 2009). Table 2 details the regularization methods considered in this study.

Proof. We now focus on the proof of Theorem 2. Since Theorem 2(i) is proved by Lemma 2, we only need to prove Theorem 2(ii), i.e. the asymptotic normality of β̂_1. As shown in Theorem 1, there is a root-|D_n| consistent local maximizer β̂ of Q_n(w; β), and it can be shown that there exists an estimator β̂_1 as in Theorem 1 that is a root-|D_n| consistent local maximizer of Q_n(w; (β_1⊤, 0⊤)⊤), regarded as a function of β_1, and that satisfies

∂Q_n(w; β̂)/∂β_j = 0 for j = 1, …, s, where β̂ = (β̂_1⊤, 0⊤)⊤.
Models and algorithms for intellectual analysis of human capital reproduction. Managerial decision-making tasks in complex human capital reproduction systems are solved on the basis of models built from experimental data. It is problematic to take into account all the factors affecting human capital reproduction. Existing approaches are not designed for building models of human capital reproduction under incomplete information. Algorithms of inductive modeling are developed for the functional description of the characteristics of human capital reproduction systems. Software is developed to implement the proposed algorithms for the intellectual analysis of human capital reproduction based on metric spaces of multisets.

Introduction Currently, the main factor of sustainable development is human capital, which not only affects materialized capital but also manages it. This is explained by the fact that in recent years the economy has turned into an information system, where the main aspect of its competitiveness is not fixed assets but human abilities, skills and competence [1,2]. As human capital accumulates, marginal benefits decrease and marginal costs increase. Therefore, it is necessary not only to form but also to reproduce human capital. Human capital reproduction is the formation of a person's productive abilities through investments in specific processes of an individual's activity, namely in education and health promotion, which contribute to the development of human capital. The whole process of human capital reproduction consists of a gradual transition from one phase to another: • formation, where the accumulation of certain knowledge takes place, which a person later uses in the production process; • distribution, when a person begins to use the accumulated knowledge in certain areas and sectors of the economy; • exchange, where a certain intellectual base of a business entity is exchanged for remuneration for its activities; • consumption, where, firstly, there is a productive use of human capital and, secondly, the basis for its further improvement is formed. The different phases of human capital reproduction necessitate taking into account a multitude of accompanying processes. The expected results from the use of certain means can be unpredictable as a result of the action of random external factors. When the external factors are strictly defined or known, the uncertainty can be taken into account and, accordingly, it is possible to propose ways to handle it. In system analysis tasks of human capital reproduction, there are three main types of uncertainty: uncertainty of goals, situational uncertainty, and informational uncertainty. One feature of informational uncertainty in human capital reproduction is the uncertainty caused by incompleteness of the data. There is a need to recover missing data based on a selection of the intelligent algorithms by which they will be recovered. This task is important for processing small sample sizes, when an incorrect assessment of human capital reproduction is highly undesirable and can lead to errors in the construction of predictive models.
Materials and methods Human capital reproduction data analysis models. The study of human capital reproduction processes using mathematical models allows one to investigate the quantitative relationships between the input and output variables, as well as the factors affecting the output variables. This makes it possible to study the behavior of the processes over arbitrary time intervals. Mathematical models used for these purposes should take into account the peculiarities of the interaction of quantitative and qualitative variables, with possible consideration of real time based on simulation [3,4]. The choice of the mathematical model structure is not an easy task and must be solved interactively. First, the model structure is estimated approximately, based on the study of patterns, the analysis of correlation functions, and visual data analysis; several of the most probable structures are selected. Then the parameter estimates of the candidate models are calculated, and the optimal models are selected using the corresponding statistical characteristics of model quality. Different methods and approaches can be used to predict incomplete data, depending on the causes of these uncertainties. Currently, many exhaustive-search and iterative models have been developed for analyzing human capital reproduction data [5,6]. Exhaustive-search models are effective as a means of structural identification, but only with a limited number of arguments. Iterative approaches are computationally efficient with a large number of arguments, but the specificity of their architecture does not guarantee the construction of a model with an adequate structure to fill in the gaps in data on human capital reproduction. The use of any means to fill the gaps can bias the sample design obtained from the existing incomplete data, which can distort the real distribution of observations in the sample and reduce the actual significance of the results obtained. When choosing a specific model to fill in the gaps, one should take into account the possibilities of its application, which significantly depend on the data analysis method to be used afterwards. There are various approaches to handling missing information, such as EM-estimation, Hot Deck, ZET, Bartlett's method, Resampling, and ZET Braid [7,8]. Expectation-maximization estimation is an iterative procedure designed to solve optimization problems for a certain functional through an analytical search for the extremum of a function. It allows not only reproducing missing values using a two-step iterative algorithm, but also estimating means, covariance and correlation matrices for quantitative variables. The Hot Deck method substitutes the value of the closest complete observation for the missing value. The missing data can be selected both from the entire set of complete observations and from some subgroup (cluster) to which the target object belongs. To fill the gap for the selected characteristic of the target object, the value of this characteristic is taken from the object closest to the target. The type of distance function for determining the missing value is selected based on the type of data being studied, as well as on ideas about the nature of the relationship between the variables.
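As a concrete illustration of the Hot Deck mechanism just described, here is a minimal R sketch (illustrative, not the paper's software) that fills each gap with the value from the nearest complete observation under a Euclidean distance on the observed characteristics; it assumes every incomplete row has at least one observed characteristic and at least one complete donor row exists:

```r
# Sketch: Hot Deck imputation with a nearest-neighbor donor.
hot_deck <- function(X) {
  X <- as.matrix(X)
  ok <- stats::complete.cases(X)
  donors <- X[ok, , drop = FALSE]          # complete observations
  for (i in which(!ok)) {
    miss <- is.na(X[i, ])
    # Cartesian distance to the target row on the observed columns only.
    d <- apply(donors[, !miss, drop = FALSE], 1,
               function(r) sqrt(sum((r - X[i, !miss])^2)))
    X[i, miss] <- donors[which.min(d), miss]  # copy from the closest donor
  }
  X
}
```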
The ZET method selects each value to fill a gap not over the entire set of observations, but from some part of it, called the competent matrix, made up of competent rows and columns. The competence of a certain row is a value inversely proportional to the Cartesian distance to the target row (the incomplete observation with a gap) in the space whose axes are the variables (the characteristics of the objects) [9,10]. Based on the competent matrix data, a functional dependence of the predicted value on the corresponding value in the competent matrix is then built, on the basis of which the missing value is predicted. Bartlett's method consists of two stages: at the first stage, initial generated values are substituted for the gaps; at the second stage, a covariance analysis of the target variable is carried out and a dichotomous indicator of the completeness of observations for the target variable is constructed. Resampling is an iterative method that involves replacing rows with missing data by randomly selected rows from the matrix of complete observations, and then constructing a regression equation to predict the missing value. The regression modeling procedures are repeated several times, after which the values of the obtained regression coefficients are averaged to produce the final value, which gives the maximum forecast accuracy for the missing value [11,12]. The ZET Braid method contains a mechanism for the objective selection of the competent matrix dimension. A sequential selection of competent rows and columns is carried out, and each time a new competent matrix is formed. Then, according to a given criterion, its effectiveness in predicting the gaps is determined [13,14]. It is proposed to consider each of these methods as the opinion of a separate expert, and to use multisets as a mathematical model for the presentation of the expert assessments [15,16].

Results Algorithms for human capital reproduction data analysis. The tasks of managerial decision-making in complex human capital reproduction systems are solved on the basis of models built from experimental data. Both accounting for all factors affecting human capital reproduction in specific conditions and the complexity of collecting reliable information are problematic [17]. There are many algorithms [18-22] used in problems of modeling human capital reproduction, but not all of them are designed for building models of complex systems under incomplete information. It is advisable to use the methods and tools of inductive modeling, designed primarily for the functional description of system characteristics for human capital reproduction. Iterative algorithms have been developed that solve the problem of constructing models from a sample of observations of the input variables (the estimates produced by Expectation maximization, Hot Deck, ZET, Bartlett's method, Resampling, and ZET Braid) and one output variable, the multiplicity of the multiset. The following types of algorithms for the intellectual analysis of human capital reproduction have been implemented. A multi-row algorithm, where in the process of calculation, at each iteration (selection series) r, intermediate partial models are formed from all possible pairs of arguments,

y_k^{r+1} = f(y_i^r, y_j^r),  i = 1, …, F,  j = i + 1, …, F,   (1)

where y_i^r are the best outputs of the previous series r.
A relaxation algorithm, in which pairs are formed from intermediate and initial arguments:

y_k^{r+1} = f(y_i^r, x_j),  i = 1, …, F,  j = 1, …, n.   (2)

A combined algorithm, where pairs are formed from both intermediate and initial arguments, that is, it combines the two previous algorithms:

y_k^{r+1} = f(v_i, v_j),  v ∈ {y^r, x}.   (3)

A generalized algorithm, where pairs are formed as in the combined algorithm, and combinatorial optimization of the complexity of all partial models is also applied. The three previous types of algorithms are special cases of the generalized one. In this algorithm, the combinatorial optimization of the complexity of the partial models consists in the fact that on each row, models of the following form are considered:

y_k^{r+1} = f(θ_1 v_1, …, θ_n v_n),  θ_l ∈ {0, 1},   (4)

where the θ_l are the elements of a binary structure vector taking the value 1 or 0, depending on the inclusion or exclusion of the corresponding argument. This optimizes the complexity of the partial model. All algorithms look for the optimal model as the solution of the optimization problem

m* = arg min_m CR(m),   (5)

where CR is the regularity criterion. An illustration of one selection series is sketched after this paragraph. Client-server software has been developed that implements the proposed algorithms for the intellectual analysis of human capital reproduction. The client-server interaction in the software package distributes functionality between the client and server parts into so-called "operational levels": • a user interface responsible for data presentation and timely response to user commands; • a server responsible both for the level at which the information received from the user is processed and for the data management level of human capital reproduction, ensuring storage of and access to the data. The software package consists of several blocks: a data storage unit, which stores input and output data as well as intermediate results; a task formation block, in which control parameters are set; and a problem-solving block, in which the modeling process can be performed in three modes (two automatic and one interactive). Using the data storage unit, given an initial sample of human capital reproduction data, it is possible to split the data into projects, store intermediate calculations for further continuation of the modeling process, save the final calculation results, and use the resulting models on new data. The system operates simultaneously with three databases: the initial database, the calculation database, and the results database. After the initial sample is generated or obtained, the task formation block is used. At this stage, the sample is divided into two parts, training and testing: the model coefficients are estimated on the training sample, and the best models of human capital reproduction are selected on the test sample based on the regularity criterion. In the process of generating a data sample, it is possible to set the type of sample splitting, the noise level, the external criterion, and the modeling algorithm. Further, depending on the use of various modifications of algorithms (1)-(5), models of different complexity are generated, and for each of them the criterion value is calculated, by which they are selected for the next series. In the software package, the process of modeling human capital reproduction can be implemented in three modes (two automatic and one interactive).
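The following R sketch shows one selection series of the multi-row algorithm (1); the quadratic form of the partial model f, the hold-out criterion, and all names are illustrative assumptions rather than the paper's exact implementation:

```r
# Sketch: one selection series of a multi-row (inductive-modeling) algorithm.
# Build partial models for every pair of current arguments, score them on a
# hold-out part by a regularity-type criterion, and keep the best F_keep.
gmdh_layer <- function(Z_train, y_train, Z_test, y_test, F_keep = 5) {
  pairs <- utils::combn(ncol(Z_train), 2)
  score <- function(i, j) {
    d_tr <- data.frame(a = Z_train[, i], b = Z_train[, j], y = y_train)
    fit  <- lm(y ~ a + b + I(a * b) + I(a^2) + I(b^2), data = d_tr)
    pred <- predict(fit, data.frame(a = Z_test[, i], b = Z_test[, j]))
    sum((y_test - pred)^2) / sum(y_test^2)   # normalized hold-out error
  }
  cr <- apply(pairs, 2, function(p) score(p[1], p[2]))
  pairs[, order(cr)[seq_len(min(F_keep, length(cr)))]]  # best argument pairs
}
```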
The automatic mode (in which the process of model self-organization is performed automatically) is implemented in two versions: standard, where the same type of partial description is set for all rows without exception; and planned, where the self-organization of models is performed automatically according to a given plan, that is, the type of partial description is set differently for different series. In the interactive mode it is possible to participate directly in the model self-organization process: • include or exclude modifications on any row; • change the complexity of the partial description models; • choose a different number of models that will move to the next row; • use different criteria for selecting the best models. The developed program interface is a set of tools through which the user can control the modeling process. The following interface features have been implemented: the self-organization process can be stopped at any stage of the calculation and later resumed at any time, with all intermediate calculations saved; on any row, it is possible to include or exclude different modifications, change the complexity of the partial description models, choose a different number of models that will go to the next row, and change the criteria for choosing the best models. The database stores raw data, calculation data, and results data. Given an initial sample, it is possible to split the data into projects, store intermediate calculations for further continuation of the modeling process, save the calculation results, and use the resulting models on new data. Depending on the optimization options used, models of different complexity are generated, and for each of them the criterion value is calculated, by which they are selected for the next series. Such a structural arrangement makes it possible to experiment with the input data at each stage, thereby interactively changing the structure of the algorithm for analyzing data on human capital reproduction.

Discussion In the tasks of intelligent processing of human capital reproduction data, the greatest difficulty remains the need to classify uncertainties of different types and the resulting gaps and imprecise values. Efficient algorithms are needed for handling the uncertainties and associated missing values that are specific to this area. The main goal of the analysts' work is precisely the identification and development of management decisions that may be typical for solving various problems of increasing the efficiency of human capital reproduction. A step-by-step solution to the problem of filling in missing data on human capital reproduction involves analyzing the essence of the process described by a certain sequence of data, selecting the structure of the model, choosing adequate data mining methods to fill in the missing data, and implementing these methods with modern tools.

Conclusions The developed software allows working with different data sets, performing planned computational experiments, and solving practical problems of the intellectual analysis of human capital reproduction. The constructed models are provided by the system for graphical and content analysis and are stored in the database for further use. An information system has been implemented in which the modeling process can be performed in automatic and interactive modes.
The regularity criterion in (5) is based on dividing the sample into two parts W_1 and W_2 with volumes n_1 and n_2, n_1 + n_2 = n, where β̂_1 is the estimate of the parameters on the subsample W_1.
Intensive luminescence from a thick, indium-rich In0.7Ga0.3N film An In0.7Ga0.3N layer with a thickness of 300 nm deposited on a GaN/sapphire template by molecular beam epitaxy has been investigated by highly spatially resolved cathodoluminescence (CL). High crystal quality without phase separation has been achieved. The InGaN layer shows intense emission in the IR spectral region. The lateral as well as the vertical luminescence distribution is used to probe the homogeneity of the In composition ([In]): the thick InGaN film exhibits a laterally rather homogeneous emission intensity at 1.04 eV (∼1185 nm) with a FWHM of only 63 meV. Carrier localization into regions of enhanced In concentration, originating from compositional fluctuations, is revealed. The evolution of the emission in the growth direction has been explored by a cross-sectional CL linescan showing a slight spectral redshift from the bottom to the surface of the InGaN film, corresponding to an increase of [In] of only 0.5% within the layer thickness of 300 nm. Introduction In_xGa_{1−x}N alloys with an indium composition over 50% have recently gained considerable interest due to their tunable bandgap within the green-red and IR spectral regions. The large difference in decomposition temperature between InN and GaN and the theoretically predicted large region of solid-phase immiscibility lead to significant difficulties in achieving homogeneous epitaxial films during InGaN growth. Typically, phenomena such as indium clustering and phase separation are reported. 1-3) Up to now, there are only a few published reports on In-rich InGaN films with In content > 50% grown by metal organic chemical vapor deposition (MOCVD) as well as molecular beam epitaxy (MBE). 3-8) Calculated InN/GaN phase diagrams show large miscibility gaps at growth temperatures between 600 and 850 °C, which are typically applied for MOCVD and MBE. 9) Given the low dissociation temperature of InGaN, MBE growth shows advantages over MOCVD in the low-temperature regime, since in MBE nitrogen atoms can be supplied by a plasma source independently of the substrate temperature. 10-13) Moreover, it is necessary to keep the growth surface under a metal-rich condition, i.e. an In-rich condition for InGaN epitaxy, in order to remain in the two-dimensional growth mode that yields flat surfaces. 5) However, in an In-rich regime, excess indium adatoms may not be evaporated in time, giving rise to indium droplet formation on the surface, a common behavior in the case of InGaN growth by MBE. 14,15) A non-uniform distribution of In atoms in the InGaN alloy, either by random atomic-scale fluctuations or by In clustering on different length scales, results in fluctuations of the local bandgap, which have a strong impact on the optical properties due to the localization of excess carriers. 11,16-19) The authors of Ref. 5 have performed an extensive study with spatially averaging techniques. As an indirect result, an "S-shaped" temperature dependence of the peak wavelength is measured for thick InGaN alloys as well as for InGaN quantum well structures. 20-22) Here, we use a nm-scale spatially resolved optical mapping technique to monitor the spatially changing potential landscape induced by indium fluctuations. 23,24) We refer to the pioneering work of Refs. 11, 12, 23, which analyzed films with In concentrations below 20%.
Generally, the weak luminescence of In-rich films and the relatively poor detector efficiency in the IR spectral region make it challenging to directly visualize the optical properties of In-rich InGaN films ([In] > 50%) in a spatially resolved manner. In this work, a comprehensive investigation of a 300 nm thick In0.7Ga0.3N layer deposited on a GaN/sapphire template at 540 °C by MBE has been performed by atomic force microscopy (AFM), X-ray diffraction (XRD), and highly spatially resolved cathodoluminescence (CL). Experimental methods The InGaN film has been grown by plasma-assisted molecular beam epitaxy (PA-MBE) at 540 °C. Conventional dual-filament Knudsen cells are used as the Ga and In sources, and an RF N2 plasma cell as the nitrogen source. A GaN layer with a thickness of about 4 μm deposited on a sapphire substrate by MOCVD is the template used for MBE growth. After a 200 nm thick GaN buffer layer is grown on the GaN template, 300 nm thick InGaN films were grown using the growth-temperature-controlled-epitaxy method. 25) More detailed information on the growth technique can be found elsewhere. 15) In-rich growth conditions are required to realize high-quality growth of InGaN. Thus, a high In/Ga flux ratio has been used in this work, and the indium composition is actually determined by the growth temperature. From our previous study, we expect a residual background concentration of 10^18 cm^−3. 26) The surface morphology was characterized by a Bruker Dimension Icon AFM. The crystal quality, strain state and averaged In composition were determined by XRD measurements in a Bruker D8 X-ray diffraction system. The investigation of the optical properties has been realized by highly spatially resolved CL in a home-built CL-SEM system. 27) In our plan-view CL experiments, the kinetic energy of the incident electron beam was set to 5 keV, leading to a penetration depth of about 360 nm. 28-30) Realizing a scanning-transmission-electron-microscopy (STEM)-mode-like configuration inside our SEM, the cross-section measurements were performed at an SEM acceleration voltage (V_acc) of 25 kV on a 100-300 nm thin specimen, where the majority of the incident electrons are transmitted. The sample was prepared following conventional TEM preparation recipes (mechanical polishing followed by Ar+ ion milling in a liquid-nitrogen-cooled precision ion polishing system) and was mounted on a dedicated SEM holder with a hole, as well as an electron-absorbing graphite plate, for the STEM-mode configuration. Measurements took place at beam currents of I_Beam = 1.6 nA (plan-view) and I_Beam = 650 pA (cross-section), respectively, and at liquid helium temperature (T = 5 K), recording the emitted luminescence with a liquid-nitrogen-cooled InGaAs array detector. Structural characterization The surface morphology across a 3 × 3 μm² scanned area obtained by AFM is depicted in Fig. 1. As mentioned above, growth under an In-rich condition may lead to In droplets, which are usually removed by chemical etching after growth. 14,15) Remarkably, the InGaN surface presented in Fig. 1 exhibits a droplet-free, grain-like structure with an rms roughness of about 2.31 nm. Lateral luminescence distribution The low-temperature (T = 5 K), spatially averaged CL spectrum recorded from the surface is shown in Fig. 3(a) and reveals an intense, but broadened and symmetric luminescence band from the InGaN layer with a peak wavelength at 1185 nm and a FWHM of 63 meV. Sharper linewidths are reported in the literature for layers with lower Ga content. 5)
No emission from the GaN buffer layer can be seen in plan-view due to the large thickness of the InGaN layer. The broad InGaN emission is modulated by Fabry-Pérot thickness interferences. The luminescence peak energy is in accordance with an In composition range of 68%-78%, assuming a bowing parameter in the wide range from 1.8 to 2.8 eV, 6,31,32) and supposing that the substrate-induced strain can be neglected. The CL intensity image (CLI) of Fig. 3(b), taken over 15 × 12 μm², shows an intense and largely homogeneous distribution of luminescence. We found a few dark lines that are due to scratches during template handling (not shown here). However, these scratches are not seen as morphological features of the InGaN in AFM or SEM images. Structural damage of the buffer layer due to scratching reduces the CL emission intensity of the overgrown InGaN film, possibly due to dominant non-radiative recombination at the GaN-MBE/GaN-MOCVD interface. The penetration depth of the incident electron beam perfectly matches the vertical position of the scratches within the layer stack. Other than the network of dark lines, only slight intensity modulations are resolved in the CLI, proving a homogeneous InGaN film in the area between the scratches. In Fig. 3(c), the CL wavelength image (CLWI), i.e., a map of the local peak emission wavelength across an image of 256 × 200 points, is depicted. We are able to resolve wavelength changes on two different length scales: an overall continuous gradient across the whole sample, as well as smaller variations on a lateral scale below 1 μm. For carrier localization, such small fluctuations play the dominant role. Since the emission peak of the thick InGaN alloy can be directly attributed to a landscape of potential minima, the CLWI can be transformed into a map of the infimum In composition. Chemical disorder in III-nitride semiconductor alloys refers to a substitution of atom species A by a second species B on a group-III lattice site, leading to a local change of the regular numbers of atoms A and B, respectively. In a volume of a perfectly random alloy, the probability of finding one atom species is given by a Poisson distribution. Beyond the Poisson distribution, inhomogeneity on different length scales is possible: short-range ordering effects caused by chemical affinities, correlated fluctuations across multiple sites, as well as point and extended defects, force deviations from a pure Poisson distribution. On an even larger scale, phase separation and clustering may occur. In our sample, the continuous wavelength shift in the CLWI is caused by a temperature gradient across the wafer during growth, leading to a slight In-concentration discrepancy. Since the scanned area in the shown CL map (Fig. 3) is small, the overall gradient is not recognizable here. The small wavelength variation below 1 μm is caused by indium fluctuations. To statistically analyze these fluctuations quantitatively, we calculated the histograms of the CLWIs: the frequency of pixels in the CLWI maps emitting at a given peak energy is plotted versus the photon energy for all 51200 pixels of the map. For an alloy without local phase separation but with purely statistical local indium fluctuations, a monomodal statistical distribution function results, which converges to a Gaussian distribution for perfectly random alloys. 33) The standard deviation σ of the statistical distribution gives information on the disorder in the alloy.
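The In composition range quoted above follows from the standard bowing relation for the alloy band gap. A sketch of the estimate is given below; the low-temperature end-point gaps E_g(GaN) ≈ 3.50 eV and E_g(InN) ≈ 0.69 eV are assumed values not stated in the text:

```latex
% Bowing relation for the In_xGa_{1-x}N band gap; b is the bowing parameter.
\[
  E_g(x) = x\,E_g^{\mathrm{InN}} + (1-x)\,E_g^{\mathrm{GaN}} - b\,x(1-x)
\]
% With the assumed E_g^{InN} = 0.69 eV, E_g^{GaN} = 3.50 eV and b = 2.3 eV,
% x = 0.7 gives E_g ~ 0.7(0.69) + 0.3(3.50) - 2.3(0.21) ~ 1.05 eV, close to
% the measured peak at 1.04 eV. Varying b between 1.8 and 2.8 eV shifts the
% inferred x by several percent, consistent with the quoted 68%-78% range.
```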
For [In] = 50%, the standard deviation should reach its maximum (assuming strain-free conditions). The resulting histogram (dotted red line) is plotted in Fig. 3(a) together with the corresponding spatially averaged spectrum. The maximum of the wavelength distribution matches the integral spectrum. The standard deviation σ = 7 meV is relatively large. From the spectrally wide features in the histogram we can conclude that local inhomogeneity of the InGaN layer leads to the broadening of the integral spectrum. The reasons for the broad distribution in the shown CLWI are the local variation of the In composition on the micrometer scale as well as the shift of the In concentration across the wafer. In agreement with the XRD data, we did not find any indication of phase separation in CL, since the distribution of wavelengths is purely statistical and monomodal, with a single emission line in the low-temperature spectra. 16) Nevertheless, indium clustering is directly visible on a sub-micrometer scale in the wavelength image. On the other hand, alloy disorder on the atomic level cannot be resolved in our experiment, as the radiative recombination inherently averages over the diffusion distance of the generated excess electrons and holes, which is of the order of several lattice constants. Moreover, the spatial resolution of our SEM-CL setup does not reach the required resolution. With the help of STEM-CL, which offers a much higher spatial resolution, such questions can be addressed. 34-39) Evolution of emission in growth direction The volume of the electron beam-solid interaction determines the spatial resolution of a CL experiment and is known as the Bethe range for bulk materials. 28) A drastic reduction of this interaction volume is achieved by using a thin specimen instead of a bulk sample, as in TEM. 28) Since the InGaN layer in this study is only 300 nm thick, we have applied this approach for the cross-section characterization of our sample to increase the spatial resolution. The InGaN layer was prepared with a manual TEM preparation technique in a face-to-face sandwich configuration to achieve a wedge-shaped specimen. Subsequently, the specimen was analyzed using our standard SEM-CL, but with a relatively high acceleration voltage (Vacc = 25 kV) to achieve a STEM mode with a small scattering volume. The expected Goldstein range, i.e., the range of scattered electrons within the thickness of the specimen, is of the order of 10 nm. 40,41) Figure 4 displays the CL linescan across the specimen to characterize the luminescence evolution in the growth direction. The region where the two pieces of the face-to-face prepared specimen are glued together is blackened to improve the contrast. The sharp near-band-gap emission of GaN (3rd and 4th order at λ = 1072 nm and λ = 1429 nm, respectively) as well as the InGaN luminescence is visible. The broad InGaN emission is modulated by Fabry-Pérot thickness interferences and appears at longer wavelengths (λpeak = 1250 nm) than in the plan-view measurements due to the different sample position and the In-composition gradient across the wafer. Nevertheless, a slight shift within the linescan to longer wavelengths from the InGaN/GaN interface to the surface can be observed (Δλ = 10 nm). This redshift of the emission can be caused either by the relief of substrate-induced stress or, more likely, by an increase of the In concentration along the growth direction (calculated from the emission shift: Δ[In] = 0.5% 6) ).
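The conversion between emission energy and In content used above rests on the standard bowing relation for the InGaN band gap. Below is a minimal sketch; the end-point band gaps are assumed textbook-range values and are not numbers taken from this work, while the bowing parameters span the 1.8-2.8 eV range the paper itself considers:

```python
import numpy as np
from scipy.optimize import brentq

E_GAN, E_INN = 3.42, 0.69  # assumed low-temperature band gaps (eV)

def eg_ingan(x, b):
    """Band gap of In(x)Ga(1-x)N with bowing parameter b (eV)."""
    return x * E_INN + (1.0 - x) * E_GAN - b * x * (1.0 - x)

def indium_content(e_peak, b):
    """Solve eg_ingan(x, b) = e_peak for the In fraction x (In-rich branch)."""
    return brentq(lambda x: eg_ingan(x, b) - e_peak, 0.5, 1.0)

e_peak = 1239.84 / 1185.0  # peak wavelength 1185 nm -> ~1.046 eV
for b in (1.8, 2.8):
    print(f"b = {b} eV -> [In] ~ {100 * indium_content(e_peak, b):.0f}%")
```

With these assumed end points the inferred In fractions land in the mid-60s to mid-70s percent, illustrating why the paper quotes a composition range rather than a single value when the bowing parameter is uncertain.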
We excluded excitation-dependent effects by repeating the CL linescan at different beam currents. Conclusions An analysis by structural and optical means of an MBE-grown, 300 nm thick In0.7Ga0.3N layer is presented that excludes the presence of phase separation. While the strain state of the layer is almost fully relaxed, it still exhibits intense emission around 1200 nm, allowing for a CL-based mapping of the optical properties. In plan-view, a laterally homogeneous emission intensity is found. Slight In fluctuations as well as a growth-temperature-induced compositional gradient across the wafer are revealed. In cross-section STEM-CL experiments, a slight increase of the In composition of about 0.5% during the growth of the 300 nm thick layer is observed. These results demonstrate the feasibility of high-quality InGaN materials with In compositions above 50% when grown by MBE. 3-8)
Temperature Profile of Produced Gas in Oil Palm Biomass Fluidized Bed Gasifier: Effect of Fibre/Shell Composition Ratio Azali A1, Sapuan SM2, Rahman SA1, Adam NM2, Hasan M3 and Enamul Hoque M4* 1Malaysian Palm Oil Board, Malaysia 2Department of Mechanical and Manufacturing Engineering, Universiti Putra Malaysia, Malaysia 3Department of Materials and Metallurgical Engineering, Bangladesh University of Engineering and Technology, Dhaka 1000, Bangladesh 4Department of Mechanical, Materials and Manufacturing Engineering, University of Nottingham Malaysia Campus, Jalan Broga, 43500 Semenyih, Selangor, Malaysia Introduction With strong economic growth and rapid development, the demand for energy is ever increasing. Presently, one of the major challenges that developed and developing countries face is an ample supply of energy. To date, various alternative energy sources (e.g. biofuel, biogas etc.) have been investigated to find sustainable solutions to the ever-increasing energy demands [1,2]. In Malaysia, oil palm biomass can be a major potential source for energy production. It is a new renewable source of energy, which could serve 5% of the total energy consumption in Malaysia [3-7]. The gasification process is one of the simplest, cleanest and most efficient methods for the production of useful gases from low- or negative-value carbon-based feedstock, such as coal, petroleum coke and high-sulfur fuel oil, that would otherwise be disposed of as waste [8-11]. Nowadays, gasification is becoming a more attractive and efficient energy-conversion technology for a wide variety of oil palm biomass fuels. The large-scale deployment of efficient technology, along with interventions to enhance the sustainable supply of oil palm biomass fuels, can transform the energy supply situation in rural areas. Gasification has gained recognition as a potential technique to become the growth engine for rural development in the country. Understanding the methodology of gasification is of fundamental importance for the optimal design of an oil palm biomass fluidized bed gasifier [12-14]. Temperature often plays an important role in the behavior of produced liquids and semi-liquids. Kanagaratnam et al. observed that the oil-binding capacity of a palm oil based shortening (a semi-liquid) was affected by temperature [15]. Likewise, the effects of various factors, such as air temperature, moisture content and conditions in the gasifying chamber, on gas production have also been studied. Schoeters et al. developed a fluidized bed gasifier with beds moving in parallel to each other [16]. The fluidized bed gasifier is better suited for low-density and non-free-flowing materials. They also investigated the effect of variables, such as air factor, volumetric throughput, steam and oxygen addition and feedstock properties, on gasifier performance (gas quality, thermal efficiency). Boateng et al. [17] employed fluidized bed gasification of rice hulls and demonstrated the effectiveness of the Fluidized Bed Gasifier (FBG) approach. Their method used steam as the reactant in the FBG, where the heating value was found to be between 11.1 and 12.1 MJ m^-3 at reactor temperatures of 700-800°C, with the moisture content (MC) of the rice hulls varied between 35 and 59%. Azali et al. [12] suggested gasification technology for palm oil mills as an alternative to the existing boiler-based system.
A gasification system produces a combustible gas and feeds this syngas directly to a gas generator or engine to generate electricity. A boiler performs a similar function by generating steam that drives a turbine to generate electricity. Azali studied the gasification system for fuel treatment, in which the moisture content of oil palm biomass poses a major obstacle [18]. Figure 1 shows a flowchart of gasification developed in the oil palm industry, while Figure 2 describes the flow process of fibre and shell treatment before use as fuel. Figure 3 represents a basic diagram of the oil palm biomass fluidized bed gasifier, into which the oil palm biomass flows at different ratios. The gasifier was made of stainless steel pipe; the total height of the gasifier was 850 mm, with an internal diameter of 50 mm. Prior to each experiment, the gasifier was charged with 20 g of silica beads as the bed material to obtain a better temperature distribution, to stabilize the fluidization and to prevent coking inside the reactor. The solenoid valve was turned on and a pre-heated air flow passed through the bed and the reactor once the temperatures in the bed and in the gasification zone reached the desired values. The fuel flow was set at 20 kg per hour, with a screw conveyor as the feeder. In this experiment, baseline data were first obtained for gasifying oil palm biomass. In addition, gasification runs with different ratios of fibre and shell were carried out to investigate their gasification characteristics, comparing mixture ratio, air flow and gas produced. The fibres in the EFB have various lengths, and on each fibre the diameter varies from one end to the other, with the largest diameter located around the middle. The diameter usually varies between 0.4 and 0.7 mm. The fibre has a tensile strength of 50-300 MPa, a Young's modulus of 6-18 GPa and a moisture content of around 15%. On the other hand, the EFB shell comprises 24-26% lignin, 22-25% cellulose, 24-27% hemicelluloses and 8-10% moisture. The calorific value of the shell is also higher than that of the EFB fibre. The gasification process was carried out for different fibre/shell ratios of the oil palm biomass: 80% fibre and 20% shell, 60% fibre and 40% shell, 60% shell and 40% fibre, and 80% shell and 20% fibre. In the gasifying process, the excess air flow was varied from 1 to 2 m/s at intervals of one hour for each oil palm biomass mixture. For each excess air condition, a 4 kW heater was used to pre-heat the air to 400°C, with the total airflow at the inlet maintained by an air damper. Fuel feeding was performed by a screw conveyor at a set speed of 2 rpm. The gasifying tests were operated at a bed temperature of 950°C, air velocities in the range of 1-2 m/s and at atmospheric pressure. Calorific value of oil palm biomass (fibre and shell) The properties of the oil palm biomass, such as mixing ratio, temperature behavior, air velocity and gas production, as well as the operating parameters, were determined. In addition, a calorific value analysis of the oil palm biomass was performed, and the heating profile of each mixture ratio was tested during gasification. The experiments were divided into four types of oil palm biomass mixture (fibre and shell): 80% fibre and 20% shell, 60% fibre and 40% shell, 60% shell and 40% fibre, and 80% shell and 20% fibre. Prior to delivering the oil palm biomass into the gasifier chamber, a sample was taken to measure its Calorific Value (CV). Table 1 presents a summary of the calorific values of the oil palm biomass used in this study.
This table shows that the oil palm biomass fuel with the lowest CV is 80% fibre and 20% shell, yielding an average of 18.82 MJ per kg, while the highest CV, 19.38 MJ per kg, was obtained for 80% shell and 20% fibre. The CV data of the oil palm biomass components are as follows: the fibre had a CV of 18.5 MJ/kg, while the shell had a CV of 20.72 MJ/kg. 1 kg of biomass was used for calculating the CV of each biomass ratio, as a mass-weighted average of the component CVs (a sketch is given at the end of this section). Figure 4 shows a plot of the average experimental and theoretical CV values. A difference of 1-5% was observed between theoretical and experimental values, which may be due to imperfections in the raw material. This is consistent with the study by Eris [19], who found that fibre had a CV of 13.57 MJ/kg and shell had a CV of 16.37 MJ/kg. In the experiments, the Moisture Content (MC) of the oil palm biomass was also controlled to ≤15% on a dry basis. Brammer and Bridgwater claimed that the moisture content of material delivered to a gasification system should be minimized, hence a drying stage is required [20]. The drying carried out should be maximized at the expense of exported heat. Bain [21] as well as Brammer and Bridgwater [20] reported that drying a biomass material means removing water from the solids to reduce the moisture content to an acceptably low value. They also mentioned that a combustion efficiency of around 79% could be obtained using the dried biomass. The above-mentioned literature proposes that moisture must be removed effectively from biomass, and this has been done in this research. Effect of temperature When the oil palm biomass was delivered into the gasifier, a strong flame was initially observed leaving the outlet chamber. After five minutes, the flames disappeared. This observation is in agreement with the work of Azlina [22], who reported that a reduction of temperature occurred when the oil palm biomass was fed into the combustion chamber. Figures 5-14 show the temperature versus time curves for a fuel feed of 20 kg/hour under steady state conditions for the different fibre/shell ratios. The air flow speed was maintained at up to 1 m/s during the process. The results of this study are in good agreement with the results of Van Paasen and Kiel [23] from their work on tar formation in a fluidized bed gasifier, where the selected temperature range was between 500 and 800°C. Figure 5 depicts that, for the case of 80% fibre and 20% shell, during the feeding of fuel the curve increased linearly with time over the temperature range of 450-780°C. Then, in the temperature range of 650-700°C, the steady state condition (fluctuations in value ≤10%) was observed for 15 minutes, with subsequent collection and measurement of the syngas produced. Figure 6 shows that, for the case of 60% fibre and 40% shell, during the feeding of fuel the curve increased linearly with time over the temperature range of 500-780°C. Then, in the temperature range of 700-780°C, the steady state condition was observed for 18 minutes. Figure 7 shows that, for the case of 60% shell and 40% fibre, during the feeding of fuel the curve increased linearly with time over the temperature range of 550-810°C. Then, in the temperature range of 740-810°C, the steady state condition was observed for 30 minutes.
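The steady-state criterion used in these measurements (a window in which the temperature fluctuates by no more than 10%) lends itself to a simple programmatic check. This is a minimal sketch assuming a regularly sampled temperature trace; the sampling interval, window length and array values are illustrative, not data from the paper:

```python
import numpy as np

def steady_state_minutes(temps, dt_min=1.0, tol=0.10, window=5):
    """Return the longest run (in minutes) over which the temperature
    stays within +/- tol of the running window mean -- the paper's
    'fluctuations <= 10%' steady-state criterion."""
    best = run = 0
    for i in range(window, len(temps)):
        mean = np.mean(temps[i - window:i])
        if abs(temps[i] - mean) <= tol * mean:
            run += 1
            best = max(best, run)
        else:
            run = 0
    return best * dt_min

# Example: a temperature trace sampled once per minute.
temps = np.array([450, 520, 600, 660, 680, 690, 685, 695, 688, 692, 690.])
print(f"steady state held for ~{steady_state_minutes(temps):.0f} min")
```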
The trend revealed by Figure 8 indicates that, for the case of 80% shell and 20% fibre, during the feeding of fuel the curve increased linearly with time over the temperature range of 550-850°C. Then, in the temperature range of 770-850°C, the steady state condition was observed for 30 minutes. Figures 9-12 show the temperature versus time curves for the different fibre/shell ratios at an air flow rate of 2 m/s. From Figure 9, for the case of 80% fibre and 20% shell, during the feeding of fuel the curve increased linearly with time over the temperature range of 470-760°C. Then, in the temperature range of 670-760°C, the steady state condition was observed for 25 minutes. For the case of 60% fibre and 40% shell in Figure 10, during the feeding of fuel the curve increased linearly with time over the temperature range of 510-760°C. Then, in the temperature range of 680-760°C, the steady state condition was observed for 15 minutes. Figure 11 depicts that, for the case of 60% shell and 40% fibre, during the feeding of fuel the curve increased linearly with time over the temperature range of 540-810°C. Then, in the temperature range of 730-810°C, the steady state condition was observed for 30 minutes. From Figure 12, it was observed that, for the case of 80% shell and 20% fibre, during the feeding of fuel the curve increased linearly with time over the temperature range of 550-850°C. Then, in the temperature range of 750-850°C, the steady state condition was achieved for 30 minutes. Marcelo et al. [24] used biomass such as sugar-cane bagasse, rice husk, sawdust and elephant grass (Pennisetum purpureum) in their gasifier and achieved 750°C, which is similar to this work on oil palm biomass. In this study, the longest period of steady state, 30 minutes, occurred when using 80% shell and 20% fibre at an airflow of 1 m/s. This result also indicates that complete gasification was achieved, because the output temperature was ≥700°C. Figure 13 shows the detailed temperature versus time curves for all conditions at a 1 m/s airflow rate, whilst Figure 14 shows the results for an airflow rate of 2 m/s. From both figures, a temperature difference of 1-5% between experimental and simulation results was obtained, which is in good agreement and consistent with the literature [25]. Conclusions The experiments were carried out by mutually varying the fibre/shell composition ratio from 20 to 80%. The overall findings of this study are concluded as follows: • The highest calorific value (19.38 MJ/kg) was achieved for the composition of 80% shell and 20% fibre, while the lowest calorific value (18.82 MJ/kg) was achieved for the composition of 80% fibre and 20% shell.
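As referenced in the calorific-value section above, the theoretical CV values can be reproduced with the mass-weighted mixing rule implied by the text. This is a minimal sketch using the component CVs quoted earlier (18.5 MJ/kg for fibre, 20.72 MJ/kg for shell); the function and variable names are ours, not from the original paper:

```python
def mixture_cv(fibre_fraction, cv_fibre=18.5, cv_shell=20.72):
    """Theoretical calorific value (MJ/kg) of a fibre/shell mixture,
    computed as a mass-weighted average for 1 kg of biomass."""
    return fibre_fraction * cv_fibre + (1.0 - fibre_fraction) * cv_shell

for f in (0.8, 0.6, 0.4, 0.2):
    print(f"{int(f * 100)}% fibre / {int((1 - f) * 100)}% shell: "
          f"{mixture_cv(f):.2f} MJ/kg")
# The 1-5% gap between these theoretical values and the measured
# 18.82-19.38 MJ/kg range is attributed to raw-material imperfections.
```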
All-Atom Structural Models of the Transmembrane Domains of Insulin and Type 1 Insulin-Like Growth Factor Receptors The receptor tyrosine kinase superfamily comprises many cell-surface receptors, including the insulin receptor (IR) and the type 1 insulin-like growth factor receptor (IGF1R), which are constitutively homodimeric transmembrane glycoproteins. These receptors therefore require ligand-triggered domain rearrangements, rather than receptor dimerization, for activation. Specifically, binding of peptide ligands to the receptor ectodomains transduces signals across the transmembrane domains for trans-autophosphorylation in the cytoplasmic kinase domains. The molecular details of these processes are poorly understood, in part due to the absence of structures of full-length receptors. Using MD simulations and enhanced conformational sampling algorithms, we present all-atom structural models of peptides containing 51 residues from the transmembrane and juxtamembrane regions of IR and IGF1R. In our models, the transmembrane regions of both receptors adopt helical conformations with kinks at Pro961 (IR) and Pro941 (IGF1R), but the C-terminal residues corresponding to the juxtamembrane region of each receptor adopt unfolded and flexible conformations in IR, as opposed to a helix in IGF1R. We also observe that the N-terminal residues in IR form a kinked helix sitting at the membrane-solvent interface, while the homologous residues in IGF1R are unfolded and flexible. These conformational differences result in a larger tilt angle of the membrane-embedded helix in IGF1R in comparison to IR, to compensate for interactions with water molecules at the membrane-solvent interfaces. Our metastable/stable states for the transmembrane domain of IR, observed in a lipid bilayer, are consistent with a known NMR structure of this domain determined in detergent micelles, and similar states in IGF1R are consistent with a previously reported model of the dimerized transmembrane domains of IGF1R. Our all-atom structural models suggest potentially unique structural organization of the kinase domains in each receptor. Introduction Insulin receptor (IR) and type 1 insulin-like growth factor receptor (IGF1R) are homologous, ligand-activated, and constitutively homo-dimeric transmembrane glycoproteins of the receptor tyrosine kinase (RTK) superfamily (1). Both IR and IGF1R show similarities in primary sequences, structural topologies, functions, and binding affinities for peptide ligands such as insulin and insulin-like growth factors (IGFs) (2-13). Structurally, each receptor subunit is composed of three large protein fragments: the extracellular part (also known as the ectodomain), the intracellular part (containing the kinase domains), and a single-pass transmembrane domain (TMD) that connects the extracellular and intracellular fragments. Specifically, the TMD as well as the catalytic kinase domain are located in the β-chains of each subunit of the receptor homodimers. The TMD potentially plays a critical role in mediating signaling via IR and IGF1R, because ligand binding to the extracellular subunits leads to conformational changes that are conveyed (via the TMD) to the kinase domains, thereby triggering trans-autophosphorylation and downstream signaling cascades (14-20).
Initially, the TMD appeared to play a passive role in insulin signaling (21), but other studies indicate that modifications in the TMDs of IR or IGF1R alter receptor internalization as well as affect kinase activation and negative cooperativity (22-25), while replacing the IR-TMD with that of glycophorin A inhibits insulin action (26). The mechanistic details of these processes remain poorly understood at the molecular scale, but simple mechanical models for signal transduction via the TMD suggest that a lateral shift or a rotational motion of the TMD is energetically more favorable than a vertical motion in the phospholipid bilayer, as it would allow dimerization of TMDs that could bring the kinase domains into proximity (25, 27-29). However, recent studies propose different mechanisms for IR and IGF1R activation (3, 30): Lee et al. (31) have suggested that the TMDs of IR in the non-activated basal state are constitutively dimerized and dissociate on ligand binding, while Kavran et al. (32) have suggested that ligand binding leads to dimerization of the TMDs in IGF1R. Previously, a different "yo-yo" model of receptor activation was proposed by Ward et al. (10), in which the ligand-induced conformational change releases the kinase domains (for transphosphorylation) from an initially constrained position near the membrane. These studies do not directly support a common mechanism of activation of transmembrane cell-surface receptors (27). Therefore, the exact mechanism of signal transduction in IR and IGF1R remains elusive, in part due to the lack of knowledge of intact structures of full-length receptors (in apo or ligand-bound forms), although several structures of excised extracellular and intracellular domains have been solved (33-48). The solution structure of the IR-TMD has been determined in detergent micelles (49), but the deviation of the hydrophobic thickness of micelles from that of lipid bilayers can potentially cause changes in protein conformations (50). Nonetheless, this study suggested that the excised IR-TMD sequence remains largely monomeric in solution and forms an α-helix with a kink at residues Gly960 and Pro961, but the possibility of dimer formation was not excluded, depending upon the detergent/protein ratio. It was also speculated that the presence of one SXXXG sequence motif in the IR-TMD could play a role in dimerization, similar to the GXXXG motif (51, 52). We have previously shown that molecular dynamics (MD) simulations conducted in explicit solvent with all-atom structural models and enhanced sampling algorithms (53) are highly promising tools to understand the conformational flexibility of receptor structures and their ligand-binding mechanisms (8, 54-58). In this work, we aim to study the structure, orientation, and conformational variability of the TMDs of IR and IGF1R in an explicit lipid bilayer environment. In particular, we have studied the folding/unfolding behavior and stability of membrane-embedded peptide sequences of IR and IGF1R using enhanced sampling simulations conducted with the metadynamics algorithm (59), because classical MD simulations are likely insufficient for sampling all relevant peptide conformations in the lipid bilayer.
In particular, our predicted structural ensembles are consistent with recent NMR data (49) and reveal that the presence of Gly960 and Pro961 in IR-TMD indeed results in increased flexibility in comparison to IGF1R-TMD, while metastable structural ensembles of both peptides show significant differences in their orientation in the membrane and in the conformations of the N- and C-termini. We also observe different patterns of water distribution near peptide residues at the membrane-solvent interface and find that changes in the backbone conformations of the peptides correlate with certain angle variables measured relative to the membrane normal. Molecular Dynamics Simulations: System Setup All MD trajectories were generated with NAMD (60) using the TIP3P water model and the CHARMM force field with the CMAP correction (61, 62). VMD was used for system creation, protein rendering, and analyses (63). All simulations were carried out in the NPT ensemble using the Langevin thermostat at 310 K and the Nosé-Hoover barostat. We modeled 51 residues for IR (939-FYVTDYLDVPSNIAKIIIGPLIFVFLFSVVIGSIYLFLRKRQPDGPLG-989) and IGF1R (918-DPVFFYVQAKTGYENFIHLIIALPVAVLLIVGGLVIMLYVFHRKRNNSRLG-968), including the predicted TMD sequence (957-979 for IR and 936-959 for IGF1R) of each receptor (sequence numbering is based upon the protein knowledgebase www.uniprot.org, accession numbers P06213 and P08069). For each sequence, we generated an ideal α-helix as a starting structure using VMD's psfgen tool and generated a palmitoyloleoylphosphatidylcholine (POPC) membrane patch of ~80 Å × 80 Å in size using VMD's membrane builder tool. Each peptide was then embedded in the POPC bilayer by aligning the centers of mass and the principal axis of each helix along the z-direction. Thereafter, overlapping lipid molecules within 2 Å of each peptide were deleted. Each system was solvated with ~17700 water molecules, neutralized with KCl, and brought to an ionic strength of 0.2 M. The final simulation domains measured ~83 Å × 80 Å × 140 Å and contained 74168 (IR) and 74144 (IGF1R) atoms, respectively. Each system was then equilibrated in three consecutive steps. In the first step, a conjugate-gradient minimization was carried out for 1000 cycles, followed by a short MD equilibration (0.5 ns long with a 2-fs time step) keeping all atoms fixed except those in the lipid tails. In the second step, the MD equilibration was continued for 5 ns in the NPT ensemble, fixing only the peptide atoms. In the third step, no atoms were fixed or constrained in a 50 ns MD equilibration in the NPT ensemble. The final atomic coordinates after the equilibration in the third step were used to set up enhanced exploration of peptide conformations in lipids using metadynamics, as described below. Initial and equilibrated configurations of IR-TMD and IGF1R-TMD are shown in Figure 1. Metadynamics Simulations Metadynamics is an enhanced sampling method for faster and uniform exploration of the conformational space in a specified set of collective variables (CVs), achieved by augmenting the force field with a history-dependent biasing potential (Vmeta) of the following form (59, 64): $$V_{\text{meta}}(\xi, t) = W \sum_{t' = \tau_G, 2\tau_G, \ldots < t} \; \prod_{i=1}^{N_{\text{cv}}} \exp\!\left(-\frac{(\xi_i - \xi_i(t'))^2}{2\delta_i^2}\right),$$ where ξi is the current value of the CV, and ξi(t′) is the value of the CV at time t′. Vmeta is constructed as a sum of Ncv-dimensional repulsive Gaussian functions with a chosen height (W) and width (δ). The Gaussian functions can be added at a desired frequency τG.
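To make the form of the history-dependent bias concrete, here is a minimal one-dimensional sketch of how the Gaussian hills accumulate. This only illustrates the equation above with the paper's stated parameter values (W = 0.1 kcal/mol, δ = 0.2 Å, hills deposited every τG = 1 ps); it is not a reimplementation of NAMD's metadynamics engine:

```python
import numpy as np

W, delta = 0.1, 0.2       # hill height (kcal/mol) and width (Angstrom)
hill_centers = []         # values xi(t') at each deposition time t'

def deposit_hill(xi_now):
    """Record the CV value at a deposition time (every tau_G)."""
    hill_centers.append(xi_now)

def v_meta(xi):
    """History-dependent bias: a sum of repulsive Gaussians centred
    on previously visited CV values."""
    centers = np.asarray(hill_centers)
    return W * np.sum(np.exp(-(xi - centers) ** 2 / (2.0 * delta ** 2)))

# Toy usage: the walker lingers near RMSD ~ 6 A, so hills pile up
# there and push the system toward unexplored CV values.
for xi in (5.8, 6.0, 6.1, 6.0, 5.9):
    deposit_hill(xi)
print(f"V_meta(6.0) = {v_meta(6.0):.3f} kcal/mol")
```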
These three main parameters in metadynamics (W, δ, and τG) control the efficiency and accuracy of the free-energy reconstruction from the converged metadynamics potential (Vmeta) (65). Metadynamics has been successfully applied to study many biophysical problems (66-72), including the prediction of peptide conformations in lipid membranes (73, 74). In this work, we have used as the CV the root-mean-squared deviation (RMSD) of the backbone Cα atoms with respect to a perfect α-helix. The RMSD CV was bounded between 0 and 15 Å; therefore, low values of RMSD indicate helical conformations and higher values indicate kinks and/or unfolded states. For all metadynamics simulations, a 1-fs integration time step was used, and a Gaussian height (W), width (δ), and frequency (τG) of 0.1 kcal/mol, 0.2 Å, and 1 ps, respectively, were used. The metadynamics simulations converged in 160 ns (IR-TMD) and 145 ns (IGF1R-TMD), respectively, after which each trajectory sampled the CV range diffusively. The converged free-energy profiles from the last 10 ns of each metadynamics trajectory were used for analyzing metastable conformations and for carrying out the other analyses reported in this work. We note that we have not studied the effect of including multiple CVs in our simulations. Additionally, we point out that the protonation states of all residues were assigned at physiological pH, and the effect of varying pH was not explicitly studied here. Free Energy Profiles and Conformational Ensembles of IR-TMD and IGF1R-TMD Starting with a perfectly α-helical conformation of each peptide (Figure 1A), we carried out independent ~55 ns long MD equilibrations in explicit membrane and solvent environments before launching enhanced sampling simulations using metadynamics (see Materials and Methods). The final conformations of the peptides sampled from these MD trajectories (Figure 1B) show that even at these short timescales, the peptides deviate from their initial conformations and adopt tilted conformational states with respect to the membrane normal. Specifically, IR-TMD largely maintains an α-helical structure but with a sharp kink at Pro961, such that residues 939-958 in the N-terminal helix interact more strongly with lipids than with water molecules. IGF1R-TMD also remains α-helical with a minor kink at Pro941, but the first 10 residues in the N-terminus spontaneously unfold and interact with the water molecules. The Cα-RMSDs relative to a perfect helix for the final peptide conformations are 6.35 Å (IR-TMD) and 3.75 Å (IGF1R-TMD), respectively. To uniformly explore peptide conformations between 0 and 15 Å RMSD and to obtain estimates of the free energy, we carried out 160 ns (IR-TMD) and 145 ns (IGF1R-TMD) long metadynamics simulations (see Materials and Methods). Consistent with enhanced conformational sampling, each peptide visited both helical and non-helical states multiple times during these simulations. The averaged free-energy profiles (potentials of mean force; PMFs) from the last 10 ns of each metadynamics trajectory (Figures 2A,B) indicate that peptide conformations below an RMSD of ~3.5 Å and above ~11.5 Å are significantly higher in free energy relative to other states. This suggests that the peptides prefer neither a fully helical structure (which occurs at 0 Å RMSD) nor a significantly unfolded configuration (which occurs beyond 12 Å RMSD); instead, the metastable/stable configurations likely contain both helical and partially unfolded structural motifs.
Moreover, the stable conformations with the lowest free energy relative to other states occur at ~6 Å RMSD for IR-TMD and ~8 Å RMSD for IGF1R-TMD (insets in Figures 2A,B). From the last 10 ns of each metadynamics trajectory, we harvested several metastable/stable configurations for each peptide (17 for IR and 11 for IGF1R) within a ~2-3 kcal/mol free-energy difference. These conformations of IR-TMD and IGF1R-TMD are distinct (Figures 2C,D) and have the following features: (1) in IR-TMD, α-helical structures are observed for residues 939-958 in the N-terminus and residues 962-980 (part of the predicted transmembrane domain sequence, 957-980, of IR). These two helices are stably held together by a sharp kink at Gly960 and Pro961. The remaining residues in the C-terminus (981-989) are highly flexible and adopt unfolded conformations; and (2) in IGF1R-TMD, the N-terminal residues 918-932 are significantly flexible and unfolded, a small α-helix kinked at Pro941 is observed between residues 933 and 941, while a full α-helix is observed for residues 942-968. To quantify these observations, we further carried out secondary structure analysis on all metastable/stable configurations and computed the average helicity on a per-residue basis. These results (Figure 3) show that the α-helical content of IR-TMD is reduced between residues 939 and 942, and no helical content is observed between residues 959-961 and 982-989, while for IGF1R-TMD, no helical content is present between residues 918 and 931, and a minor decrease in helicity is observed at residue Gly950. We note that an unstable kink at Gly950 mostly switches back to a stable α-helix, as described in the following. Orientation of IR-TMD and IGF1R-TMD in the Membrane In metadynamics simulations, the change in the RMSD of the peptides relative to a perfect helix could be due to several different types of structural features, such as tilting, bending, or unfolding. Therefore, to understand the orientation of the peptides in the lipid bilayer, we computed three angle variables and analyzed their correlation with the RMSD change (Figure 4). For IR-TMD, α and β characterize the orientation (relative to the membrane normal) of the helix preceding Pro961 and the helix corresponding to the transmembrane sequence (962-979), and γ characterizes the interhelical angle, while in IGF1R-TMD, α and β characterize the orientation of the helices between residues 934-948 and 951-966 relative to the membrane normal, and γ is the interhelical angle. These data (Figures 4B,D) indicate that several conformations in the RMSD range (0-15 Å) can take a wide variety of angle values, suggesting multiple orientations of the peptides due to enhanced conformational sampling via metadynamics. For IR-TMD, we find that angles α and γ are correlated with RMSD such that an increase in RMSD results in an increase in α but a decrease in γ. Structurally, this means that the N-terminal helix in IR-TMD kinks toward the membrane, thereby becoming parallel to the membrane-solvent interface, while the membrane-embedded helix straightens to align along the membrane normal, as also indicated by a sharp decrease in β. For the metastable/stable conformations of IR (Figure 2C), we observe α values between ~70° and 90°, γ values slightly smaller than ~110°, and β values between ~5° and 25°. For IGF1R-TMD, we observe no significant correlation between the angle α and RMSD, as α remains near 30° on average, suggesting that the helix between residues 934 and 948 remains tilted relative to the membrane normal.
However, an increase in RMSD is correlated with a decrease in β and γ that leads to a kink at Gly950. This kink is unstable and not observed in the metastable/stable conformations of IGF1R-TMD (Figure 2D), where γ values near 180° are observed. In these IGF1R-TMD conformations, a significant contribution to the change in RMSD is due to the unfolding of the N-terminus (residues 918-932), and a minor contribution is due to a kink at Pro941. Interactions of Peptides with the Solvent In each 51-residue long peptide studied here, several charged amino acids are present in the sequence preceding as well as following the predicted TMD sequence (957-979 for IR and 936-959 for IGF1R). Because we observed kinked or unstructured configurations in the termini of each peptide, we analyzed all metastable conformations for interactions with water molecules at the membrane-solvent interface. Specifically, we present the average number of water molecules within 4.5 Å of each protein residue in Figure 5. [Figure 5 | (A,B) The average number of water molecules per residue within ~4.5 Å of metastable/stable conformations of IR-TMD and IGF1R-TMD. (C,D) Selected snapshots from metastable/stable conformations of IR-TMD (C) and IGF1R-TMD (D), highlighting interactions with water molecules (red licorice representations). Several charged residues in the termini of each peptide and a proline residue (961 for IR and 941 for IGF1R) are shown in brown and red space-filling representations, respectively. Each peptide is rendered as a cartoon in the same coloring scheme as in Figures 1 and 2.] These data show that no water molecules are observed in the vicinity of the helix-forming hydrophobic residues buried in the membrane (for example, 965-977 for IR-TMD and 937-954 for IGF1R-TMD). Both IR-TMD and IGF1R-TMD have an "Arg-Lys-Arg" motif immediately following the TMD sequence that is exposed to solvent, as indicated by the increasing number of water molecules for the residues in this motif of each peptide. Importantly, this motif is part of the unfolded C-terminus in IR-TMD but is fully folded in IGF1R-TMD. The exposure of this motif to solvent is compensated by a larger tilt angle in IGF1R-TMD in comparison to the homologous sequence in IR-TMD. Several other residues in the C-terminus of each peptide have over 10 water molecules in their vicinity. A significant difference in the water distribution is observed in the N-terminus of each peptide, largely because residues 918-932 in IGF1R-TMD are highly flexible, unfolded, and located outside the membrane, while the homologous residues in IR-TMD form an α-helix resting at the membrane-solvent interface, such that the charged residue Lys956 has over 25 water molecules in its vicinity. The kink-forming residue Pro961 in IR-TMD is also significantly exposed to the solvent, but the corresponding residue Pro941 in IGF1R-TMD is completely shielded from the solvent. The highest water density is observed for Arg966 in IGF1R-TMD, and for Lys956 or Arg982 in IR-TMD. Discussion In this work, we have presented all-atom structural models of 51-residue long peptides containing the transmembrane domain sequences of IR and IGF1R (957-979 for IR and 936-959 for IGF1R; see Materials and Methods). These models have been generated in explicit membrane and solvent environments using MD simulations assisted by enhanced conformational sampling algorithms that facilitate extensive sampling of the conformational space and provide information on key thermodynamic properties such as the free energy.
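The per-residue water count described in the preceding section is a simple geometric analysis. Below is a minimal single-frame sketch, assuming atomic coordinates are available as numpy arrays; in practice this would be averaged over the harvested metastable/stable conformations, and the toy coordinates here are invented:

```python
import numpy as np

def waters_per_residue(residue_xyz, water_o_xyz, cutoff=4.5):
    """Count water molecules with an oxygen atom within `cutoff`
    Angstroms of any atom of a residue (averaging over frames would
    give the per-residue profile of Figure 5)."""
    # pairwise distances between residue atoms and water oxygens
    d = np.linalg.norm(residue_xyz[:, None, :] - water_o_xyz[None, :, :],
                       axis=-1)
    return int(np.sum(d.min(axis=0) <= cutoff))

# Toy frame: 3 residue atoms and 4 water oxygens (coordinates invented)
res = np.array([[0.0, 0, 0], [1.5, 0, 0], [3.0, 0, 0]])
wat = np.array([[0, 3.0, 0], [10, 0, 0], [4.0, 2.0, 0], [0, 0, 8.0]])
print(waters_per_residue(res, wat))  # -> 2
```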
For both receptors, we observe that the residues corresponding to the transmembrane domain sequence are fully membrane-embedded and form α-helices with a major kink at Pro961 in IR and a minor kink at Pro941 in IGF1R. A kink in IGF1R-TMD at Gly950 is unstable and recovers to an α-helical conformation. Based upon the angle collective variables characterizing the orientation of each peptide in the membrane (Figure 4), we observe that the membrane-embedded α-helix in IGF1R-TMD is significantly more tilted (relative to the membrane normal) than in IR-TMD (Figures 5C,D). However, it is important to point out that these angles were not explicitly included as CVs in our metadynamics calculations and therefore were not extensively sampled. The values of the angles reported in Figure 4 are those that correspond to extensive sampling along the RMSD CV, as also indicated by multiple values of a specific angle corresponding to a single value of RMSD. We also notice major differences in the conformations of the peptide termini: a short α-helix is observed for the N-terminal residues (939-958) of IR-TMD, but significantly unfolded and flexible conformations are observed for the N-terminal residues (918-932) of IGF1R-TMD, while an α-helix is observed for the C-terminal residues (960-968) of IGF1R, but unfolded and flexible conformations are observed for the C-terminal residues (981-989) of IR. Importantly, irrespective of the different conformations in the C-terminus of each peptide, an "Arg-Lys-Arg" motif is solvent-exposed, albeit at the expense of a larger tilt angle in IGF1R-TMD than in IR-TMD. However, we observe that all N-terminal residues (918-932) in IGF1R-TMD are solvent-exposed, but only a few N-terminal residues (Lys956 and Pro961) in IR-TMD are significantly solvated. This difference is primarily due to the fact that the short α-helix in the N-terminus of IR-TMD is partially membrane-embedded, such that the positively charged residues are oriented toward the membrane-solvent interface, while the N-terminal residues in many metastable conformations of IGF1R-TMD reside outside the membrane. Li et al. (49) have recently determined a solution structure of the transmembrane domain of IR (PDB code 2MFR) using NMR spectroscopy in dodecylphosphocholine (DPC) micelles. The following features observed in the NMR conformational ensemble are consistent with our IR-TMD structural models: (i) a well-defined α-helix (between residues Leu962 and Tyr976) buried in the DPC micelles with a kink at Gly960 and Pro961; (ii) a flexible and solvent-exposed C-terminal region (between residues Gln983 and Leu988); and (iii) a short α-helix (between residues Phe939 and Tyr944) partially buried in the DPC micelles with weak solvent interactions for Thr942 and Asp943. On comparing our models with the NMR structure, we observe that the kink angle at Pro961 in our IR-TMD models is larger than that observed in the NMR structure, which results in increased interactions of Pro961 with the solvent in our models. We therefore analyzed the spherical micellar region encasing IR-TMD reported in Li et al.'s work (49) and found that it is at least ~3-4 Å thicker than the equilibrium thickness of a POPC membrane. We speculate that the difference in the hydrophobic thickness of a bilayer and a micelle could have contributed to a difference in the kink angle near Gly960 and Pro961.
However, the observation of a kink at these residues in our models and in the NMR structure is consistent with the observation of enhanced helicity in IR-TMD upon individual or simultaneous mutations of Gly960 and Pro961 to Ala (75), as well as with the role of Gly and Pro residues as helix breakers (76, 77). Currently, no experimental structure of IGF1R-TMD is known, but consistent with our all-atom structural models of IGF1R-TMD, classical MD simulations reported in Kavran et al.'s work (32) indicate a kink at Pro941, interactions of His935 with the solvent, and a significantly tilted α-helical conformation of the transmembrane sequence. A major unresolved question relates to the dimerization of IR-TMD and IGF1R-TMD in the basal or activated states of the receptors, in part because no experimental structures of these domains in a dimeric configuration have been reported so far. However, different models have been proposed (3, 30); in particular, Kavran et al. (32) have suggested that IGF1R-TMD can form stable dimers by associating near the kink-inducing residue Pro941, such that the His935 residues in the helices can interact with each other and with the solvent. The conformations of the helices reported in this dimer are consistent with our IGF1R-TMD structural models. Importantly, Cabail et al. (33) have provided crystallographic, biochemical, and biophysical evidence showing that the phosphorylated kinase domains of IR and IGF1R dimerize through exchanged juxtamembrane regions, but ~20 residues of unknown structure in the N-terminus of the juxtamembrane sequence preclude conclusive support for dimerized or dissociated transmembrane helices. While we have not directly studied the dimerization of IR-TMD or IGF1R-TMD in this work, the C-terminal sequences in our 51-residue long peptides include several residues from the N-terminal juxtamembrane regions of the receptors (10 residues for IR and 9 residues for IGF1R). As described above, in both receptors these residues are significantly exposed to the solvent but adopt distinct conformations (in IR-TMD, completely unstructured and flexible conformations are observed, as opposed to IGF1R-TMD, where these residues participate in an α-helix). We speculate that these conformational differences could contribute to a different structural organization of the kinase domains in the basal or activated states of the receptors. Conclusion Using MD simulations combined with enhanced sampling algorithms, we have presented all-atom structural models of IR-TMD and IGF1R-TMD in explicit membrane and solvent environments. We found intact α-helical conformations for the membrane-embedded residues of each peptide, with a larger tilt angle (relative to the membrane normal) in IGF1R-TMD in comparison to IR-TMD. We also observe kinks in the membrane-spanning helices at Pro961 (IR) and Pro941 (IGF1R). The major differences in peptide conformations are in the terminal sequences, where a kinked α-helix is observed in the N-terminus of IR-TMD as opposed to unfolded conformations in IGF1R, and an α-helix is observed in the C-terminus of IGF1R-TMD as opposed to unfolded conformations in IR-TMD. These differences in conformations lead to increased solvation of the N-terminal residues in IGF1R-TMD in comparison to IR-TMD, but similar solvation patterns are observed for the C-terminal residues containing the "Arg-Lys-Arg" motif. Funding This work was supported through a Summer Teaching Assistant Fellowship (HM) and a Summer Faculty Fellowship (HV) from the University of New Hampshire Graduate School.
Offshore Antarctic Peninsula Gas Hydrate Reservoir Characterization by Geophysical Data Analysis A gas hydrate reservoir, identified by the presence of the bottom simulating reflector, is located offshore of the Antarctic Peninsula. The analysis of a geophysical dataset acquired during three geophysical cruises allowed us to characterize this reservoir. 2D velocity fields were obtained by using the output of the pre-stack depth migration iteratively. The gas hydrate amount was estimated from the seismic velocity, using the modified Biot-Geerstma-Smit theory. The total volume of gas hydrate estimated, in an area of about 600 km^2, is in the range of 16 × 10^9 to 20 × 10^9 m^3. Assuming that 1 m^3 of gas hydrate corresponds to 140 m^3 of free gas at standard conditions, the reservoir could contain a total volume ranging from 1.68 to 2.8 × 10^12 m^3 of free gas. The interpretation of the pre-stack depth migrated sections and the high-resolution morpho-bathymetry image allowed us to define a structural model of the area. Two main fault systems, characterized by left-transtensive and compressive movement, are recognized, which interact with a minor transtensive fault system. The regional geothermal gradient (about 37.5 °C/km), increasing close to a mud volcano likely due to fluid upwelling, was estimated from the depth of the bottom simulating reflector in the seismic data. Introduction Gas hydrates are ice-like solids containing molecules of gas, usually methane, in the lattice, which grow within the pore space of sediments [1]. They are common in the upper hundred meters of sediments along both active and passive continental margins [2] and in permafrost areas [3], wherever high pressure, low temperature and adequate gas saturation fall within the stability conditions [1]. In seismic sections, the presence of gas hydrate within marine sediments is marked by the Bottom Simulating Reflector (BSR). The BSR is a reflector that mimics the seafloor and is characterized by a reversed phase with respect to the seafloor [2]. Generally, the BSR results from the strong acoustic contrast produced by the free gas accumulated at the base of the gas hydrate layer [4-6]. Sometimes, it can also be the result of a high gas hydrate accumulation within sediments without free gas below it [6]. During the last decades, the scientific community has devoted many resources to studying gas hydrates in order to evaluate their potential as a future energy resource [7-9], their geological hazards [10], and their possible influence on the carbon cycle within the oceans [11,12], contributing to global warming [13,14]. Moreover, due to the recent hydrocarbon fuel resource crisis, interest in gas hydrates has increased strongly [15]. Some authors have pointed out how geological and environmental features can affect gas hydrate accumulation within marine sediments, even when stability conditions and adequate gas amounts are met. In particular, BSR identification is affected by geological features. In fact, the presence of important faults (normal or reverse) acting as main conduits for fluid escape [16,17], and sediment stratification parallel to the seafloor, can make the interpretation of the BSR difficult [18]. Around several mud volcanoes, the gas hydrate distribution is strongly influenced by the thermal field [19]. Finally, the depth of the hydrate stability zone can be affected by fluid salinity, changing the depth and/or generating a focused flux of gas that can reach the seafloor [20].
Many approaches developed by different authors are available to quantify the gas-phase amounts present within sediments. Some procedures require direct measurements in order to estimate the gas hydrate content, such as water chlorinity analysis [21,22]. Other procedures are based on seismic data analysis, using velocity models and/or amplitude-versus-offset analysis [23-25]. Moreover, some authors have suggested modelling elastic velocities versus gas hydrate concentrations using seismic velocities [26-28] or sonic log data [26,29] to obtain information about the hydrate occurrence. In our study area (offshore Antarctic Peninsula), several Multi Channel Seismic (MCS) profiles, gravity cores and multibeam data were acquired from 1990 to 2004 [30]. We performed pre-stack depth migration (PreSDM) and analysis of Common Image Gathers (CIGs) iteratively to obtain a reliable velocity field. The theory developed by Tinivella [26] was adopted to estimate the gas hydrate amount, because no direct petro-physical parameters were available. Geological Setting The investigated area is located offshore Elephant Island in the South Shetland margin. The area extends from 60.5° S to 61.5° S and from 56° W to 58.5° W (Figure 1). In this area, the continental margin shows a complex tectonic setting due to the subduction of the Antarctic and the "former Phoenix" plates beneath the South Shetland micro-continental block. The Phoenix plate is the last remnant of the Nazca plate subducted beneath the Antarctic plate, bordered by the Hero Fracture Zone to the SW and by the Shackleton Fracture Zone (SFZ) to the NE. Nazca plate subduction was active along the Pacific margin of Gondwanaland from late Paleozoic time [31] until 4 Ma ago [32], when spreading at the Antarctic-Phoenix ridge ceased [33]. Currently, the subduction process is controlled by sinking and roll-back of the oceanic plate. This passive subduction is coupled with the extension of the Bransfield Strait marginal basin [32,34-36]. The SFZ along the northeastern side is still active [38], whereas the Hero Fracture Zone along the southwestern side is locked [39]. The surface traces of these two main fracture zones delimit the lateral extent of the Bransfield microplate [40], separated from the Antarctic Peninsula by the opening of the back-arc basin related to the cessation of ridge spreading [41]. From the interpretation of time-migrated seismic profiles (see Figure 1), a preliminary tectonic setting was delineated [42]: a sedimentary prism affected by several thrust faults and extensional faults, oriented sub-parallel to the continental shelf, was identified, together with a strike-slip fault related to the Shackleton Fracture Zone. This fault crosses the entire continental slope, splitting the margin into two parts with different characteristics. To the northeast of the fault, a strong and continuous BSR is detected, while to the southwest it becomes weak and discontinuous. Moreover, small mid-slope basins are common within the prism, often bounded by extensional faults that locally reach the seafloor. Geophysical Data The geophysical data were acquired during three geophysical surveys. Eleven seismic profiles with a long streamer and one Ocean Bottom Seismometer (OBS) dataset were acquired during the first two surveys, carried out in the 1989/90 and 1996/97 Austral summers [23,43], and several further geophysical data were acquired in the 2003/04 Austral summer [30].
During the first survey, the energy source was composed of two arrays of 15 air-guns (total volume of 45 L), fired every 50 m. The streamer was 3000 m long with 120 channels and a 25 m hydrophone group interval. The sampling interval was 4 ms. During the second survey, the same streamer was adopted, while the source consisted of two generator-injector guns with a total volume of 3.5 L, firing every 25 m. The sampling interval was 1 ms. Figure 1 shows the dataset analysed here. The data are characterised by a high signal-to-noise ratio, requiring only the application of a band-pass filter. In the last cruise, multibeam data were acquired by using the Reson Seabat 8150, an ocean-depth multibeam echo sounder system. The system is characterized by a nominal depth range between 0.1 and 15 km, 234 beams and a nominal frequency of 12 kHz. Data acquisition and processing were performed using PDS2000 software (RESON). The high-resolution morpho-bathymetry image has a 150 m cell size. Moreover, sub-bottom profile data (7 kHz) and gravity cores were also acquired [30]. Methods In order to characterize the gas reservoir located offshore Elephant Island, regional velocity and gas hydrate concentration models were developed. We applied depth migration in the pre-stack domain (Kirchhoff algorithm) to determine, with a layer-stripping approach, both the velocity field and the seismic image in depth [45]. In fact, it is well known that pre-stack depth migration provides information about the quality of the velocity field [45]. When an incorrect velocity is used to migrate MCS data, the imaged depths of reflections in a CIG will differ from each other along offset. In this situation, residual move-out is observed in the CIGs; for this reason, residual move-out analysis is used to update the migration velocity [46]. The velocity fields obtained along the 2D seismic lines were then interpolated in order to obtain a 3D velocity model.
First of all, velocity values from 60 m to 3840 m depth were extracted every 40 m, considering a horizontal spatial grid of 200 m. We produced the velocity slices by interpolating the values using the GMT software [47]. The adopted interpolation parameters are the following: first step, 3 km block dimension with a 25 km search radius; second step, 0.2 km block dimension with a 2.4 km search radius. The output model has a cell size of 200 m in the two horizontal directions and 40 m in depth. The total nodes are: 241 along longitude, 309 along latitude, and 96 in depth. The regional velocity model so obtained was smoothed along the three axes. The reliability of the interpolation was verified by comparing the original 2D velocity sections with the 2D velocity sections extracted from the interpolated model. The error estimated at the main inverted horizons (seafloor, BSR, and the reflector in between) shows a trend within the range of ±5%, even if locally higher error values are also detected (about ±8%; Figure 2). The method for estimating gas hydrate and free gas concentrations consists of comparing seismic velocities with theoretical velocity curves in the absence of free gas and gas hydrate, the so-called reference velocity profile. For this purpose, we used the methodology described in [26], already tested in this area to quantify the gas hydrate and the free gas in the pore space [24]. Essentially, the concentration of gas hydrate was estimated by using the modified Biot-Geerstma-Smit theory [26], which translates the velocity anomalies, calculated with respect to a reference curve, into gas amounts. Using the seafloor depth and information from the literature [23,48], the reference velocities (i.e., the velocities of water-saturated marine sediments) were evaluated. We used the Hamilton curves as reference curves [49,50]. We used an average Poisson ratio for all sediments equal to 0.435, obtained from OBS data analysis in the same area [23]. Data Analysis We obtained information about the geometries and the velocities of the main geological structures from the PreSDM sections. The velocity model was obtained by using the PreSDM (Kirchhoff algorithm) with a layer-stripping approach. The method uses the output of the PreSDM performed at different offsets (the CIGs) to determine the velocity field iteratively [45,46]. After the PreSDM, the correctness of the migration velocity is verified by analysing the flatness of reflections in the CIG domain. If the reflectors in the CIGs are not flat, a residual move-out analysis is required to correct the curvatures of the reflections. The energy in the semblance is quantified by the residual (r) parameter, which is a measurement of the flatness deviations along offset. If the r-parameter has a negative value, the velocity needs to be increased, whereas it needs to be decreased for a positive value. The seismic processing was performed by using the open-source Seismic Unix software (SU; [51]) and home codes created ad hoc [30].
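The iterative velocity update driven by the sign of the r-parameter can be sketched as a simple feedback loop. The following is a minimal illustration of the layer-stripping logic described above; the step size, convergence threshold and the migration/semblance routine are placeholders, not the actual PreSDM codes used in the study:

```python
def update_velocity(v_layer, measure_residual, step=25.0, tol=0.01,
                    max_iter=20):
    """Iteratively adjust a layer velocity (m/s) until reflections in
    the CIGs are flat: a negative residual means the velocity is too
    low (increase it), a positive residual means it is too high."""
    for _ in range(max_iter):
        r = measure_residual(v_layer)  # placeholder for PreSDM + semblance
        if abs(r) < tol:               # CIG reflections flat enough
            break
        v_layer += step if r < 0 else -step
    return v_layer

# Toy residual: flat CIGs when the layer velocity is 1800 m/s; the
# starting value 1465 m/s matches the initial water-velocity model.
print(update_velocity(1465.0, lambda v: (v - 1800.0) / 1800.0))
```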
The initial velocity model had a uniform velocity of 1465 m·s⁻¹ and a 20 m vertical spacing, while the horizontal spacing was calculated for each profile and ranged from 25 to 27.25 m in order to avoid errors introduced by irregular ship velocity. After a few iterations, the velocity of the first layer (sea water) was fixed. Below the seafloor, the most continuous reflector located between the seafloor and the BSR was analysed. Five iterations were required to invert the velocity of the second layer, while the velocity above the BSR was defined after an average of 20 iterations. The final PreSDM was performed by introducing a velocity gradient of 0.6 s⁻¹ below the BSR and smoothing the whole velocity field. Structural Analysis Seven PreSDM sections were analysed to obtain information about the geological structures and the velocities. The most representative of them, which show the main geological features of the area (see thick lines in Figure 1), are analysed here. Figure 3 shows the depth image of line I97209, obtained by using the methodology described above, on which the interpretation and the velocity model are superimposed. The SE sector of the seismic section, between 7 and 14 km, shows a flat seafloor where a well-stratified and folded sedimentary sequence is abruptly interrupted. At about 15 km, a strong variation of the seafloor depth is present (about 750 m), related to the presence of a fault, as also imaged in the morpho-bathymetry (Figure 4). At about 28 km, a second discontinuity, which affects the seafloor and slightly deforms the sediments, is detectable. As can be seen in Figure 4, an evident morphological expression on the seafloor is detectable. The two above-mentioned faults could be part of the same fault system, which shows a transcurrent character and is called Transcurrent Fault System 1 (TFS1; Figure 4). Between 37 and 47 km, the sediments are deformed, forming an anticline. The NW part of the line is shaped by a trough about 1.2 km wide, also detectable in the morpho-bathymetric map and showing a NE-SW trend.
Figure 5 shows the seismic image of line I97213, with the interpretation and the velocity model superimposed, oriented along the continental margin. Between 0 and 14 km, three discontinuities are recognizable. The most important of them abruptly deepens the seafloor by about 400 m (at about 9 km) and borders a filled channel. The other two discontinuities, with the same trend (normal faults), are evident at 12 and 13 km. In the morpho-bathymetric map, these faults show a variable trend, from NNW-SSE to N-S. These faults are likely part of a minor fault system called the secondary Fault System (s-FS). In the central part of the line, at about 19 km, a transtensive fault is identified by a small bathymetric variation and by the deformation of the sediments. These faults are likely part of the TFS1 already identified in line I97209 (Figure 3) and in the morpho-bathymetric map (Figure 4). Toward the NNE (at about 32 km), a sedimentary basin, about 1 km deep and 5 km wide, is present at the top of a folded unit. Figure 6 shows the seismic image of line I97214, with the interpretation and the velocity model superimposed. Between 5 and 13 km, shallow stratified sediments deepen ocean-ward, with an on-lap closure at the top of an anticline structure. This structure soles out at the top of a thrust fault, located at around 15 km and confined below the seafloor. Along the section, several minor faults and fractures affecting the sediments are detectable. Between 15 and 18 km, another minor structure is imaged in the seismic line. This anticline structure is located at the southern end of a mud volcano, called Vualt [30] and clearly identified in the morpho-bathymetric image (light blue arrow in Figure 4). Between 19 and 28 km, the sediments are well stratified, even if slightly deformed by minor fractures and faults. These faults deepen the basin, up to a maximum thickness of about 450 m at 26.5 km (Figure 6). Finally, the analysis of the high-resolution morpho-bathymetric map allowed us to recognise important geological features. In addition to the three important fault systems (TFS, s-FS, and TFS1), other morphological elements, such as slumps [30], are detected in the southernmost part (see white arrows in Figure 4). A deep canyon is crossed by the TFS1, as evidenced by the black arrows (Figure 4). Moreover, some morphological highs, similar to the Volcano Vualt, are present in the northern part, close to the trench (light blue arrow), and may even be associated with the volcanic ridge. BSR Analysis All seismic lines reveal the presence of the BSR, with variable characteristics along them. In particular, the seismic line I97209 is characterized by the presence of the BSR (Figure 3), which is locally weak and in places disappears, as between 27 and 30 km, where an active fault is recognized. The velocity model shows lateral variations, more evident in the third layer. The BSR is evident along line I97213 (Figure 5), with variable amplitude character, and it is continuous between 20 and 35 km. Between 8 and 20 km, the BSR becomes more discontinuous, in particular close to the fault system s-FS. The velocity field shows lateral velocity variations in the third layer, with higher velocities with respect to those observed along line I97209.
Between 9.5 km and the end of line I97214 (Figure 6), the BSR is locally discontinuous and its depth ranges from 450 to 700 mbsf. Seismic velocities are affected by intense lateral variations, showing higher values at about 10 and 15 km (central part of the line), and very low velocities within the sediments along the east side of the volcanic ridge, between 18 and 28 km. All the 2D velocity fields were converted into 2D gas hydrate concentration sections. As expected, the concentration variations are consistent with the velocity variations and will be discussed later. In fact, lacking the direct measurements necessary to validate the background velocity, only qualitative information about the gas hydrate occurrence can be extracted. Regional Models In order to analyse both velocity and gas concentration variations and to identify the link between bathymetry and geological features, regional models were produced. Figure 7 (left) shows the velocity anomalies extracted just above the BSR. This BSR map was obtained by interpolating the BSR depths extracted from the 2D seismic models. The area is characterised by an average velocity anomaly of 100-200 m·s⁻¹. High anomaly values (about 300 and 350 m·s⁻¹) are detectable in the north-western side of the reservoir and within the central area (see black thick arrows). Close to the Mud Volcano Vualt (see yellow thick arrow), very low or zero velocity anomalies are detected. As expected, the gas hydrate concentration map (Figure 7, right), whose values were extracted just above the BSR, shows a trend similar to that of the velocity anomaly map. The average amount of hydrate varies from 4% (white arrows) to 16% (black arrow) of the total volume, while very low gas hydrate concentrations are predicted close to the Mud Volcano Vualt. Finally, the analysis of the seismic BSR depth can help to understand the meaning of its lateral changes. To do this, the thickness map of the possible gas hydrate zone (Figure 8) was evaluated from the seismic data by using the free GMT software [47]. The interpolation parameters were: first step, 3 km block dimension and 25 km search radius; second step, 0.2 km block dimension and 2.4 km search radius. The cell grid size was 200 m × 200 m. The thickness map was obtained by differencing the seafloor depth from multibeam data and the BSR depth from the PreSDM. The area considered reliable, based on the seismic interpretation, extends over 600 km².
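A minimal sketch of this thickness-map construction is given below, assuming two co-registered 200 m grids (309 × 241 nodes, matching the model described above); the input arrays here are random placeholders for the real multibeam and PreSDM grids.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder grids (positive depths in metres, 309 x 241 nodes at 200 m).
seafloor_depth = rng.uniform(1500.0, 2500.0, size=(309, 241))
bsr_depth = seafloor_depth + rng.uniform(300.0, 800.0, size=(309, 241))

# Thickness of the possible gas hydrate zone, in metres below sea floor.
thickness_mbsf = bsr_depth - seafloor_depth

# Mask cells deemed unreliable by the seismic interpretation (all kept here).
reliable = np.ones_like(thickness_mbsf, dtype=bool)
print(f"mean thickness: {thickness_mbsf[reliable].mean():.0f} m")
```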
Analyzing the possible gas hydrate thickness (Figure 8), local high values, more than 750 m (see black thick arrows), are detected along the seismic lines I97213 and I97209. Most of the reservoir shows values ranging between 450 m and 600 m. In the eastern side, the thickness is strongly reduced, down to 300 m close to the Mud Volcano Vualt (see white thick arrow). Based on the depth models, the regional geothermal gradient was estimated by comparing seismic and theoretical BSR depths. Knowing the sea water depth (from the bathymetry) and the sea bottom temperature (equal to 0.4 °C from OBS data; [23]), and considering geothermal gradients from 25 to 45 °C/km with a step of 2.5 °C/km, the theoretical BSR depth was evaluated along all seismic profiles. Moreover, these evaluations were performed assuming a gas mixture of 90% methane, 5% ethane, and 5% propane, as pointed out by core analysis [1,30]. The regional geothermal gradient was estimated by fitting the seismic BSR depth with the theoretical one. For this purpose, we used a Geographic Information System (GIS). The distribution of estimated geothermal gradients is shown in Figure 9. For each available dataset, we created a grid with 200 m cell size. Then, each grid was subtracted from the grid representing the seismic BSR depth; the grids were transformed to absolute values before performing the subtraction. Considering that the maximum error of the seismic BSR depth is equal to 5% [24], only values lower than this error were retained in each grid cell (see the sketch below). Figure 9 shows the highest values of the geothermal gradient located close to the volcanic ridge (black thick arrow). In general, we observed an increase of the geothermal gradient towards the trench. The analysis of these grids suggests that the regional geothermal gradient ranges between 37.5 and 40 °C/km. Gas Hydrate versus Geological Features The main features observed along the seismic lines (Figure 4) were correlated with the main features recognised in the high-resolution morpho-bathymetric data. This relation highlights the presence of two main fault systems (TFS and TFS1). The first was interpreted as a compressive system located at the NE side of the gas hydrate reservoir, while the second was interpreted as a transcurrent system with a normal component, bordering the SW side. This system is composed of two main sub-parallel branch faults, as detectable in the morpho-bathymetry. TFS could be associated with the elongated structural high (see blue arrows in Figure 4), characterized by anticline deformation (Figure 6). This element suggests that the entire elongated high could represent a fault-controlled anticline structure.
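Returning to the geothermal-gradient estimate, the grid-matching procedure described above can be sketched as follows. The theoretical BSR-depth grids (one per candidate gradient) are taken as precomputed inputs, since they require a hydrate stability law that is not reproduced here; the scoring rule is an assumed reading of the GIS procedure.

```python
import numpy as np

def best_gradient(seismic_bsr, theoretical_grids, max_rel_error=0.05):
    """Pick the gradient whose theoretical BSR-depth grid matches the
    seismic BSR-depth grid in the most cells, within the 5% depth error.

    theoretical_grids: dict mapping gradient (degC/km) -> depth grid (m).
    """
    scores = {}
    for gradient, theoretical in theoretical_grids.items():
        # Grids are compared in absolute value, as described in the text.
        mismatch = np.abs(np.abs(seismic_bsr) - np.abs(theoretical))
        scores[gradient] = int(np.sum(mismatch < max_rel_error * np.abs(seismic_bsr)))
    return max(scores, key=scores.get), scores
```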
TFS1 shows a left-lateral movement, evidenced by the 2350 m offset of a canyon oriented orthogonally to the southern branch fault (black thick arrows in Figure 4). The normal component is determined by the depression that deepens the sediments by about 750 m (Figure 3). The strong reduction, or absence, of the normal component along the northern branch fault (see at 28 km distance in Figure 3) suggests a variable strain along the TFS1. In the SW side of the investigated area, a secondary fault system (s-FS), also characterized by normal movement, is responsible for the formation of a basin (Figure 5). The s-FS is affected by the TFS1, as suggested by the change of fault orientations, from NW-SE to N-S, at the west side of the reservoir (Figure 4). The clockwise rotation of the s-FS is in agreement with the left-lateral movement of the TFS1, confirming this hypothesis. The narrow trough recognized in the northern portion of the reservoir (Figures 3 and 4) could be tectonically controlled. This is suggested by the discontinuity detected in the seismic line I97209 and by the elongated and straight shape deduced from the morpho-bathymetric image. In the central part of the reservoir, only a few discontinuous faults affect the seafloor (Figure 4). The gas hydrate concentration (Figure 7) is affected by strong lateral variations, partially due to the activity of the fault systems (TFS1, TFS, and s-FS). In particular, high gas hydrate concentrations at the BSR, in a range of 4-16% of the total volume, are detectable in the central area. The gas hydrate concentration close to TFS1 is in a range of 0-6% of the total volume, while along the TFS the gas hydrate concentration is strongly variable, falling to zero close to the Mud Volcano Vualt. This suggests that the gas hydrate reservoir is strongly controlled by the tectonics, expressed by the transtensive and compressive faults bordering the reservoir. The relation between gas hydrate presence and tectonics is highlighted by the analysis of the BSR trend and its relationship with geological features (faults, mud volcano, slumps; Figure 4). The BSR depth is strongly variable along the margin and reaches a maximum thickness of 750 m (see black arrows in Figure 8) around the seismic line I97213. Comparing the map of possible gas hydrate thickness and the structural model, a possible interaction between the TFS1 and the deepening of the BSR can be supposed. The presence of deep fault-controlled canyons, affecting the slope from the shelf break to its middle part (Figure 4), can guide huge quantities of cold and dense turbiditic currents coming from the continental shelf. In the past, this part of the margin experienced intense turbiditic flows, which favoured the deposition of cold sediments on top of the sedimentary prism. These events are confirmed by the erosion affecting the upper slope, recorded in the SE part of the seismic line I97209 (Figure 3), which could be associated with the ice-sheet advance to the shelf edge during the last glacial maximum [32,52,53]. Thus, the large BSR depth to the north of the TFS1 can be justified by a higher input of cold sediments, filling the intra-slope basin (see Figure 5), which locally decreases the thermal field, deepening the base of the hydrate stability zone [54].
The high-resolution morpho-bathymetric image reveals the presence of well-defined slump scars, located on the SW side of the reservoir (see white arrow in Figure 4), in an area characterized by a seafloor dip of about 3.6°. This value is lower than the critical angle for normally compacted terrigenous sediments, which are usually characterized by an internal shear-strength angle (or critical angle) ranging between 15° and 20° [55]. Reference [56] suggests that large submarine mass movements can be triggered by several mechanisms: (1) build-up of excess pore pressure due to a high sedimentation rate; (2) earthquakes; (3) seepage of shallow methane gases; (4) oversteepening; and (5) erosion at the toe of the slope. In the investigated area, the slump scars are located between the two main branches of the active TFS1. This suggests that tectonic activity, responsible for significant earthquakes, combined with the gas present within the sediment pore space, could have triggered the observed slumps. Lateral gas phase changes are supported by lateral seismic velocity changes and by the lateral discontinuity of the BSR. The local discontinuity of the BSR along the seismic lines could be partly related to the horizontal resolution [45], which is 240 m. This value is obtained assuming a dominant frequency of 40 Hz, an average root-mean-square velocity of 1700 m·s⁻¹ at the BSR, and a BSR depth of 2700 m (see the cross-check below). In the western portion of the gas hydrate reservoir, around the s-FS (see Figures 3 and 4), the BSR depth below the seafloor (Figure 8) ranges from 450 to 600 m. This trend is due to the absence, around the fault system, of a strong and continuous BSR, which could have led to misinterpretation of the seismic lines. Along the TFS (Figure 4), the possible gas hydrate thickness (Figure 8) is anomalously low (about 375 m; see white arrow in Figure 8). This trend can be associated with the presence of the Mud Volcano Vualt. In proximity to this structure, the shallow BSR and the low gas hydrate concentration (Figure 7) can be correlated with intense fluid fluxes, probably guided by the TFS. This hypothesis is confirmed by fluid and/or gas fluxes recorded by chirp data and located along the side of the mud volcano, close to a submarine slide. The relation between fluid outflow, slumps, and the mud volcano is discussed in detail in [30]. Moreover, this hypothesis is supported by laboratory experiments and direct observations in different study areas; indeed, some authors suggest that intense hot fluid fluxes change the thermodynamics of the sediments, moving up the base of the gas hydrate stability zone [16,17]. This model of fluid escape can be applied to the Mud Volcano Vualt. It is important to underline that the presence of a strong BSR in the seismic profile I97214 (across the mud volcano) can be justified by assuming a large gas accumulation at the base of the hydrate layer, as already highlighted by other authors [6,57,58]. Probably, the free gas can accumulate because the pore space of a thin layer of sediments above the BSR is partially filled by gas hydrate, which acts as a seal. The presence of a free gas zone and absence of gas hydrate has already been observed by several authors [17,59] in similar environments.
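As a cross-check of the 240 m horizontal-resolution figure quoted above, the value is reproduced by the radius of the first Fresnel zone computed from the stated frequency, velocity, and depth:

```python
import math

f = 40.0        # dominant frequency (Hz)
v_rms = 1700.0  # average root-mean-square velocity at the BSR (m/s)
z = 2700.0      # BSR depth (m)

wavelength = v_rms / f                         # 42.5 m
r_fresnel = math.sqrt(wavelength * z / 2.0)    # radius of first Fresnel zone
print(f"horizontal resolution ~ {r_fresnel:.0f} m")  # ~240 m
```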
In addition, the estimated geothermal gradient close to the volcano shows values greater than 40 °C/km (black thick arrow in Figure 9), higher than in the central area, which is characterized by an average geothermal gradient of 37.5 °C/km (unmasked area). Moving towards the continental margin, where the BSR is only locally present, lower geothermal gradient values (35-37.5 °C/km) prevail (Figure 9). Finally, a 3D model of gas hydrate concentration from the seafloor to the BSR was obtained. This model divides the space between the seafloor and the BSR into cells, and the gas hydrate amount was estimated within each cell. Thus, the estimated total volume of hydrate, in the area (600 km²) where the interpolation is reliable, is 16 × 10⁹ m³. Considering that the gas hydrate concentration is affected by errors of about ±25%, as deduced from sensitivity tests [44,48] and from the error analysis related to the interpolation procedure (Figure 2), the estimated amount of gas hydrate can vary in the range 12 × 10⁹-20 × 10⁹ m³. Moreover, considering that 1 m³ of gas hydrate corresponds to 140 m³ of free gas at standard conditions, the total free gas trapped in this reservoir is in the range 1.68 × 10¹²-2.8 × 10¹² m³. This estimate does not take into account the free gas contained within the pore space below the hydrate layer, so these values could be underestimates. Conclusions The applied procedure allowed us to characterize the gas hydrate reservoir present offshore the Antarctic Peninsula and to draw the following conclusions: - A complex structural setting of the sedimentary prism and the gas hydrate reservoir was defined by the structural interpretation of the depth-migrated seismic images. TFS1 and TFS, characterized by transtensive and compressive movements respectively, border the morphological high corresponding to the central part of the reservoir. Moreover, a secondary fault system (s-FS), probably controlled by TFS1, borders the western side of the reservoir. - The source of a gravitational instability, well recognised in the morpho-bathymetric image, is associated with the tectonic activity of a fault segment (part of the TFS1) and is likely to be favoured by the fluid content coming from the gas system. - The gas hydrate reservoir is characterized by a regional geothermal field of about 37.5 °C/km. As expected, the geothermal gradient shows a slow increase from the inner to the frontal part of the prism. Some local high values (about 40 °C/km) are associated with the presence of the mud volcano. - The BSR and the gas hydrate distribution within the sediments are strongly controlled by tectonics. High gas concentrations are detected in the central part of the reservoir, where no fault deformation affects the sediments. - The 3D gas hydrate volume was estimated to be in the range 12 × 10⁹-20 × 10⁹ m³; thus, the free gas volume at standard conditions is in the range 1.68 × 10¹²-2.8 × 10¹² m³. Figure 1. Shaded relief of low-resolution bathymetric data downloaded from the website [37]. The Mercator projection is adopted. Thick white arrows indicate a schematic NW-SE extension across Bransfield Basin (B.B.). The direction of subduction of the "Former" Phoenix Plate is indicated with a thin white arrow, while transform fault movements are indicated with double white thin arrows. The dashed rectangle indicates the analyzed area. The locations of the seismic lines analysed in this work are indicated as thin solid lines; the thick lines refer to the lines shown in Figures 2, 4 and 5.
E.I.: Elephant Island; F.T.P.: Frontal Thrust Prism; S.S.I.: South Shetland Islands; S.F.Z.: Shackleton Fracture Zone. Figure 2. Percentage error of the interpolated velocities along the line I97214 at the seafloor (SF, stars), the horizon 1 (HOR1; solid triangles), and the BSR (crosses). See text for details. Figure 3. PreSDM section of line I97209, on which the velocity model is superimposed. Thick red lines: main faults. Dashed red lines: minor faults and fractures. The BSR and the main geological features are reported. TFS1: Transcurrent Fault System 1. Vertical exaggeration 1:4. Crossing lines are indicated. Figure 4. Shaded relief integrating the seismic profile interpretations and the morpho-bathymetric information. The recognized fault systems are indicated with thick red lines; minor faults and fractures are indicated with dashed red lines. White arrows indicate slumps; light blue arrows indicate the recognized mud volcanoes; and black thick arrows indicate a submarine canyon. Anticlines and sharp folds are indicated with green symbols. s-FS: secondary Fault System; TFS: Thrust Fault System; V.V.: Volcano Vualt. Figure 5. PreSDM section of line I97213, on which the velocity model is superimposed. The s-FS, the TFS1, the BSR, and the main geological features are indicated. Vertical exaggeration 1:4. Minor faults (red dashed lines) and major faults (red thick lines) are indicated. Crossing lines are reported. Figure 6. PreSDM of line I97214, on which the velocity model is superimposed. The BSR and the main geological features are indicated. Fractures are indicated with dashed red lines. TFS: Thrust Fault System; SR_1: Seismic Reflector 1. Vertical exaggeration 1:4. Crossing lines are indicated. Figure 7. (A) Map of velocity anomalies extracted just above the BSR. Contour lines are plotted every 50 m. White arrows indicate low velocity anomalies; the black arrow indicates the high velocity anomalies; and the yellow arrow indicates the Mud Volcano Vualt. (B) Map of the gas hydrate amount extracted just above the BSR. Contour lines are plotted every 2%. White thick arrows indicate low gas hydrate amounts corresponding to the low velocity anomalies, and the black arrow indicates the high gas hydrate amount. A mask is superimposed on the images to visualize the reliable area. The structural interpretation is superimposed as red solid and dashed lines. Figure 8. Seismic potential gas hydrate thickness expressed in mbsf. Contour lines are plotted every 75 m. Black arrows indicate the higher and the white arrow the lower BSR depth detected within the reservoir. A mask is superimposed on the images to visualize the reliable data. The structural interpretation is superimposed as white solid and dashed lines. Figure 9. Maps of the difference between seismic and calculated BSR depths produced by using the GIS software (see text for details), for geothermal gradients equal to (A) 32.5 °C/km, (B) 35 °C/km, (C) 37.5 °C/km, and (D) 40 °C/km. The mask limits the central area. Black arrows indicate the Mud Volcano Vualt.
Assembly of CdS Quantum Dots onto Hierarchical TiO2 Structure for Quantum Dots Sensitized Solar Cell Applications Quantum dot (QD) sensitized solar cells based on a hierarchical TiO2 structure (HTS), consisting of spherical nano-urchins on a transparent conductive fluorine-doped tin oxide glass substrate, were fabricated. The HTS was synthesized by a hydrothermal route, and the CdS quantum dots were grown by the successive ionic layer adsorption and reaction deposition method. The quantum dot sensitized solar cell based on the hierarchical TiO2 structure shows a current density JSC = 1.44 mA/cm2, VOC = 0.46 V, FF = 0.42, and η = 0.27%. The QDs provide a high surface area, and the nano-urchins offer a highway for fast charge collection and multiple scattering centers within the photoelectrode. Introduction In the modern age, solar cells have attracted significant attention due to their promising applications in energy generation devices. Since the pioneering report by O'Regan and Grätzel in 1991, dye-sensitized solar cells have been investigated extensively all over the world [1-11]. The quantum dot sensitized solar cell (QDSSC) has received wide attention recently because QDs have several advantages over dye sensitizers, such as tunable energy gaps [12], high absorption coefficients [13], and the generation of multiple electron-hole pairs under high-energy excitation [14]. The TiO2 nanoparticle based photoelectrode showed considerable power conversion efficiency over a large surface area with the attachment of dye molecules. M. Pavan et al. reported an oxide heterojunction solar cell entirely produced by spray pyrolysis onto fluorine-doped tin oxide (FTO) covered glass substrates [15]. However, the irregular stacking of TiO2 nanocrystallites has been found to limit electron transport and to decrease the electron lifetime, because of the random network of crystallographically misaligned crystallites and lattice mismatches at the grain boundaries [15-18]. It is accepted that the power conversion efficiency of photoelectrodes depends strongly on the morphology and structure of the TiO2. In order to increase the photovoltaic performance through excellent electron transport and light scattering ability, one-dimensional nanostructures, such as nanorods (NRs), nanowires (NWs), or nanotubes (NTs), have been studied as photoelectrode materials for sensitized solar cells [19-23]. Due to the low specific surface area, ascribed to the larger diameter and the wide gaps between neighbouring NWs [23,24], TiO2 NW based photoelectrodes have not shown a remarkable enhancement of power conversion efficiency. To overcome this problem, hierarchically structured materials composed of nanocrystallites that form large micro-spheres have been proposed. Nanocrystallites can provide excellent light scattering together with a large surface area for sensitizer uptake. In these hierarchical materials, slow trap-limited charge transport remains a fundamental problem. To address it, nano-urchin (NU) TiO2 is formed by clustering nanowires, which have a mean diameter of about 50 nm and a length of a few micrometers, into a radially aligned structure. There are a few recent examples concerning hierarchical TiO2 structures (HTS), either rutile TiO2 on FTO glass or anatase TiO2 on a Ti foil substrate, for improving the power conversion efficiency [19,24].
In the present work, a hierarchical TiO2 structure (HTS) consisting of spherical nano-urchins was synthesized through a hydrothermal method. The CdS QDs were assembled by successive ionic layer adsorption and reaction (SILAR). The HTS/CdS QD based photoelectrode was used to improve the power conversion efficiency of the quantum dot sensitized solar cell. Synthesis of Hierarchical TiO2 Structure The hierarchical TiO2 structure (HTS) was grown on the FTO substrate. In a typical synthesis, the substrate was ultrasonically cleaned sequentially in acetone, isopropyl alcohol, and deionized water for 15 min and was finally dried under nitrogen flow. Separately, 1 mL of titanium isopropoxide was added dropwise to a 1:1 mixture of deionized water and concentrated (35%) hydrochloric acid to obtain a clear transparent solution. The substrate was placed at an angle in a 100 mL Teflon liner, and the precursor solution was added to it. The Teflon liner was loaded into an autoclave, which was placed in a furnace. The growth was carried out at 180 °C for 15 h. Deposition of CdS Quantum Dots (QD) The CdS quantum dots were deposited on the HTS films by the successive ionic layer adsorption and reaction (SILAR) method. The HTS electrode was exposed to Cd2+ and S2− ions by successive immersion in an ethanolic solution of 0.5 M Cd(NO3)2 and a methanolic solution of 0.5 M Na2S. The film was dipped into the 0.5 M Cd(NO3)2 solution for 1 min and rinsed with ethanol, and then dipped into the 0.5 M Na2S solution for 1 min and rinsed with methanol. These dipping procedures are considered one cycle, and the coating procedure was repeated 10 times. Preparation of Electrolyte Solution Polysulfide electrolytes were prepared by mixing suitable quantities of Na2S, S, and KCl powders in a water/methanol solution taken in the ratio 3/7. Fabrication of QDSCs The QD-adsorbed HTS was used as the working electrode and platinum-coated FTO glass as the counter electrode. The electrodes were assembled into a sealed cell with a cello-tape spacer and binder clips, with an active area equal to 0.36 cm2. The electrolyte was injected from the edges into the open cell, and the cell was tested. The schematic of the studied QDSSCs is shown in Figure 1. Characterization X-ray diffraction (XRD) analysis of the HTS/FTO films was carried out using a multipurpose X-ray diffractometer (Bruker D8 Discover, Bruker AXS GmbH, Karlsruhe, Germany) with Cu Kα source radiation. The surface morphology of the films was investigated with a JEOL (JSM-7600F) field emission scanning electron microscope (Jeol, Peabody, MA, USA). The size of the CdS QDs was measured by a JEOL (JEM-2100F) field emission transmission electron microscope (FETEM) (Jeol, Peabody, MA, USA). Optical absorption studies were made at room temperature by using a UV-Vis-NIR spectrophotometer (JASCO-V 670) (Jasco, Halifax, NS, Canada) in the wavelength range 200-800 nm. The current-voltage and capacitance-voltage characteristics were investigated using a Semiconductor Characterization System SC-4200 from Keithley (Keithley Instruments, Solon, OH, USA). The films were illuminated by a Class-BBA solar simulator (PV Measurements, Boulder, CO, USA), and a TM-206 solar power meter (Tenmars, Taipei, Taiwan) was used for measuring the light intensity. Figure 2 shows the XRD patterns of the TiO2 nanowires grown on the FTO glass substrate.
The nanowires grown in this work, regardless of the substrate used, were found to have the rutile phase, by matching the observed and standard "d" values of the TiO2 nanostructure. The XRD data (Figure 2) show good agreement with standard TiO2 (PDF file #01-086-0147, P42/mnm, a = b = 4.594 Å and c = 2.958 Å). In the XRD spectrum, the diffraction peaks corresponding to the FTO are denoted by the symbol "F". The field emission scanning electron microscope (FESEM) images of the TiO2 nanostructure on FTO glass are shown in Figure 3. From Figure 3a,b we find that the morphology of the TiO2 is a hierarchical structure. It is believed that a hierarchical TiO2 structure (HTS) has three structural levels: the first level, the TiO2, is made up of nano-urchins (NUs); the second level, the NUs, is composed of nanowires (NWs); and the third level, the NWs, is made up of nanoparticles (NPs) [25]. The comparison of the absorption spectra of the HTS and of the CdS QDs deposited on the HTS is shown in Figure 5. The deposition of CdS QDs on the HTS structure improved the optical absorbance in the visible region. The absorption edge, obtained from the intersection of the sharply decreasing region of a spectrum with its baseline, lies around 370 nm for the HTS and shifts to a longer wavelength, around 520 nm, after deposition of the CdS QDs. Corresponding to this absorption edge, the band gap was calculated to be 2.7 eV. The value reported for bulk CdS is 2.42 eV [26]. The band gap of the CdS particles deposited on the HTS films is higher than that of bulk CdS, which indicates that the size of the CdS particles is still within the quantum-dot regime. Estimated from the absorption edge of the absorption spectra, the radius of the CdS particles was calculated to be 2.37 nm by using the hyperbolic band model (HBM) equation [27], $E_{nano} = \sqrt{E_{bulk}^2 + \frac{2\hbar^2 E_{bulk}}{m^*}\left(\frac{\pi}{R}\right)^2}$, (1) where $E_{bulk}$ is the bulk band gap, $E_{nano}$ is the band gap of the nanomaterial, and $m^*$ is the effective mass of the electron in bulk CdS ($m^* = 0.21 m_0$). Hence, the particle size, estimated as 2R, is 4.74 nm. A new approach implements the benefits of one-dimensional nanostructures by suitably combining the NU TiO2 and nanoparticles (NP) to construct a hierarchical TiO2 structure (HTS) photoelectrode for the QDSSCs. In particular, the HTS provides a high surface area for sufficient QD deposition by the SILAR technique, whereas the NU TiO2 particles offer a highway for fast charge collection and multiple scattering centers within the photoelectrode. QDSSCs made of the HTS film have exhibited remarkable improvements in power conversion efficiency in comparison to reference cells made with an NP film [28-30]. To evaluate the photovoltaic performance of the HTS, the synthesized products were applied as a photoanode for a QDSSC. The J-V characteristics of the quantum dot sensitized (FTO/HTS/CdS QDs/Pt/FTO) solar cell were measured using a solar simulator. The fourth quadrant of the J-V and P-V characteristics is shown in Figure 6. The photocurrent is defined as the current produced under light irradiation due to the generation of free charge carriers by absorption of photons within the depletion layer. Figure 6 demonstrates that the value of J increases with increasing illumination intensity, while the photovoltage increases up to 0.478 V at 90 mW/cm2 and then decreases to 0.46 V at 100 mW/cm2. The values of the short-circuit current density Jsc and open-circuit voltage Voc are found to be 1.44 mA/cm2 and 0.46 V under 100 mW/cm2 illumination, respectively, with an efficiency of 0.27%.
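As a quick consistency check of the quoted photovoltaic parameters, the standard relation η = FF · Jsc · Voc / Pin reproduces the reported efficiency (the fill factor FF = 0.42 is taken from the abstract):

```python
j_sc = 1.44e-3  # short-circuit current density (A/cm^2)
v_oc = 0.46     # open-circuit voltage (V)
ff = 0.42       # fill factor
p_in = 0.1      # incident power density (W/cm^2), i.e. 100 mW/cm^2

eta = ff * j_sc * v_oc / p_in
print(f"efficiency: {100.0 * eta:.2f} %")  # ~0.28 %, consistent with 0.27 %
```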
Based on the literature, we expected the assembly of CdS quantum dots onto the hierarchical TiO2 structure to give good results for quantum dot sensitized solar cells. However, with our fabricated cell, we did not observe such results. The probable reason for the lower efficiency may be that the QDs embedded in the TiO2 nano-urchin matrix provide many pathways, resulting in an increased injection time. From the power-voltage curve, we can extract more information about the power delivered by this device. The electric power first increases with increasing voltage, reaches its maximum value, and then decreases to zero as the voltage increases further. The maximum value of the power indicates how much power the QDSSC can deliver to an external load and is defined as Pmax = IM × VM, where IM and VM are the current and voltage at the maximum power point for each illumination intensity. Figure 7a shows the relation between the short-circuit current density Jsc and the open-circuit voltage Voc. It is seen that Jsc increases exponentially with Voc; this trend obeys the following relation [31,32]: $J_{sc} = J_0\left[\exp\left(\frac{qV_{oc}}{nkT}\right) - 1\right]$, (2) where n is the diode ideality factor, k is Boltzmann's constant, q is the elementary charge, and J0 is the reverse saturation current density. By fitting the above equation to the Voc-Jsc graph, the ideality factor of the fabricated cell is found to be 3 [33]. For an ideal p-n junction, the ideality factor is approximately 1 at room temperature. The probable reasons for larger ideality factors are several kinds of defects, as well as local non-linear shunts anywhere in the cell area, which are responsible for ideality factor values greater than 1 [34]. The values of the short-circuit current density Jsc of the solar cell were obtained from the J-V curves under light illumination. In Figure 7b, it is observed that there exists a non-linear relation between current and light intensity in the low-intensity region. This is due to the large shunt resistance in the device. The linear relation is fully recovered after the light intensity is increased to 40 mW/cm2, indicating that the photo-filling effect has saturated the non-radiative recombination centers. Figure 7c shows the Jsc-Voc relation under different light intensities and reveals the dark-diode behaviour of the fabricated cell, with a high series resistance. Conclusions A novel hierarchical TiO2 structure on a transparent conductive FTO glass substrate was synthesized by a hydrothermal route. The CdS quantum dots were grown by the successive ionic layer adsorption and reaction deposition method. The FESEM investigations reveal that the HTS consists of spherical nano-urchins, and FETEM indicates that the CdS QDs are markedly dispersed on the surface of the TiO2 nanostructure. The QDs provide a high surface area, and the nano-urchins offer a highway for fast charge collection and multiple scattering centers within the photoelectrode, which are responsible for the improvement of the power conversion efficiency. We anticipate that photoelectrodes based on HTS consisting of nano-urchins could be promising for the fabrication of high-efficiency perovskite and quantum dot sensitized solar cells.
A Non-Relativistic Model of Tetraquarks and Predictions for Their Masses from Fits to Charmed and Bottom Meson Data We investigate a non-relativistic model of tetraquarks, which are assumed to be compact and to consist of diquark-antidiquark pairs. We fit, for the first time, basically all currently known values for the measured masses of 45 mesons, including both charmed and bottom mesons, to the model and predict masses of tetraquarks as well as diquarks. In particular, we find masses of four axial-vector diquarks, i.e. $qc$, $cc$, $qb$, and $bb$, where $q = u,d$, and 24 ground-state tetraquarks, including both heavy-light tetraquarks ($qc\overline{qc}$ and $qb\overline{qb}$) and heavy tetraquarks ($cc\overline{cc}$ and $bb\overline{bb}$). In general, our results for the masses of $qb\overline{qb}$, $cc\overline{cc}$, and $bb\overline{bb}$ are largely comparable with other reported results, whereas our results for the masses of $qc\overline{qc}$ are slightly larger than what has been found earlier. Finally, we identify some of the obtained predictions for masses of tetraquarks with masses of experimental tetraquark candidates, and especially, we find that $\psi(4660)$, $Z_b(10610)$, and $Z_b(10650)$ could be described by the model. I. INTRODUCTION The concept of hadrons was introduced in 1962 by Okun [1] and developed into the quark model in 1964 independently by Gell-Mann [2] and Zweig [3,4], describing ordinary mesons ($q\overline{q}$) and baryons ($qqq$) in terms of quarks $q$ and antiquarks $\overline{q}$. In addition to the quark model, the possible existence of exotic hadrons, such as tetraquarks ($qq\overline{qq}$) and pentaquarks ($qqqq\overline{q}$), consisting of four or more quarks was proposed in Gell-Mann's seminal work [2], but it was not until the beginning of the 21st century that the first claimed observations of exotic hadrons were made [5]. Today, a large amount of data, obtained at both electron-positron and hadron colliders, has provided evidence for the possible existence of such exotic hadrons. Concerning tetraquarks, the first discovery was made in 2003 by the Belle collaboration, which observed a resonance peak at (3872.0 ± 0.6) MeV [6]. The resonance was named X(3872) (and is now sometimes referred to as $\chi_{c1}(3872)$ [7]) and was then confirmed by several other experiments (see e.g. the reviews [8,9] and references therein). Many proposed exotic hadrons only appear in one decay mode, although X(3872) can be observed in several other decay modes, as was discovered by the BaBar [10], CDF [11], and DØ [12] collaborations. Later, the ATLAS, CMS, and LHCb collaborations were able to contribute a massive amount of data on the electrically neutral X(3872), and its current mass is determined to be (3871.69 ± 0.17) MeV [7,13]. It is the most studied exotic hadron, but its nature is still fairly unknown. It has properties similar to those of the charmonium state $c\overline{c}$ and was first believed to be an undiscovered excited state of $c\overline{c}$, but a closer investigation of the decay modes $X(3872) \to J/\psi\,\pi^+\pi^-$ and $X(3872) \to J/\psi\,\omega$ shows violation of isospin [14,15], which is unusual for $c\overline{c}$. If X(3872) is an exotic hadron, then a common description is that it contains two quarks and two antiquarks, forming $uc\overline{uc}$. However, it is still an open problem how it is bound together.
Since the discovery of X(3872), many new exotic hadron candidates have been claimed to be observed with final states of a pair of heavy quarks and a pair of light antiquarks, which are labeled as X, Y, and Z states by experimental collaborations and collectively referred to as XYZ states [16]. Examples of candidates for XYZ states are $Z_c(3900)$ [17,18], $Z_c(4025)$ [19,20] (now known as $X(4020)^\pm$ [7]), $Z_b(10610)$ [21], and $Z_b(10650)$ [21]. The dynamics of the XYZ systems involves both short- and long-distance behaviors of QCD, which makes theoretical predictions difficult. Hence, many competing phenomenological models currently exist for such states, including lattice QCD, compact tetraquark states, molecular states, QCD sum rules, coupled-channel effects, dynamically generated resonances, and non-relativistic effective field theories (see Ref. [22] and references therein). Many models view the exact nature of the inner structure of tetraquarks to be compact and to consist of so-called diquark-antidiquark pairs [23]. A diquark is a bound quark-quark pair, whereas an antidiquark is a bound antiquark-antiquark pair. These pairs are not by themselves colorless, but are proposed in the context of tetraquarks to form colorless combinations. Exotic hadrons made of such pairs are thus not ruled out by QCD, but cannot be accommodated within the naive quark model. Modeling of tetraquarks containing only heavy quarks is therefore of special interest and easier to study theoretically, since several assumptions can be justified. Recently, the LHCb collaboration reported the observation of a doubly-charmed and doubly-charged baryon $\Xi_{cc}^{++}$ [24], which has led to further attention on heavy-quark systems as the description of exotic hadrons. Many tetraquark candidates are not possible to describe within quark models, since they have electric charge, and therefore cannot be charmonium or bottomonium, but they are potential candidates for hidden-charm or hidden-bottom tetraquarks, molecular systems of charmed or bottom mesons, or hadroquarkonia (see Ref. [25] and references therein). In this work, we will study a non-relativistic model describing tetraquarks as composed of diquarks and antidiquarks, which interact much like ordinary quarkonia. By performing numerical fits to the masses of mesons, the masses of tetraquarks and of the underlying diquarks will be predicted. For the first time, we will use both charmed and bottom mesons in the same fit, and data on in total 45 charmed and bottom mesons (e.g. charmonium, bottomonium, D mesons, and B mesons) will be considered. We will predict the masses of 24 tetraquark states, which is more than what has previously been presented in the literature. This work is organized as follows. In Sec. II, we present the non-relativistic diquark-antidiquark model describing tetraquarks that can predict their masses and describe the numerical fitting procedure for the meson data. Then, in Sec. III, we perform numerical fits of this model and state the results of the fits, including the predicted masses of diquarks and tetraquarks. We will also present a thorough discussion of the results obtained and comparisons with other works, both theoretical and experimental. Finally, in Sec. IV, we summarize our main results and state our conclusions. II. MODEL AND FITTING PROCEDURE In this section, the model of tetraquarks viewed as diquark-antidiquark systems is presented and the method used to assign quantitative masses to some tetraquark states is derived.
This is performed by firstly considering a quark-antiquark system and describing the Hamiltonian of that system with an unperturbed one-gluon exchange (OGE) potential and a perturbation term taking the spin of the system into account. This gives rise to a model with four free parameters, which are then fitted to meson data. Secondly, the model is expanded to incorporate composite quark-quark systems, which are called diquarks (the antiquark-antiquark systems are called antidiquarks). Thirdly, with the masses of the diquarks determined, the initial stage of the model describing quark-antiquark systems is used to describe the diquark-antidiquark systems, which are interpreted as bound states of tetraquarks. (Fig. 1 gives a schematic overview: first, a model of the quark-antiquark system $q_1\overline{q}_2$; second, an extrapolation of the model to the quark-quark, i.e. diquark, system $q_1q_2$; third, a tetraquark $q_1q_2\overline{q_1q_2}$ modeled in the same way as the quark-antiquark system, but with diquarks and antidiquarks as constituents.) A. Model Procedure The modeling procedure can be outlined and summarized as follows: 1. Fitting a quark-antiquark model to meson data to obtain the parameters of the effective potential. 2. Using that set of parameters to determine the diquark and antidiquark masses by changing the color constant and the string tension of the potential. 3. Considering the diquarks and antidiquarks as constituents of the tetraquarks to predict the tetraquark masses; see Fig. 1 for a schematic overview of the modeling procedure. We begin by considering the interaction between a quark and an antiquark. In quark bound-state spectroscopy, a commonly used potential describing the unperturbed contribution is the so-called Cornell potential [26] $V(r) = \frac{\kappa\alpha_S}{r} + br$, (1) where $\kappa$ is a color factor associated with the color structure of the system, $\alpha_S$ the fine-structure constant of QCD, and $b$ the string tension. The first term in Eq. (1), i.e. $V_V(r) \equiv \kappa\alpha_S/r$, is the Coulomb term and is associated with the Lorentz vector structure. It arises from the OGE between the quarks. The second term in Eq. (1) is associated with the confinement of the system. A non-relativistic approach is legitimate under the condition that the kinetic energy is much less than the rest masses of the constituents, which is usually the case when considering heavy-quark bound states. We formulate the Schrödinger equation in the center-of-mass frame. Using spherical coordinates, one can factorize the angular and radial parts of this Schrödinger equation. Now, let $\mu \equiv m_1 m_2/(m_1 + m_2)$, where $m_1$ and $m_2$ are the constituent masses of quark 1 and quark 2, respectively. In the case that $m \equiv m_1 = m_2$, it holds that $\mu = m/2$. Thus, the time-independent radial Schrödinger equation can be written as $-\frac{1}{2\mu}\left[\psi''(r) + \frac{2}{r}\psi'(r) - \frac{L(L+1)}{r^2}\psi(r)\right] + V(r)\psi(r) = E\psi(r)$, (2) with the orbital quantum number $L$ and the energy eigenvalue $E$. Substituting $\psi(r) \equiv r^{-1}\varphi(r)$, Eq. (2) transforms into $-\frac{1}{2\mu}\varphi''(r) + \left[V(r) + \frac{L(L+1)}{2\mu r^2}\right]\varphi(r) = E\varphi(r)$. (3) Based on the Breit-Fermi Hamiltonian for OGE, one can include a spin-spin interaction of the form [27-31] $V_S(r) = -\frac{8\pi\kappa\alpha_S}{3m_1 m_2}\,\delta^3(\mathbf{r})\,\mathbf{S}_1\cdot\mathbf{S}_2$. (4) In this model, we incorporate the spin-spin interaction $V_S(r)$ in the unperturbed potential $V(r)$ by replacing the Dirac delta function with a smeared Gaussian function, depending on the parameter $\sigma$, $\delta^3(\mathbf{r}) \to \left(\frac{\sigma}{\sqrt{\pi}}\right)^3 e^{-\sigma^2 r^2}$, (5) as performed in Ref. [32]. Now, Eq. (3) takes the simple form $-\frac{1}{2\mu}\varphi''(r) + V_{\rm eff}(r)\varphi(r) = E\varphi(r)$, (6) where the effective potential $V_{\rm eff}(r)$ is given by $V_{\rm eff}(r) = \frac{\kappa\alpha_S}{r} + br + \frac{L(L+1)}{2\mu r^2} - \frac{8\pi\kappa\alpha_S}{3m_1 m_2}\left(\frac{\sigma}{\sqrt{\pi}}\right)^3 e^{-\sigma^2 r^2}\,\mathbf{S}_1\cdot\mathbf{S}_2$, (7) taking into account the spin-spin interaction. Equation (6) can be solved numerically for the energy eigenvalue $E$ and the reduced wavefunction $\varphi(r)$.
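A minimal Python sketch of such a numerical solution is given below, discretizing Eq. (6) on $[0, r_0]$ with Dirichlet boundary conditions by second-order finite differences; the parameter values used here are illustrative placeholders, not the fitted values of Tab. III.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Illustrative parameters (units: GeV, with hbar = c = 1, so r is in GeV^-1).
m1 = m2 = 1.5                                  # constituent masses
mu = m1 * m2 / (m1 + m2)
kappa, alpha_s, b, sigma = -4.0 / 3.0, 0.5, 0.15, 1.0
L, S1dotS2 = 0, 0.25                           # <S1.S2> = 1/4 for S = 1

def V_eff(r):
    # Effective potential of Eq. (7): Cornell + centrifugal + smeared spin-spin.
    cornell = kappa * alpha_s / r + b * r
    spin = (-8.0 * np.pi * kappa * alpha_s / (3.0 * m1 * m2)
            * (sigma / np.sqrt(np.pi)) ** 3 * np.exp(-sigma**2 * r**2) * S1dotS2)
    return cornell + spin + L * (L + 1) / (2.0 * mu * r**2)

r0, N = 30.0, 4000                             # box size chosen so E is stable
r = np.linspace(0.0, r0, N + 2)[1:-1]          # interior points; phi(0) = phi(r0) = 0
h = r[1] - r[0]
diag = 1.0 / (mu * h**2) + V_eff(r)            # -phi''/(2 mu) by central differences
off = -np.ones(N - 1) / (2.0 * mu * h**2)
E = eigh_tridiagonal(diag, off, select='i', select_range=(0, 0),
                     eigvals_only=True)[0]     # lowest eigenvalue
print(f"E = {E:.4f} GeV, M = m1 + m2 + E = {m1 + m2 + E:.4f} GeV")
```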
The mass $M$ of the bound quark-antiquark system can then be expressed as $M = m_1 + m_2 + E$. (8) Note that this model contains five unknown free parameters, namely the masses $m_1$ and $m_2$ of the two constituents, the fine-structure constant $\alpha_S$ of QCD, the string tension $b$, and the parameter $\sigma$ of the spin-spin interaction. B. Color Structure Hadrons are only stable when the colors of their constituent quarks sum up to zero, and thus, every naturally occurring hadron is a color singlet under the symmetry group SU(3). This means that a hadron only occurs if the product color state of the constituent quarks decomposes into an irreducible representation with dimension equal to one. Mesons consist of quarks in the color triplet state 3 and antiquarks in the color antitriplet state $\overline{3}$, yielding product color states that can be decomposed into the irreducible representations $3 \otimes \overline{3} = 1 \oplus 8$, including a color singlet 1, and thus describing a naturally occurring hadron. In our modeling procedure, we consider the system of a diquark consisting of two quarks and an antidiquark consisting of two antiquarks in the triplet state, yielding a decomposition into a color singlet. The difference in color structure between the quark-antiquark and quark-quark systems allows us to extend the model of the quark-antiquark system to also be valid for a quark-quark system by only changing the color factor $\kappa$ and the string tension $b$. The SU(3) color symmetry of QCD implies that the combination of a quark and an antiquark in the fundamental color representation can be reduced to $|q\overline{q}\rangle$: $3 \otimes \overline{3} = 1 \oplus 8$, which gives the resulting color factor for the color singlet as $\kappa = -4/3$ for the quark-antiquark system. When combining two quarks in the fundamental color representation, it reduces to $|qq\rangle$: $3 \otimes 3 = \overline{3} \oplus 6$, i.e. a color antitriplet $\overline{3}$ and a color sextet 6. Similarly, when combining two antiquarks, it reduces to a triplet 3 and an antisextet $\overline{6}$. Furthermore, combining an antitriplet diquark and a triplet antidiquark yields $|[qq][\overline{q}\overline{q}]\rangle$: $\overline{3} \otimes 3 = 1 \oplus 8$, thus forming a color singlet for which the Coulomb part of the potential is attractive. The antitriplet state is attractive, with a corresponding color factor of $\kappa = -2/3$, while the sextet state is repulsive, with a color factor of $\kappa = +1/3$. Therefore, we only consider diquarks in the antitriplet state. Thus, the effect of changing from a quark-antiquark system with color factor $\kappa = -4/3$ to a diquark system with color factor $\kappa = -2/3$ is equivalent to introducing a factor of 1/2 in the Coulomb part of the potential for the quark-antiquark system. It is common to view this factor of 1/2 as a global factor, since it comes from the color structure of the wavefunction, thus also dividing the string tension $b$ by a factor of 2. For further details, see Ref. [31]. We apply this change of the color factor when considering diquarks. Given the parameters of the potential, we obtain the mass of the corresponding diquark in a similar manner as for the quark-antiquark system, only changing the string tension $b \to b/2$ and $\kappa \to \kappa/2$, due to the change in the color structure of the system, and thus finding the energy eigenvalues of the diquark systems.
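The quoted color factors can be checked from the SU(3) quadratic Casimir invariants, since $\kappa$ is proportional to $\langle F_1 \cdot F_2\rangle = \frac{1}{2}[C_2({\rm pair}) - C_2({\rm rep}_1) - C_2({\rm rep}_2)]$; a short verification (all constituents are in the fundamental representation, with $C_2(3) = C_2(\overline{3}) = 4/3$):

```python
from fractions import Fraction

# Quadratic Casimirs C2 of the relevant SU(3) representations.
C2 = {'1': Fraction(0), '3': Fraction(4, 3), '3bar': Fraction(4, 3),
      '6': Fraction(10, 3), '8': Fraction(3)}

def kappa(pair, rep1='3', rep2='3'):
    return (C2[pair] - C2[rep1] - C2[rep2]) / 2

print(kappa('1'))     # -4/3: quark-antiquark color singlet
print(kappa('3bar'))  # -2/3: attractive diquark antitriplet
print(kappa('6'))     #  1/3: repulsive diquark sextet
```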
C. Fitting Procedure The fitting procedure of the model is described as follows: a fit of the four parameters of the model to experimental data is performed by finding the parameters $v \equiv (m, \alpha_S, b, \sigma)$, where $m \equiv m_1 = m_2$, that minimize the function $\chi^2(v) = \sum_{i=1}^{N_{\rm data}} w_i \left[M_{{\rm model},i}(v) - M_{{\rm exp},i}\right]^2$, (9) where $N_{\rm data}$ is the number of experimental data points and $M_{{\rm exp},i}$ is the experimental mass corresponding to the mass $M_{{\rm model},i}(v)$, which is given in the model as a function of $v$. Each term in Eq. (9) is weighted with $w_i$ for each mass. Following Ref. [33], we will only consider $w_i = 1$, giving the same statistical significance to all states used as input. It should be noted that, choosing $w_i = 1$, the $\chi^2$ function in Eq. (9) will be dimensionful. However, we will choose to present the values of this $\chi^2$ function (as well as the individual pulls) without units. A. Data Sets The model will be numerically fitted to five different data sets. First, a data set consisting entirely of charmonium mesons (in total 15 mesons). Second, a data set consisting entirely of bottomonium mesons (in total 15 mesons). Third, a data set consisting of D mesons (in total 8 mesons). Fourth, a data set of B mesons (in total 7 mesons). Fifth, a fit to both the charmonium and bottomonium meson data will be made (in total 30 mesons). A meson consisting of two charm quarks is a good candidate to fit to the model, since it has a relatively large constituent mass compared to light quarks, and therefore, a non-relativistic approach can be justified. Both charmonium and bottomonium are heavy mesons and well suited to the restrictions of this model. For reference, the data set of charmonium mesons is called I, the data set of bottomonium mesons II, the data set consisting of only D mesons III, the data set consisting of only B mesons IV, and finally, the data set containing both charmonium and bottomonium mesons V. In Tab. I, the data used are presented. B. Numerical Fits and Results In this subsection, the results of the fitted data sets, and subsequently, the resulting masses of different diquarks and tetraquarks are presented. The procedure can be divided into three main parts. First, fitting the model to each data set I-V to obtain five sets of parameter values for the free parameters of the model. Next, using the sets of parameter values obtained by fitting data sets I-IV to calculate the masses of different diquarks. In detail, the sets of parameter values obtained by fitting data sets I, II, III, and IV are used to calculate the masses of the $cc$, $bb$, $qc$, and $qb$ diquarks, respectively, with $q$ being either an up quark ($u$) or a down quark ($d$). Finally, the calculated diquark masses are used to calculate the masses of different tetraquarks. The set of parameter values used for this computation is the one obtained by fitting data set V to the model. The number of free parameters when fitting the model to data sets I and II is four, since the masses of the constituent quarks for those data sets are equal, i.e. $m = m_1 = m_2$. When fitting the model to data sets III-V, we use the values for the constituent masses of the charm and bottom quarks obtained in the fits to data sets I and II, which means that the number of free parameters is three. (Throughout, we use spectroscopic notation $N^{2S+1}L_J$, where $N$ denotes the principal quantum number, $S$ the total spin quantum number, $L$ the orbital quantum number, and $J$ the total angular momentum quantum number, and $q = u, d$.)
Also, when considering data sets III and IV, we use the value 0.323 GeV as the constituent mass of an up quark or a down quark, which is taken from Ref. [34] (see also p. 1 in the review "59. Quark Masses" [7]). In practice, we are solving Eq. (6) in the eigenbasis of the spin operators $\mathbf{S}$, $\mathbf{S}_1$, and $\mathbf{S}_2$, thus effectively replacing the product $\mathbf{S}_1\cdot\mathbf{S}_2$ by $\langle\mathbf{S}_1\cdot\mathbf{S}_2\rangle = \frac{1}{2}\left[S(S+1) - S_1(S_1+1) - S_2(S_2+1)\right]$, (10) where $S$, $S_1$, and $S_2$ are the total spin, the spin of quark 1, and the spin of quark 2, respectively. However, note that this modeling procedure is able to split the masses of states with equal principal ($N$), orbital ($L$), and spin ($S$) quantum numbers, but not with different total angular momentum ($J$) quantum numbers, i.e. the model is independent of $J$. Solving the Schrödinger equation numerically is performed by assuming Dirichlet boundary conditions at $r = 0$ and $r = r_0$ when using Eq. (6). The value of the parameter $r_0$ is chosen so that the energy eigenvalue $E$ is independent of $r_0$ up to five significant digits. This approach was inspired by the method described in Ref. [35]. Next, the minimization of Eq. (9) is initially carried out by performing a random search with $n = 100\,000$ points in the parameter space spanned by the parameters $v = (m, \alpha_S, b, \sigma)$. The conditions on the parameters are chosen to be $0.05 \leq \alpha_S \leq 0.70$, $0.01\ {\rm GeV}^2 \leq b \leq 0.40\ {\rm GeV}^2$, and $0.05\ {\rm GeV} \leq \sigma \leq 1.50\ {\rm GeV}$, as well as $1.00\ {\rm GeV} \leq m \leq 2.00\ {\rm GeV}$ for data set I and $4.00\ {\rm GeV} \leq m \leq 5.00\ {\rm GeV}$ for data set II. Furthermore, for data sets III-V, the values of $m$ obtained for data sets I and II are used as input values. After the initial random search, an iterative adaptive method, using the same technique but with narrower conditions on the parameters and a significantly smaller number of points, is performed to optimize the coarse point found during the initial random search in order to obtain the (local) best-fit point that minimizes the $\chi^2$ function in Eq. (9); a sketch of such a two-stage search is given below. In Tab. II, the resulting values of the $\chi^2$ function for the five data sets I-V are presented, together with the pull of each meson, and in Tab. III, the resulting parameter values for $m$, $\alpha_S$, $b$, and $\sigma$ when fitting the model to the respective data sets are given. Diquarks Given the best-fit values for the free parameters of the model, we find the diquark masses by calculating the energy eigenvalues, changing $\kappa \to \kappa/2$ and $b \to b/2$ in order to compensate for the change in the color structure of the quark-quark system (compared to the color structure of the quark-antiquark system, see the discussion in Subsec. II B). The sets of parameter values obtained when fitting the model to data sets I, II, III, and IV are used to calculate the masses of the $cc$, $bb$, $qc$, and $qb$ diquarks, respectively. We consider only diquarks in the ground state $N^{2S+1}L_J = 1^3S_1$, which are known as axial-vector diquarks and were named good diquarks by Jaffe [5]. In Tab. IV, the results for the four diquark masses are presented. Tetraquarks We consider tetraquarks to be composites of (axial-vector) diquarks and antidiquarks, and the interaction between the diquarks and the antidiquarks is assumed to be effectively the same as for ordinary quarkonia. Thus, the parameter set obtained when fitting data set V to the model is used in the effective potential for all tetraquarks. However, when considering $cc\overline{cc}$ and $bb\overline{bb}$ tetraquarks, we also compute the tetraquark masses with the parameter sets found by fitting the model to data sets I and II, respectively.
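Returning to the minimization of Eq. (9), a minimal sketch of the two-stage random search is given below; `model_masses` stands for a user-supplied routine that solves Eq. (6) for every input meson, and the shrink factor and round counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameter bounds for data set I, following the ranges quoted above:
# v = (m, alpha_s, b, sigma) in (GeV, -, GeV^2, GeV).
LO = np.array([1.00, 0.05, 0.01, 0.05])
HI = np.array([2.00, 0.70, 0.40, 1.50])

def chi2(v, M_exp, model_masses, w=1.0):
    return np.sum(w * (model_masses(v) - M_exp) ** 2)   # Eq. (9) with w_i = 1

def random_search(M_exp, model_masses, n=100_000, rounds=5, shrink=0.2):
    lo, hi = LO.copy(), HI.copy()
    best_v, best_c = None, np.inf
    for _ in range(rounds):
        for v in rng.uniform(lo, hi, size=(n, 4)):
            c = chi2(v, M_exp, model_masses)
            if c < best_c:
                best_v, best_c = v, c
        half = shrink * (hi - lo) / 2.0        # narrow the box around best_v
        lo = np.maximum(LO, best_v - half)
        hi = np.minimum(HI, best_v + half)
        n = max(n // 10, 100)                  # fewer points per refinement
    return best_v, best_c

# Toy demonstration with a fake two-level "model" (not the real solver):
toy = lambda v: np.array([v[0] + v[2], v[0] + 3.0 * v[2]])
v_best, c_best = random_search(np.array([1.20, 1.50]), toy, n=2000, rounds=3)
```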
Since the diquark and the antidiquark are in the antitriplet and triplet color states, respectively, the color structure of a tetraquark is identical to that of a meson, and consequently the same color factor κ = −4/3 applies. In addition, the same string tension b is used for tetraquarks as for the mesons. Thus, considering diquarks and antidiquarks as constituents of tetraquarks, the tetraquark masses can be calculated using the diquark masses in Tab. IV. In Tab. V, the results for the masses of 24 tetraquark states are presented. The constituent diquarks are assumed to be in the ground state N^{2S+1}L_J = 1^3S_1, with masses as given in Tab. IV, and all tetraquarks are considered in the states N^{2S+1}L = 1^1S, 1^3S, and 1^5S. The label "Data sets n1 + n2" indicates that the input values for the diquark masses are adopted from data set n1 and the input values for the other parameters are taken from data set n2.

C. Motivation of Parameters and Comparison with Other Works

Models similar to the one presented in this work have been proposed in Refs. [31,36]. In Ref. [31], the authors used the same model as in this work, but also took into account perturbative spin-orbit and tensor interactions, although considering only fully-charmed diquarks and tetraquarks (i.e. cc and cccc); thus, their results can be compared to our results for the set of parameters fitted to data set I. The two models are identical for those states where the perturbation energy is zero. In Ref. [36], the same authors considered X(3872) (also known as χ_c1(3872) [7]) under the hypothesis that it consists of a diquark qc and an antidiquark qc, and their parameter values can therefore be compared to the parameter values found by fitting data set III. They fit the model in order to investigate if Z_c(4430) could be an excited state of X(3872). Furthermore, we compare the masses of diquarks and tetraquarks calculated in this model with those presented in Refs. [25,31,37-50]. In Tabs. VI, VII, and VIII, we display the different comparisons.

D. Discussion on Results

A thorough discussion of the results obtained in this work is in order. In Tab. II, the pulls and the values of the χ² function from the five fits to data sets I-V are presented. Comparing the values of the χ² function among the fits, we observe that the value for data set V is the largest (1.694 · 10^−1), the values for data sets I and III are of order 10^−2 (1.887 · 10^−2 and 2.182 · 10^−2, respectively), the value for data set II is of order 10^−3 (5.485 · 10^−3), and finally, the value for data set IV is the smallest (1.696 · 10^−4). The discrepancy in the values of the χ² function between data sets IV and V could be a consequence of the much larger variation of the masses in data set V compared to the variation of the masses in data set IV. Also, one could expect that, when fitting the model to data set V, the value of the χ² function would be of the same order of magnitude as the ones obtained when fitting the model to data sets I and II, since data set V consists of quarkonia, which are well suited for this model. Nevertheless, comparing the pull values obtained for data set V, we note that almost all charmonium mesons yield positive pull values and almost all bottomonium mesons yield negative pull values, implying a skewed adjustment of the model to this data set.
The smallest absolute value of the pulls in this data set is 3.144 · 10^−3 for χ_b0(1P), whereas the largest absolute value of the pulls is 1.598 · 10^−1 for χ_c0(1P). In general, the deviation in pull values is difficult to explain. It could originate from the fitting procedure not being suitable for assigning the same parameter values to both charmonium and bottomonium mesons, or simply from the inclusion of more data points contributing to the total value of the χ² function. Overall, data set IV fits the model the best and data set V the worst. In Tab. IV, the predicted values for the masses of the diquarks are given, and in Tab. VII, a comparison with other works is presented. In this work, the masses of the diquarks depend on the parameters of the effective potential obtained from fitting data sets I-IV to the model. Note that the parameters α_S and σ obtained in the fits to data sets II-V sometimes assume the upper-end values of the intervals in which they are allowed to vary, which could imply the existence of more suitable parameter sets for these data sets if the intervals constraining the parameters were enlarged. However, compared with the values for the diquark masses of the different works presented in Tab. VII, our values deviate by at most about 250 MeV and are generally in excellent agreement with the results in Refs. [31,44], which are also obtained in the framework of non-relativistic quark models. In Ref. [37], the diquark masses are studied by means of the so-called Schwinger-Dyson and Bethe-Salpeter equations, which take into account the kinetic energy as well as splittings in the spin-spin, spin-orbit, and tensor interactions. The values for the diquark masses predicted in this work are consistently smaller, by about 100 MeV, than the values in Ref. [37]. Relativistic models, such as the ones presented in Refs. [41-43, 45, 51], and models based on QCD sum rules, such as the one in Ref. [38], all predict larger diquark masses. The differences could be a consequence of the introduction of more and updated data in this work, or relativistic effects may play a significant role, since such effects are not taken into account here. In Tab. V, the resulting mass spectrum for the ground states of the tetraquarks is presented. An overall feature is that lighter tetraquarks have a larger spread in the energy eigenvalues than heavier ones, giving a larger relative difference in the masses among the states for lighter tetraquarks compared with heavier ones. In Tab. VIII, our predicted values for tetraquark masses and the corresponding ones from other works are shown. Regarding qcqc tetraquarks, the results obtained in Refs. [40,41,48,51] all predict smaller masses for all states. In our model, the masses of qcqc tetraquarks are sensitive to the parameters used in the effective potential, which means that a possible explanation for this deviation could be the skewed fit of the model to data set V. Also, a non-relativistic framework may not be suitable when considering heavy-light tetraquark systems, since relativistic effects play a significant role in such systems. In general, the predicted masses for cccc tetraquarks are in good agreement with the values in Refs. [25,31,39,44,46,47,50]. However, the 1^1S state differs by about 1 GeV in comparison to Ref. [49]. Furthermore, the predicted masses for qbqb tetraquarks are in excellent agreement with the values in Refs.
[25,41,48], and the relative deviation among the masses of the different tetraquark states is overall small for this type of tetraquark. Concerning bbbb tetraquarks, the predicted masses are in very good agreement with Refs. [25,43,44], but consistently smaller, by about 0.5 GeV-1.0 GeV, compared to the values in Refs. [39,49,50]. For the heavy tetraquarks (i.e. cccc and bbbb), the predicted masses obtained in Refs. [39,49,50] are consistently and significantly larger than those obtained in this work. In Ref. [39], the color-magnetic interaction is adopted to calculate the masses, and in Refs. [49,50], a model similar to the one used in this work is considered, but the variational principle is applied when solving the Schrödinger equation. This difference in the modeling approach could be the reason for the differences in the results.

E. Comparison with Experimental Results

Considering experimental results, there are about ten tetraquark candidates listed in the particle listings of the Particle Data Group [7]. These experimental tetraquark candidates are χ_c1(3872) [...], as well as Z_b(10610) and Z_b(10650), which are both potential qbqb tetraquarks. There exists a classification of tetraquark states based on "good" diquarks (N^{2S+1}L_J = 1^3S_1) [25,51,52], which is interesting to investigate. Furthermore, if one considers spin-1 diquarks and antidiquarks (i.e. axial-vector diquarks), then one has only three possibilities for the total spin of the tetraquark, S = 0, 1, 2, i.e. three spin wavefunctions for each orbital state N^{2S+1}L: N^1S, N^3S, N^5S, N^1P, N^3P, N^5P, N^1D, N^3D, and N^5D, which are nine possibilities [31]. Note that the total angular momentum quantum number J is dropped from the spectroscopic states of the tetraquarks, since our model is independent of J. Therefore, we should compute the following eight interesting states: 1^1S, 1^3S, 1^5S, 1^1P, 1^5P, 1^1D, 1^3D, and 1^5D (i.e. 1^3P is not included), which are all ground states (N = 1), and compare the experimental values for the masses of tetraquarks with our theoretically predicted values, using the allowed ground states for each tetraquark candidate. In comparing our theoretical predictions in Tab. V with the experimental values for the tetraquark masses (cf. Ref. [7]), we find agreement within 100 MeV for qcqc tetraquarks [...]. Thus, it seems that the most likely tetraquark candidate to be described by our model is ψ(4660), as either a 1^1P state of mass 4582 MeV or a 1^5P state of mass 4591 MeV (see Tab. V). Furthermore, the qcqc tetraquark candidates Z_c(3900), X(3915), and ψ(4360) as well as the qbqb tetraquark candidates Z_b(10610) and Z_b(10650) could be described by our model. Unfortunately, the most studied tetraquark candidate, χ_c1(3872), cannot be accommodated by any state in our model, the lowest state having the mass value 4076 MeV (see Tabs. V and VIII). Finally, the cccc and bbbb tetraquarks are interesting objects to study in the sector of exotic hadrons. The results obtained in this work suggest that the mass of the fully-charmed tetraquark could be about 5960 MeV or above in its ground state, whereas the mass of the fully-bottom tetraquark could be as large as 18720 MeV (see Tabs. V and VIII).

IV.
SUMMARY AND CONCLUSIONS

We have investigated a model of tetraquarks, assumed to be compact and to consist of diquark-antidiquark pairs, in a non-relativistic framework and predicted mass spectra for the qcqc, cccc, qbqb, and bbbb tetraquarks. Considering tetraquarks as bound states of axial-vector diquarks and antidiquarks, a simple model originally formulated for quarkonia has been adopted and used to calculate and predict the masses of different tetraquark states. For the first time, a total of 45 mesons, including both charm and bottom quarks, together with the most recent data on the masses of these mesons [7], have been used to fit the free parameters of the model. In particular, we have found predictions for four axial-vector diquark masses, and subsequently a total of 24 tetraquark masses, which are all presented in Tabs. IV and V. In comparison with other non-relativistic models, our results for the cccc, qbqb, and bbbb tetraquarks are shown to be in excellent agreement with earlier results presented in the literature. However, considering qcqc tetraquarks, our results deviate slightly from earlier results, and the predicted masses of these tetraquarks are consistently larger than the ones found in the literature. For the masses of heavy-light tetraquark states, i.e. qcqc and qbqb, we have been able to identify some of these states with experimentally proposed tetraquark candidates. One such identification concerns the ψ(4660) tetraquark candidate, which can be proposed to be a qcqc tetraquark in either the state 1^1P or the state 1^5P. For qbqb tetraquarks, the tetraquark candidates Z_b(10610) and Z_b(10650) could both be identified with the state 1^3S. Concerning the heavy tetraquark states, i.e. cccc and bbbb, the model predicts the mass of the fully-charmed tetraquark to be 5960 MeV and the mass of the fully-bottom tetraquark to be 18720 MeV, both values corresponding to their respective ground states. Finally, our model could also be used to predict masses for other potential tetraquark states for which no experimental data exist today.
Heat kernel estimates and Harnack inequalities for some Dirichlet forms with non-local part

We consider the Dirichlet form given by
$$\mathcal{E}(f,f) = \frac{1}{2}\int_{\mathbb{R}^d}\sum_{i,j=1}^d a_{ij}(x)\,\frac{\partial f(x)}{\partial x_i}\,\frac{\partial f(x)}{\partial x_j}\,dx + \int_{\mathbb{R}^d\times\mathbb{R}^d}(f(y)-f(x))^2 J(x,y)\,dx\,dy.$$
Under the assumption that the $\{a_{ij}\}$ are symmetric and uniformly elliptic, and with suitable conditions on $J$, the non-local part, we obtain upper and lower bounds on the heat kernel of the Dirichlet form. We also prove a Harnack inequality and a regularity theorem for functions that are harmonic with respect to $\mathcal{E}$.

Introduction

The main aim of this article is to prove a Harnack inequality and a regularity estimate for harmonic functions with respect to some Dirichlet forms with non-local part. More precisely, we are going to consider the Dirichlet form
$$\mathcal{E}(f,f) = \frac{1}{2}\int_{\mathbb{R}^d}\sum_{i,j=1}^d a_{ij}(x)\,\frac{\partial f(x)}{\partial x_i}\,\frac{\partial f(x)}{\partial x_j}\,dx + \int_{\mathbb{R}^d\times\mathbb{R}^d}(f(y)-f(x))^2 J(x,y)\,dx\,dy, \qquad (1.1)$$
where a_{ij} : R^d → R and J : R^d × R^d → R satisfy some suitable assumptions; see Assumptions 2.1 and 2.2 below. The domain F of the Dirichlet form E is defined as the closure, with respect to the metric E_1, of the C^1-functions on R^d with compact support, where E_1 is given by E_1(f, f) = E(f, f) + ||f||_{L^2(R^d)}^2. The local part of the above form corresponds to an elliptic operator in divergence form (up to normalization, L f = (1/2) Σ_{i,j} ∂/∂x_i ( a_{ij}(x) ∂f/∂x_j )), which was studied in the papers of E. DeGiorgi [23], J. Nash [36], and J. Moser [33,34], as well as in many others. They showed that, under the assumptions that the matrix a(x) = (a_{ij}(x)) is symmetric and uniformly elliptic, harmonic functions with respect to L behave much like those with respect to the usual Laplacian. This holds true even though the coefficients a_{ij} are assumed to be merely measurable. The Dirichlet form given by (1.1) has a probabilistic interpretation in that it represents a discontinuous process, with the local part representing the continuous part of the process while the non-local part represents the jumps of the process. We call J(x, y) the jump kernel of the Dirichlet form. It represents the intensity of jumps from x to y.

In a way, this paper can be considered as the analogue of our earlier paper [21], where the corresponding integro-differential operator in non-divergence form was considered. In that paper, a Harnack inequality as well as a regularity theorem were proved. The methods employed were probabilistic, and there we related the operator to a process via the martingale problem of Stroock and Varadhan, whereas here the probabilistic interpretation is given via the theory described in [3].

The study of elliptic operators has a long history. E. DeGiorgi [23], J. Nash [36], and J. Moser [33], among others, made significant contributions to the understanding of elliptic operators in divergence form. In [29], Krylov and Safonov gave a probabilistic proof of the Harnack inequality as well as a regularity estimate for elliptic operators in non-divergence form.
While there has been a lot of research concerning differential operators, not much has been done for non-local operators.It is only recently that Bass and Levin [10] proved a Harnack inequality and a continuity estimate for harmonic functions with respect to some non-local operators.More precisely, they considered the following operator where n(x, h) is a strictly positive bounded function satisfying n(x, h) = n(x, −h).Since then, non-local operators have received considerable attention.For instance in [8], Harnack inequalities were established for variants of the above operator.Also, Chen and Kumagai [12] established some heat kernel estimates for stable-like processes in d-sets as well as a parabolic Harnack inequality for these processes and in [14], the same authors established heat kernel estimates for jump processes of mixed type in metric spaces.Non-local Dirichlet forms representing pure jump processes have also been recently studied in [7] where bounds for the heat kernel and Harnack inequalities were established.A special case of the Dirichlet form given by (1.1) was studied by Kassmann in [25] where a weak Harnack inequality was established.Related work on discontinuous processes include [13], [16], [17], [39] and [37].At this point of the introduction it is pertinent to give some more details about the differences between this paper and the results in some related papers. • In [25] a weak Harnack inequality was established and the jump kernel was similar to the one defined in (1.4) but with index α ∈ [1, 2).There, the techniques used were purely analytic while here the method used is more probabilistic.This allows us to prove the Harnack inequality and continuity estimate for a much wider class of jump kernels. • In [7], a purely non-local Dirichlet form was considered.The jump kernel considered there satisfies a lower and an upper bound.Here because of the presence of the local part, no lower bound is required.The intuitive reason behind this is that since we have a uniformly elliptic local part, the process can move even if there is no jump.This also agrees with the fact that our results should hold when the jump kernel is identically zero. • A parabolic Harnack inequality was also proved in [7].Their result holds on balls with large radius R, while here we prove the Harnack inequality for small R only.Moreover, in [7] the authors considered processes with small jumps only.Here, our processes are allowed to have big jumps. • For our Harnack inequality to hold, we need assumption 2.2(c) below.This assumption is modeled after the one introduced in [8].Thus with this assumption, our result covers the case when the jump kernel J(x, y) satisfies and the k i s are positive constants.Here, unlike in [8], there is no restriction on β − α. • In a recent preprint [18], Chen, Kim and Kumagai looked at truncated jump processes whose kernel is given by the following where α ∈ (0, 2), κ is a positive constant and c(x, y) is bounded below and above by positive constants.The results proved in that paper include sharp heat kernel estimates as well as a parabolic Harnack inequality.The jump kernel studied here includes the ones they study, but since the processes considered here include a continuous part, the results are different. We now give a plan of our article.In Section 2, we give some preliminaries and state the main results.We give upper and lower bounds for the heat kernel associated to the Dirichlet form in Section 3. 
In Section 4, we prove some estimates which will be used in the proof of the regularity theorem and the Harnack inequality.In Section 5, a proof of the regularity theorem is given.A proof of the Harnack inequality is given in Section 6. Statement of results We begin this section with some notations and preliminaries.B(x, r) and B r (x) will both denote the ball of radius r and center x.The letter c with subscripts will denote positive finite constants whose exact values are unimportant.The Lebesgue measure of a Borel set A will be denoted by |A|.We consider the Dirichlet form defined by (1.1) and make the following assumptions: Assumption 2.1.We assume that the matrix a(x) = (a ij (x)) is symmetric and uniformly elliptic.In other words, there exists a positive constant Λ such that the following holds: We also need the following assumption on the nonlocal part of the Dirichlet form. Assumption 2.2. (a) There exists a positive function J such that J(x, y)1 where K 1 and K 2 are positive constants. In probabilistic terms, J(x, y) can be thought as the intensity of jumps from x to y.Our method is probabilistic, so we need to work with a process associated with our Dirichlet form.The following lemma gives conditions for the existence of a process and its density function.We say that a Dirichlet form E satisfies a Nash inequality if where f ∈ F and c is a positive constant.For an account of various forms of Nash inequalites, see [15].For a definition of regular Dirichlet form, the reader is referred to page 6 of [3].Lemma 2.3.Suppose that the Dirichlet form is regular and satisfies a Nash inequality, then there exists a process X with a transition density function p(t, x, y) defined on (0, ∞) × R d \N × R d \N satisfying P (t, x, dy) = p(t, x, y)dy, where P (t, x, dy) denotes the transition probability of the process X and N is a set of capacity zero. Proof.The existence of such a process follows from Theorem 7.2.1 of [3] while the existence of the probability density is a consequence of Theorem 3.25 of [15]. For the rest of the paper, N will denote the set of capacity zero, as defined in the above Lemma.For any Borel set A, let be the first hitting time and first exit time, respectively, of A. We say that the function u is harmonic in a domain D if u(X t∧τ D ) is a P x -martingale for each x ∈ D. Since our process is a discontinuous process, we define Here are the main results: Theorem 2.7.Suppose Assumptions 2.1 and 2.2 hold.Let z 0 ∈ R d and R ∈ (0, 1].Suppose u is nonnegative and bounded on R d and harmonic with respect to the Dirichlet form (E, F) on B(z 0 , R).Then there exists C > 0 depending only on Λ, κ, β, R and the K i s but not on z 0 , u or u ∞ such that u(x) ≤ Cu(y), x, y ∈ B(z 0 , R/2)\N . We mention that the main ideas used for the proof of the above theorem appear in [10].Note that Assumption 2.2(c) is crucial for the Harnack inequality to hold.In fact, an example in the same spirit as that in [21] can be constructed so that the Harnack inequality fails for a Dirichlet form with a jump kernel not satisfying Assumption 2.2(c).We do not reproduce this example here because the only difference is that here, we require the process to be symmetric while in [21], the process is not assumed to be symmetric. 
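For later reference, we record the classical form of the Nash inequality, presumably the one intended in the definition above (cf. the inequalities surveyed in [15]); the precise constants and norms used in the paper may differ:

$$\|f\|_{L^2(\mathbb{R}^d)}^{2(1+2/d)} \;\le\; c\,\mathcal{E}_1(f,f)\,\|f\|_{L^1(\mathbb{R}^d)}^{4/d}, \qquad f \in \mathcal{F} \cap L^1(\mathbb{R}^d).$$

In [15], an inequality of this type is converted into on-diagonal heat kernel upper bounds of order t^{-d/2} for small t, which is how it enters the upper bound estimates of Section 3.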
We make a few more comments about some of the assumptions in the above theorem.We require that the local part is uniformly elliptic and as far as we know, our method does not allow us to relax this condition.Moreover, as shown in [26], the nonnegativity assumption cannot be dropped.In that paper, the author constructs an example (violating the nonnegativity assumption) which shows that the Harnack inequality can fail for non-local operators. Upper and lower bounds for the heat kernel The main goal of this section is to prove some upper and lower bounds on the heat kernel.The upper bound on the heat kernel estimate follows from a Nash inequality which is proved in Proposition 3.4.For more information about the relation between Nash inequalities and heat kernel estimates, see [15].As for the lower bound, we use Nash's original ideas, see [36].Since we are dealing with operators which are not local, we also need some ideas which first appeared in [7].The paper [40] also contain some useful information on how to deal with local operators. We start off this section by proving the regularity of the Dirichlet form (E, F).Let H 1 (R d ) denote the Sobolev space of order (1, 2) on R d .In other words, Let (E, F) be defined by (1.1) .Then, As for the discontinuous part, we have where B(r) and B(R) are balls with a common center but with radius r and R respectively, satisfying K ⊂ B(r) ⊂ B(R) and R − r > 1.We consider the term I 1 first.Recall that from Assumption 2.2(a), we have Since the measure J(|h|)1 (|h|≤1) dh is a Lévy measure, we can use the Lévy Khintchine formula(see (1.4.21) of [3]) to estimate the characteristic function ψ of the corresponding process as follows We now use a simple substitution, Plancherel's theorem as well as the above inequality to obtain In the above f denotes the Fourier transform of f .A similar argument is used in the proof of (1.4.24) in [3].As for the second term I 2 , we have The third term I 3 is bounded similarly, that is we have From the calculations above, we have Letting n → ∞, we thus have , hence concluding the proof. Remark 3.2.In Chapter 7 of [3], it is shown that for any regular Dirichlet form, there exists a Hunt process whose Dirichlet form is the given regular one.More precisely, there exists N ⊂ R d having zero capacity with respect to the Dirichlet form (E, F) and there exists a Hunt process (P x , X) with state space R d \N .Moreover, the process is uniquely determined on N c .In other words, if there exist two Hunt processes for which the corresponding Dirichlet forms coincide, then there exist a common proper exceptional set N so that the transition functions coincide on N c .Remark 3.3.We will repeatedly use the following construction due to Meyer([31]); see also [7] and [6].This will enable us to restrict our attention to the process with small jumps only and then incorporate the big jumps later.Suppose that we have two jump kernels J 0 (x, z) and J(x, z) with J 0 (x, z) ≤ J(x, z) and such that for all x ∈ R d , where c is a constant.Let E and E 0 be the Dirichlet forms corresponding to the kernels J(x, z) and J 0 (x, z) respectively.If X t is the process corresponding to the Dirichlet form E 0 , then we can construct a process X t corresponding to the Dirichlet form E as follows.Let S 1 be an exponential random variable of parameter 1 independent of X t , let C t = t 0 N (X s )ds, and let U 1 be the first time that C t exceeds S 1 . 
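For concreteness, the quantities in Meyer's construction take the following standard form (our reading, following the versions of the construction used in [7] and [31]):

$$N(x) = \int_{\mathbb{R}^d} \big( J(x,z) - J_0(x,z) \big)\, dz \;\le\; c, \qquad C_t = \int_0^t N(X_s)\, ds,$$

and, as described in the continuation below, at the time $U_1$ a jump from $X_{U_1-}$ to a point $y$ is introduced, with $y$ chosen according to the probability measure

$$\frac{J(X_{U_1-}, y) - J_0(X_{U_1-}, y)}{N(X_{U_1-})}\, dy .$$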
At the time U 1 , we introduce a jump from X U 1 − to y, where y is chosen at random according to the following distribution: This procedure is repeated using an independent exponential variable S 2 .And since N (x) is finite, for any finite time interval we have introduced only a finite number of jumps.Using [31], it can be seen that the new process corresponds to the Dirchlet form E. And if N 0 is the set of zero capacity corresponding to the Dirichlet form E 0 , then N ⊂ N 0 . Upper bounds Let Y λ be the process associated with the following Dirichlet form: so that Y λ has jumps of size less than λ only.Let N (λ) be the exceptional set corresponding to the Dirichlet form defined by (3.3).Let P Y λ t be the semigroup associated with E Y λ .We will use the arguments in [3] and [15] as indicated in the proof of Lemma 2.3 to obtain the existence of the heat kernel p Y λ (t, x, y) as well as some upper bounds.For any v, ψ ∈ F, we can define and provided that D λ (ψ) < ∞, we set Proposition 3.4.There exists a constant c 1 such that the following holds. where p Y λ (t, x, y) is the transition density function for the process Y λ associated with the Dirichlet form E Y λ . Proof.Similarly to Proposition 3.1, we write Since J(x, y) ≥ 0 for all x, y ∈ R d , we have We have the following Nash inequality; see Section VII.2 of [2]: This, together with (3.4) yields Now applying Theorem 3.25 from [15], we get the required result. We now estimate E λ (t, x, y) to obtain our first main result. Proof of Theorem 2.4.Let us write Γ λ as where Let µ > 0 be constant to be chosen later.Choose ψ(x) ∈ F such that |ψ(x) − ψ(y)| ≤ µ|x − y| for all x, y ∈ R d .We therefore have the following: |x−y|≤λ (e ψ(x) − e ψ(y) ) 2 J(x, y)dy where K(λ) = sup |x− y| 2 J(x, y)dy.Some calculus together with the ellipticity condition yields: Combining the above we obtain Since we have similar bounds for e 2ψ(x) Γ λ [e −ψ ](x) , we have Taking x = x 0 , y = y 0 and µ = λ = 1 in the above and using Proposition 3.4 together with the fact that t ≤ 1, we obtain p Y (t, x 0 , y 0 ) ≤ c 2 t − d 2 e −|x 0 −y 0 | , Since x 0 and y 0 were taken arbitrarily, we obtain the required result. The following is a consequence of Proposition 3.4 and an application of Meyer's construction. where t 0 is a small constant. Proof.The proof is a follow up of that of the Theorem 2.4, so we refer the reader to some of the notations there.Let λ be a small positive constant to be chosen later.Let Y λ be the subprocess of X having jumps of size less or equal to λ.Let E Y λ and p Y λ (t, x, y) be the corresponding Dirichlet form and probability density function respectively.According to Proposition 3.4, we have Taking x = x 0 and y = y 0 in (3.5) yields Taking λ small enough so that K(λ) ≤ 1 2c 2 , the above reduces to Upon setting µ = 1 3λ log 1 t 1/2 and choosing t such that t 1/2 ≤ λ 2 , we obtain Applying the above to (3.6) and simplifying For small t, the above reduces to Let us choose λ = c 7 r/d with c 7 < 1/24 so that for |x 0 −y 0 | > r/2, we have |x 0 −y 0 |/12λ−d/2 > d/2.Since t is small(less than one), we obtain e −c 3 |x 0 −y|/12λ dy. We bound the integral on the right hand side to obtain e −c 8 r .Therefore there exists t 1 > 0 small enough such that for 0 ≤ t ≤ t 1 , we have . 
We now apply Lemma 3.8 of [7] to obtain We can now use Meyer's argument(Remark 3.3) to recover the process X from Y λ .Recall that in our case J 0 (x, y) = J(x, y)1 (|x−y|≤λ) so that after using Assumptions 2.2(a) and choosing c 7 smaller if necessary, we obtain where c 9 depends on the K i s and Set t 2 = t 0 r 2 with t 0 small enough so that t 2 ≤ t 1 .Recall that U 1 is the first time at which we introduce the big jump.We thus have By choosing t 0 smaller if necessary, we get the desired result. Remark 3.6.It can be shown that the process Y λ is conservative.This fact has been used above through Lemma 3.8 of [7]. Lower bounds The main aim of this subsection is to prove Theorem 2.5.We are going to use Nash's original ideas as used in [7], [18] and [40].Let and recall that for f, g ∈ F. We begin with the following technical result. Proof.The proof of the first part of the proposition is omitted because it is similar to the proof of Lemma 4.1 of [7].We now give a proof of the second part.We first need to argue that the right hand side of (3.12) makes sense.The second step is to show the equality (3.12). Step 1: By Proposition 3.1, it suffices to show that φ R (•) pǫ(t,•,y 0 ) ∈ L 2 (R d ) and The above display together with the fact that p(t, •, y 0 ) ∈ F and the positivity of p ǫ (t, x, y) show that Step 2: We write (f, g) for f (x)g(x)dx.By Lemma 1.3.4 of [3], we have Taking into consideration the upper and lower bounds on p ǫ (t, x, y), we see that the right hand side of the above is well defined.We have This gives . Using the mean value theorem, D(h)/h = D ′ (h * ) where h * = h * (x, y 0 , h) ∈ (0, h).The bounds on p ǫ (t, x, y) imply that D(h)/h tends to 0 for x ∈ B R (x 0 ) as h → 0. An application of the dominated convergence theorem then yields the desired result. We will need the following Poincaré inequality.A proof can be found in [40]. Proposition 3.8.Consider the function defined by (3.10), there exists a constant c 1 not depending on R, f and y 0 , such that Proof of Theorem 2.5.: Let R > 0 and take an arbritary Using part(b) of Proposition 3.7, we then have where E c and E d are the local and non-local parts of the Dirichlet form E respectively.Let us look at I 2 first.By considering the local part of (3.11) and doing some algebra, we obtain Note that for A > 0, the following inequality holds We now set a = u ǫ (t, y)/u ǫ (t, x) and b = φ R (y)/φ R (x) and observe that Applying inequality (3.15) with A = a/ √ b to the above equality, we obtain ]J(x, y)dxdy. 
See Proposition 4.9 of [7] where a similar argument is used.We also have 2)(a) and the definition of φ R (x) give the following Hence we have As for the continuous part I 1 , we use some calculus to obtain Using the ellipticity condition, we obtain the following Rearranging the above and using the ellipticity condition again, we obtain To obtain the above inequality, we have also used the following with |φ R (x)| ≤ 1.We now use the ellipticity condition once more and the above to bound I 1 as follows: See [40] where similar arguments are used.Now using (3.13) and the fact that µ(R) Here µ(R) ≍ B R (0) means that there is a constant c > 0 such that c −1 µ(R) ≤ B R (0) ≤ cµ(R).So combining the above, inequality (3.14) reduces to , where K is a positive constant to be chosen later.By choosing t 1 small and using Proposition 3.5, we obtain Using (3.18), we obtain So for t ≤ t 1 , we have Note that on B R/2 (0) − D t , we have u ǫ (t, x) ≤ e −K .Choosing K such that e −K = 1/4 and using p(tR 2 , x, y) ≤ c 11 t −d/2 R −d .This upper bound can be obtain by using an argument very similar to that of the proof of Proposition 3.4 , we obtain We thus obtain Recall that Note that since ǫ is small, we can assume that it satisfies ǫ ≤ c 14 t −d/2 R −d .So for t ≤ t 1 , we can use the bound p(tR 2 , x, y) ≤ c 15 t −d/2 R −d and the above inequality to conclude that G ǫ (t) is bounded above by a constant which we denote by Ḡ.Since on D t , log u ǫ (t, x) ≥ −K, we have only four possibilities: . We can therefore conclude that there exist positive constants c 16 and c 17 such that on D t , Using the above and the fact that µ(R) ≍ B R/2 (0), inequality (3.17) then reduces to where c 18 and c 19 are independent of ǫ.Also note that t 1 is small and can be taken to be less than one.See the proof of Proposition 4.9 of [7] or the proof of Theorem 3.4 of [18] for details.Assume that G ǫ (t 1 ) ≤ −c 18 − 2(c 18 /c 19 ) 1/2 .We can now write and use some calculus to show that (3.19) 2 and that G ǫ (t 1 /2) < 0. This in turn implies that G ǫ (t 1 ) ≥ −8/(3c 19 ).We have thus obtain where Choose y ∈ B R (0).By the semigroup property, we have Applying logarithm to the above and using Jensen's inequality we obtain Using the fact that G ǫ (t 1 ) ≥ −c 20 after taking the limit ǫ → 0 as in the proof of Lemma 3.3.3 of [19], the above reduces to log Some estimates The following estimates will be crucial for the proof of the regularity theorem and the Harnack inequality. (b) for any A ⊂ B(x 0 , 3r/4), there exists some positive constant c 3 such that P x (T A < τ B(x 0 ,r) ) ≥ c 3 |A| r d for x ∈ B(x 0 , r/2) and r ∈ (0, r 1 ] where r 1 is some positive constant. 
Proof.Let C ⊂ B(x 0 , 2r)\B(x 0 , r).We can then write where we used Theorem 2.5 in the last inequality.Taking |C| = c 5 r d and t = c 6 r 2 we obtain upon choosing c 6 = (2c 4 c 5 ) 2/d , Let m be a positive integer.By the Markov property and using induction, we obtain We can now obtain E x τ B(x 0 ,r) ≤ c 2 r 2 from the above.Let t = c 7 r 2 , then by Proposition 3.5, we have P x (τ B(x 0 ,r) ≤ t) ≤ P x (τ B(x,r/2) ≤ t) ≤ 1/2 for c 7 small enough.We thus have For part(b), since we need to prove a lower bound, it suffices to obtain the result for small jumps(less than λ) only.The more general result follows from the following fact: where U 1 and N (x) are defined in Remark 3.3.The stopping times T λ A and τ λ B(x 0 ,r) are defined in a similar way to T A and τ B(x 0 ,r) respectively but for processes with jumps less than λ.So from now on, we assume that X has jumps less than λ.For t fixed, we can now write Since our process is assumed to have small jumps only, we can use (3.8) to obtain, for t sufficiently small, p(t − τ Br(x 0 ) , X τ Br (x 0 ) , y) ≤ c 9 t γ e −c 10 |Xτ Br (x 0 ) −y| , ( where we have taken λ small enough so that γ = |X τ Br (x 0 ) − y|/12λ − d/2 > 0 whenever |X τ Br (x 0 ) − y| > r/4.We now use the lower bound given by Theorem 2.5 to reduce (4.1) to We have taken t = c 12 r 2 , where r ∈ (0, r 1 ] and r 1 is a small constant so that the right hand side of (4.2) is less than a positive fraction of the lower bound on the heat kernel p(t, x, y). The Regularity Theorem We will need the following Lévy system formula for our process X.The proof is the same as that of Lemma 4.7 in [12].So we omit it here.1 A (X s )J(X s , y)dyds. (5.1) The proof of the following is based on the proof of the regularity theorem in [10].For the sake of completeness, we give a proof here. Proof of Theorem 2.6.: Let us suppose u is bounded by M in R d and z 1 ∈ B(z 0 , R/2)\N .Set where a < 1, ρ < 1 2 , and θ 1 ≥ 2M are constants to be chosen later.We choose θ 2 small enough that B(z 1 , 2r 1 ) ⊂ B(z 0 , R/2).Write B n = B(z 1 , r n ) and τ n = τ Bn .Set Hölder continuity will follow from the fact that M n − m n ≤ s n for all n which will be proved by induction.Let n 0 be a positive number to chosen later and suppose M i − m i ≤ s i for all i = 1, 2, ..., n, where n ≥ n 0 ; we want to show We may suppose that |A n |/|B n | ≥ 1 2 , for if not, we can look at M − u instead.Let A be compact subset of A n such that |A|/|B n | ≥ 1/3.By Proposition 4.1, there exists c 1 such that for all x ∈ B n+1 .Let ǫ > 0 and pick y , z ∈ B n+1 such that u(y) ≤ m n+1 + ǫ and u(z) ≥ M n+1 − ǫ. By optional stopping, By optional stopping and the Lévy system formula (5.1), sup (5.4) See the proof of Proposition 3.5 of [10] where a similar argument is used.We have also used Proposition 4.1(a), Assumption 2.2(a) and the fact that 1 < |x−y| r n−i −rn in the above computations.The first term on the right of (5.3) is bounded by The second term is bounded by ]. where we can take n 0 bigger if necessary so that the last inequality holds.We also choose θ 2 2a 2 c 4 and obtain (5.7) The fourth term can be bounded similarly By choosing n 0 bigger if necessary, we have, for n ≥ n 0 , the above yields Inequalities (5.5)-(5.8)give the following: ). Using the fact that a is less than one, we obtain ]. Now let us pick a as follows: This yields u(z) − u(y) ≤ s n a = s n+1 . The Harnack Inequality We start this section with the following proposition which will be used in the proof of the Harnack inequality. 
Proposition 6.1.Let x 0 ∈ R d and r ≤ r 0 , where r 0 is a positive constant.Then there exists c 1 depending on κ, K i s and Λ such that if z ∈ B(x 0 , r/4) and H is a bounded non-negative function supported in B(x 0 , r) c , then Proof.By linearity and a limit argument, it suffices to show to consider only H(x) = 1 C (x) for a set C contained in B(x 0 , r) c .From Assumption 2.2(c), we have J(w, v) ≤ k r J(y, w) for all w, y ∈ B(x 0 , r/2) and v ∈ B(x 0 , r) c .Hence, we have sup y∈B(x 0 ,r/2) By optional stopping and the Lévy system formula, we have J(y, v)dv. Letting t → ∞ and using the dominated convergence theorem on the left and monotone convergence on the right, we obtain ) , we have Similarly we have J(y, v)dv.(6.3) Combining inequalities (6.1), (6.2) and (6.3) and using Proposition 4.1(a), we get our result. Proof of theorem 2.7.: By looking at u + ǫ and letting ǫ ↓ 0, we may suppose that u is bounded below by a positive constant.Also, by looking at au, for a suitable a, we may suppose that inf B(z 0 ,R/2) u ∈ [1/4, 1].We want to bound u above in B(z 0 , R/2) by a constant not depending on u.Our proof is by contradiction.Since u is continuous, we can choose z 1 ∈ B(z 0 , R/2) such that u(z 1 ) = 1 3 .Let r i = r 1 i −2 where r 1 < r 0 is a chosen constant so that i=1 r i < R/8.Recall that from Proposition 6.1, there exists c 1 such that if r < r 0 , z ∈ B(x, r/4) and H is a bounded non-negative function supported in B(x, r) c , then E x H(X τ B(x,r/2) ) ≤ c 1 k r E z H(X τ B(x,r/2) ).(6.4) We will also use Proposition 4.1(b) which says that if A ⊂ B(x, 3r/4), then there exists a constant c 2 such that P x (T A < τ B(x,r) ) ≥ c 2 |A| r d .(6.5) Let η be a constant to be chosen later and let ξ be defined as follows Let c 3 be a positive constant to be chosen later.Once this constant has been chosen, we suppose that there exists x 1 ∈ B(z 0 , R/2) with u(x 1 ) = L 1 for some L 1 large enough so that we have c 2 ξL 1 e c 3 j r d+β j κ2 2d+1 > 2, for all j.(6.6) The constants β and κ are from Assumption 2.2(c).We will show that there exists a sequence {(x j , L j )} with x j+1 ∈ B(x j , r j ) ⊂ B(x j , 2r j ) ⊂ B(z 0 , 3R/4) with L j = u(x j ) and L j ≥ L 1 e c 3 j . This would imply that L j → ∞ as j → ∞ contradicting the fact that u is bounded.Suppose that we already have x 1 , x 2 , ..., x i such that the above condition is satisfied.We will show that there exists x i+1 ∈ B(x i , r i ) ∈ B(x i , 2r i ) such that L i+1 = u(x i+1 ) and L i+1 ≥ L 1 e c 3 (i+1) .Define A = {y ∈ B(x i , r i /4); u(y) ≥ ξL i r β i κ }. We are going to show that |A| ≤ 1 2 |B(x i , r i /4)|.To prove this fact, we suppose the contrary. 
By optional stopping, (6.5), the induction hypotheses and the fact that R < 1, This is a contradiction.Therefore |A| ≤ 1 2 |B(x i , r i /4)|.So we can find a compact set E such that E ⊂ B(x i , r i /4) − A and |E| ≥ 1 3 |B(x i , r i /4)|.Let us write τ r i for τ B(x i ,r i /2) .From (6.5) we have P x i (T E < τ r i ) ≥ c 4 where c 4 is some positive constant.Let M = sup x∈B(x i ,r i ) u(x).We then have + E x i [u(X T E ∧τr i ); T E > τ r i , X τr i ∈ B(x i , r i )] + E x i [u(X T E ∧τr i ); T E > τ r i , X τr i / ∈ B(x i , r i )] = I 1 + I 2 + I 3 .(6.7) Writing p i = P x i (T E < τ r i ), we see that the first two terms are easily bounded as follows: To bound the third term, we prove E x i [u(X τr i ); X τr i / ∈ B(x i , r i )] ≤ ηL i .If not, then by using (6.4), we will have, for all y ∈ B(x i , r i /4), u(y) ≥ E y u(X τr i ) ≥ E y [u(X τr i ); X τr i / ∈ B(x i , r i )] contradicting the fact that A ≤ 1 2 |B(x i , r i /4)|.Hence So (6.7) becomes Choosing η = c 4 4 and using the definition of ξ together with the fact that p i ≥ c 5 and r β i /κ < 1, we see that there exists a constant γ bounded below by a positive constant, such that the inequality (6.8) reduces to M ≥ L i (1 + γ).Therefore, there exists x i+1 ∈ B(x i , r i ) with u(x i+1 ) ≥ L i (1 + γ).Setting L i+1 = u(x i+1 ), we see that The induction hypotheses is thus satisfied by taking c 3 = log(1 + γ). Proposition 5 . 1 . If A and B are disjoint Borel sets, then for each x ∈ R d \N , E x s≤t 1 (X s− ∈A, Xs∈B) = E x t 0 B
Retrofitting Leakage Resilient Authenticated Encryption to Microcontrollers . The security of Internet of Things (IoT) devices relies on fundamental concepts such as cryptographically protected firmware updates. In this context attackers usually have physical access to a device and therefore side-channel attacks have to be considered. This makes the protection of required cryptographic keys and implementations challenging, especially for commercial off-the-shelf (COTS) microcontrollers that typically have no hardware countermeasures. In this work, we demonstrate how unprotected hardware AES engines of COTS microcontrollers can be efficiently protected against side-channel attacks by constructing a leakage resilient pseudo random function (LR-PRF). Using this side-channel protected building block, we implement a leakage resilient authenticated encryption with associated data (AEAD) scheme that enables secured firmware updates. We use concepts from leakage resilience to retrofit side-channel protection on unprotected hardware AES engines by means of software-only modifications. The LR-PRF construction leverages frequent key changes and low data complexity together with key dependent noise from parallel hardware to protect against side-channel attacks. Contrary to most other protection mechanisms such as time-based hiding, no additional true randomness is required. Our concept relies on parallel S-boxes in the AES hardware implementation, a feature that is fortunately present in many microcontrollers as a measure to increase performance. In a case study, we implement the protected AEAD scheme for two popular ARM Cortex-M microcontrollers with differing parallelism. We evaluate the protection capabilities in realistic IoT attack scenarios, where non-invasive EM probes or power consumption measurements are employed by the attacker. We show that the concept provides the side-channel hardening that is required for the long-term security of IoT devices. Introduction The information security of inexpensive IoT devices is especially important due to their high quantity, prevalence, and high threat potential.Arguably the most important feature for such devices are secure firmware updates.They are needed to mitigate software vulnerabilities, which are likely uncovered while a device is in the field.Secured updates can either be achieved by digital signatures or by symmetric AEAD schemes.In case of symmetric cryptography, the secret keys are stored in memory that is protected against malicious read-outs.The protection of secret keys against extraction is required in the IoT context because attackers are potentially capable of performing physical attacks such as side-channel attacks on cryptographic operations.Ronen et al. 
[1] highlight the implications of unprotected update mechanisms by using a side-channel attack to extract an AES master key from a smart light bulb which is used to protect firmware updates for an entire device family.Using the update master key, a worm is created that automatically infects and maliciously replaces the firmware of similar devices within a 100 m radius.Evidently, the root of trust, i.e., cryptographic keys and operations used for secured updates, requires hardening against hardware attacks.The authors of [1] suggest using digital signatures as mitigation, however, that only provides authentication.AEAD schemes have the additional benefit that they also provide confidentiality which is often necessary to, e.g., protect intellectual property (IP) or credentials in the firmware. Unfortunately, protecting cryptographic implementations against side-channel attacks is challenging, especially when dealing with existing hardware implementations without built-in protection.So far, the only countermeasure that can be retrofitted without giving up hardware acceleration and therefore significantly reducing performance is simple time-based hiding, i.e., inserting random delays or dummy operations before or after the critical operation.Such countermeasures require true randomness and are very limited in their effectiveness because deliberate timing variations can be filtered by signal processing.Particularly for COTS devices it has been shown that the cryptographic operation can be identified despite hiding countermeasures [2].An alternative is to use a hardened software implementation instead of the existing cryptographic hardware accelerator and giving up its provided efficiency.The inherent difficulty of this task is evident in the following example.A team from the French ANSSI published an open-source implementation of a side-channel protected AES targeted for COTS microcontrollers [3].As it is state of the art, they combine masking and shuffling countermeasures to protect against side-channel attacks and provide leakage tests that do not show significant leakage after 100,000 traces.Despite these seemingly positive results, Bronchain and Standaert [4] published an attack that succeeds with only 2,000 traces, which highlights the issues of combined countermeasures on these devices.In the same paper, they also put forward the general difficulty of securing COTS microcontrollers using masking or shuffling due to the lack of noise when countermeasures are implemented in software. 
In this work, we therefore make use of existing hardware accelerators for cryptographic operations and use concepts from leakage resilience that leverage algorithmic noise and limited data complexity. We show the soundness of our proposal through actual side-channel attacks and give concrete security levels. Contrary to the previous example, there is no easy way to circumvent parts of the countermeasure. Our contribution is twofold: First, we provide and analyze an LR-PRF as a side-channel secured building block. Second, we implement a leakage resilient AEAD (LR-AEAD) scheme from this primitive to enable applications such as secured firmware updates. The scheme is built almost entirely from the AES-based LR-PRF, which means that most of the workload is handled in hardware. Existing hardware accelerators for the hash function can be used where available; otherwise the hash function can be implemented in software. The fact that the only hardware requirement is an AES accelerator with parallel S-boxes makes this solution applicable to a wide range of microcontrollers. It is therefore highly relevant when retrofitting side-channel protection to existing devices, especially when no true random noise sources are available. In such cases, masking or hiding is even impossible and this concept is without alternatives.

As a proof of concept, we implement the AES-based LR-PRF on two ARM Cortex-M microcontrollers and evaluate the side-channel security. We chose two representative microcontrollers with 4 and 16 parallel S-boxes, two common implementation options. We evaluate the security of the LR-PRF construction in both cases and find effective protection. For comparison, the AES engines on both tested microcontrollers can be broken after 2,500 traces. After implementing the LR-PRF on the same hardware, the cryptographic operation withstands similar side-channel attacks using the same measurement setup and results in security levels above 100 bits at a reasonable security vs. efficiency trade-off.

We also provide an implementation and performance evaluation of the full LR-AEAD scheme for both microcontrollers. On one controller we can make use of an existing SHA-256 accelerator to instantiate the hash function; on the other we use an existing software implementation. The resulting LR-AEAD construction serves as a crucial building block for the root of trust of a device. It enables, e.g., side-channel secured firmware updates, which makes long-term security possible for IoT devices. Since all modifications are software-only, our concept can be used to retrofit existing designs.

Outline

In Sec. 2, we provide the background on LR-PRFs and the LR-AEAD scheme. Section 3 defines the attacker model and explains how to assess the side-channel security of such constructions. In Sec. 4, we outline the implementation details and trade-offs of LR-PRFs on microcontrollers with hardware acceleration. We also describe the implementation of the LR-AEAD scheme and discuss relevant attack vectors. Subsequently, Sec. 5 gives details about the two microcontrollers that are evaluated. We present the results of our side-channel evaluation in Sec.
6.After introducing the measurement setup in Sec.6.1, we show in Sec.6.2 that the key transfer to the hardware accelerators is protected, which is a necessary requirement for the proposed solution.Next, Sec.6.3 demonstrates that the accelerators are in fact vulnerable to Template Attacks (TAs) illustrating the need for countermeasures.Finally, Sec.6.4 discusses the side-channel evaluation of the LR-AEAD, which is reduced to an analysis of the LR-PRF with different configurations.Section 7 assesses the runtime and code size of the implementations.We conclude our findings in Sec. 8. Background on leakage resilience Side-channel analysis (SCA) exploits physical observations such as timing, power consumption or EM emanations during the execution of a cryptographic algorithm to retrieve the secret key.There are well-known side-channel countermeasures such as boolean masking, threshold implementations and hiding to prevent or reduce this leakage.Leakage resilience is a different approach to side-channel protection which does, contrary to the mentioned measures, not require randomness or significant hardware overhead.Formally, the leakage behavior of devices is captured in a generic model and algorithmic countermeasures are designed that are provably secure in the established model.The goal is to limit the exposure of the secret key such that it prevents an attacker from accumulating information about the key over multiple observations.This is typically achieved by designing algorithms that limit the control an attacker has over the inputs of critical operations and incorporate re-keying mechanisms.Pseudorandom functions are functions that take a key k and input x and output a fixedlength pseudorandom string.They have an application in, e.g., the secure initialization from public inputs or, as in our case, as building block for larger cryptographic protocols such as AEAD.There have been multiple proposals for LR-PRFs, yet the one proposed by Medwed et al. [5] and later improved in [6] is particularly relevant for our use case because it is based on a standard block cipher (AES).It achieves side-channel protection by frequently changing the encryption key and limiting the usage of each key to a certain number of encryptions.The input of the LR-PRF x is processed in chunks of n bits.Depending on the value of the n bits of x being processed in a given iteration, a plaintext is chosen out of a set of 2 n plaintexts and is encrypted.The number of possible plaintexts 2 n is also denoted as data complexity. Leakage resilient pseudo random functions The execution in case of n = 1 is depicted as a dataflow graph for an example input in Fig. 
1.In each iteration, one out of the 2 n = 2 possible plaintexts p 0 and p 1 is encrypted.The plaintext is chosen dependent on the bits of the input x.For example, in iteration 1 the plaintext p 1 is encrypted according to the first bit of x, which is 1.The initial encryption uses the long-term secret key k, subsequently the ciphertext is used as the key for the next iteration.Finally, after all bits of the input have been processed, an additional whitening step is performed with a constant input.The whitening protects against differential attacks on the output.The LR-PRF takes i/n + 1 block cipher encryptions to process an input with length of i bits.Consequently, the number of bits n processed per iteration has a direct impact on the security and the performance of the construction.On the one hand, the data complexity for an attack is determined by n, as each key is only used with 2 n different plaintexts.Therefore, from a security point of view, n has to be kept low.On the other hand, the performance of the LR-PRF deteriorates with lower n because more iterations are necessary to process the input. In the original proposal of the LR-PRF [5], the set of input plaintexts to the underlying block cipher are known to the attacker.Furthermore, all plaintext bytes have the same value and the S-boxes are implemented in parallel hardware.As a result, an attacker can not differentiate between the subkeys during a divide-and-conquer attack because the plaintext input to each S-box is identical.Even if all key bytes are recovered, choosing identical plaintext bytes adds enumeration effort to find the correct order.However, the enumeration effort is only increased if the attacker can not resolve the processing of single subkeys (i.e., S-boxes) in the measured traces.In case of the AES, all 16 S-boxes need to be implemented in parallel hardware and are therefore computed at the same time.An additional benefit of parallel hardware implementations is the added key dependent algorithmic noise that reduces the signal-to-noise ratio of a side-channel attack.In case of limited parallelism, i.e., if not all S-boxes are implemented in parallel and an AES round is computed over multiple clock cycles, these effects are still present but with reduced effectiveness. In the improved LR-PRF with unknown inputs [6], the plaintexts are unknown to an attacker because they are generated in an additional preprocessing step using a leakage resilient pseudo random generator (LR-PRG) proposed by Standaert et al. [8].A PRG differs from a pseudorandom function (PRF) in that it takes no input except for an initial seed or key of fixed length and that it outputs a pseudorandom string of variable length.The LR-PRG by Standaert et al. [8] can be implemented using AES and is shown in Fig. 2. 
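Before continuing with the PRG, the per-stage dataflow of the LR-PRF (Fig. 1) can be summarized in a short Python sketch. The plaintext set and the whitening constant below are illustrative placeholders, not the values mandated by [5]; only the structure (2^n plaintexts per key, re-keying with each ciphertext, final whitening) follows the construction described above.

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_ecb(key: bytes, block: bytes) -> bytes:
    """One AES-128 block encryption; stands in for the hardware engine."""
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

# Illustrative plaintext set for n = 1: all 16 bytes identical, so that
# parallel S-boxes process the same input byte (cf. [5]).
PLAINTEXTS = [bytes([0x00] * 16), bytes([0xFF] * 16)]
WHITENING = bytes([0x55] * 16)   # arbitrary public constant, assumed

def lr_prf(key: bytes, x: bytes, n: int = 1) -> bytes:
    """GGM-style LR-PRF: process the input x in n-bit chunks, re-keying
    with every ciphertext; each key is used with at most 2**n inputs."""
    assert 2 ** n <= len(PLAINTEXTS), "extend PLAINTEXTS for larger n"
    bits = ''.join(f'{byte:08b}' for byte in x)
    for i in range(0, len(bits), n):
        chunk = int(bits[i:i + n], 2)
        key = aes_ecb(key, PLAINTEXTS[chunk])   # ciphertext -> next key
    return aes_ecb(key, WHITENING)              # final whitening step
```

For n > 1 the plaintext set would be extended to 2^n values; the choice of n trades the 2^n data complexity available to an attacker against the i/n + 1 block cipher calls needed per i-bit input.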
Similar to one stage of the LR-PRF, two plaintexts p 0 and p 1 are initially encrypted with key k.The result of one encryption is used as key for the next iteration, the result of the other forms the pseudorandom string y.In the unknown-input LR-PRF construction, the LR-PRG generates the plaintexts that are used in the actual LR-PRF stage.In contrast to the original proposal, these plaintexts are kept secret.As a consequence, the attacker does not know the inputs to the block cipher operations and is unable to perform attacks as in case of the original construction.Attackers instead have to target the LR-PRG step where p 0 and p 1 are still public and known to them.This attack is equivalent to attacking the original LR-PRF construction with n = 1, i.e., a data complexity of two.The advantage of this design is that it allows higher data complexities in the LR-PRF stage, and therefore increased performance, without increasing the attack surface.Both LR-PRF constructions are vulnerable to high-end invasive EM attacks with high spatial resolution (sub-millimeter coils) when implemented on FPGA devices [9].Unterstein et al. [9,10] demonstrate that the spatial localization and high time resolution of such setups allows for the isolation of individual S-boxes in a divide-and-conquer attack and additionally removes the key dependent noise from parallel S-boxes to a certain extent.The feasibility of such attacks is highly dependent on the integration density of the device.Contrary to our use-case they analyze an FPGA device where the integration density is generally low compared to ASIC designs.In a later work [11], they provide results of a similar investigation on a newer FPGA with smaller feature size and higher integration density where the security level is still high after the attack.Therefore, we believe that the high integration density of an ASIC hardware accelerator will provide at least some resistance to such attacks.In any case, we consider such costly invasive attacks as presented in [9] out of scope for this work because we aim at providing side-channel security to previously unprotected low-cost devices.To provide protection against localized attacks, Unterstein et al. [10] add a method to the LR-PRF construction that 'refills' the key entropy by introducing additional key material in a preprocessing step.This comes at the cost of increased key length and a significant performance penalty because the data complexity in the PRF tree is fixed to 2. Similar to the unknown-inputs LR-PRF, the relevant attack vector is an attack of data complexity 2 on the AES during preprocessing. In summary, to implement any of the presented LR-PRF, the necessary requirement is that an AES core resists side-channel attacks with a data complexity of 2. For our analysis we use the original LR-PRF since it has the least implementation overhead and is best suited to analyze the security impact of the different data complexity configuration.However, we emphasize that the analysis of this LR-PRF with data complexity 2 also covers the other two variants since the best attack vector is identical for all constructions.As long as this case is shown to be appropriately secure for a given implementation, all variants of the LR-PRF can be used.Furthermore, depending on the outcome of the SCA for higher data complexities, more efficient configurations of the original LR-PRF can be considered. 
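Analogously, the 2-PRG of Standaert et al. [8] (Fig. 2) can be sketched by reusing `aes_ecb` and `PLAINTEXTS` from the previous sketch; which of the two ciphertexts becomes the next key and which one is output is an illustrative choice here.

```python
def lr_prg(key: bytes, n_blocks: int) -> bytes:
    """Leakage resilient PRG: each round key k_i is used for exactly two
    encryptions; E_{k_i}(p0) re-keys, E_{k_i}(p1) extends the output."""
    p0, p1 = PLAINTEXTS
    out = b''
    for _ in range(n_blocks):
        next_key = aes_ecb(key, p0)   # becomes k_{i+1}
        out += aes_ecb(key, p1)       # appended to the pseudorandom string y
        key = next_key
    return out
```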
Leakage resilient authenticated encryption

Authenticated encryption (with associated data) schemes can be implemented using dedicated constructions or built from common primitives such as block ciphers and message authentication codes (MACs) through generic composition. An AEAD scheme takes a key k, a message msg, associated data adata and a nonce nonce as input and outputs a tag tag and a ciphertext ctxt. For our use case, we use a composition scheme that can be instantiated using existing hardware accelerators and make use of the results of Krämer and Struck [7]. In their work they revisit the so-called FGHF′ construction that was proposed by Degabriele et al. [12] in the context of sponge-based constructions. The FGHF′ construction is an LR-AEAD scheme and comprises four building blocks: two functions F and F′, a PRG G, and a hash function H. In order for the construction to be leakage resilient, the security analysis of Degabriele et al. originally requires both F and F′ to be pseudorandom under leakage and, in addition, F′ to be unpredictable under leakage. The hash function and PRG can be instantiated with unprotected primitives. However, Krämer and Struck simplify this and show that for the FGHF′ construction the unpredictability is actually not required and that both F and F′ can be implemented using LR-PRFs. In the context of our work, this allows us to use one of the AES-based LR-PRFs that are discussed in Sec. 2.1 as the building block for both F and F′.

The PRG G can be implemented using the AES-based LR-PRG of Standaert et al. [8] (Fig. 2). Note that, following the security analysis of Degabriele et al., G is not required to be leakage resilient. The main reason we use this LR-PRG is that it also makes use of the AES hardware accelerator. We discuss the security implication of using a block cipher based construction for G in Sec. 4. The hash function H, e.g., SHA-256, can either be realized using hardware accelerators, if they are present, or implemented in software. An overview of the FGHF′ scheme when implemented with these building blocks is shown in Fig. 3. Note that the key k in this case consists of two keys k_enc and k_mac, which are used for the stream cipher and MAC part of the scheme, respectively.

Attacker model and evaluation methodology

In this section we first define the attacker model and then outline the methodology for our side-channel evaluations. Specifically, we describe the security assessment of the LR-PRF construction. In practice, this assessment consists of profiled side-channel attacks against AES with limited data complexity.

Attacker model

The intended targets for bringing leakage resilience to COTS microcontrollers are low-cost IoT devices. Since these microcontrollers are not marketed, nor intended, to be used in high-security applications, modeling an attacker with high-end laboratory resources does not seem justified. For that reason, we do not consider invasive, high-precision EM analyses, as described in [10], that use equipment worth around 100,000 USD, excluding the equipment for decapping the chips before analyses. Instead, we assume attackers with considerable technical know-how, but moderate capabilities in terms of laboratory equipment. We assume that attackers have access to EM measurement probes allowing measurements close to the packaged chips with manual positioning of the probe. Along with a preamplifier and a USB oscilloscope, such a setup can be built for a few thousand USD.
Since the analyzed devices can be bought without restrictions, attackers can perform profiling with known keys on one or more devices under their control. To reflect this fact in our analysis, and in order to avoid inter-device deviations, we perform profiling and attacks on the same device (which can be seen as the worst case), even though this would not be possible in a real scenario where an attacker has limited control over the attacked device.

Assessing the side-channel security

Considering the different variations of LR-PRFs, the side-channel security is always reduced to a side-channel attack on the initial usage of the long-term key with varying data complexity. In the case of the original proposal [5], the available data complexity in an attack depends on the configuration of the LR-PRF and ranges from 2 up to 256. For the unknown-input [6] and the key refreshing [10] LR-PRF, attacks are limited to a data complexity of 2. All proposed constructions are able to use a varying number of parallel S-boxes that determine the amount of algorithmic noise. These contributions assume dedicated hardware designs where this choice can be made deliberately, whereas in our case the level of parallelism depends on the choice of the controller. Hence, an evaluator's goal is to determine the security of a hardware implementation (with a fixed number of parallel S-boxes) in relation to the data complexity. Then, the LR-PRF construction with the best trade-off between security (security level subject to an attack) and efficiency (implementation cost and runtime) is chosen.

When implementing LR-PRFs on microcontrollers through the use of a hardware accelerator, the key inevitably has to be transferred from the CPU over the bus to the accelerator. In the case of commercial security controllers, these bus transfers are masked by random values. In our case of unprotected microcontrollers, this attack vector needs to be considered. Wouters et al. [13] demonstrate the effectiveness of such attacks and recover the transponder key of a car immobilizer in a profiled attack on the key transfer to a coprocessor. Therefore, to establish SCA-secure LR-PRFs on such microcontrollers, we evaluate two attack vectors in this work: i) attacks on the bus transfer of the key in Sec. 6.2 and ii) attacks on the AES accelerator with different data complexity in Sec. 6.4. To have a baseline for comparison, we attack the accelerator with unlimited data complexity in Sec. 6.3.

Profiled side-channel attacks with limited data complexity

To assess the worst-case security of a device, it is common practice to use multivariate Gaussian template attacks [14,15]. Due to the high computational complexity, the number of samples which are included in the multivariate templates has to be kept low. The most informative time samples from a trace, called points of interest (POIs), are extracted in a preprocessing step. We use the correlation-based leakage test described by Durvaux and Standaert [16] as a first-order, profiled leakage test.
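The following sketch illustrates this evaluation pipeline: correlation-based POI selection followed by multivariate Gaussian template building and matching. The leakage target (raw byte values), the threshold handling, and all names are our illustrative choices, not the exact procedures of [14,15,16].

```python
import numpy as np
from scipy.stats import multivariate_normal

def select_pois(traces, values, threshold):
    """Keep time samples whose absolute correlation with the known
    intermediate values exceeds the (visually chosen) threshold."""
    corr = np.array([abs(np.corrcoef(traces[:, t], values)[0, 1])
                     for t in range(traces.shape[1])])
    return np.where(corr > threshold)[0]

def build_templates(traces, values, pois):
    """One multivariate Gaussian (mean, covariance) per 8-bit value,
    estimated from the profiling traces reduced to the POIs."""
    return {v: (traces[values == v][:, pois].mean(axis=0),
                np.cov(traces[values == v][:, pois], rowvar=False))
            for v in range(256)}

def rank_candidates(trace, pois, templates):
    """Sort the 256 candidates by log-likelihood; the position of the
    correct value in this ranking is its key rank."""
    scores = {v: multivariate_normal.logpdf(trace[pois], mean=m, cov=c,
                                            allow_singular=True)
              for v, (m, c) in templates.items()}
    return sorted(scores, key=scores.get, reverse=True)
```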
After identifying POIs, Gaussian templates are calculated on the reduced trace. This profiling phase requires that the attacker has full control over the device, most importantly the secret key and the inputs, e.g., the plaintexts in case of encryption algorithms. Key bytes are profiled and attacked separately in a divide-and-conquer manner. While profiling one byte, all other bytes are randomized. During the attack phase, the attacker only controls the inputs and matches the templates with the measured traces in order to find the secret key. Due to the design of the LR-PRF, the input controlled by the attacker is not directly used as input to the AES. Instead, each execution of the LR-PRF consists of multiple AES executions, and for each of those the plaintext is chosen depending on certain bits of the LR-PRF input. Therefore, the input that is provided by the attacker only allows choosing between a limited set of plaintexts for each execution of the AES. This is different from a regular TA, where the input bytes can be chosen randomly by the attacker. This random input allows a divide-and-conquer approach to work well because each byte behaves independently of the others and the attacker may target specific bytes one at a time. When the data complexity is limited, as in the LR-PRF case, the key bytes are no longer independent and this separation is hindered. As a result, the individual bytes are affected by each other's correlated leakage, and this so-called algorithmic noise impedes side-channel attacks. It is therefore expected that correct key byte candidates are not always determined without doubt and that the attacker has to try the most probable combinations until the full key is recovered. Attacks on the key transfer can be considered a corner case of this scenario where the data complexity is limited to one.

To assess the security level from the lists of key candidates, the key rank estimation algorithm by Glowacz et al. [17] is used. It gives an estimation of the remaining security level, which we denote in bits. A security level of x bits means that an attacker has to try 2^x keys out of the most probable combinations until the correct one is found. An additional side effect of the limited data complexity is that certain combinations of key bytes are easier to attack than others, depending on the leakage behavior of the device. Hence, we test several keys and estimate the distribution of the resulting security level.

Leakage resilient AEAD on COTS microcontrollers

This section describes how to achieve LR-AEAD for COTS microcontrollers. First, we explain how to implement an LR-PRF and LR-PRG utilizing a hardware AES engine. We specifically describe the partitioning between software and hardware accelerators. Second, we describe how to use this protected building block together with an LR-PRG and hash function in the LR-AEAD scheme of Degabriele et al. [12]. We provide pseudo code for all operations and point out the security-critical operations which we analyze in the side-channel evaluation in Sec. 6.

The main aspect of our proposal is to benefit from existing hardware accelerators with parallel S-boxes on microcontrollers to realize an LR-PRF. A typical architecture of a microcontroller with integrated cryptographic coprocessor is outlined in Fig. 4.
The AES coprocessor is attached to the main CPU via a bus and can run independently and in parallel. It is typically controlled through memory-mapped registers. Commands and data values are exchanged over the bus. The LR-PRF program is executed on the CPU and the hardware accelerator is queried for the necessary block cipher encryptions. The process follows Algorithm 1, where the boxed operations are executed inside the hardware accelerator, while the rest is executed by the CPU. Inputs are the key k, the data input x, and the data complexity expressed in the number of bits n that are processed per stage (e.g., for data complexity 4, n equals 2). The expression (nbits|...|nbits)_128 denotes a concatenation of the bits nbits until the string contains 128 bits; 0^128 is an all-zero bitstring of length 128.

Algorithm 1: LR-PRF

We give the pseudocode description of the used LR-PRG in Algorithm 2. The functionality is split into two functions: an initial seeding of the LR-PRG that sets the key for the first iteration, and an iterate function that returns one block of pseudorandom data and updates the key internally. There are two types of security-critical operations in Algorithms 1 and 2. The first type are the AES encryptions (AES_encrypt). This is expected from all conceptual considerations. The second type is implementation-specific, i.e., the bus transfers (write_to/read_from_accelerator) of the key. We give a detailed SCA of both types of operations in Sec. 6. Algorithm 3 puts the building blocks together and describes the encrypt and decrypt operations of the LR-AEAD. This does not add any additional attack vectors, as all sensitive operations are located within the LR-PRF and LR-PRG.

A cautionary note

According to the security analysis of the LR-AEAD scheme by the authors of [12], the PRG is not required to be secured against differential side-channel attacks since its seed is generated on the fly and is only valid for one message. Thus, if we consider the PRG as a black box, each seed is only used for one operation and the only valid attacks are attacks with data complexity 1, i.e., simple power analysis (SPA) attacks. However, since we use a block cipher based PRG, this single operation in fact consists of multiple sensitive block cipher encryptions, which increases the data complexity for an attack. It is intuitively clear that the seed to the PRG, i.e., the initial key to the AES, must not be leaked in order to protect the confidentiality of the individual message. Note that leaking this seed would only disclose this one message; neither k_enc nor k_mac are affected, so an attack on the seed would have to be mounted for every message.

Fortunately, the practical security of the PRG equals the security of the LR-PRF in our case because one iteration of the PRG has the identical side-channel attack surface as one iteration of the LR-PRF in the case of data complexity 2. The theoretical side-channel security is not equivalent, since the security proof for the LR-PRG requires use of the random oracle model whereas the LR-PRF is provably secure in the standard model. However, this is only to prevent so-called future computation attacks and has no practical significance. In other words, if we can realize a secure LR-PRF with data complexity 2, and we successfully do so, then it implies the security of the LR-PRG.
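To show how the pieces fit together, here is a schematic Python sketch of the encrypt operation in the spirit of the FGHF′ composition, reusing the lr_prf and lr_prg sketches from Sec. 2.1. The hash-input ordering, the truncation of the digest to one block, and the key handling are our simplifying assumptions and do not restate Algorithm 3.

```python
import hashlib

def lr_aead_encrypt(k_enc: bytes, k_mac: bytes, nonce: bytes,
                    adata: bytes, msg: bytes):
    # F: derive a fresh per-message seed from the nonce under k_enc.
    seed = lr_prf(k_enc, nonce)
    # G: expand the seed into a keystream and encrypt the message.
    n_blocks = -(-len(msg) // BLOCK)  # ceiling division
    keystream = b''.join(lr_prg(seed, n_blocks))[:len(msg)]
    ctxt = bytes(m ^ s for m, s in zip(msg, keystream))
    # H, then F': hash nonce, associated data and ciphertext, and feed
    # the digest through the second LR-PRF to derive the tag.
    digest = hashlib.sha256(nonce + adata + ctxt).digest()
    tag = lr_prf(k_mac, digest[:BLOCK])  # truncated for illustration
    return ctxt, tag
```

Decryption recomputes the tag over the received ciphertext and releases the plaintext only after a successful comparison; note that all key-dependent block cipher calls stay inside lr_prf and lr_prg, mirroring the statement above that the composition adds no further attack vectors.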
Devices under test: STM32 and EFM32

We chose two COTS microcontrollers for our proof of concept, namely the STM32F215RET6 (STM32) ARM Cortex-M3 and EFM32PG12B500F1024 (EFM32) ARM Cortex-M4 microcontrollers, which are both widely used in IoT applications. Both devices feature a 32-bit processor manufactured in a 90 nm process and include an AES hardware cryptographic accelerator that is not specifically hardened against side-channel attacks. The cryptographic accelerators are different in their level of internal parallelism.

Based on the number of clock cycles the cryptographic coprocessors take for a single AES operation, we assume that the STM32 implements 16 parallel S-boxes to perform the 10 rounds of a 128-bit AES (AES-128) and the EFM32 implements four parallel S-boxes. This assumption is confirmed by the results of a correlation-based leakage test shown in Fig. 5a for the STM32 and in Fig. 5b for the EFM32. In both figures the correlation for the input of the different S-boxes of an AES-128 encryption is depicted for known plaintext and key values. In Fig. 5a the maximum correlations for all 16 S-boxes occur at the same point in time, indicating a fully parallel design. In Fig. 5b it can be observed that groups of 4 S-boxes behave similarly. This confirms that the accelerator implementation of the EFM32 processes 32-bit words of the AES state simultaneously, which corresponds to four parallel S-boxes. The words exhibit two distinct peaks in subsequent clock cycles which overlap with the following word. This behavior is consistent with leakage caused by writing or overwriting a shared buffer register. For some S-boxes, e.g., S-boxes 12-15 at clock cycle 6, additional correlation peaks can be observed. This behavior is also visible several cycles after the computation of the current AES round. A reasonable explanation for these additional peaks is the complex structure of the cryptographic coprocessor of the EFM32, which contains an ALU with dedicated instruction memory and data registers. The peaks could stem from internal buffers or the switching of multiplexers between register banks.

In order to implement the LR-AEAD as described in Sec. 2.2, we utilize the SHA-256 hardware accelerator of the EFM32. In contrast, the STM32 only provides a SHA-1 accelerator that we opted to ignore, as practical attacks against SHA-1 have already been shown [19]. Instead, we use a software implementation of SHA-256 provided by the open source library tinycrypt [20], which is designed for constrained devices.

Side-channel evaluation

In this section we present the results of a side-channel analysis of LR-AEADs on two microcontrollers. As explained in Sec. 4,
the analysis of the LR-AEADs is reduced to an analysis of the LR-PRFs with different data complexity configurations. We cover two attack vectors on the LR-PRF: attacks on key transfers from the CPU to the hardware accelerator, and attacks on the AES accelerator which is used as part of the LR-PRF implementation. In that regard, we first demonstrate that the key can be fully recovered if the AES is used in a standard mode where the data complexity for the attack is not limited. Within the LR-PRF, however, the AES is only used with limited data complexity, i.e., with a limited number of different plaintext inputs under one key. Thus, we provide results of attacks with different data complexities and give estimates of the remaining security level. We find that attacks on the key transfer do not lead to exploitable security levels. For the attacks on the AES, we observe that the security level decreases with rising data complexity. However, for both microcontrollers we find configurations that, under our test conditions, lead to high security levels greater than 100 bits. We describe our measurement setup in Sec. 6.1. In Sec. 6.2 we provide results of the attack on the key transfer; Sec. 6.3 and Sec. 6.4 cover attacks on the AES with unlimited and limited data complexity, respectively.

Measurement setups

The measurement setup for the STM32 is depicted in Fig. 6a and consists of a CW308T-STM32F target board mounted on a CW308 UFO Board running at a clock frequency of 10 MHz. A PicoScope 6402D USB oscilloscope is used for the data acquisition at a sampling rate of 1.25 GHz. The EM emanations are captured using a passive Langer RF-U 2.5-2 near-field probe that is connected to a Langer PA 303 preamplifier adding a gain of 30 dB. The EFM32 setup consists of an EFM32 Pearl Gecko PG12 Starter Kit running at its default clock frequency of 19 MHz. A LeCroy WavePro 7 Zi-A 2.5 GHz oscilloscope operating at a sampling rate of 5 GHz is used for the data acquisition. We use the same near-field probe and preamplifier as in the STM32 setup.

According to the attacker model in Sec. 3.1, the probes are positioned manually. Different positions close to pins, capacitors and on top of the package were tested by inspecting signal amplitudes and qualitative indicators, e.g., whether the AES round structure is visible. In the case of the CW308T-STM32F target board, the probe is located on Pin 31 (cf. Fig. 6a). For the EFM32 the probe is located in between two decoupling capacitors, as shown in Fig. 6b.

Template attacks on key transfer

Protecting a cryptographic operation against physical attacks is only feasible if the key is not easily recovered through SCA of the key transfer from the CPU to the cryptographic accelerator. We perform an evaluation of the key transfer for both microcontrollers and confirm that under our conditions, the leakage cannot be exploited to recover the key.
The AES hardware accelerator on both devices is connected to the CPU as a memory-mapped device. During the initialization, the key has to be written to a register of the accelerator as four 32-bit words for a key size of 128 bits. The key transfer on the internal bus can be observed by an attacker, enabling a TA on the key bytes with a data complexity of 1. As the key is static, differential attacks are excluded. Although the key is transferred in words of 32 bits, building templates for 32-bit values is not feasible under the chosen conditions due to the time required to collect a sufficient number of measurements for all 2^32 templates. With the acquisition rate of our setup of about 50 traces per second, it takes roughly 10,000 years to collect the required traces; even for 16-bit values it takes over 60 days. The TAs carried out in the following sections are therefore based on 8-bit templates instead.

In a first step, a correlation-based leakage test on the key transfer is conducted to find the POIs for the TA. The correlation for the values of the different key bytes is depicted in Fig. 7a for the STM32, and in Fig. 7b for the EFM32. For both figures we use 100,000 traces with known random keys. Both devices show relatively high correlation values of approximately 0.4, which is expected for unprotected microcontrollers. Figure 7a shows that the duration of the entire key transfer is eight clock cycles and the leakage of the words is partly overlapping. The leakage test for the EFM32 looks similar, except for the difference that the four 32-bit transfers do not overlap at all.

The POIs for the TA on the different words are obtained by using all samples that exhibit a correlation higher than 0.02 for the STM32. For the EFM32 we use a threshold of 0.05 for the first four key bytes and 0.02 for the other key bytes. Both thresholds have been determined visually as being just above the noise floor and are marked with a dashed line. The templates for the 256 possible values are generated from a total of 2,000,000 traces with known random keys in the case of the STM32, and 1,000,000 traces in the case of the EFM32. The difference in the number of traces stems from different acquisition rates on the two setups. To evaluate the entropy reduction by the TA, 1,000 attacks with different random keys are performed, where for each key 1,000 traces are recorded. We found that already for this number of traces per key the results are stable, and more traces per key do not further improve results. The results for both devices are depicted in Fig. 8. The median security level under our attack conditions is 120.09 bits for the STM32 and 113.15 bits for the EFM32. Even the key with the worst security level has a remaining entropy of 107 bits and 97 bits for the STM32 and EFM32, respectively. Summing up, while a TA on the key transfer reduces the entropy of the key, it does not compromise the security to an extent where a protection of the cryptographic operation would be pointless.
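As a back-of-the-envelope check of these durations, assume roughly 4,000 traces per template value; this figure is inferred from the stated numbers rather than given in the paper:

```python
TRACES_PER_SECOND = 50
TRACES_PER_VALUE = 4_000        # inferred assumption, not measured

def collection_days(bits):
    traces = (2 ** bits) * TRACES_PER_VALUE
    return traces / TRACES_PER_SECOND / 86_400  # days

print(collection_days(32) / 365)  # ~10,900 years for 32-bit templates
print(collection_days(16))        # ~61 days for 16-bit templates
```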
Template attack on unprotected AES

The idea of retrofitting protection for AES hardware accelerators is motivated by the assumption that the devices are vulnerable to SCA. Hence, we perform TAs on the hardware AES of the EFM32 and the STM32 to verify this assumption. Additionally, a successful attack result serves as a confirmation that the side-channel measurement setup, including the manual positioning of the probe, is effective. The evaluation also provides a baseline for comparison during the evaluation of the applied protection mechanism in Sec. 6.4.

As an intermediate value we target the S-box input of the first AES round, which is common practice for attacks on AES. The results of a correlation-based leakage test depicted in Figs. 5a and 5b are used to select suitable POIs. Again, the dashed line marks the threshold for the POI selection. For the STM32 and the EFM32 the threshold is set to 0.04 and 0.017, respectively. In the profiling phase, we acquired 2,000,000 traces (STM32) and 1,000,000 traces (EFM32) with random keys and random plaintexts. Multivariate 8-bit templates are computed for each of the 256 possible values of the targeted intermediate. During the attack phase, 10 random keys are evaluated using up to 30,000 traces with random plaintext inputs.

In contrast to the attack on the key transfer in Sec. 6.2, where all processed values are constant and additional measurements merely reduce noise effects, variable input values allow the attacker to increase the success probability by increasing the number of traces. Figures 9a and 9b depict the median key rank based on 10 random keys for an increasing number of traces. Each line represents the median key rank for one of the 16 key bytes (S-boxes). The key rank denotes the position of the correct value when the candidates are sorted according to their likelihoods. A rank of 1 equals a successful recovery, while higher ranks require key enumeration effort by the attacker. For the STM32 the key byte ranks decrease to 1 after about 2,500 traces, i.e., a full key recovery is achieved. In the case of the EFM32, more traces are needed to achieve low key ranks. However, not all key bytes converge to a rank of 1, and therefore a low effort in key rank enumeration is still necessary. Summarizing, both AES engines are vulnerable to SCA and protection mechanisms are required.

Template attacks on LR-PRFs with different data complexities

This section provides the side-channel evaluation of the proposed retrofitted protection mechanism based on LR-PRFs. As outlined in Sec. 2, the security is determined by a side-channel attack on the initial use of the long-term secret. The number of different plaintexts which are used in the LR-PRF construction is a trade-off and affects the data complexity of a side-channel attack and the runtime. We analyze the security level of the implemented concept for different trade-offs by performing TAs with different data complexities.

The same profiling set as described in Sec. 6.3 can be used. Data complexities of {2, 4, 8, 16, 32, 64, 128, 256} are used, and for each, 300 different random keys are attacked. All plaintexts are of the form in which all 16 bytes are equal, as described by Medwed et al. [5]. For each of the 300 attacked keys, we collected a high number of traces to reduce the measurement noise. Specifically, we recorded 30,000 traces in case of the STM32 and 10,000 traces in case of the EFM32. We found that the attack does not achieve further significant improvement with higher numbers of traces.
Figures 10a and 10b present the attack results as the median security level of the full key in bits (after key rank estimation) over the number of traces and for different data complexities. As expected, the security level generally decreases with increased numbers of traces. However, importantly, the security levels do not approach 0 bits, because of the protection mechanism. Instead, the security levels stagnate and do not decrease further after a certain number of traces, which shows that the protection mechanism is effective. Note that a data complexity of 256 does mean that all values are used per plaintext byte. However, all plaintext bytes are still equal. This is the important difference to a regular attack scenario as described in Sec. 6.3 and the main reason the protection still works in these cases. As explained in Sec. 3.3, the security level of individual keys depends on their concrete value in the case of limited data complexity. Keys are chosen at random, and the resulting security levels of attacks vary accordingly. It is therefore important to consider not only the median security level but the whole distribution, as the outliers determine the worst-case security level. We show this variance in Figs. 11a and 11b. The figures present attack results after the maximum number of traces, i.e., 30,000 traces for the STM32 and 10,000 traces for the EFM32. Hence, this focuses on the rightmost verticals of Fig. 10. Each vertical on the x-axis contains the security levels from 300 attacks for this data complexity and the fixed number of attack traces. The distribution of the 300 attack results is visualized as a box plot. The red lines in the center of the boxes denote the median for the respective data complexity (this corresponds to the data shown previously at the rightmost verticals). The boxes include 50 % of the values within the quartiles Q1 and Q3. All whiskers of the boxplots are drawn at 1.5 Interquartile Range (IQR) or at the extrema. Outliers that diverge more than 1.5 IQR from the box edges are denoted as circles.

The most important observation is that for data complexities up to 16, security levels including outliers are higher than 96 bits, which is a very positive result. For this number of attack traces, the attack on the unprotected AES results in a security level close to 0 bits. As expected and observed in the previous results, the security level decreases with increasing data complexity, i.e., if the number of observable plaintexts is extended. Interestingly, the distribution is rather broad, with high differences between the median security level and the worst case of individual keys. Considering, e.g., the STM32 with data complexity 128, a median security level above 90 bits is achieved under our attack conditions, but individual cases are as low as 70 bits. The variance is similar for both microcontrollers, which reinforces the assumption that the choice of key values, not the measurement setup, is the main reason for this observation.
The two AES engines have different numbers of parallel hardware S-boxes. The EFM32 includes four parallel S-boxes, while the STM32 is fully parallel with 16 S-boxes. This means that the desired key-dependent noise (algorithmic noise) from parallel structures is higher for the STM32. This explains the higher resulting security levels at lower data complexities (2 to 64), as can be observed when comparing Figs. 10a and 10b. The results confirm that the higher parallelism provides better protection. For the EFM32 with four parallel S-boxes, the algorithmic noise provides less protection. Nevertheless, the security level still remains above 100 bits for data complexities smaller than 16.

Note that the median security level for the data complexity of 256 is lower for the STM32. This is in contrast to the described reasoning and can, in our opinion, be explained using Fig. 9, which shows that the TA on the unprotected EFM32 does not converge to a key rank of 1 for single bytes. For higher data complexities, attacks generally become more comparable to attacks on the unprotected AES. The results in Fig. 9 for the unprotected case show that the EFM32 is harder to attack, judging by the required number of traces and the partially imperfect key recovery. This could be the reason why the security level of the EFM32 does not decrease at the same rate for high data complexities. As a conclusion, the device which was easier to attack without any countermeasures, the STM32, proves to be the more secure platform to implement LR-PRFs, due to the higher level of parallelism of the hardware accelerator. In other words, the higher parallelism clearly leads to comparably better protection with the LR-PRF concept, despite the fact that the same device is less secure when unprotected.

In summary, this evaluation shows that secured LR-PRFs, and consequently secured LR-AEAD, can be achieved on both devices. High security levels above 100 bits are achieved for all experiments with data complexities up to 16 on the STM32 and up to 8 on the EFM32. This shows that for protection in the targeted IoT scenarios, it is sufficient to use the original LR-PRF as long as the data complexity is within the discussed boundaries for the targeted security level. The fully parallel AES implementation of the STM32 allows for more efficient constructions while retaining a higher security level. Naturally, the unknown-inputs and key refreshing LR-PRFs that are limited to data complexity 2 by design could also be used, but they come with the downside of additional preprocessing steps and the requirement to temporarily store secret plaintexts.

Performance analysis

In this section we evaluate the performance of our LR-PRF and LR-AEAD implementation with respect to execution time and code size on both microcontrollers. We analyze the impact of different data complexities; however, we only consider LR-PRF configurations that process a number of bits per stage that is a divisor of 128 (i.e., n = 1, 2, 4, 8, corresponding to data complexities of 2, 4, 16, 256). This avoids having a last LR-PRF iteration that does not use the full data complexity. In order to measure the execution time of the LR-PRF and LR-AEAD, we use the data watchpoint and trace (DWT) debug component of the Cortex-M processor. This feature allows for non-invasive and cycle-accurate execution time measurements. Non-invasive in this case means that it is not necessary to modify the code under test to perform the timing measurements.
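The security/runtime trade-off follows directly from the iteration count of Algorithm 1: a 128-bit input costs 128/n + 1 AES invocations, so the evaluated configurations differ by almost an order of magnitude in block cipher calls.

```python
for n in (1, 2, 4, 8):
    print(f"n = {n}: data complexity {2 ** n:3d}, "
          f"{128 // n + 1:3d} AES calls per LR-PRF")
# n = 1: data complexity   2, 129 AES calls per LR-PRF
# n = 2: data complexity   4,  65 AES calls per LR-PRF
# n = 4: data complexity  16,  33 AES calls per LR-PRF
# n = 8: data complexity 256,  17 AES calls per LR-PRF
```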
Single LR-PRF execution

Figure 12 depicts the number of clock cycles required for a single LR-PRF execution with varying data complexities and different compiler optimization levels. We use three different optimization levels: no optimization (O0), optimization for size (Os), and optimization for performance and size (O3). The results are also included in tabular format in Table 3 in the appendix. Note that a particular optimization level choice does not have an impact on the side-channel security, as the AES hardware accelerator is not influenced by the compiler. The same holds for the key transfer over the bus. The diagram shows that the number of clock cycles decreases with increasing data complexity (i.e., with the number of input bits processed per iteration). This is expected, since an increasing data complexity leads to a decreasing number of required iterations in the LR-PRF tree. Contrary to our expectation, Fig. 12 does not reflect the performance difference between the AES coprocessors of the devices. Even though the STM32 has a fully parallel AES core, the LR-PRF implementation is only slightly faster than the one on the EFM32. We would expect a factor of about four, because a fully parallel implementation is capable of calculating an AES round in one cycle whereas an implementation with four S-boxes requires four cycles. We assume that the difference results from the different low-level software libraries used on both devices and differences in the interfacing. For both microcontrollers, the optimizations Os and O3 result in a 2 and 3 times faster execution time, respectively, in comparison to the baseline without optimization (O0). Given the fact that the difference in performance between them is only marginal, we suggest using the optimization Os, as it comes with the additional benefit of a reduced code size. Therefore, we evaluate the LR-AEAD performance with optimization level Os.

Complete LR-AEAD execution

For the evaluation of the complete LR-AEAD, we measure its execution time for different data complexities and varying ciphertext sizes on both microcontrollers. We use the Os optimization level for the evaluation depicted in Fig. 13.
We evaluate the LR-PRF with the minimum and maximum data complexity of 2 and 256 on both devices. Additionally, we evaluate the LR-PRF with a data complexity of 4 and 16 for the EFM32 and STM32, respectively. These values turned out to be a suitable trade-off between execution time and security in Sec. 6.4. These results, and additionally the required clock cycles to process a single 16-byte block of data, are also listed in Table 4 in the appendix.

On the EFM32 the implementation makes use of the SHA-256 hardware accelerator, whereas on the STM32 the hash is implemented in software [20]. This leads to a decreased performance of the STM32 in the LR-AEAD case, despite the fact that it has a slightly faster AES core. For smaller ciphertexts one can clearly see the advantage of a higher data complexity; however, the difference vanishes with increasing ciphertext lengths. The reason is that the LR-PRF is only evaluated twice, regardless of the length of the ciphertext, and thus the overhead amortizes with increased length. In the context of firmware updates, we usually deal with encrypted firmware images larger than 16 KiB; hence, the performance penalty from decreased data complexity is low. Assuming a core clock frequency of 4 MHz, a data complexity of two and a firmware update size of 64 KiB, the decryption process takes around two seconds on the STM32 and less than a second on the EFM32. These results are quite practical for a secure firmware update in IoT applications.

Code size

Besides the execution time of the LR-PRF and the LR-AEAD, their code size is an important parameter, especially for constrained embedded devices. In order to determine the code size required to retrofit the LR-AEAD to an existing application, we look at the additional code size of both functions when added to a simple baseline application. This application consists of only a main function with an endless while loop, together with the necessary initialization routines such as stack initialization. We use the example code from the microcontroller manufacturers as a template for the baseline application and implement both the LR-PRF and LR-AEAD on top. The additional code size, i.e., the code size required for the LR-PRF and the LR-AEAD, is listed in Table 1. The LR-AEAD implementation occupies between 0.49 % and 0.76 % of the STM32's 512 KiB flash memory, depending on the optimization level. For the EFM32, the implementation needs between 0.72 % and 1.04 % of the microcontroller's 1024 KiB flash memory.
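A rough plausibility check of the firmware-update figure: at the stated 4 MHz clock, a two-second decryption of 64 KiB implies a per-block budget of about 1,950 cycles, keystream generation and MAC included. The budget is our derivation, not a measured value:

```python
CLOCK_HZ = 4_000_000
blocks = 64 * 1024 // 16          # 4096 AES blocks in 64 KiB

cycles_per_block = 2 * CLOCK_HZ / blocks
print(round(cycles_per_block))    # ~1953 cycles per block on the STM32
```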
Comparison to protected software implementation

As a comparison, we measure the performance of a side-channel protected software implementation of AES developed by the ANSSI [3]. They implement affine masking as described in [21] in combination with several hiding countermeasures. We measured around 108,000 cycles for one call to the protected aes() function on an ARM Cortex-M4 microcontroller similar to their reference platform (optimization Os). For the example of 64 KiB firmware updates, we can give a rough estimate of the runtime by considering only the AES calls that are required to decrypt the firmware. This significantly underestimates the real runtime because it neglects the overhead that arises when implementing a block cipher mode of operation and the MAC calculation. It does also not include the collection of the randomness that is required for the countermeasures. Nevertheless, with an estimate of more than 442 million clock cycles, this alone is a factor of 208 and 45 slower compared to the complete LR-AEAD with data complexity 2 on the EFM32 and STM32, respectively. This means that a secured firmware update would take in the range of minutes instead of single seconds, which is significant. The code size of 6,392 bytes is also about three times larger.

In summary, the runtime overhead and the flash memory footprint of the LR-PRF and the LR-AEAD implementation are low in comparison and thus applicable for secure firmware updates in resource-constrained scenarios such as IoT applications. The overhead of using lower data complexities to achieve higher security levels becomes less significant as the length of the payload increases.

Conclusion

In this work we use concepts from leakage resilient cryptography to tackle the difficult problem of securing COTS microcontrollers against side-channel attacks. We propose to implement an LR-AEAD scheme using a block cipher based LR-PRF as the underlying side-channel hardened primitive. Specifically, we implement the LR-PRF in software and use existing hardware accelerators to leverage the algorithmic noise of parallel implementations to protect against side-channel attacks. In a case study on two ARM Cortex-M microcontrollers with AES accelerators, we analyze the side-channel security of our construction and find that it resists profiled attacks and retains security levels above 100 bits. We give concrete results for a configuration parameter that allows a trade-off between security level and performance. The overhead in code size is small and occupies only about 1 percent of the available memory on the two devices. Compared to an exemplary side-channel protected software AES implementation, the runtime of our proposal is up to 200 times faster with a memory footprint of only one third. Our solution is applicable to any microcontroller that has an AES accelerator with parallel S-boxes. Therefore, it enables retrofitting side-channel protection to a wide range of devices. This will help to realize root-of-trust security mechanisms such as secured firmware updates for low-cost IoT devices.

Figure 1: Dataflow graph of an LR-PRF execution for n = 1, i.e., with data complexity of 2^n = 2, and input x of length 128 bits. In each iteration only the AES path highlighted in black is executed, depending on the respective bit of x.

Figure 4: Microcontroller running an LR-PRF using an integrated AES hardware accelerator.

Figure 5: Correlation-based leakage test on the AES S-box input for 100,000 traces with known plaintexts and keys.
Figure 6: Positioning of the EM probes.

Figure 7: Correlation-based leakage test on the key transfer for 100,000 traces with known random keys.

Figure 8: Security level for 1,000 random keys subject to a Template Attack on the key transfer.

Figure 9: Median key rank of the 16 key bytes subject to a Template Attack on the AES S-box input for 10 random keys with random plaintexts for varying numbers of traces.

Figure 10: Median security levels from 300 random keys subject to a Template Attack on the AES S-box input for varying numbers of traces and data complexities.

Figure 11: Security levels of 300 random keys subject to a Template Attack on the AES S-box input for 30,000 (STM32) respectively 10,000 (EFM32) traces and different data complexities.

Figure 12: Performance evaluation of the LR-PRF implementation for different optimization levels and varying data complexities.

Figure 13: Performance evaluation of the LR-AEAD implementation for different data complexities and varying ciphertext sizes.

Table 1: Code size in bytes of the LR-PRF and LR-AEAD implementations.

Table 2: Execution time in clock cycles of the function calls used by the LR-AEAD, including input/output.

Table 3: Execution time in clock cycles of the LR-PRF implementation for different optimization levels and varying data complexities.

Table 4: Execution time in clock cycles of the LR-AEAD implementation (optimization level Os) for different data complexities (DCs) and varying ciphertext sizes.
12,824
2020-08-26T00:00:00.000
[ "Computer Science", "Mathematics" ]
Crystal structure and Hirshfeld surface analysis of (C7H9N4O2)[ZnCl3(H2O)]

In the title molecular salt, (C7H9N4O2)[ZnCl3(H2O)], the crystal packing exhibits O—H⋯O, O—H⋯Cl, N—H⋯O and N—H⋯Cl hydrogen bonds.

In the title molecular salt, 1,3-dimethyl-2,6-dioxo-2,3,6,7-tetrahydro-1H-purin-9-ium aquatrichloridozincate(II), (C7H9N4O2)[ZnCl3(H2O)], the fused ring system of the cation is close to planar, with the largest deviation from the mean plane being 0.037 (3) Å. In the complex anion, the Zn(II) cation is coordinated by three chloride ions and one oxygen atom from the water ligand in a distorted tetrahedral geometry. In the crystal, inversion dimers between pairs of cations linked by pairwise N—H⋯O hydrogen bonds generate R₂²(10) rings. The anions are linked into dimers by pairs of O—H⋯Cl hydrogen bonds, and the respective dimers are linked by O—H⋯O and N—H⋯Cl hydrogen bonds. Together, these generate a three-dimensional supramolecular network. Hirshfeld surfaces were generated to gain further insight into the packing.

Chemical context

Theophylline, C7H8N4O2, is an alkaloid derivative of xanthine, containing a fused pyrimidine-imidazole ring system with conjugated double bonds. It has many biological and pharmacological properties (see, for example, Rao et al., 2005; Piosik et al., 2005). Various studies have shown that theophylline can be used as a medicine for the treatment of asthmatic bronchitis and chronic obstructive bronchitis (under several brand names), and in anticancer drugs (Nafisi et al., 2003; Rao et al., 2005; Piosik et al., 2005). Furthermore, theophylline complexes with transition metals can be used in anticancer drugs (David et al., 1999). As part of our studies in this area, we reacted theophylline with ZnCl2 under acid conditions to give the molecular salt (C7H9N4O2)[ZnCl3(H2O)], and its crystal structure is described herein.

Supramolecular features

The packing is consolidated by a network of hydrogen bonds (Table 1, Fig. 2). The cations are linked into inversion dimers by pairs of N1—H1⋯O2 hydrogen bonds, which generate R₂²(10) rings. The anions also form inversion dimers, being linked by pairwise O3—H3A⋯Cl3 hydrogen bonds. The anions are linked to the cations via O3—H3B⋯O1 hydrogen bonds from the water molecule to a carbonyl group of the pyrimidine ring. Finally, the cations are linked to the anions via N2—H2⋯Cl2 hydrogen bonds. Taken together, these hydrogen bonds generate a three-dimensional supramolecular network (Fig. 3), which also features short Cl⋯π contacts [Cl⋯centroid distances in the range 3.533 (2)-3.620 (2) Å]. The packings of related compounds are different from that of the title compound; however, the organic and inorganic moieties are linked through hydrogen bonds in all of these structures.

Hirshfeld surface analysis

In order to gain further insight into the intermolecular interactions in the title compound, we used the program Crystal Explorer (Spackman & Jayatilaka, 2009) to consider separately the (C7H9N4O2)+ organic cation and the [ZnCl3(H2O)]− inorganic anion.

Synthesis and crystallization

ZnCl2·6H2O (0.244 g, 1 mmol) was dissolved in 5 ml of water. Then, theophylline [C7H8N4O2] (0.180 g, 1 mmol) was dissolved in 3 ml of ethanol/water (1:1 v:v) with a few drops of conc. HCl (37%). The two solutions were mixed and, after two weeks, colourless crystals of the title molecular salt were obtained.

Figure 5: Hirshfeld d_norm surface of the [ZnCl3(H2O)]− anion in the title compound.
1,3-Dimethyl-2,6-dioxo-2,3,6,7-tetrahydro-1H-purin-9-ium aquatrichloridozincate(II)

Crystal data

(C7H9N4O2)[ZnCl3(H2O)]

Special details

Geometry. All esds (except the esd in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell esds are taken into account individually in the estimation of esds in distances, angles and torsion angles; correlations between esds in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell esds is used for estimating esds involving l.s. planes.
950.6
2020-03-10T00:00:00.000
[ "Chemistry" ]
Automaker’s credits strategy considering fuel consumption and endurance capacity constraints under dual-credit policy in China

After implementing the Dual-credit policy, automakers must adjust their production and operation strategies to cope with policy changes. This paper studies an automotive supply chain consisting of an automaker that produces traditional fuel vehicles and new energy vehicles, and a dealer. Meanwhile, this paper constructs a trading strategy model and a cooperative strategy model considering consumers' fuel consumption sensitivity and endurance capacity sensitivity. This paper also compares decentralized and centralized decision-making of the automotive supply chain under the different strategies. Furthermore, this paper compares and analyzes the optimal credits strategies of the automaker under different investment amounts. The research finds that the automaker can obtain positive new energy vehicle credits (NEV credits) through direct trading or by obtaining NEV credits cooperatively with other automakers. Whether the automaker chooses the trading strategy or the cooperative strategy, the members' profits under centralized decision-making in the automobile supply chain are better than under decentralized decision-making. When the investment amount of the automaker is small, the cooperative strategy is more advantageous. After coordination through the revenue-sharing contract, the benefits of supply chain members reach Pareto optimality. This paper helps enterprises effectively deal with the Dual-credit policy and provides a reference for achieving carbon emission reduction targets in China.

Introduction

With the increase in consumers' low-carbon awareness and environmental protection awareness, the factors affecting product demand have long gone beyond simple price considerations. With the continuous growth of the number of fuel vehicles, overall carbon emissions continue to rise. Reducing the fuel consumption of fuel vehicles and promoting new energy vehicles are the directions of governments' joint efforts. In the face of severe climate change, countries worldwide have introduced relevant policies to assume the responsibility and obligation of carbon emission reduction. The "Bali Road Map" formulated in 2007 insists on tackling climate change under sustainable development and proposes specific emission reduction targets, approaches, and measures (Christoff, 2016). In December 2009, the target proposed by the 15th meeting of the Parties to the United Nations Framework Convention on Climate Change required developed countries to reduce their emissions by 40% by 2020 compared with the base year of 1990 and to achieve zero emissions by 2050 (at least a 95% reduction in emissions) (East Asian Seas, 2009). In September 2020, China proposed the Dual Carbon goal of achieving a carbon peak by 2030 and carbon neutrality by 2060 (Cui, 2020). To implement the national strategic policy, the government promotes the transformation of consumer consumption patterns to new energy or low-carbon energy through subsidies. The government and policymakers should strengthen policies to reduce environmental pollution (Sun H. et al., 2020). With the decline of government subsidy policies, the "Measures for Parallel Management of Average Fuel Consumption of Passenger Vehicle Enterprises and New Energy Vehicle Credits" (the "Dual-credit" policy) will become the primary policy affecting the decision-making of automakers and dealers.
It is reported that the Wuling Hongguang MINI EV sold more than 420,000 units in 2021 (Mark, 2021). From the cost analysis alone, the profit of the Wuling Hongguang MINI EV electric vehicle is slim. However, relying on the sale of new energy credits, a single SAIC-GM-Wuling MINI EV can earn thousands of RMB in revenue. SAIC-GM-Wuling can earn billions of RMB from the Wuling Hongguang MINI EV alone through the Dual-credit policy. Meanwhile, some enterprises that produce traditional fuel vehicles suffer heavy losses. Fuel vehicles use gasoline or diesel as fuel, resulting in exhaust emissions and urban air pollution. Consumers with low-carbon preferences refuse to choose fuel vehicles because of their high fuel consumption (McCollum et al., 2018). Although new energy vehicles are clean products, the "mileage anxiety" caused by low endurance has become the most important concern of consumers. Because charging infrastructure has a long construction period and involves a large number of stakeholders, improving the endurance mileage of new energy vehicles in the short term is an important way to increase consumers' willingness to buy. With the concept of sustainable development, the study of renewable energy is increasing (Chang et al., 2022; Irfan et al., 2022). New energy is environmentally friendly and uninterrupted (He et al., 2020; Zhao et al., 2021). Improving energy efficiency is sufficient to meet the needs of energy stakeholders. The endurance limit of new energy vehicles must be examined in order to provide better services for consumers (Xia et al., 2020). In summary, consumers with both fuel consumption sensitivity and endurance concerns are the focus of current automobile supply chain decision-making.

The Dual-credit policy covers corporate average fuel consumption credits (CAFC credits) and new energy vehicle credits (NEV credits) (Chai et al., 2022). Under the Dual-credit policy, if the actual fuel consumption of an automaker is greater than the standard value, negative CAFC credits are generated, and positive NEV credits must be purchased to offset them; otherwise, positive CAFC credits are generated. If the value of NEV credits is lower than the standard value, negative NEV credits are generated, which must be compensated at the end of the year; otherwise, positive NEV credits are generated, which can be put on the market for sale. Under the Dual-credit policy, some automakers generate NEV credits through the production and sales of new energy vehicles and rely on the sale of NEV credits to obtain substantial income. Some companies that produce traditional fuel vehicles generate negative CAFC credits due to the production and sales of traditional fuel vehicles; therefore, they must buy NEV credits to offset them. Meanwhile, we investigate the decentralized decision-making and the centralized decision-making of the automotive supply chain consisting of an automaker and a dealer under different strategies to explore the decision relationship between automakers and dealers. Under the Dual-credit policy, how automakers handle relationships with other automakers and dealers and make correct decisions has become an essential issue for enterprises. This paper explores an automotive supply chain consisting of an automaker that produces price-competitive traditional fuel vehicles and new energy vehicles, and a dealer, under the Dual-credit policy.
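As a stylized numerical illustration of this offsetting mechanism (the coefficients and volumes below are simplified placeholders, not the regulatory formulas):

```python
def cafc_credits(standard, actual, volume):
    """Positive when the fleet beats the fuel-consumption standard
    (litres per 100 km times production volume, simplified)."""
    return (standard - actual) * volume

def nev_credits(credits_per_nev, nev_volume, ratio, fv_volume):
    """Credits earned on NEVs minus the required quota on fuel
    vehicles (simplified)."""
    return credits_per_nev * nev_volume - ratio * fv_volume

# An automaker missing the fuel-consumption standard must offset its
# negative CAFC credits with positive NEV credits, own or purchased.
print(round(cafc_credits(5.0, 5.4, 100_000)))          # -40000
print(round(nev_credits(3.0, 20_000, 0.16, 100_000)))  #  44000
```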
We construct a trading strategy model and a cooperative strategy model and compare decentralized and centralized decision-making of the automobile supply chain under the different strategies, considering consumers' fuel consumption sensitivity for traditional fuel vehicles and consumers' endurance capacity sensitivity for new energy vehicles. Furthermore, we compare and analyze the optimal credits strategy of the automaker for responding to the Dual-credit policy in time. Finally, this paper designs a revenue-sharing contract to solve the problem of supply chain coordination and maximize the benefits of supply chain members. It provides the corresponding theoretical basis and promotes the sustainable development of the automobile retail industry.

The rest of this paper is organized as follows. Section 2 reviews the relevant literature. In Section 3, we introduce our problems and assumptions. Then, we introduce two essential models in Section 4, which are supply chain decision-making based on the automaker's credits trading strategy and supply chain decision-making based on the automaker's credits cooperative strategy. We also study decentralized decision-making, centralized decision-making, and supply chain coordination decision-making under the two strategies, respectively. We analyze the automotive supply chain credits strategy based on supply chain decision-making and the credits strategy considering revenue-sharing contracts to achieve the optimal decision in Section 5. Finally, we provide our conclusions in Section 6. The proofs are given in the Supplementary Appendix.

Literature review

In this section, we review the research highly related to our work. These studies can be divided into six streams, which are consumers' fuel consumption sensitivity and endurance concern, new energy policy, the Dual-credit policy, automotive supply chain production decisions under the Dual-credit policy, automakers' credits strategy under the Dual-credit policy, and supply chain coordination mechanisms. We summarize the relevant literature in Table 1 to compare previous studies and locate this study.

Consumers' fuel consumption sensitivity and endurance concern

When consumers make decisions, the fuel consumption of traditional fuel vehicles has become an important factor. Consumers with low-carbon preferences generally do not buy fuel vehicles (Lu et al., 2022). It is necessary to study the impact of consumers' low-carbon preferences on consumer decision-making. Sun et al. (2020) construct a Stackelberg differential game model dominated by manufacturers under centralized and decentralized decision-making, considering the lag of emission reduction technology and the low-carbon preference of consumers. The result shows that the lag of emission reduction technology and consumers' low-carbon preferences positively affect manufacturers' carbon emission transfer levels (Sun L. et al., 2020). Wang et al. (2021) build a low-carbon supply chain consisting of leading retailers and small and medium-sized manufacturers, considering consumers' price and carbon-reduction sensitivity. The result shows that the retailer has a lower selling price, a lower carbon emission reduction level, a lower product demand, and a lower profit. In addition to the fuel consumption coefficient of traditional fuel vehicles, the endurance of new energy vehicles is also an important factor for consumers. It is necessary to study the endurance limit based on charging infrastructure.
Hamid (2022) designs an innovative approach that can systematically determine the locations of electric vehicle charging stations, considering fairness and efficiency, to maximize accessibility and utilization (Hamid, 2022). Zhao et al. (2022) discuss two possible solutions to the challenge of electric vehicle mileage anxiety, which are converting various forms of waste energy into electrical energy and reducing battery power to provide ancillary services. Overall, the above literature examines the impact of consumers' low-carbon preferences or endurance concerns on the automotive industry from the demand-side perspective. The development of the automotive industry is influenced not only by consumers but also by policies.

New energy policy

Most of the existing research on new energy policy concerns its impact on the production decision-making of enterprises. Zhao (2021) analyzes the game behavior between the government and automakers, starting from different government subsidy strategies. The study shows that government subsidies can improve battery life compared to no government subsidies (Zhao, 2021). Luo et al. (2014) study an automotive supply chain in which the manufacturer and retailer offer electric vehicles (EVs) to different types of consumers under the government's price discount incentive scheme, which involves price discount rates and subsidy caps. The results show that subsidy caps effectively influence manufacturers' optimal wholesale pricing decisions with higher unit production costs (Luo et al., 2014). Lu et al. (2021) study the impact of government subsidies on the green innovation investment of new energy companies. The results show that the impact of direct subsidies on the green innovation investment of new energy companies is more significant than that of indirect subsidies (Lu et al., 2021). Chen et al. (2020) construct and study a two-tier supply chain consisting of a battery supplier (BS) and an electric vehicle manufacturer (EVM). The study finds that a low subsidy threshold enables the BS to increase the driving mileage level above the threshold (Chen et al., 2020). Cheng et al. (2018) combine the subsidy relief policy and stochastic demand in the EV market to study the optimal decision-making of EV manufacturers and EV sellers. The research shows that the reduction of EV subsidies does not have a significant negative impact on EV sales (Cheng et al., 2018).

Dual-credit policy

With the decline of government subsidy policies, the Dual-credit policy becomes the primary policy affecting the decision-making of automakers and dealers. The substitution effect of the Dual-credit policy on the government subsidy policy must be explored. As a sustainability policy in emerging markets, the Dual-credit policy achieves the energy-saving and emission-reduction goals of the auto industry (Li and Xiong, 2021). Li et al. (2020a) discuss the impact of the subsidy policy and the Dual-credit policy on new energy vehicles and find that under the Dual-credit policy, gradually reducing subsidies can partially offset the negative impact of the Dual-credit policy on new energy vehicles (Li et al., 2020a). Li et al. (2020b) explore the impact of the subsidy policy and the Dual-credit policy on NEV and FV production decisions considering battery recycling and find that adopting the Dual-credit policy can simultaneously improve the technical level of NEV and FV manufacturers (Li et al., 2020b). Yu et al.
(2021) use a Stackelberg game model of a two-stage automotive supply chain to explore the impact of alternative policies on production and pricing strategies, showing that when subsidies are phased out, demand for traditional fuel vehicles may decline along with demand for electric vehicles (Yu et al., 2021). Meanwhile, it is necessary to study the impact of the Dual-credit policy on industry development. Li et al. (2018) use a game-theoretic analytical model to quantitatively simulate the development of new energy vehicles under different scenarios and show that the Dual-credit policy can effectively promote their development, with their share of the entire automobile market reaching as much as 3.9% (Li et al., 2018). Ou et al. (2018) summarize the Dual-credit policy and develop a new energy and oil consumption credits model to quantify the policy's impact on consumer choice and industry profit, showing that under the policy NEV credits are often used to make up for negative CAFC credits (Ou et al., 2018). Another study constructs a multi-period dynamic equilibrium model of the credit market and shows that reducing the credits index for new energy vehicles can slow the growth of internal combustion engine vehicle production and promote substantial growth of new energy vehicles. Most previous studies address the substitution effect of the Dual-credit policy on the government subsidy policy or the policy's impact on industry development; its impact on the production decisions of the supply chain has received little attention.

Automotive supply chain production decisions under the Dual-credit policy

Under the Dual-credit policy, the production decisions of decentralized and centralized supply chains must be studied. Zhou et al. (2019) explore the policy's impact on pricing decisions and green innovation investment in dual-channel supply chains and find that a generalized Dual-credit policy could raise both thresholds, facilitating the transition to the associated TECP emissions reductions (Zhou et al., 2019). Lou et al. (2020) establish a model for optimizing fuel economy improvement levels and internal combustion engine vehicle (ICEV) production under the Dual-credit policy, showing that when an automaker's year-end new energy vehicle credits fall short of the standard, the policy is not conducive to the production of energy-efficient vehicles (Lou et al., 2020). Ma M. et al. (2021) establish decentralized and centralized decision-making models under the Dual-credit policy and show that the policy can effectively encourage the new energy vehicle supply chain to increase R&D investment, improve the technical level of new energy vehicles, and increase supply chain profit (Ma M. et al., 2021). Peng et al. (2021) study automakers' production decisions in decentralized and centralized supply chains under consumer preference and the Dual-credit policy, showing that when consumers have stronger environmental preferences, manufacturers and retailers should raise the prices of new energy vehicles (Peng et al., 2021).
Ma H. et al. (2021) discuss fuel economy improvement levels and the production of conventional internal combustion engine vehicles (ICEVs) and new energy vehicles, together with research and development (R&D) cost-sharing contracts and ICEV revenue-sharing contracts aimed at coordinating the traditional automotive supply chain; the results show that in some cases cost-sharing contracts may be better than revenue-sharing contracts (Ma H. et al., 2021). Many scholars have thus studied automobile supply chain production decisions under the Dual-credit policy, but without considering the automaker's credits strategy.

Automaker's credits strategy under the Dual-credit policy

Under the Dual-credit policy, the automaker's choice of credits strategy affects the firm's operating decisions and development, so this choice must be examined. Cheng and Fan (2021) study production strategy options for competition and cooperation between fuel vehicle and new energy vehicle competitors under the Dual-credit policy, showing that maintaining a relatively high credit price is often more conducive to promoting the expansion of new energy vehicles than setting a high NEV output ratio (Cheng and Fan, 2021). Lu et al. (2022) study the pricing and emission-reduction decisions of two manufacturers under the Dual-credit policy, considering consumers' low-carbon preferences and price competition, and show that the policy can reduce the price of new energy vehicles, improve the profits of their manufacturers, and promote active emission reduction by fuel vehicle makers (Lu et al., 2022). No prior study comprehensively examines the automaker's optimal credits strategy under the Dual-credit policy while accounting for consumers' fuel consumption sensitivity and their sensitivity to new energy vehicles' endurance capacity constraints, which is what this paper does. To achieve the optimal credits strategy and production decisions, enterprises also need a reasonable coordination mechanism.

Supply chain coordination mechanism

Designing a reasonable contract mechanism can bring supply chain members to Pareto optimality. Mondal and Giri (2021) establish four models, centralized, decentralized, retailer-led revenue-sharing, and bargaining revenue-sharing, under a purchase restriction policy, showing that retailer-led revenue-sharing can achieve a win-win situation for manufacturers and retailers (Mondal and Giri, 2021). Han et al. (2021) design a revenue-sharing contract within a Stackelberg model and show that supply chains benefit from rising consumer environmental awareness but are constrained by carbon emission reduction (CER) investment costs (Han et al., 2021). Lan et al. (2021) establish a supply chain (SC) model including centralized and decentralized decision-making and find that quality control (QC) under consumer bundling behavior (CBB) cannot be coordinated through wholesale price contracts alone, but can be perfectly coordinated through such a contract in terms of cost (Lan et al., 2021). Shen (2021) establishes a retailer-dominated bargaining expectation game model with a revenue-sharing contract.
The research shows that the revenue-sharing contract improves greening levels and reduces retail prices compared with decentralized decision-making (Shen, 2021). Li and Liu (2020) design a contract between a supplier and a retailer to coordinate a newsvendor setting; the contract is limited to a particular two-part tariff with a wholesale price equal to unit production cost (Li and Liu, 2020). Cui et al. (2020) establish a revenue-sharing contract that accounts for farmers' green farming costs and retailers' green marketing costs, finding that revenue-sharing contracts improve the greening level and increase the profits of both farmers and retailers (Cui et al., 2020). Shao and Liu (2022) study revenue-sharing and cost-sharing contracts that, compared with wholesale price contracts, incentivize manufacturers to improve the greenness of subsidized products in a complementary-product supply chain with environmentally aware consumers and government green subsidies (Shao and Liu, 2022). Little research addresses the coordination and optimization of the automobile supply chain under the credit policy. Ma H. et al. (2021) design a cost-sharing contract and a revenue-sharing contract to coordinate the traditional automobile supply chain and show that in some cases the cost-sharing contract may be better than the revenue-sharing contract (Ma H. et al., 2021). Under the Dual-credit policy, no prior study simultaneously considers consumers' fuel consumption sensitivity and their endurance concern for new energy vehicles, explores the automaker's credits strategy and the optimal production decisions of supply chain members, and designs reasonable contracts to coordinate them.

In summary, previous studies on the Dual-credit policy mainly focus on its impact on the production decisions of decentralized and centralized supply chains or on the automaker's credits strategy, and none studies automotive supply chain decision problems based on the automaker's credits strategy under the policy. Building on this literature, this paper explores credits cooperation and trading strategies among automakers under the Dual-credit policy and analyzes the automaker's optimal credits strategy considering consumers' fuel consumption sensitivity and their endurance capacity sensitivity for new energy vehicles. We also derive the optimal decisions of the automaker and the dealer, providing a theoretical reference for automakers in managing relationships with other automakers and with retailers, and theoretical support for responding to the Dual-credit policy. The literature comparison is shown in Table 1. This paper considers the impact of the fuel consumption of traditional fuel vehicles and the endurance limits of new energy vehicles on consumer decision-making; compared with Sun et al. (2020) and Wang et al. (2021), who focus on consumers' low-carbon preferences, it therefore has broader theoretical reference value. Compared with Cheng and Fan (2021) and Lu et al.
(2022), who focus on the impact of the Dual-credit policy on decentralized and centralized supply chain production decisions, this paper studies automobile supply chain decision-making based on the automaker's credits strategy under the policy and provides corresponding theoretical support for automakers' responses to it. Compared with Ma H. et al. (2021), who focus on fuel economy improvement levels and the output of traditional internal combustion engine vehicles and new energy vehicles, this paper comprehensively considers the fuel consumption of traditional fuel vehicles and the endurance limits of new energy vehicles.

Problem description and assumptions

Under the Dual-credit policy, the traditional fuel vehicles and new energy vehicles produced by the automaker compete with each other, so we introduce a cross-price elasticity coefficient. Considering consumers' environmental awareness and their endurance capacity sensitivity for new energy vehicles, this paper studies the automaker's credits cooperation and trading strategies and analyzes the optimal credits strategy. The market demand for traditional fuel vehicles and new energy vehicles is modeled as a linear function of the two sales prices, consumers' fuel consumption sensitivity, and consumers' endurance capacity sensitivity. Following Lu et al. (2022), credits are computed as CAFC credits = (t_1 − T) × (number of traditional fuel vehicles) and NEV credits = λ_1 × (number of new energy vehicles) − λ × (number of traditional fuel vehicles). We let θ and (1 − θ) denote consumers' preferences for traditional fuel vehicles and new energy vehicles, μ the total market capacity, b the price sensitivity coefficient, f the cross-price elasticity coefficient, and p_1^{ij} and p_2^{ij} the sales prices of traditional fuel vehicles and new energy vehicles. Further, σ denotes consumers' fuel consumption sensitivity, t_1 the automaker's actual fuel consumption, φ consumers' endurance capacity sensitivity for new energy vehicles, t_2 the endurance capacity constraint of new energy vehicles, T the standard fuel consumption value for the automaker, λ_1 the credits earned per new energy vehicle, and λ the state-mandated NEV proportion requirement.

Under the Dual-credit policy, the automaker's approach to compensating for negative credits, or dealing with positive NEV credits, can be divided into a trading strategy and a cooperative strategy. This paper compares and analyzes the two. We consider decentralized and centralized decision-making between the automaker and the dealer under each strategy, taking into account consumers' fuel consumption sensitivity and their sensitivity to new energy vehicles' endurance capacity constraints. Furthermore, we coordinate the supply chain through a revenue-sharing contract so that the benefits of supply chain members achieve Pareto optimality, and we compare the automaker's strategic choice across different investment amounts. The variables used in this article are described in Table 2.
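To make the demand and credit accounting concrete, the following sketch implements the two credit formulas stated above together with a linear demand specification of the kind standard in this literature. The credit expressions follow the text exactly; the demand form, the field names, and the prices in the usage example are illustrative assumptions, since the paper's exact demand equations are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Params:
    theta: float  # consumers' preference for traditional fuel vehicles
    mu: float     # total market capacity
    b: float      # price sensitivity coefficient
    f: float      # cross-price elasticity coefficient
    sigma: float  # consumers' fuel consumption sensitivity
    phi: float    # consumers' endurance capacity sensitivity
    t1: float     # actual fuel consumption of the automaker (L/100 km)
    t2: float     # endurance capacity constraint of NEVs (km)
    T: float      # standard fuel consumption value (L/100 km)
    lam1: float   # credits earned per new energy vehicle
    lam: float    # state-mandated NEV proportion

def demand(p1: float, p2: float, prm: Params) -> tuple[float, float]:
    # Assumed linear demand: own price lowers demand, the rival price raises
    # it, and the sensitivity terms penalize fuel consumption / endurance.
    q1 = prm.theta * prm.mu - prm.b * p1 + prm.f * p2 - prm.sigma * prm.t1
    q2 = (1 - prm.theta) * prm.mu - prm.b * p2 + prm.f * p1 - prm.phi * prm.t2
    return q1, q2

def credits(q1: float, q2: float, prm: Params) -> tuple[float, float]:
    # Credit formulas as stated in the text.
    cafc = (prm.t1 - prm.T) * q1        # CAFC credits (a deficit when t1 > T)
    nev = prm.lam1 * q2 - prm.lam * q1  # net NEV credits after the quota
    return cafc, nev

# Parameter values from the numerical analysis section; prices illustrative.
prm = Params(theta=0.4, mu=1_000_000, b=7, f=2, sigma=0.6, phi=0.6,
             t1=8.86, t2=225, T=6.9, lam1=3.5, lam=0.1)
q1, q2 = demand(60_000, 45_000, prm)  # illustrative retail prices (RMB)
print(credits(q1, q2, prm))
```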
We let i = B, T index the automaker's choice of the trading strategy or the cooperative strategy, and j = D, C, R index decentralized decision-making, centralized decision-making, and contract coordination. Accordingly, {BD, BC, BR} denote decentralized decision-making, centralized decision-making, and the revenue-sharing contract under the credits trading strategy, and {TD, TC, TR} denote the same three settings under the credits cooperative strategy. The structure of the analysis is shown in Figure 1.

Analysis of the credits strategy of the automotive supply chain

When the automaker's actual fuel consumption satisfies t_1 ∈ (T, ∞), it exceeds the standard value and NEV credits must be obtained. The credits strategies available to the automaker are the direct purchase (trading) strategy and the cooperative strategy.

Supply chain decisions based on the credits trading strategy

Trading directly with other NEV manufacturers is the quickest way for the automaker to eliminate negative CAFC credits. In this case the automaker directly purchases the positive NEV credits of other automakers to cover its excess CAFC credits and to repay negative NEV credits.

Model BD: Decentralized decision-making of automaker and dealer

Under decentralized decision-making, the automaker and the dealer play a Stackelberg game: the automaker first sets the wholesale prices of traditional fuel vehicles and new energy vehicles, and the dealer then sets the corresponding sales prices. The profit functions of the automaker and the dealer are as follows. Here (w_1^{BD} − c_1)Q_1^{BD} is the automaker's income from wholesaling traditional fuel vehicles, (w_2^{BD} − c_2)Q_2^{BD} its income from wholesaling new energy vehicles, (p_1^{BD} − w_1^{BD})Q_1^{BD} the dealer's income from selling traditional fuel vehicles, (p_2^{BD} − w_2^{BD})Q_2^{BD} the dealer's income from selling new energy vehicles, Q_1^{BD}(t_1 − T)p_3 the automaker's cost of buying positive NEV credits to cover its excess CAFC credits, and (λ_1 Q_2^{BD} − λQ_1^{BD})p_3 the cost of repaying negative NEV credits after deducting the standard NEV credit requirement.

Proposition 1: By solving backwards (the reverse solution method, sketched symbolically below), under decentralized decision-making based on the credits trading strategy we obtain the optimal sales prices, wholesale prices, and sales volumes of traditional fuel vehicles and new energy vehicles.

FIGURE 1 Analysis of credits strategy in automobile supply chain.

Model BC: Centralized decision-making of automaker and dealer

Under centralized decision-making, the automaker and the dealer treat the entire supply chain system as one enterprise and jointly determine the sales prices of traditional fuel vehicles and new energy vehicles. The profit function of the entire supply chain system is as follows.

Proposition 2: Under centralized decision-making based on the credits trading strategy, we obtain the optimal sales prices and sales volumes of traditional fuel vehicles and new energy vehicles.
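Propositions 1 and 2 are obtained by the reverse solution method: first solve the dealer's first-order conditions for the retail prices given the wholesale prices, then substitute the best responses into the automaker's profit and solve its first-order conditions. A minimal symbolic sketch of this two-stage procedure follows; the linear demand form is the same illustrative assumption as above, and the credit terms mirror the ones described for Model BD.

```python
import sympy as sp

w1, w2, p1, p2 = sp.symbols('w1 w2 p1 p2', real=True)
theta, mu, b, f, c1, c2, p3, t1, T, lam1, lam = sp.symbols(
    'theta mu b f c1 c2 p3 t1 T lambda1 lambda_', positive=True)

# Assumed linear demand (illustrative; sensitivity terms dropped for brevity)
Q1 = theta * mu - b * p1 + f * p2
Q2 = (1 - theta) * mu - b * p2 + f * p1

# Stage 2: dealer's best-response retail prices given the wholesale prices
pi_r = (p1 - w1) * Q1 + (p2 - w2) * Q2
br = sp.solve([sp.diff(pi_r, p1), sp.diff(pi_r, p2)], [p1, p2], dict=True)[0]

# Stage 1: automaker's profit under the trading strategy, including the cost
# of covering excess CAFC credits and the net NEV-credit settlement at p3
pi_m = ((w1 - c1) * Q1 + (w2 - c2) * Q2
        - Q1 * (t1 - T) * p3 + (lam1 * Q2 - lam * Q1) * p3)
pi_m = pi_m.subs(br)
sol = sp.solve([sp.diff(pi_m, w1), sp.diff(pi_m, w2)], [w1, w2], dict=True)[0]
print(sp.simplify(sol[w1]))  # closed-form optimal wholesale price w1*
```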
Proposition 3: When the automaker chooses the trading strategy, the profit of the supply chain system under centralized decision-making exceeds that under decentralized decision-making, and the sales volumes of traditional fuel vehicles and new energy vehicles are likewise greater. When the automaker chooses the trading strategy, the automaker and the dealer jointly set the sales prices and wholesale prices under centralized decision-making to maximize the profit of the entire system and pursue a win-win outcome. Under decentralized decision-making, by contrast, the dealer raises sales prices to seek higher profit, which reduces sales volumes, while the automaker raises wholesale prices to maximize its own profit; together these choices reduce the profit of the entire supply chain system. Hence both sales volumes and system revenue are higher under centralized decision-making.

Model BR: Revenue-sharing contract

The propositions above show that under decentralized decision-making, the sales volumes of supply chain members are lower and their profits slimmer. In this section the supply chain is coordinated through a revenue-sharing contract that encourages the dealer to sell, so that the sales volumes of supply chain members reach the centralized level; the revenue-sharing ratio is obtained by solving the model. Under the contract, the dealer receives a portion ρ_1 of the automaker's revenue. The profit functions of the automaker and the dealer are as follows.

Proposition 4: Under the revenue-sharing contract based on the credits trading strategy, we obtain the optimal sales prices, wholesale prices, and sales volumes of traditional fuel vehicles and new energy vehicles. Under the contract, the entire supply chain attains the centralized decision-making outcome, which pins down the revenue-sharing ratio ρ_1. After coordination, the sales volumes of traditional fuel vehicles and new energy vehicles exceed their pre-coordination levels, that is, Q_1^{BR*} > Q_1^{BD*} and Q_2^{BR*} > Q_2^{BD*}, and the profits of the automaker and the dealer exceed their respective profits before coordination.

Supply chain decisions based on the credits cooperative strategy

Under the credits cooperative strategy, the automaker cooperates with other automakers: through its investment it acquires part of the partner's profit and assumes corresponding responsibility, but does not participate in the partner's decision-making, while the cooperating manufacturer offsets the negative CAFC credits. We assume the parties jointly negotiate a discounted credit price p_4 = k p_3 (0 < k < 1) for the positive NEV credits.

Model TD: Decentralized decision-making of automaker and dealer

Under decentralized decision-making, the automaker's investment amount is T_1, and the profit functions of the automaker and the dealer are as follows.

Proposition 5: Under decentralized decision-making based on the credits cooperative strategy, we obtain the optimal sales prices, wholesale prices, and sales volumes of traditional fuel vehicles and new energy vehicles.

Model TC: Centralized decision-making of automaker and dealer

Under centralized decision-making, the automaker's investment amount is T_2.
The profit function of the entire supply chain system is as follows.

Proposition 6: Under centralized decision-making based on the credits cooperative strategy, we obtain the optimal sales prices and sales volumes of traditional fuel vehicles and new energy vehicles.

Proposition 7: When the automaker chooses the cooperative strategy, the profit of the supply chain system under centralized decision-making is greater than under decentralized decision-making, and the sales volumes of traditional fuel and new energy vehicles are likewise greater. Under the cooperative strategy, the automaker and the dealer jointly set sales and wholesale prices under centralized decision-making, maximizing the whole system's profit and pursuing a win-win outcome; consequently both sales volumes and system revenue exceed their decentralized counterparts.

Model TR: Revenue-sharing contract

Under the revenue-sharing contract, the dealer obtains a portion ρ_2 of the automaker's revenue, which raises product sales volumes. The profit functions of the automaker and the dealer are as follows.

Proposition 8: Under the revenue-sharing contract based on the credits cooperative strategy, we obtain the optimal sales prices, wholesale prices, and sales volumes of traditional fuel vehicles and new energy vehicles. Under the contract, the entire supply chain attains the centralized decision-making outcome, which pins down the revenue-sharing ratio ρ_2. After coordination, the sales volumes of traditional fuel vehicles and new energy vehicles exceed their pre-coordination levels, and the profits of the automaker and the dealer exceed their respective profits before coordination, that is, Π_m^{TR} ≥ Π_m^{TD} and Π_r^{TR} ≥ Π_r^{TD}.

Proposition 9: When the automaker's investment amount under centralized decision-making is below the threshold ΔT, the automaker chooses the cooperative strategy with other automakers. Profit is largest when the automaker and the dealer make centralized decisions, under either the trading or the cooperative strategy, and the automaker's centralized investment amount is small. We therefore compute the difference between the centralized profit under the cooperative strategy and the centralized profit under the trading strategy: if the difference is positive, the cooperative strategy is better than the trading strategy when the automaker's centralized investment is small.

Proposition 10: When the discounted NEV credit price is low, the cooperative strategy is more advantageous for the automaker; when it is high, the advantage of the cooperative strategy decreases. When the NEV credit price itself is low, the trading strategy is more advantageous; when it is high, the cooperative strategy is more advantageous. When the automaker commits a given investment and the discounted NEV credit price is low, it can trade NEV credits directly at that lower discounted price, reducing cost and obtaining high returns.
Conversely, when the discounted NEV credit price is high, it is more beneficial for the automaker to trade NEV credits directly without investing. When the NEV credit price is low, the automaker profits more from the trading strategy; when the price is high, the automaker should adopt the cooperative strategy to obtain NEV credits at the discounted price, reducing cost and increasing profit.

Proposition 11: The sales of traditional fuel vehicles decrease as customers' fuel consumption sensitivity increases, and the sales of new energy vehicles decrease as consumers' endurance requirements for new energy vehicles increase. Neither sensitivity affects the automaker's choice of credits strategy. As customers' fuel consumption sensitivity rises, their environmental awareness grows, they focus on environmentally friendly travel, and they avoid traditional fuel vehicles when purchasing cars, so sales of traditional fuel vehicles fall. Meanwhile, as consumers focus increasingly on the endurance capability of new energy vehicles, endurance requirements weigh on their purchase decisions and sales of new energy vehicles decline.

Proposition 12: When the automaker chooses the cooperative strategy, the sales prices of traditional fuel vehicles rise, and their sales volumes fall, as the credit discount price and the credit price increase; for new energy vehicles the opposite holds. Under the cooperative strategy, as the credit discount price and credit price increase, the automaker's cost of purchasing NEV credits rises, forcing it to raise the sales prices of traditional fuel vehicles, which lowers their sales volumes and profits. At the same time, the sales price of new energy vehicles is lowered to raise their sales volumes, yielding more NEV credits and higher profits.

Hypothesis 1: When the automaker's actual fuel consumption satisfies t_1 ∈ (0, T], it is below the standard value, and the credits strategies available to the automaker are the marketing strategy and the cooperative strategy. In this case Q_1^{BD}(t_1 − T)p_3 ≤ 0 under the trading strategy and Q_1^{TD}(t_1 − T)p_4 ≤ 0 under the cooperative strategy. Since the actual fuel consumption is below the standard value, the automaker generates no negative CAFC credits and does not need to purchase positive NEV credits, which is why these terms are non-positive; instead, the automaker can sell its surplus NEV credits through the marketing and cooperative strategies.

Numerical analysis

This section analyzes an automaker producing traditional fuel vehicles and new energy vehicles as an example.
Referencing historical car sales data and the literature (Ou et al., 2018), we set the total market capacity μ = 1 million vehicles, consumers' preference for traditional fuel vehicles θ = 0.4, price sensitivity coefficient b = 7, cross-price elasticity coefficient f = 2, consumers' fuel consumption sensitivity σ = 0.6, actual fuel consumption of the automaker t_1 = 8.86 L/100 km, consumers' endurance capacity sensitivity for new energy vehicles φ = 0.6, new energy vehicles' endurance capacity constraint t_2 = 225 km, standard fuel consumption value T = 6.9 L/100 km, NEV credit price p_3 = 3,000 RMB/credit, NEV credit discount price p_4 = 1,200 RMB/credit, credits per new energy vehicle λ_1 = 3.5, state-mandated NEV proportion λ = 0.1, production cost of traditional fuel vehicles c_1 = 50,000 RMB/vehicle, and production cost of new energy vehicles c_2 = 30,000 RMB/vehicle. Since the focus of this paper is the automaker's credits strategy rather than an in-depth comparison of the decentralized and centralized supply chains, the automaker's investment amounts under models TD and TC are set to T_1 = 1 and T_2 = 1, respectively.

Analysis of the credits strategy of the automotive supply chain

Figures 2 and 3 show that the profit of the supply chain system is greater under centralized decision-making, under both the trading strategy and the cooperative strategy, verifying Propositions 3 and 7. When the automaker's investment amount is small, it should adopt the cooperative strategy; otherwise the trading strategy is more advantageous. For a given investment, the cooperative strategy is more advantageous when the discounted NEV credit prices are low, and its advantage diminishes when they are high. Likewise, when NEV credit prices are low the trading strategy is more advantageous, and when they are high the cooperative strategy is. The cooperative strategy grants the automaker discounted NEV credit prices, lowering costs and raising profits when its investment amount is small; when the NEV credit price itself is low, the automaker need not invest at all, since the cost of directly trading positive NEV credits falls and profit rises. Proposition 10 is verified.

Analysis of the credits strategy considering contract coordination

Figure 4 shows that, with suitably designed revenue-sharing contracts under both the automaker's trading strategy and its cooperative strategy, the profits of the automaker and the dealer exceed their profits before coordination. After coordination, the automaker's profit increases with the NEV credit price: the higher the NEV credit price, the more the automaker earns by trading new energy vehicle credits. The revenue-sharing contract can therefore effectively coordinate the automotive supply chain, bring its members to Pareto optimality, and promote the rapid development of the automobile retail industry. Propositions 4 and 8 are thereby further verified.
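The credit economics implied by these parameter values can be checked directly: for a given output mix, the residual credit deficit is priced at p_3 under the trading strategy and at the discounted p_4 under cooperation (before accounting for the investment amounts T_1, T_2). The sales volumes in the sketch below are illustrative placeholders rather than the equilibrium quantities from the propositions.

```python
t1, T = 8.86, 6.9         # actual vs standard fuel consumption (L/100 km)
lam1, lam = 3.5, 0.1      # credits per NEV; required NEV proportion
p3, p4 = 3000.0, 1200.0   # NEV credit price vs discounted price (RMB/credit)

q1, q2 = 300_000, 100_000  # assumed volumes of fuel vehicles and NEVs

cafc_deficit = (t1 - T) * q1    # CAFC credits to be offset, since t1 > T
nev_net = lam1 * q2 - lam * q1  # net NEV credits after the quota

# Credits still to be bought after own NEV credits are used for compliance
need = max(cafc_deficit - max(nev_net, 0.0), 0.0)
print(f"trading cost:     {need * p3:,.0f} RMB")
print(f"cooperative cost: {need * p4:,.0f} RMB (before investment T1/T2)")
```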
FIGURE 2 Automaker's credits strategy under the trading strategy.

FIGURE 3 Automaker's credits strategy under the cooperative strategy.

Analysis of supply chain decisions under different credits strategies

Table 3 shows that when the automaker's investment amount is small, the cooperative strategy yields higher returns for the automaker and the dealer than the trading strategy, under both decentralized and centralized decision-making. Under both the trading and the cooperative strategy, centralized decision-making lowers the sales prices of traditional fuel vehicles and new energy vehicles and raises their sales volumes; compared with decentralized decision-making, the supply chain system's profit is larger and customer satisfaction is higher, which helps improve brand loyalty.

Conclusion and policy implications

Conclusion

Against the background of the Dual-credit policy, this paper studies an automotive supply chain consisting of an automaker and a dealer. According to the different ways the automaker can obtain NEV credits, its credits strategy is divided into a trading strategy and a cooperative strategy. Under each strategy, decentralized and centralized decision-making models of the automotive supply chain are constructed, and the optimal decisions are compared and analyzed from the perspective of customers' environmental awareness and consumers' endurance capacity sensitivity for new energy vehicles. Under both strategies, the supply chain is coordinated through a revenue-sharing contract so that members can achieve Pareto optimality, and the automaker's optimal credits strategy is explored for different investment amounts. The results show the following. (1) Customers' fuel consumption sensitivity and endurance requirements for new energy vehicles do not affect the automaker's choice of credits strategy; they affect only the sales of traditional fuel vehicles and new energy vehicles. (2) The automaker's optimal decisions depend on credit discount prices and credit prices: as these increase, the sales prices of traditional fuel vehicles rise and their sales volumes fall, with the opposite holding for new energy vehicles. (3) Whether the automaker chooses the trading or the cooperative strategy, centralized decision-making in the automotive supply chain outperforms decentralized decision-making, so the automaker should make centralized decisions with the dealer to create a win-win situation. (4) When the automaker's actual fuel consumption exceeds the standard value and its investment amounts are small, the cooperative strategy is more advantageous, and the automaker can cooperate with other automakers to obtain NEV credits; when the actual fuel consumption is below the standard value and the investment amounts are larger, the cooperative strategy is again more advantageous, and the automaker should cooperate with other automakers to sell NEV credits.
(5) Under both the trading and the cooperative strategy, once the revenue-sharing ratio reaches a certain threshold, the revenue-sharing contract effectively coordinates the supply chain so that members' benefits reach Pareto optimality. Supply chain members can therefore use the revenue-sharing contract to coordinate the system and maximize their income.

Policy implications

The prices of NEV credits and the credit value per new energy vehicle affect NEV sales prices and volumes. To better achieve the goal of carbon emission reduction, the government should set reasonable NEV credit prices and per-vehicle credit values to encourage automakers to produce new energy vehicles, strengthen consumers' low-carbon preferences, and encourage green, environmentally friendly lifestyles. Further reductions in carbon emissions will be difficult unless consumers' fuel consumption sensitivity rises and their endurance capacity sensitivity falls. In practice, consumers' fuel consumption sensitivity is pushed up by rising fuel prices, and endurance concerns can be eased by siting electric vehicle charging piles systematically and scientifically to improve their utilization, or by increasing their number. As consumers' fuel consumption sensitivity increases and their endurance capacity sensitivity decreases, automakers will produce more new energy vehicles to meet demand, converting the negative impact of the Dual-credit policy on automakers into a positive one and encouraging consumers to buy more new energy vehicles, so that enterprises can better align with the policy and obtain higher returns.

To maximize their interests, automakers should collaborate with other automakers on credits in responding to the Dual-credit policy. In addition, establishing solid partnerships between automakers and dealers, and striving for centralized decision-making, will increase sales volumes and achieve a win-win outcome for both parties. Similarly, given differences in NEV credit prices, discounted credit prices, and investment amounts, automakers should make the strategic choices that best fit market and policy changes and promote the sustainable development of the automobile retail industry.

This study has several limitations, and it would be interesting to extend the research in several directions. The paper considers the strategic choices among the automaker, the dealer, and other automakers, but only over a single cycle under the Dual-credit policy and only from the perspective of an automaker producing both traditional fuel vehicles and new energy vehicles. Future work could study the strategic choices of such automakers over multiple cycles, as well as automakers' credits strategies under demand uncertainty.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary Materials; further inquiries can be directed to the corresponding author.
Learning an unknown transformation via a genetic approach

Recent developments in integrated photonics technology are opening the way to the fabrication of complex linear optical interferometers. The application of this platform is ubiquitous in quantum information science, from quantum simulation to quantum metrology, including the quest for quantum supremacy via the boson sampling problem. Within these contexts, the capability to learn efficiently the unitary operation of the implemented interferometers becomes a crucial requirement. In this letter we develop a reconstruction algorithm based on a genetic approach, which can be adopted as a tool to characterize an unknown linear optical network. We report an experimental test of the described method by performing the reconstruction of a 7-mode interferometer implemented via the femtosecond laser writing technique. Further applications of genetic approaches can be found in other contexts, such as quantum metrology or learning unknown general Hamiltonian evolutions.

The genetic algorithm, which aims at learning the unitary transformation U_r starting from the collected data set, is structured as follows.

1. A distribution of N DNA sequences, representing N different m × m unitary matrices, is generated. The parameters {t_k^l, α_k^l, β_k^l} are drawn from appropriate distributions, so that the generated unitaries are distributed according to the Haar measure [S1]. An approximate form of these distributions has been evaluated numerically by sampling unitary matrices from the Haar measure. More specifically, the phase differences α_k^l − β_k^l are drawn from the uniform distribution, while the transmittivities t_k^l are drawn from a triangular one, u(t_i) = 2t_i. The exact form of these distributions can be evaluated as shown in [S2]. The obtained set of N DNAs constitutes the population Φ̃_0 = {Ẽ_1, ..., Ẽ_N}.

1′. The analytic method proposed in Ref. [S3] is applied to the experimental data. A set of m² independent estimates of the unitary [S4] is obtained from this approach by selecting appropriate subsets of the data and by performing permutations of the mode indexes. DNA sequences for the N_1 = 20 unitaries presenting the highest fitnesses are then evaluated. Finally, N_1 elements of the population Φ̃_0 obtained at step 1 are replaced by the N_1 candidates determined from the analytic method. The new set of N DNAs constitutes the initial population Φ_0 = {E_1, ..., E_N}.

2. The population is sorted by decreasing fitness values, evaluated between the experimental data (P̃_{i,j}, Ṽ_{ij,pq}) and the predictions (P^{E_l}_{i,j}, V^{E_l}_{ij,pq}) obtained from the matrices of the population, with l = 1, ..., N. The new ordered population set is Φ_1 = {E_1, ..., E_N}.

3. The single-photon probabilities P^{E_1} and the two-photon visibilities V^{E_1} are calculated from the element E_1. If f(E_1) ≥ δ the algorithm halts and returns the solution matrix U_{E_1}. More specifically, the unitary matrix U_{E_1} is obtained from the conversion function T(E_1), which relates the genetic code to the corresponding unitary transformation [S5].

4. The second half of the population, consisting of the individuals with the lowest fitness values, is removed. The resized population is the set Φ_2 = {E_1, ..., E_{N/2}}.

5. Crossover is applied between two randomly chosen individuals, and the generated offspring is added to the population set Φ_2. This operation is iterated with other couples of individuals until the number of elements of Φ_2 is N. The result of this mechanism is a new population in which the elements Ē_l are the newly generated individuals.

6. During the evolution of the system, several individuals with identical DNA (clones), corresponding to the element with the highest fitness, may spread through the population. This effect causes a steady depletion of the gene pool, which in turn leads to early convergence of the algorithm to a local maximum of f(E). Two countermeasures are adopted to avoid this: (i) Random Offspring Generation [S6], which imposes that crossover between two clones generates a child with random DNA, and (ii) Packing, which consists in identifying clusters of clones in the population every q iterations; for each cluster, all elements except one are removed and the population is refilled with randomly generated new individuals.

7. For each element l = 2, ..., N, mutation is applied with probability γ. The index l starts from 2 to avoid mutating the individual with the highest fitness in the population, a constraint commonly referred to as Elitism. A new population Φ_4 = {E_1, ..., E_N} is obtained.

8. Steps 2-7 are iterated starting from the new population Φ_4 until the halting condition of step 3 is reached.
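The steps above translate almost directly into code. The sketch below implements the population initialization of step 1 and the loop of steps 2-7 for DNAs encoding {t, α, β} triples; the analytic seeding of step 1′ and the Packing countermeasure are omitted for brevity, the fitness function (which compares predicted single-photon probabilities and two-photon visibilities with the measured ones through the conversion T(E)) is left as a user-supplied callable, and gene ranges in mutation and Random Offspring Generation are simplified to [0, 1).

```python
import numpy as np

rng = np.random.default_rng(0)

def random_dna(m):
    """Step 1: one DNA = {t, alpha, beta} for the m(m-1)/2 beam splitters of
    a mesh. Phases are uniform; transmittivities follow the triangular
    density u(t) = 2t, sampled by inverse transform as t = sqrt(U)."""
    n_bs = m * (m - 1) // 2
    t = np.sqrt(rng.uniform(size=n_bs))
    alpha = rng.uniform(0.0, 2.0 * np.pi, size=n_bs)
    beta = rng.uniform(0.0, 2.0 * np.pi, size=n_bs)
    return np.stack([t, alpha, beta])

def crossover(a, b):
    """Step 5, with Random Offspring Generation: clone parents produce a
    random child (genes simplified to [0, 1) here) to preserve diversity."""
    if np.array_equal(a, b):
        return rng.uniform(size=a.shape)
    cut = int(rng.integers(1, a.size))
    child = np.concatenate([a.ravel()[:cut], b.ravel()[cut:]])
    return child.reshape(a.shape)

def mutate(dna, gamma):
    """Step 7: with probability gamma, redraw one randomly chosen gene."""
    if rng.random() < gamma:
        dna = dna.copy()
        dna.ravel()[int(rng.integers(dna.size))] = rng.uniform()
    return dna

def evolve(m, fitness, N=200, gamma=0.1, delta=0.999, n_iter=5000):
    """Steps 2-7 (Packing omitted). `fitness` maps a DNA to a score built
    from the measured probabilities and visibilities via T(E)."""
    pop = [random_dna(m) for _ in range(N)]            # step 1
    for _ in range(n_iter):
        pop.sort(key=fitness, reverse=True)            # step 2
        if fitness(pop[0]) >= delta:                   # step 3: halting test
            break
        pop = pop[: N // 2]                            # step 4
        while len(pop) < N:                            # step 5
            i, j = rng.choice(N // 2, size=2, replace=False)
            pop.append(crossover(pop[i], pop[j]))
        pop = [pop[0]] + [mutate(e, gamma) for e in pop[1:]]  # step 7: Elitism
    return pop[0]
```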
SUPPLEMENTARY NOTE 2: ALGORITHM CONVERGENCE

To characterize the performance of the developed genetic algorithm, we have performed numerical simulations with simulated data for different circuit sizes. More specifically, for each tested size we generated N_unit = 50 different Haar-random unitary matrices. For each matrix, the complete set of single-photon probabilities and two-photon visibilities is calculated. To include the effect of statistical errors corresponding to a finite-size experimental data sample, noisy data are simulated for each quantity by generating random numbers following a Gaussian distribution with µ equal to the exact value and σ equal to the simulated noise. We employed a relative noise of 3% for single-photon probabilities and 5% for two-photon visibilities. The simulated noisy data are fed into the genetic algorithm to learn the unitary transformation. The results obtained for m = 4, m = 5 and m = 6 interferometers are shown in Supplementary Fig. 1 and compared with the analytic approach (employed as seed for the genetic approach) and with a numerical derivative-based minimization routine (adopting the solution of the analytic method as a starting point). More specifically, we report the histograms of the reduced χ²_ν (that is, the χ² divided by the number of degrees of freedom ν). We observe that the conventional numerical routine and the genetic approach provide comparable performance in terms of the achieved χ²_ν, with values close to 1. On the other hand, the analytic approach in general fails to capture the optimal solution in the presence of statistical noise. Note that the same set of parameters (mutation rate, population size, ...) is employed for all unitary matrices at a given size m, and in these simulations a single set of parameters proved effective for all investigated m. However, it is likely that, as the interferometer dimension m grows further, the hyperparameters will have to be tuned to optimize and guarantee convergence of the algorithm.
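The convergence test can be reproduced in outline by perturbing exact observables with Gaussian noise and scoring a candidate against the noisy data with the reduced χ². The sketch below uses only the single-photon probabilities P_ij = |U_ij|² (the visibility term is omitted), and the parameter count m(m − 1) for an m-mode interferometer is an assumption of this simplified model.

```python
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(2)

def noisy_probabilities(U, rel_noise=0.03):
    """Single-photon probabilities P_ij = |U_ij|^2, perturbed by Gaussian
    noise with mu = exact value and sigma = rel_noise * value."""
    P = np.abs(U) ** 2
    sigma = rel_noise * P
    return rng.normal(P, sigma), sigma

def reduced_chi2(P_model, P_data, sigma, n_params):
    """chi^2 per degree of freedom between model and noisy data."""
    nu = P_data.size - n_params
    return float(np.sum(((P_model - P_data) / sigma) ** 2) / nu)

m = 6
U = unitary_group.rvs(m, random_state=42)  # Haar-random target unitary
P_data, sigma = noisy_probabilities(U)
P_exact = np.abs(U) ** 2
# A perfect reconstruction scores chi2_nu ~ 1 against the noisy data
print(reduced_chi2(P_exact, P_data, sigma, n_params=m * (m - 1)))
```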
SUPPLEMENTARY NOTE 3: EXPECTED AND RECONSTRUCTED UNITARY MATRICES

Here we report the unitary matrix corresponding to the interferometer design, U, and the one obtained from the reconstruction with the genetic approach, U_r^(g), shown in Fig. 4 of the main text. The expected unitary matrix is calculated from the actual internal structure of the device, shown in Fig. 1d, which is composed of a network of symmetric 50/50 beam splitters interspersed with static relative phases between the modes. The fabrication phases for each layer are reported in Supplementary Table 1. The real and imaginary parts of the expected unitary U, and those of the unitary U_r^(g) reconstructed with the genetic approach, are then reported.
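The expected unitary of such a device can be assembled numerically layer by layer: each symmetric 50/50 beam splitter acts on two adjacent modes, and each static phase layer is a diagonal matrix. The sketch below shows one common convention; the actual layer ordering and phase convention of the device follow Fig. 1d and Supplementary Table 1, so both are assumptions here.

```python
import numpy as np

def beam_splitter(m, i, t=np.sqrt(0.5)):
    """Beam splitter with transmission amplitude t (50/50 by default) acting
    on modes (i, i+1) of an m-mode interferometer, symmetric convention."""
    r = np.sqrt(1.0 - t ** 2)
    U = np.eye(m, dtype=complex)
    U[i, i] = U[i + 1, i + 1] = t
    U[i, i + 1] = U[i + 1, i] = 1j * r
    return U

def phase_layer(m, phases):
    """Layer of static relative phases between beam-splitter layers."""
    return np.diag(np.exp(1j * np.asarray(phases)))

def mesh_unitary(m, layers):
    """Compose ('bs', mode) and ('ph', phases) layers into the full unitary."""
    U = np.eye(m, dtype=complex)
    for kind, arg in layers:
        L = beam_splitter(m, arg) if kind == 'bs' else phase_layer(m, arg)
        U = L @ U
    return U

# Example: a 3-mode mesh, one phase layer between two 50/50 beam splitters
U = mesh_unitary(3, [('bs', 0), ('ph', [0.0, 1.2, 0.7]), ('bs', 1)])
print(np.allclose(U @ U.conj().T, np.eye(3)))  # True: the result is unitary
```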
Anticarcinogenic Effects of Gold Nanoparticles and Metformin Against MCF-7 and A549 Cells

Metformin is commonly prescribed to people with diabetes, and previous studies have shown that it can prevent the growth of cancer cells. This study investigates the effects of metformin and gold nanoparticles in MCF-7 breast cancer and A549 lung cancer cell lines. The effects were determined on cells grown in 24 h cell culture: MCF-7 and A549 cells were incubated for 24 h with escalating molar concentrations of ifosfamide, and the MTT assay was used to determine the cytotoxicity of metformin toward the two cell lines. The expression of Bax, BCL2, PI3K, Akt3, mTOR, Hsp60, Hsp70, and TNF-α was measured by RT-PCR. Metformin and gold nanoparticles inhibited the proliferation of MCF-7 and A549 cells in a dose- and time-dependent manner, with IC50 values of 5 mM and 10 µM, respectively. RT-PCR assays showed that ifosfamide + metformin + gold nanoparticles significantly reduced the expression of BCL2, PI3K, Akt3, mTOR, Hsp60 and Hsp70 and increased the expression of TNF-α and Bax. These findings suggest that further studies should be conducted and that metformin and gold nanoparticles may be useful in breast and lung cancer treatment.

Introduction

Breast and lung cancer are the most frequent causes of cancer-related death worldwide [56,78]. More than half of breast cancer patients will develop metastases to the bone, liver, lung, or brain [59]. The American Cancer Society has predicted around 127,070 deaths from lung cancer and about 238,340 new cases of lung cancer in the US in 2023 (https://www.cancer.org/). In addition, there will be 55,720 new cases of ductal carcinoma in situ (DCIS), 297,790 new cases of invasive breast cancer in women, and 43,700 breast cancer-related deaths. Chemotherapeutic drugs effective against a wide range of malignant disorders include ifosfamide, carboplatin, cisplatin, etoposide and paclitaxel [13]. Ifosfamide, a cyclophosphamide analog, exhibits broad-spectrum action against a variety of neoplasms across oncologic specialties, including haematological, breast and lung cancer [8,57].

The first-line treatment for type 2 diabetes is the biguanide drug metformin, also known as 1,1-dimethylbiguanide hydrochloride. Many findings in recent years have revealed new functions of metformin [44,55]. Although metformin, a first-line medication for T2DM, is being employed for its anticancer properties, there is little information in the literature about how much it affects overall survival in patients with stage IV cancer [58]. Combined treatment with metformin and doxorubicin (DOX) is effective against a variety of cancers, including breast cancer [61]. In several studies, metformin significantly decreased the risk of bladder, oesophageal, and lung cancer [64].

Early tumor detection and diagnosis are the main foundations for using nanotechnology in treating cancer [60]. Owing to their excellent electrical conductivity, stability, ease of modification, and biocompatibility, gold nanoparticles (AuNPs) have become one of the most widely used materials in electrochemical biosensors and in medicinal and biological applications
[38,72]. Gold nanoparticles have grown in significance in biomedical research and diagnostics owing to their distinct physicochemical characteristics [29,42]. There is hope and possibility for using gold nanoparticles in cancer treatment and diagnosis; however, it is crucial to consider unforeseen consequences for human health [60]. One study reports the first successful synthesis of highly stable gold nanoparticles using the bioactive compound naringenin in isolation, serving as a dual reducing and stabilizing agent [48]. It is important to note that AuNPs may influence cellular responses without affecting viability, for example by inhibiting proliferation, altering calcium [25] and nitrogen oxide release [26], or stimulating respiratory activity or the activity of mitochondrial enzymes [62]. AuNPs can be cytotoxic to specific cancer cell line types [65]. In vivo, AuNPs prevented VEGF-induced permeability and angiogenesis in mouse ovarian and ear tumor models [4,46]. Internalized gold nanoparticles may modify intracellular signaling and obstruct the MAPK pathway, preventing metastasis by interfering with the epithelial-mesenchymal transition (EMT) [3].

Research has demonstrated that naturally occurring GA can be used as a nontoxic phytochemical construct in the production of readily administrable biocompatible AuNPs for diagnostic and therapeutic applications in nanomedicine [32]. AuNPs-based contrast agents may be useful in x-ray-based computed tomography [6]. Other reports have shown an unprecedented 82% reduction in tumor volume after a single-dose administration of GA-198AuNPs (408 μCi) [12]. The oncological implications of MGF-198AuNPs as a new therapeutic agent for treating prostate and various solid tumors have been presented [30,31]. Khoobchandani et al. [33] clinically translated, from mice to humans, proprietary combinations of gold nanoparticles and phytochemicals to develop the Nano-Ayurvedic drug Nano Swarna Bhasma (NSB) for treating human metastatic breast cancer patients [33]. The antitumor mechanism induced by YF-AuNPs in PC-3 and MDA-MB-231 cell lines was attributed to apoptosis; RAW 264.7 macrophages treated with YF-AuNPs also showed elevated levels of antitumor cytokines (TNF-α and IL-12) and reduced levels of pro-tumor cytokines (IL-6 and IL-10) [68].

To our knowledge, this study is the first in the literature to investigate the effects of metformin and gold nanoparticles on these genes in MCF-7 and A549 cells, with the aim of revealing their anticarcinogenic effects.

Synthesis and Characterization of Citrate-capped Gold Nanoparticles

Citrate-capped gold nanoparticles were synthesized following the procedure given in the literature [70]. In summary, 500 ml of 1 mM HAuCl_4 in distilled water was placed in a one-liter glass flask and stirred until boiling. A solution of 50 ml of 38.8 mM sodium citrate dihydrate (Na_3C_6H_5O_7·2H_2O) in distilled water was added quickly, stirring was continued for 10 min at the boil, and the mixture was then removed from the heater and stirred for a further 15 min without heating. The solution, which turned from yellow to light red, was filtered at room temperature through small-pored filter paper and stored in the dark, ready for use.
The Au nanoparticles were characterized by ultraviolet-visible spectroscopy (UV-Vis) on a Perkin Elmer Lambda 35 spectrophotometer operating in the 200-900 nm wavelength range. Functional groups were examined by Fourier transform infrared spectroscopy (FT-IR) with a Perkin Elmer Frontier spectrophotometer over 400-4000 cm⁻¹. Transmission electron microscopy (TEM) images were acquired with a Hitachi HT-7700 microscope operating at 300 kV, and scanning electron microscopy (SEM) images were recorded on a FEI Inspect S50 microscope operating at 25 kV.

Cell Culture

MCF-7 (ATCC® HTB-22) breast cancer cells derived from a 69-year-old female patient and A549 (ATCC® CRM-CCL-185) lung cancer epithelial cells derived from a 58-year-old male patient were used in the study. MCF-7 cells were grown at 37 °C and 5% CO_2 in RPMI 1640 medium (Eco-Tech, Cat No: RPMI500) containing 10% FBS (Serana, Lot: 34010720FBS) and 1% penicillin/streptomycin. The effects of metformin and gold nanoparticles on MCF-7 and A549 cells were investigated at different concentrations and compared with an untreated control group. Based on the literature, metformin was applied at 5 mM, 25 mM, 50 mM and 80 mM, and gold nanoparticles at 5 µM, 25 µM, 50 µM and 100 µM. Ifosfamide is an alkylating antineoplastic agent widely used to treat different types of malignancies, including solid tumors and hematological malignancies [5,74]; in this study we applied it at five concentrations (1 μM, 5 μM, 25 μM, 50 μM, 75 μM).

Measurement of Cell Cytotoxicity (MTT)

The MTT assay was performed to evaluate the effects of metformin and gold nanoparticles on viability via the metabolic activity of MCF-7 and A549 cells. Cells at 70-80% confluency were seeded in 96-well plates at 1 × 10³ cells per well, in four replicates for each concentration. Twenty-four hours after seeding, the medium was removed and metformin and gold nanoparticles were applied at the determined concentrations. The MTT test was performed 24 h after treatment: the medium was removed, a 1:10 mixture (medium : Cell Viability Detection Kit 8 (CVDK-8)) was added to the cells, incubation was carried out at 37 °C and 5% CO_2 for three hours, and absorbance was measured at 450 nm. The results were calculated using Microsoft Office Excel.

RT-PCR Analysis

From the MTT analysis, the effective doses were determined as 5 mM for metformin, 5 μM for ifosfamide, and 10 μM for gold nanoparticles (Table 1), and these doses were used for the RT-PCR analyses. Cells at 70-80% confluency were counted on a Thoma slide and seeded at 1 × 10⁶ cells per well in 6-well plates. After 24 h, the substances were applied at the determined concentrations. At the end of 24 h and 48 h, the cells were detached with trypsin-EDTA (Gibco, Ref: 25200-056) and RNA was isolated for RT-PCR.
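The viability readout described above reduces to a simple normalization of the 450 nm absorbances against the untreated control, after which an IC50 can be estimated from a fitted dose-response curve. A minimal sketch with a four-parameter logistic fit follows; the function names and the example numbers are illustrative, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def viability(a_treated, a_control, a_blank=0.0):
    """Percent viability from MTT absorbances measured at 450 nm."""
    return 100.0 * (a_treated - a_blank) / (a_control - a_blank)

def four_pl(dose, top, bottom, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

# Illustrative dose-viability data (not the study's measurements)
dose = np.array([1.0, 5.0, 25.0, 50.0, 80.0])    # mM metformin
viab = np.array([92.0, 51.0, 34.0, 22.0, 15.0])  # % of untreated control

popt, _ = curve_fit(four_pl, dose, viab, p0=[100.0, 0.0, 5.0, 1.0])
print(f"estimated IC50 ~ {popt[2]:.1f} mM")
```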
RNA Isolation

Trypsin-detached cells were centrifuged at 5000 rpm for five minutes and the supernatant was removed. The pellet was washed with 1 mL of phosphate-buffered saline (PBS) and centrifuged again, after which the RNA isolation kit protocol (Ambion PureLink RNA Mini Kit, Cat. Nos. 12183018A, 12183025) was applied. RNA concentrations were measured at 260 nm; when concentrations were around 100 ng/µL and A260/A280 ratios were around 2, cDNA synthesis was started.

cDNA Synthesis

cDNA synthesis was performed according to the Maxime RT PreMix Kit (Cat No. 25082) protocol. 5 µL of RNA sample and 15 µL of RNase-free water (final volume 20 µL) were added to PCR tubes and placed in the PCR device. At the end of the run, the samples were stored at −20 °C until use.

RT-PCR

The effects of the applied substances at the gene level were determined by real-time PCR. The synthesized cDNA was added to a mixture of ddH_2O, master mix, and the reverse and forward primers of the gene under study, in a total volume of 20 µL. The tubes were spun down and placed in the RT-PCR device, and gene analyses were performed at the end of the run. The effects of metformin and gold nanoparticles on PI3K, AKT3, mTOR, Bax, Bcl-2, Hsp60, Hsp70 and TNF-α, genes involved in controlled cell death (apoptosis) and in the PI3K/Akt signaling pathway, were investigated by RT-PCR. The delta-delta Ct method (2^−ΔΔCt) was used to analyze gene expression.

Statistical Analysis

All data were analyzed with GraphPad Prism 8.01. The normal distribution of the data was assessed by the Shapiro-Wilk normality test. The MTT experiment, performed in four replicates, was analyzed in Microsoft Office Excel, examining concentration-dependent changes relative to the control group; the subsequent tests took these changes as a reference. The RT-PCR results were likewise calculated in Microsoft Office Excel, and the Kruskal-Wallis test was used to analyze the distribution of gene expression results across groups. p-values below 0.05 were considered statistically significant.

Anti-proliferative Effects of Metformin and Gold Nanoparticles in MCF-7 and A549 Cells

Different doses of ifosfamide (1 μM, 5 μM, 25 μM, 50 μM, 75 μM), metformin (5 mM, 25 mM, 50 mM and 80 mM) and gold nanoparticles (5 µM, 25 µM, 50 µM, 100 µM) were applied to MCF-7 and A549 cells, and the ability of metformin and gold nanoparticles to inhibit the proliferation of the two cancer cell lines was determined by MTT assay over 24 h. As shown in Fig. 2, metformin and gold nanoparticles inhibited the viability of MCF-7 and A549 cancer cells in a time- and dose-dependent manner. Viability was highest in the untreated control; treatment with ifosfamide, metformin and gold nanoparticles reduced viability, with 5 mM metformin and 10 µM gold nanoparticles identified as the effective concentrations. The IC50 values for MCF-7 and A549 cells are presented in Fig. 2. Since the 5 mM dose of metformin and 10 µM gold nanoparticles inhibited the proliferation of tumor cells, 5 mM metformin was used as the drug dose in the subsequent expression studies.
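The 2^−ΔΔCt calculation used for the RT-PCR analysis normalizes each target gene to a housekeeping reference gene and then to the control group. A minimal sketch follows; the reference gene is not named in the text, so the Ct values below are illustrative placeholders.

```python
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method.
    dCt  = Ct(target) - Ct(housekeeping reference)
    ddCt = dCt(treated) - dCt(control)"""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Illustrative Ct values (not the study's data): a target gene in treated
# vs control cells, each normalized to a housekeeping reference gene
print(fold_change(24.1, 17.0, 26.3, 17.1))  # > 1 means up-regulated
```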
Determination of the Effects of Metformin and Gold Nanoparticles on Gene Expression in MCF-7 and A549 Cells by RT-PCR

The PI3K, AKT3, mTOR, BCL2, Hsp60 and Hsp70 mRNA expression levels were decreased by the ifosfamide + metformin + AuNPs treatment in the MCF-7 cell line, while the expressions of Bax and TNF-α were increased by this treatment compared to the control (Figs. 3 and 4). In the A549 cell line, the PI3K, AKT3, mTOR, BCL2, Hsp60 and Hsp70 mRNA expression levels were decreased by the ifosfamide + AuNPs and ifosfamide + metformin + AuNPs treatments, while the expressions of Bax and TNF-α were increased by these treatments compared to the control (Figs. 5 and 6).

Discussion

To our knowledge, this investigation is the first in vitro report of the effects of metformin and Au nanoparticles against human MCF-7 and A549 cancer cells in the literature. Tumors are associated with decreased cell apoptosis and uncontrolled cell proliferation. Some chemotherapy drugs, including doxorubicin, cyclophosphamide, and paclitaxel, act as conventional cancer treatments by causing the tumor cells to die in an immune-mediated manner [36,82].

Metformin has been used extensively in research, including in vivo cancer animal models and several cancer cell lines [2,15,84]. First receiving attention in this context in 2005, metformin is an oral anti-diabetic medication that is more affordable than any other anticancer treatment now in use [17]. In case-control studies involving patients with breast cancers [27] and lung cancers [67], using metformin as an adjuvant to standard chemotherapy and radiotherapy has produced encouraging results. Metformin was reported to target cancer-initiating stem cells [40]. Metformin may have an impact on tumorigenesis both directly and indirectly through the systemic lowering of insulin levels [49]. How to successfully eradicate cancer cells while leaving healthy cells unharmed is one of the main issues facing cancer research. New medicines can be developed by analyzing the key molecular pathways underlying metformin's anticancer effects and the main types of cell death it mediates [79].

[Fig. 3: Gene expressions of mTOR, PI3K, AKT3, and Bax in MCF-7 cells]

In MCF-7 cells, metformin displayed an antiproliferative effect that depended on time and concentration; compared to 2.5, 5 and 10 mM of metformin, this impact was stronger at 20 mM [53]. Another study showed that metformin could effectively inhibit the proliferation of some breast cancer cells in a dose- and time-dependent manner [79]. Metformin therapy inhibited cell growth dose-dependently in the MCF-7 and MCF-7/713 cell lines; at a dose of 50 mM, MCF-7 cells were 57% and MCF-7/713 cells 50% less likely to proliferate compared to untreated controls [1]. Metformin has also been used at concentrations that do not affect the growth of non-transformed cells (0.1 or 0.3 mM), whereas it was previously utilized at significantly greater concentrations (usually 10-30 mM) in investigations on cancer cell lines [1]. In one study, the antiproliferative effect of metformin and its mechanism of action were examined in MCF-7 cancer cells exposed to 10 mM of the drug for 24, 48, and 72 h; metformin showed a time- and concentration-dependent antiproliferative impact in MCF-7 cells [53]. In the present study, 5 mM metformin and 10 µM Au nanoparticles were shown to be effective in suppressing the proliferation of MCF-7 and A549 cells across the treatment groups.
Many tumor types have been found to be inhibited by metformin, but its impact on non-small cell lung cancer (NSCLC) is still unclear. According to one study, metformin might prevent A549 and H1299 cells from proliferating [41]. A considerable reduction in cell proliferation and significant activation of apoptosis were seen in A549, RERF-LC-A1, IA-5, and WA-hT cells after exposure to metformin (1-20 mM) [66]. Metformin is predicted to exert anticancer effects by inhibiting the insulin and mTOR pathways [51]. According to previous studies, the relationship between metformin use and lung cancer survival is unclear [87]. It has been reported that the use of metformin alone or in combination with existing chemotherapy may be a good approach to managing lung cancer effectively [23]. One study examined the relationship between autophagy and apoptosis in the A549 lung cancer cell line using metformin (6 mM) and gedunin (12 µM), an inhibitor of Hsp90 [24]; according to its results, metformin and gedunin have cytotoxic effects against A549 lung cancer cells [24]. It has been reported that the malignant properties of A549 and H3122 cells can be inhibited by metformin in vitro [52], with IC50 values of 13.5 mM and 21.8 mM against A549 and H3122 cells, respectively. In the present study, the findings suggest that 5 mM metformin and 10 µM Au nanoparticles were effective in suppressing the proliferation of A549 cells across the treatment groups.

[Fig. 4: Gene expressions of BCL2, HSP70, HSP60, and TNF-α in MCF-7 cells]

In MCF-7 cells, metformin has been reported to decrease IRβ, Akt and ERK1/2 activation, phosphorylation of p70S6K and Bcl-2 protein expression, and to increase p-AMPK, FOXO3a, p27, Bax and cleaved caspase-3 [53]. According to molecular and cellular studies, metformin significantly increases p53 and Bax levels and decreases STAT3 and Bcl-2 [43]. Apoptosis was markedly accelerated by 10 mM metformin [73]. In this study, the ifosfamide + metformin + AuNPs treatment caused a significant increase in Bax mRNA expression in MCF-7 cells; in addition, Bcl-2 (BCL2) mRNA expression significantly decreased in the treated groups [88]. In A549 cells, there was an increase in mRNA expression of the Bax gene in the ifosfamide + AuNPs and ifosfamide + metformin + AuNPs groups, while BCL2 mRNA expression decreased.

One of the main mechanisms of action of metformin is the activation of adenosine monophosphate-activated protein kinase (AMPK). AMPK is associated with the PI3K/PTEN/AKT and MAPK/ERK pathways [34]. It has been suggested that simultaneously targeting AMPK using metformin and the PI3K/AKT/mTOR pathway using an mTOR inhibitor may become a new therapeutic approach [34]. One study shows that metformin inhibits EGF-induced EMT in MCF-7 cells, possibly via the PI3K/Akt/NF-κB signaling pathway [45]. Metformin, flavone and their co-treatment have been shown to have no effect on AKT1 expression in MDA-MB-231 and MCF-7 cells [85].
An increase in AKT3 expression has been detected at low frequencies in breast carcinomas, gliomas and hepatocellular carcinomas [35,54]. mTOR is known to be abnormally activated in cancers, as it plays an important role in regulating metabolism [39]. In this study, a statistically significant decrease was found in PI3K mRNA expression in the groups treated with ifosfamide + metformin and ifosfamide + metformin + AuNPs. The mRNA expression of the AKT3 gene was lower in the metformin + AuNPs and ifosfamide + metformin + AuNPs groups than in the control group. Also, mRNA expression of the mTOR gene was low in the ifosfamide + AuNPs and ifosfamide + metformin + AuNPs groups.

Combining metformin and celecoxib therapy may cause apoptosis in A549 cells by blocking the ERK and PI3K/AKT signaling pathways [11]. Celecoxib and metformin both prevented PI3K/AKT signaling, and AKT phosphorylation was almost completely eliminated by the combination therapy [11]. In this study, PI3K mRNA expression in A549 cells decreased significantly in the ifosfamide + AuNPs and ifosfamide + metformin + AuNPs groups compared to the control. AKT3 mRNA expression was significantly decreased in the ifosfamide + metformin, ifosfamide + AuNPs and ifosfamide + metformin + AuNPs groups. There was also a significant decrease in mTOR gene expression in the ifosfamide + AuNPs and ifosfamide + metformin + AuNPs groups.

Cancer cells produce heat shock proteins (Hsps) in response to thermal and other proteotoxic stresses [80]. In addition to thermal stress, HSPs also protect them from oxidative stress and from chemical, physical and other stresses [63]. HSP27, 60 and 70 play a crucial role in apoptotic processes at the mitochondrial level [9,10,19] and represent important targets for drug development. HSPs have been reported to be abnormally expressed in different types of cancer, including breast, colorectal, and lung
[37]. Poor clinical outcomes have been linked to particularly high levels of Hsp27, Hsp70, and Hsp90. Intracellular and cell-surface HSP70s are recognized as potential targets for the treatment of breast cancer [28]. Most mammalian cells contain large amounts of Hsp60, which is essential for protein folding and chaperoning (Frydman & Hartl, 1996). Compared with healthy breast tissues, the HSP60 mRNA level has been shown to be significantly increased in primary breast cancer tissues [16]. Elevations in circulating HSP70 have been reported in association with malignant transformation, including breast cancer [21,81]. One study emphasized that HSP70 could be used alongside other diagnostic tests for breast cancer and could be useful in demonstrating breast cancer risk [22]. We could not find any study showing the effect of metformin on the HSP60 and HSP70 genes in breast cancer. However, the effect of metformin on HSPs in other cancer cells has been investigated. Metformin has been shown to reduce the expression of Bcl-2 and of HSP27, HSP60 and HSP70 [50]. Metformin has been reported to increase NK cell cytotoxicity by regulating the mRNA and protein expression of MICA and HSP70 on the surface of human cervical cancer cells via the PI3K/Akt pathway [76]. Metformin has also been shown to reduce HSP70 [73]. In MCF-7 cells, a decrease in the mRNA expression of the HSP60 gene was observed in the groups treated with ifosfamide + AuNPs and ifosfamide + metformin + AuNPs, while a decrease in HSP70 mRNA expression was shown in the ifosfamide + metformin and ifosfamide + metformin + AuNPs groups.

HSP60 expression has also been associated with the onset of lung cancer [77]. In this study, mRNA expression of both the HSP60 and HSP70 genes in A549 cells was statistically significantly decreased in the ifosfamide + AuNPs and ifosfamide + metformin + AuNPs groups compared to the control.

The proinflammatory cytokines IL-6 and TNF-α have been shown to be significantly increased, enhancing the inflammatory cascade, in patients with metastatic breast cancer [20]. In MDA-MB-231 and MDA-MB-453 breast cancer cells treated with metformin, the expression of the IL-12 and TNF-α cytokines was increased [14]. A significant increase in the secretion of the TNF-α, IL-2 and IFN-γ cytokines has been reported in NSCLC with metformin [75]. In MCF-7 cells, TNF-α mRNA expression was statistically significantly higher in the groups treated with ifosfamide + AuNPs and ifosfamide + metformin + AuNPs. In A549 cells, there was a significant increase in TNF-α expression in the groups treated with ifosfamide + metformin, ifosfamide + AuNPs and ifosfamide + metformin + AuNPs.

Conclusion

In conclusion, the findings of this study showed that metformin and gold nanoparticles inhibited MCF-7 and A549 cell proliferation. 5 µM ifosfamide, a 5 mM dose of metformin and 10 µM gold nanoparticles cause cell death by inducing cytotoxicity, inflammation, and apoptosis. Our findings suggest that metformin and gold nanoparticles are promising chemotherapeutic candidates for treating human breast and lung cancer, although further investigation is required.
FT-IR, UV-Vis, TEM, and SEM were used to characterize the citrate-capped gold nanoparticles. The binding of citrate ions to the gold nanoparticles was confirmed by FT-IR analysis. The synthesized citrate-capped gold nanoparticles have characteristic peaks corresponding to the citrate group and the presence of water molecules at 3301 cm−1 (O-H), 1635 cm−1 (C=C, C=O), 1400 cm−1 (COO−), 1220 cm−1 (C-O), and 772 cm−1 (C-C) (Fig. 1A). The O-H band can be attributed to the presence of water molecules and to O-H stretching of citrate molecules in the sample, while the C=C, C=O, C-O and C-C bands confirm the presence of citrate molecules along with the citrate-capped gold nanoparticles [69,71; Park and Shumaker-Parry, 2014]. The presence of negatively charged citrate molecules on gold nanoparticles has critical functions. One is that it helps achieve chemical stability by lowering the surface energy of the highly active nanoparticles. Another is that the citrate molecules stabilize the nanoparticles, preventing their agglomeration and ensuring good dispersion, which is essential for their interaction with other molecules [83]. In the UV-Vis analysis, the obtained citrate-capped gold nanoparticle solution showed strong absorbance at approximately 520 nm (ruby-red color). This typical peak at 520 nm further confirmed the synthesis of citrate-capped gold nanoparticles with sizes less than 20 nm [7] (Fig. 1B). The TEM image of the synthesized citrate-capped gold nanoparticles is shown in Fig. 1C; their average size was 14 nm, and they have a spherical morphology. SEM images of the citrate-capped gold nanoparticles are shown in Fig. 1D. SEM analysis showed that, in addition to their size distribution, the synthesized citrate-capped gold nanoparticles are quite well dispersed, with an average size of 10-20 nm.

Fig. 1 The FT-IR spectrum of citrate-capped gold nanoparticles (A), UV spectrum of citrate-capped gold nanoparticles (B), transmission electron microscopy (TEM) image of citrate-capped gold nanoparticles (C), and scanning electron microscopy (SEM) images of citrate-capped gold nanoparticles (D)

Table 1 The dose and groups of treatment
Logistic regression for potential modeling

Regression or regression-like models are often employed in potential modeling, i.e., for the targeting of resources, either based on 2D map images or 3D geomodels, both in raster mode or based on spatial point processes. Recently, machine learning techniques such as artificial neural networks have also gained popularity in potential modeling. Using artificial neural networks, decent results in the prediction of the target event are obtained; however, insight into the problem, e.g., about the importance of specific covariables, is difficult to obtain. Logistic regression, on the other hand, has a well-understood statistical foundation and works with an explicit model from which knowledge can be gained about the underlying problem. However, establishing such an explicit model is rather difficult for real-world problems. We propose a model selection strategy for logistic regression which includes nonlinearities for improved classification results while preserving the interpretability of the results.

Logistic Regression

The logistic regression model is a special case of a generalized linear model. It consists of the linear predictor

η = β_0 + β_1 x_1 + . . . + β_m x_m

and a link function which links the conditional probability P(y|x) = µ(η) to the linear predictor. This link function is the logarithm of the odds, called the logit, given as

logit(µ) = log( µ / (1 − µ) ).

The MLE β̂ is obtained by minimizing the negative logarithm of the likelihood function, i.e.

β̂ = argmin_β − Σ_{i=1}^n [ y_i log µ(η_i) + (1 − y_i) log(1 − µ(η_i)) ].

This problem is solved using the Newton-Raphson algorithm, where each new iterate is obtained by solving a system of linear equations. Using the conjugate gradient method with an early stopping criterion, logistic regression methods can tackle large-scale applications as well [2].

The application of logistic regression in potential modeling has some specific challenges. First, one needs to deal with rare events. This means that one class occurs with a significantly lower frequency than the majority class. It was shown, e.g., in [4], that logistic regression underestimates the probabilities of the rare events because it tends to be biased towards the more frequent class, which in many practical applications is the less important class. This can be addressed through endogenous sampling, i.e., taking all positive events and a random sample of the negative events to obtain balanced training data. This makes some corrections necessary, such as using a weighted likelihood [5] and calculating the robust variance [6].

Model selection strategy

Since logistic regression is only a linear classifier, we provide a dictionary of nonlinearities in the data x. After choosing the appropriate nonlinearities, these are added to the linear predictor, i.e.

η = β_0 + β_1 x_1 + . . . + β_m x_m + β_{m+1} φ_1(x) + . . . + β_m̃ φ_{m̃−m}(x),

where the φ_j denote the chosen nonlinear transformations. This predictor is still linear in the parameters β_i, i = 1, . . ., m̃, where m̃ is the total number of variables, including the nonlinearities. To obtain such a model, a selection of variables needs to be carried out. The proposed model selection is performed in two main steps. The first step is a coarse selection using the p-value of the Wald test, which tests for the significance of a variable. Because of known problems with the Wald test for rare events and large samples, e.g. [7], many unimportant variables will be left in the model. Therefore a second selection step is needed, which uses the Bayes information criterion (BIC).
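As an illustration of the coarse Wald screening step, the following minimal sketch fits a logistic regression with a small dictionary of nonlinearities on synthetic stand-in data and flags variables by their Wald p-values. The data, the choice of nonlinearities and the 0.05 threshold are assumptions of this sketch; statsmodels is used for convenience (its default Logit solver is a Newton-type method), and the rare-events corrections discussed above (weighted likelihood, robust variance) are omitted.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic stand-in data: two covariables, a rare positive class, and a
# true model containing one nonlinearity (x2 squared).
n = 5000
x1, x2 = rng.normal(size=(2, n))
eta = -3.0 + 1.2 * x1 + 0.8 * x2**2
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-eta))).astype(float)

# Dictionary of candidate nonlinearities appended to the linear predictor.
names = ["x1", "x2", "x1^2", "x2^2", "x1*x2"]
X = sm.add_constant(np.column_stack([x1, x2, x1**2, x2**2, x1 * x2]))

res = sm.Logit(y, X).fit(disp=0)  # MLE via a Newton-type algorithm

# Coarse screening: flag variables with large Wald p-values for removal.
for name, p in zip(["const"] + names, res.pvalues):
    flag = "(candidate for removal)" if p > 0.05 else ""
    print(f"{name:6s}  Wald p = {p:.3g}  {flag}")
```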
The BIC for a given model is calculated as

BIC = k log(n) − 2 log(L),

where n and k denote the number of data points and the number of variables, respectively, and L is the value of the likelihood function at the MLE β̂. Because the model should be able to achieve good results on unseen data, the BIC is calculated using a validation dataset. After calculating the BIC values of all models with one variable dropped, we sort the variables by importance. Then we apply the following method to discard more than one variable at a time: we calculate the BIC for the models where one-half, one-quarter and one-eighth of the most unimportant variables are dropped. The model with the smallest BIC is used as the starting model for the next iteration; a code sketch of this selection loop is given at the end of the section.

Results

Experiments on synthetic data, where the true model is known, show that our suggested method is able to detect the true model if possible, or approximates it using the given nonlinearities if it cannot be fully recovered. In both cases, it improves the performance of a simple logistic regression and comes close to the prediction accuracy of a neural network while remaining interpretable. Due to space limitations, the results on synthetic data are not presented here. Our experiments with real-world data show a similar behavior. Results for the datasets described in Table 1 are presented in Figure 1 and Figure 2. The model selection improves on simple logistic regression in both cases, and for the Cod-RNA dataset it even gives the same prediction result as a neural network with 20 hidden layers and the logistic function as activation function.
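A minimal sketch of the batch-wise BIC elimination described above follows. It assumes a held-out validation set, as in the text; function and variable names are ad hoc, and scikit-learn (version 1.2 or later, for penalty=None) is used only as a convenient unpenalized logistic regression fitter.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def validation_bic(X_tr, y_tr, X_val, y_val, cols):
    """Fit on the training columns `cols`, then evaluate
    BIC = k log(n) - 2 log(L) with the likelihood taken on the validation set."""
    model = LogisticRegression(penalty=None, max_iter=1000).fit(X_tr[:, cols], y_tr)
    p = model.predict_proba(X_val[:, cols])[:, 1]
    eps = 1e-12  # guard against log(0)
    loglik = np.sum(y_val * np.log(p + eps) + (1 - y_val) * np.log(1 - p + eps))
    k = len(cols) + 1  # +1 for the intercept
    return k * np.log(len(y_val)) - 2.0 * loglik

def backward_select(X_tr, y_tr, X_val, y_val):
    cols = list(range(X_tr.shape[1]))
    best = validation_bic(X_tr, y_tr, X_val, y_val, cols)
    while len(cols) > 1:
        # Rank variables: dropping an unimportant variable lowers the BIC most.
        ranked = [d for _, d in sorted(
            (validation_bic(X_tr, y_tr, X_val, y_val, [c for c in cols if c != d]), d)
            for d in cols)]
        # Try discarding one-half, one-quarter and one-eighth of the most
        # unimportant variables in one step; keep the candidate with lowest BIC.
        candidates = []
        for frac in (2, 4, 8):
            drop = set(ranked[:max(1, len(cols) // frac)])
            keep = [c for c in cols if c not in drop]
            candidates.append((validation_bic(X_tr, y_tr, X_val, y_val, keep), keep))
        cand_bic, cand_cols = min(candidates, key=lambda t: t[0])
        if cand_bic >= best:
            break  # no batch drop improves the validation BIC
        best, cols = cand_bic, cand_cols
    return cols, best
```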
Anabaenopeptins from Cyanobacteria in Freshwater Bodies of Greece

Cyanobacteria are photosynthetic microorganisms that are able to produce a large number of secondary metabolites. In freshwaters, under favorable conditions, they can rapidly multiply, forming blooms, and can release their toxic/bioactive metabolites into the water. Among them, anabaenopeptins (APs) are a less studied class of cyclic bioactive cyanopeptides. The occurrence and structural variety of APs in cyanobacterial blooms and cultured strains from Greek freshwaters were investigated. Cyanobacterial extracts were analyzed with LC-qTRAP MS/MS using information-dependent acquisition in enhanced ion product mode in order to obtain the fragmentation mass spectra of APs. Thirteen APs were detected, and their possible structures were annotated based on the elucidation of fragmentation spectra, including three novel ones. APs were present in the majority of bloom samples (91%) collected from nine Greek lakes during different time periods. A large variety of APs was observed, with up to eight congeners co-occurring in the same sample. AP F (87%), Oscillamide Y (87%) and AP B (65%) were the most frequently detected congeners. Thirty cyanobacterial strain cultures were also analyzed; APs were only detected in one strain (Microcystis ichtyoblabe). The results contribute to a better understanding of APs produced by freshwater cyanobacteria and expand the range of structurally characterized APs.

Structural characterization of cyanobacterial metabolites, including APs, is an emerging issue due to the great diversity of molecules, their bioactivities and their possible effects on ecosystems and on human health. Nuclear magnetic resonance (NMR), after isolation of the compound, usually from a cyanobacterial strain culture, has been used for the structural elucidation of APs, e.g., [2,26,27,35]. Mass spectrometric (MS) techniques, such as Matrix-Assisted Laser Desorption/Ionization Time-of-Flight (MALDI-TOF) [3,22,95] or MS coupled with liquid chromatography, such as liquid chromatography-hybrid triple quadrupole/linear ion trap mass spectrometry (LC-qTRAP) [8,23,25,40,45,48] or liquid chromatography-hybrid triple quadrupole/Time-of-Flight (LC-qTOF) [24,44], are nowadays widely used, as they can be applied directly to extracts of field samples or strain cultures. A significant indicator in the AP fragmentation spectrum is the characteristic fragment ion of lysine (Lys) at m/z 84 [3]. The typical fragmentation pattern of APs includes the loss of the side-chain amino acid and of the CO of the side chain, resulting in the peptide ring ion [3,10,44].

Information regarding the presence of APs in Greek freshwater bodies is limited; only three monitoring studies have been conducted so far, targeting no more than three AP congeners [5,98,99]. In the present study, an untargeted analysis approach utilizing an LC-qTRAP method was applied to investigate AP presence in cyanobacteria from Greece. The main aims were (i) to report, for the first time, the structural diversity of APs in cyanobacterial bloom samples collected from lakes of Greece, (ii) to assess the ability of Greek freshwater cyanobacterial strains to produce APs and (iii) to identify possible new AP structures, contributing to a better understanding of the existing variety of these hexapeptide cyanobacterial metabolites.

Structural Elucidation of Anabaenopeptins

Thirteen APs were detected in the samples of cyanobacteria from Greek freshwater bodies.
The elucidation of the proposed AP structures was based on their precursor ions from full scan (MS1) spectra (Table S1) and fragmentation (MS2) spectra, enabling annotation of the compounds [100]. Among them, the possible structures of three AP congeners are proposed for the first time in the frame of the present study. The amino acid sequences of the detected APs, with their precursor ions [M + H]+ and retention times (tR), are provided in Table 1. The proposed structures, extracted ion chromatograms (EIC), full scan spectra (MS1) and fragmentation mass spectra (MS2) of the three newly annotated APs are shown in Figures 2-4, and the elucidation of their spectra is provided in the relevant captions. The detection of APs was based on the diagnostic fragment ion of lysine (Lys) at m/z 84 [3], which was present in the fragmentation spectra of all APs (Figures 2-4 and S2-S11). Structural elucidation of APs was based on fragmentation patterns described in previous studies [3,9,10,23,25,44,48] and on the immonium ions of the common amino acids (a schematic neutral-loss matching sketch is given further below). In Table 1, both leucine (Leu) and isoleucine (Ile) are provided in the proposed AP sequences, as these amino acids are isobaric compounds with the same chemical formula (C6H13NO2) and could not be distinguished. Generally, one of the intense ions that is always present in the fragmentation spectrum of APs is the ion formed by the loss of the side-chain amino acid, i.e., [M + H − X1]+. Fragment ions [M + H − X3]+ and [M + H − X4]+ are also commonly found in AP spectra. Furthermore, among the most intense fragment ions of APs is the five-peptide ring ion generated after the loss of the side chain, i.e., [Lys-X3-X4-MeX5-X6 + H]+.

Anabaenopeptins in Cyanobacterial Blooms from Greek Lakes

Samples were collected from nine different lakes of Greece during cyanobacterial bloom events, which were mainly dominated by Microcystis and Dolichospermum species, and were analyzed for the presence of APs. The detected AP congeners and the dominant cyanobacterial species of each sample are presented in Figure S1, and details are provided in Table 2. In total, thirteen different AP congeners were detected; their amino acid sequences are shown in Table 1. The presence of APs was confirmed in the majority of the examined samples (91%). In addition, a large within-sample structural diversity of APs was observed, as at least six AP congeners were detected in each of 11 samples (48% of total samples). Two samples contained only one AP congener. The largest diversity of APs was observed in three samples collected from lakes Kastoria (5 October 1995), Kerkini (3 August 1999) and Zazari (5 August 1999); eight APs were detected in each of them. These samples were dominated by Microcystis species (Table 2). A large diversity of APs was also observed in samples collected from lakes Pamvotida, Mikri Prespa, and Vistonida. APs were not detected in two samples collected from lakes Marathonas and Karla, although cyanobacterial species that possibly produce APs were present in both lakes (i.e., Microcystis flos-aquae at Lake Marathonas and Planktothrix cf. agardhii at Lake Karla). The most frequently detected APs in Greek freshwater samples were AP F (87% of samples) and Osc Y (87%), followed by AP B (65%) and AP 886 (57%). AP A and AP 872 were also common congeners among the samples. AP 820 and AP KB906 were detected in one sample from Lake Kastoria and Lake Zazari, respectively.
AP 894, whose structure is proposed for the first time in the present study, was detected in two samples collected from lakes Kerkini and Zazari. The newly proposed APs 837 and 851 were detected in one sample collected from Lake Mikri Prespa (4 November 2014). In two previous monitoring studies targeting AP A and AP B by HPLC-PDA, in which cyanobacterial bloom samples were collected from up to 36 freshwater bodies of Greece, the presence of APs in lakes Zazari (AP A), Kastoria (AP A and AP B) and Pamvotis (AP A and AP B) was reported [5,98]. In the current study, both AP A and AP B were detected by mass spectrometry in lakes Kastoria, Pamvotis, Zazari, Kerkini, Mikri Prespa and Vistonida, along with several other AP congeners. According to a three-year monitoring study of the Greek Lake Vegoritis targeting 25 cyanobacterial toxins and peptides, AP B and AP F were the most frequently detected cyanobacterial metabolites; they were present in almost all the samples, followed by Osc Y [99]. These results are in agreement with the current study, as AP F, Osc Y and AP B were the most commonly occurring AP congeners in the freshwaters of Greece.

The occurrence of cyanobacterial metabolites, including APs, in freshwater blooms has been investigated in a number of past studies. Analysis by MALDI-TOF MS showed the presence of AP B and AP F in samples collected from lakes in Italy [80,81,102], Germany [3], Spain [79] and Brazil [89]. In samples collected from a waterbody of Poland and analyzed by LC-qTRAP MS/MS, the most abundant AP congener was AP B, followed by AP A, AP F, AP G, Osc Y, AP D and AP 915 [75]. The presence of AP A, AP B, AP F and Osc Y was also confirmed by LC-HRMS in samples collected from the freshwaters of Spain [6] and the Czech Republic [77], while AP B, AP A and Osc Y were identified in samples from the United Kingdom [87]. Based on the results of this study and of previous reports, it appears that AP B and AP F, followed by AP A and Osc Y, are the most frequently reported APs not only in Greece but also on the European continent.

AP F, Osc Y, AP B and AP A are protease inhibitors that possess activity against carboxypeptidase A and protein phosphatase 1 (PP1) [9,30,64]. AP B and AP F are also highly selective TAFIa inhibitors [69] and elastase inhibitors, with no activity towards chymotrypsin and trypsin [66], while Osc Y has shown inhibitory activity against chymotrypsin [27]. Additionally, AP A, AP B and AP F have had toxic effects in the nematode Caenorhabditis elegans [70]. Even though AP toxicity effects on animal models and microorganisms have been reported, there remains a lack of data regarding their toxicity and impact on human health [12]. APs are the class of cyanopeptides with the third-highest structural diversity, after microcystins and cyanopeptolins [103]. In the present investigation, thirteen structures of APs from the cyanobacteria of Greek freshwaters were detected, with a rather low diversity of variable amino acids (Figure 5). In particular, all the moieties composing the ring structures were represented by only two different amino acids per site. Even though the diversity was limited, it is interesting that the two amino acids determined at each position are among the most commonly found in known AP congeners (Figure 1).
Specifically, the currently known 42 APs from freshwater environments mainly consist of Val (45%) and Ile (29%) in position X3, Hty (64%) and Hph (29%) in position X4, MeAla (50%) and MeHty (38%) in position X5 and Phe (45%) and Ile (24%) in position X6 [12]. The 13 APs identified in Greek freshwaters consist of Val (38%) and Ile (62%) in position X3, Hty (69%) and Hph (31%) in position X4, MeAla (85%) and MeHty (15%) in position X5 and Phe (85%) and Ile (15%) in position X6 (Figure 5). A comparison of findings strongly supports that the variable amino acids of the AP rings determined during this study are consistent with the most common ones of the known APs from freshwaters. A higher diversity of amino acids was observed in the side chain. Arg (31%) was the most frequent, followed by Tyr (23%), MeHty (23%), OEtGlu (15%) and Lys (8%). Arg and Tyr are present in the side chains of commonly found AP congeners worldwide (i.e., AP B and AP F have Arg; AP A and Osc Y have Tyr). Contrarily, the presence of MeHty as a side chain has been reported for only seven AP congeners that were detected in cyanobacteria from Lake Balaton, Hungary [23]. The proposed side chains of the three novel APs consist of infrequent amino acids (i.e., Lys and OEtGlu). Lys (AP 894) has been determined in the side chain of six known congeners (Figure 1, Table S1), while OEtGlu (AP 837 and AP 851) is proposed for the first time. A previous study reported the presence of OMeGlu occupying the side chain amino acid position in the AP MM823 [65]. In fact, AP MM823 and the newly proposed AP 837 also share the same five-peptide ring structure. Although methylated amino acids frequently occur in AP structures, ethylated ones have also been reported [10,25], indicating the metabolomic potential of cyanobacteria.
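To close the structural discussion, the neutral-loss bookkeeping used in AP annotation (precursor ion minus ring ion, matched against side-chain candidates) can be written compactly. The sketch below is schematic only: the residue masses are standard monoisotopic values, but the set of loss hypotheses and the mass tolerance are assumptions rather than the exact fragmentation chemistry.

```python
# Standard monoisotopic residue masses (Da) for some side-chain amino acids
# seen above; the candidate list is deliberately short and illustrative.
RESIDUE = {"Arg": 156.1011, "Tyr": 163.0633, "Lys": 128.0949, "Phe": 147.0684}
CO, H2O = 27.9949, 18.0106

def annotate_side_chain(precursor_mz: float, ring_mz: float, tol: float = 0.02):
    """Match the neutral loss ([M+H]+ minus ring ion) against candidate
    side-chain amino acids under several loss hypotheses, since the exact
    loss (residue alone, residue + ureido CO, or free amino acid + CO)
    is not modeled here."""
    loss = precursor_mz - ring_mz
    hypotheses = {"residue": 0.0, "residue + CO": CO, "amino acid + CO": CO + H2O}
    return [(aa, label)
            for aa, res in RESIDUE.items()
            for label, extra in hypotheses.items()
            if abs(loss - (res + extra)) <= tol]
```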
Anabaenopeptins in Cyanobacterial Strains Isolated from Greek Freshwaters

Thirty cyanobacterial strains from the TAU-MAC culture collection [104], isolated from the freshwaters of Greece, were analyzed in order to evaluate their ability to produce APs (Table S2): fourteen strains of Microcystis, five of Nostoc, three of Jaaginema, two of Synechococcus, and one each from the genera Anabaena, Calothrix, Chlorogloeopsis, Desmonostoc, Limnothrix and Nodosilinea. APs were detected in only one strain extract out of the thirty examined. In particular, AP A and Osc Y were identified in the extract of Microcystis ichtyoblabe TAU-MAC 0510. Although AP F and AP B, along with Osc Y, were the most frequently detected APs in cyanobacterial bloom samples in this study, they were not detected in any of the examined cyanobacterial strains.

The diversity of APs in the isolated strains was limited compared to that of the bloom extracts. This finding is in agreement with the results of previous studies: it was reported that Microcystis strains have a less diverse peptide pattern compared to that of the entire population of a bloom sample from a German lake [19], and that Planktothrix agardhii samples from a Polish freshwater reservoir contained up to seven APs while the two strains isolated from the reservoir contained only one AP [75]. This was rather expected, because the diversity of APs in field bloom samples reflects the high diversity of the chemotypes present in water bodies and therefore cannot be compared with the diversity of the compounds in isolated strains [19,75]. The results of previous chemodiversity studies of freshwater cyanobacterial strains also indicate the limited presence of AP congeners in such samples. Welker et al. reported the presence of APs in only 9% of 850 examined Microcystis colonies, with five AP structural variants in total [22], while in another study 165 Microcystis colonies were examined and only up to four APs were detected in 21% of the analyzed samples [20]. Martins et al. have also reported a limited presence of APs in Microcystis aeruginosa strains, where one to three APs were detected in 30% of the examined strains [38]. Furthermore, in an investigation of 18 Planktothrix clonal strains, APs were present in 11 of them, with one, two and three APs present in seven, three and one strain, respectively [13]. The limited presence of APs in cyanobacterial strains may also be related to the evidence that cyanobacterial strains can lose the ability to produce cyanopeptides under laboratory conditions [105].

In a previous chemodiversity study including 24 Microcystis strains isolated from the same freshwater blooms or from different populations in various geographical areas (i.e., Netherlands, Scotland, France, Senegal, Burkina Faso), it was found that AP A, AP B, AP F and Osc Y were the most commonly detected AP congeners and were mainly produced by Microcystis aeruginosa strains, while none of the examined Microcystis wesenbergii/M. viridis strains produced APs. A comparison of the specific chemical footprints of the examined strains showed that the metabolite content was influenced globally by microcystin production rather than by the sampling locality of origin [106]. In another study, it was concluded that AP B and AP E/F were among the principal cyanopeptides detected in 165 Microcystis sp. colonies isolated from German lakes and that APs were produced mostly by Microcystis ichthyoblabe colonies rather than by Microcystis aeruginosa [20].
According to Fastner et al., AP B, AP F and Osc Y were the most prominent APs in Microcystis ichthyoblabe colonies isolated from a German lake, followed by AP I and AP A, while APs were rarely detected in the Microcystis aeruginosa colonies and not detected at all in Microcystis wesenbergii colonies [19]. A common conclusion of the above studies was that Microcystis aeruginosa colonies predominately produced microcystins, in contrast to Microcystis ichthyoblabe colonies, which mainly produced APs rather than microcystins [19,20]. This is in agreement with the results of the present study, where one strain belonging to the cyanobacterial species Microcystis ichthyoblabe was found to be positive for APs while strains belonging to Microcystis aeruginosa and Microcystis viridis were negative for APs (Table S2). In general, AP A, AP B, AP F and Osc Y are the most commonly detected APs both in Microcystis and Planktothrix strains isolated from several water bodies of European countries, such as Austria [34], the Czech Republic [22], Finland [14], Germany [13,19,20], Norway [74], Portugal [37] and Switzerland [31]. The current study constitutes the first investigation into AP presence in several cyanobacterial strains isolated from Greek freshwaters.

Conclusions

The structural diversity of APs from bloom samples and cultured cyanobacterial strains of Greek freshwaters was investigated for the first time, utilizing LC-qTRAP MS/MS in IDA and EIP modes in order to structurally elucidate APs from their fragmentation spectra. Overall, thirteen APs were annotated, with three of these being reported for the first time (AP 837, AP 851 and AP 894). A variety of APs were found to occur in 21 out of 23 samples from cyanobacterial blooms from seven out of nine lakes, mainly dominated by Microcystis and Dolichospermum species. The most frequently occurring APs in bloom samples were AP F and Osc Y, followed by AP B, AP 886 and AP A. On the other hand, in thirty samples of cultured cyanobacterial strains isolated from the freshwater bodies of Greece, APs (AP A and Osc Y) were only found in Microcystis ichtyoblabe TAU-MAC 0510. The results of this study are in general agreement with previous studies on the occurrence of APs in European freshwater bodies and contribute to the expansion of the range of known AP congeners by introducing three new AP structures and their mass fragmentation spectra. Considering that APs are a class of cyanobacterial bioactive metabolites that occur naturally in water bodies with high frequency and possibly in significant amounts, the results of this study highlight the need for further assessment of their environmental effects and impacts.

Cyanobacterial Bloom Samples

Samples were collected from nine Greek lakes (Amvrakia, Kastoria, Pamvotida, Kerkini, Zazari, Mikri Prespa, Vistonida, Karla, Marathonas) during episodes of cyanobacterial bloom (Table 2). General characteristics and locations of the freshwater bodies are provided in previous studies [98,107,108]. Water samples (100-1500 mL) were collected in airtight polyethylene bottles from the surface layer (0-35 cm) at the margins of the lakes, where accumulation of cyanobacteria had been observed, from May to October in 1995, 1999, 2000, 2010, 2014 and 2015, as previously described [5,98]. Samples were filtered through Whatman GF/C filters (Millipore, Cork, Ireland), lyophilized and stored at −20 °C until analysis.
The cyanobacterial biomass of the samples ranged from 10 to 1000 mg/L. Dominant cyanobacterial species were characterized by microscopic analysis, as previously reported [5,98,109].

Source and Culture Conditions of Cyanobacterial Strains

Thirty cyanobacterial strains isolated from Greek freshwaters from 1999 to 2010 [109] were identified and provided by the Thessaloniki Aristotle University Microalgae and Cyanobacteria (TAU-MAC) Culture Collection [104]. Strains were planktic or benthic; details of their origin and isolation are provided in [109]. The cyanobacterial strains, belonging to the Chroococcales, Synechococcales and Nostocales based on polyphasic taxonomy, were classified into 10 genera (Anabaena, Microcystis, Nostoc, Synechococcus, Limnothrix, Calothrix, Nodosilinea, Desmonostoc, Chlorogloeopsis and Jaaginema) and 16 taxa, as listed in Table S2 [110]. Cyanobacterial strain cultures were grown in BG11 medium with or without nitrogen (BG110 for the nitrogen-fixing strains, see Table S2) and shaken manually once per day. At the time of harvest (between days 20 and 35, depending on the strain, see Table S2), the whole liquid culture (250 mL) was centrifuged, and the cyanobacterial cells were collected, lyophilized and stored at −20 °C until analysis. Chlorophyll-a was extracted from 5 mL of wet biomass with 95% (v/v) acetone solution and spectrophotometrically quantified, as outlined in APHA (2005) [111]. The chlorophyll-a concentration of the strains at the time of collection (as an estimate of their biomass) ranged from 6.21 to 6.77 mg/L.

Sample Preparation and LC-MS/MS Analysis

Two different sample types, i.e., cyanobacterial blooms and cyanobacterial strain cultures, were analyzed, and the same amount of each sample type was extracted and analyzed. Lyophilized biomass (~10 mg) of each sample was extracted with 1.5 mL of 75% methanol:25% water, assisted by vortexing and sonication in an ice bath for 15 min. After centrifugation (10,000 rpm, 15 min), the supernatants were collected and further centrifuged (10,000 rpm, 5 min) prior to LC-MS/MS analysis. Untargeted analysis was carried out with an Agilent 1200 liquid chromatography apparatus (Agilent Technologies, Waldbronn, Germany) coupled with a hybrid triple quadrupole/linear ion trap mass spectrometer (QTRAP 5500, Applied Biosystems, Sciex; Concorde, ON, Canada), according to Mazur-Marzec et al., 2013 [44]. Chromatographic separation was achieved with a reversed-phase column (Zorbax Eclipse XDB-C18, 4.6 × 150 mm, 5 µm, Agilent Technologies, Santa Clara, CA, USA) applying gradient elution. Mobile phases consisted of (A) acetonitrile and (B) 5% acetonitrile in MilliQ water, both containing 0.1% formic acid; the flow rate was 0.6 mL min−1 and the injection volume was 5 µL. Ionization was performed with an electrospray (ESI) source in positive mode. For MS detection, information-dependent acquisition (IDA) mode and enhanced ion product (EIP) mode were applied. In IDA mode, a full scan from 500 to 1200 Da was acquired for detection of the compounds. EIP mode was triggered when the signal of an ion was above a threshold of 500,000 cps; the ions were fragmented in the collision cell (Q2), and fragmentation spectra were recorded from 50 to 1000 Da with a scan speed of 2000 Da s−1, a collision energy (CE) of 60 V and a collision energy spread (CES) of 20 V. Analyst QS 1.5.1 software was used for data acquisition and processing. The obtained fragmentation spectra were examined in order to elucidate the structures of the occurring APs.
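For quick reference, the acquisition parameters just described can be collected in a single configuration block. This is only a reading aid: the key names below are ad hoc, not fields of the Analyst acquisition software; the values are those quoted in the text.

```python
# LC-MS/MS settings as stated above; key names are ad hoc.
LC_MS_SETTINGS = {
    "column": "Zorbax Eclipse XDB-C18, 4.6 x 150 mm, 5 um",
    "mobile_phase_A": "acetonitrile + 0.1% formic acid",
    "mobile_phase_B": "5% acetonitrile in MilliQ water + 0.1% formic acid",
    "flow_rate_mL_per_min": 0.6,
    "injection_volume_uL": 5,
    "ionization": "ESI, positive mode",
    "ida_survey_scan_Da": (500, 1200),
    "eip_trigger_threshold_cps": 500_000,
    "eip_fragment_scan_Da": (50, 1000),
    "scan_speed_Da_per_s": 2000,
    "collision_energy_V": 60,
    "collision_energy_spread_V": 20,
}
```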
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/toxins14010004/s1: Table S1: List of anabaenopeptins reported in the literature and their amino acid sequence; Table S2: List of the cyanobacterial strains from Greek freshwaters examined for their ability to produce APs; Figure S1.
Circuits that encode and guide alcohol-associated preference

A powerful feature of adaptive memory is its inherent flexibility. Alcohol and other addictive substances can remold neural circuits important for memory so as to reduce this flexibility. However, the mechanism through which pertinent circuits are selected and shaped remains unclear. We show that the circuits required for alcohol-associated preference shift from population-level dopaminergic activation to select dopamine neurons that predict behavioral choice in Drosophila melanogaster. During memory expression, subsets of dopamine neurons directly and indirectly modulate the activity of interconnected glutamatergic and cholinergic mushroom body output neurons (MBONs). Transsynaptic tracing of neurons important for memory expression revealed a convergent center of memory consolidation within the mushroom body (MB) implicated in arousal, and a structure outside the MB implicated in the integration of naïve and learned responses. These findings provide a circuit framework through which dopamine neuronal activation shifts from reward delivery to cue onset, and provide insight into the maladaptive nature of memory.

Introduction

An organism's behavior is guided by memories of past experiences and their associated positive or negative outcomes. Long-term memory retention requires the strengthening of labile memory traces so that they are available for future retrieval. However, successful associations are also dynamic and malleable, providing opportunities for updating associations based on new information. Thus, in order for organisms to adapt to their environment, they must find a balance between the persistence and flexibility of memories (Richards and Frankland, 2017). In substance use disorder (SUD), the balance between memory persistence and flexibility is often absent or difficult to achieve (Font and Cunningham, 2012; Torregrossa and Taylor, 2013; Hitchcock et al., 2015; American Psychiatric Association, 2013). Alcohol similarly disrupts memory systems, resulting in enduring preferences, attentional bias for associated cues, and habitual behaviors (Fadardi et al., 2016; Field and Cox, 2008; Everitt and Robbins, 2005; Corbit et al., 2012; Gerdeman et al., 2003; Yin, 2008; Hyman et al., 2006; Robinson and Berridge, 2003; Goodman and Packard, 2016; White, 1996). In alcohol use disorder (AUD), preference and cravings for alcohol persist in the face of aversive consequences, leading to maladaptive drug-seeking behaviors and ultimately a devastating economic and social impact on individuals, communities, and society as a whole (WHO, 2018). Understanding the circuit mechanisms that underlie the encoding and expression of alcohol-associated memories is critical to understanding why these memories are resistant to change. A significant effort has been devoted to identifying and investigating circuitry changes as a consequence of alcohol (Lovinger and Alvarez, 2017; Corbit and Janak, 2016; Corbit et al., 2012; Keiflin and Janak, 2015; Dong et al., 2017; Stuber et al., 2010; Volkow and Morales, 2015; Volkow et al., 2013). The neuronal, genetic, and physiological diversity that exists within the mammalian brain, however, has made this task challenging (Morales and Margolis, 2017). Drosophila melanogaster is a powerful model organism with which to address these challenges because of its lower complexity and the availability of neurogenetic tools that permit dissection of memory circuits with exact temporal and spatial resolution.
Further, the neural circuits underlying the Drosophila reward response are remarkably similar to those of mammals (Scaplen and Kaun, 2016). Drosophila form persistent appetitive memories for the pharmacological properties of alcohol that last up to 7 days post acquisition and impel flies to walk over a 120 V electric shock in the presence of associated cues (Kaun et al., 2011; Nunez et al., 2018). This suggests that Drosophila and mammalian alcohol-associated memories are similarly inflexible in the face of aversive consequences. We sought to identify the circuits important for alcohol-associated memories using a multipronged approach combining behavior, thermogenetics, in vivo calcium imaging, and transsynaptic tracing. We show that the circuits required for the formation of alcohol preference shift from population-level dopaminergic encoding to two microcircuits comprising interconnected dopaminergic, glutamatergic, and cholinergic neurons. Circuits required for the expression of alcohol-associated memories converge onto a mushroom body output neuron (MBON) that regulates consolidation and onto the fan-shaped body (FSB), a higher-order brain center implicated in arousal and in modulating behavioral responses (Donlea et al., 2018; Pimentel et al., 2016; Troup et al., 2018; Qian et al., 2017; Weir and Dickinson, 2015; Weir et al., 2014; Hu et al., 2018; Liu et al., 2006). Our results provide an in vivo circuit framework for how drugs of abuse temporally regulate the acquisition and expression of sensory memories, which ultimately results in a shift in behavioral response from malleable to inflexible.

Results

Dopamine neurons innervating the mushroom body are required for alcohol reward associations

Dopamine has a long-standing role in addiction and a defined role in reward-related behavioral learning that spans species (Wanat et al., 2009; Yoshimoto et al., 1992; Hyman et al., 2006; Robbins and Everitt, 2002; Torregrossa et al., 2011; Kaun et al., 2011; Scaplen and Kaun, 2016). In Drosophila, the establishment of alcohol-associated preference requires a central brain structure called the mushroom body (MB) and dopamine neurons (DANs) (Kaun et al., 2011). It is unclear, however, which population of DANs is necessary for alcohol-associated preference. A discrete population of protocerebral anterior medial (PAM) DANs that innervate the MB has an identified role in detecting and processing natural rewards (Liu et al., 2012; Yamagata et al., 2015; Huetteroth et al., 2015; Lin et al., 2014). PAM neurons are required for the acquisition of sucrose and water reward memories and are activated by sucrose and water administration (Harris et al., 2015; Liu et al., 2012; Lin et al., 2014), and their artificial activation is sufficient to induce reward memories (Burke et al., 2012; Yamagata et al., 2015). Thus, we first tested whether PAM neurons were also required for alcohol-associated preference (Figure 1A). For selective manipulation of PAM neurons, we expressed the dominant-negative, temperature-sensitive shibire (shi^ts) using R58E02-GAL4 (Liu et al., 2012). To establish temporal requirements, we temporarily and reversibly inactivated neurotransmission by raising the temperature to restrictive levels (30 °C) during memory acquisition, the overnight consolidation period, or memory retrieval. Acquisition was defined as the time during which an odor was presented in isolation (unpaired odor) for 10 min, followed by a second odor that was paired with an intoxicating dose of vaporized ethanol (paired odor + ethanol) for an additional 10 min.
During acquisition, reciprocally trained flies received three of these spaced training sessions. Post-acquisition, flies were given a choice between the odor that was previously presented with an intoxicating dose of ethanol and the odor that was presented in isolation (Figure 1A). Retrieval was measured in a Y-maze 24 hr post acquisition and defined as the time during which the flies chose between the previously presented odors. Inactivating neurotransmission in PAM DANs during acquisition or retrieval, but not during the overnight consolidation, significantly reduced preference for cues associated with ethanol (Figure 1B).

[Figure 1 legend, panels A-B: PAM DANs are necessary for encoding alcohol-associated preference. (A) Schematic illustrating the odor conditioning preference paradigm. Vials of 30 flies are presented with three sessions of 10 min of an unpaired odor, followed by 10 min of a paired odor plus intoxicating vaporized ethanol. To control for odor identity, reciprocal controls were used. Flies were tested 24 hr later in a standard Y maze. (B) PAM dopaminergic neuron activity is necessary during acquisition (F(2,66)=5.355, p=0.007) and retrieval (F(2,71)=5.707, p=0.005), but not consolidation. Bar graphs illustrate mean +/- standard error of the mean.]

Further, decreasing dopamine-2-like receptors (D2R), which are thought to act as auto-receptors (Vickrey and Venton, 2011), in PAM neurons significantly reduced preference for cues associated with ethanol, suggesting that the regulation of dopamine release at the synapse is important for alcohol reward memory (Figure 1C). Strikingly, despite dopamine's established role in modulating locomotor and motor responses (da Silva et al., 2018; Howe and Dombeck, 2016; Dodson et al., 2016; Syed et al., 2016; Lima and Miesenböck, 2005; Romo and Schultz, 1990; Schultz, 2007), inactivating all PAM dopaminergic neurons did not disrupt ethanol-induced activity (Figure 1—figure supplement 1). Together, these results demonstrate that PAM neurons are required for encoding preference, but not for the locomotor response to the acute stimulatory properties of ethanol, and that dopamine regulation at the synapse is important for memory.

Dopaminergic encoding of alcohol memory acquisition occurs at the population level

To determine how alcohol influenced the activity of PAM DANs, we first used a dopamine staining protocol to label dopamine within the brain following 10 min of air or alcohol. As expected, a significant amount of dopamine was labeled within the mushroom body, and the majority of fluorescence was limited to the horizontal lobes (Figure 1—figure supplement 2). We hypothesized that dopamine fluorescence would increase within the horizontal lobes of the MB in response to alcohol. Quantification of fluorescence revealed a trending increase in dopamine that was not statistically different from control (Figure 1—figure supplement 2). We reasoned that dopamine staining likely could not distinguish between dopamine in the presynaptic terminals and dopamine in the synaptic cleft. Thus, we turned to 2-photon functional calcium imaging to monitor circuit dynamics of PAM dopaminergic activity in the context of intoxicating alcohol. We used R58E02-GAL4 to express GCaMP6m (Chen et al., 2013) and recorded from the PAM presynaptic terminals at the MB while naïve flies were presented with 10 min of odor, followed by 10 min of odor plus intoxicating doses of alcohol (Figure 1C).
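Calcium responses in the next section are summarized as ΔF/F0. The paper does not state its exact baseline convention, so the sketch below assumes F0 is the mean fluorescence over an initial baseline window of each 61 s epoch; array shapes and values are illustrative only.

```python
import numpy as np

def dff(trace: np.ndarray, baseline_frames: int = 30) -> np.ndarray:
    """dF/F0 for one epoch, with F0 = mean of the first `baseline_frames`
    samples (an assumed convention, not taken from the paper)."""
    f0 = trace[:baseline_frames].mean()
    return (trace - f0) / f0

# Per-fly summary: average dF/F0 within each epoch, then compare conditions
# across flies. Shapes are illustrative: 6 flies x 10 epochs x 244 frames.
rng = np.random.default_rng(1)
recordings = rng.normal(100.0, 5.0, size=(6, 10, 244))
per_fly = np.array([[dff(epoch).mean() for epoch in fly] for fly in recordings])
print(per_fly.shape)  # (6, 10): one mean dF/F0 per fly per epoch
```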
Interestingly, early in the respective recording sessions (odor vs odor + alcohol), changes in calcium dynamics were greater in the odor-only group (Figure 1D); however, with prolonged alcohol exposure, greater calcium dynamics started to emerge in the odor + alcohol group (Figure 1E). Similar effects were not evident if the fly was presented with two different odors alone or with alcohol alone (Figure 1—figure supplement 2), suggesting that the reported effects are not merely a consequence of odor identity or the pharmacological properties of alcohol, but perhaps unique to alcohol associations.

[Figure 1 legend, continued: Raw data are overlaid on bar graphs. Each dot is an n of 1, which equals approximately 60 flies (30 per odor pairing). One-way ANOVA with Tukey post hoc was used to compare mean and variance; *p<0.05. (C) RNAi knockdown of D2R within the PAM population targeted using the R58E02 GAL4 driver significantly reduced alcohol-associated preference, F(2,89)=6.441, p=0.002. (D) Schematic illustrating the calcium imaging paradigm. (E) Flies are exposed to odor followed by odor plus intoxicating vaporized ethanol while resting or walking on a ball; the same odor was used for both conditions to better compare circuit dynamics in response to ethanol and control for odor identity. Fluorescence was captured in 61 s recording epochs equally spaced by 2 min. (F) Average traces recorded during early odor and odor plus ethanol exposures; middle panels illustrate the binned DF/F0 and highlight a change in calcium dynamics as a consequence of ethanol exposure, and right panels illustrate the average DF/F0 for each fly in each condition. Early epochs of odor plus ethanol had significantly lower signal (F(1,5)=8.705, p=0.03). (G) Average traces recorded during late odor and odor plus ethanol exposures, displayed as in (F); late epochs of odor plus ethanol had significantly higher signal (F(1,5)=24.177, p=0.004). Within-subject repeated measures ANOVA was used to compare mean and variance across condition and time. Scale bar = 50 µm. *p<0.05, **p<0.01.]

To address whether specific subsets of dopamine neurons within the PAM population are necessary for alcohol-associated preference, we blocked transmission in subsets of these neurons using 18 highly specific split-GAL4 lines during both acquisition and retrieval. We found that preference was disrupted when neurotransmission was blocked in DANs projecting to the medial aspect of the horizontal MB (Figure 1—figure supplement 4A). Similar disruptions were evident when neurotransmission was blocked in intrinsic MB Kenyon cells (Figure 1—figure supplement 4B). We therefore selected split-GAL4 lines that targeted the medial aspect of the horizontal lobe and determined their role specifically in acquisition of alcohol-associated preference. Surprisingly, unlike 24 hr sucrose memory (Yamagata et al., 2015; Huetteroth et al., 2015), thermogenetic inactivation of specific subsets of DANs innervating compartments of the medial horizontal lobe during acquisition did not disrupt 24 hr alcohol-associated preference (Table 1).
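Behavioral preference throughout these experiments is read out from Y-maze choices. The paper does not give its index formula, so the sketch below uses a common convention for a two-choice preference index, averaged over the reciprocal odor pairings so that innate odor bias cancels; the counts are invented.

```python
def preference_index(n_paired: int, n_unpaired: int) -> float:
    """Score in [-1, 1]; positive values indicate preference for the
    ethanol-paired odor. This formula is an assumed convention."""
    return (n_paired - n_unpaired) / (n_paired + n_unpaired)

# Reciprocal training groups (invented counts of flies choosing each arm).
pi_group_a = preference_index(21, 9)    # odor 1 paired with ethanol
pi_group_b = preference_index(18, 12)   # odor 2 paired with ethanol
print((pi_group_a + pi_group_b) / 2)    # averaged preference index
```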
Together these data suggest that alcohol reward memories are encoded via a population of DANs involved in reward memory that progressively increase their activity as the flies become intoxicated.

Memory expression is dependent on a sparse subset of dopamine neurons

A hallmark of reward-encoding DANs is the gradual transfer of the response from the reward delivery during learning to the cue that predicts the reward during expression of the associated memory (Keiflin and Janak, 2015; Schultz, 2016; Schultz, 2015). However, the circuit mechanisms underlying this shift, and whether all DANs or only a selective subset respond to the predictive cue, are unknown. We temporarily inactivated neurotransmission in subsets of DANs during retrieval to determine which subsets are required for a behavioral response to the predictive cue. Strikingly, only inactivating DANs innervating the β′2a compartment of the MB, using the split-Gal4 line MB109B, significantly reduced alcohol-associated preference, demonstrating that these neurons are important for the expression of alcohol-associated preference during retrieval (Figure 2F).

A dopamine-glutamate circuit regulates memory expression

Our next goal was to map the circuits through which β′2a DANs drive behavioral choice. We tested the requirement of MB output neurons (MBONs) that align with β′2a DANs. Inactivating glutamatergic MBONs innervating similar compartments during acquisition, using five different split-Gal4 lines, did not significantly reduce alcohol-associated preference (Figure 3A-E). However, similar inactivation during retrieval identified a single β2β′2a glutamatergic MBON important for the expression of alcohol-associated preference (Figure 3I), thereby defining a putative retrieval microcircuit that consists of a subset of 8-10 dopamine neurons innervating the β′2a MB compartment and a single glutamatergic MBON that also innervates the β′2a MB compartment (β2β′2a; Figure 3L). Previous work suggested that β′2a DANs were anatomically connected with β′2amp MBONs at the level of the MB; however, it was unclear to which MBON the β′2a DANs were synaptically connected (Lewis et al., 2015). Previous work from our lab reported the requirement of D2Rs in intrinsic MB neurons for alcohol-associated preference (Petruccelli et al., 2018), suggesting an indirect D2R pathway that regulates expression of alcohol memory.

A separate dopamine-glutamate circuit regulates memory consolidation

Transsynaptic tracing revealed a putative direct synaptic connection between β′2a DANs and β′2mp glutamatergic MBONs in regulating alcohol-associated preference (Figure 4Bii). We tested whether this connection was functionally important in regulating alcohol-associated preference using dopamine receptor RNAi lines. Decreasing levels of D2R, but not D1Rs, reduced alcohol-associated preference (Figure 4D, Figure 4-figure supplement 1C), providing functional evidence for a direct D2R-dependent pathway that regulates alcohol memory.

Figure 2. Memory expression during retrieval is dependent on a sparse population of DANs. (A-H) A thermogenetic approach was used to inactivate neurotransmission during retrieval, but not acquisition, in PAM DANs with varying expression patterns. (F) Inactivating β′2a DANs during retrieval significantly reduced preference for alcohol-associated cues. One-way ANOVA with Tukey post hoc was used to compare mean and variance, F(2,65)=14.18, p=7.78×10⁻⁶. Bar graphs illustrate mean +/- standard error of the mean. Raw data are overlaid on bar graphs. Each dot is an n of 1, which equals approximately 60 flies (30 per odor pairing).
(I) Chart illustrating the expression pattern of each split-GAL4 tested, with intensity ranges of 2-5 (Aso et al., 2014a). (J) Model of circuits responsible for expression of alcohol-associated preference during retrieval, which highlights the importance of sparse subsets of dopaminergic activity during retrieval for the expression of alcohol-associated preference. *p<0.01.

Figure 3 continued. One-way ANOVA with Tukey post hoc was used to compare mean and variance. *p<0.01. Bar graphs illustrate mean +/- standard error of the mean. Raw data are overlaid on bar graphs. Each dot is an n of 1, which equals approximately 60 flies (30 per odor pairing). (K) Chart illustrating the expression pattern of each split-GAL4 tested, with intensity ranges of 2-5 (Aso et al., 2014a). (L) Updated model of circuits responsible for expression of alcohol-associated preference. Retrieval circuits require specific subsets of DANs and a single glutamatergic MBON innervating the β′2a compartment.

Figure 4 continued. (C) Thermogenetic inactivation of β′2mp during consolidation using MB002B significantly increased alcohol reward preference, F(2,54)=9.287, p=0.0003. Thermogenetic inactivation of β′2mp during consolidation using MB074C significantly increased alcohol reward preference relative to UAS controls, F(2,71)=3.51, p=0.04. (D) Knockdown of D2R in MBON β′2mp using MB002B significantly decreased alcohol-associated preference, F(2,63)=12.77, p=2.22×10⁻⁵. Knockdown of D2R in MBON β′2mp using MB074C significantly decreased alcohol-associated preference relative to GAL4 controls, F(2,71)=3.51, p=0.04. One-way ANOVA with Tukey post hoc was used to compare mean and variance. Bar graphs illustrate mean +/- standard error of the mean. *p<0.05, **p<0.01. (F) Circuits responsible for encoding alcohol-associated preference during retrieval. Scale bar = 50 µm. The online version of this article includes the following figure supplement(s) for figure 4.

Previous work in Drosophila reported that activating the β′2mp MBON promotes arousal (Sitaraman et al., 2015). Thus, we hypothesized that inactivating the β′2mp MBON while flies normally sleep would further decrease arousal and facilitate memory consolidation. To test this hypothesis, we inactivated neurotransmission of the β′2mp MBON using two different split-GAL4 driver lines (MB074C and MB002B) during the overnight consolidation period (Aso et al., 2014a). Despite having no effect during acquisition or retrieval (Figure 3A,E,F,J), inactivating the β′2mp MBON during the overnight consolidation period enhanced alcohol-associated preference (Figure 4C). Together these data suggest that β′2a DANs inhibit the β′2mp glutamatergic MBON via D2R receptors, which leads to the expression of alcohol-associated preference. In the absence of dopamine (Figure 2F) or D2R receptors (Figure 4D), preference is disrupted.

Convergent microcircuits encode alcohol reward expression

The central role for the β′2mp MBON in consolidation suggests that this region may integrate information from several circuits required for memory expression. Previous anatomical studies predicted that the β′2mp glutamatergic MBON and the α′2 cholinergic MBON are synaptically connected (Aso et al., 2014a). trans-Tango experiments demonstrate that the β′2mp MBON is indeed a postsynaptic target of the α′2 MBON (Figure 5A). We previously showed that inactivating the α′2 cholinergic MBON throughout both memory acquisition and expression decreased alcohol-associated preference (Aso et al., 2014b).
To establish the specific temporal requirements of the α′2 MBON, and to determine whether its corresponding α2α′2 dopaminergic input is necessary for alcohol-associated preference, we thermogenetically inactivated neurotransmission during either acquisition or retrieval. Inactivating α′2 cholinergic MBONs or their corresponding α2α′2 DANs during retrieval, but not acquisition, significantly reduced alcohol-associated preference (Figure 5C-F). The involvement of α2α′2 DANs is particularly interesting because it demonstrates a requirement for a separate population of DANs in memory expression. Interestingly, trans-Tango did not identify the α′2 cholinergic MBON as a postsynaptic target of α2α′2 DANs. Of course, the possibility exists that there is connectivity not identified by trans-Tango; however, RNAi against D1Rs or D2Rs did not disrupt alcohol-associated preference (Figure 5-figure supplement 1), suggesting that, like the β′2 microcircuit necessary for retrieval of alcohol-associated memories, direct connectivity of the α′2 microcircuit is not required for alcohol-associated preference.

Alcohol memory expression circuits converge on a higher-order integration center

Emerging models in the MB field suggest that MBON activity is pooled across compartments and that learning shifts the balance of activity to favor approach or avoidance. It remains unclear where this MBON activity converges. In order to identify potential regions that integrate MBON activity, we used trans-Tango to map postsynaptic partners of the α′2, β′2mp, and β2β′2a MBONs. Interestingly, the dorsal regions of the FSB, specifically layers 4/5 or layer 6, were consistently identified as postsynaptic targets of the α′2 MBON (Figure 6a,c). Both the β′2mp and β2β′2a MBONs also have synaptic connectivity with the dorsal regions of the FSB (Figure 6b,d). Together these data reveal the dorsal FSB as an intriguing convergent region downstream of the MB whose role in alcohol-associated preference should be investigated further (Figure 6e).

Discussion

In this study we provide novel insight into the circuit-level mechanisms underlying the acquisition and expression of alcohol reward memories in Drosophila. We found that acquisition of an appetitive response to alcohol does not rely on small subsets of DANs, but instead requires population-level dopaminergic modulation of the MB via PAM DANs, which increases with prolonged exposure (Figure 7A). The expression of alcohol reward memories, however, requires two discrete dopamine microcircuits within the vertical and horizontal lobes, which converge at several points: a neuron that regulates memory consolidation and the dorsal layers of the FSB (Figure 7B). We hypothesize that these convergent points provide multiple opportunities for memory to be updated or strengthened to influence subsequent behavior. Surprisingly, contrary to adaptive aversive or appetitive memories in flies (Liu et al., 2012; Yamagata et al., 2016; Yamagata et al., 2015; Masek et al., 2015), encoding alcohol-associated preference is not dependent on a single subset of DANs or MBONs. Instead, acquisition appears to depend on a population of DANs whose activity emerges over the course of exposure to intoxicating doses of alcohol and likely increases across odor-alcohol pairing sessions via the recruitment of neurons.
Although we cannot rule out the influence of other neurotransmitters or peptides that are potentially co-released with dopamine, dopamine autoreceptor knockdown experiments in PAM neurons using the R58E02-GAL4 driver suggest that the regulation of dopamine release at the synapse is important for alcohol reward memory.

Figure 6 continued. (E) Schematic of the fly brain highlighting the FSB and its layers. The FSB is a nine-layer structure (Wolff et al., 2015), of which layers 4, 5, and 6 are targets. Scale bar = 50 µm.

Previous work in Drosophila reports that increasing the number of encoding DANs enhances how long aversive memory lasts (Aso and Rubin, 2016). Remarkably, in an independent set of similar experiments, Ojelade et al. (2019) demonstrated that previous alcohol exposure potentiates dopaminergic responses to subsequent artificial activation. Together these findings are consistent with what is reported in mammalian models, where most drugs of abuse initially increase dopamine levels beyond what is experienced during natural reward (Nutt et al., 2015; Volkow and Morales, 2015; Kegeles et al., 2018), and they suggest a general rule whereby the stability of a memory is encoded by the number of DANs involved during acquisition. We hypothesize that the recruitment of additional DANs and the potentiation of their responses across sessions contribute to the stability of alcohol memory. Understanding the mechanism by which DANs are recruited may provide powerful insight into why memories for an intoxicating experience are so persistent. Surprisingly, despite the involvement of α1 PAM DANs in the acquisition of long-term sucrose reward memory, the α1 DANs do not appear to play a role in alcohol-associated preference. Perhaps differences in the animal's internal state and/or the temporal dynamics of alcohol intoxication underlie the distinction in requisite circuits. It is possible that the involvement of α1 is limited to internal states of hunger and thus not required when flies are sated. Unlike long-term sucrose memory, alcohol-reward memory is present in both hungry and sated flies, offering a unique opportunity to study how internal state might influence circuit selection for memory expression. Further investigation and comparison of circuits important for alcohol-reward memory in hungry, sated, and other internal states should prove a compelling line of research. Systems memory consolidation suggests that there are different circuits for memory acquisition and expression. Indeed, work in both fly and mammalian models suggests that brain regions have a time-limited role in systems consolidation (Trannoy et al., 2011; Zars et al., 2000; Blum et al., 2009; Akalal et al., 2011; Qin et al., 2012; Cervantes-Sandoval et al., 2013; Krashes et al., 2007; Krashes and Waddell, 2008; Perisse et al., 2013; Roy et al., 2017). Our data suggest that population encoding during acquisition shifts to a sparse representation during memory expression, and that distinct processes regulate consolidation and expression. The expression of alcohol-associated preference is dependent on two separate microcircuits: a small subset of PAM DANs (β′2a) within a larger population of reward-encoding DANs, and a single paired posterior lateral (PPL1; α2α′2) DAN (Figure 7B), converging on layers 4, 5, and 6 of the FSB. Additionally, we found that β′2a DANs make direct connections with a glutamatergic MBON (β′2mp) implicated in arousal (Sitaraman et al., 2015).
Converging microcircuits emerge with time and are not necessary for the acquisition of these long-lasting preference associations (Figure 7B). Interestingly, blocking the β′2mp MBON when flies normally sleep enhanced memory in a D2R-dependent manner. We propose that β′2a DANs inhibit β′2mp MBON neuronal activity, thus permitting consolidation of alcohol-associated preference. The involvement of PAM β′2a DANs in the expression of alcohol-associated preference is particularly interesting because these neurons (targeted by the broader driver lines 104-Gal4 and R48B04-Gal4) were previously implicated in the acquisition of 3 min sucrose memory in starved animals (Burke et al., 2012), as well as in naïve water seeking in thirsty animals (Lin et al., 2014). β′2a DANs were also previously reported to inhibit β′2amp MBONs to promote approach behaviors when flies were presented with conflicting aversive and appetitive odor cues (Lewis et al., 2015). The effects of β′2a dopamine neuronal inhibition, however, were not long-lasting. Instead, the appetitive food odor, and consequently the activity of β′2a DANs, appears to act as an occasion setter, a discriminatory stimulus that augments an animal's response to a cue (Lewis et al., 2015). We speculate that this neuron resets the response to a cue associated with alcohol, which may be critical for overcoming the initial aversive properties of alcohol. The involvement of PPL1 α2α′2 DANs is also interesting because PPL1 DANs are typically responsible for assigning negative valence to associated cues (Waddell, 2013; Claridge-Chang et al., 2009; Kim et al., 2018; Boto et al., 2019), suggesting that a microcircuit associated with negative valence directly interacts with a microcircuit associated with positive valence to regulate the decision to seek alcohol. We hypothesize that repeated intoxicating experiences change the dynamics of β′2a DANs during acquisition or consolidation in a way that creates long-term changes in the responsivity of the β′2mp MBON, and perhaps of the α′2 MBON. Because the β′2mp MBON is not required for expression of memory, it is likely that its output is integrated elsewhere in the brain to drive goal-directed behaviors. Indeed, there is a wealth of examples in the literature of systems integrating and balancing input from neural circuits to drive goal-directed behavior (Buschman and Miller, 2014; Hoke et al., 2017; Knudsen, 2007; Perisse et al., 2013; Aso et al., 2014b; Lewis et al., 2015; Dolan et al., 2018). Here we have identified one such structure, the dorsal layers of the FSB, specifically layers 4, 5, and 6, as an anatomical candidate for pooling MB output activity to drive learned behaviors. Interestingly, although the FSB has an established role in arousal and sleep, more recent work has defined its role in innate and learned nociceptive avoidance, further supporting its role in integrating MBON activity (Hu et al., 2018). We hypothesize that signals from the β2β′2a and α′2 MBONs are integrated at the FSB to shift a naïve response to a cue-directed learned response. Compellingly, the β′2mp MBON, which we show is required for consolidation of alcohol-associated preference, also sends projections to the FSB. This presents a circuit framework through which memory could be updated to influence behavioral expression. There are likely other convergent and/or downstream structures that are important for reward processing, and the emerging full connectome will shed further light on these connections.
Alcohol is a unique stimulus because, unlike natural rewards and punishments, it has both aversive and appetitive properties. Flies will naively avoid intoxicating doses of alcohol, but avoidance switches to preference with experience (Shohat-Ophir et al., 2012; Peru y Colón de Portugal et al., 2014; Ojelade et al., 2019; Kaun et al., 2011). Previous work in starved flies has similarly described the formation of parallel competing memories when rewards are tainted with bitter tastants (Das et al., 2014). In this case, cue-associated avoidance switches to approach around the same time that the nutritional value of sugar is processed (Musso et al., 2015; Das et al., 2014). During memory acquisition, both bitter-taste and shock memories require the MP1 DA neuron, whereas sucrose memories, like alcohol memories, require the PAM neurons. Similar to our work, Ojelade et al. (2019) show that the PAM population of DANs projecting to the MB is required for acquisition of experience-dependent alcohol preference in a consumption assay. They also demonstrate that activating layer 6 of the dorsal FSB leads to naïve alcohol preference. These data are particularly exciting because we also identified the dorsal FSB as a convergent structure for MBONs important for the consolidation and expression of alcohol-associated preference. Perhaps the temporal nature of a valence switch from conditioned aversion to preference is a consequence of system-level interactions between the MB and FSB. A classic hallmark of addiction is the enduring propensity to relapse, which is often driven by drug-associated cues. We believe our work provides valuable insight into the mechanisms by which drugs of abuse regulate acquisition, consolidation, and expression of pervasive sensory memories. Here we establish a circuit framework for studying the neural mechanisms of alcohol reward memory persistence in Drosophila and for understanding how circuits change in drug-induced states.

Materials and methods

Key resources table

Fly strains

All Drosophila melanogaster lines were raised on standard cornmeal-agar media with tegosept anti-fungal agent and maintained at either 18˚C or 21˚C. For a list of fly lines used in the study, see the Key Resources Table. All Drosophila melanogaster lines used for trans-Tango were raised and maintained at 18˚C in humidity-controlled chambers under 14/10 hr light/dark cycles on standard cornmeal-agar media with tegosept anti-fungal agent.

Behavioral experiments

Odor preference conditioning

For behavior experiments, male flies were collected 1-2 days post-eclosion, shifted from 21˚C to 18˚C at 65% humidity, and placed on a 14/10 hr light/dark cycle. Odor conditioning was performed similarly to Kaun et al., 2011. In short, groups of 30 males were trained in perforated 14 ml culture vials filled with 1 ml of 1% agar and covered with mesh lids. Training rooms were temperature- and humidity-controlled (65%). Training was performed in the dark with minimal red-light illumination and was preceded by a 20 min habituation to the training chambers. Training chambers were constructed out of PlexiGlas (30 × 15 × 15 cm) (for details, please refer to Nunez et al., 2018). During habituation, humidified air (flow rate: 130) was streamed into the chambers. A single training session consisted of a 10 min presentation of odor 1 (flow rate: 130), followed by a 10 min presentation of odor 2 (flow rate: 130) with 60% ethanol (flow rate: 90 ethanol, 60 air).
Reciprocal training was performed simultaneously to ensure that inherent preference for either odor did not affect conditioning scores. For the majority of experiments, the odors used were 1:36 isoamyl alcohol and 1:36 isoamyl acetate in mineral oil; however, the screen behavioral experiments used 1:36 isoamyl alcohol and 1:36 ethyl acetate in mineral oil. Vials of flies from group one and group two were age-matched and paired according to placement in the training chamber. Pairs were tested simultaneously 24 hr later in the Y-maze by streaming odor 1 and odor 2 (flow rate 10) into separate arms and allowing flies to walk up vials to choose between the two arms. A preference index was calculated as (# flies in the paired odor vial - # flies in the unpaired odor vial)/total # of flies that climbed. A conditioned preference index (CPI) was calculated by averaging the preference indices from reciprocal groups. All data are reported as CPI. All plots were generated in RStudio.

Odor sensitivity

Odor sensitivity was evaluated at the restrictive temperature (30˚C). Odors used were 1:36 isoamyl alcohol in mineral oil and 1:36 isoamyl acetate in mineral oil. Groups of 30 naïve males were presented with either an odor (flow rate 10) or air streamed through mineral oil in opposite arms of the Y-maze. A preference index was calculated as (# flies in the odor vial - # flies in the air vial)/total # of flies that climbed, for each individual odor.

Ethanol sensitivity

Ethanol sensitivity was evaluated in the recently developed flyGrAM assay (Scaplen et al., 2019). Briefly, for thermogenetic inactivation, 10 flies were placed into arena chambers and kept in a 30˚C incubator for 20 min prior to testing. The arena was then transferred to a preheated (30˚C) light-sealed box and connected to a vaporized ethanol/humidified air delivery system. Flies were given an additional 15 min to acclimate to the box before recordings began. Group activity was recorded (33 frames/sec) for five minutes of baseline, followed by 10 min of ethanol administration and five minutes following ethanol exposure. Activity was binned by 10 s and averaged within each genotype. Mean group activity is plotted as a line across time with the standard error of the mean overlaid. All activity plots were generated in RStudio.

trans-Tango immunohistochemistry and microscopy

Experiments were performed according to the published FlyLight protocols with minor modifications. Briefly, adult flies 15-20 days old were cold-anesthetized on ice, de-waxed in 70% ethanol, and dissected in cold Schneider's Insect Medium (S2). Within 20 min of dissection, tissue was incubated in 2% paraformaldehyde (PFA) in S2 at room temperature for 55 min. After fixation, samples were rinsed with phosphate-buffered saline with 0.5% Triton X-100 (PBT) and washed 4 times for 15 min at room temperature. Following the PBT washes, PBT was removed and samples were incubated in SNAP substrate diluted in PBT (SNAP-Surface649, NEB S9159S; 1:1000) for 1 hr at room temperature. Samples were then rinsed and washed 3 times for 10 min at room temperature, blocked in 5% heat-inactivated goat serum in PBT for 90 min at room temperature, and incubated with primary antibodies (Rabbit anti-GFP polyclonal (1:1000), Life Tech #A11122; Rat anti-HA monoclonal (1:100), Roche #11867423001) for two overnights at 4˚C.
Subsequently, samples were rinsed and washed 4 times for 15 min in 0.5% PBT and incubated in secondary antibodies (Goat anti-Rabbit AF488 (1:400), Life Tech #A11034; Goat anti-Rat AF568 (1:400), Life Tech #A11077) diluted in 5% goat serum in PBT for 2-3 overnights at 4˚C. Samples were then rinsed and washed 4 times for 15 min in 0.5% PBT at room temperature and prepared for DPX mounting. Briefly, samples were fixed a second time in 4% PFA in PBS for 4 hr at room temperature and then washed four times in PBT at room temperature. Samples were rinsed for 10 min in PBS, placed on PLL-dipped cover glass, and dehydrated in successive baths of ethanol for 10 min each. Samples were then soaked three times in xylene for 5 min each and mounted using DPX. Confocal images were obtained using a Zeiss LSM800 with ZEN software (Zeiss, version 2.1), with auto Z brightness correction to generate a homogeneous signal where necessary, and were formatted using Fiji software (http://fiji.sc).

Dopamine immunohistochemistry and microscopy

Groups of flies were exposed to either 10 min of air or 10 min of ethanol and dissected on ice within 15 min of exposure. Immunohistochemistry was performed according to Cichewicz et al., 2017. Within 15 min of dissection, tissue was transferred to fix (1.25% glutaraldehyde in 1% PM) for 3-4 hr at 4˚C. Tissue was subsequently washed 3 times for 20 min in PM and reduced in 1% sodium borohydride. The tissue was then washed 2 times for 20 min before a final wash in PMBT. Tissue was blocked in 1% goat serum in PMBT overnight at 4˚C and incubated in primary antibody (Mouse anti-dopamine (1:40), Millipore Inc, #MAB5300) for 48 hr at 4˚C. Following primary antibody incubation, tissue was washed three times in PBT for 20 min at room temperature and incubated in secondary antibody (Goat anti-mouse 488 (1:200 in PBT), Thermo #A11029) for 24 hr at 4˚C. The following day, tissue was washed 2 times for 20 min in PBT and then overnight in fresh PBT. Tissue was rinsed quickly in PBS, cleared in FocusClear, and mounted in MountClear (Cell Explorer Labs). Confocal images were obtained using a Zeiss LSM800 with ZEN software (Zeiss, version 2.1). Microscope settings were established using ethanol-exposed tissue before imaging air and ethanol samples.

Dopamine fluorescence analysis

Fluorescence was quantified in Fiji (Schindelin et al., 2012) using Segmentation Editor and 3D Manager (Ollion et al., 2013). In Segmentation Editor, ROIs were defined using the brush selection tool to outline the MB in each slice, along with a background region immediately ventral to the MB that lacked defined fluorescent processes. 3D ROIs of the MB and control region were created by interpolating across slices. Geometric and intensity measurements were calculated for each ROI in 3D Manager and exported to CSV files. The integrated density for each ROI was normalized by the integrated density of the control region. Average integrated densities for air and ethanol exposures are reported. All fluorescence quantifications were performed by a blinded experimenter.

Calcium imaging protocol and analysis

To express GCaMP6m in PAM neurons, UAS-GCaMP6m virgin female flies were crossed to male flies containing the R58E02-GAL4 driver. As previously mentioned, all flies were raised on standard cornmeal-agar food media with tegosept anti-fungal agent and maintained on a 14/10 hr light/dark cycle at 24˚C and 65% humidity.

Fly preparation

Male flies were selected for imaging six days post-eclosion.
Flies were briefly anesthetized on ice to transfer and fix to an experimental holder made out of heavy-duty aluminum foil. The fly was placed into an H-shaped hold cut out of the foil and glued in place using epoxy (5 min Epoxy, Devcon). The head was tilted about 70˚ to allow removal of the cuticle from the back of the fly head. All legs were free to move; the proboscis and antennae remained intact and unglued. Once the epoxy was dry, the holder was filled with Drosophila Adult Hemolymph-Like Saline (AHLS). The cuticle was removed using a tungsten wire (Roboz Surgical Instruments Tungsten Dissecting Needle, 0.125 mm, Ultra Fine) and #5 forceps. The prepared fly in its holder was positioned on a customized stand underneath the two-photon scope. The positions of the ball and the stream delivery tubes were manually adjusted to the fly's position in the holder.

Imaging paradigm

Calcium imaging recordings were performed with a two-photon resonance microscope (Scientifica). Fluorescence was recorded from the PAM neurons innervating the mushroom body for a total duration of 80 to 95 min. For the first 10 min, the fly was presented with an air stream, followed by 10 min of isoamyl alcohol. The fly was then presented with 10 min of isoamyl alcohol paired with ethanol, followed by 50 min of streaming air. To avoid bleaching effects and to match the higher-resolution imaging properties, recording was not continuous throughout the entire paradigm but was spaced into imaging intervals of 61.4 s. Recordings were performed using SciScan Resonance Software (Scientifica). The laser was operated at a 930 nm wavelength at an intensity of 7.5-8 mW. Images were acquired at 512 × 512 pixel resolution at an average of 30.9 frames per second. Recordings lasted 1900 frames, which equals 61.5 s. Recordings were performed at 18.5˚C room temperature and 59% humidity.

Imaging analysis

Data were registered, processed, and extracted using a MATLAB GUI developed by C. Deister, Brown University. Calcium image files (.tiff) comprising 1900 frames taken at 30.94 frames per second (61.4 s) were initially averaged every five frames to downsize the .tiff image files to 380 frames. Image files were then aligned and registered in X-Y using a 15-50 frame average as a template. ROIs were constructed over the MB lobes using non-negative matrix factorization to identify active regions, which were then segmented to create the ROIs. Fluorescence values were extracted from the identified ROIs, and ΔF/F₀ measurements were created using a moving average of 75 frames to calculate the baseline fluorescence (F₀). Average fluorescence traces across flies (n = 6) were visualized using ggplot in RStudio. Fiji (Schindelin et al., 2012) was used to construct heat maps visualizing calcium activity. Calcium image files were summated across 1900 frames to create Z-projections. A heat gradient was used to visualize the magnitude of calcium activity.

qRT-PCR

qRT-PCR methods have been described previously (Petruccelli et al., 2018). In brief, total RNA was extracted from approximately 100 heads using Trizol (Ambion, Life Technologies) and treated with DNase (Ambion DNA-Free Kit). Equal amounts of RNA (1 µg) were reverse-transcribed into cDNA (Applied Biosystems) for each of the samples. Biological (×3) and technical (×2) replicates were then analyzed with SYBR Green real-time PCR (BioRad, ABI PRISM 7700 Sequence Detection System) using the following PCR conditions: 40 cycles of 15 s at 95˚C and 1 min at 55˚C. Primer sequences can be found in Supplementary file 1, Table 4.
Across all samples and targets, the Ct threshold was set to 0.6 and the amplification start/stop was manually adjusted. All target genes were initially normalized to CG13646 expression for comparative ΔCt analysis, then compared to the control genotype to assess fold enrichment (ΔΔCt method). Table 3 includes a description of target and off-target expression of the split-Gal4 lines used. Table 4 includes a comprehensive table of detailed statistics describing all data. Table 5 includes a review of published papers that used the RNAi lines employed here.

Data availability: All data generated or analysed during this study are included in the manuscript and supporting files.
A study of transformer-based end-to-end speech recognition system for Kazakh language

Today, the Transformer model, which allows parallelization and has its own internal attention, is widely used in the field of speech recognition. The great advantage of this architecture is its fast training speed and the absence of sequential operations, in contrast to recurrent neural networks. In this work, Transformer models and an end-to-end model based on connectionist temporal classification were considered to build a system for automatic recognition of Kazakh speech. Kazakh belongs to the agglutinative languages and has limited data for implementing speech recognition systems. Some studies have shown that the Transformer model improves system performance for low-resource languages. Based on our experiments, it was revealed that the joint use of Transformer and connectionist temporal classification models improved the performance of the Kazakh speech recognition system; with an integrated language model, it showed the best character error rate of 3.7% on a clean dataset.

Traditional models are based on convolutional and modified recurrent neural networks (RNNs). Models implemented using RNNs perform calculations over the character positions of the input and output data, generating a sequence of hidden states that each depend on the previous hidden state of the network. This sequential process prevents parallelization of training across examples, which is a problem for longer input sequences and makes network training take much longer. In 10, another Transformer-based model was proposed, which allows parallelization of the training process; this model also eliminates recurrence and uses its internal attention to find the dependencies between the input and output data. The big advantage of this architecture is the fast training speed and the lack of sequential operations, as with RNNs. Previous studies 11,12 revealed that the combined use of Transformer models and an E2E model such as CTC improved the quality of English and Chinese speech recognition systems. It should be noted that the attention mechanism is a common method that greatly improves system quality in machine translation and speech recognition, and the Transformer model uses this attention mechanism to speed up training. The model's internal attention relates all positions of the input sequence to one another to find a representation of the set, without requiring explicit alignments. In addition, the Transformer does not need to finish processing the start of the text before processing its end. To implement such models, a large amount of speech data is required for training, which is problematic for languages with limited training data, in particular the Kazakh language, which belongs to the group of agglutinative languages. To date, systems based on the CTC model 13,14 have been developed for recognizing Kazakh speech with different sets of training data. The use of other methods and models to improve the accuracy of Kazakh speech recognition is a promising direction and can improve the performance of the recognition system with a small training sample.
The main goal of our study is to improve the accuracy of the automatic recognition system for continuous Kazakh speech by increasing the training data and by using models based on the Transformer and CTC for recognizing Kazakh speech. The structure of the work is as follows: Section 2 presents traditional methods of speech recognition, and Section 3 provides an analytical review of the research area. Section 4 describes the principles of operation of the Transformer-based model and the model we propose. Sections 5 and 6 describe our experimental data, the speech corpus, and the equipment used for the experiments, and analyze the results obtained. The conclusions are given in the final section.

Traditional speech recognition methods

Traditional sequence recognition focuses on estimating the maximum a posteriori probability. Formally, this approach is a transformation of a sequence of acoustic speech characteristics X into a sequence of words W. The acoustic characteristics are a sequence of feature vectors of length T, X = {x_t ∈ R^D | t = 1, …, T}, and the sequence of words is defined as W = {w_n ∈ V | n = 1, …, N} of length N, where V is a vocabulary. The most probable word sequence W* can be estimated by maximizing P(W|X) over all possible word sequences V* 15. This process can be represented by the following expression:

W* = argmax_{W ∈ V*} P(W|X)    (1)

Therefore, the main goal of automatic speech recognition (ASR) is to find a suitable model that accurately determines the posterior distribution P(W|X). The process of automatic speech recognition consists of the following steps:

• Extraction of features from the input signal.
• Acoustic modeling (determines which phones were pronounced for subsequent recognition).
• Language modeling (checks the correspondence of spoken words to the most likely sequences).
• Decoding the sequence of words spoken by a person.

The most important parts of a speech recognition system are the feature extraction methods and the recognition methods. Feature extraction is a process that selects a small amount of data essential for solving the problem. To extract features, Mel-frequency cepstral coefficient (MFCC) and perceptual linear prediction (PLP) algorithms are commonly used 16-18; the most popular is MFCC. In the speech recognition task, the original signal is converted into feature vectors, on the basis of which classification is then performed.

Acoustic model. The acoustic model (AM) uses deep neural networks and hidden Markov models. A deep neural network, a convolutional neural network (CNN), or a long short-term memory network (a variant of the recurrent neural network) is used to map the acoustic frame x_t to the corresponding phonetic state f_t at each input time t (2):

f_t = DNN(x_t)    (2)

Before this acoustic modeling procedure, the output targets of the neural network models, a sequence of phonetic states at the frame level f_{1:T}, are generated by HMM and GMM in special training procedures. The GMM models the acoustic element at the frame level x_{1:T}, and the HMM estimates the most probable sequence of phonetic states f_{1:T}. The acoustic model is optimized for the cross-entropy error, which is the per-frame phonetic classification error.

Language model. The language model estimates the prior probability of a word sequence as

P(W) = ∏_u P(w_u | w_{<u})    (3)

where w_{<u} denotes the previously recognized words. Currently, RNNs or LSTMs are commonly used for the language model architecture, as they can capture long-term dependencies, unlike traditional n-gram models, which are based on the Markov assumption and limited to a fixed n-word range of word history.
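As a concrete illustration of such a neural language model, the minimal PyTorch sketch below follows the configuration reported later in this work (two 1024-unit LSTM layers over the 10,805-word vocabulary base); the embedding size, class name, and input data are illustrative placeholders, not the authors' exact implementation.

import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden=1024, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # two 1024-unit LSTM layers, matching the LM described in the experiments
        self.lstm = nn.LSTM(embed_dim, hidden, num_layers=layers, batch_first=True)
        self.proj = nn.Linear(hidden, vocab_size)

    def forward(self, tokens):
        # each position predicts the next token: P(w_u | w_<u)
        out, _ = self.lstm(self.embed(tokens))
        return self.proj(out)

model = LSTMLanguageModel(vocab_size=10805)  # vocabulary base size from this work
logits = model(torch.randint(0, 10805, (4, 20)))
print(logits.shape)  # torch.Size([4, 20, 10805])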
Hidden Markov models. For a long time, systems based on hidden Markov models (HMM) were the main models for continuous speech recognition. The HMM mechanism can be used not only in acoustic modeling but also in the language model, although in general the HMM gives the greatest advantage when modeling the acoustic component. In this HMM, the phone is the observation and the feature is the latent state. For an HMM with state set {1, …, J}, the HMM-based model uses Bayes' theorem and introduces the HMM state sequence S = {s_t ∈ {1, …, J} | t = 1, …, T} for p(L|X) (4):

p(L|X) = Σ_S p(X|S) p(S|L) p(L) / p(X)    (4)

Here p(X|S), p(S|L), and p(L) in Eq. (4) correspond to the acoustic model, the pronunciation model, and the language model, respectively. The acoustic model p(X|S) indicates the probability of observing X given the hidden sequence S. According to the probability chain rule and the observation independence hypothesis in the HMM (observations at any time depend only on the hidden state at that time), p(X|S) can be decomposed into the following form (5):

p(X|S) = ∏_{t=1}^{T} p(x_t|s_t)    (5)

In the acoustic model, p(x_t|s_t) is the observation probability, which is usually represented by mixtures of Gaussian distributions. The posterior probability distribution of the hidden state, p(s_t|x_t), can be calculated with deep neural networks. Two approaches, HMM-GMM and HMM-DNN, can be used to calculate p(X|S) in Eq. (5). The first approach, HMM-GMM, was for a long time the main method for building speech-to-text technology. With the development of deep learning, DNNs were introduced into speech recognition for acoustic modeling. The role of the DNN is to calculate the posterior probability of the HMM state, which can be converted into observation probabilities, replacing the usual GMM observation probability. Consequently, the transition from HMM-GMM to the hybrid HMM-DNN model has yielded excellent recognition results, and it has become a popular ASR architecture. Hybrid models nevertheless have some important limitations. For example, ANNs with more than two hidden layers were rarely used due to computational performance limitations, and the context-dependent model described above relies on numerous methods developed for GMM-HMM. The learning process is complex and difficult to optimize globally, and the components of traditional models are usually trained on different datasets and with different methods.

Hybrid models based on DNN-HMM. To calculate p(x_t|s_t) directly, the GMM was used, because this model makes it possible to model the distribution for each state, yielding probability values for input sequences. In practice, however, these assumptions cannot always be modeled by a GMM. DNNs have shown significant improvements over GMMs due to their ability to learn nonlinear functions, but a DNN cannot directly provide the conditional probability. The frame-by-frame posterior distribution is used to turn the probability model p(x_t|s_t) into a classification problem p(s_t|x_t) using a pseudo-likelihood trick as a joint probability approximation (6) 15:

p(x_t|s_t) ≈ p(s_t|x_t) / p(s_t)    (6)

The use of this probability is referred to as a "hybrid architecture". The numerator is a DNN classifier trained with the input features x_t and target state s_t; the denominator p(s_t) is the prior probability of the state s_t. Frame-by-frame training requires a frame-level alignment with x_t as input and s_t as target. This alignment is usually obtained from a weaker HMM/GMM alignment system or from hand-made dictionaries.
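A minimal numpy sketch of the pseudo-likelihood conversion in Eq. (6) follows; the posterior and prior values are hypothetical placeholders for a single frame with three states.

import numpy as np

log_posterior = np.log(np.array([0.7, 0.2, 0.1]))  # DNN output p(s_t | x_t) for frame x_t
log_prior = np.log(np.array([0.5, 0.3, 0.2]))      # state priors p(s_t) from the alignments

# log p(x_t | s_t), up to an additive constant independent of the state
scaled_log_likelihood = log_posterior - log_prior
print(scaled_log_likelihood)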
The quality and quantity of the alignment labels are usually the most significant limitations of the hybrid approach.

End-to-end speech recognition models. E2E automatic speech recognition is a newer, neural-network-based approach to ASR that offers many advantages. E2E ASR is a single integrated approach with a much simpler training procedure, with models that work at a low audio frame rate. This reduces training time and decoding time, and it allows joint optimization with subsequent processing, such as natural language understanding. The ANN thus finds the probabilities P(·|x_1), …, P(·|x_t), where the outputs are distributions over representations of a sequence of words, i.e., labels. The basic principle is that modern E2E models are trained on big data. From this follows the main problem: the recognition of languages with limited training data, such as Kazakh, Kyrgyz, Turkish, etc. For such low-resource languages, there are no large corpora of training data.

Related work/literature review

The Transformer model was first introduced in 8, in order to reduce sequential computation and the number of operations needed to relate input and output position signals. Experiments were conducted on machine translation tasks, from English to German and from English to French; the model was shown to achieve good performance compared to existing results. Moreover, the Transformer works well for other tasks with both large and limited training data and is very fruitful for all kinds of seq2seq tasks. The use of the Transformer for speech-to-text conversion has also shown good results, reflected in the following research papers.

To implement a faster and more accurate ASR system, the Transformer and RNN-based ASR achievements were combined by Karita et al. 11. To build the model, connectionist temporal classification (CTC) was combined end-to-end with the Transformer for joint training and decoding. This approach speeds up training and facilitates LM integration. The proposed ASR system achieved significant improvements on various ASR tasks; for example, introducing CTC and LM integration into the Transformer baseline lowered the WER from 11.1% to 4.5% for the Wall Street Journal and from 16.1% to 11.6% for TED-LIUM.

Moritz et al. 19 proposed a Transformer-based model for streaming speech recognition, in contrast to models that require an entire speech utterance as input. Time-restricted self-attention in the encoder and triggered attention for the encoder-decoder attention mechanism were applied to generate output right after a word is spoken. The model architecture achieved the best result in E2E streaming speech recognition: 2.8% and 7.3% WER on the "clean" and "other" LibriSpeech test data.

The weak-attention suppression (WAS) method was proposed by Yangyang Shi and colleagues 20; it dynamically induces sparse attention probabilities. This method suppresses attention to uncritical and redundant continuous acoustic frames and is more likely to suppress past frames than future ones. The proposed method was shown to reduce the WER compared to baseline Transformers. On the LibriSpeech test sets, the proposed WAS method reduced the WER by 10% on the clean set and by 5% on the other set for streaming Transformers, setting a new state of the art among streaming models.
Dong Linhao and co-authors 21 presented the Speech-Transformer, a system using a 2D attention mechanism that jointly processes the time and frequency axes of 2D speech inputs, thereby providing more expressive representations for the Speech-Transformer. The Wall Street Journal (WSJ) corpus was used as training data. The experimental results showed that this model reduces training time while providing a competitive WER.

Gangi et al. 22 proposed a Transformer with SLT adaptation, an architecture for spoken language translation, for processing long input sequences with low information density to solve ASR problems. The adaptation was based on downsampling the input data using convolutional neural networks and modeling the two-dimensional nature of the audio spectrogram using 2D components. Experiments show that the SLT-adapted Transformer outperforms the RNN-based baseline in both translation quality and training time, providing high performance in six language directions.

Takaaki Hori et al. 23 extended the Transformer architecture with a context window, trained in monologue and dialogue scenarios. Monologue tests on CSJ and TED-LIUM3 and dialogue tests on SWITCHBOARD and HKUST were applied. The results obtained surpass the baseline E2E ASR, with or without speaker i-vectors.

In the research of Chang X. et al. 24, the RNN-based encoder-decoder model in the E2E system was replaced by the Transformer architecture. In order to use this model in the masking network of the neural beamformer in the multi-channel case, the self-attention component was modified so that it is limited to a segment, rather than the entire sequence, in order to reduce the amount of computation. In addition to improvements to the model architecture, external dereverberation preprocessing with weighted prediction error (WPE) was also included, which allows the model to process reverberated signals. Experiments with the extended wsj1-2mix corpus show that Transformer-based models achieve better results in echo-free conditions in single-channel and multi-channel modes, respectively.

Transformer architecture

The Transformer model was first created for machine translation, replacing recurrent neural networks (RNNs) in natural language processing (NLP) tasks. In this model, recurrence was completely eliminated; instead, for each utterance, features are built using the internal attention (self-attention) mechanism to identify the significance of the other sequence positions for that utterance. The generated features for a given utterance are therefore the result of linear transformations of the features of the significant positions in the sequence. The Transformer model consists of one large block, which in turn consists of encoder and decoder blocks (Fig. 1). The encoder takes as input the feature vectors from the audio signal, X = (x_1, …, x_T).

Encoder and decoder networks. Conventional E2E encoder/decoder models for speech recognition tasks consist of a single encoder, a single decoder, and an attention mechanism. The encoder converts the vector of acoustic features into an alternative representation, the decoder predicts a sequence of labels from the alternative information provided by the encoder, and attention highlights the parts of the frame significant for predicting the output. In contrast to these models, the Transformer model can have several encoders and decoders, each containing its own internal attention mechanism.
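A minimal PyTorch sketch of this stacked-encoder structure is shown below, using the six-layer configuration adopted later in this work; d_model, nhead, and the input shapes are illustrative placeholders, not the authors' exact settings.

import torch
import torch.nn as nn

# six identical self-attention encoder layers, stacked one above the other
layer = nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=6)

features = torch.randn(8, 120, 256)  # (batch, acoustic frames, feature dim)
encoded = encoder(features)          # same shape; each frame now attends to all others
print(encoded.shape)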
An encoder block consists of a stack of encoders; in this study, six encoders are used, stacked one above the other. The number of encoders is not fixed; it is possible to experiment with an arbitrary number of encoders in a block. All encoders have the same structure but different weights. The input of the encoder receives feature vectors extracted from the audio signal, obtained using Mel-frequency cepstral coefficients or convolutional neural networks. The first encoder transforms these data using self-attention into a set of vectors and transmits the outputs through a feed-forward ANN to the next encoder. The last encoder processes the vectors and transfers the encoded features to the decoder block. A decoder block is a set of decoders, usually identical in number to the encoders. Each encoder can be divided into two sublayers: the input data entering the encoder first pass through the multi-head attention layer, which helps the encoder look at other words in the incoming sentence while encoding a particular word. The output of the internal multi-head attention layer is sent to a feed-forward neural network; exactly the same network is independently applied to each word in the sentence. The decoder also contains these two layers, but between them there is an attention layer that helps the decoder focus on significant parts of the incoming sentence, similar to the usual attention mechanism in seq2seq models. This component takes into account the previous characters/words and, based on these data, outputs the posterior probabilities of the subsequent characters/words.

Self-attention mechanism. The Transformer model includes scaled dot-product attention 10. The advantages of self-attention are fast computation and a shortened path between words, as well as potential interpretability. This attention involves three vectors (queries, keys, and values) together with scaling (7):

Attention(Q, K, V) = softmax(QKᵀ / √d_k) V    (7)

These parameters are used for calculating attention. Multi-head attention combines several self-attention maps into a general matrix calculation (8):

MultiHead(Q, K, V) = Concat(head_1, …, head_h) W^O, where head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)    (8)

Here h is the number of attention heads in the layer, and W_i^Q, W_i^K, W_i^V, and W^O are trained weight matrices. The multi-head attention mechanism can be viewed as addressing an optimization problem: using this mechanism, one can bypass problems associated with unsuccessful initialization and improve the speed of training. In addition, after training, some of the attention heads can be excluded, since this does not affect the quality of decoding. The number of heads in the model is designed to regulate the attention mechanisms. This mechanism also helps the network easily access any information regardless of the length of the sequence, that is, regardless of the number of words in the set.

In the Transformer architecture there is a Normalize element, which is necessary to normalize the feature values, since after the attention mechanism these values can differ widely. Layer normalization is usually used for this purpose (Fig. 2). The outputs of the several heads can also differ, and the spread of values in the final vector can be large. To prevent this, an approach has been proposed 11 in which the values at each position are transformed by a two-layer perceptron.
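To make Eqs. (7) and (8) concrete, the sketch below implements scaled dot-product attention and a head-split/concatenate combination in PyTorch; for brevity it omits the learned projection matrices W_i^Q, W_i^K, W_i^V, and W^O, and all dimensions are illustrative.

import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # Eq. (7): softmax(QK^T / sqrt(d_k)) V over the last two dimensions
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5
    return F.softmax(scores, dim=-1) @ v

def multi_head(q, k, v, h=4):
    # Eq. (8), without the learned projections: split the feature dimension
    # into h heads, attend per head, then concatenate the head outputs
    def split(x):
        b, t, d = x.shape
        return x.view(b, t, h, d // h).transpose(1, 2)  # (b, h, t, d/h)
    out = scaled_dot_product_attention(split(q), split(k), split(v))
    b, _, t, dh = out.shape
    return out.transpose(1, 2).reshape(b, t, h * dh)

x = torch.randn(2, 10, 64)
print(multi_head(x, x, x).shape)  # torch.Size([2, 10, 64])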
After applying the attention mechanism, the values are projected to a larger dimension using trained weights, transformed by the nonlinear activation function ReLU, and then projected back to the original dimension, after which the next normalization occurs.

Proposed model. Typically, connectionist temporal classification (CTC) is used as a loss function to train recurrent neural networks to recognize input speech without pre-aligning the input and output data 11. To achieve high performance from the CTC model, it is necessary to use an external language model, since direct decoding does not work correctly. In addition, the Kazakh language has a rather rich word-formation mechanism, so the use of a language model contributes to improving the quality of Kazakh speech recognition. In this work, we use the Transformer and CTC models jointly with an LM. Using the LM with CTC in decoding results in rapid model convergence, which reduces decoding time and improves system performance. The CTC function, after receiving the output from the encoder, finds by formula (9) the probability over arbitrary alignments between the encoder output and the output symbol sequence:

P_CTC(γ|x) = Σ_{π ∈ R⁻¹(γ)} ∏_t P(π_t|x)    (9)

Here x is the output vector of the encoder, R is an additional operator that removes blanks and repeated symbols, and γ is the sequence of predicted symbols. This equation sums over all alignments using dynamic programming, and it helps to train the neural network without frame-level labels. The general structure of the resulting model is shown in Fig. 3. During training, the multi-task loss method was used to combine the probabilities under the negative logarithm, as presented in 10. Thus, the resulting model can be represented by the following expression (10):

L_MTL = −λ log P_CTC(γ|x) − (1 − λ) log P_att(γ|x)    (10)

The following additions were included to improve model performance:

(1) Using a character-level language model in feature extraction. Convolutional neural networks were used to extract features. To extract high-dimensional features from the audio data, we use the network up to its last hidden CNN layer. Softmax was used as an activation function. Next, a max-pooling layer was added to eliminate noise signals through dimensionality reduction; this layer reduces the size of the convolved features into a vector and also reduces the processing power required by lowering the dimensionality. Adapting training with a character-level language model, without disturbing the structure of the neural network during training, allows us to preserve maximum non-linearity for subsequent processing. Thus, our extracted features are already high-level, and there is no need to map the raw data to phonemes.

(2) Applying a language model at the level of words and phrases when decoding jointly with CTC.

To measure the quality of the Kazakh speech recognition system, the following metrics were used: the character error rate (CER), the proportion of incorrectly recognized characters (characters are the most common and simple output units for generating texts), and the word error rate (WER) 25.

Experiments and results

Dataset. To train the Transformer and Transformer + CTC models with LM and without LM, it was decided to divide the corpus of 400 h of speech into two parts: 200 h of "pure" speech and 200 h of spontaneous telephone speech.
This corpus was assembled in the laboratory "Computer Engineering of Intelligent Systems" of IICT MES RK 13,26. When creating the corpus, various types of speech were taken into account: prepared (read) and spontaneous. The sound files in the corpus are divided into training and test parts, 90% and 10%, respectively. The pure speech database consists of recordings of 380 speakers, native Kazakh speakers of different ages and genders, as well as speech data from artistic audiobooks and audio from news broadcasts. The voice acting and recording of each speaker took about 40-50 min. For the text, sentences with the richest phonemic content were selected. Text data were collected from news sites in the Kazakh language, and other materials in electronic form in the Kazakh language were used. To record the speakers, students, doctoral students, and undergraduates were involved as part of scientific practice, as well as employees of the institute, colleagues from different parts of the country, and acquaintances and relatives. Recording the voice-overs took about a year, and experts in linguistics were involved to evaluate and review the corpus in order to ensure high quality.

Recordings of telephone conversations were provided by a telecommunications company for scientific use only. Transcription of the telephone conversations was carried out on the basis of a developed methodology for compiling the texts, since this speech is spontaneous and may contain information in a foreign language; it may also contain various kinds of speech noises and non-speech noises, such as a dial tone, a telephone beep, sounds resembling a blow or a click, and filled pauses used to think about the next utterance. In addition, there may be slurred speech, overlapping of several speakers, etc. It should be noted that selecting text arrays with predetermined statistical requirements for the contextual use of phonemes is a very time-consuming task; this process took 5-6 months and is still ongoing, because it was necessary to check not only the speech data but also the correctness of the transcriptions. The speech recognition system does not require a dictionary at the phoneme level; audio data with text data are sufficient. After the above work, one of the important elements was created: a vocabulary base for the speech recognition system (10,805 non-repeating words). All recorded texts were collected in one file, repeated words were removed, and the list was then sorted alphabetically. The audio data were in .wav format, and all audio data were converted to a single channel. The PCM method was used to convert the data into digital form, with a sampling rate of 44.1 kHz at 16 bits. The PyTorch toolkit was used for the Transformer models. The experiments were carried out on a server with an AMD Ryzen 9 processor and eight GeForce RTX 3090 GPUs. The datasets were stored on a 1000 GB SSD to allow faster data flow during training.

Results. To optimize the model, a gradient descent optimizer based on Adam 27 was used with the parameters β₁ = 0.8, β₂ = 0.95, and ε = 10⁻⁴, together with the warm-up learning rate schedule of 10. To improve the model, the values of the training parameters that affect the model's learning quality were tuned. At the training stage, three regularization methods were used, as indicated in 10: residual dropout, layer normalization, and label smoothing.
Residual dropout is applied before data normalization with a rate of 0.3, after which normalization is applied. The label smoothing method was applied during training with a parameter of 0.1. These regularization techniques improve the accuracy of the system's metrics and keep training from overfitting the training set. The method proposed in [28] was used to initialize the Transformer weights, and for this study 6 encoder and 6 decoder blocks were used, stacked one above the other. All configurable parameters were set the same for the two datasets. The batch size was fixed at 64. For the CTC, the interpolation weight was set to 0.2, and its network consists of a bidirectional six-layer BLSTM with 256 cells in each layer. The beam search width at the decoding stage is 15. The language model contains two 1024-unit LSTM layers and was trained on the vocabulary base created for the speech recognition system. The model was trained for 45 epochs.

Tables 1 and 2 show the CER and WER results of the built models on the two databases (datasets). Experiments were carried out to recognize Kazakh speech using different models. The model trained on the pure dataset showed competitive results only with the use of an external language model. In Table 1, it can be seen that the Transformer model with CTC works well both with and without the language model and achieved a CER of 6.2% and a WER of 13.5%. The integration of an external language model made the system heavier, but significantly reduced the CER and WER rates, by 3.7 and 8.3%, respectively. As can be seen from the tables, the Transformer + CTC LM model shows the best result on both databases. In addition, the Transformer model with CTC learned faster and converged more quickly than the models without it (Figs. 4, 5). The resulting model was also easier to integrate with the LM, and with the help of the CTC the data were aligned. The results obtained during the experiments prove the effectiveness of the joint use of CTC with the E2E language model and showed the best result on all datasets in Kazakh. In addition, adding a CTC to our model generally improves the performance of the system. In the future, it is necessary to expand our speech corpus and to improve CER and WER further.

Discussion
To improve the performance of these metrics, a language model trained on the basis of an RNN was integrated into the model; this was the only way to achieve good results in our case. In addition, conducting experiments with a larger corpus could also improve recognition quality. However, increasing the amount of training data alone will probably not solve the problem. There are a large number of dialects and accents of Kazakh speech, and it is not possible to collect enough data for all cases. Speech recognition systems make many more errors as noise increases; this can be seen in the experiments on the recognition of conversational telephone speech (Table 2). The model cannot simultaneously recognize two people who are talking at the same time, which leads to the overlap of voice data. The issues of diarization and source separation, and metrics for detecting semantic errors, remain unresolved. The Transformer takes into account the entire context and learns the language model better, and CTC helps the model learn to produce recognition that is optimally aligned in time with the recording. This architecture can be further adapted for streaming speech recognition.
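Since the comparison above rests entirely on CER and WER, it may help to recall how these metrics are computed. The following self-contained Python sketch (ours, not the authors' evaluation code) derives both from the Levenshtein edit distance:

def edit_distance(ref, hyp):
    # Classic dynamic-programming Levenshtein distance, kept to one row.
    row = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, row[0] = row[0], i
        for j, h in enumerate(hyp, 1):
            prev, row[j] = row[j], min(row[j] + 1,       # deletion
                                       row[j - 1] + 1,   # insertion
                                       prev + (r != h))  # substitution
    return row[-1]

def wer(reference, hypothesis):
    # Word error rate: edit distance over word sequences.
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    # Character error rate: edit distance over character sequences.
    return edit_distance(reference, hypothesis) / len(reference)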
Conclusion
In this paper, the Transformer architecture for the automatic recognition of Kazakh continuous speech was considered, which uses self-attention components. Despite the many model parameters that need to be tuned, the training process can be shortened by parallelizing the computations. The combined Transformer + CTC LM model showed better results in Kazakh speech recognition in terms of character and word recognition accuracy than its components used separately, reducing these figures by 3.7 and 8.3%, respectively. This suggests that the implemented model can also be applied to other low-resource languages. In further research, it is planned to increase the speech corpus for the Kazakh language to conduct experiments with the implemented model, and also to make significant improvements to the Transformer model in order to reduce word and character errors in the recognition of Kazakh continuous speech.
7,175
2022-05-18T00:00:00.000
[ "Computer Science", "Linguistics" ]
Research of Ontology Building in Semantic Web
The Semantic Web is a web of data and is about two things (Ivan Herman et al., 2010). One is common formats for the integration and combination of data drawn from diverse sources. The other is a language for recording how the data relates to real-world objects. In simple terms, it provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. Fig. 1 shows the Semantic Web layer cake. The philosophy was thought up by Berners-Lee.

The Web is a system interlinking lots of documents via the Internet. On the Web, people can read books, watch online TV, publish content in blogs, and so on. The Semantic Web, in contrast, is a web of data; Table 1 describes the specification of each layer of its layer cake.

Ontology
Ontology as a branch of philosophy is the science of what is, of the kinds and structures of the objects, properties and relations in every area of reality (Smith Barry et al., 2003, pp. 155-166). Each scientific field will of course have its own preferred ontology, defined by the field's vocabulary and by the canonical formulations of its theories. Ontologies play an important role in fulfilling semantic interoperability as described in the seminal article on the Semantic Web (Berners-Lee et al., 2001, pp. 35-43).

Why is Ontology so essential in the Semantic Web? The first reason is that research on XML, RDF and RDFS has become rather mature, which provides the language basis. Another reason is that the upper-layer projects, researching rules, the logic framework, proof and trust, could not progress smoothly and quickly without the support of ontology building. However, building an ontology is hard work and needs a lot of research by scholars and engineers. Among all the difficulties, what is most important? The answer is how to build an available ontology. Meanwhile, it must meet five rules: veracity and objectivity, integrity, consistency, extensibility and the least restriction (Peng Bo, 2009, pp. 2610-2611).

Double-Channel Helix Methodology
Nowadays, there are many methodologies for building ontologies, e.g., the methodology by Uschold and King for the Enterprise Ontology, METHONTOLOGY, the On-To-Knowledge Methodology and so on (AIAI et al., 2003; Fermandez-Lopez et al., 1999, pp. 37-46; Staab et al., 2001, pp. 26-34). But a lack of commonality is their important limitation. Fig. 2 shows a Double-Channel Helix Methodology, a better methodology. Its process of ontology building proceeds step by step. It has two "channels", each of which can shift the process backward or forward. Considered from the engineering theory of ontology building, the methodology accords with people's knowledge and logical thinking.
Semantic Web knowledge
In this layer, the Semantic Web, the next generation of the World Wide Web, will consist of data defined and linked in such a way that it can be used for effective discovery, automation, integration, and reuse across various applications (S. Bechhofer et al., 1999, pp. 33-36). Moreover, the distributed and dynamic character of the Semantic Web will cause many versions and variants of ontology knowledge to arise. To build a real Semantic Web, it is essential that the knowledge represented in the different versions of ontologies is interoperable.

Conception Extraction
The extraction of conceptions for the Semantic Web is not an easy task. It requires skills and is still an art rather than a technology. One method is as follows. Firstly, gather all conceptions by brainstorming, cluster what has been collected, and refine it by investigation. Through this step, the scope of the conceptions in the Semantic Web is decided. Secondly, determine the attributes of every conception. Select a natural-language word with a single meaning for each concept; if there is no suitable word for representing a concept, create a new one. Thirdly, sum up the relations between the different conceptions and organize these concepts in an is-a hierarchy. In practice, these three steps are not done in a waterfall manner: users can go back and forth during the process.

Tools Chosen
Many tools have been developed for enterprises, mainly including Ontolingua, Ontosaurus, WebOnto, WebODE, OntoEdit, OILEd, Protégé and so on. All of these tools assist well in editing, modifying, exploring and maintaining ontologies. Table 2 (Wang Chang-xia et al., 2009, pp. 26-28, 31) shows the key identification scheme among the seven tools. Users should select an available tool for ontology building, and may even choose different tools for different stages to finish the task better.

Ontology Building
Edit the code for the conceptions, attributes and relations using the basic elements, e.g. Class, Property, subClassOf, subPropertyOf and so on; a minimal sketch of this step is given below. Finally, provide the main interface for ontology merging, ontology mapping and ontology translation resolution in the next stage.

Ontology Evaluation
As in software engineering, an ontology also needs to be evaluated. However, there is no authoritative software to check a newly built ontology. At present, it is evaluated by extensibility, visibility, inferenceability and so on. Finally, if the newly built ontology does not meet the standards, the process is shifted backward, continuing the loop; otherwise, the process of ontology building ends.

Conclusion
After analyzing the limitations of traditional methodologies for ontology building, the author gives a better Double-Channel Helix model. Of course, some aspects remain to be refined in future work.
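To make the coding step of the Ontology Building stage concrete, the following sketch uses the Python rdflib library; the namespace and the class and property names are invented purely for illustration.

from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/onto#")
g = Graph()
g.bind("ex", EX)

# Conceptions as classes, organized in an is-a hierarchy via subClassOf.
g.add((EX.Publication, RDF.type, RDFS.Class))
g.add((EX.Book, RDF.type, RDFS.Class))
g.add((EX.Book, RDFS.subClassOf, EX.Publication))

# Attributes and relations as properties, constrained by domain and range.
g.add((EX.Person, RDF.type, RDFS.Class))
g.add((EX.hasAuthor, RDF.type, RDF.Property))
g.add((EX.hasAuthor, RDFS.domain, EX.Publication))
g.add((EX.hasAuthor, RDFS.range, EX.Person))

print(g.serialize(format="turtle"))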
Table 1. The description of the Semantic Web Layer Cake
• URI (Uniform Resource Identifiers): ensures uniform identifiers for all kinds of resources, like the strings starting with "http:" or "ftp:" often found on the World Wide Web.
• Unicode: provides the uniform standard for encoding the characters of all the world's languages.
• XML (Extensible Markup Language): replaces HTML owing to its individual advantages; in addition, it defines methods for describing data.
• Namespace: provides ways to differentiate names, so that resources with the same name but different meanings can still be used.
• RDF M&S (Resource Description Framework Model and Syntax): reserves the specifications of the model and syntax of RDF.
• RDF Schema (Resource Description Framework Schema): a language for describing RDF vocabularies, with basic elements such as Resource, Class, Property, subClassOf, subPropertyOf, range and domain.
• Ontology: a formal, explicit specification of a shared conceptualization; it ensures that knowledge can be understood by both people and computers.
• Rules: the principles or regulations between the upper and lower layers of the level structure.
• Logic Framework: provides logical inference over the knowledge described by the ontology.
• Proof: given logical inference, provides a proof of whether a statement is right or wrong.
• Trust: detects whether web information is trustworthy or not.
• Signature: a method to make the environment safer.
• Encryption: a method to make the environment safer.

Table 2. The comparison of the Ontology building tools.
1,615.6
2010-10-19T00:00:00.000
[ "Computer Science" ]
From Digital Twins to Digital Twin Prototypes: Concepts, Formalization, and Applications
The transformation to Industry 4.0 also transforms the processes of developing intelligent manufacturing production systems. Digital twins may be employed to advance the development of these new (embedded) software systems. However, there is no consensual definition of what a digital twin is. In this paper, we provide an overview of the current state of the digital twin concept and formalize it using the Object-Z notation. This formalization includes the concepts of physical twins, digital models, digital templates, digital threads, digital shadows, digital twins, and digital twin prototypes. The relationships between all these concepts are visualized as class diagrams using the Unified Modeling Language. Our digital twin prototype approach supports engineers in the development and automated testing of complex embedded software systems. This approach enables engineers to test embedded software systems in a virtual context, without the need for a connection to a physical object. In continuous integration/continuous deployment pipelines, such digital twin prototypes can be used for automated integration testing and thus allow for an agile verification and validation process. In this paper, we demonstrate and report on the application and implementation of a digital twin using the example of two real-world field studies (ocean observation systems and smart farming). For independent replication and extension of our approach by other researchers, we provide a laboratory study published open source on GitHub.

INTRODUCTION
For cyber-physical systems, the Industrial Internet of Things (IIoT), and Industry 4.0 applications, the embedded software is an increasingly crucial asset. With increasing requirements and, hence, increasing complexity, new challenges arise for manufacturers and, in particular, for the engineers of these systems. While in large software companies software development is often done by distributed teams of engineers [1], this is usually different for small and medium-sized enterprises (SMEs) that develop embedded systems [2]. Especially in SMEs, embedded software is still often developed by the same engineers who also develop the electronics and/or mechanical parts [3].

However, with the demand for context-aware, autonomous, and adaptive robotic systems [4], more advanced software engineering methods have to be adopted by the embedded software community. Consequently, the way these systems are developed has to advance. In future development workflows, the embedded software systems will be the centerpiece of IIoT applications. To achieve this, the community has to move from expert-centric tools [4] to modular systems, whereby domain experts are enabled to contribute parts of the system.
A survey among 2,000 decision makers about trends and challenges in software engineering found that quality is perceived in the software industry as the single most relevant premise to survive [5]. Yet, organizations struggle to achieve software quality along with cost and efficiency [6]. During the development of embedded (software) systems, at some point thorough and reliable tests are necessary to verify and validate the whole system [7]. A common way to test the control algorithms of an embedded software system is Hardware-in-the-Loop (HIL) testing. An example of HIL testing at large scale is Airbus, which creates "iron birds" of its aircraft, containing the corresponding electronics, hydraulics and flight controls [8]. However, many SMEs cannot afford such redundant hardware just for the purpose of testing software. Hence, test automation is among the most popular topics for testing embedded software [9]. Still, automatic quality assurance is a challenge in this context, since hardware is in the loop.

Many different simulation tools were proposed, developed, and sold with the promise to reduce the costs and time needed for verification and validation. Yet, none of these tools is able to combine all aspects of modern machines during all steps of the production life-cycle, due to the complexity of the systems and the high amount of data being processed. Thus, multidisciplinary simulation concepts are increasingly important with regard to scalable and highly modular production environments enabled by cyber-physical systems [10]. Alongside HIL testing, manufacturers implemented different automated testing strategies with In-the-Loop simulations to reduce costs, e.g., Software-in-the-Loop (SIL), Model-in-the-Loop (MIL), and Processor-in-the-Loop (PIL) simulations [11].

One promising technique to enhance the overall software quality of embedded systems is the Digital Twin concept. We start with a discussion of related work in Section 2. As there is no common understanding of the concept, we then dissect the different parts of a digital twin and formally specify the concepts with the Object-Z notation. Afterwards, the application of digital twins in different industrial contexts is presented to illustrate the approach.

RELATED WORK
Digital twins are not only a growing topic in academia but also in industry, especially in manufacturing [12]. However, there is still no consensual definition of a Digital Twin, as we explain in Section 2.1. Most of the research conducted to find a general definition of a digital twin consists of literature reviews [13]-[16] investigating where digital twins are used, which components are part of them, and which level of integration with the CPS exists. In particular, Kritzinger, Karner, Traar, et al. [16] contributed with their literature review to a consensual understanding of which subsystems are part of a digital twin. They consider the digital model, the digital shadow, and the digital twin as three separate levels of integration in the overall concept of digital twins. In this paper, we extend this work by providing a formalization for all these categories.

With regard to mathematical approaches to formalize the concept of digital twins, there is a lack of research papers. Nevertheless, in Section 2.2 we discuss two approaches [17], [18] that use semi-formal means to define the relationships between the different components of a digital twin.
The Evolution of the Digital Twin Concept
An innovative method for testing and monitoring embedded systems was used for space missions, dating back to the early Apollo missions conducted by the National Aeronautics and Space Administration (NASA). Here, the "Twin" concept was initially employed during the Apollo missions in the late 1960s as a safety precaution. If a system on the spacecraft failed during the mission, engineers had no access to the capsule, and a failure to fix problems in a timely manner could be catastrophic for the space mission. At the time, computational power was insufficient for complex simulations, so NASA engineers came up with the idea of building at least two identical space capsules. One was used for the mission while the other remained on Earth, serving as the "Twin" for simulation purposes. Changes to the system were first tested on the Twin before astronauts received instructions. This approach required both capsules to be maintained exactly the same, including replacing parts on the Twin even if it was not used during a mission. NASA had planned to transfer this approach to the Space Shuttle program, but abandoned the idea due to the high costs.

Half a century later, with advancements in computational power and improved simulations, NASA's Twin concept has evolved into the digital twin. However, there was a second research thread that contributed to the concept. This second thread originated from the manufacturing industry and dates back to 2002, when Grieves [19] first pitched for the formation of a Product Lifecycle Management (PLM) center at the University of Michigan. The presentation slide, as depicted in Figure 1, had the title "Conceptual Ideal for PLM" [20] and sketched the idea of a digital twin, naming it the "Mirrored Spaces Model" back then [19].

Fig. 1: A Digital Twin by Grieves and Vickers [20] consists of the real space (left side), the virtual space (right side), and the link for data flow from real space to virtual space. The opposite direction is done manually by using information to enhance processes (Source: [20]).

With the Mirrored Spaces Model, Grieves already envisioned three crucial components of digital twins: the physical
space, the virtual space, and the data link between the physical and virtual spaces. Later, in 2016, Grieves and Vickers [20] defined the digital twin as stated in Definition 1:

Definition 1 (Digital twin by Grieves and Vickers [20] (2016)). The Digital Twin is a set of virtual information constructs that fully describes a potential or actual physical manufactured product from the micro atomic level to the macro geometrical level. At its optimum, any information that could be obtained from inspecting a physical manufactured product can be obtained from its Digital Twin. Digital Twins are of two types: Digital Twin Prototype (DTP) and Digital Twin Instance (DTI). Digital twins are operated on in a Digital Twin Environment (DTE).

Definition 1 considered the digital twin to be a collection of technologies and distinguished between two types: the Digital Twin Prototype (DTP) and the Digital Twin Instance (DTI). The Digital Twin Prototype is a set of blueprints, etc., used to construct or maintain the physical twin. The Digital Twin Instance is the specific instance created after the physical twin has been manufactured, linked to it throughout its lifecycle. Although the vision by Grieves and Vickers [20] reflected solutions that are possible today, the technology available in 2002 only allowed for a rudimentary implementation of what is known today as a digital twin. Digital twins were seen as a new paradigm for designing, manufacturing, and servicing products [12]. However, the meaning of digital twin may vary depending on the sector they are utilized in [12].

After their introduction, digital twins experienced a hype phase until around the year 2006. This first hype was driven by high hopes in the industry. However, the technology did not live up to the hype, and digital twins became a buzzword in marketing departments rather than a fully realized concept. Newman [21] observed and criticized something similar with regard to microservice architectures. Saracco and Henz [12] emphasize that the industry drove the development of digital twins, while academia ignored it. The revival of interest in digital twins in 2016 was thanks to the maturity of IIoT and CPS technologies, and academia also joined the bandwagon. Digital twins reached the peak of the Gartner Hype Cycle of emerging technologies in 2018 [22]. Furthermore, an increased number of research papers and special issues published by journals can be observed after 2016.

It was between 2006 and 2016 that Piascik, Vickers, Lowry, et al. [23] and Glaessgen and Stargel [24] proposed their vision of a digital twin for NASA [19]. Piascik, Vickers, Lowry, et al. [23] used the term digital twin in their technology roadmap for NASA; however, they described the digital twin concept but did not define digital twins. The better-known digital twin definition was by Glaessgen and Stargel [24] for next-generation fighter aircraft and NASA vehicles, shown in Definition 2:

Definition 2 (Digital twin by Glaessgen and Stargel [24] (NASA) (2012)). A Digital Twin is an integrated multiphysics, multiscale, probabilistic simulation of an as-built vehicle or system that uses the best available physical models, sensor updates, fleet history, etc., to mirror the life of its corresponding flying twin. The Digital Twin is ultra-realistic and may consider one or more important and interdependent vehicle systems, including airframe, propulsion and energy storage, life support, avionics, thermal protection, etc.
They tailored their vision to the specific use case of spacecraft, satellites, and space exploration, where simulations play a crucial role due to the high cost of hardware and human resources. These simulations are used both in the development phase, which indicates at least a MIL approach, and to monitor the systems during missions. To detect anomalies during flight, they also included a channel for sending sensor data from the physical twins to their corresponding digital twins. Loading this data into a simulation with a realistic model supersedes NASA's Twin approach from the Apollo missions. This is similar to the data link shown in Figure 1, only with far more advanced technology and tools. A demonstration of their implementation can be seen in the Perseverance rover that landed on Mars in 2021 [25].

In parallel to the definition by NASA, Garetti, Rosa, and Terzi [26] defined digital twins for manufacturing as shown in Definition 3:

Definition 3 (Digital twin by Garetti, Rosa, and Terzi [26] (2012)). The digital twin consists of a virtual representation of a production system that is able to run on different simulation disciplines and that is characterized by the synchronization between the virtual and real system, thanks to sensed data and connected smart devices, mathematical models and real-time data elaboration. The topical role within Industry 4.0 manufacturing systems is to exploit these features to forecast and optimize the behaviour of the production system at each life cycle phase in real time.

When attention to digital twin research rekindled, academia proposed multiple definitions for the concept [13]. These definitions were influenced by the realistic simulation approach put forth by NASA. Rosen, Wichert, Lo, et al. [15] linked the digital twin concept to the Industry 4.0 strategy of the German Platform Industry 4.0 [27]. They illustrated how simulations evolved over time, from mechanics in the 1960s to simulation-based system design and finally to digital twins since 2015. They also highlighted that modularity, autonomy, and connectivity are crucial requirements for digital twins, among other factors.

The definitions provided by Grieves and Vickers [20] and NASA only included an automated connection from the physical twin to its digital twin. Trauer, Schweigert-Recksiek, Engel, et al. [28] conducted an industrial case study to analyze how the industry perceived and defined digital twins between 2002 and 2019. They traced the evolution of digital twins and presented Definition 4 as a result.

Definition 4 (Digital twin by Trauer, Schweigert-Recksiek, Engel, et al. [28] (2020)). A Digital Twin is a virtual dynamic representation of a physical system, which is connected to it over the entire life cycle for bidirectional data exchange.

We present Definition 4 here because it includes the bidirectional data exchange from the digital twin to the physical twin. This bidirectional interaction allows remote control and operation of the physical twin, as well as new opportunities for collaboration between the physical twin and the digital twin. It also poses a challenge for engineers: either to develop the software independently for each twin, violating the principle of realistic replication, or to use tools like Docker to containerize the physical twin's software for use as a digital twin.

Depending on the research field, the industry, and the use case, the term digital twin is often used synonymously with concepts like Digital Model, Digital Shadow, and Digital Thread [13], [16]. Kritzinger, Karner, Traar, et al.
[16] conducted a categorical literature review and analyzed research papers with regard to the proposed concepts and how they deviate from a common understanding of the essential parts of digital twins. They classify three subcategories of a digital twin by their level of integration with the physical twin: (i) digital model, (ii) digital shadow, and (iii) digital twin. The differences are depicted in Figure 2.

• Figure 2a shows the digital model. There is no automated connection between the physical object and the digital model. No automated data exchange is realized. State changes in the physical object do not immediately affect the digital model, and vice versa.
• If there is an automated one-way data flow from the physical object to the digital object (see Figure 2b), then this is a digital shadow. A change in state of the physical object leads to a change of state in the digital shadow, but not vice versa.
• Figure 2c shows a fully integrated digital twin. The data flows are automated between the physical twin and the digital twin in both directions. In such a configuration, the digital twin might also act as a controlling instance of the physical twin. A change in state of the physical twin directly leads to a change in state of the digital twin, and vice versa.

Fig. 2: Subcategories of digital twins by their level of integration with the physical twins (Source: [16]).

With the increasing importance of digital twins, the International Organization for Standardization (ISO) published the ISO 23247 series, defining a framework to support the creation of digital twins of observable manufacturing elements, including personnel, equipment, materials, manufacturing processes, facilities, environment, products, and supporting documents [29].

Definition 5 (Digital twin by International Organization for Standardization [29] (2021)). A digital twin assists with detecting anomalies in manufacturing processes to achieve functional objectives such as real-time control, predictive maintenance, in-process adaptation, Big Data analytics, and machine learning. A digital twin monitors its observable manufacturing element by constantly updating relevant operational and environmental data. The visibility into process and execution enabled by a digital twin enhances manufacturing operation and business cooperation.

One aspect of ISO 23247 that immediately catches the eye is the absence of any mention of bidirectional communication; the focus is on the monitoring aspect of a digital twin. According to the definition by Kritzinger, Karner, Traar, et al. [16], ISO 23247 thus only describes a digital shadow [29].
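The three levels of integration can be made tangible with a small Python sketch of our own (the class and method names are invented): the digital model is updated by hand, the digital shadow subscribes to automatic updates from the physical object, and only the digital twin feeds state changes back.

class PhysicalObject:
    def __init__(self):
        self.state = {}
        self.observers = []              # automated outbound data flow, if any

    def set_state(self, key, value):
        self.state[key] = value
        for obs in self.observers:       # push every change to subscribers
            obs.on_physical_change(key, value)

class DigitalModel:
    # No automated data exchange: the state is maintained manually.
    def __init__(self):
        self.state = {}

    def update_manually(self, key, value):
        self.state[key] = value

class DigitalShadow(DigitalModel):
    # Automated one-way flow: physical changes are mirrored automatically.
    def on_physical_change(self, key, value):
        self.state[key] = value

class DigitalTwin(DigitalShadow):
    # Automated two-way flow: the twin may also act on the physical object.
    def __init__(self, physical):
        super().__init__()
        self.physical = physical
        physical.observers.append(self)

    def command(self, key, value):       # a state change propagates back
        self.physical.set_state(key, value)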
Since 2018, IIoT platforms have transitioned from basic data hubs to digital twin (DT) platforms. Lehner, Pfeiffer, Tinsel, et al. [30] evaluated the digital twin platforms provided by Amazon Web Services (AWS), Microsoft Azure, and the Eclipse ecosystem and showed that they fulfill many, yet not all, key requirements. Features like bidirectional synchronization between physical and digital twins require additional coding, and automation protocols are not covered yet. According to the categorization of the integration level of digital twins [16], these platforms only help to establish a so-called digital shadow. Modern simulation tools such as AutoDesk, aPriori, or Ansys use IIoT platforms to feed the simulation with data and enable the integration of automation protocols. Often they are promoted with the promise of a digital twin. However, similar to the cloud providers, these tools also just help to establish a digital shadow. The simulation of a physical twin (PT) still does not cover the entire embedded software system that runs on the physical twin, and it also lacks proper bidirectional synchronization between the digital twin and the physical twin.

Conceptual Models to Define Digital Twins
The presented research projects and papers leave plenty of space for interpretation of the digital twin concept. This is one reason why there are so many definitions of digital twins.

Fig. 3: Semi-formal description of the relationships between physical twin, digital twin, their connections, and environments as described by Yue, Arcaini, and Ali [17].

Yue, Arcaini, and Ali [17] present a semi-formal approach using UML class diagrams to define the physical twin, the digital twin, and their relationships by the example of an automated warehouse system (AWS). Figure 3 depicts the resulting overview: a state change in one twin triggers a change of the state of its counterpart. Furthermore, they paid attention to two aspects which are often not considered explicitly: fidelity and the twinning rate. Fidelity considers the accuracy and the level of abstraction of the digital twin, and the twinning rate is the interval at which the physical twin and the digital twin synchronize their states. However, the semi-formal approach by Yue, Arcaini, and Ali [17] has its flaws. Although they considered the digital model as part of the digital twin, it is not explicitly mentioned in the general overview in Figure 3. Moreover, the digital shadow was ignored completely.

Becker, Bibow, Dalibor, et al. [18] present a conceptual model of digital shadows for CPS in a similar approach, also using UML class diagrams to show the relationships, but solely for the digital shadow. The focus of the digital shadow is on single assets and their information flow from the physical twin to the digital shadow. They also emphasize that an asset's corresponding model is part of the digital shadow and that models can be of different natures/types.

A formal, yet very abstract, mathematical approach to the relationships between physical twins, digital shadows, and digital twins was presented by Lv, Lv, and Fridenfalk [31]. A limitation of their approach is that it still offers a lot of space for interpretation, and the mathematical notation is peculiar.

In this paper, we extend and merge the relationship diagrams of Yue, Arcaini, and Ali [17] and Becker, Bibow, Dalibor, et al. [18] by also including the digital model and the digital shadow to give a full overview of the Digital Twin concept. In addition, we present the formalization of a digital twin software architecture using the Object-Z notation.
Continuous Twinning
In the development phase of CPS, HIL testing is still the common approach. The pressure to reduce costs [6] has led to many different approaches to switch from HIL to SIL. To date, for most industrial applications, sensors and actuators are connected via input/output ports to programmable logic controllers (PLCs). Although new wireless communication technologies and more powerful and efficient single-board computers open up the embedded community to cheaper and faster development processes, the predominance of PLCs will hold for years. It is quite common to use PLCs in a HIL setup, where the PLC is connected to a simulation [32]. Engineers can program the PLC, and the simulation delivers the virtual context with simulated sensors/actuators to the PLC. As still only one engineer can work on a HIL system at a time, SIL approaches become more and more popular to enable collaboration between engineers. Lyu, Atmojo, and Vyatkin [32] demonstrated that a software PLC in a SIL context can be realized with Docker and other tools.

Quality assurance of embedded systems is regulated with standards and norms to ensure robust testing and to prevent malfunctions that might pose a risk to the safety of individuals who work with or use these systems [2]. The aviation industry is renowned for its strict and stringent testing procedures, contributing to the fact that aircraft are, statistically, the safest mode of transportation. This was not the case half a century ago, as standards and procedures have evolved through experimentation with various testing strategies.

The digital twin prototype approach presented in this paper enables engineers to produce a first minimum viable product (MVP) with the first implemented device driver and emulator. Thanks to the publish-subscribe architecture, all additional nodes and emulators can be developed and added iteratively. Putting all modules in a source code management system allows all developers to use the digital twin prototype and enhance the entire system incrementally, without the need to connect to the hardware of the physical twin. As a bonus, this also enables automated SIL testing in continuous integration/continuous delivery (CI/CD) pipelines.

By following CI/CD workflows, the development of embedded software systems becomes an agile and incremental process. Beginning with a prototype of a device driver for a single piece of hardware, up to entire production plants and smart factories, agile software development is enabled. This not only improves the software quality and shortens release cycles, it also allows additional stakeholders to participate in a feedback loop in the development process from the first MVP onwards. Adjusting software requirements or fixing design flaws can be done during development. With this method, digital twins evolve continuously in small incremental steps, rather than in major releases. Nakagawa, Antonino, Schnicke, et al. [33] envision and call this approach Continuous Twinning.
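The kind of automated SIL test such a pipeline would run can be sketched as follows. The DigitalTwinPrototype package and its entire API are hypothetical names invented for this illustration, not an existing library.

import pytest
from dtp import DigitalTwinPrototype  # hypothetical module bundling software and emulators

@pytest.fixture
def twin():
    # Start the embedded software against emulated sensors/actuators,
    # i.e. without any connection to physical hardware.
    with DigitalTwinPrototype(config="emulated.yaml") as t:
        yield t

def test_sample_rate_command(twin):
    # Send a command through the emulated transmitter ...
    twin.send_command({"period": 10})
    # ... and assert that the emulated sensor was reconfigured.
    assert twin.sensor.period == 10

def test_measurements_are_forwarded(twin):
    twin.sensor.emit_measurement(42)
    assert 42 in twin.transmitter.sent_values

In a CI/CD pipeline, such tests would run on every commit, which is exactly the incremental verification loop that Continuous Twinning envisions.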
THE DIGITAL TWIN CONCEPT - A FORMALIZATION
As Grieves [34] elaborates, there is a flaw in the categorization of the digital twin definitions by Kritzinger, Karner, Traar, et al. [16]. Stating that digital twins have three subcategories, where a digital twin is a subcategory of itself, leads to endless recursion. Furthermore, this increases the confusion around what a digital twin is and what it is not. However, we do not share Grieves' [34] recommendation to ignore the difference between a digital shadow and a digital twin. To enhance clarity around the concepts and relationships between physical twins, digital models, digital shadows, digital threads, digital twin prototypes, digital templates, and digital twins, we formally specify the Digital Twin concept as follows. We propose, similar to Hasselbring [35], a three-level interleaving of formality in the specification: 1) informal prose explanation and illustrations with examples; 2) semi-formal object-oriented modeling with the UML; 3) rigorous formal specification with Object-Z.

Object-Z [36] is a formal specification notation used to describe the behavior of software systems. It extends the Z notation [37] and enables the incorporation of object-oriented concepts, such as classes, objects, inheritance, and polymorphism, into specifications. Additionally, Object-Z allows for the specification of operations that can be performed on objects, along with constraints on attribute values and relationships between objects, all expressed in a mathematical notation. The following specification has been checked using a type checker provided by the Community Z Tools Project [38].

The formal specification is exemplified through an embedded software system comprising a sensor, an actuator which also serves as a data transmitter, and an embedded control system connected to both. This control system manages data and command exchange between these components. All example components are very basic and are only meant to demonstrate the core ideas. A real system would be more complex, including more third-party dependencies, tools, and frameworks.

The Physical Twin
The digital twin concept starts with the physical twin.

Definition 6 (Physical Twin). A physical twin is a real-world physical System-of-Systems or product. It comprises sensing or actuation capabilities driven by embedded software.

Figure 4 illustrates the deployment diagram of our simple embedded system. In this example, the sensor is connected via an RS232 interface to the controller, and the transmitter is connected via Ethernet. All data collected from the sensor is processed by the controller logic and subsequently sent to an external source via the transmitter. Commands to modify the sensor's behavior are received by the transmitter and forwarded to the sensor through the control logic.

Consider both devices as black boxes that maintain a list of accepted commands, a method for executing tasks based on the commands and returning a result, and functions for sending and receiving data. Additionally, a device driver holds a corresponding list of commands that can be sent to the devices. The lists on the device and the device driver are identical, and the device driver handles command transmission and response reception.

The UML class diagram in Figure 5 depicts the various classes forming the embedded control system. To align with clean code principles, abstract classes Device and DeviceDriver are introduced first. Sensors and actuators are considered devices and thus inherit from Device, as depicted on the left side of Figure 5. All devices are connected to the embedded control system.
The crucial elements of embedded software systems are the connections between the control systems and the sensors/actuators. In this example, the connections are established using different PROTOCOL types (TCP or RS232) to facilitate communication between Device and DeviceDriver. Specifically, SensorDriver inherits from DeviceDriver and employs an RS232Connection to establish a connection with a Sensor. Similarly, Transmitter and TransmitterDriver (which also inherits from DeviceDriver) establish a connection using TCPConnection. While a Device is treated as an external component running on the device, a corresponding DeviceDriver is an integral part of the embedded control system.

A Device consists of two main components: a Connection object and a set of accepted commands (commandList). The Connection object manages data exchange between a Device and a DeviceDriver. The ExecuteCommand function represents the execution of a task after a command has been sent to the Device. It expects a COMMAND object sent by the DeviceDriver and returns a RESPONSE object. The Send and Receive functions utilize the corresponding functions provided by the contained Connection.

To facilitate the exchange of data from a sensor to another process, such as the control logic, EventHandler objects are introduced. It can be assumed that these EventHandler objects are implemented in a manner similar to the Observer pattern, which also encompasses publish/subscribe architectures. In this setup, all events received from the Sensor are emitted to all listeners through a Producer, and processes receive these events by including a Consumer.

Object-Z Formalization
The specification of this simple embedded system follows a bottom-up approach. The deployment diagram, as depicted in Figure 4, can be defined using the Object-Z notation. To achieve this, some basic type definitions are introduced: PROTOCOL represents the communication protocols utilized between the devices and the control system, while EVENT is the type employed for data exchange between processes. Basic type definitions introduce new types in Z and Object-Z; their internal structure is considered irrelevant for the specification. In this particular specification, any details that are not architecturally relevant are abstracted this way.

The various PROTOCOL types used in the architecture are subsequently defined through an axiomatic definition. In this context, TCP and RS232 are established as values of type PROTOCOL:

TCP, RS232 : PROTOCOL

Up until this point, only basic types have been introduced. However, as Object-Z is object-oriented, objects are also created. In this context, the parent class is denoted as DATA, and it will later be specialized through inheritance into classes specific to the various data types. Communication between devices is represented as a sequence of bits. Given that standard data types such as integers, floats, or strings are irrelevant for the specification, only a bit representation is utilized.

As both a device and its corresponding device driver exchange either RESPONSE or COMMAND objects, the corresponding schemas inherit from the DATA class. In this context, a RESPONSE can represent either a MEASUREMENT or a STATUS.

Once the data types have been formalized, the various components and their connections can be configured. Initially, the abstract Connection class is defined; the symbol ? denotes input parameters and ! denotes outputs [36]. A Connection possesses a type and manages bit sequences, represented as a stream (dataStream). The Write function appends bit sequences to the stream, while the Read function extracts them by reading bits from it.
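For readers more at home in code than in Object-Z, the same abstract Connection can be rendered in Python. This is our own sketch and mirrors, rather than replaces, the formal schema.

from abc import ABC

class Connection(ABC):
    def __init__(self, protocol):
        self.protocol = protocol           # e.g. "TCP" or "RS232"
        self.data_stream = []              # stream of bits

    def write(self, bits):
        # Append a bit sequence to the stream.
        self.data_stream.extend(bits)

    def read(self, n):
        # Extract the first n bits from the stream.
        head, self.data_stream = self.data_stream[:n], self.data_stream[n:]
        return head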
The specific implementations, RS232Connection and TCPConnection, are named after the types they set for the Connection object from which they inherit. The symbol ↓ denotes the union of Connection with all sub-types; Connection is abstract, thus there has to be a sub-type that implements it. The symbol © denotes object containment [36].

Each EventHandler registers for a specific EVENT (event : EVENT), which can represent, for example, a simple response from the Device. In this example, the EventHandler is an abstract class, and Producer and Consumer are the specific implementations. Assuming both register for the same EVENT, like "NEW-DATA", a Producer can emit new events, and the Consumer receives and handles all incoming events. It is important to note that this relationship is not one-to-one but rather one-to-many, allowing an indefinite number of Consumers to listen to the same Producer. The main function of a Producer is the Emit function, which is called with a passed DATA object, whereupon all Consumers are notified.

After introducing the basic classes, the logic of the embedded control system can be defined. The DeviceDriver manages all communication between the control system and the Device, with communication being established through the Connection class. In this scenario, assume this DeviceDriver is straightforward and serves as a relay between the control logic and the device. The Consumer handles all incoming DATA from the control logic and forwards them to the device. When responses are received from the device, the emitter forwards these responses to all listeners.

In Object-Z, the symbol ∥ composes two operations and communicates the outputs of one to the inputs of the other [36]. Therefore, the Send function first receives an incoming event by invoking consumer.Consume, and only after that call's result has been received is it passed to the Connection, which then sends the command to the device. Conversely, incoming responses from the device are received from the connection using connection.Read and subsequently emitted to all listeners through emitter.Emit.

Now that the abstract classes for Device, Connection, and DeviceDriver have been established, we can proceed to define the concrete classes for the sensor, named Sensor, and its corresponding device driver, SensorDriver, as depicted in Figure 5. In this example, all incoming commands are dispatched by the control logic, consumed by the driver, and subsequently forwarded to the sensor via the connection. Vice versa, all responses from the sensor are emitted as events by the corresponding producer and can be listened to by all consumers.

The essence of this specification lies in the communication between a device and its device driver, which is captured by the Communication schema. In this instance, the device is a Sensor, and the driver is a SensorDriver. Both the device and the driver share the same commandList and are connected through an RS232Connection.
In Object-Z, the symbol ∥ also signifies the execution of functions in parallel [36]. Therefore, ReadFromDevice illustrates the Sensor sending data while the corresponding SensorDriver reads it. Conversely, ReadFromDriver represents the reverse scenario, with communication from the SensorDriver to the Sensor.

The details of the control system are not within the scope of this specification. The control logic of an embedded system is often some form of a state machine. State machines fully automate a system, but do not adapt to new or changed processes on the fly. Modern Industry 4.0 applications incorporate autonomous behavior, extracted or learned from gathered data, and thus include architectures different from state machines. Furthermore, the orchestration of processes, including different commands to different sensors and actuators, can be quite complex. However, for this example, the only function of the ControlLogic class is to execute the commands received from the transmitter and return the responses from the sensor.

Assume the commands from the transmitter only include a period for the sensor's sample rate. To configure the period, the function sendCmd processes events sequentially from the transmitter queue. For each event, ChangeBehavior executes SetPeriod to set the sample rate internally; the newly configured period is then sent as a command to the sensor, which adjusts its sample rate accordingly. This message exchange is logged in a list called dataLog. All events originating from the sensor are handled by sendRsp and are sent to the transmitter without any alterations. Once again, the message exchange is recorded in the data log through the LogData command.

With all required classes defined, the schema of the EmbeddedControlSystem from Figure 4 can now be composed.
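As an informal counterpart to the schemas above, the relay behavior of the ControlLogic can be sketched in Python; this is our own rendering, and helper names such as consume and send are assumptions.

class ControlLogic:
    def __init__(self, sensor_driver, transmitter_driver):
        self.sensor = sensor_driver
        self.transmitter = transmitter_driver
        self.data_log = []                   # corresponds to dataLog

    def send_cmd(self):
        # Consume the next command event from the transmitter queue ...
        command = self.transmitter.consumer.consume()
        self.data_log.append(command)        # LogData
        # ... and forward the newly configured period to the sensor.
        self.sensor.send(command)

    def send_rsp(self):
        # Forward sensor responses to the transmitter without alteration.
        response = self.sensor.consumer.consume()
        self.data_log.append(response)       # LogData
        self.transmitter.send(response)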
The Digital Model
Modeling and simulation are powerful methods utilized in various fields to evaluate complex systems, processes, and knowledge. They empower researchers, engineers, and decision-makers to examine real-world phenomena within controlled and virtual environments. This, in turn, enables them to make informed decisions and gain insights into the system under investigation. At the core of modeling lies the concept of mathematical modeling, which plays a pivotal role in formally capturing the essence of the system. Mathematical models are representations of real-world systems employing mathematical equations, relationships, and logical structures. They provide a means to describe and quantify the behavior of a system. While mathematical models are not confined to any specific domain, in this work we concentrate on their application in the engineering domain.

Before the advent of computers, the construction of machines was primarily carried out on drawing boards. This paradigm shifted with the introduction of computer-aided design (CAD), enabling the creation of 2D and 3D models that could easily be shared and replicated. Over the past decades, advancements in tooling and computational power have facilitated the substitution of real prototypes with virtual prototypes. This transition has significantly reduced design cycles and lowered design costs. When components of a system are governed by mathematical relationships, virtual prototypes can be rigorously tested in simulations across a wide range of conditions. This allows for the evaluation of potential design weaknesses, providing immediate feedback on design decisions.

The Digital Model serves as a central component of a digital twin. However, most definitions merely mention digital models, assuming that researchers share a common understanding of what a model entails. This often leads to the assumption that a CAD model constitutes the entirety of a digital model, while a simulation is considered something more than a digital model, despite both being forms of mathematical models. Hence, we define a digital model as follows:

Definition 7 (Digital Model). A digital model describes an object, a process, or a complex aggregation. The description is either a mathematical or a computer-aided design (CAD).

This definition encompasses various aspects of digital modeling, including the use of CAD as the foundational model for system design, its utilization within simulation tools involving complex processes, and even purely mathematical models.

Introducing the State Machine Example
Although the physical twin is defined as including (autonomous) behaviors instead of a state machine, this example could also be implemented as a state machine, whose different states can be modeled as follows. A state machine can be represented by a 5-tuple M = (Q, Σ, δ, q₀, F), which consists of a finite set of states Q, a finite set of input symbols known as the alphabet Σ, a transition function δ : Q × Σ → Q, an initial or starting state q₀ ∈ Q, and a set of accept states F ⊆ Q. The creation of state machines, often done using tools like LabVIEW, remains a common approach employed by engineers for programming machines. This practice falls within the scope of the provided definition of a digital model.

The state machine of the embedded control system can be defined accordingly; the corresponding UML state diagram is presented in Figure 6. Upon initiation, the initial state is STANDBY, with the corresponding period value for the sensor's sample rate set to 0, indicating that no samples are taken at this point. If a command with a value x ∈ Σ, where x > 0, is issued, the state machine transitions to the ACTIVE state. Conversely, if a command with a value x = 0 is received, the state reverts to STANDBY. For values of x < 0, the state of the system changes to OFF.
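The same machine translates almost one-to-one into Python; the following sketch of ours renders the transition function of Figure 6 directly.

STANDBY, ACTIVE, OFF = "STANDBY", "ACTIVE", "OFF"

class EventStateMachine:
    def __init__(self):
        self.state = STANDBY
        self.period = 0          # no samples are taken initially

    def process_event(self, x):
        # delta : Q x Sigma -> Q, keyed on the sign of the commanded period.
        if x > 0:
            self.state, self.period = ACTIVE, x
        elif x == 0:
            self.state, self.period = STANDBY, 0
        else:
            self.state = OFF

machine = EventStateMachine()
machine.process_event(10)   # STANDBY -> ACTIVE with period 10
machine.process_event(0)    # ACTIVE  -> STANDBY
machine.process_event(-1)   # STANDBY -> OFF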
Object-Z Formalization
This state machine can also be specified in Object-Z. First, the class diagram is displayed in Figure 7. STATE is the parent class; its execute method is internally overwritten by the child states. For this example, the specific code that is executed is irrelevant. The states the state machine can be in are defined as subclasses. The EventStateMachine encapsulates the logic responsible for state changes upon receiving COMMAND events, and maintains both a STATE (in the variable state) and a period, which is a number. Initially, the period is set to 0, corresponding to the initial state STANDBY. The ProcessEvent function is responsible for modifying the state of the state machine in response to incoming events.

The Digital Template
In their initial definition of digital twins, Grieves and Vickers [20] view the digital twin as a collection of information necessary for constructing and monitoring the physical object. Specifically, their digital twin prototype can be regarded as a virtualized set of blueprints, bills of materials, technical manuals, and similar documentation. When combined with the digital model, which can be used to extract all the information needed for creating blueprints and bills of materials, it can indeed be employed to construct and maintain the physical twin. However, this approach does not completely virtualize the physical twin, as later demonstrated by the example of the OSI Model in Figure 17 on Page 19. Thus, the early interpretation of this definition does not fully realize a digital twin of a physical twin.

To encompass all available materials for constructing and maintaining the physical twin, including the software running the physical twin and the digital model, these components can be bundled together into a comprehensive package. We refer to this bundle as the Digital Template.

Definition 8 (Digital Template). A digital template serves as a framework that can be tailored or populated with specific information to generate the physical twin. It encompasses the software operating the physical twin, its digital model, and all the essential information needed for constructing and sustaining the physical twin, such as blueprints, bills of materials, technical manuals, and similar documentation.

Grieves and Vickers [20] initially defined what we call a digital template as a digital twin prototype. However, in Grieves [39], they expanded upon their definition of a digital twin prototype: it covers all the products that can be made, including all their variants, and takes shape over time, from an idea to a first manufactured article [39]. We still consider that early versions of their digital twin prototype are only a digital template. Fully developed, however, they could also include the digital twin prototype definition presented later in this work.
Object-Z Formalization
The UML class diagram of a digital template is depicted in Figure 8. The digital template includes all documents that either describe the physical twin or are required to build it. Furthermore, it includes the digital model the real system is derived from and the software that later operates the physical twin. For an Object-Z formalization, the general class Document is defined.

The Digital Thread
With the development of CPS, machines began interacting with servers tasked with monitoring and controlling them. This paradigm also applies to digital twins. In this context, the communication channel facilitating such interaction is referred to as a digital thread. Taking inspiration from Leiva [40], we define the digital thread as follows:

Definition 9 (Digital Thread). The digital thread refers to the communication framework that allows a connected data flow and integrated view of the physical twin's data and operations throughout its life-cycle.

Data accumulated from physical objects can only be preserved if these objects possess an interface for storing the generated data. As with the general digital twin definitions, there is currently no universally accepted and standardized solution for digital threads, given their diverse applications across various domains.

Furthermore, it is crucial to understand that the digital thread encompasses more than just the communication protocol. It also involves applications and functionalities that assist in tasks such as monitoring, analysis, planning, and execution. These applications have the capacity to incorporate and share knowledge derived from the digital template and the gathered data, preserving the physical twin's evolution through time [41].

The UML class diagram for a digital thread between the previously formalized physical twin and a digital twin, which will be defined later in this paper, is illustrated in Figure 9. The DigitalThread consists of a PTtoDTConnection, which sends measurement and status messages (see the RESPONSE Object-Z class), and the DTtoPTConnection, which sends commands to the physical twin. To send data, a TransmitterDriver is used to establish a Connection. Notice that this connection is not between a DeviceDriver and a Device, but between two transmitters, e.g. using the LoRaWAN protocol. Both connection types gather data from processes (DigitalThreadProcess). In general, these processes can be different in each digital thread. Referencing our example again, the ControlLogic represents a PTDigitalThreadProcess, since it forwards all sensor messages to the transmitter, which can then transmit the data to the digital twin. On the digital twin's side, the DTDigitalThreadProcesses can include many different kinds of processes. However, at least one process is always included: the process that decides which command is sent to the physical twin to adjust its sample rate. Since the digital thread is meant to show the evolution of the physical twin over its life-cycle, all the gathered data has to be stored in some form of a database. Hence, the database is also a DigitalThreadProcess that is part of the digital thread.
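In code, the two directed channels and the persisting database of the digital thread could be sketched as follows; this is our own illustration, and queue-based messaging stands in for a real protocol such as LoRaWAN.

import queue

class DigitalThread:
    def __init__(self):
        self.pt_to_dt = queue.Queue()   # measurements and status messages
        self.dt_to_pt = queue.Queue()   # commands back to the physical twin
        self.database = []              # preserves the twin's evolution over time

    def push_response(self, response):
        # Called at the physical twin's end of the thread.
        self.database.append(response)
        self.pt_to_dt.put(response)

    def push_command(self, command):
        # Called at the digital twin's end of the thread.
        self.database.append(command)
        self.dt_to_pt.put(command)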
Formalizing this with Object-Z, we first define the DigitalThreadProcess:

• Monitor: This is the first stage of the framework. In this phase, the system continuously collects data and monitors its performance and the surrounding environment. This can involve data from various sensors, actuators, or monitoring tools that gather information about the system's behavior, resource utilization, and external conditions.

• Analyze: To gain insights into the system's behavior and performance, the data collected through monitoring is analyzed. The goal is to identify patterns, anomalies, and potential issues and hence to understand the current state of the system.

• Plan: Based on the analysis of the system's current state, the system formulates a plan for actions to be taken. This plan may involve adjustments, optimizations, or corrective measures aimed at improving system performance, resource allocation, or other relevant parameters.

• Execute: In the last phase, the system carries out the actions defined in the planning stage. These actions can be automatic or semi-automatic, depending on the level of autonomy and control designed into the system. The system implements the planned changes to achieve the desired state.

• Knowledge: This component is critical for learning and adaptation. It involves maintaining a repository of historical data, models, policies, and best practices. The system uses this knowledge to make more informed decisions in subsequent iterations of the MAPE-K loop. Over time, the system becomes better at self-optimization and self-management by learning from its past experiences.

These stages are executed sequentially one after another and all have permanent access to the Knowledge about the system; a minimal sketch of this loop is given below, after the following definition. The realization of the data flow between the different stages is part of the Digital Thread. Also, applications around the different stages, which are, for instance, connected via APIs, are part of the Digital Thread if they provide the user with better insight into the corresponding physical twin.

The Digital Shadow

To fully harness the potential of the digital thread, a process situated at either end of the digital thread must consolidate all the disparate elements into a platform that users can utilize to gain insights into the current state of the physical twin. In the context of the digital twin concept, this role is fulfilled by the digital shadow. The digital shadow is defined as follows:

Definition 10 (Digital Shadow). A digital shadow is the sum of all the data that are gathered by an embedded system from sensing, processing, or actuating. The connection from a physical twin to its digital shadow is automated. Changes on the physical twin are reflected in the digital shadow automatically. Conversely, the digital shadow does not change the state of the physical twin.
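As announced above, the following minimal Python sketch shows one way to wire the four sequential MAPE-K stages around a shared Knowledge store. All names, example data, and decision logic are invented assumptions for illustration, not part of the Object-Z formalization.

```python
class Knowledge:
    """Repository of historical data, models, and policies."""
    def __init__(self) -> None:
        self.history: list[dict] = []

def monitor(knowledge: Knowledge) -> dict:
    sample = {"temperature": 21.5}          # stand-in for real sensor data
    knowledge.history.append(sample)
    return sample

def analyze(sample: dict, knowledge: Knowledge) -> bool:
    return sample["temperature"] > 30.0     # hypothetical anomaly criterion

def plan(anomaly: bool, knowledge: Knowledge) -> list[str]:
    return ["SET_PERIOD 10"] if anomaly else []

def execute(actions: list[str], knowledge: Knowledge) -> None:
    for action in actions:
        print("executing:", action)         # would be sent to the system

def mape_k_iteration(knowledge: Knowledge) -> None:
    # The stages run sequentially; each has access to the shared Knowledge.
    sample = monitor(knowledge)
    anomaly = analyze(sample, knowledge)
    actions = plan(anomaly, knowledge)
    execute(actions, knowledge)
```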
The configuration of the digital shadow for the physical twin, as specified previously, is illustrated in Figure 11. It is important to note that some parts of the physical twin are not depicted in the figure. The digital shadow operates on a server that establishes a network connection to the physical twin, either through a cable or wirelessly. In this example, assume a wireless connection between the physical twin and its digital shadow. As the UML class diagram in Figure 13 shows, many classes from the physical twin can be reused. The transmitter uses the same device driver as the physical twin, the event handlers are identical, and the message types can also be reused. Only the classes for the Monitor and Analyze stages of the MAPE-K model are new. A direct association between the two classes is not required, as they exchange data via an Observer pattern using the event handlers. Software packages to enhance these two classes are ignored in this example.

For data retrieval, the digital shadow employs a connected transmitter. To facilitate transmitter operation, the physical twin's transmitter device driver can be repurposed. All data is then transmitted from the driver to the MAPE-K components. It is worth mentioning that MAPE-K is not an obligatory component of the digital shadow; it is used here only to distinguish between the representations of CPS, digital shadows, and digital twins.

Since machines controlled by external computers/servers already exist in the form of CPS, it is essential to clarify the distinction between a digital shadow and a CPS. As illustrated in Figure 12, the digital model holds the same level of importance as Knowledge. However, a CPS does not necessarily have to include a model of the connected machine, and even if it does, this model may not always be up-to-date. In contrast, for a digital shadow, this scenario is different: in the monitoring stage, all received data automatically updates the digital model.

Another distinction is that a CPS can be used to directly operate the physical object. In contrast, a digital shadow's sole purpose is to monitor the physical twin and provide data for analysis, enabling insight into the received data. Consequently, the Planning and Execution stages of the MAPE-K model are not inherent components of the digital shadow. While they can be incorporated, the automated change of state in the physical object is not a function of the digital shadow.

Fig. 12: A digital shadow realized with the MAPE-K reference model. The Plan and Execution stages are not included, since there is also no data exchange from the Execution stage to the physical twin.

Object-Z Formalization

The UML class diagram in Figure 13 is reduced to the two new classes for the Monitor and Analyze stages; all other classes and relationships are identical to the UML class diagram of the physical twin in Figure 5. A direct association between the classes is not required, as they exchange data via an Observer pattern using the event handlers. Software packages to enhance these two classes are again ignored in this example.
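A minimal Python sketch of the Observer pattern realized by these event handlers is given below. The Producer and Consumer names follow the event-handler formalization of the physical twin; the queue-handling details are assumptions.

```python
class Consumer:
    """Registers for events and buffers them in a queue."""
    def __init__(self) -> None:
        self.queue: list[object] = []

    def observe(self, item: object) -> None:
        self.queue.append(item)

    def consume(self) -> object:
        return self.queue.pop(0)   # always return the first element

class Producer:
    """Emits every occurred event to all registered listeners."""
    def __init__(self) -> None:
        self.listeners: list[Consumer] = []

    def register(self, listener: Consumer) -> None:
        self.listeners.append(listener)

    def emit(self, event: object) -> None:
        for listener in self.listeners:
            listener.observe(event)
```

With these two classes, the Monitor and Analyze stages need no direct association: Analyze simply registers one of its consumers with Monitor's producer.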
A digital shadow specification with Object-Z can be done as follows. The Transmitter and its operation are managed by the corresponding TransmitterDriver, both of which can be reused from the Object-Z formalization provided for the physical twin earlier. Additionally, all exchanged messages and the EventHandler can also be reused. Any status changes occurring in the physical twin are emitted as STATUS events, while all measurements are emitted as MEASUREMENT events. An emitter-producer is responsible for transmitting all consumed events to any registered listener. The most crucial component here is the digitalModel, which is an object of the previously specified EventStateMachine.

All status changes are handled by the handleState function, which reads all STATUS messages from the queue and forwards them to the digital model (state machine) for event processing. Subsequently, the result of the state machine's operation is emitted to all registered listeners. Since measurements do not impact the state machine's state, they are individually read from the queue via the handleMeasurements function and immediately relayed to all registered listeners. One such listener could be a database (part of the Knowledge component) responsible for storing all data.

It is worth noting that the digitalModel could also be a separate process that registers as a listener and consumes the STATUS messages. In this example, the direct reference in the Monitor class was used for better demonstration purposes.

The Analyze stage is also a DTDigitalThreadProcess and can be a (semi-)automated stage of the MAPE-K model in the context of the digital shadow. In this particular example, the Analyze stage serves a single purpose: verifying whether the received state from the physical twin aligns with the state of the digital model. The outcomes of this comparison can then be emitted to all registered listeners. One potential listener could be a service responsible for notifying a user if any disparities in states are detected. Nonetheless, independently of the MAPE-K model, the analysis of the monitored events could also be done manually by a user, since no further stage follows.

With these processes, the DigitalShadow schema can be defined. Since the MAPE-K example is only used for a better visualization of the concept, we use a more generic schema definition for the digital shadow:

DigitalShadow
digitalModel : DigitalModel©
DThreadProcesses : P DTDigitalThreadProcess
DTtoPTConnection : DTtoPTConnection©

Please notice that no data is sent from the digital shadow to the physical twin. The DTtoPTConnection solely receives data from the physical twin.

The Digital Twin

After defining and specifying the digital thread and digital shadow, the subsequent step is to comprehensively define the digital twin. The digital twin expands upon the digital shadow by enabling automatic synchronization of all alterations made to the digital model with the corresponding physical twin. This means that any changes made to the physical twin are mirrored in the digital twin, and vice versa. Ultimately, the digital twin evolves into a complete replica of the physical twin. To formulate this definition, we draw upon the digital twin definitions put forth by Saracco [41] and Trauer, Schweigert-Recksiek, Engel, et al. [28].
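Before turning to the digital twin definition, the Monitor stage described above can be sketched in Python as follows. handle_state and handle_measurements correspond to the handleState and handleMeasurements functions; the plumbing reuses the hypothetical Producer class sketched earlier and an EventStateMachine instance as the digital model.

```python
class Monitor:
    """Forwards STATUS events to the digital model and relays measurements."""
    def __init__(self, digital_model, producer) -> None:
        self.digital_model = digital_model     # EventStateMachine instance
        self.producer = producer               # emits results to listeners
        self.status_queue: list[str] = []
        self.measurement_queue: list[dict] = []

    def handle_state(self) -> None:
        # Every STATUS message updates the state machine; the resulting
        # state is emitted to all registered listeners.
        while self.status_queue:
            event = self.status_queue.pop(0)
            new_state = self.digital_model.process_event(event)
            self.producer.emit(new_state)

    def handle_measurements(self) -> None:
        # Measurements do not affect the state machine; relay them directly,
        # e.g. to a database acting as part of the Knowledge component.
        while self.measurement_queue:
            self.producer.emit(self.measurement_queue.pop(0))
```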
Definition 11 (Digital Twin). A digital twin is a digital model of a real entity, the physical twin. It is both a digital shadow reflecting the status/operation of its physical twin, and a digital thread, recording the evolution of the physical twin over time. The digital twin is connected to the physical twin over the entire life cycle for automated bidirectional data exchange, i.e. changes made to the digital twin lead to adapted behavior of the physical twin and vice versa.

Fig. 14: A digital twin realized with the MAPE-K reference model. The status change of the digital model and the corresponding data exchange from the Execution stage to the physical twin is fully automated.

Extending the system utilized in this example results in the addition of an extra communication channel from the digital twin to the physical twin, as illustrated in Figure 15. In the previously shown Figure 11, the digital shadow only facilitates communication from the physical twin to the digital shadow. Now, all modifications within the digital model are also transmitted from the digital twin to the physical twin.

Moreover, the MAPE-K model must be adapted to accommodate the digital twin, as depicted in Figure 14. The Monitor and Analyze stages in this new model are identical to those in the digital shadow, as shown in Figure 14. The Plan stage takes the analysis results and formulates an execution scenario for the Execution stage if changes to the physical twin are necessary. The key distinction from the original MAPE-K reference model lies in the digital twin, where the Execution stage interacts with the digital model: only if a positive result is returned is the command sent to the physical twin. Consequently, the digital model serves as the final control instance, and all incoming and outgoing changes are verified against the digital model.

Object-Z Formalization

The Object-Z formalization of the digital twin can be built upon the digital shadow, incorporating two additional stages of MAPE-K as mentioned previously. First, the Plan class is introduced:

Fig. 15: The digital twin extends the digital shadow in such a way that the communication between physical twin and digital twin is bidirectional. In addition to the communication from the physical twin to the digital twin, all changes in the digital twin are automatically sent to the physical twin.

All results generated during the planning stage are emitted via the Producer emitter. Similar to the other stages, the Plan stage has direct access to the digitalModel. However, in this example, no specific access details are provided. The primary objective of this stage is to formulate a plan outlining which part of the physical twin's software needs modification and how those modifications should be implemented. This task is executed through the plan function. All incoming data is consumed and subsequently passed to the Planning function. The resulting plan is then emitted to all registered listeners.
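Under the same assumptions as the earlier snippets, a minimal Python sketch of the Plan stage could look like this; the planning policy itself is invented for illustration.

```python
class Plan:
    """Consumes analysis results and formulates commands for Execute."""
    def __init__(self, digital_model, producer) -> None:
        self.digital_model = digital_model   # direct access, details omitted
        self.producer = producer             # emits plans to listeners

    def plan(self, analysis_result: dict) -> None:
        # Hypothetical policy: on a state mismatch, adjust the sample rate.
        commands: list[str] = []
        if analysis_result.get("state_mismatch"):
            commands.append("SET_PERIOD 10")
        for command in commands:
            self.producer.emit(command)      # consumed by the Execute stage
```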
The last DTDigitalThreadProcess is the Execute class, which is kept straightforward as well. It receives all plans from the previous stage through the execute function. The commands are validated against the digitalModel, and the outcome is sent to the physical twin. The transmitter producer emits the command as an event to the TransmitterDriver, which subsequently consumes this command and transmits it to the physical twin.

Please note that the concrete implementation of the digital model in this context is not critical. The digital model could exist as a separate process that receives events through consumers and provides responses via producers. Alternatively, it could collect all events from the Execute stage and independently transmit the results to the transmitter. There are numerous ways to realize this concept; however, the fundamental idea remains constant: changes to the digital model automatically trigger changes in the state of the physical twin, without requiring any user intervention.

↾(INIT)

Similar to the digital shadow, we again define a generic schema DigitalTwin without the MAPE-K processes. The schemas DigitalShadow and DigitalTwin look similar in this Object-Z formalization. The main difference is that the digital twin can send state changes automatically to the physical twin.

The Digital Twin Prototype

Today's existing modeling and simulation tools can rapidly create a digital twin of a single component or process, and publish/subscribe architectures allow all messages between processes to be captured and sent to a database or an IoT platform. However, complex Industry 4.0 applications require the integration of multiple sensors and actuators into a larger system, posing a challenge with no simple solution yet. The embedded community still uses various industrial interfaces and communication protocols such as ProfiBus, ProfiNet, ModBus, CANOpen, OPC-UA, or MQTT, to name a few. Some are proprietary, making integration difficult, for instance, ProfiBus and ProfiNet.

Robust software testing for communication protocols is challenging due to the difficulty of emulating or simulating them. Software engineers frequently use mock-up functions in unit tests to avoid the expensive networking exchange of data between processes, allowing them to obtain expected values. However, even robust unit testing with comprehensive edge case coverage is insufficient. Therefore, some approaches use simulation tools that replace the communication protocols between hardware components with software interfaces. For Industry 4.0 applications, both approaches are inadequate, as insufficient testing can jeopardize the safety of human operators. Despite this, simulation tools are crucial for the development of Industry 4.0 applications as a source of data for sensors and actuators.
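To make the mock-up testing mentioned above concrete, the following Python sketch replaces the networked call of a hypothetical driver with a mock so that a unit test runs without any hardware. The driver API shown here is an assumption for illustration, not taken from the paper's formalization.

```python
import unittest
from unittest.mock import MagicMock

class SensorDriverUnderTest:
    """Hypothetical driver whose connection normally talks to real hardware."""
    def __init__(self, connection) -> None:
        self.connection = connection

    def read_temperature(self) -> float:
        reply = self.connection.send_command("GET_TEMP")  # networked in reality
        return float(reply)

class TestSensorDriver(unittest.TestCase):
    def test_read_temperature_without_hardware(self) -> None:
        fake_connection = MagicMock()
        fake_connection.send_command.return_value = "21.5"
        driver = SensorDriverUnderTest(fake_connection)
        self.assertEqual(driver.read_temperature(), 21.5)
        fake_connection.send_command.assert_called_once_with("GET_TEMP")

if __name__ == "__main__":
    unittest.main()
```

As the surrounding text argues, such mocks bypass the Connection entirely, which is precisely why they cannot demonstrate that the real protocol stack works.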
The software part of the connection can be formalized as shown in the Communication schema. The physical part, however, where the data is sent between Device and DeviceDriver, cannot be replaced in the same way. Hence, the approach still involves real hardware in the development loop. During development and testing, the Connection object is the central piece. Without a counterpart, no command is executed, and no data is exchanged. Thus, engineers always require the hardware connected to the embedded software system they develop and test. Replacing the Connection with a software mockup to circumvent HIL would result in a different Connection object than the one used by the original SensorDriver. Thus, the configuration during development would differ from the real counterpart it is deployed on later. Furthermore, not all communication protocols used in industry are properly mockable. This can be demonstrated by the example of ModBus and OPC-UA applications on the OSI Model shown in Figure 17. Unlike Ethernet-based communication protocols that implement and cover all layers of the OSI Model, communication protocols based on serial connections, such as ModBus or CANOpen, are placed on the model's 7th layer, the Application Layer. No additional host layers exist. Sending/receiving data is handled immediately by the Data Link and Physical Layers. This means that the physical hardware handles the necessary actions required for data exchange. Mocking these layers is difficult. On the other hand, communication protocols based on TCP, such as OPC-UA, can easily be mocked by opening a socket on the TCP layer and connecting another device to it. For serial protocols, this is not true. On connection, the driver tries to establish a connection to another device via RS232. As no device is connected, this would fail, and a connection error would be thrown.

Replacing the entire physical twin during development and testing, which includes the hardware interfaces, leads to a fully virtual representation of the physical twin, and engineers do not necessarily need the hardware anymore for development. This is the main difference to the digital twin prototype definitions by Grieves and Vickers [20] and Grieves [39]. We define the digital twin prototype as follows:

Definition 12 (Digital Twin Prototype). A Digital Twin Prototype (DTP) is the software prototype of a physical twin. The configurations are equal, yet the connected sensors/actuators are emulated. To simulate the behavior of the physical twin, the emulators use existing recordings of sensors and actuators. For continuous integration testing, the DTP can be connected to its corresponding digital twin, without the availability of the physical twin.
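To illustrate the point above that TCP-based protocols can be mocked by simply opening a socket, consider the following Python sketch: a plain TCP listener stands in for a device endpoint, and a driver pointed at localhost connects successfully without any hardware. The single-request protocol is invented for the example; a serial RS232 driver has no analogous software-only endpoint to connect to.

```python
import socket
import threading
import time

def fake_device_server(port: int = 5000) -> None:
    """Accept one connection and answer a single request, like a stub device."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("127.0.0.1", port))
        server.listen(1)
        conn, _ = server.accept()
        with conn:
            if conn.recv(64) == b"GET_TEMP":
                conn.sendall(b"21.5")

threading.Thread(target=fake_device_server, daemon=True).start()
time.sleep(0.2)   # give the stub server time to start listening

with socket.create_connection(("127.0.0.1", 5000)) as client:  # the "driver"
    client.sendall(b"GET_TEMP")
    print(client.recv(64))   # b'21.5' -- no real device involved
```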
Object-Z Formalization

To reduce the dependency of the embedded software system on the hardware during development and testing, communication protocols such as RS232 need to stay on the host layers of the OSI Model without needing to change the original connection properties of a device driver. This circumvents the layers that include the hardware. However, rerouting the connection disconnects the device and its driver. The rerouting only works if another process exists at the other end of the connection. So far, there is none. That is why not only the connection has to be emulated, but also the device. To begin, the emulated connection is defined first. The Object-Z formalization for EmulatedConnection is as follows:

The EmulatedConnection object inherits from the abstract Connection class and thus has all its properties and functions. This is shown on the OSI Model in Figure 17. The safe way to stay in the host layers is to route all other communication protocols to TCP and from there again back to the original protocol. Hence, the EmulatedConnection does not replace the connection objects of Device and DeviceDriver. Instead, it is an independent additional connection that provides interfaces for a device emulator and a device driver to connect to with their original protocols. The EmulatedConnection then uses TCP and forwards all incoming data via the function EmulateRead and all outgoing data via the function EmulateWrite between the emulated device and device driver.

How can this be realized without reconfiguring the device or device driver? Simply by using tools such as socat (SOcket CAT) [42]. Socat is a command-line utility that allows for bidirectional data transfer between two endpoints, typically over a network or through pipes. It is similar to the more well-known tool netcat, but with support for multiple connection types and protocols (TCP, UDP, SSL, PTY, etc.). With two virtual serial ports (client and server) via socat for the emulator and the device driver, a connection can be established without the need to change the configuration. In the background, socat forwards the data between the ports via a TCP connection.
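One plausible way to set this up from Python is sketched below. The socat address types used here (tcp-listen, tcp, and pty with the raw, echo=0, and link= options) are standard socat features, but the concrete ports, paths, and option choices are assumptions that would have to be adapted to the system at hand.

```python
import subprocess

# Server side: listen on TCP port 5000 and expose a virtual serial port
# (a PTY) to which the device emulator connects with its original protocol.
server = subprocess.Popen([
    "socat",
    "tcp-listen:5000,reuseaddr",
    "pty,raw,echo=0,link=/tmp/ttyEmulator",
])

# Client side: expose a second virtual serial port for the unmodified device
# driver and forward all traffic to the TCP server above.
client = subprocess.Popen([
    "socat",
    "pty,raw,echo=0,link=/tmp/ttyDriver",
    "tcp:localhost:5000",
])

# The driver now opens /tmp/ttyDriver exactly as it would open a real RS232
# port, while socat routes everything through TCP in the background.
```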
A device emulator for a sensor could be like the one shown in Figure 18. Similar to the real sensor, the SensorEmulator inherits all properties and functions from the generic Device class. There is only one difference; instead of executing a command and responding with the real result, the emulator uses virtual context for the response. Virtual context can be a list of previously recorded data from the real device or context provided by a simulation. In this example, we assume that the virtual context is previously recorded data from the real device. Formalizing the emulated device and connection with Object-Z requires the definition of another data subtype first. Since the sensor responds to commands with a RESPONSE type, a subtype of RESPONSE named RECORDING can be defined:

RECORDING RESPONSE

The abstract class Emulator inherits all properties and functions from the abstract class Device, and SensorEmulator inherits from Emulator:

Emulator Device

Although it may seem more obvious to inherit from Sensor, the emulator cannot inherit its properties and functions from there. Most devices are a black box for the developer, and vendors only provide a technical manual and support to interact with the device. Thus, an emulator only mimics the behavior of the real counterpart and provides its API with corresponding return values. However, this is enough to replace the real device with the emulator for development and testing. A developer is mostly interested in the connection and data exchange part, not the internal behavior of a connected device. For abstraction reasons, the Sensor object in this example was very simple. That is why the SensorEmulator can also inherit all properties from Emulator and change the ExecuteCommand function to always return RESPONSE objects from the virtualContext set.

The SensorDriver remains as it is and does not need any changes. The communication between an emulator and the SensorDriver can be specified as follows using EmulatedCommunication: the EmulatedCommunication object now includes an additional Connection object in the form of EmulatedConnection. The communication from the emulator to the device driver, labeled as ToDrv, is now a composition of the connections from the device to the EmulatedConnection. From there, the data is sent to the device driver, where the EmulatedConnection receives it and forwards it to the connection defined by the device driver. The EmulatedConnection is not part of either the device/emulator or the device driver. Therefore, in this example, the SensorDriver cannot differentiate whether it is connected to a real device or an emulator, which is the goal of our approach.

Summary of the Digital Twin Concept

The relationships between the different concepts are illustrated in the UML diagram in Figure 19. We extended the semi-formal approaches by Yue, Arcaini, and Ali [17] and Becker, Bibow, Dalibor, et al. [18] for the digital twin and the digital shadow. The special feature of the digital twin prototype is that it is operated by the same Embedded Control System as the physical twin. This software does not even recognize whether physical hardware or emulated hardware is used. Notice that the Digital Model used by the digital twin prototype is a different instance than the Digital Model updated by the Digital Shadow. Advanced digital twins can use the Digital Twin Prototype to evaluate "what-if" questions in more realistic scenarios that include the full software stack.
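Returning to the sensor emulator described above, the idea can be sketched in Python as follows. The class names mirror the formalization; the recorded strings stand in for the RECORDING objects in the virtualContext, and the command set is invented.

```python
import random

class Device:
    """Abstract device accepting a fixed set of commands."""
    def __init__(self, command_list: set[str]) -> None:
        self.command_list = command_list

    def execute_command(self, command: str) -> str:
        raise NotImplementedError

class Emulator(Device):
    """Mimics a real device's API without modelling its internals."""
    def __init__(self, command_list: set[str],
                 virtual_context: list[str]) -> None:
        super().__init__(command_list)
        self.virtual_context = virtual_context   # previously recorded data

class SensorEmulator(Emulator):
    def execute_command(self, command: str) -> str:
        assert command in self.command_list
        # Respond with recorded data instead of a real measurement.
        return random.choice(self.virtual_context)

emulator = SensorEmulator({"GET_TEMP"}, ["21.4", "21.5", "21.6"])
print(emulator.execute_command("GET_TEMP"))   # e.g. '21.5'
```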
APPLICATION OF THIS CONCEPT

In the following, two projects are presented where the previous definitions and methods were already applied in real-life contexts.

Field Experiment with Underwater Ocean Observation Systems

The digital twin prototype approach was developed for a network of ocean observation systems and tested during the research cruise AL547 with RV ALKOR (October 20-31, 2020) of the Helmholtz Future Project ARCHES (Autonomous Robotic Networks to Help Modern Societies) [43]. In ARCHES, with a consortium of partners from AWI (Alfred-Wegener-Institute Helmholtz Centre for Polar and Marine Research), DLR (German Aerospace Center), KIT (Karlsruhe Institute of Technology), and GEOMAR (Helmholtz Centre for Ocean Research Kiel), several digital twin prototypes for ocean observation systems were developed. The major aim of this project was to implement robotic sensing networks, which are able to autonomously respond to changes in the environment by adapting their measurement strategy, both in space and in the deep sea. A field report on employing digital twin prototypes in this context is published by Barbie, Pech, Hasselbring, et al. [43].

Five digital twin prototypes of ocean observation systems constructed at AWI and GEOMAR were developed. They vary in construction, payload, and configuration. The distance between AWI and GEOMAR is a few hundred kilometers. Hence, the digital twin prototypes were used to develop the software without a permanent connection to the physical ocean observation systems. The microservices were implemented with ROS and encapsulated in Docker. How the different digital twin prototypes of the ocean observation systems were developed was described by Barbie, Hasselbring, Pech, et al. [44]. A special feature in this project was that the digital twin prototypes were used as digital twins of the physical twins underwater. The fully virtualized embedded software systems showed the state of the physical twins. This way, no extra software to run a digital twin was required.

Furthermore, with digital twin prototypes it was possible to develop and test scenarios before the mission took place. Automated testing is implemented through CI/CD in GitLab. During the mission, all exchanged messages on the digital twin and the digital twin prototype were recorded and can now be used to increase the quality of the CI/CD pipelines.

Case Study with Smart Farming Applications

As the digitalization of agricultural processes promotes the use of digital twins for various use cases [45], we also report on a case study that experimented with the digital twin prototype approach for a smart farming application.

The smart farming project SilageControl, with a consortium of the Silolytics GmbH (project lead), Blunk GmbH, and Kiel University, adopted the digital twin prototype approach for development and maintenance. The major goal of SilageControl is to improve the process of silage making, i.e. the fermentation of grass or corn in silage heaps. In order to avoid mold formation, the harvested crop is compacted by heavyweight tractors. As displayed in Figure 20, these tractors are equipped with a sensor bar, which includes GPS sensors, an inertial measurement unit (IMU), and a LiDAR. In combination, the sensors enable the continuous and accurate representation of the tractor's position/orientation and the shape and volume of the silage heap.
Since silage making is season-dependent, the digital twin prototype approach is used to improve the sensor platform independently of the current season. The first field experiments were conducted from May to October 2022. During this period, sensor data was recorded to further improve the accuracy of physical models and create scenarios for automated testing of future features. Thereby, data gathered by the digital twin improves the digital twin/digital twin prototype and vice versa. A case study with more details about this project was published by Barbie, Hasselbring, and Hansen [46].

CONCLUSION AND FUTURE WORK

Digital twins find applications across all layers in Industry 4.0 scenarios [44]. However, there exists confusion in the definitions of digital models, digital shadows, digital twins, and digital twin prototypes. While many studies attempt to list and categorize these differences, a formal description has been lacking. Therefore, in our digital twin concept, we formally specified the various components, ranging from the physical twin to the digital twin, culminating in a fully virtualized digital twin prototype capable of substituting the physical twin during development. To underscore the distinctions among these different facets of the digital twin from a software engineering standpoint, we provide an Object-Z formalization for each component.

We extended the digital twin concept with the Digital Template. A digital template describes the physical twin and is used to build it. It includes the physical twin's Digital Model, descriptive documents, and the Embedded Control Software.

We have provided real-world application examples to illustrate the practical context. A proof of concept for the formal specifications was provided in a demonstration mission showcasing the viability of digital twins in ocean observation systems [43]. Moreover, we offered insight into how this approach could be employed in the SilageControl smart farming project, which aims to enhance the silage-making process through the development of a sensing platform [46].

The usage of digital twin prototypes transforms the way embedded software systems are developed. By starting with the emulation of hardware sensor by sensor, actuator by actuator, and communication protocol by communication protocol, the development of embedded software systems becomes an iterative process. Furthermore, the integration of a fully operational digital twin prototype heralds a shift towards collaborative efforts between engineers and domain experts, regardless of their physical location or connection to the hardware.
Besides reducing the time needed for testing by switching from HIL to SIL testing with digital twin prototypes, this approach also avoids expenses for redundant hardware and paves the way for more efficient development workflows that are otherwise difficult to implement for embedded software systems. Digital twins become a key enabler for fully automated integration testing of embedded software systems in CI/CD pipelines. While building, testing, and releasing software is possible for embedded software just like in other fields of software engineering, integration testing with hardware interaction is expensive, due to the HIL testing, and is often done manually. Thus, the integration tests are a bottleneck in the verification and validation activities and, hence, in the release of new software. Nevertheless, with proper integration testing, developers increase the robustness of the embedded software systems. This may even embrace Industrial DevOps methods in the embedded field [3].

In summary, digital twins have the potential to enhance the quality of embedded software systems while concurrently reducing costs and accelerating development speed. These benefits align with the challenges cited by both Ebert [6] and Ozkaya [5], who identified the challenge of achieving quality while managing costs and efficiency.

Nevertheless, the digital twin community still has a lot of homework to do. The lack of a consensual definition of digital twins leaves a lot of room for interpretation of what a digital twin is. Instead of introducing abstract approaches that are described using an attached case study, researchers should focus more on formal approaches to demonstrate and distinguish different approaches. This may still lead to many different digital twin definitions, but at least the community is able to consolidate similar approaches and has a starting point to discuss differences, flaws, or benefits of different approaches. With the introduction of virtualization tools such as Docker and open platforms such as GitHub, the distribution of code and tools to replicate the results of a research study or to experiment with an approach has become easy and entails no extra costs.

The validation of research results and the reproducibility of experiments are integral aspects of good scientific practice [47]. However, replicating the conducted field experiments from our ARCHES demonstration mission or the SilageControl case study using similar hardware can be quite expensive. To facilitate independent replication of the digital twin prototype approach by engineers and other researchers, we have developed a digital twin prototype using cost-effective hardware, specifically a PiCar-X by SunFounder [48]. This digital twin prototype is based on the ARCHES Digital Twin Framework [49] and is publicly available on GitHub [50]. More comprehensive details about the PiCar-X digital twin prototype will be presented in a separate publication.

Figure 19 depicts the relationships: physical twin and digital twin exchange data via the PT-To-DT-Connection and DT-To-PT-Connection.

Fig. 4: The deployment diagram of an embedded system comprising a sensor, a data transmitter, and the embedded control system both are connected to. The sensor is connected via RS232 and the transmitter via Ethernet.
A Device comprises a Connection object and a set of accepted commands (commandList). The Connection object is responsible for managing data exchange between a Device and a DeviceDriver. The ExecuteCommand function represents the execution of a task following the transmission of a command to the Device. It expects a COMMAND object sent by the DeviceDriver and returns a RESPONSE object. The Read and Write functions make use of the corresponding functions provided by the contained Connection:

Device
↾(INIT, Send, Receive, commandList)
connection : ↓Connection©
commandList : P COMMAND
connection ∉ Connection
#commandList > 0

Producer
↾(INIT, event, Emit)
EventHandler
Emit
occuredEvent? : ↓DATA
eventToEmit! : ↓DATA
eventToEmit! = occuredEvent?

A Consumer registers via Observe to an EVENT, only listens to the emitted events, and handles them in a queue. The Consume function always returns the first element in the queue:

Consumer
↾(INIT, event, queue, Observe, Consume)
EventHandler
queue : P ↓DATA
INIT
queue = ∅
Observe
∆(queue)
item? : ↓DATA

In this particular example, Sensor and SensorDriver are interconnected using an RS232Connection. The outcome of an executed command is categorized as a RESPONSE, which can represent either a MEASUREMENT or a STATUS object. The remaining functions within these specific classes remain consistent with those in the abstract parent classes Device and DeviceDriver:

Sensor
↾(INIT, Send, Receive, commandList)
Device
connection : RS232Connection©
ExecuteCommand
command? : COMMAND
result! : RESPONSE
command? ∈ commandList

A SensorDriver inherits the EventHandlers from its parent class:

SensorDriver
↾(INIT, Send, Receive, commandList)
DeviceDriver
connection : RS232Connection©

Fig. 6: A state machine of the embedded control system formalized for the physical twin.

EventStateMachine
↾(INIT, ProcessEvent, state)
state : ↓STATE
period : Z
INIT
period = 0
ProcessEvent
∆(state)
newEvent? : COMMAND
newState! : ↓STATE
state′ = newState!

It is important to note that, at this stage, the EventStateMachine has no connection to the physical twin. All modifications and updates are made manually, and there is no automatic synchronization between the digital model and the physical twin. The schema for the digital model then includes the state machine:

DigitalModel
↾(INIT, ProcessEvent)
stateMachine : EventStateMachine©
INIT
stateMachine.INIT
ProcessEvent = stateMachine.ProcessEvent

Fig. 11: The digital shadow is deployed separately from the physical twin. The automated communication is unidirectional, from the physical twin to the digital shadow. Status changes and all other data are sent by the physical twin and received by the digital shadow via transmitters. The digital shadow can reuse the transmitter driver from the physical twin. The logic inside the digital shadow is based on the MAPE-K model.

Fig. 13: Reduced UML class diagram of the digital shadow. The MAPE-K stages Monitor and Analyze are included; all other classes and relationships are identical to the UML class diagram of the physical twin in Figure 5 on Page 7.
Fig. 16: UML class diagram of the digital twin, including only the MAPE-K-relevant classes Monitor, Analyze, Plan, Execute, and the EventHandlers used for data exchange. All other classes are identical to the UML class diagram of the digital shadow in Figure 13. This class is also a DTDigitalThreadProcess and includes a Consumer component to receive data from the Analyze stage.

Fig. 18: UML component diagrams for sensor and emulator components. The real SensorComponent in (a) can be replaced by an EmulatedSensorComponent (b), and the SensorDriver (c) cannot distinguish whether it is connected to the real sensor in (a) or the emulated one in (b).

Fig. 19: Relationships between physical twin, digital model, digital template, digital shadow, digital twin, and digital twin prototype.

Fig. 20: Sensor bar which monitors the process of silage making. (a) Sensor bar in a lab environment; (b) sensor bar mounted on a tractor.

Assume for this example that the DeviceDriver fully implements all interactions with the Device and hence the commandList for both instances is equal. The Receive and Send functions in this class also utilize the Connection's Read and Write functions. Any further implementations beyond this scope are not relevant to our specification. Data exchange between different processes, such as the DeviceDriver and the ControlLogic, occurs through EventHandlers. Similar to the Device class, the DeviceDriver class also contains a Connection object, a set of commands, a set of known behaviors, and a function that maps a behavior to the corresponding command that can be sent to the Device. Similar to the SensorDriver, the TransmitterDriver represents only a data relay between device and control logic:

Communication
device : Sensor
driver : SensorDriver
∀ x : device.commandList • x ∈ driver.commandList
∀ x : driver.commandList • x ∈ device.commandList
ReadFromDevice = device.Send ∥ driver.Receive
ReadFromDriver = driver.Send ∥ device.Receive

The Transmitter class is akin to the Sensor class in many ways. It handles incoming commands and provides responses in return. However, since the Transmitter is an actuator, it does not return measurements but instead sends data using another communication protocol, such as LoRaWAN. It is important to note that this communication differs from the Communication schema described earlier. Additionally, the Connection object solely represents the connection between the Device and the DeviceDriver and does not pertain to the communication between two transmitters.

A Physical Twin performs actions using real Devices in a Physical Environment. The Physical Environment is not a real class, but the real-world context in which the Device operates. Changing behaviors lead to changes in the current state of the physical twin. Hence, the physical twin updates its state and sends the change of state via the Digital Thread, which was named Twinning in Yue, Arcaini, and Ali [17], to the Digital Shadow. Different to the formalization by Yue, Arcaini, and Ali [17], the physical twin is not directly connected to the digital twin, but via the Digital Shadow, which is included in the digital twin. In our Object-Z formalization of the digital shadow and digital twin, we illustrated the difference utilizing the MAPE-K model and showed that the digital shadow does not send any data to the physical twin. All state changes are received by the digital shadow, which then changes the Digital Model. Only the Digital Twin updates state changes similar to the change of state of the Physical Twin. Instead of physical processes, the digital twin uses the Digital Model, which operates in a Virtual Environment, to change the physical twin's state. During the development phase, the Digital Twin Prototype can replace the physical twin. A digital twin prototype executes commands on Emulated Hardware in a Virtual Environment. The Virtual Environment should mirror the real world, which can be realized via a Simulation. To describe and construct the Physical Twin, its Digital Template can be used, since it includes the Digital Model and the Embedded Control Software.
18,138.4
2024-01-15T00:00:00.000
[ "Computer Science", "Engineering" ]
Structural characterization and antibacterial activity of silver nanoparticles synthesized using a low-molecular-weight Royal Jelly extract

In recent years, silver nanoparticles (Ag NPs) have gained widespread application in various fields of industry, technology, and medicine. This study describes the green synthesis of silver nanoparticles (Ag NPs) applying a low-molecular-weight fraction (LMF) of Royal Jelly, the nanoparticle characterization, and particularly their antibacterial activity. The optical properties of the NPs, characterized by UV-Vis absorption spectroscopy, showed a peak at ~430 nm. The hydrodynamic radius and concentration were determined by complementary dynamic light scattering (DLS) and nanoparticle tracking analysis (NTA). The particle morphology was investigated using transmission electron microscopy (TEM), and the crystallinity of the silver was confirmed by X-ray diffraction (XRD). The antibacterial activities were evaluated utilizing Gram-negative and Gram-positive bacteria and colony counting assays. The growth inhibition curve method was applied to obtain information about the corresponding minimum inhibitory concentrations (MIC) and the minimum bactericidal concentrations (MBC) required. The obtained results showed that (i) the sizes of the Ag NPs increase with increasing silver ion precursor concentration; (ii) DLS, in agreement with NTA, showed that most particles have dimensions in the range of 50-100 nm; (iii) E. coli was more susceptible to all Ag NP samples compared to B. subtilis.

Materials and methods

Materials. Raw royal jelly (RRJ) was supplied by the local Armenian beekeeping factory "Royal Jelly" LTD (Province Kotayk, Armenia) and stored at -20 °C. Silver nitrate was purchased from Carl Roth (Germany). The Luria-Bertani (LB) broth and other media components were purchased from Sigma-Aldrich and Carl Roth (Germany). All solutions were prepared in deionized water.

Preparation of Ag NPs. Ag NPs were prepared by reducing Ag⁺ ions using RJ extract as a source of reducing agents. The aqueous solution of RRJ at a concentration of 20 mg mL⁻¹ was centrifuged at 4500g for 40 min (Eppendorf, 5804R, Germany). The transparent supernatant was used in the subsequent synthesis process (Fig. 1). Ag NPs were synthesized by mixing RJ supernatant at a 1:1 volume ratio with 1 mM, 2.5 mM, and 5 mM solutions of AgNO₃, respectively, and stirring the mixture for 20 min before keeping it under lamplight (~230 V, 50 Hz, 11 W) for 6 h at room temperature. The NP samples synthesized using 1 mM, 2.5 mM, and 5 mM silver nitrate solutions were labelled in order as follows: Ag NPs-I, Ag NPs-II, Ag NPs-III. Further characterization and antibacterial activity assessment were conducted on all Ag NP samples as synthesized.

Ag NPs characterization. The formation of Ag NPs was confirmed by UV-Vis absorption spectra over a wavelength range of 280-780 nm at a resolution of 1 nm using a Nanodrop 2000C Spectrophotometer (Thermo Scientific, USA). Each sample of GS Ag NPs was diluted in deionized water, and a 50 μL volume of each suspension was loaded into a cuvette (UVette; Eppendorf, Germany) and applied for UV-Vis absorption spectrum measurements. The dynamic light scattering method was employed for the determination of hydrodynamic radii. Each sample of NPs diluted in deionized water was analysed using a SpectroSize 300 instrument (XtalConcepts, Germany) at 660 nm wavelength.
15 μL of each sample was loaded into a closed quartz high-precision cell (Hellma Analytics, Germany; light path: 1.5 × 1.5 mm). Twenty measurements, each of 20 s duration, were recorded, and the autocorrelation functions were analysed by the CONTIN algorithm 36. The hydrodynamic diameter and concentration of GS Ag NP suspensions were determined by a Nanosight LM10 nanoparticle tracking instrument (Malvern Panalytical, UK), injecting a diluted suspension volume of 300 μL of each sample into the flow cell. All dilutions were prepared in deionized water. Five particle tracking periods of 20 s duration each were recorded applying a 405 nm wavelength laser. The obtained data were averaged and normalised using the accompanying software. To determine the colloidal stability of GS Ag NPs, the Mobius device (Wyatt Technology, USA) was used to measure the zeta potential of five-fold diluted aqueous suspensions of the GS Ag NPs. The flow cell was loaded with ~180 µL of each sample. A 532 nm laser, 2.5 V voltage amplitude, and 10 Hz electric field frequency were applied.

In-depth analysis of the morphology, size distribution, and aggregation of GS Ag NPs was performed using transmission electron microscopy (TEM, JEM-2100-Plus, JEOL, Germany). For TEM measurements, a single drop of each sample was placed on freshly negatively charged Quantifoil copper grids (R1.2/1.3, 400 mesh, Science Services, Germany), incubated, blotted, and dried. Transmission electron microscopy images were recorded at an accelerating voltage of 200 kV. Additionally, selected area electron diffraction (SAED) patterns were collected. TEM experiments were conducted in the XBI Biolab of the European XFEL 37. X-ray diffraction (XRD) experiments were performed using a MAR345 image plate detector (detector diameter: 345 mm; pixel size: 0.15 mm; MARXperts GmbH, Germany) coupled with a micro-X-ray tube (IµS, INCOATEC GmbH, Germany) providing Cu-Kα radiation (λ = 1.54179 Å) at 50 kV and 1 mA current. Data were collected applying a dried sample on a silicon chip (Suna-Precision GmbH, Germany) with 600 s exposure time and 10° rotation per image, and a sample-to-detector distance of 76 mm. Sample crystallinity was determined by comparing observed patterns with standard powder patterns of the Joint Committee on Powder Diffraction Standards (JCPDS).

Bacterial strains, culture conditions, and growth determination. The present study was performed using Gram-negative and Gram-positive bacteria. Two wild-type strains were applied: Escherichia coli BL-21 and Bacillus subtilis sp. 168. Bacterial strains were kindly provided by Prof. Daniel Wilson (Institute of Biochemistry and Molecular Biology, University of Hamburg, Germany). E. coli and B. subtilis were cultivated in an LB broth medium (pH = 7.5) at 37 °C. The pH of the medium was measured with a pH-selective electrode (Mettler Toledo, Switzerland) and adjusted with 0.1 M NaOH or HCl. The specific growth rates after inoculation of the medium with bacteria were calculated using the equation below:

µ = (ln OD₂ − ln OD₁)/(t₂ − t₁),     (1)

where µ is the specific growth rate (h⁻¹), and OD₁ and OD₂ are two optical density (OD) values chosen on the growth timeline at a wavelength of 600 nm, corresponding to the time points t₁ and t₂ 38.

In vitro susceptibility test. Preliminary antibacterial activity screening of GS Ag NPs against the selected bacteria was performed by the Kirby-Bauer Disk Diffusion (DD) susceptibility method with some modifications 39. Overnight-grown bacterial strains were spread on LB-agar using a sterile cotton swab.
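As a brief aside, the specific growth rate defined by Eq. (1) above can be computed as in the short Python sketch below; the OD readings and time points are invented example values.

```python
import math

def specific_growth_rate(od1: float, od2: float, t1: float, t2: float) -> float:
    """mu = (ln OD2 - ln OD1) / (t2 - t1); result in h^-1 if t is in hours."""
    return (math.log(od2) - math.log(od1)) / (t2 - t1)

# Hypothetical OD600 readings taken 2 h apart during exponential growth:
mu = specific_growth_rate(od1=0.2, od2=0.8, t1=1.0, t2=3.0)
print(f"mu = {mu:.3f} h^-1")   # ~0.693 h^-1
```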
A sterile blank disk was used in the test. The disks were prepared as follows: filter paper disks of 6 mm diameter were sterilized by autoclaving and then loaded with 20 μL of the RJ LMF extract and of the Ag NP samples. After spreading the bacteria on the LB-agar surface, the disks were placed on the agar plates and incubated at 37 °C for 24 h. The inhibition zone was documented and measured after 24 h of incubation. The experiments were done in two replicates for each strain.

Evaluation of minimum bactericidal concentration (MBC) and minimum inhibitory concentration (MIC). The MIC and MBC of GS Ag NPs were examined as described by Loo et al. 25 with some modifications. The standard broth microdilution method was applied to perform the MIC determination in 96-well microtiter plates. Prior to the MIC test, the Ag NPs-II and Ag NPs-III samples were diluted to the same concentration as Ag NPs-I (54 µg mL⁻¹). Each solution of the GS Ag NP samples was mixed with LB-medium at a 1:1 volume ratio and then two-fold diluted. Negative (i.e. only medium) and positive (i.e. medium and bacterial inoculum) controls were applied within the experiments. The overnight-grown bacterial suspension was transferred into fresh LB-medium and diluted to an OD of ~0.1 measured at 600 nm wavelength (TECAN Reader Infinite 200 M Plex, Switzerland). The 96-well microtiter plates were incubated at 37 °C, and the OD changes of the bacterial suspensions were recorded at 600 nm wavelength for 24 h. The bacterial specific growth rate was determined as described above (Eq. 1). MIC values were calculated as the lowest concentration of Ag NPs that inhibited the growth of bacteria. The MBC test was performed on LB-agar plates to determine the lowest concentration that is bactericidal. After 24 h of growth, the suspension from each well and, when required, its serial dilutions (10¹-10⁸) were applied for determining the colony-forming ability by the drop plate (DP) method 40,41. 10 μL of each sample was transferred onto LB-agar plates and incubated at 37 °C. After 24 h of incubation, viable bacterial colonies were counted to determine the number of colony-forming units (CFUs). NP-free plates were incubated as controls under the same conditions. MBC values were determined as the lowest concentration at which no visible growth appeared on the LB-agar plates.

Data processing and statistical analysis. The average data and calculated standard deviations (SDs) are presented based on 3 independent experiments; where not shown, they do not exceed 3%. Multiple groups were compared by one-way analysis of variance (ANOVA); unless otherwise specified, p < 0.05 or less. ImageJ software was used to process TEM micrographs. Figures 1a and 7 were prepared using Biorender.com.

Ethics declarations. This article does not contain any studies with human or animal participants.

Results and discussion

UV-Vis spectra analysis. As the first level of GS NP characterization, UV-Vis spectroscopy is a relatively fast, simple, and sensitive method 42. Color changes of the reaction mixture indicated the reduction of Ag⁺ ions and the formation of Ag NPs 43. The brownish color of the solution attributed to Ag NPs is conditioned by the surface plasmon resonance (SPR) that results from an oscillation of electrons when they are in resonance with light waves 42. Ag NPs are known to exhibit a UV-Vis absorption maximum in the range of 400-500 nm 44. The absorption spectra of GS Ag NPs are shown in Fig. 2a.
The obtained results revealed a strong SPR band maximum at ~430 nm, which is characteristic for Ag NPs 25,45. During the synthesis, the transparent solution gradually changed its color to brownish, indicating the reduction of Ag⁺ ions and the formation of Ag NPs; the higher the concentration of AgNO₃, the darker the color of the final solution and the higher the absorbance. The SPR peak is sensitive to particle size and shape as well as to the dielectric medium and surroundings 42. A shift of the absorbance peak towards longer wavelengths indicates an increase in particle size 46.

Dynamic light scattering. DLS is a non-destructive light scattering analytical method widely applied in different fields of life and material sciences 47. The method depends on the interaction of light with particles and is based on the measurement of light intensity fluctuations over time due to particle Brownian motion 42,47. This allows for determining the diffusion coefficient (D), which is related to the hydrodynamic radius (R_h) of the particle through the Stokes-Einstein equation

R_h = k_B T / (6πηD),

where k_B is the Boltzmann constant (1.380 × 10⁻²³ kg m² s⁻² K⁻¹), T is the absolute temperature, and η is the viscosity of the medium 47. The particle size distribution data from DLS measurements are summarized in Fig. 2b. DLS results showed polydispersity in each sample. The obtained results revealed that the hydrodynamic radius of the NPs is mainly in the range of 30-100 nm. In this range, the radius increased with increasing silver nitrate concentration in the applied solution. The samples' polydispersity index (PDI) ranged from ~32% to ~34%, confirming a relatively broad particle size distribution of the sample suspensions.

Nanoparticle tracking analysis. NTA is a frequently applied technique for particle size and concentration determination in liquid samples. Laser light scattering microscopy in combination with a charge-coupled device (CCD) camera enables visualization and recording of particles in a solution 48. Individual particles moving under Brownian motion are identified and tracked to determine their speed of movement. The software calculates the hydrodynamic diameter of particles based on a modified Stokes-Einstein equation and also provides an approximate concentration of particles per volume 49 (Fig. 2c). In agreement with the DLS measurement results, NTA showed that most particles are in the size range of 50-100 nm. The number of particles per sample increased between Ag NPs-I (dilution: 1:25) and Ag NPs-II (dilution: 1:50), while it was relatively constant comparing Ag NPs-II and Ag NPs-III (dilution: 1:50). Averaged values with the standard error of the particle concentration are presented in Table 1.

Zeta potential evaluation. Zeta potential determination was performed to assess the stability of particles in a suspension based on their surface charge 50. A large positive or large negative zeta potential value indicates high stability of the particles, conditioned by substantial repulsive forces that also prevent aggregation 51. NPs with zeta potential values of less than +25 mV and greater than −25 mV tend to form aggregates mediated by the interparticle interactions 52. Zeta potential measurements showed that all GS Ag NPs are positively charged.

Transmission electron microscopy. TEM is a frequently used technique for the characterization of the morphology and size of nanomaterials 42. TEM micrographs, shown in Fig. 3, revealed that all samples of GS Ag NPs contain single as well as clustered particles.
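As a numerical aside to the Stokes-Einstein relation used in the DLS analysis above, a quick check is sketched below in Python; the diffusion coefficient is an invented example value, and the viscosity of water at room temperature is assumed for the medium.

```python
import math

K_B = 1.380e-23   # Boltzmann constant, kg m^2 s^-2 K^-1

def hydrodynamic_radius(diffusion: float, temperature: float,
                        viscosity: float) -> float:
    """Stokes-Einstein: R_h = k_B T / (6 pi eta D); result in metres."""
    return K_B * temperature / (6 * math.pi * viscosity * diffusion)

# Hypothetical inputs: D = 4.9e-12 m^2/s, T = 298 K, eta(water) ~ 0.89 mPa s
r_h = hydrodynamic_radius(4.9e-12, 298.0, 0.89e-3)
print(f"R_h = {r_h * 1e9:.0f} nm")   # ~50 nm, within the measured range
```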
The Ag NPs-I sample contained more branched or clustered particles compared to the other sample suspensions. The size and growth of clusters are conditioned and controlled by interfacial chemical reactions and the particle transport mechanism 53. Many studies provide evidence that smaller particles are more susceptible to aggregation 54. As the particle size decreases, the surface energy increases, causing a change in surface reactivity and system destabilization 55. Aggregation will lower the system's free energy, thus stabilizing it 53. Another crucial factor for the growth of particles is mass transfer. It is controlled by the rate of stirring of the reaction mixture 56. Various studies showed that the stirring rate can reduce the NP synthesis duration.

The preliminary experiments to evaluate the antibacterial activity of GS Ag NPs were DD experiments. A further investigation of the antibacterial activity was performed by determining the MIC and MBC. The MIC of Ag NPs was defined as the lowest concentration at which significant inhibition of bacterial growth was achieved. E. coli revealed an MIC value of 13.5 µg mL⁻¹ for Ag NPs-I and 6.75 µg mL⁻¹ for both Ag NPs-II and Ag NPs-III, with more potent inhibition observed in the case of Ag NPs-III. Ag NPs-II inhibited the growth of bacteria for ~6 h, after which the bacteria started to recover, while in the case of Ag NPs-III the growth was inhibited for ~9 h (Fig. 6). This difference can be conditioned by the presence of clustered particles in Ag NPs-II, whereas Ag NPs-III contained more single particles, which demonstrate higher reactivity and antibacterial activity. The MIC for B. subtilis was 27 µg mL⁻¹ for all three suspensions of Ag NPs, with the inhibition duration increasing from Ag NPs-I to Ag NPs-III. The specific growth rates, expressed in h⁻¹ and calculated for the exponential growth interval for both bacteria, are summarized in Fig. 5.

The lowest concentration of NPs that was bactericidal, i.e. that showed no growth on agar plates, was selected as the MBC. In this way, for E. coli the MBC was 27 µg mL⁻¹ in the case of Ag NPs-I and 13.5 µg mL⁻¹ for both Ag NPs-II and Ag NPs-III. In the case of B. subtilis, Ag NPs-II and Ag NPs-III showed bactericidal activity at 54 µg mL⁻¹ concentration, while Ag NPs-I exhibited only an inhibitory effect, since the growth recovered (Fig. 6). The results showed that E. coli is more susceptible to all prepared Ag NP samples than B. subtilis. Ag NPs-III demonstrated higher antibacterial activity compared to the other sample suspensions of NPs. This can be conditioned by the presence of small-sized particle fractions, which can easily penetrate through the cell wall and cause disorders in cell functions such as permeability, respiration, and apoptosis 63.

Antimicrobial properties of silver and its organic and inorganic compounds against a wide range of microorganisms have been well known for centuries, dating back to ancient times 64,65. The increasing rate of multiple antibiotic resistance among bacterial strains to traditional antibiotics, including penicillin, biomycin, and others, prompted the development of Ag NPs as antibacterial agents in recent years 63,66. Various studies showed that the size, shape, aggregation, or agglomeration of Ag NPs, which are mainly dependent on the synthesis conditions of the NPs, i.e. temperature, silver ion precursor concentration, pH, etc., play a crucial role in their toxicity and biological activity 67.
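The MIC readout logic just described can be expressed compactly, as in the Python sketch below. The two-fold dilution series starts from the 54 µg mL⁻¹ stock stated in the methods; the inhibition pattern is a hypothetical stand-in that reproduces the E. coli readout for Ag NPs-I, not measured data.

```python
def twofold_dilution_series(start_ug_per_ml: float, wells: int) -> list[float]:
    """Concentrations across a plate row after repeated 1:2 dilution."""
    return [start_ug_per_ml / 2 ** i for i in range(wells)]

def mic(concentrations: list[float], inhibited: list[bool]) -> float:
    """MIC = lowest tested concentration that still inhibits growth."""
    return min(c for c, ok in zip(concentrations, inhibited) if ok)

series = twofold_dilution_series(54.0, 8)   # 54, 27, 13.5, 6.75, ... ug/mL
readout = [c >= 13.5 for c in series]       # hypothetical inhibition pattern
print(mic(series, readout))                 # 13.5
```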
Even though many publications have been devoted to understanding the mechanism of action of Ag NPs, the exact process remains unclear. Figure 7 illustrates several possible mechanisms of the antibacterial activity of Ag NPs. However, the data available in the scientific literature allow two main mechanisms of action of Ag NPs to be distinguished. The first is interaction with bacterial cell membranes, through which the NPs perforate and penetrate the cells. The NPs can accumulate on the surface and inside the bacterial cell, where they can bind to components of the cellular machinery, thus damaging essential cell functions and destroying the cell63. Besides that, Ag NPs can trigger the release of reactive oxygen species, leading to cell death68. The thinner cell membrane of Gram-negative bacteria makes them more susceptible to Ag NPs compared to Gram-positive bacteria70. Ag NPs can easily penetrate bacterial cells through the porins in the outer membrane of Gram-negative bacteria63. The second possible mechanism of action of the NPs is the release of silver ions. The NPs can attach to the cell wall due to the electrostatic interaction between the positively charged Ag NPs and the negatively charged surface of the cell membrane. This kind of interaction can cause structural changes in the cell membrane, change its permeability, dissipate the proton motive force, and thus destroy the membrane63. In addition, bacterial growth inhibition can occur through the deposition of silver ions into the vacuole and cell walls as granules71. Lara et al.69 reported the interaction of Ag NPs with DNA compounds. According to earlier reports, Ag+ ions released from NPs can inhibit DNA replication through DNA double-strand breakage and can inactivate proteins by binding to them, potentially leading to cell death72. Therefore, the question of which is the leading player in the antibacterial effect of Ag NPs, the silver ions or the NPs themselves, is still open. However, it is evident that NPs are highly valuable compounds and that the approach of designing NPs is of great interest as a complement and alternative to antimicrobial drug discovery and development.

Conclusions
The presented data and results confirm that the low-molecular-weight fraction of the RJ extract can be used for the green synthesis of Ag NPs. Complementary DLS and NTA experiments on the NP suspensions revealed the nanoscale size of the particles. Spherical and clustered NPs were observed using TEM. The crystalline nature of all NP samples was confirmed by SAED and XRD. Furthermore, in addition to the silver content, the XRD results suggested the presence of silver oxide and organic materials. The latter can be related to the composition of the low-molecular-weight fraction of the RJ extract. The concentration of silver ion precursors during synthesis plays a critical role in the formation, stability, and functional activity of the synthesized NPs. In the case of lower concentrations of silver nitrate, the particles form larger clusters to become more stable, which however leads to particle aggregation and a decrease in their antibacterial activity. GS Ag NPs demonstrated antibacterial activity against one representative species each of Gram-positive and Gram-negative bacteria. In this context, the Gram-negative bacterium was more susceptible to all applied NPs.
The high and growing demand for NPs requires eco-friendly approaches that reduce environmental footprints and, preferably, use biocompatible and sustainably produced substances; this is attainable with the cost-effective RJ-mediated synthesis approach presented here.

Data availability
All data generated and/or analysed during this study are included in this article.
Study on structural geometry and dynamic property of [NH3(CH2)5NH3]CdCl4 crystal at phases I, II, and III

Organic-inorganic hybrid perovskites can potentially be used in electrochemical devices, such as batteries and fuel cells. In this study, the structure and phase transition temperatures of the organic-inorganic material [NH3(CH2)5NH3]CdCl4 were confirmed by X-ray diffraction and differential scanning calorimetry. From the nuclear magnetic resonance results, the crystallographic configurations of 1H, 13C, and 14N in the cation changed at temperatures close to TC1 (336 K), whereas that of 113Cd in the anion showed significant changes at temperatures close to TC1 and TC2 (417 K). The activation energy, Ea, values for 1H and 13C obtained from the spin-lattice relaxation time, T1ρ, below and above TC1 were evaluated; the Ea values for 13C indicated greater flexibility at low temperatures than at high temperatures. In addition, the effect on molecular motion was pronounced at high temperatures. The phase transition at 336 K was associated with the change in the N-H···Cl bond due to the change in the coordination geometry of Cl around Cd in the CdCl6 anion. On the other hand, the phase transition at 417 K was related to the ferroelastic phase transition attributed to the twin domains.

The synthesis and characterization of [NH3(CH2)5NH3]CdCl4 were first discussed by Kind et al.26, where the structural phase transitions were studied using 35Cl and 2D nuclear magnetic resonance (NMR), birefringence, dilatation measurements, and optical domain investigations. Negrier et al.15 evaluated the crystal structures via X-ray diffraction (XRD) and Raman scattering experiments at 293 K and 353 K. Our group has also recently reported the effects of the carbon chain length in the cations of [NH3(CH2)2NH3]CdCl4, [NH3(CH2)3NH3]CdCl4, and [NH3(CH2)4NH3]CdCl4 crystals on the thermal and structural dynamic properties13. Meanwhile, considerable research has been devoted to the electrical and conductive properties of this type of compound16,27-30. Here, the crystal structures, thermodynamic properties, and ferroelastic domain walls of [NH3(CH2)5NH3]CdCl4 were investigated. The roles of the cations and anions in the [NH3(CH2)5NH3]CdCl4 single crystal were discussed, and the chemical shifts and the spin-lattice relaxation time, T1ρ, with increasing temperature were measured using 1H magic angle spinning (MAS) NMR, 13C MAS NMR, and static 14N NMR to identify the roles of the [NH3(CH2)5NH3] cation. Furthermore, the 113Cd MAS NMR chemical shifts were recorded to evaluate the coordination geometry of the CdCl6 anion. The results provide insights into the physicochemical properties of [NH3(CH2)5NH3]CdCl4 crystals, facilitating their various applications in the future.

The structures of the [NH3(CH2)5NH3]CdCl4 crystals at 298 K were analyzed using an XRD system. The lattice parameters and space group were determined by single-crystal XRD at the Seoul Western Center of the Korea Basic Science Institute. Experiments were performed in the same manner as before31. Differential scanning calorimetry (DSC) (TA, DSC 25) experiments were carried out at a heating rate of 10 K/min from 190 to 550 K in N2 gas. Thermogravimetric analysis (TGA) and differential thermal analysis (DTA) curves were obtained using a thermogravimetric analyzer (TA Instruments) at the same heating rate as in DSC, from 300 to 973 K in N2 gas.
In addition, the domain patterns were observed using an optical polarizing microscope within the temperature range of 300 to 450 K, where the prepared single crystals were placed on the plate with the temperature sensor of a Linkam THM-600. NMR spectra of the [NH3(CH2)5NH3]CdCl4 crystals were recorded using a Bruker 400 MHz Avance II+ solid-state NMR spectrometer in the same facility. The Larmor frequencies for the 1H and 13C MAS NMR experiments were 400.13 and 100.61 MHz, respectively. In the MAS NMR experiments, the spinning speed was set to 10 kHz to minimize sidebands, and tetramethylsilane (TMS) was used as a standard material to obtain accurate NMR chemical shifts. The experimental method to obtain the T1ρ values for 1H and 13C was the same as the previously reported method13. Static 14N NMR experiments were also carried out.

Experimental results
Crystal structure. The powder XRD pattern of the [NH3(CH2)5NH3]CdCl4 crystal at 298 K is shown in Fig. 1. The lattice constants analyzed from the X-ray crystal diffraction were determined to be a = 7.3292 ± 0.002 Å, b = 7.5058 ± 0.002 Å, and c = 23.9376 ± 0.006 Å with the space group Pnam; this is consistent with the previously reported results14,15.

Phase transition temperature, thermal property, and ferroelastic twin domain. The DSC curves of the [NH3(CH2)5NH3]CdCl4 crystal at a heating and cooling rate of 10 K/min in N2 gas are presented in Fig. 2. Two endothermic peaks were observed at 336 K (TC1) and 418 K (TC2) during heating, whereas two exothermic peaks were recorded at 327 K (TC1′) and 407 K (TC2′) during cooling. The phase transition enthalpies on heating are 3.17 kJ/mol at 337 K and 0.55 kJ/mol at 417 K. On the other hand, previous studies reported endothermic peaks at 337 K and 417 K during heating and at 336 K and 407 K during cooling14,15. To determine the preliminary thermal characteristics, including the structural phase transitions, TGA and DTA experiments were conducted at the same heating rate as the DSC experiment. Based on the TGA and DTA curves shown in Fig. 3, the crystal exhibited excellent stability up to approximately 600 K. The small inflection points observed at temperatures near 336 K and 417 K in the DTA curve coincided with the two phase transition temperatures obtained from the DSC results, while the TGA curve showed that the mass of [NH3(CH2)5NH3]CdCl4 decreased with increasing temperature. The amount of crystal remaining in the solid state was evaluated from the molecular weights. The 10% and 20% weight losses of the crystal at temperatures of about 617 K and 626 K were attributed to the decomposition of HCl and 2HCl, respectively. On the other hand, the weight losses at approximately 800 K and 900 K shown in Fig. 3 reached 46% and 87%, respectively. A single crystal with ferroelastic properties exhibits two or more orientation states even in the absence of mechanical stress, since mechanical stress can change the existing orientation state of the single crystal. Polarized microscopy observations revealed the ferroelastic domain structures of the crystal and their changes at the phase transition temperatures, as shown in Fig. 4. The domain pattern represented by parallel lines was not observed in phases III (300 K, Fig. 4a) and II (403 K, Fig. 4b). No change in the behavior of the crystal was observed at TC1.
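The attribution of the ~10% and ~20% weight losses to the departure of one and two HCl units can be checked directly against the formula weight of [NH3(CH2)5NH3]CdCl4 (C5H16N2CdCl4). The short Python sketch below performs this bookkeeping; the atomic masses are standard values, and the calculation is an arithmetic check rather than part of the reported analysis.

```python
# Mass-loss check: fraction of the formula weight of [NH3(CH2)5NH3]CdCl4
# (C5 H16 N2 Cd Cl4) carried by one and two HCl units.
ATOMIC_MASS = {"H": 1.008, "C": 12.011, "N": 14.007, "Cl": 35.45, "Cd": 112.41}

def formula_weight(composition):
    """Formula weight (g/mol) from an {element: count} dictionary."""
    return sum(ATOMIC_MASS[el] * n for el, n in composition.items())

crystal = {"C": 5, "H": 16, "N": 2, "Cd": 1, "Cl": 4}
hcl = {"H": 1, "Cl": 1}

fw = formula_weight(crystal)
for n in (1, 2):
    loss = 100.0 * n * formula_weight(hcl) / fw
    print(f"loss of {n} HCl: {loss:.1f}% of {fw:.1f} g/mol")
# Prints ~10.2% and ~20.3%, consistent with the ~10% and ~20% steps in the TGA curve.
```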
However, in phase I, twinning occurred in the crystal at temperatures above TC2, resulting in a highly dense domain pattern indicated by the red circle (Fig. 4c). At 433 K, new domain walls indicated by the blue circles formed next to the parallel domain walls (Fig. 4d). The phase transition at TC2 occurred due to the ferroelastic twin domains. The [NH3(CH2)5NH3]CdCl4 crystal exists in the following phases: monoclinic (2/m) at temperatures above 417 K, orthorhombic (mmm) at temperatures between 417 and 337 K, and orthorhombic (mmm) at temperatures below 337 K. According to Aizu32 and Sapriel33, for the transition from the mmm point group of the orthorhombic phase II to the 2/m point group of the monoclinic phase I, the domain wall directions can be determined.

1H MAS NMR spectrum. The 1H MAS NMR spectra are shown in Fig. 5 as a function of temperature. At low temperatures, only one resonance signal was observed. These resonance signals were asymmetric due to the overlapping 1H lines of NH3 and CH2 in the [NH3(CH2)5NH3] cations. At 180 K, a single resonance line was present at a chemical shift of 9.04 ppm. The line width and full-width at half-maximum (FWHM) at this temperature were also different from those represented by symbol "1" at 2.97 ppm and by symbol "2" at 6.07 ppm, respectively. At 330 K, which is close to TC1, the NMR spectrum split into two resonance lines, with chemical shifts of 7.56 and 2.58 ppm for NH3 and CH2, respectively. The spinning sidebands are marked with crosses and open circles. Here, phases I, II, and III are plotted in olive, red, and black, respectively. The 1H chemical shifts of NH3 and CH2, indicated by the dotted lines in Fig. 5, were almost independent of temperature. These results suggest that the environments surrounding the 1H nuclei of NH3 and CH2 did not change with temperature.

13C MAS NMR spectrum. The 13C chemical shifts at increasing temperature for the in situ MAS NMR spectra are shown in Fig. 6. The TMS reference signal at 300 K, recorded at 38.3 ppm, was used as the standard for the 13C chemical shifts. In the [NH3(CH2)5NH3] cation, the CH2 located close to NH3 was designated C-3, the CH2 located at the center was designated C-1, and the CH2 located between C-3 and C-1 was designated C-2. The structure of the cation for this crystal is shown in the inset of Fig. 6. At 300 K, the 13C chemical shifts were recorded at 28.26, 25.90, and 41.67 ppm for C-1, C-2, and C-3, respectively. The FWHM values for 13C NMR at 300 K were 6.20, 5.72, and 9.06 ppm for C-1, C-2, and C-3, respectively. The line width of C-3, located close to N, was wider than those of C-1 and C-2. The chemical shifts changed at temperatures close to TC1 (336 K), but not at temperatures close to TC2 (417 K). Below TC1, all 13C positions showed positive chemical shift changes with increasing temperature. Above TC1, the chemical shift of C-2 was almost independent of temperature, while the shifts of C-1 and C-3 moved in the negative and positive directions, respectively. These results show that, below TC1, the environments surrounding all 13C nuclei change with temperature. At temperatures above TC1, the environment of C-2 did not change. However, the chemical shifts of C-1 and C-3 changed continuously over the whole temperature range, including TC1 and TC2.

14N NMR spectrum. The static 14N NMR spectra at increasing temperatures are shown in Fig. 7. Despite the presence of intense background noise due to the very low NMR frequency (28.90 MHz), the 14N spectrum was obtained without difficulty.
Here, the crystal was oriented in an arbitrary direction with respect to the magnetic field. Below TC1, six resonance lines forming three pairs were observed at increasing temperatures. At temperatures close to 336 K (TC1), the number of resonance lines and the resonance frequencies of the NMR spectra showed abrupt changes. At TC1, a reduction from three pairs to two pairs of NMR lines was observed. At TC2, another pair of NMR lines reappeared. Below TC1, the resonance frequencies increased with increasing temperature, and above TC1 they decreased with increasing temperature. At TC2, only the number of resonance lines changed, and the resonance frequencies showed almost continuous values. Symbols with the same color indicate the same pairs of 14N. The changes in the 14N resonance frequencies with temperature are related to changes in the crystallographic configuration of the crystal.

113Cd NMR spectroscopy. The changes in the in situ 113Cd MAS NMR spectra are shown in Fig. 8. The 113Cd chemical shift at 300 K was 323.19 ppm. As the temperature increased, the 113Cd chemical shifts moved slightly in the negative direction, but these chemical shifts changed discontinuously near TC1 and TC2. In particular, larger changes were observed at temperatures near TC2 than near TC1, suggesting that temperature affected the environments around Cd. This shows that the coordination geometry of the six Cl around the Cd ions in the CdCl6 octahedra, shown in the inset of Fig. 8, changes at the phase transition temperatures.

1H and 13C spin-lattice relaxation times. The 1H MAS NMR and 13C MAS NMR spectra were obtained with increasing delay times, and the plot of spectral intensities against delay times was fitted by an exponential function. The decay of the spin-locked proton and carbon magnetization is governed by the spin-lattice relaxation time, T1ρ, as34,35: P_H(C)(τ) = P_H(C)(0) exp(−τ/T1ρ), where P_H(C)(τ) and P_H(C)(0) are the signal intensities for the proton (carbon) at time τ and at τ = 0, respectively. The 1H T1ρ values of NH3 and CH2 at several temperatures were determined from the slopes of the logarithmic plots of intensity against delay time. From the slopes of their recovery curves, the 13C T1ρ values for C-1, C-2, and C-3 were determined. The 1H T1ρ and 13C T1ρ values are shown in Fig. 9 as a function of inverse temperature. The 1H T1ρ values increased rapidly from 100 to 1000 ms. While the slope of the T1ρ values changed at temperatures near TC1, the slope near TC2 remained essentially continuous. Above TC1, the 1H T1ρ value for NH3 showed a decreasing trend. The activation energy, Ea, values for 1H in NH3 and CH2 were evaluated from the slopes (represented by the solid lines in Fig. 9) of their log T1ρ versus 1000/T plots. The Ea values below TC1 were 6.65 ± 0.40 kJ/mol and 8.60 ± 2.32 kJ/mol for NH3 and CH2, respectively, while the Ea values above TC1 were 2.85 ± 0.96 kJ/mol and 3.49 ± 1.47 kJ/mol for NH3 and CH2, respectively. The 13C T1ρ values increased gradually with increasing temperature below TC1 and then increased rapidly above TC1. Near TC2, the T1ρ values were almost continuous, showing no significant changes. The Ea values of C-1, C-2, and C-3 below TC1, obtained from the plots of log T1ρ versus 1000/T, were 1.73 ± 0.58 kJ/mol, 1.33 ± 0.49 kJ/mol, and 1.36 ± 0.76 kJ/mol, respectively.
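In the fast-motion regime quoted below, T1ρ depends exponentially on 1/T, so Ea follows from a straight-line fit of ln T1ρ against 1/T, exactly as in the log T1ρ versus 1000/T plots just described. A minimal Python sketch of such a fit is given here; the (temperature, T1ρ) pairs are hypothetical placeholders, not the measured data behind Fig. 9.

```python
import numpy as np

R = 8.314e-3  # gas constant, kJ mol^-1 K^-1

def activation_energy(T, T1rho):
    """Ea (kJ/mol) from an Arrhenius-type fit of ln(T1rho) versus 1/T.

    T     -- temperatures in K (all within one motional regime)
    T1rho -- spin-lattice relaxation times in ms
    With T1rho varying as exp(+-Ea/(R*T)), ln(T1rho) is linear in 1/T
    and |slope| * R gives the activation energy."""
    slope, _ = np.polyfit(1.0 / np.asarray(T), np.log(np.asarray(T1rho)), 1)
    return abs(slope) * R

# Hypothetical data points, for illustration only.
T = [260.0, 280.0, 300.0, 320.0]
T1rho = [180.0, 240.0, 310.0, 390.0]
print(f"Ea = {activation_energy(T, T1rho):.1f} kJ/mol")
```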
The Ea values of C-1, C-2, and C-3 above TC1 were 3.04 ± 1.38 kJ/mol, 5.57 ± 1.04 kJ/mol, and 0.97 ± 1.43 kJ/mol, respectively. The behavior of T1ρ for random motions with a correlation time, τC, can be described in terms of fast- and slow-motion regimes. The 1H and 13C T1ρ values at low and high temperatures correspond to the fast-motion regime, where ω1τC ≪ 1 and T1ρ−1 ∝ exp(Ea/kBT). In contrast, the 1H T1ρ values of NH3 at high temperatures are attributed to the slow-motion regime, where ω1τC ≫ 1 and T1ρ−1 ∝ ω1−2 exp(Ea/kBT).

Conclusion
The structure and phase transition temperatures of the [NH3(CH2)5NH3]CdCl4 crystal were confirmed using XRD and DSC. Based on the NMR analysis of the crystal, we deduced that the crystallographic surroundings of 1H, 13C, and 14N in the cation changed at temperatures close to TC1, whereas those of 113Cd in the anion exhibited significant changes at temperatures close to both TC1 and TC2. The changes in the NMR chemical shifts near TC1 and TC2 also suggest that the N-H···Cl hydrogen bond is affected. On the other hand, the T1ρ values of 1H in NH3 changed from fast to slow motion near TC1. The T1ρ values of 13C in CH2 increased rapidly at TC1, and the Ea values for 13C indicated greater flexibility at low temperatures than at high temperatures. By evaluating the T1ρ values, we deduced that the effect on the molecular motion was pronounced at high temperatures.
Equivalence checking and intersection of deterministic timed finite state machines

There has been a growing interest in defining models of automata enriched with time, such as finite automata extended with clocks (timed automata). In this paper, we study deterministic timed finite state machines (TFSMs), i.e., finite state machines with a single clock, timed guards and timeouts, which transduce timed input words into timed output words. We solve the problem of equivalence checking by defining a bisimulation from timed FSMs to untimed ones and vice versa. Moreover, we apply these bisimulation relations to build the intersection of two timed finite state machines by untiming them, intersecting the untimed machines, and transforming the result back to the timed intersection. It is known that many problems like inclusion and equivalence checking are undecidable for timed automata. Our results show that TFSMs correspond to a decidable subclass of timed automata that admits a restricted form of ε-transitions (i.e., timeouts) where most of the relevant problems, like equivalence and intersection, are decidable.

Introduction
Finite automata (FA) and finite state machines (FSMs) are formal models widely used in the practice of engineering and science, e.g., in application domains ranging from sequential circuits, communication protocols, embedded and reactive systems, to biological modelling. Since the 90s, the standard classes of FA have been enriched with the introduction of time constraints to represent more accurately the behaviour of systems in discrete or continuous time. Timed automata (TA) are such an example: they are finite automata augmented with a number of resettable real-time clocks, whose transitions are triggered by predicates involving clock values [3]. More recently, timed models of FSMs (TFSMs) have been proposed in the literature through the introduction of time constraints such as timed guards or timeouts. Timed guards restrict the input/output transitions to happen within given time intervals. The meaning of timeouts is the following: if no input is applied at a current state for some timeout period, the timed FSM moves from the current state to another state using a timeout function; e.g., timeouts are common in telecommunication protocols and systems. For instance, the timed FSM proposed in [25,18,19] features: one clock variable, time constraints to limit the time elapsed at a state, and a clock reset when a transition is executed. Instead, the timed FSM proposed in [32,27] features: one clock variable, time constraints to limit the time elapsed when an output has to be produced after an input has been applied to the FSM, a clock reset when an output is produced, and timeouts.

[Figure 1: Comparison of TFSM models.]

In [12] the following models of deterministic TFSMs with a single clock were investigated: TFSMs with only timed guards, TFSMs with only timeouts, and TFSMs with both timed guards and timeouts. The problem of equivalence checking was solved for all three models, their expressive power compared, and subclasses of TFSMs with timeouts and with timed guards equivalent to each other were characterized (see Fig.
1 from [12] for a diagram showing the expressivity hierarchy of TFSMs with timed guards and timeouts, TFSMs with only timed guards, TFSMs with only timeouts, loop-free TFSMs with timeouts, TFSMs with LCRO (Left Closed Right Open) timed guards, and finally untimed FSMs). Equivalence checking was obtained by introducing relations of bisimulation that define untimed finite state machines whose states include information on the clock regions, such that the timed behaviours of two timed FSMs are equivalent if and only if the behaviours of the companion untimed FSMs are equivalent. This operation is reminiscent of, and stronger than, the region graph construction for timed automata [3].

Here we work directly with deterministic TFSMs with both timed guards and timeouts, since they subsume the previous two models. For such TFSMs, we give the detailed construction of the untimed FSM from a timed FSM (what we get is the FSM abstraction of the TFSM), and then we provide the complete proof that we can describe the behavior of a TFSM using the corresponding untimed FSM, i.e., that two deterministic TFSMs are equivalent if and only if their time-abstracted FSMs are equivalent.

Then we study the conditions under which the opposite transformation is possible: we take an untimed deterministic FSM that accepts and produces words over input and output alphabets (both including a special symbol that simulates the passing of time), and we build an equivalent deterministic TFSM with timeouts and timed guards, under the same notion of abstraction of timed words. This is the key technical result of this paper.

Finally, we apply the previous transformations to perform the intersection of two deterministic TFSMs, as an example of a composition operator under which TFSMs are closed. We prove how the transformation from TFSMs to untimed FSMs of Section 2 and the transformation from untimed FSMs to TFSMs of Section 3 can be used to construct the intersection of two TFSMs.

We outline the structure of the paper. Sec. 2 introduces deterministic timed finite state machines with timed guards and timeouts, describes the untiming procedure to obtain a finite state machine, and proves the bisimulation with the original timed one, from which an equivalence checking procedure follows. This is a revision of the material in [12], whereas the following sections are completely new. Sec. 3 describes the backward transformation from untimed FSMs to TFSMs and proves the backward bisimulation relation. The two results are used in Sec. 4 to compute the TFSM that is the intersection of two given deterministic TFSMs. Sec. 5 relates TFSMs to timed automata, and surveys expressiveness and complexity results for various models of timed automata, with final conclusions drawn in Sec. 6.

Models of Timed FSMs (TFSMs)
Let Σ be a finite alphabet, and let R+ be the set of non-negative reals. A timed symbol is a pair (a, t) where t ∈ R+ is called the timestamp of the symbol a ∈ Σ. A timed word is then defined as a finite sequence (a1, t1)(a2, t2)(a3, t3)... of timed symbols where the sequence of timestamps t1 ≤ t2 ≤ t3 ≤ ... is non-decreasing. Timestamps represent the absolute times at which symbols are received or produced. In the following we will sometimes also reason in terms of relative times, or delays, measured as the difference between the timestamps of two successive symbols. More formally, the delay of a symbol ai is defined as ti − ti−1 when i > 1 and as t1 when i = 1.
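The timestamp/delay correspondence just defined is easy to mechanise. The following minimal Python sketch converts a timed word given with absolute timestamps into its delay representation and back; the function names are our own, not notation from the paper.

```python
def to_delays(timed_word):
    """[(a1,t1),(a2,t2),...] with non-decreasing absolute timestamps
    -> [(a1,d1),(a2,d2),...] where d1 = t1 and di = ti - t(i-1)."""
    delays, prev = [], 0.0
    for sym, t in timed_word:
        delays.append((sym, t - prev))
        prev = t
    return delays

def to_timestamps(delay_word):
    """Inverse of to_delays: accumulate delays back into timestamps."""
    word, t = [], 0.0
    for sym, d in delay_word:
        t += d
        word.append((sym, t))
    return word

w = [("a", 0.5), ("b", 2.0), ("a", 2.0)]
assert to_timestamps(to_delays(w)) == w
```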
The timed models considered in this paper are initialized input/output machines that operate by reading a timed input word (i1, t1)(i2, t2)...(in, tn) defined over some input alphabet I, and producing a corresponding timed output word (o1, t1)(o2, t2)...(on, tn) over some output alphabet O. The production of outputs is assumed to be instantaneous: the timestamp of the k-th output is the same as that of the k-th input. Models where there is a delay between reading an input and producing the related output are possible but not considered here. Given a timed word, we will also consider the untimed word obtained by deleting the timestamps.

A timed, possibly non-deterministic and partial FSM (TFSM) is an FSM augmented with a clock. The clock is a real number that measures the time delay at a state, and its value is reset to zero when a transition is executed. In this section we introduce the TFSM model with both timed guards and timeouts defined in [12]. Such a model subsumes the TFSM model with timed guards only given in [18,25] and the TFSM model with timeouts only given in [32,43]. In addition, we establish a very precise connection between timed and untimed FSMs, showing that it is possible to describe the behavior of a TFSM using a standard FSM, called the FSM abstraction of the TFSM.

A timed guard defines the time interval within which a transition can be executed. Intuitively, a TFSM in the present state s accepts an input i at a time t only if t satisfies the timed guard of some transition labelled with input symbol i. The transition defines the output o to be produced and the next state s′. A timeout instead defines for how long the TFSM can wait for an input in the present state before spontaneously moving to another state. Each state of the machine has a timeout (possibly ∞), and all outgoing transitions of the state have timed guards with upper bounds smaller than the state timeout. The clock is reset to 0 every time the TFSM activates a transition or a timeout expires.

Definition 1 (Timed FSM). A timed FSM is a finite state machine augmented with timed guards and timeouts. Formally, a timed FSM (TFSM) is a 6-tuple (S, I, O, λ, s0, Δ) where S, I, and O are finite disjoint non-empty sets of states, inputs and outputs, respectively, s0 is the initial state, λ ⊆ S × (I × Π) × O × S is a transition relation, where Π is the set of input timed guards: intervals with endpoints l and u, where l is a nonnegative integer, u is either a nonnegative integer or ∞, l ≤ u, the left delimiter belongs to {(, [} and the right delimiter belongs to {), ]}; and Δ is the timeout function, which assigns to each state a next state together with a timeout value in N ∪ {∞}.
A timed state of a TFSM is a pair (s, x) such that s ∈ S is a state of the machine and x ∈ R+ is the current value of the clock, with the additional constraint that x < Δ(s)↓N (the value of the clock cannot exceed the timeout). If no input is applied at the current state s before the timeout Δ(s)↓N expires, then the TFSM moves to another state Δ(s)↓S as prescribed by the timeout function. If Δ(s)↓N = ∞, then the TFSM can stay at state s infinitely long waiting for an input. An input/output transition can be triggered only if the value of the clock is inside the guard g labeling the transition. Transitions between timed states can be of two types:
• timed transitions of the form (s, x) --d--> (s′, x′), where d ∈ R+, representing the fact that a delay of d time units has elapsed without receiving any input. The relation --d--> is the smallest relation closed under the following properties: for every timed state (s, x) and delay d ≥ 0, if x + d < Δ(s)↓N, then (s, x) --d--> (s, x + d); for every timed state (s, x) and delay d ≥ 0, if x + d = Δ(s)↓N, then (s, x) --d--> (Δ(s)↓S, 0);
• input/output transitions of the form (s, x) --i/o--> (s′, 0), representing reception of the input symbol i ∈ I, production of the output o ∈ O, and reset of the clock. An input/output transition can be activated only if there exists (s, (i, g), o, s′) ∈ λ such that x ∈ g.
The usual definitions for FSMs of deterministic and non-deterministic machines, submachines, etc., can be extended to the timed FSM model considered here. In particular, a TFSM is complete if for each state s, input i and value x of the clock there exists at least one transition (s, x) --i/o--> (s′, 0); otherwise the machine is partial. A TFSM is deterministic if for each state s, input i and value x of the clock there exists at most one input/output transition; otherwise it is non-deterministic.

For the sake of simplicity, from now on we consider only deterministic machines (possibly partial), leaving the treatment of non-deterministic TFSMs to future work. So for a partial and deterministic TFSM, we have that for every input word the output is either not defined or a singleton set. Moreover, we can consider the transition relation of the machine as a partial function that takes as input the current state s, the delay d and the input symbol i, and produces the (unique) next state and output symbol (s′, o) such that (s, 0) --d--> (s′′, x′) --i/o--> (s′, 0). With a slight abuse of notation, we extend it to a partial function that takes as inputs the initial state and a timed word, and returns the state reached by the machine after reading the word together with the generated output word.

Abstracting TFSMs with timeouts and timed guards. In this section we show how to build an abstract untimed FSM that describes the behaviour of a TFSM with guards. To do this we define an appropriate notion of abstraction of a timed word into an untimed word and a notion of bisimulation to compare a TFSM with guards with an untimed FSM. From the properties of the bisimulation relation, we conclude that the behaviour of the abstract untimed FSM is the abstraction of the behaviour of the TFSM.
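To make the two transition types defined above concrete, here is a minimal Python sketch of the timed-state semantics: a delay step that honours timeouts and an input/output step that checks the timed guard. The encoding (tuples for transitions, a dict for the timeout function) is our own illustration, not the paper's notation, and it assumes all timeouts are strictly positive.

```python
import math

# Sketch of a TFSM: transitions are tuples
#   (state, input, (lo, hi, lo_closed, hi_closed), output, next_state);
# timeout maps each state to (next_state, T) with T a positive integer or math.inf.

def in_guard(x, guard):
    lo, hi, lo_closed, hi_closed = guard
    return (lo <= x if lo_closed else lo < x) and (x <= hi if hi_closed else x < hi)

def delay_step(state, x, d, timeout):
    """Let d time units elapse from timed state (state, x), firing timeouts
    (possibly several in a row) and resetting the clock at each one."""
    while True:
        nxt, T = timeout[state]
        if x + d < T:
            return state, x + d
        d -= T - x            # consume the time up to the timeout ...
        state, x = nxt, 0.0   # ... then jump and reset the clock

def io_step(state, x, i, transitions):
    """Fire the unique enabled i/o transition from (state, x), if any."""
    for s, inp, guard, out, s2 in transitions:
        if s == state and inp == i and in_guard(x, guard):
            return s2, 0.0, out
    return None               # partial machine: input not accepted here
```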
For every K ≥ 0, we define I_K as the set of intervals consisting of the point intervals [n, n] for 0 ≤ n ≤ K, the open intervals (n, n + 1) for 0 ≤ n < K, and the unbounded interval (K, ∞). Given a TFSM M, we define max(M) as the maximum between the greatest timeout value of the function Δ (different from ∞) and the greatest integer constant (different from ∞) appearing in the guards of M. The set I_K defines a discretization of the clock values of TFSMs. The following lemma proves that such a discretization is correct, namely, that a TFSM cannot distinguish between two timed states where the discrete state is the same and the values of the clocks lie in the same interval of I_K.

We can exploit the discretization given by I_K to build the abstract FSM as follows. States of the abstract FSM will be pairs (s, κ) where s is a state of M and κ is either a point interval [n, n] or an open interval (n, n + 1) from the set I_K defined above, where K = max(M). Transitions can be either standard input/output transitions labelled with pairs from I × O or "time elapsing" transitions labelled with the special pair (Ø, Ø), which intuitively represents a time delay 0 < d* < 1 without inputs.

[Figure 2: (a) a TFSM with timeouts and timed guards; (b) its Ø-abstraction.]

Definition 3. Given a TFSM with timeouts and timed guards M = (S, I, O, λ, s0, Δ), let K = max(M). We define the Ø-abstract FSM A_M = (S × I_K, I ∪ {Ø}, O ∪ {Ø}, λ_A, (s0, [0, 0])) as the untimed FSM whose input/output transitions are induced by λ on the discretized clock intervals and whose (Ø, Ø)-transitions advance the interval component, firing the timeout when it expires.

Figure 2 shows an example of a TFSM with timeouts and its Ø-abstraction. In this case the untimed abstraction accepts untimed input words over I ∪ {Ø}. The delay is implicitly represented by sequences of the special input symbol Ø interleaving the occurrences of the real input symbols from I. The representation of delays in the abstraction is as follows:
• an even number 2k of Ø symbols represents a delay of exactly k time units;
• an odd number 2k + 1 of Ø symbols represents a delay included in the open interval (k, k + 1).
The notion of abstraction of a timed word captures the above intuition.

To prove that Ø-bisimilar machines have the same behavior, we need a technical result connecting timed transitions with the special symbol Ø; the corresponding lemma shows that Ø-bisimilar machines have the same behavior.

Theorem 1. A TFSM with timeouts and timed guards M is Ø-bisimilar to the abstract FSM A_M.

We can use the above theorem to solve the equivalence problem for TFSMs with timed guards.

Corollary 1. Let M and M′ be two TFSMs with timeouts and timed guards. Then M and M′ are equivalent if and only if the two abstract FSMs A_M and A_M′ are equivalent.

Proof. The claim is a direct consequence of Theorem 1 and Lemma 3.
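The even/odd encoding of delays by runs of Ø is mechanical, and getting it right matters when implementing the abstraction. The following Python sketch (our illustration, not code from the paper) converts a delay into the length of its Ø-run and back.

```python
import math

def delay_to_phis(d):
    """Number of phi symbols encoding delay d: 2k for an exact integer
    delay k, and 2k+1 for a delay strictly inside (k, k+1)."""
    k = math.floor(d)
    return 2 * k if d == k else 2 * k + 1

def phis_to_delay(n):
    """Inverse reading: an even run 2k means exactly k time units,
    an odd run 2k+1 means some delay in the open interval (k, k+1);
    returns the (lo, hi) endpoints of the represented interval."""
    k, odd = divmod(n, 2)
    return (k, k) if odd == 0 else (k, k + 1)

assert delay_to_phis(3.0) == 6      # exactly 3 time units
assert delay_to_phis(2.4) == 5      # somewhere inside (2, 3)
assert phis_to_delay(5) == (2, 3)
```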
From untimed FSMs to TFSMs
In the previous section we have shown how to build an abstract untimed FSM that represents the behaviour of a TFSM, by means of appropriate notions of bisimulation and of abstraction of timed words. In this section we study the conditions under which the opposite transformation is possible: we take an untimed FSM that accepts and produces words over input and output alphabets that include the special symbol Ø, and we show how to build an equivalent TFSM with timeouts and timed guards, under the same notion of abstraction of timed words. Now, let I and O be, respectively, the input and output alphabets of our machines. We are interested in studying untimed FSMs that accept words in (I ∪ {Ø})* and produce words in (O ∪ {Ø})*. Clearly, not all untimed FSMs represent valid timed behaviours. In particular, since in our TFSM model outputs are produced instantaneously when an input is received, and since a TFSM cannot stop the advancing of time, a deterministic untimed FSM over the alphabets I ∪ {Ø} and O ∪ {Ø} can be transformed into a TFSM only if every state respects the following two conditions: 1. it has exactly one outgoing transition with input Ø, and that transition produces the output Ø; 2. every transition with an input in I produces an output in O.

[Algorithm 1: for every state of the FSM, the procedure A T T is called to build the corresponding timed transitions and timeouts; the resulting TFSM T is returned.]

We call any untimed FSM that respects the above two conditions time progressive. In the following we prove that every deterministic time progressive FSM can be transformed into an equivalent TFSM with timeouts and timed guards. Since we cannot directly compare the behavior of an untimed FSM with the behavior of a timed FSM, we will use the notion of Ø-abstraction of a timed word (Definition 4) to compare timed and untimed machines.

Definition 6. Given a deterministic and time progressive FSM F over the alphabets I ∪ {Ø} and O ∪ {Ø}, and a TFSM with timed guards and timeouts T, we say that T refines F if and only if for every timed input word w = (i1, t1)...(in, tn) the Ø-abstraction of T's output on w equals F's output on the Ø-abstraction of w.

The intuition behind the construction is the following. Since we start from a deterministic and time progressive FSM, from every state of it there exists exactly one transition with input Ø (and output Ø). Hence, given a state we can build the (infinite) "delay run" obtained by repeatedly following the Ø/Ø transitions. Since the number of states is finite, the delay run is "lasso shaped", namely, it consists of a finite prefix followed by a loop. The refined TFSM will have the same set of states as the untimed FSM. Then, for every state the delay run is computed, and the transitions and timeouts are defined as follows:
• every i/o transition leaving a state in the prefix is replaced with a timed transition from the originating state with an appropriate timed guard;
• a timeout corresponding to the length of the prefix forces the machine to switch from the state to a state in the loop.

Algorithms 1 and 2 describe the above procedure in detail. To simplify the code, we unfold the final loop once, and put the timeout in correspondence with the second occurrence of the repeated state in the delay run. Moreover, since the FSM is assumed to be deterministic, we consider the transition relation as a partial function returning the next state and the output.

We prove the correctness of our construction by showing that the TFSM obtained from Algorithm 1 is Ø-bisimilar to the original FSM. Hence, by Lemma 3, we can immediately conclude that the TFSM is a refinement of the FSM. The loop terminates when it reaches a Marked state, that is, when it reaches the first repetition of a state in the delay run. Lines 13-19 take care of setting appropriately the timeout at the state under consideration. Two different situations may arise: either the current interval is [n, n] or it is (n, n + 1) for some n ∈ N.
In the former case, the marked state is repeated after an even number of transitions, which corresponds to an integer time delay. Hence, the timeout at the state is set accordingly. Consider now the predecessor of the marked state in the delay run. By the invariant, the corresponding timed states are bisimilar for every clock value in (n − 1, n); hence conditions 1 and 2 of Definition 5 are respected. In the latter case (the interval is (n, n + 1)), the marked state is repeated after an odd number of transitions. Since the timeout must be an integer value, lines 14-19 repeat the construction of the while loop one more time and then update the target to a state that corresponds to precisely n + 1 time units before setting the timeout. As in the previous case, we can prove that the invariant is respected.

Figure 3 shows the TFSM with timeouts and timed guards obtained by applying Algorithm 1 to the untimed FSM of Figure 2(b), with the states suitably renamed. In the picture, transitions with adjacent guards have been merged: for instance, the application of the algorithm creates from state 0 a transition with guard [0, 0] and a transition with guard (0, 1), which are merged into a unique transition with guard [0, 1) in the picture. The picture includes only the states that are reachable from the initial state 0. This shows that in the final result only the three states 0, 2 and 5 are relevant: the other states have been replaced by either timed guards or timeouts.

To better understand how Algorithm 1 works, let us review the application of function A T T (Algorithm 2) to the initial state 0 (state (0, [0, 0]) in the picture) of the untimed FSM of Figure 2(b). The procedure starts by unmarking all states of the FSM and by initialising the current state to 0 and the current guard to [0, 0]. Then the while loop of lines 5-12 follows the sequence of Ø/Ø transitions, marking the states it reaches, until a previously marked state is found. At lines 7-9, for every i/o transition exiting the current state, a corresponding timed transition labelled with the current value of the guard is added to the TFSM. Then the current state is updated to the next state in the sequence of Ø/Ø transitions, and the guard is increased following the sequence [0, 0], (0, 1), [1, 1], (1, 2), and so on. (A Python sketch of the underlying delay-run computation is given below.)
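The "lasso" of Ø/Ø transitions from a state, on which both the guards and the timeout of this construction depend, can be computed by following the unique Ø-successor until a state repeats. A minimal Python sketch follows; the encoding of the Ø-transition function as a dict is our own illustration.

```python
def delay_run_lasso(state, phi_next):
    """Follow the unique phi/phi successor from `state` until a state
    repeats; return (prefix, loop) of the lasso-shaped delay run.

    phi_next -- dict mapping each state to its phi/phi successor
                (time progressiveness guarantees the map is total)."""
    run, seen = [], {}
    q = state
    while q not in seen:
        seen[q] = len(run)
        run.append(q)
        q = phi_next[q]
    split = seen[q]               # index of the first occurrence of the repeated state
    return run[:split], run[split:]

# Example: q0 -> q1 -> q2 -> q1 -> ... has prefix [q0] and loop [q1, q2].
prefix, loop = delay_run_lasso("q0", {"q0": "q1", "q1": "q2", "q2": "q1"})
assert (prefix, loop) == (["q0"], ["q1", "q2"])
```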
In this example, the first iteration of the while loop considers all i/o transitions exiting from the state 0 of the FSM, namely its i/o transition, which yields a corresponding timed transition with guard [0, 0]. Notice that the starting state of the timed transition is still 0. The Ø/Ø transition between states 0 and 1 of the FSM models the fact that the machine waits for a time included in the interval (0, 1) before accepting an input. This situation is modelled in the TFSM by adding the guard (0, 1) to the transition while keeping 0 as starting state. The loop then continues by adding the analogous timed transitions for the subsequent guards. At this point, the current state of the FSM is 5 (i.e., (1, (1, ∞))) and the guard is (2, 3). Because of the self loop on Ø/Ø at state 5, at the end of the loop the current state does not change and the guard is updated to [3, 3]: a previously marked state is reached and the loop terminates. Lines 13-19 of A T T set the timeout at state 0 to (5, 3), terminating the function call. The value of the timeout is set to 3 because the first marked state is reached after 6 Ø/Ø transitions, which corresponds to 3 time units. A subsequent call to A T T on state 5 will set the timeout at state 5 to (5, 1) (i.e., the self-loop on the timeout 1 depicted in the figure), to model the fact that in the untimed FSM there is a self-loop on Ø/Ø at state 5. In this way, the sequence of Ø/Ø transitions of the FSM is replaced by a sequence of timeout transitions; in both cases the machines can wait in state 5 forever, if no input is received in the first 3 time units. The application of A T T to the other states builds the rest of the TFSM.

By applying the equivalence checking methodology presented in Section 2, we can prove that the TFSM of Figure 3 is indeed equivalent to the TFSM of Figure 2(a). Figure 4 shows the Ø-abstraction of the TFSM of Figure 3, which is equivalent to the FSM of Figure 2(b) (by standard FSM state-minimization of the FSM in Figure 4, we get a reduced FSM isomorphic to the one in Figure 2(b)). This is consistent with the fact that the FSM of Figure 2(b) is the Ø-abstraction of the TFSM of Figure 2(a).

Intersection of TFSMs
In this section we apply the previous transformations to perform the intersection of TFSMs. In general, TFSMs can be composed to build complex systems out of simpler components. Several composition operators exist for untimed FSMs, the most relevant ones being the intersection operator, the serial composition, and synchronous and asynchronous parallel composition (see [40]). Parallel composition of TA was discussed in [35]. Preliminary work on parallel composition of TFSMs with timed guards and output delays can be found in [29], and on parallel composition of TFSMs with timeouts and output delays in [26]. When extending compositions to timed FSMs, one must verify that TFSMs are closed under the type of composition of interest. In our setting, this means that the behaviour of the composed system should be representable by a machine with only a single clock. Here we focus on the intersection operator, for which we show that closure holds.

In the following we show how the transformation from TFSMs to untimed FSMs of Section 2 and the transformation from untimed FSMs to TFSMs of Section 3 can be used to implement the intersection of TFSMs. Suppose that we have two TFSMs T1 and T2 and that we want to compute the intersection T1 ∩ T2 whose behaviour is the intersection of the behaviours of T1 and T2. We can proceed as follows (a code sketch of this pipeline is given below):
1. compute the Ø-abstract FSMs A1 and A2 as in Definition 3 for, respectively, T1 and T2;
2. intersect A1 and A2 using the standard algorithm for untimed FSMs, obtaining the untimed FSM A = A1 ∩ A2;
3. compute the TFSM that is Ø-bisimilar with A using Algorithm 1.
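The three-step procedure above is a pure composition of the constructions from Sections 2 and 3. The following Python sketch shows its shape; `untime`, `fsm_intersect`, and `to_timed` stand for the Ø-abstraction of Definition 3, the standard product construction on untimed FSMs, and Algorithm 1, respectively, and are assumed to be implemented elsewhere.

```python
def tfsm_intersection(t1, t2, untime, fsm_intersect, to_timed):
    """Intersection of two deterministic TFSMs via their phi-abstractions.

    untime        -- phi-abstraction of Definition 3 (TFSM -> untimed FSM)
    fsm_intersect -- standard product construction on untimed FSMs
    to_timed      -- Algorithm 1 (time-progressive untimed FSM -> TFSM)
    Note: the result may be partial even when t1 and t2 are complete."""
    a1, a2 = untime(t1), untime(t2)     # step 1: abstract both machines
    a = fsm_intersect(a1, a2)           # step 2: untimed intersection
    return to_timed(a)                  # step 3: back to a single-clock TFSM
```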
The construction is correct in the following sense: the intersection machine maps a timed input word w to an output ω if and only if T1(w) and T2(w) are both defined and such that T1(w) = T2(w) = ω.

Proof. Let T1 and T2 be two deterministic TFSMs, and let A1 and A2 be their respective Ø-abstractions. By Definition 3 we have that A1 and A2 are deterministic and time progressive. Hence, the intersection A1 ∩ A2 is also deterministic and time progressive, and Algorithm 1 can be applied to obtain the TFSM.

As an example, consider the TFSMs T1 and T2 of Figure 5, and suppose we want to compute the intersection T1 ∩ T2. Following the above procedure, the first step is to obtain the Ø-abstract FSMs A1 and A2 in Figure 6. Then, by applying the standard constructions for intersection and minimization of untimed FSMs, we obtain the machine depicted in Figure 7, and finally, using Algorithm 1, the TFSM R(A1 ∩ A2) of Figure 8. It is worth pointing out that the intersection of two complete and deterministic TFSMs is still a deterministic machine, but it may be partial. This is indeed the case in our example: for instance, when the TFSM in Figure 8 is in state 0, it can react to the input only when the clock is in the intervals [0, 0] or (1, 2). No behaviour is specified when the clock is inside the intervals (0, 1] and [2, 3). In states 1 and 13 no behaviour is specified when the clock has an integer value smaller than the timeout (0, 1, 2 and 3 for state 1; 0 for state 13).

Timed FSMs and Timed Automata
In this section, we compare TFSMs with Timed Automata (TA), and survey the known results on the expressivity and computability of various classes of TA, according to their computational resources. The landscape of finite automata augmented with time is much more complex than in the case of untimed ones, where both language recognizers (FA) and producers (FSMs) share the fact that there is an underlying common model which corresponds to regular languages (FSMs transform regular input languages into regular output languages). TA are the most common formalism obtained by adding timing constraints (as clocks) to finite-state automata [3], defining timed regular recognizers. TA are a more expressive model than TFSMs because they allow multiple clocks, invariants as conditions on clocks associated with a location, guards as conditions on clocks associated with a transition, resets by which a clock may be reset to 0 or may be kept unchanged, and states which are products of a location and clock valuations. Excellent surveys about the classes of TA proposed in the literature can be found in [24,42].

TFSMs can be transformed into TA with ε-transitions (also called in the literature silent transitions, internal transitions, or non-observable transitions) by the following transformation:
• there is one location of the TA for every state of the TFSM;
• given the input and output alphabets I and O of the TFSM, the alphabet of the TA is given by I × O;
• as in the TFSM, the TA has a single clock, reset to zero at every transition;
• intervals on transitions are replaced with guards;
• timeouts of the TFSM are replaced by invariants and ε-transitions.
An example of such a transformation is shown in Fig. 9, where on the left there is a TFSM and on the right the corresponding TA.

This reduction is not necessarily practical, since decision problems are in general undecidable for timed automata, even for restricted versions of them. In the following we mention some of these relevant results. For a classic survey on decision problems for timed automata, see [5], where the following results can be found: 1.
TA are closed under union, intersection, and projection, but not under complementation. 2. The language emptiness problem is PSPACE-complete (a by-product of reachability analysis obtained by means of the region construction). 3. The universality, inclusion and equivalence problems for TA are undecidable. 4. Deterministic TA are closed under union, intersection and complementation, but not under projection. The language emptiness, universality, inclusion and equivalence problems for deterministic TA are PSPACE-complete.

Further results are proved in [22] and [23], e.g., that one cannot decide whether a given timed automaton is determinizable or whether the complement of a timed regular language is timed regular. One may wonder whether the complexity goes down if we reduce the resources of the timed automaton. The answer is sometimes yes, but only in very restricted cases. In [34,1] it is shown that the problem of checking language inclusion L(A) ⊆ L(B) of TA A and B is decidable if B has no ε-transitions and either B has only one clock, or the guards of B use only the constant 0. These two cases are essentially the only decidable instances of language inclusion, in terms of restricting the various resources of timed automata. Similar conclusions for the universality problem (does a given TA accept all timed words?) are drawn in [2]: the one-clock universality problem is undecidable for TA over infinite words, and decidable for TA over finite words, but undecidable for both if ε-transitions are allowed. Model checking and reachability of timed automata with one or two clocks are discussed in [30,21].

It is a fact that reducing resources, like the number of clocks, may simplify some problems, but allowing ε-transitions, even with few resources, makes the problems as hard as in the general case. A number of papers [7,8,15,9] investigated the expressiveness of timed automata augmented with ε-transitions, and proved the following results: 1. The class of timed languages recognized by timed automata with ε-transitions is more robust and expressive than that recognized without them. 2. A timed automaton with ε-transitions that do not reset clocks can be transformed into an equivalent one without ε-transitions (equivalent meaning with the same timed language). 3. A (non-Zenonian) timed automaton such that no ε-transitions that reset clocks lie on a directed cycle can be transformed into an equivalent one without ε-transitions. 4. There is a timed automaton, with an ε-transition which resets clocks on a cycle, which is not equivalent to any timed automaton without ε-transitions.

More undecidability questions for timed automata with ε-transitions were answered in [11], e.g.: given a timed automaton with ε-transitions, it is undecidable to determine whether there exists an equivalent timed automaton without ε-transitions.

The problem of removing ε-transitions got a new twist in [17], where it was shown that if one allows periodic clock constraints and periodic resets (updates), then ε-transitions can be removed from a timed automaton; moreover, the authors proved that periodic updates are necessary, by defining a language that cannot be accepted by any timed automaton with periodic constraints, transitions which reset clocks to 0, and no ε-transitions.

In conclusion, timed automata are a rich model with and without ε-transitions; therefore, in general, their decision problems are undecidable or very difficult even for restricted versions, all the more so if ε-transitions are admitted.

An interesting restricted model are the Real-Time Automata (RTA) introduced by C.
Dima [16] in 2001: they are finite automata with a labeling function (from states to an alphabet) and a time labeling function (from states to rational intervals), which together define the label of a state. RTA work over signals, that is, functions with finitely many discontinuities from non-negative rational intervals [0, r) (with r > 0) to an alphabet, so that the domain of a signal is partitioned into finitely many intervals on which the signal is constant. A run is associated with a signal iff there is a sequence of partitioning points consistent with the state labels (stuttering, i.e., repetition of signal values, is allowed); the signals associated with accepting runs form the timed language of the RTA. The author states in [16] that RTA can be viewed as a class of state-labeled timed automata over timed words (instead of signals) with a single clock which is reset at every transition (stuttering being reduced to ε-transitions). Moreover, it is claimed that RTA are the largest timed extension of finite automata whose emptiness and universality problems are decidable, from which ε-transitions can be removed, for which there is a determinization construction, which are closed under complementation, and for which a version of the Kleene theorem holds.

More complex classes of timed automata have been studied, in which the interplay between variants of the basic constituents defining them yields interesting combinations of expressivity and computability.

Event-Clock Automata (ECTA) [4] are a determinizable, robust subclass of timed automata. Event-clock automata are characterized, with respect to timed automata, by the fact that explicit resets of clocks are replaced by a predefined association with the input symbols, such that for each input a ∈ Σ a global recorder clock records the time elapsed since the last occurrence of a, and a global predictor clock measures the time required until the next occurrence of a (clock valuations are determined only by the input timed words). They are closed under Boolean operations (TA are not closed under complement), and language inclusion is PSPACE-complete for them (it is undecidable for TA). It is mentioned in [16] that RTA are incomparable with ECTA, which are the largest known determinizable subclass of timed automata, since RTA may accept languages that ECTA cannot.

Timed Automata with Non-Instantaneous Actions [6] are such that an action can take some time to be completed; they are more expressive than timed automata and less expressive than timed automata with ε-transitions. Updatable Timed Automata were introduced in [10] as an extension in which the clocks can be updated in a more elaborate way than simply resetting them to 0; their emptiness problem is undecidable, but there are interesting decidable subclasses. Any updatable automaton belonging to some decidable subclass can be effectively transformed into an equivalent timed automaton without updates, but with ε-transitions.
A complete taxonomy of timed automata is presented in [24], and issues of undecidability are discussed in depth in [33]. Properties of timed automata are contrasted in [13] with those of a special class of hybrid automata with severe restrictions on the discrete transitions: hybrid systems with strong resets, which have the property that all the continuous variables are non-deterministically reset after each discrete transition (differently from timed automata, where flow rates are constant and it is not compulsory to reset variables on each discrete transition). Connections between timed automata and timed discrete-event models are explored in [37].

The trade-off in preferring TA vs. TFSMs depends also on the specific problem at hand. For instance, TA and TFSMs are used when deriving tests for discrete event systems. However, methods for direct derivation of complete test suites over TA return infinite test suites [38]. Therefore, to derive complete finite test suites with a guaranteed fault coverage, a TA is usually converted to an FSM and FSM-based test derivation is then used (see [36,20]). Therefore, TFSMs may be preferred over TA and other models when the derivation of complete tests is required (as done in [19] for TFSMs with timed guards), even though the test suites so obtained are rather long. We mention also that the FSM abstraction introduced in this paper was used in [39] to derive complete finite test suites for TFSMs with both timeouts and timed guards. Since FSMs are used for testing, state distinguishability, and state identification problems of hardware and software designs (see [31,14,28]), TFSMs may be applied to the timed versions of these problems, instead of using TA.

Conclusions
We investigated deterministic TFSMs with a single clock, with both timed guards and timeouts. We showed that the behaviours of two timed FSMs are equivalent if and only if the behaviours of the companion untimed FSMs obtained by time-abstracting bisimulations are equivalent, so that TFSMs exhibit a good trade-off between expressive power and ease of analysis.

Then we defined and proved the correctness of the backward construction from untimed FSMs to TFSMs. The construction starts from any deterministic time progressive FSM over the alphabets I ∪ {Ø} and O ∪ {Ø} and builds a deterministic TFSM that recognizes the corresponding timed language. Using the two constructions, we showed how to intersect two deterministic TFSMs, first by transforming them into untimed FSMs, then applying the standard intersection algorithm for untimed FSMs, and then transforming the result back into a deterministic TFSM. Future work includes studying more general composition operators to define and solve equations over deterministic TFSMs [41], and addressing the previous problems for TFSMs with output delays [32] and nondeterministic TFSMs.

[Figure 3: Example of application of Algorithm 1.]
[Figure 9: Transformation from a TFSM (on the left) to a timed automaton with ε-transitions (on the right).]
Approximation via Hausdorff operators

Abstract. Truncating the Fourier transform averaged by means of a generalized Hausdorff operator, we approximate functions and the adjoint to that Hausdorff operator of the given function. We find estimates for the rate of approximation in various metrics in terms of the parameter of truncation and the components of the Hausdorff operator. Explicit rates of approximation of functions and a comparison with approximate identities are given in the case of continuous functions from the class Lip α.

Introduction
The classical Hausdorff operator is defined, by means of a kernel φ, as an average of dilations of the given function, and, as was first shown in [ ] (see also [ ] or [ ]), such an operator is bounded on L1(R) whenever φ ∈ L1(R). In the last two decades, various problems related to Hausdorff operators have attracted a lot of attention. The number of publications is growing considerably; to mention some of the most notable, see [ , , , , , ]. There are two survey papers: [ ] and [ ]. In the latter, as well as in [ ], numerous open problems are given.

The Hausdorff operator is expected to have better Fourier analytic properties than f. For example, in general, the inversion formula f(x) = (1/(2π)) ∫_R f̂(y) e^{ixy} dy does not hold for f ∈ L1(R); in order to "repair" this, one can consider some transformation of the function f or of its Fourier transform. In relation to the Hausdorff operator, we will consider integrals of the form ∫_R (H_φ f̂)(y) e^{ixy} dy.

Here we analyze not this Hausdorff operator but a more general one, apparently first considered in [ ] (see also [ ]). Given an odd function a such that a(t) is decreasing, positive, and bijective on (0, ∞) (so that a possesses an inverse function on that interval), we define the generalized Hausdorff operator H_{φ,a}. It is clear that the classical operator corresponds to the choice a(t) = t^{-1}, and one can easily derive the corresponding results from the general ones. Moreover, we consider some such particular cases as examples. There is one more reason for considering general Hausdorff operators: they provide a proper basis for future multidimensional extensions (see, for instance, [ ] and [ ], where those operators were introduced independently). Such multidimensional operators have been extensively studied in Lebesgue and Hardy spaces. We refer the reader to [ , , ] for further details.

The consideration of these "alternative" transformations requires the development of a theory parallel to that of Fourier integrals. In this paper, we address three basic issues of approximation theory applied to (generalized) Hausdorff operators.
(i) To find the operator T such that truncated integrals of the above type approximate Tf as N → ∞ (in the L^p norm), for reasonable choices of φ (here some assumptions on f and φ are needed in order for (H_{φ,a} f̂)(y) to be well defined; see the discussion at the beginning of Section 2). As we will see, the operator T is by no means the identity operator, but the dual operator of H, denoted by H* and formally defined by the duality relation ∫_R (H_{φ,a} f)(x) g(x) dx = ∫_R f(x) (H*_{φ,a} g)(x) dx.
(ii) To study the rate of convergence of the partial integrals to H* f.
(iii) To modify the truncated integrals in a way that allows us to derive a method for approximating f in the L^p norm (rather than approximating H* f, as in (i) and (ii)).
In particular, the problem of exploiting Hausdorff operators in approximation is raised. Indeed, the application of analytic results in approximation seems to be the most convincing proof of their usefulness.
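As a concrete illustration, for the Cesàro case treated in the Examples section (φ = χ_(0,1), a(t) = t) the averaged function can be computed by direct numerical quadrature. The sketch below uses the common one-dimensional normalization (Hf)(x) = ∫_0^1 f(tx) dt for this special case, which reduces to the classical Cesàro average (1/x) ∫_0^x f(u) du; since the paper's display formulas were lost in extraction, this normalization is our assumption, intended only to convey the averaging nature of the operator.

```python
# Numerical sketch of the Cesaro-type Hausdorff average
#   (H f)(x) = \int_0^1 f(t*x) dt = (1/x) \int_0^x f(u) du,
# i.e. the case phi = chi_(0,1), a(t) = t, under our assumed normalization.
import math

def hausdorff_cesaro(f, x, n=1000):
    """Midpoint-rule quadrature of \\int_0^1 f(t*x) dt."""
    h = 1.0 / n
    return h * sum(f((k + 0.5) * h * x) for k in range(n))

# Sanity check against a case computable in closed form:
# f(u) = cos(u)  =>  (H f)(x) = sin(x)/x.
x = 2.0
approx = hausdorff_cesaro(math.cos, x)
exact = math.sin(x) / x
print(f"{approx:.6f} vs {exact:.6f}")   # agreement to quadrature accuracy
```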
This work is the first attempt to understand what kind of approximation problems may appear in the theory of Hausdorff operators and to solve some of them. The results obtained will open new lines in both the theory of Hausdorff operators itself and approximation theory. The difference between Hausdorff means and more typical multiplier (convolution) means, which comes from the difference between dilation invariance for the former and shift invariance for the latter, leads not only to new results but also to novelties in the methods. The structure of the paper is as follows. In the next section, after certain preliminaries, we formulate the main results. In Section , we prove the main results. Section is devoted to presenting some examples of operators and their approximation estimates. After several works on the boundedness of Hausdorff operators on various function spaces, this paper is the first application of Hausdorff operators to the problems of constructive approximation. In particular, we compare the obtained results with their traditional counterparts (approximate identities given by convolution-type operators). Finally, in Section , we give concluding remarks; in particular, we show that some regularity of the kernel φ is needed in order to obtain good approximation estimates. We denote by ω(f; δ) the usual modulus of continuity. We will also write A ≲ B to denote A ≤ C · B for some constant C that does not depend on essential quantities. The symbol A ≍ B means that A ≲ B and B ≲ A simultaneously. Main Results First of all, let us discuss some boundedness properties of the Hausdorff operator in Lebesgue spaces, in order for $H^*(f)$ (and also the Hausdorff operator in ( . )) to be well defined. We will always assume that $f \in L^1(\mathbb{R})$, so that $\hat f$ is well defined and $\hat f \in L^\infty(\mathbb{R})$. On the other hand, a sufficient condition for the operator $H^*$ to be bounded on $L^p(\mathbb{R})$ is ( . ) (moreover, if φ ≥ 0 almost everywhere, then such a condition is also necessary; see the recent paper [ ] and also [ ]). Similarly, a sufficient condition (and necessary whenever φ ≥ 0 a.e.) for the Hausdorff operator to be bounded on $L^p(\mathbb{R})$ is that ( . ) holds; then $H_{\varphi,a}\hat f$ is well defined as a function in $L^{\max\{\,\cdot\,,p'\}}(\mathbb{R})$. Thus, we will always assume that $f \in L^1(\mathbb{R}) \cap L^p(\mathbb{R})$ and that ( . ) holds. For further results on boundedness (and also Pitt-type inequalities) of Hausdorff operators, we refer the reader to [ ]. We give some more observations before stating our main results. It is easy to check by substitution that we have ( . ). Let us now define the partial integrals ( . ). By substitution, it is easy to see that ( . ) holds. These observations make clear that the inverse transform $(H_N\hat f)^{\vee}$ is a good candidate to approximate $H^*f$ (informally, letting N → ∞ in ( . ) we obtain ( . )). We will prove that this is actually the case, at least in the $L^p$ setting. Our main results concerning approximation of adjoint Hausdorff operators read as follows. Theorem . For 1 ≤ p ≤ ∞ we have ( . ), where we take the convention 1/p = 0 if p = ∞, and furthermore ( . ) holds. The fact that the adjoint Hausdorff operator of a function is approximated may be unsatisfactory in principle, as one would rather approximate the function itself. However, approximating a function instead of its adjoint Hausdorff operator is also possible as a consequence of the following observation. For $\varphi \in L^1(\mathbb{R})$ and a(t) as in the introduction, one has ( . ). This gives a natural way of approximating f through Hausdorff operators by using ( . ). More precisely, we have the following theorem.
Remark . In order for the right-hand sides of ( . ) and ( . ) to be finite, one should assume that φ vanishes at a fast enough rate as t → ∞, or even more, that it has compact support. The latter is the case for the Cesàro operator (where $\varphi = \chi_{(0,1)}$), which we discuss in more detail in Section , along with other examples. Proofs First of all, we give pointwise estimates which will be the starting points for all subsequent estimates. Lemma . For any x ∈ ℝ, ( . ) and ( . ) hold. Proof. To prove ( . ), we apply rather straightforward estimates, as desired. In the last inequality we use that a possesses an inverse on (0, ∞) (and therefore also on (−∞, 0), since it is an odd function), and moreover, $(1/a)^{-1}(t) = a^{-1}(1/t)$ on (0, ∞). ∎ Note that by ( . ), ( . ) holds. Also, by ( . ), we can write ( . ) for any x ∈ ℝ. Lemma . For any x ∈ ℝ, ( . ) holds. Proof. By ( . ) and ( . ), we have an equality whose inner integrand involves the factor $\frac{\sin a(t)s}{s}\,ds\,dt$. The proof now follows the same lines as that of Lemma . , with the only difference being the term appearing in the above integral. ∎ We now proceed to the proofs of the main theorems. Proof of Theorem . We treat the cases 1 ≤ p < ∞ and p = ∞ separately. For the case p = ∞, it suffices to estimate the two terms on the right-hand side of ( . ) in the $L^\infty$ norm. For the first one, we have ( . ). As for the second term on the right-hand side of ( . ), we have ( . ). Collecting all the estimates, we get ( . ), where the right-hand side is uniform in x. Let us now prove the case 1 ≤ p < ∞. Using ( . ), we get ( . ). Note that if p = 1, the factor on the left-hand side can be taken to be 1 (in fact, such a factor appears due to the inequality $(a+b)^p \le 2^{p-1}(a^p + b^p)$, for a, b ≥ 0 and p > 1). On the one hand, applying Minkowski's inequality twice and using ( . ), we get ( . ). On the other hand, applying Minkowski's inequality again, we obtain ( . ). Collecting all the estimates, we derive ( . ), where the factor on the left-hand side is omitted in the case p = 1. The proof is complete. ∎ Proof of Theorem . First of all, note that the case p = ∞ follows trivially from Theorem . and the fact that ω(f; δ) = ω(τ_y f; δ) for every y ∈ ℝ. We now show the case 1 ≤ p < ∞. By Lemma . , ( . ) holds. If p = 1, the factor on the left-hand side can be omitted, similarly as in the proof of Theorem . . Now, applying Minkowski's inequality twice, we estimate ( . ), where the last inequality follows from the fact that ω(f; δ)_p is increasing in δ. On the other hand, applying Minkowski's inequality again, we obtain ( . ). Putting all the estimates together, we finally obtain ( . ). ∎ Examples We now obtain approximations of functions by means of certain specific Hausdorff operators. We shall give bounds for the approximation error explicitly in $L^p$, 1 ≤ p ≤ ∞, in each case; these will follow from Theorem . . In the first place, we consider a general Hausdorff operator under some assumptions on the kernel φ (besides the assumptions from Theorem . ). We suppose without loss of generality that a(t) > 0 for t ∈ (0, ∞), that φ is compactly supported, say on [−T, T], and that $\varphi \in L^\infty(\mathbb{R})$ (note that the Cesàro operator, given by $a(t) = t^{-1}$ and $\varphi = \chi_{(0,1)}$, satisfies these conditions). Then, on the one hand, ( . ). On the other hand, ( . ). Now, the substitution s → a(t) yields ( . ), so we conclude that for any 1 ≤ p ≤ ∞, ( . ) holds, by Theorem . (recall that ω(f; δ)_∞ = ω(f; δ)). If, furthermore, $a(t) = t^{-1}$, then for 1 ≤ p ≤ ∞, ( . ) holds (recall also that in the case p = 1, the estimate on the right-hand side can be multiplied by the corresponding constant factor).
To the best of our knowledge, no approach through Hausdorff operators has been considered in approximation problems so far, and therefore even the basic estimate ( . ) is new in this respect. Approximation via the Cesàro Operator The Cesàro operator C, given by $a(t) = t^{-1}$ and $\varphi(t) = \chi_{(0,1)}(t)$ [ , ], is the prototype Hausdorff operator $H_{\varphi,a}$. In this case, its adjoint operator is also referred to as the Hardy operator. We have ( . ). It readily follows from ( . ) that ( . ), and in the case p = ∞, we obtain a Dini-type estimate ( . ). Note also that for p = 1, condition ( . ) does not hold, so we have to restrict ourselves to the case 1 < p ≤ ∞. In particular, we can conclude the following corollary. (ii) If f is continuous and $\int \frac{\omega(f;t)}{t}\,dt < \infty$, then $F_N$ converges uniformly to f on ℝ as N → ∞. In particular, ( . ) holds. Remark . For 0 < q ≤ ∞, 1 ≤ p ≤ ∞, and 0 ≤ s < 1, the Besov seminorm (defined via the modulus of continuity) is ( . ). We refer the reader to [ , § . . , Theorem ] for the description of Besov seminorms in terms of moduli of continuity. Note that in Corollary . , the assumption that ( . ) holds is equivalent to saying that the Besov seminorm $B_{p,1}$ of f is finite. We shall now compare the approximation estimates from Corollary . with those for approximate identities. Comparison: Cesàro Operators and Approximate Identities Since the Cesàro operator is the prototype example of a Hausdorff operator, it is instructive to compare the obtained approximations with the classical ones given by approximate identities for convolutions. A family of functions $\{C_r\}_{r>0}$ defined on ℝ is called an approximate identity if (1) $\sup_r \|C_r\|_{L^1(\mathbb{R})} < \infty$, and (2) for every δ > 0, ( . ). The following is well known [ , Theorem . . ]. As an example of an approximate identity satisfying ( . ), we have the family of functions ( . ), where C(x) is the Fejér kernel on the real line, ( . ). From now on, we assume that the approximate identities we consider satisfy condition ( . ). Comparing Theorem A and Corollary . , we readily see that the latter requires further assumptions in order to guarantee $L^p$ convergence (p < ∞), namely that the seminorm $\|f\|_{B_{p,1}(\mathbb{R})}$ is finite (cf. Remark . ). However, when restricted to certain classes of functions, the approximation rates become the same, or even better. As classes of functions, we consider $\text{Lip}_p\,\alpha = \text{Lip}_p\,\alpha(\mathbb{R})$, with 0 < α ≤ 1 and 1 ≤ p ≤ ∞, which consists of the functions f satisfying ( . ). Note that $\text{Lip }\alpha = \text{Lip}_\infty\,\alpha$ is the class of usual Lipschitz-α continuous functions on ℝ, i.e., those satisfying ( . ). For $f \in \text{Lip}_p\,\alpha$, 0 < α < 1, and 1 ≤ p ≤ ∞, it is known that any approximate identity $\{C_r\}$ yields the approximation rate ( . ), while for α = 1, an additional logarithm appears: ( . ). In the case of the Cesàro operator, Corollary . yields, for any 1 < p < ∞ and $f \in \text{Lip}_p\,\alpha$, ( . ), with all the estimates valid for the range 0 < α ≤ 1. Note that these approximation rates are the same as those for approximate identities when restricted to functions $f \in \text{Lip}_p\,\alpha$ with 0 < α < 1 (compare with ( . )), and are actually better than their counterparts in the case α = 1 (compare with ( . )), in the sense that the extra logarithm from ( . ) does not appear. Thus, in the case α = 1, the "Hausdorff" approximation improves on the classical convolution approximations in the sense of the rate of convergence. Approximation via the Riemann-Liouville Integral For α > 0, the Riemann-Liouville integral is defined as ( . ). A rescaled version of this operator can be easily obtained as an adjoint Hausdorff operator.
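To make the comparison with approximate identities tangible, the following sketch numerically estimates the sup-norm error of convolution with a Fejér-type family $C_r(x) = rC(rx)$ against a Lipschitz-α test function. The kernel normalization $C(x) = \frac{1}{2\pi}\big(\frac{\sin(x/2)}{x/2}\big)^2$ and the test function are our illustrative assumptions, chosen only so that the classical rate $O(r^{-\alpha})$ for $0<\alpha<1$ can be seen empirically.

```python
import numpy as np

# Numerical sketch: sup-norm error of convolution with the Fejer-type
# approximate identity C_r(x) = r*C(r*x), where C is the Fejer kernel
# on the real line, tested against f(x) = |x|**alpha (Lip alpha).

def fejer(x):
    u = np.where(x == 0.0, 1e-12, x) / 2.0       # avoid 0/0 at the origin
    return (np.sin(u) / u) ** 2 / (2.0 * np.pi)  # integrates to 1 over R

def sup_error(alpha, r, half_width=80.0, n=160_001):
    t = np.linspace(-half_width, half_width, n)  # integration grid
    dt = t[1] - t[0]
    xs = np.linspace(-1.0, 1.0, 101)             # evaluation points
    f = lambda y: np.abs(y) ** alpha
    conv = [np.sum(f(x - t) * r * fejer(r * t)) * dt for x in xs]
    return np.max(np.abs(np.array(conv) - f(xs)))

for r in (4, 8, 16, 32):
    print(r, sup_error(0.5, r))   # error should shrink roughly like r**(-1/2)
```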
Indeed, for $a(t) = t^{-1}$ and $\varphi_\alpha(t)$ given by ( . ), this is the case. Note that if we formally consider α = 1 in the definition of $I_\alpha$, we recover the Cesàro operator. Using Theorem . , we approximate f(x) by ( . ); cf. ( . ). Note that, by the observation made in ( . ), we will obtain the same convergence rates via the Riemann-Liouville integral as those we obtain via the Cesàro operator. So, for continuous f, we have ( . ), while for $f \in L^p(\mathbb{R})$ with 1 < p < ∞ (note that for p = 1 condition ( . ) does not hold, so we have to exclude such a case), we have ( . ) by Corollary . and ( . ). Final Remarks We conclude with a couple of remarks: first, we show that one can use the same approach to approximate the Hausdorff operator (instead of its adjoint) applied to a function. Secondly, we show that we cannot expect any good approximations of Hausdorff operators if the kernel φ does not decay fast enough at infinity. Approximation of Non-adjoint Hausdorff Operators One can also approximate the Hausdorff operator instead of its adjoint, if one considers the adjoint Hausdorff averages in the approximant. More precisely, it is also possible to approximate Hf(x) by ( . ), which, by substitution, is easily seen to equal ( . ), since for any t ≠ 0 one has ( . ). A similar estimate to that of Lemma . can now be proved. Lemma . For any x ∈ ℝ, ( . ) holds. Proof. The proof is essentially the same as that of Lemma . , as desired. ∎ By means of the pointwise estimate from Lemma . , it is possible to obtain approximation results analogous to Theorem . , where the Hausdorff operator, rather than its adjoint, is approximated. The details are essentially the same and are thus omitted. A Hausdorff Operator with Slowly Decaying φ: the Bellman Operator Let us see what happens if we try to approximate an adjoint Hausdorff operator with a slowly decaying φ. We consider the particular example of the Bellman operator B (which is nothing more than the adjoint Cesàro operator C*). Its adjoint B* is defined by letting $a(t) = t^{-1}$ and $\varphi(t) = t^{-1}\chi_{(1,\infty)}(t)$ in ( . ): ( . ). It is clear that we cannot use the methods from Section in order to approximate functions, since the hypothesis $\varphi \in L^1(\mathbb{R})$ is not satisfied in this example. What is more, not even the basic assumption ( . ) from Theorem . is satisfied for any 1 ≤ p ≤ ∞. Nevertheless, we now try to use the approximation estimates from Theorem . (heuristically, since the hypotheses of Theorem . are not met) just to illustrate their bad behaviour for functions φ that do not decay fast enough. As the approximant for B*, we take the double integral ( . ). For 1 ≤ p < ∞, the estimate from Theorem . yields ( . ), while in the case p = ∞, ( . ); i.e., in this case we cannot guarantee any convergence in the $L^p$ norm by using our estimates, even for well-behaved functions f. As was pointed out in Remark . , this is because, in order to obtain useful estimates from Theorem . , one should assume that φ is of compact support, or that it decays fast enough as t → ∞. For the adjoint Cesàro operator, the function φ has some decay, but it is not fast enough. Also note that the estimate ( . ) is not good, as the right-hand side is infinite for nonconstant functions f.
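A quick numerical illustration of the failure mode discussed above: assuming the reconstructed kernel $\varphi(t) = t^{-1}\chi_{(1,\infty)}(t)$ for the Bellman case (our reading of the garbled formula), its $L^1$ mass over (1, N) grows like log N, so the integrability hypothesis fails.

```python
import numpy as np

# Sketch: the (assumed) Bellman kernel phi(t) = 1/t on (1, inf) is not
# in L^1; its partial integrals over (1, N) grow like log(N).
for N in (1e2, 1e4, 1e6):
    t = np.linspace(1.0, N, 2_000_001)
    partial = np.trapz(1.0 / t, t)
    print(f"N = {N:9.0e}   int_1^N dt/t = {partial:8.3f}   log N = {np.log(N):8.3f}")
```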
4,628.4
2020-08-13T00:00:00.000
[ "Mathematics" ]
Measurements of CP-Violating Asymmetries and Branching Fractions in B Decays to omegaK and omegapi We present measurements of CP-violating asymmetries and branching fractions for the decays omegapi+, omegaK+, and omegaK0. The data sample corresponds to 232 million BBbar pairs produced by e+e- annihilation at the Upsilon(4S) resonance. For the decay omegaKs, we measure the time-dependent CP-violation parameters S=0.51+0.35-0.39+/-0.02 and C=-0.55+0.28-0.26+/-0.03. We also measure the branching fractions, in units of 10^-6, B(omegapi+)=6.1+/-0.7+/-0.4, B(omegaK+)=6.1+/-0.6+/-0.4, and B(omegaK0)=6.2+/-1.0+/-0.4, and the charge asymmetries Ach(omegapi+)=-0.01+/-0.10+/-0.01 and Ach(omegaK+)=0.05+/-0.09+/-0.01. PACS numbers: 13.25.Hw, 12.15.Hh, 11.30.Er Measurements of time-dependent CP asymmetries in B0 meson decays through a Cabibbo-Kobayashi-Maskawa (CKM) favored b → cc̄s amplitude [1,2] have firmly established that CP is not conserved in such decays. The effect, arising from the interference between mixing and decay involving the CP-violating phase β = arg(−V_cd V*_cb / V_td V*_tb) of the CKM mixing matrix [3], manifests itself as an asymmetry in the time evolution of the B0B̄0 pair. Decays to the charmless final states φK0, K+K−K0, η′K0, π0K0, f0(980)K0, and ωK0 are all b → qq̄s processes dominated by a single penguin (loop) amplitude having the same weak phase β [4]. CKM-suppressed amplitudes and multiple particles in the loop complicate the situation by introducing other weak phases whose contributions are not negligible; see Refs. [5,6] for early quantitative work addressing the size of these effects. We define ∆S as the difference between the time-dependent CP-violating parameter S (given in detail below) measured in these decays and S = sin 2β measured in charmonium K0 decays. For the decay B0 → ωK0, these additional contributions are expected to give ∆S ∼ 0.1 [7,8], although this increase may be nullified when final-state interactions are included [8]. A value of ∆S inconsistent with this expectation could be an indication of new physics [9]. We present an improved measurement of the time-dependent CP-violating asymmetry in the decay B0 → ωK0, previously reported by the Belle Collaboration based on a sample of ∼30 events [10]. We also measure branching fractions for the decays B0 → ωK0, B+ → ωπ+, and B+ → ωK+ (charge-conjugate decay modes are implied throughout), and for B+ → ωπ+ and B+ → ωK+ we measure the time-integrated charge asymmetry A_ch = (Γ− − Γ+)/(Γ− + Γ+), where Γ± is the width for these charged decay modes. In the Standard Model, A_ch is expected to be consistent with zero within our experimental uncertainty; a non-zero value would indicate direct CP violation in this channel.
The data were collected with the BABAR detector [11] at the PEP-II asymmetric e+e− collider. An integrated luminosity of 211 fb−1, corresponding to 232 million BB̄ pairs, was recorded at the Υ(4S) resonance (center-of-mass energy √s = 10.58 GeV). Charged particles are detected and their momenta measured by the combination of a silicon vertex tracker (SVT), consisting of five layers of double-sided detectors, and a 40-layer central drift chamber, both operating in a 1.5 T axial magnetic field. Charged-particle identification (PID) is provided by the energy loss in the tracking devices and by the measured Cherenkov angle from an internally reflecting ring-imaging Cherenkov detector (DIRC) covering the central region. A K/π separation of better than four standard deviations (σ) is achieved for momenta below 3 GeV/c, decreasing to 2.5σ at the highest momenta in the B decay final states. Photons and electrons are detected by a CsI(Tl) electromagnetic calorimeter. From a B0B̄0 pair produced in an Υ(4S) decay, we reconstruct one of the B mesons in the final state f = ωK0_S, a CP eigenstate with eigenvalue −1. For the time-evolution measurement, we also identify (tag) the flavor (B0 or B̄0) and reconstruct the decay vertex of the other B. The asymmetric beam configuration in the laboratory frame provides a boost of βγ = 0.56 to the Υ(4S), which allows the determination of the proper decay-time difference ∆t ≡ t_f − t_tag from the vertex separation of the two B meson candidates. Ignoring the ∆t resolution (about 0.5 ps), the distribution of ∆t is F(∆t) = [e^(−|∆t|/τ)/(4τ)] {1 ∓ ∆w ± (1 − 2w)[S sin(∆m_d ∆t) − C cos(∆m_d ∆t)]}. (1) The upper (lower) sign denotes a decay accompanied by a B0 (B̄0) tag, τ is the mean B0 lifetime, ∆m_d is the mixing frequency, and the mistag parameters w and ∆w are the average and difference, respectively, of the probabilities that a true B0 (B̄0) meson is tagged as a B̄0 (B0). The parameter C measures direct CP violation. If C = 0, then S = sin 2β + ∆S. The flavor-tagging algorithm [1] has seven mutually exclusive tagging categories of differing purities (including one for untagged events that we retain for yield determinations). The measured analyzing power, defined as efficiency times (1 − 2w)² summed over all categories, is (30.5 ± 0.6)%, as determined from a large sample of B decays to fully reconstructed flavor eigenstates (B_flav). We reconstruct a B meson candidate by combining a π+, K+, or K0_S with an ω → π+π−π0. We select K0_S → π+π− decays by requiring the π+π− invariant mass to be within 12 MeV of the nominal K0 mass and by requiring a flight length greater than three times its error. We require the primary charged track to have a minimum of six Cherenkov photons in the DIRC. We require the π+π−π0 invariant mass (m_3π) to be between 735 and 825 MeV. Distributions from the data and from Monte Carlo (MC) simulations [12] guide the choice of these selection criteria. We retain regions adequate to characterize the background as well as the signal for those quantities taken subsequently as observables for fitting. We also use in the fit the angle θ_H, defined, in the ω rest frame, as the angle of the direction of the boost from the B rest frame with respect to the normal to the ω decay plane. The quantity H ≡ |cos θ_H| is approximately flat for background and distributed as cos²θ_H for signal.
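As a sanity check on the shape of Eq. (1), the following sketch evaluates the two tagged ∆t distributions and the resulting raw asymmetry with the measured S and C; τ, ∆m_d, and the mistag parameters are illustrative stand-ins (the paper fixes τ and ∆m_d to world averages that it does not quote here).

```python
import numpy as np

# Sketch of Eq. (1): tagged Delta-t PDFs and the raw asymmetry
# A(dt) = (f_B0 - f_B0bar) / (f_B0 + f_B0bar). Parameter values
# below are illustrative assumptions, not the paper's fitted inputs.
S, C = 0.51, -0.55          # measured CP parameters (central values)
tau, dmd = 1.53, 0.51       # ps and ps^-1: rough stand-ins for world averages
w, dw = 0.20, 0.0           # assumed average mistag and mistag difference

def f_tagged(dt, sign):
    """Eq. (1); sign = +1 for a B0 tag, -1 for a B0bar tag."""
    osc = S * np.sin(dmd * dt) - C * np.cos(dmd * dt)
    return np.exp(-np.abs(dt) / tau) / (4 * tau) * (
        1 - sign * dw + sign * (1 - 2 * w) * osc)

dt = np.linspace(-8, 8, 9)
fB0, fB0bar = f_tagged(dt, +1), f_tagged(dt, -1)
print(np.round((fB0 - fB0bar) / (fB0 + fB0bar), 3))  # dilution-scaled asymmetry
```

With ∆w = 0 the printed asymmetry reduces to (1 − 2w)[S sin(∆m_d ∆t) − C cos(∆m_d ∆t)], which is the curve shown in asymmetry plots such as Fig. 2.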
A B meson candidate is characterized kinematically by the energy-substituted mass m_ES ≡ sqrt[(s/2 + p_0 · p_B)²/E_0² − p_B²] and the energy difference ∆E ≡ E*_B − √s/2, where (E_0, p_0) and (E_B, p_B) are the four-momenta of the Υ(4S) and the B candidate, respectively, and the asterisk denotes the Υ(4S) rest frame. We require, assuming the B+ → ωπ+ hypothesis, |∆E| ≤ 0.2 GeV and 5.25 ≤ m_ES ≤ 5.29 GeV. To reject the dominant background from continuum e+e− → qq̄ events (q = u, d, s, c), we use the angle θ_T between the thrust axis of the B candidate and that of the rest of the tracks and neutral clusters in the event, calculated in the Υ(4S) rest frame. The distribution of cos θ_T is sharply peaked near ±1 for jet-like qq̄ pairs and is nearly uniform for the isotropic B decays; we require |cos θ_T| < 0.9 (0.8 for the charged B decays). From MC simulations of B0B̄0 and B+B− events, we find evidence for a small (0.5%) BB̄ background contribution for the charged B decays, so we have added a BB̄ component to the fit described below for those channels. We use an unbinned, multivariate maximum-likelihood fit to extract signal yields and CP-violation parameters. We use the discriminating variables m_ES, ∆E, m_3π, H, and a Fisher discriminant F [13]. The Fisher discriminant combines five variables: the polar angles, with respect to the beam axis in the Υ(4S) frame, of the B candidate momentum and of the B thrust axis; the tagging category; and the zeroth and second angular moments of the energy flow, excluding the B candidate, about the B thrust axis [13]. We also use ∆t for the B0 → ωK0_S decay, while for the charged B decays we use the PID variables T_π and T_K, defined as the number of standard deviations between the measured DIRC Cherenkov angle and that expected for pions and kaons, respectively. For the B0 → ωK0_S decay we define, for each event i, hypothesis j (signal and qq̄ background), and tagging category c, the probability density function (PDF) as the product of the PDFs of the individual observables, P^i_{j,c} = P_j(m_ES^i) P_j(∆E^i) P_j(m_3π^i) P_j(H^i) P_j(F^i) P_j(∆t^i, σ_∆t^i; c), where σ_∆t^i is the error on ∆t for event i. We write the extended likelihood function as L = exp(−Σ_j Y_j) Π_c Π_{i=1}^{N_c} [Σ_j Y_j f_{j,c} P^i_{j,c}], where Y_j is the fit yield of events of species j, f_{j,c} is the fraction of events of species j for each category c, and N_c is the number of events of category c in the sample. We fix f_sig,c to f_Bflav,c, the values measured with the large B_flav sample [1]. The same likelihood function is used for the charged decays, except that the hypothesis j also includes BB̄ background, the tagging category is not used, and the PDF is slightly different, involving the flavor k (primary π+ or K+) of the candidate. The PDF P_sig(∆t, σ_∆t, c) is the convolution of F(∆t; c) (Eq. 1) with the signal resolution function (a sum of three Gaussians) determined from the B_flav sample. The other PDF forms are: the sum of two Gaussians for all signal shapes except H, and for the peaking component of the m_3π background; the sum of three Gaussians for P_qq̄(∆t; c); an asymmetric Gaussian with different widths below and above the peak for P_j(F) (a small "tail" Gaussian is added for P_qq̄(F)); Chebyshev functions of second to fourth order for the H signal and for the slowly varying shapes of the ∆E, m_3π, and H backgrounds; and, for P_qq̄(m_ES), a phase-space-motivated empirical function [14], with a small Gaussian added for P_BB̄(m_ES).
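For readers less familiar with extended maximum-likelihood fits, here is a minimal sketch of the negative log-likelihood corresponding to the expression above, for a toy two-species (signal plus continuum) fit in a single observable. The Gaussian and exponential component shapes and all parameter names are toy assumptions, not the paper's PDFs.

```python
import numpy as np

# Toy extended negative log-likelihood with two species:
#   -ln L = sum_j Y_j - sum_i ln( sum_j Y_j * pdf_j(x_i) )
def nll(params, x):
    y_sig, y_bkg, mu, sigma, slope = params
    pdf_sig = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    pdf_bkg = slope * np.exp(-slope * x)          # normalized on x >= 0
    density = y_sig * pdf_sig + y_bkg * pdf_bkg   # extended mixture density
    return (y_sig + y_bkg) - np.sum(np.log(density))

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(5.28, 0.003, 100),   # toy "signal" events
                    rng.exponential(1.0, 900)])     # toy "background" events
print(nll([100, 900, 5.28, 0.003, 1.0], x))
```

In practice one would minimize this function over the yields and shape parameters, e.g. with scipy.optimize.minimize, which is the numerical analogue of the fit described in the text.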
We determine the PDF parameters from simulation for the signal and BB̄ background components. We study large control samples of B → Dπ decays of similar topology to verify the simulated resolutions in ∆E and m_ES, adjusting the PDFs to account for any differences found. For the qq̄ background we use (m_ES, ∆E) sideband data to obtain initial PDF-parameter values but ultimately leave them free to vary in the final fit. We compute the branching fractions and charge asymmetry from fits performed without ∆t or flavor tagging. The free fit parameters are the following: the signal and qq̄ background yields (the BB̄ yield, if present, is fixed); the three shape parameters of P_qq̄(F); the slope of P_qq̄(∆E) and P_qq̄(m_3π); the fraction of the peaking component of P_qq̄(m_3π); ξ [14]; and, for the charged B decays, the signal and background A_ch. Table I lists the quantities used to determine the branching fraction. Equal production rates of B+B− and B0B̄0 pairs have been assumed. Small yield biases are present in the fit, due primarily to unmodeled correlations among the signal PDF parameters. In Table I we include estimates of these biases, evaluated by fitting simulated qq̄ experiments drawn from the PDF, into which we have embedded the expected number of signal and BB̄ background events randomly extracted from the fully simulated MC samples. The estimated purity in Table I is given by the ratio of the signal yield to the effective background plus signal, the latter being defined as the square of the error on the yield. FIG. 1: The B-candidate m_ES and ∆E projections for B+ → ωπ+ (a, b), B+ → ωK+ (c, d), and B0 → ωK0 (e, f), shown for a signal-enhanced subset of the data. Points with error bars represent the data, the solid line the fit function, and the dashed line the background components. Note that the ωK+ signal in the ∆E plot is displaced from zero, since ∆E is defined for the ωπ+ hypothesis. Fig. 1 shows projections onto m_ES and ∆E for a subset of the data (including 45-65% of signal events) for which the signal likelihood (computed without the variable plotted) exceeds a threshold that optimizes the sensitivity. For the time-dependent analysis, we require |∆t| < 20 ps and σ_∆t < 2.5 ps. The free parameters in the fit are the same as for the branching-fraction fit, plus S, C, the fraction of background events in each tagging category, and the six primary parameters describing the ∆t background shape. The parameters τ and ∆m_d are fixed to world-average values [15]. Here we find a slightly smaller yield of 95 ± 14 events, and S = 0.51 +0.35 −0.39, C = −0.55 +0.28 −0.26. The errors have been scaled by ∼1.10 to account for a slight underestimate of the fit errors predicted by our simulations when the signal sample size is small. Fig. 2 shows the ∆t projections and asymmetry of the time-dependent fit, with events selected as for Fig. 1.
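The yield-bias estimate described above is a standard embedded-toy study. The sketch below shows the pattern with a deliberately trivial counting "fit" standing in for the full multivariate likelihood; every distribution, window, and parameter here is an illustrative assumption.

```python
import numpy as np

# Sketch of an embedded-toy bias study: draw background from an assumed
# fitted shape, embed a known number of "signal" events, estimate the
# yield many times, and compare the mean estimate to the input.
rng = np.random.default_rng(7)
N_SIG_IN, N_BKG = 95, 2000
pulls = []
for _ in range(300):
    sig = rng.normal(5.280, 0.003, N_SIG_IN)        # stand-in signal m_ES peak
    bkg = rng.uniform(5.20, 5.29, N_BKG)            # stand-in flat background
    data = np.concatenate([sig, bkg])
    n_win = np.sum(data > 5.27)                     # events in signal window
    n_side = np.sum((data > 5.20) & (data < 5.26))  # background-only sideband
    yield_est = n_win - n_side * (0.02 / 0.06)      # sideband scaled to window
    pulls.append(yield_est - N_SIG_IN)
print("mean bias:", np.mean(pulls), "+/-", np.std(pulls) / np.sqrt(len(pulls)))
```

A mean bias consistent with zero validates the estimator; in the paper, the analogous exercise on the full fit revealed small biases (3-4%) from correlations the PDFs do not model.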
The major systematic uncertainties affecting the branching-fraction measurements include the reconstruction efficiency (0.8% per charged track, 1.5% per photon, and 2.1% per K0_S), estimated from auxiliary studies. We take one-half of the measured yield bias (3-4%) as a systematic error. The uncertainty due to the signal PDF description is estimated to be at most about 1% in studies where the signal PDF parameters are varied within their estimated errors. The uncertainty due to BB̄ background is also estimated to be 1%, by variation of the fixed BB̄ yield by its estimated uncertainty. The A_ch bias is estimated to be −0.005 ± 0.010 from studies of signal MC, control samples, and calculation of the asymmetry due to particles interacting in the detector. We correct for this bias and assign a systematic uncertainty of 0.01 on A_ch for both B+ → ωπ+ and B+ → ωK+. For the time-dependent measurements, we estimate systematic uncertainties in S and C due to BB̄ background and PDF shape variation (0.01 each), modeling of the signal ∆t distribution (0.02), and interference between the CKM-suppressed b → uc̄d amplitude and the favored b → cūd amplitude for some tag-side B decays [16] (0.02 for C, negligible for S). We also find that the uncertainties due to SVT alignment and to the position and size of the beam spot are negligible. The B_flav sample is used to determine the errors associated with the signal PDF parameters: ∆t resolutions, tagging efficiencies, and mistag rates; published measurements [15] are used for τ_B and ∆m_d. Summing all systematic errors in quadrature, we obtain 0.02 for S and 0.03 for C. We are grateful for the excellent luminosity and machine conditions provided by our PEP-II colleagues, and for the substantial dedicated effort from the computing organizations that support BABAR. The collaborating institutions wish to thank SLAC for its support and kind hospitality. This work is supported by DOE and NSF (USA), NSERC (Canada), IHEP (China), CEA and CNRS-IN2P3 (France), BMBF and DFG (Germany), INFN (Italy), FOM (The Netherlands), NFR (Norway), MIST (Russia), MEC (Spain), and PPARC (United Kingdom). Individuals have received support from the Marie Curie EIF (European Union) and the A. P. Sloan Foundation. TABLE I: Fit sample size, signal yield, estimated yield bias (all in events), estimated purity, detection efficiency, daughter branching-fraction product, statistical significance including systematic errors, measured branching fraction, and corrected signal charge asymmetry.
3,658.2
2006-03-20T00:00:00.000
[ "Physics" ]
Comment on acp-2021-82 The manuscript is well written and contains very interesting findings, which are presented in comprehensive figures and are described very clearly. The study adds to our current understanding of mixed-phase cloud (MPC) dynamics in CAOs and highlights the importance of microphysical processes, such as riming, for cloud field transitions and hence cloud albedo. Due to remaining gaps in our current MPC understanding, and especially the importance of cloud field transitions for regional climate, I encourage publication in ACP. However, I have a few points that should be addressed prior to final publication. Introduction Line 18-24: Please provide some references to this passage. We have added references as suggested. We have added a footnote (l. 73) that points out the two most significant differences: (1) the experiments in Eirund et al. (2019) were set up with zero average horizontal wind speed, while this study nudges to horizontal winds exceeding 15 m/s. In our simulations, such strong winds drive enormous surface fluxes that drive boundary-layer deepening, which in turn affects the cloud deck evolution, including the morphological transition from closed roll convection to open cells. (2) Another major difference is that Eirund et al. (2019) did not consider (microphysical) depletion of CCN, which is responsible for the morphological transition from closed to open cells in our simulations and a focal point of our study. Line 72-73: This statement sounds as if your analyses were performed for a variety of CAOs throughout the campaign. However, in Section 2.1 you describe that you simulate one specific CAO during the shoulder season. In order to allow for an evaluation of the generality of the results found in this study, as you mention here, it could be useful to expand your findings to a variety of CAOs, potentially in the discussion. A potential discussion point could be whether your results would remain valid if the CAO index was different, e.g. the median of the collected indices. In ongoing work with other cases, we are finding similar mechanisms at play in cold air outbreaks of different intensity and in different regions, but we strongly agree that generality should be tested in future work. We have extended a paragraph in the Discussion accordingly. We are currently completing a follow-up study that considers several well-observed cold air outbreaks and, as already stated (moved to ll. 362-367), examines the role of meteorological boundary conditions (including the CAO index) that affect CAO cloud deck evolution more than, for example, N_inp. We expect to report additional findings soon. We have revised Fig. 1a as suggested. Simulations of a Cold Air Outbreak Lines 83-85: This sentence is a bit hard to read; maybe split it into two sentences. We have revised as suggested. Line 89: Similar to what I mentioned above, I assume here you mean aerosols available for CCN activation? We have revised as suggested. Preliminary ACTIVATE measurements from the NW Atlantic also indicate that FT aerosol concentrations are typically less than MBL concentrations, which we assume here for our baseline setup. We have extended the paragraph in the Discussion (ll. 379-381) accordingly. Results Line 137: Why is this threshold arbitrary (line 190) and not, e.g., based on a percentile of the simulated cloud cover? A pdf of cloud cover, as in Figure 3, could maybe show whether the 75% threshold is reasonable considering the cloud field evolution.
As there is no universal definition of "cloud breakup", any cloud cover threshold, whether relative or absolute, is somewhat arbitrary. Sandu et al. (2010) used a relative change and defined breakup by a drop in cloud cover down to half the peak value. In this study such a definition would correspond to a 50% cloud cover threshold, which we already consider (ll. 204-206, 277-280, 308-311) but now stress further in Section 3 (ll. 151-152). We also now note that Christensen et al. (2020) used 75% to delineate the overcast from the broken cloud state (ll. 149-150). We have also added Fig. 1c to show how well this cloud cover threshold applies to MODIS products (from MYD06 data, in which we define cloud cover as the fraction of pixels with a cloud optical thickness greater than or equal to 2.5, equivalent to the LES diagnostic, though at a different spatial scale). Line 147: How are the ensemble simulations set up, and why were they performed only for the ice0 case? We ran an ensemble to crudely characterize uncertainty from turbulent noise, set up by varying the seed of the pseudo-random number generator applied to the initial fields of water vapor and potential temperature. We only run one ensemble because we assume the turbulent noise of ice0 is representative of the other variants of the case. We have expanded the text in Section 2.2 (ll. 116-119) and the caption of Figure 2. Line 159: Does the prognostic CCN implementation allow for recycling of CCN? If yes, doesn't the evaporation of rain below cloud release CCN, which could be re-entrained into the cloud layer? Indeed, there is recirculation in that one CCN is released per evaporating raindrop, but one raindrop is the product of collisions among and with many cloud droplets, and thus that one CCN corresponds to a reduction in CCN numbers. We have added a short comment to Section 3.1 (ll. 215-216). Eirund et al. (2019) also show that precipitation formation is necessary for a cloud deck breakup, which might be worth noting, as the latter studies also investigated MPCs. We have revised the statement as suggested. In response to the reviewer's suggestion, we have run the case without autoconversion, labeled "no rain" in Figure S1 of the supplement. Without autoconversion the LWP increases until plateauing beyond 1000 g m-2, and the cloud deck remains overcast throughout. We have added a sentence to Section 3.1 (ll. 227-228). We have revised the figure as suggested. Line 210: "substantial deepening of the PBL associated with longwave cooling": do you have evidence of LW cooling?
A profile of radiative heating rates at 4.5 h (roughly the time of the LWP maximum) is shown in Figure S2 of the supplement, showing slight warming in the lower cloud and strong cooling near cloud top. It would be noteworthy if this expected dipole were not evident. We have added "(not shown)" after "longwave cooling" in the revised manuscript (l. 226). For clarity, we have linked the sentence in question to the introduction. There we hypothesize a possible delay in cloud transitions, as LWP reduction (from mixed-phase processes) may delay transition-initiating rain. We have added the two references mentioned (Knight et al., 1974; Field & Heymsfield, 2015) when discussing possible reasons for an accelerated transition (ll. 73-74). We found it surprising that ice had so little impact on the timing of warm precipitation at rates sufficient to trigger the transition, and we believe that this is owing to the reduction of droplet number concentration associated with riming, which offsets the substantial decrease in LWP (that would otherwise presumably delay precipitation onset), as already stated in the manuscript (ll. 269-272). Line 233: "ice vapor growth": do you mean growth by deposition? We have changed the phrasing to "ice depositional growth". Same line: related to my comment 1, it would be very helpful to include the riming rates or the snow content/snow particle concentration, e.g. in Figure 2, in order to follow this thought. Otherwise, the LWP reduction through riming sounds more like a suspicion than a fact. We have added a figure depicting ice mass microphysical budgets for selected times, which shows that riming is by far the predominant mechanism by which water freezes; this is discussed in a new paragraph (ll. 264-268). We have expanded Fig. 7 to show cloud-base precipitation by type and now note that snow is the dominant form of precipitation prior to the cloud transition. Line 285: Did you also investigate differences in longwave radiation between the simulations? As the difference between SST and cloud-top temperature is quite large (Figure 1b, Figure 2h), it would be interesting to see the effect of changes in longwave radiation versus the simulated change in albedo/shortwave radiation. Agreed; we have revised Fig. 2 to show upwelling longwave radiation at 5 km, the top of the domain (panel j). Indeed, the upwelling longwave changes drastically over time from changes in cloud-top height, cloud cover, and underlying surface temperature. We have added a back-of-the-envelope calculation that shows longwave cloud radiative effects offsetting a non-trivial fraction of the shortwave effects in a new paragraph in the Discussion section (ll. 355-361). Discussion Line 349: How (and why) do you assume this to change in a warming climate? Would the change in cloud ice alone not be sufficient for a negative cloud-climate feedback in the future? While one could indeed consider the temperature dependence of ice formation, we are simply referring to the very simple principle that a warmer boundary layer can be expected to have less ice, as we now note explicitly (ll. 404-405). I also think, in the context of climate impact, it would be worth highlighting again in the Discussion the strong difference in albedo (as shown in Figure 2i) between the different simulations. For our cloud radiative effect comparison mentioned in the point before last, we now include a back-of-the-envelope estimate of a shortwave radiative effect of about 140 W m-2 (ll. 355-360).
Reviewer 2 I lost my comments before submitting the preview. When I hit the preview, they were not there anymore. I did not copy/paste this before hitting the preview, so it is lost. This is a quick, shorter retype, in a different state of mind of course, so apologies for the brevity. If I get to review this again, I will work in a separate app, cut/paste into this form, and avoid this data loss, so that review will be better. I am asking the Editor to warn other reviewers about this pitfall. We very much thank the reviewer for the careful reading and helpful suggestions that have now improved the manuscript, especially despite technical difficulties. We agree with the reviewer that the cloud cover threshold is arbitrary. We now reference the study by Christensen et al. (2020), which also used an absolute threshold of 75% (ll. 150-151). Throughout the original manuscript we also considered 50%, which corresponds to the relative threshold for breakup used by Sandu et al. (2010), whom we now also reference (ll. 151-152). General Following the reviewer's recommendation, we have produced a MODIS-based cloud cover diagnostic based on that used in the analysis of our simulations. From MYD06 data we determine cloud cover over regions of (0.5°)², comparable to the extent of our large-domain simulation seen in Fig. 4a. We do note that the MODIS product pixel size and the LES mesh differ (1 vs. 0.15 km), and further note that the COD threshold (here 2.5, originally from Bretherton et al. 1997) is itself arbitrary. In Figure S3 of the supplement we also show isolines for 50% (left) and 75% (right) cloud cover. The 75% isoline appears to distinguish brighter cloud streets from dimmer, open-cellular fields downwind. The north-south gradient (with shorter overcast durations further north, as discussed in the next point below) is also captured using this 75% cloud cover threshold. In contrast, 50% cloud cover appears less telling, as most of the cold air outbreak has a cloud fraction greater than 50% by the above-described method. We have revised Fig. 1a and added Fig. 1c to show our cloud-cover diagnosis of (0.5°)² regions. This study argues that the accelerated transition and ice-mediated reduction in albedo may have important implications for cloud-climate feedbacks, i.e. a negative feedback. This remains speculative and needs to be borne out by other modelling and observational studies. In fact, Fig. 1a shows evidence to the contrary. The young (short-fetch) cloud albedo is higher to the north, and much lower south of Cape Hatteras. Helical roll circulations are probably omnipresent along the coast in the convective BL, amassing small convective cells, but further north the ice crystals near cloud top bridge the streets, whereas south of Hatteras, in the absence of ice, the cloud edge is defined by water droplets, which remain closer to the parent updrafts and are less likely to bridge the cloud-street subsidence regions, hence the lower albedo. We agree that our speculation requires further study, including the gathering and analysis of observational evidence. In-situ observations of cold air outbreaks unsurprisingly reveal rimed particles, and we have extended a paragraph in the Discussion (ll. 374-378) accordingly.
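To illustrate the diagnostic described above, here is a minimal sketch of coarse-graining a pixel-level cloud-optical-depth field into regional cloud cover using the COD ≥ 2.5 and 75% thresholds; the synthetic field and array shapes are our own stand-ins for MYD06 data.

```python
import numpy as np

# Sketch: regional cloud cover from a pixel-level cloud optical depth
# (COD) field. A pixel counts as cloudy if COD >= 2.5; a region counts
# as overcast if its cloudy-pixel fraction exceeds 0.75. The synthetic
# field below stands in for a MODIS MYD06 granule.
rng = np.random.default_rng(0)
cod = rng.gamma(shape=1.5, scale=4.0, size=(200, 200))  # synthetic COD pixels

def regional_cloud_cover(cod, block=50, cod_thresh=2.5):
    """Cloudy-pixel fraction of (block x block)-pixel regions."""
    ny, nx = cod.shape
    cloudy = cod >= cod_thresh
    return cloudy.reshape(ny // block, block, nx // block, block).mean(axis=(1, 3))

cc = regional_cloud_cover(cod)
print(np.round(cc, 2))
print("overcast regions:", np.sum(cc > 0.75), "of", cc.size)
```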
The reviewer's observations hint at meteorological controls. This study only considers one trajectory, which corresponds to a single set of meteorological boundary conditions. We expect that similar mechanisms should be at play in cold air outbreaks of different intensity and in different regions, including the North Sea, as discussed further below. We have extended a paragraph in the Discussion (ll. 368-392) accordingly. We are currently completing a follow-up study that considers several cold air outbreaks across differing regions and, as already stated (moved to ll. 362-367), examines the role of meteorological boundary conditions (including the CAO index) that affect CAO cloud deck evolution more than, for example, N_inp. We expect to report additional findings soon. ... that can be compared and contrasted. Moreover, is it not problematic that the satellite imagery suggests that the "overcast state was sustained hours longer" than in the simulations, when the maximum difference between all the simulations is only ~1.5 hours (Fig. 8h)? See comment 1 as well. In the discussion, to qualitatively compare with Abel et al. (2017), we now also note that their LWP maximum of about 400 g/m², appearing upwind of the cloud transition, corresponds well with the mixed-phase simulations of this study (ll. 372-374). To compare with Abel et al. (2017), we already included metrics showing the progressive PBL stratification during the transition. Also, as mentioned in our response to the reviewer's previous point, rimed particles are evident in the in-situ observations of Abel et al. (2017) and other cold air outbreak studies (e.g., Huang et al., 2017, Fridlind and Ackerman, 2018, and preliminary ACTIVATE observations). Regarding the different transition speeds in the satellite observations versus our simulations, we note that the breakup-speed sensitivity to ice nuclei concentrations actually varies by up to 2.5 h in the simulations. The duration of the overcast state results from adding the metrics in Fig. 8g (up to 1 h difference) and Fig. 8h (up to 1.5 h difference). We now articulate this additive aspect in Section 3.3 (l. 293). If there were more accumulation-mode aerosol in the PBL, we think that this microphysical sensitivity could partly explain the difference from the satellite-observed overcast state. As Section 3.4 already shows, higher aerosol concentrations available for CCN activation delay the cloud transition. Additionally, we now also discuss (ll. 388-392) the possibility of activating smaller aerosol size modes than considered in this study, given the substantial peak supersaturations in the simulations. In principle, activation of smaller modes could delay cloud transitions. These smaller modes are evident in preliminary ACTIVATE measurements. Minor Comments: Line 23: "capped by strong subsidence" We have revised as suggested. Section 2.1: I think it would be good to add some more description of how this specific CAO event was chosen. What observations (if any) are available for this CAO event? As now stated, for this pre-campaign study we were motivated to consider a case in the NW Atlantic (we added this information to Section 2.1 [ll. 89-90]). We also now note that the case was selected on the basis of a weather-state analysis of satellite imagery by George Tselioudis (l. 89). Line 147: The ensemble members are not mentioned until this point. The authors might want to add some description of the 3 ensemble members, and how and why they were chosen.
We ran an ensemble to crudely characterize uncertainty from turbulent noise, set up by varying the seed of the pseudo-random number generator applied to the initial fields of water vapor mixing ratio and potential temperature. We only run one ensemble because we assume the turbulent noise of ice0 is representative of the other variants of the case. We have added this information to Section 2.2 (ll. 116-119) and the Figure 2 caption. Section 2.2: In my opinion, the authors could add a table summarizing the setup of their simulation (which schemes are used, horizontal and vertical grid, etc.), to make it easier for the reader to see the whole setup at one glance. We have added a table as suggested. Line 114: Do the authors potentially mean 230 K? 130 K seems excessively low. We used 130 K for the overlying isothermal layer, in concert with overlying column-integrated water vapor and ozone, to match the downwelling longwave radiation profiles that we computed from radiative computations using a much deeper vertical grid (up to 30 km). We added "isothermal" (l. 125) to clarify the setup. Please be consistent with the naming of "u-phys term" in Figure 6 and "u-phys loss" in Figure 8. Also add a legend for the dot-dashed lines in Fig. 6a. We have revised Figures 6a and 8 accordingly. Line 69: This research question itself sounds very similar to what was already answered by Eirund et al. (2019). I assume the difference is that you simulate a CAO from a Lagrangian perspective, while the simulations by Eirund et al. (2019) were idealized and stationary? It would be worth pointing this out here. Fig 1: From the coastlines, it looks like Figures 1a and 1b do not exactly cover the same area. It would be helpful to adapt either Figure 1a or Figure 1b, such that the cloud field and the MERRA-2 trajectories match up. Line 95: Please remove "01 May" after the Morrison and Grabowski reference. We have revised as suggested. Line 109: Is it justified to follow Abel et al. (2017) here, even though their case was in a different location and a different season? Figure 2: You performed a "no aerosol loss" simulation, but did you also test the development of the cloud field under a scenario where autoconversion is not allowed as a baseline simulation (similar to Eirund et al. 2019 and Abel et al., 2017)? In their studies, the cloud deck remained completely overcast in the absence of precipitation (see my previous comment); a similar experiment could strengthen your conclusion that precipitation formation is essential for cloud deck breakup also in this case. Figure 5: It looks like the x-axes do not cover the full range of the vertical cloud water mixing ratio as well as the rain drop concentration shown in the small plots to the right of the contour plots. Line 218: Is that really so unclear? It has previously been shown that cloud ice generally increases precipitation (Knight et al., 2002; Field & Heymsfield, 2015), which can then initiate regime transitions and cloud dissipation (Abel et al., 2017; Eirund et al., 2019). Figure 6c: I assume, in the legend, the u-phys term should be dashed?
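For concreteness, the seed-varied ensemble initialization described above follows a common LES pattern, sketched below; the perturbation amplitudes, grid shape, and variable names are our own illustrative assumptions.

```python
import numpy as np

# Sketch: generate ensemble members by re-seeding the RNG used to
# perturb the initial potential temperature (theta) and water vapor
# mixing ratio (qv) fields. Amplitudes and grid size are illustrative.
def perturbed_initial_fields(seed, shape=(64, 64, 32)):
    rng = np.random.default_rng(seed)
    theta = 280.0 + rng.uniform(-0.1, 0.1, shape)   # K, small white noise
    qv = 2e-3 + rng.uniform(-2e-5, 2e-5, shape)     # kg/kg
    return theta, qv

members = [perturbed_initial_fields(seed) for seed in (1, 2, 3)]
print([m[0].mean() for m in members])  # nearly identical means, different noise
```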
This study investigates a Cold Air Outbreak using Lagrangian LES simulations. With simulations that have no ice, they demonstrate the importance of precipitation and the loss of activated aerosol for the transition from overcast to broken clouds. Using simulations that include ice nuclei, they show that riming can lead to an acceleration of this transition through three different processes: (1) reduction of cloud liquid water, (2) early consumption of cloud condensation nuclei, and (3) early and light precipitation cooling and moistening below cloud. The authors refer to this as preconditioning by riming. The findings of this study are interesting and further the understanding of the cloud transition in cold air outbreaks. The writing of the manuscript is very concise and clear, and the findings are displayed in well-chosen figures that are easy to understand. The authors account for uncertainties by varying various parameters. I am really interested in the extent to which the described phenomena can be observed in the future, because the accelerated transition has important implications for cloud-climate feedbacks. Overall, I have some general comments; however, I would not consider these major comments, and would suggest this submission for publication after minor revisions. As mentioned later, the selection of the 75% cloud cover threshold for a broken cloud field is somewhat arbitrary. It might make sense to show a MODIS image and indicate what 75% cloud cover looks like in that image. This might help to justify the selection of this threshold, or might also lead to the selection of a different threshold. I would think that 75% cloud cover might be a little too high for a broken cloud field in cold air outbreaks. In the discussion section, I would like to see some more comparison with observations. Is Abel et al. (2017) (mature marine post-frontal clouds) really the best choice here, if it looks at a different location for CAOs? This study looks at a step-change environment with rapid air mass transformation, very different in terms of aerosol supply and surface flux history compared to Abel et al. The authors should probably at least add some more quantitative values from Abel et al. Does Figure 3 show the statistics of the whole domain or only where clouds are present? Statistics only include cloudy samples, as we now note in the Figure 3 caption. Overall, I like the content of all the figures and how it is displayed. However, I would improve some minor things in the figures. Here are my suggestions: in Fig. 3 and 6, I would put the legends outside the plot and make them larger, like in Fig. 3. In the figures which have a colorbar (Fig. 4, 5, 7), I would improve the display of the colorbar, maybe put a black box around it and color the ticks in black instead of white. In Fig. 5, some of the plots have data going outside the range, which should be corrected. We have revised Figures 4, 5, and 7 as suggested.
4,949
2021-05-20T00:00:00.000
[ "Environmental Science", "Physics" ]
Comparison of Conventional Statistical Methods with Machine Learning in Medicine: Diagnosis, Drug Development, and Treatment Futurists have anticipated that novel autonomous technologies, embedded with machine learning (ML), will substantially influence healthcare. ML is focused on making predictions as accurate as possible, while traditional statistical models are aimed at inferring relationships between variables. The benefits of ML comprise flexibility and scalability compared with conventional statistical approaches, which makes it deployable for several tasks, such as diagnosis, classification, and survival prediction. However, much ML-based analysis remains scattered, lacking a cohesive structure. There is a need to evaluate and compare the performance of well-developed conventional statistical methods and ML on patient outcomes, such as survival, response to treatment, and patient-reported outcomes (PROs). In this article, we compare the usefulness and limitations of traditional statistical methods and ML when applied to the medical field. Traditional statistical methods seem to be more useful when the number of cases largely exceeds the number of variables under study and a priori knowledge of the topic under study is substantial, such as in public health. ML could be more suited to highly innovative fields with a huge bulk of data, such as omics, radiodiagnostics, drug development, and personalized treatment. Integration of the two approaches should be preferred over a unidirectional choice of either approach. Introduction Machine learning (ML) is a type of artificial intelligence (AI) consisting of algorithmic approaches that enable machines to solve problems without explicit computer programming [1]. ML is becoming increasingly relevant in medicine, as it can optimize the trajectory of clinical care of patients affected by chronic diseases, inform precision-medicine approaches, and facilitate clinical trials. As shown in Figure 1, the number of articles applying ML to the medical field has been increasing exponentially, especially with regard to diagnostics and drug discovery. According to Accenture data, key medical AI applications could create USD 150 billion in yearly savings for the United States healthcare sector by 2026 [2]. These data show that the healthcare industry can heavily leverage the possibilities provided by ML. This might also explain why AI companies are increasingly involved in the area of medicine, from diagnosis to treatment and drug development. For instance, convolutional neural networks (used in image recognition and processing) have been able to effectively improve the diagnostic process for diabetic retinopathy [3,4]. Another example is rehabilitation, where learning agents can be trained to run by controlling the muscles attached to a virtual skeleton. Ideally, doctors might predict whether a patient will be able to walk, jump, or run properly after a specific treatment. Furthermore, data obtained during phases of rehabilitation might later be used to design new, AI-engineered leg prostheses. AI uses multiple layers of non-linear processing units to "teach" itself how to understand data, classify records, or make predictions [5]. Thus, AI can draw on electronic health record (EHR) data and unstructured information to make predictions about a patient's health. For instance, AI can rapidly read a retinal image or flag cases for follow-up when manual review of every case would be too cumbersome [6].
When applied to big data, AI offers the promise of unlocking novel insights and accelerating breakthroughs. Paradoxically, although an unprecedented quantity of data is becoming available, only a fraction is being properly integrated, understood, and analyzed. The challenge lies in harnessing high volumes of data, integrating them from hundreds of sources, and understanding their various formats. AI offers potential for addressing these challenges, since cognitive solutions are explicitly intended to integrate and analyze big datasets. AI can understand diverse types of data, such as lab measurements in a structured database or the text of a scientific publication. These software solutions are trained to understand technical, industry-specific content and use advanced reasoning, predictive modelling, and ML techniques to advance research. Indeed, AI can be applied to big data using different approaches. When it comes to the effectiveness of ML, the rule of thumb is that the more data, the more accurate the prediction. Although this is an oversimplification, it is evident that the healthcare sector is sitting on a data goldmine. Estimates are that big data and ML in pharma and medicine could generate a value of up to USD 70-100 billion annually [7], given the downstream effects of these approaches. One main difference between ML and traditional statistical methods lies in their purpose: the former remains focused on making predictions as accurate as possible, while the latter are aimed at inferring relationships between variables [8]. However, the key difference between traditional statistical approaches and ML is that in the latter, a model learns from examples rather than being programmed with rules. For a given assignment, samples are provided in the form of inputs (called features) and outputs (called labels).
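To ground the features-and-labels vocabulary, here is a minimal supervised-learning sketch in scikit-learn; the synthetic data and every variable name are illustrative assumptions, not a medical dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Minimal supervised-learning sketch: learn a mapping from inputs
# (features) to outputs (labels) from examples, then generalize to
# never-before-seen inputs. Data here are synthetic stand-ins.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                   # features, e.g. image summaries
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # labels, e.g. lesion yes/no

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)   # learn from examples
print("accuracy on unseen inputs:", model.score(X_test, y_test))
```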
When predictive accuracy is critically important, the ability of a model to find statistical patterns across millions of features and instances is what enables superhuman performance. Nonetheless, these patterns do not necessarily relate to the identification of underlying biological pathways or modifiable risk factors that might facilitate the development of new therapies [9]. A crucial difference between human learning and ML is that humans can learn to make general and complex associations from small amounts of data. Machines, in general, require many more examples than humans to learn the same task, and machines lack common sense. The flipside, however, is that a machine can learn from massive amounts of data: it is perfectly feasible for an ML model to be trained on tens of millions of patient charts stored in EHRs, with hundreds of billions of data points, without any lapses of attention, while it is very challenging for a human physician to see more than a few tens of thousands of patients in an entire career.

The performance of well-developed conventional statistical approaches needs to be evaluated and compared with ML in terms of the predictivity of clinically relevant outcomes (e.g., survival, response to treatment, patient-reported outcomes (PROs), etc.). In this narrative review, we aim to offer an expert perspective on the comparison of traditional statistical methods with ML, and their corresponding advantages and limitations in medicine, with a specific focus on the integration of the two approaches and its application to illness detection, drug development, and treatment. To this end, we have selectively reviewed the literature on this topic, presenting evidence that illustrates the difference between traditional statistical methods and ML in healthcare.

Advantages of Traditional Statistical Methods over ML

Traditional statistical approaches have the advantage of being simple to understand. Indeed, they usually take into account a small number of clinically important variables, and they produce "clinician-friendly" measures of association, such as odds ratios in the logistic regression model or hazard ratios in the Cox regression model. Traditional statistical approaches thus make it easier to understand the underlying biological mechanisms. By contrast, the results of ML are often difficult to interpret. Lack of interpretability is particularly evident in neural networks, but it is less pronounced in least absolute shrinkage and selection operator (Lasso) regression. Moreover, the computation needed to find the minimum of the cost function of a neural network is quite complex and time-consuming, depending on the type of cost function chosen, the number of nodes and layers of the network, and the number of training observations [10].
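To ground the "clinician-friendly" point before turning to further practical hurdles, the following hedged sketch (synthetic data; invented variable names; statsmodels assumed available) shows how the exponentiated coefficients of a logistic regression read directly as odds ratios:

```python
# Odds ratios from a logistic regression -- an interpretable "traditional" model.
# Synthetic data; "age" and "smoker" are hypothetical covariates for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
age = rng.normal(60, 10, n)
smoker = rng.integers(0, 2, n)
logit = -10 + 0.12 * age + 0.8 * smoker
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([age, smoker]))
fit = sm.Logit(y, X).fit(disp=0)
odds_ratios = np.exp(fit.params[1:])  # OR per year of age, OR for smoking
print("odds ratios (age, smoker):", odds_ratios)
```

No comparably direct one-number summary falls out of a fitted neural network, which is exactly the interpretability gap described above.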
Furthermore, ML algorithms entail data pre-processing, training on large datasets, and iterative refinement against the real medical problem [1]. ML techniques can also lead to overfitting, i.e., to the production of a model too closely tailored to the training dataset. This phenomenon limits the possibility of generalizing the model to different datasets and, hence, of making reliable predictions [11]. An appropriate balance between the training set and the validation set is necessary to avoid this problem.

Advantages of ML over Traditional Statistical Techniques

ML techniques have great flexibility and are free from a priori assumptions, while traditional statistical methods rely on strong assumptions, such as the type of error distribution, additivity of the parameters within the linear predictor, and proportional hazards. These assumptions are often not met in clinical practice, and they are often overlooked in the scientific literature. For instance, the assumption of proportional hazards has been shown to be violated when studying survival in gastric cancer patients, as the prognostic significance of the depth of tumor invasion and nodal status tends to decrease with increasing follow-up, while histology and the loss of the TP53 gene acquire prognostic importance after at least two years of follow-up [12].

ML has the advantage of taking into account all the available information in a particular field. Traditional statistical approaches, even those at the top of the pyramid of evidence, often fail because they make an a priori selection of the variables to be considered. For instance, a Cochrane review dealing with the extent of lymphadenectomy in gastric cancer surgery was criticized and later withdrawn mainly because it failed to take into account the quality of the surgical procedures under comparison [13]. ML is particularly suited to settings with few observations and many predictors, such as genomics, transcriptomics, proteomics, and metabolomics [14]. In such situations, traditional regression models show several limitations, especially in the choice of the most important risk factors, whereas numerous ML approaches for building predictive models can be applied even to small datasets (see the sketch below). ML can also easily address interactions, which are difficult to investigate with traditional statistical methods that can mostly address interactions between the main determinant and single potential confounders. For instance, the effect of the surgical approach on survival in gastric cancer patients is modulated by tumor stage and histology [15]. However, this second-order interaction is difficult to highlight within a Cox model [16], as the interaction between lymphadenectomy and histology becomes apparent only after the first two years of follow-up. Furthermore, ML algorithms can analyze various data types (for instance, imaging data, demographic data, and laboratory findings) and integrate them into predictions of illness risk, diagnosis, prognosis, and applicable treatments [1].
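The following hedged sketch (synthetic "omics-like" data; scikit-learn assumed; all parameters arbitrary) illustrates two of the points just made: a held-out validation set exposes overfitting, and a Lasso penalty selects a handful of predictors when predictors vastly outnumber observations:

```python
# Lasso on a p >> n problem: sparse variable selection, plus a
# train/validation comparison as a basic overfitting check.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n, p = 60, 1500                      # few patients, many molecular features
X = rng.normal(size=(n, p))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=n)  # only 2 features matter

X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)
lasso = Lasso(alpha=0.2).fit(X_tr, y_tr)
print("features kept:", int(np.sum(lasso.coef_ != 0)), "of", p)
print("R^2 train / validation:", lasso.score(X_tr, y_tr), lasso.score(X_va, y_va))
# A large train-validation gap would signal the overfitting discussed above.
```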
Different Indications for the Two Computational Approaches

Taking into account the strengths and limitations discussed above, different fields of application can be proposed for traditional statistical techniques and ML. Traditional statistical approaches could be more suitable than ML when: (1) there is substantial a priori knowledge of the topic under study; (2) the set of input variables is limited and rather well defined in the current literature; (3) the number of observations largely exceeds the number of input variables. This situation is typically encountered in public health research, especially when performed on large healthcare utilization databases [17,18]. On the other hand, ML techniques have proven more appropriate in "omics" [19], where numerous variables are involved (genes, RNA molecules, proteins, metabolites). Indeed, with a large number of interactions (such as polygenicity and epistatic effects in genomics), ML might help disentangle the complex relationships between these components in determining their effect on the main outcome (i.e., illness risk).

Traditional statistical approaches are appropriate when the set of predictors can be defined a priori on the basis of reliable evidence on the specific topic. For instance, most articles dealing with gastric cancer surgery include a fixed set of covariates in survival models, comprising sex, age, tumor site, histology, and stage [12]. The selection of variables is important to avoid the introduction of strongly collinear variables, such as tumor stage and surgical efficacy (completeness of tumor removal), and this is usually done on the basis of a priori knowledge, as techniques for comparing non-nested models, such as the Akaike Information Criterion, are rather limited. This approach makes studies more comparable: for instance, the use of the same prognostic factors allows the comparison of datasets collected in different countries and makes it easy to develop internationally accepted prognostic scores [20]. On the other hand, this approach could slow the progress of clinical research, as few novel prognostic factors are addressed by each research project. ML allows us to take into account a huge number of potential predictors, avoiding an a priori choice among them. Hence, ML is better suited to major advances in diagnostics and therapeutics, and it has made an important contribution to the rapidly progressing therapeutic revolution fostered by "omics". However, whatever boundaries we can establish today between traditional statistics and ML will surely be overcome in the near future.

Integration between the Two Approaches

A traditional statistical approach requires us to choose a model that incorporates our knowledge of the system, while ML requires us to choose a predictive algorithm by relying on its empirical capabilities [19]. Justification for an inference model generally rests on whether it sufficiently captures the characteristics of the system. The choice of algorithm in pattern learning frequently rests on measures of past performance in similar scenarios. Inference and ML are complementary in pointing us to biologically meaningful conclusions. Of note, traditional statistical approaches and ML are often used in sequence: when trying to differentiate groups of patients based on their proteomic or metabolomic profiles, classical statistical techniques are first used for preliminary screening, while ML is used to finalize the analysis. For instance, Fabris et al. have recently identified a set of urinary proteins that allow discrimination between two different renal diseases, nephrolithiasis and medullary sponge kidney (MSK) [14]. Remarkably, this result was achieved in a very small series (22 patients with MSK and 22 patients with idiopathic calcium nephrolithiasis) by analyzing a large panel of urinary proteins (n = 1529). Traditional statistical techniques (multidimensional scaling, volcano plots, and ROC curves) allowed them to reduce the set of urinary proteins considered from 1529 to 16, while a Support Vector Machine (SVM) permitted a further reduction to 5 proteins. In a subsequent study on the same topic, Bruschi et al. first used partial least squares discriminant analysis and then SVM [21]. A schematic sketch of such a two-stage pipeline follows.
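This sketch is illustrative only — it mimics the screening-then-SVM sequence on synthetic data and is not the published analysis (scikit-learn and scipy assumed available):

```python
# Illustrative two-stage pipeline: univariate statistical screening first,
# then an SVM on the surviving features. Synthetic data, not the cited studies.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(44, 1529))          # 44 patients, 1529 urinary proteins
y = np.repeat([0, 1], 22)                # two renal diseases, 22 patients each
X[y == 1, :5] += 1.0                     # 5 genuinely discriminating proteins

# Stage 1: classical screening -- keep the proteins with the smallest t-test p-values.
pvals = ttest_ind(X[y == 0], X[y == 1]).pvalue
keep = np.argsort(pvals)[:16]            # shortlist, cf. the 1529 -> 16 reduction
# Stage 2: ML -- SVM assessed by cross-validation on the shortlisted proteins.
# (In practice the screening should be nested inside the CV to avoid selection bias.)
scores = cross_val_score(SVC(kernel="linear"), X[:, keep], y, cv=5)
print("cross-validated accuracy:", scores.mean())
```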
Diagnostic Process

The ability of ML to deliver diagnostic models reaching clinical levels of accuracy remains an objective not yet achieved, but it appears feasible. This objective faces the challenge of finding ways to work with all the available data, which highlights the relevance of interdisciplinary collaborative work. In the area of brain diseases such as depression, the Predicting Response to Depression Treatment (PReDicT) project has applied predictive analytics to help diagnose depression and identify the most effective treatment, with the overall goal of producing a commercially available emotional test battery for use in clinical settings [22]. In general, the use of ML to aggregate large datasets could significantly accelerate diagnostic processes [23]. In Table 1, we have summarized information on ML in medicine. Of the numerous opportunities for the use of ML in clinical practice, medical imaging workflows are likely to be the most impacted in the near term. ML-driven algorithms that automatically process two- or three-dimensional image scans to recognize clinical signs (e.g., tumors or lesions) or articulate diagnoses are now available, and some are progressing through regulatory steps toward the market [24]. Many of these use deep learning, a form of ML based on layered representations of variables, referred to as artificial neural networks. The latter can learn extremely complex relationships between features and labels and have been shown to exceed human abilities in tasks such as image classification. ML can improve diagnostic accuracy by analyzing not only medical images but also textual records. Indeed, ML allowed the identification of varicella cases in a pediatric electronic medical record database with a positive predictive value of 63.1% and a negative predictive value of 98.8% [25].

Predicting Prognosis

ML has been shown to achieve the same or better prognostic definition in several clinical conditions, as compared with conventional statistical methods. In particular, ML can better predict clinical deterioration on the ward [26], mortality in acute coronary syndrome [27], survival in patients with epithelial ovarian cancer [28], complications of bariatric surgery [29], and the risk of metabolic syndrome [30]. On the other hand, other studies have reported that ML and conventional statistical methods have similar prognostic usefulness in predicting mortality in intensive care units [31], readmission of patients hospitalized for heart failure [32], and all-cause mortality and cardiovascular events [33]. Such head-to-head comparisons typically contrast the discrimination of a conventional regression model with that of an ML model on the same data, as in the sketch below.
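A hedged sketch of such a comparison (synthetic data; scikit-learn assumed; not a re-analysis of the cited studies) scores a conventional logistic model and a random forest by cross-validated AUC on the same data:

```python
# Head-to-head prognostic comparison on identical data: logistic regression
# vs. random forest, scored by cross-validated AUC. Illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(400, 12))                     # hypothetical clinical predictors
y = (X[:, 0] + X[:, 1] * X[:, 2] > 0).astype(int)  # outcome with an interaction term

for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(random_state=0))]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(name, "cross-validated AUC:", round(auc, 3))
```

On data with a strong interaction, the flexible model tends to win; on data matching the regression's assumptions, the two are usually comparable — which mirrors the mixed findings cited above.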
Drug Discovery

ML can facilitate various phases of the early stages of drug discovery, from the initial screening of drug compounds to predicted success rates based on biological factors. This includes R&D technologies such as next-generation sequencing. Precision medicine, which relies on the recognition of pathophysiological mechanisms and might serve the development of alternative therapeutic pathways, appears to be the most innovative area. Much of this work involves unsupervised learning, which is in large part still limited to identifying patterns in data without making predictions (the latter still being the realm of supervised learning). Data from experimentation or manufacturing processes have the potential to help pharmaceutical manufacturers reduce the time required to produce drugs, leading to lower costs and better replication. Adopting ML approaches could play a significant role in discovering new molecules or repurposing existing drugs for rare conditions or epidemics where urgency is key. With the increase in antibiotic resistance, ML techniques are already proving quite powerful in identifying new antibacterial agents in a faster and potentially less expensive way [23]. For example, AI recently allowed the discovery of halicin, a compound structurally divergent from conventional antibiotics, which acts against Clostridium difficile and pandrug-resistant Acinetobacter baumannii infections in murine models [34].

Personalized Treatment

Personalized medicine, which should lead to the identification of more effective treatments based on individual health data paired with predictive analytics, is closely related to better disease assessment. To meet the complexity of personalized medicine, new types of trials have been developed, such as basket, umbrella, and platform trials. The area is presently governed by supervised learning, which permits physicians, for instance, to select from limited sets of diagnoses or to estimate patient risk based on symptoms and genetic information. Over the next decade, the increased use of micro-biosensors and devices, as well as mobile apps with more sophisticated health measurement and remote monitoring capabilities, will provide an additional surge of data that can be used to facilitate research, development, and the assessment of treatment efficacy. This type of personalized treatment has significant consequences for the individual in terms of health optimization, but also for reducing overall healthcare costs. If more patients adhere to prescribed drug or treatment plans, for instance, the resulting reduction in healthcare costs will propagate through the system. Using ML in these settings depends on the collection and analysis of huge amounts of data, but with the emergence of big data comes the challenge of drawing statistical inferences from complex datasets in order to identify genuine patterns, while also restraining false classifications and making decisive judgments on diagnosis and treatment possibilities. Statistical bioinformatics has proven very useful in proteomic and genomic data analysis, and the adoption of ML to build predictors and classifiers has shown significant potential [23].
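To illustrate the supervised/unsupervised distinction drawn in the last two sections, the following hedged sketch (synthetic biomarker profiles; scikit-learn assumed) discovers patient subgroups without any labels — pattern finding rather than prediction:

```python
# Unsupervised-learning sketch: clustering patients into subgroups with no labels.
# Synthetic biomarker profiles from two hypothetical latent patient subtypes.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
profiles = np.vstack([rng.normal(0, 1, (100, 8)),    # subtype A
                      rng.normal(3, 1, (100, 8))])   # subtype B
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
print("patients per discovered subgroup:", np.bincount(labels))
```

Turning such discovered subgroups into risk estimates or treatment choices is the step that moves the analysis back into supervised learning.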
Discussion

ML has the potential to transform the way medicine works [35]. However, past enthusiasm has not always been matched by a corresponding interest from healthcare providers and operators. Examples where ML has done well: Gulshan et al. applied deep learning to build an algorithm for the automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus photographs [36]. Bejnordi et al. recently evaluated the performance of automated deep learning algorithms at identifying metastases in hematoxylin and eosin-stained tissue sections of lymph nodes of women with breast cancer and compared it with pathologists' diagnoses in a diagnostic setting [37]. There are several similar ML studies on images and challenges in radiology, pathology, dermatology, ophthalmology, gastroenterology, cardiology, etc. ML is beginning to have an impact in medicine at three levels: for clinicians, predominantly via rapid, accurate image interpretation; for patients, by enabling them to process their own data to promote health; and for health systems, by improving workflow and potentially reducing medical errors [38]. Steele et al. observed that data-driven models applied to a longitudinal dataset can outperform conventional models for prognosis, without data pre-processing or the imputation of missing values, when predicting patient mortality in coronary artery disease [39].

Examples where ML has done poorly: Esteva et al. recently demonstrated the effectiveness of deep learning in dermatology, as regards both general skin conditions and specific cancers [40]. However, they also observed that, in the set of biopsy images, if an image contained a ruler, the algorithm was more likely to call the lesion malignant, because the presence of a ruler was associated with an increased likelihood that a lesion was cancerous. There is no clear line between ML models and traditional statistical models, and a recent article summarizes the relationship between the two [41]. However, sophisticated new ML models (e.g., those used in "deep learning" [42,43]) are well suited to learning from the complex and heterogeneous kinds of data generated by current clinical care, such as medical notes entered by doctors, medical images, continuous monitoring data from sensors, and genomic data, in order to make therapeutically significant predictions. Most ML classifiers remain unreliable for risk prediction: much larger sample sizes are possibly required to obtain reliable (calibrated) risk predictions [44] than reliable (diagnostic) classifications.

ML is creating a paradigm shift in medicine, from basic research to clinical applications, but it should be implemented carefully. Vulnerabilities such as data security and adversarial attacks, where malicious manipulation of the input can produce a complete misdiagnosis and could be exploited for fraudulent purposes, present a real threat to the technology [23]. However, these vulnerabilities can be met with adequate effort. In the 1970s and 1980s, computerized tomography, based on the automatic processing of huge volumes of X-ray images, revolutionized radiodiagnostics, enabling radiologists to overcome the so-called "grey barrier". The use of CT allowed radiologists to strengthen their role in the healthcare system. However, the ML revolution seems to threaten one of physicians' most exclusive tasks, i.e., diagnostic activity. The new generation of practitioners should accept the challenge of ML by learning how to understand, develop, and eventually control it, so as to improve patient care [24]. ML can analyze large amounts of data and turn that information into functional tools that can assist both doctors and patients. The increased integration of ML into everyday medical applications might improve the efficiency of treatments and lower costs in various ways.
The challenge is to combine the big data provided by genomics, transcriptomics, proteomics, and metabolomics with complex systems science, systems biology, and systems medicine of the body [45]. ML tools can be built for system-level interventions, including improving patient selection and enrolment in clinical trials, decreasing patient readmissions, and automating patient follow-up for the detection of complications.

Conclusions

As technology widens and innovations and ideas pour in, an enormous volume of data is being generated in modern healthcare. Proper analytical methods are key to obtaining the maximum insight from the collected data. The boundary between traditional statistics and ML remains a topic of debate [46]. Some approaches fall squarely into one or the other domain, but many are used in both. Both statistics and ML can be of value, with traditional statistics being more useful in public health and ML in omics science. Conventional statistical approaches and ML are complementary in directing us to biologically significant conclusions: the ideal approach is to integrate the two technologies in a way that delivers added value. Our review has provided insights into the differences between conventional statistical approaches and ML in healthcare, which in turn may help us to better integrate technology and medical care.
Molecular Dynamics Simulations of Nanochannel Flows at Low Reynolds Numbers

In this paper we use molecular dynamics (MD) simulations to study nanochannel flows at low Reynolds numbers and present some new and interesting results. We investigate a simple fluid flowing through channels of different shapes at the nano scale. Weeks-Chandler-Andersen potentials with different interaction strength factors are adopted for the interaction forces between fluid-fluid and fluid-wall molecules. In order to keep the temperature at the required level, a Gaussian thermostat is employed in our MD simulations. Comparing velocities and other flow parameters obtained from the MD simulations with those predicted by the classical Navier-Stokes equations at the same Reynolds numbers, we find that the two sets of results agree qualitatively in the central area of a nanochannel. However, large deviations usually exist in areas far from the core. For certain complex nanochannel geometries, the MD simulations reveal the generation and development of nano-size vortices, due to the large momenta of molecules in the near-wall region, which the traditional Navier-Stokes equations with the no-slip boundary condition at low Reynolds numbers cannot predict. It is shown that although the Navier-Stokes equations are still partially valid, they fail to give full details of nanochannel flows.

I. INTRODUCTION

The behavior of a flow at the nanometer scale has been a subject of interest in recent years. As a typical flow in a reduced-size fluid mechanics system, the nanochannel flow embodies a series of special properties and has attracted much attention. Understanding the physical properties and dynamical behavior of nanochannel flows is of great importance for the theoretical study of fluid dynamics and for many engineering applications in physics, chemistry, medicine, and electronics. It is obvious that when the system length reduces to the nano scale, the behavior of the flow is mainly determined by the movements and structure of many discrete particles. Molecular dynamics (MD) simulation, based on the statistical mechanics of nonequilibrium liquids [1], is an effective way to describe the details of a flow at the nano scale. At the same time, MD simulations also calculate the physical properties of nanofluids by solving the equations of molecular motion. Its ability to replace the continuum description for nano-scale flows makes MD a powerful tool for studying many fundamental nanofluid problems that would be extremely difficult to implement in the laboratory at the present time. MD simulation has been used successfully to study aspects of molecular hydrodynamics, such as lubrication, wetting, and coating problems. Simple and complex channel flows have been investigated by several researchers, e.g. Todd et al. [2] and Jabbarzadeh et al. [3]. Couette and Poiseuille flows of simple Lennard-Jones liquids or polymers in nanochannels a few molecular diameters wide demonstrate some new features and have also attracted much attention. The salient features of such simple flows are the departure from the Navier-Stokes (NS) theory [4] and dynamic properties that differ from those of macroscopic systems [5].
It is well known that the quadratic velocity profile in a Poiseuille flow may be successfully predicted in the Newtonian fluid mechanics framework when the fluid density is assumed to be a temperature-dependent function that does not vary appreciably over length and time scales comparable to the molecular size and molecular relaxation time. The transport coefficients are also considered to be space- and time-independent in the classical continuum theory. However, as we focus attention on flows at the nano scale, the density becomes a function of the position and time of the fluid particles; that is, the density is no longer uniform in space and time. The irregular density distribution also affects other physical properties and hence changes the dynamical behavior of a nanofluid system. Travis and Gubbins [6] reported that the density oscillates along the flow direction with a wavelength of the order of a molecular diameter in a channel about 4 molecular diameters wide. They also verified that the quadratic velocity profile is approximately recovered in a planar Poiseuille flow for a channel of 10 molecular-diameter width, but that this behavior disappears when the channel width is less than 10 molecular diameters; they further indicated that disagreement with the NS prediction occurs when the channel width is of the order of 5 molecular diameters. Koplik and Banavar [7] pointed out that the classical fluid mechanics theory, with no-slip boundary conditions, can be applied safely to Couette and Poiseuille flows in channels wider than 10 molecular diameters. Based on such assumptions, Fan et al. [5] carried out simulations of a periodic nozzle flow of a Weeks-Chandler-Andersen (WCA) liquid. The flow that occurs in a practical nanochannel may be more complicated than a simple Poiseuille flow, in that many factors, such as the channel geometry, flow type, and boundary conditions, may be involved altogether. The challenge faced in this area is to find the deviation between the MD simulation and the classical NS prediction, and to obtain the qualitative difference between them with a physical explanation. In this paper, we focus mainly on flows of a simple liquid in various channels. We employ MD simulations to explore the dynamic details of nanochannel flows and to examine the limitations of the NS solutions for a liquid moving along nanochannels.

II. PHYSICAL MODELS

In this paper, we assume the liquids are composed of large numbers of spherical molecules. As a rigorous quantum-mechanical approach is not feasible at present for a system of more than a few molecules, Newton's second law is regarded as valid for describing the molecular motion in the system. For a fluid, the momentum p_i of molecule i satisfies Newton's second law,

dp_i/dt = F_i + F_e,

where F_i is the intermolecular force on molecule i from the other molecules and F_e is the external force. In a simulation, the external force may be a body force, e.g. gravity. The velocity v_i of molecule i is related to the fluid molecule mass m by v_i = p_i/m. The molecular interaction forces usually depend on the physical properties and spatial structure of the fluid and wall molecules. A model taking into account both the fluid/fluid and fluid/wall intermolecular interactions is presented in this paper to obtain the velocity and stress distributions in a nanochannel flow. For the fluid/fluid interaction, both the repulsion and dispersion effects of molecules are considered.
The WCA potential, a modification of the Lennard-Jones potential, is adopted in the MD simulations of this paper:

U_WCA(r) = 4ε[(σ/r)^12 − (σ/r)^6] + ε for r ≤ r_c, and U_WCA(r) = 0 for r > r_c,

where σ is the diameter of a molecule, ε is the energy parameter characterizing the molecular interaction strength, and r_c = 2^(1/6)σ stands for the cutoff distance. For the fluid/wall interaction, noting the existence of the fluid-solid melting line and probable substance interchange at the boundary, a realistic sixth-power soft-sphere potential is used to simulate the nanofluid system, where ε_fw is the fluid-wall energy parameter. For the weak fluid-wall interaction we take ε_fw = ε, and for the strong fluid-wall interaction, ε_fw = 3.5ε. The factor 3.5 is obtained by matching the argon potential well on a smooth wall (Heinbuch & Fischer [8]). In MD simulations, the solid walls may be represented by several layers of wall molecules located at the sites of a planar face-centered lattice or a body-centred cubic lattice. Each wall molecule is assumed to be anchored at its lattice site by a stiff spring, and a Hookean spring force is introduced as the external force on the wall molecules. For simplicity, we adopt the two-layer linear spring model, with potential

U_spring = Cδ²/2,

where δ is the displacement of the wall molecule from its lattice site and C is the stiffness of the spring. To keep the molecular oscillation about its site small and to overcome the slip problem, a relatively soft wall spring with a stiffness of C = 75ε/σ² is adopted.

From the viewpoint of a microscopic system of particles, the stress tensor is simply a result of the spatial distribution and the dynamic properties generated by the movements of the molecules in the system. According to the Irving-Kirkwood method [9], the contribution of each particle to the stress tensor may be divided into two parts: a kinetic component related to the velocity distribution and a configurational component related to the position distribution of the particles. The stress tensor may be written in the standard form

P_αβ = n ⟨ m u_α u_β ⟩ + (1/(2V)) ⟨ Σ_i Σ_{j≠i} r_ijα F_ijβ ⟩.

The angular brackets denote the ensemble average and n is the density. The first term on the right-hand side is the contribution from momentum transfer, where m is the molecular mass and u_α and u_β are the peculiar velocity components of a molecule in the α and β directions. The second term represents the potential contribution, where r_ijα and F_ijβ are the α and β components of the distance vector and the potential force vector between two molecules. For a simple channel flow, the viscosity of the fluid is calculated from the formula µ = τ/γ, where the shear rate γ is usually obtained by a finite difference method.
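For reference, here is a minimal sketch of the WCA pair potential and the corresponding force magnitude in reduced units (a sketch under the standard convention r_c = 2^(1/6)σ reconstructed above, not the authors' MDFluid implementation):

```python
# WCA pair potential and force magnitude, reduced units (sigma = epsilon = 1 by default).
import numpy as np

def wca_potential(r, sigma=1.0, eps=1.0):
    """Cut-and-shifted Lennard-Jones: U(r_c) = 0 at r_c = 2**(1/6) * sigma."""
    rc = 2.0 ** (1.0 / 6.0) * sigma
    sr6 = (sigma / r) ** 6
    u = 4.0 * eps * (sr6 ** 2 - sr6) + eps
    return np.where(r <= rc, u, 0.0)

def wca_force(r, sigma=1.0, eps=1.0):
    """Magnitude of -dU/dr; zero beyond the cutoff."""
    rc = 2.0 ** (1.0 / 6.0) * sigma
    sr6 = (sigma / r) ** 6
    f = 24.0 * eps * (2.0 * sr6 ** 2 - sr6) / r
    return np.where(r <= rc, f, 0.0)

r = np.linspace(0.9, 1.3, 5)
print(wca_potential(r), wca_force(r))
```

Because the potential is cut and shifted at the minimum of the Lennard-Jones curve, the interaction is purely repulsive, which is what makes the WCA fluid a convenient model of a simple liquid.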
III. SIMULATION METHOD

We have outlined in the previous section the molecular models to be used in our MD simulations. The details of simulating a liquid flowing through various nanochannels are given here. For brevity, we use a reduced system to perform the simulations; the units of the characteristic quantities used for constructing the dimensionless parameters are listed in Table 1. As the work done by the external force may be partially converted into heat in the system, and the wall/fluid interaction also contributes to the increase of the fluid and wall temperatures, it is important to keep the nano-scale fluid system at a fixed temperature during the simulation. There are usually two ways to keep the temperature of the system constant: using a thermostat, or adjusting the temperature by re-scaling the molecular peculiar velocities during the simulation. The latter method requires more computational work, as the approach is to adjust the wall temperature to a given constant at each time step [8]. It produces satisfactory results in many cases as long as the wall and fluid temperatures are linked appropriately. As regards the use of thermostats, some researchers have argued that although a thermostat imposes additional constraints on the equations of motion, it improves computational efficiency when applied to more complex systems and sometimes even enhances the accuracy of the simulation [3]. In order to avoid unphysical effects in the simulation, a Gaussian thermostat [1], which has proven to be a reliable thermostatting method, is adopted here to obtain reasonable simulation results. The Gaussian method adds to the equation of motion of each molecule an external force of the form

−ξ p_i, (7)

where p_i is the peculiar momentum of molecule i and the coefficient ξ is the thermostatting multiplier that keeps the kinetic temperature of the system fixed,

ξ = Σ_i (F_i · p_i − γ_i p_ix p_iy) / Σ_i p_i · p_i,

where γ_i is the local shear rate of the flow field at r_i and p_ix and p_iy are the x and y components of the peculiar momentum of molecule i.

In this work we have developed a FORTRAN code called MDFluid, which consists of three modules: a data preprocessor, the MD simulation, and the data output. The first part generates the initial configuration of the computational domain. The second part is the core module; its main function is to carry out the MD simulation of a nanofluid and to obtain the flow field from given initial and boundary conditions. It is open to different algorithms for the computation of velocity and temperature. The last part is the data output interface. Currently MDFluid is suitable only for a few simple nano flows and is under further development.

For simplicity, the mass of the wall molecules is assumed to be identical to that of the fluid molecules. The initial configuration of the wall molecules is generated separately by the preprocessing module and read in as input data. The total number of molecules depends on the size and geometry of the computational domain and on the densities of the fluid and wall materials. All fluid molecules are initially located at the sites of a face-centered lattice. The initial velocities of the inner fluid and wall molecules are set randomly according to the given temperature, and the inlet and outlet are assigned the same velocity profile owing to the periodic boundary condition. At the beginning of the simulation, the fluid molecules are allowed to move without the external force until a thermodynamic equilibrium state is reached. Then the external force field is switched on and the non-equilibrium simulation starts. During the course of the simulation, the equations of motion are solved by the leap-frog method. Periodic boundary conditions are applied on the fluid boundaries of the computational domain.
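The following sketch (reduced units, equal masses m = 1; the homogeneous case γ_i = 0 is assumed, and peculiar momenta are taken equal to the velocities) illustrates the Gaussian thermostatting multiplier and one leap-frog step with the −ξp_i force included; it is an illustration, not the MDFluid code:

```python
# Gaussian isokinetic thermostat and one leap-frog step, reduced units, m = 1.
import numpy as np

def gaussian_xi(forces, p):
    """Multiplier xi = sum(F . p) / sum(p . p), chosen so that d/dt sum(p^2) = 0."""
    return np.sum(forces * p) / np.sum(p * p)

def leapfrog_step(pos, vel_half, forces, dt, xi):
    """Advance positions and half-step velocities with total force F - xi * p."""
    vel_half = vel_half + (forces - xi * vel_half) * dt   # v(t + dt/2)
    pos = pos + vel_half * dt                             # x(t + dt)
    return pos, vel_half

# Tiny usage example with arbitrary numbers.
pos = np.zeros((4, 2))
vel = np.array([[1.0, 0.0]] * 4)
F = np.full((4, 2), 0.1)
xi = gaussian_xi(F, vel)
pos, vel = leapfrog_step(pos, vel, F, dt=0.005, xi=xi)
print(xi, pos[0], vel[0])
```

The constraint force −ξp_i removes exactly the heating produced by the external driving, which is why the kinetic temperature stays fixed without any velocity re-scaling.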
Verifying and validating the NS equations for nano-scale systems is obviously an important and serious task at present. In this paper, we check the validity of the NS equations for nanochannel flows. Considering a body force exerted on the system, the governing equations in dimensionless form may be written with the Reynolds number defined as Re = ρUL/µ, where the fluid density ρ and viscosity µ are assumed to be constants and U and L are the characteristic velocity and length of the domain. The dimensionless body force is F_body = (ReLg/U²) e_x, where g is the gravity constant and e_x is the unit vector in the x direction. We use the finite element method (FEM) to obtain the flow field of a nanochannel flow. The same initial and periodic boundary conditions are applied in the FEM simulations. To compare the MD and FEM results, we use the Reynolds number as an adjustable parameter in the FEM to match the flux of the MD simulation. In the framework of continuum fluid mechanics, the Reynolds number plays an important role in analyzing the flow field: any two solutions with the same Reynolds number and initial/boundary conditions must be similar. In nano flows, the characteristic length reduces to the nano size and the Reynolds number becomes extremely small. In this sense, all nano flows may be classified as low Reynolds number flows, and knowledge of low Reynolds number flows can be inherited in some ways. In this paper we wish to find the similarities and differences between the NS equations and the MD simulations when they are applied to the same nanofluid system.

IV. RESULTS AND DISCUSSION

Let us examine five channels with different geometries. The first is a simple plane channel, whereas the other four have concave or convex surfaces. The computational domain for the planar flow of a WCA liquid is shown in Figure 1. The system is surrounded by periodic images of itself in the x dimension. The computational domain is 0 < x < 200 and -10 < y < 10; the fluid density and viscosity may be determined during the simulation. Periodic dynamic conditions are also used in the course of the simulation. The spring constant of the wall molecules is C = 75 and the gravity constant is g = 0.5. We assume the fluid/fluid interaction energy parameter ε = 1, and the wall/fluid interaction energy parameter is ε_fw = ε or ε_fw = 3.5ε according to the interaction intensity. For convenience, we compute the five nanochannel flows with the Reynolds number, based on the width of the channel, equal to 3 (first three cases) and 5 (last two cases). The x-direction velocity profile at the position x = 100 is plotted in Figure 3, from which we find that the velocities obtained by the MD simulation agree very well with the FEM results. This verification for a simple Poiseuille flow encourages us to apply MDFluid to more complex cases.
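As a reference curve for such comparisons, the analytic NS solution for a body-force-driven planar channel with no slip at y = ±h is the parabola u(y) = ρg(h² − y²)/(2µ). A sketch in reduced units follows, with ρ and µ as placeholders to be taken from the MD measurements:

```python
# Analytic Poiseuille profile for a body-force-driven planar channel,
# u(y) = rho * g * (h**2 - y**2) / (2 * mu), no slip at y = +/- h.
# Reduced units matching the text (h = 10, g = 0.5); rho and mu are
# placeholder values, to be replaced by the densities/viscosities
# measured in the MD simulation.
import numpy as np

def poiseuille_u(y, h=10.0, g=0.5, rho=0.8, mu=2.0):
    return rho * g * (h ** 2 - y ** 2) / (2.0 * mu)

y = np.linspace(-10.0, 10.0, 9)
print(poiseuille_u(y))   # compare with bin-averaged MD velocities at x = 100
```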
In Case 2, a channel with concave surfaces, the MD simulation reveals the generation of nano-size vortices near the walls, which the FEM solution does not capture. This observation is further confirmed by the x-direction velocity profile in Figure 5, from which we notice molecules with clearly negative velocities near the walls (y = 7.5 and y = -7.5). The reverse momenta induce the generation and development of nano-size vortices under the no-slip boundary conditions. Case 3 is a similar channel with convex surfaces. We again set 1600 fluid molecules, with 250 wall molecules and strong fluid/wall interaction, in the MD simulation. In this case, the two predictions of the flow field are analogous (see Figure 6); we cannot see any vortices in either result. One possible reason is that the stream velocities may be somewhat small compared with those in Case 2 (see Figures 5 and 7). In Case 2, the concave surfaces make the middle of the channel act like a contraction nozzle and increase the stream velocity; in the present case, on the other hand, the corresponding momentum cannot form vortices in the cavity. We next consider the channel with semi-circular concave surfaces (Case 4). In order to observe the flow more clearly, we increase the system to 2100 fluid molecules and 360 wall molecules in the MD simulation, and the Reynolds number is adjusted to 5. The nano vortices reappear in the MD simulation, while the FEM still predicts no abrupt velocity changes after the middle nozzle area. In Figure 9, we again find negative velocities near the walls. Case 5 is a simple alteration of Case 4: the concave surfaces are replaced by semi-circular convex surfaces, and all other conditions are kept the same. Vortices occur in the dents, resolved by the large number of molecules in the MD simulation, while the FEM shows no striking flow features (see Figure 10). The velocity profile again shows negative velocities near the wall region.

From the above analysis, we find that: (1) The NS equations and the MD simulations are both applicable to nanochannel flows, and the velocity profiles remain quadratic for nano Poiseuille flows in the MD simulations (see Figure 12). (2) The MD simulation can capture nano-size vortices in flows at low Reynolds numbers; these vortices may be generated near the walls in complex flows owing to the large molecular momenta. Meanwhile, the continuum NS equations cannot predict nano flows well near the walls, as they fail to give the flow details over the whole field. To this point, we conclude that large deviations between the NS equations and the MD simulations exist in areas far from the core of the flow. (3) A prerequisite for a good MD simulation is to include as many molecules as possible in the system and to treat the wall molecules and the boundary conditions appropriately.

V. SUMMARY

In this paper we have adopted MD simulations to investigate various nanochannel flows at low Reynolds numbers and presented some new and interesting results. The WCA potentials with different interaction strength factors are employed for the interaction forces between fluid-fluid and fluid-wall molecules. In order to keep the temperature at the required level, a Gaussian thermostat is introduced in our MD simulations. Comparing the flow fields obtained from the MD simulations with the predictions of the classical NS equations at the same Reynolds numbers, we find that the two sets of results agree qualitatively in the central area of a nanochannel. However, large deviations usually occur in areas far from the core. Nano-size vortices due to the large momenta of molecules in the near-wall region may be found in some complex nanochannel flows, but the traditional NS equations with the no-slip boundary condition at low Reynolds numbers cannot predict such phenomena. It is shown that although the NS equations are still partially valid for nano flows, they fail to give full details of the flow fields.