A space–time DG method for the Schrödinger equation with variable potential

We present a space–time ultra-weak discontinuous Galerkin discretization of the linear Schrödinger equation with variable potential. The proposed method is well-posed and quasi-optimal in mesh-dependent norms for very general discrete spaces. Optimal h-convergence error estimates are derived for the method when test and trial spaces are chosen either as piecewise polynomials, or as a novel quasi-Trefftz polynomial space. The latter allows for a substantial reduction of the number of degrees of freedom and admits piecewise-smooth potentials. Several numerical experiments validate the accuracy and advantages of the proposed method.

Introduction

In this work we are interested in the approximation of the solution to the time-dependent Schrödinger equation on a space-time cylinder Q_T = Ω × I, where Ω ⊂ R^d (d ∈ N) is an open, bounded polytopic domain with Lipschitz boundary ∂Ω, and I = (0, T) for some final time T > 0:

ψ(x, 0) = ψ_0(x) on Ω. (1.1)

Here i is the imaginary unit; ∂_{n_x}(·) is the normal derivative-in-space operator; V : Q_T → R is the potential energy function; ϑ ∈ L^∞(Γ_R × I) is a positive "impedance" function; the Dirichlet (g_D), Neumann (g_N), Robin (g_R) and initial condition (ψ_0) data are given functions; Γ_D, Γ_N, Γ_R form a polytopic partition of ∂Ω.

The model problem (1.1) has a wide range of applications. In quantum physics [25], the solution ψ is a quantum-mechanical wave function determining the dynamics of one or multiple particles in a potential V. In electromagnetic wave propagation [24], it is called the "paraxial wave equation" and ψ is a function associated with the field component in a two-dimensional electromagnetic problem where the energy propagates at small angles from a preferred direction. In such problems, the function V depends on the refractive index and the wave number. In underwater sound propagation [22], it is referred to as the "parabolic equation" and ψ describes a time-harmonic wave propagating primarily in one direction. In molecular dynamics [2], by neglecting the motion of the atomic nuclei, the Born-Oppenheimer approximation leads to a Schrödinger equation in the semi-classical regime.

Space-time Galerkin methods discretize all the variables in a time-dependent PDE at once; this is in contrast with the method of lines, which combines a spatial discretization and a time-stepping scheme. Space-time methods can achieve high convergence rates in space and time, and provide discrete solutions that are available on the whole space-time domain.
The literature on space-time Galerkin methods for the Schrödinger equation is very scarce. In fact, the standard Petrov-Galerkin formulation for the Schrödinger equation, i.e., the analogue of the formulation proposed in [32] for the heat equation, is not inf-sup stable; see [14, Sect. 2.2]. In [20], Karakashian and Makridakis proposed a space-time method for the Schrödinger equation with nonlinear potential, combining a conforming Galerkin discretization in space and an upwind DG time-stepping. This method reduces to a Radau IIA Runge-Kutta time discretization in the case of constant potentials. Moreover, under some restrictions on the mesh that are necessary to preserve the accuracy of the method, it allows for changing the spatial mesh on each time-slab, but not for local time-stepping. A second version of the method, obtained by enforcing the transmission of information from the past through a projection, was proposed in [21]. This version reduces to a Legendre Runge-Kutta time discretization in the case of constant potentials. Recently, some space-time methods based on ultra-weak formulations of the Schrödinger equation have been designed. The well-posedness of such formulations requires weaker assumptions on the mesh. In [8], Demkowicz et al. proposed a discontinuous Petrov-Galerkin (DPG) formulation for the linear Schrödinger equation. The method is a conforming discretization of an ultra-weak formulation of the Schrödinger equation in graph spaces. Well-posedness and quasi-optimality of the method follow directly from the inf-sup stability (in a graph norm) of the continuous Petrov-Galerkin formulation. In [14], Hain and Urban proposed a space-time ultra-weak variational formulation for the Schrödinger equation with optimal inf-sup constant. The formulation in [14] is closely related to the DPG method in [8], but differs in the choice of the test and trial spaces. While for the method in [8] one first fixes a trial space and then constructs a suitable test space, the method in [14] requires the choice of a conforming test space, and the trial space is then defined accordingly. We are not aware of publications proposing space-time DG methods for the Schrödinger equation other than [8, 14, 20, 21], outlined in this paragraph, and the space-time Trefftz-DG method in [11, 12], which motivated the present paper.
Trefftz methods are Galerkin discretizations with test and trial spaces spanned by local solutions of the considered PDE. Trefftz methods with lower-dimensional spaces than standard finite element spaces, but similar approximation properties, have been designed for many problems, e.g., Laplace and solid-mechanics problems [31]; the Helmholtz equation [16]; the time-harmonic [15] and time-dependent [10] Maxwell's equations; the acoustic wave equation in second-order [1] and first-order [27] form; the Schrödinger equation [11]; among others. Nonetheless, pure Trefftz methods are essentially limited to problems with piecewise-constant coefficients, as for PDEs with varying coefficients the design of "rich enough" finite-dimensional Trefftz spaces is in general not possible. A way to overcome this limitation is the use of quasi-Trefftz methods, which are based on spaces containing functions that are only approximate local solutions to the PDE. The earliest quasi-Trefftz spaces are, in essence, the generalized plane waves used in [17] for the discretization of the Helmholtz equation with smoothly varying coefficients. More recently, a quasi-Trefftz DG method for the acoustic wave equation with piecewise-smooth material parameters was proposed in [19], where some polynomial quasi-Trefftz spaces were introduced. As an alternative idea, the embedded Trefftz DG method proposed in [23] does not require the local basis functions to be known in advance, as they are simply taken as a basis for the kernel of the local discrete operators in a standard DG formulation. This corresponds to a Galerkin projection of a DG formulation with a predetermined discrete space onto a Trefftz-type subspace. In practice, it requires the computation of singular-value or eigenvalue decompositions of the local matrices.

In [11], the authors proposed a space-time Trefftz-DG method for the Schrödinger equation with piecewise-constant potential, whose well-posedness and quasi-optimality in mesh-dependent norms were proven for general discrete Trefftz spaces. Optimal h-convergence estimates were shown for a Trefftz space consisting of complex-exponential wave functions.

In this work we propose a space-time DG method for the discretization of the Schrödinger equation with variable potentials, extending the formulation of [11] to more general problems and discrete spaces. The main advantages of the proposed method are the following:
• The proposed ultra-weak DG variational formulation of (1.1) is well-posed, stable, and quasi-optimal in any space dimension for an almost arbitrary choice of piecewise-defined discrete spaces and variable potentials.
• A priori error estimates in a mesh-dependent norm can be obtained by simply analyzing the approximation properties of the local spaces.
• The method naturally allows for non-matching space-like and time-like facets, and all our theoretical results hold under standard assumptions on the space-time mesh, which makes the method suitable for adaptive versions and local time-stepping.
• Building on [19], for elementwise smooth potentials, we design and analyze a quasi-Trefftz polynomial space with approximation properties similar to those of full polynomial spaces but with much smaller dimension, thus substantially reducing the total number of degrees of freedom required for a given accuracy (see the dimension counts sketched right after this list).
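To give a concrete idea of this reduction, the following minimal Python sketch compares the dimension r_{d+1,p} = dim P^p(R^{d+1}) of the full space-time polynomial space with the dimension n_{d+1,p} of the quasi-Trefftz space; the count n_{d+1,p} = dim P^p(R^d) + dim P^{p-1}(R^d) used here is the one that follows from the basis construction of Section 4.3 (Proposition 5).

from math import comb

def dim_full(d, p):
    """dim P^p(R^{d+1}): polynomials of total degree <= p in d space + 1 time variables."""
    return comb(p + d + 1, d + 1)

def dim_quasi_trefftz(d, p):
    """Dimension of the quasi-Trefftz space QT^p(K), counted as in Section 4.3:
    dim P^p(R^d) + dim P^{p-1}(R^d), i.e. the free traces of the function and of
    its first x_1-derivative on a hyperplane through the expansion point."""
    return comb(p + d, d) + comb(p - 1 + d, d)

for d in (1, 2, 3):
    for p in (2, 4, 6):
        r, n = dim_full(d, p), dim_quasi_trefftz(d, p)
        print(f"d={d}, p={p}: dim P^p = {r:4d}, dim QT^p = {n:4d}, ratio = {r / n:.2f}")

For instance, for d = 2 and p = 6 the full space has 84 local degrees of freedom per element versus 49 for the quasi-Trefftz space, and the gap widens as d and p grow.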
Structure of the paper: In Section 2 we introduce some notation on the space-time meshes to be used and the proposed ultra-weak DG variational formulation on abstract spaces. Section 3 is devoted to the analysis of well-posedness, stability and quasi-optimality of the method. In Sections 4.2 and 4.3 we prove optimal h-convergence estimates for the method when the test and trial spaces are taken as the space of piecewise polynomials or a novel quasi-Trefftz space, respectively. In Section 5 we present some numerical experiments that validate our theoretical results and illustrate the advantages of the proposed method. We end with some concluding remarks in Section 6.

Space-time mesh and DG notation

Let T_h be a non-overlapping prismatic partition of Q_T, i.e., each element K ∈ T_h can be written as K = K_x × K_t for a d-dimensional polytope K_x ⊂ Ω and a time interval K_t ⊂ I. We call "mesh facet" any intersection F = ∂K_1 ∩ ∂K_2 or F = ∂K_1 ∩ ∂Q_T, for K_1, K_2 ∈ T_h, that has positive d-dimensional measure and is contained in a d-dimensional hyperplane. We denote by n_F = (n^x_F, n^t_F) ∈ R^{d+1} one of the two unit normal vectors orthogonal to F, chosen with n^t_F = 0 or n^t_F = 1. We assume that each internal mesh facet F is either a space-like facet, if n^x_F = 0, or a time-like facet, if n^t_F = 0. We further denote the mesh skeleton and its parts: F^time_h is the union of all the internal time-like facets, and F^space_h is the union of all the internal space-like facets.

We employ the standard DG notation for the averages {{·}} and for the space and time jumps of piecewise complex scalar fields w and vector fields τ, where n^x_K ∈ R^d and n^t_K ∈ R are the space and time components of the outward-pointing unit normal vectors on ∂K ∩ F^time_h and ∂K ∩ F^space_h, respectively. The superscripts "−" and "+" are used to denote the traces of a function on a space-like facet from the elements "before" (−) and "after" (+) the facet.

The space-time prismatic meshes described in this section may include hanging space-like and time-like facets, so the proposed method allows for local time-stepping and local space-time refinements. Tent-pitched meshes are popular in space-time methods for wave propagation problems; see, e.g., [30] and [27, Eq. 3]. However, such meshes do not lead to a semi-implicit discretization of the Schrödinger equation because the propagation speed of its solutions, which would dictate the slope of the space-like facets of the tents, is infinite.

Variational formulation of the DG method

For any finite-dimensional subspace V_hp(T_h) of the broken Bochner-Sobolev space V(T_h), the proposed ultra-weak DG variational formulation for the Schrödinger equation (1.1) reads: find ψ_hp ∈ V_hp(T_h) such that A(ψ_hp; v_hp) = ℓ(v_hp) for all v_hp ∈ V_hp(T_h), (2.1) for some mesh-dependent volume penalty and stabilization functions µ, α, β. More conditions on these functions, in particular on their dependence on the local mesh size, will be specified in Section 4. The variational formulation (2.1) can be derived by integrating by parts twice in space and once in time in each element as in [11], and treating the Neumann and the Robin boundary terms similarly to [11, Rem. 3.7].
However, as the current setting does not require the discrete space V_hp(T_h) to satisfy the Trefftz property (Sψ|_K = 0 for all K ∈ T_h), there are an additional volume term that is needed to ensure consistency (the first integral over K in A(·;·)) and a local Galerkin-least-squares correction term (the second integral over K in A(·;·)) that were not present in the previous method. Such additional terms vanish when V_hp(T_h) is a discrete Trefftz space, thus recovering the formulation in [11].

Remark 1 (Implicit time-stepping through time-slabs). The variational problem (2.1) is a global problem involving all the degrees of freedom of the discrete solution for the whole space-time cylinder Q_T. However, as upwind numerical fluxes are taken on the space-like facets, if the space-time prismatic mesh T_h can be decomposed into time-slabs (i.e., if the mesh elements can be grouped in sets of the form Ω × [t_{n−1}, t_n] for a partition of the time interval of the form 0 = t_0 < t_1 < … < t_N = T), the global linear system stemming from (2.1) can be solved as a sequence of N smaller systems, for n = 2, …, N. This is comparable to an implicit time-stepping, and it naturally allows for local mesh refinement in different regions of the space-time cylinder Q_T. Moreover, when T_h is a tensor-product space-time mesh, the potential V does not vary in time, and the partition of the time interval is uniform, the matrices K_n and R_n are the same for every time-slab.

Remark 2 (Self-adjointness and volume penalty term). The well-posedness of the variational formulation (2.1) strongly relies on the L^2(K)-self-adjointness of the Schrödinger operator S(·) on each K ∈ T_h (in the sense that ∫_K Sψ ϕ dV = ∫_K ψ Sϕ dV for all ψ ∈ V(T_h), ϕ ∈ C^∞_0(K), thanks to the fact that the only odd derivative in S is multiplied by the imaginary unit), which makes the local Galerkin-least-squares correction term consistent. On the one hand, such a term is essential in the proof of coercivity of the sesquilinear form A(·;·) (see Proposition 1 below). On the other hand, numerical experiments suggest that it can be neglected without losing accuracy and stability; see Section 5.1.2 below. This is also the case for the quasi-Trefftz DG method for the Helmholtz equation [18, §5.1.3] and for the wave equation [19, §5.1], where a similar correction term was used. Nonetheless, in the design of an ultra-weak DG discretization for a PDE with a non-self-adjoint differential operator L(·) (e.g., the heat operator L(·) = (∂_t − Δ_x)(·)), the corresponding local least-squares correction term Σ_{K∈T_h} ∫_K µ Lψ_hp Lv_hp dV would not control the consistency term Σ_{K∈T_h} ∫_K ψ_hp L*v_hp dV arising from the integration by parts.

Remark 3 (Time-dependent potentials). The variational problem (2.1) allows for time-dependent potentials V. This is an important feature as, in such a case, the method of separation of variables cannot be used to reduce the time-dependent problem (1.1) to the time-independent Schrödinger equation.
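To illustrate the sequential solution strategy of Remark 1, the following minimal Python/SciPy sketch solves the global system slab by slab. The assembly routines and the block structure K_n u_n = F_n - R_n u_{n-1}, with R_n transferring the upwind trace from the previous slab, are hypothetical placeholders used only for this illustration; in the experiments reported below, each slab system is solved with Matlab's backslash.

import scipy.sparse.linalg as spla

def solve_by_time_slabs(assemble_slab_matrix, assemble_slab_rhs,
                        assemble_coupling_matrix, n_slabs):
    """Solve the global space-time DG system as a sequence of time-slab systems.

    Hypothetical block structure (upwind numerical flux on space-like facets):
        K_1 u_1 = F_1                          (initial data enter F_1)
        K_n u_n = F_n - R_n u_{n-1},  n = 2, ..., N
    """
    slab_solutions = []
    u_prev = None
    for n in range(1, n_slabs + 1):
        K_n = assemble_slab_matrix(n).tocsc()   # stiffness matrix of slab n
        F_n = assemble_slab_rhs(n)              # load vector of slab n
        if u_prev is not None:
            R_n = assemble_coupling_matrix(n)   # coupling with the previous slab
            F_n = F_n - R_n @ u_prev            # upwind transfer of information
        u_n = spla.spsolve(K_n, F_n)            # sparse direct solve
        slab_solutions.append(u_n)
        u_prev = u_n
    return slab_solutions

When the mesh is a tensor-product mesh, V does not depend on time and the time partition is uniform, the matrices K_n and R_n coincide for all slabs, so a single factorization of K_n can be reused.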
Well-posedness, stability and quasi-optimality of the DG method

The theoretical results in this section are derived for any spatial dimension d, and are independent of the specific choice of the discrete space V_hp(T_h). Recalling that the volume penalty function µ, the stabilization functions α, β and the impedance function ϑ are positive, and that δ ∈ (0, 1/2), we define the mesh-dependent norms |||·|||_DG and |||·|||_DG+ on V(T_h) in (3.1)-(3.2).

The sum of the L^2(K)-type terms ensures that |||·|||_DG+ is a norm. That |||·|||_DG is a norm on V(T_h) follows from the following reasoning (see also [11, Lemma 3.1]): if w ∈ V(T_h) and |||w|||_DG = 0, then w is the unique variational solution to the Schrödinger equation (1.1) with homogeneous initial and boundary conditions; moreover, by energy conservation, the L^2(Ω) norm of w(·, t) vanishes for all t ∈ (0, T]; therefore, w = 0. The DG norms in (3.1)-(3.2) are chosen in order to ensure the following properties of the sesquilinear form A(·;·) and the antilinear functional ℓ(·), from which the well-posedness and quasi-optimality of the method (2.1) follow.

Proposition 1 (Coercivity). For all w ∈ V(T_h) the following identity holds.

Proof. The result follows from the following identities (see [11, Prop. 3.2] for more details).

Proposition 2 (Continuity). The sesquilinear form A(·;·) and the antilinear functional ℓ(·) are continuous in the sense of the bounds (3.3).

Proof. The terms on the mesh skeleton and on F^D_h are controlled as in [11, Prop. 3.3]. The remaining terms are bounded using the Cauchy-Schwarz inequality and the inequality δ ≤ 1 − δ < 1.

Theorem 1 (Quasi-optimality). For any finite-dimensional subspace V_hp(T_h) of V(T_h), there exists a unique solution ψ_hp ∈ V_hp(T_h) satisfying the variational formulation (2.1). Additionally, the quasi-optimality bound (3.4) holds. Moreover, if g_D = 0 and g_N = 0 (or Γ_D = ∅ and Γ_N = ∅), the continuous dependence bound (3.5) holds.

Proof. Existence and uniqueness of the discrete solution ψ_hp ∈ V_hp(T_h) of the variational formulation (2.1), and the quasi-optimality bound (3.4), follow directly from Propositions 1-2, the consistency of the variational formulation (2.1) and the Lax-Milgram theorem. The continuous dependence on the data (3.5) follows from Proposition 1 and the fact that, if g_D = 0 and g_N = 0 (or Γ_D = ∅ and Γ_N = ∅), the term |||w|||_DG+ on the right-hand side of (3.3b) can be replaced by |||w|||_DG.

Theorem 1 implies that it is possible to obtain error estimates in the mesh-dependent norm |||·|||_DG by studying the best approximation in V_hp(T_h) of the exact solution in the |||·|||_DG+ norm. Moreover, according to Proposition 3 below, a priori error estimates can be deduced from the local approximation properties of the space V_hp(T_h) only, as the |||·|||_DG+ norm can be bounded in terms of volume Sobolev seminorms and norms. The proof of error estimates in mesh-independent norms on the full computational domain for ultra-weak DG methods is a delicate issue; see, e.g., [15, Lemma 1] and [27, §5.4] for related results concerning Trefftz methods for the Helmholtz and the wave equations, respectively.

So far, we have not imposed any restriction on the space-time mesh T_h. Henceforth, in our analysis we assume:
• Uniform star-shapedness: there exists 0 < ρ ≤ 1/2 such that each element K ∈ T_h is star-shaped with respect to the ball B := B_{ρh_K}(z_K, s_K) centered at (z_K, s_K) ∈ K and with radius ρh_K.
• Local quasi-uniformity in space: there exists a number lqu(T_h) ≥ 1 bounding the ratio of the spatial diameters of neighboring elements.

The proof of Proposition 3 below is a direct consequence of a collection of trace inequalities (see [3, Theorem 1.6.6] and [27, Lemma 2]), which in our space-time setting can be written, for any element K ∈ T_h, as the bound (3.6), where D^2_x ϕ is the spatial Hessian of ϕ, and C_tr ≥ 1 only depends on the star-shapedness parameter ρ.
Proposition 3. Fix δ ∈ (0, 1/2), and assume that V ∈ L^∞(K) for all K ∈ T_h. Then, for all ϕ ∈ V(T_h), the following bound holds. The factor (3/2)C_tr appearing in the bound of Proposition 3 is due to the integral terms with arguments |{{w}}|^2 on F^time_h in the definition (3.1) of the |||·|||_DG norm.

Remark 4 (Inhomogeneous Schrödinger equation). The space-time ultra-weak DG variational formulation (2.1) can be easily extended to approximate the solution of inhomogeneous Schrödinger-type problems with a sufficiently smooth term f : Q_T → C on the right-hand side of the first equation in (1.1); see [26, Ch. 3, §10] for the well-posedness of such problems. In order to preserve the consistency of the method, it is necessary to add a corresponding source term to the antilinear functional ℓ(·). The existence and uniqueness of the discrete solution for any choice of the discrete space V_hp(T_h), as well as the quasi-optimality estimate (3.4), follow from the coercivity and continuity of the sesquilinear form A(·;·) on the continuous space V(T_h) in Propositions 1 and 2, together with the consistency of the method. Thus, optimal convergence rates can be proven for the full polynomial space as in Section 4.2, since this space provides a good enough approximation of any sufficiently smooth solution. On the other hand, the quasi-Trefftz space introduced in Section 4.3 would require some adjustments in order to approximate the solution of an inhomogeneous problem.

Remark 5 (Energy dissipation). The proposed DG method is dissipative, but the energy loss can be quantified in terms of the local least-squares error, the initial condition error, the jumps of the solution on the mesh skeleton, and the error on F^D_h ∪ F^N_h due to the weak imposition of the boundary conditions. More precisely, for g_D = 0, g_N = 0 and F^R_h = ∅, the discrete solution to (2.1) satisfies an energy identity that follows from the definition of the |||·|||_DG norm of the solution ψ_hp, the coercivity of the sesquilinear form A(·;·), the definition of the antilinear functional ℓ(·) and simple algebraic manipulations; see [11, Rem. 3.6].

Discrete spaces and error estimates

In this section we prove a priori h-convergence estimates on the |||·|||_DG+ norm of the error for some discrete polynomial spaces. In particular, for each element K ∈ T_h, we consider two different polynomial spaces: the space P^p(K) of polynomials of degree at most p on K, and a quasi-Trefftz subspace QT^p(K) ⊂ P^p(K) with much smaller dimension, i.e., dim(QT^p(K)) ≪ dim(P^p(K)) (see Proposition 5 below). A polynomial Trefftz space for the case of zero potential V has been studied in [12]. We denote the local dimensions n_{d+1,p} := dim(QT^p(K)) and r_{d+1,p} := dim(P^p(K)) in dependence of the space dimension d of the problem and the polynomial degree p, but independently of the element K. For simplicity, we only describe the case where the same polynomial degree is chosen in every element; the general case can easily be studied.

Multi-index notation and preliminary results

We use the standard multi-index notation for partial derivatives and monomials, adapted to the space-time setting, for multi-indices j = (j_x, j_t) = (j_{x_1}, …, j_{x_d}, j_t) ∈ N_0^{d+1}. We also recall the definition and approximation properties of multivariate Taylor polynomials, which constitute the basis of our error analysis. On an open and bounded set Υ ⊂ R^{d+1}, the Taylor polynomial of order m ∈ N (and degree m − 1), centered at (z, s) ∈ Υ, of a function ϕ ∈ C^{m−1}(Υ) is defined as T^m_{(z,s)}[ϕ](x, t) := Σ_{|j|<m} (1/j!) D^j ϕ(z, s) ((x, t) − (z, s))^j. If ϕ ∈ C^m(Υ) and the segment [(z, s), (x, t)] ⊂ Υ, the Lagrange form of the Taylor remainder (see [4, Corollary 3.19]) is bounded in terms of the derivatives of ϕ of order m and of powers of h_Υ, where h_Υ is the diameter of Υ. In particular, if Υ is star-shaped with respect to (z, s), the corresponding estimate holds on the whole of Υ; it can be combined with the well-known identity in [3, Prop. (4.1.17)]. The Bramble-Hilbert lemma provides an estimate for the error of the averaged Taylor polynomial; see [9] and [3, Thm. 4.3.8].
Lemma 1 (Bramble-Hilbert). Let Υ ⊂ R^{d+1} be an open and bounded set with diameter h_Υ, star-shaped with respect to the ball B := B_{ρh_Υ}(z, s) centered at (z, s) ∈ Υ and with radius ρh_Υ, for some 0 < ρ ≤ 1/2. If ϕ ∈ H^m(Υ), the averaged Taylor polynomial of order m (and degree m − 1) satisfies the corresponding error bounds for all s < m. A sharp bound on the constant C_{d,m,ρ} > 0 is given in [9, p. 986] in dependence of d, s, m and ρ, and the second bound is proven in [27, Lemma 1].

Full polynomial space

In the next theorem, we derive a priori error estimates for the DG formulation (2.1) for the space of elementwise polynomials of degree at most p, defined in (4.2).

Theorem 2. Let p ∈ N, fix δ as in Proposition 3 and assume that V ∈ L^∞(Q_T). Let ψ ∈ V(T_h) ∩ H^{p+1}(T_h) be the exact solution of (1.1) and ψ_hp ∈ V_hp(T_h) be the solution to the variational formulation (2.1) with V_hp(T_h) given by (4.2). Set the volume penalty function and the stabilization functions in terms of the local mesh sizes; then the following estimate holds. Moreover, if h_{K_x} ≃ h_{K_t} for all K ∈ T_h, there exists a positive constant C, independent of the element sizes h_{K_x}, h_{K_t}, but depending on the degree p, the L^∞(Q_T) norm of V, the trace inequality constant C_tr in (3.6), the local quasi-uniformity parameter lqu(T_h) and the star-shapedness parameter ρ, such that the error in the |||·|||_DG norm converges with order O(h^p).

Proof. The proof follows from the choice of the volume penalty function µ and the stabilization functions α, β, the quasi-optimality bound (3.4), Proposition 3, an elementwise inequality valid for all elements K ∈ T_h, and the Bramble-Hilbert Lemma 1.

Quasi-Trefftz spaces

We now introduce a polynomial quasi-Trefftz space. Let p ∈ N and assume that V ∈ C^{p−2}(K). For each K ∈ T_h we define the following local polynomial quasi-Trefftz space: QT^p(K) := {q ∈ P^p(K) : D^j(Sq)(x_K, t_K) = 0 for all |j| ≤ p − 2}, (4.3) for some point (x_K, t_K) in K. We consider the corresponding global discrete space (4.4). By the multi-index Leibniz product rule for multivariate functions, the derivatives D^j(Sq)(x_K, t_K) can be expressed in terms of the derivatives of q and of V at (x_K, t_K), giving the conditions (4.5).

The next proposition is the key ingredient to prove optimal convergence rates in Theorem 3 for the DG method (2.1) when V_hp(T_h) is chosen as the quasi-Trefftz polynomial space defined in (4.3): the Taylor polynomial T^{p+1}_{(x_K,t_K)}[ψ] of a sufficiently smooth local solution ψ of the Schrödinger equation belongs to QT^p(K). Indeed, the derivatives of T^{p+1}_{(x_K,t_K)}[ψ] at (x_K, t_K) that appear in (4.5) are at most of total order |j| + 2 ≤ p, so they coincide with the corresponding derivatives of ψ; furthermore, since Sψ = 0, the conditions in the definition of QT^p(K) are satisfied.

Theorem 3. Let p ∈ N, let ψ ∈ V(T_h) ∩ C^{p+1}(T_h) be the exact solution of (1.1) and ψ_hp ∈ V_hp(T_h) be the solution to the variational formulation (2.1) with V_hp(T_h) given by (4.4). Set the volume penalty function µ and the stabilization functions α, β as in Theorem 2. Then, the following estimate holds. Moreover, if h_{K_x} ≃ h_{K_t} for all K ∈ T_h, there exists a positive constant C independent of the mesh size h, but depending on the degree p, the L^∞(Q_T) norm of V, the trace inequality constant C_tr in (3.6), the local quasi-uniformity parameter lqu(T_h) and the measure of the space-time domain.

Proof. The proof follows from the choice of the volume penalty function µ and the stabilization functions α, β, the quasi-optimality bound (3.4), Proposition 3, and the Taylor remainder estimates of Section 4.1.

The a priori error estimate in Theorem 3 requires stronger regularity assumptions on ψ than Theorem 2 (namely ψ ∈ C^{p+1}(T_h) instead of ψ ∈ H^{p+1}(T_h)) due to the fact that QT^p(K) is tailored to contain the Taylor polynomial T^{p+1}_{(x_K,t_K)}[ψ], but in general it does not contain the averaged Taylor polynomial Q^{p+1}[ψ].
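The containment of the Taylor polynomial in the quasi-Trefftz space can be checked symbolically on a concrete free-particle solution. The sketch below assumes the operator S u = i u_t + (1/2) u_xx with V = 0 (the 1/2 scaling is consistent with the example in Remark 8 below, but the precise convention for S is an assumption of this illustration) and verifies that the degree-p Taylor polynomial of an exact solution satisfies the quasi-Trefftz conditions D^j(S q)(x_K, t_K) = 0 for all |j| ≤ p − 2.

import sympy as sp

x, t = sp.symbols('x t', real=True)

def S(u):
    # Assumed free-particle Schrödinger operator: S u = i u_t + (1/2) u_xx  (V = 0).
    return sp.I * sp.diff(u, t) + sp.Rational(1, 2) * sp.diff(u, x, 2)

def taylor(expr, degree):
    """Taylor polynomial of total degree `degree` of expr(x, t), centered at (0, 0)."""
    T = sp.Integer(0)
    for a in range(degree + 1):
        for b in range(degree + 1 - a):
            c = expr
            if a:
                c = sp.diff(c, x, a)
            if b:
                c = sp.diff(c, t, b)
            T += c.subs({x: 0, t: 0}) / (sp.factorial(a) * sp.factorial(b)) * x**a * t**b
    return sp.expand(T)

# Exact solution (cf. Remark 8): psi(x, t) = exp(x + i t / 2) satisfies S psi = 0.
psi = sp.exp(x + sp.I * t / 2)
assert sp.simplify(S(psi)) == 0

p = 4
T = taylor(psi, p)           # Taylor polynomial of order p + 1 (degree p) at (0, 0)
ST = sp.expand(S(T))

# Quasi-Trefftz conditions: all derivatives of S T of total order <= p - 2 vanish at (0, 0).
for a in range(p - 1):
    for b in range(p - 1 - a):
        d = ST
        if a:
            d = sp.diff(d, x, a)
        if b:
            d = sp.diff(d, t, b)
        assert sp.simplify(d.subs({x: 0, t: 0})) == 0
print("The Taylor polynomial of psi belongs to the quasi-Trefftz space QT^p(K).")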
Remark 6 (Non-polynomial spaces). Optimal h-convergence estimates can also be derived for non-polynomial spaces, by requiring the local space V_hp(K) to contain an element whose Taylor polynomial coincides with that of the exact solution. This is the approach followed in [11] for the Trefftz space of complex-exponential wave functions for the Schrödinger equation with piecewise-constant potential.

Basis functions and dimension

So far, we have not specified the dimension of, and a basis for, the space QT^p(K); this is the aim of this section. Recalling that r_{d,p} = dim P^p(R^d), let {m_J} and {m̃_J} be bases of P^p(R^d) and P^{p−1}(R^d), respectively. We define the n_{d+1,p} elements b_J of QT^p(K) in (4.6) by prescribing their restrictions b_J(x_K^{(1)}, ·), and the restrictions of their derivatives ∂_{x_1}b_J(x_K^{(1)}, ·), to the hyperplane x_1 = x_K^{(1)} in terms of the bases {m_J} and {m̃_J}; here g(x_K^{(1)}, ·) denotes the restriction of a function g to that hyperplane, and x_K^{(1)} is the first spatial coordinate of the expansion point.

Any element q_p ∈ QT^p(K) can be expressed in the scaled monomial basis for some complex coefficients {C_j}_{|j|≤p}. By the conditions D^j(Sq_p)(x_K, t_K) = 0 for all |j| ≤ p − 2 in the definition of QT^p(K), we obtain relations between these coefficients, which can be rewritten as the recurrence (4.7). The conditions imposed in (4.6) on the restriction of b_J to x_1 = x_K^{(1)} fix the coefficients of their expansion for all j with j_{x_1} ∈ {0, 1}. In Figures 1 and 2, we illustrate how the coefficients that are not immediately determined by the conditions in (4.6) (i.e., those with j_{x_1} ≥ 2) are uniquely defined and can be computed, for the (1+1)- and (2+1)-dimensional cases, using the recurrence relation (4.7).

Figure 1: A representation of the relations defining the coefficients of b_J for the (1+1)-dimensional case. The colored dots in the (j_x, j_t) plane represent the coefficients C_{j_x j_t}. Each shape connects three dots located at the points (j_x, j_t + 1), (j_x, j_t) and (j_x + 2, j_t): this shape represents one of the equations (4.7) which, given C_{j_x (j_t+1)} and C_{j_x j_t}, allows to compute C_{(j_x+2) j_t}. If the 2p + 1 values with j_x ∈ {0, 1} (corresponding to the blue nodes in the shaded region) are given, then these relations uniquely determine all the other coefficients, which can be computed sequentially using the relations (4.7) by proceeding left to right in the diagram. In the figure p = 7, the number of nodes is r_{2,p} = 36, the number of nodes in the shaded region is n_{2,p} = 15, and the number of relations is r_{2,p} − n_{2,p} = 21.

Proposition 5. The set of functions {b_J}, J = 1, …, n_{d+1,p}, defined in (4.6) is a basis for the space QT^p(K). Therefore, n_{d+1,p} = dim QT^p(K) = r_{d,p} + r_{d,p−1}.

Proof. We first observe that the set of polynomials {b_J}, J = 1, …, n_{d+1,p}, is linearly independent due to their restrictions to x_1 = x_K^{(1)}. On the other hand, the relations (4.7) imply that any q_p ∈ QT^p(K) is uniquely determined by its restriction q_p(x_K^{(1)}, ·) and the restriction of its derivative ∂_{x_1}q_p(x_K^{(1)}, ·). In addition, there exist some complex coefficients {λ_s}, s = 1, …, n_{d+1,p}, such that these two restrictions are matched by the corresponding linear combination of the b_s, whence q_p = Σ_{s=1}^{n_{d+1,p}} λ_s b_s, which completes the proof.
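The left-to-right computation described in the caption of Figure 1 can be written down directly. The following sketch fills the coefficients of a (1+1)-dimensional quasi-Trefftz polynomial from the free data with j_x in {0, 1}; it assumes a constant potential V, plain (unscaled) monomials instead of the scaled monomials used above, and the operator S q = i q_t + (1/2) q_xx − V q, so the scaling and sign conventions are assumptions of this illustration.

import numpy as np

def quasi_trefftz_coefficients(p, V, free):
    """Fill the monomial coefficients C[jx, jt] of a (1+1)-dimensional quasi-Trefftz
    polynomial of degree p from the free coefficients with jx in {0, 1}.

    Assumed setting: constant potential V, monomials (x - x_K)^jx (t - t_K)^jt and
    S q = i q_t + (1/2) q_xx - V q.  Imposing D^j(S q)(x_K, t_K) = 0 for |j| <= p - 2
    gives, for every (a, b) with a + b <= p - 2,
        i (b + 1) C[a, b+1] + (1/2)(a + 2)(a + 1) C[a+2, b] - V C[a, b] = 0,
    i.e. C[a+2, b] is computed from C[a, b] and C[a, b+1] (the stencil of Figure 1).
    """
    C = np.zeros((p + 1, p + 1), dtype=complex)
    for (jx, jt), value in free.items():        # free data: jx in {0, 1}, jx + jt <= p
        C[jx, jt] = value
    for a in range(p - 1):                      # proceed "left to right" in Figure 1
        for b in range(p - 1 - a):
            C[a + 2, b] = 2.0 * (V * C[a, b] - 1j * (b + 1) * C[a, b + 1]) \
                          / ((a + 2) * (a + 1))
    return C

# Example: degree p = 3, constant potential V = 1, arbitrary free data for jx in {0, 1}.
p = 3
free = {(0, 0): 1.0, (0, 1): 0.5, (0, 2): 0.25, (0, 3): 0.125,
        (1, 0): 1.0, (1, 1): 0.5, (1, 2): 0.25}
C = quasi_trefftz_coefficients(p, V=1.0, free=free)
print(C[2, 0], C[3, 0])                          # coefficients fixed by the recurrence

With these conventions, the 2p + 1 free values with j_x in {0, 1} determine the remaining r_{2,p} − n_{2,p} coefficients, in agreement with the counts in the caption of Figure 1; for a variable potential, the product V·C[a, b] is replaced by a Leibniz (Cauchy-product) combination of the Taylor coefficients of V and q.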
Remark 7 (Quasi-Trefftz basis construction: difference between the Schrödinger and wave equations). The definition of the basis functions b_J in (4.6) can be modified by fixing the restriction of b_J and of its partial derivative ∂_{x_ℓ}b_J to x_ℓ = x_K^{(ℓ)} for any 1 ≤ ℓ ≤ d. However, it is not possible to assign the values at a given time t = t_K, as the order of the time derivative appearing in the Schrödinger equation is lower than the order of the space derivatives. How this affects the basis construction is visible in Figure 1: the coefficients (the colored dots) can be computed sequentially when all the other coefficients of a relation (the Y-shaped stencil) are known, so it is possible to reach all dots moving left to right, but not moving bottom to top. Imposing the values at a given time is possible for the wave equation, as is done in [19, §4.4], precisely because in that case time and space derivatives have the same order.

Remark 8 (Constant-potential case). The space QT^p(K) does not reduce to a Trefftz space in the case of constant potential V. Nonetheless, the pure Trefftz space T^p(K) := {q ∈ P^p(K) : Sq = 0 in K} does not possess strong enough approximation properties to guarantee optimal h-convergence. In particular, it does not contain the Taylor polynomial of all local solutions to the Schrödinger equation: for d = 1, p = 1 and V = 0, T^p(K) = span{1, x}; however, ψ(x, t) = exp(x + (i/2)t) satisfies Sψ = 0, and T^{p+1}_{(0,0)}[ψ] does not belong to T^p(K).

Remark 9 (Trefftz dimension). As seen in Proposition 5, the quasi-Trefftz polynomial space has a considerably lower dimension than the full polynomial space of the same degree. This "dimension reduction" is common to all Trefftz and quasi-Trefftz schemes. In particular, the dimension n_{d+1,p} of QT^p(K) is equal to the dimension of the space of harmonic polynomials of degree ≤ p in R^{d+1}, of the Trefftz space of complex-exponential wave functions for the Schrödinger equation with piecewise-constant potential in [11], and of the Trefftz and quasi-Trefftz polynomial spaces for the wave equation in [27, Eq. (42)-(43)] and [19].

Figure 2: A representation of the relations defining the coefficients of b_J for the (2+1)-dimensional case. The colored dots in position j = (j_x, j_y, j_t), |j| ≤ p, correspond to the coefficients C_{j_x j_y j_t} (here p = 5 and r_{3,p} = 56). Each white circle is connected by segments to four nodes and represents one of the equations in (4.7): given C_{j_x j_y j_t}, C_{j_x j_y (j_t+1)} and C_{j_x (j_y+2) j_t}, it allows to compute C_{(j_x+2) j_y j_t} (the leftmost of the four nodes connected to a given white circle) using (4.7).

Numerical experiments

In this section we validate the theoretical results regarding the h-convergence of the proposed method, and numerically assess some additional features such as p-convergence and conditioning. Although we do not report the results here, optimal convergence rates of order O(h^{p+1}) are observed for the error in the L^2(Q_T)-norm. We list some aspects regarding our numerical experiments:
• We use Cartesian-product space-time meshes with uniform partitions along each direction, which are a particular case of the situation described in Remark 1.
• We choose (x_K, t_K) in the definition of the quasi-Trefftz space QT^p(K) in (4.3) as the center of the element K.
• In all the experiments we consider Dirichlet boundary conditions.
• The linear systems are solved using Matlab's backslash command.
• The quasi-Trefftz basis functions {b_J}, J = 1, …, n_{d+1,p}, are constructed by choosing m_J and m̃_J in (4.6) as scaled monomials and by computing the remaining coefficients C_j with the relations (4.7).
• In the h-convergence plots, the numbers in the yellow rectangles are the empirical algebraic convergence rates for the quasi-Trefftz version (continuous lines). The dashed lines correspond to the errors obtained for the full polynomial space.

(1 + 1)-dimensional test cases

We first focus on the (1 + 1)-dimensional case, for which families of explicit solutions are available for some well-known potentials V.

h-convergence

In order to validate the error estimates in Theorems 2 and 3, we consider a series of problems with different potentials V. No significant difference in terms of accuracy between the quasi-Trefftz and the full polynomial versions of the method with the same polynomial degree p (corresponding to different numbers of DOFs, n_{d+1,p} and r_{d+1,p}, respectively) is observed in any of the experiments.

Quantum harmonic oscillator potential

In Figure 4, we present the errors obtained for ω = 10, n = 2 and a sequence of Cartesian meshes with uniform partitions and h_x = h_t = 0.05 × 2^{−i}, i = 0, …, 4. Rates of convergence of order O(h^p) in the DG norm are observed, as predicted by the error estimate in Theorem 3. A convergence of at least order O(h^{p+1}) is observed for the L^2-error at the final time, which is faster (by a factor h) than the order that can be deduced from the estimates in Theorems 2 and 3. We have also included the plots for the error decay with respect to the total number of degrees of freedom, where the same h-convergence rates are observed for both versions of the method (see also the p-convergence plot in Figure 9a for a clearer understanding of the dependence of the error on p).

Due to the fast decay of the exact solution close to the boundary (see Figure 8, panel a), the energy is expected to be preserved. In Figure 3, we show the evolution of the energy error and the convergence of the energy loss E_loss to zero for the quasi-Trefftz version. In the latter, rates of order O(h^{2p}) are observed, which follows from Remark 5 and the error estimates in Theorems 2 and 3.

Reflectionless potential (V(x) = −a² sech²(ax))

This potential was studied in [5] as an example of a reflectionless potential. On the space-time domain Q_T = (−5, 5) × (0, 1), we consider the Schrödinger equation with the exact solution given in [13, Problem 2.48]. In Figure 5, we show the errors obtained for a sequence of meshes with h_x = 2h_t = 0.2 × 2^{−i}, i = 0, …, 4, and a = 1. As in the previous experiment, rates of convergence of order O(h^p) and O(h^{p+1}) are observed in the DG norm and the L^2 norm at the final time, respectively. The real part of the exact solution is depicted in Figure 8 (panel b).

Morse potential (V(x) = D(1 − e^{−αx})²)

This potential was introduced by Morse in [28] to obtain a quantum-mechanical energy-level spectrum of a vibrating, non-rotating diatomic molecule. There, a family of exact solutions ψ_{n,λ} was presented (see also [6]), where ⌊·⌋ is the floor function, n = 0, …, ⌊λ − 1/2⌋, and L denotes the general associated Laguerre polynomials as defined in [29, Table 18.3.1]. In Figure 6, we show the errors obtained for the Morse potential problem with D = 8, α = 4 and exact solution ψ_{1,1} on the space-time domain Q_T = (−0.5, 1.5) × (0, 1) for a sequence of meshes with h_x = h_t = 0.1 × 2^{−i}, i = 0, …, 4.
The observed rates of convergence are in agreement with those obtained in the previous experiments. The real part of the exact solution is depicted in Figure 8 (panel c).

Square-well potential

We now consider a problem taken from [11], whose exact solution is not globally smooth. On the space-time domain Q_T = Ω × (0, 1), we consider the Schrödinger equation with homogeneous Dirichlet boundary conditions and the following square-well potential, for some fixed V* > 0. The initial condition is taken as an eigenfunction (bound state) of the corresponding stationary problem, where k* is a real root of the function f(k). The solution of the corresponding initial boundary value problem (1.1) is ψ(x, t) = ψ_0(x) exp(−i k*² t) and belongs to the space H^{p+1}(T_h) ∩ C^∞(I; C^1(Ω)) \ C^∞(I; C^2(Ω)) for all p ∈ N, provided that T_h is aligned with the discontinuities of the potential V; therefore, Theorems 2 and 3 apply. Among the finite set of values k* for a given V*, in this experiment we take the largest one, corresponding to faster oscillations in space and time.

In Figure 7, we show the errors obtained for V* = 20 (k* ≈ 3.73188) and a sequence of meshes with h_t = √2 h_x = 0.1 × 2^{−i}, i = 0, …, 4. Optimal convergence in both norms is observed for the errors of the quasi-Trefftz version of the method.

Effect of stabilization and volume penalty terms

In this experiment we are interested in the effect of neglecting some of the terms in the variational formulation (2.1). To do so, we consider the (1+1)-dimensional quantum harmonic oscillator problem with exact solution (5.1). In Tables 1-2 (quasi-Trefftz space) and 3-4 (full polynomial space) we present the errors in the DG norm obtained for the same sequence of meshes and approximation degrees as in the previous section, for different combinations of the stabilization terms α, β and the volume penalty parameter µ. Although the proof of well-posedness of the method (2.1) relies on the assumption that α, β and µ are strictly positive, in our numerical experiments the matrices of the arising linear systems are non-singular and optimal convergence rates are observed even when all these parameters are set to zero. Moreover, the errors obtained when α = 0 or β = 0 are smaller, as some terms in the definition (3.1) of |||·|||_DG vanish, while the presence of µ seems to have just a mild effect on the results. Although not shown here, similar effects were observed for the error in the L^2(F^T_h)-norm.

p-convergence

We now study numerically the p-convergence of the method, i.e., for a fixed space-time mesh T_h, we study the errors when increasing the polynomial degree p. We consider the (1 + 1)-dimensional problems above with the same parameters and the coarsest meshes for each case. In Figure 9, we compare the errors obtained for the method with the two choices for the discrete space V_hp(T_h) analyzed in the previous sections: the full polynomial space (4.2) and the quasi-Trefftz polynomial space (4.4). Exponential convergence in p is expected for Trefftz-type methods [17, 11, 1, 30], but no proof is available yet (differently from the stationary case, [16, §3]). In general, for a (d+1)-dimensional problem, we expect exponential convergence of order O(exp(−b N_DoFs^{1/d})) and O(exp(−c N_DoFs^{1/(d+1)})) for the quasi-Trefftz and full-polynomial versions, respectively.
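The empirical algebraic convergence rates reported in the yellow rectangles of the h-convergence plots and in Tables 1-4 are obtained from the errors on consecutive meshes; the following minimal sketch shows this standard computation on placeholder values (not data from the experiments above).

import numpy as np

def empirical_rates(h, err):
    """Empirical algebraic convergence rates between consecutive refinements:
    rate_i = log(err[i-1] / err[i]) / log(h[i-1] / h[i])."""
    h, err = np.asarray(h, dtype=float), np.asarray(err, dtype=float)
    return np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])

# Placeholder values only, for illustration of the computation:
h = [0.05 * 2.0**(-i) for i in range(5)]        # h_x = h_t = 0.05 * 2^{-i}
err_dg = [1e-1 * (hi / h[0])**3 for hi in h]     # an error behaving like O(h^3)
print(empirical_rates(h, err_dg))                # prints rates close to 3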
Conditioning

We now assess the conditioning of the stiffness matrix. In Figure 10 we compare the 2-norm condition number κ_2(·) of the stiffness matrix K_n defined in Remark 1, for the free-particle problem V = 0 on the space-time domain Q_T = (0, 1) × (0, 1). We consider the proposed polynomial quasi-Trefftz space in (4.4), the full polynomial space in (4.2) and the pure Trefftz space of complex-exponential wave functions T^p(T_h) proposed in [11]. A basis {φ_ℓ}, ℓ = 1, …, 2p + 1, of T^p(T_h) was defined in [11] as in (5.5). We consider two choices for the parameters κ_ℓ: the arbitrary choice used in [11], κ_ℓ ∈ {−p, …, p}, and the choice κ_ℓ = 2πℓ/h_x, which makes the basis orthogonal in each element. The condition number κ_2(K_n) for the quasi-Trefftz space, the full polynomial space, and the Trefftz space with orthogonal basis asymptotically grows as O(h^{−1}) for all p ∈ N, while for the Trefftz space with a non-orthogonal basis it asymptotically grows as O(h^{−(2p+1)}). Unfortunately, in higher dimensions and for non-Cartesian elements, choosing the parameters and directions defining the basis functions {φ_ℓ} so as to obtain an orthogonal basis is more challenging.

(2 + 1)-dimensional test cases

We now present some numerical tests for space dimension d = 2. We recall that we use Cartesian space-time meshes with uniform partitions along each direction.

h-convergence

Singular time-independent potential (V(x, y) = 1 − 1/x² − 1/y²). We consider the (2 + 1)-dimensional problem on Q_T = (0, 1)² × (0, 1) with exact solution (see [33]) ψ(x, y, t) = x²y²e^{it}. (5.6) In Figure 11, we show the errors obtained for a sequence of meshes with h_x = h_y = h_t = 0.1, 0.0667, 0.05, 0.04 and different degrees of approximation p. As in the numerical results for the (1 + 1)-dimensional problems, we obtain rates of convergence of order O(h^p) in the DG norm, and O(h^{p+1}) in the L^2 norm at the final time.

Time-dependent potential. We now consider a manufactured problem with a time-dependent potential (see [7]). On the space-time domain Q_T = (0, 1)² × (0, 1) the exact solution is ψ(x, y, t) = i e^{i(t−1/2)^4} sech(x) sech(y). (5.7) In Figure 12 we show the errors obtained for the sequence of meshes from the previous experiment, and optimal convergence is observed in both norms.

p-convergence

In Figure 13 we show the results obtained for the p-version of the method applied to the (2 + 1)-dimensional problems above, on the coarsest mesh. As expected, for the (2 + 1)-dimensional case, the error of the quasi-Trefftz version decays root-exponentially as O(exp(−b √N_DoFs)).

Concluding remarks

We have introduced a space-time ultra-weak discontinuous Galerkin discretization for the linear Schrödinger equation with variable potential. The DG method is well-posed and quasi-optimal in mesh-dependent norms for any space dimension d ∈ N, and for very general prismatic meshes and discrete spaces. We proved optimal h-convergence of order O(h^p), in such a mesh-dependent norm, for two choices of the discrete spaces: the space of piecewise polynomials, and a novel quasi-Trefftz polynomial space with much smaller dimension. When the space-time mesh has a

Figure 3: Time-evolution of the energy error for the quantum harmonic oscillator problem with potential V(x) = 50x² and exact solution ψ_2 in (5.1).
Figure 4: h-convergence for the (1 + 1)-dimensional quantum harmonic oscillator problem with potential V(x) = 50x² and exact solution ψ_2 in (5.1). Convergence with respect to the mesh size h (top panels) and the total number of degrees of freedom (bottom panels).
Figure 8: Real part of the exact solutions for the (1 + 1)-dimensional problems.
Figure 10: Conditioning of the stiffness matrix for the DG method with different discrete spaces.
Table 1: h-convergence for the quasi-Trefftz version applied to the quantum harmonic oscillator problem with potential V(x) = 50x² and exact solution ψ_2 in (5.1), for different combinations of the stabilization parameters α, β and volume penalty parameter µ = 0.
Table 2: h-convergence for the quasi-Trefftz version applied to the quantum harmonic oscillator problem with potential V(x) = 50x² and exact solution ψ_2 in (5.1), for different combinations of the stabilization parameters α, β and volume penalty parameter µ = 0.
Integrating Multicultural Literature in EFL Teacher Training Curriculum

The paper addresses the need for and importance of including the literature of indigenous peoples and of immigrants and their descendants in teaching English as a Foreign Language (EFL) worldwide, mainly in teacher training colleges. The aim is to qualify English teachers who are competent linguistically, cross-culturally, and morally, and who will be aware of social injustices, which would extend their roles as language teachers to become agents for change.

Introduction

In general, English-speaking countries are constituted of different cultural groups that usually do not share the same values, traditions and styles of living. They also lack common social qualities in terms of race, language, gender, social class, and physical and mental disorders. This situation is a result of immigration. Immigrants arrived at their shores at different times and for different reasons and became an integral part of the society. However, they experienced stereotypes and prejudices, which nurture racism, discrimination and gross misinformation against other ethnic groups, promoting invisibility and concealing the social realities of sociocultural groups (Li, 2003, 2004). To counter racism and prejudice, many language educators believe in integrating the first culture of minority groups by including the literature of indigenous peoples, immigrants and their offspring in the language curriculum. Tomalin (2008) describes culture as the fifth language skill, which involves different abilities, such as the ability to understand, accept and appreciate other cultures; in this way, pupils would have the opportunity to learn about themselves, to connect to school, and to succeed academically (Gay, 2000). However, since language and culture are inseparable (Bin Mohamed Ali & Mohideen, 2016; Cakir, 2006; Honigefled, Giouroukakis, & Garfinkel, 2011; Sailaja, 2013), language cannot be understood without reference to the target culture. Traditionally, the focus of EFL teaching has been on teaching the target culture, without acknowledging the learners' native culture (Bin Mohamed Ali & Mohideen, 2016; Chlopek, 2008; Shin, Eslami, & Chen, 2011). In this case, the target culture refers solely to the mainstream culture, which is Anglo-Saxon, not the culture of the other ethnic groups that have become part of North American culture, mainly in the United States and Canada. Bin Mohamed Ali and Mohideen (2016) propose a form of cultural teaching that means integrating language with the culture of other cultural and ethnic groups in teaching ESL/EFL, since "English-speaking countries are not monocultural anymore, but increasingly multicultural" (p. 52). Therefore, incorporating learners' racial and cultural backgrounds is important for hearing their voices and empowering them (Shin, Eslami, & Chen, 2011), and for promoting tolerance, understanding, acceptance and respect (Chlopek, 2008). The aim of multicultural education through literature is to develop not only the linguistic competence of learners, but also to raise their cultural awareness and cultural sensitivity. As a result, following this notion means including indigenous cultures and narratives of immigrants in the ESL curriculum in English-speaking countries, which is essential for empowering the learners on the one hand. On the other hand, it is a way to decrease hetaerism and racism, aiming to promote cross-cultural understanding, acceptance of diversity and civic responsibility (Honigefled, Giouroukakis, & Garfinkel, 2011).
It would promote not only the linguistic competence of the students but also their cross-cultural or intercultural competence. It is also important to promote critical thinking (Cakir, 2006; Sercu, Mendez Garcia, & Preito, 2004) by helping learners to think of other people who have contributed to humanity, despite differences, and to become global citizens, tolerant of people of other faiths, ethnicities and nationalities. In addition, "diverse perspectives enlighten all students and allow them to explore and critically examine topics from multiple viewpoints" (Honigefled, Giouroukakis, & Garfinkel, 2011: p. 29).

Integrating Literature of Indigenous and Immigrant Writers in EFL Education

In general, literature is a window into the lives of others and gives readers the opportunity to realize the existence of issues outside their lives (Landt, 2007; McGinnis, 2006). Learning about life through literature supports the personal growth and maturation process of readers (Aerila, Soininen, & Merisuo-Storm, 2016; Honigefled, Giouroukakis, & Garfinkel, 2011) and develops their empathy for others. Kubota (2004) claims that language teachers usually are not prepared to understand social inequities and prejudices. Therefore, she proposes a critical multiculturalism curriculum to question widespread norms and beliefs, aiming to raise student teachers' awareness of issues of injustice, including race, class and cultural groups, raised by authors who represent the voices of their own cultural groups. She asserts that if such issues are raised in EFL instruction, teachers will understand the injustices in society, which will ultimately enhance their own intercultural communicative competence. As a result, they will become agents for change who are motivated to promote intercultural competence in their pupils (Sercu, Mendez Garcia, & Preito, 2004). These discussions also reveal the relationship between the content of the stories and the motives of the writers. In addition, it is an opportunity to improve the English proficiency of the student teachers, helping them develop a habit of reading, exposing them to a wide variety of cultural contexts, while also keeping their language active and helping them produce language (Sailaja, 2013). Including the literature of indigenous peoples and immigrants in the curriculum could be an effective method to promote multiculturalism, since it helps learners identify with their own culture, as some selections are written by authors who belong to the learners' culture. They will also be exposed to other cultures and open dialogues on issues regarding diversity. The classroom becomes a rich, multicultural environment that is accepting and inclusive. Such an environment might inspire EFL learners to be involved in social justice issues. Since narratives are about good and bad experiences in the past and are related to the present, readers are emotionally and linguistically involved. The content of these stories is usually relevant to daily lives, so it might provoke feelings, thoughts and personal responses. In addition, literature, in general, is a world of fantasy, horror and visions put into words; therefore, it is a source of enjoyment that could be employed for instructional purposes. Moreover, integrating narratives of indigenous and immigrant groups in the EFL curriculum gives pre-service teachers the opportunity to encounter many narratives and be exposed to different points of view, ideas, thoughts, and minds.
Literature, in general, has a personal value since it arouses empathy and helps learners "walk in someone else's shoes", explaining the lives of others with different experiences and providing moral reasoning for concepts of right and wrong. Therefore, teaching the literature of indigenous and immigrant writers promotes cross-cultural appreciation by acknowledging the contributions of minorities (Arellano, 2011; Landt, 2006). In addition, readers are exposed to the beliefs of others so that they can understand and accept others, their history, culture, and struggles. Exposing learners to the history and geography of the original countries of minority groups now in English-speaking countries arouses respect, empathy and acceptance of all people, aiming to create a global community and raising social consciousness. As a result, students become aware of differences among people (Landt, 2006), which helps eradicate prejudice while fostering empathy, tolerance and awareness of global problems. Furthermore, it illustrates similarities and common beliefs among the different religious groups, in addition to acceptance of religious diversity as a reality, which reinforces pride in one's personal faith and assists in finding one's identity in terms of actions, beliefs and emotions (Peyton & Jalongo, 2008). In terms of types, there are two types of immigration: forced and voluntary. The former refers to slavery, and the latter includes several reasons such as escaping poverty, looking for better economic opportunities or escaping religious and political persecution (Kortenaar, 2009).

Implications for Teaching

The above discussion by different researchers and EFL educators should be considered when designing a course that deals specifically with literature and narratives of indigenous and immigrant writers. Selected texts should meet criteria of developmental appropriateness, quality of writing, relevance of issues to students, believability of characters, interest level of the story and realistic social issues (Landt, 2006). The theme is also important since it underlines the meaning or significance of the moral of the story. Selecting stories that help in promoting multiculturalism is an important issue that should be considered seriously. Landt (2006) and others add some other criteria, such as the authenticity of characters (not stereotyped) and richness in terms of cultural details. Life stories of families of ethnic minority groups, such as Chinese and Middle Eastern families in the North American school population of the USA and Canada, could serve as good authentic materials for EFL teacher training programs. The American and Canadian education systems are different from Chinese and Middle Eastern systems and adopt different cultural and educational values. As a result, schoolchildren of these groups of immigrants face stereotypes (Derderian-Aghajanian & Wang, 2012). In addition, Packard (2001) indicates that children from immigrant Chinese families experience an intergenerational, intercultural gap with parents in terms of language and traditions, and that cultural background is an essential aspect of personal identity. A mismatch between learners' primary discourse at home and the discourse at school affects identity formation (Derderian-Aghajanian & Wang, 2012), which might be a useful topic for discussion in EFL college classrooms. Cultural differences in the family also affect immigrant students' adjustment and might be cultural barriers to students' success at school (Derderian-Aghajanian & Wang, 2012).
Since incorporating technology is one of the main principles for fostering education in the 21st century, it could be utilized effectively for including pieces of children's literature aimed at promoting multiculturalism. Digital stories, which are defined as short vignettes that combine the art of telling stories with multimedia objects including images, audio, and video (Rossiter & Garcia, 2010), could be very helpful sources not only for teaching English, but also for promoting tolerance and combating ignorance, prejudice, discrimination and hatred. Additionally, instruction must be varied to include multiple sources of input such as the internet, movies, literature and music to help learners live the experience found in the content not just with their mental abilities, but also with all their senses. Therefore, we recommend starting the course with a movie called "United States History Origins to 2000: Immigration and Cultural Change", which is part of the United States History Origins to 2000 DVD series (2003). This movie portrays the reality of the United States as a country of immigrants, where some immigrants assimilated rapidly, while others endured difficulties in adapting to their new lifestyle. These immigrants did not have an easy cultural transition because they worked long hours in dangerous jobs, were underpaid and lived in poverty in urban slums. The aim of showing this movie is to give the students a chance to explore the history of North America, mainly the waves of different immigrant groups, their experiences and their struggles in the new country. I also recommend introducing the story "Shooting an Elephant" by George Orwell, aiming to acquaint the students with the historical background of the hegemony of the British Empire in the 20th century, the consequences of such hegemony on the colonized countries as well as the colonizing ones, and the importance of the English language in the world. The focus will be on six stories that represent the lives of immigrants from different parts of the world, showing how to utilize the content of these stories to discuss cases of struggles of indigenous groups and cultural mismatches and conflicts between Chinese, Middle Eastern and Latino community members and the culture of the new country. To understand the historical development and events, especially in the USA, the first selection should represent the struggles of African Americans and the discriminatory policies against them, from slavery in the 18th century to the 21st century. More information about these six recommended stories appears in Table 1. In addition, the discussions will expose the learners to the difficulties of adjustment of these writers and their communities in the melting pot of English-speaking countries at present. See a list of other stories in the Appendix. Moreover, learners will be acquainted with the intergenerational conflicts between immigrants and their children who attended American schools. Such conflicts are obvious in the first two proposed stories: The Woman Warrior and The Joy Luck Club. These narratives will also expose EFL learners to the struggles of working-class immigrants and their dreams of going back to their countries at the age of retirement, which were crushed by their harsh reality in North America.
Pedagogical Recommendations
As an introduction to the course, EFL instructors can show documentaries downloaded from the internet or ordered from production websites, to support their instruction with vivid pictures and sounds, showing the present lives of the diverse citizens of North America as well as the tumultuous world changes of the 20th century that caused massive immigration. In addition, showing such documentaries would empower student teachers worldwide. For example, the documentary called "A Visit to a Mosque in America" includes visits of American people to a mosque in Cincinnati and interviews with Muslims of different colors and ethnicities, including Whites. This is a source not only for diffusing prejudices against American Muslims but also for empowering Muslim EFL viewers. The first proposed story, The Woman Warrior, represents the experience of a young educated woman who was born to a Chinese family. On the one hand, the mother used to tell the story of a swordswoman, a strong Chinese woman. On the other hand, she expected her daughter to accept old Chinese traditions, being submissive and silent. In addition, the story is a mixture of fear, resentment, displacement and disappointment. It shows how many Chinese American families are torn between two worlds without really being part of either of them. To facilitate reading, discussing and understanding the stories, course instructors can use questions to organize learning. To guide learners to discover answers to questions, instructors can prepare a list of questions addressing lower- and higher-order thinking skills about the content of each reading selection and upload them to the course website. While lower-order questions ask about facts, higher-order questions require students to analyze events and draw conclusions. These questions should be answered during class time, where students work in groups. Creating online learning groups would be helpful to give readers the chance to exchange ideas and discuss dilemmas and themes. To involve EFL student teachers actively, inquiry-based instruction, which emphasizes the learner's role and engagement in the learning process (Kidman, 2019), should be considered. Students could be given the chance to explore the lives of native and immigrant authors by preparing biographies of the writers in PowerPoint presentations and including short movies about them, their lives and the experiences that influenced them to write their narratives. The student teachers should also show the world map, locating the writers' original countries, to link the historical events in these countries with the motives of the writers. To vary the methods of involving learners in the course, some lessons could start with a free writing activity, where the course participants are invited to write about the main characters or events and share their writing in groups. At other times, in groups, they could respond to quotes from the stories.
Conclusion
This paper explains the importance of including literature by indigenous and immigrant authors in English-speaking countries in the EFL teacher training curriculum, aiming not only to develop the literacy skills of the learners, but also to raise their awareness of racism, stereotypes against others who are different from them, and social injustices. The aim is to develop intercultural competence in appreciating one's own culture and others' cultures and contributions to the world, towards promoting multiculturalism.
In return, EFL pre-service teachers would enhance their self-image as individuals in a particular minority group. In addition, their roles as EFL teachers would not be restricted to pedagogy and instruction. They will be agents for social change in this world.
3,659.6
2020-09-14T00:00:00.000
[ "Education", "Linguistics" ]
Axial anomaly as a collective effect of meson spectrum We study the transition form factors of the light mesons in the kinematics, where one photon is real and other is virtual. Using the dispersive approach to axial anomaly we show that the axial anomaly in this case reveals itself as a collective effect of meson spectrum. This allows us to get the relation between possible corrections to continuum and to lower states within QCD method which does not rely on factorization hypothesis. We show, relying on the recent data of BaBar Collaboration, that the relative correction to continuum is quite small, and small correction to continuum can dramatically change the pion form factor. Introduction The phenomenon of axial anomaly [1,2] is known to be one of the most subtle effects of quantum field theory. Perhaps the most vivid manifestation of it in particle physics can be found in twophoton decays of pseudoscalars. Usually the differential form of the axial anomaly is utilized to study this kind of processes. However, less known dispersive approach to axial anomaly ( [3,4], for a review, see [5]) leads to anomaly sum rule (ASR) relation, which provides a very powerful tool for study of various characteristics of meson spectrum. The absence of corrections to ASR allows to get certain exact relations between characteristics of hadrons, such as decay width [6] and relations between mixing parameters of pseudoscalars [7,8]. In the study of two-photon decays of pseudoscalars, usually the case of real photons is considered. However, ASR is valid for virtual photons also [9,10] which leads to several interesting applications. As we will see, in the case of one real and one virtual photon [9] the ASR takes into account the infinite number of meson states (which can contribute to it due to their quantum numbers), i.e. axial anomaly reveals itself as a collective effect of meson spectrum. This nontrivial fact together with the exactness of ASR gives us a tool to study different characteristics of meson spectrum (form factors of mesons, relation between possible corrections to lower meson states and continuum). The experimental measurements of the photon-pion γγ * → π 0 transition form factor F πγ (Q 2 ) in the range of photon virtualities Q 2 < 9 GeV 2 were performed by CELLO [11] and CLEO [12] Collaborations. The measured values of F πγ (Q 2 ) appeared to be consistent with predictions based on factorization approach to pQCD. Surprisingly, recent data of BaBar Collaboration on F πγ (Q 2 ) [13], which is available in the range 4 < Q 2 < 40 GeV 2 , have shown a strong disagreement with the pQCD predicted behaviour of γγ * → π 0 transition form factor. Though the BaBar data in the range Q 2 < 10 GeV 2 fit well the CLEO data and is in a good agreement with theoretical predictions from the light-cone QCD sum rules (LCSR), offered in [14], however, at larger virtualities strong disagreement takes place. Moreover, more precise recent LCSR analysis [15,16] shows, that it is impossible to explain BaBar data on F πγ (Q 2 ) at large Q 2 by use of usual (endpoint-suppressed) form of pion distribution amplitude. This have led (quite unexpectedly) to the question of pQCD factorization validity. Recently, there were proposed several approaches to explain such anomalous behaviour of F πγ (Q 2 ) [17][18][19][20][21], in particular, questioning pQCD factorization. 
At the same time, in [22] the authors give some arguments against the related flat-type pion distribution amplitude, while in [23] some doubts about BaBar results analysis were expressed. In this paper we study what can be learnt about the meson-photon transition form factors from the anomaly sum rule for the case of one virtual photon. This generalizes the usual application of anomaly, providing the boundary condition in the real photon limit only. Our (non-perturbative) QCD method does not imply the QCD factorization and is valid even if the QCD factorization is broken. It is shown, that using axial anomaly in the dispersive approach we can get the exact relations between possible corrections to lower states and continuum providing a possibility of relatively large corrections to the lower states. Anomaly sum rule Following [9], we briefly remind some results of the dispersive approach to axial anomaly which are relevant for this paper. The VVA triangle graph amplitude contains axial current J 5 α = (ūγ 5 γ α u −dγ 5 γ α d) and two vector currents J µ = ((2/3)ūγ µ u − (1/3)dγ µ d); k, q are momenta of photons. This amplitude can be presented as a tensor decomposition where the coefficients F j = F j (k 2 , q 2 , p 2 ; m 2 ), p = k + q, j = 1, . . . , 6 are the corresponding Lorentz invariant amplitudes (form factors). Note that these form factors do not have kinematical singularities and are suitable for dispersive approach, which we use to derive anomaly sum rule. Symmetries of the amplitude T αµν (k, q) impose the relations for the form factors F j (k, q). Bose symmetry, i.e. T αµν (k, q) = T ανµ (q, k) leads to Vector Ward identities for the amplitude T αµν (k, q) in terms of form factors read: Anomalous axial-vector Ward identity for T αµν (k, q) [1, 2] in terms of form factors can be rewritten as follows: where G is a form factor, related to the 2nd rank pseudotensor T µν involved in the "normal term" on the r.h.s. of (6): Writing the unsubtracted dispersion relations for the form factors one gets the finite subtraction for axial current divergence resulting in the anomaly sum rule which for the kinematical configuration we are interested in (k 2 = 0, q 2 = 0) takes the form [9]: where It holds for an arbitrary quark mass m and for any q 2 in the considered region. Another important property of the above relation is absence of any α s corrections to the integral [24]. Moreover, it is expected that it does not have any nonperturbative corrections too ('t Hooft's principle). Transition form factors of mesons The form factor F πγ of the transition π 0 → γγ * is defined from the matrix element: where k, q are momenta of virtual photons, p = k + q, and J µ = ((2/3)ūγ µ u − (1/3)dγ µ d) is the electromagnetic current of light quarks. Three-point correlation function T αµν (k, q) has pion (pole at p 2 = m 2 π ) and higher states contributions: where f π is a pion decay constant, which can be defined as a coefficient in the projection of axial current J 5 α onto one-pion state: The pion decay constant f π = 130.7 MeV is experimentally well determined from the decay of charged pion π − → µ − ν. Using the kinematical identities we can single out the pion contribution to 1 2 (F 3 − F 6 ). Then, the contribution of pion to Im(F 3 − F 6 )/2 (imaginary part is taken w.r.t. p 2 ) is: where Q 2 = −q 2 1 . 
It is well known that at Q² = 0 the pion contribution saturates the anomaly sum rule (9) [3], and F_πγ is normalized by the π⁰ → γγ decay rate [2]. On the other hand, at Q² ≠ 0, the factorization approach to perturbative quantum chromodynamics (pQCD) for exclusive processes at leading order in the strong coupling constant predicts the leading-order factorization formula [25,26], where f_π = 130.7 MeV and ϕ_π(x) is the pion distribution amplitude (DA). The pion DA depends on the renormalization scale [25,27] and at large Q² asymptotically acquires a simple form ϕ_asymp [28]. This leads to the asymptotic 1/Q² behaviour of the pion form factor. From (17) and (15) we get the contribution of the pion to the anomaly sum rule (9), Eq. (18). We see that at Q² ≠ 0 the anomaly sum rule (9) cannot be saturated by the pion contribution alone, due to its 1/Q² behavior, so we need to consider higher states. The higher-mass pseudoscalar states have the same behavior and are suppressed by the factor m²_π/m²_res, as follows from PCAC (since ∂^µ J³_µ should vanish in the chiral limit). The other contributions are provided by axial mesons, the lightest of which is the a₁(1260) meson. In fact, the contribution of the longitudinally polarized a₁ is given at large Q² by an equation similar to (18); indeed, the analysis of the axial-current bilocal matrix elements of a₁ is completely analogous to that [31] of the vector-current matrix elements of ρ. The contribution of the transversally polarized a₁ to (9) decreases even faster. The same is true for all the higher axial mesons and mesons of higher spin. So we make an important observation: for the case Q² ≠ 0 the anomaly relation (9) cannot be explained in terms of any finite number of mesons, because all transition form factors are decreasing functions of Q². That is why we conclude that only an infinite number of higher states can saturate the anomaly sum rule, and therefore at Q² ≠ 0 the axial anomaly is a genuine collective effect of the meson spectrum, in contrast with the case of two real photons (Q² = 0), where the anomaly sum rule is saturated by the pion contribution only. Let us note that this conclusion does not depend on any choice of meson distribution amplitudes.
Quark-hadron duality
Now we proceed to a particular analysis of the anomaly sum rule (9) using quark-hadron duality. In some sense, the discussion of the previous section is already based on quark-hadron duality; in this section we apply local quark-hadron duality to the anomaly sum rule (9). According to quark-hadron duality, let us saturate the spectral density A_3a with the pion and continuum contributions (20), where the continuum contribution A^QCD_3a θ(s − s₀) is, as usual, supposed to be equal to the QCD-calculated spectral function A_3a, and s₀ is the continuum threshold. Substituting (20) into (9), we obtain the anomaly sum rule in the corresponding form. The one-loop perturbative calculation [32,33] leads to a simple result for the spectral density function, so we can rewrite the anomaly sum rule in the form (23), and finally the pion form factor is given by (24), where s₀ = 0.7 GeV² is the continuum threshold. This result coincides with the interpolation formula proposed by S. Brodsky and G.P. Lepage [26], which was derived in the quark-hadron duality context by A.V. Radyushkin [34]. The contact with the exact anomaly sum rule observed here allows one to find relations between the contribution of π⁰ and those of higher states, which may be chosen as a₁ and the continuum.
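As a rough numerical illustration of the duality-based interpolation mentioned above, the sketch below evaluates a Brodsky–Lepage-type form factor F_πγ(Q²) ∝ s₀/(s₀ + Q²). The explicit normalization used here (the anomaly value at Q² = 0 in the f_π = 130.7 MeV convention) and the threshold s₀ = 0.7 GeV² are assumptions for illustration only, not a transcription of the paper's Eq. (24).

```python
import numpy as np

F_PI = 0.1307   # pion decay constant in GeV (f_pi = 130.7 MeV convention, as in the text)
S0   = 0.7      # assumed continuum threshold in GeV^2

def f_pigamma(q2):
    """Duality-motivated interpolation for the pi0 -> gamma gamma* form factor (GeV^-1).

    Normalized (by assumption) to the anomaly value at Q^2 = 0 and falling off
    as s0/(s0 + Q^2), i.e. a Brodsky-Lepage-type interpolation.
    """
    f0 = 1.0 / (2.0 * np.sqrt(2.0) * np.pi**2 * F_PI)  # assumed normalization at Q^2 = 0
    return f0 * S0 / (S0 + q2)

if __name__ == "__main__":
    for q2 in (0.0, 1.0, 4.0, 10.0, 20.0, 40.0):
        print(f"Q^2 = {q2:5.1f} GeV^2 :  F = {f_pigamma(q2):.4f} GeV^-1, "
              f"Q^2*F = {q2 * f_pigamma(q2):.4f} GeV")
```

With these assumptions, Q²·F_πγ flattens out at large Q², which is exactly the asymptotic behaviour that the BaBar data discussed below appear to violate.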
With account of these contributions, the anomaly sum rule (9) can be rewritten in the form (26), where I_a1 is the contribution of the a₁ meson to the sum rule (which can be expressed in terms of the a₁ form factors) and s₁ = 2.5 GeV² is the continuum threshold for this case. Using the asymptotic formula for the pion form factor (24), we can estimate the behavior of I_a1 at large Q²; this estimate can be treated as a good interpolation for the a₁ contribution with the correct asymptotic behavior at both large and small Q². The contributions of the pion, the a₁ meson and the continuum are plotted in Fig. 1. The figure illustrates the collective nature of the anomaly: indeed, the contribution of the infinite number of higher resonances (the continuum contribution) dominates starting from relatively small Q² ≃ 1.5 GeV².
Corrections' interplay and experimental data
As we learned above, the axial anomaly is a collective effect of the meson spectrum. That is why it is natural to study the relation between possible meson and continuum corrections implied by the anomaly sum rule. Let us write out the anomaly sum rule once more in the form (28), where the continuum contribution I_cont takes into account all the other higher-mass axial and higher-spin states. As we already mentioned, the anomaly sum rule (9) is an exact relation (∫₀^∞ A_3a(s; Q²) ds does not receive any corrections). However, the continuum contribution (29) may have perturbative as well as power corrections. Note that the two-loop corrections to the whole triangle graph were found to be zero [35], implying zero corrections to all the spectral densities. To match this result with the non-zero corrections found earlier in the factorization approach (see [16] and references therein), one should be careful. When one is dealing with the corrections to the form factor itself, only the corrections to the coefficient function should be considered, while all other corrections are absorbed into the definition of the distribution amplitude. At the same time, when the factorization theorem is applied to calculate the large-Q² asymptotics of the VVA diagram, all the corrections should be taken into account. To do so, one should add Eq. (3.11) of [16] with the projector onto the local axial current (proportional to the asymptotic pion distribution amplitude) and Eq. (B1) (coinciding with Eq. (1) of the Erratum) of [38], and obtain a zero result, compatible with [35]. Therefore, the model of the corrections to the continuum discussed below should rather correspond to some non-perturbative corrections. Let us first consider the contributions of local condensates. Naively, they should strongly decrease with Q², compensating the mass dimension of the gluon condensate (the quark one is suppressed even more). However, 't Hooft's principle requires (see [9], Section 4) a rapid decrease of the corrections with the Borel parameter M² (related to s), so that the power of Q² in the denominator may be not so large. In reality, the actual calculations do not satisfy this property, and the situation may be improved by the use of non-local condensates (see [9] and references therein). Another possibility is other non-perturbative contributions, such as instanton-induced ones. So we assume the appearance of such corrections in what follows when modelling the corrections to the continuum. In order to preserve the sum rule (28) (or, in particular, (23), (26)), the corrections to the continuum contribution should be exactly compensated by corrections to the lower states, in particular to the pion; this compensation is quantified in the next section.
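As a small arithmetic sketch of this compensation, assume (consistently with the duality model above, and purely for illustration) that the main pion and continuum contributions split as I⁰_π ∝ s₀/(s₀ + Q²) and I⁰_cont ∝ Q²/(s₀ + Q²). Then an exact compensation δI_π = −δI_cont makes the relative continuum correction smaller than the relative pion correction by a factor of roughly s₀/Q²:

```python
# Hedged illustration: relative-correction ratio in the "pi0 + continuum" model,
# under the ASSUMED split I0_pi ~ s0/(s0+Q2), I0_cont ~ Q2/(s0+Q2) of the total ASR.
s0 = 0.7   # GeV^2, continuum threshold (value quoted in the text)

def relative_correction_ratio(q2):
    i0_pi = s0 / (s0 + q2)
    i0_cont = q2 / (s0 + q2)
    # delta I_pi = -delta I_cont  =>  |dI_cont/I0_cont| / |dI_pi/I0_pi| = I0_pi/I0_cont = s0/Q2
    return i0_pi / i0_cont

for q2 in (5.0, 10.0, 20.0, 40.0):
    print(f"Q^2 = {q2:4.1f} GeV^2 : "
          f"|dI_cont/I0_cont| / |dI_pi/I0_pi| ~ {relative_correction_ratio(q2):.3f}")
```

For Q² = 20 GeV² this ratio is about 0.035, illustrating why a modest continuum correction can translate into a large relative correction to the pion term.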
It turns out that this is a rather uncommon situation: the corrections to the continuum are compensated by corrections to the lower states whose main terms are of the same order in Q² as the continuum corrections. To be more specific, let us consider the model "π⁰+continuum". From (23), the main contributions of the pion and the continuum follow directly. If the corrections to the pion and continuum contributions to the ASR are δI_π and δI_cont respectively, i.e. I_π = I⁰_π + δI_π and I_cont = I⁰_cont + δI_cont (33), then, since δI_π = −δI_cont, the ratio of the relative corrections to the continuum and to the pion is given by (34). For instance, for Q² = 20 GeV² and s₀ = 0.7 GeV² this ratio is only a few percent. We see that the relative correction to the continuum is suppressed by a factor 1/Q² compared to the correction to the pion. To illustrate our conclusion, we assume a correction to the continuum at large Q² of the form (36); this correction preserves the asymptotics of the continuum contribution at large Q². The contributions of the pion and the continuum to the ASR then take explicit forms. If this correction corresponds, as discussed above, to a (non-local) gluon condensate G², one may formally substitute the dimensional factor c·s₀ by c·G²/s₀, stemming from G²/M² for the Borel transforms. One can see that the leading power correction to the continuum, while preserving its asymptotics, results in a substantial contribution to the pion state (of the order of the main term I⁰_π), changing the pion form factor asymptotics at large Q². The experimental data on the pion form factor behavior at large Q² allow us to estimate the corrections to the continuum. If the pion form factor expression (24) had matched the experimental data, the leading continuum correction could only be of order 1/Q⁴. However, the latest BaBar data on the pion transition form factor [13] show a large discrepancy between the F_πγ values at large Q² and the expected asymptotic behaviour (24). Relying on the BaBar data, we can fit the parameters b and c (38); the corresponding plot of the combination Q²F_πγ for the best-fit parameters (38) is shown in Fig. 2. Based on the model "π⁰+continuum", we can calculate the relative correction to the continuum contribution to the ASR, δI_cont/I⁰_cont, relying on different fits of F_πγ. In Fig. 3, the ratios δI_cont/I⁰_cont for our fit (36), (38) and for the fit obtained in the recent paper [18] are shown. We see that the correction to the continuum is indeed small, even though the BaBar data show that the relative correction to the pion contribution is large. Above, we estimated the correction to the continuum in the model "π⁰+continuum"; however, one can consider more refined models like "π⁰+a₁+continuum". Such a model may involve an interplay between the corrections to the three terms in the ASR. Moreover, in the case of a small correction to the continuum, which currently seems to be the most likely situation, the ASR leads to a relation between the transition form factors of the pion and a₁, which may be further studied both theoretically and, most importantly, experimentally.
Discussion and Conclusions
The dispersive approach to the axial anomaly proved to be a useful tool for studying the properties of the meson spectrum. It is well known that when both photons are real (Q² = 0) the ASR is saturated by the pion contribution only. However, when one of the photons is virtual (Q² ≠ 0) we immediately get a different situation: the ASR can be saturated only by the full meson spectrum (no finite number of mesons can saturate the anomaly sum rule). So the axial anomaly is a collective effect of the meson spectrum.
The anomaly sum rule and quark-hadron duality, in the case of the model "π⁰+continuum", allow one to reproduce the well-known Brodsky-Lepage interpolation formula for F_πγ. We estimate the contribution of the a₁ meson to the anomaly sum rule in the model "π⁰+a₁+continuum". The exactness of the anomaly sum rule leads to a relation between the corrections to the continuum and the contributions of the lower-mass states. The latest experimental data on the pion transition form factor F_πγ at large Q² allow us to estimate the possible continuum correction. One can also consider the γ*γ → η transition form factor in the same way. Considering the ratio of the relative corrections of the η meson and the continuum, analogous to (34) (with the continuum threshold s₀^η ∼ 2.5 GeV²), we can estimate the relative correction to η to be several times smaller than the one for π⁰: (δI_η/I⁰_η)/(δI_π/I⁰_π) ≃ s₀^π/s₀^η ≃ 0.3.
4,474.4
2010-09-06T00:00:00.000
[ "Physics" ]
Exploring Needs of Academic Writing Course for LMS in the New Normal: A Development of EFL Materials Abstract INTRODUCTION After the outbreak of the novel coronavirus and the number of cases outside China increased 13-fold within 2 weeks, WHO announced it a global pandemic (Cucinotta & Vanelli, 2020). In the middle of March, public schools in 107 countries were closed and the school closures affected 862 million children and young people, almost a half of student population worldwide (Viner et al., 2020). This was an attempt to reduce social contacts and interrupt the transmission. Due to its outbreak, Indonesian government has issued two regulations on early April 2020; the government regulation and Health Ministry regulation to impose a largescale social restriction or PSBB (Sutrisno, 2020). It means to implement a partial lockdown. In a field of education, the decree of the Minister of Education and Culture No. 4 year 2020 issued four main points in implementing distance learning or learning from home with some consideration to the life skills, learning barrier and facilities at home (Yulia, 2020). These policies resulted in the schools and university closures, and they changed the teaching system into remote teaching and online learning (Purwanto et al., 2020). It can also be said as pembelajaran jarak jauh (PJJ). The sudden change to switch the teaching mode comes with inveitable consequences for both teachers and students. Though online distance learning can be an effective solution for this crisis situation, some barriers are also predictable, such as poor internet connection, budget, limited facilities at home, and health problems due to longer screen time (Heliandry, Nurhasanah, Suban, & Kuswanto, 2020). In Fact, Fauzi, Hermawan, and Khusuma (2020) have surveyed 45 elementary teachers in Banten and found that they are aware of online learning as a need during the pandemic though they face some difficulties. They include a low facility, internet usage and connection, planning, implementing and evaluating the learning process as well as collaboration with parents. At university level, most of the students in Indonesia have experienced online learning for the first time during the pandemic. Some of them, from Kendari (Anhusadar, 2020), perceived that online teaching mode is somewhat helpful. Meanwhile, some students from Makassar (Sujarwo, Sukmawati, Akhiruddin, Ridwan, & Siradjuddin, 2020) revealed that they feel positive and interested in applying online learning, though it was not entirely efficient. However, students on both research yielded that 'back to campus' was more preferable (Anhusadar, 2020;Sujarwo et al., 2020). In the context of teaching English as a foreign language (EFL), a recent study by Atmojo and Nugroho (2020) has investigated how EFL teachers carry out EFL learning and its challenges. They found that a variety of application and platforms and a series of activities were employed; from LMS to additional sources and either synchronous or asynchronous mode. Despite the attempts and arrangement, the research showed that the online learning seems to end with failure because it lacked preparation and planning. Therefore, online learning should be well-prepared with knowledge and skill dealt with the subject-matter, pedagogy, and technology. And need analysis can be an initial step in course preparation. 
The fact that the ongoing Covid-19 pandemic shows no sign of abating soon makes distance learning one of several options for the teaching system at university. Speaking to the Indonesian Rectors Forum through a virtual conference in early July 2020, President Joko Widodo recommended online learning as the default teaching system for universities in Indonesia. As cited from thejakartapost.com, the president said that this kind of teaching system has become a 'new normal', even the next normal (Fachriansyah, 2020). Moreover, the implementation of online learning courses uses various technological devices or tools, such as digital mobile applications, video conferencing software, learning platforms, and learning management systems. The latter, known as the LMS, has become ubiquitous, with approximately 99% of colleges and universities reporting that they have an LMS in place (Dahlstrom, Brooks, & Bichsel, 2014). In a very simple way, the LMS is an online portal that connects lecturers and students and provides an avenue to share classroom materials and activities (Adzharuddin, 2013). Historically speaking, the LMS was derived from generic terms such as computer-based instruction (CBI), computer-assisted instruction (CAI), and computer-assisted learning (CAL), but nowadays the term LMS refers to a number of different educational computer applications that handle all elements of the learning process, such as distributing and managing instructional content, identifying and evaluating instructional goals, tracking progress, and handling course registration and administration (Watson & Watson, 2007). Compared to other educational computing terms, the LMS has a specific systemic nature. The general characteristics of an LMS are as follows (Bailey, 1993, cited by Watson & Watson, 2007): 1) instructional objectives are tied to individual lessons, 2) lessons are incorporated into the standardized curriculum, 3) courseware extends across several grade levels in a consistent manner, 4) a management system collects the results of student performance, 5) lessons are provided based on the individual student's learning progress. Examples of the most popular LMSs are Canvas, Sakai CLE, MOODLE, Blackboard, Desire2Learn, and eCollege (Dahlstrom et al., 2014). Additionally, Simonson (2007, cited in Chung, Pasquini, & Koh, 2013) offered a more practical definition of LMSs, also known as course management systems (CMSs), as software systems designed to assist in the management of educational courses for students, especially by helping teachers and learners with course administration. Cavus and Zabadi (2014) revealed that, among open-source LMSs, Moodle ranks first, with a user-friendly interface and accessibility, serving more than 70 million users worldwide as of June 2013. A similar result was also reported by Machado and Tao (2007) when comparing Blackboard and Moodle: they concluded that, in the aggregate, when the systems were compared in their entireties, the Moodle learning management system was the preferred choice of the users. Despite its complexity, risks and cost, the uptake of LMSs on campuses and at universities is growing rapidly. It seems that the LMS offers some attractiveness to universities. Coates, James, and Baldwin (2005) wrote that the LMS brings at least six elements of attractiveness. First, the LMS provides a means of increasing the efficiency of teaching by delivering large-scale resource-based learning programmes.
Secondly, it carries the promise of enriched student learning. The next element is that the LMS fulfils students' expectations of advanced technology. The other elements of attractiveness are the competitive pressure among institutions, demands for greater access, and being part of an important culture shift. Along with the acceptance and usage of LMSs at universities, studies dealing with their effectiveness have been extensively conducted, based either on the users' point of view or on evaluation of the system and its management. Lonn, Teasley, and Krumm (2011) surveyed students on both residential and commuter campuses that used the Sakai community course architecture and found that most students perceived the course-related activities available within the LMS as valuable. Moreover, Kakasevki, Mihajlov, Arsenovski, and Chungurski (2008) evaluated the usability of Moodle as a system and as individual modules at the Faculty of Informatics and the Faculty of Economic Science. They reported that Moodle uses well-known e-tools for communication (online chat, forum, e-mail); however, some of these modules are not well developed. At the University of Belgrade, Horvat, Dobrota, Krsmanovic, and Cudanov (2015) investigated students' perceptions of the use of the Moodle LMS and found that students who use Moodle only before an exam report significantly lower satisfaction and quality-characteristic ratings than students who use it on a daily basis. Furthermore, the use of LMSs for language learning and teaching is also widely known, and its advantages have been empirically demonstrated; for example, learning a language through Moodle-based teaching materials supports the process of becoming an autonomous language learner (Khabbaz & Najjar, 2015). In Malaysia, the LMS has helped university students to improve their writing as well as to enhance their understanding of certain topics through explanations and examples given by their peers and their lecturers (Hamat, Azman, Noor, Bakar, & Nor, 2014). Moreover, in teaching a foreign language, the Moodle LMS has provided a number of benefits over the traditional system, particularly in organizing individual work (Anatolievna, 2018). For writing classes, the application of LMSs has resulted in positive and encouraging student feedback, indicating that students using the Moodle LMS in Hong Kong genuinely enjoyed the integration of technology (Cheung, Fong, & Wong, 2006), as well as favourable attitudes toward the application of the Edmodo social-network LMS in writing classes in Iran (Ma'azi & Janfeshan, 2018). Meanwhile, in Thailand, a study by Pumjarean, Muangnakin, and Tuntinakkhongul (2017) found that the Moodle LMS is a feasible and cost-effective educational technology for developing EFL students' grammar and writing skills in a blended e-learning environment. Specifically for teaching academic writing skills, the Blackboard LMS has had positive effects on academic writing and attitudes by facilitating interactions and scaffolding learning; the longer students' experience, the more positive their attitude towards the use of Blackboard to enhance academic literacy (Fageeh & Mekheimer, 2013). Furthermore, an experimental study by Imran (2020) showed that the Schoology LMS increased students' writing achievement.
Teaching English writing at the university level comes with considerable opportunity and autonomy (Widiati & Cahyono, 2001), but it is not without constraints, such as large class sizes and particular teaching and learning approaches and ideologies that affect L2 writing instruction (Bhowmik, 2009). As a consequence, an analysis of the students' needs in the writing class seems necessary and important; as Richards (2001) states, "a sound educational program should be based on analysis of learners' needs". Investigations of needs at the university level have been conducted in the field of language teaching; for example, a study by Sundari, Febriyanti, and Saragih (2016) analysed the needs of an EFL writing class for undergraduate students in developing a task-based syllabus and materials. However, a needs analysis for the development of EFL materials to serve an LMS has not yet been carried out. Therefore, as course preparation for remote online teaching through an LMS in the new-normal era, this study becomes urgent and essential. It aimed to describe the needs of an academic writing course delivered through the Moodle LMS from the students' viewpoint, and it addressed the following research question: what are the students' needs in EFL materials for an academic writing course using the Moodle LMS?
METHODS
This study was the initial stage of the research and development of EFL materials for an academic writing course using the Moodle LMS in the English Education Department, Faculty of Postgraduate Program, at a private university in Jakarta. To fulfil the research purpose, the steps of the systems-approach model of educational research and development by Dick, Carey and Carey (2001, cited by Gall, Gall, & Borg, 2003) were adopted. The needs analysis consisted of identifying course goals and objectives, conducting instructional analysis, and analysing learners and context. The participants were 67 students (41 females and 26 males), aged between 22 and 57 years old, who registered for the academic writing course delivered through remote teaching during the Covid-19 pandemic. Moreover, the participants' educational backgrounds (graduate degree) vary, though most of them graduated from an English education major (82.1%), as seen in Fig. 1.
Figure 1. Participants' Educational Background
To collect data, an online questionnaire was distributed as the major instrument to explore the students' preferences for the course run through the LMS. The question items covered the following aspects: 1) course goals/objectives, 2) teaching principles and approach, 3) content materials, 4) format of content materials, 5) writing activities, 6) types of feedback, 7) types of writing assessment/evaluation, and 8) virtual learning mode. In addition, some documents, such as the syllabus, worksheets, materials and activities from the existing course sessions, were collected to depict the existing materials and course activities. As a result, the information from the data covers two dimensions: present (what is being taught) and future (what needs to be taught, why and how it should be taught). The research data were then analysed quantitatively using percentages, interpreted, and presented in various layouts, such as tables, charts, and descriptions. Moreover, the results of the needs analysis informed the development of content materials for the academic writing course through the Moodle LMS, and the proposed EFL materials were discussed and presented.
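As a minimal illustration of the percentage-based tallying described above, the snippet below counts multi-select questionnaire responses and reports the share of respondents choosing each option. The response data here are invented purely for demonstration; they are not the study's actual data.

```python
from collections import Counter

# Hypothetical multi-select answers to one questionnaire item (not the study's real data):
# each respondent may tick more than one preferred writing activity.
responses = [
    ["essay development", "discussion forum"],
    ["essay development", "paragraph development"],
    ["paragraph development"],
    ["essay development"],
    ["discussion forum", "essay development"],
]

counts = Counter(option for answer in responses for option in answer)
n_respondents = len(responses)

for option, count in counts.most_common():
    # Percentages are computed per respondent, so a multi-select item can sum to more than 100%.
    print(f"{option:<22s} {100.0 * count / n_respondents:5.1f}%")
```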
Results
This study analysed the needs of the academic writing course for the English Education Department, Faculty of Postgraduate Program, as an initial stage in developing Moodle LMS-based EFL materials. The data are presented in two parts: the needs and the proposed Moodle LMS-based EFL materials.
The Needs of the Academic Writing Course
The academic writing course is one of the compulsory subjects in the English Education Department. It is designed for fourteen meetings, including a mid-term test and a final test, and carries three credits. In the existing course, the content materials are essay development with several text types or genres, such as the comparison/contrast essay, cause/effect essay, argumentative essay, and a research article. Most of the content materials were prepared for a face-to-face teaching system in formats such as PDF files and PPT presentations, and the book reference was Writing Academic English, fourth edition, by Alice Oshima and Ann Hogue. During remote teaching in the Covid-19 pandemic, the course was suddenly run through several digital platforms, such as Google Classroom, the WhatsApp mobile messenger application, and Zoom video conferencing. For the next course, the Moodle LMS will be applied to facilitate the teaching and learning sessions. Learner needs for the academic writing course through the LMS cover the aspects given in the questionnaire. The first aspect was the goal or objectives of the course. The majority of the student-participants thought that the academic writing course should be designed to enable students to write a research article for academic purposes, while paragraph and essay development were also preferred, respectively. The students' preferences on course goals/objectives can be seen in Table 2. Related to the learning principles and teaching approach for the academic writing course, most of the student-participants preferred the process writing approach, which follows the steps of prewriting-drafting-revising (62.7%), while the product approach (controlled writing-guided writing-freer writing) was chosen by 29.9%; the other preferences are displayed in Table 3. As for the materials, Table 4 shows the students' preferences on content topics. Similar to the course goal and objectives, how to write a publishable research paper received the highest response among the student-participants. On the question about material delivery, all the formats given in the options gained quite a large response, indicating that the student-participants desired a combination of material-delivery formats. However, the most preferred format was providing a large number of sample texts as PDF files. The percentages for material delivery can be seen in Figure 2. Concerning the writing activities and exercises during the sessions, the student-participants considered essay development the most preferable. Additionally, paragraph development and the discussion forum were also more desirable than other activities, as seen in Figure 3 (Students' Responses on Writing Activities). Regarding feedback on the students' writing products, most of the student-participants (88.1%) considered teacher feedback more preferable than peer feedback (40.3%). Moreover, for assessing academic writing, they perceived that individual project-based assessment was more effective than group-project-based assessment and individual sitting examination, with percentages of 74.6%, 37.3%, and 22.4%, respectively.
Related to the virtual learning mode, the student-participants perceived that real-time online sessions, i.e. the synchronous mode (64.2%), were preferable to the asynchronous mode.
The Proposed Moodle-based EFL Materials for Academic Writing
Considering the learner needs and context, the materials contain several texts in the academic genre in the form of essays, such as the cause/effect essay and the argumentative essay, and a research article. This supports the course goals/objectives in developing students' writing skills and enabling them to write several texts in the academic genre proficiently and confidently. Moreover, to cultivate students' critical thinking and argumentation, a framework of Bloom's digital taxonomy (Churches, 2009, cited in Wedlock & Growe, 2017) is also adopted. It is the updated version of Bloom's revised taxonomy, in which a number of digital additions are embedded in each key term of Bloom's Revised Taxonomy. Digital additions are a set of action verbs describing digital activities, such as browsing, googling, uploading, sharing, collaborating, and publishing.
CONCLUSION
This research was a needs analysis as the initial stage of EFL material development for an academic writing course through the Moodle LMS for the English Education Department. Based on the students' needs and context, the course is designed to develop students' writing skills in the academic genre. The process genre approach and the genre-based approach are the most preferred teaching approaches. Students thought that essay development and the research article should be the content materials discussed and practised in the sessions, with plenty of sample texts in PDF files. Moreover, they also perceived that teacher feedback, individual project-based assessment, and real-time online sessions (synchronous mode) are the best options for online learning through the LMS. After collecting information about the needs and context of the course through the Moodle LMS, a brief concept of the proposed Moodle-based EFL materials for academic writing is presented. It adopts the steps of the process genre-based approach to teaching writing and a framework of Bloom's digital taxonomy. The next stage of the process is to develop the Moodle-based EFL materials for academic writing in a systematic format with a well-structured layout and syllabus. The content materials contain the sequence and presentation for each meeting, including the learning activity and assessment for each meeting.
4,006.8
2021-01-20T00:00:00.000
[ "Education", "Computer Science" ]
From Stochastic Geometry to Structural Access Point Deployment for Wireless Networks: A Lloyd Algorithm Approach
In a wireless network, the locations of base stations (BSs)/access points (APs)/sensor nodes can be modeled either by stochastic processes, e.g., a Poisson point process (PPP), or by a deterministic pattern planned ahead by providers. While deterministic deployment does not provide tractable interference analysis in general, the PPP yields a tractable analysis of interference. However, the PPP allows APs to be deployed very close to each other and gives pessimistic results compared to field measurements. In this study, in order to address this issue, Lloyd's algorithm, which functions as a bridge between random and structural AP deployments, is investigated for analyzing the coverage probability in a network. The link distance distribution is modeled as a mixture of Weibull distributions, and its parameters are obtained by using the expectation-maximization (EM) algorithm for each iteration of Lloyd's algorithm. The link distance distribution is further utilized to calculate the coverage probability approximately by exploiting the tractability of the PPP.
Introduction
Rapidly growing device diversity, user demands, and the need for better coverage make network planning more complicated and introduce randomness in the deployment of BSs/APs. In scenarios where the locations of BSs/APs do not follow a deterministic structure, modeling the performance of the network precisely becomes a challenging task. One of the proposed approaches is to model BS/AP deployment as an independent PPP, a methodology which provides analytical tractability for interference and coverage probability analyses [1,2]. However, the independent PPP assumption ignores the correlation among the BSs/APs. Field measurements show that the coverage probability in practice lies between the traditional hexagonal model and the independent PPP approach. This is mainly due to the fact that network operators still have control over BS/AP deployment in a deterministic way [3,4], which creates intentional repulsion between BSs/APs. Therefore, more realistic models should be incorporated while still maintaining the tractability of the PPP for interference analysis. The authors in [5,6] apply an α-Ginibre point process (GPP) and a β-GPP to model the correlation between BSs/APs. The GPP is a determinantal point process and takes into account the repulsion between BSs/APs. In this study, we investigate scenarios where BSs/APs are deployed neither totally randomly nor totally deterministically. We propose a semi-analytical strategy by adopting Lloyd's algorithm to account for the scenarios that lie between the pessimistic PPP-based deployment and the optimistic structural BS/AP deployment. We derive the link distance distribution for each iteration of Lloyd's algorithm by using the EM algorithm. It is shown that the link distance can be approximated well by a mixture of Weibull distributions. By integrating the link distance distribution into the PPP analysis, we provide a coverage probability analysis. The rest of the paper is organized as follows. Lloyd's algorithm is described in Section 1. The analysis of the link distance distribution is given in Section 2. Section 3 presents the coverage probability study. The numerical results are presented in Section 4. The concluding remarks are provided in Section 5.
Lloyd's Algorithm Approach
A two-dimensional (2D) Voronoi diagram is a tessellation in which each polygon depicts the set of points nearest to a central generator point. Voronoi diagrams have diverse applications in many fields, such as wireless communications, astronomy, archeology, physics, mathematics, and coding [7,8]. Lloyd's algorithm incrementally moves the generator of each polygon to the centroid of that polygon and maximizes the distance between adjacent generators [9]. The maximization procedure creates repulsion between adjacent generators until the generators reach a fixed state, namely a centroidal Voronoi tessellation (CVT). The resulting Voronoi diagram asymptotically gives a structural geometry, depending on how many iteration steps are used [10]. The centroid of each Voronoi cell is given at each iteration by C_i = ∫_A r λ(r) dr / ∫_A λ(r) dr, where C_i is the centroid of the Voronoi cell, r is the position, λ(r) denotes the intensity at r, and A stands for the area of the cell over which the integrals are taken. Lloyd's algorithm can be summarized as follows: choose N generator points; for each generator n_i, compute the centroid C_i of its Voronoi cell and move n_i to C_i; repeat until the Euclidean distance between every C_i and n_i equals zero. In this study, we initialize the tessellation of BSs/APs based on a PPP. While the initial geometry captures the randomness of BS/AP deployment, the asymptotic Voronoi diagram obtained with Lloyd's algorithm yields a structural BS/AP deployment. Each iteration of Lloyd's algorithm represents an intermediate deployment scenario between the random and structural BS/AP deployments, which motivates us to adopt Lloyd's algorithm for modeling BS/AP deployment. A demonstration of iteration steps {0, 9, 490} is illustrated in Fig. 1. Furthermore, BSs/APs and/or sensors can be placed on drones, i.e., autonomous aerial vehicles, and drones can provide coverage to areas such as disaster/public-safety regions, rural areas and downtown areas, as seen in Fig. 2. We can also utilize this approach in self-organizing networks (SON) to decide the best coverage options for a given area. Hence, to exploit Lloyd's algorithm for modeling BS/AP and/or sensor deployment, an analytical expression for the link distance distribution at each iteration of Lloyd's algorithm is required. To the best of our knowledge, this link distance distribution is not available in the literature, and an approximate distribution is derived in the next section by exploiting the EM algorithm.
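A minimal numerical sketch of the Lloyd iteration just described: the uniform intensity λ(r) is approximated by a dense uniform sample of the window, generators start from a PPP realization, and each step replaces every generator by the centroid of its (discretized) Voronoi cell. The window size, intensity, sample size, and the fixed iteration count used in place of the zero-displacement stopping rule are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def lloyd_iterations(side=300.0, intensity=5e-4, n_iter=10, n_density=50_000):
    """Lloyd's algorithm on a PPP realization over a side x side window.

    The uniform intensity lambda(r) is represented by n_density uniform sample
    points; each iteration moves every generator to the centroid of the sample
    points nearest to it (a discretized Voronoi cell).
    """
    n_gen = rng.poisson(intensity * side * side)          # PPP: Poisson number of generators
    gen = rng.uniform(0.0, side, size=(n_gen, 2))         # uniform positions in the window
    dens = rng.uniform(0.0, side, size=(n_density, 2))    # sample proxy for lambda(r)

    for _ in range(n_iter):
        # squared distances from every density sample to every generator
        d2 = ((dens**2).sum(1)[:, None]
              - 2.0 * dens @ gen.T
              + (gen**2).sum(1)[None, :])
        labels = d2.argmin(axis=1)                        # nearest-generator assignment
        for i in range(n_gen):                            # centroid update per Voronoi cell
            cell = dens[labels == i]
            if len(cell):
                gen[i] = cell.mean(axis=0)
    return gen

if __name__ == "__main__":
    print(lloyd_iterations().shape, "generators after 10 Lloyd iterations")
```

Intermediate iteration counts of this routine play the role of the intermediate deployment scenarios discussed above, with iteration 0 being the pure PPP layout.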
Link Distance Distribution Analysis
Consider a snapshot of a wireless network that covers an area A. The users are distributed uniformly in the area. Each user is associated with the closest BS/AP, i.e., the users in a polygon generated by the Voronoi tessellation are connected to the corresponding generator of that polygon. The link distance between a user and its associated BS/AP is denoted by r. As the initial stage of Lloyd's algorithm, we consider a random BS/AP deployment where BSs/APs are spatially distributed in the area as a realization of a homogeneous 2D PPP Φ with intensity λ. The probability density function (PDF) of the link distance follows from the null probability of the PPP [1] and is given by f_r(r) = 2πλr exp(−λπr²), (2) which corresponds to a Rayleigh distribution with parameter 1/(2λπ). On the other hand, for the case of a hexagonal tessellation, the PDF of the link distance is given by the piecewise expression of [11], Eq. (3). The transition from (2) to (3) via Lloyd's algorithm can be approximated by a mixture of Weibull distributions whose parameters are found with the EM algorithm. A mixture of Weibull distributions can be expressed as f_r(r) = Σ_{j=1}^{l} φ_j (ϕ_j/δ_j)(r/δ_j)^{ϕ_j−1} exp(−(r/δ_j)^{ϕ_j}), (4) where φ_j is the weight of the j-th component with Σ_{j=1}^{l} φ_j = 1, δ_j and ϕ_j are the scale and shape parameters, respectively, and l is the number of Weibull components. In order to consider various BS/AP intensities, we define δ_j to be ψ_j √(λ₀/λ), where λ₀ is a constant and ψ_j is the scale parameter when λ = λ₀. The main reasons for using a mixture of Weibull distributions are: (i) the Rayleigh distribution is a special case of the Weibull distribution when the parameters are properly selected, (ii) the support of the Weibull distribution is [0, ∞), and (iii) the Weibull distribution can provide both negative and positive skewness, a feature required in the transition from (2) to (3). Next, we discuss the calculation of the parameters φ_j, δ_j and ϕ_j with the EM algorithm.
EM Algorithm for Link Distance Distribution
We have a training set r = {r^(1), r^(2), ..., r^(m)} consisting of m independent observations generated for each iteration step of Lloyd's algorithm. Our goal is to fit the Weibull parameters to the link distance distribution by utilizing the EM algorithm. The EM algorithm consists of two steps, namely, the expectation (E) step and the maximization (M) step; the reader is referred to [12] for more detailed explanations. The complete log-likelihood is defined in (5), where θ = {ϕ_j, δ_j, φ_j} and w_j^(i) = p(z^(i) = j | r^(i); θ) denotes the posterior probability associated with the hidden label z^(i). The steps of the EM algorithm are:
• E-step: for the current θ_t, choose w_j to maximize L(w_j, θ_t).
• M-step: for the current w_j, choose θ to maximize L(w_j, θ).
Maximizing (5) with respect to the parameters ϕ_j and δ_j, we obtain (6) and (7), respectively. In order to maximize (5) with respect to φ_j under the constraint Σ_{j=1}^{l} φ_j = 1, the Lagrangian function (8) is constructed, with a Lagrange multiplier enforcing the constraint. Taking the derivative of (8) with respect to φ_j and equating it to zero yields the update for φ_j. An iterative method such as limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) can be applied to obtain ϕ_j and δ_j [13], since ϕ_j and δ_j in (6) and (7) do not have explicit forms.
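A compact sketch of the EM fit described above for a Weibull mixture: the E-step computes posterior responsibilities, and the M-step updates the mixture weights in closed form while the shape and scale parameters are obtained numerically (here with scipy's BFGS optimizer, standing in for the L-BFGS step mentioned in the text). The component count, initial values and the synthetic input data are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(1)

def em_weibull_mixture(r, l=2, n_iter=30):
    """Fit an l-component Weibull mixture to link-distance samples r with EM."""
    shape = np.linspace(1.5, 2.5, l)          # initial shape parameters (assumed)
    scale = np.full(l, r.mean())              # initial scale parameters (assumed)
    w = np.full(l, 1.0 / l)                   # mixture weights

    for _ in range(n_iter):
        # E-step: posterior responsibility of component j for each observation
        pdf = np.stack([w[j] * weibull_min.pdf(r, shape[j], scale=scale[j])
                        for j in range(l)], axis=1)
        resp = pdf / pdf.sum(axis=1, keepdims=True)

        # M-step: closed-form weight update; numerical update of shape/scale
        w = resp.mean(axis=0)
        for j in range(l):
            def nll(p, j=j):
                c, s = np.exp(p)              # enforce positivity via log-parameters
                return -(resp[:, j] * weibull_min.logpdf(r, c, scale=s)).sum()
            res = minimize(nll, x0=np.log([shape[j], scale[j]]), method="BFGS")
            shape[j], scale[j] = np.exp(res.x)
    return w, shape, scale

if __name__ == "__main__":
    # synthetic link distances from a Rayleigh-like (PPP, iteration-0) law, for demo only
    r = weibull_min.rvs(2.0, scale=15.0, size=2000, random_state=rng)
    w, shape, scale = em_weibull_mixture(r, l=2)
    print("weights:", np.round(w, 3), "shapes:", np.round(shape, 3), "scales:", np.round(scale, 2))
```

In practice, the same routine would be run on link distances sampled at each Lloyd iteration to track how the fitted mixture parameters evolve from the Rayleigh-like to the hexagonal-like regime.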
Coverage Probability
The probability of coverage is the ratio of the network area where the signal-to-interference-plus-noise ratio (SINR) is greater than a certain threshold T to the total area. It is defined in (10), where α ≥ 2 is the path loss exponent, h denotes the channel gain between the tagged BS/AP and its user, and σ² is the noise power. The variable I_r stands for the total interference power received from the neighboring BSs/APs and is given by the sum over the interfering BSs/APs, where b_o is the tagged BS/AP, and g_n and R_n are the channel gain and the distance between the n-th interfering BS/AP and the tagged user, respectively. Assuming that the channel gains are characterized by i.i.d. exponential distributions, (10) is expressed as (12), where L_{I_r}(·) is the Laplace transform of I_r, given in (13). Due to the independence of the fading coefficients, (13) can be rewritten as (14). By using the properties of the probability generating functional (PGFL) [1,14, ch. 4, p. 126], (14) can be expressed as (15). Plugging (4) and (15) into (12), and using the substitution r^ϕ = u, the coverage probability is expressed in closed form, where Γ(c, b) stands for the incomplete Gamma function.
Numerical Results
In this section, we evaluate the Lloyd's-algorithm approximation of the coverage probability with computer simulations. BSs/APs are arranged according to a homogeneous PPP in a 300 × 300 square-meter area with λ = 1 unless otherwise stated. We consider a 75 × 75 square-meter region in the middle of the total coverage area to eliminate the boundary effect [1]. We consider a Rayleigh fading channel and set α to 4. The parameters ϕ_j, δ_j, and φ_j obtained with the EM algorithm for l = 3, m = 10⁴, and λ₀ = λ = 1 are provided in Table 1. It is worth emphasizing that a mixture of three Weibull distributions is sufficient to characterize the link distance distribution. The values in Table 1 are employed in the calculation of the coverage probability. In Fig. 3, the link distance distribution is investigated for λ = 1 and λ = 0.25. The mixture of Weibull distributions obtained via the EM algorithm agrees with the results of Lloyd's algorithm. Lloyd's algorithm acts as a bridge between (2) and (3). The radius of each Voronoi cell becomes more evenly distributed as the iteration count increases; therefore, f_r(r) converges to (3). This is mainly due to the fact that the shape of the Voronoi tessellation becomes more regular, as in the case of hexagonal-like tessellations. In Fig. 4, the impact of Lloyd's algorithm on the coverage probability is investigated. As seen in Fig. 4, Lloyd's algorithm represents the intermediate deployment scenarios between the pessimistic, i.e., random, and the optimistic, i.e., structural, BS/AP deployments. Fig. 5 compares the PPP base station model to our proposed Lloyd's-algorithm model for different α and iteration values. In these plots, the α values are 2.5, 4 and 6. The cumulative distribution function (CDF) versus signal-to-interference ratio (SIR) values are plotted for the benchmark paper [1] and for our proposed approach. A common observation for each α value is that the PPP deployment provides the lower bound. Also, α plays a crucial role in achieving better SIR and coverage, as expected. One can easily see that when α takes greater values, i.e., 4 and 6, the coverage probability increases, since a greater α yields a better received signal power relative to the interference.
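For comparison with the analytical curves discussed here, a minimal Monte-Carlo sketch of the PPP baseline (iteration 0 of Lloyd's algorithm): BSs/APs are dropped as a homogeneous PPP, the probe user at the window center associates with the nearest BS/AP, fading power gains are i.i.d. exponential (Rayleigh), and the empirical P(SINR > T) is averaged over drops. The window size, intensity, zero-noise (SIR) assumption and number of drops are illustrative choices, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(2)

def ppp_coverage(threshold_db, lam=1e-3, side=300.0, alpha=4.0,
                 noise=0.0, n_drops=2000):
    """Empirical coverage probability P(SINR > T) for a probe user at the window
    center under a homogeneous PPP of BSs/APs with Rayleigh fading."""
    T = 10.0 ** (threshold_db / 10.0)
    user = np.array([side / 2.0, side / 2.0])
    covered = 0
    for _ in range(n_drops):
        n = rng.poisson(lam * side * side)
        if n == 0:
            continue
        bs = rng.uniform(0.0, side, size=(n, 2))
        d = np.linalg.norm(bs - user, axis=1)
        g = rng.exponential(1.0, size=n)     # Rayleigh fading -> exponential power gains
        rx = g * d ** (-alpha)               # received powers with path loss exponent alpha
        k = d.argmin()                       # associate with the nearest BS/AP
        sinr = rx[k] / (rx.sum() - rx[k] + noise)
        covered += sinr > T
    return covered / n_drops

if __name__ == "__main__":
    for t_db in (-10, -5, 0, 5, 10):
        print(f"T = {t_db:3d} dB :  P_cov ~ {ppp_coverage(t_db):.3f}")
```

Replacing the PPP drop with the generator positions returned by a few Lloyd iterations would produce the intermediate curves of the kind shown in Figs. 4-6.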
In Fig. 6, we compare the coverage probability of the random PPP BS/AP model, the hexagonal BS/AP model, and Lloyd's approximation. The tightness of the proposed method for the coverage probability is illustrated for different iterations of Lloyd's algorithm, i.e., {0, 2, 9, 29}. As the iteration value increases, the coverage probability of the proposed method tends to approach that of the hexagonal BS/AP tessellation. It is important to note that the analytical approximations lose the tractability of Lloyd's algorithm at larger iteration values, such as beyond iteration 9: the analytical approximation suffers from the fact that the PGFL assumption begins to fail. Nevertheless, the proposed approximation holds for low SIRs.
Concluding Remarks
In this study, the impact of Lloyd's algorithm on the coverage probability of wireless networks is investigated. The link distance distribution is modeled as a mixture of Weibull distributions, and its parameters are derived with the EM method at each iteration of Lloyd's algorithm. The numerical results show that, if Lloyd's algorithm is employed, the transition between the pessimistic PPP deployment and the optimistic hexagonal deployment can be modeled approximately.
Figure 1. Illustration of the transition from random BS/AP deployment to structural BS/AP deployment with Lloyd's algorithm.
Figure 3. The transition from the Rayleigh distribution to the hexagonal distribution.
Figure 4. The variation of the coverage probability for a given number of iterations of Lloyd's algorithm.
Figure 5. Probability of coverage for the PPP and Lloyd's algorithm for different α values.
3,122
2018-05-02T00:00:00.000
[ "Computer Science" ]
Sol-Gel Synthesis, Crystal Structure, Electronic and Magnetic Properties of AlxTi1-xBiO3 (0.0 ≤x≤ 0.33) Oxides Synthesis of AlxTi1-xBiO3 (0.0 ≤x≤ 0.33) (S1-S4: x = 0.0, 0.11, 0.22, 0.33) oxides is performed by sol-gel method via nitratecitrate route. Analysis of the powder X-ray diffraction (XRD) patterns show tetragonal unit cell with lattice parameters: a = 6.6377, 6.6398, 6.6370, 6.6366 Å; c = 6.5445, 6.5391, 6.5259, 6.6583 Å, respectively in S1-S4, with space group P42/mnm and Z=4. Average crystallite sizes determined by Scherrer relation are found to be in the range ~16-36, 18-50, 19-48 and 19-41 nm in S1-S4, respectively. On Rietveld refinement of unit cell structures the agreement factors are lowered to: Rp= 98.28, 97.65, 98.85, 94.29 %; Rwp= 97.11, 96.76, 97.92, 95.73 %; Rexp= 0.09, 0.09, 0.09, 0.09 % in S1-S4, respectively. Fourier electron density mapping show irregular contours around Bi 3+ , Ti 3+ and O 2ions due to significant ionic character in Ti-O and Bi-O bonds in the materials. Presence of hysteresis loops in the range -6 kG to +6 kG at 300 K with magnetic susceptibility values in the range 5.926 x 10 -8 6.461 x 10 -8 emu/gG in S1-S4 show soft ferromagnetic nature of the oxides. Density functional theory (DFT) calculations using CASTEP (Cambridge Serial Total Energy Package) programme package show energy band gap, Eg, ~ 0.01-0.02 eV indicating weak semiconducting nature of the oxides. The valence band (VB) predominantly comprises O 2p, Ti 3d, Al 3p and Bi 6p orbitals, and the conduction band (CB) comprises mostly O 2p, Al 3p and Bi 6p orbitals with extension of band tails narrowing the energy band gap. INTRODUCTION In recent years, nanosized materials, such as nanoparticles, nanowires, nanotubes etc. have received much attention due to their extraordinary electronic, magnetic, optical, and thermal properties [1][2][3][4]. In the domain of nanosized materials, electronic structures of transition metal (TM) containing complex oxides [5,6] have attracted a special attention due to their unusual electronic and magnetic properties with potential applications in next generation magnetic recording media and optical memory devices [7][8][9]. Simultaneous presence of strong electron-electron interaction within the TM 3d manifolds and sizeable interaction strength due to thermally excited hopping of electrons between the 3d and oxygen 2p states are primarily responsible for the wide range of properties exhibited by these materials. In this paper, we report on the synthesis, crystal structure, electronic and magnetic properties of AlxTi1-xBiO3 (0.0 ≤x≤ 0.33) non-perovskite oxides [10] using direct structure-sensitive techniques such as: powder XRD, differential scanning calorimetry (DSC)/differential thermal analysis (DTA)-thermogravimetric analysis (TGA), scanning electron microscopy (SEM), magnetic measurements, AC electrical conductivity measurements, optical absorption and DFT calculations using CASTEP programme package on the optimized lattice constants and atomic positions. EXPERIMENTALS Sample Preparation The samples are prepared by sol-gel method [11] via nitrate-citrate route. Stoichiometric amounts of aluminium nitrate, Al(NO3)3, titanium(III) oxide, Ti2O3 and bismuth nitrate, Bi(NO3)3·5H2O, are dissolved in distilled water to prepare 0.1 M solutions each and mixed together. The pH of the resulting solution is adjusted to ~ 2 by adding HNO3. 
To this solution 30 ml of 1.5 M citric acid solution is added to prepare the sol, which is then air dried by stirring continuously at ~60 °C for 160 h to form the gel. The resulting gel is decomposed to a fine powder at ~120 °C, which is then heated in air at 450 °C for 6 h for complete combustion of the organic matter, followed by sintering at 800 °C for 8 h and quenching in air to obtain a grey fine powder.

Experimental Techniques

Powder XRD patterns of the samples are recorded on an X'pert powder X-ray diffractometer (PANalytical make) with a scan rate of 2°/min in the range 5°-85° in 2θ. Monochromatic Cu Kα radiation (λ ~ 1.5406 Å) is used as the X-ray source with power 40 kV/30 mA. The XRD patterns are analyzed using the FullProf Suite (version 3.90) software package [12] to determine the unit cell parameters and indexing. Refinement of the unit cell structure is done using the Rietveld method [13]. Fourier electron density mapping of the crystal structure is done using the FullProf Suite (version 3.90) software. Microstructures of the materials are examined by a SEM JSM-5410 and energy dispersive X-ray analysis (EDX) is done by a SUPER DRYER II instrument. DSC/DTA-TGA traces are obtained using a thermal system (DSC/TGA-DTA: TA Instruments) with a sensitivity of 0.2 μg in the range 30-1000 °C at a heating rate of 10 °C/min in N2 atmosphere. AC electrical conductivity measurements are carried out using a broad band dielectric spectrometer (BDS) (Novocontrol make, model Concept 80). Magnetic moments of the samples are recorded in the range ±6 kG at 300 K on a LAKESHORE VSM 7404 vibrating sample magnetometer. UV-VIS absorption measurements are performed in the range 200-800 nm using a Varian Cary 5000 UV-VIS-NIR spectrophotometer. Bulk densities of the samples are determined by the liquid displacement method using CCl4 as the immersion liquid (density 1.594 g/cc at 300 K).

Electronic Energy Band Structure and Density of States (DOS) Calculations

Calculations of electronic energy band structures and DOS are done using plane-wave DFT with a local gradient-corrected exchange-correlation functional [14] and performed with a commercial version of the CASTEP programme package [15][16][17], using Material Studio (MS) software, which uses a plane-wave basis set for the valence electrons and norm-conserving pseudopotentials [18] for the core states. This program evaluates the total energy of periodically repeating geometries based on DFT and the pseudopotential approximation. In this case only the valence electrons of the elements are represented explicitly in the calculations, with the valence-core interaction described by nonlocal pseudopotentials.

Determination of Unit Cell Parameters

Powder XRD patterns of S1-S4 of AlxTi1-xBiO3 (0.0 ≤ x ≤ 0.33) oxides are shown in Fig. 1. Using the FullProf software the phase in the samples is determined and the indexing of the lattice planes is done. The unit cell in the samples is found to be tetragonal with space group P42/mnm. The unit cell parameters are shown in Table 1 along with the values of the experimental densities. By comparing the experimental density values with the calculated ones, the value of Z is determined to be 4 in S1-S4. These results show that, despite the perovskite-oxide (ABO3) type compositional formula, these materials are non-perovskite oxides [10].
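As an illustration of this consistency check, a minimal sketch (not from the paper) of the calculated-density comparison is given below; the molar mass is computed for the nominal TiBiO3 end member (sample S1) and the lattice constants are the quoted S1 values, so the numbers are indicative only.

```python
import numpy as np

N_A = 6.022e23          # Avogadro's number, 1/mol

def density_tetragonal(Z, molar_mass, a_angstrom, c_angstrom):
    """Calculated density (g/cc) of a tetragonal cell: rho = Z*M / (N_A * V)."""
    volume_cc = (a_angstrom ** 2) * c_angstrom * 1e-24   # A^3 -> cm^3
    return Z * molar_mass / (N_A * volume_cc)

# Nominal TiBiO3 (sample S1): M = Ti + Bi + 3*O in g/mol, lattice constants from the text.
M_TiBiO3 = 47.867 + 208.980 + 3 * 15.999
for Z in (2, 4, 8):
    print(Z, round(density_tetragonal(Z, M_TiBiO3, 6.6377, 6.5445), 2), "g/cc")
# The Z whose calculated density matches the measured (liquid-displacement) density is adopted.
```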
Average crystallite sizes in the samples determined by the Scherrer relation [19] are found to be in the range ~16-36, 18-50, 19-48 and 19-41 nm in S1-S4, respectively, which shows the formation of nanoparticles in the oxides.

Microstructures, EDX and Thermal Analysis

SEM micrographs of the selected samples S1 and S4 are shown in Fig. 2 (a, b), which show irregular globular agglomerated particles in S1 (x=0.0) through S4 (x=0.33). Various localized regions of the samples also show similar microstructures with respect to the particle sizes and shapes. Fig. 2 (c) shows the EDX profile of S4. The presence of only the constituent elements Al, Ti, Bi and O in the profile shows the purity of the samples. Fig. 3 shows the DSC/DTA-TGA traces of S1 and S4 in the range 50-1000 °C. The abrupt weight loss in the TGA trace of S1 in the range 105 to 420 °C, with no characteristic peak in the DSC/DTA traces, is attributed to the removal of physically and chemically adsorbed water in the sample. Further, at higher temperatures up to 900 °C, the absence of any characteristic event in the DSC/DTA-TGA traces shows the thermal stability of the samples up to 900 °C. Furthermore, the weight loss in the samples above ~900 °C could be due to the decomposition of the samples.

Crystal Structure Refinement and Fourier Electron Density Mapping

Unit cell structures of S1-S4 are developed on space group P42/mnm with the refined cell parameters given in Table 1. A 3-dimensional view of the unit cell structure of S4 (x=0.33) is shown in Fig. 4 (a) along with the projection onto the (001) plane (Fig. 4 (b)). These views are similar in all the samples S1-S4. The crystal coordinates and the cartesian coordinates before and after refinement, the bond lengths and the bond angles in the unit cell structures are shown in Tables 2, 3 and 4, respectively. The various bond lengths and bond angles in the asymmetric units of S1-S4 show only marginal variation due to variations in composition. The 3-dimensional Fourier electron density mapping from the <001> plane of S4 (x=0.33) and the 2-dimensional electron density contour on the (001) plane are shown in Fig. 4 (c, d). Irregular electron density contours around the Ti³⁺, Bi³⁺ and O²⁻ ions indicate the significant ionic character of the Ti-O and Bi-O bonds.

Band Structure and Density of States (DOS)

Electronic energy band structures due to chemical bonding, the valence electron distribution in atoms and the electron localization function could provide physical insight into the structural and other related properties. Fig. 5 (a, b) shows the electronic energy band structures and the DOS of S1.

AC and DC Electrical Conductivity

The complex AC conductivity [23,24], σ*, is given by

σ* = σ′ + iσ′′ (1)

where σ′ is the real part and σ′′ is the imaginary part. The real part σ′ is called the AC conductivity. Fig. 7 (a-d) shows the plots of σ′ (S/cm) versus log f (Hz) of S1-S4 at different temperatures. Each of the plots is extrapolated to zero frequency to obtain the DC electrical conductivity, σdc, for the sample at that temperature. From the DC electrical conductivity data, log σdc versus 10³/T (K⁻¹) plots (Fig. 8) are obtained for all the samples using the Arrhenius equation

σdc = σ₀ exp(−Ea/kT) (2)

where Ea is the activation energy, k is the Boltzmann constant and T is the absolute temperature. The plots show nearly straight-line behaviour and the slopes are obtained from a linear fit of the plots. From the slopes, the values of the activation energy, Ea, of the samples are found to be 0.225, 0.318, 0.357 and 0.446 eV in S1-S4, respectively.
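A minimal sketch of this Arrhenius analysis is given below; the conductivity values are synthetic placeholders generated with Ea = 0.225 eV (the S1 value), not the measured data.

```python
import numpy as np

k_B = 8.617e-5          # Boltzmann constant in eV/K

def activation_energy(T_kelvin, sigma_dc):
    """Fit ln(sigma_dc) = ln(sigma_0) - Ea/(k_B*T) and return (Ea in eV, sigma_0)."""
    slope, intercept = np.polyfit(1.0 / np.asarray(T_kelvin),
                                  np.log(np.asarray(sigma_dc)), 1)
    return -slope * k_B, np.exp(intercept)

# Illustrative (made-up) data following sigma = sigma_0 * exp(-Ea / (k_B * T)).
T = np.array([293.0, 373.0, 473.0, 573.0])
sigma = 1e-2 * np.exp(-0.225 / (k_B * T))
Ea, sigma0 = activation_energy(T, sigma)
print(f"Ea = {Ea:.3f} eV, sigma_0 = {sigma0:.3e} S/cm")
```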
These activation energies show that the samples are weak semiconductors, and that the semiconductivity increases from S1 (TiBiO3) to S4 (Al0.33Ti0.67BiO3) with the progressive substitution of Al³⁺ (2p⁶) ions in Ti³⁺ (3d¹) sites. The result further shows that the semiconductivity arises from the partial delocalization of the single 3d¹ electron at the Ti³⁺ sites, which are linked through the bridging O²⁻ ions to the Bi³⁺ sites (Ti-O-Bi) in the crystalline lattice. Thus, the incorporation of Al³⁺ ions results in the formation of Al-O-Bi bridges containing only the localized electrons of the stable ions in the crystalline solid matrix; thereby the number of delocalized hopping electrons is reduced and consequently the semiconductivity is enhanced. It may be mentioned that the values of the energy band gap, Eg, calculated by CASTEP from the unit cell structures are found to be ~0.02 eV in S1-S4, which are much lower than the values obtained from the observed electrical conductivity data above. This inconsistency is due to the drawback of DFT calculations in such complex oxide systems.

Dielectric Properties

The complex dielectric function, ε*, is given by

ε* = ε′ + iε′′ (3)

where the real part ε′ is called the dielectric constant, and the imaginary part, ε′′, is called the dielectric loss factor due to the conduction process. Fig. 9 (a-d) shows the plots of the real part, ε′, and the imaginary part, ε′′, of the dielectric response of S1 (x=0.0) and S4 (x=0.33) at different temperatures from 20-300 °C versus AC frequency. At higher temperatures near 300 °C, ε′ decreases rapidly with increasing AC frequency in S1 and S4, while at lower temperatures near room temperature, ε′ decreases only marginally with increasing frequency. Furthermore, ε′ increases with increasing temperature from 20 °C to 300 °C at a fixed AC frequency, with a significant increase at lower frequencies and only a marginal increase at higher frequencies (~10⁷ Hz). This result shows that thermally activated hopping [23] of the Ti³⁺ (3d¹) electron is the prime factor for AC conductivity in these systems. Also, at higher temperatures up to 300 °C, the relatively more ordered orientation of the electric dipoles enhances the values of ε′. It is also important to mention that these rigid non-viscous systems do not exhibit an ideal Debye-type pattern in the ε′ versus frequency (Hz) plots (Fig. 9 (a, c)), which indicates that the electronic conductivity in these systems is significant as compared with the ionic conductivity. However, as compared with S1 (x=0.0) (TiBiO3), the sample S4 (x=0.33) (Al0.33Ti0.67BiO3) shows a slight indication of Debye-type peak formation in the mid-frequency range. This could be due to some contribution to the ionic conductivity from the mobile Al³⁺ ions in S4 owing to their smaller ionic radius. Fig. 9 (b, d) shows that the dielectric loss factor, ε′′, increases with increasing temperature at a fixed frequency, f (Hz), and decreases rapidly with increasing frequency at high temperatures as compared with the milder decrease at lower temperatures, with a less prominent Debye-type peak in the upper mid-range frequencies in S1 and S4. This result can also be attributed to the enhanced ordering of the electric dipoles at higher temperatures, as discussed in the case of the dielectric permittivity, ε′. The ε′′ versus f (Hz) plots normally show a peak in the mid-frequency range, which is called the relaxation peak. In Fig. 9 (b, d) a much less prominent such peak is observed at lower temperatures. However, the peak shifts towards higher frequencies with increasing temperature.
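For reference, the ideal Debye behaviour against which the measured ε′ and ε′′ curves are compared can be written down explicitly; the permittivities and relaxation times below are arbitrary illustrative values, not fitted to the samples.

```python
import numpy as np

def debye(freq_hz, eps_s, eps_inf, tau):
    """Ideal Debye relaxation.
    eps'  = eps_inf + (eps_s - eps_inf) / (1 + (omega*tau)^2)
    eps'' = (eps_s - eps_inf) * omega*tau / (1 + (omega*tau)^2)
    The loss eps'' peaks at omega*tau = 1, i.e. at f = 1/(2*pi*tau)."""
    wt = 2.0 * np.pi * freq_hz * tau
    d_eps = eps_s - eps_inf
    return eps_inf + d_eps / (1.0 + wt ** 2), d_eps * wt / (1.0 + wt ** 2)

# Illustrative values only: a shorter relaxation time tau (e.g. at higher temperature)
# shifts the eps'' peak to higher frequency, as observed in Fig. 9 (b, d).
f = np.logspace(1, 7, 7)
for tau in (1e-4, 1e-5):
    eps_re, eps_im = debye(f, eps_s=50.0, eps_inf=10.0, tau=tau)
    print(f"tau = {tau:.0e} s -> loss peak near f = {1/(2*np.pi*tau):.2e} Hz")
```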
These observations show that only very weak ionic conductivity takes place in these materials within the temperature range studied, in contrast to other systems reported by earlier workers [23].

Fig. 9. Plots of (a) the real part of the dielectric constant, ε′, at different temperatures for S1 (x = 0.00), (b) the dielectric loss, ε′′, at different temperatures for S1 (x = 0.00), (c) the real part of the dielectric constant, ε′, at different temperatures for S4 (x = 0.33), and (d) the dielectric loss, ε′′, at different temperatures for S4 (x = 0.33), versus AC frequency.

Magnetic Properties

Fig. 10 shows the field dependence of the magnetization (M-H) curves of S1-S4 of AlxTi1-xBiO3 (0.0 ≤ x ≤ 0.33) oxides measured at 300 K. The formation of hysteresis loops indicates the soft ferromagnetic nature of S1-S4 at 300 K. The M-H loops are analysed in the low-field region (±6.0 kG), and the values of saturation magnetization (Ms), coercivity (Hc), remanent magnetization (Mr) and the calculated magnetic susceptibility of S1-S4 at 300 K are shown in Table 5. The table shows that the above magnetic properties in the samples exhibit only marginal variation, which indicates the weak ferromagnetic nature of the materials.

Optical Absorption Spectroscopy

Room temperature optical absorbance spectra of S1-S4 are shown in Fig. 11. Fig. 11 (b) shows the plots of (αhν)² versus hν for S1-S4, where α is the absorption coefficient. The energy band gap, Eg, in the materials is obtained by extrapolating the linear part of the plots to the photon energy axis (Fig. 11 (b)). The obtained Eg values are 2.92, 2.85, 2.84, and 2.91 eV in S1-S4, respectively. However, the Eg values calculated from the unit cell structures by DFT are of the order of ~0.02 eV in the samples. This discrepancy/underestimation is due to the known limitations of DFT calculations involving exchange-correlation functionals.
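A minimal sketch of the Tauc-plot extrapolation used to obtain Eg is given below; the absorbance data are synthetic (a direct-gap model with Eg = 2.9 eV) and the fit window is an arbitrary assumption.

```python
import numpy as np

def tauc_band_gap(h_nu_eV, alpha, fit_window):
    """Estimate Eg from an (alpha*h*nu)^2 vs h*nu (Tauc) plot for a direct gap:
    fit the linear region and extrapolate to (alpha*h*nu)^2 = 0."""
    y = (alpha * h_nu_eV) ** 2
    lo, hi = fit_window
    mask = (h_nu_eV >= lo) & (h_nu_eV <= hi)
    slope, intercept = np.polyfit(h_nu_eV[mask], y[mask], 1)
    return -intercept / slope          # intercept of the fitted line with the energy axis

# Illustrative data: a direct-gap absorber with Eg = 2.9 eV plus a small background.
h_nu = np.linspace(2.0, 4.0, 200)
alpha = np.where(h_nu > 2.9, np.sqrt(np.clip(h_nu - 2.9, 0.0, None)) / h_nu, 0.0) + 1e-3
print(f"Eg ~ {tauc_band_gap(h_nu, alpha, (3.0, 3.5)):.2f} eV")
```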
3,547.6
2016-12-21T00:00:00.000
[ "Materials Science", "Physics" ]
Turbulent superstructures in Rayleigh-Bénard convection

Turbulent Rayleigh-Bénard convection displays a large-scale order in the form of rolls and cells on lengths larger than the layer height once the fluctuations of temperature and velocity are removed. These turbulent superstructures are reminiscent of the patterns close to the onset of convection. Here we report numerical simulations of turbulent convection in fluids at different Prandtl number ranging from 0.005 to 70 and for Rayleigh numbers up to 10⁷. We identify characteristic scales and times that separate the fast, small-scale turbulent fluctuations from the gradually changing large-scale superstructures. The characteristic scales of the large-scale patterns, which change with Prandtl and Rayleigh number, are also correlated with the boundary layer dynamics, and in particular the clustering of thermal plumes at the top and bottom plates. Our analysis suggests a scale separation and thus the existence of a simplified description of the turbulent superstructures in geo- and astrophysical settings.

Pages 7 and 8, including figure 6: There is some ambiguity regarding causality here. Perhaps I misunderstand the meaning, but is it the author's view that the boundary layer features should be considered sources or drivers for the coherent convection patterns, or simply that they are correlated with those patterns? This subtlety is important to clarify because it has implications for modeling of the large-scale structures. Page 8, right column: some speculation about why spatial separation scales increase up to Pr ~ 10 and then decay beyond that should be included. Page 9, methods: How would the authors expect results to change without rigid sidewall boundary conditions? Would periodic boundary conditions, such as might be more realistic for an atmospheric boundary layer, lead to significant changes?

Reviewer #3 (Remarks to the Author): The manuscript "Turbulent superstructures in Rayleigh-Bénard convection" has a very promising title because of the notion of superstructure that hitherto had no relation with convection. Unfortunately, the manuscript does not deliver on its promise. The so-called superstructures turn out to be no different from convection cells as we know them since the beginning of the study of Rayleigh-Bénard convection a long time ago. The reference list contains several papers that identified cells or superstructures in turbulent flows in the past. It is not clear from the text what is new apart from the more extended data base obtained from new simulations. For example, the data analysis around equations (4) and (5) does not seem to be exactly the same as in reference [14], but is there a noteworthy improvement? The paper goes on to discuss relevant time scales of the superstructures. The authors choose to study k_phi, which is not well defined when it first appears. If it is the azimuthal angle, phi_k would be a more suggestive name. At any rate, this angle is a basically random result for polygonal patterns, and because it is an average, it does not detect the characteristic time of single superstructures. Fig. 5 makes more sense if it is only for roll patterns. I could not understand the mechanism that the beginning of the first paragraph on p. 7 (Connection to boundary layers) attempts to explain, especially if "erratic variations of temperature filaments" are supposed to explain variations in superstructure size as large as observed. The connection between cell walls and plumes again is not really new. Cell walls are almost by definition places where most plumes fall or rise, either because they entrain the mean flow, or because the roll flow detaches the plumes (cause and effect presumably cannot be separated). I have also seen visualizations of the type of fig. 6 before, at least at high Pr.
In summary, I do not see in this manuscript the kind of novelty I would expect in a high-profile publication. It is true that the authors have pushed their simulations to lower Prandtl numbers and larger box sizes than anyone before, but they fail to extract from their data some new physical understanding. Even though the quantitative results will be of interest to an expert audience, I do not think they belong in Nature Communications.

Response to Reviewer #1

First, we wish to thank the Reviewer for her/his careful reading and the constructive comments. The resulting (and other) changes have been highlighted in color in the revised manuscript PDF file.

The authors present results of direct numerical simulations (DNS) of turbulent Rayleigh-Bénard convection (RBC) in a square cell with large aspect ratio. Simulations were performed for a moderately high Rayleigh (Ra) number of 10⁵ and Prandtl (Pr) numbers ranging from 0.005 to 70 on the one hand, and for Pr = 0.7 and Ra numbers ranging from 5 × 10³ to 10⁷ on the other hand. For each case small- and large-scale structures are separated by time-averaging of instantaneous flow fields. The found large-scale flow structures are denoted as superstructures. The paper is well written and contains interesting results regarding RBC in a large-aspect-ratio cell for a wide range of Pr numbers. The main message is that the time-averaged flow is organized in large-scale flow patterns which are similar to those found at much lower Rayleigh numbers. It is at least questionable to denote these large-scale motions as 'superstructures'. Further, it is well known from studies of turbulent RBC in air (Pr = 0.7) and other common fluids that time-averaged large-Ra-number flows are organized in large-scale patterns. However, the present paper confirms the latter observation for a wide range of Pr numbers. Therefore, the paper is recommended for publication.

Answer: We provide a few explanatory remarks here. The standard notion of turbulent superstructures comes from wall-bounded flows, where turbulent superstructures (or very-large-scale motions) are spatially extended patterns in the velocity fluctuations. But it has not been clear if this notion applies to other flows. This paper demonstrates that they exist in turbulent convection flows for a variety of conditions and also that they have their origin in a linear instability. We have demonstrated that the turbulent flow in a horizontally extended domain is organized into prominent long-lived patterns that exceed the typical scale height of the problem, i.e., the height of the convection layer. We have added new sentences on page 1 (left column). Indeed, as we state on the first page of our manuscript, one focus of the present work is on a wide range of Prandtl numbers, in particular on very low Prandtl numbers. In the latter case, the fluid turbulence is highly inertial, even at the moderate Rayleigh numbers discussed here. Even in these cases we find these regular patterns. One further aspect that, to the best of our knowledge, has not been studied so far in this specific context is the slow evolution of these large-scale patterns in turbulent convection flows. We have emphasized this point better on page 2 (left column). Yours sincerely, the authors.

Response to Reviewer #2

First, we wish to thank the Reviewer for her/his careful reading and the comments. In the following we address all of them point by point. The resulting (and other) changes have been highlighted in color in the revised manuscript PDF file.
For a paper to be of broad interest it needs a compelling scientific question, something that makes the reported work challenging, and relevancy to a widely appreciated problem. This paper has all three. The intriguing idea can be simply stated as follows: when a fluid is heated from below by just the right amount, convection begins, in the form of smooth, laminar flow patterns. The variety of patterns that can be achieved is vast, and considerable work has been done to understand what causes the patterns and how they depend on controlling parameters such as the relative values of thermal diffusion and viscosity (as quantified through the Prandtl number). At some point, as the heating is increased, the laminar flow breaks down, resulting in highly random turbulence. The surprise is that, in a sufficiently large container (wide compared to the depth), even the seemingly featureless turbulent flow self-organizes into large-scale, slowly varying flow patterns. At a qualitative level, anyone who has looked at the cloudy sky and seen hexagonal cells or parallel cloud 'streets' has observed a macroscopic manifestation of this phenomenon. And yet, while some empirical rules of thumb have been developed for classifying what patterns can be expected to occur, a rigorous and quantitative description is lacking. This paper addresses that question, and although mysteries still remain, some important clues are revealed. The second needed aspect is something challenging about the work, and here that aspect is clear: the simulations are a computational tour de force, spanning orders of magnitude in Prandtl and Rayleigh numbers, for high-aspect-ratio geometry, and for very long simulation times compared to the turnover time. Simply put, the authors are showing results that are the first of their kind at this level of extensive coverage. Finally, the work is clearly motivated by a host of practical problems, not the least of which is the hope of someday developing quantitative, physically-based models of atmospheric convection and cloud formation in the earth's turbulent boundary layer. This would have important implications for our ability to forecast weather and properly model earth's climate. The paper is clearly written, fully describes the computational and analysis tools, and provides both a vast new look at the parameter space of turbulent Rayleigh-Benard convection, as well as important new insights into the organization of turbulent superstructures. I recommend the work be accepted for publication and am confident it will be of interest to the broad audience targeted by Nature Communications. The following points should be considered:

Answer: Thank you for these supportive comments.

Page 1, bottom of left column: Be clearer about 'long-term investigations'. I believe what is meant by long-term is that the simulations run for times very large compared to typical turnover times.

Answer: Times are measured in free-fall time units T_f, which are given by the layer height H divided by the free-fall velocity U_f, a velocity which is composed of a set of system parameters (g, α, ΔT, and H). The turnover time scale is the time it takes for a fluid parcel on average to circulate in a large-scale roll that extends between both plates. The turnover time scale depends strongly on the Prandtl number (see Table I in the supplementary material), while T_f is the same time for all simulation cases. The turnover time is about 7 T_f for the lowest Pr and more than 100 T_f for the highest Pr.
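For completeness, the standard definitions behind these units (presumably what is meant here, since the formulas are not written out in the response) are:

```latex
U_f = \sqrt{g\,\alpha\,\Delta T\,H}, \qquad
T_f = \frac{H}{U_f} = \sqrt{\frac{H}{g\,\alpha\,\Delta T}},
```

with g the gravitational acceleration, α the thermal expansion coefficient, ΔT the imposed temperature difference across the layer, and H the layer height.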
Long-term integrations always imply that at least of the order of 10 turnover times have been studied. We have added this information on page 1.

Page 2, left column: The sentence containing "... correlated with the most prominent ridges in the vertical temperature field derivative at the bottom and top plate..." is somewhat unclear. I believe it would be clearer as "... correlated with the most prominent ridges in the derivative (at the bottom and top plates) of the vertical temperature field..."

Answer: Done. We have clarified this sentence on page 2 in the left column.

Page 2, right column: It is stated that three cases are non-turbulent (panels e, f, and g in Figure 3). What metric is used to determine whether the flow is turbulent? Could it be that the peak and subsequent falloff in Figure 4, panel d for the two largest Pr are a result of those two being non-turbulent, or only weakly turbulent?

Answer: We have restated our point more precisely. Certainly, the flows for Pr = 0.7, Ra = 5 × 10³, 10⁴ (f, g) and Pr = 70, Ra = 10⁵ (e) are also chaotic and time-dependent. As seen in Fig. 2, they all have a non-vanishing level of temperature and velocity fluctuations about the time-averaged patterns. When the magnitude of the velocity fluctuations exceeds the value of the mean velocity, we call the convection flow fully turbulent; otherwise it is weakly turbulent. This point is stated in the text on page 2. In case (e), we have additionally analysed time series of the Nusselt number averaged at the heating and cooling plates. These time series show oscillations with respect to time that indicate oscillatory instabilities, and the magnitude of these oscillations is also time-dependent. For the second part of your question, we refer to the last-but-one answer in this response below.

Page 3, left column: For the general reader, make it clear that the free-fall time is not the same as the circulation time.

Answer: Done on page 3. Earlier Phys. Rev. Lett. studies (cited side by side) examined the time evolution of thermal convection close to the onset for such very long times in large-aspect-ratio cells and detected a gradual evolution of the patterns on this scale. We have inserted these references on page 3 and extended the sentence in this respect. We expect (although we cannot prove) that this time scale is not relevant for a turbulent convection flow. Over multiples of this time scale, the patterns should have evolved such that even the coherence of the superstructures is lost.

Page 4, equation 6: Wave numbers k and k_φ should be more clearly defined here.

Answer: We have changed the notation to k and φ_k to make it clearer that the variable k stands for the magnitude of the wave vector, and φ_k is the angle the wavevector makes in wavenumber space. We have also added this discussion on page 4. Figure 5 is updated accordingly.

Page 5, figure 4: Please comment on the interpretation of an azimuthal average for linear or banded structures, as opposed to cellular structures. Some physical interpretation would help here.

Answer: Since these horizontal slices exhibit two-dimensional patterns, the Fourier transform is two-dimensional, giving either (k_x, k_y) or (k, φ_k) depending on your coordinate system. Since the structures are not always exactly aligned with the x and y axes in these cases (see figure 3), one sees in Fourier space a ring with a few prominent peaks (which can also be seen in the angle-time plots in figure 5).
Hence the most convenient way to find the dominant wavenumber is through an azimuthal average, as then the exact orientation of the rolls does not need to be known. See Morris et al., Phys. Rev. Lett. 71, 2026-2029 (1993) for more explanation. The azimuthally averaged power spectrum, E(k), is also known as the structure function S(k) in other literature. We added the Morris reference (now ref. 35) and more explanation on page 4 close to Eq. (6).

Page 5, right column: A brief definition of Péclet number would be useful for general readers.

Answer: The Péclet number Pe compares the advection and diffusion terms in a scalar transport equation (here for temperature). Its definition is similar to that of a Reynolds number Re: the kinematic viscosity ν in Re is replaced by the thermal diffusivity κ in Pe. We have extended the text on page 6 to make this point clear.

Page 5, figure 4: Make clear here, in the caption or in the axis labels, that λ is in units of H, and τ is in units of the free-fall time (it is stated in the appendix, but it is significant here in order to understand the interpretation of the numbers).

Pages 7 and 8, including figure 6: There is some ambiguity regarding causality here. Perhaps I misunderstand the meaning, but is it the author's view that the boundary layer features should be considered sources or drivers for the coherent convection patterns, or simply that they are correlated with those patterns? This subtlety is important to clarify because it has implications for modeling of the large-scale structures.

Answer: Yes, the only thing that we can say is that the boundary layer features are correlated with those patterns. Two possibilities exist: either there exists a global instability that sets the characteristic pattern scale and the resulting circulation rolls determine the features in the boundary layer, or the boundary layer dynamics, i.e. the local features close to the wall, set a scale by plume clustering that determines the characteristic patterns which we analyse in the midplane. We have added this point on page 7.

Page 8, right column: some speculation about why spatial separation scales increase up to Pr ~ 10 and then decay beyond that should be included.

Answer: One possible explanation builds on similar arguments to those that we used in the text to explain why λ_Θ > λ_U. When the Prandtl number is increased at a fixed Rayleigh number, the ratio of the thermal to the viscous boundary layer thickness decreases. It is approximately given by δ_T/δ_v ≈ √Re/(2Nu) (see Table 1 of the supplementary material). The thermal boundary layer is thus increasingly thinner and more deeply embedded inside the viscous boundary layer as Pr grows. As a consequence, the thickness of the rising and falling thermal plumes (which is comparable to the thermal boundary layer thickness δ_T) decreases, such that the plumes can stir the fluid less efficiently. This becomes visible in the Péclet number Pe_h, which is based on the root mean square of the horizontal velocity fluctuations in the center of the layer, v_h = ⟨u_x² + u_y²⟩^(1/2). As figure 1 of this response shows, the Péclet number peaks at Pr ~ 1 and decreases for larger Pr. This could explain the decrease of the characteristic scales for larger Pr. We have also analysed this behaviour for Ra = 10⁶ and Pr = 0.7, 7, 35, 70. Again, the wavelengths λ_U and λ_Θ peak for the run at Pr = 7 and we find a similar trend for Pe_h (see figure 1). We have added a corresponding paragraph on page 6 (left column).
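As a concrete illustration of the azimuthal averaging of the 2D power spectrum discussed in the answers above, here is a minimal sketch (not the authors' code); the synthetic field, box size and binning are placeholder assumptions.

```python
import numpy as np

def azimuthal_spectrum(field, box_size):
    """Azimuthally averaged power spectrum E(k) of a doubly periodic 2D field.
    Returns bin-centre wavenumbers and the mean spectral power in each |k| bin."""
    n = field.shape[0]
    power = np.abs(np.fft.fft2(field) / field.size) ** 2
    k1d = np.fft.fftfreq(n, d=box_size / n) * 2.0 * np.pi        # angular wavenumbers
    kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
    k_mag = np.sqrt(kx ** 2 + ky ** 2).ravel()
    k_bins = np.arange(0.5, n // 2) * (2.0 * np.pi / box_size)
    which = np.digitize(k_mag, k_bins)
    E = np.array([power.ravel()[which == i].mean() if np.any(which == i) else 0.0
                  for i in range(1, len(k_bins))])
    return 0.5 * (k_bins[:-1] + k_bins[1:]), E

# Synthetic "superstructure": rolls of wavelength 4H in a box of side 16H, plus noise.
H, gamma, n = 1.0, 16.0, 256
x = np.linspace(0.0, gamma * H, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
theta = np.cos(2.0 * np.pi * X / (4.0 * H)) + 0.3 * np.random.default_rng(1).standard_normal((n, n))
k, E = azimuthal_spectrum(theta, gamma * H)
print("dominant wavelength ~", 2.0 * np.pi / k[np.argmax(E)], "H")
```

Because the power is averaged over all orientations, the dominant wavelength is recovered regardless of how the rolls or cells happen to be oriented in the horizontal plane.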
Page 9, methods: How would the authors expect results to change without rigid sidewall boundary conditions? Would periodic boundary conditions, such as might be more realistic for an atmospheric boundary layer, lead to significant changes?

Answer: No-slip walls are taken since we want to compare our simulations directly with laboratory experiments of convection in air and water in the near future (in the same geometry). We have added a paragraph to the supplement that shows a nearly perfect agreement of the statistical quantities. Except for the wavelength in Fourier space, λ_U, all reported quantities agree in a direct comparison at a smaller aspect ratio of Γ = 16.

Response to Reviewer #3

First, we acknowledge the critical comments by the referee, which helped us to highlight the new aspects of our work better. In the following we address the comments point by point and hope that she/he changes her/his point of view. The resulting (and other) changes have been highlighted in color in the revised manuscript PDF file.

The manuscript "Turbulent superstructures in Rayleigh-Bénard convection" has a very promising title because of the notion of superstructure that hitherto had no relation with convection. Unfortunately, the manuscript does not deliver on its promise.

Answer: We think that the present work demonstrates for the first time the existence of these large-scale patterns of the temperature and velocity for a broad range of Prandtl numbers that spans more than four orders of magnitude, with a particular emphasis on very low Prandtl numbers. In the latter case, the fluid turbulence is highly inertial and it cannot necessarily be expected that the large-scale patterns persist into this parameter range. It is thus the first comprehensive study in this respect that extends the analysis of the spatial and temporal scales of turbulent superstructures from wall-bounded shear flows (see e.g. the added new ref. 5) to turbulent convection flows.

The so-called superstructures turn out to be no different from convection cells as we know them since the beginning of the study of Rayleigh-Bénard convection a long time ago. The reference list contains several papers that identified cells or superstructures in turbulent flows in the past. It is not clear from the text what is new apart from the more extended data base obtained from new simulations. For example, the data analysis around equations (4) and (5) does not seem to be exactly the same as in reference [14], but is there a noteworthy improvement?

Answer: On page 1, we clarified our definition of a turbulent superstructure in RBC, which we consider to accurately represent the patterns we have found here when taking the appropriate time average (i.e., over τ as defined in equation 7). We have also now explained that we are extending the idea from wall-bounded shear flows. Also, novel aspects of the present work are, in our view, the following ones: 1. Parameter range: We extended the parameter space for turbulent convection flows significantly, both in terms of the aspect ratio and, more importantly, in terms of the Prandtl number, as stated above. In this regime, no pattern formation studies in a turbulent convection flow exist. 2. Scale separation: We extract characteristic spatial and temporal scales that suggest a scale separation into large-scale slowly evolving patterns and rapid small-scale fluid motions. Spatial scales are extracted in multiple ways (Fourier space as in [14] and physical space), leading to consistent results.
We provide an argument for the maximum of the characteristic scale as a function of Pr in figure 4d, which is based on the horizontal Péclet number. 3. Slow time evolution: We demonstrate the slow temporal evolution of the turbulent superstructures by windowed averaging. Radially averaged spectra provide an alternative way to determine the lifetime of superstructures. 4. Correlation to plume formation: We connect the characteristic scale of the patterns, as observed in the bulk, to the correlations of the skin friction field, which is analysed at the plates and considered as a blueprint of the near-wall velocity field. We have now repeated this analysis for the time-averaged fields, which supports our argumentation even better, and updated figure 6 and the corresponding text.

The paper goes on to discuss relevant time scales of the superstructures. The authors choose to study k_φ, which is not well defined when it first appears. If it is the azimuthal angle, φ_k would be a more suggestive name. At any rate, this angle is a basically random result for polygonal patterns, and because it is an average, it does not detect the characteristic time of single superstructures. Fig. 5 makes more sense if it is only for roll patterns.

Answer: We have changed the notation to k and φ_k to make it clearer that the variable k stands for the magnitude of the wave vector, and φ_k is the angle the wavevector makes in wavenumber space. We have also added this discussion on page 4. We are not interested in the exact angle; we only want to determine any dominant orientations and how they evolve with time. An angle-time plot, in conjunction with figure 4b, which gives the dominant wavenumber, can provide a lot of information about the patterns in many cases, and not just for parallel rolls. Many lattice structures can be represented as superpositions of rolls at different orientations (see, for example, figure 4.6 of Cross and Hohenberg, "Pattern Formation and Dynamics in Non-Equilibrium Physics", Cambridge, 2009). We found it very interesting that the angle-time plots are not more homogeneous and show, for some cases, very dominant orientations that grow, peak and fade with time, reminiscent of the patterns seen in Cross, Meiron, and Tu, Chaos 4, 607 (1994). We have added the additional reference 37. We added a paragraph to the supplement that now describes the identification of the duration of various peaks in figure 5 with a lifetime of our superstructures. This provides a second, alternative way to determine τ.

I could not understand the mechanism that the beginning of the first paragraph on p. 7 (Connection to boundary layers) attempts to explain, especially if "erratic variations of temperature filaments" are supposed to explain variations in superstructure size as large as observed. The connection between cell walls and plumes again is not really new. Cell walls are almost by definition places where most plumes fall or rise, either because they entrain the mean flow, or because the roll flow detaches the plumes (cause and effect presumably cannot be separated). I have also seen visualizations of the type of fig. 6 before, at least at high Pr.

Answer: The connection between cell walls and plumes might have been discussed somewhere else. Our flows are turbulent, which causes a dispersion of most hot (cold) plumes when they rise (fall) into the bulk. What is shown in Fig. 6 is that the most prominent plume ridges can be correlated with the cell patterns in the midplane.
We now show this analysis for the time-averaged fields, which supports our argumentation better, and have updated the corresponding text on page 7. Furthermore, we include for the first time the structure of the skin friction field in this analysis. The clear connection of this pattern formation to the sources and sinks of the 2d skin friction field is made for the first time and provides, in our view, a better physical understanding of the correlation between boundary layer dynamics and bulk processes.

In summary, I do not see in this manuscript the kind of novelty I would expect in a high-profile publication. It is true that the authors have pushed their simulations to lower Prandtl numbers and larger box sizes than anyone before, but they fail to extract from their data some new physical understanding. Even though the quantitative results will be of interest to an expert audience, I do not think they belong in Nature Communications.

Answer: We are pleased with the referee's acknowledgment that we have pushed our simulations to lower Prandtl numbers than anyone before and to larger box sizes, but wish to emphasize that the paper is significantly more than just that. Although the reviewer is correct that large-scale structures in turbulent RBC have been found for specific cases before (as in reference 14, for example), there is no general understanding in the RBC community that these structures are present for such a wide range of parameters and that this is a general feature of turbulent RBC. In this study, where we cover a wide range of parameters, we have definitively demonstrated the existence of superstructures in turbulent RBC. We have also provided an analysis of the characteristic times and length scales of these structures as a function of Rayleigh and Prandtl number. We think this paper makes important enough contributions to be published in Nature Communications so that its general lessons can be disseminated to a wider audience.
6,174
2018-01-13T00:00:00.000
[ "Physics" ]
RaiseAuth: a novel bio-behavioral authentication method based on ultra-low-complexity movement

Authentication plays an important role in maintaining social security. Modern authentication methods often rely on large datasets and implement authentication in a data-driven manner. However, an essential question still remains unclear at the data level: to what extent can the authentication movement be simplified? We theoretically explain the rationality of authentication through arm movements by mathematical modeling and design the simplest scheme for the authentication movement. At the same time, we collect a small-sample multi-category dataset that compresses the authentication movement as much as possible according to the model function. On this basis, we propose a method which consists of five different cells. Each cell is matched with a custom data preprocessing module according to its structure. Four cells are composed of neural network modules based on residual blocks, and the last cell is composed of traditional machine learning algorithms. The experimental results show that arm movements can maintain high-accuracy authentication on small-sample multi-class datasets even with a very simple authentication movement.

Introduction

Human authentication has always been a focus of attention in the security field. Human authentication and identity camouflage are like two armies that confront each other and upgrade their equipment, constantly proposing corresponding solutions in response to each other's technological development. Backwardness in authentication often leads to serious economic problems, a lack of credibility, and even more fatal consequences. In recent years, the improvement of computer computing power and the development of related methods have promoted the diversity and accuracy of human authentication, and gradually met various personal and organizational privacy protection needs.

Knowledge-based authentication, based on "what a person knows" [1], has been widely used for several decades; it includes common access cards [2,3], usernames and passwords, etc. The 4- or 6-digit PIN code or the common access card makes it difficult for a perpetrator to imitate others in a short period of time without preparation. However, several studies [4] have shown that knowledge-based authentication methods such as PIN codes are difficult to remember and vulnerable to attack by perpetrators [5,6]. Common access cards are easily stolen or lost [3], and usernames and passwords are easily disclosed and embezzled. Hence, about one-fifth [4] of people prefer to store all of their personal information in one device, leading to a significantly increased risk and harm of information leakage.

Biometric authentication, based on "what a person is", has greatly alleviated the above problems. Biometrics are usually divided into bio-physiological and bio-behavioral according to the nature of the human feature. Bio-physiological authentication achieves the purpose of recognizing a person's identity by measuring the physical features of the human body, including hand region features [7,8], facial region features [9,10], ocular features [11,12], etc. Bio-physiological authentication does not require people to remember anything, but it requires cooperation to provide a sufficient amount of features for authentication.
Therefore, bio-physiological authentication is usually used in situations where the person voluntarily provides personal information or is compelled to provide personal information under supervision, such as police stations and banks. Bio-behavioral authentication is well adapted to the authentication needs under unsupervised conditions. Bio-behavioral authentication identifies people by detecting and learning biological behaviors such as personal habitual movements, including voice recognition [13,14], gait recognition [15,16], keystroke dynamics [17] and signature [18,19]. Because each person's body characteristics (such as height, weight, arm length, muscle development, throat development, etc.) and behavioral habits (such as keystroke speed and strength, step size, etc.) are different, bio-behavioral authentication can identify people with acceptable accuracy. Compared with knowledge-based authentication, bio-behavioral authentication does not have the disadvantage of being easily lost or stolen. Compared to bio-physiological authentication, bio-behavioral authentication requires less hardware and can operate with low-cost sensors. At the same time, bio-behavioral authentication has higher concealment [20,21] and is not easy to discover.

However, whether in the collection of datasets or in the application of the methods, there is a fact that cannot be ignored: in most real-life scenarios, the person being measured will not provide such ample authentication information. For the most common example, stealing is often a quick and precise movement completed within 1-5 s. The thief's face is not completely exposed to the camera, and there is no interaction with the phone screen or buttons. Similarly, for movements such as an intruder opening a door, people in these scenarios often do not provide sufficient data and a rich variety of features for authentication. This leads to two important questions and challenges: 1. Can a simple movement that is completed in a short time and only generates a small amount of data be used for authentication? 2. Can simple and commonly used sensors capture these very short-term movements?

In response to the above challenges, we compress the time and complexity of the movement to an unprecedented degree. At the same time, we design a variety of method structures to process the data and make a detailed analysis. In this work, we first model the authentication movement based on human joints and bones from a mathematical point of view and construct the model function. According to the relevant parameters of the function, the authentication movement is analyzed in detail and the sensor selection method is designed. Then, because the existing public datasets could not support our work, we collected a 110-person movement dataset for the designed authentication movement. The participants have no prior knowledge; that is, the participants whose data are collected do not know the purpose of the data before the collection, are informed of the data use only after collection, and their authorization for data use is then obtained. In this way, it is ensured that the collected data conform to the daily habits of the participants.
Finally, since there has never been a previous authentication method based on data of the same order of magnitude as our movement data, we designed different deep neural network structures based on a variety of common backbones, combined with a variety of machine learning classification methods, for comparison and analysis of the results. We note that a shorter conference version [22] of this paper appeared in the ACM Turing Celebration Conference (2020). Our previous work did not analyze the authentication movement mathematically, and did not clarify the connection between the movement, the mathematical model and the sensors. Compared with the previous work, we greatly compress the movement complexity, reduce the number of sensors, add four additional cells and add a cell evaluation module. The remainder of the paper is organized as follows: In the next section, we describe related work in bio-behavioral authentication. In the subsequent section, we analyze the mathematical model and design the authentication movement, following which we introduce our proposed method named RaiseAuth. In the penultimate section, we analyze the model performance and the resistance to attack. We conclude this work in the final section.

Related work

In this section, we summarize the efforts of the recent research community in bio-behavioral authentication and the corresponding measurement studies. A number of bio-behavioral authentication methods have been reported during the last decade [1,23]. Hong [24] collected the sensor data generated by the human hand when writing, through a special watch containing an acceleration sensor, and used this as the basis for authentication. Each participant is required to write 20 words and approximately 120 strokes, which takes approximately three minutes to generate one training sample. Langyue [25] proposed a novel feature extractor which achieved 97% accuracy in authentication based on people's gait behavior, where each person needed to provide data generated by walking for more than 33 min for training. Timothy [26] authenticated people through the coherent movements of people hitting the keyboard continuously, and designed the method based on a training dataset of over 5000 keyboard strokes per person. The above methods can often achieve an accuracy of more than 95%, and have good application prospects in some specific situations. However, it is obvious that the amount of data used by these methods is huge, and it takes a long time and high cost to collect data from one person to train the model and improve the accuracy, which makes these methods unsuitable for short-term simple movement data. Reducing the complexity and data scale of the authentication movement is important because each order-of-magnitude decrease in the authentication movement ushers in a new set of unforeseen challenges. Several studies [27][28][29][30] have also tried to use simple movements to achieve authentication. In [31], the authors used the combined movement when the user answers the phone, that is, the user unlocking the mobile phone and raising it to the ear, as the authentication movement. A 5% Equal Error Rate (EER) was achieved on a self-built dataset of 48 people, based on a training set where each person performed ten movements within 6000 ms. Jakub [32] collected the phone-holding behavior data of participants within 40 min, and proposed a method based on Multilayer Perceptron (MLP) and Convolutional Neural Network (CNN) machine learning models. The trained model is able to achieve 8% EER on 20 s test samples.
Although the above methods and systems have made certain contributions to compressing the amount of data, there is still room for improvement for the following two reasons. First, the above methods are all data-driven and do not analyze or explain the authentication movement itself, which leaves the correlation between the authentication movement and the authentication method unclear. Our method fills this gap by analyzing in detail the mathematical correlation between movements, sensors, and authentication. Secondly, even if the above methods simplify the authentication movement, the amount of data required for single-sample training is still more than 2.5 times ours. In contrast, RaiseAuth uses a simpler authentication movement, establishes a mathematical model to explain the rationality of the movement, and comprehensively analyzes the model structure and the authentication results.

Movement and sensors

In this section, we build a mathematical model of human joints and bones. By analyzing the relevant parameters of the model functions, the authentication movement is designed and the reasons for the selection are explained. The sensor selection method is also designed according to the analysis results.

Mathematical analysis

Most of the recognizable behaviors of the human body depend on the operation of the limbs. Compared with the trunk, the limbs have fewer bones but higher flexibility. The muscles ensure the normal operation of the limbs, and the bones and joints provide a natural entry point for the establishment of mathematical models. We chose the right arm among the limbs as the model-building template. For the upper arm (of length a, with elbow point M) we define rotation angles and angular velocities about the three coordinate axes; for the forearm M-N (of length b), with wrist point N(x₂, y₂, z₂), we define the rotation angles and angular velocities relative to the upper arm. Differentiating the resulting position expressions yields the movement state function. The derivation of these formulas shows that the movement state function is composed of two parts, L₁ and L₂, which means that the most comprehensive data acquisition scheme would be to wear sensors at both the elbow M and the wrist N for measurement and to use the data for authentication. However, through Formulas (12) and (13), it can be found that the parameters required to calculate L₂ cover all the parameters required by L₁, and the parameter richness of L₂ is much higher than that of L₁, which means that the weight of L₂ in the movement state function should be much higher than that of L₁. So it can be concluded that adding a sensor only at the wrist N to measure L₂ should also achieve authentication with good performance.

Authentication movement

From Formula (14), it can be found that the main parameters that affect the movement state function are the upper arm length a, the forearm length b, the angular velocity ω with its three components in the spatial coordinate system, and the rotation angle θ, which means that the authentication movement only needs to meet these conditions to enable authentication. So the movement we designed is very simple (Fig. 2): we asked participants to place their arms below their waist and then raise them 15-20 cm according to their natural strength and speed. This movement is typically completed within 2 s.
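Since the individual formulas are not reproduced above, the following is only an illustrative sketch of the underlying idea: with segment lengths a and b and time-varying joint angles, the wrist trajectory and its velocity are fully determined, which is what a wrist-worn sensor effectively observes. The planar two-link geometry and the angle profiles below are simplifying assumptions, not the paper's exact model.

```python
import numpy as np

def wrist_trajectory(a, b, theta_shoulder, theta_elbow):
    """Planar two-link arm: elbow M and wrist N positions for arrays of joint angles
    (shoulder angle from the downward vertical, elbow angle relative to the upper arm)."""
    mx = a * np.sin(theta_shoulder)
    my = -a * np.cos(theta_shoulder)
    nx = mx + b * np.sin(theta_shoulder + theta_elbow)
    ny = my - b * np.cos(theta_shoulder + theta_elbow)
    return np.stack([mx, my], axis=1), np.stack([nx, ny], axis=1)

# A ~2 s raise sampled at 100 Hz, with person-specific segment lengths and angle profiles.
t = np.linspace(0.0, 2.0, 200)
a, b = 0.30, 0.26                      # upper-arm / forearm lengths in metres (illustrative)
theta_s = np.deg2rad(20.0) * (1.0 - np.cos(np.pi * t / 2.0)) / 2.0
theta_e = np.deg2rad(40.0) * (1.0 - np.cos(np.pi * t / 2.0)) / 2.0
elbow, wrist = wrist_trajectory(a, b, theta_s, theta_e)
wrist_speed = np.linalg.norm(np.gradient(wrist, t, axis=0), axis=1)
print(f"vertical raise of the wrist: {wrist[-1, 1] - wrist[0, 1]:.3f} m, "
      f"peak wrist speed: {wrist_speed.max():.3f} m/s")
```

Changing a, b, or the angle profiles (i.e. a different person's arm and habits) changes the resulting trajectory and velocity signature, which is the property the authentication relies on.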
Placing the hands below the waist is only to ensure that the participants have enough room to raise the hand, and the reason for the raise is that we need to ensure that the arm movement stems from the participant's subjective decision, which makes the participants mobilize more muscle power to perform the movement; this is the key factor affecting the angular velocity and rotation angle. At the same time, since the lengths of the upper arms and forearms of each participant are different, this also provides more parameter variables for the movement state function. Therefore, with such a simple movement, all the requirements of the movement state function are satisfied. At the same time, we provide two scenarios, that is, completing this movement while sitting and while standing, respectively.

Sensors selection

Accelerometer, magnetometer and gyroscope sensors are chosen as the sensors for data acquisition for the following two reasons. First, after the mathematical analysis of the arm model, the movement function we obtained reveals the parameter composition that affects the authentication movement. The accelerometer can provide good data support for calculating the trajectory traveled by the wrist, the gyroscope can intuitively reflect the change of angular velocity, and the magnetometer provides convenience for confirming the instantaneous direction of movement. Second, we aim to increase the applicability of the method by collecting data using simple and commonly used sensors. At present, most smartphones have built-in accelerometer, magnetometer and gyroscope sensors to meet the different needs of different mobile applications. Therefore, choosing these three sensors gives our method a wide range of application scenarios, rather than leaving it only in theory.

Accelerometer

The acceleration sensor collects the acceleration of the sensor itself. Due to the gravity of the earth, when the device is stationary, the accelerometer reading will always contain a 9.81 m/s² interference. To eliminate this interference, high-pass filtering is used to process the acceleration sensor data. A low-pass filter is also used to process the raw acceleration data; the low-pass filter can eliminate the noise generated during data collection. The accelerometer data are used to reflect the user's habits regarding arm strength and the direction of the arm force. At the same time, the accelerometer data are used for auxiliary calculation of the trajectory. The data processed by the high-pass filter and the low-pass filter are added to the feature subset selection together with the raw data to ensure that important feature information is not lost.

Gyroscope

Due to the rigidity and precession of the gyroscope, the gyroscope data can provide an important basis for calculating the change of the angular velocity in the three-dimensional directions.

Magnetometer

The magnetometer measures the strength and direction of the magnetic field at the sensor. The magnetometer data can reflect the instantaneous change of direction of the sensor. In our method, the magnetometer data are used to reflect the direction while the participant moves the arm.

Method

For better evaluation, we build a variety of architectures suitable for our setting based on existing popular backbones, and put them into different cells to select the most suitable one and perform further analysis. The architecture of the RaiseAuth method is shown in Fig.
3, which contains 5 cells; each cell is configured with the corresponding data preprocessing method. Cell evaluation calculates the score each cell obtains on the dataset during training, and provides feedback that causes RaiseAuth to close the paths to the cells with low scores. The data of the accelerometer, magnetometer and gyroscope are, respectively, expressed as (AX, AY, AZ), (MX, MY, MZ) and (GX, GY, GZ), and the experimental data obtained by the accelerometer are processed by low-pass filtering (LPF) and high-pass filtering (HPF), with the obtained data expressed as (LX, LY, LZ) and (HX, HY, HZ), respectively.

1D multi-scale cell

What we first consider is to process and analyze the sensor data from a one-dimensional perspective. Due to the limited receptive field of the one-dimensional convolution kernel, we extract features from the input data at three different scales based on the bottleneck of ResNet [33] to achieve the multi-class classification task.

1D data preprocessing

As shown in Fig. 4, we stitch together the sensor data in the order of HPF, LPF, accelerometer, magnetometer and gyroscope. The length of the data generated by each sensor on each axis is fixed at 140; the insufficient part is filled with zeros, and the excess part is discarded, forming a one-dimensional data input with a length of 2100.

Cell architecture details

The cell structure and details are shown in Fig. 5, where BN means batch normalization and "/2" means the stride is 2. Taking "1×7 conv, 32, /2" as an example, it denotes a convolution with a kernel of size 1×7, 32 output channels, and stride 2. Taking "1×3 maxpool, /2" as an example, it denotes a max pooling operation with a 1×3 kernel and stride 2. Taking "1×6 avgpool, /1" as an example, it denotes an average pooling operation with a 1×6 kernel and stride 1. When the preprocessed data are input into the cell, they first go through two convolution layers to increase the number of channels. We choose a large convolution kernel to increase the receptive field, thereby increasing the breadth and depth of the feature map. We adopt batch normalization (BN) [34] right after each convolution and before activation, following [34]. We add max pooling after the activation function to mix features and adjust the output resolution. Then, considering the restricted receptive field of one-dimensional convolution, we perform residual convolution operations on the input at the three scales 1×3, 1×5 and 1×7, and finally concatenate the feature vectors output by the average pooling. The concatenated vector is fed into the fully connected layer and combined with softmax to achieve classification. During training, we use a batch size of 64 and SGD with a momentum of 0.9 as the optimizer. The learning rate is initialized to 0.01 and decays exponentially with a decay rate of 0.95. When the error plateaus, the learning rate is divided by 10. Dropout is not used, following the practice in [33]. The models are trained for up to 150 epochs.

2D residual cell

In this cell, we preprocess the signal into a 2D structure and combine it with ResNet to achieve the multi-class classification task. We choose models of different depths to construct the cells.

2D data preprocessing

As shown in Fig. 6, we construct the sensor data into an input format of 3 (channels) × 5 (height) × 140 (width).
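For concreteness, here is a minimal sketch of the 1D preprocessing described above for the 1D multi-scale cell (each axis fixed to 140 samples by zero-padding or truncation, then the 15 channels concatenated in the order HPF, LPF, accelerometer, magnetometer, gyroscope); the function and variable names are placeholders, not from the paper.

```python
import numpy as np

AXIS_LEN = 140          # fixed per-axis length; 15 channels -> 1D input of length 2100

def fix_length(x, length=AXIS_LEN):
    """Zero-pad or truncate a single-axis signal to the fixed length."""
    x = np.asarray(x, dtype=np.float32)
    out = np.zeros(length, dtype=np.float32)
    out[: min(len(x), length)] = x[:length]
    return out

def make_1d_input(hpf, lpf, acc, mag, gyr):
    """Each argument maps 'x', 'y', 'z' to one axis of a sensor stream.
    Channels are concatenated in the order HPF, LPF, accelerometer, magnetometer, gyroscope."""
    channels = []
    for sensor in (hpf, lpf, acc, mag, gyr):
        for axis in ("x", "y", "z"):
            channels.append(fix_length(sensor[axis]))
    return np.concatenate(channels)          # shape (2100,)

# Example with random signals of uneven length (a ~1.4 s movement at 100 Hz gives ~140 samples).
rng = np.random.default_rng(0)
fake = lambda: {a: rng.standard_normal(rng.integers(100, 160)) for a in "xyz"}
sample = make_1d_input(fake(), fake(), fake(), fake(), fake())
print(sample.shape)     # (2100,)
```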
The length of the training-sample data generated by each sensor on each axis is fixed at 140; the insufficient part is filled with zeros and the excess part is discarded. Among them, the height of 5 corresponds to the data generated by the 5 sensors, and the 3 channels correspond to the data generated by the 5 sensors on the three axes (x, y, z). The advantage of this structure is that, during the convolution operation, the convolution kernel can acquire the data generated by the three axes at the same time. Cell architecture details Referring to previous work on Occam's Razor and ResNet [33], we choose ResNet-18, ResNet-50 and ResNet-101 to construct cells. Even though it has been shown theoretically that the residual block ensures that network accuracy will at least not decrease as the network deepens, we still try shallower networks to enrich the accuracy comparison. During training we use a batch size of 32 and SGD with a momentum of 0.9 to update the weights. The learning rate is initialized to 0.01 and descends exponentially with a descending rate of 0.94. When the error plateaus, the learning rate is divided by 10. Dropout is not used. The models are trained for up to 130 epochs. Traditional machine learning cell In [35], it was shown that traditional machine learning methods can often achieve better results than deep neural networks on small-sample datasets. For example, on the ORL dataset [36], 400 images are divided into 40 categories, and the Random Forest algorithm achieves better classification results than the deep neural network, which is also confirmed in some of our previous work [22]. Our dataset is also a small-sample dataset, so we built a traditional machine learning cell for experiments and comparisons. Feature extraction and concatenation To increase the range and diversity of the available features, on the basis of the original data we compute the magnitudes MA = √(AX² + AY² + AZ²), MM = √(MX² + MY² + MZ²), MG = √(GX² + GY² + GZ²), MLA = √(LX² + LY² + LZ²), and MHA = √(HX² + HY² + HZ²). For the resulting 20 sets of data, we calculate the minimum, maximum, mean, standard deviation, skewness and kurtosis of each set. Therefore, for each training sample, we obtain a total of 120 features as shown in Table 1. For convenience, we assign a unique id to each feature in Table 1. For example, FId.15 is the mean value of the Z component of the acceleration measured by the accelerometer (AZ) and FId.95 is the skewness of the magnitude of the acceleration (MA). The feature id will be used instead of the long name of the feature in the following. Feature fusion in bio-behavioral authentication can increase accuracy and reduce data redundancy. Moreover, the earlier the data fusion, the better the effect [37]; however, early sensor-level fusion brings a large amount of noise into the model, so sensor-level fusion often does not yield the best results. Feature-level data fusion is therefore considered a more effective choice for improving accuracy, and we select feature-level data fusion. Feature subset selection Feature subset selection plays an important role in reducing data dimensions and preventing overfitting. Based on the correlation between features and classes, a feature subset selection algorithm is proposed, named the correlation feature maximization (CFM) feature subset selection algorithm (Algorithm 1).
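Before moving on to the CFM selection step, the feature extraction and concatenation just described can be sketched as follows, assuming NumPy and SciPy; the ordering of the features (and hence the FId numbering) is not specified here and is our assumption.

import numpy as np
from scipy.stats import skew, kurtosis

def magnitude(x, y, z):
    """Resultant magnitude of a tri-axial series (our reading of MA, MM, MG, MLA, MHA)."""
    return np.sqrt(np.square(x) + np.square(y) + np.square(z))

def stats_6(series):
    """The six statistics listed in the text, computed for one series."""
    s = np.asarray(series, dtype=np.float64)
    return [s.min(), s.max(), s.mean(), s.std(), skew(s), kurtosis(s)]

def extract_features(sample):
    """sample: dict mapping (sensor, axis) -> series for the 5 sensors.
    Returns a 120-dimensional vector: (15 axis series + 5 magnitudes) * 6 statistics."""
    feats = []
    for sensor in ["hpf", "lpf", "acc", "mag", "gyro"]:
        x, y, z = (np.asarray(sample[(sensor, a)]) for a in ("x", "y", "z"))
        for series in (x, y, z, magnitude(x, y, z)):
            feats.extend(stats_6(series))
    return np.array(feats)   # 5 sensors * 4 series * 6 statistics = 120 features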
Suppose the number of features is N_feature; CFM evaluates each feature according to the correlation between features and classes [38]. N_feature correlation evaluation scores are obtained, forming the array S_feature. The higher the score, the greater the correlation between the feature and the class. Suppose N_sensor sensors are involved in the authentication movement; CFM regards the features extracted from the same sensor as one set, giving N_sensor sets of features, and M_subset denotes the number of features selected from each set. At the same time, CFM assigns a weight value to each feature according to the ranking of the correlation evaluation scores, forming the array W_feature. The higher the ranking, the larger the weight, and the highest-ranked features of each sensor are retained to form the feature subsets (Table 2). This enables the CFM algorithm to filter out the best-performing features produced by each sensor and group them into feature subsets. Among them, in the sitting posture, the proportions of the features related to the high-pass filtered accelerometer, low-pass filtered accelerometer, accelerometer raw data, magnetometer and gyroscope to the total number of features of the feature subset are 15%, 21.7%, 21.7%, 28.3% and 13.3%. In the standing posture, the proportions are 14.9%, 19.1%, 19.1%, 23.4% and 23.4%, respectively. The data of each sensor thus occupy an important proportion of the feature subset. Classifier selection According to the feature subsets (Table 2), to find the most suitable classification algorithm, 5 commonly used classification algorithms [39] (Naive Bayes (NB), Bayes Net (BN), J48, Random Forest (RF) and Simple Logistic (SL)) were used to model and authenticate in the sitting and standing postures, respectively. Cell evaluation Cell evaluation calculates the Score each cell obtains as a function of N_param, the number of parameters involved in the cell calculation, and Error, the minimum error rate on the validation set during training. This module selects the cell with the highest score at the end of training and keeps its path, while closing the paths to the other cells, ensuring that only one cell path is left in the end. Performance evaluation In this section, we first describe the details of our dataset composition and the collection of training and test samples. Secondly, we evaluate the performance of the different cells of the model on the dataset, and conduct random attack experiments and imitation attack experiments. Data collection used an iPhone 7 (64 GB) running iOS 12.2; the computation was performed on Windows 10 with an RTX 3080 Ti, 32 GB RAM and a 1 TB hard disk. Data collection We used the built-in accelerometer, magnetometer and gyroscope sensors to collect the data. The number of participants in the experiment was 138: 110 participants took part in the training set and testing set collection, 20 participants took part in the random attack experiment, and 8 participants took part in the imitation attack. The sampling rate of the accelerometer, magnetometer and gyroscope was set to 100 Hz. In response to our previous challenge of using the smallest possible dataset to achieve high-accuracy authentication, and to compress the sample size required for each person's training as much as possible, we asked each of the 110 participants to perform the authentication movement 5 times in each of the sitting and standing positions. The duration of each movement was determined by the participants' personal habits, but was usually no more than two seconds (in fact, we found that the longest movement time among the 110 participants was 1.38 s).
Therefore, in the 1D data preprocessing and 2D data preprocessing sections, we set the data length of each axis of each sensor to 140, to ensure that all sample data can be included while padding as few zeros as possible. A total of 1100 training samples were collected. We then asked the 110 participants to perform the authentication movement 4 more times in each of the sitting and standing positions to form the test dataset. Therefore, we obtained a total of 1980 training and testing samples, covering 110 classes of authentication movement. We perform a 110-class classification task on this dataset. To test RaiseAuth's resistance to random attacks, each of the 20 random attackers provided 50 random attacks in the sitting posture and 50 random attacks in the standing posture. Each random attack has the following characteristic: the random attacker tries to make the authentication movement with random behavioral habits, without knowing any of the data of the 110 real users. A total of 1000 random attack samples in the standing posture and 1000 random attack samples in the sitting posture were provided. To test RaiseAuth's resistance to imitation attacks, four pairs of people who were similar in height, weight, forearm length and upper arm length were asked to make the authentication movement. Each pair consists of a real user and an imitating attacker. While observing the real user's authentication movement, the imitating attacker tries to imitate the real user 20 times in the standing posture and 20 times in the sitting posture. The four pairs of people thus provided 80 standing-posture imitation attack samples and 80 sitting-posture imitation attack samples. Results Based on the training set and testing set of the 110 participants, we evaluated the performance of each Cell, and part of the results is shown in Table 3. The column Params indicates the number of parameters involved in the calculation process of the Cell. The larger the number of parameters, the more computing resources are required for training and testing. It can be seen that the number of network parameters in the 1D multi-scale cell is significantly lower than in the 2D ResNet-18, 2D ResNet-50 and 2D ResNet-101 cells. In terms of error rate, we selected three ways to evaluate: we took the data of the sitting posture, the data of the standing posture, and the data of the mixed sitting and standing postures for model training and testing, respectively. From the results, we can see that even though the network structures range from a 1-dimensional shallow low-parameter neural network to a 2-dimensional 101-layer deep high-parameter network, their errors are basically the same. The depth of the network did not bring about a significant change. This is in agreement with our previous predictions, and with previous work in [35]. We believe that this is because the amount of sample data is too small and the number of sample categories is too large, so that the deep neural network structure cannot obtain enough data to train its weights. This makes the network usually overfit very quickly on the training set, while performing poorly on the validation and test sets. However, we can still see from the results that, compared with the standing posture, the sitting posture has a lower error rate, and the performance achieved on the mixed dataset is comparable to that of the sitting posture.
The relatively low error of the 1D multi-scale cell indicates that, compared with deepening the network structure, collecting more information from different scales is better. The richness of scale information can improve the training accuracy on a small-sample dataset. Then, we tested the traditional machine learning cell. We selected a total of 5 algorithms (Naive Bayes (NB), Bayes Net (BN), J48, Random Forest (RF) and Simple Logistic (SL)) as the classifier of the model, trained the model on the standing and sitting posture datasets respectively, and used 5-fold cross-validation. It can be seen that the Random Forest classification algorithm has the best performance, achieving an accuracy of 97.49% in the sitting posture and an accuracy of 94.68% in the standing posture (Fig. 7). The accuracy is computed as Accuracy = (TPR + TNR)/(TPR + FPR + TNR + FNR), where TPR is the true-positive rate, FPR is the false-positive rate, TNR is the true-negative rate and FNR is the false-negative rate. At the same time, we selected five commonly used feature subset selection algorithms to compare with the CFM algorithm. We limit these feature subset selection algorithms to the same number of features as the CFM algorithm, and also use the Random Forest classifier. In the sitting posture, the accuracies of the ReliefF [40], GainRatio [41], InfoGain [42], SymmetricalUncert [43], OneR [44] and CFM (ours) algorithms are 96.98%, 97.21%, 97.21%, 97.35%, 97.38% and 97.49%, respectively. In the standing posture, they are 93.17%, 93.55%, 93.17%, 94.49%, 93.24% and 94.68%, respectively. This shows that our CFM algorithm has better performance on this task. The False Positive Rate (FPR) and True Positive Rate (TPR) of the model are calculated and the Receiver Operating Characteristic (ROC) curves are drawn for the sitting posture (Fig. 8) and the standing posture (Fig. 9). The ROC curves visually display the FPR and TPR: the more the curve is "convex" towards the upper left corner, the better the classifier. Random attack test To test the robustness and security of RaiseAuth, 20 attackers randomly performed 50 authentication movement data collections in both the sitting and standing postures without knowledge of the 110 real users' information. This resulted in a total of 1000 sitting attack samples and 1000 standing attack samples. These 2000 attack samples were used to test RaiseAuth, and the sample prediction of each random attack is shown in Figs. 10 and 11. The sample prediction represents how much confidence the model has in classifying a sample into the current class. For example, a 40% prediction means that the current sample matches one classified class by 40%. In the sitting posture, no attack prediction was more than 33%, and only 2 attacks had a prediction in the 31%-35% range. In the standing posture, no attack prediction was more than 35%, and only 3 attacks had a prediction in the 31%-35% range. The Minimum Threshold Line indicates how high the model needs to set the sample prediction threshold under attack to be able to resist the attack well. Usually, the sample prediction threshold for real users is set to 90%; that is, when the model makes a judgment with a prediction above 90%, the judgment is considered reliable. Our experiments show that all 2000 random attack samples can be filtered out by setting the minimum threshold line to only 35%, which makes the difference between random attack samples and real user samples very obvious.
This means that traditional machine learning cell in RaiseAuth structure perform well in resisting random attacks. Imitated attack test Imitated attacks occur occasionally in real life, where attackers achieve identity obfuscation by observing and imitating real users. We tested the imitated attack resistance of the RaiseAuth. Participants were four pairs of people who were similar in height, weight, forearm length and upper arm length ( Table 4). Four of them are real users and four are attackers. Real users normally collected authentication movement data, while attackers observed and imitated the collection process of real users. Next, each attacker imitated 20 attacks by imitating real users of similar body size in sitting (Fig. 12) and standing (Fig. 13) posture. In the results, the predictions of attacks were mostly concentrated in the 40% -60% range, no one was more than 70%. The minimum threshold line is 70%. At the same time, we analyzed the experimental data of real users and attackers. We showed the two best features in the standing posture (Fig. 14) as an example. This data comes from a pair of Real user1 and Attacker 1, of which the lower left corner is the value of FId.19 and FId.21 generated after 20 authentication movements by real user1. The upper right corner is the value of FId.19 and FId.21 generated by the imitator Attacker 1 imitating the real user1 to make 20 imitation movements. It could be seen that even though the real user and the attacker were very similar in body size, and the attacker was given 20 opportunities to observe and imitate, the difference between them was still very obvious for RaiseAuth. Conclusions In this work, we aim to achieve high-accuracy authentication while compressing the movement complexity of the authentication movement as much as possible. The contribution of this work is twofold. First, based on the mathematical modeling of the arm, we constructed the movement state function, and according to the parameters of the movement state function, the complexity of the authentication movement was maximally compressed. Second, to the best of our knowledge, we are the first to authenticate against an authentication movement with such low movement complexity. We constructed an authentication dataset involving a total of 138 people. At the same time, based on this dataset, we designed an authentication method named RaiseAuth with multiple Cells. Through the analysis of the experimental results, we can draw the following two points, (1) RaiseAuth is better than deep neural network cell when paired with traditional machine learning cell, which means that in the face of small samples and multi-category dataset, traditional machine learning algorithms are more effective. This is in agreement with previous work in [35]; (2) RaiseAuth performs well against random attacks that do not know the real user information, but it suffers a certain impact when confronting imitators with similar physical parameters that stare at real users and conduct 20 imitation attacks. This shows that under extreme conditions, authentication based on body movements may be affected by imitated attacks. In summary, RaiseAuth can perform multi-classification tasks with good performance on the 110-class authentication dataset with low movement complexity and small samples. Funding No funding was received to assist with the preparation of this manuscript. Conflict of interest On behalf of all authors, the corresponding author states that there is no conflict of interest. 
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecomm ons.org/licenses/by/4.0/.
MODELING AND SIMULATION OF A DUAL CHANNEL ACTIVE NOISE CONTROL SYSTEM FOR POWER TRANSFORMER USING FXLMS ALGORITHM Power transformers are known to generate noise, however, active noise emanating from power transformers many at time is unbearable and has been a subject of concern overtime in power engineering. Several studies on it have principally centered on single channel active noise controller (scANC) using fixed step size, which are characterized with the problems such as signal congestion and instability. This study, therefore, employed Filtered-X Least Mean Square (FXLMS) on a dual channel active noise controller (dcANC) using variable step-size. The noise emanating from a 100 MVA, 132/33 kV power transformer was captured with the help of a smart phone in flight mode in accordance with IEC 60076-10 standard of 2 m away from the transformer and 1 m apart between each measurement. The recorded noises were taken at one-third of the height of the transformer tank, while electromagnetic interference from the phone and others were assumed to be negligible. A dcANC with FXLMS was model mathematically and implemented in Simulink in the MATLAB environment. Noise reduction ratio, loudness unit full scale and mean square error were used as performance metrics. The simulation results obtained showed that the original noise emanating from the power transformer when ANC was not used was found to be 70 dB. When scANC with FXLMS was used, the noise was reduced to 30.55 dB whereas, when dcANC with the FXLMS was employed it was reduced to 0.19 dB. Also, the MSE value of -72 dB was obtained in the proposed dcANC with FXLMS, compared with -64 dB obtained from scANC with FXLMS algorithm. The results of the simulation using FXLMS on both scANC and dcANC showed that the performance of the dcANC is comparatively better in term of the stated performance metrics. INTRODUCTION The main source of electrical power grid (EPG) is the generator, which generates electric power, and then accompanied by transformers for transmission and distribution of the generated power up to the utilization level [1]. For this reason, transformers form the main and the most essential components of EPG. Meanwhile, transformers can either be power transformer for transmission of high voltages such as 330 kV, 132 kV, 33 kV etc., or distribution transformers for distribution at low voltages for utilization. Power transformers are usually big and are accompanied with noise [2]. Noise has been proved by WHO as the second source of health hazard to human in the world [3]. And as such, the noise generated from a power transformer installed mostly close to residential area has to be at the level that will not cause harm to the residents there. Hence, it is important to clarify the harmful noise level for human health [4]. Power transformer noise can be core noise, and this is generated by magnetic force in the core due to the vibration of the stator core magnetostriction that causes displacement of the silicon steel sheets [4]. Because the magnetostrictive period is half of the current and the magnetic circuits of the stator core vary in length, the noise mainly includes low harmonic frequency of 100 Hz when the fundamental frequency of the power transformer is 50 Hz [5]. The noise level of 50 dB to 90 dB, resulting from the low harmonic frequency may give rise to human chronic injury and eventually results in neurological diseases [6]. This is known as the dominant source of power transformer noise. 
Also, it can be load noise caused by current due to load increase, or Fan-Pump noise caused by fan and pump designed as cooling system in the power transformer. Therefore, it is imperative to consider the electrical parameters used and their environmental effects when power transformers are to be designed and operated, because the power transformer used mainly in the urban areas must be at noise level that does not pose threat to human health. Despite this, the existence of noise on the transformer is inevitable. Mitigation of noise emanating from power transformers can either be done by passive noise controllers (PNC), such as enclosures, barriers and silencers which are traditional approaches employed in the reduction of acoustic noise and unwanted sounds [7]. This technique is strenuous to analyze quantitatively [8], and it is hard to achieve more significant noise decrease because of the environmental restriction and cost [8]. According to [8], this method, although, is relatively successful over a broad frequency range, but can be costly and ineffective for lowfrequency types of acoustic noise. Comparatively, based on the destructive interference of two sound waves, the active noise control (ANC) method which generates a secondary signal with the opposite phase to offset the primary noise can reduce low-frequency noise, such as transformer noise as stated in [6], more effectively, with better controllability, easier installation and lower cost. The transformer ANC system consists of electroacoustic devices and a controller. The ANC system is either single or multiple channels in Feed forward or feedback or Hybrid technique with either Least Mean Square (LMS) [9] or FXLMS [10] or Filtered-U Least Mean Square (FULMS) [11] algorithm, which serves as the control mechanism. The choice of the algorithm determines several crucial aspects of the overall adaptive process, such as existence of sub-optimal solutions, biased optimal solution, and computational complexity. The need to control the noise emanating from power transformer as it becomes an inevitable requirement of time in order to protect people's health and the environment few researchers have been able to work on using ANC in mitigation of power transformer noise across the world. For instance, a scANC based on LMS algorithm was proposed by [12] and the model was implemented in Simulink in the MATLAB environment using a practical noise from a 100 kVA power transformer. The analysis of the results from the study of [12] shows huge reduction in the power transformer noise. However, the problem with this model is that a fixed step-size is used which limits its functionality, and difficulty in choosing the appropriate step-size. Another problem is the use of LMS algorithm which achieves large noise reduction, but in an event of time delay caused by secondary path, the result may not be accurate and then results in system divergence. Also, noise extraction method based on IEC 60076-10 standard was not established in the study. Another discussion by [6] shows how ANC technique was used to suppress a high-decibel, low frequency power transformer noise using FXLMS adaptive ANC algorithm based on off-line and on-line secondary path modelling. The method proposed by [6] solved the divergence problem due to the existence of the secondary path when time delay occurs. The transformer noise was monitored on-line, and active control system was realized using both software and hardware, which were selected based on the noise feature. 
The results from the model showed reduction in noise for on-line secondary path is more compared with off-line secondary path algorithm. Also, Genetic algorithm (GA) is used in this proposed model to optimize the convergence coefficient while the effect of the convergence coefficient on the algorithm was analysed using simulation. However, the drawbacks of this model are the technique for the extraction of the noise from the power transformer is absent in the study, and it cannot handle a complex sound pressure. Having looked into strengths and the weaknesses of the previous methods on power transformer noise reduction, this study aims at complementing the efforts of earlier researchers by using dcANC with FXLMS algorithm and variable step-size to address signal congestion, divergence and time varying problems. It can work for higher sampling rates up to 44.1 kHz and with the noise time of 14.3 seconds, hence, the number of iterations is 631512. Also, the power transformer noise extraction was based on IEC 60076-10 standard [13]. Therefore, the goal of this study is to model and simulate a dcANC to reduce noise emanating from a 100 MVA practical power transformer maximally, using a FXLMS algorithm. Extraction and characterization of noise from power transformer The measurement of power transformer noise consisting of sound pressure and intensity can be done using two methods in accordance with IEC 60076-10 standard [13]. The first approach is the use of sound level meter, while for the second method, the measurement is taken from 0.3 m distance with forced cooling system off, or 2 m distance with forced cooling system on. Also, the measurement should be taken at half of the tank, if the transformer tank height is less than 2.5 m. But, if the tank height is greater than 2.5 m, then the measurement must be taken at one-third and two-third of the tank height. Measurements are to be separated at 1 m apart. Noise from a typical power transformer becomes noticeable when its intensity is higher than the ambient noise. Hence, any sound recorder can be used for the purpose of recording the noise of a power transformer. In this study, the tank height of the power transformer under test is 7.5 m, and a smartphone was used to record the noise of a 100 MVA 132/33 kV power transformer at 1/3 the height mentioned, at Ipaja-Ayobo Transmission Station in MPEG-4 AAC file format. Ipaja-Ayobo is a suburb of Lagos in Nigeria. The noises were recorded at six-interval distances of 2 m away from the transformer and 1 m apart between each measurement. Meanwhile, the electromagnetic signal interference from the mobile phone, the power line and other noises in the environment were assumed to be negligible. MATLAB 2019a, 64-bit version 9.6, was used in the implementation of the proposed system. This package can read MP3 and MPEG-4 AAC formats hence, the recorded transformer noise was in a ready-to-use format. The developed MATLAB script then accesses the stored noise on the laptop through a built-in function 'audioread' where the continuous noise signal was discretized, and the sample frequency returned as the output of the function as shown in Figure 1. Development of dcANC with FXLMS algorithm FXLMS algorithm is a modified version of the LMS algorithm where an extra filter is applied before the adaptive LMS update. 
As shown in Figure 2, the noise signal x(n) from the transformer is passed through the primary path P(z), the adaptive filter W(z), and the secondary path estimate Ŝ(z) to obtain the desired signal d(n), the output signal y(n), and the filtered reference signal x′(n), respectively. In this study, the secondary path estimate Ŝ_m(z), where m = 1, 2, is modelled as an FIR filter with tap-weight length M; the filtered reference signal x′_m(n), which is an input into the LMS update algorithm, is derived by filtering the reference signal through Ŝ_m(z) and is given by:

x′_m(n) = ŝ_m^T x(n),   (1)

where ŝ_m = [ŝ_{m,0}, ŝ_{m,1}, …, ŝ_{m,M−1}]^T is the impulse response of Ŝ_m(z) and x(n) is a sample reference signal vector. The sample reference signal vector is obtained as:

x(n) = [x(n), x(n−1), …, x(n−M+1)]^T,   (2)

The residual error signal is given as:

e(n) = d(n) − y′(n),   (3)

where d(n) is modelled as the primary disturbance signal, while y′(n) is the cancelling signal, which is expressed as:

y′(n) = s(n) ∗ y(n),   (4)

where s(n) is the impulse response of the secondary path S(z), whereas y(n) is the output signal. W(z) is modelled as an FIR filter with tap-weight length L; therefore, y(n) can be expressed as:

y(n) = Σ_{l=0}^{L−1} w_l(n) x(n−l),   (5)

or, in vector form,

y(n) = w^T(n) x(n).   (6)

In equation (6), w(n) = [w_0(n), w_1(n), …, w_{L−1}(n)]^T is the tap-weight vector, and x(n) = [x(n), x(n−1), …, x(n−L+1)]^T is a sample reference signal vector. The noise control filter is updated using the LMS algorithm as:

w(n+1) = w(n) + μ e(n) x′(n),   (7)

where μ is the step size for the control filter, e(n) is the error signal for W(z), which can be expressed as:

e(n) = d(n) − y′(n),   (8)

and x′(n) = [x′(n), x′(n−1), …, x′(n−L+1)]^T is the filtered reference signal vector. It is evident from Figure 2 that the reference signal is first filtered through the secondary path estimate Ŝ(z). Therefore, the error signal can be expressed as:

e(n) = d(n) − s(n) ∗ [w^T(n) x(n)].   (9)

The error signal should be minimized towards zero for W(z) to converge. Hence:

d(n) ≈ s(n) ∗ [w^T(n) x(n)].   (10)

The ratio of the noise output to the desired signal can be given as:

r(n) = e²(n) / d²(n).   (11)

In equation (11), d(n) is the sampled desired audio/noise signal, and the ratio in dB(A) can be expressed as:

R = 10 log₁₀( (1/N) Σ_n r(n) ),   (12)

where (1/N) Σ_n r(n) is the mean of the ratio [14]. The percentage reduction can then be expressed as:

P = (1 − (1/N) Σ_n r(n)) × 100%.   (13)

Mean square error (MSE) The mean square error (MSE) is a metric indicating how well a system can adapt to a given solution [15]. A small MSE is an indication that the adaptive system has accurately modelled, predicted, adapted and/or converged to a solution for the system [15]. The MSE evaluation of an ANC system is given by:

MSE(n) = E{e²(n)},   (14)

where E{·} denotes the statistical expectation operator, which is a theoretical function. Equation (14) can be approximately expressed as:

MSE ≈ (1/N) Σ_{n=1}^{N} e²(n),   (15)

and as such:

MSE(dB) = 10 log₁₀(MSE).   (16)

Figure 3 shows the pseudo-code implementation of equations (1)-(16), whereas its realization in Simulink in the MATLAB environment is presented in Figure 4. Extraction and characterization of noise from power transformer In this study, six noises were recorded and tested; hence the filter length and weight are assumed to be 2⁵ = 32, and the step-size values are assumed to lie between 0.01 and 0.9. The power transformer noise for the first recorded noise, taken 1 m apart according to the IEC 60076-10 standard, is shown in Figure 5, with a loudness of -26.6 Loudness Units Full Scale (LUFS) as shown in Figure 6. According to the European Broadcasting Union R128 standard, this loudness level falls within the acceptable limit. However, the number of people around a power transformer is much lower than the number of people watching a movie, especially when expressed as a people-to-area ratio. Hence, this noise loudness is substantially greater than the ambient noise (relative to the environment), as there are fewer bodies to absorb the power transformer noise or to generate noise louder than it. Ambient noise is the background sound level at a given location, usually specified as a reference level to study the effect of a noise.
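For reference, a minimal single-channel sketch of the FXLMS update of equations (1)-(10) and the MSE metric of equations (14)-(16) is given below, written in Python rather than the Simulink implementation used in the study; the secondary-path estimate passed in, the filter length of 32 and the step size of 0.01 mirror the values stated above but are otherwise illustrative assumptions, and the per-sample convolution is deliberately transparent rather than efficient.

import numpy as np

def fxlms(x, d, s_hat, L=32, mu=0.01):
    """Single-channel FXLMS sketch.
    x: reference noise, d: primary disturbance at the error microphone,
    s_hat: assumed FIR estimate of the secondary path, L: control-filter length,
    mu: step size. Returns the residual error e(n)."""
    N = len(x)
    w = np.zeros(L)                          # control filter w(n)
    x_f = np.convolve(x, s_hat)[:N]          # filtered reference x'(n) = s_hat * x(n)
    x_buf = np.zeros(L)                      # last L samples of x(n)
    xf_buf = np.zeros(L)                     # last L samples of x'(n)
    y = np.zeros(N)
    e = np.zeros(N)
    for n in range(N):
        x_buf = np.concatenate(([x[n]], x_buf[:-1]))
        xf_buf = np.concatenate(([x_f[n]], xf_buf[:-1]))
        y[n] = w @ x_buf                                 # anti-noise before the secondary path, eq. (6)
        # the true secondary path is approximated here by its estimate s_hat
        y_sec = np.convolve(y[:n + 1], s_hat)[n]         # cancelling signal y'(n), eq. (4)
        e[n] = d[n] - y_sec                              # residual error, eq. (3)
        w = w + mu * e[n] * xf_buf                       # LMS update with filtered reference, eq. (7)
    return e

def mse_db(e):
    """MSE in dB, eqs. (15)-(16)."""
    return 10 * np.log10(np.mean(np.square(e)))

In the dual-channel configuration of this study, two such loops would run in parallel, one per secondary-path estimate Ŝ_1(z) and Ŝ_2(z).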
In the processing of this noise, one channel is extracted leading to a mono audio. Analysis of the six noises recorded and tested is presented in Table 1. Figure 7 shows the amplitude waveform of the scANC with FXLMS under laid with the original signal for the first recorded noise. It can be observed that the amplitude of the signal obtained with scANC with FXLMS algorithm becomes attenuated with time. This is the effect of continue approximation of the filter weights. The attenuation of the noise in dB(A) is shown in Figure 8 for the first recorded noise. The noise amplitude was reduced to about that of the original first recorded noise. Figure 9 shows the original first recorded noise signal overlaid, by the dcANC with FXLMS obtained signals. It shows that the recorded noise was greatly reduced. The attenuation graph shows that the amplitude of the noise was reduced to approximately zero as shown in Figure 10. The results analysis for the six recorded and tested noises for this study is indicated in Table 2. The MSE (dB) results of scANC and dcANC for the six recorded and tested noises are presented on the Table 3 to show effective reduction of noise with dcANC system. Figure 11 shows the original signal, scANC with FXLMS and dcANC with FXLMS signals overlaid on one another. The figure shows that the proposed dcANC with FXLMS algorithm outperforms scANC with FXLMS algorithm. Figure 12 shows the corresponding attenuations. Also, analysis of the results of the six (6) tested noises indicated on Table 1 shows that the proposed dcANC with FXLMS performs better than scANC with the same algorithm. The integrated loudness is calculated by breaking the noise signal into 0.4 second segments with 0.3 second overlap. It is a measure of the perception of loudness. The higher the value is, the higher the perceived loudness. Momentary loudness measures the loudness of the past 400 milliseconds. Short term loudness measures the loudness over the past 3 seconds. Loudness range measures the difference between highest recorded short-term loudness and the lowest short-term loudness over the entire duration of the signal. True peak measures inter-sample peaks. Discussion The average value -61.1 LUFS on Table 1 implies that the attenuated noise by scANC with FXLMS algorithm sounds 30 LUFS less loud than the average original noise. The mean value -∞ of the dcANC with FXLMS algorithm on Table 1 implies that the attenuated noise cannot be heard at 44.1 kHz frequency. The attenuation graphs' results presented on Table 2 show approximately 30.5 dB averagely, for the scANC with FXLMS signal and approximately -0.19 dB for the dcANC. This is the ratio of the attenuated noise to the original noise for scANC and dcANC with FXLMS algorithms respectively. This implies that the amplitudes of the original recorded noises were reduced to approximately 30 dB when scANC is utilized. Equally, the dcANC model reduced the original recorded noises to approximately 0 dB. This explains why the attenuated recorded noises using dcANC with FXLMS algorithm is hardly audible to human ears at 44.1 kHz frequency. From the simulation results shown on Table 3, it can be observed that the MSEs for dcANC with FXLMS algorithm (-72.33 dB) is lower than that for scANC (-64.17 dB). Thus, the former gives better performance of noise reduction. All the results of performance analysis tools used for this study show that dcANC with FXLMS performs better than scANC with FXLMS. 
This justifies better performance of dcANC at different point of measurements. CONCLUSIONS This study has modeled and simulated a dual channel active noise control system for a 100 MVA power transformer using filtered-x least mean square algorithm. The noise emanating from a typical practical power transformer was obtained and characterized according to IEC 60076-10 standard and neglecting the external interference. The noise and the dcANC with FXLMS algorithm model were implemented in Simulink in the MATLAB environment using DSP, audio and control tools. The results obtained using noise reduction ratio, LUFS, and mean square error performance metrics, show that the power transformer noise reduction is greater in the dcANC with FXLMS algorithm when compared with scANC. This study has shown that dcANC with FXLMS using variable step-size is an effective technique for reducing humming noise from a power transformer. Further studies can consider the use of other algorithms that can effectively provide a proper acoustic feedback effect and compared with the result of this research. Also, the use of adaptive IIR filter with recursive least square (IIR-RLS) should also be considered regarding noise reduction in power transformer.
Better Together: Combining Language and Social Interactions into a Shared Representation Despite the clear inter-dependency between analyzing the interactions in social networks, and analyzing the natural language content of these interactions, these aspects are typically studied independently. In this paper we present a first step towards finding a joint representation , by embedding the two aspects into a single vector space. We show that the new representation can help improve performance in two social relations prediction tasks. Introduction The interactions, social bonds and relationships between people have been studied extensively in recent years. Broadly speaking, these works fall into two, almost completely disconnected, camps. The first, focusing on social network analysis, looks at the network structure and information flow on it as means of inferring knowledge about the network. For example, works by (Leskovec et al., 2008;Kumar et al., 2010) model the evolution of network structure over time, and works such as (Xiang et al., 2010;Leskovec et al., 2010) use the network structure to predict properties of links (e.g., strength, sign). The second camp, focusing on natural language analysis, looks into tasks such as extracting social relationships from narrative text (Elson et al., 2010;Van De Camp and van den Bosch, 2011;Agarwal et al., 2012) and analyzing the contents of the information flowing through the network. For example, works by (Danescu-Niculescu-Mizil et al., 2012;Hassan et al., 2012;Filippova, 2012;Volkova et al., 2014;West et al., 2014;Rahimi et al., 2015;Volkova et al., 2015) extract attributes of, and social relationships between, nodes by analyzing the textual communication between them. Other works (Krishnan and Eisenstein, 2014;Sap et al., 2014) use the social network to inform language analysis. Both perspectives on social network analysis resulted in a wide range of successful applications; however, they neglect to model the interactions between the social and linguistic representations and how they complement one another. One of the few exceptions was discussed in (West et al., 2014), which inferred sentiment links between nodes in a social network by jointly modeling the local output probabilities of a sentiment analyzer looking at the textual interactions between the nodes and the global network structure. While resulting in better performance, inference is done over two independent representations, one capturing the linguistic information, and the other, the network structure. Instead, in this paper we take the first step towards finding a joint representation over both linguistic and network information, rather than treating the two independently. We follow the intuition that interactions in a social network can be fully captured only by taking into account both types of information together. To achieve this goal, we embed the input social graph into a dense, continuous, low-dimensional vector space, capturing both network and linguistic similarities between nodes. Word (Mikolov et al., 2013;Pennington et al., 2014) and Network (Perozzi et al., 2014;Tang et al., 2015) embedding approaches that were recently proposed, aim to combat a similar problem in their respective domains-data sparsity. Both follow a similar approach-embed discrete objects (words or nodes in the graph) into a continuous vector representation, based on the context they appear in. 
Our approach aims to map both social and linguistic information into the same vector space, rather than embedding the two aspects into two independent spaces. The social graph, originally containing only quantitative properties of the interaction between nodes (e.g., number of messages exchanged between nodes), is extended to capture the contents of these interactions, by computing the textual similarity between the messages generated by each one of the nodes. The computed similarity is used to weight the edges between adjacent nodes. We embed the modified graph nodes into a vector space, using the embedding technique described by (Tang et al., 2015). We evaluate the joint representation by using it in two social relationship prediction tasks and comparing it to several different word-based and network based representations. Our experiments show the advantage of the joint representation. Problem Formulation Our primary assumption is there is a latent space that influences the interactions we observe among people. Thus the goal of our work is to learn this latent representation from the observed data. We describe the data and problem more specifically below. Data We assume that the data comprise a graph G = (V, E), where nodes V correspond to entities (e.g., users in a social network), and the edges E correspond to textual interactions among the entities (e.g., emails, messages). Each edge e t ij ∈ E, which refers to a message sent from node v i to node v j at time t, has an associated document representation d t ij . We refer to the set of messages (documents) between nodes v i and v j as E ij := {e t ij } t (D ij respectively). Moreover, we refer to the set of messages (documents) sent by a node v i to any other node as Motivation Given this type of network data, the goal is to discover the underlying latent representation of the nodes. Our assumption is that the entities are embedded in a latent space that influences the frequency and nature of their communication. We assume that each node has a location in space (e.g., in R 2 , the location of v i is v i := (x i , y i )), and that pairwise node distances (e.g., d(v i , v j )) affect the likelihood of communication and the content of that communication. More specifically, we assume that nearby nodes are more likely to communicate, and talk about similar things. Thus, we assume the latent space embedding represents entities' interests and pairs of entities with similar interests are more likely to interact. These assumptions are motivated by online communities where users exhibit homophily (McPherson et al., 2001), i.e., users with common interests are more likely to form relationships. Problem Definition Given the framework and assumptions described above, we can now state the problem definition for the work in this paper. Assume as input, a multigraph G = (V, E) with messages between nodes in the graph that can be modeled as a set of documents. The goal is to learn an embedding of the nodes V in R k such that the representation reflects both the frequency and content of the messages. To achieve this we will consider several different ways to compute the embedding based on optimizing (1) network connectivity, (2) message content, and (3) connectivity and content. Our conjecture is that jointly considering connectivity and content will produce an embedding that is more robust to noisy interaction data. 
Strong (but introverted) friends may talk less frequently but share more common interests, compared to gregarious users who talk more frequently but with many (weak) friends. Since there is no ground truth for quantitative evaluation, it is difficult to directly evaluate the quality of a learned embedding. Thus, we evaluate our methods indirectly via related classification tasks. In this work, we will use the learned embeddings in two link-based prediction tasks, where we differentiate (1) strong vs. weak(er) friendships, and (2) employees working in the same vs. different groups. Method The input for our task is the text-enriched network graph G. The goal is to compute a node embedding from G and then use the embedding to generate features for pairs of nodes, which can then be used for a prediction task. The process follows these steps. • Textual-Similarity (TS) Infused Social Graph: Construct graph weights W ij based on the text in G, according to (1) a Node or Edge view of the documents, and (2) using Topic Model or Word Embedding to represent the content. • Node Embedding: Construct an embedding function V → R k , mapping the (weighted) graph nodes into a R k dimensional space. We used the LINE method (Tang et al., 2015). We omit the details due to space restrictions. • Feature Extraction: Construct a feature set for each node pair, using 9 similarity measures between the nodes' k-dimensional vector representations from the embedding. We experiment with additional features extracted directly. Creating the TS-Infused Social Graph The TS-Infused social graph captures the interaction between node pairs by modifying the strength of the edge connecting them according to the similarity of the text generated by each one of the nodes. We identify several design decisions for the process. Node vs. Edge Each edge e ij ∈ G is associated with textual content d ij . We can characterize the textual content from the point of view of the node by aggregating the text over all its outgoing edges (i.e., D i ), or alternatively, we can characterize the textual content from the edge point of view, by only looking at the text contained in the relevant outgoing edges (i.e., D ij ). Representing Textual Content using Topic Models vs. Word Embedding Before we compute the similarity between the content of two parties, we need a vector space model to represent the textual information (the set of documents D i , or D ij ). One obvious method for this is topic modeling, in which the textual content is represented as a topic distribution. In this approach, we learn a topic model over the set of documents, and then represent each document via a set of topic weights (T i or T ij ). An alternative approach is using word embedding, which has been proved effective as a word representation. In this approach, we represent each document as the average of the embedding over the words in the document (WE i or WE ij ). Given the distributional representation of text associated with a node/edge, we assign a weight (w ij ) for each edge (e ij ) as the cosine similarity between vector representation of contents from neighboring nodes (e.g., d(T i , T j ) or d(T ij , T ji ), where d is cosine similarity). Node Embedding We utilize the LINE embedding technique (Tang et al., 2015), aimed at preserving network structures when generating node embedding for social and information networks. LINE uses edge weights corresponding to the number of interactions between each pair of nodes. 
This only makes use of the network structure, without taking advantage of the text in the network. We modify the embedding procedure by using the edge weights W ij described above (i.e., based on the cosine similarity of the text between nodes i, j) and use the LINE algorithm to compute a k-dimensional embedding of the nodes in G. Feature Extraction Distance-based Features Given a node pair represented by their k-dimensional node embeddings, we generate features for the pair according to nine similarity measures. The nine measures are the Bray-Curtis distance, Canberra distance, Chebyshev distance, City Block (Manhattan) distance, Correlation distance, Cosine distance, Minkowski distance, Euclidean distance and squared Euclidean distance. Additional Features Besides the distance-based features, we can also add one or more other basic features related to nodes in the network. These include the following: (1) Network: the number of interactions between two nodes, e.g., the number of emails sent and received. (2) Unigram: the unigram feature vector for the text sent by each node. (3) Word embedding features: the word embedding vector for the text sent by each node. Again we use the average of the word embeddings to represent documents. Experiments Purdue Facebook Network We analyzed the public Purdue Facebook network data from March 2007 to March 2008, which includes 3 million post activities. Members can set friends as top (close) friends to get timely notifications, without a confirmation by the other. We collected 945 mutual top-friend pairs, where two users set each other as top friends, and 34,633 one-way top-friend pairs, where only one of them set the other as a top friend. The dataset will be referred to as "Facebook" in this paper. We evaluated our method by a classification task over the two different social relationships. Avocado Email Collection This collection consists of 279 e-mail accounts, from which we extracted the job titles and departments of 136 accounts. We divided these accounts into three groups, according to their positions in the company, namely executives, the engineering department, and the business department. We will refer to this dataset as "Avocado" in this paper. The task is defined as predicting whether two accounts belong to the same group. In order to make use of the text signal, we only consider account pairs that have correspondence with each other. There are 2232 positive and 1409 negative examples in this dataset. Result Using the features defined in the previous section, we train a Logistic Regression classifier via scikit-learn in Python. We show the ten-fold cross-validation performance of our features on the Facebook and Avocado datasets in Table 1 (prediction results over the two datasets, reported as F1 scores); the values are the average scores of ten different random downsamplings. For the Facebook dataset, all embeddings constructed from the TS-Infused social graph outperform the original embedding GE. This shows that the joint representation over linguistic information and network structure is more effective than considering only one of them independently. The results on the Avocado dataset also confirm the advantage of the shared representation. GE_NTM significantly outperforms the other text-based or network-based methods. The performance of aggregating the text sent by a node is better than only looking at the text on one outgoing edge, which is the opposite of the results on the Facebook dataset. This could result from the difference between the two prediction tasks.
In the Facebook dataset, we try to distinguish strong and weak(er) friendship, in which case the messages they sent to each other are most indicative. While when we predict whether two persons belong to the same group inside a company, the interaction they had with their colleagues would tell us more about the community they are from.
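To make the pipeline of the Method section concrete, here is a minimal sketch of the two text-dependent steps: weighting edges by the cosine similarity of node-level averaged word embeddings (the WE node view), and extracting the nine distance-based features for a node pair. It assumes NumPy and SciPy; node_tokens, edges and word_vecs are hypothetical inputs, the Minkowski order p = 3 is an arbitrary choice of ours (the paper does not specify it), and the LINE embedding step itself is not reproduced here.

import numpy as np
from scipy.spatial import distance

def doc_vector(tokens, word_vecs, dim=100):
    """Average word-embedding representation of a node's outgoing text (node view)."""
    vecs = [word_vecs[t] for t in tokens if t in word_vecs]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def ts_edge_weights(node_tokens, edges, word_vecs):
    """Weight each edge (i, j) by the cosine similarity of the two nodes' text vectors."""
    reps = {v: doc_vector(toks, word_vecs) for v, toks in node_tokens.items()}
    return {(i, j): 1.0 - distance.cosine(reps[i], reps[j]) for i, j in edges}

def pair_features(u, v):
    """Nine distance-based features for a node pair from their k-dimensional embeddings."""
    funcs = [distance.braycurtis, distance.canberra, distance.chebyshev,
             distance.cityblock, distance.correlation, distance.cosine,
             lambda a, b: distance.minkowski(a, b, p=3),   # p is an assumption
             distance.euclidean, distance.sqeuclidean]
    return np.array([f(u, v) for f in funcs])

The weights returned by ts_edge_weights would then replace LINE's interaction-count edge weights before the node embedding is computed, as described in the Node Embedding step above.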
Antimicrobial Coatings from Hybrid Nanoparticles of Biocompatible and Antimicrobial Polymers Hybrid nanoparticles of poly(methylmethacrylate) synthesized in the presence of poly (diallyldimethyl ammonium) chloride by emulsion polymerization exhibited good colloidal stability, physical properties, and antimicrobial activity but their synthesis yielded poor conversion. Here we create antimicrobial coatings from casting and drying of the nanoparticles dispersions onto model surfaces such as those of silicon wafers, glass coverslips, or polystyrene sheets and optimize conversion using additional stabilizers such as cetyltrimethyl ammonium bromide, dioctadecyldimethyl ammonium bromide, or soybean lecithin during nanoparticles synthesis. Methodology included dynamic light scattering, determination of wettability, ellipsometry of spin-coated films, scanning electron microscopy, and determination of colony forming unities (log CFU/mL) of bacteria after 1 h interaction with the coatings. The additional lipids and surfactants indeed improved nanoparticle synthesis, substantially increasing the conversion rates by stabilizing the monomer droplets in dispersion during the polymerization. The coatings obtained by spin-coating or casting of the nanoparticles dispersions onto silicon wafers were hydrophilic with contact angles increasing with the amount of the cationic polymer in the nanoparticles. Against Escherichia coli and Staphylococcus aureus, bacteria cell counts were reduced by approximately 7 logs upon interaction with the coatings, revealing their potential for several biotechnological and biomedical applications. Introduction Biomimetic hybrid coatings have often been used as antibacterial materials [1][2][3][4]. For example, silver nanoparticles (NPs) embedded on dextran films or on a lysozyme/dextran network of natural polymers can be grafted onto a variety of surfaces with several biomedical applications possible from coating implants to catheters [5][6][7]. Biocompatible and antimicrobial polymers can be combined to yield a variety of nanostructures, among them, the popular and very useful NPs, which may further form coatings and films [8][9][10]. Antimicrobial polymeric NPs of poly(methylmethacrylate) (PMMA) synthesized in the presence of the cationic antimicrobial polymer poly(diallyldimethyl ammonium) chloride (PDDA) were first obtained in 2015 joining the biocompatible character of PMMA with the microbicide character of the cationic PDDA [11]. PMMA belongs to the Eudragit trademark that includes a diverse range of poly(methacrylate) and polyacrylate-based copolymers which are non-biodegradable, non-absorbable, and nontoxic with several applications in drug delivery [12]. The pharmaceutical applications of polyacrylates for coatings and films were recently and comprehensively reviewed [13]. On the other hand, PDDA was first described as a cationic antimicrobial polymer about 10 years ago, displaying outstanding activity as a microbicide and fungicide [7,[14][15][16]. However, the synthesis of hybrid PMMA/PDDA NPs by emulsion polymerization in absence of surfactant yielded low conversion percentiles [11]. This was consistent with previously described and not very successful attempts to polymerize methyl acrylate (MA) or methyl methacrylate (MMA) using large amounts of monomer (>1.9 wt %) in oil-in-water microemulsions for which phase separation during polymerization took place [17][18][19]. The two major steps in emulsion polymerization are nucleation and particle growth. 
In the presence of surfactant, if the monomer has high affinity for the micelle core, nucleation occurs in the micelles where the monomers are. If the monomer is polar to a certain extent, there will be some affinity for the water phase so that polymerization also occurs in monomer droplets [20,21]. The initiator generates free radicals that react with MMA in the micelles and with MMA inside the droplets in the aqueous phase, yielding oligo radicals that colocalize with the monomers and proceeding with the polymerization. Apparently, the presence of PDDA during NPs synthesis in the absence of surfactant stabilized the smaller droplets of MMA yielding PMMA/PDDA hybrid and very small NPs [11]. Coatings prepared by spin-coating of PMMA and dioctadecyldimethyl ammonium bromide (DODAB) cationic lipid revealed a good compatibility between DODAB and PMMA leading to good antimicrobial activity against bacteria upon contact [22]. The dependence of the antimicrobial activity on the quaternary ammonium compound structure for combinations of PMMA and DODAB, cetyl trimethylammonium bromide (CTAB), or tetra propyl bromide (TPAB) for spin-coated films also yielded interesting results [23]. DODAB remained associated with PMMA films and killed bacteria upon contact, in contrast to CTAB that diffused out of the films killing bacteria in the outer medium [23]. In dispersion, PMMA/DODAB or PMMA/CTAB NPs prepared by emulsion polymerization over a range of high concentration of the quaternary ammonium amphiphiles showed remarkable antimicrobial activity over a range of micromolar concentrations [24]. Here we present some novel antimicrobial coatings based on hybrid NPs of PMMA and PDDA and solve the problem of low conversion during emulsion polymerization for PMMA/PDDA NPs synthesis by adding amphiphiles such as DODAB, CTAB, and lecithin in the reaction mixture. The results revealed remarkable microbicidal activity for the PMMA/PDDA coatings obtained from casting and drying PMMA/PDDA NPs and a substantial increase in conversion due to the presence of the amphiphiles during PMMA/PDDA NPs synthesis. Physical Properties and Microbicidal Activity of Coatings from PMMA/PDDA Dispersions The synthesis of PMMA/PDDA NPs, described previously by Sanches et al. [11], yielded monodisperse and cationic NPs in water dispersion named in accordance with MMA and PDDA concentrations used in the particles synthesis. For the dispersions A4, the concentrations used were 0.56 M MMA and 4 mg/mL PDDA; for A5, they were 0.56 M MMA and 5 mg/mL PDDA; and for B4, they were 1.32 M MMA and 4 mg/mL PDDA. NPs in A4 have a mean diameter of 112 ± 17 nm as determined by Scanning Electron Microscopy (SEM) [11]. Casting and drying the original A4 dispersion on silicon wafers yielded the coating shown on the SEM micrograph ( Figure 1), with the macroscopic features for the film seen on Figure 2. The coatings were homogeneous on the hydrophilic surfaces such as the silicon wafers and the glass coverslips. However, cracks and discontinuities were visible for those on the hydrophobic polystyrene substrates ( Figure 2). The NPs structure was shown to involve a PMMA core surrounded by a PDDA shell [11] proving that the outer cationic and hydrophilic layer clearly interacted better with the hydrophilic surfaces such as those of the silicon wafer or the glass. The coating adhesion to the hydrophilic and anionic substrates was clearly better for A5 and A4-derived coatings than for those derived from B4 ( Figure 2). 
The reason for this can be related to the higher relative ratio of PDDA to PMMA in the A5- and A4-derived coatings than in the B4-derived ones. The ring appearing after casting the B4 dispersion was interpreted in terms of the coffee-ring effect; such ring deposition occurs when liquid evaporation from the edge is replenished by liquid from the interior, so that the resulting outward flow can carry most of the dispersed material to the edge [25]. This took place for the most hydrophobic NPs, namely those with the lowest PDDA:PMMA molar ratios, represented by the B4 dispersion. A similar ring deposition pattern was also observed for hydrophobic polystyrene particles deposited on glass from a water droplet and was explained by the coffee-ring effect [26]. The crack patterns visible for the A4-derived coating on the polystyrene substrate were radial and similar to those previously described in the literature for similar systems [27]. The poor adhesion of the hydrophilic NPs of the A4-derived coating to the hydrophobic polystyrene sheet might also have contributed to cracks in the coating (Figure 2). PMMA/PDDA coatings on silicon wafers were obtained by two different procedures: (1) spin-coating of lyophilized A5 in 1:1 dichloromethane:ethanol; (2) casting of A5, A4, or B4 dispersions of NPs followed by drying under vacuum. Spin-coating allows for the preparation of lipid [28][29][30] or polymer films [22] on very smooth surfaces such as those of the silicon wafers. In the present case, the combination of a hydrophobic polymer, such as PMMA, with a hydrophilic one, such as PDDA, required a special combination of solvents in order to obtain solubilization of both in the solvent mixture (Table 1). The characteristics of the hybrid films, compared to those of pure PMMA coatings, revealed similar thicknesses and refractive indices but higher wettability for the hybrid coatings than determined for the pure PMMA film (Table 1). For coatings obtained by casting the PMMA/PDDA dispersions onto the silicon wafers, there was a consistent decrease of the contact angle upon increasing the PDDA relative amount in the dispersions, from 35 ± 6 to 9 ± 2 degrees (Table 1).
Coatings obtained by casting the dispersions yielded lower contact angles than those obtained by spin-coating, reconfirming that the hydrophilic PDDA immobilized as an outer layer of the PMMA/PDDA nanoparticle imparted a more hydrophilic character to the film surface than that of the spin-coated PMMA/PDDA (Table 1). The antimicrobial activity of the hybrid PMMA/PDDA coatings derived from A5, A4, and B4 cast onto glass coverslips revealed a remarkable microbicidal effect against Escherichia coli and Staphylococcus aureus (Table 2). In this case, the real potency of the coatings was established over orders of magnitude by determining bacterial viability from the log of CFU/mL. Bacterial viability decreased by 10^7-10^8 colony forming units (CFU) upon interaction with the coatings for 1 h (Table 2).

Optimization of Nanoparticle Synthesis and Conversion Percentages

The synthesis of PMMA/PDDA NPs as dispersion A5 in the absence of surfactants displayed low monomer-into-polymer conversion, since only approximately 10% of the added monomer mass was converted into polymer [11]. In order to improve the conversion percentages, the effect of monomer concentration on conversion was determined (Figure 4; Table 3). At 5 mg/mL PDDA, decreasing the methylmethacrylate (MMA) concentration [MMA] improved conversion; possible reasons for this would be the relative increase in PDDA capable of stabilizing the droplet/water interface and the increased average distance between MMA droplets, reducing coalescence. One should note that NP size could also be reduced by decreasing [MMA], meaning that polymerization from smaller droplets yielded smaller NPs. At this point, stabilizing the droplet/water interface seemed crucial for improving conversion. Therefore, CTAB, DODAB, and lecithin were introduced into the reaction mixture for further stabilization of the monomer droplets. In fact, all amphiphiles employed improved conversion (Table 4). The most efficacious amphiphile was CTAB, followed by DODAB and lecithin.
Since lecithin corresponds to a mixture of lipids and fatty acids with a net negative charge [31,32], at 2 mM lecithin the NPs became negatively charged; all other NPs exhibited high and positive zeta-potentials (Table 4). In the presence of two stabilizers (amphiphile and PDDA), conversion was substantially increased in comparison to that in the presence of a single stabilizer. Another interesting observation refers to the lower zeta-potential for PMMA/CTAB in comparison to that for PMMA/DODAB; this is consistent with the reported immobilization of DODAB in the PMMA polymeric matrix, which is absent for CTAB, since CTAB was reported to be more mobile than DODAB, easily diffusing from PMMA films to the outer medium [22,23]. In summary, although the amphiphiles indeed improved conversion, PDDA as a second stabilizer possibly provided an additional stabilizing factor, namely the electrosteric repulsion between the MMA droplets during NP synthesis. This also represented an important stabilizing factor for the final polymeric NPs.

Table 4. The effect of PDDA, surfactants, and lipids on NP size (Dz), polydispersity (P), and zeta-potential (ζ), on the stabilization of MMA droplets in water, and on the improvement of solids content and conversion percentages for NP synthesis. (Columns: Dispersion, Dz (nm), P, ζ (mV), Solids (mg/mL), Conversion (%).)

Figure 5 and Table 5 show the remarkable colloidal stability of the NPs characterized by the physical properties in Table 4. The photos taken one day and 4 months after synthesis revealed very similar macroscopic features and the absence of precipitates. The analysis of sizes, polydispersities, and zeta-potentials also revealed maintenance of these physical properties of the NPs over time (Table 5). As compared to other similar systems in the literature, the present NPs use the self-assembly of biocompatible PMMA and the antimicrobial polymer PDDA instead of synthesizing block copolymers incorporating both functions. For example, glycosylated block copolymers were used as surfactants in butyl methacrylate emulsion polymerization [33]. However, the antimicrobial activity was not as high as the one obtained for the coatings described in this work (Table 2).
The higher hydrophobicity inherent to the two methyl groups on the quaternary nitrogen of the PDDA molecule, as compared to the cationic glycosylated moieties, was an advantage for efficient microbicidal activity. Indeed, several derivatives of PDDA evaluated for their antimicrobial activity revealed that these cationic polymers exhibit the highest activity when their chemical structure bears a high frequency of hydrophobic methyl moieties [11,34]. The hydrophilic character of cationic antimicrobial polymers does not contribute to improvement of the antimicrobial action, although the NP synthesis certainly benefits from their use as surfactants. A major drawback of PMMA/PDDA NP synthesis was the low conversion due to the relatively poor performance of PDDA at the interface between the MMA droplets and the surrounding water medium during NP synthesis (Figure 4). In this work, we solved this problem by adding amphiphiles such as CTAB, DODAB, and lecithin as surfactants acting as stabilizers during NP synthesis. In addition, we must recognize the excellent prospects of these ternary systems as antimicrobials, since PDDA, DODAB, and CTAB have already been described separately as good antimicrobial agents [3,8,10,14,24,[35][36][37]. The antimicrobial properties of these ternary systems, both as latex dispersions in water and as coatings, still have to be determined.

Materials

MMA, PDDA, azobisisobutyronitrile (AIBN), NaCl, CTAB, DODAB, soybean lecithin, chloroform, ethanol, dichloromethane, and Mueller-Hinton agar (MHA) were purchased from Sigma-Aldrich (Darmstadt, Germany) and used without further purification. The composition of soybean lecithin includes several fatty acids and phospholipids [31,32]. Silicon (100) wafers were from Silicon Quest (Santa Clara, CA, USA), with a native oxide layer approximately 2 nm thick, and were used as substrates for casting the dispersions. These Si wafers with a native SiO2 layer were cut into small pieces of ca. 1 cm², cleaned with acetone, and dried under a N2 stream; they are smooth substrates for the coatings. The syntheses in 1 mM NaCl solution prepared with Milli-Q water yielded NP dispersions by emulsion polymerization that underwent dialysis for purification using a cellulose acetate dialysis bag with a molecular weight cut-off around 12,400 g/mol. All other reagents were of analytical grade and used without further purification.

Preparation of NPs by Emulsion Polymerization

A variety of hybrid and polymeric NPs were obtained by polymerization of MMA at 70 to 80 °C for 1 h using 10 mL of aqueous solutions of 1 mM NaCl and PDDA and/or CTAB, DODAB, or lecithin, in accordance with the compositions shown in Table 6 [11]. Briefly, a weak flux of nitrogen was applied to the solution for a few minutes before adding 3.6 mg of AIBN initiator and MMA. For dispersions containing surfactants or lipids, DODAB or lecithin was previously dissolved in chloroform in order to prepare lipid films under a nitrogen flux to evaporate the chloroform solvent [38,39]. Ten milliliters of the 1 mM NaCl solution was then added to the dried lipid films before proceeding with NP synthesis. In the case of CTAB, the required amount of CTAB in the NP dispersion was directly added to the 1 mM NaCl solution before starting the NP synthesis. The NP dispersions obtained were further purified by dialysis against Milli-Q water until the water conductivity reached 5 µS/cm. Table 6.
Concentrations of MMA, PDDA, cetyl trimethylammonium bromide (CTAB), dioctadecyldimethyl ammonium bromide (DODAB), and/or lecithin used to synthesize hybrid NPs by emulsion polymerization.

Size distributions, Dz, ζ, and P were obtained by dynamic light scattering (DLS) using a ZetaPlus Zeta Potential Analyzer (Brookhaven Instruments Corporation, Holtsville, NY, USA) equipped with a 677 nm laser, with measurements at 90°. The P of the dispersions was determined by DLS following a well-defined mathematical equation [40]. Dz values were obtained from the log-normal distribution of the light-scattered intensity curve against the diameter. ζ values were determined from the electrophoretic mobility (µ) and the Smoluchowski equation ζ = µη/ε, where η and ε are the viscosity and the dielectric constant of the medium, respectively. Samples were diluted 1:30 with a 1 mM NaCl water solution for performing the measurements at (25 ± 1) °C. The colloidal stability of the dispersions was followed by two procedures: (1) from photographs of the dispersions; (2) from the physical properties (Dz, P, and ζ); both procedures were performed on days 1 and 120.

Preparation of Coatings from the NP Dispersions by Spin-Coating or Casting

For preparing spin-coated films, 1 mL of the A5 dispersion was lyophilized and a 10 mg/mL solution in the solvent mixture (1:1 dichloromethane:ethanol) was prepared; 0.1 mL of this solution was then spin-coated onto silicon wafers using a Headway PWM32-PS-R790 spinner (Garland, TX, USA) operated at 3000 rpm for 40 s, at (24 ± 1) °C and (50 ± 5)% relative humidity. Thereafter, the film was characterized by ellipsometry [41], which allowed us to obtain the thickness and refractive index of the film independently [22]. Films prepared by casting employed 0.05 mL of the A5, A4, or B4 original dispersions cast onto three different surfaces: polystyrene, silicon wafers, or glass coverslips. After drying overnight under vacuum, the films were photographed, observed by SEM, characterized regarding their wettability, and used for determining antimicrobial activity.

Physical Characterization of Coatings by SEM, Macroscopic Features from Photographs, and Contact Angle Determinations

SEM of the coatings employed Jeol JSM-7401F equipment (JEOL Ltd., Akishima, Tokyo, Japan). In short, 2 µL of the A5 dispersion on silicon wafers was dried in a desiccator before coverage with a thin gold layer, as required for contrast and visualization by SEM. Coatings from A5, A4, or B4 on different substrates (polystyrene sheet, silicon wafers, and glass coverslips) were obtained by casting 50 µL onto the substrates and allowing the material to dry overnight in a desiccator before taking pictures or determining wettability using a home-built apparatus, as previously described [29,30]. Photos of sessile water droplets of 10 µL allowed for the determination of the advancing contact angle (θA) over the first 5 min after depositing the droplet on the films. Each determination was taken as the mean ± standard deviation of at least 4 measurements. Sixty microliters of the bacterial suspensions were deposited on the coatings (obtained by casting of the A5, A4, or B4 dispersions onto glass coverslips) and left in a water-vapor-saturated chamber for 1 h to prevent water evaporation from the droplet.
Thereafter, the glass coverslips were transferred to 10 mL of 0.264 M D-glucose isotonic solution in Falcon tubes and vigorously stirred by vortexing before withdrawing 0.1 mL aliquots and preparing their 1:10 and 1:100 dilutions for plating on MHA plates, incubating the plates (37 °C/24 h), and reading the CFU. These readings were converted into CFU/mL and log(CFU/mL); a short numerical sketch of this conversion is given after the Conclusions. When no count was obtained, since the log function is not defined for zero, the CFU/mL count was taken as 1 so that log(CFU/mL) could be taken as zero. Controls were bare glass coverslips.

Conclusions

PMMA/PDDA nanoparticles coated three different substrates by two different procedures: (1) spin-coating; (2) casting followed by drying of the cast dispersions. Macroscopically homogeneous films without cracks coated the hydrophilic substrates such as silicon wafers or glass coverslips. On hydrophobic substrates such as polystyrene surfaces, the coatings showed cracks after drying. The most homogeneous coatings occurred at the highest relative PDDA:PMMA contents. Upon lowering the PDDA content of the NPs, the NPs accumulated at the periphery of the droplets cast on the substrates. This was due to the coffee-ring effect, since the more hydrophobic NPs followed the capillary flow to the periphery of the droplet. The contact angles for the coatings showed a clear dependence of wettability on the PDDA content of the NPs: the higher the PDDA content, the lower the contact angle and the better the adhesion to oppositely charged hydrophilic substrates. Comparing films obtained by spin-coating with those obtained by casting of the NPs onto the substrates showed that spin-coated coatings had larger contact angles than coatings obtained by casting, suggesting that some PDDA molecules might have migrated to the silicon wafer-water interface, hiding from the film surface and therefore becoming somewhat unavailable to kill bacteria at the film surface. There was remarkable microbicidal activity due to the 0.8-1.0 mg of PDDA distributed in the coatings: after 1 h of interaction with bacteria, their viability decreased by approximately 7 to 8 logs, as tested against E. coli or S. aureus cells. This was possibly due to the more hydrophobic nature of PDDA in comparison with other, more hydrophilic cationic polymers. CTAB, DODAB, or lecithin as additional stabilizers for the PMMA/PDDA NP synthesis substantially improved the conversion of MMA into PMMA. These ternary systems were stable and maintained their macroscopic and microscopic physical characteristics over time (checked for 4 months). The use of these ternary systems as microbicides still needs systematic evaluation.

Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
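The two quantitative conversions used in the methods above, electrophoretic mobility to zeta-potential via the Smoluchowski relation and plate counts to log(CFU/mL) with the zero-count convention, can be sketched as follows. This is a minimal illustration only; the function names, the SI unit choices, the use of the absolute permittivity (relative dielectric constant times the vacuum permittivity), and the example numbers are assumptions for illustration, not values from this study.

```python
import math

EPSILON_0 = 8.854e-12  # vacuum permittivity, F/m

def zeta_potential_smoluchowski(mobility_m2_Vs, viscosity_Pa_s=0.89e-3,
                                rel_permittivity=78.5):
    """Zeta-potential from electrophoretic mobility, zeta = mu * eta / epsilon.

    Assumes SI units and water at ~25 C; returns zeta in millivolts.
    """
    epsilon = rel_permittivity * EPSILON_0  # absolute permittivity of the medium
    zeta_volts = mobility_m2_Vs * viscosity_Pa_s / epsilon
    return zeta_volts * 1e3  # mV

def log_cfu_per_ml(colonies_counted, dilution_factor, plated_volume_ml=0.1):
    """Convert a plate count to log10(CFU/mL).

    Follows the convention stated in the text: when no colonies are counted,
    CFU/mL is taken as 1 so that log10(CFU/mL) = 0.
    """
    cfu_per_ml = colonies_counted * dilution_factor / plated_volume_ml
    if cfu_per_ml <= 0:
        cfu_per_ml = 1.0
    return math.log10(cfu_per_ml)

if __name__ == "__main__":
    # Illustrative inputs only (not measured data from the paper).
    print(round(zeta_potential_smoluchowski(4.0e-8), 1), "mV")
    print(round(log_cfu_per_ml(150, dilution_factor=100), 2), "log CFU/mL")
```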
5,724.6
2018-09-28T00:00:00.000
[ "Materials Science" ]
Scenario Analysis of Carbon Emissions of China's Electric Power Industry Up to 2030

In this paper, the Long-range Energy Alternatives Planning (LEAP) model is constructed to simulate six scenarios for forecasting national electricity demand in China. The results show that in 2020 the total electricity demand will reach 6407.9~7491.0 billion KWh, and will be 6779.9~10,313.5 billion KWh in 2030. Moreover, under the assumption of power production just meeting the social demand, and considering the changes in the scale and technical structure of the power industry, this paper simulates two scenarios to estimate carbon emissions and carbon intensity up to 2030, with 2012 as the baseline year. The results indicate that the emission intervals are 4074.16~4692.52 million tCO2 in 2020 and 3948.43~5812.28 million tCO2 in 2030, respectively. Carbon intensity is 0.63~0.64 kg CO2/KWh in 2020 and 0.56~0.58 kg CO2/KWh in 2030. In order to accelerate carbon reduction, future work should focus on setting a more stringent criterion for the intensity of industrial power consumption and on expanding the proportion of power generation using clean energy and large-capacity, high-efficiency units.

Introduction

The fundamental cause of global climate warming is the emission of CO2 and other greenhouse gases from the consumption of fossil energy, which seriously hinders the sustainable development of human beings. By 2050, CO2 emissions from the energy field will be more than double those of today if effective approaches to reduce them are not adopted [1]. China is faced with increasing international pressure to reduce carbon emissions [2]. Thus, energy conservation and emission reduction will become essential for achieving the sustainable development of human society. As the largest greenhouse gas emitter in the world [3,4], China has pledged that in 2020 the carbon emission intensity per unit of Gross Domestic Product (GDP) will be lower than that of 2005 by 40%~45%, and that the proportion of non-fossil energy in the primary energy mix will be increased to 15% [5]. According to the statistics of the International Energy Agency (IEA), in 2010 CO2 emissions from the production of China's electric power and thermal energy reached 35.8 million tons, accounting for 49.3% of the total CO2 emissions [6]. Therefore, promoting carbon mitigation in the power industry might be of great significance for China and the whole world to achieve their carbon mitigation targets. During the 11th Five-Year Plan of China, the power industry achieved a CO2 reduction of about 17.4 million tons through developing non-fossil energy sources, decreasing net coal consumption, and reducing line losses [7]. China's power industry has achieved certain effects with respect to CO2 emission reduction, but it might still be unable to realize the proposed carbon reduction targets. Consequently, there is an urgent need for delving further into carbon mitigation policies.
There is abundant literature on China's carbon mitigation policies, most of which has been published in the past few years. Concerning the research methods, scenario simulation has been extensively applied to carbon reduction issues. The modelling approaches of these studies can be classified into three categories: top-down models (MARKet ALlocation (MARKAL) models and Computable General Equilibrium (CGE) models), bottom-up models, and hybrid models [8]. For instance, Chen [9] employed three MARKAL models to investigate the carbon mitigation strategies of China's energy system and the corresponding impacts on the economy. Cheng [10] analyzed the impacts of the low-carbon policy in the power sector of Guangdong Province in China on its energy and carbon emission targets by 2020 using a regional CGE model. Li [11] assessed the influences of CO2 mitigation measures in China during the period 2010-2050 by using a CGE method. Xiao [12] explored the impacts of the environmental tax on China's economy in light of a dynamic recursive multi-sector CGE model. Chi [13] studied the impacts on China's economic growth, energy consumption, and carbon emissions under carbon tax policy scenarios on the basis of a dynamic CGE model. Top-down models can investigate the broader economy and incorporate feedback effects among different markets triggered by policy-induced changes in relative prices and incomes, but they generally cannot provide technological details of energy production or conversion [8]. Moreover, top-down models might have some limitations in terms of application. For example, the design of the CGE model is mainly based on the general equilibrium of the macro-economy; thus, it seems applicable only to research on national or regional carbon emission reduction rather than to individual industries. Furthermore, because the policies simulated in CGE models have an economy-wide impact, such models generally find it very difficult to evaluate major emission reduction measures concerning the internal structure of specific industries. In addition, the input-output relationships of the production functions in MARKAL and CGE models are either constant or obtained by extrapolation, which cannot accurately reflect real technical change and cannot be utilized for technology policy simulation. However, the Long-range Energy Alternatives Planning (LEAP) model developed by the Stockholm Environment Institute can effectively address these issues of MARKAL and CGE models. The LEAP model, which is a bottom-up model, can describe current and prospective technologies in detail, making it well suited to analyzing specific changes in technology or policies [14]. Using the LEAP model, researchers can flexibly establish various policy models according to the specific problems to be studied. The model can not only be widely used in urban, regional, national, and even global energy and environmental analysis, but can also be applied to research on energy demand and greenhouse gas emission reduction in various sectors of the national economy. The LEAP model can effectively identify sector-level technologies or policies by analyzing energy demand, conversion, transmission and distribution, end use, and the energy and environmental impacts of diverse sectors under different policy or technology simulation scenarios [8].
In the present study, since it can be used to set the parameters and model structures according to the characteristics of problem and the availability of data, the LEAP model is widely utilized to identify potential problems, and estimate the possible impacts of energy policies on various areas [15,16].This is evident from more than 75 country studies with LEAP model for energy and environmental systems.Bala [17] assessed rural energy supply and demand with the LEAP model, and studied the global warming contributions from Bangladesh caused by the drawbacks of traditional biomass fuels uses in rural areas of the country.Shin [18] estimated and analyzed the impacts of landfill gas electricity generation on the energy market in Korea using a LEAP model.Song [19] accomplished an environmental and economic assessment in Korea based on the energy policy changes for climate change agreements and an increase of CO 2 mitigation technology according to operating data for the CO 2 chemical absorption pilot plant that is installed in the Seoul coal steam power plant.Tao [20] employed three scenarios to simulate China's low-carbon economic development level in 2050 by using the LEAP model.Takase [21] studied various alternative paths for nuclear power development and GHG emission abatement in Japan.Amirnekooei [22] conducted demand and supply side analysis for Iran through developing different scenarios.Roinioti [23] explored the impacts of electricity generation scenarios on environmental emissions in Greece by using LEAP model.Pan [24] applied the LEAP model to forecast the reduction effects of main atmospheric pollutants and GHG in Beijing under different scenarios.Kale [25] developed electricity demand and supply scenarios for the state of Maharashtra in India using the LEAP model. In contrast to the wealth of studies on carbon mitigation policies from a national or regional level, there has been less research looking at the carbon mitigation policies of the power industry in China.In the previous literature, Zhang [26] assessed the CO 2 reduction potentials for China's electricity sector under different CO 2 emission scenarios by using the LEAP model.Huang [27] estimated China's future power demand according to the degree of electrification using the LEAP model.Yuan [28] constructed two energy conservation and emissions reduction scenarios to probe the 2020 energy conservation potential of China's power industry. The contributions of this paper may be summarized as follows: considering the objects of the total electricity demand and carbon mitigation comprehensively, the LEAP model is constructed to simulate the various scenarios for carbon mitigation potential of China's power industry through forecasting electricity demand and carbon emissions.The results can simulate the future trend of China's electricity demand and CO 2 emissions, as well as provide some general insights on the effectiveness of measures aimed at energy savings and carbon reduction of China's power industry, which will be beneficial for future energy planning and policy making. 
The rest of this paper is organized as follows: Section 2 describes the method of this study in detail, including the structure of LEAP model, and the major steps of the analysis.In Section 3 the scenario description and the parameters setting of electricity demand and carbon emission are presented.Then, in Section 4 the forecasting results of electricity demand and carbon emissions are obtained by using the LEAP model.The future carbon emissions intensity of the power industry is analyzed and assessed.Finally, Section 5 offers some conclusions and recommendations of the whole research. LEAP Model The Long-range Energy Alternatives Planning (LEAP) system is an energy-economy-environment model developed by the Stockholm Environment Institute [29].The model, which is a scenario-based modeling tool for energy policy analysis and climate change assessment, can develop different scenarios for future energy demand and environmental impact in the light of how energy is consumed, converted, and produced in a given region or economy under the assumption of a range of values for parameters such as population increase, economic development, technology utilization, and inflation [30].Also, the model can be adopted for a scenario analysis of the energy consumption and CO 2 emissions, and the policy or technology effects can be obtained on the basis of the comparative analysis of the simulation results under various scenarios. The structure of the LEAP model is described in Figure 1.There are two major steps to conduct the analysis of this study: (1) Electricity demand forecasting.Electricity demand includes household electricity demand and industrial electricity demand.Taking into account the difference between urban residents and rural residents, future electricity demand of urban residents and rural residents are respectively calculated on the basis of predicting the total population in the future.In order to analyze the distribution situation and industrial electricity demand, the secondary industry is divided into industry and construction industry.In this paper, industry in turn is divided into high electricity consumption industries and non-high electricity consumption industries.The high electricity consumption industries include the chemical industry, non-metal mineral product industry, metal smelting and calendering industry, and the electric, heat, and water production and supply industry.According to the GDP growth rate and the elastic coefficient of industrial GDP, the future production and industrial structure of each industry can be calculated.Then, the electricity demand under the assumptions of different power intensity can be obtained. (2) CO 2 forecasting of the electric power industry.Assuming that the future electricity production meets future electricity demand and only thermal power generates CO 2 , power production technology can be divided into thermal power and other power generation technologies.Thermal power fuels include coal, oil and gas.Moreover, coal-fired power generation is classified according to unit capacity and whether to use the Integrated Gasification Combined Cycle (IGCC) technology.Furthermore, the proportion of thermal power units using Carbon Capture and Storage (CCS) technology is predicted, and CO 2 generated by thermal power can be captured and stored on the basis of the ratio using CCS technology. 
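To make the demand-side bookkeeping described above concrete, the short sketch below projects industrial and household electricity demand from a baseline year. It is a minimal illustration of the structure only, assuming that industrial added value compounds with the GDP growth rate scaled by each industry's elasticity coefficient; the class layout, function names, and all numerical inputs are illustrative assumptions, not parameters taken from this study.

```python
from dataclasses import dataclass

@dataclass
class Industry:
    name: str
    added_value_0: float       # baseline added value (yuan), illustrative
    elasticity: float          # GDP elasticity coefficient of the industry
    intensity_0: float         # baseline electricity intensity (kWh per yuan), illustrative
    intensity_decline: float   # assumed annual decline rate of the intensity

def industrial_demand(industries, gdp_growth, years):
    """Sum of added value x electricity intensity over all industries after `years`."""
    total = 0.0
    for ind in industries:
        added_value = ind.added_value_0 * (1.0 + ind.elasticity * gdp_growth) ** years
        intensity = ind.intensity_0 * (1.0 - ind.intensity_decline) ** years
        total += added_value * intensity
    return total  # kWh

def household_demand(pop_0, pop_growth, urban_rate, intensity_urban, intensity_rural, years):
    """Urban and rural populations times the respective per-capita intensities (kWh/person)."""
    pop = pop_0 * (1.0 + pop_growth) ** years
    urban, rural = pop * urban_rate, pop * (1.0 - urban_rate)
    return urban * intensity_urban + rural * intensity_rural  # kWh

if __name__ == "__main__":
    industries = [
        Industry("high electricity consumption industry", 2.0e13, 0.9, 0.15, 0.05),
        Industry("other secondary industry", 1.5e13, 1.0, 0.06, 0.05),
        Industry("tertiary industry", 2.5e13, 1.1, 0.03, 0.03),
    ]
    total_kwh = industrial_demand(industries, gdp_growth=0.065, years=8) + \
        household_demand(pop_0=1.37e9, pop_growth=0.0065, urban_rate=0.60,
                         intensity_urban=800, intensity_rural=500, years=8)
    print(f"illustrative total demand ~ {total_kwh / 1e9:.0f} billion kWh")
```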
Electricity Demand Forecasting

Household electricity demand and industrial electricity demand constitute the total electricity demand:

pd_t = ipd_t + hpd_t,

where pd_t is the total electricity demand in the tth year; ipd_t is the industrial electricity demand in the tth year; hpd_t is the household electricity demand in the tth year.

(1) Industrial electricity demand. Industrial electricity demand is the sum, over all industries, of the added value multiplied by the electricity consumption intensity:

ipd_t = Σ_i adv_it × iipd_it,

where adv_it is the added value of the ith industry in the tth year and iipd_it is the electricity consumption intensity of the ith industry in the tth year. The added value of each industry is projected from the baseline year using the GDP growth rate and the industry's GDP elasticity coefficient, where adv_i0 is the added value of the ith industry in the baseline year, g_t is the growth rate of GDP in the tth year, and e_i is the elasticity coefficient of GDP of the ith industry. In this paper, the baseline year is 2012. The industrial structure is described by

S_it = adv_it / Σ_i adv_it,

where S_it (industrial structure) is the ratio of the added value of the ith industry to the total added value in the tth year.

(2) Household electricity demand. Household electricity demand is obtained by multiplying the urban and rural populations by the corresponding per-capita consumption intensities:

hpd_t = Σ_j pop_jt × ihpd_jt,

where j = 1 represents urban residents; j = 2 represents rural residents; pop_jt is the population of the urban (rural) area in the tth year; ihpd_jt is the per-capita household electricity consumption intensity of the urban (rural) area in the tth year. The urban and rural populations are derived from the baseline population, the population growth rate, and the urbanization rate, where pop_0 is the population in the baseline year, β is the growth rate of the population, and city is the urbanization rate. The per-capita household electricity consumption intensity is linked to per-capita income, where γ is the coefficient relating household electricity consumption to per-capita income, dpi_jt is the per-capita income of the urban (rural) area in the tth year, and dpi_j0 is the per-capita income of the urban (rural) area in the baseline year.

CO2 Forecasting of the Electric Power Industry

In this paper, it is assumed that the total generated energy just meets the electricity demand of the whole society. The total energy demand and the carbon dioxide emissions of the power industry in the future are calculated in two steps:

(1) Energy demand of the power industry. The generated energy is first allocated to the conversion equipment,

pd_mt = µ_m × pd_t,

where pd_mt represents the generated energy of the equipment m due to transformation, pd_t represents the total generated energy, and µ_m is the proportion of the generated energy of the conversion equipment m in the total generated energy. The generated energy of each piece of equipment is then converted into primary energy demand through the conversion efficiency, where et is the total energy demand of the power industry and f_m,s is the energy conversion efficiency of the equipment m using the primary energy s.
(2) CO2 emissions from the power industry. The calculation approaches for carbon emissions can be divided into three categories: practical measurement methods, model methods, and conservation algorithms. Based on scientific sampling and continuous monitoring, practical measurement methods are difficult to apply widely, since they have some issues such as high monitoring cost and poor reliability. The model methods are utilized to estimate and predict carbon emissions by using system models or comprehensive evaluation models. For instance, Mi [4] employed an input-output model to calculate consumption-based CO2 emissions for thirteen Chinese cities in terms of both overall and per capita carbon emissions. Mi [31] also employed an optimization model based on the input-output model to assess the CO2 emissions of Beijing. Mi [32] further developed a Climate Change Mitigation Index (CCMI) with 15 objective indicators and performed a complex assessment of China's provincial performance in climate protection using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method. However, in the model methods, the uncertainties of model setting and parameter selection have direct implications for estimation accuracy. Based on the law of mass conservation, the conservation algorithm considers that the mass of the carbon-oxygen combustion products is equal to the sum of the masses of carbon and oxygen before combustion, and the sensitivity of the conservation algorithm to the different combustion technologies and combustion conditions of various countries is relatively low. Carbon emission factors can be computed by using the carbon contents per unit calorific value of different fuels. Therefore, the IPCC employed data collected from different countries to obtain emission factors per unit calorific value, which have been widely used in studies. Conservation algorithms can be classified into two kinds: reference approaches and sectoral approaches. The reference approaches are top-down calculation methods that neither consider the intermediate conversion of fossil fuels nor distinguish the consumption of the various types of fuels in different sectors. Compared with the sectoral approach, the reference methods make it easier to obtain relevant data, they are convenient, and the calculations are simple, so they are regarded as the default method recommended by the IPCC. In this paper, in order to study the carbon emissions in the process of power production, the sectoral approach of the IPCC is applied to calculate the carbon emissions. The total emission is obtained by multiplying the consumption of each primary energy s in each conversion equipment m by the corresponding emission factor and summing over all fuels and equipment, where cet is the total CO2 emission of the power industry and ef_m,s is the CO2 emission factor of the primary energy s used in the conversion equipment m. The emission factors provided by the IPCC are based on calorific value units, and in practice the consumption of primary energy s is generally counted by mass. However, there is a big difference in the calorific value of the same mass of fuel among different countries. Hence, the CO2 emission factor of primary energy s per unit mass can be calculated from the Chinese average low calorific value of primary energy s per unit mass as

ef_m,s = C_m,s × E_m,s × O_m,s × (44/12),

where C_m,s is the carbon content per unit calorific value of primary energy s, which is the default value provided by the IPCC [33]; E_m,s is the average low calorific value of primary energy s per unit mass; O_m,s is the carbon oxidation rate of primary energy s, and in this study the value is 1; 44/12 is the molecular weight ratio of CO2 to C.
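A compact numerical sketch of the sectoral accounting just described might look as follows. The emission-factor conversion (carbon content per unit calorific value times the low calorific value times the oxidation rate times 44/12) follows the definitions given above; the specific fuel values and the dictionary-based bookkeeping are illustrative assumptions, not the parameters used in this study.

```python
CO2_PER_C = 44.0 / 12.0  # molecular weight ratio of CO2 to C

def emission_factor(carbon_per_gj, low_calorific_value_gj_per_t, oxidation_rate=1.0):
    """CO2 emission factor per tonne of fuel: C content/GJ x GJ/t x oxidation x 44/12."""
    return carbon_per_gj * low_calorific_value_gj_per_t * oxidation_rate * CO2_PER_C

def total_emissions(fuel_use_t, factors_t_co2_per_t):
    """Sum of fuel consumption (t) times the per-fuel emission factor (t CO2 per t fuel)."""
    return sum(fuel_use_t[f] * factors_t_co2_per_t[f] for f in fuel_use_t)

if __name__ == "__main__":
    # Illustrative inputs only: carbon content in tC/GJ, low calorific value in GJ/t.
    factors = {
        "coal": emission_factor(carbon_per_gj=0.0261, low_calorific_value_gj_per_t=20.9),
        "oil": emission_factor(carbon_per_gj=0.0201, low_calorific_value_gj_per_t=41.8),
        "gas": emission_factor(carbon_per_gj=0.0153, low_calorific_value_gj_per_t=48.0),
    }
    fuel_use = {"coal": 1.2e9, "oil": 5.0e6, "gas": 3.0e7}  # tonnes, illustrative
    print(f"total ~ {total_emissions(fuel_use, factors) / 1e6:.0f} Mt CO2")
```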
Scenario Description of Electricity Demand

From the previous literature, it can be seen that the factors affecting electricity demand mainly include the economic growth rate, industrial electricity consumption intensity, industrial structure, population, and per-capita household electricity consumption intensity. According to Equations (3) and (4), the faster the economic growth rate is, the faster the industrial structure changes under the assumption that the elasticity coefficient of GDP is constant. Moreover, based on Equations (7) and (8), the faster the economic growth rate is, the higher the per-capita income and per-capita household electricity consumption intensity are. Industrial structure and per-capita household electricity consumption intensity can therefore be considered as variables derived from the economic growth rate. Therefore, in this paper six scenarios are designed for future electricity demand on the basis of the economic growth rate and the industrial electricity consumption intensity; the specific description is shown in Table 1. In order to determine future carbon emissions, the highest and the lowest electricity demand among the six scenarios are selected to conduct the scenario analysis of carbon emissions. The new capacity of the power industry in the future is expected to focus on large-capacity and high-efficiency units. As future electricity demand increases, the faster the proportion of large-capacity and high-efficiency units rises, the faster the power generation technical structure adjusts. Thus, two scenarios are designed to analyze carbon emissions, as shown in Table 2.

Table 2. Description of carbon emission scenarios in the electric power industry. Scenario 1: the power industry develops at a high speed; the technical structure adjusts at a high speed. Scenario 2: the power industry develops at a low speed; the technical structure adjusts at a low speed.

Parameter Setting of Electricity Demand

(1) Economic growth. According to the estimates of China's average annual GDP growth rate in 2010-2020 and 2020-2030 from the International Energy Agency, Citibank, the World Bank, and the Hong Kong and Shanghai Banking Corporation, the highest predicted value among them is selected as the economic growth rate under the scenarios in which the economy grows at a high speed, the lowest value is chosen as the economic growth rate under the scenarios in which the economy grows at a low speed, and the average value is regarded as the economic growth rate under the scenarios in which the economy grows at a medium speed [34].

(2) Industrial structure. Industrial added value and the proportions of the industrial structure take 2012 as the baseline year. The elasticity coefficient of GDP of each industry is assumed to be constant and adopts the average value of the GDP elasticity coefficient in 2007-2012. Under the assumption that the GDP elasticity coefficient of each industry is invariable, the faster the economic growth rate is, the faster the adjustment of the industrial structure is, since the economy is designed to grow at high, medium, and low speeds in the three scenarios. Therefore, the industrial structure is set to change at high, medium, and low speeds in the three scenarios.
(3) Industrial electricity consumption intensity. Industrial electricity consumption intensity can be defined as the electricity consumption per unit of value added of each industry. Under the restraint of energy saving and emission reduction policies, the electricity consumption intensity of all industries in China has been declining. However, from a marginal point of view, the speed of descent of industrial electricity consumption intensity should be decreasing. Therefore, in this study the change rate of industrial electricity consumption intensity under the scenario of a rapid decline of electricity consumption intensity is set to 50% of the average annual descent speed of 2007-2012, and the change rate under the scenario of a slow decline of electricity consumption intensity is 30% of the average annual descent speed of 2007-2012.

(4) Population growth. The average annual growth rate of China's population was approximately 0.5% in 2006-2015. The Fifth Plenary Session of the 18th Central Committee of the Communist Party of China proposed in October 2015 that each family may now raise two children. According to the fertility willingness survey of the National Health and Family Planning Commission of China, about 15-20 million couples conform to the new policy, and only 50%-60% of couples are willing to have children, with the increment of newborns reaching 7.5-12 million in the short term. If the fertility target can be achieved within 5 years, the annual increase of newborns will be 1.5-2.4 million. Then, the birth rate will increase to 1.3%-1.4%, and the annual mortality rate will be around 0.7%. China's future population growth rate will therefore be around 0.6%-0.7%. Hence, in this study the average annual growth rate of the population is assumed to be 0.65% in the future.

(5) Urbanization rate. Regarding the large differences in income level and the amount of electricity consumption between China's rural and urban areas, an assumption about the future trend of urbanization is made. In the "New National Urbanization Plan", China's State Council announced that the goal of an urbanization rate of about 60% should be achieved by 2020. According to the "Report of the State Council on the Work of the Construction of Urbanization", by 2030 the urbanization rate will reach about 70%.

(6) Per-capita household electricity consumption intensity. Per-capita household electricity consumption intensity depends on per-capita income. According to the per-capita household electricity consumption published by the China Electricity Council and the per-capita Gross National Income (GNI) announced by the World Bank in 2011, this paper calculates the correlation coefficient between per-capita household electricity consumption and per-capita GNI, and estimates the per-capita household electricity consumption intensity of China at different economic growth rates. The parameters of electricity demand are listed in Tables 3 and 4.

Parameter Setting of Carbon Emissions in the Power Industry

(1) Generated energy. In this paper, the maximum of the electricity demand forecasting results is considered as the maximum of future power production, which defines the rapid power industry development scenario; the minimum of the electricity demand forecasting results is considered as the minimum of future power production, which defines the slow power industry development scenario.
(2) Technical structure of power generation. The increase in the proportion of clean energy generation and the introduction of new, highly efficient coal-fired units into the generation structure can be employed to reflect the adjustment of the technical structure of power generation. According to the "Plan of Action of Energy Saving Coal Upgrading and Transformation (2014-2020)", the average coal consumption of power supply of new coal-fired generating units should be lower than 300 g/KWh. The net coal consumption of conventional air-cooling units and circulating fluidized bed boilers is higher than 300 g/KWh. To achieve the average standard of 300 g/KWh, China's new conventional coal projects in the future should focus on (ultra-)supercritical units (600 thousand kilowatts and above). In this paper, ultra-supercritical units of 600-1000 thousand kilowatts and of more than 1000 thousand kilowatts are each assumed to account for 50% of the new capacity, and the rest of the generating capacity remains at its 2012 status. Under the scenario of high-speed development of electric power production, the increase in the proportion of the new units is correspondingly fast, which is defined as the rapid adjustment of the technical structure; under low-speed development of electric power production, the rise in the proportion of new units is correspondingly slow, which is defined as the slow adjustment of the technical structure. The technical parameters of power generation are described in Table 5.

Notes to Table 3: * the numbers in brackets are, respectively, the economic growth rates under the scenarios of high speed, medium speed, and low speed; ** the numbers in brackets are, respectively, the per-capita household electricity consumption in the urban area under the scenarios of high speed, medium speed, and low speed; *** the numbers in brackets are, respectively, the per-capita household electricity consumption in the rural area under the scenarios of high speed, medium speed, and low speed; all numbers in brackets are calculated on the basis of data from the China Statistics Yearbook (2005-2013) [35][36][37][38][39][40][41][42][43].

Notes to Table 5: * the data on net coal consumption of the various types of units are obtained from the "Chinese power statistical yearbook 2013" [44]; the net coal consumption of IGCC technology can be seen in [45]; the net coal consumption of oil, gas, and other thermal power can be seen in [46]; ** the technical efficiency of CCS technology is assumed to be 90%; *** the first figure in the brackets is the proportion of generated energy when the technical structure adjusts rapidly, and the second figure in the brackets is the proportion of generated energy when the technical structure adjusts slowly.

Results of Electricity Demand Forecasting

Figure 2 describes the forecasting results of electricity demand under the six different scenarios. From Figure 2, it can be seen that the minimum of electricity demand occurs in Scenario 5, and the forecast values in 2020 and 2030 are 6407.9 and 6779.9 billion KWh, respectively. The main reason may be that the slow growth of the economy and the rapid decline of electricity consumption intensity restrain the growth of electricity demand. Moreover, the maximum of electricity demand occurs in Scenario 2, and the estimated values in 2020 and 2030 are 7491.0 and 10,313.5 billion KWh, respectively. The main reason for this is that the rapid growth of the economy and the slow decline of electricity consumption intensity promote the growth of electricity demand.
Furthermore, it can be seen that the absolute amount of electricity demand of the primary industry declines under all six scenarios, while the absolute amount of electricity demand of the other sectors increases under Scenarios 1, 2, 3, 4, and 6. Only in Scenario 5 do the slow growth of the economy and the rapid decline of electricity consumption intensity drive down the electricity demand of the secondary industry. A possible explanation may be that industrial electricity consumption decreases from 4343.2 billion KWh in 2020 to 4128.6 billion KWh in 2030. The electricity consumption of the high electricity consumption industries decreases from 2859.1 billion KWh in 2020 to 2736.9 billion KWh in 2030. The electricity consumption of the non-high electricity consumption industries decreases from 1478.0 billion KWh in 2020 to 1427.4 billion KWh in 2030. However, the electricity demand of the chemical industry is rising. Therefore, except for Scenario 5, China's power industry will continue to show a trend of continuous expansion in the other scenarios. Table 6 reflects the electricity demand structure of the various sectors. From Table 6, it can be concluded that the proportions of the electricity consumption of the primary and secondary industries and their breakdown industries all display a downward trend. The proportions of electricity demand of the tertiary industry and of households show an upward trend. Specifically, in 2020 the ratio of the electricity demand of the primary industry to the total electricity demand is about 1.56%~1.68%. The proportion of the secondary industry is about 69.23%~70.68%, and the proportion of the tertiary industry is approximately 12.09%~12.94%. The share of household electricity demand is about 15.10%~16.73%. The largest component of the secondary industry is industry, accounting for 66.93%~70.51%. The share of the high electricity consumption industries is 43.87%~44.62%. By 2030, the ratio of the electricity demand of the primary industry to the total electricity demand is about 1.11%~1.41%. The proportion of the secondary industry is about 62.60%~66.75%, and the proportion of the tertiary industry is approximately 12.73%~15.64%. The share of household electricity demand is about 17.81%~22.05%. The largest component of the secondary industry is industry, accounting for 57.59%~62.34%. The share of the high electricity consumption industries is 37.85%~40.61%. Furthermore, the change of electricity consumption intensity has greatly positive implications for reducing the total electricity demand of the whole society. In this paper, with the same economic growth rate, the difference of electricity demand under the two scenarios of rapid decline and slow decline of industrial electricity consumption intensity is applied to measure the potential for reducing energy consumption.
The average potential under the high, medium, and low speeds of economic growth is shown in Table 7. Compared with the 30% decline rate, the abatements of electricity demand in 2020 and 2030 are 660.132 and 1705.184 billion KWh, respectively, when the electricity consumption intensities of all industries decrease at 50% of the 2007-2012 average annual decline rate. The contribution of the decline in industrial power consumption is the most prominent, and the potentials for reducing energy consumption are 476.130 and 1242.590 billion KWh, respectively, accounting for 7.20% and 16.28% of the total electricity demand of the whole society in 2020 and 2030. In 2020 and 2030, the potentials for reducing energy consumption of the high electricity consumption industries are 287.87 and 744.701 billion KWh, respectively, accounting for 4.35% and 9.75% of the total electricity demand of the whole society.
Forecasting Results of Carbon Emissions

Carbon emissions from the power industry in the two scenarios are shown in Table 8. In Scenario 1, power industry carbon emissions will reach 4692.52 and 5812.28 million tCO2 in 2020 and 2030, respectively. In Scenario 2, carbon emissions will reach 4074.16 and 3948.43 million tCO2 in 2020 and 2030. Moreover, considering solely the contribution of the power generation structure adjustment, the rapid adjustment of the power generation structure can achieve a reduction of 0.7045 million tCO2 in 2020 and 194.05 million tCO2 in 2030 compared with the slow adjustment of the power generation structure. From Table 8, it can be seen that the carbon emission intensity of electric power production has a downward trend in both scenarios. In 2020, the carbon emission intensity of power production decreases by 38.83% and 37.86% relative to 2005 in Scenarios 1 and 2, respectively, and by 45.63% and 43.69% by 2030. The Chinese government has put forward that in 2020 and 2030 the carbon emission intensity per unit of GDP will decrease by 40%~45% and 60%~65% compared with that of 2005. Moreover, according to the statistics of the International Energy Agency (IEA), in 2013 CO2 emissions from China's electric power and thermal energy production reached 4.39 billion tons, accounting for 48.86% of China's total CO2 emissions [6]. Considering the high proportion of carbon emissions from China's power industry, the possibility of successfully achieving the overall reduction targets in 2020 and 2030 seems very slim. Therefore, in order to ensure the realization of the emission reduction targets, more stringent industrial electricity intensity targets should be developed in the future, while speeding up the structural adjustment of power generation technology and continuously improving the proportion of clean energy generation and of large-capacity, high-efficiency units.

Conclusions

On the basis of the LEAP model, this paper establishes a scenario analysis model of China's electricity demand and carbon emissions. In line with the industrial segmentation and an overall consideration of the influencing factors of China's electric power demand, the electricity demand of China in 2020 and 2030 is simulated by setting six different scenarios. Based on the power generation scale and the adjustment speed of the technical structure, two carbon emission scenarios are set to predict the carbon emissions in 2020 and 2030, and the corresponding carbon emission intensity is calculated. The results show that the whole society's electricity consumption in 2020 will be 6407.9~7491.0 billion KWh, and will reach 6779.9~10,313.5 billion KWh in 2030; carbon emissions will be 4074.16~4692.52 million tCO2 in 2020 and 3948.43~5812.28 million tCO2 in 2030, respectively; carbon emission intensity will reach 0.63~0.64 kg CO2/KWh in 2020 and 0.56~0.58 kg CO2/KWh in 2030. The task of decreasing carbon emission intensity by 40%~45% in 2020 and 60%~65% in 2030 compared with that of 2005 will be very arduous. The focus of China's future emission reduction programs should be to develop more stringent industrial power targets, as well as to accelerate the pace of structural adjustment of power generation technology and to stimulate the improvement of the ratio of clean energy and of large-capacity, high-efficiency units.
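As a quick consistency check on the reported figures, the carbon intensity ranges can be reproduced by dividing the scenario emissions by the corresponding electricity demand. This is a simple arithmetic sketch using the numbers quoted above, not part of the original analysis; the scenario labels in the dictionary are only shorthand added here.

```python
# Scenario emissions (million tCO2) and electricity demand (billion KWh) quoted above.
cases = {
    "2020 low demand":  (4074.16, 6407.9),
    "2020 high demand": (4692.52, 7491.0),
    "2030 low demand":  (3948.43, 6779.9),
    "2030 high demand": (5812.28, 10313.5),
}

for label, (emissions_mt, demand_bkwh) in cases.items():
    # million tCO2 -> kg CO2, billion KWh -> KWh; the ratio is kg CO2 per KWh.
    intensity = (emissions_mt * 1e9) / (demand_bkwh * 1e9)
    print(f"{label}: {intensity:.2f} kg CO2/KWh")
```

Running this reproduces the quoted 0.63~0.64 kg CO2/KWh for 2020 and 0.56~0.58 kg CO2/KWh for 2030.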
.2 billion KWh in 2020 to 4128.6 billion KWh in 2030. The electricity consumption of the high electricity consumption industry decreases from 2859.1 billion KWh in 2020 to 2736.9 billion KWh in 2030. The electricity consumption of the non-high electricity consumption industry decreases from 1478.0 billion KWh in 2020 to 1427.4 billion KWh in 2030. However, the electricity demand of the chemical industry is rising. Therefore, except for Scenario 5, China's power industry will continue to show a trend of continuous expansion in the other scenarios.

Table 1. Description of electricity demand scenarios (e.g., the economy grows at a low speed; the industrial structure changes at a low speed; industrial electricity consumption intensity decreases at a low speed; population and urbanization rates grow steadily; per-capita household electricity consumption intensity increases at a low speed).

3.1.2. Scenario Description of CO2 Emission

Table 3. Parameters of economic development and household electricity consumption intensity.
Table 5. Technical parameters in power generation.
Table 6. Proportion of power demand in different sectors in 2020 and 2030 (%).
Table 7. The potential of the change of industrial electricity consumption intensity to reduce the total electricity demand of the whole society.
Table 8. Carbon emissions of the electric power industry in 2020 and 2030 (million tCO2).
8,348.2
2016-11-25T00:00:00.000
[ "Economics", "Engineering", "Environmental Science" ]
Unveiling the Small-scale Jets in the Rapidly Growing Supermassive Black Hole IZw1 Accretion of black holes at near-Eddington or super-Eddington rates represents the most powerful episode driving black hole growth, potentially occurring across various types of objects. However, the physics governing accretion and jet–disk coupling in such states remains unclear, primarily due to the difficulty in detecting associated jets, which may emit extremely weakly or exhibit episodic behavior. Only a few near/super-Eddington systems have demonstrated radio activity, and it remains uncertain whether jets exist and what their properties are in super-Eddington active galactic nuclei (AGNs) and ultraluminous X-ray sources. This uncertainty stems mainly from the complex radio emission mix, which includes contributions from jets, star formation activity, photoionized gas, accretion disk wind, and coronal activity. In this work, we conducted high-resolution, very long baseline interferometry observations to investigate jets in the highly accreting narrow-line Seyfert I system I Zw 1. Our observations successfully revealed small-scale jets (with a linear size of ∼45 pc) at both 1.5 and 5 GHz, based on the high radio brightness temperature, radio morphology, and spectral index distribution. Additionally, the parsec-scale jet observed in I Zw 1 displays a knotted morphology reminiscent of other sources accreting at similar rates. In summary, the high accretion rates and jet properties observed in the AGN I Zw 1 may support the AGN/X-ray binary analogy in this extreme state. The Eddington ratio1 is a key indicator of black hole accretion and ejection states, in both stellar-mass black holes (SBHs) and supermassive black holes (SMBHs) (e.g.Fender et al. 2004;Falcke et al. 2004;Körding et al. 2006), which is generally < 1 under the assumption of spherical accretion.The accretion flows and associated ejection processes with low and moderate Eddington ratios generally can be described with Advection Dominated Accretion Flows (ADAFs, Narayan & Yi 1994;Esin et al. 1997) and standard accretion disk (or Shakura-Sunyaev disk, SSD, Shakura & Sunyaev 1973), respectively, and corresponding revisions (e.g.Fender et al. 2004;Done et al. 2007;Yuan & Narayan 2014). However, super-critical accretion (with the super-Eddington accretion rate for which the black hole radiates above the Eddington luminosity) is viable in both observations and physics and potentially plays an essential role in feeding the black hole growth in the early Universe (see Yang et al. 2020, and references therein).Furthermore, super-Eddington accretion of the first-generation SMBHs may have a deep impact on regulating the (host) galaxy evolution and the epoch of reionization through feedback processes.As accretion increases to near or super-Eddington rates, the standard disc geometry cannot be maintained and the accretion flow will inevitably evolve into a 'slim disc' (Vierdayanti et al. 2013).The corresponding state is sometimes called the 'ultraluminous state' (Gladstone et al. 2009). Regardless of the importance and the viability of super-Eddington accretion, our understanding of the accretion and ejection processes in this accretion state remains limited, which is primarily due to that only a few systems can temporarily trigger super-Eddington accretion (e.g.Greiner et al. 2001;Dai et al. 2018), and even fewer systems can maintain long-lived super-Eddington accretion (e.g.Middleton et al. 2021;King et al. 
2023). Even worse, the mechanism for sustaining super-Eddington accretion in those sources which persistently accrete at super-Eddington rates is unclear. It is also widely accepted that supermassive and stellar-mass black holes have similarities in accretion physics, i.e., AGNs and X-ray binaries (XRBs) have similar accretion state transitions and associated ejection processes. However, it is still unclear whether the AGN/XRB analogy holds in the 'ultraluminous state' and whether the geometry of the disc-corona system and the jet-disc coupling are similar. Here, our interest is the connection between the short-lived canonical 'very high state' (universally found in XRBs) and the long-standing super-Eddington accretion in, for example, the microquasar SS 433 and ULXs, and which parameters are driving the long-lived super-Eddington accretion. As the time scale of state transition is proportional to black hole mass (Svoboda et al. 2017; Yang et al. 2020), a 'very high state' in SMBHs (e.g. M_BH = 10^7 M_⊙) would last 10^6 times longer than in the 10 M_⊙ stellar-mass black holes found in XRBs. Therefore, the study of near/super-Eddington AGNs provides an opportunity to understand the ejection process in a quasi-steady 'very high state' and may shed light on the physics that sustains such near/super-Eddington accretion. For this reason, we present very long baseline interferometry (VLBI) observations of an AGN, I Zw 1, which radiates close to or above the Eddington limit.

I Zw 1 is one of the closest quasars, located at a redshift of z = 0.0589 (Ho & Kim 2009), and is regarded as an archetypal narrow-line Seyfert 1 galaxy (NLS1) based on its optical properties (Schmidt & Green 1983; Pogge 2000). The black hole mass of I Zw 1 was estimated to be M_BH = 9.3 × 10^6 M_⊙ from optical reverberation mapping (Huang et al. 2019). The bolometric luminosity estimated from spectral fitting is log L_bol = 45.50-45.68 erg s^-1 (Martínez-Paredes et al. 2017), which exceeds its Eddington luminosity with an Eddington ratio of λ_Edd = 2.77-4.20. Another work obtained a higher black hole mass of M_BH = 2.8 × 10^7 M_⊙ using X-ray reverberation (Wilkins et al. 2021) and estimated the Eddington ratio of I Zw 1 as unity (or 0.3) based on the optical monochromatic luminosity (or the X-ray luminosity). However, the authors note that the luminosity could be underestimated due to photons being trapped in the disk. If we take the bolometric luminosity of log L_bol = 45.50-45.68 erg s^-1 (Martínez-Paredes et al. 2017), which is thought to be more accurate than an estimation from a single-band luminosity, and use the larger black hole mass measurement of M_BH = 2.8 × 10^7 M_⊙ (Wilkins et al. 2021), then the Eddington ratio would be 0.92-1.40. We should bear in mind that when the bolometric luminosity approaches and exceeds the Eddington luminosity, the actual mass accretion rate would be significantly higher than expected from the observable luminosity assuming a typical radiative efficiency (η < 1, Bian & Zhao 2003), because of the photon-trapping effect (Mineshige et al. 2000). The radiative efficiency of I Zw 1 was estimated, as one of the Palomar-Green (PG) quasars (PG 0050+124), to be log η = −2.21 or −1.18 ± 0.04 (based on the mass estimates from the broad emission line widths and the M−σ* correlation, respectively; see Davis & Laor 2011). With such a high Eddington ratio and low radiative efficiency, the SMBH in I Zw 1 must be growing with a mass accretion rate notably higher than the Eddington limit, i.e. at the super-Eddington accretion rate.
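The Eddington ratios quoted above follow from dividing the bolometric luminosity by the Eddington luminosity. A minimal sketch of that arithmetic, assuming the standard coefficient L_Edd ≈ 1.26 × 10^38 (M_BH/M_⊙) erg s^-1 (small differences from the published 2.77-4.20 range come from the exact coefficient adopted):

```python
# Eddington-ratio arithmetic behind the values quoted above.
# L_Edd ~ 1.26e38 (M_BH / M_sun) erg/s is an assumed standard coefficient, not from this paper.
def eddington_ratio(log_lbol_erg_s, mbh_in_msun):
    l_edd = 1.26e38 * mbh_in_msun              # erg/s
    return 10.0**log_lbol_erg_s / l_edd

# Optical reverberation-mapping mass (Huang et al. 2019), log L_bol = 45.50-45.68:
print(eddington_ratio(45.50, 9.3e6), eddington_ratio(45.68, 9.3e6))   # ~2.7 to ~4.1
# X-ray reverberation mass (Wilkins et al. 2021):
print(eddington_ratio(45.50, 2.8e7), eddington_ratio(45.68, 2.8e7))   # ~0.9 to ~1.4
```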
I Zw 1 is also an extremely radio-quiet AGN with a radio loudness parameter R = 0.35 (fn. 2) (Yang et al. 2020). Radio emission from radio-quiet AGNs is complex and remains a subject of debate (Panessa et al. 2019). On the other hand, the presence of jets in near or super-Eddington systems and how they are launched are also questions that need to be explored. X-ray observations of I Zw 1 indicate that the X-ray corona exhibits some structure and part of it may be collimated and ejected (Gallo et al. 2007; Wilkins et al. 2017). I Zw 1 is an extreme example of a nearby highly accreting and radio-quiet AGN, providing an ideal laboratory for studying outflow activities (fn. 3), including jets and winds, at high spatial resolution.

In this work, we report the Very Long Baseline Array (VLBA) and European VLBI Network (EVN) plus enhanced Multi-Element Remote-Linked Interferometer Network (e-MERLIN) observations of the nuclear region in I Zw 1, and we also analyze archival data from the VLA and MERLIN. Our paper is organized as follows: Section 2 details the multi-band observations, data reduction, and analysis of the target I Zw 1, while Section 3 presents the results and discussions. Finally, we provide our conclusions in Section 4. Throughout this work, we adopt the standard ΛCDM cosmology with H_0 = 71 km s^-1 Mpc^-1, Ω_Λ = 0.73, Ω_m = 0.27, and the corresponding physical scale in I Zw 1 is 1.125 pc mas^-1.

VLBI observation and data reduction We observed I Zw 1 on 2018 September 23 with 10 antennas of the Very Long Baseline Array (VLBA) and on 2020 November 17 with 19 antennas of the European VLBI Network (EVN) plus the enhanced Multi-Element Remote-Linked Interferometer Network (e-MERLIN). The VLBA observation was carried out at L-band (1.548 GHz, or 1.5 GHz for short; project code BY145), and the EVN+e-MERLIN observation was conducted at C-band (4.926 GHz, or 5 GHz for short; project code EY037). The total VLBA observing time is 2 h with a data recording rate of 2 Gbps, and the total time of the EVN+e-MERLIN observation is 8 h with a data recording rate of 4 Gbps. Both observations were performed in phase-referencing mode, using J0056+1341 (R.A.: 00h56m14.816010s ± 0.000013s, Dec.: +13°41′15.75506″ ± 0.00044″) as the phase-reference calibrator.

We calibrated the VLBI data in the Astronomical Image Processing System (aips), a software package developed by the National Radio Astronomy Observatory (NRAO) of the U.S. (Greisen 2003), following the standard procedure. A priori amplitude calibration was performed using the system temperatures and the antenna gain curves provided by each station. The Earth orientation parameters were obtained and corrected using measurements from the U.S. Naval Observatory database, and the ionospheric dispersive delays were corrected based on a map of the total electron content provided by GPS satellite observations (fn. 4). The opacity and parallactic angles were also corrected based on the auxiliary files attached to the data. The delay of the visibility phase and the telescope bandpass were calibrated using the bright radio source 3C 454.3. Next, we performed a global fringe-fitting on the phase-referencing calibrator, J0056+1341, assuming a point source model, to solve for miscellaneous phase delays.
The phase calibrator J0056+1341 shows a core-jet structure that extends up to ∼100 mas to the north (see supplementary Figure 1 in Appendix B for its 1.5 and 5 GHz images). We performed self-calibration on the phase calibrator and obtained its CLEAN model, which was then used as the input model to re-solve the phases in aips. This operation can eliminate phase-referencing errors due to the jet structure. Finally, both phase and amplitude solutions obtained from the phase calibrator were applied to the target I Zw 1. The calibrated uv-data were exported to difmap (Shepherd 1997) for deconvolution. Based on a signal-to-noise ratio of ∼30 and ∼14 at 1.5 and 5 GHz in the residual map, respectively, we decided not to perform self-calibration on the target source.

We applied different deconvolution algorithms in difmap to produce radio maps, i.e. CLEAN and Gaussian model-fitting. It is important to note that the solutions for the visibilities are not necessarily unique when complex structures are to be handled. In order to compare the goodness of the deconvolution results, we summarise statistical parameters in Table 1. The statistical parameters of the Gaussian model-fit are close to those of the CLEAN, while the CLEAN is better than the Gaussian model-fit based on the χ²_r value (especially at 5 GHz), which is reasonable as the emission regions are generally more complex than the representation of a few Gaussian components. The resulting CLEAN images and Gaussian model-fitted components are shown in Figure 1. The root-mean-square (rms) noise in the residual map (Column 3 of Table 1) is larger than the off-source rms noise in the map (Column 4 of Table 1), indicating that some diffuse emission cannot be recovered in the full-resolution map. The off-source rms represents the background noise fluctuation, whereas the rms of the residual map incorporates residual flux densities. Therefore, the loss of flux density in the images can be characterized as (σ_r − σ_rms)/σ_rms (Column 5 of Table 1). Simultaneously, we also produced a uv-tapered map to reduce the weight of the long-baseline visibilities (and thus also reduce the resolution) in an attempt to recover the weak and extended emission; see panels (a) and (c) in Figure 1.

Archived VLA data We retrieved the raw visibility data of I Zw 1 observed by the Very Large Array (VLA) from the NRAO data archive (fn. 5), including historical VLA and the newly observed Karl G.
Jansky VLA (JVLA) data. Although some data have been published (see Table 2), to ensure consistency in the data reduction we performed a manual calibration for all available datasets using the Common Astronomy Software Application (CASA v5.1.1, McMullin et al. 2007). Our data reduction followed the standard routines described in the CASA cookbook. We adopted the 'Perley-Butler 2017' flux density standard to set the overall flux density scale for the primary flux calibrator and then bootstrapped the secondary flux density calibrators and the target. For the historical VLA datasets, we determined the gain solutions using a nearby phase calibrator and transferred them to the target I Zw 1. For the JVLA datasets, we also determined the antenna delay and bandpass by fringe-fitting the visibilities. For the data observed after 1998, we performed an ionosphere correction using data obtained from the CDDIS archive. Deconvolution, self-calibration, and model-fitting were performed in difmap. The final images were created using natural weighting. Due to the good uv-coverage, simple emission structure, and high signal-to-noise ratio (SNR > 9), the VLA data allow for self-calibration using a well-established model. For data with lower SNR, we used three times the image noise as the upper limit for the flux density.

Astrometry of the VLBI data We measure the uncertainties of the astrometric measurements from three main origins: (1) Positional uncertainties of the phase-referencing calibrator. In phase-referencing observations, the coordinates of the target are referenced to a nearby calibrator. The calibrator J0056+1341 was selected from the catalog rfc 2022a in Astrogeo (fn. 6), which provides a precise position with accuracies of ∆α = 0.20 mas in right ascension and ∆δ = 0.44 mas in declination. (2) Astrometric accuracy of phase-referenced observations, primarily arising from station coordinate, Earth orientation, and troposphere parameter uncertainties, which can be estimated using the formula and data from Pradel et al. (2006). This contributes position errors of ∆α ≈ 0.26 mas and ∆δ ≈ 0.50 mas in the VLBA 1.5 GHz observation and ∆α ≈ 0.27 mas and ∆δ ≈ 0.47 mas in the EVN+e-MERLIN 5 GHz observation. (3) Thermal error due to random noise (e.g. Thompson et al. 1986; Rioja et al. 2017). This uncertainty can be characterized as σ_t ∼ θ_B/(2 × SNR), where θ_B is the full width at half maximum of the restoring beam and SNR is the signal-to-noise ratio. In this work, we take this value from difmap.

During the self-calibration process, the absolute coordinate position of the phase-referencing calibrator is lost and the brightest feature of the image is shifted to the phase center of the map. In general, due to the frequency-dependent shift in the peak of the optically thick component and the slightly different distribution of the radio emission at different resolutions, the brightest component may not be the same component from one observation to the next. This would induce a systematic offset between two images. The alignment between the images from two frequency observations can be done using an optically thin component as a reference, since its position is less affected by the frequency-dependent opacity effect (e.g. Marr et al. 2001; Kovalev et al. 2008; Sokolovsky et al. 2011; Fromm et al. 2013).
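A minimal sketch of how such an error budget can be combined. Adding the three terms in quadrature is an assumption made here for illustration (the text does not state how the final uncertainty is formed); the SNR and beam-axis values follow the numbers quoted above:

```python
import math

# Illustrative astrometric error budget for the VLBA 1.5 GHz right-ascension direction.
# Quadrature combination is an assumption; inputs follow the values quoted in the text.
def thermal_error_mas(beam_fwhm_mas, snr):
    # sigma_t ~ theta_B / (2 * SNR)
    return beam_fwhm_mas / (2.0 * snr)

calibrator_err_mas = 0.20                     # rfc 2022a position accuracy in R.A.
phase_ref_err_mas = 0.26                      # Pradel et al. (2006) term for VLBA 1.5 GHz
thermal_mas = thermal_error_mas(11.5, 30.0)   # ~11.5 mas beam axis, SNR ~ 30

total_mas = math.sqrt(calibrator_err_mas**2 + phase_ref_err_mas**2 + thermal_mas**2)
print(round(thermal_mas, 2), round(total_mas, 2))   # ~0.19 mas thermal, ~0.38 mas total
```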
In our VLBI observations, we used J0056+1341 as the phase-referencing calibrator, which has a flat-spectrum radio core. We obtained VLBI C-band (4.34 GHz) and X-band (7.62 GHz) data from Astrogeo. As neither the C- nor the X-band data are self-calibrated, the core shift at C-band can be directly estimated as ∆R.A. ∼ 0.69 mas, ∆DEC. ∼ −0.29 mas relative to the X-band data (see supplementary Table 3 in Appendix B). J0056+1341 also has a significant offset between the C and L bands (see supplementary Figure 1 in Appendix B) in our observations, and the offset of the VLBA L-band image is determined using the optically thin component J1 of the jet, estimated as ∆R.A. = 1.742 mas and ∆DEC. = −3.228 mas (see supplementary Table 3 in Appendix B) relative to the EVN+e-MERLIN C-band image. For the target I Zw 1, we use the position of the brightest optically thin (α = −0.86 ± 0.07) component in the tapered EVN+e-MERLIN 5 GHz image (panel c of Figure 1) to align with the VLBA 1.5 GHz image. The peak position of the 1.5 GHz image was moved in difmap to align with the 5 GHz image by ∆R.A. = 1.33 ± 0.38 mas, ∆DEC. = −0.77 ± 0.69 mas, where the positional uncertainties account for both the 1.5 and 5 GHz astrometric uncertainties of the brightest component E1. The centroid of the optical emission obtained from the second data release (fn. 7) of the Gaia mission (Gaia Collaboration et al. 2018a,b) is R.A. = 00h53m34s.933288 ± 0.000012s and DEC. = +12°41′35″.93081 ± 0.00017″ (J2000). This includes an astrometric excess noise error of 0.14 mas. Coordinates for the target are also listed in supplementary Table 3 in Appendix B.

Radio Spectrum To obtain the radio spectral index, we first checked the variability of I Zw 1. Figure 2 shows the radio flux density versus the observing epoch. The largest variability we identify is ∼8%, which is from the VLA A-array observations at C-band between epochs 1983 and 1995. Since there is no evidence for extreme variability on a time scale of ∼30 years, we plot the radio flux density versus frequency in Figure 3 using all archival data. The least-squares fitting gives an overall radio spectral index of −0.89 ± 0.10. From Figure 3, we can see that the radio flux density changes with the collection area. The radio spectral index between 1.4 and 5 GHz, using the datasets with a similar resolution (i.e., 1.3∼1.5 and 4.3∼5.3 arcsec), is −0.69 ± 0.08 and −0.61 ± 0.02, respectively. These yield flatter spectral indices than the overall fit. Given the total flux density in Section 2.5, the spectral index between VLBA 1.5 GHz and EVN+e-MERLIN 5 GHz is −1.06 ± 0.13, consistent with the overall radio spectral index.

To obtain the spectral index distribution for the high-resolution data observed with the VLBA and EVN+e-MERLIN, we created a spectral index map following the procedure described in Hovatta et al. (2014). Here, we used the uv-tapered image (tapering factor 0.5 at 20 Mλ) at 5 GHz and restored it to match the 1.5 GHz map. Similarly, the alignment of the two images is through the brightest optically thin component E1. The spectral index was calculated pixel-by-pixel between the 1.5 and 5 GHz total intensity maps. For a given frequency, pixels with an intensity less than 3σ_rms were removed. The spectral index map between 1.5 and 5 GHz is shown in the left panel of Figure 4. Both the Gaia position and component C are in the flat spectral region (α > −0.5; see also Figure 5). We study the radio spectral index distribution along the jet trajectory by estimating a ridge line of the jet. We define the jet ridge as the line that connects the peaks of one-dimensional Gaussian profiles fitted to the brightness profiles (slices) of the jet drawn orthogonally to the jet direction (see Vega-García et al. 2020).
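The two-point VLBI spectral index quoted above follows directly from the ratio of the total flux densities at the two observing frequencies (the 2.636 and 0.765 mJy uv-tapered values given in the next section); a minimal sketch:

```python
import math

# Two-point spectral index alpha, defined through S ~ nu^alpha.
def spectral_index(s1_mjy, nu1_ghz, s2_mjy, nu2_ghz):
    return math.log(s1_mjy / s2_mjy) / math.log(nu1_ghz / nu2_ghz)

# Total uv-tapered flux densities quoted in the text, at the exact observing frequencies.
print(round(spectral_index(2.636, 1.548, 0.765, 4.926), 2))   # ~ -1.07, cf. -1.06 +/- 0.13
```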
To obtain the ridge line, we performed a fitting to the tapered and restored (with a circular beam) 5 GHz EVN+e-MERLIN image. The first slice starts from the Gaia position, and a position angle of 220° (measured anticlockwise from East) was initially adopted for the jet direction. The step between individual slices was set to be smaller than the beam size (a measure taken in order to ensure the continuity of the brightness profiles).

We estimate flux density uncertainties following the instructions described by Fomalont (1999). In this work, the integrated flux densities S_i were extracted from Gaussian model-fitting in difmap, where a standard deviation in the model-fit was estimated for each component and considered as the fitting noise error. Additionally, we assign the standard 5 and 10% errors originating from the amplitude calibration of the VLBA (see VLBA Observational Status Summary 2018B, fn. 8) and EVN (e.g. Radcliffe et al. 2018) data, respectively.

The radio brightness temperature was estimated using the formula of Ulvestad et al. (2005), in which S_i is the integrated flux density of each Gaussian model component in units of mJy (Column 5 of Table 4); φ_min and φ_maj are the minor and major axes of the Gaussian model (i.e. FWHM) or of the restored beam in milliarcseconds; ν is the observing frequency in GHz (Column 2 of Table 4); and z is the redshift. The resolution limit θ_lim for Gaussian components can be estimated using the formula of Lobanov (2005), where θ_B is the FWHM of the synthesized beam, SNR is the signal-to-noise ratio, and β = 2 for natural weighting. If the fitted component size is smaller than the corresponding resolution limit, then θ_lim was used instead as the component size, and the component was identified as unresolved (or the radio-emitting region cannot be constrained). The resolution limits for each component are listed in Column 7 of Table 4, and the estimated 1.5 GHz and 5 GHz radio brightness temperatures are listed in Column 8 of Table 4. Since the measured component size is only an upper limit, the radio brightness temperature should be considered a lower limit. We also estimated the total radio flux density from a uv-tapered image, which is 2.636 ± 0.282 and 0.765 ± 0.085 mJy at 1.5 and 5 GHz, respectively, and the corresponding source angular sizes are ∼50 and ∼40 mas, respectively.

Panel (b) of Figure 1 shows the 1.5 GHz VLBA image, displaying a quasi-continuous emission structure elongated along the east-west direction with an extent of ∼45 parsec (pc). Panel (d) of Figure 1 shows a higher-resolution (beam FWHM of 3.22 × 1.14 mas) image obtained from the 5 GHz EVN+e-MERLIN observation. The bright components in the 1.5 GHz image are resolved into a series of knots in the 5 GHz image. Most of these components (except for components S and W1) have brightness temperatures > 10^7 K (Table 4), and the whole radio-emitting structure has an overall steep spectrum (Figure 3), which favors a jet origin and is unlikely to arise from star-forming activities (Condon et al. 1991) or from thermal free-free radiation of the hot molecular disc surrounding AGNs (Gallimore et al. 1997). Furthermore, based on the identification of the optical and radio cores (see below), the bilateral radio structures in the 5 GHz EVN image (panel d of Figure 1) are consistent with an assembly of approaching and receding jets. Correspondingly, the brightness asymmetry between the two branches of the bilateral structures indicates the Doppler boosting effect of relativistic jets.
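The brightness-temperature and resolution-limit equations referenced above were lost in extraction; the sketch below uses the forms commonly attributed to Ulvestad et al. (2005) and Lobanov (2005), so the exact coefficients and the circular-beam simplification are assumptions rather than quotations from this paper:

```python
import math

# Commonly used forms of the two quantities discussed above (assumed, not quoted).
def brightness_temperature_k(s_mjy, nu_ghz, phi_maj_mas, phi_min_mas, z):
    # T_B = 1.8e9 (1 + z) S / (nu^2 phi_maj phi_min), with S in mJy, nu in GHz, phi in mas.
    return 1.8e9 * (1.0 + z) * s_mjy / (nu_ghz**2 * phi_maj_mas * phi_min_mas)

def resolution_limit_mas(beam_fwhm_mas, snr):
    # Lobanov (2005) limit for a circular beam with natural weighting (beta = 2).
    return beam_fwhm_mas * math.sqrt((4.0 * math.log(2.0) / math.pi) * math.log(snr / (snr - 1.0)))

# Illustrative numbers only (not from Table 4): a 0.5 mJy, 1 x 1 mas component at 5 GHz.
print(f"{brightness_temperature_k(0.5, 4.926, 1.0, 1.0, 0.0589):.1e} K")   # ~3.9e7 K
print(round(resolution_limit_mas(3.22, 14.0), 2), "mas")                   # ~0.8 mas
```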
The jet base of an AGN typically has a flat radio spectrum due to synchrotron self-absorption in the optically thick region (α > −0.5, fn. 9). The radio core of I Zw 1 is likely close to component C because it is located in a relatively flat spectral region (see Figure 4), has a high radio brightness temperature of log (T_B/K) > 7 at both 1.5 and 5 GHz (see Table 4), and is roughly associated with the Gaia position. The 5 GHz image further resolves component C and measures a more accurate position of its peak, which is slightly offset from the Gaia position (see panel d of Figure 1 and Figure 5). Indeed, it has already been shown that there are significant offsets between VLBI and Gaia positions in AGNs (Petrov & Kovalev 2017; Kovalev et al. 2017; Plavin et al. 2019). Interestingly, the offset between component C and the Gaia position is consistent with the observations of Seyfert I galaxies (i.e. the Gaia positions are offset upstream relative to the VLBI positions, see Plavin et al. 2019). Therefore, the VLBI/Gaia offset in I Zw 1 can be attributed to the dominance of the accretion disk in the optical band (the Gaia position), while the VLBI observations instead trace the emission of the jets (Plavin et al. 2019).

Based on both the radio and optical cores, there is an obvious spectral steepening along the eastern jet, while it is less prominent in the western jet (see the right panel of Figure 4). This property resembles canonical jets in blazars and can be explained by radiative losses of the synchrotron-emitting plasma or by the evolution of the high-energy cutoff in the electron spectrum (Hovatta et al. 2014).

Is the ejection process in I Zw 1 episodic? Component C is most likely the closest component to the radio core; however, the spectrum of C itself is too steep (as a reference, the spectral index at the peak position of component C is roughly −0.4 from the spectral index map, see Figure 5) to be from the nucleus of I Zw 1. Taking the integrated flux density of C at 1.5 and 5 GHz (see supplementary Table 4 in Appendix B), we can estimate the 1.5-5 GHz spectral index of component C as −0.88 ± 0.11. Recalling the offset between the VLBI position of component C and the Gaia position of the optical nucleus, the natural explanation of component C is nascent ejecta.

Interestingly, the positions of the local brightness enhancements (the knots E1 and W1) are symmetric (see Figure 6), and components E1 and C seem to be embedded in a continuous background of the jet stream, mimicking discrete blobs/knots. On the origin of the knots, several works envisage them as shocks traveling along the jets (e.g. Hovatta et al. 2014), while other works interpret them as discrete blobs from episodic ejection (e.g. Shende et al. 2019). We note that the relatively flatter spectra around the Gaussian components (C and E1) in Figure 4 do imply shock acceleration in the discrete blobs (C and E1) in I Zw 1 (see Hovatta et al. 2014). However, it seems that the heavily knotted jets in I Zw 1 (compared with the jets in blazars, Hovatta et al. 2014) are inconsistent with the only mild flattening in the spectrum, i.e. the spectral-index distribution shows only plateaux rather than clear bumps (see Hovatta et al. 2014).
This discrepancy may be indicative of an episodic scenario. Indeed, the lack of a bright radio core and the steep spectrum of the nascent ejecta C both support the episodic ejection scenario. Observationally, there is also growing evidence that highly accreting AGNs tend to launch episodic jets (Yao et al. 2021; Yang et al. 2022a,b,c, 2023).

On the physical interpretations of the complex jets By summing the radio flux density along the jet trajectory, i.e. excluding only component S, the measured 5 GHz jet luminosity of I Zw 1 is log L_5GHz = 38.547 ± 0.003 erg s^-1. Taking the X-ray luminosity of log L_2-10keV = 43.65 erg s^-1 measured in 2020 (Wilkins et al. 2021), the radio-to-X-ray luminosity ratio of I Zw 1 is L_R/L_X = 10^-5.102, suggesting that the jet originates as corona ejection from the accretion disk (Yuan et al. 2009), i.e. L_R/L_X ∼ 10^-5 (Panessa et al. 2019; Yang et al. 2020), consistent with the interpretation of the X-ray behavior (Gallo et al. 2007; Wilkins et al. 2017). Models and observational evidence suggest that episodic jets in both AGNs and microquasars are ejected from the accretion disc corona (see Shende et al. 2019, and references therein).

The bilateral morphology and the linear size of ∼45 pc (the tapered 5 GHz image in panel c of Figure 1 and Figure 6) are reminiscent of I Zw 1 being a compact symmetric object (CSO, O'Dea & Saikia 2021). However, the lack of a spectral peak at GHz frequencies (Figure 3) alternatively implies that it belongs to the compact steep-spectrum (CSS) radio sources. Assuming minimum energy (approximately equipartition) conditions in the synchrotron emission, we can estimate the magnetic field B_min of I Zw 1 in units of Gauss (G) through the formula of Miley (1980) (see also Patil et al. 2022), where S_i (in mJy) is the integrated flux density of the source measured at frequency ν (in GHz) and angular size θ (in mas), α is the spectral index, z is the redshift of the source, and r is the comoving distance in Mpc. Here we take p = 0.5, the overall spectral index α = −0.89 ± 0.10 from ν_1 ∼ 1 GHz to ν_2 ∼ 15 GHz, a filling factor for the relativistic plasma f_rl = 1, and a relative contribution of the ions to the energy a = 2. By adopting the standard ΛCDM cosmology and using the cosmology calculator provided by NED (fn. 10), r = 245.7 Mpc in this case. Here we use the integrated flux density of 2.636 ± 0.282 mJy and the angular size of 50 mas from the tapered VLBA 1.5 GHz image, which yields an overall magnetic field B_min ≈ 10^-3.6 G. This magnetic field is consistent with the typical values of, e.g., CSOs with peaked spectra (PS, B ∼ 10^-3 G) and CSS (B ∼ 10^-4 G) radio sources (O'Dea 1998). On the other hand, the total radio power of I Zw 1 is only 10^21.8 W Hz^-1 (from the 5 GHz EVN+e-MERLIN flux density of the jets, see above), which is at least one order of magnitude lower than that of the typical samples of PS and CSS sources (O'Dea & Saikia 2021). However, the radio power of I Zw 1 remains higher than that of our recently discovered CSO/PS in NGC 4293 (∼10^20 W Hz^-1, see Yang et al. 2022c), which holds the current record for the lowest radio power. Simultaneously, the magnetic field at component E1 can be estimated as B_min ≈ 10^-2.9 G using the model-fitting results from either the 1.5 GHz VLBA or the 5 GHz EVN+e-MERLIN observations.
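The radio-to-X-ray ratio above is simple logarithmic arithmetic on the two quoted luminosities; a one-line check:

```python
# Radio-to-X-ray luminosity ratio from the values quoted above.
log_l_5ghz = 38.547   # erg/s, summed 5 GHz jet luminosity
log_l_x = 43.65       # erg/s, 2-10 keV X-ray luminosity (Wilkins et al. 2021)
print(f"L_R / L_X = 10^{log_l_5ghz - log_l_x:.3f}")   # 10^-5.103, i.e. at the ~1e-5 coronal level
```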
As the probable spectral turnover of I Zw 1 must lie below 1 GHz, we can estimate the lowest allowed electron lifetime via the formula of O'Dea (1998), where B is the magnetic field in G, B_R ≈ 4(1 + z)^2 × 10^-6 G is the equivalent magnetic field of the microwave background, and ν_p is the break frequency in GHz. For I Zw 1, using the values estimated above, i.e. B ≈ 10^-3.6 G and ν_p < 1 GHz, we find that the electron lifetime is substantially > 10^6.3 years. Similarly, the estimated electron lifetime of component E1 is > 10^5.2 years. Taking component E1 as a reference (15 mas away from the core) and assuming a typical knot advance speed of 0.1 c for PS/CSS sources (O'Dea & Saikia 2021), the ejection time scale of component E1 is only ∼550 years, smaller than its electron lifetime estimated above; this suggests that the knot advance speed in I Zw 1 should be substantially slower than 0.1 c. In stellar-mass XRBs, discrete radio blobs are produced in the 'very high state' or 'super-Eddington state' on time scales spanning from a few days to one year (to the best of our knowledge, see Margon & Anderson 1989; Mirabel & Rodríguez 1994; Hjellming & Rupen 1995; Fender et al. 1999; Tudose et al. 2007; Joseph et al. 2011; Miller-Jones et al. 2012, 2019; Bright et al. 2020). The longer ejection time scale in I Zw 1 seems consistent with its nucleus being more massive than those of XRBs, i.e. the ejection time scale scales with the mass of the accretor, which supports a scale-invariant ejection process in both XRBs and AGNs.

Component S in the VLBA 1.5 GHz image is a real structure (panel b of Figure 1), as it is also identified in the 1.5 GHz tapered map (panel a of Figure 1). The 5 GHz EVN image fails to detect component S, possibly due to the loss of large-scale and diffuse emission. The radio emission at component S is likely responsible for the flux density deficit at 5 GHz (see supplementary Figure 3 in Appendix B). Furthermore, the southern bump in the VLA image (Figure 2) mimics component S, which requires further identification. Interestingly, component S clearly deviates from the jet trajectory, because the jet is along the East-West direction and extends up to the 0.5 arcsec scale (see Figure 2). Here we interpret the bending of component S as the result of jet-medium interactions. Given that the jet direction (extending up to ∼0.5 parsec) is nearly aligned with the kpc-scale molecular disk (Tan et al. 2019; Shangguan et al. 2020) in I Zw 1, a jet-disk interaction is plausible. On the other hand, I Zw 1 hosts strong multi-scale wide-angle outflows: an ultrafast wind-like outflow with a velocity of > 0.25 c obtained from the fitting of the iron K line profile (Reeves & Braito 2019), an ionized ultraviolet gas outflow with a velocity of 1870 km s^-1 (Laor et al. 1997), an X-ray outflow with velocities of ∼2000 km s^-1 (Costantini et al. 2007; Silva et al. 2018), and a neutral gas outflow with a velocity of 45 km s^-1 (Rupke et al. 2017). Given that the wide-angle outflows tend to be perpendicular to the kpc-scale molecular disk (for example, the neutral gas outflow of I Zw 1 is along the North-South direction; see Rupke et al. 2017, their Figure 13), component S could also result from a jet-wind collision.
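The ∼550 yr ejection time scale is straightforward arithmetic on the quoted separation, angular scale, and assumed knot speed; a minimal check (c ≈ 0.3066 pc per year is the only extra constant introduced here):

```python
# Ejection time scale of knot E1 for an assumed advance speed of 0.1 c.
PC_PER_LIGHT_YEAR = 0.30660      # 1 light-year in parsec
separation_pc = 15.0 * 1.125     # E1 lies ~15 mas from the core; 1 mas = 1.125 pc at I Zw 1
speed_pc_per_yr = 0.1 * PC_PER_LIGHT_YEAR
print(round(separation_pc / speed_pc_per_yr))   # ~550 years
```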
SUMMARY In summary, AGNs with near or super-Eddington accretion rates are often discussed as scaled-up versions of stellar-mass black holes in the 'very high state' or 'ultraluminous state'. The observational evidence for episodic jets in I Zw 1 indicates that the analogy between AGNs and XRBs might also hold in this extreme state. Our observations imply that near or super-Eddington and extremely radio-quiet AGNs can also launch short-lived, small-scale, and weak jets. In the Supplementary section, we present further analysis of the radio emission in I Zw 1, which sheds light on the long-standing question about the origin of radio emission in radio-quiet AGNs. Such a population is important for building our understanding of jet-disc coupling in near or super-Eddington states. There is only one Galactic microquasar (SS 433) that exhibits long time-scale, super-Eddington behavior and quasi-continuous ejection (Fabrika 2004). Only a few XRBs evolve into a near/super-Eddington state and are associated with episodic jets (Revnivtsev et al. 2002; Fabrika 2004; Done et al. 2004; Miller-Jones et al. 2019), but this phase is short-lived. Finally, the radio emission from a handful of (extragalactic) ultraluminous X-ray sources is too weak to be detected (Kaaret et al. 2017). The time scale of the super-Eddington state in AGNs is longer than that of the canonical 'very high state' in XRBs according to the scaling relation, and the radio luminosity of the super-Eddington state in AGNs is higher than that of stellar-mass black holes according to the relation L_R/L_X ∼ 10^-5 (Panessa et al. 2019; Yang et al. 2020), which is essential for testing the generic model of jet-disc coupling for all near/super-Eddington systems. Our findings here may indicate common features of high-Eddington AGNs, because there are similar knots/discrete radio blobs in the jets and corresponding transverse/bending structures in other sources, e.g. NGC 4051 (Giroletti & Panessa 2009) with an Eddington ratio of 0.2 (Yuan et al. 2021) and Mrk 335 (Yao et al. 2021) with an Eddington ratio of 0.48-3.11 (Yang et al. 2020).
Further observational and theoretical studies will be necessary to establish a comprehensive understanding of the outflows of near/super-Eddington AGNs, and the results obtained from this work may contribute to this goal.

This work is supported by the National Science Foundation of China (12103076, 11721303, 11991052)

Figure 3 shows that I Zw 1 has a power-law radio spectrum over the entire observed frequency range. This indicates the dominance of synchrotron radiation, and it is not significantly affected by the size of the collection areas (see supplementary Figure 3 in Appendix B), where the intercept between the L-band and C-band lines can be regarded as the spectral index. The overall spectral index is −0.89 ± 0.10 and there is no significant decrease at higher frequencies, indicating a continuous replenishment of fresh electrons (Morganti 2017). Interestingly, the spectral index at the jet edge farthest from the core is ∼ −1.8 (Figure 4). This is inconsistent with the overall spectral index estimated for the larger area but suggests a non-jet origin, because the spectral index decreases along the jet trajectory. The large-scale flux density is dominated by diffuse radio emission, with only a fraction coming from the (parsec-scale) core region (which accounts for only ∼30% and ∼47% of the radio emission from the 60 kpc and ∼1.54 kpc scale regions at 1.5 GHz, respectively). The distribution of radio flux density can be fitted as S_L = (0.85 ± 0.18) r^(0.212±0.020) and S_C = (0.88 ± 0.02) r^(0.131±0.005), where S_L and S_C are the L- and C-band flux densities in mJy, and r is the angular size in mas (see supplementary Figure 3 in Appendix B). The VLBA 1.5 GHz emission satisfies the flux density versus collection area distribution, while the 5 GHz flux density from our EVN+e-MERLIN observation is underestimated due to the loss of large-scale emission.

Both star-forming activities and relativistic winds can produce large-scale radio emission (Panessa et al. 2019). Here, the star-forming activities preferentially refer to supernovae or supernova remnants, owing to the power-law spectrum. Assuming all of the radio emission is from star-forming activities, we can estimate the star formation rate (SFR) from the radio emission by using the SFR-radio relation (formula 3 in Yang et al. 2020). The largest SFR can be obtained from the datasets: NVSS at 1.4 GHz, AE0022 at 1.4 GHz and 4.86 GHz, and AA0048 at 14.94 GHz. These yield an SFR of ∼20 M_⊙ yr^-1, which is similar to other estimates (Molina et al. 2021) of ∼26 M_⊙ yr^-1, suggesting that the large-scale radio emission can be entirely due to star-forming activities. Whilst the SFR-radio relation is crude and we cannot fully rule out the contribution of a wind-like outflow, the radio-emitting wind at large scales (a few kiloparsec) is negligible. In addition, a radio-emitting wind is still possible at intermediate scales (tens of parsec), as there are no compact supernovae or supernova remnants detected in our VLBI and e-MERLIN observations (e.g. Fenech et al. 2008).
Figure 1. 1.5 and 5 GHz VLBI images of I Zw 1. All the images are produced with natural weight and the map reference is at the Gaia position. The restoring beams are displayed as grey ellipses in the lower-left/right corner of each panel, which are 19.9 × 14.9, 11.5 × 4.67, 9.27 × 5.95 and 3.22 × 1.14 mas from panel (a) to (d), respectively. The contours are at 3σ × (−1, 1, 1.41, 2, 2.83, . . .), where positive contours are white and black solid and negative ones are red dashed. From panel (a) to (d), the image peak is 1.66, 0.88, 0.24, and 0.10 mJy/beam, respectively, and the rms noise is 0.06, 0.025, 0.008 and 0.007 mJy/beam, respectively. The circles with crosses inside indicate the corresponding Gaussian models. In panels (b) and (c), the red stars mark the Gaia optical positions. The uncertainty in the Gaia position is ∆α = 0.18 and ∆δ = 0.17 mas, which includes an astrometric excess noise error of 0.14 mas. At the redshift of I Zw 1, 1 mas corresponds to 1.125 pc.

Figure 2. The radio light curves of I Zw 1 over a time interval of 37 years. The integrated radio flux densities and their uncertainties are taken from Table 2, where the data with the same observing band (approximately equal central frequencies) and arrays/sub-arrays are concatenated to show the variability.

Figure 3. Wide-band radio spectrum of I Zw 1. The integrated radio flux density measurements of I Zw 1 in five radio frequency bands between 1.4 and 15 GHz are shown, where the flux density and uncertainties are taken from Table 2. The blue dashed line is the model-fitting result with a power-law spectrum using all the data points presented here. The power-law slope (spectral index) is −0.89 and the blue belt shows the 95% confidence interval (0.10). The green and red dashed lines show the power-law fitting between 1.4 and 5 GHz datasets with similar size scales, i.e. 1.3∼1.5 (arcsec, red) and 4.3∼5.3 (arcsec, green), respectively. The green and red belts indicate their 95% confidence intervals.

Figure 4. 1.5-5 GHz spectral index distribution of I Zw 1 on the parsec scale. Left panel: the spectral index map produced by using the naturally weighted clean map at 1.5 and 5 GHz. The region with radio flux density below 3σ was set as blank (white), i.e.
the outer region of the red (for 1.5 GHz) and black (for 5 GHz) curves. Radio spectral indices within both red and black curves are reliable. The black dots and the grey line indicate the ridgeline obtained from the tapered 5 GHz EVN+e-MERLIN image. The red stars indicate the centroid positions of the Gaussian components E1 and C from the 5 GHz image. Right panel: the spectral index distribution along the ridgeline. A positive radius corresponds to positive right ascension coordinates, and vice versa. The Gaia position is set as the reference. The grey belt marks the uncertainty of the spectral indices along the ridgeline. The red stars mark the locations of the Gaussian components E1 and C from the 5 GHz image. In both panels the blue asterisks indicate the Gaia position.

Figure 5. Comparison of positions between Gaia DR2 and the VLBI component C in the spectral index map. Here only the flat spectral region (α > −0.5) is shown. We take the 3σ position error of component C here, where the 1σ position error is estimated through the method described in Section 2.3.

Figure 6. 5 GHz flux density distribution along the jet ridge line for I Zw 1. The data points with positive (blue) and negative (red) radii are the approaching and receding jets, respectively. Positional uncertainties were directly measured in fitting the ridge line in Section 2.4 and flux uncertainties are estimated by accounting for thermal noise errors and calibration uncertainties (see Section 2.5).

Note — Column 1: component name; Column 2: frequency; Columns 3-4: right ascension and declination offsets relative to the Gaia DR2 position; Column 5: integrated flux density; Column 6: the angular size of components from a Gaussian model-fit; Column 7: resolution limit along the major- and minor-axis directions of the synthesized beam; Column 8: lower limit of the radio brightness temperature.

Figure 1. Model-fitting images of the phase calibrator J0056+1341 at 1.5 GHz (panels a and b) and 5 GHz (panel c). The images are produced using a two-dimensional Gaussian model fit with natural weights. The contours are plotted as 3σ × (−1, 1, 2, 4, 8, . . .), where σ is the root mean square (rms) noise. The white solid curves represent positive values and the red dashed curves represent negative values. The rms noise is 0.2 mJy/beam for both 1.5 and 5 GHz images. The model-fitting components are superimposed as yellow circles. The grey ellipses in the bottom left corner of each panel represent the full width at half-maximum (FWHM) of the restoring beam. The grey lines between panels c and b indicate the corresponding components without the core-shift effect, i.e. the optically thin components.

Figure 2. VLA A-array 8.4 GHz image of I Zw 1. The Gaia DR2 position of I Zw 1 is set as the map center. The image peak is 0.798 mJy beam^-1. The contours are at 3σ × (−1, 1, 2, 2, 4, 8, . . .) and 1σ = 0.038 mJy beam^-1, where positive contours are white and negative ones are red dashed. The FWHM of the restoring beam is 0.258 × 0.245 arcsec at 12.7° and displayed as grey ellipses in the lower-left corner. The red arrow marks a possible southern bump.
Figure 3. The radio flux density of I Zw 1 over a collection area range from ∼0.04 to ∼50 arcsec. The integrated radio flux densities and uncertainties of I Zw 1 in the L and C bands are shown (Table 2). As I Zw 1 is not resolved in the given observations, the synthesized beams are taken to represent the collection area. The dashed lines and belts show power-law fittings and 95% confidence intervals, respectively, in the L (red) and C-band (green).

(Grant No. 2022SKA0120102), and the science research grants from the China Manned Space Project with No. CMS-CSST-2021-A06. Scientific results from data presented in this publication are derived from the EVN project EY037 and the VLBA project BY145. The European VLBI Network (EVN) is a joint facility of independent European, African, Asian, and North American radio astronomy institutes. e-MERLIN is a National Facility operated by the University of Manchester at Jodrell Bank Observatory on behalf of STFC. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.

Table 1. Statistical parameters for different image deconvolution algorithms.
Table 2. Summary of historical observations and results for I Zw 1.
Table 4. Model-fitting results of the radio components detected in I Zw 1 with the VLBA 1.5 GHz and EVN+e-MERLIN 5 GHz observations.
10,784.2
2023-05-21T00:00:00.000
[ "Physics" ]
The Role of H. pylori CagA in Regulating Hormones of Functional Dyspepsia Patients Helicobacter pylori (H. pylori, Hp) colonizes the stomachs of approximately 20%–80% of humans throughout the world. The World Health Organization (WHO) classified H. pylori as a group 1 carcinogenic factor in 1994. Recently, an increasing number of studies has shown an association between H. pylori infection and various extragastric diseases. Functional dyspepsia (FD) is considered a biopsychosocial disorder with multifactorial pathogenesis, and studies have shown that infection with CagA-positive H. pylori strains could explain some of the symptoms of functional dyspepsia. Moreover, CagA-positive H. pylori strains have been shown to affect the secretion of several hormones, including 5-HT, ghrelin, dopamine, and gastrin, and altered levels of these hormones might be the cause of the psychological disorders of functional dyspepsia patients. This review describes the mutual effects of H. pylori and hormones in functional dyspepsia and provides new insight into the pathogenesis of functional dyspepsia.

Introduction Functional dyspepsia (FD), a very common condition that impairs quality of life, is a relapsing and remitting disorder with various chronic symptoms referable to the gastroduodenal region, including typical abdominal bloating or pain, early satiety, belching, heartburn, and nausea in the absence of organic or metabolic disease [1]. There are several diagnostic categories for FD based on the Rome III criteria, which are epigastric pain syndrome (EPS), postprandial distress syndrome (PDS), and a combination of these symptoms. The pathogenesis of FD is considered to be multifactorial or even a biopsychosocial disorder that causes abnormal gastrointestinal motility, visceral hypersensitivity, vagal dysfunction, and probable central nervous system disturbance [2]. Currently, Helicobacter pylori (H. pylori, Hp) infection is considered the major cause of the chronic gastric inflammation of FD patients [3]. Gastric inflammation has been shown to affect motor function and visceral sensitivity in experimental models. H. pylori strains that express CagA may be responsible for the FD associated with the more severe forms of gastritis. It was reported that CagA-positive H. pylori strains induced more dyspeptic symptoms than CagA-negative or H. pylori-negative strains in patients with FD [4]. Some of these functional symptoms can be explained by a gut-driven brain disorder [5], and in vivo hormonal changes have also been implicated. However, no definite clinical manifestation is linked to infection with CagA-positive H. pylori strains or to fluctuating levels of hormones in FD patients. This review discusses the possible correlation between an infection with CagA-positive H. pylori strains and the levels of several hormones in FD patients.

of localized cells. Cytotoxin-associated protein A (CagA) is injected directly into the host epithelial cells via the type-four secretion system (T4SS), which is encoded by the Cag pathogenicity island (PAI) of H. pylori type I strains and is associated with the development of gastric cancer. In the case of gastric MALT lymphoma, CagA was translocated into B-lymphoid cells and promoted their proliferation, possibly through the CagA-mediated proteins SHP-2, ERK, and MAPK, and increased the levels of Bcl-2 and Bcl-xL [6]. The relationship between infection with CagA-positive H. pylori strains and a higher risk of peptic ulcers and gastric adenocarcinoma in humans is widely known [7].
CagA is phosphorylated by host kinases, which alters cell signaling and various cellular responses involved in inflammation. Multiple oncogenic pathways are activated by CagA, such as the Ras/Erk, PI3K/Akt, and Wnt/beta-catenin pathways. Infection with CagA-positive H. pylori strains is the main factor driving the hyperactivity of the PI3K/Akt signaling pathway in gastric cancer, which is due to CagA-induced activation of the PI3K/Akt pathway, the representative downstream MEK/ERK pathway, and the nuclear factor-kappaB (NF-kB) signaling pathway, which subsequently induces the nuclear translocation of beta-catenin [8]. As observed in human gastric mucosae infected by CagA-positive H. pylori, CagA activates the Wnt/beta-catenin signaling pathway and induces beta-catenin transcription [9]. These cellular responses and inflammation may be responsible for the increase in the level of p53. The level of this key tumor suppressor is increased following infection with CagA-positive H. pylori strains and decreases rapidly upon H. pylori eradication [10], whereas a continuous bacterial infection causes a persistently high level of p53. This phenomenon may be driven by the DNA damage related to inflammatory processes [11]. In addition, H. pylori is known to activate the NF-kB signaling pathway. IkappaB kinase alpha (IKK alpha) is a critical regulator of NF-kB activity, and H. pylori induces the nuclear translocation of IKK alpha, which is indispensable for an inflammatory response, in a Cag PAI-dependent manner [12]. A study explained that CagA could activate the NF-kB signaling pathway and induce the downstream release of IL-8 via the MEK/ERK signaling pathway [13]. Infection with CagA-positive H. pylori strains promotes inflammatory processes that result in neoplastic transformation [14]. The inflammatory response associated with CagA-positive H. pylori gastritis is due to the upregulated expression of proinflammatory cytokines, including tumor necrosis factor (TNF)-alpha, interferon gamma, interleukin (IL)-1beta and IL-8, and, particularly, IL-17, a key element in the inflammation caused by H. pylori, which mediates the activation of polymorphonuclear neutrophils [15]. Furthermore, IL-17 generally causes IL-8 secretion through the ERK 1/2 MAP kinase pathway [16]. Both the CagA protein and the Cag PAI have been shown to activate the NF-kB signaling pathway, which increases the level of IL-8 expression, but the role of CagA in regulating NF-kB activation is still unclear. Recently, a study demonstrated that NF-kB activation and IL-8 release require the Cag PAI-encoded T4SS [17]. H. pylori eradication was shown to increase the platelet count in patients with immune thrombocytopenic purpura (ITP). The pathogenesis of ITP due to H. pylori infection is probably associated with variable host immune responses to VacA and CagA [19]. Recent guidelines indicated that iron-deficiency anemia (IDA) patients should be evaluated for an H. pylori type I strain infection because this bacterium can induce IDA through several mechanisms [20]. Active hemorrhaging caused by CagA-positive H. pylori gastritis or ulcers is known to be due to CagA increasing the level of transferrin, thus affecting iron acquisition [21]. Tamer et al. [22] have suggested that H. pylori may cause atherogenesis through persistent inflammation, and another possibility for this connection is molecular mimicry by CagA. The persistence of serum CagA antibodies now appears to be predictive of Parkinson's disease with a poor prognosis; the proposed possible mechanism by which H.
pylori causes this pathology is that it triggers mitochondrial damage and autoimmunity [23]. Moreover, H. pylori-like DNA is more commonly found in liver samples from chronic liver disease patients than from controls [24]. An H. pylori infection is positively correlated with metabolic syndrome and is inversely correlated with morbid obesity and type 2 diabetes mellitus (T2DM). The rate of seropositivity is higher in patients with metabolic syndrome than in healthy subjects [25]. Another study demonstrated that more than 75% of gallbladder cancer patients and 50% of chronic cholecystitis patients harbored H. pylori in the bile and gallbladder. An H. pylori infection was shown to aggravate gallbladder mucosal lesions and even lead to a potentially precancerous condition [26]. H. pylori-based gastritis is responsible for a higher risk of colorectal polyps, particularly dysplastic adenomas [27]. The findings showed that root canals may be a reservoir for H. pylori and a potential source for its transmission [28]. Whether dental plaque is a primary source of H. pylori infection of the gastric mucosa of patients with poor oral hygiene needs to be confirmed [29]. The extragastric manifestations of H. pylori infections suggest that the mechanism through which H. pylori CagA causes disease is complex and diverse rather than a single pathophysiological mechanism.

2.3. Functional Dyspepsia. H. pylori is generally accepted as the main pathological agent in the occurrence of functional dyspepsia [30]. The H. pylori CagA protein is associated with the development of functional gastrointestinal disorders (FGIDs). Although the role of CagA in peptic ulcer disease and gastritis is established, its role in functional dyspepsia is controversial. FD patients who are H. pylori positive have no clear clinical manifestations, and the effect of H. pylori eradication is contradictory in these patients. A retrospective study showed that patients with CagA-positive H. pylori strains have a higher symptom score and more dyspeptic symptoms than patients with CagA-negative or H. pylori-negative functional dyspepsia [31].

Hormone Level Changes after H. pylori Colonization. An H. pylori infection could induce fluctuations in the levels of serotonin (5-HT), ghrelin, dopamine, cortisol, and other hormones in the circulatory system, resulting in damage to various systems, including the central nervous system, and the occurrence of the corresponding symptoms (Figure 2).

5-Hydroxytryptamine (5-HT, Serotonin). Psychological disorders, such as anxiety or depression, have been reported to be associated with FD [32]. The results of a population-based investigation suggested that anxiety worsens the symptoms of FD [33]. A 12-year prospective population-based study found that people with higher levels of depression were significantly more likely to develop FD after 12 years [5]. Social anxiety disorder is associated with an overactive presynaptic serotonin system, increased serotonin synthesis, and increased transporter availability. The study by Harmer provided the insight that serotonin pathways may influence the mood of patients with depression by altering how the brain appraises emotional information at an implicit level [34]. Increasing evidence supports a close relationship between 5-hydroxytryptamine (5-HT, serotonin) and gastrointestinal motility and visceral hypersensitivity.
Serotonin is synthesized through tryptophan hydroxylase-1 (TPH1) and TPH2, which are found in EC cells and neurons, respectively, and is inactivated by its uptake into enterocytes or neurons via the serotonin reuptake transporter (SERT). Approximately 90% of 5-HT in the body is synthesized in the gut, which has 14 different 5-HT receptor subtypes. The 5-HT3A receptor and the 5-HT2A receptor are associated with dyspeptic symptoms, while 5-HT4 receptor agonists may improve dyspeptic symptoms, particularly delayed gastric emptying [35]. In Japan, a 5-HT transporter gene polymorphism was found to be associated with dyspepsia [36]. Figure 2: Effects of gastrointestinal hormone levels on mental disorders after H. pylori colonization. CagA-positive H. pylori strains can induce fluctuations in the levels of serotonin (5-HT), ghrelin, dopamine, and cortisol, and might cause some dyspeptic symptoms and mental disorders through the blood circulation and the brain-gut axis. Abnormal levels of 5-HT have been reported in irritable bowel syndrome (IBS) patients. Raised plasma 5-HT levels were particularly found in female IBS-diarrhea patients, whereas reduced levels were found in IBS-constipation patients. Ahern [37] demonstrated that FD patients had significantly lower preprandial and postprandial plasma serotonin levels compared with those of healthy subjects. Decreased 5-HT levels may impair gastric accommodation or cause visceral hypersensitivity. A significant relationship between postprandial plasma serotonin levels and postprandial dyspepsia scores has been observed, which also indicates that serotonin plays a role in dyspeptic-symptom pathogenesis [37]. However, the role of 5-HT in regulating gastrointestinal tract function is imperfectly understood. Functional dyspepsia (FD) and IBS have been proposed to have a common pathogenesis. Nevertheless, little is known about the role of 5-HT in FD, which is largely due to the presence of various types of 5-HT receptors in the gastrointestinal tract and the absence of suitable and selective antagonists. Serotonin has been demonstrated to affect many immunological processes and to increase or decrease the levels of proinflammatory cytokines [38]. Chemokines and signaling pathways active during inflammatory processes can affect the synthesis and degradation of serotonin. Stone and Darlington [39] discovered that NF-kB signaling pathway activation increased the rate of release of 5-HT via the phosphorylation of TPH1. It is known that IL-1, IL-2, IL-6, and IFN cause the degradation of tryptophan (TRP), the precursor of serotonin, by activating the enzyme indoleamine 2,3-dioxygenase (IDO), and that a decreased level of TRP in the blood leads to a decreased level of 5-HT in the brain. The NF-kB signaling pathway that is involved in gene transcription participates in regulating inflammatory cytokines and plays a key role in the immune response, inflammation, and cell apoptosis in gastrointestinal diseases. Moreover, the increased expression of inflammatory cytokines in turn activates NF-kB. These results suggest a potential relationship between NF-kB activation and the 5-HT level in an infected host. Because serotonin is a weak platelet agonist, a number of studies have shown an association between severe upper gastrointestinal bleeding in H. pylori infection and the use of selective serotonin reuptake inhibitors (SSRIs). This effect depends on the release of 5-HT by platelets, which acquire serotonin from the blood.
H. pylori eradication therapy of patients with upper gastrointestinal bleeding was reported to reduce their rebleeding rates [40]. Although 5-HT and H. pylori are both related to gastric disease, this association requires further study. 2.6. Ghrelin. Ghrelin, which is produced by gastric enteroendocrine X/A-like or G cells and is acylated by ghrelin O-acyltransferase (GOAT) before being released into the blood, plays a crucial role in gastric motility, appetite regulation, and acid secretion. Many gastrointestinal disorders involving inflammation, infection, and malignancy are also associated with altered ghrelin production and secretion. Age, lactation, sex hormones, and the expression of mRNA encoding GOAT, a critical enzyme for ghrelin activity, can impact ghrelin secretion. A lower level of gastric ghrelin mRNA expression was observed in patients with an H. pylori infection compared to that in uninfected subjects [41]. Moreover, due to damage to the gastric X/A-like cells (which produce ghrelin), H. pylori-infected patients also have a decreased serum ghrelin level. Conversely, the levels of both serum ghrelin and ghrelin mRNA expression can rebound following H. pylori eradication therapy, with the consequent relief of dyspeptic symptoms [42,43]. However, some researchers failed to find significant changes in ghrelin levels after H. pylori eradication [44]. They demonstrated that H. pylori infection of the stomach did not significantly affect the ghrelin levels. A study of mice found that the normal gut microbiota, independent of H. pylori, had a significant effect on ghrelin levels [45]. They also found that the presence of H. pylori did not upregulate leptin (a satiety hormone) production or decrease ghrelin secretion in the absence of the normal gastrointestinal microbiota, whereas when H. pylori colonized the gut together with the normal microbiota, the opposite result was observed. An abnormally low level of ghrelin has been found in FD patients, particularly in those with PDS, compared with the level in healthy subjects. Moreover, the plasma ghrelin level is associated with the severity of the symptoms in patients with FD [46]. Paoluzi et al. [47] also reported recently that female patients with FD had lower fasting and postprandial ghrelin levels and that the abnormal ghrelin response was apparently involved in their meal-related symptoms. Although it is unclear whether altered ghrelin levels are the cause or the result of dyspepsia, these data imply a possible role for ghrelin in the pathogenesis of FD in H. pylori-positive patients. The gastritis induced by an H. pylori infection is predominantly related to T helper (Th)1/Th17 cell immunity. Ghrelin suppresses Th cell-dependent pathology. The downregulated level of ghrelin in the gastric mucosa of H. pylori-infected patients might promote an ongoing Th1-cell response and chronic active gastritis [48]. B. L. Slomiany and A. Slomiany [49] reported the role of phosphatidylinositol 3-kinase (PI3K) in digestive tract mucosa infected with H. pylori, demonstrating that the modulation of ghrelin as a gastric mucosal response to H. pylori depends on PI3K activation. The ghrelin receptor is highly specific, which means that only the acylated form of ghrelin can bind to it, and active ghrelin stimulates the appetite through neuropeptide Y (NPY).
Because ghrelin receptor activation leads to the enhanced activity of the NPY pathway, activating the ghrelin receptor would be beneficial in abating early satiety and appetite loss. However, the mechanisms through which ghrelin is regulated in FD patients require more studies. Gastrin and Somatostatin. Gastrin, CCK, and somatostatin are all sensitive to the stress of anxiety [50]. Gastrin, which is released by G cells in the pyloric mucosa, stimulates gastric acid secretion, promotes cell growth (increasing the rate of cell division and inhibiting apoptosis), and transacts with cholecystokinin-2 receptors (CCK2Rs). The CCK2Rs, mitogen-activated protein kinase 1 (MP1), and ERK1/ERK2 are reported to mediate the gastrin-induced growth of gastric adenocarcinomas [51]. Patients with chronic renal failure have 2-to 3-fold higher serum gastrin levels because the kidneys are responsible for clearing gastrin [52]. Somatostatin is produced by D cells and neurons in the gastrointestinal tract and pancreas. In the stomach, the antrum mucosa secretes acid to stimulate the secretion of somatostatin and the latter inhibits gastrin secretion. That phenomenon explains why PPIs stimulate gastrin secretion by decreasing acid secretion, thus resulting in sustained hypergastrinemia, gastrinoma, and atrophic gastritis and possibly in gastric carcinoid tumors. It was shown that an acute H. pylori infection activates the sensory neurons associated with somatostatin stimulation [53]. H. pylori infection and a reduced somatostatin level have a complex etiological relationship in chronic gastritis. In H. pylori-positive patients, a decreased somatostatin content (likely mediated by proinflammatory cytokines) leads to increased gastrin secretion, perhaps due to the effect of H. pylori (a direct effect of CagL, a component of the T4SS for CagA) on G cells [54]. A study showed that Cag PAI-positive H. pylori strains or CagL activate the gastrin promoter, whereas Cag PAI-negative strains do not [55]. A further study indicated that gastrin expression stimulated by CagL is involved with epidermal growth factor receptor and MP1 signaling. In investigating the relationship between the gastric motility disorder and the gastrointestinal hormone abnormality in the GI mucosa of FD patients, Van Oudenhove et al. [56] found that the levels of gastrin in the postprandial plasma and the gastric mucosa were significantly higher in FD patients with delayed emptying and suggested that the altered gastrin levels may play a role in the pathophysiology of the abnormal gastric motility of FD patients. A study found that weight loss and symptom severity in FD patients were determined by somatization and depression [57]. A positive relationship was found between the degree of dyspeptic symptoms and the level of somatostatin [50]. The cited study also indicated that CCK and somatostatin might correlate certain psychological reactions with the pathophysiology of FD. Dopamine (DA). The homeostasis of the digestive system is dependent on the activity of aminergic mediators [such as 5-HT, noradrenaline (NA), and dopamine (DA)], which play crucial roles in the central or peripheral generation of gastrointestinal motility, secretion, and sensation. Dopamine is mainly produced by mesenteric organs (the GI tract, spleen, and pancreas) and is metabolized by monoamine oxidase and catechol-O-methyl-transferase (COMT). 
DA is known to modulate diverse physiological functions of the digestive system, such as acid and mucus secretion in the stomach and bicarbonate excretion in the duodenum. DA exerts its biological functions through two types of receptors, the D1-type receptors (D1 and D5) and D2-type receptors (D2, D3, and D4) receptors. In the digestive system, DA inhibits motility via the D1-type receptor present on smooth muscle and modulates the release of acetylcholine (ACTH) from myenteric neurons via D2-type receptors. Dopamine and its agonist have a protective effect on the development of various lesions in the gastroduodenum. It was also reported that dopamine might protect the gastric mucosa against acidified ethanol through the activation of the 2 adrenoceptor, which leads to inhibition of gastric motility [58]. Recently, it was shown that dopamine acts as a strong antitumor/antiangiogenic factor by suppressing the expression of growth factors, such as vascular endothelium-derived growth factor (VEGF), to inhibit angiogenesis in malignancies [59]. However, another study suggested that dopamine is rapidly metabolized by COMT in the gastrointestinal tract; therefore, there would not be a sufficient effect on ulcer healing [60]. Giusti et al. [61] found that dopamine receptor antagonists or inhibitors, such as diazepam and other antipsychotic drugs, can stimulate the development of peptic ulcers and infections by H. pylori. However, Kirschbaum and Hellhammer [62] found the rate of H. pylori infection was increased after dopamine treatment in women with a high prolactin level. These studies may indicate a fuzzy relationship between dopamine and H. pylori infection, the pathological mechanism of which requires further study. Although DA receptors are considered to modulate GI motility and GI motor symptoms associated with FGIDs, their potential role in the pathophysiology of functional dyspepsia is still unknown. 2.9. Cortisol. Cortisol, which plays an important role in the defense of a host infected with bacteria, is secreted through the stress-mediated activation of the hypothalamic-pituitaryadrenal (HPA) axis. High levels of adrenocorticotropic hormone (ACTH) and cortisol are generally considered to be outcomes of an HPA-axis disorder. Moreover, the levels of the ACTH immunoreactive substance (IS) are regulated by negative feedback from neurogenic stimulation and plasma cortisol. On the other hand, leptin suppresses HPA-axis activity, whereas ghrelin stimulates neuropeptide Y and food intake, which induces high concentrations of plasma ACTH and cortisol. HPA-axis alterations are related to gut motor functions [63]. Moreover, serum-free cortisol fluctuations are associated with various symptoms of functional gastrointestinal disorders (FGIDs), including FD [64]. Some studies showed that an increased serum cortisol level promoted H. pylori colonization. Koşan et al. found that, compared with those of the healthy control group, the serum IGF-I and IGF-II concentrations were significantly decreased in H. pylori-positive patients, although their serum cortisol level was increased. These authors did not discuss the probable effect of cortisol on H. pylori infections [65]. In contrast, Katagiri et al. [66] reported that H. pylori-infected patients had significantly decreased cortisol levels than H. pylori-negative patients. They also demonstrated that cortisol prevents H. pylori colonization through strengthening the host defense mechanisms. 
Recently, studies have shown that drugs, such as cimetidine, reduce basal and stimulated cortisol synthesis through inhibiting enzymes. However, proton pump inhibitors, such as lansoprazole and rabeprazole, are thought to cause increased cortisol levels in a starvation condition due to their possible effects of stimulating the HPA axis and increasing the plasma ACTH-IS level [67]. However, few studies have investigated HPA-axis parameters in FD patients, and inconclusive results were obtained. Adults with FD who have an autonomic nervous system disorder have been reported, and the most common finding in these patients is decreased vagal tone [68]. In PDS patients, mental stress before a meal increases the symptom severity through sympathetic hyperactivity and increased cortisol levels. Another study suggested that these neurohormonal responses to HPA-axis activation mainly affect gastric sensitivity [69]. A study in which the HPA-axis activity of FGID patients was inhibited showed that they displayed both salivary morning cortisol and diurnal cortisol levels that were significantly lower than those of controls [70]. De la Roca-Chiapas et al. [64] have reported that FGID patients with high cortisol levels complained of more depression than those with low or medium cortisol levels, whereas the latter described experiencing more pain. In contrast, another study did not find a tendency toward higher cortisol levels in FD patients and suggested that these patients have not had enough exposure to daily stress to activate the HPA axis. It is not impossible that both stress-induced anxiety and an altered neuroendocrine response could increase the severity of dyspeptic symptoms (Table 1). Treatment of FD. The diverse clinical manifestations and the uncertain pathophysiological mechanisms make it difficult to select a therapeutic strategy to manage FD. There is currently no established treatment regimen. Updated guidelines for FD treatment have been published by academic gastroenterology organizations, such as the American Gastroenterological Association, promoting a comprehensive strategy that includes diet, behavior modification and cognitive therapy, psychological interventions, and drug therapy [71]. However, drug treatment is still the main form of practical FD treatment in China. H. pylori eradication and treatment with antacids, motility-regulatory drugs, and antidepressants are commonly recommended as an effective method to treat FD. Patients in Asian countries receive a relatively higher benefit from H. pylori eradication. Researchers have shown that H. pylori eradication can result in a long-term remission of FD symptoms [72]. Some evidence suggested that H. pylori eradication significantly improved gastrointestinal symptoms in patients with EPS compared with its effects on patients with PDS [73]. H. pylori eradication certainly has a statistically significant but small benefit in patients with FD. It is likely that the results of H. pylori eradication reflect not only the effect of treating an H. pylori infection but also the effect on the gastrointestinal microbiome. The results of some randomized controlled trials (RCTs) support eradication therapy [74], whereas most other studies found no benefit. A Cochrane review reported no significant difference between patients who had received a placebo and those who had undergone H. pylori eradication. Furthermore, due to the wide use of antibiotics in H. pylori eradication regimens, drug-resistant H. pylori strains are arising. 
The success rate of triple-antibiotic eradication therapy is less than 80% throughout the world. This phenomenon makes it challenging for clinicians to manage infections with H. pylori strains that are resistant to antimicrobial agents, particularly those that are resistant to clarithromycin and metronidazole. As a second-line treatment, moxifloxacin has been investigated. Zhang et al. [75] suggest that moxifloxacin-based therapy is more effective than the standard triple or quadruple therapy for this disease. However, its adverse effects, such as tendonitis or a nervous system reaction, are a matter of concern. The diverse results of therapy may be due to the use of different trial designs, different methods of patient selection, or different H. pylori eradication strategies. Furthermore, accumulating evidence demonstrates that the human stomach contains a complex microbial ecosystem that does not include H. pylori strains [76]. The balance of these communities is crucial for health maintenance, and their disturbance is considered to be involved in gastrointestinal diseases [77]. Andersson et al. [78] reported that the gastric mucosa displayed a diverse microbiota after H. pylori eradication, although it is difficult to discriminate whether the changes in the gastric microbiota were caused by H. pylori eradication or by the antibiotic treatment. Dyspepsia patients have significantly more health care consultations due to psychological distress. Chronic stress is considered a major risk factor for FD, possibly due to brain-gut axis dysregulation mediated by the hypothalamic-pituitary axis [79]. Antianxiety and antidepressant drugs have been reported to have positive effects on FD, particularly on intractable FD, with tricyclic antidepressants (TCAs) and small doses of selective serotonin reuptake inhibitors (SSRIs) most often mentioned as efficacious [72]. When these drugs are used to treat FD, clinicians should consider the direct effects of neurotransmitters on gastrointestinal disorders as well as their effects on mental disorders, because they may affect the modulatory function of neurotransmitters (e.g., 5-HT) on gastrointestinal sensory, motor, and secretory functions. However, 5-HT receptors have been recognized as targets for symptomatic improvement. 5-HT3 receptor antagonists and 5-HT4 receptor agonists have been selected to treat functional diarrhea or constipation. 5-HT4 receptor agonists cause the release of acetylcholine, which stimulates smooth muscle contraction, leading to accelerated gastric emptying [80]. Because the pathogenesis of FD is uncertain and most likely multifactorial, individualized treatment should be established based on the patients' chief complaints. Summary. Whether there are distinct modes of pathogenesis of PDS and EPS remains controversial. Fang et al. [81] found that although PDS and EPS have some common risk factors, including a younger age, anxiety, and NSAID consumption, different risk factors appear to be associated with different FD subgroups; for example, H. pylori infection, an unmarried status, sleep disturbance, coffee consumption, and depressive disorder are risk factors only for PDS but not for EPS. Both CagA and 5-HT may participate in the pathogenesis of FD, but the specific underlying pathogenic mechanisms are still unclear. H. pylori infection and anxiety or depression are also significant factors in the pathogenesis of FD.
Therefore, whether and how CagA and 5-HT affect the pathogenesis of FD (including PDS and EPS) is a topic worthy of study, and more research is needed. Moreover, the wide distribution of 5-HT receptors and their respective roles in the pathogenesis of FD are of special significance and warrant further study.
6,437.2
2016-10-20T00:00:00.000
[ "Medicine", "Biology" ]
America and Iran-Pakistan-India ( IPI ) Gas Pipeline India and Iran share a historical and long term economic relation that has formed the basis of close bilateral relationship. In contemporary World politics, energy resources play an important role and are considered as the engines of economic growth and development for a country. India’s growing energy demand and Iran’s vast energy resources make the two nations natural economic partners. For India, Iran becomes attractive because it occupies second and fourth place among the countries having the highest reserves of gas and oil in the whole world. Iran, on the other hand, needs substantial investments not only in its oil and gas industry but in educational, health, defence and in technological sector as well. The sanctions imposed by the West particularly by United States of America have made it difficult for Iran to emerge as a major regional power on the basis export of gas and oil to its nearest and huge markets (Pakistan and India). This paper examines the Iran-Pakistan-India gas pipeline as a confidence building measure in developing strategic relationship between India, Pakistan and Iran, secondly the nature of American attitude and influence on the relationship between Iran and India particularly in context of the proposed (IPI) Iran, Pakistan and India gas pipeline. INTRODUCTION India and Iran have a rich history of civilization going back several millennia.Their bilateral relationship is the continuity of an ancient phenomenon, since the Aryans era and shared common homeland as well as traditions.Historians claim that Indo-Iranians belong to a single family and lived together for many centuries in the pasture land of Central Asia that is known as Oxus valley (India embassy in Tehran, 2014).The Indo-Iranian relationship was given concrete foundation during the period of Mughal rule over India.Mughal rulers not only invited the Iranian architects to India but also the educationalist of that times, who translated important books related to medicine, poetry and religion from Persian language to Hindi language.During the period that both countries enjoyed collateral relations, there were free movement of traders, architects, poets, and educationalist (Haider, 2001).With the appearance of British colonial rule over India ties between the two countries were broken till the independence of India.In post-independence period India and Iran established their formal relations on 15 th March 1950 by signing the "Treaty of Friendship" that states "there shall be perpetual peace and friendship between two Governments of the two countries and their differences shall be settled through ordinary diplomatic channels, by arbitration and by such peaceful means as deemed suitable by them" (Abedi, 1996).However, their emerged various factors like with the creation of Pakistan India and Iran lost the direct land link, out-break of wars in South Asian subcontinent (India-China, India-Pakistan), and the Iran's support to Pakistan as well as Iran's partnership with the West.All these factors became obstacle in transforming their past relations into long term strategic partnership.However it was after the Iranian revolution of 1979 which changed the whole political structure and leadership of Iran, both the countries started looking at opportunities for re-establishing their relations by exchanging the official visits from time to time.It was in April 2001 the visit of then Indian Prime Minister, Mr. 
Atal Bihari Vajpayee who provided a breakthrough and opened a new chapter in the history of the Indo-Iran relationship. The visit boosted the efforts for developing a close Indo-Iran relationship based on mutual interests such as Taliban-dominated Afghanistan, the independence of the Central Asian republics after the fall of the USSR, and economic interests like the export of natural gas, technology, and investment opportunities in areas such as the health, defence and industrial sectors, etc. Both countries have identified mutual interests for developing a long-term strategic partnership, especially in information technology, fertilisers, petrochemicals and the energy sector, etc. (Khan, 2008). In contemporary times, India and Iran have not only deepened this relationship but also expanded it to cover wide-ranging political, economic and security as well as science and technology aspects. The importance of Iran for India lies in its geostrategic position and energy resources, as well as in providing access to the Central Asian region. Iran can play a pivotal role for India in a number of regional configurations in the Persian Gulf, Afghanistan and the Caspian area. Iran ultimately will help India not only in countering expanding Chinese and Pakistani influence in these areas but also in securing the reliable and huge energy sources required for developing its economy and playing the role of a regional power. But there are various challenges and issues (the role of the U.S.A., the Iranian nuclear program, Afghanistan, and so on) which have become the main factors forcing the two countries to redesign their foreign policies towards each other. Since the end of the Cold War, the United States has been seeking to establish permanent global dominance in order to take control of strategically important regions of the world, particularly the West Asia region, of which Iran is one of the major countries. The Iranian nuclear programme and its possible implications for the entire region, its ideological disposition, its huge oil reserves, and the strategic and economic importance of the Strait of Hormuz all make Iran an important regional actor. The policy of the United States towards Iran, meanwhile, is to undermine the Iranian regime and make it subordinate. The United States is willing to go to any extreme to find an excuse to put pressure on Iran, either by imposing unilateral sanctions or by the threat of military action. From time to time India has become a tool for implementing the policy of the United States, either due to its national interests or due to the inability of its policy makers to handle the issues efficiently.
India's growing energy situation Currently Coal, oil and natural gas are major sources of primary energy in India, accounting for 52.9, 29.6 and 10.6%, respectively of the primary energy consumption.However, the country has the world's fourth largest coal reserves.The demand and supply gap of coal has been continuously increasing with domestic production unable to keep pace with the demand.In case of oil and gas the deficit is even more.India holds just 0.7% of the world's proven oil reserves while accounting for 3.9% of the global oil consumption.Similarly, the country has 0.8% of the world's proven natural gas reserves, while accounting for 1.9% of the worldwide gas consumption which results in India importing nearly 20% of its natural gas consumed through LNG 1 .Over the past few years, the country's dependence on imported oil has steadily increased as a result of stagnant domestic production and rising demand.This has significant implications on energy security and the overall financial health of the country.While as Domestic production remained flat, hampered by limited prospectively delays in the commissioning of new projects and declining production from existing maturing fields.Disruption in crude oil supplies has always been a cause of concern for India.The recent upheaval in the Middle East countries especially in Libya and Egypt triggered a drop in crude oil production in the region, resulting increased crude oil prices driving up inflation in India.According to Goldman Sachs, the increase in oil price by US$10 per barrel could potentially slow India's GDP growth by 0.2% and may inflate the current account deficit by 0.4%.The recent depreciation of the rupee raised the cost of crude oil imports for India, which in turn has led to increase in inflationary pressures on the economy.Notably, the import of crude oil and oil products rose from US$50.3 billion in Financial Year 2006 to US$115.9 billion in Financial Year 2011.In Financial Year 2012 (till October 2011) imports touched US $75 billion.2Over the long run the widening trade deficit may result in the dearth of foreign exchange reserves for the country to deploy in other critical infrastructure and social projects. 
As India is one of the fastest growing energy markets in world with the demand continuing to outstrip the supply and the main drivers of energy consumption in India are industrial operations, transportation, and urban and rural household uses.The energy sector in India decides the direction of economic growth, as there is a direct correlation between the two faster economic growths continues to accelerate the demand for energy products.Although Indian government has plans for enhancing the exploitation of its hydro power, nuclear energy, and renewable energy resources the analysis indicates that the impact of these supply side alternatives is minor when compared with the total requirements of commercial energy by 2031.Although, the contribution of nuclear hydro and renewable energy forms together increases by about six times during 2001-31.These sources can at most contribute to a mere 4.5% of the total commercial energy requirements over the modelling time frame.It is, therefore evident that the pressure on the three conventional energy forms that is coal, oil, and gas will continue to remain high at least in the next few decades.As the world's third largest coal producer, India probably will rely on coal to meet the majority of its energy needs for the foreseeable future.Coal currently provides 60 percent of India's commercial energy consumption.Between 1984 and 2004, coal consumption in India increased from 140 million tons (mt) to over 400 mt annually, growing at a rate of 5.4 percent per year.Of the coal consumed 90 percent is produced domestically while about 10 percent is imported, primarily from Australia and South Africa.However, the adverse effects of coal like global warming are already visible.On the other hand, no doubt India has signed the nuclear agreement with US for overcoming India's energy crisis.But it will put further burden of millions of rupees on the annual budget for developing a single nuclear plant.Further it (nuclear deal) has developed the feeling of insecurity in the minds of large section of population which is the witness of the Union Carbide gas leak in 1984, which killed thousands of people and the persons responsible for the accident were let free by the government of India, even the victims struggled for long period for the insufficient compensation, relief and rehabilitation that was given to them by government.Further by enacting the nuclear liability bill Indian government has deepened the feeling of insecurity in the minds of its people by following the same path which led to the Union Carbide accident by (a) fixing the liability of the operator of the nuclear installation in case of accident only 1500 crores at the place were lose could be huge and unimaginable (b) not making the nuclear material supplier responsible for his role in case of nuclear accident due to negligence while supplying the material to the nuclear plant (c) by fixing the maximum period of 10 years for claiming the compensation by the victims (d) Victims are not given right to sue anyone neither the operator nor the supplier (Suvrat and Ramana, 2010).As a result of this now local population protests against the construction of nuclear plants in their areas the latest example is the Kudankulam nuclear plant.So it clearly indicates that natural gas is a preferred option for power generation as well as for the production of nitrogen fertilizer.The availability of natural gas therefore, needs to be facilitated by removing infrastructural constraints.Besides its high end-use 
efficiency, it is a cleaner fuel and relatively much easier to handle than coal, nuclear material and nuclear plants (Bhat, 2013). Iranian energy sources and IPI gas pipeline In terms of Iran's potential to meet India's rapidly growing energy requirements, Iran has the second largest gas reserves in the world and is seeking to repair damage caused by the Iran and U. S. relations and sanctions.Iran hope for an opportunity to exploit its natural gas reserves through the mega-project (Iran, Pakistan and India gas pipeline) that could spur economic prosperity in the provinces where the pipeline ran.The IPI gas pipeline project could transform Iran from being merely an oil producer to a major energy exporter.That ultimately enhances Iran's regional and global stature (Nadeem and von Ochssée, 2009).The projected Iran-Pakistan-India Pipeline (IPI) would stretch 1,724 miles, or 2,775 kilometres, is now under construction to deliver natural gas from Iran to Pakistan.Iran has already completed a 900-kilometre portion of 56-inch diameter pipeline from Assaluyeh to Iran Shehr.The remaining 250 kilometre portion up to the Pakistan border is still under design and is expected to be completed in limited period.In 2012, Pakistan decided to finish the huge pipeline project "at any cost" as it suited Pakistan in odd circumstances.The capacity of the pipeline would be between 8.7 billion cubic meters to 40 billion cubic meters of natural gas per year.It could be raised up to 55 billion cubic meters per year.It is generally believed that gas delivered from Iran would be cheaper than delivered through the proposed Trans-Afghanistan Pipeline (Daheem, 2013).The idea of an overland trans-Pakistan pipeline was first proposed in 1989 by Ali Shams Adekani than acting Deputy Foreign Minister of Iran and R.K. 
Pachauri, the then Director General of the TATA Energy Research Institute in New Delhi (Temple, 2007).It was in 1993 both countries signed Memorandum of Understanding for the project.However, at the earlier stage, the project did not influence many because of Pakistan's initial reluctance to participate in the project.The negative response of Pakistan forced Iran and India to look for other possible options for laying down the pipeline project.Thus they started looking at the options of shallow water pipeline and to the deep sea (sea bed) route for carrying out the project which was much expensive than the overland proposal.While with the change in the government of Pakistan in 1999, it announced its support and participation in the proposed gas pipeline project.At the early stage, both India and Pakistan tried to relate the project with their political issues like India wanted to link it with the transit right for trade with Afghanistan and demanded to remove its restrictions on the bilateral trade.While, Pakistan tried to link it with the resolution of the long pending Kashmir dispute.But later on since 2005, both countries have taken it positively and dropped these demands in order to overcome their energy shortage in their respective countries.In response to the security concern for the gas pipeline raised by India, Pakistan tried to assured both Iran and India that Pakistan would guarantee the security for the project.In a letter, the then petroleum and natural gas minister of Pakistan Usman Aminuddin to his Iranian counterpart assured that Pakistan is prepared to address all concerns of the Indian government.Further Iranian government also assured that if Pakistan at any point of time stopped the gas supplies to India, Tehran will not only stop the supplies to Pakistan but also provide India equal amount of LNG as the same price (Naaaz, 2008). 
The prospect of supplying energy to two major markets just next door was an enticing one for Iran. Iran figures very prominently in Indian thinking and is considered highly beneficial to the country's economic future. Piped natural gas poses perhaps the most environmentally and economically cost-effective solution to India's dire energy situation. Not only could the IPI pipeline provide the necessary fuel to India's fertilizer and industrial sectors, but imported gas could also help revitalize the defunct electricity market. India's inadequate infrastructure could also benefit from a reliable energy source, which would in turn encourage further foreign investment. As the Planning Commission report on an integrated energy policy has noted, the benefits of a stable power source would eventually be tangible for people at every socioeconomic level of Indian society. Furthermore, since high demand for gas in the private sector will only increase over time, the pipeline is guaranteed to be profitable even if the power sector is eventually able to overcome its dependency on thermal generation (Temple, 2000). For India and Pakistan, the energy needed to meet economic growth projections could not be fulfilled from a single source. Though both states had sought to diversify their energy sources, with natural gas being a crucial component of their strategy, inadequate domestic reserves make those plans unrealistic without foreign supplies. Thus, access to Iran's vast gas reserves through the pipeline project would go a long way towards energy security, and Pakistan on its part enjoys the additional incentive of transit fees that can provide profitable revenues for boosting its economy. IPI gas pipeline and regional integration. Finally, the economic integration inherent in the project offers the possibility of improving strained India-Pakistan political relations. The project can act as a confidence-building measure in the process of integrating the whole of South Asia, particularly India and Pakistan, politically, economically, socially and culturally. It can play a vital role not only in resolving the decades-old disputes like Kashmir, Siachen and Sir Creek, but also in providing a peaceful atmosphere, bringing an end to the arms race between the two countries, and ultimately helping both countries in developing their health and educational sectors and eradicating poverty and unemployment. Despite constraints and a series of disputes between India and Pakistan, the project is still seen by all three countries as indispensable for their development. Thus the IPI gas pipeline can be treated as a measure for removing the trust deficit between the countries and integrating them into an interdependent whole. On the part of Pakistan, through this project it would not only earn a large amount of revenue in the form of transit fees but also enhance its relations with India in other areas like education, health, information technology, industry and electricity generation, which could be helpful for Pakistan in boosting its war-torn economy. On the part of India, it would be in its interest to deal with Pakistan economically, which would not only weaken the anti-India forces and propaganda in Pakistan but also push the Pakistani government to dismantle the basis of anti-India forces and restrict their activities. The IPI also will become an important source of energy for an energy-starved country like India in the long run.
The IPI pipeline project would further act as a bridge for both countries (India and Pakistan) to get access to the Persian Gulf and Central Asian countries through Iran. Hence, the IPI project may become a source of regional integration i.e. why some experts called the IPI as the "Peace Pipeline." U.S.'s influence on IPI gas pipeline While as U.S. on the other hand has been applying pressure against India and Indian companies which have energy relations with Iran.The most prominent is the Iran-Pakistan-India (IPI) gas pipeline project.Iran and Pakistan have announced that they will go ahead with the project at a bilateral level for the time being.Following the announcement of the recent sanctions against Iran, Washington has told Islamabad that it could be subjected to US sanctions if it went ahead with the pipeline project with Iran.India seems to have delinked from the same though the government has not withdrawn formally.India claims that security and pricing issues with Pakistan and Iran respectively are the main obstacles to its participation, though there has been substantial pressure from US against proceeding with the project.(Dietl, 2008).Under American pressure, it has stopped the export, largely by reliance of oil products to Iran and reduced oil imports from Iran.Since 2010 India´s payments for its oil import form Iran have become problematic.Its search for payments took India to banks in Germany, Turkey and U.A.E.Even both the countries agreed to partial payment through Indian Rupee.On the American part India was asked to prove its loyalty by providing support to U.S. on the question of Iran's nuclear program at the International Atomic Energy Agency (IAEA).Some analysts are of the opinion that as large reserves of natural gas have been discovered in India's offshore territory.However, given India's projected huge and growing demand for gas, it will require import of gas, at least in the future.While as the U.S. sponsored Turkmenistan-Afghanistan-Pakistan and India gas pipeline is highly uncertain because of the internal situation of Afghanistan.As long as law and order is not restored in Afghanistan and writ of the government established in the tribal areas of Pakistan and one cannot expect to see the project getting functionalized.Furthermore, the quickly changing internal situation of Afghanistan and the withdrawal of foreign forces from Afghanistan and the chances of Taliban emerging as a strong political power are likely to make the situation more complex.So Iran-Pakistan-India gas pipeline becomes important and reliable source of energy for both India and Pakistan.That may be the reason why India is not officially closing the project of the IPI gas pipeline.As an Indian official who was closely involved with the negotiations said, barring a few issues, everything is in place for the project to be brought to fruition.As and when India feels the time is right for implementing the project, it will do so (Shebonti and Mahtab, 2010).Therefore, it is evident that the IPI pipeline has not gone ahead due to American pressure. 
Conclusion Energy is one of the most important and efficacious elements in the strategic collaborations between Iran and India.India being a country having second largest population with fast growing economy is in need of new energy sources.In the field of energy, it can be claimed that because of Iran enormous resources of oil supplies it is known as India's one of the most important oil trade partner.The strongest capacities and potentialities for developing relationship between Iran and India lie in energy and trade collaborations.There are sufficient areas through which the mutual collaboration can be transformed into strong mutual integration and energy can be one of the most consequential fields for the expansion of ties.While as, on the other hand United States has left no stone unturned to put pressure on other countries particularly on India not to establish friendly relations with Iran for its personal benefits.Now with the change in the leadership of Iran, the new moderate president Hassan Rouhani has shown flexibility in order to remove the sanctions imposed on Iran for resolving the economic crisis and the issue of Iranian nuclear program peacefully and as per news reports U.S. as well is thinking of easing the sanctions on Iran.So it is time for India to play an active role as a mediator and supporter of negotiations between Iran and United States and on the other to start thinking and approaching to the Iranian authorities in order to resolve the issues related the price and transit fee of gas in order to secure and to protect the huge energy sources as well as the historical relations with Iran from the influence of other countries particularly of China. Many in U.S. Congress voice concern about India's relations with Iran and their relevance to U.S. interests.As America's concerns about the Iranian nuclear program have increased, so increased the pressure on India about its participation in gas pipeline project with Iran.The United States has endured India's friendship with Iran as an irritant that could be ignored, the development of the India-Iran energy relationship is a new serious threat.Such a relationship has the potential for revitalizing the Iranian energy sector, as well as opening up new possibilities for the export of oil and gas from the wider Caspian region through Iran.This would undermine the U.S. policy of isolating the Iranian regime in the global polity and economy.The U.S. government reportedly has warned leading oil companies, as well as governments of various nations including India that sanctions are possible if they pursue energy deals with Iran(Steven, 2007).With signing of the U.S.-India civil nuclear pact in 2005.India's relationship with Iran has attracted an even closer scrutiny from America.In March 2005, the US Secretary of state Condoleezza Rice visited India and Pakistan.In New Delhi she said that "the United States had conveyed its concern to India on the gas pipeline.Our ambassador to India has made statements in that regard.So those concerns are well known to India."The US Energy Secretary Samuel Bodman queried his Indian counterpart Murli Deora regarding the pipeline during his visit to India, declaring that Washington needed to stop it.The Sam Lantos, chairman of the foreign Affairs Committee of the US House of Representatives, led a group of US congressmen in writing a letter to the India Prime Minister advising that India pull back not only from the pipeline but from the LNG deal with Iran too
5,602.8
2014-11-30T00:00:00.000
[ "Economics" ]
Magnetic frustration induced large magnetocaloric effect in the absence of long range magnetic order. We have synthesized a new intermetallic compound Ho2Ni0.95Si2.95 in a single phase with a defect crystal structure. The magnetic ground state of this material is found to be highly frustrated, without any long range order or glassy feature, as investigated through magnetic, heat capacity and neutron diffraction measurements. The interest in this material stems from the fact that despite the absence of true long range order, a large magnetocaloric effect (isothermal magnetic entropy change, −ΔS_M ~ 28.65 J/kg K (~205.78 mJ/cm3 K), relative cooling power, RCP ~ 696 J/kg (~5 J/cm3), and adiabatic temperature change, ΔT_ad ~ 9.32 K for a field change of 70 kOe) has been observed, which is rather hard to find in nature. The magnetocaloric effect is a thermodynamic phenomenon in which a change in material temperature occurs due to the application of a magnetic field under adiabatic conditions. Magnetocaloric materials, working on the principle of magnetic refrigeration, are one of the most energy efficient and environmentally friendly replacements for conventional systems based on the gas compression/expansion technique [1][2][3][4][5][6]. In general, compounds which exhibit large changes in magnetic entropy, adiabatic temperature and cooling power are considered large MCE materials. Large MCE near room temperature is important for household purposes 7, but the low temperature region is also very important for the liquefaction of hydrogen and helium and for space technology applications 8. A major goal in this emerging research area is to find new materials that exhibit large MCE and are capable of operating in different temperature ranges, suitable for the corresponding applications. For most practical purposes sub-Kelvin temperatures are achieved through adiabatic demagnetization of paramagnetic salts 9 or magnetic garnets 10, but none of them is metallic. An ideal magnetocaloric material is preferred to be metallic and non-superconducting in nature for better heat conduction and easy machining. The compound also should not degrade over time, i.e., the compound must be stable enough at ambient conditions. To overcome this problem, investigation of MCE was initially focused on metallic ferromagnetic materials around the ferromagnetic Curie temperature [11][12][13]. Later on it was found that many antiferromagnetic metallic systems undergoing field induced ferromagnetism or a metamagnetic transition also exhibit large MCE 14,15. However, in most cases this type of magnetic transition is usually accompanied by thermal and (or) magnetic hysteresis, which is a disadvantage for application purposes. Another promising class of materials for large MCE is metallic materials having infinitely degenerate, magnetically frustrated ground states. With increasing magnetic field, the degeneracy of the ground state tends to be lifted, causing the frustrated magnetic moments to polarize in the field direction. This also results in a large magnetic entropy change. Recently, a few theoretical predictions have also been made of large values of MCE in frustrated magnetic systems [16][17][18][19]. Quite a few experimental results supporting such theoretical predictions have been obtained so far [20][21][22]. Magnetic frustration is also known to enhance the barocaloric effect as well 23. However, all the bulk materials so far reported to exhibit such a frustrated ground state are found to coexist with long range magnetic order.
As a result, the exact role of magnetic frustration in the enhancement of MCE is difficult to determine. A large MCE material that shows neither long range magnetic ordering nor even spin freezing behaviour is rather hard to find in nature. In the hexagonal ternary intermetallic compounds R2TX3, where R = rare-earth element, T = transition metal and X = Si, Ge, In, etc., only the R ions generally carry a magnetic moment [24][25][26]. Since the T and X ions are randomly distributed in the 2d Wyckoff position, the local environment of the R ions varies randomly. In the presence of antiferromagnetic interaction, such geometry may result in geometrical frustration 27. Additionally, since the ratio of the lattice parameters (c/a) approaches unity, one also expects strong frustration when the nearest-neighbour exchange interaction (J_NN) and next-nearest-neighbour exchange interaction (J_NNN) are of opposite signs 28. In this work, we show that Ho2Ni0.95Si2.95 forms in single phase only in a defect structure and exhibits a large MCE over a wide temperature range in the absence of any true long range magnetic order. Results and Discussions. The room temperature X-ray diffraction (XRD) pattern of fully stoichiometric Ho2NiSi3 was found to contain minor additional peaks (<10% of the strongest peak) of HoNiSi2 (inset: Fig. 1). The phase purity could not be improved even on annealing. Single phase material, however, could only be obtained in the defect structure Ho2Ni0.95Si2.95 (Fig. 1). The lattice parameters obtained are a = 3.953(2) Å and c = 4.000(1) Å (space group P6/mmm). Interestingly, we found that the c/a ratio is close to unity, suggesting that the nearest-neighbour (NN) and next-nearest-neighbour (NNN) distances for the Ho ions are quite comparable. The crystal structure remains conserved down to 15 K, the lowest measurable temperature at our X-ray diffractometer. The neutron diffraction (ND) results (discussed later) suggest that the crystal structure does not change even at 1.5 K. The temperature dependence of the dc magnetic susceptibility (χ = M/H) under zero-field-cooled (ZFC) and field-cooled (FC) protocols at 100 Oe applied magnetic field shows a peak-like structure at T_P = 3.6 K in both protocols [Fig. 2(a)]. The temperature derivative of the dc magnetic susceptibility exhibits a crossover from negative to positive values, which is commonly considered to be characteristic of an antiferromagnetic transition. The peak at T_P, however, appears to be quite weak in nature, as χ(0)/χ(T_P) is found to be close to unity. Generally, in a polycrystalline material with a collinear Heisenberg antiferromagnetic arrangement, one expects χ(0)/χ(T_P) to be 2/3 29. The relatively large value of χ(2 K)/χ(T_P) ~ 0.985 in Ho2Ni0.95Si2.95 suggests that the magnetic spin structure, if any, would be of a canted nature having a strong ferromagnetic component 30. A signature of ferromagnetic interactions could also be found in the positive value of the paramagnetic (PM) Curie-Weiss temperature (θ_CW = 1.8 K) estimated from the inverse susceptibility in the paramagnetic region (60-300 K). In an ideal antiferromagnetic system, one would expect −θ_CW ~ T_N, and θ_CW should be even more negative for geometrically frustrated systems. The presence of ferromagnetic interactions in an antiferromagnetic compound generally brings the value of θ_CW towards zero or even positive values, depending on the strength of the ferromagnetic interaction in the system.
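The Curie-Weiss temperature quoted above (and the effective moment discussed next) follow from a straight-line fit of the inverse susceptibility in the paramagnetic window. A minimal sketch of such a fit is given below; the file name, data layout and the 60-300 K window are assumptions for illustration only, and the molar Curie constant is converted to an effective moment per Ho ion via the standard cgs relation μ_eff ≈ √(8C) μ_B.

```python
import numpy as np

# Hypothetical input (assumed file and column layout): temperature in K and
# molar dc susceptibility chi in emu/(mol Oe), measured on warming.
T, chi = np.loadtxt("chi_vs_T.dat", unpack=True)

# Restrict the fit to the paramagnetic window used in the text (60-300 K) and
# fit the Curie-Weiss law chi = C / (T - theta), i.e. 1/chi = (T - theta)/C.
mask = (T >= 60.0) & (T <= 300.0)
slope, intercept = np.polyfit(T[mask], 1.0 / chi[mask], 1)

curie_const = 1.0 / slope            # molar Curie constant, emu K / (mol Oe)
theta_cw = -intercept / slope        # paramagnetic Curie-Weiss temperature, K

# Effective moment per Ho ion (cgs): mu_eff = sqrt(8 C) mu_B per magnetic ion;
# the molar Curie constant is halved because each formula unit holds two Ho ions.
mu_eff = np.sqrt(8.0 * curie_const / 2.0)

print(f"theta_CW = {theta_cw:.2f} K, mu_eff = {mu_eff:.2f} mu_B per Ho")
```

The same routine applied only below 60 K would show the deviation from linearity discussed in the next paragraph.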
Below 60 K, the inverse susceptibility deviates from linearity, which may be due to the growth of short range magnetic interactions in the system or to crystalline electric field effects. The effective magnetic moment (μ_eff) is estimated to be 10.8 μ_B/Ho3+ ion, which is slightly larger than that of a free Ho3+ ion (10.6 μ_B). The slightly larger value of μ_eff in the compound may originate from the polarization of the conduction electrons of the Ni ions, or may be due to a reduction in moment density, which is generally found in frustrated magnetic systems 27. The origin of the competing ferromagnetic and antiferromagnetic exchange interaction strengths may be found in the crystal structure of the compound. In rare-earth (R) based intermetallic compounds the exchange interaction between the R ions is of RKKY type and the exchange interaction strength (J_ex) depends on the inter-ionic distance (d) as J_ex ~ 1/d^3. Since the ratio of the lattice parameters (c/a) of Ho2Ni0.95Si2.95 is of the order of 1.01 (Fig. 1), the nearest-neighbour rare-earth ion distance [d_NN = a = 3.953(2) Å] is comparable to the next-nearest-neighbour distance [d_NNN = c = 4.000(1) Å]. As a result, the nearest-neighbour exchange interaction (J_NN) and the next-nearest-neighbour exchange interaction (J_NNN) are found to be of comparable strength. Since the third nearest-neighbour rare-earth ions are placed relatively further away [d_NNNN = 5.624(2) Å], their contribution to the exchange interaction strength (J_NNNN) is relatively small. It is therefore quite plausible that J_NN and J_NNN, which are nearly of equal strength but of opposite sign, might be responsible for the competing nature of the ferromagnetic and antiferromagnetic interactions in this compound. The signature of long range magnetic ordering, however, is found to be absent in the heat capacity measurement down to 2 K. It shows only a broad anomaly in the temperature range 3-25 K, with a rather sharp drop below 3 K [Fig. 2(b)]. The magnetic contribution (C_magnetic) to the molar heat capacity has been calculated by subtracting the heat capacity data of isostructural stoichiometric La2NiSi3 after an appropriate lattice volume correction. The magnetic contribution shows a broad hump in the temperature range 3-25 K. A similar broad hump in heat capacity data is generally found in systems having a frustrated short range magnetic ground state [31][32][33]. The magnetic entropy (S_magnetic) at T_P is found to be only about 60% of Rln2, suggesting the absence of true long range ordering in the system. The magnetic entropy reaches its saturation value [R ln(2J + 1) = R ln17, with J = 8] only around 60 K, due to the presence of spin fluctuations and short range magnetic correlations up to such high temperatures 34. This is in agreement with the deviation of the inverse susceptibility from the Curie-Weiss law below 60 K. Neutron diffraction experiments for this polycrystalline compound were carried out at different temperatures, above and below T_P (=3.6 K), to look for the possible arrangement of the magnetic spin structure (Fig. 3). The data collected at 1.5 K showed magnetic peaks which were much weaker than those expected for Ho3+ with gJ = 10 μ_B and much broader than the nuclear diffraction peaks. The weak intensity of the magnetic peaks did not allow the magnetic structure to be fully determined. However, the correlation length was estimated from the width of the magnetic peaks using the Scherrer formula, which yielded a value of ~35 Å.
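The ~35 Å figure quoted here follows from a Scherrer-type analysis of the magnetic peak broadening. A minimal sketch of that conversion is given below; the peak position, widths and neutron wavelength are placeholder values, not the measured ones, and the instrumental width is removed in quadrature, as is commonly done.

```python
import numpy as np

def scherrer_length(two_theta_deg, fwhm_obs_deg, fwhm_instr_deg, wavelength_A, K=0.9):
    """Correlation length (Angstrom) from a diffraction peak width using the
    Scherrer relation xi = K * lambda / (beta * cos(theta)), where beta is the
    sample broadening (FWHM, in radians) after quadrature subtraction of the
    instrumental resolution."""
    beta_deg = np.sqrt(fwhm_obs_deg**2 - fwhm_instr_deg**2)
    beta_rad = np.radians(beta_deg)
    theta_rad = np.radians(two_theta_deg / 2.0)
    return K * wavelength_A / (beta_rad * np.cos(theta_rad))

# Placeholder numbers (not the measured values), purely to illustrate the call:
xi = scherrer_length(two_theta_deg=20.0, fwhm_obs_deg=1.5,
                     fwhm_instr_deg=0.4, wavelength_A=2.4)
print(f"estimated magnetic correlation length: {xi:.0f} Angstrom")
```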
This is of the same order of magnitude as previously found for magnetically frustrated materials with short range magnetic correlations 35-37. Thus, both the neutron diffraction and the heat capacity results confirm the absence of true long range ordering in Ho2Ni0.95Si2.95. For all practical purposes, the system thus appears to be in a magnetically frustrated state. Such frustration may originate from the random distribution of Ni/Si along with the strongly competing exchange interactions arising from c/a ~ 1. A magnetically frustrated state coupled with random disorder is generally known to be conducive to spin or cluster glass behaviour 27. The estimated low value of the relaxation time constant (~120 s), however, casts doubt on the presence of any spin glass freezing in this compound. The peak in the real part of the ac susceptibility around 3.6 K is frequency independent, which further confirms the absence of any glassy interaction [Inset: Fig. 4(a)]. The compound also does not show the magnetic memory effect that is generally exhibited by glassy systems. Thus, surprisingly, despite the strong disorder, Ho2Ni0.95Si2.95 does not show any glassy magnetic feature. One of the main features of magnetically frustrated systems is that their ground states are highly degenerate. On application of a magnetic field, the moments try to align along the field direction by lifting the ground state degeneracy. Above a critical field strength (H_sat), the majority of the moments are aligned along the field direction, producing a unidirectional spin alignment. The value of the critical field strength depends on the degree of frustration present in the system. The field dependent dc magnetic susceptibility of Ho2Ni0.95Si2.95 [Fig. 4(a)] shows that the peak observed at T_P (=3.6 K) at low external fields vanishes above 10 kOe because of the polarization of the short range moments along the field direction. A further increase of the applied magnetic field increases the ferromagnetic-like volume fraction. This behaviour is also reflected in the heat capacity measured under various external fields [Fig. 4(b)]. The rather sharp drop in heat capacity observed below 3.6 K, described earlier, broadens with the application of a magnetic field, similar to what is found in other magnetically frustrated systems 31-33. With the application of a magnetic field, the magnetic entropy estimated from the heat capacity gradually decreases in such a frustrated magnetic system, due to the lifting of the magnetic degeneracy [inset: Fig. 4(b)]. The rate of change of magnetic entropy with applied magnetic field exhibits a distinct change for fields above 10 kOe. Such an abrupt change in magnetic entropy is also an indicator of a large magnetocaloric effect in this system, which exhibits no long range magnetic order. The field variation of the magnetization, measured at 2 K, is linear for H ≤ 10 kOe, but tends to saturate at higher fields (Fig. 5). The magnetic moment for a 70 kOe field change is 8.18 μ_B/Ho3+ ion, which is slightly smaller than the theoretical saturation value (10 μ_B/Ho3+ ion, with g = 5/4 and J = 8). It is surprising that the magnetically frustrated Ho2Ni0.95Si2.95 shows such ferromagnetic-like behaviour with a large magnetic moment, implying a strong modification of the competition between the ferromagnetic and antiferromagnetic exchange interactions under the influence of the magnetic field.
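The frequency independence of the ac susceptibility peak noted above can be quantified through the relative shift of the peak temperature per decade of drive frequency (the Mydosh parameter), which is typically of order 0.005-0.01 for canonical spin glasses and essentially zero here. The sketch below shows the standard arithmetic on hypothetical peak temperatures; the numbers are placeholders, not the measured data.

```python
import numpy as np

# Hypothetical peak temperatures T_f of the real part of the ac susceptibility
# at different drive frequencies (Hz); identical values mimic the
# frequency-independent peak reported in the text.
freq = np.array([13.0, 131.0, 1311.0])   # placeholder frequencies
T_f = np.array([3.60, 3.60, 3.60])       # placeholder peak temperatures (K)

# Mydosh parameter: shift of T_f per decade of frequency, normalised by T_f
dTf = T_f.max() - T_f.min()
d_log_f = np.log10(freq.max()) - np.log10(freq.min())
mydosh = dTf / (T_f.mean() * d_log_f)
print(f"Mydosh parameter = {mydosh:.4f}")  # ~0 here; >~0.005 would hint at glassiness
```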
The magnetic field dependence can thus be analysed as a combination of ferromagnetic and antiferromagnetic contributions to the isotherms, M(H) = A tanh(BH) + CH, where A, B and C are fitting parameters. The first term corresponds to the ferromagnetic volume fraction, while the linear term arises from the combined antiferromagnetic and paramagnetic volume fractions. This type of analysis has also been performed for previously reported systems with coexisting ferromagnetic and antiferromagnetic interactions 38. As seen from inset II of Fig. 5, the moment of the ferromagnetic component saturates above an applied field of 20 kOe. As the temperature increases, the ferromagnetic-like contribution weakens gradually. However, the M(H) curve taken at a temperature (50 K) much higher than T_P (=3.6 K) still does not show the linear behaviour expected for a truly paramagnetic system. This is consistent with our observation that the inverse susceptibility deviates from linearity below 60 K. No magnetic hysteresis is observed even at the lowest temperature (2 K, inset I: Fig. 5). The magnetocaloric effect of Ho2Ni0.95Si2.95 has been estimated from the isothermal magnetization [Fig. 6(a)] and from the zero field heat capacity data, by evaluating the magnetic entropy change. All three parameters are found to be quite large for this compound. For example, the maximum value of −ΔS_M is found to be 28.65 J/kg K (~205.78 mJ/cm3 K) and 23.25 J/kg K (~167 mJ/cm3 K) for field changes of 70 kOe and 50 kOe, respectively [Fig. 6(b)]. Even for a low field change of 20 kOe, the value of −ΔS_M is 10.5 J/kg K (~75.42 mJ/cm3 K), which is very attractive for application purposes. The overall shape of −ΔS_M(T) estimated from the heat capacity measurements is quite similar, apart from a minor difference in absolute magnitude. These values are comparable to, or even larger than, those reported for most potential magnetic refrigerant intermetallic materials exhibiting a ferromagnetic ground state 13,39,40 or an antiferromagnetic ground state with metamagnetic transition(s) 14,15,41,42 in the cryogenic temperature region. The observation of such a large value of −ΔS_M is extremely rare in intermetallic compounds having a frustrated ground state with no true long range ordering. Additionally, the large value of −ΔS_M(T), together with its asymmetric spread over a wide temperature range, also makes the RCP values very high. The calculated RCP values [Fig. 6(b)] are among the largest reported for good refrigerant materials in this temperature range 13,15,39,40,43,44. It may be noted here that for long range magnetic order −ΔS_M(T) appears symmetric around the Curie temperature, while an asymmetric spread is seen primarily in the case of spin fluctuations 45 or spin flop transitions 46. In the paramagnetic region, theory yields −ΔS_M ~ H2/2T2, where H is the applied field and T is the corresponding temperature. At low temperatures, −ΔS_M(H) deviates quite significantly from this H2 behaviour [inset II: Fig. 6(b)]. As the temperature increases, the discrepancy with the H2 behaviour decreases. However, even at 47.5 K, −ΔS_M(H) still exhibits a minor discrepancy with the H2 behaviour, suggesting that the system has not yet reached a truly paramagnetic state, in accordance with the magnetic susceptibility and heat capacity results described earlier.
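The entropy change discussed above is conventionally obtained from a set of magnetization isotherms through the Maxwell relation ΔS_M(T, H) = ∫ (∂M/∂T)_H' dH' integrated from 0 to H, and the RCP is then the product of the peak of −ΔS_M with the full width at half maximum of the −ΔS_M(T) curve. A minimal numerical sketch is given below; the M(T, H) grid is synthetic and only illustrates the bookkeeping, not the actual data of Fig. 6.

```python
import numpy as np

# Synthetic magnetization grid M(T, H): temperatures in K, fields in Oe.
T = np.linspace(2, 60, 30)                    # isotherm temperatures (placeholder)
H = np.linspace(0, 7e4, 71)                   # fields up to 70 kOe (placeholder)
TT, HH = np.meshgrid(T, H, indexing="ij")
M = 100.0 * np.tanh(HH / (2e4 + 2e3 * TT))    # toy isotherms (placeholder units)

# Maxwell relation: -dS_M(T, H_max) = - integral_0^Hmax (dM/dT)_H dH
dM_dT = np.gradient(M, T, axis=0)             # (dM/dT)_H on the grid
dH = np.diff(H)
minus_dS = -np.sum(0.5 * (dM_dT[:, 1:] + dM_dT[:, :-1]) * dH, axis=1)  # trapezoid rule

# Relative cooling power: peak value times FWHM of the -dS_M(T) curve
peak = minus_dS.max()
above_half = T[minus_dS >= peak / 2.0]
rcp = peak * (above_half.max() - above_half.min())
print(f"-dS_M^max = {peak:.3g} (arb. units), RCP = {rcp:.3g} (arb. units)")
```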
To conclude, we find that Ho2Ni0.95Si2.95 is one of the extremely rare intermetallic compounds having a frustrated ground state with no true long range magnetic ordering that shows a large MCE. The relevant MCE parameters are comparable to, or even larger than, those reported for most potential magnetic refrigerant intermetallic materials exhibiting a ferromagnetic ground state, or an antiferromagnetic ground state with metamagnetic transition(s), in the cryogenic temperature region. The absence of long range magnetic order indicates that the magnetic frustration is primarily responsible for the large MCE in this compound. Such a mechanism, although theoretically predicted earlier, had rarely been observed experimentally in any other system.

Methods

The polycrystalline samples were synthesized in an arc furnace by melting appropriate amounts of the constituent elements (purity >99.99%) under an inert (Ar) atmosphere using a water cooled Cu hearth. The ingot was re-melted several times, flipping it each time to promote volume homogeneity. The weight loss was less than 0.2%. X-ray diffraction (XRD) experiments for structural characterization were performed on the powdered as-cast sample using Cu-Kα radiation on a Rigaku TTRAX-III powder diffractometer (9 kW power) in the temperature region 15-300 K. Full Rietveld analysis of the XRD data was carried out using the FULLPROF package 47. The magnetic measurements were carried out in a SQUID VSM (M/s Quantum Design, Inc., USA) and an Ever Cool II VSM (M/s Quantum Design, Inc., USA) in the temperature range 2-300 K and in magnetic fields up to 70 kOe. The heat capacity measurements were carried out in a PPMS (M/s Quantum Design, Inc., USA) in the temperature range 2-300 K and in magnetic fields up to 70 kOe. The neutron diffraction experiments were performed on the ECHIDNA beamline at ANSTO, Australia, at several temperatures.
Chaos in the inert Oort cloud Context: Distant trans-Neptunian objects are subject to planetary perturbations and galactic tides. The former decrease with the distance, while the latter increase. In the intermediate regime where they have the same order of magnitude (the 'inert Oort cloud'), both are weak, resulting in very long evolution timescales. To date, three observed objects can be considered to belong to this category. Aims: We aim to provide a clear understanding of where this transition occurs, and to characterise the long-term dynamics of small bodies in the intermediate regime: relevant resonances, chaotic zones (if any), and timescales at play. Results: There exists a tilted equilibrium plane (Laplace plane) about which orbits precess. The dynamics is integrable in the low and high semi-major axis regimes, but mostly chaotic in between. From 800 to 1100 au, the chaos covers almost all the eccentricity range. The diffusion timescales are large, but not to the point of being indiscernible in a 4.5 Gyrs duration: the perihelion distance can actually vary from tens to hundreds of au. Orbital variations are favoured in specific ranges of inclination corresponding to well-defined resonances. Starting from uniform distributions, the orbital angles cluster after 4.5 Gyrs for semi-major axes larger than 500 au, because of a very slow differential precession. Conclusions: Even if it is characterised by very long timescales, the inert Oort cloud is much less inert than it appears. Orbits can be considered inert over 4.5 Gyrs only in small portions of the space of orbital elements, which include (90377) Sedna and 2012VP113. Effects of the galactic tides are discernible down to semi-major axes of about 500 au. We advocate including the galactic tides in simulations of distant trans-Neptunian objects, especially when studying the formation of detached bodies or the clustering of orbital elements. Introduction Beyond Neptune, the orbits of distant small bodies around the barycentre of the solar system are subject to two kinds of perturbations: an internal perturbation from the planets (mainly the giant ones), and an external perturbation from the galactic tides, passing stars, and molecular clouds.Historically, a distinction is made between the trans-Neptunian or Kuiper belt objects (with all their subclasses) and the Oort cloud.This distinction was made because the trans-Neptunian population is indeed observed on orbits lying beyond or close to Neptune, whereas the longperiod comets coming from the Oort cloud are only observed when they are injected into the inner solar system, making them observable from Earth.These different classes of objects are thought to have been initially populated through distinct mechanisms (see e.g. the recent review by Morbidelli & Nesvorny 2019).However, there is no dynamical boundary between the trans-Neptunian and the Oort cloud populations, and numerical simulations show a continuous transfer of objects in both directions (Fouchard et al. 2017;Kaib et al. 2019).This means that objects that were initially dominantly perturbed by the planets are driven into a region where the galactic tides dominate, and vice versa. However, the external perturbations are often neglected in simulations of trans-Neptunian objects, even when they feature very distant orbits (Gallardo et al. 2012;Saillenfest et al. 2017a,b;Batygin & Brown 2016;Becker et al. 
2017), whereas the internal perturbations are usually neglected in simulations of the Oort cloud, at least beyond a distance threshold (see e.g.Higuchi et al. 2007;Fouchard et al. 2018).These simplifications are not necessarily wrong, but a clear understanding of where the transition occurs is still missing, as well as the behaviour of small bodies when they cross the limit. In reality, there necessarily exists an intermediate region where perturbations from the planets and from the galactic tides have the same order of magnitude.This region, which is itself a continuous transition rather than a clear boundary, can be thought of as the dynamical frontier between the trans-Neptunian and the Oort cloud populations.Since both types of perturbations are expected to be small in this region, we call it the "inert Oort cloud" throughout the article.Authors generally consider that nothing has happened in this region since the formation of the solar system, excluding a very unlikely star passage going completely through, or an even more unlikely close encounter with a giant molecular cloud.Strong orbital perturbations could only have occurred there in the very early evolutionary stages of the solar system, when it was still in a dense stellar cluster.For this reason, the inert Oort cloud is sometimes called "fossilised", or "detached", meaning that the objects it contains could have been placed very early on their current orbits through A&A 629, A95 (2019) the interaction with neighbour solar siblings (see e.g.Brasser et al. 2012;Jílková et al. 2015).For such a frozen configuration to be achieved, the objects within this region should have a perihelion far away from the giant planets, and a semi-major axis small enough for the Galactic tides not to be able to significantly change the perihelion distance over long timescales. A rough idea of the location of the inert Oort cloud can be obtained from previous works.Gladman et al. (2002) showed that the scattering effect by Neptune is significant over long timescales only for perihelion distances below about 45 astronomical units (au).The precise limit actually increases with the semi-major axis (Gallardo et al. 2012), because energy kicks result in larger variations of semi-major axis if the semi-major axis is large.In fact, some observed objects with perihelion beyond 45 au are known to experience scattering (Bannister et al. 2017).In any case, we look here for a rough limit only.The scattering process mostly affects the semi-major axis of small bodies, which diffuses chaotically, while the perihelion distance does not vary much.Later on, Gomes et al. (2005), Gallardo et al. (2012), and Saillenfest et al. (2016Saillenfest et al. 
( , 2017a)), showed that the Lidov-Kozai mechanism raised by the giant planets inside a mean-motion resonance with Neptune is able to raise the perihelion of small bodies beyond 60 au in a few thousands of million years.Contrary to scattering effects, this mechanism induces a variation of perihelion distance and inclination, while the semi-major axis remains at the resonance location.This mechanism, however, is only efficient for semi-major axes smaller than about 500 au.From these studies, one can deduce that the action of the planets is limited to orbits with perihelion distances smaller than about 80 au, and that for perihelion beyond 45 au, the semi-major axis should be smaller than 500 au for the planets to possibly have a substantial effect through mean-motion resonances.As regards the effects of the galactic tides, Fouchard et al. (2017) showed that an object with perihelion in the Jupiter-Saturn region, that is, below 15 au from the Sun, should have a semi-major axis larger than 1600 au for the tides to be able to raise its perihelion beyond 45 au in less than the age of the solar system.In other words, the tides can move its perihelion out of reach of any significant planetary scattering. Consequently, the inert Oort cloud can be considered as the region where the semi-major axis is smaller than 1600 au and the perihelion distance is larger than 45 au, but the semi-major axis should be larger than 500 au if the perihelion distance is smaller than 80 au.The resulting zone is schematised in Fig. 1. At the early stages of the solar system history, most of the small bodies had nearly circular and coplanar orbits that were close to, or even intersecting, the trajectories of the planets (see e.g.Tsiganis et al. 2005).By scattering, their semi-major axes then spanned a large range of values, populating the bottom part of Fig. 1.It is therefore common in simulations to see small bodies wander around the inert Oort cloud, roughly following the straight lines of Fig. 1 (see Dones et al. 2004 or Gomes et al. 2015).However, a few bodies have been discovered within this region: (90377) Sedna (Brown et al. 2004), 2012 VP 113 (Trujillo & Sheppard 2014), and 2015 TG 387 (Sheppard et al. 2019).After the discovery of 2012 VP 113 , Trujillo & Sheppard (2014) conjectured the existence of a massive stable population lying in this region.However, one can already notice that these bodies all have very eccentric orbits, implying that either dramatic events occurred during the early evolutionary stages of the solar system (e.g. as a result of a dense stellar environment, see Brasser et al. 2012), or this inert Oort cloud may not be as inert as it appears at first glance.The Planet 9 hypothesis could point in this direction (Trujillo & Sheppard 2014; where neither the planets nor the galactic tides have substantial effects on the orbit of small bodies.In this schematic view, the planet scattering makes small bodies move horizontally, whereas the galactic tides and the isolated mean-motion resonances with planets (labelled "planet resonances" on the graph) make them move vertically.The orbital inclination of small bodies is not considered in this picture, though it is known to play a role as well (Saillenfest et al. 2017a). 
Batygin & Brown 2016), but strong external perturbations would still be required to emplace the planet itself on its distant orbit. Anyway, it should not distract us from studying the complex interplay between planets and galactic tides in this transitional regime. Even though planets and galactic tides have almost negligible effects on the inert Oort cloud over long timescales, their combined effects could pile up and still induce substantial orbital changes over a 4.5 Gyr evolution. The aim of the present paper is to characterise and explore the long-term dynamics of the inert Oort cloud, driven by the perturbations from both the galactic tides and the giant planets. We will investigate the dynamical mechanisms at play in this region (resonances, chaos) and draw a quantitative picture of the relevant timescales. Section 2 is devoted to the dynamical model used and its underlying simplifications. General considerations about the long-term dynamics are exposed in Sect. 3. They are followed in Sect. 4 by a detailed exploration of the allowed trajectories through Poincaré surfaces of section. In Sect. 5, we discuss the implications of this mixed-type dynamics for real objects, and we map the inert region in the space of orbital elements. We finally conclude in Sect. 6.

Unified model of planets and galactic tides

We consider a small body of negligible mass with respect to the giant planets of the solar system. The Hamiltonian function governing its orbital motion can be decomposed into the Sun-body Keplerian part, a perturbation due to the planets, and a perturbation due to the galactic tides: H = H_Kep + ε_P H_P + ε_G H_G (1). Expressed using Keplerian elements, the two-body part is H_Kep = −µ/(2a), where a is the semi-major axis of the small body and µ is the gravitational parameter of the Sun. We assume that the small body never goes inside the orbits of the planets. The Hamiltonian ε_P H_P can therefore be expanded in Legendre polynomials. As explained in the Introduction, mean-motion resonances are inefficient in the inert Oort cloud, such that we are allowed to use the averaged perturbation from the planets (whose orbital periods are much smaller than that of the small body). At this level of approximation, effects coming from the small eccentricities and mutual inclinations of the planets are perfectly negligible for such distant small bodies. Consequently, using circular and coplanar orbits for the planets, we obtain ε_P H_P = ε_P0 H_P0 + ε_P2 H_P2 + ε_P4 H_P4 (3). These terms correspond to the monopole (index 0), quadrupole (index 2), and hexadecapole (index 4), respectively. Their expressions can be taken from Saillenfest et al. (2016), or from Laskar & Boué (2010) in a general context. In these expressions, (x, y, z) are the coordinates of the small body in a reference frame centred on the Sun, where the (x, y) plane is the orbital plane of the planets, and r ≡ (x2 + y2 + z2)^(1/2). We call this frame the "ecliptic" reference frame. The quantities µ_i and a_i are the gravitational parameter and the semi-major axis of planet i, for a total of N planets. Because of their small semi-major axes, the planets are supposed to be unaffected by the galactic tides, such that this reference frame is inertial (we consider no precession of the ecliptic pole around the galactic pole).
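For readers who want the intermediate step, the Legendre expansion invoked above is the standard exterior multipole expansion; the sketch below is our reconstruction of that textbook step, not a verbatim copy of the article's own equations.

```latex
% Exterior expansion of the planetary disturbing function, valid for r > a_i:
\frac{1}{\lvert \mathbf{r}-\mathbf{r}_i \rvert}
  = \frac{1}{r}\sum_{n \geq 0}\left(\frac{r_i}{r}\right)^{n} P_n(\cos\gamma_i),
\qquad
\cos\gamma_i = \frac{\mathbf{r}\cdot\mathbf{r}_i}{r\,r_i}.
% After averaging over the circular, coplanar planetary orbits and over the
% mean anomaly of the small body, the odd terms vanish, leaving the monopole,
% quadrupole, and hexadecapole contributions mentioned in the text.
```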
We now consider the coordinates (X, Y, Z) of the small body in a fixed reference frame centred on the Sun, where the (X, Y) plane is the galactic plane.We call it the "galactic" reference frame.We note (X , Y , Z ) the coordinates of the small body in an analogous reference frame, but for which at any time the X axis points towards the galactic centre.Because of the motion of the Sun in the Galaxy, the latter reference frame is rotating.At lowest-order of approximation, the Sun describes a circular orbit with constant velocity lying in the galactic plane (e.g.Fouchard 2004).We have in this case the relation where the time derivative of θ is a constant and corresponds to the angular velocity of the galactic centre seen from the Sun. In the following, we write it ν G .In the quadrupolar approximation, the Hamiltonian function describing the orbital perturbations of the small body from the galactic tides can be written where G 1 , G 2 and G 3 are constants encompassing the shape of the galaxy, its mass density, and the inertial forces due to the rotation of the frame (Fouchard 2004).The momentum P θ is conjugate to the angle θ; it has been introduced such that the Hamiltonian function is autonomous.Using the usual approximation where The symbols V and R are used here in reference to the vertical and radial components of the galactic tides, respectively. The perturbations due to the planets and due to the galactic tides being both very small with respect to the Keplerian part, they act on a much longer timescale.Therefore, we use a perturbative approach to order one.The resulting Hamiltonian function is obtained by averaging H (Eq. ( 1)) over an orbital period.The momentum conjugate to the mean anomaly of the small body becomes a constant of motion, which implies the conservation of the secular semi-major axis (that we still denote a).Dropping the constant parts, the secular Hamiltonian is We will now introduce explicit expressions for the small parameters: We note (e, I, ω, Ω) the Keplerian elements of the small body in the ecliptic reference frame, with e its eccentricity, I its inclination, ω its argument of perihelion, and Ω its longitude of ascending node.We will use the subscript G for the same quantities measured in the galactic reference frame (excepting e that does not change).Performing the required averages, the different components of Eq. ( 9) can be written and We write ψ the inclination of the ecliptic plane in the galactic reference frame, and α its ascending node.Since we have (1982), the galactic constants G 2 and G 3 are taken from Fouchard (2004), and the inclination of the ecliptic is obtained from Murray (1989).Even though it is quite old, the theory of Bretagnon (1982) has the advantage of directly giving the secular component of the planetary dynamics.Since it is semi-analytical, this theory is also expected to be more robust than numerical ephemerides when considering very long timescales. neglected the precession of the ecliptic pole, the angles ψ and α are constant.The ascending node of the ecliptic can therefore be used as the origin of longitudes in the galactic frame, meaning that α ≡ 0. 
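As an aid to the reader, the rotating-frame relation alluded to above (linking (X', Y', Z') to (X, Y, Z) through the angle θ) and the quadratic form commonly used for the quadrupolar galactic tidal Hamiltonian can be sketched as follows; this is our reconstruction of standard expressions of the type used by Fouchard (2004), not a verbatim copy of the article's equations.

```latex
% Rotation of angle theta about the galactic pole (Z' = Z):
X' = \phantom{-}X\cos\theta + Y\sin\theta, \qquad
Y' = -X\sin\theta + Y\cos\theta, \qquad
\dot{\theta} = \nu_{\mathrm{G}} = \mathrm{const}.
% Quadrupolar tidal Hamiltonian written as a quadratic form in the rotating
% frame (the constants G_1, G_2, G_3 encode the galactic model), with the
% momentum P_theta conjugate to theta making the Hamiltonian autonomous:
\varepsilon_{\mathrm{G}} \mathcal{H}_{\mathrm{G}}
  \;\sim\; \nu_{\mathrm{G}} P_{\theta} + G_1 X'^2 + G_2 Y'^2 + G_3 Z^2 .
```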
The corresponding conversion formulas between the two reference frames are given in Appendix A, and the values of the physical constants of the problem are gathered in Table 1. We note that the galactic and ecliptic reference frames used here are a natural choice considering the dynamics under study, but they are not the usual IAU ones (this is only a matter of origin of the longitudes).

The galactic Laplace plane

The explicit expressions of the small parameters (Eq. (10)) have been chosen such that the Hamiltonian functions H_P2, H_P4, H_GV, and H_GR have the same order of magnitude for e = 0. The secular semi-major axis rules the relative importance of the different perturbation terms. Figure 2 shows that below a ∼ 600 au, the planetary perturbations dominate over the galactic tides by more than a factor 10. The situation is reversed beyond a ∼ 1500 au. In between, both kinds of perturbations have the same order of magnitude (ε_P2 and ε_GV cross at a ∼ 950 au). [Fig. 2 caption: Size of the small parameters listed in Eq. (10) with respect to the secular semi-major axis of the small body. The values of the physical parameters used are given in Table 1.] However, since the eccentricity appears in the denominator of H_P2 (see Eq. (11)), we expect that the planetary perturbations always have a substantial effect in the high-eccentricity regime. From Fig. 2, it is also clear that in the weakly perturbed intermediate regime, the planetary perturbations are dominated by the quadrupolar term, whereas the galactic tides mostly consist of their vertical component (the radial component is always smaller by one order of magnitude, see Table 1). In the remaining parts of the article, we therefore limit the study to the simplified Hamiltonian function F = ε_P2 H_P2 + ε_GV H_GV (13) in order to draw a qualitative picture of the dynamics in the intermediate regime. This Hamiltonian has two degrees of freedom, and we use the canonical Delaunay elements, where L = √(µa) is a constant. We note that g only appears in H_GV, whereas h only appears in H_P2 (through cos I). In this context, the galactic coordinates are therefore the most natural coordinates to use. First of all, we note that the Hamiltonian F (see Eq. (13)) is very similar to the Hamiltonian governing the secular orbital motion of a satellite perturbed by the Sun and by the J_2 flattening of its host planet. In the quadrupolar approximation used here, the two Hamiltonian functions are even identical for small eccentricities, as shown in Appendix B. This means that the concept of "Laplace plane" introduced in the satellite case (see e.g. Tremaine et al. 2009) has its equivalent for distant trans-Neptunian objects in the galactic potential. The Laplace plane is normal to the axis around which the orbital angular momentum precesses. In other words, the orbital inclination measured with respect to the Laplace plane is almost constant, while the corresponding longitude of ascending node circulates. More specifically, a Laplace plane corresponds to a fixed point of the dynamics. The results obtained by Tremaine et al. (2009) in the satellite case remain valid here for circular orbits (e = 0 is a fixed point for the eccentricity).
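For reference, the canonical Delaunay elements used in this section are given below in their standard form, expressed here in galactic coordinates; this is our reconstruction, since the corresponding displayed equation does not survive in this extract.

```latex
% Delaunay elements of the small body (galactic coordinates), with mu the
% gravitational parameter of the Sun:
\ell = M, \qquad g = \omega_{\mathrm{G}}, \qquad h = \Omega_{\mathrm{G}},
\qquad
L = \sqrt{\mu a}, \qquad G = L\sqrt{1-e^{2}}, \qquad H = G\cos I_{\mathrm{G}}.
% After averaging over the mean anomaly, L (i.e. the semi-major axis) is
% constant and the secular dynamics involves only the pairs (g, G) and (h, H).
```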
As in the satellite case, we have the same geometry of phase space, as shown in Fig. 3: we recognise the "circular coplanar" equilibria at Ω_G = 0 and π, among which the stable ones correspond to the classical Laplace plane (located equivalently at Ω = π and 0). We also recognise the "circular orthogonal" equilibrium for I_G = I = 90° and Ω_G = Ω = ±π/2. Since the phase space is a sphere, we stress that all trajectories oscillate around one of the stable Laplace equilibria. The stability of the equilibria against eccentricity growth is different from the satellite case, but this has no consequence here considering the timescales involved (see Appendix B for details). [Fig. 3 caption: Level curves of the Hamiltonian F (Eq. (13)) for a circular orbit. The semi-major axis taken as parameter is a = 900 au, and the level curves are shown in black. The two graphs show the same level curves for two sets of variables (in order to avoid being misled by coordinate singularities). The dotted curves represent the inclination ψ of the ecliptic, and the blue spots show the location of the ecliptic plane. The red curve shows an example of a nearly circular trajectory precessing around the normal to its local Laplace plane, obtained by numerical integration; the eccentricity varies slightly, creating a deviation from the initial level curve. The oscillation period around the fixed centre is about 200 Gyr, in accordance with Fig. 5.] Low-eccentricity orbits initially lying close to the ecliptic (I_G ≈ ψ and Ω_G ≈ 0) should all precess around the classical Laplace plane. Using the expression of the Hamiltonian F (see Eq. (13)), the inclination of the classical Laplace plane is a root of a second-order polynomial in tan I_G. Figure 4 shows the inclination of the classical Laplace plane according to the value of the secular semi-major axis a. For small values of a, this plane is very close to the ecliptic plane, whereas for large values of a, it is very close to the galactic plane. In between, the orbits of small bodies precess about an intermediate plane. This transition occurs in the region where ε_P2 and ε_GV have the same order of magnitude (compare Figs. 2 and 4). However, for nearly circular orbits, the oscillations about the classical Laplace plane in the transition regime are extremely slow compared to the age of the solar system (see Fig. 5), meaning that in practice these orbits hardly change at all and are indeed "inert". More precisely, the half precession period of the orbit pole exceeds the age of the solar system for semi-major axes between 350 and 13 400 au. Below 350 au, the precession is about the ecliptic pole (I ≈ const.), so that a set of orbits initially lying close to the ecliptic plane never departs from it. Beyond 13 400 au, the precession is about the galactic pole (I_G ≈ const.), such that orbits explore all the values of I between ψ − I_G and ψ + I_G in less than the age of the solar system. [Fig. 5 caption: Period of small oscillations about the two kinds of circular Laplace equilibrium. The period of oscillations around the classical equilibrium exceeds 9 Gyr for semi-major axes between 350 au and 13 400 au (out of the graph). The period of oscillations around the orthogonal equilibrium is linear with a, and it is everywhere dramatically long with respect to the age of the solar system (apart from very small values of a, where the overall model is questionable).] Considering a swarm of particles with initially small inclinations I, this upper limit corresponds to the transition between a disc-like and an isotropic region, even though previous authors based their criterion on a full period for very eccentric orbits (see e.g. Higuchi et al.
2007; Fouchard et al. 2018; Vokrouhlický et al. 2019). For such large values of a, the revolution period of Ω_G tends to the value obtained when neglecting the planets (noted P_Ω* by Higuchi et al. 2007, see their Fig. 2). We note that, contrary to regular satellites slowly migrating from their formation region, which are expected to remain very close to their local Laplace plane (see e.g. the discussion of Polycarpe et al. 2018 about Iapetus), distant trans-Neptunian objects can be subject to "fast" changes of orbit: either a diffusion of a by planetary scattering, or an overall randomisation due to passing stars. This last mechanism is thought to have been quite efficient in the inert Oort cloud during the early stages of the solar system. This means that the orthogonal equilibrium could be populated as well, or at least, it could a priori play a role in the dynamics of distant trans-Neptunian objects. In the circular case, the orthogonal orbits are however completely frozen, as shown in Fig. 5. As can be guessed from the expression of the Hamiltonian function, the situation is different for the eccentricity degree of freedom. Indeed, if the orbit is very eccentric (and this is the case for all known distant trans-Neptunian objects), the planetary part of the Hamiltonian is large, bending the Laplace plane towards the ecliptic (Appendix C) and resulting in much shorter timescales than shown in Fig. 5. Eccentric orbits also have a much shorter revolution period of Ω_G when neglecting the planetary perturbations (function P_Ω* of Higuchi et al. 2007). The behaviour of eccentric orbits is the subject of the next section.

Exploration of the dynamics

Since the Hamiltonian F (Eq. (13)) is composed of two parts that dominate respectively in the low- and high-semi-major-axis regimes (see Fig. 2), the first step is to understand the two kinds of dynamics taken separately. Both ε_P2 H_P2 and ε_GV H_GV are integrable, and their dynamics are well known. In this section, we briefly recall their main aspects and study the interplay between the two kinds of perturbations.

Planetary regime

If ε_GV ≪ ε_P2, the dynamics is largely dominated by the planetary perturbations. Expressed in ecliptic coordinates, the Hamiltonian ε_P2 H_P2 taken alone is trivially integrable (see Eq. (11)): the momenta are conserved, and the angles circulate with constant velocities. More specifically, e and I are constant, and the precession velocities are given by Eq. (15). The angle ω increases for I < 63°, decreases for 63° < I < 117°, and increases again for I > 117°. In contrast, Ω decreases for I < 90° and increases for I > 90°. Figure 6 shows a map of these precession velocities with respect to the ecliptic inclination, as well as the places where their main integer combinations vanish. These combinations cannot be called "resonances" at this stage, because the two degrees of freedom are strictly decoupled when considering the planetary perturbations alone, but we can expect them to have a dynamical importance in the perturbed problem (see below). Some of these combinations are mentioned by Gallardo et al. (2012) as affecting observed trans-Neptunian objects. As shown by Saillenfest et al. (2016), the hexadecapole (see Eq.
(11)) and successive planetary terms only make small libration islands of ω appear at I ≈ 63° and 117°. These islands have a maximum width of 16.4 au in perihelion distance, which, for large semi-major axes, represents a very small variation of eccentricity. When expressed in galactic coordinates, the evolutions of I_G, ω_G and Ω_G driven by ε_P2 H_P2 are combinations of sinusoids (see Appendix A for the conversion formulas).

Planetary regime weakly perturbed by galactic tides

When the planetary perturbations dominate over the galactic tides (i.e. for small semi-major axes and/or high eccentricities), the effects of the galactic tides can be studied in a perturbative approach: the planetary component ε_P2 H_P2 (see Eq. (11)) acts as the integrable dominant part of F, while the galactic component ε_GV H_GV (see Eq. (12)) acts as a small perturbation. Since our dominant part ε_P2 H_P2 is already expressed in action-angle coordinates (i.e. it does not depend on the angles), the perturbative approach is straightforward. Expressed in ecliptic coordinates, our perturbing part ε_GV H_GV is composed of several terms featuring various combinations of ω and Ω (see the complete list in Appendix A). Therefore, because of the galactic tides, such combinations become genuine resonances, whose characteristics can be obtained analytically. The procedure is detailed in Appendix D. Figures 7 and 8 show the locations and widths of all the strongest resonances (the ones that appear at first order in ε_GV), obtained analytically. These figures are restricted to small perihelion distances, for the planets to remain by far the dominant term of the dynamics. We focus on prograde orbits, since the resonances for I > 90° are obtained by replacing cos I by −cos I and Ω by −Ω. As shown in the top panels of Figs. 7 and 8, the libration zone of Ω has by far the largest width in inclination (it actually corresponds to the emergence of the orthogonal Laplace equilibrium, see Sect. 3). We note that the resonances ω − Ω and 2ω − Ω are the first ones to overlap when the galactic tides increase (yellow and grey areas). As shown in the bottom panels of Figs. 7 and 8, the resonances ω + Ω and 2ω + Ω are by far the largest ones in perihelion distance. The other resonances are quite small in comparison, and the libration zone of Ω even has a null width in q. The resonances ω ± 2Ω, visible in Fig. 6, do not even appear in Figs. 7 and 8: this means that they only exist at second order in ε_GV, and have virtually no effect in the weakly perturbed planetary regime. [Fig. 7 caption: Location and widths of the strongest resonances in the planetary regime weakly perturbed by the galactic tides. The semi-major axis taken as parameter is a = 500 au. For better visibility, the perihelion distance q of the resonance centre (in au) is directly used as horizontal axis. Top: location and width in inclination (filled areas). Bottom: upper and lower half widths in perihelion distance, for a centre given by the horizontal axis.] Besides its resonant harmonics, the perturbing part ε_GV H_GV also features a term that is responsible for the emergence of the classic Laplace plane: low-inclination orbits do not precess about the ecliptic pole, as in Sect. 4.1, but about an inclined axis. As shown by Figs.
7 and 8, when we increase the semi-major axis or the perihelion distance, the resonances become very large and overlap massively. For overly large resonances, the whole dynamical structure outlined in this section is actually destroyed: the galactic tides cannot be treated as a small perturbation anymore.

Galactic regime

If ε_P2 ≪ ε_GV, the dynamics is dominated by the galactic tides. The dynamics driven by ε_GV H_GV taken alone has been studied by many authors. The solutions can actually be expressed analytically in terms of elliptic integrals (see Breiter & Ratajczak 2005; Higuchi et al. 2007; Higuchi & Kokubo 2015, and references therein). The quantity K = √(1 − e2) cos I_G is conserved and can be used as a parameter in the Hamiltonian, which, in turn, has only one degree of freedom. Figure 9 shows the level curves of H_GV for different values of K. [Fig. 9 caption: Level curves of H_GV (Eq. (12)). The parameters chosen are K2 < 4/5 (left) and K2 > 4/5 (right). Top and bottom rows: the same level curves for two sets of variables.] The limit I_G = 0 or 180° is a stable fixed point whatever the eccentricity; it coincides with the classic Laplace plane in the large-a regime (see Sect. 3), and results in a frozen orbit. Using K as parameter, this is equivalent to e2 = 1 − K2 (border of the forbidden regions in Fig. 9). The limit e = 0 is a fixed point with circulating Ω_G, but it is unstable for K2 < 4/5, that is, for 27° < I_G < 153°. For K2 < 4/5, there are two additional fixed points, located at ω_G = π/2 or 3π/2 and at an eccentricity fixed by the value of K; these fixed points are stable, still with circulating Ω_G. Finally, as already noted by Higuchi et al. (2007), the conservation of K implies that the orbit cannot become retrograde if it is prograde, and vice versa. The same holds for the planetary perturbations, even if we go beyond the quadrupolar approximation (Saillenfest et al. 2016), but this time it concerns the galactic inclination I_G, not the ecliptic one I. Moreover, Ω_G is always decreasing if I_G < 90° and always increasing if I_G > 90° (the period of its linear part, already mentioned in Sect. 3, is noted P_Ω* by Higuchi et al. 2007).

Intermediate non-integrable regime

In the intermediate regime (say, from a ∼ 500 to 2000 au, see Fig. 2), the dynamics features two fully interacting degrees of freedom, represented by the two pairs of conjugate coordinates (g, G) and (h, H). The dynamics is chaotic in general, but can be explored through Poincaré sections, in the spirit of Li et al. (2014) and Saillenfest et al. (2017b). This method allows one to locate the regular trajectories and to determine the size of the chaotic zones. For 10 values of a, and about 15 values of the Hamiltonian spanning the different dynamical regimes for each value of a, we computed simultaneously four Poincaré sections (for increasing and decreasing g and h). We give our conclusions below and show the most representative figures of our sample. For semi-major axes smaller than 500 au, the dynamics is dominated by the planetary perturbations, meaning that the eccentricity and ecliptic inclination do not vary much, while the angles ω and Ω circulate (see Sect. 4.1). However, the two degrees of freedom now interact, meaning that genuine resonances appear between ω and Ω (see Sect. 4.2). For a as small as 500 au, Fig.
10 shows that such resonances allow quite large variations of the perihelion distance, but at the speed of only a few au per Gyr.When varying the fixed value of the Hamiltonian, we note that the resonances ω + Ω and 2ω + Ω are by far the most prominent ones for prograde orbits (even if this is not immediately obvious with the parameters chosen to draw Fig. 10).The same holds for the resonances ω − Ω and 2ω − Ω for retrograde orbits.As expected, we also observe libration islands of ω at I ≈ 63 o and 117 o , and libration islands of Ω at I ≈ 90 o .Hence, the 90 o limit is not a barrier anymore for the inclination when considering the perturbed problem.Narrow chaotic regions are present (bottom graph of Fig. 10), as predicted analytically in Sect.4.2, but this chaos acts on extremely large timescales.We also notice thin resonances that we did not mention in Sect.4.2: using a perturbative approach, such resonances are only of order ε 2 G V or more. When we increase the semi-major axis, chaos spreads near the separatrices of the main resonances and libration islands of ω and Ω. Figure 11 shows that chaotic flips between prograde and retrograde orbits are possible for a > 600 au, but the very long timescales involved make these flips of little practical interest. For semi-major axes between about 800 and 1100, the phase space is almost completely filled with chaos (Fig. 12).Stable trajectories only persist for very eccentric orbits, because they are governed almost entirely by the strong planetary perturbations.Moreover, the perihelion distance and the inclination evolve much faster than for a < 800 au, making their variations substantial in a duration comparable to the age of the solar system.This confirms that eccentric orbits evolve much faster than circular orbits studied in Sect.3. Finally, Figs. 13 and 14 show that when the semi-major axis exceeds about 1300 au, the galactic structure of the phase space emerges from the chaotic sea and progressively dominate.Resonances and libration islands of ω and Ω disappear, and the situation is now better characterised in galactic coordinates: we retrieve the structure described in Sect.4.3 and illustrated in Fig. 9 for K 2 smaller or larger than 4/5.This means that the ecliptic inclination I oscillate with a very large amplitude while the ecliptic longitude of ascending node Ω stays around 0 or π, as expected from the Laplace plane (Sect.3).It should be noted, however, that whatever the value of the semi-major axis, there is always a chaotic region for very eccentric orbits, and a stable region for even higher e, because the denominator of HP 2 diverges and makes the planetary perturbation dominate again (see Eq. ( 11)). Dynamics of known inert-Oort-cloud bodies The dynamics of the three observed members of the inert Oort cloud (Sedna, 2012 VP 113 , and2015 TG 387 ) are known to be stable, even though Sheppard et al. (2019) mentioned that 2015 TG 387 is at the limit of destabilisation by galactic tides (one of its numerical clones was ejected from the solar system).As expected from previous results, Poincaré sections computed in the vicinity of the orbit of 2012 VP 113 (a ≈ 270 au) show that e and I are almost constant while ω and Ω circulate.This is also the case for Sedna (a ≈ 540 au), despite its larger semimajor axis, because its orbital elements, and in particular its low inclination with respect to the ecliptic, make it unable to reach any of the features shown in Fig. 
10.The case of 2015 TG 387 is more interesting (a ≈ 1190 au), because its high semi-major axis corresponds to the region where both the planetary and galactic perturbations have substantial effects.Figure 15 shows that 2015 TG 387 is not far from a chaotic zone surrounding the ω + Ω resonance.Its trajectory, however, is strictly quasi-periodic in our simplified model.Increasing the value of its semi-major axis in the uncertainty range leads to faster oscillations of q with a larger amplitude (see Fig. giant planets, triggered when the perihelion distance reaches its minimum (see Fig. 1).In addition to its few observed members, the inert Oort cloud could also contain the hypothetical Planet 9 ("P9") proposed by Batygin & Brown (2016).Fixing its initial conditions to the nominal orbital elements adopted for instance by Fienga et al. (2016), in particular a = 700 au, q = 280 au, and I = 30 o , we obtain the left panel of Fig. 16.The situation is similar to that of 2015 TG 387 , meaning that the trajectory of P9 is regular but close to a chaotic zone emerging from the ω + Ω resonance.Since P9's orbit is still quite undetermined (if P9 ever exists), we cannot rule out the possibility that it actually lies inside the chaotic zone.We did not investigate in detail the structure of the chaos in the vicinity of P9's orbit, but we can mention that the chaos reaches P9 if we increase its semi-major axis by only 50 au.For a = 800 au, the chaotic region extents down to q = 45 au (right graph of Fig. 16), and for a = 900 au it extents down to Neptune-crossing orbits.The long timescale involved would prevent P9 to actually encounter Neptune in 4.5 Gyrs, but its perihelion distance could anyway vary quite substantially, possibly modifying over time the characteristics of its shepherding effect on distant trans-Neptunian objects.This remains true for the updated P9 orbit obtained by Batygin et al. (2019), even though it has a somewhat smaller semi-major axis. Evolution of a sample of objects over 4.5 Gyrs The exploration of the inert-Oort-cloud dynamics conducted in Sect. 4 allowed us to characterise the structure of the phase space in this region, including the location of the chaotic regions.This structure should appear as imprints in the orbital distribution of a large sample of small bodies.For instance, the existence of a Laplace plane (Sect.3) that is distinct from the ecliptic should naturally produce an accumulation of Ω near the ascending node of the galactic plane (here located at Ω = π).Moreover, we found in Sect.4.4 that the combination = ω + Ω is among the strongest resonances in the transitional regime, which should preferentially orient around ±π/2 minus the galactic node.However, this picture was drawn for an infinite timescale, and numerous trajectories flagged as chaotic in the Poincaré sections actually wander over the chaotic zones in a dramatically long timescale, even though no dynamical barrier prevents these trajectories from freely wandering around.In order to determine which of the dynamical structures could be discernible during a time span restricted to the age of the solar system, we monitor the evolution of a swarm of test particles over 4.5 Gyrs.The simplicity of the system under study (Eq.( 13)) allows for the propagation of millions of trajectories in a reasonable amount of computation time. 
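The next paragraph details how the swarm of test particles is set up (uniform semi-major axes and angles, slices of perihelion distance and of inclination cosine, 10^5 particles per slice). A minimal sketch of such a Monte-Carlo setup is given below; the propagate_secular routine stands in for a numerical integration of Hamilton's equations for Eq. (13) and is only a placeholder, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_slice(n, q_range, cosI_range, a_range=(100.0, 2000.0)):
    """Draw n initial conditions (a, e, I, omega, Omega) uniformly in a slice,
    in ecliptic elements, with angles uniform in [0, 2*pi)."""
    a = rng.uniform(*a_range, n)                 # semi-major axis (au)
    q = rng.uniform(*q_range, n)                 # perihelion distance (au)
    cosI = rng.uniform(*cosI_range, n)
    I = np.arccos(cosI)                          # ecliptic inclination (rad)
    omega = rng.uniform(0.0, 2.0 * np.pi, n)     # argument of perihelion
    Omega = rng.uniform(0.0, 2.0 * np.pi, n)     # longitude of ascending node
    e = 1.0 - q / a                              # eccentricity from a and q
    return a, e, I, omega, Omega

def propagate_secular(a, e, I, omega, Omega, t_end_gyr=4.5):
    """Placeholder for the secular propagation over 4.5 Gyr; here it simply
    returns the initial elements unchanged."""
    return a, e, I, omega, Omega

# One slice: q in [40, 60] au and cos(I) in [0.9, 1], with 10^5 particles
ic = sample_slice(100_000, (40.0, 60.0), (0.9, 1.0))
final = propagate_secular(*ic)
```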
Since we aim to fully explore the parameter space, rather than modelling a realistic population of trans-Neptunian objects, we do not restrict our sample to a particular distribution.Our setup is organised as follows.At first, we use a uniform distribution of semi-major axis a between 100 and 2000 au.In order to manipulate easily understandable variables, we build our initial conditions in ecliptic coordinates.The angles ω and Ω are set uniformly between 0 and 2π, and we structure our exploration in slices of perihelion distance (e.g.q ∈ [40, 60] au, [60, 80] au, etc.) and slices of ecliptic inclination cosine (e.g.cos I ∈ [0.9, 1], [0.8, 0.9], etc.).Each slice is uniformly populated by a sample of 10 5 test particles, which are integrated numerically for 4.5 Gyrs using Hamilton's equations of motion applied to the Hamiltonian function from Eq. ( 13).These propagations still do not contain the planetary scattering, active for low perihelion distances (see Fig. 1), that would add fuzziness in our distributions.After 4.5 Gyrs, all our samples feature overdensity regions for the ecliptic angles ω and Ω at large semi-major axes.As illustrated in Fig. 17, the extension and shape of these regions are different according to the sample considered, and in some cases, overdensities are noticeable even below a = 500 au.The planetary perturbations alone cannot modify the angular distributions in our samples, because they induce precession velocities that are independent of the angles (see Eq. ( 15)).Consequently, Fig. 17 demonstrates that the galactic tides have a noticeable effect in 4.5 Gyrs even for moderate values of the semi-major axis.It happens, however, that these overdensity regions have little to do with the dynamical mechanisms (libration zones, resonances) revealed in Sect.4.4.In fact, as shown by Fig. 18, most of the particles follow only a small portion of the dynamical cycles involved, due to the large timescales at play.The galactic tides induce a gradient of precession velocities with respect to ω and Ω; therefore, orbits that initially precess faster catch up with orbits that precess slower, before all orbits go away on their respective dynamical paths (which can be totally different).This "phase effect" produces temporary overdensity regions, like the ones shown in Fig. 17.The gradient of precession velocities is different for each of our slices of inclination and perihelion distance, producing differing patterns.As shown in Fig. 18, the sharp patterns disappear as time goes by, replaced by actual dynamical features like resonances and libration zones.In other words, the patterns that we observe after 4.5 Gyrs are a direct relic of our initial distribution of particles.At this point, it would be tempting to conclude that the orbits of distant trans-Neptunian objects still keep a clear memory of their primordial distribution, and that all that is needed to extract the relevant dynamical patterns is to find a sufficient number of them.However, other dynamical mechanisms, like the randomisation by passing stars or the presence of distant unseen perturbers could erase the signature that we are looking for.We also stress that the patterns mentioned above (e.g. the ones appearing in Fig. 17) cannot be linked to the clustering of objects that motivates the Planet 9 hypothesis (Batygin et al. 
2019), mostly because they form at too large semi-major axis values. Still, we find it a surprising coincidence that the ecliptic plane, the galactic plane, and the proposed P9 orbital plane intersect almost along the same line. We now focus on the excursion in perihelion distance q of our samples after 4.5 Gyr. We recall that the galactic tides produce large cycles of eccentricity and inclination (Sect. 4.3), at a rate that increases with the semi-major axis value. The planets, on the contrary, do not change the eccentricity and ecliptic inclination, but induce a precession of ω and Ω that is faster for smaller semi-major axes and smaller perihelion distances (Sect. 4.1). If this precession is fast with respect to the galactic cycles, it has the consequence of averaging the galactic contribution to zero. Hence, as a rule of thumb, the planetary perturbations block the cycles raised by the galactic tides, with an efficiency that decreases for growing semi-major axis and perihelion distance. This is indeed what we observe in Fig. 19, by comparing the behaviour of the samples with and without the planetary perturbations. [Fig. 19 caption: Distribution of a few samples of particles after 4.5 Gyr in the plane (a, q). As indicated in the titles, the first column is for an evolution with only the galactic tides (Hamiltonian ε_GV H_GV, Eq. (12)), and the second and third columns are for an evolution with both planets and galactic tides (Hamiltonian F, Eq. (13)). Six slices of initial conditions are shown, as written on the top and right side of the graphs: the first two columns are for an initial perihelion distance q_i in [30, 40] au, and the third one in [80, 100] au, while the cosine of the initial ecliptic inclination I_i is distributed in [0.9, 1] for the top row, [0.5, 0.6] for the middle row, and [0, 0.1] for the bottom row. On each graph, the horizontal black line shows the semi-major axis of Neptune (∼30 au) for reference.] This "blocking" effect is very efficient at a = 100 au (and even up to about 1500 au in the top middle panel), but almost null at a = 2000 au. It is also less efficient for high perihelion distances (right column of Fig. 19). The spreading of the distribution is damped most for small initial ecliptic inclinations (top row of Fig. 19), because this corresponds to the maximum of the planetary-induced precession velocities (see Fig. 6). Moreover, we note that two ranges of initial ecliptic inclination are much more prone to orbital variations than other ranges, as exemplified by the middle row of Fig. 19. This is particularly visible in Fig. 20, showing the value of the semi-major axis above which the perihelion distance of small bodies, starting from the [40, 60] au range, varies substantially; the favoured inclination ranges are delimited by the main resonances identified above (see Fig. 6). These resonances being by far the strongest ones in terms of their widths in perihelion distance (see Sects. 4.2 and 4.4), they favour variations of q. We finally focus on the excursion in ecliptic inclination I of our samples after 4.5 Gyr. Figure 21 shows that for small perihelion distances, the spreading of the inclination distribution is strongly damped by the planetary perturbations for initial inclinations near I = 0° (and 180°). This is similar to what we observed for the perihelion distance (Fig. 19). However, this time, the smallest value of the semi-major axis at which the spreading is substantial is reached for initial ecliptic inclinations I ∼ 90° (see the middle column of Fig.
21), and we observe no marked enhancement of the inclination excursions for other specific ranges of initial inclination. As before, these results are a direct consequence of the form of the galactic potential (see Sect. 4.2). Actually, the particles naturally spread in inclination as they precess about an inclined axis, corresponding to the tilt of the galactic Laplace plane (Sect. 3). For very eccentric orbits, the classic Laplace plane is severely bent towards the ecliptic (Appendix C), but the orthogonal equilibrium at I ∼ 90° produces large oscillations of the inclination (Sect. 4.2).

Limits of the inert region

Looking at how samples of particles initially distributed in localised regions of the space of orbital elements spread under the secular action of the planets and galactic tides, we are now able to answer one of the main questions: what are the limits of the inert Oort cloud? Where in the (a, q, I) space can small bodies like Sedna and 2012 VP 113 remain efficiently fossilised since the early stages of the solar system evolution? For each value of (a, q, I), we now look in the plane (ω, Ω) for the initial condition producing the maximum variation of q or I in 4.5 Gyr. To this end, we save the extrema q_min and q_max reached by q (resp. I) in the course of each numerical integration, and we apply an optimisation algorithm to maximise their difference ∆q = q_max − q_min (resp. ∆I). We opted for the Particle Swarm Optimisation method (Poli et al. 2007) in order to limit the cases of convergence towards local maxima. The initial condition (ω, Ω) that maximises ∆q is generally different from the one that maximises ∆I, so two separate optimisation procedures are needed. Using this method, we obtain the full three-dimensional structure in the (a, q, I) space of the largest orbital changes produced within our simplified model. Figures 22 and 23 show representative sections of this space in the (a, q) and (a, I) directions. As expected, the highest orbital variations are reached for orbits with the largest semi-major axes, in the regime where the galactic tides strongly dominate over the planetary perturbations (see Sect. 4). The black curve in Figs. 22 and 23 delimits the "inert" portion of the space, defined arbitrarily as ∆q < 10 au or ∆I < 5°. We see that the naive picture depicted in the Introduction from previous works largely overestimates the inert region. Fig. 22 shows that orbits are truly inert only if: (a) a ≳ 500 au and the orbit is nearly circular; (b) 500 ≲ a ≲ 1500 au and q is close to the planetary region; (c) a ≲ 500 au. These three inert regions are labelled on the bottom left panel of Fig. 22. They are dynamically distinct: (a) For nearly circular orbits, we know from Sect. 3 that very large orbital variations are actually allowed by the dynamics, but that the timescale is dramatically long; this means that such orbits hardly even precess in 4.5 Gyr (unless the semi-major axis is extremely large, see Fig. 5).
(b) For small perihelion distances, the planetary perturbations produce a fast precession of the orbits, which averages out the galactic contribution; this means that such orbits have frozen q and I even when considering an infinite timescale. However, a very small perihelion distance implies that planetary scattering, not taken into account here, is triggered (see Introduction). In case of scattering, of course, the orbit cannot be considered inert. (c) For a ≲ 500 au, the inert regions a and b merge. These orbits precess substantially, similarly to region b, even for large perihelion distances. This is where Sedna (a ≈ 540 au) and 2012 VP113 (a ≈ 270 au) are located. In regions b and c, Fig. 23 confirms that the border of the inert region has a complex structure that is directly linked to the main resonances between ω and Ω (see Sect. 4.2). As discussed in Sect. 5.2, this complex structure produces a very marked differential spreading of small bodies in the space of orbital elements. This structure disappears for large perihelion distances, as all resonances overlap. Sedna and 2012 VP113 have large perihelion distances (76 au and 81 au), but not large enough to completely suppress the effect of resonances. However, due to their relatively small ecliptic inclinations (12° and 24°), both of them are out of any of the main resonances. This makes them true "inert" objects with precessing orbits. As expected from Sect. 5.1, on the contrary, 2015 TG387 is out of the inert region depicted in Figs. 22 and 23 (a ≈ 1190 au, q ≈ 65 au, I ≈ 12°). It is however close to its border.

For completeness, Appendix E gives sections of the (a, q, I) space in the (q, I) direction: we retrieve the resonance structure described analytically in Sect. 4.2.

Summary and conclusions

We studied the long-term orbital dynamics of small bodies in the intermediate regime between the Kuiper belt and the Oort cloud, that is, where the planetary perturbations and the galactic tides have the same order of magnitude. The two kinds of perturbations are weak in this region, and we call it the "inert Oort cloud" in reference to the few observed detached Kuiper belt objects, which have extremely stable orbits.

The problem is formally close to the case of a satellite perturbed by the J_2 flattening of its host planet and the averaged attraction from the star. As such, it possesses a tilted Laplace plane (the "galactic Laplace plane"), with a crossover located at about 1000 au. This means that for semi-major axes much smaller than this value (say 500 au), circular orbits precess about the ecliptic pole, whereas for semi-major axes much larger than this value (say 1500 au) they precess about the galactic pole. In between, they precess about an intermediately tilted pole. In this regime, however, the precession period for circular orbits counts in hundreds of Gyrs, meaning that these orbits hardly change at all in practice.
These dramatically long timescales are greatly reduced for eccentric orbits. The dynamics is integrable in the small and large semi-major axis regimes, when one kind of perturbation strongly dominates the other one. Between about 800 and 1100 au, however, the phase space is almost completely filled with chaos, from very eccentric down to nearly circular orbits. The chaotic diffusion timescales are quite large, but they decrease with the semi-major axis value. For semi-major axes as small as 800 au, the joint action of planets and galactic tides can produce a chaotic diffusion of perihelion distance q over tens of astronomical units in a few billion years. Even though frozen orbital regions do exist (this is the case for Sedna and 2012 VP113), we conclude that this region is far from being inert, contrary to what one could expect from the weakness of the perturbations.

In 4.5 Gyrs, the galactic tides have noticeable effects down to semi-major axes of about 500 au. At 2000 au, the induced orbital excursions can exceed 400 au in perihelion distance and 80° in inclination. Interestingly, the largest changes of perihelion distance are reached for ecliptic inclinations I in the ranges [45°, 55°] and [125°, 135°]. These ranges are delimited by the two pairs of strong resonances (ω + Ω, 2ω + Ω) and (2ω − Ω, ω − Ω), which ease perihelion variations. When monitoring swarms of particles over 4.5 Gyrs, we also observe accumulations of orbital angles in localised zones, for semi-major axes larger than about 500 au. Indeed, due to the long timescales at play, particles do not have time to drift apart along their respective dynamical paths; instead, they spread in a non-uniform manner, creating (temporary) overdensities. Such accumulations are a relic of the initial distribution of small bodies, but they have little observational consequence at this stage, considering the very distant objects involved.

In conclusion, when mapping the truly "inert" region (∆q < 10 au and ∆I < 5° over 4.5 Gyrs), we find that it is remarkably small. The precise limits of the inert region can be found in Figs. 22 and 23. It is composed of either: (a) nearly circular orbits with a ≳ 500 au, (b) orbits with 500 ≲ a ≲ 1500 au and perihelion close to the planetary region, or (c) orbits with a ≲ 500 au (as long as they are unaffected by mean-motion resonances). Moreover, orbits are truly inert only if their perihelion distance is high enough to avoid planetary scattering; in region b, this only leaves a thin inert zone. Out of the inert region, the excursions mentioned above in perihelion distance and inclination, as well as the angular accumulations, are direct effects of the galactic tides. They are quite noticeable after 4.5 Gyrs, and can therefore be decisive when classifying observed bodies as "detached" or not, or when monitoring samples of them, as is done for P9 simulations (see e.g. Batygin et al. 2019 and references therein). Hence, we advocate including the galactic tides in numerical simulations of trans-Neptunian objects with semi-major axes larger than 500 au.
(B.3)

In these expressions, µ_P, J_2, and R_P are the gravitational parameter, the flattening coefficient, and the equatorial radius of the host planet, respectively, whereas µ_⊙, a_⊙, and e_⊙ are the gravitational parameter, the semi-major axis, and the eccentricity of the Sun, respectively. We keep the same notations as in the rest of the article (see e.g. Appendix A) in order to emphasize the similarities with the distant trans-Neptunian case. This time, however, the indexless orbital elements are measured with respect to the equatorial plane of the host planet, and the G index refers to the orbital plane of the Sun.

The overall Hamiltonian function in Eq. (B.1) should be compared with Eq. (13). It is well known that the averaged quadrupolar effect of inner bodies has the same form as a J_2 flattening of the central body (see e.g. Tremaine et al. 2009). As such, ε_J is strictly equivalent to the parameter ε_P4 used above (see Eq. (10)), and K_J is identical to H_P4 (see Eq. (11)). Furthermore, we see here that ε_⊙ has the same a² multiplier as ε_GV (see Eq. (10)), and that K_⊙ has nearly the same form as H_GV (see Eq. (12)), apart from the additional term −e²/4. The two problems are therefore not strictly equivalent, unless e = 0. Hence, the results obtained by Tremaine et al. (2009) for strictly circular orbits remain valid in the trans-Neptunian case, such as the location of the equilibrium points (see Fig. 3) and their stability against inclination variation.

However, the stability of the circular equilibrium points (reusing the nomenclature of Tremaine et al. 2009) against eccentricity growth is different. This can be shown using linearised equations around the e = 0 equilibrium (and a set of variables that are not singular for circular orbits). In the satellite case, the circular coplanar equilibrium is stable for any a as long as the obliquity of the host planet is smaller than 68.875° (see Tremaine et al. 2009). In the trans-Neptunian case, on the contrary, using the inclination of the ecliptic from Table 1 (which is smaller than 68.875°), we find that the circular coplanar equilibrium is unstable against eccentricity growth for a between 875 and 1509 au. This instability, however, is absolutely unable to affect real bodies because it acts on a dramatically long timescale (see Fig. B.1). Finally, the stability of the circular orthogonal equilibrium against eccentricity growth is also different from the satellite case. Indeed, in the trans-Neptunian case, we obtain a stability condition which is twice the analogous limit obtained in the satellite case; this corresponds to a semi-major axis of about 904 au. As before, though, the timescales involved here have no physical relevance.

Appendix C: Laplace plane for eccentric orbits

Strictly speaking, the Laplace plane is defined for circular orbits (see Sect. 3 and Tremaine et al.
2009). For eccentric orbits, the two degrees of freedom are fully coupled and the orbit does not precess around a fixed pole. However, one can still get an idea of the geometry of the phase space with e > 0 by plotting the level curves of the Hamiltonian in the (I_G, Ω_G) plane for different values of (e, ω_G). As can be guessed from the expression of F (see Eq. (13)), we obtain strictly the same geometry as for e = 0 (Fig. 3), but where the equilibrium points at Ω_G = 0 and π, denoting the classic Laplace plane, are shifted in inclination. For example, Figs. C.1 and C.2 show the inclination of this "instantaneous Laplace plane" with respect to (e, ω_G) for two given values of the semi-major axis. We see that the instantaneous equilibrium plane is close to the classic Laplace plane, except for very eccentric orbits, where it is severely bent towards the ecliptic (which it reaches for e → 1); this property can be seen in Figs. C.1 and C.2.

Appendix D: Dynamics in the weakly perturbed planetary regime

We consider the dynamical system with Hamiltonian F = ε_P2 H_P2 + ε_GV H_GV, where the expressions for each part can be found in Eqs. (A.4) and (A.5). If the planetary perturbations dominate over the galactic tides, ε_P2 H_P2 acts as the integrable dominant part, whereas ε_GV H_GV acts as a small perturbation. Luckily, the unperturbed part is already expressed in action-angle coordinates. This means that, neglecting terms in O(ε_GV²), the long-term behaviour of the system is simply given by the average of F over the non-resonant angles. This allows us to investigate the effects of each term one by one, and to study the structure of the flow in the vicinity of the resonances:

(i) The first term of H_GV does not include the angles; it therefore acts only as a small modulation of the precession velocities of ω and Ω governed by ε_P2 H_P2 (see Eq. (15) and Fig. 6).

(ii) The second term of H_GV is factored by cos Ω. Strictly speaking, this term cannot be called "resonant", because it features no separatrix. It actually corresponds to the emergence of the classic Laplace plane: the orbit does not precess exactly about the ecliptic pole, as it would for ε_GV = 0, but about a tilted pole. By averaging over ω, the momentum G becomes a constant of motion, and we retrieve exactly the "eccentric Laplace plane" introduced in Appendix C, which rules the dynamics of the variables (I, Ω).

(iii) All the remaining terms of H_GV correspond to resonances and libration zones for ω and Ω. Assuming that there is a single resonance, the resonant angle can be taken as a new independent variable by a linear canonical change of coordinates (unimodular matrix). For instance, we consider the resonance ω + Ω. From the ecliptic Delaunay coordinates (P_ω, P_Ω, ω, Ω), the corresponding change of coordinates is

σ = ω + Ω,  γ = Ω,  Σ = P_ω,  Γ = P_Ω − P_ω.  (D.1)

After averaging over the circulating angle γ, we end up with one constant of motion Γ, and one degree of freedom (Σ, σ). Table D.1 gives the constants associated to all resonances that appear at first order in ε_GV (that is, the ones directly appearing in the expression of H_GV in Eq. (A.5)). The resonance centre is mostly governed by the unperturbed Hamiltonian ε_P2 H_P2: it is fixed in inclination (see Fig. 6), but goes from e = 0 to e = 1 when varying the value of the constant quantity given in Table D.1.
Since the resonances are thin in the weakly perturbed problem, we can use the pendulum approximation. This amounts to using a Taylor expansion of the Hamiltonian around the resonance centre Σ_0, keeping terms up to degree 2 for the unperturbed part, and up to degree 0 for the perturbation. The resulting Hamiltonian has the form of a pendulum of centre Σ_0 and half width 2√|β/α|. Using the constants of motion in Table D.1, the widths can then be expressed either in terms of the eccentricity or in terms of the inclination.

Figures 7 and 8 show the location and widths of all the resonances that appear at first order in ε_GV, computed analytically using this perturbative approach. In these figures, instead of using the value of the constant quantities as parameters, we directly use the perihelion distance of the resonance centre (this allows us to draw all resonances on a single graph). In the pendulum approximation, the upper and lower half widths are equal when they are expressed in canonical coordinates (i.e. Σ), but, as shown by Figs. 7 and 8, this is not necessarily the case in I nor in q (and we indeed observe asymmetric resonances in Sect. 4.4).

The hexadecapolar planetary term (see Eq. (11)) can easily be incorporated using this perturbative approach; it only slightly changes the widths of the ω libration island.

Appendix E: Maximum orbital variations reachable in 4.5 Gyrs from the secular action of the planets and galactic tides

In Sect. 5.3, we map the maximum possible variation of q and I in 4.5 Gyrs, according to the location in the (a, q, I) space.

Figure and table captions:

Fig. 1. Naive view of the inert Oort cloud. It is defined as the region where neither the planets nor the galactic tides have substantial effects on the orbit of small bodies. In this schematic view, the planet scattering makes small bodies move horizontally, whereas the galactic tides and the isolated mean-motion resonances with planets (labelled "planet resonances" on the graph) make them move vertically. The orbital inclination of small bodies is not considered in this picture, though it is known to play a role as well (Saillenfest et al. 2017a).

Fig. 3. Level curves of the Hamiltonian function F (Eq. (13)) for a circular orbit. The semi-major axis taken as parameter is a = 900 au, and the level curves are shown in black. The two graphs show the same level curves for two sets of variables (in order to avoid being misled by coordinate singularities). The dotted curves represent the inclination ψ of the ecliptic, and the blue spots show the location of the ecliptic plane. The red curve shows an example of a nearly circular trajectory precessing around the normal to its local Laplace plane, obtained by numerical integration. The eccentricity slightly varies, creating a deviation from the initial level curve. The oscillation period around the fixed centre is about 200 Gyrs, in accordance with Fig. 5.

Fig. 4. Inclination of the classical Laplace plane with respect to the planetary plane.

Fig. 6. Precession velocity of Ω and ω in the planetary regime. The colour represents the velocity scale from negative values in blue to positive values in red, with the same colour scale for Ω and ω. Both Ω and ω attain their maximum absolute value at I = 0° and 180°. The locations where the integer combinations k ω + j Ω vanish (limited to k, j < 3) are shown by horizontal lines. The inclination values on the left are obtained from Eq.
(15), and the corresponding constant angles are written on the right.

Fig. 8. Same as Fig. 7 for a = 700 au. The hatched regions mean overlap.

Fig. 10. Poincaré sections of the dynamics driven by both the planetary and galactic perturbations (Hamiltonian from Eq. (13)). The semi-major axis taken as parameter is a = 500 au. The colour scale shows the maximum orbital change rate between two successive points (see titles), and the grey zones are forbidden. Top: section for F = 2 × 10⁻⁹ au² yr⁻² at Ω_G = 0 decreasing. The islands located at e ≈ 0.8 are due to the resonance 2ω + Ω, and the islands located at e ≈ 0.2 are due to librations of ω while I ≈ 63° (Lidov-Kozai mechanism). Bottom: section for F = 2 × 10⁻⁸ au² yr⁻² at ω_G = 0 decreasing. The chaotic bands are due to the overlap of the resonances ω − Ω and 2ω − Ω (for I < 90°), or ω + Ω and 2ω + Ω (for I > 90°), and the islands located at I ≈ 90° are due to librations of Ω (orthogonal Laplace plane).

Fig. 11. Same as Fig. 10 for a = 600 au. Left: section for F = 2 × 10⁻⁹ au² yr⁻² at Ω_G = 0 decreasing. From top to bottom, the largest islands are due to the resonance 2ω + Ω, the libration of ω at I ≈ 63°, and the resonance 2ω − Ω. Right: section for F = 5 × 10⁻⁹ au² yr⁻² at ω_G = 0 decreasing. The islands located at I ≈ 90° are due to librations of Ω.

Fig. 15. Same as Fig. 10 for the parameters of 2015 TG387. The points of its trajectory that cross the section are represented in green. Left: nominal orbital elements by Sheppard et al. (2019). Right: semi-major axis increased by 3σ. In both panels, the largest islands are due to the resonances ω + Ω (above) and 2ω + 3Ω (below).

Fig. 18. Temporal evolution of orbits with a = 1800 au. The initial conditions are I = 0° and ω + Ω equally distributed in [0, 2π]; the initial value q_i of the perihelion distance is written on the right of each graph. The vertical black line marks the 4.5 Gyrs time, at which our density maps (e.g. Fig. 17) are computed.

Fig. 20. Minimum value of the semi-major axis above which particles initially sampled with q ∈ [40, 60] au spread beyond q = 80 au in 4.5 Gyrs, with respect to their initial ecliptic inclination. The horizontal bars show our twenty 0.1-width slices of cos I, connected in their centre by a full curve. The locations of the four major resonances for the weakly perturbed planetary Hamiltonian are indicated by dotted black lines.

Fig. 21. Same as Fig. 19, but showing the distribution of the ecliptic inclination.
Fig. 22. Limits of the inert region in the (a, q) plane. Each column corresponds to a different value of the initial ecliptic inclination (see titles). The colour scale represents the maximum possible orbital variations in 4.5 Gyrs: the top row shows the variation of ecliptic inclination, and the bottom row shows the variation of perihelion distance (see labels on the right). The black level corresponds to a variation of 5° in inclination (top row) or 10 au in perihelion distance (bottom row). Below the black level, the region can be considered inert. See text for the white symbols.

Fig. B.1. Period of small oscillations about the two kinds of circular Laplace equilibrium. Top: coplanar equilibrium; bottom: orthogonal equilibrium. In the grey zone, the equilibrium point is unstable against eccentricity growth; accordingly, the oscillation period of eccentricity is replaced by the period T after which the eccentricity is multiplied by exp(2π) ≈ 535 (black curve).

Fig. C.1. Inclination of the classic Laplace plane with respect to the ecliptic in the eccentric case. It is obtained by considering fixed values of (e, ω_G). The semi-major axis taken as parameter is a = 1000 au. The two panels show the same level curves for two sets of variables. Except in the e = 0 case, this inclination is only "instantaneous" because e and ω_G actually vary. Dark colours are low inclinations, and light colours are high inclinations, as shown by the labelled levels. The thick black level shows the inclination value that is equal to the one obtained in the circular case. The mean inclination, obtained by averaging over ω_G, corresponds to the vertical line ω_G = π/4 on the top panel, or the diagonal line on the bottom panel.

Table D.1. Resonance centres and constants of motion arising from the dynamics in the vicinity of each resonance appearing at first order in ε_GV. The resonance centre given here is the value of cos I. The eccentricity at the resonance centre depends on the value of the constant quantity, taken as parameter (right column). The retrograde cases are obtained by changing the sign of cos I and of Ω.

Fig. E.1. Maximum possible orbital variations produced in 4.5 Gyrs in the (q, I) plane. Each column corresponds to a different value of the semi-major axis (see titles). This figure is to be compared to Figs. 7 and 8, showing the location and widths of the main resonances obtained analytically.

Table 1. Values used for the physical constants of the problem.
Latent Relational Model for Relation Extraction

Analogy is a fundamental component of the way we think and process thought. Solving a word analogy problem, such as mason is to stone as carpenter is to wood, requires the ability to recognize the implicit relations between the two word pairs. In this paper, we describe the analogy problem from a computational linguistics point of view and explore its use to address relation extraction tasks. We extend a relational model that has been shown to be effective in solving word analogies and adapt it to the relation extraction problem. Our experiments show that this approach outperforms the state-of-the-art methods on a relation extraction dataset, opening up a new research direction in discovering implicit relations in text through analogical reasoning.

Introduction

Relation Extraction (RE) is a very important capability of Natural Language Processing (NLP) systems. It identifies semantic relations between pre-identified entities in text. RE is particularly useful for Knowledge Base Population (KBP), the task of populating Knowledge Bases (KBs) whose schemata have been previously defined by a set of types and relations, exploiting information from a text corpus, as well as for building KBs from scratch. For instance, if the target relation is presidentOf, a RE system should be able to detect an occurrence of this relation between the entities Donald Trump and United States in the sentence "Trump issued a presidential memorandum for the US".

Although neural-based RE approaches show good performance, we contend that they present two limitations. First, they do not fit well in limited domains, where only a few seed examples are available. Complex architectures have many parameters, therefore they require a considerable amount of training data in order to learn good representations. This is not surprising, because these approaches rely entirely on the power of deep neural networks, which amounts to blind feature learning that ignores the linguistic and cognitive insights this problem requires. Furthermore, the generalization capability of these approaches is limited to the relation types seen during the training phase, thus they are not applicable to discovering relations in new domains or to building a new relational data source from scratch.

We approach the RE task from a different angle by addressing it as an analogy problem. Solving analogies, such as Italy:Rome = France:Paris, consists of identifying the implicit relations between two pairs of entities. The research hypothesis that we explore throughout this work is that a method used to recognize analogies can be useful to discover relations in text. In other words, relation extraction and word analogy are "two sides of the same coin". These concerns lead to the following research questions: [RQ1] How can relation extraction be addressed as an analogy problem? [RQ2] Can a relational model compete with the state-of-the-art RE methods? In order to answer these questions, we propose an Analogy-based Relation Extraction System (ARES) that exploits a relational model [28] which still holds the best scores in solving word analogies. Our method projects entity pairs into a relational vector space built by embedding the implicit properties observed in the text about how two entities are related. In this paper, we formalize relation extraction as an analogy problem through its geometric interpretation in the relational vector space.
We show that, following this idea, it is possible to address RE in different scenarios (unsupervised, semi-supervised, supervised) through the same relational representations. Then, we measure the performance of our approach on a popular dataset designed for distantly supervised RE. The evaluation shows that ARES, with a simple linear classifier, outperforms the previously known approaches. This achievement opens up new promising research directions for relation extraction by exploiting analogical reasoning.

The paper is structured as follows: Section 2.1 describes the state-of-the-art and the recent progress in RE. In Sect. 2.2 we introduce the word analogy problem and the related approaches. In Sect. 3 we describe ARES, and in Sect. 4 we provide an evaluation of it in contrast with the most popular distantly supervised RE approaches. Section 5 concludes the paper, highlighting possible new directions for RE.

Relation Extraction

Given two entities e_1 and e_2 that occur in a sentence S, Relation Extraction (RE) is the process of understanding the meaning of S and extracting a triple r(e_1, e_2), where r represents the semantic relation between the two entities. In the literature, several paradigms have been proposed to address the RE problem, which differ in terms of input, output and technique adopted, such as pattern-based [10], bootstrapping [1], supervised [12,21,26] or OpenIE [18]. A promising idea, called distant supervision [20], consists in using existing KBs, like Freebase [2], as a source of supervision without any human intervention. The pairs of entities that belong to a certain relation in the KB are linked with their surface forms in the textual corpus given as input. For each pair, all sentences in the corpus in which the two entities occur together are collected. However, the wrong labeling caused by the automatic matching between the entity pairs in the KB and in the textual content, as well as the overlapping relations due to the intrinsic multi-graph structure of the KBs, require more complex training and prediction phases. This paradigm is commonly addressed as a multi-instance [23] and multi-label [11,27] classification task. The deep neural network models proposed in [6,15,32] attempt to solve the multi-label and/or multi-instance setting in an end-to-end fashion through neural-based architectures, with the aim of avoiding the error propagation that could be raised by the use of lexical and syntactic tools for feature extraction. Another method, the so-called universal schema [24,30], faces RE by combining OpenIE and KB relations. This method is related to ours, in the sense that a pair-relation matrix is built, but it differs in the underlying idea. Indeed, the goal of the universal schema is to address RE using a collaborative filtering approach typically adopted in recommender systems.

Word Analogy

The word analogy task, namely the proportional analogy between two word pairs such as a : b = c : d, has been popularized by [19] with the aim of showing the capability of their neural-based model, the so-called word2vec, in discovering "linguistic regularities" just using vector offsets (king − man + woman = queen is the most cited example). Several studies [14,16] have been proposed to deeply analyze the use of word embeddings and vector operations in attempting to achieve better performance on the same Google analogy dataset. The works in [5,31] explore the use of word vectors to model semantic relations.
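To make the vector-offset approach mentioned above concrete, the minimal sketch below ranks candidate answers to a : b = c : ? by cosine similarity to the offset vector b − a + c. The embedding dictionary and the helper name solve_analogy are illustrative assumptions of this sketch, not part of any cited system.

```python
import numpy as np

def solve_analogy(a, b, c, embeddings, exclude=None):
    """Return the word whose vector is most similar (cosine) to b - a + c."""
    exclude = set(exclude or [a, b, c])
    target = embeddings[b] - embeddings[a] + embeddings[c]
    target = target / np.linalg.norm(target)
    best_word, best_score = None, -1.0
    for word, vec in embeddings.items():
        if word in exclude:
            continue
        score = float(np.dot(vec, target) / np.linalg.norm(vec))
        if score > best_score:
            best_word, best_score = word, score
    return best_word, best_score

# Usage, with pretrained vectors loaded elsewhere into a dict {word: np.ndarray}:
# print(solve_analogy("man", "king", "woman", embeddings))  # ideally -> ("queen", ...)
```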
However, the word analogy task was originally addressed by [29], who investigated several similarity measures on the Scholastic Aptitude Test (SAT) dataset, composed of 374 multiple-choice analogy questions. Given mason : stone, this task consists of selecting the right analogy among 5 possible choices (carpenter : wood in this case). The authors provide an interesting argument regarding the different types of similarity, attributional and relational, and their use in facing the word analogy problem. The lesson learned is that attributional similarity, typical of word space models [13,22,25], is useful for synonym detection, word sense disambiguation and so on. Instead, relational similarity fits better in understanding word analogies. This intuition is confirmed by [3], who shows that word2vec is less effective on the SAT dataset. Conversely, the relational model proposed in [28] achieves a performance (56.1%) close to the human level (57.0%) on the same benchmark. Therefore, in this work we extend and adapt this relational model in order to address the relation extraction problem.

Methodology

In this section we present ARES and explore its use to face the RE problem through analogical reasoning. First, we describe the Latent Relational Model (LRM), the foundation of our method. Then, we show that an extensional representation of the relations can be provided through the geometric interpretation of analogy between entity pairs. Finally, we explore the application of ARES to different RE scenarios.

Latent Relational Model

LRM provides an intensional representation of relations by embedding the implicit properties observed in the text about how two entities are related. This idea relies on the distributional hypothesis [9], which finds its roots in psychology, linguistics and statistical semantics: "linguistic items with similar distributions have similar meanings". Given a textual corpus T, the aim is to build a vocabulary V, composed of the unique entity pairs extracted from T, and a lookup table M_{n,k}, with n = |V|, consisting of k-dimensional latent relational vectors associated to each element of V. The idea of building a relational vector space model was originally proposed in [28,29] to solve a word analogy task. We extend and adapt it to address the RE problem. The main differences concern the use of an entity-entity vocabulary, instead of a word-word one, and a different way to extract the contexts around a pair, as explained in the following paragraphs.

Entity Pair Vocabulary. Given a textual corpus T, the first step is to build a vocabulary V = {(X_1, Y_1), . . . , (X_n, Y_n)}, where (X_i, Y_i) are the distinct entity pairs that occur together in at least one sentence. The question is how to identify the atomic lexical units in T that are considered as entities (X_i, Y_i). This can be done in different ways based on the specific RE scenario. For instance, in unsupervised RE a Named Entity Recognizer (NER) or, more generally, a noun phrase chunker can be adopted. It depends on the types of relations to be extracted. In distant supervised RE, V can be built using entities coming from the KB linked in the text.

Entity Pair Contexts. Once the vocabulary V is built, the next step is to extract the contexts around each entity pair when they occur together in the same sentences across the corpus T. A careful choice of the contexts is fundamental because they are the properties that define the intensional representation of a relation.
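A minimal sketch of the vocabulary-building step just described, assuming the sentences have already been annotated with entity mentions (e.g. by a NER tagger). The data format and the helper name build_pair_vocabulary are our own assumptions, used only for illustration.

```python
from collections import defaultdict
from itertools import combinations

def build_pair_vocabulary(sentences):
    """
    sentences: iterable of dicts such as
        {"tokens": [...], "entities": [("Rome", "LOC"), ("Italy", "LOC"), ...]}
    Returns a dict mapping each entity pair to the list of sentences (its "bag")
    in which the two entities co-occur.
    """
    bags = defaultdict(list)
    for sent in sentences:
        for e1, e2 in combinations(sent["entities"], 2):
            # keep the surface order found in the sentence; which entity came
            # first is itself one of the features extracted later
            bags[(e1[0], e2[0])].append(sent)
    return bags
```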
Differently from [28,29], we adopt a richer set of lexical and syntactic features extracted from each sentence, as proposed in [20]. Given an entity pair, from each sentence in which the pair occurs we extract:
1. The entity types provided by the NER;
2. The sequence of words between the two entities;
3. The part-of-speech tags of these words;
4. A flag indicating which entity came first;
5. An n-gram to the left of the first entity;
6. An n-gram to the right of the second entity;
7. A dependency path between the two entities.
If an entity pair occurs in more than one sentence, we collect the features extracted from each sentence into a single bag. It should be noted that this may involve the wrong labeling issue when using a distant supervised approach, which requires a multi-instance setting to be addressed [23]. Instead, in our model the context aggregation helps to provide a more accurate intensional representation of the relations between an entity pair.

Relational Vector Space Model. In this step a sparse matrix X_{n,m} is built by mapping the n entity pairs in V to the rows and the m distinct features/contexts extracted in the previous step to the columns. Each element X_{i,j} represents the weight of the j-th context in relation to the i-th entity pair. This weight might be computed using different well-known weighting schemes in information retrieval [4] and distributional semantic models [13], such as binary, tf-idf, entropy and so on. Indeed, our pair-context matrix is the relational version of the classic document-term or term-term vector space models. There is no theoretical motivation for which weighting scheme is better: the choice is empirical, and depends on the specific purpose and on the distribution of the information in the textual corpus. In our experiments we found that, when applied to the RE task, tf-idf weights tend to produce more precise results, while the binary scheme achieves a recall-oriented performance.

Matrix Factorization. Since X_{n,m} is a highly sparse matrix, this representation is not able to catch the implicit meaning across the textual contexts which express the same semantics. For instance, the phrases "A is the author of B" and "C wrote D" have the same meaning w.r.t. the relation authorOf, but the patterns "is the author of" and "wrote" are represented as separate features in X. As a consequence, the vectors related to the pairs (A,B) and (C,D) in X are orthogonal even if they convey the same concept. In line with [4,13,28], we address this issue by applying Singular Value Decomposition (SVD) to the sparse matrix X. SVD decomposes a matrix X into a product of three matrices UΣV^T, where U^T U = I = V^T V and Σ is a diagonal matrix of sorted singular values having the same rank r as X. Let Σ_k, with k ≪ r, be the truncated version of Σ obtained by considering only the first k singular values; the SVD finds the best matrix X_k = U_k Σ_k V_k^T by minimizing the cost function ||X − X_k||_F. We adopt the fast and scalable algorithm proposed in [8]. Thus, the SVD applied to X produces a low-rank approximation X ≈ X_k = U_k Σ_k V_k^T, where k is a hyper-parameter. For our purpose, we are mainly interested in the matrices (U_k Σ_k)_{n,k} and V_{k,m}. Indeed, the lookup table M_{n,k} that we are looking for is obtained by M_{n,k} = U_k Σ_k. Each i-th row in M is a k-dimensional latent relational vector associated to an entity pair in V. SVD allows us to take into account the global distribution of the pair contexts in the corpus and to understand the implicit relationships among them.
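The sketch below assembles the sparse pair-context matrix X and factorizes it with a truncated SVD, following the construction described above. It uses scikit-learn's DictVectorizer and TruncatedSVD purely for illustration (the paper cites a different fast SVD algorithm [8]), and it reduces feature extraction to a bag of string-valued contexts per pair, which is an assumption of this sketch.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.decomposition import TruncatedSVD

def build_lrm(pair_context_bags, k=2000, binary=True):
    """
    pair_context_bags: dict {(e1, e2): ["TYPE:LOC-LOC", "BETWEEN:capital of", ...]}
    Returns (pairs, M, (vectorizer, svd)) where M[i] is the k-dim latent relational
    vector of pairs[i], i.e. the lookup table M = U_k * Sigma_k.
    """
    pairs = list(pair_context_bags)
    counts = [
        {ctx: (1 if binary else bag.count(ctx)) for ctx in set(bag)}
        for bag in (pair_context_bags[p] for p in pairs)
    ]
    vectorizer = DictVectorizer(sparse=True)
    X = vectorizer.fit_transform(counts)      # sparse n x m pair-context matrix
    svd = TruncatedSVD(n_components=k, random_state=0)
    M = svd.fit_transform(X)                  # equals U_k * Sigma_k, shape (n, k)
    # svd.components_ has shape (k, m): it plays the role of V_{k,m} in the text,
    # so unseen pairs can be projected with svd.transform(X_new) = X_new V^T.
    return pairs, M, (vectorizer, svd)
```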
This latent information is embedded into the k-dimensional dense vectors. On the other hand, V_{k,m} contains the latent vectors of each of the m features/contexts. Thus, the SVD has the big advantage of projecting the pairs and the contexts into the same vector space. The role of V_{k,m} is crucial for two reasons. Firstly, in supervised RE the k-dimensional vectors of new entity pairs in the test set are obtained by M_{n,k} = X V_k^T without retraining the SVD. Secondly, and most interestingly, it enables transfer learning for domain adaptation: the SVD can be applied to a pair-context matrix X_Web built on a large web-scale corpus, so that V_k^Web condenses rich prior knowledge that can be infused into a new domain just using a matrix multiplication [7]. Finally, many other techniques can be applied to solve the sparsity issue, such as Non-negative Matrix Factorization (NMF) or deep neural networks, like Auto-Encoders (AE), that learn latent representations through a non-linear dimensionality reduction. A comprehensive comparison of all these methods, as well as the application of transfer learning for domain adaptation, are out of the scope of this work, but they surely represent very promising directions for future investigations.

Geometric Interpretation of Analogy

Through LRM, each entity pair occurring in the corpus is projected into a relational vector space, therefore it is possible to exploit its geometric interpretation to measure similarities between entity pairs. Thus, we can assert that there is an analogy between two pairs of entities (A, B) and (C, D) iff their latent vectors are close in the relational vector space. For instance, we can measure this proximity with the angle between the relational vectors using the cosine similarity, where t is a threshold that establishes the breadth of the analogy between the two pairs. This intensional representation of the relations models well the fuzzy nature of a relation between two entities. In fact, let us first consider the boundary cases with the cosine similarity equal to 1 and 0. A value of 1 means that (A, B) and (C, D) share exactly the same properties observed in the text, therefore the pairs are strictly analogous. Instead, the value 0 means that their vectors are orthogonal, so we can state that the pairs are not analogous at all. However, since the range of the cosine is [−1, 1], infinite degrees of analogy might be defined between two entity pairs, and this aspect depends on the value of the threshold t in Eq. (3). This is useful to define the granularity of the type of a relation: higher values of t mean fine-grained relations, while lower values mean relations that are more inclusive and coarse-grained. For instance, consider the following sentences: (1) Rome is the capital of Italy; (2) The capital of France is Paris; (3) Brooklyn is a borough of New York. In a hypothetical vector space, the latent vectors r_(Italy,Rome) and r_(France,Paris) are close because they share the same context "capital of". On the other hand, r_(NewYork,Brooklyn) is farther from the other two vectors, but it is not orthogonal because the concept of "borough of" is semantically related, in some way, to "capital of". Indeed, both patterns "borough of" and "capital of" express the meaning of inclusion between two locations. Therefore, we can say that Italy:Rome = France:Paris. But what about Italy:Rome = New York:Brooklyn? This depends on the granularity of the relation that we are taking into account.
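A sketch of the geometric analogy test just discussed: two pairs are considered analogous when the cosine of their latent vectors exceeds the threshold t, and unseen pairs are mapped into the same space via V. The function names are ours; vectorizer and svd are the objects returned by the previous sketch.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def analogous(pair_a, pair_b, pair_index, M, t=0.7):
    """True iff cos(r_a, r_b) >= t, i.e. the two pairs form an analogy at granularity t."""
    return cosine(M[pair_index[pair_a]], M[pair_index[pair_b]]) >= t

def project_new_pairs(context_bags, vectorizer, svd):
    """Map unseen entity pairs into the latent space: M_new = X_new V^T."""
    counts = [{c: 1 for c in set(bag)} for bag in context_bags]
    X_new = vectorizer.transform(counts)   # reuse the training feature space
    return svd.transform(X_new)
```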
If we want to model the relation capital, then we can say that (Italy, Rome) and (New York, Brooklyn) are not analogous. Instead, the result changes if we consider a coarse-grained relation like contains. The different scopes of capital and contains depend on the value of the threshold t.

Relation Extraction as an Analogy Problem

Our aim is to use the geometric interpretation of analogy to emulate the task of identifying tuples in texts that share the same relations. Formally, given as input a textual corpus T and a semantic relation R, the problem of RE is to extract all pairs of entities that have the relation R in the corpus. Therefore, the output of RE is an extensional representation of the relation R, listing all entity pairs in the corpus that belong to R. The question is: how is R defined in T? Let us consider M_T as the LRM built on the corpus T. Based on the geometric interpretation of analogy described in Sect. 3.2, we can define the relation R in an extensional way through the intensional vector representations in M_T as follows:

Definition 1. A semantic relation R is a region in a relational vector space M_T that outlines the boundaries among those entity-pair vectors that are analogous to each other.

Since computing the analogy, hence the similarity, of all possible combinations of the entity pair vectors is infeasible, RE is reduced to an optimization problem of finding the boundaries of that region in M_T. In the next paragraphs we show the use of ARES to address the different RE scenarios.

Unsupervised Relation Extraction. In the absence of training examples, a clustering algorithm can be applied on M_T in order to find centroids C_1 . . . C_n. For instance, the k-means or DBSCAN algorithms can be used, depending on whether or not we want to fix the number of centroids. A centroid C_i represents the relational vector that condenses the meaning of a relation R_i. Thus, given the relational vectors of the entity pairs in M_T, the relation R_i in the corpus T is defined as the set of pairs whose vectors lie within a cosine similarity t of the centroid C_i. The value of t is a user-defined parameter that determines the scope of the region around the centroid vector and hence the granularity of the relation R_i.

Semi-supervised Relation Extraction. ARES can also be adopted when a small set of seed pairs that express a relation R is provided as input. Let {(X_1, Y_1), . . . , (X_n, Y_n)} be a set of seed pairs, with n small; then the centroid vector C_R is obtained by averaging the relational vectors in M_T related to each input pair. In this few-shot setting, ARES can be applied in an information retrieval style by finding the nearest neighbours of the centroid C_R used as a query. Thus, the pairs of entities in the corpus T that have the relation R are extracted by ranking the entity pairs based on their similarity with the centroid/query C_R; a user can fix the value of t in order to cut the pairs that have a similarity below that threshold.

Supervised Relation Extraction. In a supervised RE setting, a bigger training set of seed entity pairs is available. In particular, distant supervision ensures a large amount of training data by exploiting existing relational data sources, like Freebase, without any human intervention. Since an entity pair can belong to more than one relation at the same time, distant supervised RE is commonly addressed as a multi-label classification task where each relation is a class.
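The two lighter-supervision scenarios described earlier in this section admit a direct implementation on top of the latent vectors M: clustering for the unsupervised case and a centroid query for the few-shot case. The sketch below uses k-means and cosine similarity; parameter values and helper names are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def unsupervised_relations(M, n_relations=10, t=0.6):
    """Cluster pair vectors; each cluster centroid stands for one relation R_i."""
    km = KMeans(n_clusters=n_relations, n_init=10, random_state=0).fit(M)
    sims = cosine_similarity(M, km.cluster_centers_)        # (n_pairs, n_relations)
    members = {i: np.where(sims[:, i] >= t)[0] for i in range(n_relations)}
    return km.cluster_centers_, members

def semi_supervised_query(M, seed_indices, t=0.6):
    """Average the seed-pair vectors into a centroid and rank all pairs against it."""
    centroid = M[seed_indices].mean(axis=0, keepdims=True)
    sims = cosine_similarity(M, centroid).ravel()
    ranked = np.argsort(-sims)
    return [i for i in ranked if sims[i] >= t], sims
```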
In this setting, ARES exploits the training set in order to find the region where the entity-pair vectors are analogous to each other, as stated in Definition 1. For instance, a Support Vector Machine (SVM) classifier trained on the relational vectors of the entity pairs in the training set finds a hyperplane in the hyperspace defined by M_T. In fact, the hyperplane delimits a region of the vector space M_T, grouping the analogous entity-pair vectors for a specific relation. During the test phase, a new entity pair is projected into M_T and the classifier predicts to which region the new instance belongs.

Experiment

This section describes our evaluation, providing a comparison with the state-of-the-art methods and a further analysis in order to show the flexibility of our approach.

Experimental Setting

We evaluate ARES on the real-world dataset NYT10 [23], which is commonly used by the community to evaluate distant supervised RE methods. This dataset was created by aligning Freebase tuples with the New York Times (NYT) corpus from the years 2005-2007. We adopt the original held-out setting that consists of 51 relations/classes. The training set has 4700 positive and 63596 negative relation instances, while the test set has 1950 positive and 94917 negative examples. We build our LRM using only the sentences in the training set. For LRM we adopt binary weights and we fix the dimension of the relational vectors to 2000. The pair contexts are extracted as explained in Sect. 3.1. We train a set of linear SVMs on LRM in a one-vs-rest multi-label setting with a penalty equal to 10, chosen with a 3-fold stratified cross validation on the training set. In the prediction phase, we first project the unseen entity pairs into the latent space using LRM as described in Sect. 3.1, then we predict the scores for each relation/class based on the decision functions of the SVMs. We evaluate the performance using the Precision-Recall curve and P@n metrics. During our experiments we also tried other non-linear functions, such as polynomial and rbf kernels, and a Multi-Layer Perceptron (MLP) with a sigmoid function as last layer to avoid the one-vs-rest strategy. These classifiers show more stable performance across the classes compared with a linear SVM when learned on our LRM. However, using a simple classifier, without many (hyper)parameters, allows us to evaluate more easily the quality of our relational representations, which is the research question of this study.

Results and Discussion

We compare ARES with the popular feature-based and neural-based distant supervised RE approaches. MINTZ++ [20] is the first distant supervised method for open-domain KBs and uses a logistic regression classifier; we adopt the multi-label version. MIML-RE [27] is a multi-instance multi-label approach using a probabilistic graphical model to address the wrong labeling issue. PCNN+ONE [32] uses a convolutional neural network for sentence representations with the at-least-one strategy for multi-instance learning. PCNN+ATT [15] improves the previous deep architecture by adding sentence-level attention to handle the multi-instance learning. Figure 1 shows the precision-recall curves of each model. The curves clearly show that our approach consistently outperforms all the state-of-the-art methods, with a particular emphasis on the boosted precision in the first part of the curve. Table 1 shows this aspect in more detail. In fact, ARES achieves a P@100 equal to 0.68, while the other multi-instance methods obtain 0.55.
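A sketch of the supervised pipeline and of the P@n metric reported above: a one-vs-rest linear SVM (penalty C = 10) trained on the latent vectors, with relation scores taken from the decision functions and precision measured over the top-n ranked (pair, relation) predictions. Variable names and the binarised label matrix are assumptions of this sketch, not the authors' released code.

```python
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

def train_relation_classifier(M_train, Y_train, C=10.0):
    """Y_train: binary indicator matrix (n_pairs, n_relations) from distant supervision."""
    clf = OneVsRestClassifier(LinearSVC(C=C))
    clf.fit(M_train, Y_train)
    return clf

def precision_at_n(clf, M_test, Y_test, n=100):
    """Rank all (pair, relation) predictions by decision score and measure P@n."""
    scores = clf.decision_function(M_test)            # (n_pairs, n_relations)
    flat = [(scores[i, j], Y_test[i, j])
            for i in range(scores.shape[0]) for j in range(scores.shape[1])]
    flat.sort(key=lambda x: -x[0])
    return sum(label for _, label in flat[:n]) / float(n)
```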
However, this improvement remains constant along the curve, as shown by the average precision in Table 1. ARES achieves these performances just using a simple linear classifier against more complex deep learning architectures. Therefore, our latent relational vectors promote the generalization capability of a classifier. We performed an ablation test over the lexical and syntactic feature groups and their combination. As shown in Table 1, the SVM classifier trained only on the lexical group has an average precision very close to that obtained by training the classifier on the full set of features. This result suggests that our approach can also be applied to web-scale corpora, since the extraction of the lexical features can be done efficiently. Moreover, this dataset is highly unbalanced, therefore an end-to-end model trained in this setting tends to overfit on the most frequent relations, like contains, and provides a poor representation for the others. Our model alleviates the overfitting because LRM learns the entity pair vectors in an unsupervised way by taking into account the global distribution of the contexts across the entire corpus. That means better representations also for those relations with few examples, and therefore better generalization capability for the classifier. To confirm this aspect, Fig. 2 shows the learning curves obtained by training the SVMs on different sizes of the training set. We performed this analysis on four frequent relations of the NYT10 dataset by randomly choosing different buckets of training instances for each relation. For relations such as contains and company, our model reaches almost the best F1-scores just using about 20% of the training examples. However, it is worth noting that the attention mechanism of PCNN+ATT shows a robust behavior when the recall increases. This suggests that a combination of our LRM with deep neural networks represents an interesting direction for future investigations.

Unsupervised Relational Analysis

Since LRM is an unsupervised model, we can exploit the relational vectors to understand the distribution of the relations in a given textual corpus. Figure 3 shows the 2D projection of the relational representations using t-SNE [17], a technique used to visualize high-dimensional embeddings. We built an LRM on the whole NYT10 corpus (train+test), and each point in the space is an entity pair vector. For instance, a (red) point marker in Fig. 3 refers to an instance of the relation location/location/contains, such as (New York, Brooklyn). Since the entity pairs are aligned with those in Freebase, we can label them with their relations used as ground truth. As we can see from the figure, the distribution of the entity pair clusters is very close to the ground truth. For instance, the cluster consisting of the (purple) triangle markers represents a group of entity pair vectors with well-defined boundaries and with a strong overlap with the instances of the relation business/person/company. Similar behavior occurs for the (red) point markers and the instances of the relation location/location/contains. This shows that the LRM is able to produce latent vectors for each entity pair, learned from a corpus, which approximate the relational structure of a knowledge graph like Freebase. However, there is a strong overlap for certain relations, such as people/person/nationality and people/person/place_lived. In fact, they are strongly related, but this does not necessarily mean that LRM provides poor representations.
Instead, we can conclude that the properties in the text are not enough to discriminate the semantics of these relations, hence the overlapping ones can be removed or merged. In summary, this analysis shows that LRM is a flexible tool, e.g., for analyzing a corpus and establishing whether or not it is suitable for the distant supervision paradigm.

Conclusion and Future Work

In this work we explored the use of analogical reasoning to address the problem of extracting relations from textual corpora. We extended a model proposed to solve word analogies in order to provide relational representations that have proven to be effective for a relation extraction system. Indeed, our approach, using a simple linear classifier, achieves promising results when compared with state-of-the-art deep neural-based models. In our research agenda, we plan to learn non-linear relational representations from text using unsupervised deep neural networks, such as auto-encoders, as well as to explore the use of analogy in transfer learning in order to address more challenging problems, such as domain adaptation and automatic ontology construction.
Higgs assisted Q-balls from pseudo-Nambu-Goldstone bosons

Motivated by recent constructions of TeV-scale strongly-coupled dynamics, either associated with the Higgs sector itself as in pseudo-Nambu-Goldstone boson (pNGB) Higgs models or in theories of asymmetric dark matter, we show that stable solitonic Q-balls can be formed from light pion-like pNGB fields carrying a conserved global quantum number in the presence of the Higgs field. We focus on the case of thick-wall Q-balls, where solutions satisfying all constraints are shown to exist over a range of parameter values. In the limit that our approximations hold, the Q-balls are weakly bound and parametrically large, and the form of the interactions of the light physical Higgs with the Q-ball is determined by the breaking of scale symmetry.

Introduction

Stable soliton-like solutions exist in a wide variety of quantum field theories in four, and other, dimensions. Broadly speaking, solitons may be characterised as either topological or non-topological solitons. Topological solitons have their stability guaranteed by the conservation of a suitable topological 'charge' or winding number. For example, 't Hooft-Polyakov monopoles in spontaneously broken non-Abelian (3+1)-dimensional gauge theories are characterised by the second homotopy group of the vacuum manifold and the associated winding numbers. Alternatively, for non-topological solitons, stability is commonly ensured by a combination of energy conservation and a conserved Noether charge (see, for example, [1] and references therein). One particularly noteworthy class of such solitons are Q-balls [2][3][4][5]: semiclassical configurations of underlying Noether-charge-carrying scalar fields, and possibly other, additional fields too. In most studies of Q-ball solutions, the scalar fields making up the Q-ball are explicitly or implicitly assumed to be elementary. For instance, Q-balls that are absolutely stable, or metastable with cosmological lifetimes, have been studied in supersymmetric extensions to the Standard Model where the underlying scalar fields are combinations of the elementary Higgs, slepton, and/or squark fields of the model [6,7]. These solitons are intrinsically interesting objects to study theoretically, and often have the additional intriguing property

In this work we show that Q-balls can exist in theories where the charged scalar fields that make up the Q-ball are not elementary but rather composite states, with non-perturbative dynamics leading to a low-energy effective theory described by light pion-like pseudo-Nambu-Goldstone bosons (pNGBs) carrying a U(1) global quantum number. In particular, with an eye towards future possible applications to Beyond-the-Standard-Model and dark matter physics, we consider theories which contain a strongly-interacting hidden sector at TeV scales or above, and which feature a spontaneous breaking of a non-Abelian global symmetry similar to that of the chiral symmetry breaking of QCD, but occurring at f ∼ TeV energies or greater, rather than the scale f ∼ 100 MeV as for QCD. When small explicit breaking of the original global symmetry is included, the previously massless Nambu-Goldstone bosons acquire small masses. Importantly, these, now pseudo-Nambu-Goldstone, bosons can be much lighter than all other mass scales associated with the strongly-coupled sector, and so we can treat their low-energy dynamics separately from all other degrees of freedom originating from the strongly-coupled theory.
For our purposes it is also important that the pNGB fields can naturally carry a variety of conserved U(1) 'flavour' quantum numbers. As the pNGBs are the lightest charged states transforming under these U(1)'s, and in addition the pNGBs interact via both derivative and non-derivative potential terms, in principle it is possible for stable Q-balls formed out of these pNGBs to exist. However, in the case that the only light fields are the pNGBs, the leading O(p²) effective Lagrangian describing the low-energy interactions of the pNGBs fails to satisfy the energetic conditions necessary for a Q-ball to be stable against decay into individual pNGBs. This situation can be altered by the inclusion of O(p⁴) terms, but only at the expense of rather large second-order coefficients [9]. Fortunately, in the situation we study in this work, the pNGBs are not the only relevant light fields. In general the Standard Model Higgs field is even lighter than the hidden pNGBs and, as we show in section 2, interacts with them in a particular way via a Higgs-portal interaction. The form of the resulting pNGB-Higgs interactions is not arbitrary, but constrained by the breaking of scale symmetry [10][11][12]. This then leads to an interacting system of both charge-carrying and charge-neutral scalar fields that we show in section 3 possesses Q-ball solutions for a range of underlying parameter values. Specifically, in this paper we focus on the existence of small-to-moderate-charge 'thick-wall' Q-ball solutions, which we find are applicable in a charge range up to at least Q ∼ 10⁴, and sometimes Q ∼ 10⁸ depending on the underlying parameters of the model; see eq. (3.21) and figures 3-5. For this reason we expect them to be of greater phenomenological relevance than thin-wall Q-balls. In this work we solely consider the existence and properties of these thick-wall Q-ball solutions, leaving their possible phenomenological applications to a later study.

Before turning to the details of our particular model and the existence of Q-ball solutions, we emphasise that the underlying UV strong-coupling dynamics plays almost no role in the analysis (the exception being the presence or otherwise of a Fermi repulsion term, depending on the fermion or boson nature of the underlying matter degrees of freedom in the UV theory): the existence and detailed properties of the Q-ball solutions depend solely on the leading-order low-energy effective Lagrangian interactions between the pNGBs themselves and with the Higgs. We therefore expect that similar Q-ball solutions will occur in a wide range of effective field theories described by the Callan-Coleman-Wess-Zumino coset construction [13,14] supplemented by Higgs interactions. In particular it would be interesting to study the possible existence of stable or metastable Q-balls in models where the Higgs doublet itself is realised as a pNGB, along with other light pNGB fields [15][16][17][18][19][20]. Finally, it is worth mentioning that it is not strictly necessary that the U(1) which stabilises our thick-wall Q-balls is an exact global symmetry. Small breaking by higher-order terms suppressed by a high scale would render the Q-ball unstable but long-lived, similar to the manner in which conventional neutron stars are still cosmologically long-lived objects in the presence of sufficiently small baryon number violation. Alternatively, the stabilising U(1) could even be an unbroken gauge symmetry, since for a gauge coupling that is parametrically smaller than the Higgs-pNGB interaction strength, our thick-wall Q-balls would be unperturbed to leading order [4].
2 The structure of the model We assume that there are two sectors: the Standard Model (SM) and a hidden sector (HS). As described in the Introduction, the HS possesses a spontaneously broken almost-exact global symmetry, which gives rise to pNGBs. The HS also possesses an unbroken global U(1), under which some of these pNGBs transform. In this section we describe the origin of the Higgs coupling to the HS pions resulting in the Lagrangian in eq. (2.10). Structure of the hidden sector For definiteness, we consider a HS with a QCD-like SU(N c ) Yang-Mills theory with N f flavours of HS 'quarks' in the fundamental of SU(N c ). This theory possesses an SU(N f ) L × SU(N f ) R chiral flavour symmetry which is spontaneously broken to the diagonal subgroup SU(N f ) V . 4 Then, by Goldstone's theorem, there will be N 2 f −1 massless Nambu-Goldstone bosons that parameterise the coset space SU Furthermore, the HS quark mass matrix, M , explicitly breaks the chiral symmetry, which becomes only approximate in this limit. The NGBs will therefore acquire a non-zero mass, i.e., they become pNGBs. The mass matrix M also breaks SU(N f ) V if it is not proportional to the unit matrix: in the situation that no two HS quark masses are equal, the surviving global symmetry acting on the pNGBs is U(1) N f −1 , in the absence of other interactions. 3 The exception being the presence or otherwise of a Fermi repulsion term depending on the fermion or boson nature of the underlying matter degrees of freedom in the UV theory. 4 We ignore the fact that the symmetry group is generally U(N f )L ×U(N f )R since the one non-anomalous U(1) from the U(N f )L × U(N f )R, that in the SM case corresponds to baryon number, acts trivially on the pNGBs, so it is not of interest to us here. JHEP11(2017)179 As usual, we can describe the light pNGBs transforming under the non-linearly realised SU(N f ) L × SU(N f ) R symmetry by a unitary matrix field of unit determinant built from the N 2 f − 1 pNGBs, π a : Σ = exp(iπ a T a /f ). (2.1) Then, under the global vectorial symmetry, Σ transforms as where V is in general given by V = exp(−iX) with X Hermitian and traceless. The Noether current associated to this transformation is where we have assumed the usual leading order chiral Lagrangian For generic diagonal M , the transformation eq. (2.2) is a symmetry when X is one of the possible N f − 1 diagonal matrices. The pseudoscalar sector can both possess global symmetries, and have a non-trivial potential given by the second term in eq. (2.4), so it is reasonable to ask whether stable Q-balls can be present in this sector. However, to address this question (which ultimately requires some numerical analysis) we need to be more specific about the coupling of this HS to the SM, and also about the HS itself, as well as the exact form of the global U(1) that we will be using. For concreteness, suppose that the HS is very similar in form to the SM itself, but with the analogue of U(1) Y ungauged. Thus we take the HS gauge group to be SU(3) × SU(2) with, minimally, one 'generation' of matter fermions in the same SU(3) × SU (2) representations as the SM matter fields. This guarantees the anomaly freedom of the matter content with respect to these two symmetries. We also require the HS quarks to acquire bare masses, so the HS must also have an SU(2) -doublet scalar state, S, which acquires a vacuum expectation value (VEV), analogous to the Higgs doublet in the SM sector. 
This scalar doublet is coupled via Yukawa interactions to HS chiral quarks which acquire a mass upon the spontaneous breaking of SU (2) . The Yukawa terms in our HS Lagrangian are L HS ⊃ y ij Q L,i Sq R,j + h.c., (2.5) where Q (q) are the doublet (singlet) HS quarks and y ij are Yukawa couplings. The SU(3) is asymptotically free and confines at low energies with a corresponding confinement scale Λ hs χ . We require the HS quarks of the first generation to be light relative to Λ hs χ and assume that any additional generations beyond the first are heavy. 5 The light HS quarks will then hadronise into a massive but light triplet of HS pions as a consequence of chiral symmetry breaking -see figure 1 for a schematic of the spectrum. 5 This setup has obvious similarities with Mirror World [21][22][23] and Twin Higgs [24][25][26] scenarios, and in particular the Fraternal Twin Higgs models [27][28][29][30][31], although in our case we are taking the HS SU(3) dynamical scale Λ hs χ 1 TeV rather than the few GeV appropriate for the Twin Higgs models. To ensure that the HS pions are absolutely stable in the presence of SU(2) interactions, the HS 'leptons' -minimally one generation -must have masses above the HS pion masses. Coupling the two sectors The leading interaction between the two sectors is due to a Higgs-portal interaction. Specifically, the scalar potential for the SM Higgs and the HS doublet is given by This potential induces spontaneous symmetry breaking in both sectors. We write the VEV of S as S = v s / √ 2 and the VEV of H as We introduced a portal term in the above Lagrangian with a coupling λ p . This is a marginal operator which can arise from integrating out heavier degrees of freedom and is allowed by the symmetries of the two sectors. The portal coupling results in the mixing of the SM and HS Higgs gauge eigenstates h and s into the mass eigenstates h and s (see appendix A for details). Working in the small mixing angle limit, θ 1, s can be written in terms of the mass eigenstates as Furthermore, when the HS pions are heavier than the lightest mass eigenstate h, the form of the couplings of h to the HS pions is fully determined by the breaking of scale symmetry in the HS theory. In particular, following the work of Voloshin and Zakharov [10,11], later explicated by Chivukula et al. [12], we may first write down the effective chiral Lagrangian for the interactions with the HS gauge eigenstate s , which is given at leading order by . JHEP11(2017)179 Here, n h is the number of heavy flavours -i.e., the number of quarks q with m q > Λ hs χ -and β 0 is the one-loop beta function in the HS, given for general SU(N c ) with n light flavours by where C A = N c and T F = 1/2 sets the normalisation of the generators. The additional terms in eq. (2.8) that couple s to the HS pions originate either from integrating out heavy quarks (terms proportional to n h ) or via the Yukawa terms in eq. (2.5) -for details of the numerical coefficients, see [12]. Finally, the Lagrangian in eq. (2.8) can be rewritten in favour of h/v s using the relation in eq. (2.7) as (2.10) where we have defined η ≡ 2n h /β 0 (and neglected interactions with the scalar s on the grounds that it is much heavier than the other scalar states). We will show in the following section that the field theory defined by the Lagrangian eq. (2.10) admits thick-wall Q-ball solutions. 
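Before moving on, since the explicit expression of the scalar potential in eq. (2.6) is not reproduced in the text above, the following sketch gives one representative Higgs-portal parametrization with the features described there (independent quartic couplings for the two doublets plus a marginal portal term); the normalization and sign conventions here are our assumption and need not coincide with those of the original equation:

V(H,S) \;=\; -\mu_h^2\,|H|^2 \;+\; \lambda_h\,|H|^4 \;-\; \mu_s^2\,|S|^2 \;+\; \lambda_s\,|S|^4 \;+\; \lambda_p\,|H|^2|S|^2 .

With both doublets acquiring VEVs, \langle H\rangle = v_h/\sqrt{2} and \langle S\rangle = v_s/\sqrt{2}, diagonalising the resulting quadratic terms mixes the gauge eigenstates h and s into the mass eigenstates through a small angle \theta controlled by \lambda_p v_h v_s, which is the mixing used in eq. (2.7) and detailed in appendix A.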
Higgs assisted thick-wall Q-balls We will show in this section that thick-wall Q-balls formed from the light scalars of the theory (the HS pions and the SM Higgs) can exist. We will present two cases: an analytic example with no heavy quarks in the HS, and a numerical example with arbitrarily many heavy quarks. One can ask whether thick-wall Q-balls exist in the chiral Lagrangian alone, i.e., with no coupling to the Higgs field. This is not possible (see appendix B for details). Constructing and minimising the energy functional The thick-wall Q-ball limit corresponds to the limit in which the field values at the centre of the Q-ball are small such that terms quartic (and higher) in the fields can be neglected [5]. Expanding Σ in the Lagrangian given in eq. (2.10), using eq. (2.1), and including terms involving the Higgs alone gives, to cubic order, 6 This Lagrangian is invariant under a global U(1) which we can take without loss of generality to act as π ± → e ±iα π ± , π 0 → π 0 (the labels on the pions thus refer to their charge under JHEP11(2017)179 this U(1) symmetry, not to their electromagnetic charge). The Noether current associated with this symmetry is The Hamiltonian density is given by where the potential U( π, h) is We want to minimise the total energy for a fixed Noether charge Q > 0. This is most easily accomplished using the method of Lagrange multipliers [32]. To do this, we define an energy functional E ω with Lagrange multiplier ω: where E is the integral of the Hamiltonian density, given in eq. (3.3). Inserting the above expressions for the Noether current and the Hamiltonian density, we obtain The only explicit time dependence of E ω has been isolated in the first line of eq. (3.6). This integral is positive semidefinite, and to minimise its contribution to the energy the fields must have the following time dependence: Our problem now involves four real degrees of freedom: π ± (x), π 0 (x), and h(x). To proceed, we assume that the spatial profile of each of the fields has, up to normalisations, the same form: where we allow for α and/or β to be zero. This ansatz is sufficient for the purpose of demonstrating the existence of Q-balls; in reality, the spatial profiles of the fields might differ, but this extra freedom in the minimisation process can only further lower the Qball energy. With these proportionality relations, we can write E ω solely in terms of the field π(x). In addition to the gradient-squared and field-squared terms, we also have the cross-term This term can be dropped to leading order in a self-consistent approximation scheme for the Q-ball solution. This is because it is suppressed relative to the ( ∇π) 2 term by the mixing angle and π /v s , where π is the maximum value of the pion VEV inside the Q-ball, and to the π 3 term in eq. (3.7) by spatial gradients, which we will a posteriori check to be small. This term is also exactly absent when η ∝ n h = 0. It now remains to minimise the energy functional with respect to the function π(x) and the three variables α, β, and ω. To do this, it is useful to redefine the fields and the coordinates in eq. (3.6) in order to isolate them in a dimensionless integral (see appendix C for details). After these redefinitions, the energy functional becomes where Ω ≡ ω/m π and S ψ is given by where ξ and ψ are the spatial coordinate and field in dimensionless units, defined in eq. (C.1). 
This has the same form as the bounce action for an analogous Euclidean tunnelling problem in three dimensions, and so we can make use of previous results on this subject [33][34][35]. In particular, the integral is minimised when the field is spherically symmetric, and thus we expect all Q-ball solutions to be spherically symmetric. The value of eq. (3.12) when minimised is approximately 38.8 [36]. A global minimum with E ω /Qm π < 1 corresponds to a classically stable Q-ball solution. After minimisation with respect to ω, which enforces the fixed-charge constraint, and α and β, the Q-ball has a mass M Q = E ω . The radius, R Q , of the Q-ball is ∼ 1 in terms of the dimensionless coordinate ξ. Translated into the parameters of the model, it is given by We will minimise E ω in two cases: first, we will analytically study the case that there are no additional quarks with masses above the chiral symmetry breaking scale in the HS, and with the Higgs acting as a massless mediator; second, we will numerically study the case that there are arbitrarily many heavy quarks in the HS, allowing the Higgs mass and self-coupling to be non-zero. The qualitative dependence of the energy functional on α and β is shown in figure 2, for typical parameter choices. Physically we expect that, for m 2 h /m 2 π 1, β will be zero for the following reason. The neutral pion has no cubic interactions with the charged pions, unlike the Higgs, and thus no direct way to lower the energy of the Q-ball. It does, however, have a cubic interaction with the Higgs, which will acquire a VEV at the centre of the Q-ball along with the charged pions, and this cubic interaction may favour the neutral pion acquiring a VEV of its own. However, since this interaction is quadratic in the neutral pion, the Higgs VEV in the Q-ball must be sufficiently large that this term dominates the neutral pion mass term. We hence expect that for pions much heavier than the Higgs, the neutral pion will have a VEV of precisely zero. We will see that this is so in both the analytic and the numerical analysis of the subsequent two sections. An analytic example: no heavy quarks In order to determine the conditions for the existence of Q-balls in this theory, as well as the nature of the Q-balls should they exist, we must minimise the energy functional given in eq. (3.11) with respect to Ω, α, and β. This is not possible to do analytically in the general case: minimising with respect to Ω requires finding the roots of a sixth-order polynomial. The barrier to analyticity comes from the term proportional to Ω in the denominator of E ω . Thus, to gain an analytic understanding of the Q-ball, we assume that there are no heavy quarks in the HS: this sets η = 0 and therefore removes the problematic term. To make the results more straightforward and illuminating, we will also take the Higgs mass and cubic self-coupling to zero; it is possible to analytically study the system without this assumption, but at the expense of making the results more opaque. This assumption is valid provided the pion mass terms and the cubic coupling of the pions to the Higgs (3.14) We will leave a more general discussion of this type of hidden sector until section 3.3, where we relax this assumption and the assumption that n h = 0 with a numerical minimisation of the Q-ball energy, scanning over the parameters of the model. Setting η = 0 and m h = λ = 0 in eq. (3.11), we first minimise with respect to α to obtain α 2 = 4 + 2β 2 . 
Substituting this back into the energy functional, we observe that, for Ω 2 > 0, the expression is a strictly increasing function of β, and hence is minimised when β = 0 (as argued in the previous section). Thus The energy of the Q-ball is minimised when the VEV of the neutral pion inside the Q-ball is zero, whilst the VEV of the Higgs is double that of the charged pions. With these substitutions, we have an energy functional of the same form, as a function of Ω, as that given in [5]. We can therefore translate those results across to our case. The energy functional is minimised with respect to Ω if which has a solution for Ω provided 0 < < 1/2. The expression for Ω at the minimum is Substituting this back into the energy functional and expanding in yields where M Q is the energy of the Q-ball. The expression on the right-hand side is clearly less than unity for > 0. Thus, this solution is (classically) stable for Q > 0. 7 From eq. (3.13) we find that the radius of the Q-ball is given by This characteristic (inverse) length scale is proportional to the small parameter , thus justifying our earlier assertion that spatial derivatives are suppressed in the thick-wall case. 7 If the Higgs mass is appreciable compared to that of the pions, there is a lower bound on the charge Q due to the fact that the Higgs provides an unfavourable contribution to the mass-to-charge ratio of the Q-ball. Note also that the charge needs to be sufficiently large that quantum fluctuations are under control and the semiclassical approximation is valid. Here we take this to imply Q 10. JHEP11(2017)179 Finally, the maximal value of the charged pion VEV occurs in the centre of the Q-ball and takes the value This solution is subject to the following theoretical constraints. Firstly, we require the charge to be sufficiently small that the thick-wall analysis is valid. Secondly, we must check that the Q-ball number density is not so large that the Fermi degeneracy pressure due to the quarks which constitute the pions becomes important. Thick-wall validity. The thick-wall analysis is only valid in the low charge regime. This is represented by the condition that < 1/2, which can be rearranged to give (3.21) We have assumed that the quartic terms in the energy functional are small compared to the quadratic and cubic terms, which are approximately equal in size in the centre of the Q-ball. There are two types of quartic term we need to consider: the π 4 term and the h 4 term. 8 Demanding that the Higgs quartic is indeed negligible when the pion VEV is given by its maximum value, eq. (3.20), yields Demanding that the pion quartics are negligible likewise gives the constraint (3.23) Note that these constraints merely place limits on the validity of the thick-wall analysis, not on the existence of a Q-ball of any description. If these constraints are strongly violated, then stable Q-balls are best described using the thin-wall analysis [2,32]. We will return to the issue of existence and properties of thin-wall Q-balls in this class of hidden sector models in future work. In the intermediate charge region, we expect that stable Q-balls will still exist, though these will be of neither thick-nor thin-wall type. Fermi degeneracy pressure. The final important consideration arises due to the fact that the scalars from which these Q-balls are built are in fact composites of fermions, the HS quarks. If the density of pions in the Q-ball is too high, Fermi degeneracy pressure due to these quarks can become significant. 
In this case, we expect that the radius of the Q-ball will increase to counteract this pressure and reduce the contribution to the Q-ball energy from the filled Fermi sphere. Nevertheless, we can put a conservative upper bound on the charge of the Q-ball by demanding that, for the Q-ball radius as calculated above, such energy contributions are lower than the binding energy. JHEP11(2017)179 In the non-relativistic limit, the average additional energy contributed to the Q-ball per constituent fermion is where m f is the fermion mass and n its number density. We will demand that This leads to We hence see that Fermi degeneracy pressure can be quite significant. Given that the pions are pseudo-Nambu-Goldstone bosons of an approximate spontaneously-broken chiral flavour symmetry, we expect them to be relatively light compared to the other scales in the theory. In particular, the appropriate masses of the constituent (dressed) quarks should be of order the chiral symmetry breaking scale, Λ hs χ . This is undetermined and can in principle be arbitrarily high; as such, we will not worry further about this constraint. One might wonder whether the chiral symmetry breaking scale, if sufficiently high, might give rise to unnaturally large corrections to the Higgs mass through pion loops. The cubic Higgs-pion coupling gives rise to corrections merely logarithmic in Λ hs χ /m π , however, and therefore naturalness is not a concern in this case. A numerical example: arbitrarily many heavy quarks The task of analytically minimising the energy functional, eq. (3.6), is intractable in the general case, but can be done numerically. In this section we present the results of a numerical minimisation of the energy functional with respect to α, β, and Ω for various choices of n h , scanning over the parameters in eq. (3.6). Across the entirety of parameter space we find that the energy functional is minimised when β = 0. This is in line with the heuristic argument presented in section 3.2 that the neutral pion should not acquire a VEV inside the Q-ball. The results are almost entirely independent of the number of heavy quarks. This is perhaps to be expected, since the number of heavy quarks enters only through a small modification to the denominator of eq. (3.11). Consequently, we have chosen to use n h = 4 as an illustrative example of the full numerical analysis; the most important differences between the analytic and numerical results arise from neglecting the Higgs mass and cubic coupling in the former case. We therefore also present a numerical analysis where we take n h = 0, m h /m π → 0 and λ → 0; this 'minimal' case is meant as a cross-check against the analytic example discussed in section 3.2. The parameters were randomly sampled uniformly on a logarithmic scale. They are listed, along with their lower and upper bounds used for the scan, in table 1. A set of randomly chosen parameters was rejected if it resulted in an energetically unfavourable solution -i.e., if eq. (3.6) had no minimum such that E ω /Qm π < 1. The Higgs cubic coupling λ was treated as an independent parameter since it is poorly constrained by LHC Higgs measurements [37,38]. In the following figures, the solutions are clustered in cells and the cell brightness is directly proportional to the number of solutions it contains; the lighter (more yellow) the cell, the larger the number of solutions contained in it. In each figure, the left (right) panel shows the results for the minimal (full) case. 
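The scan just described can be set up schematically as follows. This is a minimal sketch, assuming the dimensionless energy functional of eq. (3.11) has been coded separately as a user-supplied function returning E_ω/(Q m_π); the sampling ranges stand in for those of table 1, and the optimizer, starting point and sample size are illustrative choices of ours rather than the procedure actually used here. The rejection criterion E_ω/(Q m_π) < 1 follows the text.

import numpy as np
from scipy.optimize import minimize

def scan_qballs(E_omega_over_Qm, bounds_log10, n_samples=10000, seed=0):
    """Random log-uniform scan over model parameters (cf. table 1).

    E_omega_over_Qm(Omega, alpha, beta, params) must return E_omega/(Q*m_pi)
    for the functional of eq. (3.11); it is a user-supplied stub here.
    """
    rng = np.random.default_rng(seed)
    solutions = []
    for _ in range(n_samples):
        # Log-uniform sampling of the model parameters.
        params = {name: 10.0 ** rng.uniform(lo, hi)
                  for name, (lo, hi) in bounds_log10.items()}
        # Minimise over (Omega, alpha, beta); 0 < Omega < 1 for a bound state.
        res = minimize(lambda x: E_omega_over_Qm(x[0], x[1], x[2], params),
                       x0=[0.9, 2.0, 0.0],
                       bounds=[(1e-3, 1.0 - 1e-6), (0.0, None), (0.0, None)])
        # Keep only energetically favourable points, E_omega/(Q m_pi) < 1,
        # i.e. classically stable Q-ball solutions.
        if res.success and res.fun < 1.0:
            solutions.append((params, res.x, res.fun))
    return solutions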
Figure 3 shows the result of the scan for the fractional binding energy, 1 − M Q /Qm π , versus the total Q-ball charge. The figure shows that thick wall Q-balls exist for a wide range of charges (indeed, across the entire range of charges scanned over), with (for small charges) there being a preference for larger binding energy the larger the charge. This is consistent with the expression eq. (3.18) in the analytic example. When the Higgs mass is appreciable, there is some preference for larger binding energy, across a range of charges. This can be attributed to the fact that the Higgs mass results in an unfavourable contribution to the Q-ball energy, and so favourable contributions from the other terms in the energy functional are required to be larger to offset this. The typical scale of the binding energy is thus increased. 5 show the behaviour of the physical Q-ball parameters, namely its mass and radius, with respect to the charge of the Q-ball. In figure 4 there is a strong linear correlation between the mass and charge of the Q-ball in both the minimal case and the full case. This is consistent with expression eq. (3.18) in the analytic example, which predicts a linear relation between the mass and charge, to leading order. Figure 5 shows that, for a given charge, there are Q-ball solutions with radii ranging from around 10 −3 fm to around 1 fm in the minimal case. The radius (for small charges) tends to be larger on average for smaller Q; this is consistent with the expression eq. in the analytic example. We also note that the radius is bounded above by about 10 −2 fm in the full case, when the Higgs mass is accounted for. This effect can be traced back to eq. (3.13), with m h acting to reduce the radius of the Q-ball. Indeed, if we take the limit Ω → 1, then whilst the Q-ball gets arbitrarily large in the minimal case, its radius is bounded above by ∼ m h /m π in the full case. Physically we expect a lighter Higgs to yield a longer range attractive force, in turn stabilising bigger Q-balls. Figure 6 shows the relationship between the Q-ball fractional binding energy and radius, in units of the pion mass. In the minimal case there is an exact relation between these two quantities; note that eq. (3.18) and eq. (3.19) are both functions solely of . To leading order this relation is linear with gradient −2. In the full case there is no such fixed relation, but nevertheless the binding energy is bounded above for a given radius, with there being a preference for binding energies close to this bound. Summary and conclusions In this work we have demonstrated, by both analytic and numerical methods, the existence of Q-ball solutions in an interacting, hidden sector pNGB-Higgs boson system. The specific class of low-energy effective Lagrangians we study, eq. (2.10), are simple generalisations of the usual chiral Lagrangian to hidden sector QCD-like strong dynamics, supplemented by Higgs-portal-mediated interactions with the (lighter) physical Higgs boson. We find, in the small-to-moderate charge range (10 Q 10 4-8 ) we study, that thick-wall Q-ball solutions exist. These Q-balls are relatively weakly bound, eq. (3.18) and figure 3, and have size parametrically large compared to the inverse pNGB mass, eq. (3.19) and figure 5. The range of Q-ball properties that we find numerically are illustrated in figures 3-6. 
We Such Q-ball solutions may be relevant to dark matter properties in a variety of Beyondthe-Standard-Model theories, in particular those of asymmetric dark matter and pNGB-Higgs theories. To assess whether this is the case requires a dedicated study of Q-ball production dynamics in the early Universe. Naively, there is no analogue of a decay of an Affleck-Dine condensate [6,39] as applies in supersymmetric Q-ball models of dark matter. Thus we are left with solitosynthesis and aggregation build up along the lines of [40][41][42] as the likely dominant mechanism, though the details are different. Although we expect that we never reach the thin-wall limit, it would also be interesting to study the existence and properties of thin-wall Q-ball solutions in pNGB-Higgs systems. Acknowledgments FB, GJ, and OL are supported by the Science and Technology Facilities Council (STFC). A Scalar masses and mixing angle The scalar potential eq. (2.6) is generically minimised when both |H| and |S| acquire nonzero VEVs, which we write as v h / √ 2 and v s / √ 2 respectively. Expanding around these VEVs and diagonalising the resulting quadratic terms in the potential gives the masses m h and m s of the light and heavy scalar mass eigenstates of the theory. These can be read-off directly from [43]. We have The two scalar mass eigenstates h and s are related to the gauge eigenstates h and s by the rotation that enacts the aforementioned diagonalisation. That is to say, where M = diag(m h , m s ). We identify the lightest scalar mass eigenstate, h, as the SM Higgs. It is the coupling of the pions to this particle that is of most interest to us, on account that it can mediate a long-range attractive force between the pions by virtue of its relative lightness. Given that it is the HS gauge eigenstate s which couples to the pions, it is necessary to find an expression for the mixing angle θ. We have where z is defined as ratio of the VEVs, z ≡ v s /v h . For z large, assuming λ h and λ s are comparable in size, we can write the mixing angle in eq. (A.5) in terms of the small parameter ζ ≡ z −1 , In this limit, the small angle approximation for θ is also valid and we find that Finally, can write the SM Higgs cubic coupling λv h which appears in eq. (3.1) in terms of the couplings in the scalar potential. We have and so B Absence of thick-wall Q-balls in the pure chiral Lagrangian Here we show that thick-wall Q-balls cannot exist within the leading order SU(2) chiral Lagrangian. To do this, we need to show that the functional eq. (3.5), using the Lagrangian eq. (2.4) and current eq. (2.3), has no minima for Q = 0. Just as in eq. (3.6), we can write the functional as a sum of time-dependent and time-independent pieces: The first term in the integral contains all of the time dependence, and is positive semidefinite. Thus the functional is minimised by choosing where Σ 0 (x) is an SU(2) matrix which depends only upon spatial coordinates. Substituting this into the above functional and choosing X = σ 3 /2, we find ∇π 0 · ∇π 0 + ∇π + · ∇π − 1 − 1 3f 2 π 0 π 0 + 2π + π − + 1 6f 2 π 0 ∇π 0 + π + ∇π − + π − ∇π + 2 + 1 2 m 2 π π 0 π 0 + (m 2 π − ω 2 )π + π − − m 2 π 24f 2 (π 0 ) 4 − 1 6f 2 (m 2 π − 2ω 2 )(π 0 π 0 )(π + π − ) − 1 6f 2 m 2 π − 4ω 2 (π + π − ) 2 + ωQ, where we have defined m 2 π ≡ B 0 trM , and expanded Σ 0 to quartic order in the π fields (using eq. (2.1)) on account that there are no cubic terms in the chiral Lagrangian. 
As is usual in the thick-wall analysis, we ignore higher-order terms, which will stabilise the potential. This integral is exactly that describing tunnelling through a quartic potential barrier in three dimensions [33][34][35], with the potential having the schematic form U(π) ∼ m^2 π^2 − λ π^4. The solutions to the associated bounce equation are spherically symmetric. The quartic terms containing derivatives are suppressed relative to the kinetic terms by a factor of f^2 and relative to the other quartic terms by spatial gradients, which are small. We will hence ignore these terms. Notice that in the limit ω → m_π, i.e., the thick-wall or small-field limit, the last two quartic terms have positive coefficients. In order for a potential barrier to exist (and, therefore, a bounce solution to exist), we require that the overall contribution of all three quartic terms be negative. Consequently, the VEV of the neutral pion in the centre of the Q-ball must be large relative to that of the charged pions, but since this will contribute a large amount of mass to the Q-ball without contributing to its charge, we might expect that no stable Q-balls exist.

C Field redefinitions

The following rescalings of the spatial coordinates x_i and the field π are necessary to remove all parameters of the theory from inside the integral; they are given in eq. (C.1). These redefinitions allow us to minimise the resulting dimensionless integral, via the calculus of variations, in a manner independent of the parameters of the theory.

Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
8,862
2017-11-01T00:00:00.000
[ "Physics" ]
The Realistic Mobility Evaluation of Vehicular Ad-hoc Network for Indian Automotive Networks In recent years, continuous progress in wireless communication has opened a new research field in computer networks. Now a day's wireless ad-hoc networking is an emerging research technology that needs attention of the industry people and the academicians. A vehicular ad-hoc network uses vehicles as mobile nodes to create mobility in a network. It's a challenge to generate realistic mobility for Indian networks as no TIGER or Shapefile map is available for Indian Automotive Networks. This paper simulates the realistic mobility of the Vehicular Ad-hoc Networks (VANETs). The key feature of this work is the realistic mobility generation for the Indian Automotive Intelligent Transport System (ITS) and also to analyze the throughput, packet delivery fraction (PDF) and packet loss for realistic scenario. The experimental analysis helps in providing effective communication for safety to the driver and passengers. INTRODUCTION The wireless network is the seamless integration of all types of networks.Network special purpose Vehicular Ad-hoc Networks (VANETs) is sub category of Mobile Ad-hoc Networks (MANETs) [1].It contributes a lot to the Inter Vehicle Communication (IVC).IVC shows very different characteristics from other MANET network.Specifically, the constraints on the movement of vehicles, the behaviour of variable driver, and cause high mobility topology changes quickly, frequent network fragmentation, a small effective diameter of the network, and limited usefulness of redundancy network [2]. In VANETs vehicles serving as nodes and offers some intelligent activities.It is an intelligent network of vehicles, called Intelligent Transportation System (ITS).It is used to ensure the security services of driver assistance and comfort to road users.Intelligent Transportation Systems (ITS) include all types of communications in vehicles, between Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I).They also include the use of Information Technology and Communication (ICT) for Indian railway and air transport, including navigation systems.All types of Intelligent Transport System (ITS) depend on the services of radio communication and the use of specialized technologies [2]. The traffic is the main component of mobility research.Traffic research is broadly categorized into four classes of traffic flow models.They are distinguished by the level of detail of the simulation.They are listed below: • Macroscopic Models: In this traffic flow is the basic entity, which formulates the relationship among traffic flow characteristics like density, flow, speed etc. • Microscopic Models: It simulates the movement of each vehicle on the road most of the time considering that the behaviour of the vehicle depends on both the physical capabilities of the vehicle and the behaviour of the driver. • Mesoscopic Model: It is located at the boundary between the microscopic and macroscopic simulations.In this, the movement of vehicles is mainly simulated using queue approaches and single vehicles are moved between queues. • Sub Microscopic Models: It considers simple vehicles as microscopic, but extends them by dividing into new structures that describe the rotational speed of the motor with respect to the vehicle speed or switching speed of the preferred shares of the drive.This allows more detailed compared to simple microscopic simulation calculations.However, this model requires longer computation time [3]. 
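To make the "microscopic" class above concrete, the sketch below implements one widely used car-following rule, the Intelligent Driver Model (IDM), in which each vehicle's acceleration depends on its own speed and on the gap and speed difference to its leader. This is only an illustration of what a per-vehicle update looks like; it is not claimed to be the rule used by the simulators discussed in this paper, and all parameter values (desired speed, headway, etc.) are illustrative.

def idm_acceleration(v, v_lead, gap, v0=33.3, T=1.5, a_max=1.0, b=1.5, s0=2.0):
    """Intelligent Driver Model acceleration for a following vehicle.

    v, v_lead : follower and leader speeds [m/s]; gap : distance to leader [m];
    v0 : desired speed, T : desired time headway, s0 : minimum gap,
    a_max : maximum acceleration, b : comfortable deceleration.
    """
    dv = v - v_lead                                      # closing speed
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * (a_max * b) ** 0.5))
    return a_max * (1.0 - (v / v0) ** 4 - (s_star / gap) ** 2)

def euler_step(positions, speeds, dt=0.5):
    """One explicit-Euler update; vehicles are ordered from rear to front."""
    accelerations = []
    for i, (x, v) in enumerate(zip(positions, speeds)):
        if i + 1 < len(positions):                       # a leader exists ahead
            a = idm_acceleration(v, speeds[i + 1], positions[i + 1] - x)
        else:
            a = idm_acceleration(v, v, 1.0e9)            # effectively free road
        accelerations.append(a)
    new_speeds = [max(0.0, v + a * dt) for v, a in zip(speeds, accelerations)]
    new_positions = [x + v * dt for x, v in zip(positions, new_speeds)]
    return new_positions, new_speeds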
According to the available data at [4], the Indian Automotive Road Network is about 33 Lakh km, which is second in the world.The number of vehicles has increased at an average rate of 10.16% per annum over the past five years.As the number of vehicles is growing at rapid speed, the need for the driver and passenger safety is also increasing.Because of this, it is necessary to develop an Intelligent Transportation System (ITS) for the Indian Automotive Networks. The rest of the paper is organized as follows: Section 2 review the literature available for the generation of realistic mobility.Section 3 describes the research methodology for the Indian Automotive Networks; Section 4 presents the configuration and realistic scenario simulation of the mobility for the different regions.Finally, Section 5 concludes the paper with the main points of this research. RELATED WORK VANET is a type of Mobile Ad-hoc Networks (MANET) which consists of number of vehicles with the ability to communicate with each other.The main objective of VANET research is to make a quick and cost effective data transmission for the safety and benefit of the driver and passengers [5].The required solution is not possible by direct experimentation, due to cost and complexity.Thus, simulation becomes the tool of choice to evaluate these quality solutions.This simulation depends on the mobility model, which represents the flow diagram of mobile users, including its location, speed and acceleration over time.A mobility model should be a realistic mobility model that takes into account the characteristics of real-world region [6].The realistic mobility can be achieved by two different ways: • A real world map obtained from TIGER (Topologically Integrated Geographic Encoding and Referencing) database from the U.S. Census Bureau [7], Clustered Voronoi Graph [8] and Shapefile Map [9]. • A real world map organizes satellite images from google earth for realistic simulation of the networks [10]. Nidhi et.al. generated a real world map of JNU, Delhi using google earth and the existing GIS tools [5].Authors have collected the traffic data for a limited area of the road map to capture the realistic mobility.In this work, the whole region has been divided into several small roads.Realistic mobility model used here considers the choice of route driver at run time.It also examines the effect of consolidation caused by the traffic lights at the intersection used to regulate the flow of vehicles in different directions.Finally, the performance of VANETs is evaluated in terms of average packet delivery ratio, packet loss, and routers drop those statistical measures for the choice of the route of the driver with the traffic light scenario. The paper [6] describes the generation of an urban vehicle trace of the large-scale mobility.The data set is obtained by considering the realistic road topology, the microscopic and macroscopic mobility flows.A comparison with traces employed showed that incomplete representations of mobility can lead to significantly different network topologies, may seek performance evaluation protocols and architectures.Their mobility traces of vehicles are available at [11].However, the author's notes that they are still far from complete realism. Haerri et.al. [8], generated a realistic vehicular movement traces for telecommunication networks simulators.They provide the description of VanetMobiSim mobility, which was validated by comparing its traces with TSIS-CORSIM.It is a traffic generator industry benchmark. 
VanetMobiSim is one of the few vehicles oriented mobility simulator fully validated and freely available to the research community on vehicular networks.Paper [12] presents VanetMobiSim, an extension of Canu-MobiSim capable of producing realistic mobility traces of vehicles for several network simulators.VanetMobiSim-1.0extensions made by both the macro and micro mobility were also demonstrated by the authors. In paper [13], the authors proposed the so-called MOVE, a VANETs mobility model that uses as compiler SUMO [3], which is a realistic vehicular traffic simulation model.In the article [14], Kun chan Lan et.al. first introduce a tool MOVE that allows users to quickly generate realistic mobility models for VANET simulations.MOVE built on top of an open source micro-traffic simulator SUMO.The output motion is a realistic model of mobility and can be used immediately by Simulators popular networks such as NS2 and QualNet.Authors evaluated the effects of retail mobility models in three simulation studies VANET case (in particular, the existence of traffic lights, choice of route driver and car overtaking behaviour) and show that the selection sufficient level of detail in the simulation is essential for VANET protocol design. The main challenges in the field of vehicular ad-hoc network are the realistic simulation of Inter Vehicle Communication (IVC) protocols.To provide income for the meaningful evaluation of IVC protocols, accurate modelling of traffic movement and to know the exact position of the vehicles involved is very important.In [15], the authors provide study of different mobility models with a different methodology.The necessary bidirectional coupling of network and traffic simulation and the use of a new hybrid location-based Ad-hoc routing protocol instead of DYMO in the bidirectional coupling of SUMO and OMNeT++ (veins) is proposed. Article [16], proposed a more realistic scenario, the city section mobility model and the radio propagation model with obstacles.The performance of the routing protocol was simulated in the traditional scenario and new one.Then the performance of DSR and AODV is simulated and analyzed in the new scenario.The result showed that in the more realistic scenario AODV is more suitable for VANET.The article has simulated VANET more realistically. Different simulation software are available as an open-source program and can be extended to meet the own researcher and also be used as a reference test bench for new traffic patterns needs. The few traffic simulators are used for generation of realistic mobility: SUMO, MOVE CanuMobiSim, VanetMobiSim-1.0 and VanetMobiSim-2.0etc.Each simulator has its own way to generate mobility and traffic assessment.In our research, we used the VanetMobiSim-2.0for generation and evaluation of the mobility and traffic respectively.It provides a platform to perform all the steps of single mobility simulation. 
SIMULATION METHODOLOGY It is necessary to prepare a methodology for the realistic evaluation of Indian Automotive Networks.As it provides the clear reflection of the research involved.It can conclude from the previous section that the simulation is a concern for Indian Networks.So selecting the suitable methodology is necessary, as it will improve the research.The methodology involved is shown below: Figure 1.Methodology Adopted The figure 1 shows the research methodology adopted for the realistic simulation and the evaluation.As the coupling simulators is used for the work.They are traffic and network simulators respectively.Traffic simulator helps in generating the realistic mobility for the different region.These regions are captured from the realistic map, which is available for the Indian automotive networks.Mobility traces is provided to the network simulator, where the appropriate propagation model and short range communication standards is provided to get the desired output traces for the realistic evaluation of the Indian automotive networks. EVALUATION In this section, work is analyzed by simulating the performance of VANET for Indian Automotive Networks.Section 4.1 introduces the simulation platform and the main parameters used in the evaluation.Then section 4.2 performs the realistic scenario simulation of VANET for Indian Intelligent Transport System.Finally section 4.3 analyzes their performance. Simulation Platform and Scenario To evaluate the performance of VANET, it is necessary to deploy a real network scenario with all possible parameters of vehicle simulation, such as simulation time, traffic flows, maximum and minimum traffic delay etc.The evaluation is carried out by simulation, using VanetMobiSim-2.0and NS2/NS3.Experiment was conducted by taking into account two real VANET regions. Table 1. Simulation Parameters The table 1 summarizes the main parameters of the simulations.It provides the basic parameters such as propagation model used, which IEEE standards and layer is involved etc.The PHY/MAC layer parameters of the simulated nodes are based on the specification of the IEEE 802.11. For the evaluation, two respective scenarios for the urban and rural region of Jaipur, Rajasthan, India is taken.The regions are: • Urban Scenario -B2 Bypass, Jaipur, Rajasthan, India and • Rural Scenario -JNU Jagatpura, Jaipur, Rajasthan, India. Realistic Scenario Simulation The first step for realistic simulation is to produce a map for the network scenario.The output file from the traffic simulator is provided to the NS2 for their network simulation. Here simulation is performed for urban and rural region based on their simulation time and number of nodes.As mentioned in table 1, the simulation is performed for 1000 and 500 seconds respectively for 50, 100,150, 250 and 500 nodes of both the regions.Each region has their respective simulation area and uses the AODV routing protocols with Constant Bit Rate (CBR) flow.Wireless channel is used and packet size is of 512 KB. This realistic evaluation involves the calculation of matrices for each simulation region.They include the throughput, PDF and packet loss percentage.It considered packet loss because of packet being dropped due to its waiting time exceeding its maximum latency or packet error due to wireless transmission channel in our simulation.Following matrices are used for calculating the performance of Indian Automotive Networks. 
Throughput is the rate of successful message delivery over a communication channel.It is calculated by the given mentioned equation: Urban Region The simulation is performed by analyzing the impact of matrices required for evaluating the performance of the VANETs for urban region The figure 4 and 5 shows the throughput and packet delivery fractions for different number of nodes respectively for the region.The simulation is compared for 1000 and 500 seconds.When number of nodes increases, results show the trend of increased throughput as well as PDF. Figure 6 depicts the relationship between the packet loss percent for different simulation time.It can be observed that as the number of nodes increases, the packet loss decreases. Rural Region In this simulation is performed by analyzing the impact of matrices required for evaluating the performance of the VANETs for rural region.The figure 7 and 8, the throughput and packet delivery fraction for different number of nodes was all most same.It shows a minor change with respect to simulation time.In case of packet loss percent, Figure 9 shows the packet loss percent decreases to some extent, and then shows the trend of increase as the number of nodes changes from 150 to 250 and than 500. However, it is clear from Figure 6 and 9; the packet loss percent in this research is lower for rural region as compared to urban region.Their realistic simulation is best suited for the rural region with respect to number of nodes and simulation time. Analysis of VANET performance with Mobility Section 4.2 provides the realistic simulation and their performance evaluation.Based on their result, it analyzes the realistic simulation with respect to the region, simulation time and number of nodes respectively.The analysis is performed to understand the percent gain of different matrices for respective region.This is evaluated on the basis of results obtained for the 50 number of nodes.Figure 10 shows that in urban region higher nodes mobility leads to higher gain.It also highlights that the less simulation time leads to more percent gain.Figure 11 analyze the rural region.In this the change is almost constant or varies minutely with respect to nodes mobility.However when simulation is performed for less time period, higher mobility nodes leads to loss in respect to their throughput and packet delivery fraction. CONCLUSIONS In this paper, author evaluated the realistic mobility for Indian Automotive Region.It has considered two different regions for realistic evaluation.Their research can effectively and efficiently evaluate the two different regions based on their simulation parameters.The study also highlights that there is vast scope for Indian Automotive Networks to be evaluated for realistic simulation, and need to improve existing models and routing protocols for Indian Automotive Networks.The future work will be on realistic evaluation for different models for improving the packet throughput.So that packet loss can be reduced to large extent.It will also further exploit existing routing protocols for more realistic evaluation to Indian Automotive Networks. 
Figure 2: B2 Bypass, Jaipur map (Urban Region). Figure 3: JNU Jagatpura, Jaipur map (Rural Region).

Throughput = ((received packets * packet size) / total simulation time) / 1024 (1)

PDF, the packet delivery fraction, is the fraction of the data packets originated by an application that each routing protocol delivers, i.e.
PDF = (received packets / sent packets) * 100 (2)

Packet loss occurs when one or more packets of data travelling across a communication channel fail to reach their destination. It is generally expressed as a percentage:
Packet Loss % = ((sent packets - received packets) * 100) / sent packets (3)

Figure 10: Gain % for Urban Region. Figure 11: Gain % for Rural Region.
As the region changes to urban, the gain increases with node mobility, and less simulation time provides more gain in throughput and packet delivery fraction; it also shows a decrease in packet loss.

Table 2: Evaluation of VANET for Urban and Rural regions. Table 2 shows that the rural region exhibits almost constant growth with respect to node mobility and simulation time.
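The three metrics in eqs. (1)-(3) can be computed directly from the packet counts, as in the sketch below. The function and argument names are ours, the packet counts are assumed to have been extracted from the NS-2 trace files beforehand (the trace-parsing step is not shown), and we read the division by 1024 in eq. (1) as a conversion to KB/s for a packet size given in bytes.

def vanet_metrics(sent_packets, received_packets, packet_size_bytes=512,
                  simulation_time_s=1000.0):
    """Throughput, packet delivery fraction and packet loss, following eqs. (1)-(3)."""
    throughput = (received_packets * packet_size_bytes / simulation_time_s) / 1024.0
    pdf_percent = (received_packets / sent_packets) * 100.0
    loss_percent = ((sent_packets - received_packets) * 100.0) / sent_packets
    return throughput, pdf_percent, loss_percent

# Illustrative numbers only:
# vanet_metrics(4000, 3600)  ->  (1.8, 90.0, 10.0)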
3,799.6
2014-04-30T00:00:00.000
[ "Computer Science" ]
Continuous Flavor Symmetries and the Stability of Asymmetric Dark Matter Generically, the asymmetric interactions in asymmetric dark matter (ADM) models lead to decaying DM. We show that, for ADM that carries nonzero baryon number, the continuous flavor symmetries that generate the flavor structure in the quark sector also imply a looser lower bound on the mass scale of the asymmetric mediators between the dark and visible sectors. The mediators for $B=2$ ADM that can produce a signal in the future indirect dark matter searches can thus also be searched for at the LHC. For two examples of the mediator models, with either the MFV or Froggatt-Nielsen flavor breaking pattern, we derive the FCNC constraints and discuss the search strategies at the LHC. Introduction Dark matter (DM) is stable on cosmological time-scales. A principal question about the nature of DM is: what mechanism ensures its stability? Commonly, this is assumed to be a result of an exact symmetry (for a concise review of proposed stabilization mechanisms see, e.g., [1]). One possibility is that the stability of DM is ensured by a gauge symmetry mimicking the way QED gauge invariance ensures the stability of the electron in the standard model [2][3][4]. A more frequent choice is to introduce a Z 2 symmetry by hand. A prominent example is R-parity in the MSSM which both stabilizes DM and ensures the stability of the proton [5][6][7]. An exact Z 2 symmetry can be generated dynamically, e.g., as a remnant of a spontaneously broken U(1) gauge symmetry, such as U(1) B−L [8][9][10]. In this paper we explore a possibility that the discrete Z 2 that ensures the stability of DM is both accidental and approximate. As a result, the DM is metastable with decay times potentially close to the present observational bound of τ 10 26 s. We focus on a particular subset of asymmetric DM models [15] where DM carries baryon number. For recent reviews of asymmetric DM, see [16,17]. Our working assumptions are • Baryon number is a conserved quantum number (it could, for instance, be gauged at high scales). • There is a sector that efficiently annihilates away the symmetric component. The exact form is not directly relevant for our discussion. • The observed flavor structure in the quark sector is explained by flavor dynamics in the UV while DM is not charged under flavor. The flavor dynamics fixes the flavor structure of dark sector couplings to the visible sector in the same way that it fixes the structure of the SM Yukawa interactions. This has two important consequences. First, the exchange of DM in the loops does not generate dangerously large Flavor Changing Neutral Currents (FCNCs). Secondly, and most importantly, a flavor singlet DM is stable on cosmological timescales even for TeV scale mediators between the dark and visible sectors. In this case, the nature of DM stability can even be probed directly at the LHC. The underlying flavor symmetry is crucial for the stability of DM. We will demonstrate this for two realizations of flavor physics: the Minimal Flavor Violation (MFV) hypothesis and for abelian horizontal symmetries in the case where DM carries baryon number 2. In this case the mediators leading to the decay of DM can be at O(100GeV). In contrast, for completely anarchic flavor couplings where DM couples to all quark flavors with O(1) couplings, the indirect DM bounds would require the mediators to have masses in the O(10TeV) range. The implications of continuous flavor symmetries for DM interactions have also been explored in [18? 
-28]. Our analysis differs from these studies in that we are assuming that DM is a flavor singlet (as is the case in most models of DM). This, along with its small mass and conserved baryon number, also ensures that DM is metastable in our setup. The stability of symmetry-less DM in the context of discrete flavor groups has been discussed in [29] (for the potential relation of discrete flavor groups in the leptonic sector and the stability of DM, see also [30][31][32]). Furthermore, the stability of asymmetric DM due to a mirror baryon number was explored in [33] or due to fractional baryon number in [34]. The decaying DM in the context of ADM models was explored in [35][36][37][38]. The paper is structured as follows. In section 2, we review the relation between DM mass and relic abundance in asymmetric DM models. In section 3, we give two examples of flavor breaking models at the level of Effective Field Theory (EFT) analysis that can lead to metastable asymmetric DM. In section 4, we derive the indirect detection bounds JHEP01(2015)089 on the two EFT set-ups. In section 5, we give two examples of mediators that would lead to the EFT set-ups discussed in section 4. The relevant bounds on the mediator masses and couplings, including collider signatures, are derived in section 6. Conclusions are given in section 7, while appendices contain technical details. 2 Dark matter mass in asymmetric dark matter models Asymmetric Dark Matter (ADM) models [15,[39][40][41][42][43][44][45][46][47][48][49][50][51][52] address the question of why the DM density, Ω χ , and the baryon density in the universe, Ω B , are so close to each other, Ω χ 5.3 Ω B [53]. In the standard weakly interacting massive particle (WIMP) models of DM this is to some extent pure coincidence. In this case DM is a thermal relic and with σ v the thermally averaged DM annihilation cross section. The coincidence Ω χ ∼ Ω B then arises due to a fortuitous size of the annihilation cross section for a weakly coupled weak scale DM -the WIMP miracle. In contrast, in ADM models the observed DM is not a thermal relic. Its relic abundance reflects the asymmetry in DM, χ, and anti-DM, χ † , densities in the early universe. The χ and χ † annihilate away, and only the asymmetric component remains. The coincidence of Ω χ and Ω B is then due to the fact that the DM relic abundance has the same origin as the baryon asymmetry. The difference between Ω χ and Ω B is simply due to the fact that the DM particle is more massive than a proton by a factor of a few. More precisely, to explain the observed Ω χ the DM's mass needs to be (see appendix B) where m p is the proton mass. Here (B − L) χ is the B − L charge of the χ field. The exact value of numerical prefactor N 0 O(1) depends on when the operators transferring the baryon asymmetry between the visible and the dark sector decouple. For decoupling temperature above electroweak phase transition, and assuming that there are only the SM fields in the visible sector, gives N 0 = 1.255 for DM that is a complex scalar or a Dirac fermion. In this case the required DM mass is where the error reflects the errors on Ω χ = 0.265 ± 0.011 and Ω B = 0.0499 ± 0.0022 [53,54]. We thus have (B − L) 2 , Y 2 and Y (B − L) summed over effective degrees of freedom in the visible NP sector, cf. eq. (B.17). The m χ required to obtain the correct relic abundance is shown in figure 1. [55] or dynamically induced mass mixing [56]. Henceforth, we will assume that m χ is given by eqs. (2.2) and (2.4). 
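As the explicit form of eq. (2.2) is not reproduced above, the following worked check uses the reconstructed relation (our reading of the equation, consistent with both masses quoted later in the text):

m_\chi \;\simeq\; \frac{N_0}{(B-L)_\chi}\,\frac{\Omega_\chi}{\Omega_B}\, m_p
\;\simeq\; \frac{1.255}{(B-L)_\chi}\times\frac{0.265}{0.0499}\times 0.938~\mathrm{GeV}
\;\simeq\; \frac{6.2~\mathrm{GeV}}{(B-L)_\chi},

which gives m_\chi \simeq 6.2 GeV for (B-L)_\chi = 1 and m_\chi \simeq 3.1 GeV for (B-L)_\chi = 2, matching the values used below for B = 1 and B = 2 DM.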
Our results can be trivially adjusted if this is not the case. For concreteness we assume in this paper the thermal history of the universe that closely resembles the one in [15] and has several distinct epochs relevant for the ADM relic density. At high temperatures, a B − L asymmetry is generated, e.g., via GUT-like baryogenesis [15] or via leptogenesis [55]. The B − L asymmetry is efficiently transferred between the visible and the DM sectors through asymmetric interactions. We do not require a discrete Z n symmetry in the dark sector so that, unlike [15], the asymmetric interactions can involve just a single χ field. At low energies, they have a schematic form taking (B − L) χ = 2 complex scalar DM as an example. Here, C is a flavor-dependent coefficient. The asymmetric interactions freeze out at temperature T f ∼ Λ m χ , below which the B − L asymmetries in the visible and dark sectors are separately conserved. If the flavor breaking is due to a spontaneously broken horizontal symmetry (see section 3.2), the freeze out temperature for the above dimension 10 operator in eq. (2.5) is, using Naive JHEP01(2015)089 Dimensional Analysis (NDA), In the numerical evaluation, we used the lower bound Λ = Λ * = 1.9 TeV from indirect detection eq. (4.2), taken the effective number of relativistic d.o.f. to be g * = 108.75, corresponding to the SM with a complex scalar DM, and set C = 1 which is appropriate for the χb → bsctb transition dominance (with any permutation of the flavors). Note that T f is above the electroweak phase transition temperature T ew ∼ 170 GeV. It is also well below Λ so that the use of EFT is justified. If the mediator scale were too low, Λ 730 GeV (or Λ 400 GeV for MFV breaking), the asymmetric operator would not freeze out before electroweak phase transition started. Consequently, the DM quantum number would not be conserved and the DM density would be washed out. This places a lower bound on the asymmetric mediator masses to be above a few hundred GeV. Finally, at temperatures below the DM mass, the bulk of the DM efficiently annihilates back to the visible sector through symmetric interactions leaving only the small asymmetric component. We have nothing new to say about this mechanism and refer the reader to a set of model building ideas already present in the literature [16,[57][58][59][60]. Metastability and flavor breaking We show next that the DM in ADM models can be stable on cosmological time-scales without invoking discrete Z n symmetries. We assume that the SM quark flavor structure is explained by a continuous flavor group and that the DM carries nonzero baryon number. This is a crucial ingredient in the argument. Since DM is not charged under the flavor group, while the SM fields are, there are no interactions between DM and the SM in the limit that the flavor group is unbroken (all flavor singlet interactions are forbidden by baryon number conservation). All the interactions between DM and the visible sector thus have to be flavor breaking and this leads to a significant suppression of the DM decay time. We show this explicitly for two examples of flavor breaking: i) the MFV ansatz, where all the flavor breaking is assumed to be due to the SM Yukawas, and ii) the spontaneously broken horizontal U(1) symmetries. Integrating out the NP fields gives the effective DM decay Lagrangian The sizes of the Wilson coefficients, C i , are fixed by the assumed flavor generating mechanism. 
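For reference, two standard expressions referred to above but not written out explicitly. The effective decay Lagrangian obtained by integrating out the NP fields has the schematic form (our notation)

\mathcal{L}_{\rm decay} \;=\; \sum_i \frac{C_i}{\Lambda^{\,d_i-4}}\, \mathcal{O}_i \;+\; \mathrm{h.c.},

with d_i the mass dimension of the operator \mathcal{O}_i. Likewise, the freeze-out temperature T_f of the asymmetry-transfer operators is estimated, as is standard, from \Gamma_{\rm transfer}(T_f) \simeq H(T_f) = \sqrt{\pi^2 g_*/90}\; T_f^2/M_{\rm Pl}, with M_{\rm Pl} the reduced Planck mass; the NDA prefactors entering the quoted numerical estimate are not reproduced here.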
We consider the case of DM, χ, that is a SM gauge singlet but carries nonzero baryon number, B = 0. The lowest dimensional asymmetric local operators thus have the generic form where we do not show the contractions of SM gauge indices. Here (n u +n d +n q ) mod 3 = 0 since DM is a color singlet. Note that DM needs to carry an integer baryon number in JHEP01(2015)089 order not to forbid all the asymmetric interactions with the visible sector. Above, u c , d c are the electroweak singlets and q represents the electroweak doublet left-handed quark fields in two component notation, with q * being the corresponding complex conjugated Weyl spinor, see appendix A. In the down-quark mass basis they are The SM Yukawa matrices are then with Y diag D,U the diagonal Yukawa matrices. As an example, let us consider fermionic B = 1 DM. Two distinct types of operators are allowed where ρ, σ are SU(2) L indices while the SU(3) C and flavor indices are implicit and we have chosen one possible Lorentz contraction denoted by the parentheses. Minimal Flavor Violation The MFV assumption is that, also in the NP sector, the flavor is broken only by the SM Yukawas Y U,D [61][62][63][64][65]. The MFV assumption can be most succinctly cast in the spurion language [62]. In the limit of vanishing quark masses the SM quark sector enjoys an enhanced flavor symmetry Here U Q,U,D are transformations from SU(3) Q,U,D , respectively. This means that the low energy operators in (3.1) also need to be formally G F invariant. Keeping only the minimal insertion of Yukawas, the operators O 1,2 in eq. (3.5) for B = 1 DM are where α, β, γ are the color indices, and K, N, M run over the quark generations. The two operators lead to the χ → bus decay at the partonic level which is the least suppressed kinematically allowed transition. For the operator O 1 , this transition arises at 1-loop and requires two chirality flips, see figure 2. The decay amplitude scales as ∼ y t y b with an extra loop factor and a chirality flip suppression ∼ m t Λ QCD /m 2 W . To be conservative, we count the chirality flip suppression due to the light u, d, s quarks as JHEP01(2015)089 proportional to Λ QCD and not to the much smaller quark masses. The operator O 2 leads to the decay χ → bus at tree level with the decay amplitude suppressed by ∼ y b V ub . Once the quarks hadronize, the decays appear as χ → Ξ b π, or χ → Λ b K, with any number of pions. Using NDA to estimate the decay width gives (setting V tb V ud 1) for the case where O 1 and O 2 dominate the decay, respectively. The last 1/16π 2 factor is due to three body final state and is required to obtain the correct estimate for the inclusive decay width as can be seen from the optical theorem and the use of the OPE. In the numerics, we use m t = 173 GeV, m χ = 6.2 GeV, |V ub | = 0.00415. The numerical prefactor 6.6 · 10 −51 GeV = 1/(10 26 s) is chosen to make contact with the bounds on the DM lifetime from indirect DM searches. Note that MFV leads to two sources of suppression. First, there is the suppression of the Wilson coefficients due to Yukawa insertions, y b ∼ 0.024 for O 1 and y b V ub ∼ 10 −4 for O 2 . In addition, there is a loop suppression for O 1 where the decay has to proceed through an off-shell top quark. Without these additional suppressions, the bounds from indirect DM detection would require about two orders larger NP scale, Λ 4.3 · 10 9 TeV. 
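The scales quoted above can be reproduced with a rough NDA estimate. The sketch below (our own; the width formula is an assumption based on the three-body phase-space counting used in appendix C) takes Γ ∼ |C|² m_χ⁵ / (8π · 16π² · Λ⁴) for the dimension-6 operator and imposes a reference lifetime of 10²⁶ s:

```python
import math

# Rough NDA sketch (assumptions, not the paper's code): inclusive width for the three-body
# decay chi -> b u s induced by a dimension-6 operator C/Lambda^2 * chi u d d,
#   Gamma ~ |C|^2 m_chi^5 / (8 pi * 16 pi^2 * Lambda^4),
# with the 1/(8 pi * 16 pi^2) factor for the 3-body phase space (appendix-C counting).
hbar = 6.58e-25                        # GeV * s
Gamma_max = hbar / 1e26                # ~6.6e-51 GeV, i.e. tau > ~1e26 s
m_chi = 6.2                            # GeV, B = 1 Dirac-fermion DM
y_b, V_ub = 0.024, 0.00415

def Lambda_bound(C):
    """Smallest NP scale [GeV] compatible with tau > 1e26 s for Wilson coefficient C."""
    return (abs(C) ** 2 * m_chi ** 5 / (8 * math.pi * 16 * math.pi ** 2 * Gamma_max)) ** 0.25

print(f"no flavor suppression (C = 1)   : Lambda > {Lambda_bound(1.0) / 1e3:.1e} TeV")
print(f"MFV tree operator (C ~ y_b V_ub): Lambda > {Lambda_bound(y_b * V_ub) / 1e3:.1e} TeV")
# -> ~4e9 TeV and ~4e7 TeV respectively: the MFV suppression buys roughly two orders of
#    magnitude in the allowed NP scale, as stated in the text.
```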
The suppression factors are much larger for B = 2 DM, in which case the DM is a scalar, and the asymmetric operators start at dimension 10. We investigate in detail the operator The correct relic abundance requires a DM mass of m χ = 3.1 ± 0.2 GeV, assuming the SM field content at the time of the decoupling of the asymmetric operators. We assume that m χ < m Λ + c + m Σ − = 3.48 GeV, and thus below the threshold for the χ → Λ + c Σ − decay, kinematically forbidding the χ → udc dds partonic transition. The least suppressed partonic level transition is therefore χ → uds uds resulting, after hadronization, in the decays The MFV assumption results in the y b V 2 ub suppression of the Wilson coefficient. The 1/(16π 2 ) 4 factor reflects the fact that, in the OPE, the leading contribution starts at 5 loops. The use of the OPE may be suspect for such low m χ masses and one could expect O(1) corrections to the above estimate from additional soft gluon loops. Indirect DM searches require the NP scale to be Λ 0.49 TeV. This corresponds to the bounds on the masses of the mediators between the dark and the visible sectors, m mediator 490 GeV, m mediator 210 GeV, and m mediator 90 GeV, if the operator (3.8) arises at tree level, 1-loop, or 2-loops, respectively. The mediators can thus be searched for at the LHC as discussed in section 6.3. Note that the flavor suppression was essential to have such a low bound on the NP scale Λ. Without it, and taking the Wilson coefficient to be 1, the indirect bounds on the stability of DM would require Λ 7.3 TeV, implying that the mediators were most likely out of reach of the LHC. The bound on the NP scale Λ is quite sensitive to the actual value of m χ . For larger values of m χ , the χ can decay to top and bottom quarks reducing the loop and CKM suppression of the decay width. This is illustrated in figure 3, where the NP scale is fixed to Λ MFV = 1 TeV and m χ is varied. As the kinematic thresholds for the χ decays to c or b quarks are reached, this results in a change of several orders of magnitude in the predicted decay time. Spontaneously broken horizontal symmetries The suppression we found above using the MFV ansatz is model dependent. To illustrate this point we turn to U(1) Frogatt-Nielsen (FN) models of spontaneously broken horizontal symmetries [66]. The suppression of the Wilson coefficients in the effective Lagrangian (3.1) JHEP01(2015)089 is then given by the horizontal charges of the quarks in the operators. For instance, for the two B = 1 DM operators in (3.1) the Wilson coefficients are Here H(u c K ), . . . , with H(q * K ) = −H(q K ), are the horizontal U(1) charges of the quarks, and λ ∼ 0.2 is the expansion parameter. The dependence of the operators and Wilson coefficients on the generational indices KN M is implicit as are color, weak, and Lorentz contractions in (3.10). An example of a horizontal charge assignment that gives phenomenologically satisfactory quark masses and CKM matrix elements is [67], where the column labels {1, 2, 3} correspond to the first, second, and third generations of quarks. Since the heavier flavors carry smaller charges the DM preferentially decays into the heaviest accessible states. As in MFV, the dominant decay is χ → bus, except that the y b V ub ∼ λ 5 suppression gets replaced by a much more modest ∼ λ |−H(q 1 )+H(s c )−H(q 3 )| = λ. This is the largest scaling allowed by FN charges. In concrete UV mediator models the suppression can, in fact, be much more severe as we will see explicitly in the next section. 
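The FN counting above amounts to reading off the net horizontal charge of each operator. A small sketch of this bookkeeping follows (the charge assignment below is an illustrative placeholder chosen so that the χ → bus combination scales as λ, as stated in the text; it is not the published table of [67]), compared with the corresponding MFV suppression:

```python
# Illustrative FN power counting (our own sketch). The horizontal charges below are
# PLACEHOLDER values of the type used in [67]; they are not the published assignment.
lam = 0.2
H_q = {1: 3, 2: 2, 3: 0}       # doublet charges H(q_K), with H(q*_K) = -H(q_K)  (placeholders)
H_dc = {1: 3, 2: 2, 3: 2}      # down-singlet charges H(d^c_K)                    (placeholders)

def fn_suppression(charges):
    """Wilson-coefficient scaling lambda^|sum of horizontal charges| for a given field content."""
    return lam ** abs(sum(charges))

# O_2-type operator with flavor assignment (q*_1, s^c, q*_3), i.e. the chi -> b u s transition:
C_FN = fn_suppression([-H_q[1], H_dc[2], -H_q[3]])
C_MFV = 0.024 * 0.00415        # y_b * V_ub, the corresponding MFV suppression
print(f"FN : C ~ lambda^1  = {C_FN:.2f}")
print(f"MFV: C ~ y_b V_ub  = {C_MFV:.1e}")
# The FN coefficient is ~O(10^3) times larger, i.e. a much milder suppression, which is why
# the FN case leads to stronger lower bounds on the NP scale from DM decay.
```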
For B = 2 DM the least suppressed operator is suppressing, again, the color and weak contractions. The corresponding Wilson coefficient is suppressed by At the partonic level, the dominant decay is χ → uss uds with a Wilson coefficient that is Note that in MFV this process proceeded through 2 loops so that the suppression was much more severe, ∼ V ts V ub /(16π 2 ) 2 ∼ λ 5 /(16π 2 ) 2 at the amplitude level. While the suppression in the FN case is much less than in the MFV case, it is still nontrivial. It lowers the scale of NP allowed by indirect DM searches from Λ 7.3 TeV, in the case of no flavor structure, to Λ 2.5 TeV in the FN case. Taking the bound from DM indirect detection searches gives Λ 1.9 TeV. If the operator arises at tree level, 1-loop or 2-loops, this corresponds to mediator masses, m mediator 1.9 TeV, m mediator 830 GeV, and m mediator 360 GeV, respectively. constraint on χ → µ + µ − decay time derived in [70], while the light blue line shows the Super-Kamiokande [71] constraint on the χ → νν decay time obtained in [72]. The purple line shows the upper limit on χ → uds and χ → cbs decay times (indistinguishable at the scale of the figure) obtained in [35]. Indirect detection The asymmetric operators discussed in the previous section lead to a decaying DM which can be potentially seen in indirect DM searches. In our models, the χ decays hadronicaly. The decay products thus contain a number of charged particles and photons. The flavor composition of the final state depends on the mass, m χ , and also on the assumed flavor breaking pattern. In section 3, we discussed in detail the case of 6.2 GeV B = 1 DM, which decays through χ → bus and a 3.1 GeV B = 2 DM that decays through χ → uds uds. After hadronization, these result in the decays χ → Ξ 0 b π 0 and χ → Λ 0 Λ 0 , respectively. The dominant decays for other DM masses, assuming the MFV or FN flavor breaking patterns, are given in appendix C. The DM lifetime dependence on m χ is shown in figure 3 after fixing the NP scale to be Λ = 1(3) TeV for the MFV (FN) flavor breaking. To guide the eye, we also show in figure 3 the following bounds from indirect DM searches. The green (orange) line shows the constraint on the DM decay time from FERMI-LAT [68] for χ → bb(µ + µ − ) decays using the NFW profile. The dash-dotted light red line shows the results of an analysis [70] based on AMS-02 [69] and assuming χ → µ + µ − . The light blue line shows the result of an analysis [72] assuming χ →νν decay based on Super-Kamiokande [71] bounds. The purple line is an exclusion curve from [35] based on galactic and extragalactic gamma ray flux measurements by Fermi [73][74][75]. The authors JHEP01(2015)089 Table 2. The gauge and global charge assignment for the three scalar mediators, φ L , ϕ L and φ R , in the first UV completion toy model for which we assume the MFV flavor breaking pattern. in [35] consider χ → uds and χ → cbs decays as two extreme choices for the flavor structure of the final states. The derived bounds on the χ lifetime differ by less then a factor of 2 such that the two bounds overlap on the scale of figure 3. The decays we consider fall between these two extreme choices with potentially weakened bounds in our cases above where the dominant operator is given in (3.8). For the FN case the bound is where the least suppressed operator is given in (3.13). 
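These numbers can be checked with the same kind of NDA estimate used for the B = 1 case. The sketch below (our own; the six-body width formula is an assumption based on the phase-space counting of appendix C) also illustrates how the EFT bound translates into mediator-mass bounds when the operator is generated at loop level:

```python
import math

# Rough NDA cross-check (our own sketch) for the dimension-10 operator C/Lambda^6 * chi (qqd)(qqd):
#   Gamma ~ |C|^2 * m_chi^13 / (8 pi * (16 pi^2)^4 * Lambda^12),
# using the 6-body phase-space factor quoted in appendix C, with tau > ~1e26 s imposed.
hbar = 6.58e-25                     # GeV * s
Gamma_max = hbar / 1e26             # ~6.6e-51 GeV
m_chi = 3.1                         # GeV, B = 2 scalar DM
lam = 0.2                           # FN expansion parameter
phase_space = 8 * math.pi * (16 * math.pi ** 2) ** 4

def Lambda_bound(C):
    """NP scale [GeV] saturating the lifetime bound for Wilson coefficient C."""
    return (abs(C) ** 2 * m_chi ** 13 / (phase_space * Gamma_max)) ** (1.0 / 12.0)

def mediator_bound(Lambda, loops):
    """Rescale the EFT bound to a mediator mass if the operator arises at `loops` loops."""
    return Lambda * (16 * math.pi ** 2) ** (-loops / 6.0)

print(f"C = 1        : Lambda > {Lambda_bound(1.0) / 1e3:.1f} TeV")        # ~7.3 TeV
print(f"C ~ lambda^4 : Lambda > {Lambda_bound(lam ** 4) / 1e3:.1f} TeV")   # ~2.5 TeV (FN, chi -> uds uds)
Lambda_FN = Lambda_bound(lam ** 5)                                          # FN, chi -> uss uds
print(f"C ~ lambda^5 : Lambda > {Lambda_FN / 1e3:.1f} TeV")                # ~1.9 TeV
for L in (0, 1, 2):
    print(f"  generated at {L} loop(s): m_mediator > {mediator_bound(Lambda_FN, L):.0f} GeV")
# -> roughly 1.9 TeV, 0.8 TeV and 0.35 TeV, in line with the numbers quoted above
#    (the MFV coefficient y_b V_ub^2 gives a Lambda of a few hundred GeV in the same way).
```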
Mediator models The EFT analysis of metastable ADM using asymmetric operators is an appropriate approach to derive the indirect DM detection signatures as we did in the previous section. However, for DM direct detection searches and the DM production at colliders, the dominant signals are due to either a single mediator exchange or from direct production of the mediators. To assess the reach of these DM searches, the UV completions to our models are therefore needed. We introduce two toy model UV completions that can generate the dimension 10 effective operators; that is, the operator in eq. (3.8) for the MFV case and the operator in eq. (3.13) for the FN case. The EFT operators are generated when the ∼TeV mediators are integrated out. In our first model, all the mediators are scalars, while in the second model there is also a fermionic mediator. The flavor structure in either of the two models could be of the MFV or of the FN type. For concreteness we fix the first model to have the MFV flavor breaking, and the second model to have the FN flavor breaking. MFV model with scalar mediators The SM is extended by the DM, χ, and three flavor multiplets of scalar mediators -a color anti-triplet φ L and a color sextet ϕ L , both with hypercharge 1/3, and a color sextet φ R with JHEP01(2015)089 Table 3. Gauge and B − L charges of the mediators φ and ψ in the second UV completion toy model. We also assume the FN flavor breaking pattern. hypercharge −2/3 (see table 2). They transform under the flavor group G F as (6, 1, 1), (3, 1, 1), and (3, 1, 1), respectively. The interaction Lagrangian between mediators and the SM is thus given by andK αβ λ , are the same as in [76] and satisfy the completeness relation (K AB I ) * KCD , and similary forK αβ λ . In the second line of (5.1), the down Yukawa insertions make the interaction term with right-handed down quarks formally invariant under G F . Integrating out the mediators φ L,R , ϕ L , gives the χ decay operator (3.8), with the Wilson coefficient For κ 1 = κ 2 = κ 3 = κ 4 = 1 the bounds from indirect DM searches thus require m φ L ,φ R ,ϕ L 450 GeV, if all the mediator masses are the same. This should be appropriately rescaled if either κ i have smaller values or if all masses are not the same. For instance, for κ i = 0.3 the mass degenerate case of the mediators is bounded from below by m φ L ,φ R ,ϕ L 200 GeV. Since the mediators carry color charges, they can be searched for at the LHC as discussed in section 6.3 below. Note that, for the Lagrangian in eq. (5.1) the common scenario where the symmetric component of χ density annihilates through a dark photon [16,[57][58][59] is phenomenologically not viable. In this case, at least some of the SM quark fields would need to carry a dark U(1) charge in conflict with the low energy constraints if dark photon is light. A viable possibility, on the other hand, is the annihilation of χχ † to a pair of light scalars along the lines of ref. [77]. FN model with fermionic and scalar mediators In the second model the SM is supplemented with a DM scalar χ, a Dirac fermion ψ and a complex scalar φ with SM gauge assignments as in table 3. The relevant terms in the JHEP01(2015)089 where, for the couplings g q , g d , we also denote the flavor dependence. If the flavor breaking is of the FN type and the mediators do not carry a horizontal charge, then where g q,d ∼ O(1). 
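Returning briefly to the MFV mediator model above: since the dimension-10 Wilson coefficient scales (by assumption) as κ₁κ₂κ₃κ₄/m⁶ for degenerate mediator masses m, the quoted mass bound simply rescales with the couplings. A minimal sketch of that rescaling:

```python
# Minimal sketch (our own) of how the indirect-detection bound on degenerate mediator masses in
# the MFV model rescales with the couplings kappa_i, assuming the Wilson coefficient scales as
# kappa_1 kappa_2 kappa_3 kappa_4 / m^6, so the lifetime bound on m goes as (prod kappa)^(1/6).
m_bound_unit_couplings = 450.0     # GeV, quoted bound for kappa_i = 1

def mediator_mass_bound(*kappas):
    prod = 1.0
    for k in kappas:
        prod *= k
    return m_bound_unit_couplings * prod ** (1.0 / 6.0)

print(f"kappa_i = 1.0 : m > {mediator_mass_bound(1.0, 1.0, 1.0, 1.0):.0f} GeV")   # 450 GeV
print(f"kappa_i = 0.3 : m > {mediator_mass_bound(0.3, 0.3, 0.3, 0.3):.0f} GeV")   # ~200 GeV, as quoted
```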
Integrating out the mediators generates the operator (3.13) with the Wilson coefficient Note that the flavor suppression here is parametrically different than in (3.14) which was obtained by assuming that the FN scale is close to the TeV scale and that the interactions of DM with the visible sector involve the FN fields. In the above model, however, the FN scale can be arbitrarily high and only fixes the flavor interactions between the mediator and the SM fields. Consequently, the leading decay is now χ → ussuds where the suppression for the amplitude is ∼ λ |H(d c )|+|H(s c )|+2|H(q 2 )+H(q 1 )| ∼ λ 15 , to be compared with the λ 4 suppression in the more conservative case considered in section 3.2 where the leading decay is χ → udsuds. The indirect detection bound (4.2) thus translates in our toy mediator model to m φ,ψ 130 GeV for mass degenerate φ and ψ. However, since the coupling to the third generation quarks is O(1), the scalar mediators should in fact be heavier than the top quark in order not to modify its total decay width. The scaling (5.4) changes if the mediators carry nonzero horizontal charges. For instance, if the horizontal charge of φ is nonzero, H(φ) = 0, one has g q,AB ∼ JHEP01(2015)089 Figure 5. Box diagrams contributing to the neutral meson mixing. In the MFV model, there is also a contribution with both φ L and ϕ L in the loop, while φ R contributions are suppressed and can be ignored. In this case, the indirect detection bounds need to be appropriately rescaled. For −2 ≤ H(φ) ≤ 5 the Wilson coefficient is still given by (5.5) and thus m φ,ψ 130 GeV from indirect bounds as before. For other values of H(φ), the bound becomes even weaker. As far as the annihilation of symmetric part of the χ relic density is concerned, similar comments as for the MFV model in section 5.1 apply. The dark U(1) is phenomenologically not viable, while annihilation to light scalars is. Furthermore, if ψ has a mass within O(10%) of m χ , the process χχ † → ψψ † , which is forbidden at zero temperature but allowed for nonzero temperatures at the freeze-out, can efficiently annihilate away the symmetric component of χ without any need for additional states. Experimental signatures of the mediators Now we turn to the experimental signatures of weak scale mediators, the flavor constraints, direct DM detection, and DM production at the LHC. Flavor constraints The two mediator models from section 5 do not lead to tree level flavor changing neutral currents (FCNCs). These are first generated at 1-loop, see figure 5. For real couplings κ i and g q/d in eqs. mixing require the mediators masses to be generically above several hundred GeV as we show below. For related analyses of flavor constraints on diquarks, see, e.g., [78,79]. The ∆F = 2 effective weak Hamiltonian is where i = 1, . . . , 5 runs over the dimension six operators (we use the notation in [80]). Integrating out the mediators and the W at the weak scale gives, at leading order, a nonzero Wilson coefficient for the operator in the case of the FN model. Above, we first give the operators in the 4-component notation and then also in the 2-component notation (for our notation see appendix A). In the matching there are two types of contributions: in the first, only the mediators run in the loop whereas in the second, both the scalar mediator and the W boson run in the loop, see figure 5. For the MFV model, these give for the K 0 −K 0 , D 0 −D 0 , and B (s) −B (s) mixing while C FN 1D = C FN 1K andC FN 1D = 0. 
Above, we have indicated the scaling of different contributions to the Wilson coefficient in terms of λ = 0.2, cf. section 3.2. In the numerics we use the equality sign. The loop functions H(x) and H F (x 1 , x 2 ) are given in appendix D. Note that the above Wilson coefficients contain log(m i /m φ ) that can become large for m φ m i . We do not attempt to resum these logarithms which also means that we treat all the NP contributions as local. We expect that our numerical results can receive O(1) corrections due to neglected terms. This is within the precision required for our analysis. Though, we do include the usual RGE effects due to the NLO QCD running of the effective weak Hamiltonian from the weak scale to the low energy. For constraints from K 0 −K 0 and B (s) −B (s) mixing we use the recent results of a fit to the mixing parameters in [81]. The constraints from D 0 −D 0 mixing are obtained by assuming that the NP contribution saturates ∆m D so that in the equation x D = 2 D 0 |H ∆C=2 eff |D 0 /Γ D , valid in the limit of no CP violation, we only include the NP contribution [80]. The resulting bounds on couplings and masses are shown in table 4. In the case of the MFV model, the most severe bound comes from K 0 −K 0 and is due to K . Since we assume that all the κ i in (5.1) are real, the NP contribution does carry a weak phase due to the V ts V * td CKM factors and does contribute to K . In contrast, in the FN model the NP contributions to the mixing do not carry a weak phase and thus do not have an effect on K . Therefore, the bounds from K 0 −K 0 mixing are much less severe. JHEP01(2015)089 In figure 6, we show the constraint on the couplings κ 1,2 in the MFV model, fixing contribution to K 0 −K 0 is from the mediator-W loop, the K bound places a stringent constraint on κ 2 . Since the NP contributions to the meson mixing were assumed to be CP conserving in the case of the FN model, the couplings g d,q ∼ O(1) are allowed even for m φ as low as 200 GeV. Relic abundance and direct detection We note in passing that the virtual exchanges of the mediators generate contact operators of the schematic form χ † χqq that contribute to the χχ † annihilation cross section and to the cross section for DM scattering on nuclei. The symmetric couplings of DM and the mediators, of schematic form χχ † φφ † , do not suffice to create large enough annihilation cross sections that would annihilate away the symmetric component of DM relic abundance. As an example, consider the MFV model with scalar mediators, eq. (5.1), and assume that the lightest mediator is φ L . It can have a symmetric coupling to DM of the form At 1-loop, this generates a contact interaction χ † ∂ µ χqγ µ q, which leads to an annihilation cross section σv ∼ O(10 −28 cm 3 /s)(100GeV/m φ L ) 4 for O(1) couplings. This annihilation cross section is more than three orders of magnitude too small to obtain the observed relic density and satisfy CMB constraints for s-wave annihilation [57]. Thus, the symmetric component of the DM needs to annihilate away through a different mechanism as discussed at the end of section 5.1. Collider signatures In both the MFV and FN flavor breaking scenarios, the mediator models involve colored scalars. These can be searched for at the LHC through the gluon initiated pair production or through a single production. We use our two mediator models to estimate the LHC reach. The MFV mediator model, eq. 
(5.1), contains three colored scalars that are either triplets or sextets of the color and flavor groups, see Pair production of colored scalars is the dominant production mechanism of the mediators for the masses of interest, below O(TeV). We illustrate this in figure 7 for the color triplet φ in the FN model where we compare the pair production cross section from gluon fusion and from quark-guon fusion, and the single production of φ in association with a jet. Gluon fusion clearly dominates in the mass range of interest. The signatures of pair produced colored scalars depend on their decay modes. In our two models they decay either directly to two SM quarks or, alternatively, first to two lighter scalars that then in turn decay to two jets each. In the FN model the decay φ → jψ is also possible. The flavor composition of the jets depends on the flavor quantum numbers of the scalar. For instance, the states in the φ L flavor multiplet can decay either predominantly through φ L → tb, φ L → bj, or φ L → jj, depending on the flavor numbers of φ L (and similarly for ϕ L ), see eq. Figure 7. The gg → φφ † (solid blue), qq → φφ † (dot-dashed red) and gq → φj (solid light blue) contributions to the pair-production and single-production cross-section at the LHC with √ s = 14 TeV as a function of a mass of a color triplet scalar φ, a mediator in the FN model. φ R state. In the FN model one needs to require m φ > m t in order not to modify the total decay width of the top quark, see section 5. Then, the dominant decay is either φ →bψ or φ → tb, depending on the relative sizes of the two couplings, while the other decays are suppressed by additional powers of λ. To get a rough estimate of the LHC sensitivity we treat all the decay modes as twojet final states (this overestimates the reach slightly since, for the tj final state, the real efficiency is expected to be lower). The strongest constraint on pair-production of the lightest scalar mediators then comes from the search for pair-produced dijet resonances from CMS at 7 TeV LHC with integrated luminosity of 5 fb −1 [82]. This places the bounds m φ > ∼ 470 GeV in the case of FN model assuming that φ →bψ decay is negligible, and m φ L > ∼ 620 GeV, m ϕ L > ∼ 910 GeV, m φ R > ∼ 580 GeV in the case of MFV flavor breaking as shown in figure 8. Note that when all three mediators are degenerate in mass, the color sextet scalar has the largest pair production cross section due to the large color factor. In the FN model, a new experimental signature is obtained in the limit g d λ 2 g q . Then the dominant decay of φ is φ →bψ. In order not to have fast decaying DM m ψ > m χ /2. Using NDA the ψ decay length is (6.11) For light enough ψ (or heavy enough φ ), the fermion ψ does not decay in the detector and appears as E / T . The pp → φφ † pair production then results in 2j + E / T or 2b + E / T final state, and is bounded from sbottom searches as shown in figure 9. Figure 8. Constraints on the scalar mediator φ in the FN model, and φ L , ϕ L , φ R in the MFV model that follow from the CMS search for pair-produced dijet-resonances [82]. The states in the same flavor multiplet are taken to be mass-degenerate. g q = g d = 0.03 in eq. (6.11) gives BR(φ → bψ) ≈ BR(φ → sψ) = 0.33. For the same input parameters, the single production of ψ in association with b, t, or φ has a cross section ∼ 7 · 10 −2 fb while the pair production is dominated by the process ss → ψψ and has a negligible cross section of ∼ 4 · 10 −4 fb. 
The single production of mediators, e.g., ud → φ, ud → φ L , ud → ϕ L , ds → φ R , is suppressed due to the small couplings of the mediators to the first and the second generation quarks. Similarly, the single production from heavy quarks in the initial state suffers from the PDF suppression. This is well below the SM production cross section. Thus, the ATLAS and CMS combined measurement of the single top cross section at √ s = 8 TeV, 85 ± 12 pb [84] and so does not impose any limits on the mediator model. The production of the DM, χ, can occur from the decay of heavier mediators. For instance, for κ 4 ∼ κ 3 and φ R heavy enough, the dominant decay mode of R would thus result in 8j + E / T signature where paired dijets would reconstruct φ L and ϕ L mass peaks (depending on the flavor assignments some of the jets can be replaced by t of b jets). Conclusions We showed that for asymmetric DM (ADM) models, the stability of DM on cosmological time scales may be purely accidental. We do not require that the DM to be charged under BR(φ → bψ) = 1 BR(φ → bψ) = 0.5 Figure 9. The 95% exclusion limit on φφ † production in the FN model for the bbψψ final state, where ψ escapes the detector and sbottom search applies [83]. The solid blue (dashed red) line is for φ → bψ branching ratios of 50% and 100%. an ad-hoc conserved Z n symmetry. Rather, we assume that such a discrete symmetry is explicitly broken by the mediator interactions that transfer the B − L between the DM sector and the visible sector in the early universe. Such asymmetric interactions are necessary in all models of ADM though they may be made to obey a Z 4 symmetry (i.e. one can demand that they involve only the χχ → visible or χ † χ † → visible transitions instead of χ → visible transitions as in our case). At low energies, the DM then carries a conserved χ charge that is broken only by the higher dimensional operators obtained by integrating out the mediators. Such operators also lead to DM decays. In this paper we explored the role of continuous flavor symmetries for the properties of such decaying DM focusing on the case where DM that carries nonzero baryon number. For B = 1 DM, the direct detection bounds are evaded if the mediators are above ∼ 4 · 10 9 TeV assuming O(1) couplings. However, if quark flavor breaking is of the MFV type, the mediators can be lighter by around two orders of magnitude. For B = 2 DM, the scale of the mediators can be much lighter (O(8TeV) for O(1) couplings). This is then lowered by an order of magnitude if quark flavor breaking is of the MFV or Froggatt-Nielsen type. The mediators that would lead to indirect DM signals in the next generation of experiments can thus be, at the same time, searched for at the LHC. We have explored this possibility by constructing two mediator models, one with assumed MFV and one with a FN flavor breaking pattern. The MFV mediator model (eq. (5.1)) contains three colored scalars that are either triplets or sextets of the color and the flavor groups, see table 2. The FN model (eq. (5.3)), on the other hand, contains JHEP01(2015)089 one colored scalar and one neutral fermion, see table 3. These mediators generate FCNCs at 1-loop. While this leads to nontrivial constraints on their masses and couplings, the mediators can still be as light as a few ×100 GeV with O(1) couplings. Since the mediators are charged under QCD, they can be singly or pair-produced at the LHC with large cross sections. 
This means that the searches at the LHC can lead to interesting constraints or discoveries. The signatures depend on how the mediators decay. In the FN model, for instance, the decay to heavy quarks, φ → tb, is favored. Modifying the paired dijet searches to the pp → φφ → tbtb signal could thus enhance the reach of the LHC in the search for these mediators. In the MFV model, on the other hand, paired light dijets, paired tb, and paired bj are possible. Other signatures are discussed in section 6.3. In conclusion, ADM can quite generically be metastable with a possibility of complementary signals in indirect detection and at the LHC. B Asymmetric DM relic density Here we review the relations between the DM relic density and the DM mass in ADM models. We assume that the operator(s) transferring the B −L asymmetry from the visible to the dark sector decouple above electroweak phase transition, T C > T ew ∼ 170 GeV [88], as is the case for our ADM models, see section 2. We first assume that the visible sector consists below T C of only the SM fields (we will later relax this). The number density asymmetry for relativistic particles is where n(n) are the particle(anti-particle) number densities, µ i is the chemical potential for species i, andĝ i = g i (g i /2) for bosons (fermions) with g i internal degrees of freedom so thatĝ i = 1 for a Weyl fermion, whileĝ i = 2 for a Dirac fermion or a complex scalar. All the SM particles are in chemical equilibrium, so that the chemical potentials are proportional to the conserved quantum numbers [89]. Above the electroweak phase transition these are B − L, Y and SU(2) L , while B + L is broken by sphalerons. Thus (see also [90]) where the c i are constants that we determine from net weak isospin, hypercharge and B − L densities. The net weak isospin charge density in the universe normalized to entropy density is For the first equality we used that for each SU(2) multiplet i (T 3 ) i = 0, and in the second equality that the net T 3 charge is zero since SU(2) L is not explicitly broken. Thus c 3 = 0 and the SU(2) L charge of a particle does not contribute to its chemical potential. Flavor mixing ensures that the chemical potentials for SM Weyl fermions from different generations are the same. Similarly, SU(2) L interactions ensure that µ u L = µ d L ≡ µ Q , and µ L = µ ν ≡ µ L . We thus have JHEP01(2015)089 while for the gauge bosons µ G = µ W = µ B = 0. The net hypercharge of the universe is thus where N f = 3 is the number of generations and N c is the number of colors. Setting the net hypercharge density in the universe to zero, Y = 0, gives The net B − L number density in the visible sector (i.e. excluding the B − L asymmetry carried by the χ fields in the dark sector) is then There are two types of interactions between the dark and visible sector: the asymmetric interactions that involve a single χ field, and the symmetric interactions of the form χ † χ times the SM fields. The symmetric operators keep the dark and the visible sectors in thermal equilibrium. The asymmetric interactions are suppressed, and decouple at temperatures well above the χ mass. At lower temperatures the χ number is thus effectively conserved. The chemical potential µ χ is the same as it was before the decoupling. We thus have The net χ number density normalized to entropy density we denote by ∆χ and is Since B − L and χ are conserved quantum numbers below the decoupling temperature, each of the number densities scales as R −3 as universe expands. The ratio thus stays fixed. 
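The counting just described can be carried out explicitly. In the sketch below (our own; the hypercharge normalization Q = T₃ + Y and the single SM Higgs doublet are standard assumptions, and c₃ = 0 as argued above), each species enters with weight ĝᵢ and chemical potential μᵢ = c_Y Yᵢ + c_{B−L}(B−L)ᵢ, the net hypercharge is set to zero, and the resulting baryon-to-(B−L) ratio is evaluated; it reproduces the value B/(B−L) = 28/79 quoted just below:

```python
from fractions import Fraction as F

# Sketch (our own) of the chemical-potential counting described above: mu_i = c_Y Y_i + c_BL (B-L)_i
# with c_3 = 0. Species are listed as (g_hat, Y, B, B-L), with g_hat = 1 per Weyl fermion and
# g_hat = 2 per complex scalar; hypercharge is normalized so that Q = T3 + Y (our convention).
def sm_species(n_gen=3):
    per_gen = [
        (6, F(1, 6), F(1, 3), F(1, 3)),    # quark doublet Q (3 colors x 2)
        (3, F(2, 3), F(1, 3), F(1, 3)),    # u_R
        (3, F(-1, 3), F(1, 3), F(1, 3)),   # d_R
        (2, F(-1, 2), 0, -1),              # lepton doublet L
        (1, -1, 0, -1),                    # e_R
    ]
    species = [(g * n_gen, Y, B, BL) for (g, Y, B, BL) in per_gen]
    species.append((4, F(1, 2), 0, 0))     # Higgs doublet: 2 complex scalars, g_hat = 2 each
    return species

def charge_sum(species, f):
    return sum(g * f(Y, B, BL) for (g, Y, B, BL) in species)

s = sm_species()
S_YY = charge_sum(s, lambda Y, B, BL: Y * Y)        # = 11
S_YBL = charge_sum(s, lambda Y, B, BL: Y * BL)      # = 8
S_BLBL = charge_sum(s, lambda Y, B, BL: BL * BL)    # = 13
S_BY = charge_sum(s, lambda Y, B, BL: B * Y)        # = 2
S_BBL = charge_sum(s, lambda Y, B, BL: B * BL)      # = 4

c_Y_over_c_BL = -S_YBL / S_YY                       # from net hypercharge = 0
B_over_BL = (S_BY * c_Y_over_c_BL + S_BBL) / (S_YBL * c_Y_over_c_BL + S_BLBL)
print(B_over_BL, float(B_over_BL))                  # 28/79 ~ 0.354, the value used in the text
```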
Even if at the decoupling there are more χ i dark sector states, we assume that DM is composed only from one state, χ. We therefore have for the ratio of baryon and dark matter energy densities JHEP01(2015)089 The ratio of net B and B − L numbers B/(B − L) = 28/79 = 0.354 just above the electroweak phase transition [89]. This remains essentially unchanged even if sphaleron and top mass effects are taken into account, in which case using results from [88,91] one has B/(B − L) = 0.349 for both scalar and fermionic DM. Using (B − L)/∆χ = 79/(11(B − L) sum χ ) from (B.11) finally leads to where in the last equality we used Ω χ = 0.265 ± 0.011 and Ω B = 0.0499 ± 0.0022 [53]. Note that the error is dominated by the experimental determination of DM and baryon densities. For instance, the difference between B/(B − L) determination with and without sphaleron effects leads to a smaller shift in m χ than the above quoted error. We turn next to the case of additional fields in the visible sector. An example would be that SM gets completed to the MSSM. The relation between Y, B − L and the constants c Y,B−L can be written in the matrix form (B.14) Here we defined The net χ charge is still given by eq. (B.11), while the ratio Ω B /Ω χ is given by (B.12) with (B − L)/∆χ fixed at the decoupling temperature and B/(B − L) at the electroweak phase transition. We thus have , and χ(q * q * )(q * q * )(d c d c ). For the same NP suppression scale Λ the last type of operators gives the shortest lifetime. The dominant effective decay Lagrangian is thus, schematically, where C is a flavor-dependent Wilson coefficient, the brackets enclose Lorentz contracted pairs, and summation over different flavor, color and weak isospin contractions is understood. In section 3 we included the SM Yukawa insertions in the definition of the operators. To unify the notation we instead use in this appendix the convention that the Wilson coefficient C encodes all the flavor suppressions. The effective decay Lagrangian is thus, going to the mass basis, and displaying the flavor indices only, where the flavor dependent Wilson coefficients are The partial decay width for χ → qqqqdd transition is then, using NDA, The factor 1/(8π) × 1/(16π 2 ) 4 results from integrating over the 6-body phase space. For the MFV flavor breaking case there are several subtleties when calculating the decay width. For instance, the Levi-Civita tensor contractions lead to vanishing operators for some of the color and Lorentz contractions. Another subtlety is that the tree decay may be strongly CKM suppressed so that the leading decay amplitude is the 1-loop one, see figure 10. The decay width can thus be estimated as quark content of the second is∼ udd + ddc and its rest mass m n + m Σ 0 c = 3.4 GeV. In contrast, the decays to Ξ 0 (∼ uds) or Λ 0 (∼ uds) baryons are allowed. Eq. (C.6) gives with the same estimate, within our precision, for the χ → Ξ 0 , Ξ 0 or χ → Λ 0 , Λ 0 decays. Note that in the 1-loop amplitude the partonic transition at the decay vertex, χ → udb + tds, carries no CKM suppression. Furthermore, the y s Yukawa insertion in the tree level amplitude is replaced by y b . The b and t quark lines then convert to u and s quark lines via W exchange, as shown in figure 10. The smaller CKM and Yukawa suppressions compensate the loop factor so that the 1-loop amplitude dominates, with the NDA estimate Γ mfv loop /Γ mfv tree ∼ O(10). This procedure can be repeated for different DM masses, arriving at the dominant decay modes as a function of m χ . 
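The bookkeeping behind this procedure is simple: for a given m_χ one sums the partial widths of the kinematically open channels. The sketch below illustrates the logic only; the thresholds are the hadronic ones discussed in the text (plus an approximate b-baryon threshold), while the partial widths are placeholders rather than the NDA values of table 5:

```python
# Illustration of the channel bookkeeping (our own sketch). The partial widths below are
# PLACEHOLDERS chosen only to show the effect of thresholds; they are not the NDA values of
# table 5. Thresholds: 2 m_Lambda ~ 2.23 GeV, m_Lambda_c + m_Sigma ~ 3.48 GeV (quoted in the
# text), and ~6.6 GeV as an approximate threshold for channels with a b-flavored baryon.
channels = [
    (2.23, 1.0e-54),   # e.g. chi -> Lambda0 Lambda0            (placeholder width, GeV)
    (3.48, 1.0e-51),   # e.g. chi -> Lambda_c+ Sigma-           (placeholder width, GeV)
    (6.6,  1.0e-47),   # e.g. channels with a b-flavored baryon (placeholder width, GeV)
]
hbar = 6.58e-25        # GeV * s

def lifetime(m_chi):
    total = sum(width for threshold, width in channels if m_chi > threshold)
    return float("inf") if total == 0.0 else hbar / total

for m in (3.1, 4.0, 7.0):
    print(f"m_chi = {m} GeV -> tau ~ {lifetime(m):.1e} s")
# Each time a new threshold opens, the total width jumps by orders of magnitude, which is why
# the predicted lifetime in figure 3 drops sharply at the c- and b-quark thresholds.
```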
The results are listed in table 5, where we give the kinematical thresholds (1st column) for a number of decay channels (4th column), along with the corresponding partonic transitions (3rd column) and the decay vertex transitions (2nd column). The latter two differ for the loop processes, cf. figure 10. The total decay width for given m χ is then the sum of partial decay widths, Γ i , (5th column) for the decay channels that are kinematically allowed. For convenience we also give the decay times, τ i , (6th column) that correspond to individual partial decay widths. Note that in the calculation of the partial decay widths we neglect the phase space suppression, while the quoted Γ i in table 5 are obtained from the NDA estimates (C.6) with m χ at the kinematical threshold, and setting Λ = 1TeV. In the case of FN flavor breaking the leading tree level and loop induced decay widths JHEP01(2015)089 Table 5. Partial decay widths, Γ i , and related decay times, τ i = 1/Γ i , for representative decay channels above kinematical thresholds (1st column) assuming the MFV flavor breaking ansatz. The EFT scale is set to Λ = 1 TeV. The last column denotes whether the dominant amplitude is tree level or 1-loop, while the 2nd and the 3rd columns give the decay vertex transition and the partonic transition after the potential W exchange, respectively. (C. 10) In this case the tree level decay dominates over the loop induced decay by four orders of magnitude. The dominance of the tree level decay amplitude over the 1-loop decay amplitude holds also, if the DM mass is varied. This can be traced to the following difference between the MFV and FN ansätze. In the MFV case the Levi-Civita tensors enforce that two quark flavors in the effective decay vertex need to be from the third generation. This can be changed either by using the V CKM misalignment or through a loop transition. In FN flavor structure ansatz, on the other hand, the flavor indices need not be antisymmetric. JHEP01(2015)089 D Loop functions in neutral meson mixing Here we list the analytical form of the loop functions F (x), F F (x 1 , x 2 ), G(x 1 , x 2 ), G F (x 1 , x 2 , x 3) and H(x), H F (x 1 , x 2 ) that appear in the 1-loop expressions for the Wilson coefficients in the neutral meson mixing, section 6.1. The mediator loop functions with mass degenerate quarks in the loop are given by while for two different quarks running in the loop they are 3) The loop functions for the mediator-W loops are . (D.5) Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
Enhanced and Constant-value Transient Diffraction Efficiency from a Recorded Grating in a BaTiO 3 Crystal A simple method has been proposed here to get an enhanced and constant-value transient diffraction efficiency for a large time scale from a recorded grating in an undoped BaTiO3 crystal. After writing a steady-state grating by two cw recording beams (λ=514.5 nm), when the recording beams are switched off, the recorded grating as well as the diffraction efficiency decreases at a faster rate initially, and thereafter reaches to almost a stationary level for some few seconds. The diffraction efficiency for this level is very small compared to that for a steady-state grating and it becomes smaller slowly with time. We can increase (about twice the initial stationary-level value) this small diffraction efficiency from its any initial value and have this value (increased) constantly for a large time scale by using a suitable backward-pulsed reading beam containing the same properties as the recording beams. (TDF) which was up to ten times greater than the stationary TDE and was caused by the preillumination of Bi 4 Ti 3 O 12 at a low temperature.Smirl et al. [9] observed in BaTiO 3 an increase of the holographic diffraction efficiency in the dark after holographic writing with ps laser pulses.This effect can be understood in terms of the two-center charge transport model [10].In our previous works, we experimentally observed a unique increase in the TDE for a given pulsed reading beam under suitable conditions in an undoped BaTiO 3 crystal at room temperature [11].Using this increased TDE we could also observe a unique increase in the switched phase-conjugate reflectivity (SPCR) [12,13].The observations were attributed to the presence of two shallow levels in the charge transport process in the crystal even at room temperature in our proposed model [11][12][13][14][15]. In all of our previous works, we observed that after writing a steady-state grating by two cw recording beams (λ=514.5 nm), when the recording beams are switched off, the recorded grating as well as the diffraction efficiency decreases at a faster rate initially, and thereafter reaches to a nearly steady-state level for a short time scale (i.e., it remains almost constant for some few seconds).The diffraction efficiency for this level is very small with respect to that for a steady-state grating and it becomes smaller slowly as time goes on (i.e., for a large time scale).Our previously observed [11][12][13][14][15] unique enhancement of TDE or SPCR was from this short time nearly steady-state level and the investigations were for maximizing their transient values.The enhanced value of TDE or SPCR was for a given pulse of the backward reading beam and could somewhat be controlled by using proper interacting beams. 
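Throughout, the diffraction efficiency is the ratio of the diffracted to the incident probe intensity. For orientation (this relation is standard coupled-wave theory and is not taken from the paper), Kogelnik's expression for a thick transmission phase grating read out at the Bragg angle links it to the light-induced index modulation Δn; the thickness and Bragg angle below are placeholders, not the experimental parameters:

```python
import math

# Reference relation (standard coupled-wave theory, not taken from the paper): for a thick
# transmission phase grating read out at the Bragg angle,
#   eta = sin^2( pi * dn * d / (lambda * cos(theta_B)) ),
# so the diffracted probe power tracks the index-modulation depth dn of the recorded grating.
wavelength = 632.8e-9            # probe wavelength [m]
thickness = 2e-3                 # interaction length in the crystal [m] (placeholder)
theta_B = math.radians(10.0)     # internal Bragg angle [rad] (placeholder)

def diffraction_efficiency(dn):
    return math.sin(math.pi * dn * thickness / (wavelength * math.cos(theta_B))) ** 2

for dn in (1e-7, 1e-6, 5e-6):
    print(f"dn = {dn:.0e} -> eta = {diffraction_efficiency(dn):.4f}")
```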
For a recorded grating, one major problem is the degradation of recorded gratings with time (even in dark) due to the recombination of the charge carriers, and also by the reading beam during the read-out process.Therefore, the diffraction efficiency (hence, the ratio between diffracted and incident light intensity of the probe beam) from a recorded grating decreases with time whether a reading beam is present or not.However, in the presence of a reading beam, the diffraction efficiency decreases with increasing reading beam (or pump beam) power [3,7,11] (although, the TDE is seen to be increased for some initial pulses when a pulsed reading beam is incident on the grating [11,12]).Despite the degradation of the recorded grating, if an enhanced and constant-value of TDE for a large time scale (e.g., 10-30 mins.) is possible ( from its any initial value) from a recorded grating, after a significantly later time of recording, then it would in principle be useful in various applications such as optical image processing, holographic storage etc. In this study, we give a method to get an enhanced (about twice the stationary-level value) and also a constant value of TDE for a large time scale, in an undoped BaTiO 3 crystal at room temperature, which can be of practical use.To our knowledge, a report on the observation of such a constant-value TDE for a large time scale has not yet been reported.The present observation can also be explained with our previously proposed model of the charge transport process. Experiment Fig. 1 shows an experimental setup where the interacting beams (i.e., the recording and reading beams) from an Ar + laser were set as in our previous experiments (i.e., degenerate four-wave mixing geometry) [11][12][13].The recording beams and the reading beam are arranged conventionally from an unexpended light beam of a single longitudinal-mode Ar + laser (λ=514.5 nm, beam diameter = 1.2 mm).The polarization of all the interacting beams is extraordinary.An undoped BaTiO 3 crystal (5×5×2 mm 3 , 0 o cut) is used as our sample whose absorption constants are α 0 = 2.79 cm -1 and α e = 2.33 cm -1 at the operating wavelength.The orientation of the crystal and the interacting beams were set so that the angle θ between the grating wave vector K and the crystal c-axis was as large as possible (here θ=16 o ) without using any matching medium, to ensure a large contribution of the electro-optic coefficient r 42 . We first record a steady-state transmission grating by two extraordinarily polarized recording beams while keeping the reading beam switched off.An ordinarily polarized weak He-Ne-laser cw probe beam (λ=632.8nm) enters the crystal simultaneously at the Bragg angle and the diffracted beam from the recorded grating is monitored by a Fig. 1.Four-wave mixing configuration used in the experiment.The two recording and the reading beams (λ=514.5nm)have extraordinary polarization, and the weak probe beam (λ = 632.8nm)has ordinary polarization.photodiode behind the crystal.The intensity of the probe beam is kept less than 100μW/cm 2 in order to avoid its influence on the recorded grating.Fig. 
2 shows the temporal variation of the diffracted probe beam power during the recording and reading process.A pulsed backward-pump beam (λ=514.5 nm and extraordinarily polarized) with a pulse width of ~1 sec and a period of T is then incident at t=0 after switching off the recording beams at t=-20 sec.During the recording process the diffracted beam power reaches to a steady-state value within about 200 sec.When the recording beams are switched off, the diffracted beam power decreases sharply, and thereafter reaches to a nearly steady-state level which remains almost constant for some few seconds.The diffraction efficiency for this level is very small compared to that for a steady-state grating and it becomes smaller slowly as time goes on (i.e., for a larger time scale).entering the crystal at the Bragg angle. Results and Discussion Fig. 3 shows the temporal variation of the diffracted probe beam power for various reading beam powers P b and time periods T (when it is in pulsed mode).As we observed in our previous experiments [11,12] that the TDE or SPCR can be affected by the used values of the interacting beam ratio, r int (= P b /(P f +P p )) and the recording beam ratio, r rec (=P p /P f ), where P i (i=b, f and p) represents the power of each beam denoted in Fig. 1, we have chosen here the values for obtaining maximum diffraction efficiency.For a cw reading beam, the diffracted beam power (solid line) decreases gradually with time due to the erasing effect of the recorded grating by the reading beam.Here the recording beam Fig. 3. Temporal variation of the diffracted probe beam power for cw and pulsed reading beam with powers P b and time periods T. Diffracted beam power (nW) intensity and the reading beam intensity are 7 mW/cm 2 and 44 mW/cm 2 , respectively and r rec = 0.1.For a pulsed reading beam of power 1 mW (for T = 90 and 10 sec.) the diffracted beam power as well as the TDE grows and reaches a peak value at a certain initial pulse of the reading beam (i.e., at second pulse for T = 90 sec (triangle) and at third pulse for T = 10 sec (solid circle)) after switching on at t = 0.After the peak, the TDE decreases gradually with time for further incident pulses of the reading beam.Here the intensity of the reading beam is four times greater than that of the cw reading beam.The observed result for T = 90 sec is similar to our previous experimental works, where the investigations were for maximizing the diffraction efficiency.However, for a pulsed reading beam with the same intensity of the cw reading beam and pulse period of T=10 sec, the TDE reaches a constant value within a few pulses of the reading beam.It is surprisingly observed that in spite of erasing of the recorded grating by the incidence of reading beam pulses, the TDE remains constant for a large time scale (i.e., up to 20 mins.or more).Fig. 4 shows the temporal variation of the diffracted probe beam power for different cw and pulsed reading beam powers P b and time periods T (when it is in pulsed mode).For a cw reading beam the diffracted beam power as well as the diffraction efficiency decreases with increasing the reading beam power P b .For the pulsed reading beams with different pulse periods T (i.e., T=10, 30, 60, 90 and 120 sec.), the diffraction efficiency reaches to a peak then decreases with time (except for the pulsed reading beam with P b = 250μW and T=10sec.).For the pulsed reading beam of power P b = 250μW and of T=10sec. 
the diffraction efficiency remains constant for a large time scale.The observed result of this constant-value TDE can also be explained with our previously proposed charge transport model [11,12].In the model, we consider two shallow centers L 1 and L 2 along with one deep center L D , as shown in Fig. 5.The deep center involves both positive and negative charge carriers, while the shallow centers involve only positive charge carriers.We also assume that the photoexcitation crosssection of L 1 is larger than that of L 2 (i.e., s 1 >s 2 ) and the thermal excitation coefficient of L 2 is larger than that of L 1 (i.e., β 2 > β 1 ).Due to the intensity distribution, charge gratings are formed in both the shallow and deep centers and can reach a steady-state form for a sufficiently long recording time.For the incidence of a reading beam (either cw or pulse), these gratings might be erased slightly or largely depending on its intensity and irradiant duration.During the illumination of a reading pulse, additional holes (as holes are believed to be the dominant charge carriers in BaTiO 3 [10,16,17]) are photoexcited from L D and most of them are accumulated to L 2 through the valence band and L 1 .When the pulse is over, holes are thermally excited from L 2 and accumulated at a large rate to L 1 (since β 2 >β 1 ) and finally to L D .Therefore, the concentration of holes in L 1 increases in the dark (i.e., while the reading beam is in off-state) for a while.If the dark-time is large enough, holes are recombined to L D from L 1 through the valence band.Photoexcitation from L D is also largely dependent on the intensity of the reading beam pulse.A high intensity of the reading beam pulse can photoexcite large charge carriers, however, it erases the recorded grating largely.For the incidence of a reading pulse, an instantaneous grating is formed by the reading and the diffracted grating which is added with the previously recorded grating.Therefore, the grating depth can be maximized when L 1 has maximum charge concentration.For the case of constant value TDE, with a moderate reading beam pulse power as observed in Fig. 3, the erased grating due to the reading pulse is compensated by the instantaneous grating formed by the reading and the diffracted beam pulses, and it is due to this charge transport process that the grating depth remains constant for such a large time scale.For the other cases of reading beam power, the erasing effects are dominant and the TDE decreases with time (except for some initial pulses of the reading beam power of 1mW with pulse period T=90 sec and T=10 sec; and hence the growing of TDE during those pulses are because the resultant grating suppresses the previously present grating before the incidence of pulses).The pulse width used (~1 sec) throughout the experiment is the same one used in our previous experiments. We also observed the same result of constant value of TDE even if the pulsed reading beam is incident at a significantly later time (e.g., after writing a steady-state grating, if the recording beams are switched off at t=- (10)(11)(12)(13)(14)(15)(16)(17)(18)(19)(20) mins. in Fig. 2) after switching off the recording beam.However, the enhancement of TDE is found to slightly vary inversely to the intermediate time between when the recording beams are switched-off and the reading beam is switched-on.For t=-20 sec, TDE increases to about twice the value with respect to that when no reading pulses are incident on the crystal. 
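The qualitative picture can be illustrated with a toy set of rate equations for the hole populations of the three centers. The sketch below is our own; all rates are illustrative placeholders rather than fitted material parameters, and trap-filling effects are ignored:

```python
# Qualitative rate-equation sketch (our own; all rates below are illustrative placeholders) of
# the hole-transport picture in Fig. 5: a deep center L_D and two shallow centers L_1, L_2 with
# photoexcitation cross sections s1 > s2 and thermal excitation rates beta2 > beta1.
s1, s2, sD = 5.0, 1.0, 0.2          # photoexcitation rates per unit reading intensity [1/s]
beta1, beta2 = 0.05, 1.0            # thermal excitation rates, beta2 > beta1 [1/s]
g1, g2, gD = 50.0, 50.0, 5.0        # capture rates of free holes into L_1, L_2, L_D [1/s]

def step(state, intensity, dt):
    nD, n1, n2, p = state           # trap populations and free-hole density (arbitrary units)
    excite = intensity * (sD * nD + s1 * n1 + s2 * n2) + beta1 * n1 + beta2 * n2
    dn1 = g1 * p - (intensity * s1 + beta1) * n1
    dn2 = g2 * p - (intensity * s2 + beta2) * n2
    dnD = gD * p - intensity * sD * nD
    dp = excite - (g1 + g2 + gD) * p
    return [nD + dnD * dt, n1 + dn1 * dt, n2 + dn2 * dt, p + dp * dt]

dt, t_end, period, width = 1e-3, 30.0, 10.0, 1.0     # ~1 s reading pulses every 10 s
state, samples = [1.0, 0.0, 0.0, 0.0], {}
for i in range(int(t_end / dt)):
    t = i * dt
    state = step(state, 1.0 if (t % period) < width else 0.0, dt)
    if i % int(1.0 / dt) == 0:
        samples[int(round(t))] = state[1]            # record the L_1 population once per second

for t in (1, 2, 3, 5, 9):
    print(f"t = {t:2d} s : n(L_1) = {samples[t]:.4f}")
# With these illustrative rates, holes released thermally from L_2 after each pulse are mostly
# recaptured by L_1, so n(L_1) rises for a few seconds in the dark before slowly relaxing back
# toward L_D -- the qualitative behavior invoked above for the constant-value TDE.
```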
Conclusions For a recorded grating, one major problem is the degradation of the recorded grating, and hence of the diffraction efficiency, with time, whether a reading beam is present (during the read-out process) or not. In this paper, a simple method has been reported to obtain an enhanced and constant-value transient diffraction efficiency (from any of its initial values) over a large time scale, in spite of the degradation of the recorded grating during the read-out process. A suitable pulsed backward reading beam with the same properties as the recording beams can give this enhanced, constant-value transient diffraction efficiency from a recorded grating (even at a significantly later time after recording), which could in principle be useful in many different applications such as optical image processing, phase conjugation, and holographic storage.
Fig. 2. Temporal variation of the diffracted beam power of a weak probe beam (λ = 632.8 nm) entering the crystal at the Bragg angle.
Fig. 4. Temporal variation of the diffracted probe beam power for different cw and pulsed reading beam powers P b and time periods T. The curves drawn here are only guides for the eye.
Fig. 5. Charge transport model with deep level L D and two shallow levels L 1 and L 2. Arrows indicate excitation and recombination of charge carriers in the valence and conduction bands.
Explainable COVID-19 Detection Based on Chest X-rays Using an End-to-End RegNet Architecture COVID-19,which is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), is one of the worst pandemics in recent history. The identification of patients suspected to be infected with COVID-19 is becoming crucial to reduce its spread. We aimed to validate and test a deep learning model to detect COVID-19 based on chest X-rays. The recent deep convolutional neural network (CNN) RegNetX032 was adapted for detecting COVID-19 from chest X-ray (CXR) images using polymerase chain reaction (RT-PCR) as a reference. The model was customized and trained on five datasets containing more than 15,000 CXR images (including 4148COVID-19-positive cases) and then tested on 321 images (150 COVID-19-positive) from Montfort Hospital. Twenty percent of the data from the five datasets were used as validation data for hyperparameter optimization. Each CXR image was processed by the model to detect COVID-19. Multi-binary classifications were proposed, such as: COVID-19 vs. normal, COVID-19 + pneumonia vs. normal, and pneumonia vs. normal. The performance results were based on the area under the curve (AUC), sensitivity, and specificity. In addition, an explainability model was developed that demonstrated the high performance and high generalization degree of the proposed model in detecting and highlighting the signs of the disease. The fine-tuned RegNetX032 model achieved an overall accuracy score of 96.0%, with an AUC score of 99.1%. The model showed a superior sensitivity of 98.0% in detecting signs from CXR images of COVID-19 patients, and a specificity of 93.0% in detecting healthy CXR images. A second scenario compared COVID-19 + pneumonia vs. normal (healthy X-ray) patients. The model achieved an overall score of 99.1% (AUC) with a sensitivity of 96.0% and specificity of 93.0% on the Montfort dataset. For the validation set, the model achieved an average accuracy of 98.6%, an AUC score of 98.0%, a sensitivity of 98.0%, and a specificity of 96.0% for detection (COVID-19 patients vs. healthy patients). The second scenario compared COVID-19 + pneumonia vs. normal patients. The model achieved an overall score of 98.8% (AUC) with a sensitivity of 97.0% and a specificity of 96.0%. This robust deep learning model demonstrated excellent performance in detecting COVID-19 from chest X-rays. This model could be used to automate the detection of COVID-19 and improve decision making for patient triage and isolation in hospital settings. This could also be used as a complementary aid for radiologists or clinicians when differentiating to make smart decisions. Introduction Coronavirus disease 2019 (COVID-19) has been responsible for over 670 million cases and over 6.8 million deaths worldwide [1]. Real-time polymerase chain reaction (RT-PCR) is currently the gold standard for detecting and diagnosing severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) [2]. However, RT-PCR testing can still produce false-negative results [3]. Furthermore, the efficiency and timeliness of obtaining valid clinical results have become very important. The volume of patients requires the judicious use of resources while providing quality services and maintaining the safety of patients and healthcare professionals. As one of the most widely used diagnostic tools in medical practice, lung radiography adds undeniable clinical value in the diagnosis of many diseases [4]. 
The advantages of this artificial intelligence (AI)-based approach lie in its low cost; operational simplicity; and availability in a variety of clinical settings, both hospital-and communitybased [4][5][6][7][8]. Although any clinician can obtain a clinical impression from an image of the lungs, radiography results must be validated by a radiologist. Thus, the implementation of this method in a high-volume diagnostic setting may be self-limiting; that is, the speed of validating the results depends on the availability of a radiologist and the volume of images to be reviewed [4][5][6][9][10][11]. Thus, the automatic detection of lung disease by AI is currently a highly valued and frequently evaluated concept in the fields of medical informatics research and radiology [4,12]. Several studies are already available. For the most part, deep learning approaches are applied to chest X-ray (CXR) images to classify COVID-19-infected patients, and the results have been shown to be very good in terms of accuracy (ACC), area under the curve (AUC), sensitivity (SN), and specificity (SP). Related Work Akinyelu et al. [13] introduced deep learning (DL)-based solutions for COVID-19 diagnosis using computerized tomography (CT) scans and various convolutional neural network (CNN) models. The authors used 9000 COVID-19 and 5000 normal images. All the CNN models were pre-trained. The findings showed that NASNetLarge [14], InceptionResNetV2 [15], and DenseNet169 [16] achieved the highest classification accuracy. The accuracy of the three models was 99.8%, 99.7%, and 99.7%, respectively. Khalil et al. [17] presented a pre-trained CNN called EfficientnetB4 [18]. They developed an in-depth training approach to extract the features of COVID-19 after a medical assessment before infection testing. The proposed framework achieved an accuracy of 97.0%. Hasan et al. [19] proposed a CNN called CVR-Net for COVID-19 diagnosis. The proposed end-to-end CVR-Net was an ensemble model with multiple scales and multiple encoders that combined the outputs from two separate encoders and their various scales to represent the final prediction probability. Their approach achieved accuracy scores of 99.8%, 98.4%, and 88.7% in binary classification for three-class and four-class classification. Abdul et al. [20] presented a deep learning multi-layered network to classify CXR images as COVID-19-positive or -negative. The proposed CNN used a dataset of patients infected with Coronavirus, wherein specialists indicated multi-lobar involvements in the CXR images. The authors used a total of 6,500 CXR images for model development. Their CNN model achieved an accuracy of 94.0%. Sahlol et al. [21] created a classification strategy by merging a pre-trained CNN (inception) and swarm-based feature selection method (fractional-order marine predators algorithm) to detect COVID-19 from CXR images. The developed method was assessed on two different datasets acquired from separate sources. Dataset 1 included 1675 non-COVID-19 samples taken from the Kaggle dataset [22] and 200 COVID-19 images acquired by Cohen, Morrison, and Dao [23]. Researchers from the University of Qatar and the University of Dhaka and fellows from Malaysia and Pakistan contributed to dtaset 2 [24]. In dataset 2, which consisted of 219 COVID-19 and 1341 non-COVID-19 CXR images, some positive COVID-19 samples from the SIRM dataset were added. The authors achieved accuracy scores of 98.7% and 98.2% for dataset 1 and dataset 2, respectively. Kumar et al. 
[25] proposed DL network called "LiteCovidNet" to detect COVID-19 cases as the binary class (COVID-19 vs. normal) and the multi-class (COVID-19 vs. normal and pneumonia) using CXR images. Their method achieved an accuracy of 100% and 98.82% for binary and multi-class classification, respectively. Muhammad et al. [26] fine-tuned a pre-trained model with some extra CNN layers (average pooling layer and two dense layers followed by ReLU with a softmax activation function). The authors used CXR images for binary classification (COVID-19 vs. negative). They benchmarked various CNN models such as VGG19 [27], Xception [28], ResNet152 [29], ResNet152v2, ResNet101, ResNet101v2, DenseNet201 [16], DenseNet169, and DenseNet121. Their best model achieved an average accuracy score of 95.0%. Ayalew et al. [30] presented a hybrid approach combining a convolutional neural network (CNN) and a histogram of oriented gradients (HOG) called DCCNet for COVID-19 diagnosis using CXR images. Their hybrid model achieved an accuracy score of 99.67%. Ghose et al. [31] presented transfer learning for COVID-19 detection using CT scans and CXR images. The authors merged CT scans with CXR images to create a global dataset. Their algorithm obtained an accuracy score of 99.59% for CXR and 99.95% for CT scan images. Indumathi et al. [32] presented a method based on a machine learning (ML) algorithm to identify the degree of infection of COVID-19. The ML algorithm classified COVID-19affected regions into various zones such as danger, moderate, and safe zones. Their proposed approach obtained an accuracy score of 98.06%. Salau et al. [33] provided a support vector machine (SVM) algorithm for the identification and classification of COVID-19. The authors used a discrete wavelet transform (DWT) algorithm for feature extraction and SVM for classification. Their method achieved an accuracy score of 98.2%. Frimpong et al. [34] presented an interesting study on COVID-19 detection based on a Wi-Fi-enabled microcontroller, a temperature sensor, and a heart rate sensor. The authors designed a low-cost hardware system for students. The suggested method monitored the student's condition continuously on a mobile application while detecting and differentiating between normal and abnormal body temperatures and regular and irregular heartbeats. Tests over time demonstrated the IoT-enabled system's dependability, responsiveness, and affordability. The microcontroller's intelligent programming and the sensor's operation through the mobile application enabled the low-cost early diagnosis of abnormal temperature and heartbeat anomalies. Lua et al. [35] presented a multi-scale class residual attention (MCRA) network for the multi-class classification of COVID-19, pneumonia, and normal cases using CXR images. The authors used the pixel-level image mixing of local regions for data augmentation and noise reduction. Their experimental results showed that their network achieved an accuracy score of 97.71%. Chouat et al. [36] presented a series of pre-trained DL models, ResNet50, InceptionV3, VGGNet-19, and Xception, for COVID-19 detection on CXR and CT scan images. The authors included a data augmentation technique to increase the size of the dataset. They found that VGGNet-19 outperformed the other three DL models on the CT image dataset, where it achieved an accuracy score of 87.0%. The best model for CXR images was Xception, with an accuracy score of 98.0%. Deriba et al. 
[37] presented three ML algorithms, naïve Bayes (NB), artificial neural network (ANN), and SVM, for COVID-19 detection. The authors used 311 patients' data, comprising 214 males and 96 females. The model was tested using n = 10 input variables. The results demonstrated that the SVM algorithm achieved an accuracy score of 91.3%, and the other two methods provided an accuracy of 87.75% and 96.05%, respectively. A similar study presented by Wubineh et al. [38] for COVID-19 detection used a dataset of 1,048,575 variables obtained from Kaggle for model development. The authors employed a method called the PART rule-based algorithm and achieved an accuracy score of 92.47% using a 10-fold cross-validation test. In this study, a CNN algorithm for COVID-19 detection was developed. A preliminary internal validation was carried out with a balanced cohort of patients from Italy, i.e., patients with an official diagnosis of COVID-19 and others with a negative or different diagnosis of COVID-19. The anonymized images of this cohort were obtained from the "Società Italiana di Radiologi Medica e Interventistica" [39]. The results of this first study showed a sensitivity of 98% and a specificity of 97%. However, the internal validation was carried out at a small scale, and the continuity of the model training on a larger scale had to be ensured as a validation process for its eventual clinical use. Thus, the objective on this study was to validate and test this deep learning model on confirmed cases to detect COVID-19 from chest X-ray (CXR) images. We aimed to make the following contributions: 1. A state-of-the-art pre-trained CNN model called RegNetX032 was fine-tuned for multi-binary classification (COVID-19 vs. normal, COVID-19 + pneumonia vs. normal, and pneumonia vs. normal. Such a model has not yet been proposed in a medical imaging classification study. Our study investigated the performance of this fine-tuned RegNet model for COVID-19 detection. 2. We used various datasets, which differed in terms of resolution quality, to validate the performance of the model and its degree of generalization. 3. We tested the performance and the degree of generalization of the model using a private dataset. 4. An explainability model was integrated to localize the signs of the disease and provide decision support. The paper is structured as follows: Section 1 introduces the COVID-19 pandemic, and Section 2 presents related work. Section 3 discusses the methods and presents the proposed deep learning model and the datasets used in this study. Section 4 presents the experimental results. Section 5 describes the model's explainability. Section 6 discusses the model's limitations. Section 7 presents a discussion and conclusions. Deep Learning Model For COVID-19 detection, we fine-tuned the recent convolutional neural (CNN) network called RegNetX032 [40]. Convolutional neural network architectures have often been created and optimized for a single objective. For instance, at the time of its original release, the ResNet [29] model family was tuned for accurate results on ImageNet [41]. MobileNets [42] were designed specifically to perform on mobile devices, as the name suggests. EfficientNet [43] was developed to be highly effective in visual recognition tasks. Radosavovic et al. [40] decided to set a very unusual but extremely interesting goal in their study "Designing Network Design Spaces". 
The authors set out to investigate and develop a highly flexible network architecture that was customizable for the best classification performance, could be developed to run on mobile devices or be extremely effective, and was also highly accurate. Setting the proper parameters in a quantized linear function, which is a sequence of formulas with specified parameters to determine a network's width and depth, was thought to be able to manage this adaptation. They also used a novel method, exploring a space of network designs ("network design spaces") rather than manually creating the model architecture. Deriving the RegNet Model from Network Design Spaces A network design space is made up of various model architectures, as the name might imply, but it also exposes various parameters that create a space of alternative model designs. This is not like a neural architecture search, wherein the developers experiment with several structures to find the best one, adjusting, for example, the network's width, depth, or groups. RegNet [40] also employed only one type of network block across the several architectures, i.e., the bottleneck block. The authors first created a space for all practical models, which they referred to as "AnyNet", before reaching the final RegNet design space. This step generated a large variety of models from a large variety of combinations of the different parameters. On the ImageNet dataset, all these models were trained and tested using a standard training phase (epochs, optimizer, weight decay, and learning rate scheduler). By examining the parameters that contributed to the improved performance of the best models in the AnyNet design space, they developed gradually smaller iterations of the original AnyNet design space. In general, they tested the weighting factors of several parameters to reduce the design space to only the best models. Setting a shared bottleneck ratio and a shared group width, as well as parameterizing the width and depth to increase in the later stages, were some of the enhancements applied from the existing design space to the tighter design space. They finally reached the optimized RegNet design space, which only contained the best models and the quantized linear function required to specify them. The RegNet Design Space The network was constructed of several stages consisting of multiple blocks, forming a stem (start), body (main part), and head (end). There were different stages specified inside the body, and each stage was made up of different blocks. As previously mentioned, the standard residual bottleneck block with group convolution was the only type of block used in RegNet. The RegNet model's architecture was determined by a quantized linear function that was controlled by the selected parameters rather than by fixed parameters such as depth and width. After optimization, the per-block widths were determined by the linear rule u_j = w_0 + w_a * j for each block index j, so the width increases by w_a for each additional block, starting from an initial width w_0 (set by the user). The authors then introduced an additional width multiplier w_m and calculated s_j such that u_j = w_0 * w_m^(s_j). Finally, the authors rounded s_j and computed the quantized per-block widths w_j = w_0 * w_m^(round(s_j)) in order to quantize u_j. All blocks with the same width were simply grouped together to form one stage; the number of such blocks determines the depth of stage i, since all blocks within a stage share the same width.
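To make the quantized linear rule just described concrete, the following Python sketch reproduces the u_j, s_j and w_j computation and the grouping of equal-width blocks into stages. The parameter values in the example are made up for illustration only and are not those of RegNetX032; rounding widths to multiples of 8 is an assumption based on common practice in this family of models.

```python
import numpy as np

def regnet_widths(d, w0, wa, wm):
    """Sketch of the RegNet quantized linear width rule.

    d  : total number of blocks (depth)
    w0 : initial width
    wa : slope (width added per block before quantization)
    wm : width multiplier used for quantization
    """
    j = np.arange(d)
    u = w0 + wa * j                        # continuous per-block widths: u_j = w0 + wa * j
    s = np.log(u / w0) / np.log(wm)        # exponents s_j solving u_j = w0 * wm**s_j
    w = w0 * np.power(wm, np.round(s))     # quantized per-block widths
    w = (np.round(w / 8) * 8).astype(int)  # round to multiples of 8 (assumed convention)
    # Blocks sharing a width are grouped into one stage; the count is the stage depth.
    stage_widths, stage_depths = np.unique(w, return_counts=True)
    return stage_widths.tolist(), stage_depths.tolist()

# Example with invented parameters (illustration only):
print(regnet_widths(d=13, w0=24, wa=36.0, wm=2.5))
```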
The authors set the parameters d (depth), w 0 (initial width), w a (slope), w m (width parameter), b (bottleneck), and g (group) in order to generate a RegNet from the RegNet design space. The authors altered these settings in order to create various RegNets with diverse characteristics. In this study, we used RegNetX032, which represents 3.2 billion flops. The reasons for choosing this version were that it is fast in terms of convergence and obtained a high accuracy score of 94% on the Imagenet [41] dataset. For each binary classification, we customized the pre-trained model by adding global average pooling, followed by batch normalization and two dense layers of sizes 512 and 128, respectively. To reduce overfitting, each dense layer was followed by a dropout layer (25%). Finally, a softmax layer provided the probability prediction scores for the multi-binary classification: (1) COVID-19 positive vs. normal cases; (2) COVID-19 + pneumonia vs normal cases; and (3) pneumonia vs. normal cases. Figure 1 provides an overview of our approach. NIH Dataset For the pneumonia and healthy classes, we used the NIH [44] chest X-ray dataset, comprising 112,120 CXR images with disease labels from 30,805 unique patients. This dataset was obtained from the National Institute of Health (USA). There were 15 classes in the dataset (14 diseases, and one class for "healthy"). Infiltration, edema, atelectasis, pneumothorax, consolidation, emphysema, effusion, fibrosis, pneumonia, cardiomegaly, pleural thickening, mass, nodule, and hernia were some of the available disease images. Expert physicians assigned grades to the CXR images. We reserved 6000 CXR images from the healthy category and 4852 for the other pneumonia cases (pneumothorax, effusion, etc.). We obtained a total of 10,852 images for the training and validation sets. Figure 2 shows example NIH CXR images. COVID-19 Image Data Collection Cohen et al. [23] released an open dataset of CXR and CT scan images of patients who were positive for COVID-19 and other viral/bacterial forms of pneumonia (MERS, SARS, and ARDS). The data were mainly scraped from online medical websites collecting released COVID-19 images from hospitals and physicians. The dataset contained 654 COVID-19 CXR images, and its objective was to develop AI-based approaches to predict and understand the infection. Figure 3 shows examples from the COVID-19 image data collection dataset. COVID-19 Radiography The database in [45] contains 219 CXR COVID-19-positive images collected by a team of researchers from the University of Qatar (Doha, Qatar) and the University of Dhaka (Bangladesh) and their Pakistani and Malaysian collaborators with the aid of various medical doctors, who created a CXR image database for positive cases of COVID-19. Figure 4 shows examples from the COVID-19 radiography dataset. BIMCV COVID19+ The BIMCV COVID19+ [46] dataset is a broad dataset of COVID-19 patients' CXR and computed tomography (CT) images together with their radiographic observations, pathologies, polymerase chain reaction (PCR) test results, diagnostic antibody tests for immunoglobulin G (IgG) and immunoglobulin M (IgM), and radiographic records from the Medical Imaging Databank in the Valencia Area Medical Imaging Bank (BIMCV). The images were collected by a team of specialist radiologists in high resolution and annotated. In addition, comprehensive information was provided, including demographic information for the patient, projection type (PA-AP), and criteria of acquisition for imaging analysis. 
This database included 1380 CX (computed radiography), 885 DX (digital radiography), and 163 CT images. The images were merged into a single dataset with a total of 4148 COVID-19-positive images and 10,852 images of healthy patients and pneumonia cases, providing a total of 15,000 CXR images. Figure 5 shows examples from the BIMCV COVID19+ dataset. Montfort Dataset In addition to the above datasets, we collected more images in collaboration with health professionals from Montfort hospital (Ontario, Canada) and built the Montfort dataset for the testing phase. This proprietary dataset included 176 adults (18 years of age and older) with a total of 236 CXR images. Of these, 93 patients (150 CXR images) were COVID-19-positive, as confirmed by positive RT-PCR test results and/or diagnosis by a physician for COVID-19. Added to the dataset were 26 patients with pneumonia (other than COVID-19, 29 CXR images) and 57 patients with healthy lungs (57 CXR images). These patients were labeled using radiology reports and RT-PCR tests. Figure 6 shows examples from the Montfort dataset. The model was trained with the optimizer described in [49] and a learning rate of 1 × 10^−3, which was further reduced when the validation accuracy did not improve over three consecutive epochs. We did not apply augmentation techniques, and CXR images were resized to 512 × 512. Results The performance of the model was calculated using the accuracy score and the receiver operating characteristic curve (ROC). The area under the ROC curve (AUC) was used as the measure of diagnostic accuracy for the model. A 0.5 threshold was used to validate the detection of a specific class. Furthermore, using the RT-PCR results as a reference for COVID-19 cases and radiology reports for pneumonia (other than COVID-19) and healthy cases, sensitivity and specificity were calculated. These measures were calculated as sensitivity = TP/(TP + FN) and specificity = TN/(TN + FP), where TP is the number of true positives, i.e., positive cases that were correctly labeled; TN is the number of true negatives, i.e., negative cases that were correctly labeled; FP is the number of false positives, i.e., negative cases that were incorrectly labeled as positive; and FN is the number of false negatives, i.e., positive cases that were incorrectly labeled as negative. Three model scenarios were created comparing different conditions: scenario (1)-COVID-19 positive vs. healthy cases; scenario (2)-COVID-19 + pneumonia vs. healthy cases; and scenario (3)-pneumonia vs. healthy cases. The accuracy of the validation set for each model scenario was found to be 98.6%, 97.3%, and 95.0% for scenario 1, scenario 2, and scenario 3, respectively. Regarding the AUC scores, the models obtained values of 98.0% (scenario 1), 98.0% (scenario 2), and 97.0% (scenario 3). A value of 1.00 indicates a perfect COVID-19 and/or pneumonia test, and 0.50 (as plotted by the straight line of no discrimination) represents a diagnostic test that is no better than random chance. On the Montfort test set, the AUC for the model scenarios showed better results, with values of 99.1% (scenario 1), 99.1% (scenario 2), and 99.4% (scenario 3). The accuracy scores were found to be 96.0%, 95.3%, and 96.4% for scenario 1, scenario 2, and scenario 3, respectively. Confusion matrices were constructed to summarize the binary classification performance of the model with the sensitivity and specificity (Figure 7) for the testing phases. The validation phase showed excellent sensitivity and specificity results for all three scenarios, ranging between 95.0% and 98.0% (Table 1).
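For reference, the sensitivity, specificity and AUC defined above can be computed directly from model predictions. The following scikit-learn sketch uses made-up labels and scores purely to illustrate the calculation at the 0.5 decision threshold used in this study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Hypothetical ground-truth labels (1 = COVID-19 positive) and model scores.
y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.92, 0.10, 0.75, 0.40, 0.55, 0.05, 0.88, 0.30])

y_pred = (y_score >= 0.5).astype(int)               # 0.5 decision threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)                         # TP / (TP + FN)
specificity = tn / (tn + fp)                         # TN / (TN + FP)
auc = roc_auc_score(y_true, y_score)                 # area under the ROC curve

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} AUC={auc:.2f}")
```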
The testing phase showed close results to the validation phase with sensitivity and specificity ranging between 90.0% and 98.0% (Table 1). Table 2 presents a comparison with machine learning and deep learning methods for COVID-19 detection. As one can see, our approach obtained the best scores compared to most of the studies presented. Our model's score was very close to that of Ayalew et al. [30]. In their study, the validation and test phases were taken from the same dataset, and data augmentation was applied. This could have provided biased results due to the similarity of the images from the training and testing sets. Moreover, the hybrid architecture could have increased the complexity of training compared to using a single model for feature extraction and classification. In addition, the authors combined feature extraction, detection, and segmentation from multiple models, which could have also created a delay in image inference. In the stuyd of Ghose et al. [31], the details of the dataset division were not provided, and no explainability model was developed in order to visualize the detected signs. We also note that no study has validated the performance of its model on an independent dataset to test the degree of generalization and prevent bias. Most studies have tested their model on a test set that was reserved from the global dataset. This confirms that our model was robust in terms of detecting COVID-19 using an independent and unique dataset. The proposed model improved upon our previous model [50]. The model was based on EfficientNet-B0 and obtained an AUC of 95.0%, an SP of 90.0%, and an SN of 97.0%. This indicated that the current proposed model was robust and able to detect COVID-19. Model Explainability To confirm how the model learned to detect COVID-19 signs, we developed an explainability model based on gradient-weighted class activation mapping (Grad-CAM) [51]. This approach was used to generate a visual description of the outcomes of the proposed CNN models. Grad-CAM uses any target's gradients flowing into the final convolutional layer to generate a coarse map of localization highlighting important regions in the predictive image. Grad-CAM was applicable to our proposed CNN model without any architectural changes or re-training. The proposed technique combined Grad-CAM with fine-grained visualizations to create a high-resolution class-discriminative visualization. Figure 8 shows samples of true-positive cases of COVID-19 detected with our fine-tuned model. As one can see, the model efficiently localized the infected area on the lung. Figure 9 presents some of false-positive CXR images. The low quality and the text on the radiography images confused the model when localizing the important areas of the disease on the lung. Model Limitations Despite the results obtained from the proposed model, we found that the model provided some false-positive detection results. This was due to the poor quality of some images from the Montfort dataset. For example, as shown in Figure 9 rows 1 and 2, the model detected a normal case as COVID-19-positive, and in row 3, a normal case as pneumonia. The artefact and the noise created an obstacle for the model, which interpreted them as signs of pneumonia. In future work, we will test different strategies to improve the quality of the images. Discussion and Conclusions Our study demonstrated that transfer learning can be effective in detecting COVID-19 using CXR images. 
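Before turning to the discussion, the Grad-CAM computation described in the Model Explainability section (gradients of the target class score flowing into the final convolutional layer, pooled into channel weights and combined into a coarse localization map) can be sketched as follows. This TensorFlow snippet is an illustration, not the exact implementation used in the study; the convolutional layer name and class index are placeholders that depend on the model.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, class_index):
    """Minimal Grad-CAM heat map for one image array of shape (H, W, C)."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)           # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))            # global-average-pooled gradients
    cam = tf.reduce_sum(weights[:, tf.newaxis, tf.newaxis, :] * conv_out, axis=-1)
    cam = tf.nn.relu(cam)[0]                                 # keep only positive influence
    cam = cam / (tf.reduce_max(cam) + 1e-8)                  # normalize to [0, 1]
    return cam.numpy()                                       # upsample to image size for overlay
```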
Our pre-trained ImageNet model achieved a high sensitivity of 98.0% in detecting COVID-19-positive patients compared to healthy ones, and it demonstrated stateof-the-art performance in all measures discussed. This high performance ensured accurate diagnosis in most cases, even with limited data, which is typical in real-world situations. We also used the Grad-CAM visualization technique to make the proposed deep learning model more interpretable and explainable, which validated its performance and aided in the development of novel visual indicators for manual screening. However, there are still several research questions that need to be addressed. For instance, we need to focus on determining the severity of COVID-19 and developing robust models that can extract more features from CXR images to improve detection performance. Additionally, explanatory analyses could help us gain more insight into the mechanisms behind COVID-19 detection. Furthermore, it would be interesting to investigate whether our model could be applied to other respiratory diseases and explore the potential of transfer learning in diagnosing such diseases. Overall, our study provides a solid foundation for future research in this field. In conclusion, our study demonstrated that our algorithm, validated using CXR images from a large dataset with varying image quality and from different healthcare systems around the world, could provide greater imaging insights and a quantifiable probability of COVID-19 diagnosis compared to other respiratory diagnoses. The high performance of our algorithm could be useful in triaging patients for isolation in a timely manner and improving patient flow while waiting for other gold-standard testing results. The explainability of the images provides crucial information to assess lung damage and valuable insight for timely treatment and intervention. Our model could serve as a complementary aid in helping radiologists perform diagnoses and could potentially automate radiology services with AI-powered decision support tools. In the future, further research can focus on developing more robust models that can extract more features from CXR images to improve the performance of detection and investigate the application of transfer learning in diagnosing other respiratory diseases. Funding: This work was partially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) Alliance Grants (ALLRP 552039-20); New Brunswick Innovation Foundation (NBIF) COVID-19 Research Fund (COV2020-042); and the Atlantic Canada Opportunities Agency (ACOA) Regional Economic Growth through Innovation-Business Scale-Up and Productivity (project 217148). Institut du Savoir Montfort, general funds 2022/23. Institutional Review Board Statement: The IRB of Université de Moncton waived the approval requirements, since the data used in this work were anonymized. Informed Consent Statement: Not applicable. Data Availability Statement: The data used in this work originated mainly from public datasets and a private dataset. Please see the section describing the datasets. Conflicts of Interest: The authors declare no conflict of interest.
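To complement the description in the Deep Learning Model section, the following Keras-style sketch stacks the stated head (global average pooling, batch normalization, dense layers of 512 and 128 units each followed by 25% dropout, and a softmax output) on top of a pre-trained RegNetX032 backbone. The ReLU activations, the Adam optimizer, and the availability of RegNetX032 in keras.applications (present in recent TensorFlow releases) are assumptions rather than details reported in the study.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(input_shape=(512, 512, 3), num_classes=2):
    # Pre-trained RegNetX032 backbone without its ImageNet classification head.
    backbone = tf.keras.applications.RegNetX032(
        include_top=False, weights="imagenet", input_shape=input_shape)

    inputs = layers.Input(shape=input_shape)
    x = backbone(inputs)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.BatchNormalization()(x)
    x = layers.Dense(512, activation="relu")(x)   # activation assumed, not stated in the text
    x = layers.Dropout(0.25)(x)
    x = layers.Dense(128, activation="relu")(x)
    x = layers.Dropout(0.25)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_classifier()
# Learning rate follows the text; the choice of Adam is an assumption.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```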
6,575.8
2023-06-01T00:00:00.000
[ "Medicine", "Computer Science" ]
Human Thelaziasis, Europe Thelazia callipaeda eyeworm is a nematode transmitted by drosophilid flies to carnivores in Europe. It has also been reported in the Far East in humans. We report T. callipaeda infection in 4 human patients in Italy and France. Nematodes transmitted by arthropods may cause diseases of different severity, especially in developing countries (1). Among these nematodes, Thelazia callipaeda Railliet and Henry, 1910 (Spirurida, Thelaziidae) has received little attention. Commonly referred to as eyeworm, it infects orbital cavities and associated tissues of humans, carnivores (i.e., dogs, cats, and foxes), and rabbits (2). Because of its distribution in the former Soviet Union and in countries in the Far East, including the People's Republic of China, South Korea, Japan, Indonesia, Thailand, Taiwan, and India (2) it has been known as oriental eye worm. T. callipaeda infection is endemic in poor communities in Asia, particularly in China (3), where it is frequently reported as being responsible for human thelaziasis with mild to severe symptoms (including lacrimation, epiphora, conjunctivitis, keratitis, and corneal ulcers) (4). A second species, T. californiensis Price, 1930, has been reported to infect humans in the United States (2). Infective third-stage larvae of eyeworm are transmitted by insects that feed on lacrimal secretions of infected animals and humans that contain Thelazia spp. first-stage larvae. In the vector, T. callipaeda first-stage larvae undergo 3 molts (≈14-21 days), and infective third-stage larvae may be transmitted to a new receptive host and develop into the adult stage in ocular cavities within ≈35 days (5). Competence of drosophilid flies of the genus Phortica as vectors of T. callipaeda has been recently demonstrated (6)(7)(8). Ocular infection of carnivores by T. callipaeda has been reported in France (9). This infection is also common in dogs (Figure 1) and cats in Italy (10). Imported carnivore cases of thelaziasis have also been reported in Germany, the Netherlands, and Switzerland (11). The number of case reports of human thelaziasis has increased in several areas of Asia (3), where it occurs predominantly in rural communities with poor living and socioeconomic standards and mainly affects the elderly and children. In spite of increasing reports of T. callipaeda infection in carnivores in different European countries, no human cases have been described. Thus, infection with this eyeworm is unknown to most physicians and ophthalmologists. We report autochthonous cases of human thelaziasis in Europe. We sought to raise awareness in the scientific community of the risk for disease caused by this parasite and the need to include this infection in the differential diagnosis of ocular diseases. The Study From June 2005 through August 2006, a total of 4 patients with human thelaziasis were referred to the Department of Emergency and Admissions at Croce and Carle Hospital in Cuneo, Italy, for consultation. The 4 male patients (age range 37-65 years) lived in northwestern Italy (43°N, 6°E) and southeastern France (46°N, 9°E), where infections had been reported in dogs, cats, and foxes (9,12). All patients had similar symptoms (exudative conjunctivitis, lacrimation, and foreign body sensation) for a few days to weeks before referral (Table). All patients required medical attention during the summer (June-August 2005 and 2006) and reported floating filaments on the eye surface. A medical history was obtained for 3 of the patients.
The other patient (patient 2, a homeless man) was referred to a physician at the local social services in Nice, France, for severe mental disorders, poor hygiene, and diabetes (Table). Infections in patient 2 were diagnosed 1 month apart in each eye (June and July 2005; referred to as patient 2a and 2b). None of the patients had had any eye disease or had traveled outside their area of residence, with the exception of patient 1 who had gone trekking in the woods in Tenda (Piedmont region, Italy) ≈3 weeks before the onset of symptoms. Eye examinations showed thin, white nematode(s) on the conjunctival fornix of the affected eye. Nematodes were removed with a forceps after local anesthesia (1% novocaine) was administered. The nematodes were stored in 70% ethanol until they were morphologically identified and analyzed. After the parasites were removed from the eyes, antimicrobial eye drops were prescribed for ≈7 days. Ocular symptoms disappeared within 2-3 days. Collected nematodes were identified based on morphologic keys (13,14). T. callipaeda nematodes have a serrated cuticle (Figure 2, panel A), buccal capsule, mouth opening with a hexagonal profile, and 6 festoons. Adult females are characterized by the position of the vulva, located anterior to the esophagus-intestinal junction, whereas males have 5 pairs of postcloacal papillae. To confirm morphologic identification, specimens from patients 2 and 4 were analyzed as previously described (11). Genomic DNA was isolated from each nematode, and a partial sequence of the mitochondrial cytochrome c oxidase subunit 1 (cox1, 689 bp) gene was amplified by PCR. Amplicons were purified by using Ultrafree-DA columns (Amicon; Millipore, Bedford, MA, USA) and sequenced by using an ABI-PRISM 377 system and a Taq DyeDeoxyTerminator Cycle Sequencing Kit (Applied Biosystems, Foster City, CA, USA). Sequences were determined in both directions and aligned by using the ClustalX program (15). Alignments were verified visually and compared with sequences available for the cox1 gene of T. callipaeda (GenBank accession nos. AM042549-556). A total of 6 adult nematodes were morphologically identified as T. callipaeda (Table). A mature female nematode (patient 4) had embryonated eggs in the proximal uterus and larvae in the distal uterus (Figure 2, panel B). This suggested that a male worm was present, which had been rubbed out of the eye before symptoms occurred or had remained undetected. Sequences obtained from nematodes were identical to the sequence of haplotype 1 of T. callipaeda (GenBank accession no. AM042549) (11). Conclusions We report human infection by T. callipaeda in Italy and France in the same area where canine thelaziasis had been reported. These infections highlight the importance of including this arthropod-borne disease in the differential diagnoses of bacterial or allergic conjunctivitis. All cases of human thelaziasis were reported during the summer months (June-August), which is the period of T. callipaeda vector activity (late spring to fall in southern Europe) (7). The seasonality of human thelaziasis may impair correct etiologic diagnosis of this disease because spring and summer are the seasons in which allergic conjunctivitis (e.g., by pollens) occurs most frequently. This finding is particularly important when infections are caused by small larval stages that are difficult to detect and identify.
Furthermore, clinical diagnosis of human thelaziasis is difficult if only small numbers of nematodes are present because clinical signs related to an inflammatory response mimic allergic conjunctivitis, especially when they are associated with developing third- or fourth-stage larvae. Untimely or incorrect treatment of the infection may result in a delay in recovery, mainly in children and the elderly, who are most likely to be exposed to infection by the fly. Although treatment for canine infection with T. callipaeda with topical organophosphates, 1% moxidectin, or a formulation containing 10% imidacloprid and 2.5% moxidectin is effective, mechanical removal of parasites in humans remains the only curative option (3). Thus, prevention of human thelaziasis should include control of the fly vector by use of bed nets to protect children while they are sleeping and by keeping their faces and eyes clean. Genetic identification of haplotype 1 has shown that this is the only haplotype circulating in animals (i.e., dogs, cats, and foxes) in Europe (11). This finding confirms the metazoonotic potential of Thelazia spp. infection and the need to treat infected domestic animals, which may act as reservoirs for human infection.
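As a minimal illustration of the kind of sequence comparison used to assign the cox1 amplicons to haplotype 1 (the study used ClustalX alignments against GenBank entries AM042549-556), the following Python sketch computes percent identity between two pre-aligned sequences. The 30-bp fragments shown are invented and are not real cox1 data.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity between two equal-length, pre-aligned sequences (gaps as '-')."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned to the same length"
    matches = sum(1 for a, b in zip(seq_a, seq_b) if a == b and a != '-')
    compared = sum(1 for a, b in zip(seq_a, seq_b) if a != '-' and b != '-')
    return 100.0 * matches / compared if compared else 0.0

# Toy example with made-up 30-bp fragments (not real cox1 data):
query = "ATGCTTGGTTTAGTTGGATGATTAATTCCT"
ref   = "ATGCTTGGTTTAGTTGGATGATTAATTCCA"
print(f"{percent_identity(query, ref):.1f}% identity")   # 96.7%
```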
1,786.4
2008-04-01T00:00:00.000
[ "Biology", "Medicine" ]
Optimization of T-DNA architecture for Cas9-mediated mutagenesis in Arabidopsis Bacterial CRISPR systems have been widely adopted to create operator-specified site-specific nucleases. Such nuclease action commonly results in loss-of-function alleles, facilitating functional analysis of genes and gene families We conducted a systematic comparison of components and T-DNA architectures for CRISPR-mediated gene editing in Arabidopsis, testing multiple promoters, terminators, sgRNA backbones and Cas9 alleles. We identified a T-DNA architecture that usually results in stable (i.e. homozygous) mutations in the first generation after transformation. Notably, the transcription of sgRNA and Cas9 in head-to-head divergent orientation usually resulted in highly active lines. Our Arabidopsis data may prove useful for optimization of CRISPR methods in other plants. Introduction CRISPR (clustered regularly interspaced short palindromic repeat)-Cas (CRISPR associated) site-specific nucleases evolved as components of prokaryotic immunity against viruses, and are widely deployed as tools to impose operator-specified nucleotide sequence changes in genomes of interest [1][2][3][4]. During infection by bacteriophages, Cas1 and Cas2 can integrate phage DNA sequences into 'spacer' regions of tandem CRISPR loci in the bacterial genome. The crRNA (CRISPR-RNA) transcription product of the spacer associates with nucleases from the Cas family to form ribonucleoproteins that can cleave nucleic acid sequences homologous to the spacer. This enables elimination of viral nucleic acid upon subsequent infection. CRISPR systems are divided in two classes [5,6]. Class 1 systems comprise multi-subunit complexes whereas Class 2 systems function with single ribonucleoproteins. Within Class 2, Type-II and Type-V cleave dsDNA (double-stranded DNA) via Cas9 and Cas12/Cpf1 respectively, while Type-VI cleaves ssRNA (single-stranded RNA) via Cas13/C2c2. Golden Gate cloning enables facile assembly of diverse Cas9 T-DNA architectures In Golden Gate modular cloning, the promoter, reading frame and 3' end modules at 'Level 0', are assembled using Type IIS restriction enzymes to 'Level 1' complete genes, that can then be easily combined into T-DNAs carrying multiple genes at 'Level 2'. This enables facile assembly of diverse T-DNA conformations [22,23]. Level 0 acceptor vectors are designed to clone promoter, coding sequence (CDS) or terminator fragments (see Materials and methods). For our purpose, we used three Level 1 vectors: a glufosinate plant selectable marker in position 1 (pICSL11017, cloned into pICH47732), a Cas9 expression cassette in position 2 (cloned into pICH47742) and a sgRNA expression cassette in position 3 (cloned into pICH47751) (Fig 1). Some Cas9 expression cassettes were cloned into a Level 1 position 2 variant: pICH47811. This vector can be assembled in Level 2 in the same fashion as pICH47742, but it enables Cas9 transcription in the opposite direction as compared to the other Level 1 modules. We assembled 25 different Level 1 Cas9 constructs and four sgRNA expression cassettes. The sequence targeted by the sgRNA was CGTATCTTCGGCCATGAAGC (NGG) (Protospacer Adjacent Motif indicated in italics) which targets specifically ADH1 in Col-0, enabling pre-selection of CRISPR-induced adh1 mutants by selecting with allyl alcohol [13]. Assembly of these Level 1 modules resulted in 39 Level 2 T-DNA vectors (S1 Table). More details of the assembly protocols can be found in the 'Materials and Methods' section. 
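A small sketch may help illustrate how a spacer such as the ADH1-specific CGTATCTTCGGCCATGAAGC is paired with an NGG protospacer adjacent motif: the following Python function scans the forward strand of a sequence for 20-nt protospacers followed by NGG. The flanking bases in the toy fragment are made up, and this snippet is not part of the published protocol.

```python
import re

def find_cas9_targets(sequence: str, spacer_len: int = 20):
    """Return (position, spacer, PAM) for every NGG protospacer on the forward strand."""
    sequence = sequence.upper()
    pattern = r"(?=([ACGT]{%d})([ACGT]GG))" % spacer_len
    return [(m.start(), m.group(1), m.group(2)) for m in re.finditer(pattern, sequence)]

# Toy fragment containing the ADH1 spacer used in the study (flanking bases invented):
fragment = "TTAC" + "CGTATCTTCGGCCATGAAGC" + "CGG" + "ATTA"
for pos, spacer, pam in find_cas9_targets(fragment):
    print(pos, spacer, pam)
```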
CRISPR-induced Arabidopsis mutations can be selected using allyl-alcohol The 39 Level 2 plasmids were transformed in A. tumefaciens strain GV3101 and used to generate Arabidopsis Col-0 transgenic lines. 'T1' refers to independent primary transformants selected from the seeds of the dipped plant; 'T2' refers to the T1 progeny. For each of the 39 constructs, about 100 T2 progenies from six independent T1 lines were screened for allyl alcohol resistance (Fig 2). T2 seeds were selected with 30 mM allyl-alcohol for two hours. Six survivors (or all survivors if there were fewer than six) were screened by PCR amplification and capillary sequencing to confirm the mutation in ADH1 at the expected target site. This genotyping step enabled us to estimate the percentage of non-mutated plants that escape the allyl-alcohol selection. We indeed identified some lines surviving the allyl-alcohol screen that are heterozygous (ADH1/adh1). CRISPR activity is expressed as [(number of allyl-alcohol surviving plants) x (% of homozygous or biallelic mutants confirmed by sequencing among the surviving plants tested) / (number of seeds sown)]. It was measured for six independent T2 families, for each of 39 constructs. When more than 75% of the lines survived the allyl-alcohol treatment and all the lines genotyped are knock-out (KO) alleles with the exact same mutation within one T2 family, we assumed that the T1 parent was a homozygous mutant. Such T2 families are indicated in red. The 35S promoter drives expression in most tissues [24]. We compared the 35S and Arabidopsis UBI10 promoters. More mutants were recovered using the UBI10 promoter, suggesting it is more active than 35S in the germline (Fig 3A). Following this observation, we tested other germline-expressed promoters. In the combinations we tested, we detected low CRISPR activity using the meiosis I-specific promoter MGE1 [26] (Fig 3C), the homeotic gene promoter AG [27] (Fig 3D) and the DNA polymerase subunit-encoding gene promoter ICU2 [28] (Fig 3D). They were tested with constructs inducing an overall low activity and we do not exclude that they can perform efficiently in other conditions. In one context specifically, the ICU2 promoter resulted in moderate activity in four of the six T2 families tested, while only one T2 family showed activity with the UBI10 promoter (Fig 3E). EC1.2 and an EC1.2::EC1.1 fusion (referred to as 'EC enhanced' or 'ECenh') are specifically expressed in the egg cell and were reported to trigger elevated mutation rates with CRISPR in Arabidopsis [17]. In our Golden Gate compatible system, only ECenh induced homozygous mutants in T1, and only at low frequency (Fig 3B and 3G). In one comparison, EC1.2 and ECenh performed slightly better than pUBI10 (Fig 3D), but in another, they induced lower activity (Fig 3E). A promoter from Cassava Vein Mosaic Virus (CsVMV) was reported to mediate CRISPR activity in Brassica oleracea [29]. We found that it induced more CRISPR activity than pUBI10 in two combinations tested (Fig 3D and 3E). We also tested the YAO and RPS5a promoters. Both of them were reported to boost CRISPR activity in Arabidopsis [15,16]. Both triggered elevated mutation rates compared with the UBI10 promoter (Fig 3F). In one comparison, pRPS5a performed slightly better (Fig 3G), but in another, pYAO performed better (Fig 3H). Codon optimization of Cas9 and presence of an intron elevate mutation rates The activity of different constructs with the same promoter can be very different.
For instance, pRPS5a:Cas9 and pYAO:Cas9 lines were recovered that displayed either high or low activity (Fig 3F and 3H). The most active constructs carried Cas9_3 or Cas9_4 alleles. We thus compared four Cas9 alleles side-by-side (Fig 4). Cas9_1 is a human codon-optimized version with a single C-terminal Nuclear Localization Signal (NLS) [3]. Cas9_2 is an Arabidopsis codon-optimized version with a single C-terminal NLS [13]. Cas9_3 is a plant codon-optimized version with both N- and C-terminal NLSs, an N-terminal FLAG tag and a potato intron IV [25]. Cas9_4 is a human codon-optimized version with both N- and C-terminal NLSs and an N-terminal FLAG tag [10]. We found that in comparable constructs, Cas9_2 performs better than Cas9_1 (Fig 4E to 4H), consistent with the fact that Cas9_2 was designed for Arabidopsis codon usage. However, human codon-optimized Cas9_4 induced more mutants than Arabidopsis optimized Cas9_2 in one experiment (Fig 4B). Cas9_4 has an extra N-terminal NLS compared to Cas9_2, which may explain this difference. In this comparison specifically, Cas9_3 was less efficient than Cas9_4. However, by comparing Cas9_3 and Cas9_4 in combination with YAO or RPS5a promoters, we found that Cas9_3 resulted in high mutation rates (Fig 4C and 4D). Cas9_3 efficiency can be explained by the plant codon optimization, the presence of two NLSs and the inclusion of a plant intron. This intron was originally added to avoid expression in bacteria during cloning and, as a side effect, can also increase expression in planta [30]. We recommend the use of Cas9_3 for gene editing in Arabidopsis. A modified sgRNA triggers CRISPR-induced mutations more efficiently In the endogenous CRISPR immune system, Cas9 binds a CRISPR RNA (crRNA) and a trans-acting CRISPR RNA (tracrRNA) [31]. A fusion of both, called single guide RNA (sgRNA), is sufficient for CRISPR-mediated genome editing [32]. sgRNA stability was suggested to be a limiting factor in the CRISPR system [33]. Chen et al. proposed an improved sgRNA to tackle this issue [8]. It carries an A-T transversion to remove a TTTT potential termination signal, and an extended Cas9-binding hairpin structure (Fig 5A). We compared side-by-side the 'Extended' and 'Flipped' sgRNA (sgRNA EF) with the classic sgRNA (Fig 5B and 5C). In two independent comparisons, the efficiency was higher with sgRNA EF. The improvement was not dramatic but sufficient to lead us to recommend use of 'EF'-modified guide RNAs for genome editing in Arabidopsis. The 3' regulatory sequences of Cas9 and the sgRNA influence the overall activity To avoid post-transcriptional modifications such as capping and polyadenylation, the sgRNA must be transcribed by RNA polymerase III (Pol III). Several approaches involving ribozymes, Csy4 ribonuclease or tRNA-processing systems have been proposed but were not tested here [34][35][36]. U6-26 is a Pol III-transcribed gene in Arabidopsis [37].
We used 205 bp of the 5' upstream region of U6-26 as promoter and we compared a synthetic polyT sequence (seven thymidines) and 192 bp of the 3' downstream region as terminator. A T-rich stretch has been reported to function as a termination signal for Pol III [38]. In seven out of nine side-by-side comparisons, the authentic 192 bp of U6-26 terminator directed a higher efficiency of the construct, as compared to a synthetic polyT termination sequence (Fig 6 and S2 Fig). We speculate that a stronger terminator increases the stability of the sgRNA. For multiplex genome editing, the use of 192 bp per sgRNA will result in longer T-DNAs and increase the risk of recombination and instability. We generated constructs with only 67 bp of the U6-26 3' downstream sequence. Such constructs were not compared side-byside with the '192 bp terminator', although they enabled modest to high mutation rates (e.g. Fig 3F and 3G). With these results in mind, we recommend using 67 bp of the 3' downstream sequence of U6-26 as terminator for the sgRNA. Since 3' regulatory sequences can influence sgRNA stability, we tested if the same was true for Cas9. We compared the Pisum sativum rbcS E9 with two A. tumefaciens terminators commonly used in Arabidopsis: Ocs and Ags (Fig 7). We did not observe consistent differences between E9 and Ocs (Fig 7A and 7B). However, in one comparison, E9 outperformed Ags ( Fig 7C). This is consistent with previous observations that RNA Polymerase II (Pol II) terminators quantitatively control gene expression and influence CRISPR efficiency in Arabidopsis [17,39]. We propose that a weak terminator after Cas9 enables Pol II readthrough that could interfere with Pol III transcription of sgRNAs in some T-DNA construct architectures. This limiting factor can be corrected by divergent transcription of Cas9 and sgRNAs. Divergent transcription of Cas9 and sgRNA expression can elevate mutation rates The Golden Gate Level 1 acceptor vector collection contains seven 'forward' expression cassettes and seven 'reverse' expression cassettes, which are interchangeable [23]. We assembled 'RPS5a:Cas9_4:E9' and 'YAO:Cas9_3:E9' in both the Level 1 vector position 2 forward (pICH47742) and reverse (pICH47811) (Figs 1 and 6). In one case, CRISPR activity was moderate when Cas9 and sgRNA are expressed in the same direction and high when they are expressed in opposite direction (Fig 8A). In another case, CRISPR activity was very high in both cases (Fig 8B). We thus recommend to both use a strong terminator after Cas9 (e.g. E9 or Ocs) and express Cas9 and sgRNA in opposite directions. Most of the stable double events are homozygous rather than biallelic From the mutant screen, 315 allyl-alcohol resistance lines were confirmed by capillary sequencing (S5 Table). We classified them in four categories: (i) 59% were homozygous (single sequencing signal, different than ADH1 WT), (ii) 11% were heterozygous (dual sequencing signal, one matching ADH1 WT), (iii) 10% were biallelic (dual sequencing signal, none matching ADH1 WT) and (iv) 20% were difficult to assign (unclear sequencing signals, either biallelic or due to somatic mutations, but clearly different than WT, heterozygous or homozygous genotypes) (Fig 9). The recovery of heterozygous (ADH1/adh1) lines indicates that the loss of a single copy of ADH1 can sometimes enable plants to survive the allyl-alcohol selection. Discussion CRISPR emerged in 2012 as a useful tool for targeted mutagenesis in many organisms including plants [11,32]. 
In Arabidopsis, the transgenic expression of CRISPR components can be straightforward, avoiding tedious tissue culture steps. Many strategies to enhance the overall CRISPR-induced mutation rate have been proposed [8,13,[15][16][17]40]. Here we report a systematic comparison of putative limiting factors including promoters, terminators, codon optimization, sgRNA improvement and T-DNA architecture. We found that the best promoters to control Cas9 expression are UBI10, YAO and RPS5a. The best terminators in our hands were Ocs from A. tumefaciens and rbcS E9 from P. sativum. A plant codon-optimized, intron-containing Cas9 allele outperformed the other alleles tested. A modified sgRNA with a hairpin Extension and a nucleotide Flip, called sgRNA EF, triggers slightly elevated mutation rates. Regulation of sgRNA transcription by the authentic 3' regulatory sequence of AtU6-26 results in better CRISPR activity. We get high mutation rates with either 67 bp or 192 bp of terminator and recommend using the shortest (67 bp). We hypothesise that a weak terminator after Cas9 enables RNA polymerase II readthrough within the sgRNA expression cassette, preventing optimal expression of the sgRNA. Indeed, we noted an elevated CRISPR-Cas9 efficiency by expressing Cas9 and sgRNA in opposite directions. Considering the combinations of Cas9 and sgRNA genes tested in this study, we recommend using a 'YAO:Cas9_3:E9' and a 'pU6-26:sgRNA EF:U6-26T 67' cassette in head-to-head orientation. This combination is included in the constructs tested here (Fig 8B) and enabled us to recover one homozygous mutant in five T1 plants tested. We also obtained useful rates with other constructs (e.g. Fig 3F), indicating that the CRISPR components do not entirely explain the final CRISPR activity. It was recently reported that heat stress increases the efficiency of CRISPR in Arabidopsis [41]. Environmental conditions may explain fluctuation of the CRISPR activity, independently of the T-DNA architecture. We were surprised to recover more homozygous than biallelic events. Stable double mutations are the result of two CRISPR events, on the male and female inherited chromosome respectively. In this scenario, lines can be recovered with two different mutations, resulting in a biallelic (e.g. adh1-2/adh1-3) genotype, rather than having the same mutation on both chromosomes (e.g. adh1-1/adh1-1). Double-strand break-induced homologous recombination occurs between allelic sequences [42]. It has been reported that double strand breaks caused by CRISPR-Cas9 can increase this phenomenon [43]. Allelic recombination can explain our observation of the same mutation on both copies of ADH1. The prevalence of homozygous over biallelic genotypes facilitates the genotyping and is an advantage for targeted mutagenesis using CRISPR-Cas9. Figure 9 summarizes the genotypes at the ADH1 locus confirmed by capillary sequencing. For each T2 family tested, up to six allyl-alcohol resistant plants were genotyped by capillary sequencing of an sgRNA target (ADH1) PCR amplicon. We retrieved a total of 315 sequences with a mutation. 59% (187) showed a single sequencing signal, different than ADH1 WT, and were classified as "Homozygous". 11% (33) showed an overlap of two sequencing signals, one matching ADH1 WT, and were classified as "Heterozygous". 10% (31) showed an overlap of two sequencing signals, none matching ADH1 WT, and were classified as "Biallelic". 64 (20%) showed an overlap of signals different than WT but not clear enough to distinguish, and were classified as "Unknown". The "Unknown" sequences can be biallelic or due to somatic mutations but are different than WT, heterozygous or homozygous genotypes. We used a glufosinate resistance selectable marker which enables easy selection of transgenic lines. It can be important to segregate away the T-DNA in the CRISPR mutant line for multiple reasons. For instance, a loss-of-function phenotype must be confirmed by complementation of the CRISPR-induced mutation. A CRISPR construct still present in the mutant can target the complementation transgene and interfere with the resulting phenotypes. Selection of non-transgenic lines is possible but complicated with classic selectable markers such as kanamycin or glufosinate resistance, since a selective treatment kills the non-transgenic plants. FAST-Green and FAST-Red provide a rapid non-destructive selectable marker and involve expression of a GFP- or RFP-tagged protein in the seed [44]. Transgenic and non-transgenic seeds can be distinguished under fluorescence microscopy [16,45,46]. This facilitates recovery of mutant seeds lacking the T-DNA (Fig 10). Homozygous mutants can be identified among the independent T1 lines. Non-fluorescent seeds can be selected from the T1 seeds. The resulting T2 plants are homozygous mutant and non-transgenic. We report a CRISPR- and Golden Gate-based method to generate stable Arabidopsis mutant lines in one generation. In our efforts to elevate mutation rates in Arabidopsis, we found several limiting factors mostly related to Cas9 and sgRNA transcription. Figure 10 illustrates the possible genotypes of a T1 plant at the sgRNA target site: WT, or somatic, heterozygous, biallelic or homozygous mutations. "Somatic" describes events happening in somatic cells, not inherited in the next generation; as somatic events can happen independently in each cell, they often result in a mosaic pattern of mutations across the leaf. A line with a homozygous mutation (mut1/mut1) produces seeds segregating for the T-DNA, visible under the microscope if using FAST-Red; the seeds will segregate 3:1 (Red:Non-red) if there is one locus insertion, 15:1 (Red:Non-red) if there are two loci inserted, and so on. The T2 progeny of (mut1/mut1) is 100% homozygous for the mutation, and the non-red seeds are also T-DNA free. Some of these findings can be tested for other plant species and for knock-in breeding. The generation of null alleles via CRISPR is today quick and simple, facilitating the investigation of gene function. Improvement of rates of gene 'knock-ins' provides the next challenge. In vivo gene tagging or knock-in breeding are theoretically possible and have been reported [47][48][49][50]. Improvements in CRISPR-based genome editing techniques will facilitate the study of genes and proteins and be beneficial for both basic and applied plant science.
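The genotype classes used above (homozygous, heterozygous, biallelic, unknown) follow simple rules on the sequencing signals detected at the target site. A minimal sketch of that classification logic, with hypothetical allele names, is given below.

```python
def classify_genotype(detected_alleles, wt_allele="WT"):
    """Classify a Sanger-genotyped plant from the set of allele signals detected
    at the target site, following the rules described in the text."""
    alleles = set(detected_alleles)
    if alleles == {wt_allele}:
        return "WT"
    if len(alleles) == 1:
        return "Homozygous"            # single non-WT signal
    if len(alleles) == 2 and wt_allele in alleles:
        return "Heterozygous"          # one WT and one mutant signal
    if len(alleles) == 2:
        return "Biallelic"             # two different mutant signals
    return "Unknown"                   # overlapping/unclear signals (possibly somatic)

print(classify_genotype(["adh1-1", "adh1-1"]))   # Homozygous
print(classify_genotype(["WT", "adh1-2"]))       # Heterozygous
print(classify_genotype(["adh1-2", "adh1-3"]))   # Biallelic
```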
Combinations of three Level 0 vectors containing respectively a promoter, a Cas9 coding sequence and a terminator were assembled in Level 1 vector pICH7742 (Position 2) or pICH47811 (Position 2, reverse) by the same 'Golden Gate' protocol but using 0.5 μl of BpiI enzyme (10U/μl, ThermoFisher) instead of 0.5 μl of BsaI-HF. To generate the sgRNA expression cassettes, DNA fragments containing the classic or the 'EF' backbone with 7, 67 or 192 bp of the U6-26 terminator were amplified using primers flanked with BsaI restriction sites associated with Golden Gate compatible overhangs (S3 Table). The amplicons were assembled with the U6-26 promoter (pICSL90002) in Level 1 vector pICH7751 (Position 3) by the 'Golden Gate' protocol using the BsaI-HF enzyme. Combinations of three Level 1 vectors containing a glufosinate resistance selectable maker (pICSL11017), a Cas9 expression cassette and a sgRNA expression cassette were assembled in Level 2 pAGM4723 (overdrive) or pICSL4723 (+ overdrive) by the 'Golden Gate' protocol using the BpiI enzyme. All the plasmids were prepared using a QIAPREP SPIN MINIPREP KIT on Escherichia coli DH10B electrocompetent cells selected with appropriate antibiotics and X-gal. Plant transformation, growth and selection Agrobacterium tumefaciens strain GV3101 was transformed with plasmids by electroporation and used for stable transformation of Arabidopsis accession Col-0. Arabidopsis plants were grown in 'short days' conditions (10 hr light/14 hr dark, 21˚C). Transformants were selected by spraying three times 1-to 3-weeks old seedlings with phosphinotrycin at a concentration of 0.375g/l. 4-weeks old resistant plants were transferred in 'long days' conditions (16 hr light/8 hr dark, 21˚C) for flowering. For each genotype, six independent T1 were self-pollinated to obtain six independent T2 families per construct. Characterisation of CRISPR events T2 families were tested for resistance to allyl-alcohol.~100 seeds were sterilized, immersed in water (4˚C, dark, overnight), treated with allyl-alcohol (30mM, room temperature, 2 hours, shaken at 750rpm), rinsed three times with water and sown on MS 1/2 medium. After two weeks, the number of germinated and non-germinated seeds was monitored. DNA was extracted from up six allyl-alcohol resistant plants (or all the resistant plants if there were less than six) for genotyping.~0.5cm 2 of leaf tissue was printed by mechanical pression onto an FTA filter paper (Whatman Bioscience). 1-mm disks were punched out from FTA filter paper by using a punch and placed in a 200μl PCR tubes. One disc was used per tube. Samples were incubated in 50μl of FTA buffer (1.25ml Tris 1M, 500μl EDTA 0.5M, 12.5μl Tween 20 and water up to a total volume of 125ml) for 2 hours and rinsed with water. PCR was performed on this template using primers flanking the sgRNA target in ADH1 (S3 Table) and Q5 High-Fidelity DNA Polymerase (NEB, following the manufacturer recommendations). After amplification, the PCR products were resolved by electrophoresis on a 1.5% agarose gel and purified using the QIAquick Gel Extraction Kit (QIAGEN). The purified PCR product was sequenced using the same primer set for amplifications by capillary sequencing (GATC Biotech). Sequencing results were compared to the Col-0 sequence of ADH1 using CLC Main Workbench 7.7.1. 
ADH1 genotypes were reported as WT (identical to Col-0), heterozygous (both Col-0 and a single mutation detected), biallelic (two different mutations detected), homozygous (a single mutation detected) or somatic (more than two signals detected). The number of confirmed mutants among all the allyl-alcohol resistant lines was used to estimate the total number of real mutants among allyl-alcohol survivors from each plate. For each T2 family, the CRISPR efficiency was defined as the ratio of homozygous and biallelic mutants compared to the total number of seeds sown. Plots presented in this article were made using ggplot2 in R version 3.3.2. Supporting information: in S1 Table, vector pAGM4723 lacks an overdrive whereas pICSL4723 has one, and "Same_mutation" indicates whether all the lines from a T2 family carry the same mutation; it is applied only if more than 75% of the seeds germinated and, if so, indicates that the T1 parent was likely a homozygous mutant whose mutation was inherited by all progenies. The overdrive sequence can increase the integration efficiency [21]. In one comparison, the presence of the overdrive resulted in slightly better activity, but in another it did not. We concluded that the presence of an overdrive does not influence the CRISPR efficiency. Thus, we could compare constructs independently of the presence of an overdrive.
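The per-family CRISPR activity/efficiency used throughout the screen (survivors multiplied by the confirmed knock-out fraction among genotyped survivors, divided by the number of seeds sown) can be computed with a small helper; the numbers in the example are hypothetical.

```python
def crispr_activity(n_survivors: int, n_genotyped: int,
                    n_confirmed_ko: int, n_seeds_sown: int) -> float:
    """CRISPR activity per T2 family, as defined in the text:
    (survivors) x (fraction of genotyped survivors that are homozygous/biallelic KO)
    divided by the number of seeds sown, expressed as a percentage."""
    if n_genotyped == 0 or n_seeds_sown == 0:
        return 0.0
    confirmed_fraction = n_confirmed_ko / n_genotyped
    return 100.0 * n_survivors * confirmed_fraction / n_seeds_sown

# Hypothetical T2 family: 100 seeds sown, 80 survivors, 6 genotyped, 5 confirmed KO.
print(f"{crispr_activity(80, 6, 5, 100):.1f}%")   # 66.7%
```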
5,551
2018-09-17T00:00:00.000
[ "Biology" ]
Hospital Performance, the Local Economy, and the Local Workforce: Findings from a US National Longitudinal Study Blustein and colleagues examine the associations between changes in hospital performance and hospitals' local economic resources. Locationally disadvantaged hospitals perform poorly on key indicators, raising concerns that pay-for-performance models may not reduce inequality. Methods and Findings: We applied county-level measures of local economic and workforce resources to a national sample of US hospitals (n = 2,705) during the period 2004-2007. We analyzed performance for two common cardiac conditions (acute myocardial infarction [AMI] and heart failure [HF]), using process-of-care measures from the Hospital Quality Alliance [HQA], and isolated temporal trends and the contributions of individual resource dimensions to performance, using multivariable mixed models. Performance scores were translated into net scores for hospitals using the Performance Assessment Model, which has been suggested as a basis for reimbursement under Medicare's ''Value-Based Purchasing'' program. Our analyses showed that hospital performance is substantially associated with local economic and workforce resources. For example, for HF in 2004, hospitals located in counties with longstanding poverty had mean HQA composite scores of 73.0, compared with a mean of 84.1 for hospitals in counties without longstanding poverty (p<0.001). Hospitals located in counties in the lowest quartile with respect to college graduates in the workforce had mean HQA composite scores of 76.7, compared with a mean of 86.2 for hospitals in the highest quartile (p<0.001). Performance on AMI measures showed similar patterns. Performance improved generally over the study period. Nevertheless, by 2007, 4 years after public reporting began, hospitals in locationally disadvantaged areas still lagged behind their locationally advantaged counterparts. This lag translated into substantially lower net scores under the Performance Assessment Model for hospital reimbursement. Conclusions: Hospital performance on clinical process measures is associated with the quantity and quality of local economic and human resources. Medicare's hospital pay-for-performance program may exacerbate inequalities across regions, if implemented as currently proposed. Policymakers in the US and beyond may need to take into consideration the balance between greater efficiency through pay-for-performance and socioeconomic equity.
Introduction
Pay-for-performance is an important market-based approach to improving health care quality. During the past decade, the approach has been adopted widely, by health systems in the UK [1], Australia [2], and Taiwan [3], among others. Pay-for-performance has also been used in the US, but in a piecemeal fashion, in some states, by some insurance firms, for some health care providers [4]. Now a unified effort is underway, with the US government poised to implement pay-for-performance nationwide within its Medicare health insurance program [5,6]. As the provider of near-universal health insurance to Americans age 65 y and older, Medicare is a powerful driver of US health care policy. For example, Medicare was the innovator in the introduction of hospital prospective payment under the Diagnosis Related Groups [DRGs] program [7]. That payment reform was in turn adopted in the private health insurance sector, and is the standard throughout the US today.
Medicare reforms also resonate internationally, as evidenced by the widespread implementation of the DRG case mix approach in 25 nations worldwide [8]. Pay-for-performance assumes that providers have adequate economic and human resources to perform, or improve their performance, within a short time frame. Yet the prevailing distribution of resources in the US health care system makes it difficult for some providers to operate effectively as it is [9]. Payment based on performance may worsen inequalities, as hospitals in underresourced areas lose funds to their better-off counterparts, with the government acting as a sort of ''reverse Robin Hood.'' This scenario is not entirely far-fetched. In the US, hospital revenues are largely derived from a mix of private and public health insurance payments, which vary with local socioeconomic conditions [9]. Strong finances give hospitals the opportunity to invest in quality improvement [10]. Hospitals also draw on local human resources. Arguably the most important of these is clinical staff. But not all facilities have access to a high quality talent pool. To date, much research and policy attention has been directed toward attracting physicians, nurses, pharmacists, and other clinicians to areas that would be otherwise underserved, because of local poverty, limited spousal employment opportunities, and sub-par schools [11][12][13]. Moreover, the US, and the world at large, is increasingly segregated, both economically and in terms of educational level [14][15][16]. Demographers in the US have noted a growing concentration of college-educated people in a relatively small subset of geographical areas [16]. This ''regional concentration of human capital'' has translated into higher productivity in places with more educated workforces, and a decline of economies in areas where this advantage is lacking [17,18]. Although there is evidence that clinical outcomes vary by geographical area [19], little research has explored the impact of the regional concentration of wealth and human capital on health care. In this study, we examine the association between local resources and hospital performance, seeking to understand the potential redistribution of funds under an important pending change in hospital reimbursement. We also explore the implications of our findings for health systems beyond the US, as pay-for-performance expands worldwide.
Setting
The US Medicare program is administered through the Centers for Medicare and Medicaid Services (CMS), a federal governmental agency. Over the past decade, the CMS has piloted pay-for-performance in a variety of settings [6]. Under the agency's ambitious ''Value-Based Purchasing (VBP) Initiative,'' the first wave of national implementation is slated to take place in hospitals, which will have a portion of their revenues withheld and then returned, conditional on their ability to meet quality targets [6]. Later, the approach will be extended to payment for other types of providers, including physicians, nursing homes, and home health agencies [6]. Groundwork for hospital pay-for-performance was laid when, in 2004, the agency called for hospitals to voluntarily report their performance on process-of-care measures for three clinical conditions (acute myocardial infarction [AMI], heart failure [HF], and pneumonia). Shortly thereafter, the agency began providing financial incentives for reporting, under so-called ''pay-for-reporting'' [20].
The transition to hospital pay-for-performance was to have been made in 2009, but with the change in administration and focus on health care reform, that effort was temporarily suspended. Nonetheless, pay-for-performance enjoys a high degree of support in the agency, the Congress, the hospital industry, and the Obama administration, and is widely expected to be implemented, with the recent passage of health care reform [21].
Data Sources and Sample
The performance data used in this study were derived during Medicare's voluntary reporting period from 2004-2007. Hospitals were eligible for inclusion in the study if they were located in the 50 United States, and voluntarily reported to the Medicare program under the Hospital Quality Alliance [HQA] program during the period. We merged the HQA process-of-care data (which are publicly posted on the program's Hospital Compare website [22]) with data on hospital characteristics and finances from the Medicare Cost Reports [23]. These data were merged with county-level information from the Health Resources and Services Administration's Area Resource File [24]. We used a dataset of institutions reporting at least some HQA data for at least 1 y during the study period. Because a goal of the study was to assess change in all measures over time, we limited the study sample to those hospitals reporting on all seven measures in both 2004 and 2007 (n = 2,705; see below).
Measures
Composite HQA performance score. We used clinical process-of-care measures for AMI and HF, two of the three conditions for which process measures were collected under the HQA throughout the study period. We do not present findings on the third condition (pneumonia), because the initial measures for that condition were controversial and were modified during the study period [25]. However, findings for the pneumonia measures were qualitatively similar to those reported here for AMI and HF, albeit with somewhat attenuated impacts. Detailed standards for these measures are published elsewhere [26]. Consistent with previous research [27,28], we selected the following individual measures in developing composite scores: AMI (aspirin on admission, aspirin at discharge, angiotensin converting enzyme [ACE] inhibitor for left ventricular dysfunction, beta-blocker on admission, beta-blocker at discharge); HF (assessment of left ventricular function, ACE inhibitor for left ventricular dysfunction). These are process measures that can be successfully met by physician order or chart notation, including documentation of a contraindication. We excluded measures of the delivery of cognitive services such as smoking cessation counseling that may be sensitive to patient characteristics [29]. For each year, for each of the two conditions, we computed a single weighted average ''composite'' score, following a standard methodology [30], which assigned each hospital a score ranging from 0 to 100, reflecting the mean hospital performance on a patient receiving the processes of care for which s/he was eligible, for that condition. Locational resources. We characterized hospital locational resources at the county level across a set of dimensions. Local economic conditions were measured in two ways. First, chronicity of local poverty was assessed using a modified version of a metric developed by the US Department of Agriculture's Economic Research Service [ERS], which identifies counties with respect to their population poverty levels over the past four decennial census periods.
''Persistently poor'' counties had >20% of their population living in poverty in all four of those census years; ''intermittently poor'' counties met the >20% criterion during at least one census year; and ''never poor'' counties never met the >20% poverty level during any of the census years. The current health of the local economy was summarized by the local unemployment rate according to the 2000 census, using the ERS cut point for ''high'' unemployment (<65% of residents 21-64 y old employed). Three measures reflected the availability and characteristics of the local workforce. Availability of health professionals was measured using the federal government's Health Professional Shortage Area (Primary Care) (HPSA) designation [31]. The HPSA designation applied to the whole county, a portion of the county, or to no portion of the county. The education level of the local workforce was measured in two ways. The ERS's ''low education'' designation identifies counties for which >25% of those aged 25-64 y do not have a high school diploma or equivalent, based on the 2000 census [32]. Because the local prevalence of college graduates is the standard measure of workforce human capital used by economic geographers, we also used the proportion of people aged 25 y and over who had completed 4 y of college. For bivariate and multivariable analyses, we divided the sample of hospitals into quartiles on the basis of the prevalence of college graduates in the local county. For each of the dimensions of locational resources, one resource level was designated ''locationally disadvantaged'' (persistently poor, high unemployment, entire county designated as HPSA, high prevalence of non-high school graduates in workforce, lowest quartile college educated). Dimensions of advantage/disadvantage were moderately intercorrelated between counties, with unweighted values of Cramer's V ranging from 0.21 (HPSA designation and prevalence of college graduates in the local county) to 0.56 (chronicity of local poverty and high unemployment). Individual hospital characteristics. Measures of hospital size and ownership were derived from the 2003 Medicare Cost Reports. From the same source, the ratio of interns and residents to beds was used to compute a 3-valued measure of teaching status, with a cut point of 0.10 separating ''major'' and ''minor'' teaching institutions, and hospitals with a value of zero designated as ''non-teaching'' [33]. Location was classified using the Office of Management and Budget's 2003 urban/rural continuum codes, collapsed into three standard categories: ''metropolitan,'' ''micropolitan'' (town or small city), and ''non-core'' (roughly, rural). From the Cost Reports we computed the percent of bed days attributable to Medicaid revenue, and total margin in 2003. Cost Report data were unavailable for 175 (6.4%) of the hospitals. Those institutions were excluded in multivariable analysis. Attainment, improvement, and net performance score (Performance Assessment Model). To assess potential redistribution under VBP, we used the Performance Assessment Model that is detailed in Appendix B of CMS's November 2007 Report to Congress [5]. This model is ''the methodology that could be used for scoring hospital performance on specific measures'' in hospital VBP, according to CMS's January 2009 Roadmap for Implementing Value Driven Healthcare in the Traditional Medicare Fee-for-Service Program [6].
Under the model, hospitals are assigned two scaled scores for each condition annually, one based on attainment (absolute performance) and the other based on improvement (increase in performance from the prior reporting year). A net performance score is then assigned for each condition equal to the attainment or improvement score, whichever is greater. We assigned these scores to hospitals using the model's ''standard'' method, as illustrated in Figure 1. To determine the attainment score, for each condition, for each year, there is an attainment range with an upper limit benchmark (the mean of the top decile of performance for all hospitals, for the previous year) and a lower limit attainment threshold (the 50th percentile of performance for all hospitals, for the previous year). Hospitals at or below the attainment threshold receive 0 attainment points. Those at or above the benchmark receive 10 attainment points. Hospitals performing within the attainment range receive a scaled score between 1 and 9 attainment points. A similar scaling methodology generates the improvement score, with the upper benchmark fixed at the same level as for that measure's attainment score. However, the lower threshold is defined differently for every hospital, and is set at the hospital's performance score in the previous period. Thus, the improvement scoring range varies across hospitals, and requires a greater increase in performance points on the part of previous low performers than previous high performers in order to obtain the same scaled score. We also assigned scores using the model's alternative method for ''topped-out'' measures [5]. Results of our analyses were not significantly changed.
Statistical Analysis
We began by describing the distribution of the sample of hospitals with respect to the individual characteristics and locational resources described above. Then, with hospitals as the units of analysis, we computed mean annual composite HQA performance scores, for each dimension of location, by resource level, and developed 95% confidence intervals (CIs) around each mean. To formally test trends in performance over time, for each condition and dimension, we developed mixed models that included resource level, time, and their interactions as fixed effects, along with random effects of hospital and county, to reflect repeated measures for hospitals and clustering of hospitals within counties. For each condition and dimension, we also assessed differences in mean composite scores, comparing hospitals at the most and the least disadvantaged resource level for that dimension, for the year 2004. These bivariate analyses incorporated robust standard errors to account for the clustering of hospitals within counties. The same set of analyses was conducted for the year 2007. To assess the independent contributions of the dimensions of location, we developed mixed multivariable models of composite HQA performance score for each condition in 2004 and 2007, entering the locational and hospital characteristics simultaneously as fixed effects, with county as a random effect. We performed regression diagnostics using the approach described by Belsley, Kuh, and Welch [34], and the models performed favorably. Finally, we examined bivariate differences in mean Performance Assessment Model attainment, improvement, and net scores for 2007, for each condition, comparing hospitals at the most and least disadvantaged resource level, within each locational dimension.
For these tests, robust standard errors were used to correct for the clustering of hospitals within counties. We also performed analyses that were weighted for hospital size. Again, the results were quantitatively similar; we present unweighted statistics here.
Results
Although 4,786 different institutions reported HQA data for at least 1 y during the study period, some did not report certain measures during some years. Of the 3,698 hospitals that reported at least one measure of AMI and HF performance in 2004, 3,147 (85.1%) reported on all seven of the measures used in this study in that year, and of those, 2,705 (85.9%) were ''complete reporters,'' providing data on all seven measures again in 2007. These 2,705 hospitals formed the cohort for the present study. Compared to all hospitals, complete reporters were disproportionately large, had a teaching mission, and were comparatively advantaged in terms of local economy and workforce. Analysis revealed that hospitals that were complete reporters performed at a higher level than noncomplete reporters, within every stratum of the five dimensions of advantage/disadvantage. We performed sensitivity analyses to assess the extent to which the inclusion criteria may have impacted our findings, as reported below. For most dimensions of locational resources, relatively few of the sample hospitals are in counties with the lowest resource levels (Table 1). For example, only 130 (4.8%) of hospitals are in counties that have been persistently poor in the years 1970-2000, and 231 (8.5%) are in counties with high unemployment. Nevertheless, for some dimensions, a substantial proportion of hospitals are in relatively disadvantaged locations. For instance, 12.5% of hospitals are in areas with a low prevalence of high school graduates, and nearly 25% of hospitals are in areas where fewer than 16.2% of local adults are college graduates. In all, 873 (32.7%) of the hospitals in the sample are in a county that is locationally disadvantaged on at least one dimension.
Trends in Performance
There was general improvement in mean composite score over time (for all hospitals, for AMI, 1.62 points/year, 95% CI = 1.55-1.70; for HF, 3.33 points/year, 95% CI = 3.24-3.40). Details of the yearly trends in composite scores for the two conditions are depicted in Figures 2 and 3. The top panels in Figure 2 (Figure 2A and 2B) show trends defined by baseline (2004) performance quartile. For both conditions, hospitals starting in the lowest quartile showed the most improvement over time (p<0.001 for a comparison of the linear time trend between the first and fourth quartiles, for both conditions), but even by the fourth year of reporting, those in the initial lowest quartile had not reached parity with the other groups (p<0.001 for the difference of means for the first and fourth quartile in 2007, for both conditions). The remaining panels in Figures 2 and 3 display performance trends over time for the five dimensions of locational resources, stratified by resource level. For all five dimensions, hospitals at the most disadvantaged level of resources fared relatively poorly at the outset (p<0.001 for the difference between hospitals at the most and least disadvantaged resource levels for all dimensions, for both conditions), and that significant disadvantage continued, but was attenuated over time (p<0.05 for the difference in linear trend between hospitals at the most and least disadvantaged resource levels, for all dimensions, for both conditions, with the exception of health professional shortage for AMI). By the fourth year of performance reporting, hospitals in disadvantaged areas continued to lag significantly behind their advantaged counterparts (p<0.001 for the difference in means between hospitals at the least and most disadvantaged resource levels for all dimensions, for both conditions) (Table S1).
Figure 1. Performance Assessment Model scoring example (standard method). For this example, the attainment threshold is 50 points, and the benchmark is 90 points, based upon hospital performance nationwide during the previous year (see text). This hypothetical hospital received a composite score of 30 during the preceding year, and a score of 75 during the current year. This is converted to attainment and improvement scores, as follows: For the attainment score, the current year score falls in the attainment range, and the hospital is assigned a scaled attainment score of 7. For the improvement score, the current year score falls in the improvement range, and the hospital is assigned a scaled score of 8. The overall score is the larger of the two scores (a value of 8).
Multivariable Analysis of Dimensions of Location
The five dimensions of location are conceptually interrelated, and may be correlated with individual hospital characteristics. Table 2 shows the change in composite performance score independently attributable to individual hospital characteristics, and then independently attributable to each locational dimension, expressed as a difference in score between each level and the most locationally advantaged level, within each dimension. The data show consistent independent effects for chronicity of poverty, entire county health professional shortages, and percent college graduates, with levels of chronicity of poverty and percent college graduates in the workforce showing dose-response relationships to performance.
Location and Performance Assessment Model Score
Applying the Performance Assessment Model to the 2007 data allowed calculation of attainment, improvement, and net Performance Assessment Model scores (Table 3). Hospitals in more advantaged locations had substantially higher attainment scores (p<0.01 for all five dimensions in both AMI and HF). Hospitals in advantaged locations also had higher improvement scores than their disadvantaged counterparts, though differences in improvement scores were narrower and not all statistically significant (p<0.05 for all dimensions for AMI, but only for chronicity of poverty and college graduates in the workforce for HF). For locationally advantaged hospitals, attainment generally exceeded improvement; the converse was true for disadvantaged hospitals. The net result was that the Performance Assessment Model score, the suggested basis for reimbursement under VBP, was consistently higher for hospitals in the most advantaged locations than those in the least advantaged locations, for both conditions and for all five locational characteristics (p<0.02 for both conditions, all five characteristics).
Sensitivity of Findings to Inclusion Criteria
To assess the sensitivity of our core findings to our ''complete reporters throughout'' inclusion criteria, we conducted longitudinal (2004-2007) analyses with the larger sample of hospitals that reported completely in 2004 (n = 3,147), as well as cross-sectional (2007) analyses with complete reporters for AMI in that year (n = 3,074) and HF (n = 3,908). In each case the results were substantially equivalent to those reported here.
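A compact sketch of the standard scoring method described in the Measures section is given below. The ceiling-based 0-10 scaling is an assumption chosen so that it reproduces the worked example in Figure 1; the CMS model's exact rounding convention may differ.

```python
import math

# Sketch of the Performance Assessment Model "standard" scoring method.
# The 0-10 ceiling-based scaling below is an assumption chosen to reproduce
# the worked example in Figure 1; the CMS model's exact rounding rule may differ.

def scaled_points(score, lower, upper):
    """Map a composite score onto 0-10 points over the range [lower, upper]."""
    if score <= lower:
        return 0
    if score >= upper:
        return 10
    return math.ceil(10 * (score - lower) / (upper - lower))

def net_score(current, prior, threshold, benchmark):
    attainment = scaled_points(current, threshold, benchmark)
    # The improvement range runs from the hospital's own prior-year score to the benchmark.
    improvement = scaled_points(current, prior, benchmark)
    return max(attainment, improvement), attainment, improvement

# Figure 1 example: threshold 50, benchmark 90, prior-year score 30, current score 75.
print(net_score(75, 30, 50, 90))  # -> (8, 7, 8): net 8, attainment 7, improvement 8
```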
Discussion We found that US hospitals operating in locations with richer economic and human resources attained significantly higher clinical process scores than those located in less advantaged areas during the period 2004-2007. This pattern was evident along several dimensions of the local economy and workforce. Over the study period, hospital performance improved generally, with initially low-performing hospitals showing the greatest increases. Since locationally disadvantaged hospitals were disproportionately low performers initially, they showed more improvement over the 4-y period, and by the end of the study period, disparities by degree of locational advantage had decreased appreciably. Still, by the fourth year of public reporting, locationally disadvantaged hospitals had not achieved scores comparable to their advantaged counterparts. Our finding of an association between location and performance does not establish causality or specify a mechanism by which the local economy or workforce affect quality. However, it bears emphasis that the association is consistent with research in health management that suggests that effective human and material resources are essential to hospital performance and performance improvement, and that reported performance is only as strong as the weakest link [35][36][37][38][39][40]. Moreover, regardless of the causal history, the fact that better-endowed providers are significantly better performers suggests that pay-for-performance may transfer funds from providers in disadvantaged locations to their better-endowed counterparts. This possibility has international resonance, as is discussed below. Limitations We selected our process measures carefully to ensure that variations in hospital performance are unlikely to reflect differences in the characteristics of patients served by hospitals. As noted previously, performance on the measures is satisfied by a physician's order, as documented by a medical records abstractor. For example, in order for a hospital to have succeeded in satisfying the process measure for ''administration of an aspirin upon admission,'' there need only be an order in a patient record (or documentation of a contraindication) that is reported to CMS. Successfully meeting these criteria does not depend upon a patient action or compliance, and is plausibly independent of patient characteristics. Indeed, the Medicare Payment Advisory Commission (MedPAC) has discussed this matter, and has concluded that risk adjustment for patient characteristics ''is not necessary'' for these measures [41]. This recommendation is consistent with a recent study that found that performance on the measures used in our study was not consistently associated with patient race/ ethnicity, within hospitals [29]. We would not suggest that location is the sole determinant of performance. On the contrary, we found substantial within-county variation in performance, some of which was correlated with hospital characteristics including size, ownership, teaching status, and financial strength. Accounting for within-location variation is beyond the scope of this paper, although it is the focus of some of our ongoing work. On a note related to locational determinism, we would not suggest that hospitals necessarily hire and promote the best of local talent. However, having a strong talent pool from which to draw upon is likely to contribute to organizational strength, all else being equal. Our estimates of effect should be interpreted with some caution. 
In measuring and assigning ''location,'' we used county as the geographic unit for analysis, because it is the unit for which information about the characteristics of interest is readily available. However, locational characteristics of hospitals are not necessarily fully defined at the county level. For example, cross-county commuting is common in some areas, with over 25% of US workers nationwide crossing county boundaries to reach their workplace [42]. To the extent that more educated workers are drawn across county lines in order to work at high-performing hospitals, the associations reported here understate the relationship between workforce characteristics and hospital performance. A related caveat pertains to variations in county size and population. Across the US, counties range in size from 67 to 227,556 square km [43]. Some counties are small rural areas; others are entire metropolitan regions. Within large counties, there is likely to be substantial heterogeneity of workforce and economic characteristics. The assignment of average characteristics in such counties is especially imprecise, and as a result our estimates of the impact of location are again likely to be biased downward. To explore this effect, we replicated our analyses, omitting hospitals in counties with populations greater than 1 million, and then again omitting hospitals in counties with populations greater than 500,000. As the sample was restricted to smaller counties, estimates of locational impacts generally increased (analyses not shown). Other caveats have to do with representativeness and generalizability. As we noted at the outset, locationally disadvantaged institutions were relatively underrepresented in our sample because they were less likely to report consistently to CMS than their better-resourced counterparts. For example, in our sample 4.8% of hospitals were in ''persistently poor'' counties, but in the nation at large that figure is 7.5%. Therefore, while our sensitivity analysis did not suggest bias in our estimates of the impact of locational disadvantage, disadvantage is more prevalent than our sample would suggest. In addition, we used process measures of performance for only two clinical conditions. It remains to be seen whether similar trends and patterns will be found when data are available for other conditions, or for measures of the outcomes of care. Finally, while our analyses suggest that Medicare's pay-for-performance will transfer funds from poorly resourced to better-off areas, we cannot assign dollar values to the transfers that are likely to occur, for several reasons. First, our data derive from a period of public reporting, and the addition of payment incentives may influence provider behavior beyond those changes induced by public reporting. Second, the specifics of payment remain to be determined by the Congress and the CMS. Critical issues include the percent of revenue to be withheld under the scheme, the extent to which that revenue will be returned to providers (rather than retained by the Medicare program [5,6]), and the translation of performance scores to dollar amounts (the so-called ''exchange function'' [5]).
Policy Implications
With respect to Medicare, CMS has acknowledged that pay-for-performance may inadvertently worsen the lot of hospitals that ''consistently face challenges in improving or maintaining their performance'' ([5], p. 85). In its Report to Congress, the agency has outlined plans to monitor the distribution of funds as pay-for-performance is implemented.
If some subsets of hospitals are disadvantaged under the payment reform, then those hospitals could be offered training, site visits, and other forms of technical assistance, the agency has suggested. Our work argues for a more proactive approach. Rethinking the way that performance is assessed could help avoid some of the ''reverse Robin Hood'' consequences that are foreshadowed by our analysis. As we have noted, in its current published form, the Performance Assessment Model credits improvement conditional on starting point. This means that baseline low-attainers must have a greater absolute score increase in order to ''improve'' as much as baseline highattainers. Since locationally disadvantaged hospitals are typically baseline low-attainers, they are perforce less likely to be identified as high performers under the model. Changing the model so that it credits improvement regardless of starting point, or assesses improvement over a longer time frame, could help make the program more equitable. It is important to note that the Performance Assessment Model, while referenced in multiple CMS documents as a likely basis for Medicare reform, is still preliminary. Thus, opportunities exist to modify and improve upon the current version. Alternatively, rather than altering the model, CMS may wish to consider comparing hospitals to their similarly located peers, thereby enhancing equity through an ''apples-to-apples'' comparison. However this strategy carries the risk of institutionalizing inequalities, and finding the right balance may be difficult [44,45]. Do our findings apply beyond the US? While our specific measures of locational disadvantage may not apply everywhere, there are likely analogs in other settings. For instance, significant health workforce inequalities can be found within and across nations, around the world [46,47]. These are likely to translate into regional differences in capacities to perform. A recent study in the UK found that general practitioner practices in deprived areas are disproportionately staffed by older physicians, and those who received their medical training outside of the UK [48]. Both of these provider characteristics were associated with poorer practice performance under a pay-for-performance scheme. Deprived areas had a disproportionate share of the lowest performing general practices. In other words, location was linked to performance among UK general practitioners, perhaps through health workforce inequalities. While this remains to be explored in future research, it implies (as does our work) that the pursuit of efficiency through provider accountability may be at odds with pervasive structural inequalities. Such inequalities can be addressed through countervailing policies. For example, in the UK, geographically targeted approaches have been taken to overcome regional inequalities under ''deprivation payment'' schemes [49]. In the US, there is no comprehensive strategy to address regional resource inequalities as they might affect health care delivery, although various policies have supported health care in rural areas and in the so-called ''safety net,'' which includes institutions providing care to low income people [11,50,51]. Despite claims that ''the world is flat'' [52]-that place is irrelevant in a globally networked world-our work suggests that location is a critical input to health care quality. Holding providers accountable is not an unreasonable approach to quality improvement. 
However, it must be done in a way that attends to the profound inequalities in local circumstances that shape life in the twenty-first century [14].
Editors' Summary
Background. These days, many people are rewarded for working hard and efficiently by being given bonuses when they reach preset performance targets. With a rapidly aging population and rising health care costs, policy makers in many developed countries are considering ways of maximizing value for money, including rewarding health care providers when they meet targets, under ''pay-for-performance.'' In the UK, for example, a major pay-for-performance initiative, the Quality and Outcomes Framework, began in 2004. All the country's general practices (primary health care facilities that deal with all medical ailments) now detail their achievements in terms of numerous clinical quality indicators for common chronic conditions (for example, the regularity of blood sugar checks for people with diabetes). They are then rewarded on the basis of these results.
Why Was This Study Done? In the US, the government is poised to implement a nationwide pay-for-performance program in hospitals within Medicare, the government program that provides health insurance to Americans aged 65 years or older, as well as people with disabilities. However, some observers are concerned about the effect that the proposed pay-for-performance program might have on the distribution of health care resources in the US. Pay-for-performance assumes that health care providers have the economic and human resources that they need to perform or to improve their performance. But, if a hospital's capacity to perform depends on local resources, payment based on performance might worsen existing health care inequalities because hospitals in under-resourced areas might lose funds to hospitals in more affluent regions. In other words, the government might act as a reverse Robin Hood, taking from the poor and giving to the rich. In this study, the researchers examine the association between hospital performance and local economic and human resources, to explore whether this scenario is a plausible result of the pending change in US hospital reimbursement.
What Did the Researchers Do and Find? US hospitals have voluntarily reported their performance on indicators of clinical care (''process-of-care measures'') for acute myocardial infarction (AMI, heart attack), heart failure (HF), and pneumonia under the Hospital Quality Alliance (HQA) program since 2004. The researchers identified 2,705 hospitals that had fully reported process-of-care measures for AMI and HF in both 2004 and 2007. They then used the ''Performance Assessment Model'' (a methodology developed by the US Centers for Medicare and Medicaid Services to score hospital performance) to calculate scores for each hospital. Finally, they looked for associations between these scores and measures of the hospital's local economic and human resources such as population poverty levels and the percentage of college graduates in the workforce. Hospital performance was associated with local economic and workforce capacity, they report. Thus, hospitals in counties with longstanding poverty had lower average performance scores for HF and AMI than hospitals in affluent counties. Similarly, hospitals in counties with a low percentage of college graduates in the workforce had lower average performance scores than hospitals in counties where more of the workforce had been to college.
Finally, although performance improved generally over the study period, hospitals in disadvantaged areas still lagged behind hospitals in advantaged areas in 2007.
What Do These Findings Mean? These findings indicate that hospital performance (as measured by the clinical process measures considered here) is associated with the quantity and quality of local human and economic resources. Thus, the proposed Medicare hospital pay-for-performance program may exacerbate existing US health care inequalities by leading to the transfer of funds from hospitals in disadvantaged locations to those in advantaged locations. Although further studies are needed to confirm this conclusion, these findings have important implications for pay-for-performance programs in health care. First, they suggest that US policy makers may need to modify how they measure performance improvement: the current Performance Assessment Model gives hospitals that start from a low baseline less credit for improvements than those that start from a high baseline. This works against hospitals in disadvantaged locations, which start at a low baseline. Second, and more generally, they suggest that there may be a tension between the efficiency goals of pay-for-performance and other equity goals of health care systems. In a world where resources vary across regions, the expectation that regions can perform equally may not be realistic.
Additional Information. Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000297.
- KaiserEDU.org is an online resource for learning about the US health care system. It includes educational modules on such topics as the Medicare program and efforts to improve the quality of care
- The Hospital Quality Alliance provides information on the quality of care in US hospitals
- Information about the UK National Health Service Quality and Outcomes Framework pay-for-performance initiative for general practice surgeries is available
8,334.2
2010-06-01T00:00:00.000
[ "Economics", "Medicine" ]
Challenges Predicting Ligand-Receptor Interactions of Promiscuous Proteins: The Nuclear Receptor PXR Transcriptional regulation of some genes involved in xenobiotic detoxification and apoptosis is performed via the human pregnane X receptor (PXR) which in turn is activated by structurally diverse agonists including steroid hormones. Activation of PXR has the potential to initiate adverse effects, altering drug pharmacokinetics or perturbing physiological processes. Reliable computational prediction of PXR agonists would be valuable for pharmaceutical and toxicological research. There has been limited success with structure-based modeling approaches to predict human PXR activators. Slightly better success has been achieved with ligand-based modeling methods including quantitative structure-activity relationship (QSAR) analysis, pharmacophore modeling and machine learning. In this study, we present a comprehensive analysis focused on prediction of 115 steroids for ligand binding activity towards human PXR. Six crystal structures were used as templates for docking and ligand-based modeling approaches (two-, three-, four- and five-dimensional analyses). The best success at external prediction was achieved with 5D-QSAR. Bayesian models with FCFP_6 descriptors were validated after leaving a large percentage of the dataset out and using an external test set. Docking of ligands to the PXR structure co-crystallized with hyperforin had the best statistics for this method. Sulfated steroids (which are activators) were consistently predicted as non-activators, while poorly predicted steroids were docked in a reverse mode compared to 5α-androstan-3β-ol. Modeling of human PXR represents a complex challenge by virtue of the large, flexible ligand-binding cavity. This study emphasizes this aspect, illustrating modest success using the largest quantitative data set to date and multiple modeling approaches. Because of the large number of compounds being analyzed, among other factors, the best alignment option was not immediately apparent to us. Common substructure alignment with an inertial grid orientation was attempted for the training sets using different template molecules. The final alignments were picked based on the quality and plausibility of the actual alignment as well as the statistical quality of the QSAR model derived from it. The best alignments of the master training set (N = 95), the subsets of pregnanes (N = 23) and bile acids/salts (N = 41) were achieved using the conformation of pregnanedione (compound #27, Supplemental Table). For CoMFA, all the molecules were placed in a 3D lattice with regular grid points separated by 2 Å. The van der Waals potentials and the Coulombic term representing the steric and electrostatic fields were calculated using the standard Tripos force field for CoMFA. An sp3 carbon atom with a formal charge of +1 and a van der Waals radius of 1.52 Å served as the probe atom to generate steric (Lennard-Jones 6-12 potential) and electrostatic (Coulombic potential) field energies, which were obtained by summing the individual interaction energies between each atom of the molecule and the probe atom at every grid point. A distance-dependent dielectric constant was used. The steric and electrostatic fields were truncated at ±30.00 kcal/mol. A similar approach was used for CoMSIA as the aligned molecules were placed in a 3D lattice with regular grid points separated by 2 Å.
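The probe-based grid calculation can be illustrated with a small sketch; the Lennard-Jones parameters, charges, and coordinates below are placeholders rather than the Tripos force-field values used in the study.

```python
import numpy as np

# Sketch of a CoMFA-style field calculation: a probe atom (+1 charge,
# sp3-carbon-like radius) visits each point of a regular grid, and steric
# (Lennard-Jones 6-12) and electrostatic (Coulomb with a distance-dependent
# dielectric, eps = r) energies are summed over the molecule's atoms.
# All parameters and coordinates are illustrative, not Tripos force-field values.

def comfa_fields(coords, charges, grid, eps_lj=0.1, sigma=3.0, cutoff=30.0):
    steric, electrostatic = [], []
    for point in grid:
        r = np.linalg.norm(coords - point, axis=1)   # probe-atom distances (angstrom)
        lj = np.sum(4 * eps_lj * ((sigma / r) ** 12 - (sigma / r) ** 6))
        coul = np.sum(332.0 * charges / r**2)        # probe charge +1, dielectric eps = r
        steric.append(np.clip(lj, -cutoff, cutoff))  # truncate fields at +/- 30 kcal/mol
        electrostatic.append(np.clip(coul, -cutoff, cutoff))
    return np.array(steric), np.array(electrostatic)

# Two-atom toy "molecule" and a coarse grid line 4 angstroms above it:
coords = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
charges = np.array([-0.2, 0.2])
grid = np.array([[x, 0.0, 4.0] for x in np.arange(-4.0, 6.0, 2.0)])
steric, electro = comfa_fields(coords, charges, grid)
print(steric.round(2), electro.round(2))
```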
The five physicochemical properties for CoMSIA (steric, electrostatic, hydrophobic, hydrogen-bond donor and acceptor) were evaluated using a common probe atom with 1 Å radius, +1.0 charge, and hydrophobic and hydrogen-bond property values of +1. The attenuation factor α, which determines the steepness of the Gaussian function, was assigned a default value of 0.3 (39). The PLS technique was employed to generate a linear relationship that correlates changes in the various computed potential fields with changes in the corresponding experimental values of activities (−log IC50) for the data set. Employing the CoMFA and CoMSIA potential energy fields for each molecule as the independent variables and the corresponding activity values as the dependent variable, PLS converts these descriptors to the so-called latent variables or principal components (PCs) consisting of linear combinations of the original independent variables. To assess the internal predictive ability of the CoMFA and CoMSIA models, the 'leave-one-out' (LOO) cross-validation procedure was employed. Cross-validation determines the optimum number of PCs, corresponding to the smallest error of prediction and the highest q². PLS analysis was repeated without validation using the optimum number of PCs to generate final CoMFA and CoMSIA models from which the conventional correlation coefficient r² was derived. The utility of the 3D-QSAR models was determined by predicting the activities of the test set compounds that were not included in the training sets, after aligning them in the same way as those in the training set.
In silico methodology: 3D-QSAR - Catalyst
The pharmacophore modeling studies were carried out using Catalyst in Discovery Studio version 1.7 (Accelrys, San Diego, CA) running on a Sony Vaio laptop computer (Intel Pentium M processor). This methodology has been previously described [2]. Molecules were imported as an sdf file and the 3-D molecular structures were produced using up to 255 conformers with the best conformer generation method, allowing a maximum energy difference of 20 kcal/mol. Ten hypotheses were generated using these conformers for each of the molecules and the EC50 values, after selection of the following features: hydrophobic, hydrogen bond acceptor, hydrogen bond donor and ring aromatic features. In addition, hypotheses were generated with up to 2 excluded volumes, variable weight and tolerances, and a combination of excluded volumes and variable weight and tolerances. In all cases, after assessing all ten generated hypotheses, the one with the lowest energy cost was selected for further analysis, as this usually possessed features representative of all the hypotheses. The quality of the structure-activity correlation between the estimated and observed activity values was estimated by means of an r value. As Catalyst is commonly used with relatively small training sets (greater than or equal to 16 molecules), we generated individual models for the different types of steroids only.
In silico methodology: 4D-QSAR
The 4D-QSAR methodology has been presented previously in detail [3]. Briefly, the commercial version (V3.0) of the 4D-QSAR package was employed in this study (4D-QSAR, Version 3.0; The Chem21 Group, Inc., Lake Forest, IL). This study uses a receptor-independent (RI-4D-QSAR) analysis. The first step in the analysis is to generate a reference grid cell lattice in which to place the 3D structure of each training set compound. This grid cell lattice is composed of a set of one-angstrom cubes.
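The LOO cross-validated q² used above to select the number of PLS components can be sketched with scikit-learn as a stand-in for the Sybyl implementation; the descriptor matrix and activities below are random placeholders, not the steroid data set.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

# Leave-one-out cross-validated q2 for a PLS model, as used to choose the
# optimal number of components. X stands in for CoMFA/CoMSIA field descriptors
# and y for -log IC50 values; both are random placeholders.

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))
y = rng.normal(size=40)

def loo_q2(X, y, n_components):
    press = 0.0
    ss_total = np.sum((y - y.mean()) ** 2)
    for train, test in LeaveOneOut().split(X):
        model = PLSRegression(n_components=n_components).fit(X[train], y[train])
        pred = model.predict(X[test]).ravel()[0]
        press += (y[test][0] - pred) ** 2
    return 1.0 - press / ss_total  # q2 = 1 - PRESS / total sum of squares

best = max(range(1, 6), key=lambda n: loo_q2(X, y, n))
print("optimal PCs:", best, "q2:", round(loo_q2(X, y, best), 3))
```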
The 3D structures of the training set compounds were then constructed and optimized in Hyperchem (Release 7.51 for Windows; Hypercube, Inc. Gainesville, FL) The preferred compound geometry was determined via molecular mechanics with an MM+ force field, and the partial charges were assigned using a semiempirical AM1 method implemented in the Hyperchem program [3]. The interaction pharmacophore elements, or IPEs were assigned to the intramolecular energy minimized 3D structure of each compound and the conformational ensemble profile, or CEP, was then generated for each training set compound. The seven IPEs used in 4D-QSAR analyses represent any/all atoms, non-polar atoms, polar positive atoms, polar negative atoms, hydrogen-bond acceptor atoms, hydrogen-bond donor atoms, aromatic atoms and non-hydrogen atoms. A molecular dynamics simulation (MDS) was used to create the CEP. The MOLSIM V3.0 (C. Doherty and The Chem21 Group, Inc., Lake Forest, IL) software package with the extended MM2 force field was utilized to perform the MDS. The molecular dielectric was set to 3.5, and the simulation temperature was fixed at 300 K. A sampling time of 100 ps was employed, over which a total of 1000 conformations of each compound were recorded. The CEP was created by recording the atomic coordinates and conformational energy every 0.1 ps throughout the simulation, resulting in 1000 "snapshots" of each compound as it traverses through the set of thermodynamically available conformer states. Following generation of the CEP of each compound, the molecular alignments were chosen for the training set. Three-ordered atom alignment rules were used in this study. In general, the alignments are chosen to span the common framework (core) of the molecules in the training set so that information relating to the substituent properties of the compounds is obtained from the resulting models. This alignment strategy is reflected in those which were chosen and listed along with the steroidal core structure in Supplemental Table 7. All conformations from the CEP of every compound are placed in the grid cell lattice space according to a selected trial alignment. The occupancy of the grid cells by each IPE type is recorded over the CEP which then forms the set of grid cell occupancy descriptors, or GCODs which are utilized as the pool of trial descriptors in the model building and optimization process. The genetic function approximation (GFA) is used to optimize the 4D-QSAR models [4]. Since GFA typically generates a family of possible models, the best models in the 4D-QSAR study were chosen based on a number of different criteria. In addition to the leave-one-out cross-validated correlation coefficient, or q 2 , other statistical measures such as r 2 , standard error (SE), and lack-of-fit (LOF) were considered as indicators of model fitness [4]. The optimal number of descriptor terms to include in the best model was determined by plotting the number of model terms versus the cross-validated correlation coefficient (data not shown). The point of the plot at which the q 2 did not significantly increase with the addition of an additional model term was chosen as the optimal number of model terms. Test sets not included in the training sets were also used to evaluate the predictive power of the 4D-QSAR models. The active conformation of each of the compounds in the training sets was postulated relative to the best 4D-QSAR model for the respective set. 
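A toy tally of the grid cell occupancy descriptors (GCODs) described above might look like the following; the coordinates and IPE labels are random placeholders rather than an actual MOLSIM conformational ensemble profile.

```python
import numpy as np
from collections import Counter

# Toy grid cell occupancy descriptor (GCOD) tally: aligned conformer snapshots
# are binned into a 1-angstrom grid and the occupancy of each cell is recorded
# per IPE type. Coordinates and IPE labels are random placeholders, not an
# actual MOLSIM conformational ensemble profile.

rng = np.random.default_rng(1)
n_snapshots, n_atoms = 1000, 20
ipe_types = rng.choice(["any", "nonpolar", "hb_acceptor", "hb_donor"], size=n_atoms)
snapshots = rng.normal(scale=2.0, size=(n_snapshots, n_atoms, 3))  # aligned coordinates (angstrom)

gcods = Counter()
for frame in snapshots:
    cells = np.floor(frame).astype(int)  # 1-angstrom cubic grid cells
    for (ix, iy, iz), ipe in zip(cells, ipe_types):
        gcods[(int(ix), int(iy), int(iz), str(ipe))] += 1

# Normalize to occupancy frequencies over the ensemble; these become the pool
# of trial descriptors for GFA model building and optimization.
occupancy = {cell: count / n_snapshots for cell, count in gcods.items()}
print(len(occupancy), "grid-cell/IPE descriptors")
```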
Specifically, the active conformation was postulated by first determining the conformations of the CEP that are within a threshold energy limit of 5 kcal/mol (i.e., only thermodynamically accessible conformations are considered) and then determining which of these conformations has the highest activity as predicted by the model.
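This selection step can be written out directly; the conformer energies and predicted activities below are hypothetical.

```python
# Sketch of postulating the "active" conformation: keep conformers within a
# 5 kcal/mol window of the ensemble minimum, then take the one with the
# highest model-predicted activity. Energies and activities are hypothetical.

def active_conformation(conformers, window=5.0):
    """conformers: list of (conformer_id, relative_energy_kcal_mol, predicted_activity)."""
    e_min = min(energy for _, energy, _ in conformers)
    accessible = [c for c in conformers if c[1] - e_min <= window]
    return max(accessible, key=lambda c: c[2])

confs = [("c1", 0.0, 5.8), ("c2", 2.3, 6.4), ("c3", 7.9, 7.1), ("c4", 4.8, 6.0)]
print(active_conformation(confs))  # ('c2', 2.3, 6.4); c3 is excluded (>5 kcal/mol above the minimum)
```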
2,289
2009-12-01T00:00:00.000
[ "Biology" ]
Modeling and observation of mid-infrared nonlocality in effective epsilon-near-zero ultranarrow coaxial apertures With advances in nanofabrication techniques, extreme-scale nanophotonic devices with critical gap dimensions of just 1–2 nm have been realized. Plasmons in such ultranarrow gaps can exhibit nonlocal response, which was previously shown to limit the field enhancement and cause optical properties to deviate from the local description. Using atomic layer lithography, we create mid-infrared-resonant coaxial apertures with gap sizes as small as 1 nm and observe strong evidence of nonlocality, including spectral shifts and boosted transmittance of the cutoff epsilon-near-zero mode. Experiments are supported by full-wave 3-D nonlocal simulations performed with the hybridizable discontinuous Galerkin method. This numerical method captures atomic-scale variations of the electromagnetic fields while efficiently handling extreme-scale size mismatch. Combining atomic-layer-based fabrication techniques with fast and accurate numerical simulations provides practical routes to design and fabricate highly-efficient large-area mid-infrared sensors, antennas, and metasurfaces. The ability of metal nanostructures to localize light below the diffraction limit and enhance the optical field via surface plasmons (collective oscillations of free charge carriers) forms the basis of nano-optics and plasmonics [1][2][3]. Ultraconfined light in nanoscale apertures, tips, and gaps has been harnessed for surface-enhanced spectroscopy 4 , superresolution imaging 5 , optical trapping 6 , and nonlinear optics 7 . With continual advances in top-down nanofabrication and bottom-up synthesis techniques, researchers can manufacture large-scale metal structures with critical dimensions below even 1 nm 8 . One of the most efficient geometries to realize plasmonic field confinement and enhancement is a nanometric gap formed between two metallic elements [9][10][11]. Pushing these gaps down to sub-nanometer distances in a precisely controlled manner enabled researchers to investigate nonlocal electrodynamics [11][12][13][14][15][16] and light-induced quantum tunneling effects [17][18][19] in the visible and near-infrared regime. Novel applications of resonant nanogap structures and antennas are expanding toward the mid-infrared (MIR) regime (typically 2-20 µm in wavelength), which is the emerging frontier showing great promise for biochemical sensing and spectroscopy [20][21][22][23][24][25]. However, nonlocal electrodynamics in MIR nanophotonic structures and its impact on the device design, ultimate performance (localization and field enhancement), and characterization accuracy have not yet been investigated, due to the significant challenges in both fabrication and numerical modeling. This work presents practical approaches to solve these challenges. In general, nonlocal effects arise from the inhomogeneous nature of matter at the microscopic level. In metals, a strong component arises from intrinsic quantum properties of the electron gas, and can have a measurable impact even on large systems 12 . The most general linear relation between the electric field and the electric displacement vector in a homogeneous medium can be expressed as a convolution, D(r, t) = ε0 ∫∫ ε(r − r′, t − t′) E(r′, t′) dr′ dt′ (Eq. (1)), where ε(r, t) is the nonlocal dielectric tensor. This constitutive relation can be written as D(k, ω) = ε0 ε(k, ω)E(k, ω) in the Fourier domain.
In the local response approximation (LRA), the wavelength is assumed to be much larger than the characteristic dimensions (the lattice spacing of a metal or the Thomas-Fermi screening length), hence the dielectric tensor is invariant with respect to the wavevector, that is ε(r, t) = ε(t)δ(r) in real space, with δ being the Dirac delta function. In this case, Eq. (1) can then be simplified as D(ω) = ε0 ε(ω)E(ω), which is no longer dispersive in space. In metal nanogap structures, however, light acquires an effective wavelength that can be comparable to characteristic microscopic dimensions, and the spatial dispersion can become significant enough to cause experimental observations to deviate from the local model. Extremely localized surface plasmons in the nanogap between a gold nanoparticle and a mirror demonstrated the limitation of plasmonic field enhancement and the nonlocal effect in the visible and near-infrared regimes 12 . While a full quantum mechanical description of optical response is not yet possible for structures other than small clusters 26,27 , a semi-empirical hydrodynamic model has been successfully applied to describe electron-electron interactions in the limit of the Thomas-Fermi approximation in film-coupled nanoparticle systems at optical frequencies 12,28 . To extend this method toward the longer-wavelength regime, in particular the mid-IR regime, it is necessary to address the increased size mismatch between the minimum feature dimensions and the free-space wavelength, which is over ten times larger than in the visible regime, presenting both simulation and fabrication challenges. It should also be noted that most existing works studied nonlocality in simplified two-dimensional (2-D) geometries or in specific (spherically or axially) symmetric three-dimensional (3-D) structures due to the computational burden. Practical applications, however, may involve arbitrarily shaped structures, often arranged in periodic arrays. Hence, the ability to perform nonlocal simulations for full 3-D structures with complex geometries is paramount. We overcome the simulation challenge of resolving rapid field variations over the Thomas-Fermi screening length (~0.1 nm; about 10^5 times smaller than MIR wavelengths) with a fast and accurate 3-D computational method, the hybridizable discontinuous Galerkin (HDG) method, which accounts for nonlocal effects via a hydrodynamic model, while the fabrication challenge is overcome via atomic layer engineering. We quantify the nonlocal effect on both spectral resonance and transmission intensity in the mid-IR by comparing measurements from coaxial apertures with gap sizes of 1-10 nm with numerical calculations based on the hydrodynamic model. While previous work has concluded that nonlocality is responsible for limiting the field enhancement in a film-coupled nanoparticle system 12 , our aperture geometry, which harnesses extraordinary optical transmission (EOT) 29 , allows us to show that nonlocality can positively boost the transmission efficiency by effectively enlarging the gap width.
Results
ENZ mode and nonlocal response. It is not trivial to push the resonance wavelength of metal nanoparticle-based systems toward the mid-IR regime. Moreover, the gap-plasmon resonance of the film-coupled nanoparticle system is characterized via extinction measurements in reflection mode, but many practical applications in nanophotonics require optical transmission through sub-wavelength apertures.
As a practically relevant model system to investigate mid-IR nonlocality in transmission mode, we use coaxial nanoapertures 30-33 that exhibit strong mid-IR resonances and can be made with gap sizes as small as 1 nm via atomic layer deposition (ALD) of gap-filling insulators 34 . The origin of strong optical resonances in a coaxial aperture was previously explained using a mechanism based on the zeroth-order Fabry-Pérot (FP) resonance of the gap mode 31,34 . Alternatively, this mode can be interpreted as the effective epsilon-near-zero (ENZ) phenomenon. ENZ photonics has provided a convenient framework to describe a wide range of phenomena such as electromagnetic tunneling through ultranarrow channels operating at the cutoff condition, uniform phase accumulation, large field enhancement, supercoupling, optical nonlinearity, and nonlocality [35][36][37][38] . Unlike the nonlocality observed in other plasmonic modes with a large wavevector in the propagation direction, the cutoff ENZ mode in a coaxial aperture has a vanishingly small wavevector component along the propagation axis, yet exhibits strong nonlocality as shown below. Coupling Maxwell's equations leads to the wave equation in momentum and frequency space, k × (k × E) + ε(k, ω)(ω²/c²)E = 0 (2). Equation (2) has two solutions depending on the polarization of the electric field. For transverse waves, the divergence-free solution k · E = 0 yields the usual dispersion relation k² = ε(k, ω)ω²/c². On the other hand, the curl-free solution k × E = 0 is satisfied by longitudinal waves (k ∥ E), when the condition ε(k, ω) = 0 is fulfilled. In order to find the longitudinal solutions that satisfy Eq. (2), we need an explicit expression for ε(k, ω). For a metal, the electron dynamics can be described, in principle, by the Lindhard function 39 , which takes into account the full quantum nature of the electron gas. In general, however, when dealing with inhomogeneous, finite-size systems, it becomes very difficult to work in the reciprocal k-space and a real-space implementation is required. To this end, a hydrodynamic description of the electron dynamics inside a metal provides a relatively simple tool to predict nonlocal response effects in large plasmonic systems. The hydrodynamic model can be summarized by the following equation 40 : β²∇(∇ · P) + (ω² + iγω)P = −ε0 ωP² E (3). The first term in Eq. (3) gives rise to the nonlocal response (the electric field at the point r not only generates currents at r but also in its neighborhood) due to the presence of spatial derivatives. From a physical point of view, the first term in Eq. (3) arises from the electron quantum pressure that prevents charges in the metal from occupying the exact same state, i.e., induced charges do not collapse into a delta function at the metal surface. The nonlocal parameter β is proportional to the Fermi velocity v F , and for a 3-D system the Thomas-Fermi approximation gives β² = v F ²/3. At high frequencies (including optical frequencies), however, β² = 3v F ²/5, such that Eq. (3) gives the same result in a free-electron gas as the Lindhard function, up to O(k²) 41 . It is useful to extract from Eq. (3) the spatially dispersive permittivity ε L for the longitudinal modes: ε L (k, ω) = 1 − ωP²/(ω² + iγω − β²k²) (4). We notice (neglecting for a moment the damping term) that the dispersion relation of longitudinal waves ε(k, ω) = 0 is satisfied by Eq. (4) for real k only if ω > ω P , giving rise to propagating bulk plasmons. Because of inter-band absorption, it is hard to observe these waves in real metals, although some resonances due to bulk plasmons can be detected in sufficiently small systems 42 .
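To make the longitudinal dispersion concrete, the short numerical sketch below evaluates a hydrodynamic permittivity of the form just described and solves ε L (k, ω) = 0 for the longitudinal wavevector. The Drude parameters and Fermi velocity used here are illustrative assumptions, not fitted values from this work.

```python
import numpy as np

def eps_longitudinal(k, w, wp, gamma, beta, eps_inf=1.0):
    """Hydrodynamic longitudinal permittivity: eps_inf - wp^2 / (w^2 + i*gamma*w - beta^2 k^2)."""
    return eps_inf - wp**2 / (w**2 + 1j * gamma * w - beta**2 * k**2)

def k_longitudinal(w, wp, gamma, beta, eps_inf=1.0):
    """Complex wavevector solving eps_L(k, w) = 0 (bulk-plasmon dispersion)."""
    return np.sqrt(w**2 + 1j * gamma * w - wp**2 / eps_inf) / beta

# Illustrative, roughly gold-like free-electron parameters (assumed values):
wp, gamma = 1.37e16, 1.0e14          # plasma frequency and collision rate, rad/s
vF = 1.4e6                           # Fermi velocity, m/s
beta = np.sqrt(3.0 / 5.0) * vF       # high-frequency value, beta^2 = 3 vF^2 / 5

for w in (0.5 * wp, 1.2 * wp):
    k = k_longitudinal(w, wp, gamma, beta)
    kind = "evanescent" if abs(k.imag) > abs(k.real) else "propagating"
    print(f"w/wp = {w / wp:.1f}: k_L = {k.real:.2e} + {k.imag:.2e}j 1/m ({kind})")
```

For frequencies below the plasma frequency the returned wavevector is dominantly imaginary, which corresponds to the evanescent bulk plasmons discussed next.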
For ω < ω P , however, solutions with imaginary k exist. These solutions are associated with evanescent bulk plasmons. Although they do not propagate in the bulk region, they nonetheless exist at the metal surface and are responsible for inducing a charge accumulation (or depletion) at the metal surface that spreads to the internal metal volume, causing an observable deviation of optical properties compared with a purely local description. In the Thomas-Fermi approximation, the hydrodynamic model does not account for electron spill-out nor electron tunneling. These effects only become important for gaps below half a nanometer, and can therefore be safely neglected in this work. In general, however, they can be accounted for by including gradient dependent corrections to the kinetic energy functional of the electron gas 41,43,44 . Nonlocal effects have a strong impact on systems supporting ENZ modes, such as nanowire-based metamaterials 45 and thin films 42 , for which the ENZ condition occurs around their plasma frequencies (ω P ). The coaxial nanoaperture can serve as a practical platform to study the nonlocality triggered by the ENZ effect in the long-wavelength regime, since its effective ENZ mode can be widely tuned from the near-to mid-IR, and even terahertz regimes while confining the field inside sub-10-nm gaps 34,46,47 . In our previous works in mid-IR and terahertz frequencies 34,46,47 , the EOT phenomena were demonstrated through such ultranarrow coaxial nanoapertures. While the ENZ mode resonances at 10 and 7 nm gaps were in good agreement with the local modeling results, the blueshifts of the ENZ mode resonance from the local modeling results began to appear as the gap size decreased below 5 nm and became apparent at 2 nm gap. Such a large deviation, which cannot be explained by only the fabrication imperfection or variation, drives us to elucidate the origin of the discrepancy between the experiment and the local calculation occurring in ultranarrow gaps in the perspective of nonlocality by combining the sophisticated fabrication process and the advanced HDG modeling equipped with a hydrodynamic model. As illustrated in Fig. 1a, our device consists of an array of coaxial nanoapertures arranged in a square lattice. After patterning gold pillars (250 nm diameter) on a sapphire wafer, ALD-grown Al 2 O 3 films on the sidewalls precisely control the gap size. After depositing a second layer of gold film and glancingangle ion milling, the planarized top surface exposes a dense array of coaxial nanoapertures. The diameter of each coax (250 nm) and the array periodicity (500 nm) are about an order of magnitude smaller than the MIR resonance wavelength (~3-7 µm), thus our structure can be considered a metamaterial. Each coaxial aperture can support a TE 11 -guided mode at the cutoff frequency, when illuminated with linearly polarized light at normal incidence. The fundamental TEM mode of the coax, however, cannot be excited in that configuration due to the mismatch in mode symmetry. The cutoff resonance is entirely determined by the geometry of a single coaxial waveguide such as the inner diameter (D in ) and gap size (G) following the dispersion relation in the coaxial waveguide derived below. Let us start from the dispersion relation of planar metal-insulator-metal (MIM) structures: Here ε m and ε i are the relative permittivities of the metal and the insulator, respectively, k 0 = ω/c is the free-space wavenumber, and d is the space between two metal plates. 
Equation (5) can also approximately describe the dispersion relation of a coaxial waveguide by inserting the total propagation number κ mim , consisting of two components 48 : κ mim ² = κ² + k θ ², with k θ = Γ/r (6), where κ is the wavevector component along the propagation axis, k θ is the transverse component in the cross-sectional plane, r is the radius of the coaxial waveguide, and Γ is an integer representing the angular momentum. Combining Eqs. (5) and (6) leads to the dispersion relation for a coaxial waveguide depicted in Fig. 1c. The real part of the propagating wavenumber κ vanishes spectrally close to the cutoff frequency, whereas its imaginary part increases over the cutoff frequency. This points out that the effective permittivity of a coaxial waveguide at the cutoff frequency is near-zero, so that the system behaves as if it were filled with a near-zero-permittivity metamaterial, thus showing effective ENZ properties. Although in this work we perform full-wave 3-D numerical calculations of the coaxial waveguide system, the dispersion relation obtained above can be used to intuitively analyze the nonlocal effect at the ENZ cutoff frequency. In the coaxial waveguide, the perpendicular wavevector component k m that determines the exponential decay into the metal surface can be derived as k m = √(κ² + (Γ/r)² − ε m k 0 ²). As expected, as the radius r becomes infinitely large, the coax structures tend toward a planar MIM structure, so that the geometric term disappears and the in-plane wavevector component κ becomes the dominant parameter for the nonlocal effect. Conversely, k m in the coaxial waveguide is largely affected by the geometric term r, since κ is very small for the ENZ mode. Thus, as the radius is reduced, k m can become very large, boosting the nonlocal effect at the ENZ cutoff condition in the coaxial aperture. However, it is interesting to note that 1/r is just the transverse component of the coaxial wavevector, k θ = Γ/r. It is clear from the first of Eqs. (6) that for κ ≈ 0, κ mim ≈ k θ , so that for a coaxial waveguide, k θ plays the same role as κ mim in the MIM system. The main difference is that while the light is "going around" the coax ring, it is also slowly moving (v g = ∂ω/∂k) forward through the film, maximizing the interaction time of the light with the metal, and thus increasing the nonlocal signature of the system. 3-D computational modeling of mid-IR nonlocality. While the above analytical approach, based on the assumption that the material parameters vary slowly with respect to the wavelength, can provide intuitive ideas of how the ENZ mode should behave in coaxial apertures, full-wave 3-D simulations are required to quantitatively analyze the resonance patterns of the coaxial structure and compare with experimental data. For this purpose, we have performed these simulations using the hybridizable discontinuous Galerkin method. The HDG method is a high-order accurate, stabilized, and locally conservative finite-element numerical scheme designed to resolve rapid field variations in complex geometries spanning multiple length scales. This method has allowed researchers to solve acoustic, elastic, and electromagnetic wave propagation problems more efficiently than other presently available finite-element techniques. The HDG method for 3-D time-harmonic Maxwell's equations 49 has been used for local nanophotonics simulations 34 showing a remarkable agreement with experimental results.
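As a rough cross-check of this picture, the dispersion relation of Eqs. (5) and (6) can be solved numerically for the ENZ cutoff wavelength. The sketch below assumes the standard symmetric-mode MIM relation tanh(k_i d/2) = −ε_i k_m/(ε_m k_i) for Eq. (5), a lossless Drude model for gold, and a constant alumina permittivity; all parameter values are assumptions for illustration, not the material data used in the full simulations.

```python
import numpy as np
from scipy.optimize import brentq

c = 2.998e8                           # speed of light, m/s
wp, eps_inf_m = 1.37e16, 1.0          # assumed lossless Drude parameters for gold (rad/s)
eps_i = 2.25                          # assumed mid-IR permittivity of the ALD alumina

def mim_coax_residual(lam, gap, radius, ang_momentum=1, kappa=0.0):
    """Residual of the MIM dispersion relation evaluated with the coaxial
    total wavenumber of Eq. (6): kappa_mim^2 = kappa^2 + (Gamma / r)^2."""
    k0 = 2 * np.pi / lam
    eps_m = eps_inf_m - wp**2 / (k0 * c)**2            # lossless Drude metal
    kappa_mim2 = kappa**2 + (ang_momentum / radius)**2
    k_i = np.sqrt(kappa_mim2 - eps_i * k0**2)          # decay constant in the gap
    k_m = np.sqrt(kappa_mim2 - eps_m * k0**2)          # decay constant in the metal
    return np.tanh(k_i * gap / 2) + eps_i * k_m / (eps_m * k_i)

# ENZ cutoff of the TE11-like mode: kappa ~ 0, Gamma = 1, D_in = 250 nm, G = 10 nm.
lam_enz = brentq(mim_coax_residual, 2e-6, 6e-6, args=(10e-9, 125e-9))
print(f"estimated ENZ cutoff wavelength ~ {lam_enz * 1e6:.2f} um")
```

With these assumed numbers and a 250 nm inner diameter, the root lands in the few-micron range for a 10 nm gap, in the same range as the measured ENZ resonances.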
The HDG method has been extended to account for nonlocality in 3-D structures 50 , more specifically a periodic nanoaperture structure for low terahertz frequencies. Unlike other commercially available finite-difference time-domain (FDTD) or finite-element method (FEM) solvers, the HDG method employs arbitrarily high-order approximations, thereby reducing the numerical error to levels where the solution is practically insensitive to the fineness of the mesh discretization. High accuracy and stability are critically important when one considers resonant phenomena and solutions which exhibit extreme-scale variations and highly localized features such as ultrathin charge distribution layers resulting from nonlocality. In addition, the HDG formulation can easily accommodate divergence-free constraints without resorting to curl-conforming subspaces and exhibits fewer globally coupled unknowns and higher convergence rates than other finite-element methods, hence it is more computationally efficient and accurate, see Supplementary Note 1. In this paper, we use the novel HDG method for the hydrodynamic model introduced in ref. 50 to simulate nonlocality in ultranarrow coaxial structures. To the best of our knowledge, this is the first time that full 3-D nonlocal plasmonic simulations have been performed for extended 3-D periodic nanostructures in the long-wavelength MIR regime. The simulations are performed on a highly anisotropic mesh with 1960 hexahedra, with a higher concentration of elements near the metal-insulator boundaries of the structure, see Fig. 2a and the Methods section for further details. For the different gap widths, we compute the LRA, as well as the nonlocal response for β² = bβ0², where the baseline value for the nonlocal parameter is the first-principles high-frequency value, β0² = 3v F ²/5, and b is the only fitting parameter in our model (note the LRA would correspond to b = 0). For the simulations we first considered b = 1, and in light of the significant blue-shift observed with respect to experiments for small gaps (6.6% for 1 nm and 7.5% for 2 nm), we increased b to better match the measured resonances. The best agreement with experimental data was obtained when b = 1.5. For G = 10 nm and b = 1, we show in Fig. 2b the magnitude of the field enhancement |E x | of the ENZ mode for the resonant wavelength 2.87 μm. The plasmonic hotspots are located near the metal tips, and the mode is nearly constant along the thickness of the gap. The enhancement is maximized in the direction of the polarization. An important aspect of nonlocal simulations is that, contrary to LRA, the electric field penetration inside the metal can be captured. The one-dimensional |E x | profile along y = z = 0 for LRA and the nonlocal response is depicted in Fig. 2c, at the resonant wavelength of each model for the 1 nm gap. The nonlocal model exhibits lower enhancement in the alumina, and inside the metal the enhancement decays more gradually than with LRA, forming a boundary-layer type structure with a skin depth that depends only on the nonlocal parameter β. Hence, the gap seen by the incident wave is effectively enlarged in the nonlocal model. The relative effective enlargement is more significant for narrower apertures, since the skin depth is constant across gap widths. The transmittance spectra for all gaps and models, the tracking of the resonant wavelength and peak transmittance are shown in Fig. 3a-c.
The nonlocal model blueshifts the resonances with respect to LRA, and larger relative shifts are observed for larger b values (1 nm gap: 9.5% for b = 1 and 11.5% for b = 1.5) as well as for smaller gaps (9.5% for 1 nm gap and 1.4% for 10 nm gap). An interesting phenomenon we noticed is that nonlocal transmittance is greater than local transmittance only for gaps below 7 nm. Indeed, in addition to enlarging the aperture (increase in transmittance), nonlocality reduces peak field enhancement (decrease in transmittance). For small gaps, we observe an overall increase in transmittance since the relative gap enlargement is significant (skin depth depends only on β) and absolute enhancement is still high, albeit lower than with LRA, see Fig. 3d. However, for large gaps the aperture enlargement is not sufficient to compensate for the quenching of enhancement, hence transmittance decreases. Experimental verification of mid-IR nonlocality. Once the inner diameter of a coaxial aperture (D in ) is fixed, the resonance of the ENZ mode can be tuned by changing its gap size, which is defined by the thickness of the alumina ALD layer in our fabrication process. It provides us with the degree of freedom necessary to explore the nonlocal effect on the spectral resonance shift as well as the transmitted intensity at the ENZ mode by scaling the thickness of the alumina layer down from 10 to 1 nm. We measured mid-IR transmission through large-area coaxial aperture arrays with seven different gap sizes (10, 7, 5, 4, 3, 2, 1 nm) as shown in Fig. 4a, and compared them with numerically calculated results. The transmission spectrum of each sample was measured at eight different positions on the sample to obtain a statistically valid data set incorporating the inhomogeneity of coaxial apertures. Gray scattered points indicate the distribution of the measured data set, and solid line represents the average of eight Fig. 4, the ENZ mode of a 10 nm gap sample shows a very strong resonance peak (as high as 24% in absolute transmission) through a very small open area of 3%. As the gap size is reduced, the ENZ-mode resonance shifts toward longer wavelengths. The resonance as a function of the gap size is plotted in Fig. 4b and compared with the local and nonlocal modeling results. It is clearly seen that the deviation of experimentally measured resonance from the local modeling results tends to augment as the gap shrinks. The experiment and local modeling results agree well for 10 and 7 nm gaps. Below 7 nm gap size, however, the discrepancy between measurements and the local model calculations begins to increase. In the extreme case, the measured resonance of a 1 nm gap sample is 1000 nm away from what the local model predicts. On the other hand, the experimental and nonlocal modeling (b = 1.5) results match well down to the 3 nm gap size, while they still show~5% deviation for 1 and 2 nm gap sizes (see Fig. 4d). In addition to the influence of nonlocality on the resonance shift, we investigate how it affects the MIR transmission intensity through ultranarrow coaxial apertures. The transmission measurements are compared with modeling results in Fig. 4c. For gap sizes of 10, 7, 5, and 4 nm, we observe a progressive improvement of the agreement between the calculated intensity values and the measured transmittance as the gap size shrinks. Although the measured quantitative values could be affected by many parameters, the qualitative trend that we observe could be explained by the following theoretical argument. 
The broadening of the transmittance resonances (hence their peak intensity values) can be associated with two different mechanisms: on the one hand, there are fabrication imperfections (i.e., shape and size of the rings are not perfectly uniform) that should introduce a constant broadening across the different gap sizes; on the other hand, there is the intrinsic broadening due to ohmic losses in the metal, which increases for smaller gaps due to the fact that the gap-plasmon mode becomes more lossy. Since in our numerical calculation we assume perfectly uniform structures, agreement with experimental data improve as the ohmic losses increase, dominating the overall broadening process. For sub-3 nm gap sizes, however, the measured transmission is higher than predicted by the simulations. In particular, at the 1 nm gap size, the measured intensity is two times larger than the calculated value. Even after incorporating the nonlocal effect in modeling, large deviations of resonance shift as well as transmission intensity from the nonlocal modeling are still observed at extremely small gap dimensions of 1 and 2 nm. As illustrated in Figs. 2 and 3, the nonlocality and the resulting smearing of electrons can effectively enlarge the gap size that contributes to the outgoing radiation from the aperture. This enlargement of the effective gap size will be more prominent for smaller gap sizes. However, we assumed a constant nonlocal parameter (b) for all gap sizes from 10 to 1 nm, which may give rise to the deviation from the experimental results in terms of resonance shift and enhanced transmission. Discussion Our study clearly shows that the nonlocal effect cannot be ignored for accurate theoretical prediction and experimental characterization of ultranarrow gap structures in the MIR domain. In particular, the coax geometry proposed and the fabrication techniques used minimize the impact of roughness on the system resonances. In fact, because the ALD process is highly conformal, the presence of roughness would randomly shift the resonance of each aperture toward higher or lower frequencies, producing a global broadening of the resonance without affecting the peak center of mass. Our observations confirm a trend that is accurately reproduced by numerical calculations, leaving in our opinion, a small margin to other interpretations. These results are consistent with previously published work 12 and further corroborate the fact that nonlocality (or nonlocal-like effects) leads to the blue-shift of gap-plasmon resonances in gold nanostructures with respect to local predictions. Our approach based on the hydrodynamic model and full 3-D HDG simulations provides a practical route and unparalleled efficiency for researchers to design high-performance MIR nanogap antennas, large-area ENZ metamaterials, and biochemical sensors 51,52 with nonlocal corrections. Our experiments show that as the gap size shrinks below 1 nm the physics might actually become more complex than what is captured by the hydrodynamic description. In the future, effects such as light-induced electron tunneling, electron spill-out and other quantum phenomena should be included in addition to nonlocality. These effects will pose significant new challenges from a computational perspective, as additional nonlinear equations that describe the behavior of electron density need to be simultaneously solved, and extremely refined meshes are required to properly capture the quantum phenomena. 
Furthermore, the thickness dependence of the dielectric spacer might play a crucial role in describing the electromagnetic response of subnanometer-gap systems. While daunting challenges exist for simulating large 3-D quantum plasmonic devices, the HDG method combined with the ever-increasing computational and storage power will provide researchers with a new route to simulate complex geometries involving extreme-scale size mismatches and rapid variations in atomic-scale charge distribution layers. On the experimental side, atomic layer lithography will also enable researchers to overcome technological challenges of mass-producing ultrasmall resonant gaps as well as probing and harnessing quantum plasmonic phenomena at the wafer scale. deposited on the patterned substrate using electron-beam evaporator (CHA, SEC 600). After a lift-off process in acetone and an oxygen plasma cleaning (STS, 320PC) at 100 W for 30 s to remove resist residue, the resultant Au nanodisk array was coated with an Al 2 O 3 film using ALD (Cambridge Nano Tech Inc., Savannah) at a typical deposition rate of 1 Å per cycle at 250°C, followed by conformal deposition of a 400 nm Au film using an electron-beam evaporator (CHA, SEC 600) with a planetary fixture. Finally, the structures were planarized via ion milling (Intlvac, Nanoquest) with an Ar beam of 130 mA and 36 V incident at 10º from the horizontal plane. Methods FTIR measurement. Transmission spectrum measurement was performed on a Thermo Fisher Scientific Nicolet iS 50 FTIR spectrometer with a liquid nitrogencooled mercury cadmium telluride (MCT) detector under the acquisition settings of 150 scans with 4 cm −1 resolution. All samples were measured using a Nicolet continuum infrared microscope with the Reflechromat objective and condenser of 15× and a 0.58 numerical aperture. The signals were collected from the sample area of 100 × 100 μm 2 through a knife edge aperture. The background signal was measured from a bare sapphire substrate under identical acquisition conditions and was then used for the normalization. The total height of the mesh is 1 µm. Absorbing conditions are validated by simulating the structure with PML layers on the outer boundaries. For the 1 nm gap mesh, the volume ratio between the largest and the smallest element is 10 5 . The mesh resolution is chosen so that transmittance results are grid-converged to <0.1% at resonance. All relevant formulation and implementation details of the HDG method for these nanocoax simulations may be found in Supplementary Note 1. The dielectric constant for thin-film alumina is extracted from measurements in Kischkat 53 , whereas the optical constants for gold are obtained from Olmon et al. 54 . Data availability The data that supports the finding of this study are available from the corresponding author upon request. Code availability The code used to compute the results in this paper may be downloaded from https:// github.com/ferranvidal/nanocoax.
6,718.6
2019-10-02T00:00:00.000
[ "Physics" ]
Microcontroller-Based Direct Torque Control Servodrive Robot technology has become an integral part of the automotive industry in several tasks such as material handling, welding, painting, and part assembly. Therefore, the knowledge and skills to control the electric motors in these manipulators are essential for undergraduate electrical engineering students. Currently, the digital signal processor (DSP) is the core chip in industrial motor-control drives; however, the implementation of DSP control algorithms can be quite challenging for an experienced programmer, even more so for the novice. Considerable research has been done on this topic, although authors usually focus on DSP-based motor drives using popular control techniques such as field-oriented control (FOC). Although highly efficient, this approach is usually reserved for postgraduate education due to its complex structure and functionality. In this paper, the authors present a modular servodrive design on a low-cost, general-purpose microcontroller using the direct torque control (DTC) method, an alternative known for greater simplicity and faster torque response compared with FOC. The system design was based on the Micropython language, allowing the software structure to be more manageable and the code to be more understandable. This design will be useful to undergraduates and researchers with interests in motor control design. Introduction It is well known that robotic machines are becoming an extensive and vital part of our everyday life. Especially in manufacturing industries such as the automotive industry, robotic manipulators are used for a wide variety of applications such as material handling, arc and spot welding, painting, and part assembly, among others. Almost any task which may be repetitive, difficult, or hazardous for a human being will usually involve some kind of robotic machinery [1]. For this reason, it is paramount for electrical engineering undergraduate students to learn the fundamentals of industrial servomotor control. To achieve fast and accurate movements in a robot, servomotors are controlled employing high-performance techniques, such as field-oriented control [2,3]. This method allows control of the motor's speed while maintaining constant torque. Although FOC is commonly used in industrial motor controls due to its high efficiency, its physical implementation can be quite challenging since complicated mathematics is involved. Also, it requires a fast and robust processor and great design efforts [3]. For these reasons, FOC-based servodrive design is a topic usually reserved for graduate or postgraduate engineering education. Direct torque control (DTC) is another high-performance motor control method which may not be as popular or widespread as FOC; however, it has many merits such as a simpler structure, low computational complexity, quick response time, and good dynamic performance [4]. This allows the implementation to require a less powerful processor [5] and a less experienced programmer for the software design. DTC has minor disadvantages compared with FOC, such as difficulty controlling the motor at low speeds, variable switching frequency, and high-frequency ripple present on the flux and torque signals [3]. Several techniques have been proposed to overcome these issues in classical DTC drives; however, the torque dynamics and the simplicity of the algorithm structure are mostly lost [2,3]. In this paper, the authors present a prototype servocontroller based on the classical DTC technique.
The electronic device used is the Espressif ESP32, which is a 32 bit, low-cost embedded processor. For the complete coding of DTC, the Micropython programming language was used. Experimental tests were carried out to validate the effectiveness of the design. This allowed for the implementation of an open-source, user-friendly controller for permanent magnet synchronous motors (PMSMs). Although the aforementioned disadvantages of the DTC technique are present, the aim of this project is the design of a modular DTC test-bed for educational purposes. This will allow a fundamental understanding of DTC, as well as a simplified implementation and experimental verification of future improvements. Motor Control Strategies Industrial AC motors are used in a wide variety of applications that run at a constant speed regardless of the load, for example, fans or pumps. But there are also instances where it may be necessary to control the speed or torque, and in this case, a motor drive is required. Currently, there are several techniques used by manufacturers for motion control, two of the most dominant being scalar control and vector control [6]. Scalar control is based on mathematical expressions that describe the machine in the steady state. In this type of control, there is no motor feedback; therefore, torque control is inefficient [7]. Speed control is carried out by varying the voltage and frequency ratio (V/Hz) so that this quotient is always constant. Although these motor drives are simple and economical, speed regulation is only around 3 percent of the motor's base frequency, and, beyond that, torque is greatly diminished [8]. For this reason, this method is not adequate for high-precision applications, such as industrial robot manipulators or metal-machining centers. Alternatively, vector control techniques such as FOC are more powerful and efficient, although the control algorithm is significantly more complicated than V/Hz [6]. This methodology, developed by Blaschke [9] in the early 1970s for induction machines, is a closed-loop system; motor output signals such as voltage, current, and shaft position are constantly monitored to estimate the current and magnetic flux spatial vectors. The basic idea is to reorient these vectors perpendicular to each other in order to achieve maximum torque per amp, regardless of the motor's speed [3]. Unlike scalar control, FOC is based on mathematical expressions that describe the steady-state and transient-state model of the machine. This technique not only manipulates the magnitude and angular velocity but also controls the instantaneous position of the spatial vectors of voltage, current, and magnetic flux, allowing a more stable and precise speed and torque control [2]. Although FOC is a high-performance and well-established technology widely applied in power electronics, it is well known that certain features make it difficult to implement in a motor drive. For instance, it has a cascaded architecture that involves coordinate-transformation blocks, proportional-integral (PI) controllers which contain many parameters to be tuned, current regulators, and pulse width modulation (PWM) signal generators, all of which increases implementation complexity and execution time [10]. Therefore, FOC usually requires a specialized electronic device optimized for processing digital signals in real time. Such a device is usually a DSP, since its architecture is enhanced for this kind of task.
In other cases, a field-programmable gate array (FPGA) may also be a good alternative, considering that it is possible to design customized parallel operations in it to lower execution times [11][12][13]. The main requirement for using these devices is certain expertise in programming languages, such as C++ for the DSP; for the FPGA, it will usually be either VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) or Verilog [14,15]. Classical DTC Principle DTC is a motor control strategy that works differently from FOC. It was proposed in the mid-1980s by I. Takahashi for controlling induction motors [16]. The main idea in DTC is to estimate the stator flux and electromagnetic-torque spatial vectors, and to control their magnitudes separately and independently with the use of hysteresis controllers [17,18]. The outputs of the hysteresis controllers are applied directly to a predefined table to select the optimum voltage vector for the voltage-source inverter (VSI). With it, the torque and flux vectors are controlled directly and maintained within two hysteresis bands [19]. In DTC there is no need for PI or current controllers. Also, since DTC is in the stator reference frame, no coordinate transformation is applied. Additionally, due to the direct control of torque and flux, a PWM modulator is not necessary [20][21][22]. For these reasons, execution time is lower, and dynamic torque response is higher, compared with FOC [23]. A block diagram of the classical DTC strategy is shown in Figure 1. Torque and Flux Estimation. The DTC algorithm is quite straightforward. The process starts by sensing two lines of the three-phase motor current (i a , i b ). The values are converted from the stator three-phase reference frame (a, b, c) to a two-phase reference frame (α, β) using the Clarke transformation. The DC link voltage (V dc ) is also measured and converted to the two-phase reference frame; together with the last vector applied to the VSI (S a , S b , S c ), the two-phase voltage components are obtained. To estimate the stator magnetic flux vector (ψ S ), the stator resistance, voltage, and current (R s , V s , I s ) are used in the integral ψ S = ∫ (V s − R s I s ) dt. However, since the DTC strategy must be implemented digitally, this expression must be formulated as a discrete operation. Therefore, by changing the integration to an accumulated sum and by using the stator voltage and current two-phase components, it can be rewritten as ψ α (n + 1) = ψ α (n) + (v α (n) − R s i α (n)) T s and ψ β (n + 1) = ψ β (n) + (v β (n) − R s i β (n)) T s , where n denotes the current discrete sample and n + 1 the next discrete sample. As for T s , it represents the sampling time, whose value is based on factors such as microcontroller speed and algorithm execution time. Now, by using the flux components calculated in (4) and (5), the total magnitude of the stator flux vector will be |ψ S | = √(ψ α ² + ψ β ²). Also, the angle of the flux vector can be calculated as θ s = arctan(ψ β /ψ α ). This angle is of major importance since it will help determine in which sector the flux vector is located. To estimate the electromagnetic torque, the two-phase current and flux components and the motor pole pairs (p) are combined in the cross-product expression T e ∝ p (ψ α i β − ψ β i α ). Once the torque and flux have been estimated, they are compared with their respective reference values to obtain the torque error (e T ) and the flux error (e ψ ). These error values will be the input to the hysteresis comparators. Hysteresis Comparators. In DTC, hysteresis comparators are used as a fast and simple equivalent to the PI controllers used in FOC.
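Before detailing the comparators, the estimation steps above map directly onto a few lines of code. The following Python sketch uses the common amplitude-invariant Clarke transform and the standard 3/2·p torque expression; the exact scaling constants of the paper's equations are assumed rather than reproduced.

```python
import math

def clarke(i_a, i_b):
    """Amplitude-invariant Clarke transform of two measured line currents (i_c = -i_a - i_b)."""
    return i_a, (i_a + 2.0 * i_b) / math.sqrt(3.0)

def inverter_voltage(v_dc, s_a, s_b, s_c):
    """Two-phase stator voltage reconstructed from the DC-link voltage and the
    switching state (S_a, S_b, S_c) last applied to the VSI."""
    v_alpha = (2.0 / 3.0) * v_dc * (s_a - 0.5 * (s_b + s_c))
    v_beta = v_dc * (s_b - s_c) / math.sqrt(3.0)
    return v_alpha, v_beta

def flux_torque_step(psi_a, psi_b, i_alpha, i_beta, v_alpha, v_beta, r_s, t_s, p):
    """One discrete estimation step: accumulate the stator flux (forward-Euler sum),
    then return its magnitude, angle, and the estimated electromagnetic torque."""
    psi_a += (v_alpha - r_s * i_alpha) * t_s
    psi_b += (v_beta - r_s * i_beta) * t_s
    psi_mag = math.hypot(psi_a, psi_b)
    theta_s = math.atan2(psi_b, psi_a)
    torque = 1.5 * p * (psi_a * i_beta - psi_b * i_alpha)
    return psi_a, psi_b, psi_mag, theta_s, torque
```

The same arithmetic runs unchanged on the microcontroller, since only the standard math module is used.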
Their function is to restrict the magnitude of the torque and flux errors to values between two limits. Narrow hysteresis bands lead to smooth sinusoidal current and torque waveforms, although they will also cause high switching frequencies and greater switching losses [10]. Therefore, the hysteresis comparators must have appropriate limits for good DTC performance. The flux-controller output (ϕ) has two possible values, 0 or 1, depending on whether the flux error (e ψ ) is under the lower limit or over the upper limit, respectively. In contrast, the torque-controller output (τ) has three possible values, −1, 1, or 0, depending on whether the torque error (e T ) is under the lower limit, over the upper limit, or between both, respectively. Both hysteresis controllers are shown in Figure 2. The output values of both controllers, together with the spatial placement of the flux vector, are used to select the optimal voltage vector for the VSI from the vector table. Optimal Vector Selection. The flux vector rotation space is divided into six equally spaced sectors (1-6) containing six active vectors (V 1 to V 6 ) and two null vectors (V 0 and V 7 ). Figure 3 shows how the voltage vectors are distributed in each sector. To select the optimum vector to apply to the VSI, the hysteresis outputs (ϕ, τ) are used. The flux vector angle θ s is also required, but only to determine in which sector the flux vector is located. Both torque and flux can be controlled directly by selecting the appropriate active voltage vector and regulating their magnitudes within their respective hysteresis limits. When a null vector is applied to the VSI, it serves as a soft transition from one switching state to the next. Null vectors also prevent a short circuit in the DC bus due to a delay in the turn-off time of the power switches [24,25]. To understand the effect of the voltage vector on the flux or torque vector, an example is presented. In Figure 3, the flux vector ψ s is located in sector 1 and is currently rotating (Table 1). The vectors that would have a greater impact on both torque and flux are V 2 , V 3 , V 5 , and V 6 . Applying V 2 would increase both the torque and the flux, whereas V 5 would decrease both. Instead, if V 3 is applied, it would increase the torque but decrease the flux. Similarly, if V 6 is applied, it would increase the flux and decrease the torque. Applying V 1 or V 4 would have an impact mostly on the flux vector, either decreasing or increasing it, respectively. Lastly, if either null vector is applied, it would slowly decrease both vectors. Therefore, for this example, if sector = 1, ϕ = 1, and τ = 1, it can be seen in Table 1 that the vector to be applied would be V 2 . Once the optimal vector is selected, it is applied directly to the VSI for torque and flux correction. DTC Servodrive Description The DTC servodrive was designed as a customized and modular test-bed for industrial servocontrol. The main idea is that each one of the modules may be replaced by another with a different type of hardware. For example, the current sensor module in this design uses inductive sensors but can easily be replaced by a Hall-effect sensor module or a shunt resistor module. Also, the processing module, which currently uses a 32 bit microcontroller, may be replaced in the future by a different type of embedded system. Even the DTC algorithm has an open-source architecture, so that the main blocks can be modified or enhanced, or new functionalities may easily be added.
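Returning to the comparator and lookup logic described above, a compact sketch is given below. The comparator thresholds follow Figure 2, and the vector table follows the classical Takahashi pattern, with which the example in the text (sector 1, ϕ = 1, τ = 1 gives V 2 ) is consistent; the exact null-vector alternation used in the paper's Table 1 is an assumption here.

```python
# Voltage vectors of a two-level VSI as (S_a, S_b, S_c) switch states.
VSI_VECTORS = {0: (0, 0, 0), 1: (1, 0, 0), 2: (1, 1, 0), 3: (0, 1, 0),
               4: (0, 1, 1), 5: (0, 0, 1), 6: (1, 0, 1), 7: (1, 1, 1)}

def hysteresis_flux(e_psi, band, prev_phi):
    """Two-level comparator: 1 above the upper limit, 0 below the lower limit,
    previous output kept while the error stays inside the band."""
    if e_psi > band:
        return 1
    if e_psi < -band:
        return 0
    return prev_phi

def hysteresis_torque(e_t, band):
    """Three-level comparator: +1 above, -1 below, 0 inside the band."""
    return 1 if e_t > band else -1 if e_t < -band else 0

def select_vector(sector, phi, tau):
    """Optimal vector lookup in the spirit of Table 1 (classical Takahashi table)."""
    if tau == 0:
        # Null vector as a soft transition; the alternation rule is one common convention.
        return VSI_VECTORS[7 if (sector + phi) % 2 else 0]
    if phi == 1:
        idx = sector + 1 if tau == 1 else sector - 1
    else:
        idx = sector + 2 if tau == 1 else sector - 2
    return VSI_VECTORS[(idx - 1) % 6 + 1]

# Worked example from the text: sector 1, phi = 1, tau = 1 selects V2.
assert select_vector(1, 1, 1) == VSI_VECTORS[2]
```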
The primary reason is to be able to experimentally test and visualize the effects these changes have on the motor performance. The servodrive system is shown in Figure 4. A list of the electronic components used in this prototype is presented in Table 2. Current-Voltage Sensing and Signal Conditioning Circuitry. One of the most important stages of the DTC process is current sensing. Since the flux and torque estimations are based on the current values, it is necessary to have a precise current sensor. Firstly, the current signals of two of the motor lines are measured and shifted into the positive range by adding an offset voltage. Secondly, the signal is amplified and filtered so it can be converted to a digital value using the analog-to-digital converters (ADC). Antialiasing filters are employed to effectively cut off frequencies higher than about half the maximum frequency of interest. For this project, a MURATA 56300C inductive current sensor was used. As shown in Figure 5, the current is sensed, offset to positive values, and amplified using operational amplifiers. To calculate the two-phase components of the voltage, the DC power bus must be monitored in case of voltage drops. For the DC voltage sensing (Figure 6), a resistive divider circuit was used, also amplified using JFET-input operational amplifiers. Since this voltage is always positive, no offset is required. The signal is scaled, amplified, and filtered so it can be converted to a digital value. Table 1: List of all optimal vectors for the VSI based on the sector number (1-6), the torque error (τ), and the flux error (ϕ). Power Electronics Circuitry. In this stage, the three-phase voltage is converted to a DC signal so that it can be converted back again to a controlled AC signal for the motor. For this stage, industrial-grade power modules were used for the rectifier bridge and the VSI (Figure 7). Two 1000 μF electrolytic capacitors are connected in parallel for the DC filtering stage. DTC Processing Module. The processor used for the implementation of the DTC technique is the ESPRESSIF ESP32-WROVER-B 32 bit microcontroller, which runs at 240 MHz. The ESP32 is a great development platform for IoT applications, as it is a low-cost device with many powerful features (Table 3). The ESP32 can be programmed in various environments and programming languages such as Arduino IDE, Espressif-IDF, Lua, Micropython, and C/C++. In this project, Micropython was used for the design and implementation of the DTC technique. Micropython is a reimplementation of the Python 3 programming language, targeted at microcontrollers and embedded systems. Both programming languages are very similar; apart from a few exceptions, all the language features of Python are available in Micropython. The main reason for using Micropython in the DTC prototype is that Python is today one of the most widely used, simple, and easy-to-learn programming languages, and it is well suited for computational analysis and design [26,27]. With the emergence of Micropython, it becomes easy to program microcontrollers and embedded devices. This makes the ESP32 and Micropython a great experimental platform for students, teachers, and researchers [28]. The DTC scheme presented previously in Figure 1 was implemented in Micropython on the ESP32 with the use of peripherals such as ADCs, DACs (digital-to-analog converters), and timers. A summarized workflow of the DTC algorithm is shown in Figure 8.
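As an illustration of how such a timer-driven loop can be organized in MicroPython on the ESP32, a minimal skeleton is sketched below. The pin assignments, scaling factors, and loop rate are placeholders, and the estimation, comparator, and table-lookup helpers are assumed to be defined (for example, as in the earlier sketches); this is not the authors' firmware.

```python
# Minimal MicroPython-style skeleton of a timer-driven DTC step (placeholder pins and scaling).
from machine import ADC, DAC, Pin, Timer

adc_ia, adc_ib = ADC(Pin(34)), ADC(Pin(35))   # conditioned phase-current signals
adc_vdc = ADC(Pin(32))                        # scaled DC-link voltage
dac_dbg = DAC(Pin(25))                        # internal signal out to an oscilloscope

state = {"psi_a": 0.0, "psi_b": 0.0, "phi": 1, "vector": (0, 0, 0)}

def dtc_step(timer):
    # 1) acquire and rescale the conditioned analog signals (gains/offsets are placeholders)
    i_a = (adc_ia.read() - 2048) * 0.005
    i_b = (adc_ib.read() - 2048) * 0.005
    v_dc = adc_vdc.read() * 0.1
    # 2) estimate flux and torque, 3) run the hysteresis comparators,
    # 4) look up the optimal vector in the switching table, 5) update the VSI gate pins
    #    (helper functions assumed; see the previous sketches)
    dac_dbg.write(128)  # e.g. stream an estimated quantity for visualization

timer0 = Timer(0)
timer0.init(freq=4000, mode=Timer.PERIODIC, callback=dtc_step)  # 4 kHz, i.e. a 250 us period
```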
The DTC algorithm is programmed in a Micropython interrupt service routine (Figure 9), which is triggered every 250 μs by the timer. Once initiated, the current and voltage analog signals are converted to digital values. These values are used to estimate the flux components, flux, and torque values, which are then compared with the reference values to select the optimal vector for the VSI. Any of these signals can be visualized on an external oscilloscope utilizing the internal DACs. Experimental Results The experimental testing was carried out using a standard 1 kW PMSM servo, the main parameters of which are as follows: R s = 15.57 Ω; pole pairs, p = 8; rated voltage, V = 130 V; rated current, I = 2.5 A; rated torque, T = 2.7 Nm; and rated speed, ω e = 5000 RPM. To validate the DTC servodrive performance, several tests were executed with the main focus on the flux, torque, and current signals. If the output signals are similar to the simulated signals, it means that the algorithm has been properly implemented. Also, the torque output was measured to verify that it is as close as possible to the torque reference value. It is important to clarify, however, that the performance of the DTC algorithm and the quality of the output signals are directly related to the hysteresis controllers [29,30]. For this reason, most of the tests were made by observing the effect of the hysteresis bands on the output signals. Effects of the Hysteresis Flux Band. The hysteresis flux band has a considerable effect on the output signals. For instance, if the band is narrow, the switching frequency will be higher, the flux locus will tend to a circular path, and the current signal is very close to a sine wave; however, current ripple will increase since a narrow band makes it easier for the hysteresis limits to be exceeded. Conversely, a wide band will lower the ripple content, but the switching frequency will decrease, the flux locus will tend to a hexagonal path, and the current signal is distorted [30]. This effect can be observed in simulations (Figure 10) and experimental tests (Figure 11). The effect of the flux hysteresis band can also be observed in the current signals. When the band is narrow, the current signal will have greater ripple content, but its shape will be close to a sine wave. If, instead, the band is wider, the ripple will be lower, but the signal will be heavily distorted. This behavior can be seen in simulations (Figure 12) and in the experimental motor current signal (Figure 13). Effects of the Hysteresis Torque Band. The width of the hysteresis torque band also has a considerable effect on DTC performance, mainly on the inverter switching frequency. Although it also influences the total harmonic distortion (THD) in the current signals, this effect is due to both hysteresis bands. Firstly, the switching frequency is defined as f sw = N s /T f , where N s is the number of commutations per period and T f is the period of the fundamental signal. To observe the effect on the switching frequency, both hysteresis bands were varied. In Figure 14, it can be seen that even though both hysteresis bands affect the switching frequency, the torque band has the greater influence. As for the THD of the current signal, it is defined as THD = √(I² − I 1 ²)/I 1 , where I is the RMS value of the current and I 1 is the RMS value of the fundamental harmonic component. Both hysteresis bands were varied, but in this case, the flux band had the greater influence on the current signal distortion, as shown in Figure 15.
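For post-processing logged waveforms, the two figures of merit just defined can be computed directly from sampled data. The sketch below counts commutations to estimate f sw and isolates the fundamental with an FFT to estimate THD; normalizing f sw over one fundamental period is an assumption consistent with the definition above.

```python
import numpy as np

def switching_frequency(switch_states, t_f):
    """f_sw = N_s / T_f: count VSI commutations logged over one fundamental period T_f.
    switch_states is a sequence of (S_a, S_b, S_c) tuples sampled at the control rate."""
    s = np.asarray(switch_states)
    n_commutations = int(np.count_nonzero(np.diff(s, axis=0)))
    return n_commutations / t_f

def current_thd(i_samples, t_s, f_fund):
    """THD = sqrt(I^2 - I1^2) / I1 from an integer number of fundamental periods of a
    sampled phase current; the fundamental RMS I1 is taken from the FFT bin at f_fund."""
    i = np.asarray(i_samples, dtype=float)
    i = i - i.mean()                                  # remove any DC offset
    spectrum = np.abs(np.fft.rfft(i)) / len(i)        # single-sided amplitude spectrum / 2
    freqs = np.fft.rfftfreq(len(i), t_s)
    i1_rms = np.sqrt(2.0) * spectrum[np.argmin(np.abs(freqs - f_fund))]
    i_rms = np.sqrt(np.mean(i**2))
    return np.sqrt(max(i_rms**2 - i1_rms**2, 0.0)) / i1_rms
```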
Output Torque Measurements. To validate the torque performance of the servocontroller, measurements were made using a 10 Nm rotary torque sensor. In these tests, it was important to verify that the torque output corresponds to the torque reference and that the ripple content is relatively low. Once again, the hysteresis bands were varied to observe the effect on the torque signal. Since the original DTC scheme with no enhancements was implemented, relatively high ripple content was expected. The best torque performance with the lowest ripple was obtained when the flux band was set to 2% of the reference value (0.8 Wb) and the torque band was set to 1% of the reference value. The output torque signal is shown in Figure 16. Needless to say, the verification of the torque response of the servodrive is very important, since it ensures that changes in torque and direction will be carried out by the drive. However, the use of a physical torque sensor is usually expensive and requires additional installation and software. Additionally, torque sensors have a specific range; if a wider range is required, the cost becomes prohibitive. In future projects, it would be interesting to make use of virtual instruments [31] for the monitoring of torque and magnetic flux signals, and in this way avoid the limitations of physical sensors, especially in electromechanical projects aimed at education [32], where funding is usually limited. Figure 13: Experimental motor current output to observe the effect of the flux hysteresis band with (a) 2% and (b) 5% of the flux reference value. The current amplitude scale is 2 A/div, and the time base is 5 ms/div. Conclusion In this paper, a modular DTC servodrive prototype was implemented on a low-cost, general-purpose microcontroller. The aim of this prototype is to be used in didactic applications for undergraduate engineering students. Experimental and simulation results validate the effectiveness of the design and show that good torque performance can be achieved with DTC on a low-cost, general-purpose processor. The main benefit of this prototype is its simplicity, since it did not require any modification to the original DTC structure, and even so the current and torque ripple content was acceptable. The code written in Micropython made it easy to understand and to adjust for different testing parameters, such as modifying the hysteresis bands to observe the quality of the current, flux, and torque output signals. This will allow students and researchers interested in motion control to experiment with a simple and practical tool and, in future projects, to improve the performance and structure of this prototype. Data Availability No data were used to support this study.
5,409.6
2020-02-08T00:00:00.000
[ "Engineering", "Computer Science" ]
Warburg effect‐related risk scoring model to assess clinical significance and immunity characteristics of glioblastoma Abstract Background Glioblastoma (GBM), the most common primary malignant brain tumor, has a poor prognosis, with a median survival of only 14.6 months. The Warburg effect is an abnormal energy metabolism, which is the main cause of the acidic tumor microenvironment. This study explored the role of the Warburg effect in the prognosis and immune microenvironment of GBM. Methods A prognostic risk score model of Warburg effect‐related genes (Warburg effect signature) was constructed using GBM cohort data from The Cancer Genome Atlas. Cox analysis was performed to identify independent prognostic factors. Next, the nomogram was built to predict the prognosis for GBM patients. Finally, the drug sensitivity analysis was utilized to find the drugs that specifically target Warburg effect‐related genes. Results Age, radiotherapy, chemotherapy, and WRGs score were confirmed as independent prognostic factors for GBM by Cox analyses. The C‐index (0.633 for the training set and 0.696 for the validation set) and area under curve (>0.7) indicated that the nomogram exhibited excellent performance. The calibration curve also indicates excellent consistency of the nomogram between predictions and actual observations. In addition, immune microenvironment analysis revealed that patients with high WRGs scores had high immunosuppressive scores, a high abundance of immunosuppressive cells, and a low response to immunotherapy. The Cell Counting Kit‐8 assays showed that the drugs targeting Warburg effect‐related genes could inhibit the GBM cells growth in vitro. Conclusion Our research showed that the Warburg effect is connected with the prognosis and immune microenvironment of GBM. Therefore, targeting Warburg effect‐related genes may provide novel therapeutic options. | INTRODUCTION Glioblastoma (GBM) is the most prevalent primary malignant brain tumor, accounting for 48.6% of all malignant central nervous system (CNS) tumors. 1,2It has a poor prognosis, with only 7% of patients surviving 5 years after diagnosis.After receiving standard treatment, patients with GBM had a 14.6-month median overall survival (OS). 3Many studies have been conducted on the factors that could affect the prognosis of GBM, such as age, radiotherapy, chemotherapy, and O 6 -methylguanine-DNA methyltransferase (MGMT) promoter. 3,4However, the complex pathogenesis and molecular heterogeneity of GBM render it difficult for the current prognostic factors to explain the progression of the disease, necessitating the investigation of other prognostic factors. The Warburg effect, first observed by Otto Warburg in the 1920s, is a phenomenon of abnormal glucose metabolism in cancer. 5It refers to the fact that tumor cells participate in aerobic glycolysis in the presence of enough oxygen, subsequently leading to the production of a significant amount of lactic acid. 6,7Lactic acid has a significant effect on several carcinogenesis processes, involving metastasis, angiogenesis, metabolism, and immunosuppression. 8Recent research has demonstrated that elevated lactic acid produced by Warburg effect is a poor prognostic indicator for metastatic lung cancer. 9The effect of lactate has also been established and confirmed in the prognosis of colorectal cancer, 10 lung adenocarcinoma, 11 and esophageal squamous cell carcinoma. 
12Moreover, recent research has demonstrated that Warburg effect-related genes have a significant effect on tumor progression and prognosis.DExD-box helicase 39B (DDX39B) can promote colorectal cancer metastasis through activating the Warburg effect, which was confirmed as a poor prognostic indicator for the prognosis of colorectal cancer. 13Basic leucine zipper and W2 domain-containing protein 1 contributed to the growth and poor prognosis of pancreatic ductal adenocarcinoma by promoting the Warburg effect. 14Previous studies have also shown that Warburg effect-related genes such as monocarboxylate transporter 1, 15 Glucose transporter-1, 16 and SoLute Carrier family 9A1 17 have a negative effect on the prognosis of GBM.However, a comprehensive Warburg effect signature for the prediction of the prognosis of GBM has not been established to date. In addition, high lactate levels produced by the Warburg effect have been demonstrated as a key immunosuppressive metabolite in the tumor microenvironment (TME). 18Research has shown that the Warburg effect leads to the formation of an immunosuppressive microenvironment by maintaining a low potential hydrogen value in the TME, 18,19 such as by polarizing tumor-associated macrophages to favor the M2 phenotype, 20,21 promoting the depletion of cytotoxic T cells, 22 and thereby promoting tumor progression.However, the role of the Warburg effect on other cells involved in tumor immunity has not been clearly reported, especially the role of the Warburg effect on the immune cell profile in the GBM immune microenvironment.Given that previous studies have reported the existence of severe immunosuppression in GBM, 23 elucidating the above issues may provide novel approaches for immunotherapy of GBM.Therefore, the role of the Warburg effect on the immune microenvironment of GBM is worthy of further exploration. Recent research has demonstrated that the Warburg effect alters the tumor microenvironment and promotes angiogenesis, immunosuppression, formation of tumor-associated fibroblasts, and drug resistance. 24herefore, targeting Warburg effect-related genes would be a promising approach for treating GBM.Several preclinical and early-stage clinical studies have demonstrated that targeting the Warburg effect is effective in inhibiting tumor progression.For example, previous research has shown glycolysis as a therapeutic target in colorectal cancer, and inhibitors of glycolysis have been used in clinical trials. 24Another study investigated the relationship between the Warburg effect and glioma and proposed that the ketogenic diet might be effective in the therapy of low-grade glioma (LGG) by affecting the Warburg effect. 25Inhibition of Aurora kinase A has been shown to affect the cellular metabolism of GBM by the reversal of the Warburg effect. 26Nevertheless, there are currently no drugs available that specifically target the Warburg effect-related genes in the clinical treatment of GBM. Although several prognostic models for GBM have been built in the past, a more recent and accurate prognostic model is also necessary given the new diagnostic criteria for GBM in the fifth edition of the World Health Organization (WHO) Classification of Tumors of the CNS in 2021. 
27n this research, we investigated the role of the Warburg effect on the prognosis and immune microenvironment of GBM patients using bioinformatics techniques.Moreover, we also explored the drugs that target Warburg effect-related genes and validated them utilizing Cell Counting Kit-8 (CCK-8) assays.This study gives a novel tool for the clinical prediction of prognosis in GBM and contributes to the development of new therapeutic strategies. | Sources of datasets and pretreatments for GBM First, the RNA sequencing (RNA-seq) data, somatic cell datasets, and clinical data for all glioma patients were acquired using The Cancer Genome Atlas (TCGA) database.Next, telomerase reverse transcriptase (TERT) promoter mutation, epidermal growth factor receptor (EGFR) gene amplification, or +7/−10 copy number changes in isocitrate dehydrogenase (IDH)-wildtype diffuse astrocytomas and IDH-wildtype GBM were selected as the training set based on the WHO CNS5.Duplicate patient records were deleted.Lastly, data from defined GBM patients based on WHO CNS5 were included for further analysis.Single-nucleotide polymorphism (SNP) and copy number variation (CNV) data for each GBM patient were retrieved from the University of California Santa Cruz Xena website.The validation set for this study was obtained after excluding LGG and IDH mutant GBM from the Chinese Glioma Genome Atlas (CGGA) database.The data on gene expression from the training and validation cohorts were log2 transformed [log2 (FPKM+1)].Fragments per kilobase million (FPKM) data were utilized in single-sample gene set enrichment analysis (ssGSEA) analysis.Transcripts per kilobase million (TPM) data was used for CIBERSORTx and ImmuCellAI analysis.GSE16011 cohort from the Gene Expression Omnibus (https:// www.ncbi.nlm.nih.gov/ geo/ ) database was downloaded and used to further validate the risk score model.The P27 cell line was derived from GBM patients in the Department of Neurooncology, Cancer Center, Beijing Tiantan Hospital, and was sought after ethical approval (IRB: KY 2021-153-03) and patient consent.The cells were handled according to the protocols approved by the ethical committee of Beijing Tiantan Hospital, Capital Medical University.U87, U251, and LN229 GBM cell lines were donated by Beijing Tiantan Hospital Affiliated to Capital Medical University.The genetic types of U87, U251, and LN229 were as follows: U87, U251, and LN229 were IDH1/2wildtype cell lines with CDKN2A/B homozygous deletions.EGFR gene amplification in U251 and LN229 cell lines, but not in U87 cells; phosphatase and tensin homolog (PTEN) gene mutant in U87 and U251 cells; PTEN gene wildtype in LN229 cell line; tumor protein p53 (TP53) gene mutant in LN229 and U251 cells; TP53 gene wildtype in U87 cells. | Definition of Warburg effect-related genes Genes associated with the Warburg effect were identified from previous studies (Table S1).The criteria for inclusion of Warburg effect-related genes were as follows: (1) genes identified to be directly involved in the Warburg effect and (2) genes directly related to the Warburg effect rather than all genes in a same signal pathway. | Mutation analysis of Warburg effect-related genes The locations of the Warburg effect-related genes were mapped on 46 chromosomes using CNV data.The mutation frequency of Warburg effect-related genes and oncoplot waterfall plots were constructed using SNP data. 2. 
| Constructing and validating a prognostic risk score model for Warburg effect-related genes (Warburg effect signature)
By Kaplan-Meier (K-M) survival analysis, the prognostic value of Warburg effect-related genes for GBM was evaluated, and genes with prognostic significance were used for further analysis. Based on these prognostic genes, we performed univariate Cox regression analysis and least absolute shrinkage and selection operator (LASSO) regression (R packages "glmnet" and "MASS") to obtain the Warburg effect signature for predicting the prognosis of GBM. Eight genes, G Protein-Coupled Receptor 68 (GPR68), Mitochondrial Pyruvate Carrier 1 (MPC1), Solute Carrier Family 16 Member 1 (SLC16A1), Signal Transducer And Activator Of Transcription 3 (STAT3), Transketolase Like 1 (TKTL1), Ribonucleotide Reductase Regulatory Subunit M2 (RRM2), Mechanistic Target Of Rapamycin Kinase (MTOR), and Toll-Like Receptor 4 (TLR4), were obtained by the appropriate λ value (representing the appropriate number of genes in the model) and the least Akaike information criterion (AIC) value (representing the optimal choice of the model) to construct the Warburg effect signature. The expression of the Warburg effect signature in GBM patients was summarized as the Warburg effect-related genes (WRGs) score, calculated with the following formula:

WRGs score = Σ_{i=1}^{8} coef(i) × x(i),

where coef(i) and x(i) represent the regression coefficients of the multivariate Cox regression model and the expression of the Warburg effect-related genes, respectively. 28 A risk score of the Warburg effect signature was then calculated for each GBM patient. Based on the cutoff value, all TCGA GBM patients were categorized into high-WRGs score (high-WRGs) and low-WRGs score (low-WRGs) groups. The performance of the risk model was then assessed by comparing the survival rates of the two groups using K-M curves. Lastly, the risk model developed from the TCGA GBM cohort was tested in the CGGA GBM cohort using the same approach.

| Independent prognostic analysis of the Warburg effect signature
The WRGs score and clinical information from GBM patients were integrated for univariate and multivariate Cox regression analysis. The forest plot was created with the "forestplot" package of R, showing the p-value, hazard ratio, and 95% confidence interval of each parameter.

| Nomogram construction
Independent characteristics were chosen by multivariate Cox stepwise regression analysis in the TCGA GBM set to build a nomogram predicting the prognosis of GBM patients at 6, 12, and 24 months. The reliability of the nomogram prediction was statistically evaluated using the calibration curve, time-dependent receiver operating characteristic (ROC) curves, the concordance index (C-index), and the Brier score. The nomogram was finally validated using the aforementioned methods in the CGGA validation set.
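The risk-score construction and K-M comparison described above can be sketched as follows. This is only an illustration: the study used the R packages "glmnet" and "MASS", whereas the Python lifelines library stands in here, and all data, coefficients, and variable names are hypothetical.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Hypothetical inputs: log2-transformed expression of three signature genes
# (rows = patients) and Cox regression coefficients for those genes.
expr = pd.DataFrame(rng.random((100, 3)), columns=["GPR68", "STAT3", "MPC1"])
coef = pd.Series({"GPR68": 0.51, "STAT3": 0.71, "MPC1": -0.26})   # illustrative values
surv = pd.DataFrame({"time": rng.exponential(400, 100),           # survival time (days)
                     "event": rng.integers(0, 2, 100)})           # 1 = death observed

# WRGs score = sum_i coef(i) * x(i); patients are split at the cutoff (median here).
wrgs = expr[coef.index].mul(coef, axis=1).sum(axis=1)
high = wrgs >= wrgs.median()

# Kaplan-Meier fits (each would normally be plotted) and a log-rank comparison.
kmf = KaplanMeierFitter()
kmf.fit(surv.loc[high, "time"], event_observed=surv.loc[high, "event"], label="high-WRGs")
kmf.fit(surv.loc[~high, "time"], event_observed=surv.loc[~high, "event"], label="low-WRGs")
result = logrank_test(surv.loc[high, "time"], surv.loc[~high, "time"],
                      event_observed_A=surv.loc[high, "event"],
                      event_observed_B=surv.loc[~high, "event"])
print("log-rank p-value:", result.p_value)
```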
| Analysis of immune infiltration, tumor mutation burden (TMB), and microsatellite instability (MSI)
The gene scores of 29 gene sets associated with immunity were evaluated in each sample using ssGSEA. By comparing differences across the 29 immune-related pathways, patients were categorized into clusters with high and low immune responses. Tumor purity and TME scores (including the ESTIMATE score, stromal score, and immune score) were analyzed with the Estimation of Stromal and Immune cells in Malignant Tumor tissues using Expression data (ESTIMATE) algorithm. The heatmap and the WRGs score together showed the degree of immune infiltration. The proportions of immune cells were calculated using CIBERSORTx and ImmuCellAI. The association of the WRGs score with GBM immunotherapy response was illustrated using violin plots of MSI and TMB in the two groups.

| Drug-sensitivity analysis based on the Warburg effect signature
Using drug-response data from the Cancer Therapeutics Response Portal (CTRP) and the Cancer Cell Line Encyclopedia (CCLE), drugs associated with the Warburg effect signature were investigated using Spearman's correlation analysis. The effect of arginase inhibition with (S)-(2-boronoethyl)-L-cysteine (BEC; catalog number S7929, Selleck.cn) at concentrations ranging from 0 to 1000 μmol/L on the viability of U87, U251, LN229, and P27 GBM cell lines over 72 h was studied using CCK-8 assays. The effect of BEC on glucose uptake and lactate generation was compared between BEC-treated and untreated GBM cell lines; "BEC-treated" refers to treatment with 100 μM BEC. U87, U251, LN229, and P27 GBM cell lines were divided into BEC-treated and untreated groups. After 48 h of culture, the culture supernatant was collected. The cell viability of the two groups was then measured by CCK-8. Next, the glucose and lactate contents in the culture supernatant were quantitatively analyzed with a glucose detection kit (Nanjing Jiancheng Bioengineering Institute, A154-2-1) and a lactate detection kit (Nanjing Jiancheng Bioengineering Institute, A019-2-1), respectively. Finally, glucose uptake and lactate generation were calculated per unit of cell viability. The glucose consumption and lactate generation experiments were each repeated five times (n = 5).

| Statistical analysis
R software (version 4.1.1) and Bioconductor packages were utilized for all bioinformatics data analysis. GraphPad Prism was used for laboratory data analysis. A Spearman's correlation coefficient |R| > 0.3 and a p-value < 0.05 were considered statistically significant.

| Genomic characteristics of Warburg effect-related genes
Using the TCGA GBM cohort, somatic mutation and CNV analyses were carried out to investigate genetic characteristics. Variant classification, variant type, gene mutation proportion, and single-nucleotide variants (SNVs) were all assessed in the somatic mutation analysis (Figure 1A). Of the SNVs found, C>T mutations were the most prevalent. Somatic mutation analysis revealed mutations in TP53 (80%), GPR132 (7%), HCAR1 (5%), HTR2C (5%), ABCC1 (5%), STAT3 (5%), MTOR (3%), GPI (3%), ALDH1A1 (3%), and HK2 (3%).
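Returning to the drug-sensitivity methods above, the screen amounts to a correlation scan of dose-response AUC values against gene expression. A minimal sketch is shown below; CTRP/CCLE data access is not shown, the matrices are randomly generated stand-ins, and only the |R| > 0.3 and p < 0.05 thresholds are taken from the stated statistical criteria.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
cells = [f"cell_{i}" for i in range(60)]
# Hypothetical matrices over the same cell lines:
#   auc[drug]  = dose-response area under the curve (CTRP),
#   expr[gene] = expression of a Warburg effect-related gene (CCLE).
auc = pd.DataFrame(rng.random((60, 2)), index=cells, columns=["BEC", "drug_X"])
expr = pd.DataFrame(rng.random((60, 4)), index=cells,
                    columns=["MPC1", "TLR4", "GPR68", "SLC16A1"])

hits = []
for drug in auc.columns:
    for gene in expr.columns:
        r, p = spearmanr(auc[drug], expr[gene])
        if abs(r) > 0.3 and p < 0.05:          # thresholds stated in the methods
            hits.append((drug, gene, round(r, 2), round(p, 4)))

print(pd.DataFrame(hits, columns=["drug", "gene", "rho", "p"]))
```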
Figure 1B shows the alterations of Warburg effect-related genes in 61 GBM patients in the TCGA training set, where missense mutations were the predominant type of genetic alteration. CNV analysis revealed the locations of Warburg effect-related genes on the chromosomes. According to the results, chromosome 1 contained the greatest number of CNV variants among the Warburg effect-related genes, followed by chromosomes X, 2, 4, and 12 (Figure 1C). Correlations between Warburg effect-related genes in GBM were found by co-mutation correlation analysis (Figure 1D). Therefore, we preferred to construct a gene set rather than use a single Warburg effect-related gene to investigate the role of the Warburg effect in the prognosis and immune microenvironment of GBM.

| Construction of the Warburg effect signature
The Warburg effect signature for predicting the prognosis of GBM was developed by variable selection and model construction using LASSO Cox regression analysis (Figure 2A,B). GPR68, MPC1, SLC16A1, STAT3, TKTL1, RRM2, MTOR, and TLR4 were obtained to construct the risk score model. Based on the expression of each gene in the Warburg effect signature, we calculated the risk score for each patient. The formula was as follows: WRGs score = (0.5125) × GPR68 expression level + (−0.2607) × MPC1 expression level + (−0.5221) × SLC16A1 expression level + (0.7147) × STAT3 expression level + (0.2894) × TKTL1 expression level + (0.2817) × RRM2 expression level + (−0.5515) × MTOR expression level + (0.2093) × TLR4 expression level. The prognostic WRGs score was then determined for every patient in the TCGA GBM cohort. Next, GBM patients were categorized into high-WRGs score (high-WRGs) and low-WRGs score (low-WRGs) groups using the median (0.62) as the cutoff value. The risk plot illustrated the survival risk, survival status, and Warburg effect signature expression levels in the two groups (Figure 2C), revealing that survival time shortened as the WRGs score increased. High expression of GPR68, STAT3, TKTL1, RRM2, and TLR4 suggested a poor prognosis for GBM patients, while high expression of MPC1, SLC16A1, and MTOR indicated a longer overall survival (OS). Additionally, the K-M survival curve showed that the prognosis of the high-WRGs group was considerably poorer than that of the low-WRGs group (p < 0.001) (Figure 2D). We performed external validation using the CGGA set to confirm the predictive value of this signature. The survival curve demonstrated that the prognosis of the high-WRGs group was much poorer than that of the low-WRGs group (p = 0.003) (Figure 2E). The risk plot also revealed that the prognosis became poorer as the WRGs score increased. Compared with the low-WRGs group, the expression levels of GPR68, STAT3, TKTL1, RRM2, and TLR4 were higher in the high-WRGs group, whereas MPC1, SLC16A1, and MTOR showed the opposite pattern (Figure 2F). In addition, the GSE16011 cohort was also used to verify the predictive value of the Warburg effect signature, and the results of the risk plot were consistent with those of the TCGA training set and the CGGA validation cohort (Figure S4). However, the K-M curve showed no significant difference in prognosis between the high-WRGs and low-WRGs groups (p = 0.079) (Figure S4).
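For concreteness, the published coefficients and cutoff reported in this section translate directly into a small scoring routine (a sketch only; the expression values are assumed to be the log2-transformed levels used in the paper, and the example profile is hypothetical):

```python
# Regression coefficients of the Warburg effect signature, as reported in the text.
WRGS_COEF = {"GPR68": 0.5125, "MPC1": -0.2607, "SLC16A1": -0.5221, "STAT3": 0.7147,
             "TKTL1": 0.2894, "RRM2": 0.2817, "MTOR": -0.5515, "TLR4": 0.2093}
CUTOFF = 0.62  # median of the TCGA training cohort, as reported

def wrgs_score(expression: dict) -> float:
    """Weighted sum of the eight signature genes (expression: gene -> level)."""
    return sum(coef * expression[gene] for gene, coef in WRGS_COEF.items())

def risk_group(expression: dict) -> str:
    return "high-WRGs" if wrgs_score(expression) >= CUTOFF else "low-WRGs"

# Hypothetical patient expression profile.
patient = {"GPR68": 1.8, "MPC1": 2.4, "SLC16A1": 3.1, "STAT3": 2.2,
           "TKTL1": 0.3, "RRM2": 1.1, "MTOR": 2.9, "TLR4": 1.5}
print(round(wrgs_score(patient), 3), risk_group(patient))
```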
| Identification of the WRGs score as an independent prognostic characteristic
The roles of the WRGs score and clinical parameters in the prognosis of GBM were explored by univariate and multivariate Cox regression analyses (Figure 3A). Univariate analysis demonstrated that the WRGs score (p < 0.001), radiotherapy (p = 0.013), chemotherapy (p = 0.011), subtype (p = 0.012), and age (p = 0.037) were significantly related to prognosis in the TCGA GBM cohort. Next, the WRGs score (p < 0.001), radiotherapy (p = 0.034), chemotherapy (p = 0.044), and age (p = 0.043) were confirmed by multivariate analysis to independently influence the prognosis of GBM. Therefore, the WRGs score calculated in the TCGA GBM cohort was the independent prognostic indicator with the greatest effect on the prognosis of GBM. Moreover, the WRGs score was identified as an independent prognostic characteristic for GBM in the CGGA cohort (p = 0.011) (Figure 3B) and the GSE16011 cohort (p = 0.037) (Figure S4).

| Constructing and validating the nomogram
A predictive nomogram was constructed to clinically predict the probability of 6-, 12-, and 24-month survival in GBM patients. To construct the nomogram, five independent prognostic indicators (WRGs score, age, radiotherapy, chemotherapy, and MGMT promoter status) were included, and the results showed that the WRGs score had the greatest effect on the prognosis of GBM among these factors (Figure 4A). Based on the nomogram, each factor was assigned a score, and then the sum for every patient with GBM was estimated. According to the scores, the probability of 6-, 12-, and 24-month survival for GBM patients was estimated. The accuracy and sensitivity of the nomogram were validated by the calibration curve, the area under the curve (AUC) value, and the C-index. The K-M survival curve demonstrated that the prognosis of the low-WRGs group was markedly better than that of the high-WRGs group in the TCGA training cohort (p < 0.001) (Figure 4B). The calibration curve revealed that actual observations and the predictions of the nomogram were consistent (Figure 4C). ROC analysis indicated that the nomogram accurately predicted the OS of patients at 6, 12, and 24 months, with AUC values of 0.772, 0.768, and 0.736, respectively (Figure 4D). The C-index after bootstrap resampling was 0.633. Similar results were identified in the CGGA validation cohort, where the K-M curve revealed a much poorer prognosis for patients in the high-WRGs group than in the low-WRGs group (p < 0.001) (Figure 4E). The calibration curve indicated excellent consistency between the predictions and actual observations (Figure 4F).
The C-index after bootstrap resampling was 0.696. The AUC values indicated that the nomogram had excellent predictive ability in the CGGA validation cohort (Figure 4G). The GSE16011 cohort was used to re-validate the accuracy of the nomogram in predicting the OS of GBM patients at 6 months (Figure S4). The calibration curve demonstrated consistency between the predictions and actual observations. The ROC curve demonstrated that the nomogram was able to accurately predict OS at 6 months.

F I G U R E 2 Construction of the prognostic risk model in the TCGA cohort and its validation in the CGGA cohort. (A and B) Eight Warburg effect-related genes were selected using LASSO analysis with the appropriate λ value and the least Akaike information criterion (AIC) value. (C) Distribution of the survival risk, survival status, and expression of the eight Warburg effect-related genes between the high-WRGs score (high-WRGs) and low-WRGs score (low-WRGs) groups in the training cohort. (D and E) Comparison of overall survival between the high- and low-WRGs groups in the TCGA and CGGA cohorts. (F) Distribution of the survival risk, survival status, and expression of the eight Warburg effect-related genes between the low- and high-WRGs groups in the validation cohort. The depth of the colors represents the strength of the correlation. Red represents high expression, and blue represents low expression.

| Immune infiltration, TMB, and MSI analysis
Based on previous studies, we hypothesized that the Warburg effect might influence the prognosis of GBM by interfering with the immune microenvironment. Thus, ssGSEA, ESTIMATE, CIBERSORTx, and ImmuCellAI were utilized to assess the differences in immune infiltration between the two groups. The results of the ssGSEA revealed the correlation between 29 immune-related components and the WRGs score, with immune-related components being more activated in the high-WRGs cluster (Figure S1). Based on this analysis, GBM patients were divided into two groups by hierarchical clustering: high immune infiltration (Immune-H) and low immune infiltration (Immune-L). The WRGs score in the Immune-H group was dramatically higher than in the Immune-L group (Figure S1). ESTIMATE was used to assess tumor purity and immune microenvironment scores between the low- and high-WRGs groups (Figure 5A). The results revealed that stromal and immune scores were significantly higher in the high-WRGs group (Figure 5B,C), whereas tumor purity showed the opposite tendency (Figure 5D). The abundance of immune cells was shown by CIBERSORTx analysis (Figure 5E). Immunosuppressive cells, including regulatory T cells (Tregs), neutrophils, and activated mast cells, were highly abundant in the high-WRGs group, whereas immune-defense cells, such as activated natural killer cells, were highly enriched in the low-WRGs group. Additionally, ImmuCellAI was utilized to assess immune cell infiltration in GBM patients, which revealed that the infiltration score of the low-WRGs group was considerably lower than that of the high-WRGs group (Figure 5F). The validation in the CGGA cohort (Figure S2) and the GSE16011 cohort (Figure S5) employed the same methodology as above, and the results were in accordance with the training cohort: stromal and immune scores were significantly higher in the high-WRGs group, whereas tumor purity was significantly higher in the low-WRGs group. Moreover, the difference in sensitivity to immunotherapy between the two groups was also assessed. The results demonstrated that the TMB and MSI in the high-WRGs group were dramatically lower than those in the low-WRGs group (Figure 5G,H), indicating that immunotherapy might be more helpful for low-WRGs GBM patients.
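The between-group comparisons just described (TME scores, TMB, MSI) are rank-based tests; a minimal sketch using the Wilcoxon rank-sum (Mann-Whitney U) test, consistent with the test named in the figure legends, is shown below (all values are hypothetical):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
# Hypothetical per-patient values in the two risk groups.
immune_score_high = rng.normal(1500, 400, 40)
immune_score_low = rng.normal(1200, 400, 45)
tmb_high = rng.gamma(2.0, 1.0, 40)
tmb_low = rng.gamma(2.5, 1.0, 45)

for name, a, b in [("Immune score", immune_score_high, immune_score_low),
                   ("TMB", tmb_high, tmb_low)]:
    stat, p = mannwhitneyu(a, b, alternative="two-sided")
    print(f"{name}: U = {stat:.1f}, p = {p:.4f}")
```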
| Evaluation of drug sensitivity targeting Warburg effect-related genes
By statistically comparing the protein expression of the Warburg effect signature genes in low- and high-grade gliomas (HGG) on the Human Protein Atlas website, we found that the signature genes highly expressed in HGG were also highly expressed in the high-WRGs group, providing a rationale for treating GBM by targeting the Warburg effect signature. To identify drugs active against the Warburg effect signature, drug-sensitivity analysis was performed using the CTRP and CCLE databases. Correlation analysis between the dose-response AUC of drugs and gene expression levels was performed to explore drugs sensitive to the Warburg effect-related genes (Figure 6A). The results revealed that BEC was the drug correlating with the most genes in the Warburg effect signature. The dose-response AUC of BEC was positively correlated with the expression levels of TLR4, GPR68, and MPC1 and negatively correlated with the expression level of SLC16A1, suggesting that BEC was more toxic to cells with low expression of MPC1, TLR4, and GPR68 and high expression of SLC16A1. Moreover, our study revealed high expression levels of TLR4 and GPR68 and low expression levels of SLC16A1 and MPC1 in GBM patients with poor prognosis (Figure 2C). We inferred that BEC may interfere with the Warburg effect by targeting MPC1 in GBM patients with poor prognosis. Next, the ability of BEC to inhibit GBM cell growth was evaluated by CCK-8 assays, and the results revealed that BEC significantly inhibited the growth of U87 GBM cells in vitro when its concentration reached 100 μmol/L (Figure 6B). The survival of both U87 (Figure 6B) and U251 (Figure 6C) GBM cells was inhibited in vitro when the concentration of BEC reached 1000 μmol/L, but that of the LN229 cell line was not (Figure 6D). Figure S3 shows that BEC significantly inhibited the growth of the P27 cell line when its concentration reached 100 μmol/L. Similar to the U87 and U251 cell lines, the P27 cell line showed less than 50% growth inhibition even at a BEC concentration of 1000 μmol/L. In addition, the results revealed that BEC dose-dependently inhibited the growth of U87, U251, and P27 cells, but not of LN229 cells. Figure 6E shows that the ratio of lactate generation to glucose uptake was significantly lower in BEC-treated U87, U251, and P27 GBM cell lines. The glucose uptake assay demonstrated that glucose consumption in BEC-treated U87, U251, and P27 cells was significantly higher than that in the untreated group (Figure 6F). However, the lactate generation assay showed no significant change between the BEC-treated and untreated groups (Figure 6G). These results suggested that BEC can interfere with the metabolism of GBM cell lines.

| DISCUSSION
GBM is a glioma with a poor prognosis, drug resistance, and a high incidence of relapse. 30,31 The Warburg effect is a metabolic phenotype of tumor cells 6 that is the main cause of the acidic TME and can promote tumor metastasis, immune evasion, and drug resistance. 32,33
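Before discussing these results further, the normalization behind the glucose uptake and lactate generation findings above (Figure 6E-G) can be sketched as follows, under two assumptions that are ours rather than the authors': glucose uptake is computed as the drop from the fresh-medium glucose concentration to the supernatant concentration, and both metabolites are divided by the CCK-8 viability signal. All numbers are hypothetical.

```python
def per_unit_viability(medium_glucose, supernatant_glucose,
                       supernatant_lactate, viability):
    """Glucose uptake and lactate generation normalized to the CCK-8 viability signal."""
    uptake = (medium_glucose - supernatant_glucose) / viability
    lactate = supernatant_lactate / viability
    return uptake, lactate, lactate / max(uptake, 1e-9)  # last value: lactate/glucose ratio

# Hypothetical readings for BEC-treated vs. untreated U87 cells (mmol/L, arbitrary units).
treated = per_unit_viability(25.0, 14.0, 9.5, 0.62)
untreated = per_unit_viability(25.0, 17.5, 9.8, 0.95)
print("treated   (uptake, lactate, ratio):", tuple(round(v, 2) for v in treated))
print("untreated (uptake, lactate, ratio):", tuple(round(v, 2) for v in untreated))
```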
The present study aimed to categorize patients at risk and predict their prognosis for GBM in order to support clinical treatments based on the Warburg effect. In our study, we developed a prognostic risk score model based on Warburg effect-related genes, and the roles of the Warburg effect signature in the immune microenvironment and in predicting the prognosis of GBM were investigated. In addition, novel drugs that target Warburg effect-related genes were also explored.

In our study, the WRGs score was the factor that most significantly affected the prognosis of GBM, although other factors such as surgery, 34 radiotherapy, 35 and age 36 showed a more significant effect on prognosis in previous studies. A possible reason is that our model reclassified the GBM cohort in the TCGA database following the most recent diagnostic criteria for GBM in the WHO CNS5. In the WHO CNS5, the clinical definition of GBM was updated by the inclusion of molecular parameters such as TERT promoter mutation, EGFR gene amplification, and +7/−10 copy number change. 27 This indicates that, compared with models from previous studies, our model is more suitable for the current clinical prediction of the prognosis of GBM. However, the prediction accuracy of the model remains to be confirmed in clinical practice.

[39][40][41][42] However, our study differed from previous studies in several aspects. A previous study showed that GBM patients with activated PI3K/Akt/mTOR pathways have a poor prognosis. 43 In this study, the expression of MTOR was found to be low in patients with poor prognosis. This may be because MTOR in our study refers to transcriptome data, whereas the previous studies refer to protein expression levels; an inconsistency between the transcription level of a gene and its protein level is a possible explanation. Our study also identified several genes that had not previously been linked to the prognosis of GBM. The findings revealed that high expression of GPR68 and TKTL1 and low expression of SLC16A1 suggested a poor prognosis for GBM. Previous studies have shown that GPR68 promotes connections between tumor cells and cancer-associated fibroblasts, which in turn can contribute to carcinogenesis. 44 It also promotes tumor growth by maintaining macrophages in the M2 state and inhibiting T-cell infiltration. 44 A high level of GPR68 expression may therefore indicate short survival. High expression of TKTL1 has been correlated with poor prognosis in several tumors. 45 SLC16A1 was shown to be expressed at higher levels in GBM than in LGG. 46,47 However, the influence of the SLC16A1 expression level on the prognosis of GBM remains inconclusive. Therefore, our research was the first to report the effect of the expression levels of GPR68, TKTL1, and SLC16A1 on the prognosis of GBM, and the specific effects of these genes in GBM need to be further confirmed.

Analysis of immune infiltration revealed that the abundance of immunosuppressive cells, such as Tregs and neutrophils, was considerably higher in the high-WRGs group than in the low-WRGs group, indicating that the Warburg effect may be one of the reasons for immunosuppression in GBM. A possible mechanism is that the intracellular accumulation of lactate driven by the Warburg effect might damage cellular compartments and halt metabolic processes. Excess lactate must be transported from the cell into the TME to prevent intracellular acidification, which results in acidification of the TME. 48
Acidic TME can induce the invasion of immune cells that have immunosuppressive effects, such as Tregs, M2 macrophages, and N2 neutrophils. 49 Tregs are currently considered the major regulators of immunosuppression in the TME of glioma, and the induction of their activity contributes to the progression and poor prognosis of glioma. 23,50 Tumor-infiltrating neutrophils can inhibit T cells from attacking tumors, and a high percentage of neutrophils is correlated with a short survival time for GBM patients. 51,52 In light of the aforementioned pathways, the Warburg effect may result in the immunosuppression of GBM and thereby affect the prognosis.

Targeting the Warburg effect is currently considered a promising cancer therapy strategy, 53 and drugs that target the metabolic enzymes involved in aerobic glycolysis have been explored in GBM. 54,55 This is the first study to screen drugs against Warburg effect-related genes and to obtain the drug BEC, to which MPC1-low cells are highly sensitive. Our results showed that low expression of MPC1 was associated with high sensitivity to BEC. Downregulation of MPC1 expression has been related to poor prognosis and temozolomide resistance in GBM. 42 Previous studies have shown that inhibitors of chicken ovalbumin upstream promoter transcription factor II (COUP-TFII) can inhibit the growth of GBM by targeting MPC1. 42,56 According to the CCK-8 assay, the growth-inhibiting effect of high concentrations of BEC on GBM cells did not exceed 50% in vitro. One possible reason is that the absence of an immune microenvironment in our study may have limited the action of BEC, since BEC is a known immune-activating drug. 57 In addition, the results showed that BEC can interfere with the metabolism of GBM cell lines, which may potentially be one of the mechanisms by which BEC could inhibit the growth of GBM cells. It is therefore necessary to further explore the effect of BEC in inhibiting the progression of GBM by targeting Warburg effect-related genes, in order to provide potentially effective drugs for GBM treatment.

There were a few limitations to our research. First, the CGGA validation set for this study was obtained after excluding LGG and IDH-mutant GBM according to the fourth edition of the WHO CNS classification in 2016, which did not include patients newly upgraded to GBM under the WHO CNS5 in 2021. Secondly, the prognostic model of Warburg effect-related genes has not yet been used in clinical practice and requires further validation in prospective studies and clinical trials.

In conclusion, a trustworthy Warburg effect-related gene risk scoring model was identified. This model has excellent predictive value for the prognosis of GBM patients, suggests their immunotherapy response, and identifies new GBM therapeutic targets. In addition, a promising prognostic nomogram was built by combining the WRGs score and clinical characteristics to provide an individual prediction of OS and facilitate the selection of effective treatment strategies in the clinic.

F I G U R E 1 Genomic features. (A) Somatic mutations of Warburg effect-related genes. (B) List of the most frequently altered Warburg effect-related genes. (C) The locations of Warburg effect-related genes on chromosomes. (D) Correlation of Warburg effect-related genes. The depth of the colors indicates the strength of the correlation. *p < 0.05.
F I G U R E 3 Identification of the WRGs score as an independent prognostic factor. (A) Cox analysis in the TCGA cohort. (B) Cox analysis in the CGGA cohort. Univariate analysis is on the left and multivariate analysis is on the right.

F I G U R E 4 Construction and validation of the nomogram. (A) Nomogram integrating the WRGs score and clinical parameters. (B) Overall survival (OS) comparison between the high- and low-WRGs groups according to the nomogram in the TCGA cohort (p < 0.01). (C and D) Calibration curve and area under the curve (AUC) of the nomogram at 6, 12, and 24 months of survival in the TCGA cohort. (E) OS comparison between the high- and low-WRGs groups according to the nomogram in the CGGA cohort (p < 0.01). (F and G) Calibration curve and AUC of the nomogram at 6, 12, and 24 months of survival in the CGGA cohort.

F I G U R E 5 Immune infiltration, tumor mutation burden (TMB), and microsatellite instability (MSI) in the TCGA cohort. (A) Heatmap of immune infiltration between the low- and high-WRGs groups. (B-D) Comparison of the stromal score, immune score, and tumor purity between the low- and high-WRGs groups. (E and F) Comparison of immune cell abundance in the low- and high-WRGs groups. (G and H) Comparison of TMB and MSI in the low- and high-WRGs groups. *p < 0.05, **p < 0.01, ***p < 0.001.

F I G U R E 6 Identification and validation of sensitive drugs targeting Warburg effect-related genes. (A) Sensitive drugs targeting the eight Warburg effect-related genes. (B-D) Survival rates of U87, U251, and LN229 GBM cells after treatment with different concentrations of BEC. (E) Effect of BEC on glucose uptake and lactate production in U87, U251, LN229, and P27 GBM cell lines. (F) Effect of BEC on glucose uptake in U87, U251, LN229, and P27 GBM cell lines. (G) Effect of BEC on lactate generation in U87, U251, LN229, and P27 GBM cell lines. Glucose uptake and lactate generation assays were performed five times each. The Wilcoxon test was used for data analyses. *p < 0.05, **p < 0.01.
7,854
2023-10-21T00:00:00.000
[ "Biology", "Medicine" ]
Sensor-Based Optimization Model for Air Quality Improvement in Home IoT We introduce current home Internet of Things (IoT) technology and present research on its various forms and applications in real life. In addition, we describe IoT marketing strategies as well as specific modeling techniques for improving air quality, a key home IoT service. To this end, we summarize the latest research on sensor-based home IoT, studies on indoor air quality, and technical studies on random data generation. In addition, we develop an air quality improvement model that can be readily applied to the market by acquiring initial analytical data and building infrastructures using spectrum/density analysis and the natural cubic spline method. Accordingly, we generate related data based on user behavioral values. We integrate the logic into the existing home IoT system to enable users to easily access the system through the Web or mobile applications. We expect that the present introduction of a practical marketing application method will contribute to enhancing the expansion of the home IoT market. Introduction The home Internet of Things (IoT) is not entirely new. In the early 2000s, the widespread use of high-speed Internet and the wired Internet-based home network market rapidly expanded. The recent introduction of the home IoT is an extension of the existing market fostered by the development of the wireless Internet environment and machine to machine (M2M) technology [1]. While the existing home network has limitations in market expansion owing to the prevalent use of the wired network, the current home IoT can connect more diverse devices on account of the advancement of related telecommunication technologies. Accordingly, the current home IoT is distinguished from the existing home network and can be referred to as a new "ecosystem". The key features of the home IoT platform technology are summarized in Table 1. Nine core functions that should characterize a home IoT platform are listed. In modern society, people spend more time indoors than outdoors. According to a World Health Organization study, people reside indoors for more than 21 h a day. The degree of indoor pollution varies by up to three times per individual, depending on the length of residence [7]. Indoor air is more polluted than outdoor air, which is naturally purified. However, it is not easy to recognize this condition and properly address it in real life. According to a US Environmental Protection Agency survey, the concentration of indoor air pollutants is two to five times, or even as much as 100 times, higher than outdoor air pollutants. It is well known that various kinds of volatile organic compounds (VOCs) that are harmful to humans are generated in indoor building materials, paints, and adhesives, which cause skin diseases and allergies [8]. As a practical example of indoor pollution, the health problem of "building syndrome" has emerged, with occupants complaining of temporary or chronic health problems relating to the building. We thus developed a model that can check the pollution of the indoor air in real time through a project conducted by "company A". According to the model, the user is notified of the indoor air status and the appropriate ventilation time when necessary. The system is implemented to enable control from outside the building in real time by means of the user's smartphone. Table 1. Nine key features of the home IoT platform. 
Function | Description | Prior studies
Auto Configuration | Functions for device installation and easy configuration processing | Spanò et al. [2]
Remote Monitoring | Function to monitor human and object behavior according to space and time |
Situation Awareness | Function for real-time recognition of natural environment changes according to the situation | Alirezaie et al. [3]
Sensor-Driven Analytics | Function to support human decision-making through specific analysis and data visualization | [6]

In this paper, we introduce the general procedure of a home IoT solution connecting device sensors, IoT infrastructure, data processing, and mathematical modeling. We describe a related marketing strategy for the solution. In addition, we present specific modeling techniques for improving air quality, which is a key home IoT service. The relatively sophisticated modeling technique is presented from an academic perspective. It is expected that the presented research will contribute to increasing the market integration of this type of solution through the practical commercialization of models that can be readily applied. The remainder of this paper is organized as follows. In Section 2, we summarize the smart home IoT system based on user value and service vision, along with research related to indoor air quality data processing and control systems. In Section 3, we describe our research design, which includes data collection and generation (scenario 1) and user-behavior settings with various statistical methodologies (scenario 2). We additionally introduce a marketing strategy for the commercialization of home IoT technology. In Section 4, we conclude the paper and highlight the theoretical and practical implications of our research. IoT and User Behavior Value IoT is a system in which intelligent objects are connected in a physical or virtual space, and a network is formed between people and objects, or between objects and objects [9]. IoT can also be defined as a global infrastructure that provides intelligent services by combining knowledge based on context awareness. Implementation of an IoT requires an embedded system represented by things, a bi-directional communication environment, including the Internet, and commercial software to process the data. IoT began with the ability to remotely control lighting, thermostats, and security devices in everyday life [10]. This ability can be viewed as a function that satisfies users' behavioral values (UBVs) of management, promptness, and information [11]. Since then, IoT has evolved into a means of exchanging information between objects, and the "If This Then That" (IFTTT) concept has become universal, satisfying the values of scalability and automation. IFTTT is a service for linking various programs and applications on the Internet with a computer through a command "recipe" [12]. In recent years, IoT in daily life has tended to expand its services around home IoT fused with an artificial intelligence (AI) client. This enables users to manage multiple Internet devices more conveniently with voice commands. In particular, one report summarizes existing high-level techniques in gas sensing from IoT-related papers published within the last five years; the research was tested in a kitchen environment that contained several objects monitored by different sensors [13]. The authors of that report introduced a representational and reasoning model for the interpretation of a gas sensor situated in the sensor network.
The interpretation process includes inferring high-level explanations for changes detected over the gas signals. Inspired from the Semantic Sensor Network (SSN), the ontology used in this work provides an adaptive way of modeling the domain-related knowledge. Furthermore, exploiting Answer Set Programming (ASP) enables a declarative and automatic way of rule definition. Converting the ontology concepts and relations into ASP logic programs, the interpretation process defines a logic program whose answer sets are considered as eventual explanations for the detected changes in the gas sensor signals [14]. As the home IoT has become more convenient, it has become more widely used in everyday life. However, with this greater prevalence, users have become increasingly concerned about related privacy, security, and safety issues of home IoT devices. This concern is particularly the case with respect to the numerous sensors and communication devices involved. From the UBV perspective, IoT is demonstrating that the value placed on safety has recently increased along with universal UBV, such as manageability, speed, and scalability [11]. We derived 28 items on UBV based on the previous six years of IoT-related studies and theories of change. We redefine the three UBVs, as shown in Table 2, by incorporating the overlapping or similar concepts. The theory of change emerged from the field of program theory and program evaluation in the mid-1990s as a new means of analyzing theories motivating programs and initiatives toward social and political change [19]. The theory of change generates knowledge about whether a program is effective, while explaining what methods the program can employ to be effective. In the early days of the theory of change, Kubisch established three quality control criteria to combine theory with traditional manufacturing, environmental psychology, organizational psychology, sociology, and political science [20]. The three criteria are plausibility, feasibility, and testability. Since the three criteria have been gradually extended to research on the theoretical background of system maintenance and software upgrades in information and communication technology, they have been used in various terms and as different values [21]. First, plausibility refers to the "logic of outcomes" pathway. In other words, it is the user's expectation of or satisfaction with the accuracy and logic of the new technology in terms of UBV. Plausibility has been replaced by the meaning of relationship, sociality, and convenience in later studies. We redefine plausibility as interactivity by grasping the accuracy of the technique and the satisfaction of users accordingly. Second, feasibility refers to whether the initiative can realistically achieve its long-term outcomes and impacts. This has been handled in research in terms of the manageability of technologies to solve psychological problems related to the user's reticent relation to the given technology. Thus, we contend that people using home IoT products or services can relinquish their technical reticence and gain psychological flexibility through certain values. We redefine all of these values as stabilities. Finally, testability refers chiefly to the indicator that measures the importance of users' behavioral values. In other words, it is a type of instrumental utility that quantitatively measures thought flow and change. Recently, information and communication technologies (ICT) research has replaced testability with a kind of functionality. 
In this study, we redefine it as the comprehensive meaning of UBVs, such as scalability, compatibility, and promptness. Studies on Improvement of Indoor Air Quality A pleasant indoor environment is determined by the comprehensive action of various indoor environmental factors. In recent years, there has been a growing interest in indoor environmental factors that directly affect the degree of comfort for people who reside indoors, including temperature and humidity. In addition, there is a continuing need to manage indoor air quality factors, such as fine dust and carbon dioxide, which are closely related to human health [22]. According to US Environmental Protection Agency research, the causes of indoor hazardous substances are carbon dioxide (CO 2 ), nitrogen dioxide (NO 2 ), sulfur dioxide (SO 2 ), ozone (O 3 ), fine dust, heavy metal, asbestos, volatile organic compounds (VOCs), formaldehyde (H-CHO), microbial substances, and radon (Rn). Various gas measurement sensors for indoor air pollution sources have been developed and employed. Moreover, studies and development are currently underway on technologies that quickly detect flammable or toxic gases and respond accordingly [23]. Research on indoor air quality sensing has been conducted for various public places of everyday life, such as subways, schools, department stores, and offices. Paulos et al. [24] developed a system for measuring and monitoring office air quality through research on the office indoor air environment and work efficiency. As a result of controlling the system through a wireless sensor network linked to mobile devices, the overall work efficiency of the employees increased. Kanjo [25], Lohani and Acharya [26] developed their own environmental information monitoring system that applies precautions, such as indoor fine dust reduction, by using a mobile wireless LAN. The author showed that employee satisfaction with the work environment increased. Hwang and Yoe [27] monitored and analyzed indoor environment information through closed-circuit television (CCTV) and public environment information using an application programming interface (API). In addition, they developed an indoor environmental control system based on automatic situation recognition. Wang et al. [28] and Pötsch et al. [29] developed a wireless-sensor-based indoor environmental monitoring system for green buildings and the LoRaWAN stack, respectively. The system visualizes collected indoor environment data and measurement position data, and it distributes the temperature sensor to various locations in the target space. Moreover, it communicates the temperature in each space using a step color chart. Specifically, the authors calculated the distance from a window and installed sensors at three levels above the horizontal point. Their system visualizes the collected data as a three-dimensional space chart according to the spatial distribution. In a study on an indoor air quality monitoring system, researchers divided the measurement values of the fine dust concentration on the floor plan of the space into multiple spaces and expressed them in two or three dimensions [30]. The system has a simple structure for intuitively grasping the indoor environmental condition, thus enabling a comparison of the dust concentration according to the space. Meanwhile, the studies of Salamone et al. [31] utilize more simple self-developed experimental tools. They installed the open-source Smart Lamp in a real office environment and tested the reliability of IoT equipment. 
Salamone et al. [32] conducted a ventilation efficiency evaluation according to the ventilation method of an indoor space using a computational fluid dynamics (CFD) technique. To this end, they developed a system for measuring toluene concentrations and visualizing them in three-dimensional (3D) charts, which were applied to the field and contributed significantly to lowering the average toluene concentration. Moreover, another paper presents a very important reference point on how to sense different kinds of gases. According to this study, the method of sensing various types of gas is described in detail. Additionally, the sensitivity (the minimum value of the target gas volume concentration when the gases could be detected) and the selectivity (the ability of gas sensors to identify a specific gas among a gas mixture) are regarded as very important measures for evaluating stability in gas sensing. In addition, it was explained that response time (the period from the time when gas concentration reaches a specific value to that when the sensor generates a warning signal), energy consumption, reversibility (whether the sensing materials could return to their original state after detection), adsorptive capacity, and fabrication cost are important factors. As shown by the above research examples, most studies related to indoor air quality improvement involved developing a system that is suitable for a specific environment. This approach is difficult to apply to all environments of a given workplace using a standardized sensor device. Moreover, it cannot achieve the ultimate result needed for the actual user in the workplace, which is the reduction of harmful indoor components. In view of recent trends in the previous research, it can be observed that constructing the system environment that we planned, and creating the data through the distribution of the sample data, which is the methodology that is appropriate for it, is a very effective methodology. In other words, just as many experimental studies create experimental environments that can control variables themselves, we cannot only set specific situations, but we can also scientifically carry out all experimental steps consisting of system design, instrument connection, data communication control, sample data distribution analysis, and function estimation and verification. Many customized studies have been conducted through these actual system building processes [25,[28][29][30][31][32], and the results are reflected very successfully in practice. From the researcher viewpoint, it is more effective to develop a system suitable for the environment and apply it to identifying problems and finding solutions. According to these trends, we intend to develop an air quality improvement system that can be applied to the apartment, the most common Korean housing type. Technique of Random Data Generation There are several ways in which we can amplify data within a given error-term. In particular, many previous studies on random number generation have been conducted based on the following three trends. First, in the information technology (IT) field, random number generation and its statistical evaluation have been mainly performed in the research of cryptography and system security. Second, prior research on random number generation in the financial sector has been predominantly focused on predicting how stock and bond values will change in response to changes in interest rates and other macroeconomic variables. 
Finally, another area that heavily uses random number generation is the traditional use of statistical tests to generate test data in areas where mathematical proofs are required. Xiao et al. [33] argued that the most important point in generating test data is finding an efficient optimization algorithm. They generated test data using a genetic algorithm (GA), simulated annealing (SA), and genetic simulated annealing (GSA), and they concluded that GA is the best optimization algorithm for generating test data. Several studies were conducted to improve the efficiency of test data generation by improving existing optimization algorithms. Alba and Chicano [34] applied parallel GA to test data generation, and Mousa et al. [35] suggested application of a memetic algorithm that combines GA and local optimization algorithms. Watkins and Hufnagel [36] compared the fitness evaluation functions used to generate the test data. The results showed that the most efficient fitness evaluation functions for generating test data are BP1, BP2, and IPP. Monte Carlo simulation (MCS) has been considered the most effective technique for random number generation for complex financial products. MCS is a common method that involves numerical integration based on random sampling. However, since random sampling is inherently a brute force method (BFM), many trials are required to maintain a high accuracy and minimum error rate, which is also time consuming. To solve this problem, Mallat [37] used the random number generation scheme (RNGS) to investigate bond values. This method stratifies sampling of interest rate data through a uniform distribution, applies an inverse-transform technique, and then obtains a random variable of an inverse function. The study of random sampling in the financial sector has centered on the interest rate structure; however, it has supplemented various alternative financial models, such as the standard Wiener process (SWP) [38]. In other words, a cumulative (or spectral or density) distribution function of the actual sampling data was converted into rich interest rate data and eventually the distribution function of the random data was generated through natural cubic spline (NCS) interpolation. In this study, we employ Gerald and Wheatley's random number generation method. We create a density distribution function based on the actual home IoT data, such as the indoor air quality concentration from apartment complexes and the API data provided by a meteorological office, and we extract the basic data. Based on these data, we generate a random number function for the last year of data through NCS interpolation. Sensor-Based Modeling Framework The model framework design for our study is divided into total three stages. First, as a preprocessing step, we select information, generate sample data, and pack it according to the time variable. The second step is the process of creating the model by building logic for the data. In this case, we proceed through two processes. First, we construct the model with the static data completed in the preprocessing process. Second, we construct the model through the variable data, such as the user behavior data. In the final step, post-processing, we evaluate the accuracy of the actual data with a continuous test, and we connect the constructed model to the existing interface. This sequence of steps is shown in Figure 1. As mentioned earlier, we employ in this study the random number generation method presented in a previous study. 
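A minimal sketch of that generation scheme may help before the pipeline details: the empirical cumulative distribution of observed sensor values is interpolated with a natural cubic spline, and uniform random numbers are pushed through the resulting quantile function. The original work used SAS; this Python/SciPy version and its data are only illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(42)
# Hypothetical observed sensor values (e.g., indoor CO2 concentration in ppm).
observed = rng.normal(650, 80, size=200)

# Empirical CDF evaluated at the distinct observed values.
values = np.unique(observed)
cdf = np.searchsorted(np.sort(observed), values, side="right") / observed.size

# Natural cubic spline of the quantile function (inverse CDF): probability -> value.
quantile = CubicSpline(cdf, values, bc_type="natural")

# Inverse-transform sampling: uniform random numbers become synthetic sensor values
# with approximately the same distribution as the observations.
u = rng.uniform(cdf.min(), cdf.max(), size=1000)
synthetic = quantile(u)
print(round(observed.mean(), 1), round(synthetic.mean(), 1))
```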
We conduct a spectrum analysis based on the actual home IoT data and the public API data, and achieve a prototype of the sample data. We also apply the NCS interpolation method to the prototype data to generate a random number function for the last year of data. Infrastructure We design the mobile application to transmit the information of each situation to the server so that users can collect IoT status information according to their situation. At this time, the managerial server that receives the user information simultaneously requests the status information of all the user's home IoT devices, and it also structures and stores all the received information. The overall infrastructure is comprised of several components. First, the front-end receives the user's status information from the mobile application and it helps the server structure and store data through its embedded business logic. Second, the client, acting as a data receiver, retrieves the change information of the user's home IoT devices through the broker instance built into the system. It structures the data through the server's business logic. Third, the IoT managerial connector, a module for communicating with the external server, manages the home IoT device information for each user. It also receives IoT device information and stores data at specific time intervals. Finally, the data formatter structures the state information of the user and the device proceeds through each module. This infrastructure is shown in Figure 2. Each log data is structured and stored in Hadoop (Hortonworks), a data distribution storage processing framework. The document type can be divided into general data entered in the API, real user context data, and other data from connected home IoT devices. All of the IoT log collection servers that comprise this system are built in an Amazon Web Services (AWS) environment. Each component server constituting the system is composed as follows. First, we configure the log collecting Web server as an instant type (four CPUs, 8 GB memory, respectively). Second, in the case of the broker instance that transmits information of the IoT device, we construct an instance system by additionally connecting a 200-GB hard disk drive (HDD) to enable stable data transmission. Finally, in the case of Hadoop, which stores all user information, a 1-TB HDD is additionally connected to accommodate instantaneously changing data. Preprocessing After reviewing and evaluating as much information as possible about the air quality at the stage of variable selection, we identify the source of the relevant data, consider the possibility of analyzing the data, and finally select the variables. The selected variables are 21 in total. Among these, 12 outdoor data are retrieved via a public API, and indoor data are obtained from existing home IoT data. The results are shown in Table 3. In this study, statistical software packages such as SAS 9.4 (SAS Institute Inc., Cary, NC, USA), SAS Enterprise Miner v.13.1 (SAS Institute Inc., Cary, NC, USA) are applied to analyze the sensor data. As mentioned earlier, we visually check the temporal and seasonal flow of each variable and then apply the NCS interpolation method to each variable. In other words, we perform random number generation to fill each sample period consecutively in seconds for one year. This process is shown in Figure 3. In the next step, we replace the existing linear flow with a probability distribution function to make each variable value more fluid and objective. 
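The per-second packing just mentioned (Figure 3) can be illustrated with pandas before turning to the distribution fitting described next; the readings, their one-minute spacing, and the one-hour window are hypothetical stand-ins for the one-year, 21-variable dataset.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical sparse readings of one variable (e.g., PM10), one value per minute.
minutes = pd.date_range("2018-01-01 00:00:00", periods=60, freq="min")
raw = pd.Series(rng.normal(45, 5, size=60), index=minutes, name="pm10")

# Target index in seconds (the paper packs a full year; one hour keeps the sketch small).
seconds = pd.date_range("2018-01-01 00:00:00", "2018-01-01 00:59:59", freq="s")

# Reindex to seconds and fill the gaps between readings by time-based interpolation.
pm10 = (raw.reindex(raw.index.union(seconds))
           .interpolate(method="time")
           .reindex(seconds))
packed = pm10.to_frame()  # further variables would be joined on the same index
print(packed.shape)       # (3600, 1)
```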
To this end, we use a cumulative distribution analysis (or density and spectrum analysis) method. We take an inverse function to express the probability function thus created as a variable coefficient value of one or less. Finally, we create a final data set for analysis by sorting the values of each variable into time variables in seconds and grouping them together. This process is shown in Figure 4. First-Round Analysis In the first round of the study, we strive to mathematically estimate and derive optimal indoor ventilation times to ensure the uncontaminated air quality. As a preliminary step, we estimate the environmental factors correlated with indoor air pollution, and we develop a model to derive air ventilation (Vq) and ventilation time (Vt) for optimal indoor air quality. The specific process is as outlined as follows. First, variables measured through the sensor are monitored to set an alarm when a certain threshold (pollution degree: 80%) of air pollution is exceeded. Secondly, we analyze pollutant variables that have the greatest impact on air pollution through correlation analysis of various air quality variables when an alarm occurs. Third, we compare indoor-outdoor observations of pollutant variables to determine whether indoor air is clean. Fourth, we predict the ventilation rate (Vq) by estimating the amount of the pollution factor. In this case, the amount of ventilation can be derived from the air pollution concentration minus the allowable pollution concentration as the denominator and the pollutant generation amount as the numerator. Finally, the optimal ventilation time (Vt) is estimated through the ventilation amount (Vq). This process is shown in Figure 5. Second-Round Analysis We create the UBV model by adding user-customized data in the second-round study, while creating the model for the existing fixed data in the first-round study. In other words, we add seven additional user variables to the existing 21 data items to estimate the optimal indoor ventilation time. This enables creation of a more subjective dataset with a range of predictions. Table 4 summarizes seven variables, which are classified into three categories: data from home IoT devices (4), data classified by a person's characteristics (2), and three levels of place sizes (1). As shown in Figure 6, we obtain data in minutes from three sources. The first source is the home IoT device data stacked on the server. The second is a custom value we randomly group into four types. The last data source is the size of the place divided into three types. We create a new user-centric model by adding these three additional data items to the existing model. In other words, we intend to provide customized services for individual users. We thus design the set values into groups and develop a flexible logic according to the users. Technically, we construct new personal data for a total of 200 individuals, each consisting of 50 individuals in each of the four areas. The vertical axis denotes the residence time; the horizontal axis represents the user's sensitivity to dust. Accordingly, we finally obtain a relational function model based on UBV. Post-Processing The most important goals of post-processing are summarized in the following two points. First, as shown in Figure 7, we realign and advance the logic through posterior conformance testing, which is a repeated visual plotting test of personalized data. 
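One way to automate part of such a conformance test is a simple error-band check on each newly generated series; a minimal sketch follows (the 10% band and the flat reference profile are hypothetical choices, not the authors' criteria):

```python
import numpy as np

def within_tolerance(generated, reference, rel_tol=0.10):
    """True where each generated value lies within +/- rel_tol of the reference value."""
    return np.abs(generated - reference) <= rel_tol * np.abs(reference)

rng = np.random.default_rng(7)
reference = np.full(100, 650.0)                          # e.g., expected CO2 profile (ppm)
generated = reference + rng.normal(0.0, 30.0, size=100)  # newly generated random values

ok = within_tolerance(generated, reference)
print(f"{ok.mean():.0%} of the generated samples fall inside the 10% error band")
```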
Because our logic is automatic, regular, and dynamically generated for random numbers at specific locations and points in time, as well as for specific users, we believe it is necessary to stabilize them. Therefore, it is necessary to continuously check whether the value is within the error range under a certain condition while continuously visually confirming based on the newly generated random number. The second is to integrate all the created logic into the existing interface to enable users of the home IoT system via an existing PC or mobile device to actually observe the logic working. To this end, we used SAS Event Stream Processing (ESP; SAS Institute Inc., Cary, NC, USA), which provides streaming data of operations, transactions, sensors, and IoT devices in real time and visually presents them to the user. This process is depicted in Figure 8. Marketing Prospects Depending on the use of the developed model in the business domain, we expect a notable adoption expansion in several markets. First, the model can be immediately applied to existing buildings as well as new buildings. This applied model will help to increase the market value of the building. That is, it can increase the value of existing products without requiring additional hardware or system changes. It is thus effective in terms of profit increases. In addition, the developed logic can be sold not only in the business-to-business market, but also as a business-to-consumer-specific product, by specializing it in a user-customized model. Second, by applying the latest data analysis model, we can expand brand awareness in the construction market. In maintaining the recent trend of the fourth industrial revolution, applying the latest IoT technology to the construction field can be expected to enhance the brand image in the home IoT market. In addition, we can expect to gain market dominance by supplying additional hardware and systems by selling artificially intelligent home IoT products, such as noise detectors and motion detectors, in line with rising brand awareness. Conclusions Recently, IoT has been used in a wide range of industries, including the smart home, health care, automobile, and energy industries. Many home IoT devices have already been integrated in our daily lives. In addition, IT-oriented companies and telecom companies that are leading IoT are expanding their market by developing technology-oriented products and services, while attempting to build a user-centered home IoT environment. Therefore, in this study, we conducted a literature survey on the user value of IoT based on the theme of air quality improvement among the IoT service examples. We focused on the user value that can be satisfied through related products and services. We also introduced random number generation as a method for reasonably amplifying analytical data, and created user-based analysis models through various data from smart home devices and public APIs. Moreover, we linked them to existing infrastructures. Finally, we described the use of the developed model for marketing purposes.
6,701.8
2018-03-23T00:00:00.000
[ "Computer Science" ]
FREE BOUNDARY PROBLEM FOR A REACTION-DIFFUSION EQUATION WITH POSITIVE BISTABLE NONLINEARITY . This paper deals with a free boundary problem for a reaction- diffusion equation in a one-dimensional interval whose boundary consists of a fixed end-point and a moving one. We put homogeneous Dirichlet condi- tion at the fixed boundary, while we assume that the dynamics of the moving boundary is governed by the Stefan condition. Such free boundary problems have been studied by a lot of researchers. We will take a nonlinear reaction term of positive bistable type which exhibits interesting properties of solutions such as multiple spreading phenomena. In fact, it will be proved that large-time behaviors of solutions can be classified into three types; vanishing, small spreading and big spreading. Some sufficient conditions for these behaviors are also shown. Moreover, for two types of spreading, we will give sharp esti- mates of spreading speed of each free boundary and asymptotic profiles of each solution. MAHO ENDO, YUKI KANEKO AND YOSHIO YAMADA When f satisfies (PB), we say that f is a function of positive bistable type. Initial function u 0 satisfies u 0 ∈ C 2 ([0, h 0 ]), u 0 (0) = u 0 (h 0 ) = 0 and u 0 (x) > 0 for 0 < x < h 0 . (1.1) A free boundary problem like (FBP) was first proposed by Du and Lin [5] to describe the invasion of a new species by putting homogeneous Neumann condition at x = 0 in place of Dirichlet condition. We denote such a free boundary problem by (FBP-N). Function u(t, x) stands for the population density of the species over onedimensional habitat (0, h(t)). The free boundary x = h(t) represents the expanding front of the habitat and its dynamics is determined by the Stefan condition of the form h ′ (t) = −µu x (t, h(t)). For the ecological meaning of this condition, see [2]. Du and Lin studied (FBP-N) with logistic nonlinearity f (u) = u(a−bu), a, b > 0, and established various interesting results such as spreading-vanishing dichotomy and asymptotic behaviors of solutions as t → ∞ as well as the existence and uniqueness of global solutions. In particular, it was shown that any solution (u, h) of (FBP-N) satisfies either vanishing or spreading: vanishing means the case where lim t→∞ h(t) ≤ (π/2) √ d/a and lim t→∞ u(t, ·) C([0,h(t)]) = 0, while spreading means the case where lim t→∞ h(t) = ∞ and lim t→∞ u(t, x) = a/b locally uniformly for x ∈ [0, ∞). Since the appearance of their work, a lot of people have investigated (FBP), (FBP-N) and related free boundary problems (see, e.g. [2]- [7], [9]- [20], [22], [25]- [27] and references therein). Among them we should refer to the work of Du and Lou [6], who have discussed a similar problem to (FBP-N) (or (FBP)) by putting free boundary conditions at both ends of the interval. As one of the most important results, it was shown that the analysis of large-time behaviors of spreading solutions is closely related to the following semi-wave problem where u * is a positive equilibrium point of f such that f ′ (u * ) < 0. When f is monostable, bistable or combustion type of nonlinearity satisfying f (0) = f (1) = 0 and f (u) < 0 for u > 1, it was proved in [6] that (SWP) with u * = 1 admits a unique solution (c, q) = (c * , q * ). Their results suggest that (c * , q * ) is available to study asymptotic behavior of any spreading solution. For its sharper asymptotic estimates, see the paper of Du-Matsuzawa-Zhou [9]. Assume that f is given by which is a combination of a logistic term and a predation term called Holling Type III. 
This is one of the important reaction terms in population biology and discussed in detail by Ludwig, Aronson and Weinberger [21] as the spruce budworm model. It is known that, if a and b in (1.2) satisfy a certain condition, then such f possesses property (PB). In this case, we can interpret u * 1 as a low endemic state and u * 3 as an outbreak state, while u * 2 is a threshold population density. When f satisfies (PB), we recall the work of Kawai and Yamada [16] for (FBP-N). They have succeeded in the classification of solutions of (FBP-N) into four types of asymptotic behaviors: vanishing, small spreading, big spreading and transition. In particular, (FBP-N) with positive bistable nonlinearity f exhibits two types of spreading phenomena; one is the small spreading of solution (u, h) with lim t→∞ u(t, ·) = u * 1 locally uniformly in [0, ∞) and the other is the big spreading of solution (u, h) with lim t→∞ u(t, ·) = u * 3 locally uniformly in [0, ∞). Moreover, it was also proved in [16] that under certain circumstances (SWP) does not have a solution, which is a big difference from previous results for other types of nonlinearity. In this sense, positive bistable f provides us with interesting and significant properties for (FBP-N). Recently, it was proved by Kaneko-Matsuzawa-Yamada [15] that, if (SWP) has no solutions, the corresponding spreading solution approaches a propagating terrace. Our interest is to investigate the following issues for positive bistable function f : • What kind of asymptotic behaviors of solutions of (FBP) can be found? • Are there any differences in asymptotic behaviors between (FBP) and (FBP-N)? • Is it possible to get precise estimates of when h(t) → ∞ as t → ∞? As the first step, we will show that any solution (u, h) of (FBP) satisfies one of the following properties: Here v 1 and v 3 are bounded solutions of the following problem respectively. Note that (SP) has no bounded solutions other than v 1 and v 3 (see Proposition 3.1). In order to get better understanding on the above asymptotic behaviors, we will introduce parameter σ > 0. Let any (u 0 , h 0 ) satisfying (1.1) be fixed and consider (FBP) with (u 0 , h 0 ) replaced by (σu 0 , h 0 ). We denote such a free boundary problem by (FBP) σ . Let (u(t, x; σ), h(t; σ)) be the solution of (FBP) σ . Then it is possible to show the existence of two threshold numbers σ * 1 and σ * 2 (σ * 1 < σ * 2 ) such that the vanishing of (u(t, ·; σ), h(t; σ)) occurs for 0 ≤ σ ≤ σ * 1 , the small spreading of (u(t, ·; σ), h(t; σ)) occurs for σ * 1 < σ ≤ σ * 2 and the big spreading of (u(t, ·; σ), h(t; σ)) occurs for σ * 2 < σ. As the second step, we will derive asymptotic estimates for two types of spreading solutions. Let (u, h) be any big spreading solution of (FBP) and let (SWP) with u * = u * 3 admit a unique solution (c B , q B ). (For the existence and nonexistence of such a solution, see [16]). Then we will prove that (u, h) satisfies for any c ∈ (0, c B ). In this sense, (c B , q B ) gives a good approximation of (u, h) near the free boundary x = h(t) for large t. Moreover, we can also show that for any For any small spreading solution (u, h), it will be seen that analogous estimates as Here we should remark that there exists a small spreading solution which does not satisfy this condition. 
For example, when we take (u(t, x; σ * 2 ), h(t; σ * 2 )) which is a borderline solution between the small spreading and the big spreading for (FBP) σ , this solution will be proved to satisfy lim t→∞ u(t, This is a new "borderline" behavior which can not be observed in the study of (FBP-N). We have not obtained satisfactory asymptotic estimates for such small spreading solution. This paper is organized as follows. In Section 2 we will prepare some basic results such as the existence theorem of global solutions, comparison theorem, vanishing theorem and spreading theorem. In Section 3 we study (SP) and related stationary problem by the method of the phase plane analysis. In Section 4 we will investigate large-time behaviors of solutions such as the classification of asymptotic behaviors, sufficient conditions for each behavior and the existence of threshold numbers for (FBP) σ by using parameter σ ≥ 0. Finally, in Section 5 we will derive precise asymptotic estimates for the spreading speed of the free boundary and sharp estimates for asymptotic profiles of spreading solutions with use of a semi-wave solution of (SWP), which corresponds to the spreading solution. 2. Basic properties. We first state the global existence result for (FBP). and there exist positive constants We define spreading and vanishing of solutions under general situations. We next give a comparison theorem for (FBP). Theorem 2.3 (Comparison theorem). Suppose thath Proof. The proof of this theorem is the same as that of [5,Lemma 3.5]. The following result is very useful for the analysis of asymptotic behaviors of the solution of (FBP) (see [12,Theorem 2.11]). We finally give a sufficient condition of the vanishing (see [12, Theorem 2.10]). 3. Analysis of stationary problem. To apply the results in Section 2, we study (SP) and (SP-ℓ) with nonlinearity f satisfying (PB) by making use of the phase plane analysis (see for instance [24] and Figure 1). We first give the existence of bounded nonnegative solutions of (SP) without proof. Proposition 3.1 (Existence of bounded solutions of (SP)). Under assumption In order to find a solution of (SP-ℓ) we consider the following initial value problem Let v = v(x; P ) be a solution of (3.1) and define ℓ = ℓ(P ) by We also define where v P = inf{v > 0 : F (v) = dP 2 /2}. Note that if one can find P * satisfying ℓ(P * ) = ℓ, then v(x; P * ) becomes a solution of (SP-ℓ). The following result gives an elementary property of ℓ(P ). For the proof of this lemma, see [21]. Lemma 3.2 ensures the existence of a minimum of ℓ(P ) in (ω 1 , ω 3 ), namely We are thus led to the following result on the structure of solutions of (SP-ℓ) by virtue of Lemma 3.2. Asymptotic behaviors of solutions. In this section, we will study asymptotic behaviors of solutions of (FBP) as t → ∞. Classification of asymptotic behaviors. Our first main result is the classification of solutions of (FBP) in terms of their asymptotic behaviors. To prove Theorem 4.1, we will prepare a series of lemmas. where v 1 and v 3 are given in Proposition 3.1. where ϕ 1 is the solution to (SP-ℓ). Applying Theorem 2.4 to the solution of (FBP) with initial data where M > 0 is a constant such that M > max{ u 0 C([0,h0]) , u * 3 }. Then it follows from the standard comparison principle that (4.1) Moreover, since w 0 satisfies d(w 0 ) xx +f (w 0 ) < 0 for x ≥ 0, we see from the monotone method (see [23]) that w t (t, x) ≤ 0 for t > 0 and x > 0; that is, w(t, x) is nonincreasing with respect to t > 0 for each x > 0. 
Therefore, there exists a nonnegative functionv(x) such that Note w(t, x) ≥ 0 for t ≥ 0 and x ≥ 0 by the maximum principle. It can be proved thatv is a solution of (SP) (see, e.g., [12,Theorem 2.11]). Moreover since This equality together with (4.1) implies that This completes the proof. The following result can be easily proved by virtue of Lemmas 4.2 and 4.3. Corollary 4.4. Let (u, h) be the solution of (FBP) with initial data where v 1 and v 3 are given in Proposition 3.1. In order to prove Theorem 4.1, we will make use of the zero number arguments developed by Angenent [1]. Denote by Z I (w) the number of zero points of a continuous function w in an interval I ⊂ R. We should recall the following results which are extensions of Angenent that w(t, x) is a continuous function defined for t ∈ (t 1 , t 2 ) and x ∈ I(t) and that it satisfies in the classical sense, where c is a bounded function of t ∈ [t 1 , t 2 ] and x ∈ I(t). If w(t, −ξ(t)) = 0 and w(t, ξ(t)) = 0 for t ∈ (t 1 , t 2 ), then the following properties hold true: (i) Z I(t) (w(t, ·)) < ∞ for any t ∈ (t 1 , t 2 ) and it is non-increasing in t; (ii) If w(s, x) has a degenerate zero x 0 ∈ (−ξ(s), ξ(s)) at some s ∈ (t 1 , t 2 ) , then Z I(s1) (w(s 1 , ·)) > Z I(s2) (w(s 2 , ·)) for any s 1 ∈ (t 1 , s) and s 2 ∈ (s, t 2 ). Lemma 4.6. Let I ⊂ R be an open interval and let {w n (t, x)} ∞ n=1 be a sequence of functions which converges to w(t, x) in C 1 ((t 1 , t 2 ) × I). Assume that for every t ∈ (t 1 , t 2 ) and n ∈ N, the function x → w n (t, x) has only simple zeros in I and that w(t, x) satisfies an equation of the form (4.4) in (t 1 , t 2 ) × I. Then for every t ∈ (t 1 , t 2 ), either w(t, x) ≡ 0 in I, or w(t, x) has only simple zeros in I. We will prove the following convergence property of the solutions of (FBP) by using these zero number arguments and basic properties of the structure of ω-limit set. where v * is a bounded positive solution of (SP). Proof. Let ω(u) be an ω-limit set of u(t, ·) in the topology of L ∞ loc ([0, ∞)), that is, for every w ∈ ω(u) there exists a sequence 0 < t 1 < t 2 < · · · < t n < t n+1 < · · · → ∞ such that (4.5) By local parabolic regularity estimates, we can replace the topology of L ∞ loc ([0, ∞)) by that of C 2 loc ([0, ∞)). Since ω(u) is a compact, connected and invariant set, for any w ∈ ω(u) there exists an entire orbit {W (t, x)} t∈R with W (0, x) = w(x). This fact implies that for every w ∈ ω(u) there exists W (t, x) satisfying This convergence can be also replaced by the topology of C 1,2 loc (R×[0, ∞)) on account of parabolic regularity. Let We will investigate intersection points between W (t, x) and v(x). Let v 1 and v 3 be functions given in Proposition 3.1. Lemma 4.2 gives (4.7) Since . Therefore, by the phase plane analysis (see Figure 1), it is seen that either (i) v(x) > 0 for x > 0, or (ii) there exists a positive number R such that v(R) = 0 and v(x) > 0 for x ∈ (0, R). MAHO ENDO, YUKI KANEKO AND YOSHIO YAMADA First, we consider the case (i). Letû(t, x) be an odd extension of u(t, x) for t ∈ (0, ∞) and Thenû is a classical solution of wheref is defined byf Similarly, we also denote byv an odd extension of v over (−∞, ∞). Note thatv is also a classical solution of . . Therefore, for any t ∈ R,Û (t + t n , ·) has only simple zeros provided that n ∈ N is sufficiently large. Moreover, by (4.6), satisfies a parabolic equation of the form (4.4) for any (t, x) ∈ R 2 , it follows from Lemma 4.6 that, for every t ∈ R, eitherŴ (t, has only simple zeros in R. 
However we see that the latter case never occurs becauseŴ (t, 0) −v(0) =Ŵ x (t, 0) −v x (0) = 0 at t = 0. Therefore, W (t, x) ≡v(x) in R. Since the right hand side is not dependent on t, Thus any w ∈ ω(u) is equal to v which is a bounded positive solution of (SP). We will next exclude the case (ii). Assume that (ii) holds true. Since lim t→∞ h(t) = ∞, there exists a positive number T such that h(t) ≥ R for t ≥ T . By virtue of U (t, R) = 0 for t > T , we can repeat the previous argument with t ∈ (0, ∞) and replaced by t ∈ (T, ∞) and x ∈ [−R, R], respectively. Then it is possible to show that, for every t ∈ R, eitherŴ (t, x) −v(x) ≡ 0 for all x ∈ (−R, R), orŴ (t, x) −v(x) has only simple zeros in (−R, R). In the former case, we see that w(x) ≡ v(x) for x ∈ [0, R), which contradicts (4.7). On the other hand the latter case contradicts the fact thatŴ (t, x) −v(x) has a degenerate zero x = 0 at t = 0. In this way we conclude that the case (ii) never occurs. The proof is complete. . By the phase plane analysis, v * must coincide with v 1 or v 3 . The proof is complete. Sufficient conditions for asymptotic behavior. In this subsection we will give some sufficient conditions for (I)-(III) of Theorem 4.1. We first introduce a sufficient condition for the vanishing which can be proved in the same way as [11,Theorem 2.2]. then the solution (u, h) of (FBP) satisfies the vanishing. The following result gives a sufficient condition for the spreading when h 0 < π √ d/f ′ (0): ( then the solution of (FBP) satisfies the spreading. We next discuss the case u 0 C([0,h0]) > u * 1 . Let (u, h) be a solution of the following free boundary problem : (4.10) Then (u, h) is a lower solution to (FBP). Applying Theorem 2.3 yields h(t) ≤ h(t) and u(t, x) ≤ u(t, x) for t ≥ 0 and 0 ≤ x ≤ h(t). In view of u 0 C([0,h0]) = u * 1 , the preceding result implies the spreading of (u, h) and, therefore, (u, h), provided that ∫ h0 we complete the proof. We will show a sufficient condition for the small spreading of solutions. Let u = u(t, x) be the solution of the following problem where M > 0 is a constant satisfying max{u * 1 , u 0 C([0,h0]) } < M < u * 2 . Since u 0 satisfies d(u 0 ) xx + f (u 0 ) < 0, it is possible to prove the monotone decreasing convergence of u(t, x) as t → ∞ to a solution of (SP) in the same way as the proof of Lemma 4.2. Moreover, since u 0 ∈ (u * 1 , u * 2 ) for x ≥ 0, we see that lim t→∞ u(t, x) = v 1 (x) locally uniformly for x ≥ 0. Since u is an upper solution to (FBP), it follows from the comparison principle that Combining this fact and (4.11), we conclude which implies the small spreading of u(t, ·) as t → ∞. Finally, we will give a sufficient condition for the big spreading. Using Theorem 4.8, Lemma 4.12 and (4.12), one can see that σ * 1 given in (4.13) is the threshold number which separates the vanishing and the spreading: Theorem 4.13. Let (u(t, x; σ), h(t; σ)) be the solution of (FBP) σ with initial data (σϕ, h 0 ) for σ > 0. Then (u(t, x; σ), h(t; σ)) satisfies the vanishing for every σ ≤ σ * 1 and the spreading for every σ > σ * (4.15). For the proof of this theorem, see [6,Theorem 5.2] or [16,Theorem 3.7]. Next we will show that σ * 2 defined as (4.14) is the threshold number which separates the small spreading and the big spreading: Remark 4. If we consider (FBP-N), then the transition occurs at σ = σ * 2 when σ * 1 and σ * 2 are defined by (4.13) and (4.14) (see, [16,Theorem 3.8]). 
This fact means that the transition is a borderline behavior between the small spreading and big spreading in the case of zero Neumann boundary condition at x = 0. Remark 5. The notion of small spreading in Theorem 4.1 is defined by lim t→∞ h(t) = ∞ and lim t→∞ u(t, x) = v 1 (x) in [0, R] for any R > 0. It may be classified into two sub-cases; (i) lim inf t→∞ u(t, ·) C([0,h(t)]) < u * 2 , (ii) lim inf t→∞ u(t, ·) C([0,h(t)]) ≥ u * 2 . In particular, case (ii) implies that u(t, x) has a peak at x = x * (t) satisfying u(t, x * (t)) ≥ u * 2 for sufficiently large t. This is an interesting phenomenon, but we have no further information on this kind of small spreading. The phenomenon of case (ii) may correspond to the "transition", which is a borderline solution between small spreading and big spreading for solutions of (FBP-N).
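Although the analysis above is purely theoretical, the structure of (FBP) can be made concrete with a small numerical sketch. The following Python front-fixing scheme (rescaling x by h(t)) integrates u_t = d u_xx + f(u) on 0 < x < h(t) with u(t,0) = u(t,h(t)) = 0 and the Stefan condition h'(t) = -mu u_x(t,h(t)). Since the paper's positive bistable nonlinearity (1.2) and its parameters are not reproduced in this excerpt, a generic cubic bistable term and illustrative parameter values are used as stand-ins; varying the initial amplitude sigma mimics the family (FBP)_sigma, with no claim that the outcomes match the paper's threshold values.

```python
# A minimal numerical sketch of a Stefan-type free boundary problem of the form
# studied above: u_t = d u_xx + f(u) on 0 < x < h(t), u(t,0) = u(t,h(t)) = 0,
# h'(t) = -mu * u_x(t, h(t)).  The paper's nonlinearity (1.2) is not reproduced
# here, so a generic cubic bistable term is used purely as a stand-in.
import numpy as np

def f(u, a=0.3):
    return u * (1.0 - u) * (u - a)          # stand-in bistable reaction term

def solve_fbp(sigma, d=1.0, mu=1.0, h0=2.0, T=20.0, N=100, dt=5e-5):
    y = np.linspace(0.0, 1.0, N + 1)        # front-fixed coordinate y = x / h(t)
    dy = y[1] - y[0]
    u = sigma * np.sin(np.pi * y)           # initial datum sigma * u0, cf. (FBP)_sigma
    h, t = h0, 0.0
    while t < T:
        ux_front = (u[-1] - u[-2]) / (dy * h)        # one-sided u_x at x = h(t)
        dh = -mu * ux_front                          # Stefan condition
        uyy = np.zeros_like(u)
        uyy[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dy**2
        uy = np.zeros_like(u)
        uy[1:-1] = (u[2:] - u[:-2]) / (2.0 * dy)
        # front-fixed form of the PDE: u_t = d u_yy / h^2 + y (h'/h) u_y + f(u)
        u = u + dt * (d * uyy / h**2 + y * (dh / h) * uy + f(u))
        u[0] = u[-1] = 0.0                           # Dirichlet at both ends
        h += dt * dh
        t += dt
    return u.max(), h

for sigma in (0.2, 0.6, 1.2):                        # small, medium, large initial data
    umax, hT = solve_fbp(sigma)
    print(f"sigma = {sigma:.1f}:  max u(T, .) = {umax:.3f},  h(T) = {hT:.2f}")
```

Comparing max u(T, ·) and h(T) across the three values of sigma gives a rough numerical feel for the vanishing/spreading alternatives classified in Theorem 4.1, without reproducing the paper's sharp thresholds.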
5,069.4
2020-01-01T00:00:00.000
[ "Mathematics" ]
Ultralow temperature terahertz magnetic thermodynamics of perovskite-like SmFeO3 ceramic The terahertz magnetic properties of perovskite-like SmFeO3 ceramic are investigated over a broad temperature range, especially at ultralow temperatures, using terahertz time-domain spectroscopy. It is shown that the resonant frequencies of both the quasi-ferromagnetic and quasi-antiferromagnetic modes exhibit blue shifts with decreasing temperature due to the enhancement of the effective magnetic field. The temperature-dependent magnetic anisotropy constants are further estimated from the resonant frequencies, under the approximation of omitting the contribution of Sm3+ magnetic moments to the effective field. Specifically, the effective anisotropy constants in the ca and cb planes at 3 K are 6.63 × 10^5 erg/g and 8.48 × 10^5 erg/g, respectively. This thoroughly reveals the terahertz magnetic thermodynamics of orthoferrites and will be beneficial to applications in terahertz magnetism. Rare earth orthoferrites with distorted perovskite structure have attracted considerable attention in recent years [1][2][3]. This series of compounds has been found to possess G-type antiferromagnetic ordering formed by the Fe 3+ ion spins, and the precession frequency of the magnetic moments can extend into the terahertz regime due to the strong internal magnetic field 4,5. In addition, the canted spins induce weak macroscopic magnetization and ferroelectricity in some members 6. Therefore, ReFeO 3 -type oxides exhibit abundant physical properties such as terahertz magnetic response and multiferroic and magneto-optical effects 7-12. There are usually three competing exchange interactions in orthoferrites, arising from the Fe-Fe, Re-Fe, and Re-Re couplings. The Fe-Fe interaction determines the formation of antiferromagnetic ordering in the high-temperature region, while the Re-Fe exchange leads to magnetic anisotropy and further induces the spin reorientation (SR) 13. The Re-Re interaction, however, is activated only at very low temperature, where it contributes to the long-range magnetic ordering of the rare earth ions. For example, SmFeO 3 exhibits antiferromagnetic ordering along the a axis with a net spontaneous magnetization along the c axis below 670 K (the Néel temperature). The magnetic moments then rotate continuously from the a axis to the c axis between 450 K and 480 K due to the Sm-Fe interaction. The formation of long-range magnetic ordering in the Sm sublattice plays an important role in determining the macroscopic magnetic properties at very low temperature. Magnetization reversal was observed in SmFeO 3 below 5 K under a magnetic field of about 300-500 Oe, which can be ascribed to the antiparallel ferromagnetic moments of the Fe and Sm sublattices 6,14. This interesting phenomenon may have potential applications in magnetic switching under a weak applied field. Despite its terahertz antiferromagnetic resonances and these potentially useful phenomena, SmFeO 3 has not yet been investigated in the terahertz regime. In this work, we fabricate SmFeO 3 ceramic samples and characterize their terahertz magnetic properties over a wide temperature range. We discuss the magnetic thermodynamics of the SmFeO 3 ceramic in detail, including the temperature-dependent ferromagnetic and antiferromagnetic resonant frequencies of the Fe sublattice, as well as the contribution of the Sm spins to the macroscopic magnetization and magnetic resonance at ultralow temperatures.
Figure 1 shows the terahertz transmission frequency-domain spectra (normalized to the reference spectrum) of the SmFeO 3 ceramic between 3 K and 292 K. Only partial curves are presented to keep the tendency observable. Below 200 K, two dips are observed on the transmission curves, which can be ascribed to the so-called quasi-ferromagnetic mode (F mode) and quasi-antiferromagnetic mode (AF mode) of SmFeO 3 , respectively 15 . The resonant frequencies of F mode and AF mode are 0.34 THz and 0.62 THz at 200 K, respectively. As the temperature decreases, both the resonant frequencies of two modes exhibit blue shift. At 40 K, the respective frequencies are 0.55 THz and 0.70 THz. Below 40 K, the effect of temperature on the resonant frequencies becomes much more significant. When temperature lowers to 10 K, the frequencies of two modes increase to 0.67 THz and 0.80 THz, respectively. At 3 K, F mode and AF mode further harden, whose frequencies are 0.84 THz and 0.95 THz, respectively. It is worth noting that the resonant strength weakens at high temperatures. Specially, the dip attributed to F mode cannot be resolved from the background above 200 K, while AF mode also gets very weak at room temperature (RT), with a frequency of 0.57 THz. Results and Discussion The resonant frequencies of F mode and AF mode at various temperatures are extracted from the frequency-domain spectra and presented in Fig. 2. As mentioned above, the resonant frequencies for both modes undergo a sharp decrease over the range of 3 ~ 40 K, while above 40 K, the frequency-temperature curves slope gently downward, especially for the AF mode. Besides, the F mode data between 200 K and RT are not shown since it almost disappears in this temperature interval. Next, let us consider the physical origin of the resonant modes and the corresponding magnetic thermodynamics. The crystal structure of SmFeO 3 is shown in Fig. 3. As can be seen, Fe 3+ ions occupy the (0 0 0.5) sites, of which, there are eight edge sites and four face center sites, according to the symmetry of Pbnm space group. Besides, the eight nearest Fe 3+ ions constitute a cubic and the spin orientations for adjacent ions are opposite, that is, G-type antiferromagnetic ordering is formed 13 . In fact, the spins of adjacent Fe 3+ ions are not strictly antiparallel. Specifically, just below the Neel temperature (T N = 670 K), the canted spin mainly orient along a axis and also have a small component along c axis. Therefore, the magnetic structure can be denoted as , where G is the antiferromagnetic vector and F is the ferromagnetic vector. Like most other rare earth orthoferrites, SmFeO 3 undergoes a spin reorientation transition due to the interaction between rare earth ions and Fe 3+ ions. However, difference is that the transition temperature of SmFeO 3 is the highest in the family of rare earth orthoferrites and much higher than RT. At about 480 K, the Γ 4 phase changes to Γ ( , ) G F z x 2 through a mesophase Γ 42 14 . Thus, as seen in Fig. 3, the Fe 3+ spins orient along c axis with a weak macroscopic magnetization along a axis below the transition temperature. The canted spins induce the weak macroscopic magnetism in SmFeO 3 and the magnitude of magnetization depends on the ferromagnetic component of magnetic moments, while the terahertz magnetic resonances caused by the spin precession under an internal magnetic field relate to the F and G vectors. 
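For readers who want to reproduce this kind of analysis, the following Python sketch shows one way the two absorption dips could be read off a normalized transmission spectrum as local minima; the synthetic spectrum and all numerical values below are stand-ins, not the measured data.

```python
# Sketch of extracting the quasi-FM and quasi-AFM dip frequencies from a normalized
# terahertz transmission spectrum.  The synthetic spectrum is only a stand-in; in
# practice the frequency/transmission arrays come from the time-domain spectrometer
# after Fourier transformation and normalization to the reference.
import numpy as np
from scipy.signal import find_peaks

def lorentzian_dip(f, f0, width, depth):
    return depth * (width / 2) ** 2 / ((f - f0) ** 2 + (width / 2) ** 2)

freq = np.linspace(0.2, 1.2, 2000)                      # THz
trans = 1.0 - lorentzian_dip(freq, 0.84, 0.03, 0.35)    # F mode near 3 K (example)
trans -= lorentzian_dip(freq, 0.95, 0.03, 0.25)         # AF mode near 3 K (example)
trans += 0.01 * np.random.default_rng(0).normal(size=freq.size)  # measurement noise

# Dips in transmission are peaks of (1 - transmission).
idx, _ = find_peaks(1.0 - trans, prominence=0.1)
for i in idx:
    print(f"resonance near {freq[i]:.2f} THz")
```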
According to some previous studies, the magnetic moments of Sm 3+ ions play an important role on the magnetic properties of SmFeO 3 at low temperature. Owing to the relative strong Re-Fe exchange interaction, SmFeO 3 possess a high SR transition temperature, while the Re-Re interaction leads to a high magnetic ordering temperature for Sm 3+ ions. At about 140 K, the Sm 3+ ions spins are activated in the ab plane. As seen in Figs 3 and 4(a), Sm 3+ ions exhibit the ( ) , F C x y symmetry, that is, the spins satisfy the following equations: S 1x = S 2x = S 3x = S 4x and S 1y = S 2y = −S 3y = −S 4y . Thus, Sm 3+ ions possess C-type antiferromagnetic ordering along b axis and also a ferromagnetic moment along a axis. Moreover, macroscopic magnetization orients along the −a direction, antiparallel with the one of Fe 3+ ions 16 . During the cooling process, the remarkably increased net magnetic moment of Sm 3+ ions will cancel with the opposite contribution from Fe 3+ ions, which leads to a zero macroscopic magnetization at the temperature called compensation point (about 5 K). Below this temperature, magnetic reversal is observed when magnetic field is applied parallel to a axis in the SmFeO 3 crystal 6 . To verify this phenomenon in the ceramic sample, we test the − M T curve under an applied magnetic field of 1000 Oe. As shown in Fig. 4(b), the magnetization increases first during the cooling process due to the increased ferromagnetic component of Fe 3+ spins, then begins to decrease gradually because of the activation of Sm 3+ spins at about 170 K (different from the crystal sample), companied by a sharp decline below 40 K. Nevertheless, magnetic reversal does not appear in the SmFeO 3 ceramic even when the temperature is lowered to 2 K. The possible reason is as follows. Crystal sample has a long range ordering and the macroscopic magnetization is measured when applied field is parallel to a axis. However, for the SmFeO 3 ceramic, magnetic ordering is formed in a single crystal grain and the orientation of crystal grains is random, and therefore, the measured magnetization is an average value of various orientations between the magnetic field and crystal axis. Now, let us further consider the temperature dependent magnetic resonant frequencies based on the foregoing discussions about the magnetic structure in SmFeO 3 . In antiferromagnetic materials, the resonant frequency can be described by 4,17 Figure 3. The crystal structure, atom arrangements, and spin orientations of SmFeO 3 crystal below 140 K. The eight nearest Fe 3+ ions constitute a cubic, whose spins orient along c axis with a weak macroscopic magnetization along a axis. By contrast, the spins of Sm 3+ activated below 140 K locate in the ab plane and increase during cooling process. where H ca eff and H cb eff , K ca eff and K cb eff are the effective second-order anisotropy fields and anisotropy constants in the ca and cb planes respectively, and M 0 is the saturation magnetic moment. Moreover, the exchange field H E is proportional to the magnetic moment ( ) where λ is the molecular field coefficient 4 . Since the temperature region considered in this work is much lower than the Neel temperature and no SR transition occurs, the exchange field can be regarded as nearly temperature independent 20 . However, the anisotropy field changes influenced by the Fe-Fe exchange, magnetic dipole interaction, and crystal field will change with temperature 21 . 
According to Eqs (2) and (3), it can be found that the square of resonant frequency is proportional to the anisotropy constant. As a consequence, we can obtain the temperature dependent anisotropy constants using the frequency data. It is worth noting that the above discussion have not taken account of the contribution of Sm 3+ magnetic moments to the effective field. This approximation is valid, especially for the temperature region above 40 K, as the magnetic moment of Sm 3+ ion is much weaker than that of Fe 3+ ion. The ground state levels for Sm 3+ ([Xe]4f 5 ) and Fe 3+ ([Ar]3d 5 ) in an octahedral crystal field are 6 H 5/2 and 6 S 5/2 , respectively. According to the Hund's rules 22 , we may conclude that the saturation magnetic moment of Fe 3+ (5.92 µ B , close to 5 µ B in the orthoferrites system 18 , where µ B is Bohr magneton.) is large enough compared to the one of Sm 3+ (0.85 µ B , actually less than this value even at 5 K 6 ), and that the effective field is mainly contributed by the magnetic moments of Fe 3+ ions. We fit the ν − T curve using the nonlinear curve fitting method. The fitting results are also presented in Fig. 2 together with the experimental data for comparison. The equations used for fitting the frequencies of F mode and AF mode can be expressed by Eq. (4) and Eq. (5), respectively. As shown in Fig. 2, the fitting curves agree well with the experimental points for both F mode and AF mode, thus, the proposed equation is applicable in the temperature range from 3 K to RT. However, the fitting curve can be divided into three intervals due to the different tendencies. Between 40 K and RT, the resonant frequencies and temperature satisfy the linear relationship; the items T 1 and T 1 2 can be omitted since they are small enough compared to the linear item. The blue shift of resonant frequencies can be attributed to the increase of anisotropy constants, and hence the enhancement of effective magnetic field. It is noted that the F mode hardens fast than AF mode, which implies that growth rate of K ca eff is larger than that of K cb eff . The second region is during 5 ~ 40 K. Since both the T 1 item and the linear item work in this interval, the resonant frequencies remarkably increase with decreasing temperature. The addition of T 1 item implies that the effective anisotropy constants increase more quickly during cooling, compared to the first process. Then, below 5 K, the linear item can be deleted. However, the T 1 item is not enough to depict the rapidly increased frequencies, so we introduce the T 1 2 item, with which we get a good fitting (see Fig. 2). Furthermore, the effective anisotropy constants K ca eff and K cb eff are calculated according to Eqs (2) and (3), using the resonant frequency data. The amplitudes of the anisotropy constants are normalized to the one at 3 K, and both the experimental and fitting values have been obtained and presented in Fig. 5. According to some previous studies, the exchange field in rare earth orthoferrites is about 6.4 × 10 6 Oe 19,20 , and M 0 is calculated as 109.85 emu/g, and hence, the effective anisotropy constants K ca eff and K cb eff at 3 K can be estimated as 6.63 × 10 5 erg/g and 8.48 × 10 5 erg/g, respectively. Thus, we have obtained the temperature dependent anisotropy constants which essentially determine the magnetic resonant frequency of orthoferrites. Conclusions In summary, the terahertz magnetic thermodynamics of the SmFeO 3 ceramic have been investigated over a wide temperature region from 3 K to 292 K. 
The macroscopic magnetization is measured, and magnetic reversal does not occur even at 2 K for the ceramic sample. Additionally, both the F mode and the AF mode of the SmFeO 3 orthoferrite harden during the cooling process, which can be attributed to the increase of the anisotropy constants and hence the enhancement of the effective magnetic field. The resonant frequencies of both modes can be well fitted with a nonlinear equation of temperature, which clearly describes the temperature dependence of the resonant frequencies in the different temperature regions. With the frequency values, we also estimate the anisotropy constants at various temperatures. Figure: Schematic diagram of the terahertz time-domain measurement system. The terahertz pulse is excited by a 780 nm near-infrared femtosecond laser in the emitter component, first passes through a quartz window of the liquid helium cryostat, then interacts with the sample, passes through the other quartz window, and finally arrives at the detector component. The cooling system with liquid helium circulation enables precise temperature control between 3 K and RT.
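As a hedged illustration of the ν–T fitting discussed in the Results section (Eqs. (4) and (5) are not reproduced in this excerpt; from the description they combine a linear term with 1/T and 1/T^2 corrections, which is the form assumed here), a least-squares fit of the AF-mode frequencies quoted in the text could look as follows; the relative anisotropy constant then follows from the stated proportionality K_eff ∝ ν² at fixed exchange field.

```python
# Hedged sketch of the nu(T) fit; the functional form is assumed from the text's
# description (linear term plus 1/T and 1/T^2 corrections), not copied from Eq. (4)/(5).
import numpy as np
from scipy.optimize import curve_fit

def nu_model(T, a, b, c, d):
    return a + b * T + c / T + d / T**2

T_data = np.array([3.0, 10.0, 40.0, 200.0, 292.0])    # K
nu_AF  = np.array([0.95, 0.80, 0.70, 0.62, 0.57])     # THz, AF-mode values from the text

popt, _ = curve_fit(nu_model, T_data, nu_AF)
print("fitted coefficients (a, b, c, d):", popt)

# With the exchange field treated as temperature independent, K_eff scales as nu^2
# (up to a convention-dependent prefactor), so relative to the 3 K value:
K_rel = (nu_AF / nu_AF[0]) ** 2
print("K_cb^eff relative to its 3 K value:", np.round(K_rel, 3))
```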
3,518.2
2015-10-01T00:00:00.000
[ "Materials Science", "Physics" ]
A Modulated Approach for Improving MFSK RADARS to Resolve Mutual Interference on Autonomous Vehicles (AVs) This paper proposes a novel automotive radar waveform involving the theory behind M-ary frequency shift key (MFSK) radar systems. Along with the MFSK theory, coding schemes are studied to provide a solution to mutual interference. The proposed MFSK waveform consists of frequency increments throughout the range of 76 GHz to 81 GHz with a step value of 1 GHz. Instead of stepping with a fixed frequency, a triangular chirp sequence allows for static and moving objects to be detected. Therefore, automotive radars will improve Doppler estimation and simultaneous range of various targets. In this paper, a binary coding scheme and a combined transform coding scheme used for radar waveform correlation are evaluated in order to provide unique signals. AVs have to perform in an environment with a high number of signals being sent through the automotive radar frequency band. Efficient coding methods are required to increase the number of signals that are generated. An evaluation method and experimental data of modulated frequencies as well as a comparison with other frequency method systems are presented. Introduction In recent years, engineers and data analysts have been searching for ideal methods to collect data from multiple driving scenarios. Economical self-navigation can be achieved by using radar sensors rather than cameras for data acquisition. Radar sensors will acquire data to improve simultaneous range and Doppler estimation, both of which are crucial for safe navigation. Advanced driver-assistance system (ADAS) has laid out definitions for autonomous driving in six distinct stages as shown in Figure 1 to mark progress towards AV sensor-based driving functions and high-definition maps once vehicles are performing all driving functions [1,2]. Mutual interference is a major issue that prevents AVs from using radar data during periods of high traffic. This issue occurs when two devices emit the same frequency, causing each sensor to report unreliable data. Furthermore, if AVs shared radar sensor data, reciprocal interference might be avoided. Due to the mass production of most AV radar sensors, they tend to work on the same wavelengths as other AV physical devices, such as cameras, lidars, and ultrasonic sensors. An emitter on one vehicle transmits data to a receiver on another vehicle, rendering the sensor's data worthless and potentially leading to unexpected driving decisions. Any form of sensor that produces and receives a frequency might cause mutual interference. The likelihood of receiving reciprocal interference grows as the range of any sensor increases in relation to the frequency's emitted region. Mutual interference is more common in radars than ultrasonic devices, as they operate in a narrower regions of space [3]. Figure 2 shows an example of a common scenario that can cause mutual interference. Many applications benefit from modulating the frequency of a sinusoidal wave. Frequency modulation (FM) was originally utilized in radio wave transmission in the 1930s [4]. Using FM instead of amplitude modulation (AM) allows radio waves to be broadcast within a greater bandwidth. The sound resolution improves while increasing interference susceptibility. Frequency modulation is a useful approach for identifying faults in AV object detection. 
A frequency-modulated continuous-wave (FMCW) radar may identify several objects by using Fourier transforms for frequency waveform identification, although these mathematical data currently have mutual interference constraints [5]. AV RADAR Methods FMCW RADAR systems are built on the foundation of frequency modulation. Along with modulating frequency, FMCW is a continuous-wave RADAR which enables calculations for range by comparing transmit and receive frequencies. There are many different modulation schemes pertaining to frequency modulation, with a common modulation scheme being linear with an increasing frequency. In Figure 3, a linear FMCW waveform can be shown, modulating frequency over time. FMCW RADARs tend to modulate linearly but can be designed to operate differently. Since automotive radars can only operate within specified bands of frequencies, FMCW RADARs linearly sweep an entire frequency band. A FMCW signal will sweep from the lower end of the band to the higher end of the band and vice versa. This sweep can be in two directions, up or down. In addition, Linear FMCW waveforms can produce sawtooth or triangular patterns [5]. Linear FMCW RADARs can be mathematically represented in Equation (1): where m is the slope, t is the x-intercept, f s is the y-intercept in terms of giga Hertz (GHz) for frequency. The linear sweep allows radars to calculate the radial velocity for a given object. FMCW RADAR Calculations A Fourier transform compares samples from a signal that has propagated through space or time. When using data collected from sensors, sampling is critical in the case of AV RADARs. A fast Fourier transform (FFT) is often used by AVs to quickly calculate values from sensor signals. Using a FFT of a received signal, an AV can determine the object location based on how the signal has changed throughout propagation. The accuracy of direction and distance values of a RADAR are directly proportional to the number of receiving antennas. Using an FFT to identify the phase difference between multiple received signals, the distance can be estimated. To accomplish this, the continuous-wave RADAR system must provide IQ "in-phase/quadrature-phase" signals, which are two orthogonal sinusoidal waves [6]. Comparing the amplitude of the IQ signals across two different receiving antennas allows for distance to be calculated in Equation (2): where d is the distance between two receiving antennas and θ is the phase angle of the received signal. As the number of received antennas on a radar module increases, the RADARs angular resolution increases as well. For continuous-wave RADARs, a higher angular resolution enables a more accurate distance value to be calculated. The Doppler and time delay can be estimated after performing a FFT of the received signal. Figure 4 shows the difference in emitted (in red) and received (in green) signals along the x-axis for the time delay (∆t) and the y-axis for frequency (∆f). Doppler frequency data (f D ) can be calculated similarly on the y-axis between emitted and received signals. Since time delay and Doppler data are vectors on their own axes, they can be shown alongside one another to aid with object tracking. When AV RADARs produce IQ signals, two plotted vectors containing time delay and Doppler data intersect with one other, resulting in the true time delay and Doppler data for a specific object. Increasing the number of time delay and Doppler data vectors enable more accurate readings to be measured. 
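To make the FMCW processing chain described above concrete, the following Python sketch dechirps a single target echo and recovers its range from the beat frequency via an FFT; the chirp parameters and target range are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of FMCW range estimation: the received chirp is mixed with the
# transmitted chirp, and an FFT of the beat signal gives the range from the beat
# frequency f_b = 2 * S * R / c, where S is the chirp slope.  All parameter values
# below are illustrative assumptions.
import numpy as np

c = 3e8                               # speed of light, m/s
B, Tc = 1e9, 50e-6                    # 1 GHz sweep over 50 us (illustrative)
S = B / Tc                            # chirp slope, Hz/s
fs = 20e6                             # ADC sample rate of the beat signal
t = np.arange(0, Tc, 1 / fs)

R_true = 45.0                         # target range, m
tau = 2 * R_true / c                  # round-trip delay
# After mixing and low-pass filtering, the beat signal oscillates at S * tau.
beat = np.cos(2 * np.pi * S * tau * t)

spec = np.abs(np.fft.rfft(beat * np.hanning(t.size)))
f_axis = np.fft.rfftfreq(t.size, 1 / fs)
f_beat = f_axis[np.argmax(spec)]
R_est = f_beat * c / (2 * S)
print(f"beat frequency {f_beat / 1e6:.2f} MHz -> estimated range {R_est:.1f} m")
```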
Ghost Detections As seen in Figure 5, each red circle is an object's true location. A RADAR system that is detecting two objects can be visually seen as any two of the red circles with the arrows pointing towards them. The other two red circles are known as ghost detections. The vectors shown are derived from the time and Doppler delay magnitudes of the received signal from an object. For FMCW RADARs, ghost detections occur when multiple objects are being tracked. FMCW RADARs are limited due to their ability to only output a linear modulation pattern. Since FMCW RADARs output a triangular wave, the time delay and Doppler values are the same. Therefore, each circle in Figure 5 has two vectors crossing through them. Each point where a vector overlaps creates an indication of an object's true location. When tracking multiple objects, these vectors start to overlap and provide false locations. A constant false alarm rate (CFAR) is commonly adjusted to allow RADARs to perform while also picking up false readings. Increasing the number of time delay or Doppler vectors enables more vectors to be used in the elimination of ghost detections [7,8]. MIMO Technology Multiple-input and multiple-output (MIMO) RADAR systems are used to enhance the data retrieved from hardware by creating virtual antenna arrays with multiple inputs and multiple outputs. The utilization of MIMO technology decreases the range of clustered detection. As the number of receive antennas increases, the range of clustered detection decreases. To explain further, each additional receive antenna, that is set a half wavelength apart from the previous, allows for the calculation of the extra distance covered by an IQ signal. The distance vector is derived from the extra length that each IQ signal must travel to reach an additional receiver antenna. The creation of an array of multiple receive antennas allows for the enhancement of the angular resolution. An FFT is used to exploit the relationship between the spatial resolution and the angle of arrival (AoA). MIMO technology is the utilization of virtual arrays consisting of multiple transmit and receive antennas to eliminate the need for additional antennas. Therefore, it enables the continuation of RADAR to be a lost-cost alternative when compared to other AV sensors [8]. Phase-Modulated Continuous-Wave RADARs PMCW radar systems are designed to modulate a given signal's phase. Coding schemes can be applied to control when the phase switches. Commonly, this is achieved by modulating the waveform in binary sequences. The binary sequencing allows for the transmitted radar signals to be orthogonal. Almost perfect auto-correlation sequence (APAS) codes are binary symbols mapped onto 0-and 180-degree phase shifts to incorporate a signal customized fingerprint [9]. After utilizing an APAS code, that customized fingerprint is what resolves mutual interference. By incorporating APAS codes using a high-speed analog-to-digital converter (ADC), PMCW radars can be fully functional no matter the distance between them. The only requirement is that each has its own APAS code, which can span from 1 bit to 1000 bits. Moreover, the incorporation of these binary phase shifts provides a solution to mutual interference issues that FMCW radars are not able to get around. PMCW radars have recently been able to be implemented on CMOS chips. The single-chip implementation allows for PMCW to be a feasible option for resolving mutual interference as well. 
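The code-fingerprint idea behind APAS-coded PMCW can be illustrated with a small correlation experiment. The sketch below uses ordinary pseudo-random ±1 sequences as stand-ins for true almost-perfect auto-correlation sequences: correlating against the radar's own code compresses its echo into a sharp peak, while another radar's differently coded signal produces only low-level residue.

```python
# Sketch of the PMCW code-fingerprint idea: each radar correlates received samples
# against its own binary phase code.  Pseudo-random +/-1 sequences stand in for
# true APAS codes here.
import numpy as np

rng = np.random.default_rng(1)
N = 400                                        # code length in chips (cf. the 400-bit example)
code_a = rng.choice([-1.0, 1.0], size=N)       # "our" radar's code
code_b = rng.choice([-1.0, 1.0], size=N)       # interfering radar's code

def circular_correlation(rx, ref):
    return np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(ref))).real / len(ref)

echo_own = np.roll(code_a, 37)                 # our own echo, delayed by 37 chips
interference = np.roll(code_b, 12)             # the other radar's signal

corr_own = circular_correlation(echo_own, code_a)
corr_other = circular_correlation(interference, code_a)
print("peak vs own code:        ", corr_own.max())            # ~1 at the true lag
print("worst case vs other code:", np.abs(corr_other).max())  # roughly of order 1/sqrt(N)
```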
Recent research in Germany showed that a 400 bit APAS code can resolve mutual interference when implemented in a PMCW system [10]. In the near future, most vehicles will be fully electric, with most having fully autonomous capabilities. The time between then and now will be shortened by advances in sensor development and signal processing, as mutual interference can be resolved by utilizing APAS codes with PMCW RADARs [10]. Currently, MFSK radars produce a fixed frequency when shifting between the step values. The incorporation of increasing and decreasing frequencies instead of a fixed frequency provides more data used for creating detection. The application of a signal fingerprint can be possible by allowing the frequency to be modulated in multiple different triangular chirps with different rates of modulating frequency. Overview of Proposed Waveform This section will summarize the effects of the proposed waveform, such as MFSK radar calculations, aspects of proposed waveform, and future of mutual interference. MFSK Radar Calculations MFSK radars are similar to standard FMCW radars. Each radar system sweeps a frequency in order calculate the radial velocity. IQ signals can be utilized in both systems to receive the angle of arrival (AoA) of an object. The difference between MFSK and FMCW radars is how they modulate their frequency. MFSK radars modulate their frequency in steps over a given frequency range. During each step, frequency does not modulate, as the MFSK radar progresses through each step and outputs a fixed frequency. Then, a fixed frequency is derived from the desired number of steps. The number of steps dictates what fixed frequency will be used during each step. Figure 6 shows one transmit signal of an MFSK waveform. MFSK RADARs, with a step value of 10 9 , can be mathematically represented in Equation (3): F step = f s + 10 9 f or t n−1 ≤ t < t n ; f s = 76-81 GHz (3) Figure 6. Standard MFSK signal steps with a fixed frequency during a given time period. Binary frequency shift keying (BFSK) has been utilized in past Bluetooth communications and other wireless radio systems to implement a similar fingerprint aspect to that of PMCW radars. When comparing receiver (Rx) and transmitter (Tx) signals, beat frequencies can be calculated only if the frequency is modulated. When comparing these signals, it can clearly be shown that each transmit signal can both horizontally and vertically shifted. The vectors between each signal provide time delay information and Doppler data when using IQ signals. A FFT is used to convert data collected in the time domain by radars to the frequency domain. The conversion of this data allows for substantial amounts of data to be processed at a remarkably high rate. These data are stored, filtered, and then used to implement safe driving decisions. Each oscillation of a Rx signal provides a snapshot of the environment when compared to the Tx signal. In theory, an AV would perform best if it could capture images of the environment at a rate that is inconceivably fast. Accomplishing an extremely fast refresh rate would allow an AV to make decisions in slow motion compared to the time system that humans are used to. The amount of data lost is directly proportional to the speed of the vehicle. In fact, every sensor on any creation is limited to the rate at which it collects data. 
Thanks to advances in recent years, higher-speed data processing has become quite feasible considering the number of calculations needed for AVs and where we were two decades ago [11]. Aspects of Proposed Waveform The proposed waveform is a variation of a MFSK radar. Instead of outputting a fixed frequency during each step, we propose the waveform modulated during each step positively and negatively in a linear fashion. An up-chirp followed by a down-chirp creates a triangular modulation pattern during each step. This pattern allows for beat frequencies to be calculated in the same fashion as FMCW and MFSK radars. As a RADAR beam is transmitted and received, when plotted with each other, received signals are shifted. The horizontal and vertical distances show the difference in domain and range values correlated to the direction and velocity of an object. Each shift, depending on the location relative to the Tx signal, produces some variation of a combined x and y vector that can represent both shifts with the angle relative to the time axis. Additional triangular sweeps can be implemented to collect more beat frequencies, which help eliminate ghost detection [12]. As the number of beat frequency calculations increases, the number of vectors that can be drawn through and confirmed with real detection increases. As computations are added to modify the waveform, the entire process is computationally more expensive. Each modification of our proposed waveform's variations allows for the number of rearrangements to increase proportionally to the factorial numbers. The sensitivity of the antenna array would dictate the number of steps and linear modulation patterns that could be implemented. Along with this, the angular resolution for any continuous-wave RADAR is proportional to the number of received antennas. The number of individual signals that can be created would increase as the numbers of different steps and frequency slopes are increased. The proposed five steps and three different slopes allow for 720 different signals to be created. These 720 different signals are the result of 3! × 5! = 6 × 120 = 720. If the proposed radar had the ability to increase either the number of steps or the number of different slopes it could detect, the number of individual signals would increase factorially. The number of binary digits used to control the sequencing of each would increase as well, which would increase the computational power needed. Future of Mutual Interference Mutual interference happens when Rx antennas are blinded by Tx antennas on another vehicle using the same frequency range. Therefore, it becomes a widespread problem when mass producing AVs. Recent PMCW research has shown that it can be resolved using binary symbols. APAS codes have shown the capability of producing a remarkably high number of unique signals to be used for each individual vehicle on the road [9]. We propose similar methods that can be applied to current MFSK radars. One of the first steps for developing a coding scheme for a signal is to decide how many different devices will be used in the assigned frequency band. Considering, in recent years, the success that auto manufacturers have had in terms of AVs, these vehicles will one day be a consumer standard. The elimination and prohibition of gasoline vehicles are right around the corner. Taking all that information in, it was only reasonable to calculate the potential number of cars during a high-traffic scenario. 
The number of signals needed was calculated using a bidirectional long-range RADAR (LRR) of 600 m. A 60 m 10-lane highway was used to illustrate the area of the road covered by LRRs and the area of a sedan electric car. It was found that 3275 individual signals would have to be created to ensure no car was emitting the same signal. Coding Schemes After the effects of the proposed waveform and mutual interference, a random coding scheme and the results of the proposed waveform in a quantitative manner are presented. Random Coding Scheme PMCW offers key insight into the benefits of applying a coding scheme to an automotive radar. High-speed ADCs enable the coding of binary symbols into PMCW waveforms. Many other types of electronics, primarily sensors, are programmed in the same manner. Controlling the order in which each step is taken allows for binary coding to be implemented. MFSK radars tend to sweep linearly in steps; the step order can be rearranged to allow different frequency orders to be implemented. We proposed that each step modulates over a 1 GHz range. If the range it sweeps is from 76 GHz to 80 GHz, the MFSK would only take five steps to sweep the entire range. If the MFSK only takes five steps, then that means the radar has 120 different step rearrangements. Binary coding can be used with six variables to control the output of one given rearrangement. Since the number of variations can be represented by 5!, six binary digits in total must be used to control the signal. One combination of a binary encoded MFSK signal is shown in Figure 7. This type of coding scheme is proportional to the number of steps taken, assuming that the specified frequency band is fixed. Binary coding will not work if applied because the fixed frequency band limits the number of steps. Since the frequency band is considered fixed, the implementation of many steps leads to the signal becoming indistinguishable from noise. As the number of steps increases, the chirp time decreases as well as the domain values for each triangular pattern. Proposed Waveform The proposed waveform is divided into five parts: (1) the importance of coding steps and modulation patterns, (2) signal parameters, (3) multi-object tracking results, (4) mutual interference results and (5) advantages and disadvantages. The Importance of Coding Steps and Modulation Patterns Using MFSK, creating more steps allows for more binary digits to be implemented at the cost of the degradation of the signal. The implementation of multiple coding schemes is the key to creating a higher number of signal combinations for MFSK radars. If the frequency during each step is modulated, triangular modulation would be possible. Controlling how frequency is modulated through each step is vital to the creation of unique signals. An up-chirp followed by a down-chirp produces the outline of a triangle on a spectrogram using a FFT data. The number of different triangles produced allows for more ghost detection to be eliminated by comparing the time delay and the beat frequencies of the Rx and Tx antennas. The introduction of an MFSK radar that creates two differently sloped triangles allows for eight beat frequencies to be calculated, lowering the false-alarm rate for ghost detection. In this case, the slope of the triangle is proportional to frequency over time. Triangular modulation is accomplished by changing the rate at which frequency increases or decreases. 
It is beneficial to modulate the frequency of an up-chirp and down-chirp with the same rate of change. Incorporating this feature into a signal enables time delays to cancel, allowing for a more stable signal to be generated. With a smaller angle between the chirps, the signal would be less stable. Along with decreasing the probability of ghost detections, this method of signal propagation allows for many types of coding schemes to be implemented. The previous binary scheme offers insight into the implementation of multiple coding schemes. The number of steps should be equivalent to the number of channels. As the number of steps increases, assuming that the frequency range is fixed, the bandwidth of the channel decreases. Signal Parameters By introducing different sequences of chirps with different rates of frequency change, several different chirp sequences can be compared to one another. MFSK radars have three different customized properties to introduce a higher number of unique signals: (1) the utilization of different channels, (2) changing the rate at which frequencies modulate, and (3) the incorporation of multiple differently sloped chirp sequences. These three properties are the basis on which the proposed coding scheme is derived. All three customized properties are shown in Figure 8. The proposed triangular waveform can be mathematically represented in Equation (4): where f s is the y-intercept at various frequencies, m x is the slope at 0.25, 0.5, and 1, and t is the x-intercept. The coded variables in the proposed triangular FMCW signal are: A = 1 and B = 2 for the slope ordering; and C = 1, D = 2, E = 3, and F = 4 for the step ordering. A-F are coded variables used to control the ordering of both slopes and steps. These variables must be included in order to effectively produce any combination of the proposed waveform. In addition, the combination of proposed coding schemes enables the number of possible signal to increase factorially. A MATLAB coded plot of frequency modulation is presented in Listing 1, where the first point is a function of the waveform's chirp time. Multi-Object Tracking Results The solid Tx signal and dotted Rx signals in Figure 9 represent three potential objects being tracked. These objects can be detected by calculating the beat frequency during periods of modulation. Beat frequencies can be calculated using the same math values as FMCW and MFSK waveforms. To calculate the Doppler and time delay values in the same manner, the Tx and Rx signals must be mixed and filtered to output a beat frequency. This beat frequency can be plotted with other beat frequencies on the same plot. Figure 10 represents a plot of all six beat frequencies for each of the three objects. These calculations result in eighteen different beat frequencies per chirp period, rather than just six beat frequencies, when tracking three objects with FMCW. The increase in beat frequencies validates the proposed waveform by visually showing better detections. The coordinates provided are the magnitudes of Rx signal delay values. The rates of modulation enable the tracking of three objects during one chirp sequence. An Rx signal may have Doppler delay, time delay, both, or neither. Therefore, the received signal may be translated horizontally to the right to indicate time delay, or vertically translated to indicate Doppler delay. The true location of an object may be found by calculating the beat frequency. 
The beat frequency is the product of transmitted and received signals after it has been low-pass filtered. With a standard FMCW RADAR, there are two possible beat frequency calculations due to the triangular frequency modulation. During the up and down sweeps, the proposed waveform includes three modulations, obtaining differently sloped beat frequencies. As seen in Figure 5, two ghost detections appear due to the same rate of modulation being used. When using different modulations rates, the slopes of the beat frequencies change proportionally to the rate of frequency modulation. Each of the three up-sweeps provides three positively sloped beat frequencies, likewise for the down-sweeps. Each chirp sequence of the proposed waveform offers six beat frequency calculations to enable the tracking of three objects at a time. Moreover, eighteen unique vectors are shown in Figure 11. Mutual Interference Results The driving scenarios shown in Figure 2 explain how sensors may cause mutual interference when outputting Tx signals. The proposed waveform was designed to resolve mutual interference through modulating in different frequency bands. As seen in Figure 12, two Tx signals are generated using the protocol code presented in Listing 1. During periods in which the two Tx signals are outputting in different frequency bands, there is no possibility of mutual interference. Advantages and Disadvantages One of the advantages is that beat frequencies are calculated using the same mathematical approach utilized for both FMCW and MFSK RADARs. Thus, it is still feasible to use Doppler and time delay to compute distance. Additionally, the suggested waveform produces up to ninety beat frequencies every period. Frequency modulation rates and running protocol code may produce unique signals that resolve mutual interference. Since the proposed waveform outputs three triangles at five frequency values, it will operate in short periods when undergoing mutual interference. Five starting frequencies for a chirp enable the chirp to operate on a frequency, even when presented as in Figure 2. Table 1 shows aspects of the continuous-wave RADAR waveform. A disadvantage of implementing the proposed coding scheme is the high computational costs associated with creating and monitoring new data points. Conclusions This paper dove into the aspects of previous and current progress pertaining to radar waveform. Research works on FMCW, PMCW, MIMO, and MFSK radar systems have each shown extraordinary examples of pushing the limits of signal processing. The proposed waveform is a type of MFSK radar that uses code to resolve mutual interference. Currently, MFSK radars step and output a fixed frequency and continue to the next step with a higher fixed frequency. This paper proposed a MFSK waveform that modulates frequency during steps. During each step, multiple chirp sequences are implemented to increase the number of beat frequency calculations. The number of beat frequency calculations enables more Doppler and time delay data to be processed. Along with increasing beat frequency calculations, chirp sequences are modulated at different rates to increase the number of unique signals for multi-object tracking. The total number of signals increases factorially as more slopes or steps are introduced. The proposed waveform offers a resolution of mutual interference by introducing more types of signals than what is used on today's AV RADARS. Future Work The implementation of the proposed waveform is coded and functional. 
The real-world implementation of this waveform may require a unique antenna array, most likely using MIMO technology to keep hardware costs low. Simulations of the waveform are performed in MATLAB. The proposed waveform will be programmed to generate radar data based on the beat frequencies created previously. The proposed coding scheme has further potential if the number of slopes or steps is increased. The incorporation of MIMO technology may be a viable next step for creating more signals; with it, ghost detections may be cancelled using virtual antenna arrays. As technology evolves, AVs are expected to form a decentralized tracking network in which detections are shared amongst vehicles.
6,333.6
2023-08-01T00:00:00.000
[ "Engineering" ]
An Economic Assessment of the Impact on Agriculture of the Proposed Changes in EU Biofuel Policy Mechanisms : In Poland, rapeseed production has been the fastest growing branch of plant production since 2000. Rapeseed yields have increased 2.5 times in the last 20 years. The main reason for this trend was the implementation of obligations resulting from legal acts by Member States relating to increasing the share of RES in the structure of primary energy production, and in particular relating to the share of biofuels in fuels used in transport. In Poland in the years 2010–2020, about 1.0–1.6 million tonnes of rape seeds were used for this purpose annually. Due to the fact that biofuel production competes for raw materials with the food economy, at the end of the first decade of the 21st century, many representatives of various circles intensified their voices, calling for withdrawal from the policy supporting the biofuel sector, which may have resulted in a decrease in oilseed plant cultivation areas. As a result of the research conducted here, it was determined that the place of oilseed rape in the sowing structure will be taken by rye, triticale and, on good soils, by wheat. Compared to rape, their production is characterised by lower income per 1 ha; in the years 2013–2019, these differences amounted to: wheat—8 EUR, triticale—102.3 EUR, and rye—168 EUR. This situation will deteriorate the value cereal cultivation sites and will result in a decrease in their yields. On the basis of the conducted research, the estimated value of rape as a forecrop for wheat, triticale, and rye was, respectively: 103.7; 64.6 and 46.7 EUR. An additional advantage of oilseed rape is that it is an excellent bee resource and is classified as a commodity crop, i.e., one from which significant amounts of honey can be obtained, with a net value of EUR 55 per hectare. In addition, in many agricultural holdings, as a result of forecasted changes in plant production, there will be an accumulation of field work during the harvest period, which will also affect the worse use of machinery and storage areas. The consequence of increasing the area under which cereal crops and their supply can grow may be the decline in production profitability and thus the income situation of farms, but this will be assessed at the next stage of research. Introduction The first attempts to utilise biofuels to power engines were made by the end of the 19th century [1][2][3]. The self-ignition engine constructed in 1893 by Rudolf Diesel could be fuelled with both petroleum-derivative fuels and oils of both vegetable and animal origin [4][5][6]. presented in 1992 at the United Nations Conference on Environment and Development (UNCED) in Rio de Janeiro. The main body of the convention became the Conference of the Parties (COP), which, since its first meeting in Berlin in 1995 (COP 1), regularly assesses the scale and course of climate change and its effects and develops strategies to respond to these changes [34]. The first significant effect of these activities was the signing of the Kyoto Protocol during COP 3, in which the 38 most industrialised countries and the European Union committed to reduce GHG, which was expressed in carbon dioxide equivalent, by at least 5% below 1990 levels between 2008 and 2012 [35]. Due to protracted negotiations on a new global "climate agreement", COP 18 extended its validity until 31 December 2020 [36]. 
Although the Kyoto Protocol was a first significant step towards reducing greenhouse gas emissions, it did not solve the problem of global warming. This did not occur until climate policy re-prioritisation (among other things, under the influence of the financial crisis), which began to be seen as a factor for economic growth through "the development of clean or low-carbon technologies, the creation of new markets, industries and jobs" [37]. The latter led to the acceleration of negotiations and agreement on the content of a global climate agreement at COP 21 in Paris in December 2015 (the Paris Agreement) [38]. The European Union (EU) plays a very important role in reducing greenhouse gas emissions. The actions taken by the EU go far beyond the obligations arising from global climate agreements [39]. The European Green Deal has set out a clear vision of how to achieve climate neutrality by 2050 [40]. The fourth reason is the stagnation in demand for agricultural raw materials and food products, which is becoming a barrier to agricultural development. In countries with a developed economy, surpluses of agricultural raw materials have started to occur, which has led to a deterioration in the profitability of production and reduced incomes for farming families. One way of managing these surpluses is to use them for non-food purposes. The idea of "Chemurgy" was already promoted in the 1920s as a strategy for industries and governments who were interested in reviving the agricultural economy [41]. The USA reverted to this concept in the early 1980s. As part of the Growing Industrial Materiale programme, more than two thousand plant species have been tested for the raw material content sought by industries, of which several dozen have been selected and recommended for cultivation [42]. In Europe, the intensification of research into the cultivation of plants for industrial purposes dates back to 1982, when the European Commission recommended cooperation between agriculture and industry. This research resulted in a very long list of arable crops that can be used in several industries and branches of industry [43][44][45]. However, the direction of bioenergy has become dominant, which is mainly due to the growing interest in obtaining inexhaustible and ecologically clean energy sources [46][47][48]. The records in the White Paper "Energy for the Future: renewable sources of energy" prepared by the European Commission in 1997 showed that by 2010, the production of firstgeneration biofuels, mainly comprising biodiesel produced from rapeseed, will increase the most [49]. The interest of scientific environment concerning the issue of biofuels was mainly stimulated by the discussion in the context of climate change, energy and food security, and the legitimacy of support for the development of their use on both national and EU (European Union) levels. On the other hand, there is a lack of comprehensive assessments relating to the agricultural sector on a micro-scale, which is the key supplier of raw materials for their production. This primarily results from the problem of identification, and especially quantification, of a wide range of effects that are the result of changes in the structure of plant production. The main motivation of our study has been to assess the impact of the EU biofuel policy on the agricultural sector. In this paper, we illustrate this for biodiesel in Poland, which is the largest producer of biodiesel produced from domestic feedstock in the EU. What Are Biofuels? 
In the RES literature, the term "biofuel" is defined very differently. It is most often used to refer to fuels produced from biomass, which can take solid, liquid, or gaseous forms [2,67,68]. However, since it started to be widely used as motor fuel, the term is dedicated to any type of liquid or gas produced from biomass that can be used as a substitute for fossil fuels [69,70]. According to the International Energy Agency [71], biofuels are "liquid and gaseous fuels produced from biomass-organic matter derived from plants or animals". Biofuels are usually classified according to two categories: type of biomass and production technologies. Biomass sources are defined in Directive 2018/2001 of the European Parliament and of the European Council (EU) as "the biodegradable fraction of products, waste and residues from biological origin from agriculture, including vegetal and animal substances, from forestry and related industries, including fisheries and aquaculture, as well as the biodegradable fraction of waste, including industrial and municipal waste of biological origin". Biomass fuels refer to gaseous and solid fuels produced from biomass and, biofuels refer to liquid fuel for transport produced that is from biomass [66]. Due to the diverse composition and suitability for the various conversion methods that are used, the following biomass categories can be distinguished [72][73][74]: • Raw materials containing significant amounts of sugar and starch (sugar beet, cereals, potatoes); • Lignocellulosic biomass (wood and its waste, targeted wood crops, straw); • Oilseeds and animal fats; • Organic waste (organic fertilisers and food and municipal waste); • Algal biomass. Depending on the type of biomass that is used, the following biofuel generations can be distinguished [75]: In 2017, the main raw materials that were used in the production of bioethanol were sugar cane, maize grain (Brazil, USA), biodiesel soybean and palm oils, animal fats, used cooking oils, and rapeseed, which was mainly used in the EU [47,74]. Biofuels are commonly referred to as first-generation fuels, which is mainly due to the fact that they use conventional technologies during their production: alcoholic fermentation, mechanical pressing, and transesterification (hydrogenation) of oils and anaerobic digestion of organic biodegradable wastes to produce biogas [71]. Due to the controversy arising from the significant quantities of agricultural raw materials used to produce biofuels [50][51][52][53][54][55][56][57][58][59][60], research on the production of second, third, and fourth generation biofuels, known as advanced biofuels, has intensified since the beginning of the 21st century. The main substrates for their production are waste and residues of biological origin from agriculture, forestry and related industries, fisheries, aquaculture, and municipal and industrial waste of biological origin. The prospective development of next-generation biofuel production is [76]: Based on experience to date, it can be concluded that apart from HVO technology, the production of other advanced biofuels is still under intensive development and work on optimising production efficiency, minimising production costs, and seeking non-commercial sources of financing is being undetaken [76][77][78]. Legal Conditions The growing interest in opportunities to increase energy production from renewable sources in the EU began after the first oil crisis. 
However, the energy obtained in this way was more expensive than conventional energy in most applications. Therefore, the EU and individual countries have taken political, legal, administrative, and financial measures to achieve this objective as efficiently as possible. The first regulations concerning the support for renewable energy sources were included in Council Regulation (EEC) No 1302/78 of 12 June 1978, which discussed the granting of financial support for projects to exploit alternative energy sources [79]. In contrast, the Council resolution of 9 June 1980 concerning Community energy policy objectives for 1990 and the convergence of the policies of the Member States required the Commission to integrate RES into the framework of community energy policies [80]. Further actions include an assessment of the potential, the state of the technology, economic conditions, and barriers related to increasing the use of RES [81,82]. Research and development work has also been intensified, among other initiatives, within the framework of the programmes Valoren, Altener, Coopener, Intelligent Energy-Europe Programme, Joule-thermie, Save, Steer, and Synergy of subsequent European Framework Programme for Research and Innovation. A milestone on the way to increase the importance of RES in the EU was the publication of The Green Paper [83] and White Paper [49] between 1996 and 1997, which were entitled "Energy for the future: renewable sources of energy". At that time, these were key documents that were political and strategic in character, setting directions for long-term policy, with quantitative targets in the form of doubling the share of RES in the structure of primary energy production from 6 to 12% between 1998 and 2010. They indicated that biomass would be the most important among renewable energy sources. Its share in the production of liquid fuels was predicted to increase (40-60 times) compared to electricity (ten times) and thermal energy (two times). These documents also formulate the need to introduce appropriate legal regulations and to secure sources of funding to achieve these ambitious goals [84]. In 2000, the Commission proposed the first two EU directives for RES, the promotion of renewable electricity and the development of biofuels in transport. The first was adopted in 2001 (2001/77/EC), and the second objective pertaining to the development of biofuels was adopted in 2003 (2003/30/EC). The biofuels directive obliged Member States to set national indicative targets to set reference values of 2% share for biofuel consumption in transport by 31 December 2005 and obliged them to increase those shares to 5.75% in 31 December 2010 [85]. To meet these requirements Member States used two main tools: tax exemptions and biofuels obligations. Additionally, they introduced a special "energy crop payment" of EUR 45 per hectare (a maximum guaranteed area of 1.5 million hectares). These measures were complemented by the extension of offers for preferential loans, guaranteed lending, and loans to small businesses for renewable energy investments by financial institutions such as the European Investment Bank (EIB) and the European Bank for Reconstruction and Development (EBRD). Despite the instruments used, the market share of biofuels in 2005 was only 1.4% [86]. Although in those first years, there were problems with the implementation of Directive 2003/30/EC in some countries, as there were intense discussions in the EU regarding increasing the market share of biofuels [87,88]. 
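For a sense of scale, the energy crop payment mentioned above caps out at a modest total; a two-line calculation, assuming (only for this illustration) that the whole guaranteed area was claimed:

payment_eur_per_ha = 45.0
max_guaranteed_area_ha = 1.5e6
print(f"maximum annual energy crop payment: {payment_eur_per_ha * max_guaranteed_area_ha / 1e6:.1f} million EUR")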
In 2009, the European Parliament and the Council adopted a climate policy package in which the European Union committed to reducing greenhouse gas emissions, expressed as a CO2 equivalent, by 20% by 2020 (if other developed countries made similar commitments, then the reduction could be as high as 30%). Over the same period, the EU was also to increase the share of renewable energy in total energy production from 8.5% to 20%, raise the share of biofuels in transport fuel to 10%, and reduce energy consumption by 20%. The biofuel sector was mainly covered by two directives [90]. The results of the research obtained within the European Framework Programme Horizon 2020 have shown low efficiency in reducing CO2 emissions through the use of traditional, so-called first-generation, biofuels, hence the proposals to reform the biofuel directives [91]. As a result of the discussions and analyses that have been conducted, the existing solutions were modified and included in Directive 2015/1513 of the European Parliament and of the Council of 9 September 2015 [92]. One of the most important changes introduced by this Directive was to set a limit on the level of first-generation biofuels, with the Directive stating that their maximum quantity in 2020 could not exceed 7%. Moreover, the condition for counting such biofuels as renewable energy was to prove that the raw materials obtained for their production did not come from areas with high biodiversity value and high carbon intensity, and that their production complied with environmental requirements, which in Poland are regulated by the Code of Good Agricultural Practice [92]. The remaining part (at least 3%) was to be produced from algae, by-products (e.g., straw, manure, seed hulls, etc.), or waste. A detailed list is provided in Annex IX of Directive 2015/1513 [65]. The necessity of meeting the EU's obligations arising from the Paris Agreement was the main determinant of the adoption of a new directive on the promotion of the use of energy from renewable sources, (EU) 2018/2001 (RED II). In this document, the Member States agreed that the share of energy from renewable sources in gross final energy consumption in 2030 will be at least 32%. After 2023, a proposal to raise this target will be considered if renewable energy production costs fall significantly or if the EU's international commitments require it. This Directive also contains many significant changes relating to the issue of biofuels [66]. The most important are: • at least a 14% share of renewable energy in final energy consumption in the transport sector by 2030; • renewable energy used in the transport sector should also comprise renewable liquid and gaseous transport fuels of non-biological origin (e.g., hydrogen) and recycled carbon fuels (e.g., derived from plastic waste or rubber); • first-generation biofuels are divided into two categories: low-risk (certification required) and high-risk indirect land use change (ILUC) biofuels, whose consumption cannot exceed 2019 levels and must be reduced from 31 December 2023 to 0% by 31 December 2030; • contributions of advanced biofuels and biogas produced from the raw materials listed in Annex IX: Part A, a minimum of 0.2% in 2022, 1% in 2025, and 3.5% in 2030; Part B, a maximum of 1.7%; • a new methodology for calculating GHG emissions.
Development of Biofuel Production in the UE Between 1996 and 1997, when the Green Paper [83] and White Paper [49] "Energy for the future: renewable sources of energy" were presented, the assumptions they made regarding the development of biofuel production in the European Union were considered unrealistic by most experts dealing with the issue [84]. However, the systematic implementation of the provisions contained in both documents and Directive 2003/30/EC contributed to the development of this economic sector. Between 1996 and 2010, the production of biodiesel in the EU increased by more than thirty times, and the production of bioethanol increased by nearly fifty times. This growth dynamic, which was mainly due to the continuation of the current EU policy on RES (Directives 2009/28/EC and 2015/1513/EC), continued. In 2018, bioethanol and biodiesel production increased by 60 and 50 times in relation to their production in 1996, respectively. In the considered period (1996-2018), the share of biofuels in the RES production structure also increased significantly, from 0.36% to 7.06% ( Figure 1). In the EU, the predominant role among biofuels is played by biodiesel, the use of which increased from 85.8% in 1996 to 81.0% in 2018. On an energy basis, biodiesel represents about 75 percent of the total transport biofuel market [93]. Globally, the share of biodiesel in the production of biofuels in 2018 was only 28.1%, with bioethanol accounting for over 70% [94]. The term biodiesel (pure) includes traditional biodiesel fatty acid methyl ester (FAME) and hydrotreated vegetable oil (HVO). The main factors that determined greater interest in the production of biodiesel in the EU rather than bioethanol were: • The Blair House Agreement (provisions on the production of oilseeds under the Common Agricultural Policy) [93,95]; • Higher margin income in the production of oilseeds, which are the primary feedstock in the production of biodiesel, than cereals [96,97] [55,70,98,107]. The largest biodiesel producers in the EU are Germany, France, the Netherlands, Spain, Poland, and Italy (Table 1). Rapeseed remains the dominant raw material used for the production of biofuels (France, Germany, Poland), but its share is systematically decreasing. In 2008, it was 72%, and in 2019, it was only 43%. This is the result of the growing use of used cooking oil (UCO) and palm oil. In 2019, the share of UCO was 21%, and it was mainly used in the Netherlands, Portugal, and Austria. The high biodiesel production in the Netherlands, Portugal and Belgium is based on imports. The incentive for its application is provided by Annex IX, point B of the RED and RED II Directives. In determining the contribution of biofuels to the final energy consumption of the transport sector, the use of UCO can be considered equivalent to twice the energy value of biofuels products from UCO. Palm oil, which had a share of 16% in 2019, has been used on a large scale in Spain, Italy, France, and the Netherlands. It has been used on a smaller scale in Finland, Germany, and Portugal. In the EU, biodiesel is also produced from sunflower seeds (Greece, Bulgaria, Hungary, Lithuania, France, Romania, Austria), animal fats (Denmark, Finland, France, The Netherlands), tall oil (Finland, Sweden), and cottonseed oil (Greece). The volume of biodiesel production supplies about 80% of the demand for this biofuel, hence the need for imports. The EU mostly imports biodiesel from Argentine, Malaysia, China, and Indonesia. 
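The double counting of UCO-based biodiesel mentioned above can be written as a one-line accounting rule: toward the transport target, each unit of energy from Annex IX Part B feedstocks is counted twice. A minimal sketch with made-up energy volumes (not reported statistics):

# Hypothetical energy supplied by each biodiesel feedstock (PJ), for illustration only.
supplied = {"rapeseed FAME": 300.0, "UCO biodiesel": 80.0, "palm oil biodiesel": 60.0}
multiplier = {"rapeseed FAME": 1.0, "UCO biodiesel": 2.0, "palm oil biodiesel": 1.0}

counted = {k: supplied[k] * multiplier[k] for k in supplied}
print("energy counted toward the transport target (PJ):", counted)
print("total counted:", sum(counted.values()), "PJ versus", sum(supplied.values()), "PJ supplied")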
Biodiesel Production and Changes in the Area under Basic Crops In the EU, the main raw material used for the production of first-generation biodiesel are oilseeds, so as demand for this type of biofuel increases, so does the area under cultivation. Based on tests that were performed independently-using the Pearson correlation-it was found that both the sown areas of oilseed plants (Y1), rapeseed and colza seed (Y2), soybean (Y3), and sunflower (Y4) were significantly correlated to biodiesel production (x). As expected, these correlations were positive, but their strength was characterised by significant differentiation. The characteristics of the estimated parameters of the models are summarised in Table 2. The model expressing the relationship between rapeseed and colza seeding areas (Y2) and biodiesel production (x), followed by Y1(x), Y3(x), and Y4(x), turned out to be the best suited to empirical data (R2 = 0.909). Among the EU countries, the production of biodiesel to the greatest extent produced determined the sown area of oilseed crops in Poland (R2 = 0.803). Table 2. Basic statistic relationships between oilseeds (Y 1 ), rape areas (Y 2 ), soybean (Y 3 ), sunflower (Y 4 ), and biodiesel production (x) in UE, Germany, France, and Poland. These relationships are reflected in changes in the sown areas of basic crops (Figure 2). The sowing area of oilseeded crops, with the exception of sunflower, increased, and the sowing area of cereals decreased (except for triticale). Trend models for the sowing of basic crops and their statistical characteristics are presented in Table 3. The estimated trend models for the sown area of oilseed crops, including oilseed rape, cereals (except wheat and maize for grain), and biofuels (except other liquid biofuels), are very well fitted to the characterised phenomena (R 2 for the mentioned variables ranges from 0.754 to 0.916). Linear models turned out to be the most fitted, except for in the case of the sown area of total oilseed crops and oilseed rape and colza. For the total oilseed crops and for oilseed rape and colza, these were quadratic trends (Table 4). These trends were characterised by a very high coefficient of determination (R 2 = 0.926), which may indicate that the used model is correct. Materials and Methods The analyses in Section 2.4 show that the implementation of the EU biofuel policy has contributed to a significant increase in oilseed sowing. In Poland, the average acreage occupied by these crops in 2017-2019 was more than 58% higher than the 2004-2006 average (Table 5). Hence, it was first necessary to identify the crop species that were abandoned in favour of oilseed crops. To this end, statistical relations between the areas sown to oilseed crops (Y1) and the areas taken up by other crops (xn) were evaluated. 
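The statistical machinery used here (Pearson correlations and simple regression models linking sown areas to biodiesel production) can be reproduced in outline with standard tools. The annual series below are placeholders, not the Eurostat or Central Statistical Office data used in the study; the sketch only shows how r and R^2 for one pair of variables would be obtained.

import numpy as np

# Placeholder annual series (not the data used in the study):
biodiesel_production = np.array([2.1, 3.0, 4.4, 5.6, 7.0, 8.1, 9.3, 10.2])  # million tonnes
rapeseed_area        = np.array([4.5, 4.9, 5.6, 6.1, 6.6, 6.8, 6.9, 6.7])   # million hectares

# Pearson correlation coefficient between production (x) and sown area (Y2).
r = np.corrcoef(biodiesel_production, rapeseed_area)[0, 1]

# Ordinary least squares fit Y2 = a + b * x and its coefficient of determination R^2.
b, a = np.polyfit(biodiesel_production, rapeseed_area, 1)
fitted = a + b * biodiesel_production
ss_res = np.sum((rapeseed_area - fitted) ** 2)
ss_tot = np.sum((rapeseed_area - rapeseed_area.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"Pearson r = {r:.3f}, linear-model R^2 = {r_squared:.3f}")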
In the next stages, on the basis of research conducted at the Institute of Plant Cultivation, Fertilisation and Soil Science National Research Institute in Puławy (IUNG PIB), the Institute of Agricultural and Food Economics National Research Institute in Warsaw (IERiGŻ PIB), literature on the subject, and data from the Central Statistical Office, the following five factors were identified and quantified, and it was on the basis of this that a synthetic assessment of the economic benefits of increasing the area of oilseed crop cultivation in Poland was made: • The sown area of oilseeds; • The area of sown crops replaced by oilseeds; • The direct surplus for the above-mentioned crops; • The value of oilseeds as a forecrop in relation to the crops that were replaced; • The profits of beekeeping; • The possibilities of using by-products for feed purposes and thereby reducing protein feed imports. Land Use Change In the years 2004-2019, the sown area of oilseed crops in Poland increased from 564.8 thousand ha to 915.9 thousand ha. Rapeseed and winter oilseed rape accounted for the largest share in the structure of these crops, from 85.2% in 2016 to 95.9% in 2013, with the average for the whole period under study being 91.0% (Figure 2). In the same period, the total sown area decreased from 11,285.4 thousand ha to 10,897.7 thousand ha. Apart from the decrease in the sown area, there were significant changes in its structure. Apart from rapeseed and colza, maize, wheat, and triticale areas increased to the greatest extent. These plant species were mainly introduced in place of spring cereal mixtures, rye, potatoes, and spring barley (Table 5). Similar trends were observed in most EU countries [100,[109][110][111]. The main reason or this was the profitability of production [96][97][98]100,112]. In order to illustrate these changes in relation to oilseed rape and colza seed, causeand-effect models were built and subjected to detailed verification, where the dependent variable was the area sown with oilseed rape and colza seed, and the independent variables were the areas of other crops, and these models were constructed using the following procedure: • The model was estimated with all of the independent variables and then statistically insignificant and non-coincident variables were removed by a posteriori elimination method; • The model was estimated using all of the independent variables as potential variables using the stepwise regression algorithm (assuming that the variable left in the model must be statistically significant at least at the level of p < 0.05) and following the rule of coincidence; • The model with independent variables negatively correlated with the dependent variable was estimated, and then statistically insignificant and non-correlated variables were removed by a posteriori elimination method; • The model was estimated by using only independent variables as the potential variables for winter crops, which were negatively correlated with the dependent variables, using the stepwise regression algorithm (assuming that the variable left in the model must be statistically significant at least at the level of p < 0.05) and following the rule of coincidence; • The dependence model of the sown area of winter rape and colza (Y) and rye (X) was estimated with the use of an additional artificial zero-one variable (with value 1 for the periods when the variable Y had significantly lower values than those resulting from the linear model; and 0-in the remaining periods). 
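A minimal sketch of the last model in this list, the rape-and-colza sown area regressed on the rye area plus a zero-one indicator for the atypical years, is given below; the series and the indicator assignment are placeholders, and statsmodels is used only to show how the coefficient significance discussed next would be checked.

import numpy as np
import statsmodels.api as sm

# Placeholder series (thousand ha), not the Central Statistical Office data:
rape_area = np.array([550, 600, 780, 810, 770, 830, 900, 920, 830, 850, 880, 910])            # Y
rye_area  = np.array([1400, 1350, 1150, 1100, 1120, 1050, 980, 960, 1050, 1020, 990, 950])    # X
dummy     = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0])  # 1 in years with atypically low Y

X = sm.add_constant(np.column_stack([rye_area, dummy]))
model = sm.OLS(rape_area, X).fit()

print(model.params)    # intercept, coefficient on rye area, coefficient on the dummy
print(model.pvalues)   # significance of each variable (compare with the p < 0.0001 reported next)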
Both variables in the model were statistically significant at the p < 0.0001 level. The obtained econometric models, whose parameters were estimated with the use of the Classic Least Squares Method, were subjected to further verification to assess their quality and the validity of their specification (e.g., tests of non-linearity, RESET specification, stability of QUSUM parameters, distribution of residuals). Finally, the selected models were characterised by the best values of the corrected coefficient of determination and the Akaike information index. No significant residual autocorrelation was found in the approximated models (LM test for autocorrelation of order 1). Due to a small number of observations, testing integration and the cointegration of the examined time series was abandoned. The only variable fulfilling these conditions was the winter rye sown area. The trend of decreasing the share of this crop in the sowing structure has persisted since the second half of the 1960s. Between 1965 and 2015 in Poland, the share of rye in the cereal sowing structure decreased from 52.8% to 9.7%. Initially, its place was taken by wheat and rye, and since 2004, its share has also been replaced by oilseed rape [113]. The introduction of oilseed rape to crops was regionally differentiated and depended on the share of good soil and the structural area of farms [114]. Stable oilseed rape yields can only be obtained in good and very good soils, which constitute about 50% of the arable land in Poland. Moreover, only larger farms can apply the correct technology needed for the production of the seeds of this plant. At present, over 70% of rape crops are grown in farm with over 50 ha of arable land. Revenues of Operations As a principle, the activities of agricultural producers aim to obtain the highest possible income from their activities. This is true outside of Poland as well, with the key factor being based on which farmers made decisions to increase the production of winter oilseed rape and colza due to its higher profitability in relation to most cereal crops, especially winter ones [96][97][98]100,101,112]. Table 6 compares the average incomes obtained from the production of winter oilseed rape and rapeseed as well as rye, triticale, and winter wheat in 2013-2019. These values were determined within the framework of the AGROKOSZTY and Polish FADN agricultural product data collection system conducted at the Institute of Agricultural Economics and Food Economics-National Research Institute in Warsaw, in cooperation with agricultural advisory centres. Over the entirety of the analysed period, the income obtained from the production of winter oilseed rape and colza was significantly higher than that of winter rye (by 59.7%) and triticale (by 29.4%) and was comparable to winter wheat [96][97][98][99]115,116]. Pre-Crop Value Apart from the financial benefits, oilseed rape cultivation is distinguished by a whole range of other favourable characteristics that are important for farms. The most important of these is its value as a forecrop, especially on farms specialising in cereal production. The cultivation of oilseed rape enables the effective interruption of the natural development cycle of cereal plant diseases and prevents the spread of weeds and pests. This makes the management of successive cereal crops easier, which helps to increase yields and reduce cultivation costs [100,101,103]. 
In addition, soil cover for 11 months of the year and a deep and extensive root system counteract erosion, improve soil aeration, and reduce nitrate leaching. The large amounts of biomass produced by oilseed rape both above and below the soil surface also contribute to the build-up of fertile humus [102]. In Poland, oilseed rape is mainly grown in simplified crop rotations (3-field rotations) after cereals, and it is most often the only crop that interrupts the succession of cereals. If oilseed rape is excluded from crop rotation, its place will be taken by cereals with greater economic value, mainly winter wheat, and that can be grown in weaker soils-triticale or rye. This situation will cause a deterioration in the value of the site for cereal cultivation and will generally result in lower yields. It is assumed that wheat yields are 15-20% higher in the stand after rape compared to pre-crops. Many years of research indicate that a negative stand for cereal cultivation cannot be fully compensated by increased fertilisation or higher doses of plant protection products [117]. The effect of lower yield of cereals under the conditions of the increased cereal shares in the sowing structure should be associated with the deterioration of phytosanitary conditions (increased intensity of diseases of the stem base and root system), weed infestation in the field (including possible compensation of noxious weeds), and the accumulation of toxic phenolic compounds in the soil [99][100][101][102][103][104]. The expanding cultivation area is also a factor stimulating the yield level of wheat, which both in Poland and in the world is traditionally sown in the position after rape [118]. The important significance of oilseed rape as a forecrop for cereals also results from its favourable effect on the soil environment under cultivation conditions, especially in terms of long monoculture sequences of monocotyledonous vegetation [119]. The attractiveness of winter oilseed rape as a forecrop is not only due to the rapid decomposition of crop residues (narrow C:N ratio) but is also due to their biofumigant effects [120]. Manfred Schoepe [96] estimated the value of a post-rape stand at 130 EUR/ha. In the presented paper, these values for wheat, triticale, and rye were set being equivalent to 11% of the yield (Table 7). Such an assumption was based on the results of research conducted in IUNG PIB [102,117,121,122] and in the literature [96,100,101,103,104]. Profits from Beekeeping Beekeeping is a very important part of the bioeconomy. However, the literature is dominated by studies on the ecosystem services provided by pollinators. According to estimates made by Launtenbach and associates [123], the global value of pollinator services in 2009 was EUR 265 billion. In Europe, the largest benefits were obtained in Italy, Greece, Spain, France, the UK, Germany, the Netherlands, Switzerland, Austria, Poland, Romania, and Hungary. The latest published estimates on the value of the ecosystem service provided to the human economy by pollinators, mainly by honeybees, puts this work at between USD 235 and 577 billion. These values may vary depending on the assessment method used and the inflation levels that are assumed. It is worth noting that successive evaluations of pollination benefits to the food economy become higher and higher [124,125]. In Poland, the economic value of bees as pollinators of crop plants alone was estimated to be around EUR 2.0 billion in 2015 [126]. 
Agriculture, however, mainly through so-called melliferous plants, can contribute to the development of apiary management. The beekeeping value of a given plant species is mainly determined by the time and abundance of flowering as well as by the abundance of nectar and pollen. Winter oilseed rape is an excellent source of honey in the first two decades of May and is classified as a commodity crop, i.e., one from which significant quantities of honey can be obtained. The flowering period of oilseed rape lasts, depending on weather conditions, from 15 to 20 days, during which flowering plants provide insects with approximately 90-120 kg of sugars and 115-160 kg of pollen from 1 hectare of crops [127][128][129]. The high beekeeping value of oilseed rape is evidenced by the intensity of its flight by pollinating insects, reaching up to 5-6 individuals per 1 m 2 of the flowering canopy in the peak insect flight hours in good weather, among which the honeybee constitutes approximately 90% of all of the insects found on flowers [130]. The value of net profit of beekeeping (the calculation as food fields for apiculture) from one hectare of oilseed rape cultivation was determined on the basis of research conducted at the Apiculture Division in Pulawy of to The National Institute of Horticultural Research, at the level of 55 EUR/ha. This amount is similar to that estimated by the Institute for Economic Research at the University Munich [96]. Conclusions In Poland, rapeseed production has been the fastest growing branch of plant production since the year 2000. Rapeseed yields have increased 2.5 times in the last 20 years. The main reason for this trend was the implementation of obligations resulting from legal acts by Member States relating to increasing the share of RES in the structure of primary energy production and to the share of biofuels in fuels used in transport in particular. In the White Paper, which was entitled "Energy for the Future: renewable sources of energy", prepared by the European Commission in 1997, it was indicated that the fulfilment of these intentions would take place through the increased production of first-generation biofuels, mainly biodiesel produced from rapeseed. In Poland, in the years 2010-2020, about 1.0-1.6 million tons of rapeseed was used for this purpose annually. Such utilization had an impact on the increase in agricultural incomes, contributed to the decrease in income disparity, and increased the chances of gaining equal-with respect to urban residents-access to goods and services. Moreover, an increase in the demand for agricultural raw materials for biofuel production created an opportunity to abolish the demand barrier that hampers the development of agriculture. Another important benefit connected to the development of the liquid biofuel sector is the processing of oilseed, thanks to which the country obtains considerable quantities of high-protein post-extraction meal, which is an important component of feedstuffs. This makes it possible to limit imports of high-protein feedstuffs, mainly soya meal, including that produced from genetically modified seeds. Due to the fact that biofuel production competes for raw materials with the food economy, at the end of the first decade of the 21st century, many called for withdrawal from the policy supporting the biofuel sector. Its implementation was to lead, inter alia, to changes in land use, mainly in the reduction of the area comprising forests and land with natural values. 
The research conducted here shows that in Poland in the period 2000-2020, the opposite trend occurred. The area of forest land increased from 9.1 to 9.6 million hectares, including increases in the area taken up by forests from 8.9 to 9.3 million hectares, and the sown area decreased from 12.4 to 10.8 million hectares despite a significant increase in rape sowing from 437 to 864 thousand hectares. The introduction of changes in the present EU biofuel policy may result in a significant reduction in the area where oilseed rape is sown and thus in a reduction in the income generated from its production. Taking into account the factors determining the cultivation of oilseed rape: soil quality, the share in the sowing structure of farms, the area structure of farms, and regionalisation related to the risk of crop freezing, it can be assumed that the growth and production of oilseed rape will be abandoned first by farms that produce the crop on land that is less suitable for oilseed rape production, e.g., medium soils (complex 5) and some good soils (complexes 8, 11), as well as smaller farms. Only in good and very good soils, which in Poland constitute around 50%, and on larger farms (over 50 ha) can a smaller reduction in rape growing area be expected. Rape will be replaced in the sowing structure by rye, triticale, and, in good soils, wheat. Compared to oilseed rape, their production is characterised by lower income per hectare; in 2013-2019 these differences amounted to EUR 8 for wheat, EUR 102.3 for triticale, and EUR 168 for rye. The expanding area of rape cultivation is a factor stimulating the yield level of other plants, mainly wheat, which both in Poland and worldwide is traditionally sown in the position after rape. The significant importance of oilseed rape as a forecrop for cereal crops results from its favourable impact on the soil environment in terms of cultivation conditions and long monoculture sequences of monocotyledonous vegetation. At present, oilseed rape is mainly grown in simplified rotations (3-field) after cereals, and it is usually the only plant that is able to interpret the succession of cereals. If oilseed rape is removed from the rotation, cereals will take its place. This situation will cause the value of the growing area used to grow cereals to decrease and thus a decrease in the yield of those cereals. On the basis of the conducted research, the estimated value of oilseed rape as a fore crop for wheat, triticale, and rye was EUR 103.7, 64.6, and 46.7, respectively. An additional advantage of oilseed rape is that it is an excellent bee resource and is classified as a commodity crop, i.e., one from which significant amounts of honey can be obtained, with a net value of EUR 55 per hectare. In addition, in many agricultural holdings, as a result of the forecasted changes in crop production, there will be an accumulation in field work during the harvest period, which will also affect the worse use of machinery and storage areas. The consequence of increasing the acreage of cereal cultivation and its supply may be worse production profitability and thus the income situation of farms, but this will be assessed at the next stage of research.
9,052.4
2021-10-25T00:00:00.000
[ "Economics", "Agricultural and Food Sciences", "Environmental Science" ]
Effects of asymmetries in computations of forced vertical displacement events Visco-resistive magnetohydrodynamic (MHD) computations with the NIMROD code (Sovinec C R et al 2004 J. Comput. Phys. 195 355) are applied to a model tokamak configuration that is subjected to induced vertical displacement. The modeling includes anisotropic thermal conduction within an evolving magnetic topology, and parameters separate the Alfvénic, resistive-wall, and plasma-resistive timescales. Contact with the wall leads to increasingly pervasive kink and tearing dynamics. The computed 3D evolution reproduces distinct thermal-quench and current-quench timescales, a positive bump in plasma current, and net horizontal forcing on the resistive wall. The MHD dynamo effect electric field, E f = − V ˜ × B ˜ , is analyzed for understanding the nonlinear effects of the fluctuations on the spreading of parallel current density and the resulting bump in plasma current. Forces on the resistive wall are consistent with Pustovitov’s analysis (Pustovitov V D 2015 Nucl. Fusion 55 113032); the plasma remains in approximate force-balance with the wall, so net force is accurately computed from integrating stress over the wall’s outer surface. Improvements to the modeling that are needed for predictive simulation of asymmetric vertical displacement events are discussed. Introduction Discharge-terminating disruptive events in tokamaks take many forms and involve a variety of plasma dynamics [1,2]. Whether it is a root cause or a consequence of other disruptive activity, vertical displacement poses a particularly significant risk, because it brings hot plasma in contact with surfaces that are not designed for extreme thermal loading. In addition, asymmetries that develop during vertical displacement events (VDEs) lead to horizontal forces on electrically conducting structures [3][4][5][6], and the forcing may rotate at rates that are comparable to mechanical harmonics [7]. The risks for ITER have motivated numerous experimental, analytical, and computational studies of disruptive dynamics and of means to mitigate their consequences. Here, we report on nonlinear visco-resistive magnetohydrodynamic (MHD) computations of vertical displacement in a model configuration and on the consequences of significant asymmetry that develops through contact with the wall. In these computations, vertical displacement occurs over the timescale of the resistive wall and is forced by externally imposed conditions on the magnetic field. Our analysis examines the force on the resistive wall during the current quench (CQ) and the spreading of parallel current density, which transiently raises the plasma current starting at the thermal quench (TQ). To date, simulations of disruptions have been based on MHD modeling or on forms of reduced MHD [8][9][10][11][12][13][14][15][16][17]. The reasoning behind this choice is that macroscopic dynamics always arise during disruptions, and MHD provides a practical model for dynamics involving gross motions and changes in magnetic topology. Because disruptions in tokamak experiments involve electron and ion kinetics, radiation, neutral dynamics, and plasma-surface interaction, in addition to macroscopic dynamics, predictive simulation of disruptions in tokamaks will require far more comprehensive modeling [18] that is beyond present-day capabilities. Nonetheless, MHD modeling provides a basis for understanding disruptive dynamics and a conceptual framework from which to build comprehensive modeling. 
Previous axisymmetric studies evolve equilibrium relations quasi-statically to examine the generation of symmetric, open-field halo currents [11] and the influence of geometric details of surrounding structures [19]. Previous simulations of asymmetric VDEs by Strauss and coauthors examined the peaking of plasma current over the toroidal angle [15], the magnitude of horizontal forces as the ratio of external-kink time and resistive-wall time is varied [6], and the influence of changing boundary conditions on flow [20]. Recent simulations of a VDE in the NSTX experiment describe the destabilization of edge kink modes resulting from contact with solid surfaces [21]. Experimental studies of VDEs trigger them by turning parts of the control system off [22] or by programming the system to induce displacement [23]. The computations discussed here are similar to the latter approach. They start from a nominally up-down symmetric, double-null equilibrium, and one of the two divertor coils is effectively turned off at the start of the computation. Vertical displacement then proceeds on the resistive-wall time τ_w as eddy currents decay, which is the case whenever the wall slows the displacement. Running both axisymmetric and 3D nonlinear computations in this controlled scenario allows direct comparison, which helps us identify effects that result from asymmetry. The remaining sections of this paper are organized as follows. Section 2 briefly describes the model and computational methods, including parameters and initial conditions. Section 3 discusses the computed results on displacement, TQ, CQ, and asymmetries. It also describes analysis of the spreading of parallel current density and forces on the wall. Section 4 provides a discussion of our findings and future efforts, and our conclusions are given in section 5. Visco-resistive MHD modeling The central objective of this study is to reproduce the major aspects of asymmetric disruption from vertical displacement. Modeling a TQ resulting self-consistently from asymmetric macroscopic instability requires thermal transport that is sensitive to changes in magnetic topology. The CQ occurs over a longer time-scale in experiments [1, 24-26], so numerical computations must also have multi-scale capabilities. In addition, net forcing on the wall only results when the wall is not an ideal conductor [27]. All of these properties are within the scope of time-dependent nonlinear non-ideal MHD models if the system of equations is solved with implicit numerical methods and is augmented with a resistive-wall and external magnetic response. We separate the problem domain, which is axisymmetric, into the two subdomains shown in figure 1, where the resistive wall lies along their intersection. The visco-resistive MHD equations (1)-(4) of the inner subdomain describe the evolution of the particle density n, the flow velocity V, the temperature T, and the magnetic field B, where the total (electron plus ion) pressure is P = nT and the adiabatic index enters the temperature equation. Relative to SI units, the Boltzmann constant is absorbed in T. The computations normalize the fields by using spatial dimensions and |B| of order unity, and by setting each of the ion mass m, the maximum of n, and μ0 to unity. The first term on the right side of equation (4) represents induction from the resistive-MHD electric field. We use the Spitzer temperature dependence of the resistivity, η ∝ T^(-3/2). The initial state of our idealized configuration has T varying by four orders of magnitude from the open-field region to the central value T_0 and, correspondingly, resistivity varying by six orders of magnitude.
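The quoted spread in resistivity follows directly from the Spitzer scaling: if η varies as T to the -3/2 power and the temperature spans four orders of magnitude, the resistivity spans six. A two-line check in Python, with only the 10^4 temperature ratio taken from the text:

temperature_ratio = 1.0e4                        # central T over open-field T, as stated above
resistivity_ratio = temperature_ratio ** 1.5     # Spitzer eta ~ T**(-3/2): eta_edge/eta_core = (T_core/T_edge)**1.5
print(f"resistivity varies by a factor of {resistivity_ratio:.0e}")   # 1e+06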
Large resistivity at low temperature suppresses current density outside regions representing hot 1 More precisely, conditions at the magnetic axis of the equilibrium discussed below have R 0 =1.65 and B 0 =1.28, so the normalized τ A is 1.29. For convenience, discussion in the text considers τ A to be 1, however. plasma during the course of 3D dynamics. The second term on the right side of equation (4) is a numerical term that, together with a high-order representation, is used to keep magnetic divergence error small in the computations [28]. The closure relations represent anisotropic thermal conduction and anisotropic viscous stress where I is the identity tensor, and is the traceless rate of strain tensor. While the form of these relations is, technically, only suitable for collisional plasma with vanishingly small ion gyro-radii [29], it facilitates our present computations, together with the relatively modest anisotropy from the coefficient values 7.5 10 , n =´-|| Thermal conduction and viscous stress with these fixed coefficients are not accurate models of collisional transport in the edge plasma or of kinetic effects in the hot core. However, the unit vector b is along the 3D magnetic field, which makes the computed transport sensitive to the evolving magnetic topology, even with the modest anisotropy. The terms on the right side of equation (1) provide numerical smoothing of the particle density field. =´used in the computations are intended to have negligible impact on physically meaningful results. As a check on this, we find that repeating the linear computation of Section 3.1 with D n and D h reduced to 5 10 7 and 0, respectively, affects the growth rate by less than 0.5%. The outer subdomain is modeled by a simplified version of equation (4), where the electric field is E J h = with 0 h m having the fixed value of 100 so that resistive diffusion there is faster than other processes in the problem. This annular subdomain lies between the resistive wall and an outer conducting shell. The two subdomains are coupled by the thinwall approximation of where n is the unit normal along the surface of the wall, the parameter v w w w 0 h m º D is the ratio of the wall's magnetic diffusivity and thickness, and ΔB is the jump in magnetic field across the wall. For spatial scales of order unity, our v w -value of 10 3 implies that the time-scale for diffusion through the wall is 10 . The initial central-η 0 value implies 10 w 3 t t h over the same spatial scale, hence . We note that equations (1)-(4) describe the non-reduced form of MHD, where the toroidal component of magnetic field evolves according to equations (4) and (7), like the poloidal components. The outer boundary condition holds the total toroidal flux contained within the conducting shell constant, but magnetic flux may diffuse through the resistive wall, which supports poloidal and toroidal eddy currents that are subject to dissipation on the w t time-scale. The conducting shell that surrounds the outer subdomain holds the normal component of B constant. The axisymmetric part of B n ·ˆalong that surface has contributions from external coils and from the initial plasma current. In addition, small error fields of order =respectively. Homogeneous Dirichlet conditions are set on the tangential components of V, and the inhomogeneous drift condition is the electric field within the resistive wall. 
This condition depends on time and space in the nonlinear computations, but its maximum value can be estimated from the initial equilibrium as v R I I I , @´-(ˆ· ) which is far smaller than the magnitude of flows that develop in the nonlinear computations, so this inhomogeneous condition is essentially equivalent to setting n V 0. = · Mass and thermal energy escape the inner subdomain via diffusion and thermal conduction. Consistent with findings reported in [20], the concerns regarding the essentially impenetrable flow condition raised in [30] are not realized in these non-ideal computations, where magnetic flux is not frozen to cooled regions of plasma. However, we have found that results are sensitive to boundary conditions on T, due to its impact on resistivity in open-field regions [31]. More realistic modeling with boundary conditions inferred from sheath effects is under development, but it is beyond the scope of the present work and will be presented in a future publication. The nonlinear computations start from the model equilibrium whose pressure and poloidal flux distribution are shown in figure 2(a). The equilibrium is the result of solving the Grad-Shafranov equation with the pressure and poloidal current profiles within the closed-flux region being P P P P a 4 1 and 8 where y is the normalized poloidal flux function that increases from 0 at the magnetic axis to 1 at the separatrix. The parameters P 1 10 , =´and I 2 e = produce the pressure and safety-factor profiles that are shown in figure 2(b). The equilibrium is determined numerically with the NIMEQ code [32], which has been modified to solve freeboundary computations. This up-down symmetric configuration has 15 axisymmetric coils outside the resistive wall at the positions shown in figure 1, and the two divertor coils are located at R 1.25 = and Z 1.45. =  The pressure distribution has a central β-value of 1.2%, and the particle density profile satisfies n n P P 0 0 1 5 = ( ) / so that edge-n is 1/10th of its central value. The results section describes four initial-value computations. The first is a linear computation for the equilibrium of figure 2. The other three are nonlinear: a toroidally symmetric 2D computation with vertical forcing, a 3D computation with vertical forcing, and a 3D computation without vertical forcing. All are solved with the NIMROD code [28] using the implicit leapfrog time-advance that is described in [33] and the stabilization scheme from [34]. They use the same mesh over the R-Z plane, which has approximately 40 000 biquadratic elements in the inner subdomain and approximately 20 000 in the outer subdomain with some degree of concentration near the resistive wall in both. Vectors are expanded in cylindrical components with H 1 -elements, where the magnetic divergence constraint can only be satisfied approximately. However, even without considering the large uniform RB I e = f field, the error diffusion term in equation (4) keeps the divergence error smaller than one part in 1500 during the most violent phase of the 3D computations. Overall, results computed with the same mesh having bicubic basis functions are similar, providing evidence of spatial convergence over the R-Z plane, but running these computations through the entire CQ is computationally prohibitive at present. Toroidal variations in 3D NIMROD computations are represented by finite Fourier series. 
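Before turning to the toroidal resolution, the profile prescription above admits a quick consistency check: with n = n_0 (P/P_0)^(1/5) and an edge density of one tenth of the central value, the implied edge pressure and temperature ratios follow from P = nT (the Boltzmann constant being absorbed in T, as stated in the model description). The short check below reproduces the four-orders-of-magnitude temperature spread quoted earlier.

n_ratio_edge = 0.1                           # edge density over central density
P_ratio_edge = n_ratio_edge ** 5             # invert n/n0 = (P/P0)**(1/5)  ->  1e-5
T_ratio_edge = P_ratio_edge / n_ratio_edge   # from P = n*T  ->  1e-4
print(f"edge/core pressure ratio:    {P_ratio_edge:.0e}")
print(f"edge/core temperature ratio: {T_ratio_edge:.0e}")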
The toroidal resolution using wavenumbers 0 ≤ n ≤ 21 in the 3D computations described here is based on experience with scanning resolution over a series of computations. Numerical quadratures over the toroidal angle use 64 evenly spaced points to preclude aliasing of quadratic nonlinearities. These computations include additional numerical damping of the n = 20 and n = 21 components at rates of 5 × 10⁻³ and 1 × 10⁻², respectively, to help avoid aliasing from high-order nonlinearities. NIMROD has been verified and benchmarked for many different models and applications (for example, [28, 33, 34]). The resistive-wall implementation has been verified on cylindrical resistive-wall mode computations and is presently being benchmarked with the M3D-C1 code on a VDE instability using data from an NSTX discharge [35]. Growth-rates for the initial configuration are computed from the linearized versions of equations (1)-(4), where the equilibrium is a fixed distribution. In the nonlinear computations, the equilibrium density and temperature distributions become the initial conditions for n and T for the axisymmetric part of the solution. The axisymmetric part of the magnetic field is decomposed. The poloidal and toroidal fields from the plasma current become part of the initial conditions, and the externally generated toroidal field, i.e. the uniform RB φ = I e part, is held constant. In the un-forced 3D computation, the poloidal field from all 15 external coils is also held constant. In the axisymmetric and forced 3D computations, the field from the upper divertor coil, the unfilled rectangle in figure 1, is also made part of the initial condition, and the field from the other 14 coils is held constant. This effectively turns the upper divertor coil off in these computations, leaving eddy current in the resistive wall in place of the upper divertor coil current. Also, the nonlinear computations are not driven by loop voltage or other external sources, so even without vertical displacement or asymmetric perturbation there is a gradual decay of plasma current and thermal energy from the start of the computations. Results For completeness, we first discuss the linear stability properties of the initial state without the forced VDE. We then describe general properties of the nonlinear evolution in the axisymmetric and 3D computations. Analysis of the current density and forces on the resistive wall follows. Linear stability of the initial state The equilibrium profile shown in figure 2 is linearly unstable to an external m = 4, n = 1 mode that has a growth rate of 4 × 10⁻³ in units of τ A ⁻¹. The equilibrium current-density relation, in which primes indicate d/dψ, is nontrivial for ψ ≲ 1 in the equilibrium. Near the separatrix of this equilibrium, the above relation is dominated by the first term on the right. The fact that the mode is distributed over the poloidal angle indicates external-kink behavior, likely associated with this edge current density and its discontinuity across the separatrix. In our nonlinear computations, the relatively cool edge region just inside the separatrix is subject to resistive diffusion. However, as discussed in the next section, contact with the wall that results from vertical displacement tends to re-sharpen the edge. Nonlinear evolution The decay of eddy current in the nonlinear computations with forced vertical displacement transitions the configuration from being diverted to being limited within the first 1000 τ A .
Over the first half of this period of time, the evolution of plasma current I p ( ) and thermal energy is nearly identical in the axisymmetric computation and the forced 3D computation. Figure 4 shows that discrepancies between the two computations develop over the second half of this period and increase thereafter, especially for the thermal energy. The spreading of the current-density distribution in the forced 3D computation, discussed later, weakens the attraction between the lower divertor coil and the plasma current, and figure 4(c) shows that the vertical motion is slowed as a result. The asymmetric perturbations in the forced 3D computation achieve their maximum amplitude during the period t 500 1100 , A A   t t and as evident from the multiple local maxima in the fluctuation spectra shown in figure 5, saturation is not a simple process. The distortion of the plasma cross-section during this early saturation phase, shown by plasma pressure in figure 6, indicates that the m 3 = perturbation dominates prior to t 500 . However, the m 2 = perturbation dominates by t 600 . Plots of the n 1 = component of pressure (not shown) support the finding that the dominant poloidal wavenumber changes over time. The perturbations also alter the magnetic topology, producing magnetic islands near the edge of the distorting region of confinement and shrinking the volume of closed field lines. While the closed-flux region of the axisymmetric computation also decreases, the decrease in that case results from contact with the resistive wall, alone. This effect also occurs in the forced 3D computation, but chaotic scattering from the asymmetric perturbations is more significant, and anisotropic thermal conduction along the scattered field lines increases the rate of thermal energy loss. By t 1000 , there are no closed flux surfaces remaining in the forced 3D computation. The rate of thermal energy loss is largest at this time, when the highest-temperature region loses closed-flux confinement. The 3D computation without vertical forcing also shows significant MHD activity. Resistive diffusion of the edge current again suppresses the m 4 = mode and excites the m 3 = external mode, but the core remains vertically centered ( figure 4(c)). The MHD activity is also less virulent. Comparing figures 5 and 7 shows that the initial saturation takes approximately twice as long without the wall contact. The m 2 = activity arises later in time, and figure 8 shows that while it again destroys all magnetic flux surfaces, there is recovery of a small central region of closed flux. This recovery does not occur in the computation with forced displacement. Noting from figure 4 that without wall contact, the plasma current continues with a relatively slow decay after thermal energy is diminished, the characterization of major versus minor disruption [24] generally describes the different behavior in our 3D computations with and without vertical forcing. As a description of VDE evolution in the absence of asymmetric instabilities, the axisymmetric computation provides information on the source of free energy for MHD activity in the forced 3D computation during its vertical transient. With the condition of , contact with the wall scrapes-off the outer part of the equilibrium, while leaving the core relatively unchanged. Figure 9(a) shows the effect on the safety factor profile, and the most striking feature is the decrease in edge q-values over time. 
We surmise that the effect changes resonance conditions for different wavenumbers in the forced 3D computation. Moreover, the loss of Figure 9(b) shows that the resulting edge-λ is nearly as large as the central λ-value, considerably larger than the initial edge-λ. The narrowness of this edge-current layer implies free energy for current-gradient-driven MHD modes, as also noted in [21], and enhances kink-type behavior, evidently followed by magnetic tearing, when asymmetries are allowed. Between the two 3D computations, this effect is only present in the one with vertical forcing, and it strengthens the MHD activity in that case. Contact with the wall also increases the edge pressure gradient in the axisymmetric computation. For q a 1, > ( ) this edge pressure gradient would tend to drive ballooning, but no distinct high-n activity appears in the forced 3D computation (figure 5). The loss of A t = Each frame shows results from the 3D computation without vertical forcing at toroidal angle 0. f = edge thermal confinement from the low-n perturbations in the 3D computation precludes the development of a clear edge pressure pedestal, and ballooning does not arise. Current bump and distribution of current density The separation of the plasma current histories for the forced cases ( figure 4(a)) starts from the initial saturation of the asymmetric instabilities, which increases I p in the 3D case. Comparing figures 4 and 5(a) shows that the I p bump continues after t 1000 , which is when magnetic fluctuations and the rate of thermal energy loss are largest. This aspect is consistent with the discussion of the JET current spike in figure 29 of [25] and with the data for a major disruption in TFTR shown in figure 1 of [26]. Comparing the two frames in figure 10 shows that the parallel current density distribution, computed from the toroidally symmetric part of B, spreads while I p increases. The concentration of poloidal flux within the central-plasma region also decreases over this time, which can be observed from the decreased density of the equally spaced, poloidal-flux contours. The increase in I p simultaneous with the spreading of poloidal flux necessarily implies a reduction of inductance. This effect does not occur in the axisymmetric computation, where the distribution of poloidal flux within the plasma core remains relatively constant while the edge is removed by contact with the wall (figure 9(b)). Similar behavior has been noted in simulations of MHD activity excited by edge impurities for disruption mitigation [36,37]. Part of the change of poloidal flux can be attributed to resistive dissipation that is enhanced by the decreasing temperature during the TQ. However, a stronger effect results from the correlation of magnetic and flow-velocity fluctuations, the MHD dynamo effect E V B , f º -á´ñ˜where 'fluctuation' refers to the toroidally asymmetric component of a field. Conceptually, this stems from mean-field theory of MHD for astrophysics [38] and was first appreciated for magnetic confinement in the context of magnetic relaxation in reversed-field pinches [39,40]. It was later applied to analyses of driven spheromaks [41,42], helicity injection in tokamaks [43] and spherical tokamaks [44][45][46], and the tokamak hybrid mode [47]. 
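The fluctuation-induced ("dynamo") electric field defined above, E f ≡ −⟨Ṽ × B̃⟩, is operationally just the toroidal average of the cross product of the toroidally asymmetric parts of V and B. The sketch below is a minimal numpy illustration of that operation on fields sampled over a uniform grid of toroidal angles; the array shapes are invented for the example, and this is not the diagnostic actually used to produce figure 11.

```python
import numpy as np

def dynamo_efield(V, B):
    """Fluctuation-induced electric field E_f = -< V_tilde x B_tilde >_phi.

    V, B : arrays of shape (nphi, ..., 3) holding velocity and magnetic field
           sampled at nphi evenly spaced toroidal angles (last axis is the
           vector component).  The toroidal average is taken over axis 0.
    """
    V_mean = V.mean(axis=0, keepdims=True)      # toroidally symmetric part
    B_mean = B.mean(axis=0, keepdims=True)
    V_t = V - V_mean                            # asymmetric (fluctuating) part
    B_t = B - B_mean
    return -np.cross(V_t, B_t).mean(axis=0)     # <.>_phi of the cross product

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    nphi, nR, nZ = 64, 8, 8                     # invented grid sizes
    V = rng.standard_normal((nphi, nR, nZ, 3))
    B = rng.standard_normal((nphi, nR, nZ, 3))
    E_f = dynamo_efield(V, B)
    print(E_f.shape)                            # (8, 8, 3): E_f on the poloidal grid
```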
Averaging Faraday's law, t B E, ¶á ñ ¶ = -´á ñ and considering low frequencies leads to the Poynting theorem for the symmetric field, The term E J f á ñ · is included in the right side of equation (9) and is part of a nonlinear magnetic energy transport process that acts through MHD fluctuations [43,48]. Figures 11(a), (b) show this power density and the tor- which is when the I p bump begins in the forced 3D computation. To gauge the magnitude of E , f we also plot an approximation for the resistive electric field that acts on the toroidally averaged current density J . á ñ f Figure 11(c) shows the product of J á ñ f and resistivity which is computed with the averaged temperature. In the plot, locations where the current density values are below 1% of the maximum are set to 0 so that the image is not dominated by the outer, cold region where resistivity is orders of magnitude larger. Comparing figures 11(b), (c) shows that where the dynamo electric field is active, it is more than an order of magnitude larger than the resistive dissipation of J , á ñ f and approximately 100 times that of J h f on the magnetic axis at the start of the computation. Thus, the E f contribution generates net toroidal electric field, which alters the poloidal-flux and current-density distributions. The E J f á ñ · power density is mostly positive near the core and negative in the edge, which removes energy from B á ñ in the core and deposits it in the edge. This process spreads the current-density distribution. In addition, the fluctuationinduced loop voltage from E f f ·ˆredistributes poloidal flux; locations where RE f f ·ˆdecreases in the flux-normal direction push the poloidal flux distribution outward. The consideration of correlated fluctuations helps describes the nonlinear effects of asymmetric instabilities on the toroidally averaged field, but it does not provide a full sense of their influence. Section 3.1 already noted that chaotic scattering extends over the entire plasma volume by the condition shown in figure 10(b). The local evaluation of λ at this time, figure 12(a), identifies considerable spatial structure that mean-field relaxation analysis does not. The spatial structure includes current sheets and regions of anti-parallel current, and these spatial structures extend beyond the region of closed poloidal flux contours shown in figure 10(b). This serves as a reminder that the poloidal flux function does not represent the magnetic topology of the 3D evolving field. The temperature distribution in the same poloidal plane, figure 12(b), also shows hot structures that extend beyond the region of closed flux contours. While the temperature distribution, and hence the electrical conductivity distribution, is not identical to the λ-distribution, a strong correlation is apparent from the figure. Forces on the resistive wall Our computation of net forces on the resistive wall follows the analysis given by Pustovitov in [27]. Two essential properties used in the analysis are that (1) together, the plasma and resistive wall form an electrically isolated system and (2) plasma inertia is negligible for the timescale of the evolution. The net force on the wall can then be computed from a surface integral of magnetic stresses over the outside of the wall. The thin-shell approximation of our model actually necessitates the use of stresses, because current per unit cross-section area is infinitely large. 
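The surface-integral evaluation described above uses the magnetic part of the Maxwell stress, so the net force on the wall is F = ∮ [ (B·n)B − (B²/2)n ] / μ 0 dS over a closed surface just outside (or inside) the wall. The following sketch is an illustrative discretization of that integral on an assumed set of surface elements, not the production diagnostic behind figure 13; the toy test exploits the fact that a uniform field exerts no net force on a closed surface.

```python
import numpy as np

def maxwell_stress_force(B, n_hat, dS, mu0=4e-7 * np.pi):
    """Net force from integrating magnetic stress over a closed surface.

    B     : (npts, 3) magnetic field on the surface elements [T]
    n_hat : (npts, 3) outward unit normals of the elements
    dS    : (npts,)   element areas [m^2]
    Returns the Cartesian force vector (3,) in newtons.
    """
    Bn = np.einsum("ij,ij->i", B, n_hat)                  # B . n per element
    B2 = np.einsum("ij,ij->i", B, B)                      # |B|^2 per element
    stress = (Bn[:, None] * B - 0.5 * B2[:, None] * n_hat) / mu0
    return np.einsum("ij,i->j", stress, dS)               # sum of stress * dS

if __name__ == "__main__":
    # Toy check on a unit sphere: a uniform field gives (numerically) zero net force.
    ntheta, nphi = 100, 100
    theta = (np.arange(ntheta) + 0.5) * np.pi / ntheta
    phi = (np.arange(nphi) + 0.5) * 2 * np.pi / nphi
    TH, PH = np.meshgrid(theta, phi, indexing="ij")
    n_hat = np.stack([np.sin(TH) * np.cos(PH),
                      np.sin(TH) * np.sin(PH),
                      np.cos(TH)], axis=-1).reshape(-1, 3)
    dS = (np.sin(TH) * (np.pi / ntheta) * (2 * np.pi / nphi)).ravel()
    B = np.tile([0.0, 0.0, 1.0], (len(dS), 1))
    print(maxwell_stress_force(B, n_hat, dS))             # ~ [0, 0, 0]
```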
Here, we check the consistency of our where e ĵ is a Cartesian unit vector and the integral is broken into separate contributions over the inner and outer surfaces of the resistive wall. Noting that the net force acting on the plasma is F in - and, therefore, the implication from property 2 of F 0, in   we expect F F . j out j  The net horizontal force, which only results with toroidal asymmetries, peaks at a normalized amplitude of 0.08 at t 3000 A t = in the forced 3D computation ( figure 13(a)). To provide a sense of scale, if this force was generated from m 1, = n 1 = tilting of the plasma current, Noll's relation [3], would imply that the toroidally asymmetric displacement would only be 0.011. Nonetheless, we can check whether Pustovitov's property 2 holds for this relatively weak force. We find that the largest instantaneous realization of F F in out is the value of 3 10 2 that occurs briefly at t 600 . A t @ Using the maximum over time of each integral, we find F F max max 5 10 . in out 3 @´-( ) ( ) These ratios are larger than what is discussed in [27] for cases of large forces in JET, but even here they are consistent with the analysis. Separating the horizontal components of force, figure 13(b), shows that the orientation of the horizontal force vector changes over the simulated displacement event. This has been observed in previous MHD simulations of disruption [49], but effects outside the scope of MHD are likely needed to predict the rotation of forces. Comparing figures 5(a) and 13(a) shows that the largest horizontal force on the wall occurs much later than the peak of the asymmetric magnetic fluctuations. In fact, the force 2 , here n is the unit outward normal along the wall. The maximum force density is larger at t 3000 A t = than at t 971 , A t = despite the smaller internal fluctuation energy. During the intervening time, the discharge has moved somewhat closer to the lower divertor, and the magnetic perturbations have more time to penetrate the resistive wall. The distribution over the toroidal angle has also changed such that there is less cancellation when integrating over the inboard and outboard sides. Discussion Visco-resistive MHD modeling greatly simplifies the entirety of physics that influences disruptions in tokamak experiments. Approximations that directly influence the computational results presented above are the lack of radiation modeling, the Dirichlet boundary conditions on temperature, and the lack of runawayelectron (RE) effects. The modeled TQ resulting from local anisotropic thermal transport in the presence of changing magnetic topology is fast in our forced 3D computation, relative to the CQ. However, a realistic kinetic model for parallel heat transport, or even a larger value of , c || would further separate the TQ and CQ timescales. Radiation from impurities is also expected to have an important role in the TQ, and as noted, it has not been modeled. The present computations rely on thermal conduction to the resistive wall, where the imposed Dirichlet condition maintains low temperature. Parallel conduction quickly cools open field-lines, leading to the thin halo region in our axisymmetric result. From [21] and our work in this area [31], we know that computed VDE evolution is sensitive to edge temperature, so more comprehensive modeling is needed. For example, boundary conditions for edge turbulence modeling have been developed using conditions at the magnetic pre-sheath entrance [50]. 
We are adapting this more realistic approach for our disruption computations [31], but because electrons are then largely insulated, it may also require a radiation model to expel electron heat that is conducted toward the surface. Finally, experimental CQs are often extended by the formation of RE beams [1,25,51]. We expect that practical computations in the near future will rely on reduced modeling of REs, such as the one developed in [52] and already applied in recent M3D simulations. The possibility of kink-induced surface current, progressing in the direction opposite to that of the plasma current and traveling partly through conducting structures has been raised in [5]. While this wall-touching kink mode (WTKM) is predicted to be most problematic for the m 1, = n 1 = mode, which arises with sufficiently small q-values, the physics of reversed surface currents also arises with other external modes [53,54]. In Section 3.3, we note the existence of reversed parallel current density at the time of maximum-I p in the forced 3D computation, but the nonlinearly distorted current profile makes it difficult to relate the reversed parallel current density with any particular kink mode. However, at t 473 A t = when the m 3, = n 1 = mode dominates, the external kink distortion is clearer. The 0.15 l =isosurface in figure 15 is computed from the total current density at this time, and it lies along the outward bulge from the kink distortion. The existence of this helical current channel demonstrates that the physics of reversed surface currents is within the scope of resistive-MHD models using T-dependent resistivity to track an effective plasma surface. Conclusions This computational study of an asymmetric VDE in an idealized configuration considers a number of physical effects that are important for tokamak experiments. The 3D result reproduces distinct timescales for the TQ and CQ phases, despite the modest parallel heat conduction and absence of RE physics. The thermal energy collapse results from chaotic scattering of magnetic field lines associated with increasingly pervasive MHD modes and nonlinear coupling. In addition, the MHD activity generates nonlinear processes that redistribute the parallel current density. We argue that the effects on the symmetric components of B and J can be understood through analysis of the fluctuation-induced E f electric field, which is the MHD dynamo effect known from studies of current drive in RFPs, spheromaks, and other configurations. Because the fluctuations do not dissipate rapidly, the decrease in inductance and the resulting bump in plasma current that are associated with current-density spreading persist beyond the time when the rate of thermal energy loss is greatest. We also find that relative to the axisymmetric result, the spreading of current reduces the attractive force exerted by the current in the divertor coil. Thus, final termination of the plasma current through contact with the wall takes longer with the asymmetric distortions. We have computed forces on the resistive wall by integrating the Maxwell stress over its surface. In the modeled case, the net horizontal force is not large relative to conditions of significant tilting of the toroidal current. Nonetheless, we find that the net force over the inner surface of the wall is much smaller than that over its outer surface. 
This is consistent with the analysis of [27], which argues that the plasma and wall must remain in approximate force balance on timescales of interest, so the net force can be accurately computed by integrating stress over the outer surface alone. Our force computation takes into account contributions from asymmetric conduction currents flowing from the plasma to the resistive wall, including any reversed currents of external-kink dynamics, demonstrated in section 4, which underlie the WTKM theory [5]. While the computed scenario does not develop a large m = 1, n = 1 distortion, our results support the possibility of reproducing WTKM physics. This work also supports the prospect of using Eulerian-frame computation for simulating disruptions. We expect Lagrangian and other moving-frame approaches to be more efficient in cases where the plasma torus remains intact. However, instances of significant distortion and magnetic topology change, such as those our model case produces, would tangle Lagrangian meshes or at least remove many of the computational benefits of moving meshes. Nonetheless, 3D computations, such as the two presented here, are computationally intensive, and improved solver efficiencies and better use of recent computer-hardware developments are needed to make these computations more practical. Predictive simulations of asymmetric VDEs are also expected to require detailed representations of external conductors, such as those developed for resistive-wall mode studies [55,56] and discussed in [57], together with the physical-model developments described in section 4.
Novel solitons and periodic wave solutions for Davey–Stewartson system with variable coefficients In this paper, the variable coefficients Davey–Stewartson system represents many physical phenomena in shallow water waves, quantum and optics, etc, is transformed directly into nonlinear ordinary differential system by using the new modification to the direct similarity reduction method. After solving the reduced system, new Jacobi, hyperbolic and periodic wave solutions are achieved for complex variable coefficients Davey–Stewartson system. The application of the new modification of the direct similarity reduction method reflects how this method is powerful, easy and simple, if it is compared with other symmetry techniques. Introduction Recently, the aspect of solitary wave attracted many physicians as it is very common in many fields of physics, such as ocean dynamics, fluid mechanics, plasmas and optics. Moreover, when the solitary wave keeps its shape after interaction with other solitary waves, it becomes a soliton. Therefore, solitons also are important in many fields, especially quantum mechanics and optics. From a different point of view, solitary waves and solitons are considered solutions for many famous models in partial differential equations, as for example Korteweg de-Vries (KdV) and the Nonlinear Schrodinger (NLS) equations [1][2][3][4][5][6][7][8]. The importance of the direct similarity reduction CK method combined with homogeneous balance method is the direct reduction of n-dimensional NPDE to an ordinary differential equation. On the other hand, other symmetry methods reduce the n-dimensional NPDEs to (n−1)-dimensional NPDEs. Therefore, the symmetry method or other methods should be used to reduce those equations again. In this study, the direct similarity method has been modified in order to apply it on the complex variablecoefficient systems. As one example, this method will be applied on the following variable coefficients (Davey-Stewartson (vcDS) system): where = (t, x, y), and = (t, x, y) are the complex wave envelope and the real forcing terms, respectively. Note that a 1 , a 2 , b 1 and b 2 are real functions in t, where a 1 and a 2 are corresponding to the group velocity dispersion (GVD) terms and b 1 and b 2 represent the nonlinear cubic coefficient and quadratic nonlinearity terms while s 1 and s 2 are real constants (for more details about the physical background of the vcDS system see [41,42]). Recently, Zhou et al. [41] solve the vcDs for Lax pair and Bäcklund transformation. Moreover, Wei et al. [42] obtain conservation laws and similarity solutions using the classical Lie group. Methodology In this paper, one more modification for the direct similarity reduction method will be applied [35][36][37][38][39][40] to transform complex vcNPDEs systems to nonlinear ordinary differential systems: Consider a vcNPDEs system as follows: where a i (t) and b j (t) are arbitrary functions in t, k = 2 and i and j are some positive integers. In the following steps, system (1) is transformed directly to a system of nonlinear ordinary differential equations as follows: (1) Assume where is the newly independent variable and ψ and φ are the new dependent similarity variables, respectively, c 1 and c 2 are undetermined constants. Moreover, (t) is an unknown function on t and m 1 and m 2 represent positive integers obtained from balancing between nonlinear and linear terms in system (1). (2) Collect the coefficients of ψ, φ and its derivatives. 
(3) Assume that the normalized coefficient is that of the largest linear term; then collect the like derivatives and powers of ψ and φ and equate each with the normalized coefficient multiplied by an arbitrary function of the similarity variable, indexed by a positive integer l. After that, a partial differential system in (t, x, y) is obtained. Results and discussion Many of the wave solutions obtained in the previous section are new for the vcDS system compared with solutions reported previously in the literature [22,41,42]. In this section we discuss plots of the modulus of the complex wave envelope, obtained by setting m 2 = 0 in (8). In this case, Equation (9) shows that the similarity variable becomes m 1 x − 2 ∫ c 1 m 1 a 1 (t) dt, which depends only on the variable coefficient a 1 (t). We have therefore plotted the modulus of the periodic wave-packet solution, its kink-soliton limit, and the soliton solution for a 1 (t) = 1, t and sin(t), respectively. From Figures 1-3 we conclude that the propagation of the periodic, kink-soliton and soliton solutions is affected by the values of the variable coefficient function a 1 (t). Finally, we hope that the investigation of multiple solitary and periodic wave solutions for the variable-coefficient Davey-Stewartson system may shed light on new types of solitary waves generated by this system in hydrodynamics, plasma physics and Bose-Einstein condensates. Conclusion In this paper, the vcDS system is transformed directly into the third-order nonlinear ordinary differential system (6)-(7) by using the newly modified direct similarity reduction method. After integration, the reduced system becomes a Riccati equation; solving it yields different types of wave solutions, such as solitons and periodic waves. One of the most important features of the modification presented in this study is that it transforms the vcDS system from a (2 + 1)-dimensional system to a one-dimensional (ordinary differential) system in a single step. By contrast, in [42] the vcDS system is reduced twice: first from (2 + 1) dimensions to (1 + 1) dimensions using the classical Lie group method, and then to an ordinary system by applying the Lie group method again. Therefore, we conclude that the similarity methodology used in this paper is more efficient and easier to apply than the classical Lie group method. Moreover, many of the exact solutions obtained for the vcDS system are new compared with those obtained previously in [22,41,42].
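As a rough illustration of how a 1 (t) enters only through the similarity variable, the sketch below evaluates the variable m 1 x − 2 ∫ c 1 m 1 a 1 (s) ds for the three choices a 1 (t) = 1, t and sin t and feeds it into a tanh kink profile. The constants m 1 and c 1, the integral form of the similarity variable and the tanh profile are placeholder assumptions for illustration; they are not the exact expressions (8) and (9) of the paper.

```python
import numpy as np

def similarity_variable(x, t, a1, m1=1.0, c1=1.0, nquad=2001):
    """xi = m1*x - 2 * integral_0^t c1*m1*a1(s) ds (trapezoidal quadrature)."""
    s = np.linspace(0.0, t, nquad)
    vals = a1(s)
    integral = np.sum(0.5 * (vals[:-1] + vals[1:]) * np.diff(s))
    return m1 * x - 2.0 * c1 * m1 * integral

def kink_profile(xi):
    """Assumed kink (tanh) limit of the periodic wave packet."""
    return np.tanh(xi)

if __name__ == "__main__":
    x = np.linspace(-10.0, 10.0, 5)
    cases = [("a1 = 1", lambda s: np.ones_like(s)),
             ("a1 = t", lambda s: s),
             ("a1 = sin t", np.sin)]
    for name, a1 in cases:
        xi = similarity_variable(x, t=2.0, a1=a1)
        print(name, np.round(kink_profile(xi), 3))
```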
Study of thickening polymeric compositions for printing fabric of blended fibers . Thickening compositions of a new composition for printing mixed fabrics based on cotton and nitron fibers have been developed. The influence of the components of dampening compositions on the rheological properties of the composition depending on their concentration was studied. Established the development of a new composition of the thickening composition of its physico-chemical and rheological properties. Introduction For printing mixed fabrics, it should be noted that, especially natural and synthetic (cotton and nitrone) fibers with active dyes, the issue of choosing a thickener is important. Because, most traditional thickeners by nature and chemical structure are high molecular weight hydroxyl compounds, i.e. to polysaccharides, which are similar in chemical structure to cellulose. Based on production experiments, alginic acid salts correspond to the greatest positive properties, i.e. sodium alginate and synthetic thickeners [1]. During the processing of cotton fiber in the yarn, it is subjected to a number of mechanical impacts leading to the deterioration of its properties. Therefore, cotton fiber is referred to and the fabric based on it after the stage of dummon and bleach is printed using different ingredients. As is known, in order to create a colorful pattern on a fabric using the existing technology, it is necessary to place the dye in a viscous system that is capable of ensuring its transition from an in-depth engraving or pattern grid to the fabric. A viscous system is a thickener. Thickeners are polymer solutions, multicomponent, highly structured disperse systems. Despite these requirements, they are not ideal. These thickeners do not meet the requirements in production mainly for two reasons: their high cost and sensitivity to hardness salts and pH. There are such thickeners as Solvitose C-5, Emprint, Manutex because of the high cost makes it difficult to use them on a large scale in production. Therefore, the use of domestic products based on oxidized starch, PAA and K-4 preparation as thickeners when printing mixed fabrics with active dyes is interesting as an alternative to expensive imported thickeners. Therefore, it became necessary to study the effect of a new composition of thickening compositions on the printing properties of blended cotton and nitron fabrics. Oxidized starch, polyacrylamide, and K-4 were selected as thickening compositions [2]. Methods When printing mixed textile materials with active dyes at domestic textile enterprises, starch cannot be used as a thickener. Because, when printing with starch hydrogels, low values of the degrees of dye fixation are obtained as a result of the chemical binding of the dye with a thickener, i.e. with starch and, as a result, the formation of hard-to-remove films, which greatly complicates the technological process of washing printed fabrics. It is this circumstance that contributes to ensuring the stability of the color to physical and chemical treatments. In addition, as a result of printing, the active dye becomes part of the fiber macromolecule, as a result of which the fixation of the dye increases, leading to high resistance to wet processing, friction, color intensity and other external influences [3]. 
Despite significant achievements in the field of creating cotton fibers, success in this area is far from being exhausted, therefore the development of effective water-soluble compositions based on local raw materials suitable as a thickener in the process of printing mixtures based on cotton and nitron fibers is a very urgent task. Results and discussion When printing samples of mixed fabrics based on cotton and nitrone fibers at a ratio of 70:30, the following active dyes were respectively selected. These dyes are called bright red 5CX, remazol bright blue R, orange 2CT. It should be noted that the choice of dyes is due to the fact that the above active dyes are widely used for printing many fabrics, such as cotton, viscose and other mixed fibers, give a strong high-quality color that is resistant to the physical and chemical effects of printed patterns using the proposed thickener [4]. The influence of the components of dampening compositions on the rheological properties of the composition depending on their concentration was studied. The data obtained are shown in Table 1. From the data obtained (Table 1) it can be seen that with an increase in the amount of PAA in the composition of oxidized starch, its viscosity, the degree of thixotropic reduction, and the yield strength change significantly. As can be seen from the Table, as a result, the development of a new composition of the thickening composition of its physico-chemical and rheological properties in relation to thickeners containing starch, carboxymethyl starch and Na-CMC becomes high. And in relation to thickeners of sodium alginate and solvitosis, the rheological properties of the developed composition become close. During the addition of a 6.0% oxidized starch of 1.0% PAA, the solution has a higher viscosity, the rheological properties change and the degree of thixotropic recovery is 85.9%, and the yield strength is 46.37 g / cm 2 . In this regard, it represented the development of the technology of a new composition of thickening compositions for the printing of blended tissues based on cotton and nitron fibers. From the data shown in Fig. 1, it can be seen that the introduction into the composition of the finished 6.0% oxidized starch, 1% PAA and 1.5% K-4 (relative to the mass of the total solution) allows you to get a more viscous printing paint than without additive. The cause of high viscosity, in our opinion, is the additional formation of supramolecular structures with hydrogen and intermolecular bonds between functional groups OK, PAA and K-4. Each type of links contributes to an increase in the stability of the starch of oxidized celastic [5]. The identified behavior of the developed thickening composition is very important to achieve the necessary viscosity during the operation of the printed machine. At the same time, in this case, the amount of thickety administered to the composition of the printed paint can be reduced by 15-17% by adding oxidized starch 1.0% PAA and 1.5% K-4. The pre-economic calculation of the receipt of printed paints on the basis of a 6% oxidized starch and compositive (obtained by the "man-made" method) of thickens showed that the introduction of 1.0% PAA and 1.5% K-4 in oxidized starch In general, it allows you to reduce costs in the production of thickening. 
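For readers less familiar with the rheological quantities discussed above, the short sketch below shows how an apparent viscosity, a yield stress and a degree of thixotropic recovery relate in a simple flow-curve picture. It assumes a Herschel-Bulkley model and defines the recovery as the percentage of the initial apparent viscosity regained after shearing; both the model and the parameter values are illustrative assumptions, not data from this study.

```python
# Illustrative rheology sketch (assumed model and values, not measured data).

def herschel_bulkley_stress(gamma_dot, tau0, K, n):
    """Shear stress tau = tau0 + K * gamma_dot**n (Herschel-Bulkley model)."""
    return tau0 + K * gamma_dot ** n

def apparent_viscosity(gamma_dot, tau0, K, n):
    """Apparent viscosity eta_app = tau / gamma_dot."""
    return herschel_bulkley_stress(gamma_dot, tau0, K, n) / gamma_dot

def thixotropic_recovery(eta_initial, eta_recovered):
    """Degree of recovery as a percentage of the initial apparent viscosity
    (assumed definition for illustration)."""
    return 100.0 * eta_recovered / eta_initial

if __name__ == "__main__":
    tau0, K, n = 45.0, 12.0, 0.6        # placeholder yield stress and consistency
    eta0 = apparent_viscosity(10.0, tau0, K, n)
    eta_rec = 0.86 * eta0               # placeholder post-shear recovery
    print(f"apparent viscosity at 10 1/s: {eta0:.1f}")
    print(f"thixotropic recovery: {thixotropic_recovery(eta0, eta_rec):.1f} %")
```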
To estimate the degree of destruction of the internal structures under printing conditions and to analyze the changes in these structures, the thickener components were characterized by rheological parameters for compositions of various formulations. Analysis of the scientific and technical literature establishes that printing cotton fabrics with active dyes has certain disadvantages owing to the lack of a wide range of thickeners that can meet all of the requirements for high-quality patterns on the textile material. In this regard, it is of interest to justify the possibility of using water-soluble compositions based on oxidized starch (OK), polyacrylamide (PAA) and the preparation K-4 as thickeners for printing with active dyes, and thereby to expand the range of thickeners through the use of these promising types of polymers [6-20]. Conclusion The experiments carried out showed that the components of the thickening polymer compositions are compatible with the selected active dyes. To determine the influence of the components of the thickening compositions on the quality and stability of the color, the printing quality parameters were evaluated: the color saturation, the degree of dye fixation, and the resistance to wet and mechanical treatments (fastness to washing, to perspiration and to rubbing), using generally accepted methods.
Coherency and incoherency in neutrino-nucleus elastic and inelastic scattering Neutrino-nucleus scattering $\nu A\to \nu A$, in which the nucleus conserves its integrity, is considered. We show that elastic interactions keeping the nucleus in the same quantum state lead to a quadratic enhancement of the corresponding cross-section in terms of the number of nucleons. Meanwhile, the cross-section of inelastic processes in which the quantum state of the nucleus is changed, essentially has a linear dependence on the number of nucleons. These two classes of processes are referred to as coherent and incoherent, respectively. The coherent and incoherent cross-sections are driven by factors $|F_{p/n}|^2$ and $(1-|F_{p/n}|^2)$, where $|F_{p/n}|^2$ is a proton/neutron form-factor of the nucleus, averaged over its initial states. The coherent cross-section formula used in the literature is revised and corrections depending on kinematics are estimated. As an illustration of the importance of the incoherent channel we considered three experimental setups with different nuclei. Experiments attempting to measure coherent neutrino scattering by solely detecting the recoiling nucleus, as is typical, might be including an incoherent background that is indistinguishable from the signal if the excitation gamma eludes its detection. However, as is shown here, the incoherent component can be measured directly by searching for photons released by the excited nuclei inherent to the incoherent channel. For a beam experiment these gammas should be correlated in time with the beam, and their higher energies make the corresponding signal easily detectable at a rate governed by the ratio of incoherent to coherent cross-sections. The detection of signals due to the nuclear recoil and excitation gammas provides a more sensitive instrument in studies of nuclear structure and possible signs of new physics. Neutrino-nucleus scattering νA → νA, in which the nucleus conserves its integrity, is considered. Our consideration follows a microscopic description of the nucleus as a bound state of its constituent nucleons described by a multi-particle wave-function of a general form. We show that elastic interactions keeping the nucleus in the same quantum state lead to a quadratic enhancement of the corresponding cross-section in terms of the number of nucleons. Meanwhile, the crosssection of inelastic processes in which the quantum state of the nucleus is changed, essentially has a linear dependence on the number of nucleons. These two classes of processes are referred to as coherent and incoherent, respectively. Accounting for all possible initial and final internal states of the nucleus leads to a general conclusion independent of the nuclear model. The coherent and incoherent cross-sections are driven by factors |F p/n | 2 and (1 − |F p/n | 2 ), where |F p/n | 2 is a proton/neutron form-factor of the nucleus, averaged over its initial states. Therefore, our assessment suggests a smooth transition between regimes of coherent and incoherent neutrinonucleus scattering. In general, both regimes contribute to experimental observables. The coherent cross-section formula used in the literature is revised and corrections depending on kinematics are estimated. Consideration of only those matrix elements which correspond to the same initial and final spin states of the nucleus and accounting for a non-zero momentum of the target nucleon are two main sources of the corrections. 
As an illustration of the importance of the incoherent channel we considered three experimental setups with different nuclei. As an example, for 133 Cs and neutrino energies of 30 − 50 MeV the incoherent cross-section is about 10-20% of the coherent contribution if experimental detection threshold is accounted for. Experiments attempting to measure coherent neutrino scattering by solely detecting the recoiling nucleus, as is typical, might be including an incoherent background that is indistinguishable from the signal if the excitation gamma eludes its detection. However, as is shown here, the incoherent component can be measured directly by searching for photons released by the excited nuclei inherent to the incoherent channel. For a beam experiment these gammas should be correlated in time with the beam, and their higher energies make the corresponding signal easily detectable at a rate governed by the ratio of incoherent to coherent cross-sections. The detection of signals due to the nuclear recoil and excitation γs provides a more sensitive instrument in studies of nuclear structure and possible signs of new physics. The process of neutrino scattering, by means of Z 0 -boson exchange, off a system of bonded particles provides a great laboratory to test principles of quantum physics and search for new phenomena. Under certain conditions the corresponding interaction probability acquires an extra factor with respect to the case of scattering off free particles. arXiv:1806.08768v1 [hep-ph] 22 Jun 2018 This extra factor, proportional to the number of scatterers, is a direct consequence of the principles of quantum physics. The probability of an outcome is determined by the absolute value squared of the sum of amplitudes corresponding to indistinguishable paths to realize this outcome. Neutrinonucleus scattering in which the nucleus conserves its integrity is an example of this kind, as was observed by Freedman [1] more than four decades ago. There are two distinct outcomes of such interactions: (i) the nucleus remains in the same quantum state and (ii) the state is changed. We refer to these cases as elastic and inelastic scatterings, respectively, because in (i) the energy transfer to the recoil nucleus is vanishingly small, while in (ii) it is apparently non-zero. It was shown [2][3][4][5] that the cross-section of elastic neutrino scattering off a nucleus is amplified with respect to a neutrino scattering off a single nucleon. The amplification factor for a spin-less even-even nucleus reads where Z and N are the numbers of protons and neutrons, g p/n V are proton/neutron couplings of the nucleon vector current, and F p/n (q) are proton/neutron form-factors of the nucleus. The form-factors approach unity if where R A is the radius of the nucleus. The form-factors vanish at |q| → ∞. Neutrinos with energies below some tens of MeV predominately conserve the integrity of nucleons in neutrinoquark interactions with Z 0 -boson exchange, allowing one to consider this process using an effective neutrino-nucleon interaction in which the nucleon current is a sum of vector and axial currents. The corresponding axial currents do not contribute significantly when a neutrino elastic scatters off of a spin-less nucleus due to the cancellation in the sum of amplitudes. The vector coupling g p V = 1 2 − 2 sin 2 θ W of the proton is small (g p V ≈ 0.023) and is neglected in the approximate equality in Eq. (1). 
In our estimates we used a best-fit value of sin 2 θ W = 0.23865, determined using low energy neutrino data and MS renormalization scheme [6]. Freedman coined the terminology "coherent neutrinonucleus scattering" to emphasize the fact that the dependence of the corresponding cross-section is quadratic in terms of the number of nucleons. This dependence was attributed to nearly identical amplitude phases corresponding to a neutrino scattering off nucleons. The first experimental evidence for coherent neutrinonucleus scattering was reported in 2017 by the COHERENT Collaboration [59][60][61], using CsI[Na] scintillator exposed to neutrinos with energies of tens of MeV produced by the Spallation Neutron Source (SNS) at the Oak Ridge National Laboratory [62][63][64]. Our motivation for this work was triggered by the following observation. At neutrino energies of some tens of MeV the three-momentum transfer q is large enough to break the condition in Eq. (2). For example, energy deposits observed in [62], correspond to |q|R A sampling the interval (1, 2.7) and the elastic cross-section should be suppressed. At higher energies, but still in the regime where the nucleus conserves its integrity, the elastic cross-section vanishes and the neutrino-nucleus interaction probability must be determined by inelastic interactions. In general, the corresponding crosssection should be given by a sum of elastic and inelastic crosssections, similar to the theory of the scattering of X-rays [65] and electrons [66] off an atom, and of slow neutrons off of matter constituents [67]. What should one expect about the "coherency" in inelastic processes? If this terminology is understood literally as the equality of phases of neutrino-nucleon scattering amplitudes, then one would conclude that inelastic processes should also be coherent, as in elastic processes, because there is no reason why these phases should be different. Should one then expect a quadratic dependence of the inelastic crosssection in terms of the number of nucleons, similar to Eq. (1)? The corresponding literature, to best of our knowledge, lacks an appropriate theory for neutrino-nucleus interactions that could address these questions. This paper attempts to provide a theoretical framework accounting for elastic and inelastic neutrino-nucleus scattering of the process based on calculations from first principles. In Eq. (3) the possibility that the internal quantum state of a nucleus can be modified after an interaction is labeled by the ( * ) superscript. We show in this work that the cross-section of the neutrinonucleus elastic process is, indeed, quadratically dependent on the number of nucleons, while that for inelastic scattering exhibits a linear dependence. Elastic and inelastic crosssections also possess a distinct dependence on q: the former is driven by |F p/n | 2 , while the latter is governed by 1 − |F p/n | 2 . At the same time, the phases of corresponding neutrino-proton and neutrino-neutron amplitudes are all equal for protons and neutrons, respectively. This is at odds with the assumption that the difference of phases of the scattering amplitudes is responsible for loss of coherency [17,33,68]. Our arguments are discussed in what follows. The paper is split into two parts. The first part is focused only on the main points of the derivation and discusses the results obtained. The second part, containing the necessary technical details, is organized in a set of appendices. 
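Before turning to the detailed derivation, the relative size of the two contributions can be made concrete with a small numerical sketch. It evaluates the coherent combination (Z g_V^p F_p + N g_V^n F_n)², which scales quadratically with the nucleon numbers, and an incoherent weight proportional to 1 − |F|², which scales linearly, using the couplings g_V^p = 1/2 − 2 sin²θ_W, g_V^n = −1/2 and sin²θ_W = 0.23865 quoted above. A Helm parametrization is used purely as a stand-in for the averaged form factor, and equal proton and neutron form factors are assumed; neither choice comes from this paper.

```python
import numpy as np

SIN2_THETA_W = 0.23865                  # low-energy value quoted in the text
GV_P = 0.5 - 2.0 * SIN2_THETA_W        # proton vector coupling (~0.023)
GV_N = -0.5                             # neutron vector coupling

def helm_form_factor(q_MeV, A, s_fm=0.9):
    """Helm form factor used as a stand-in for F_p/F_n; q in MeV, sizes in fm.
    The radius parametrization r0 = 1.23 A^(1/3) fm is an assumption."""
    hbarc = 197.327                      # MeV fm
    r0 = 1.23 * A ** (1.0 / 3.0)
    R1 = np.sqrt(max(r0 ** 2 - 5.0 * s_fm ** 2, 0.0))
    x = q_MeV * R1 / hbarc
    if x < 1e-8:
        return 1.0
    j1 = (np.sin(x) - x * np.cos(x)) / x ** 2        # spherical Bessel j1
    return 3.0 * j1 / x * np.exp(-0.5 * (q_MeV * s_fm / hbarc) ** 2)

def coherent_weight(q_MeV, Z, N):
    """(Z g_V^p F + N g_V^n F)^2: quadratic in the nucleon numbers."""
    F = helm_form_factor(q_MeV, Z + N)
    return (Z * GV_P * F + N * GV_N * F) ** 2

def incoherent_weight(q_MeV, Z, N):
    """Rough incoherent weight (Z g_V^p^2 + N g_V^n^2)(1 - |F|^2): linear in A."""
    F = helm_form_factor(q_MeV, Z + N)
    return (Z * GV_P ** 2 + N * GV_N ** 2) * (1.0 - F ** 2)

if __name__ == "__main__":
    Z, N = 55, 78                        # 133Cs
    for q in (10.0, 30.0, 60.0, 90.0):   # momentum transfer in MeV
        print(f"q = {q:5.1f} MeV: coherent ~ {coherent_weight(q, Z, N):8.1f}, "
              f"incoherent ~ {incoherent_weight(q, Z, N):6.2f}")
```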
In particular, a conceptual derivation of a general form of the cross-section of the process in Eq. (3) is discussed in Section II. We review the paradigm of coherent scattering and suggest our concept in a simplified way in Section II A. The kinematics of elastic and inelastic scattering, and the corresponding amplitude and the cross-section are discussed in Sections II B to II E, respectively. We refer to Appendices A to C for full details of this derivation. In Appendix A we define the theoretical framework, reminding the reader of the decomposition of a quantum state in x and p bases for n−particle states, introducing notation and defining a general form of the wave-function of the nucleus. In Appendix B we compute the scattering amplitude and the cross-section. In Appendix C we summarize some details of our calculations of the scalar product of lepton and hadron currents, needed to calculate the scattering amplitude and the cross-section. In Section III we discuss in detail the derived crosssection. Coherent and incoherent regimes are discussed in Section III A. Our revision of the coherent cross-section is discussed in Section III B. In Section III C we discuss in some detail a proposal to detect transition γs from excited nuclei inherent to incoherent processes. These γs would provide both an additional background suppression and an independent observable sensitive to the form-factor of the nucleus. In Appendix D we provide an analogy with a mechanical system of two balls connected by a spring to illustrate the kinematics of coherent and incoherent scattering. The summary is drawn in Section IV. The natural units = c = 1 are used throughout the paper. Three-vectors are denoted by bold face. A four-vector a has the following components: a µ = (a 0 , a), enumerated by a Greek index µ. The Dirac spinors and γ-matrices are used in the Dirac basis and γ 5 = iγ 0 γ 1 γ 2 γ 3 . The Feynman slash notation / a = γ µ a µ is used for a scalar product of a four-vector a µ and Dirac γ µ -matrices. Quantum operators are denoted by the hat symbol, likeX for the position operator. II. ELASTIC AND INELASTIC NEUTRINO-NUCLEUS SCATTERING A. Revising the paradigm We begin this section by reminding the reader of the paradigm of coherency in neutrino-nucleus scattering [17]. Two waves are considered coherent if they have the same frequencies, wave-forms, and constant relative phase. Coherence can lead to constructive and destructive interference. A neutrino-nucleus interaction is a result of an individual neutrino scattering off of nucleons. Each such scattering off a k-th nucleon can be described by an amplitude A k . If these nucleons are assumed to have definite coordinates x k , then, due to the translation invariance, A k gets an additional factor e iqx k and the total amplitude reads These individual amplitudes are coherent if for any k the phases qx k are nearly the same. This is fulfilled if the condition in Eq. (2) is satisfied. The left panel of Fig. 1 depicts a neutrino scattering off of nucleons displaced from each other. The non-zero angle θ of the scattered neutrino leads to a loss of coherence. Left panel: Front of incoming neutrino plane-wave (solid vertical line) scatters on nucleons at fixed positions, xj and x k , respectively. Non-zero scattering angle θ develops the phase difference ∆ϕ = q(xj − x k ) of two fronts of scattered neutrino plane-waves (dashed lines) which leads to a loss of coherence. 
Right panel: Neutrino scatters off a k-th or j-th nucleon described by a wave-function exemplified here as a Gaussian profile. The outgoing neutrino wave, as for any nucleon target, is a superposition of waves e iqx k weighted by ψn(x1 . . . xA) 2 . Does this consideration of coherency remain appropriate when the assumption of the nucleon's definite position is released? In this case, the positions of nucleons are described by a multi-particle scalar wave-function ψ n/m (x 1 . . . x A ), where the n/m subscripts stand for the initial and final state of the nucleus. The amplitude in Eq. (4) could be generalized as where f k mn (q) = m|e iqX k |n is the transition matrix element of e iqX k withX k being the quantum position operator of the k-th nucleon. In particular, defining the form-factor of the nucleon bound in the nucleus, differs from the exponential factor e iqx k in two major respects. (i) f k nn (q) does not depend on the coordinate of the k-th nucleon. All position variables are integrated out in Eq. (6). (ii) f k nn (q) does not depend on the index k (ignoring for simplicity a possible difference in form-factors for protons and neutrons). This statement can be proven for both fermions and bosons by a change of integration variables, and accounting for symmetry properties of the wave-function under interchange of its arguments. Now, accounting for these properties of f k nn (q) we conclude that phases of each individual amplitude in the total amplitude in Eq. (5) are all equal and the amplitudes are coherent for any q, at variance with Eq. (4). This conclusion does not mean to say that the total amplitude is not vanishing at large q, because in this limit the form-factor f nn (q) vanishes. What governs such dependence of f nn (q)? Mathematically, the reason lies in the fast oscillation of the e iqx k factor in the integral in Eq. (6), washing out the integrand function. The physical reason is in the incoherent summation of waves belonging to the wavefunction of a single nucleon extended over the size of the nucleus. Other physics arguments are discusses in Section II C and Appendix D. One can argue that this conclusion seems to be in conflict with a wave-function corresponding to the nucleons at fixed positions, assuming that where y i are variables and x i are parameters. Then, Eq. (7) reduces to Eq. (4) in which every term has an individual phase in contrast to our statement. This antinomy appeared because of the assumption in Eq. (8) which breaks the principle of the particles identity. The latter requires that the multi-particle wave-function should be either symmetric (bosons) or antisymmetric (fermions) under exchange of its arguments. As a result it is not possible to state that the i-th particle has position x i even if it is known that all particles occupy some fixed positions. Instead, the i-th particle can be at any point among the x 1 . . . x A fixed positions. Therefore, considering Eq. (7) with an appropriately symmetrized δ-like wave-function one would identically obtain Eq. (4) for any index k in agreement with our conclusion. Consideration of this antinomy is also helpful in understanding that the very form of Eq. (4) ignores the fundamental principle of quantum physics -the indistinguishability of particles. The right panel of Fig. 1 displays a scattering picture accounting for a wave-function of the nucleons exemplified here as a Gaussian profile. The summation of waves weighted by ψ n (x 1 . . . x A ) 2 yields the scattered neutrino wave, as for any nucleon. 
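The statement that f nn (q) is washed out by the oscillating factor can be checked explicitly for the Gaussian profile used in the right panel of Fig. 1: for a single-nucleon density of width σ the form factor is exp(−q²σ²/2), close to unity for |q|σ ≪ 1 and vanishing for |q|σ ≫ 1. The sketch below compares a direct numerical integral with that analytic result; the Gaussian is the same schematic choice as in the figure, not a realistic nuclear density.

```python
import numpy as np

def gaussian_form_factor_numeric(q, sigma, half_width=20.0, npts=8001):
    """f(q) = int rho(x) e^{iqx} dx for a 1D Gaussian density of width sigma
    (the isotropic 3D case factorizes into such one-dimensional integrals)."""
    x = np.linspace(-half_width * sigma, half_width * sigma, npts)
    dx = x[1] - x[0]
    rho = np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
    return np.sum(rho * np.exp(1j * q * x)) * dx

def gaussian_form_factor_analytic(q, sigma):
    return np.exp(-0.5 * (q * sigma) ** 2)

if __name__ == "__main__":
    sigma = 1.0                          # schematic spread of the nucleon position
    for q in (0.1, 1.0, 3.0, 6.0):       # in units of 1/sigma
        fn = gaussian_form_factor_numeric(q, sigma).real
        fa = gaussian_form_factor_analytic(q, sigma)
        print(f"q*sigma = {q:3.1f}:  numeric = {fn:.6f}   analytic = {fa:.6f}")
```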
Therefore, according to our consideration, it is not appropriate to identify the diagonal terms in as due to incoherent interactions. Both diagonal and nondiagonal terms contribute equally to |A nn | 2 , and with the same dependence on q. What, then, defines the incoherent interactions? Essentially, they are defined by processes in which the quantum state of the nucleus is changed (n = m). Let us briefly highlight the main points of a derivation illustrating this statement, ignoring for a while complications due to spin, type of nucleon, possible dependence of A k mn → A 0 on the indices, etc (full details can be found in Appendix B). Assume the nucleus is initially in the n-th quantum state. If the experiment is not able to distinguish the final state of the nucleus, one should sum over all possible final states to get the observable proportional to Using Eq. (6) one can rewrite Eq. (10) as where we used the unity operator composed of nuclear states m |m m| =Î. One can define a two-particle real-valued correlation function If k = j, then G(q) = 1. For k = j, G(q) does not depend on values of k, j as can be seen using the symmetry properties of the nucleus wave-function. Combining Eqs. (6), (11) and (12) one gets where A gives the number of nucleons. The terms of |A| 2 in Eq. (13), quadratically and linearly depending on A, are shaped by factors G and 1 − G, respectively. These terms provide a smooth transition between coherent and incoherent regimes. One can observe that if the nucleus' multi-particle wave-function is constructed as a product of single-particle wave-functions, then G(q) can be represented as |F (q)| 2 , where F (q) is the single-nucleon form-factor of the nucleus. The derivation of Eq. (13) does not indicate in a transparent way what the source of the quadratic and linearly dependent terms is. We conclude this section by showing that coherent and incoherent terms are due to processes in which the nucleus remains in the same quantum state or is changed, respectively. For this purpose we rewrite Eq. (11) as The first line gives immediately which could be identified as a coherent term in Eq. (13). The second line can be presented as where the covariance of quantum operators reads The covariance terms are identically zero for a multi-particle wave-function constructed as a product of single-particle wave-functions and the second line of Eq. (14) reads Therefore, one can conclude that an elastic process (first line in Eq. (14)) yields the coherent term in Eq. (15), while inelastic processes all together (second line in Eq. (14)) yield the incoherent term in Eq. (18). One can find a certain analogy with the theory of neutrino oscillations in which the integration over an unobserved time of neutrino emission leads to an incoherent L-independent term in the oscillation probability formula (see, for example, in [69,70]). Attribution of elastic and inelastic processes as contributing to the coherent and incoherent interactions was also done in [20,[71][72][73] where the authors performed numerical calculations of the corresponding cross-sections within appropriate nuclear models. B. Kinematics of elastic and inelastic neutrino-nucleus scattering In general, one should consider the treatment of neutrinonucleus interactions using wave packets. The corresponding formalism was developed (see, for example, Ref. [69,74]) and some potentially interesting effects for elastic neutrinonucleus scattering could be envisaged and examined. 
We simplify our treatment by considering the initial and final states as having definite momenta. Let us denote by k = (E_ν, k) and k′ = (E′_ν, k′) the four-momenta of the incoming and outgoing neutrino, and by P_n and P_m the four-momenta of the initial and final state nuclei, respectively. The total energy P⁰_n of a nucleus state |P_n⟩ reads E_P + ε_n, where ε_n is an internal energy of the nucleus state. In the laboratory frame, the energy E′_ν of the outgoing neutrino depends on the angle θ between k and k′, where ∆ε_mn is the difference of the energies of the |m⟩ and |n⟩ states. The absolute values of the four-momentum transfer vector, q = (q_0, q), read where T_A is the kinetic energy of the scattered nucleus, calculated below. In the neutrino-nucleus center-of-mass frame, q² reads in terms of the energy of the neutrino scattering off a nucleus in the state |n⟩, with s_{A,n} = (k + P_n)² and m_{A,n} = m_A + ε_n. The minimum and maximum values of q² correspond to sin²(θ/2) = 1 and 0, respectively. For heavy nuclei, with ∆ε_mn of the order of hundreds of keV and experimentally detectable signals produced by a release of kinetic energy of the scattered nucleus, q² can be approximated as Assuming the initial nucleus is at rest, the kinetic energy of its recoil reads Using Eq. (19) and assuming m_A ≫ E_ν, the kinetic energy T_A of the scattered nucleus becomes Here we examine the kinetic energy for a few cases of interest. (i) Forward scattering of the neutrino corresponds to cos θ = 1 and yields the minimal kinetic energy of the nucleus, which is zero for m = n because neither energy nor three-momentum is transferred in this case. For m ≠ n, the energy q_0 = ∆ε_mn is transferred to the nucleus, as well as the three-momentum q, equal in magnitude to q_0 for forward scattering, thus yielding (ii) Backward scattering corresponds to cos θ = −1 and yields the maximal kinetic energy of the nucleus, which can be understood as follows. For m = n no energy is transferred to the nuclear structure, while the transferred three-momentum is equal to double the initial neutrino energy (backward scattering). Thus, For m ≠ n, the energy ∆ε_mn transferred to the nucleus must be subtracted from the total transferred three-momentum 2E_ν, thus leading to Eq. (29). (iii) In general, the kinetic energy of the scattered nucleus is smaller if the nucleus changes its quantum state (m ≠ n) than when m = n. Effectively, this can be described by a decrease of the neutrino energy by ∆ε_mn, which could be significant when E_ν and ∆ε_mn are comparable. For heavy nuclei, like 133Cs or 127I, used by the COHERENT experiment [62], the first excitation energies are of the order of 100 keV, which are small corrections compared to the tens-of-MeV neutrino energies produced by the Spallation Neutron Source. Therefore, the kinetic energy of the recoil nucleus is of the same order of magnitude for both elastic and inelastic scattering. In Fig. 2 we show the expected kinetic energy of the recoil nucleus 133Cs as a function of the neutrino energy, illustrating the impact of ∆ε_mn and cos θ. A strong dependence on the neutrino scattering angle θ is evident from the upper panel of Fig. 2. The effect of non-zero values of ∆ε_mn, displayed in the bottom panel of Fig. 2, is also present, but it is significantly smaller than the angular dependence. The reason for the weaker dependence on non-zero values of ∆ε_mn is the partial compensation by the ∆ε²_mn/2 term in the numerator of Eq. (27) when E_ν is comparable to ∆ε_mn, and the irrelevance of ∆ε_mn when E_ν ≫ ∆ε_mn.
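The recoil-energy hierarchy described in items (i)-(iii) is easy to check numerically. The snippet below assumes that Eq. (27) reduces, at leading order in 1/m_A, to T_A ≈ [E_ν(E_ν − ∆ε_mn)(1 − cos θ) + ∆ε²_mn/2]/m_A; this is our reconstruction from the limits quoted in the text, not a formula copied from the paper.

```python
m_A  = 123.8e6        # mass of 133Cs in keV (about 132.9 u), rough value
E_nu = 30e3           # incoming neutrino energy in keV (30 MeV, SNS-like)

def T_A(cos_theta, delta_eps=0.0):
    """Recoil kinetic energy in keV, leading order in 1/m_A (assumed form of Eq. (27))."""
    return (E_nu * (E_nu - delta_eps) * (1.0 - cos_theta)
            + 0.5 * delta_eps**2) / m_A

for de in (0.0, 100.0):                   # elastic vs. a 100 keV excitation
    print(f"delta_eps = {de:5.1f} keV :"
          f"  T_A(forward) = {T_A(+1.0, de):.2e} keV,"
          f"  T_A(backward) = {T_A(-1.0, de):6.2f} keV")
# Forward elastic recoil vanishes exactly; forward inelastic recoil is only the
# tiny delta_eps^2/(2 m_A); backward recoil is reduced mildly by delta_eps, so
# elastic and inelastic recoils are indeed of the same order of magnitude.
```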
C. Kinematic paradox

As we show in what follows, the coherent enhancement of the interaction probability corresponds to neutrino-nucleus scattering in which the nucleus remains in the same quantum state. This intuitively evident statement might seem to lead to the following kinematic paradox. Both the nucleon and the nucleus acquire the same three-momentum q. Assuming both of them are initially at rest, one arrives at a kinetic energy T_N = q²/(2m_N) of the nucleon right after the interaction, which is a factor m_A/m_N larger than the kinetic energy T_A = q²/(2m_A) of the nucleus. Since the nucleus remains in the same quantum state with the same internal energy, T_N and T_A must be equal to each other, and to the difference E_ν − E′_ν. This paradox appears because some of the assumptions are incorrect. In particular, while the assumption that the nucleon is at rest seems quite reasonable, given that the average nucleon momentum p is much smaller than its mass m_N, it is precisely this assumption that leads to the paradox. Let us require that T_N and T_A be equal to each other. This requirement cannot be satisfied for an arbitrary nucleon momentum p. One can find a compatible p using energy conservation. Searching for a solution in which p is proportional to q, p = αq, one finds the nucleon momentum given in Eq. (31). Therefore, energy-momentum conservation and the requirement that the nucleus does not change its state after the interaction provide a qualitative picture of the coherent neutrino-nucleus scattering process, displayed symbolically in Fig. 3. (Fig. 3 caption: A qualitative picture of a coherent neutrino-nucleus interaction. A neutrino interacts with a nucleon initially having a particular momentum p = αq, aligned along q and given by Eq. (31). Since the nucleus is initially at rest, all the nucleons except the target one carry the balancing momentum −p, shown by the dashed line. The final momentum p + q = (1 + α)q of the target nucleon is also aligned along q. In the figure, the angle between the p and p + q vectors differs from π for visual clarity. After the interaction, the increased energy of the target nucleon and the acquired three-momentum q are transferred to the entire nucleus, leaving the internal quantum state of the latter unchanged. A Z-boson having a wavelength comparable to the size of the nucleus produces a coherent enhancement of the scattering amplitudes.) Here we discuss a few features of this interesting observation. (i) Not every nucleon in the nucleus can interact with a neutrino in such a way that after the interaction the nucleus remains in the same state. Only those nucleons which happen to have a momentum compatible with Eq. (31) are appropriate targets. (ii) The wave-function of the nucleons provides a distribution of nucleon momenta. Large nucleon momenta are, in general, less probable than smaller ones. This explains qualitatively why the enhancement factor in Eq. (1) vanishes at large q, contrary to the case of small q, for which the chance to find a nucleon with an appropriate momentum is relatively large. Mathematically, this suppression is given by |F(q)|². This consideration can be extended to the case of incoherent neutrino-nucleus scattering, when the nucleus changes its intrinsic quantum state |n⟩ → |m⟩ with n ≠ m. Eq. (30) must then be generalized to account for the non-zero differences of energy levels ∆ε_mn, where E_p = √(m_N² + p²). Eq. (32) does not use a non-relativistic approximation because for small values of q its solution p can be comparable to the nucleon mass.
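Both balances can be sketched in a few lines of Python. The code below is our reconstruction rather than the paper's: it assumes Eq. (30) states that the change of the struck nucleon's kinetic energy equals the nuclear recoil energy q²/(2m_A), and that Eq. (32) is the relativistic generalization E_{p+q} − E_p = ∆ε_mn + q²/(2m_A) with p taken along q. Under those assumptions it recovers α ≈ −1/2 for the elastic case and shows how the required nucleon momentum behaves for inelastic transitions.

```python
import numpy as np
import sympy as sp
from scipy.optimize import brentq

# Elastic case: assumed form of Eq. (30), solved for p = alpha*q.
alpha = sp.symbols('alpha', real=True)
q, mN, mA = sp.symbols('q m_N m_A', positive=True)
balance_elastic = sp.Eq(((alpha*q + q)**2 - (alpha*q)**2) / (2*mN), q**2 / (2*mA))
alpha_sol = sp.solve(balance_elastic, alpha)[0]
print(sp.simplify(alpha_sol))             # equals (m_N/m_A - 1)/2
print(sp.limit(alpha_sol, mA, sp.oo))     # -1/2: p anti-parallel to q, |p| ~ q/2

# Inelastic case: assumed relativistic balance, p purely longitudinal (p_T = 0).
m_N, m_A_val, d_eps = 938.27e3, 123.8e6, 100.0    # keV: nucleon, ~133Cs, excitation

def balance_inelastic(pL, qv):
    E_p  = np.sqrt(m_N**2 + pL**2)
    E_pq = np.sqrt(m_N**2 + (pL + qv)**2)
    return (E_pq - E_p) - (d_eps + qv**2 / (2.0 * m_A_val))

q_min = d_eps + d_eps**2 / (2.0 * m_A_val)
for mult in (1.001, 2, 20, 100, 200):
    qv = mult * q_min
    pL = brentq(balance_inelastic, -1e6, 1e9, args=(qv,))
    print(f"q = {qv:9.1f} keV   p_L = {pL:14.1f} keV")
# Near q_min the required longitudinal momentum is enormous (hence the strong
# suppression of incoherent scattering at small q); it decreases with growing q
# and eventually crosses zero, where the suppression is minimal.
```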
Splitting p into a sum of components, longitudinal p_L and transverse p_T with respect to q, one can find an exact solution of Eq. (32), where In Fig. 4 we display the solution in Eq. (33) together with the approximation in Eq. (31). In this case β from Eq. (34) is vanishingly small, and the solution coincides with Eq. (31) for p_T = 0. One can observe that the longitudinal momentum of the nucleon in coherent neutrino-nucleus scattering is always aligned opposite to the transferred three-momentum q. For ∆ε_mn ≠ 0 the solution of Eq. (32) is drastically different at small q. Here we analyze different regions of the three-momentum transfer: (i) At q = 0, Eq. (32) has no solution, which simply means that at zero energy-momentum transfer an excitation of the nucleus is impossible. (ii) At the smallest |q|, approaching its minimum possible value |q_min| = ∆ε_mn + ∆ε²_mn/(2m_A), the solution of Eq. (33) diverges, p_L → ∞, whence a non-relativistic approximation in Eq. (32) is not appropriate at small q. The chance to find a nucleon in the nucleus with such a momentum is vanishingly small. Therefore, for small |q| the incoherent scattering is significantly suppressed, in contrast to the coherent interaction. (iii) With increasing |q| there is a good chance to find a transition |n⟩ → |m⟩ with ∆ε_mn yielding p ≈ 0, for which the suppression is minimal. Again, this dependence is exactly opposite to that of the coherent scattering. (iv) These kinematic considerations give a qualitative understanding, yet they do not provide a complete picture of the dependence of the neutrino-nucleus scattering upon q. In |n⟩ → |m⟩ (n ≠ m) transitions, the matrix element ⟨m|e^{iqX̂}|n⟩, where X̂ is the position operator, determines the actual functional dependence. A quantitative mathematical framework is developed in Appendix B 1. As a useful and simple illustration of transitions in which the internal state is changed or unchanged, in Appendix D we consider a mechanical analogy: a system of two balls of equal mass m connected by a massless spring of non-zero rigidity.

D. Scattering amplitude

Our calculation follows a microscopic description of neutrino-nucleus scattering as a result of the neutrino-nucleon interaction. We consider a Fock state |P_n⟩ of a nucleus with four-momentum P_n in the n-th quantum state as a superposition of free-nucleon states weighted with their bound-state wave-function. The latter is explicitly factorized into a product of the wave-functions describing the internal structure of the nucleus and the motion of its center-of-mass. The internal wave-function depends on A − 1 three-momenta because one three-momentum variable is used to describe the motion of the nucleus. It is convenient to refer to the Fock state |n⟩, describing the nucleus in the n-th quantum state at rest. At zero nucleus momentum, both the |P_n⟩ and |n⟩ states describe the same quantum state but still differ by their normalizations, given in Eqs. (A22) and (A23). The details of this consideration are summarized in Appendix A. A priori, one does not know the initial, |n⟩, and final, |m⟩, internal states of the nucleus. Therefore, all possible transitions must be considered. The matrix element iM_mn, corresponding to the process in Eq. (3) and keeping only the leading-order terms in the Fermi constant G_F, reads where m_N and m_A are the masses of the nucleon and nucleus, respectively, and C_{mn,1} is a function of the order of unity defined in Eq. (B19). Details of the derivation can be found in Appendix B 1.
Functions f^k_{mn}(q) ≡ ⟨m|e^{iqX̂_k}|n⟩, where X̂_k stands for the position operator of the k-th nucleon, are transition form-factors for m ≠ n and n-state form-factors for m = n, defined in Eq. (B14). (l, h^k_{sr}) is the scalar product of the lepton (l) and the k-th nucleon's (h^k_{sr}) neutral weak currents; for their definitions refer to Eqs. (B4) and (B25), respectively. λ_mn(s, r) is a spin transition amplitude between the |n⟩ and |m⟩ states of the nucleus. It depends on the initial, r, and final, s, doubled spin projections of the scattered nucleon on the given axis. For a definition, refer to Eqs. (B11), (B12) and (B17). The amplitude in Eq. (37) is a sum of neutrino-nucleon amplitudes, each proportional to the scalar product of the lepton and nucleon currents and weighted by two factors, each not exceeding unity. Given the definition of f^k_{mn} in Eq. (B14) and the symmetry properties of the nucleus wave-function, one can conclude that f^k_{mn} does not depend on the number k, but only on the type of nucleon k points to. Therefore, all amplitudes in Eq. (37) have the same phase and thus are "coherent" in the literal sense of this terminology. One can see that f^k_{mn}(q) = ⟨m|e^{iqX̂_k}|n⟩ is a generalization of the e^{iqx_k} quantum-mechanical factor used by Freedman in [1]. See Section II A for a discussion of an important difference between these two factors. The factor f^k_{mn} is key to understanding the mechanisms behind the quadratic and linear dependence of the observable cross-section on the number of nucleons. Let us examine the form-factor f^k_{mn} for elastic and inelastic scattering. (i) In the case of elastic scattering, one expects a quadratic dependence of the cross-section on the number of nucleons. For q → ∞, the matrix element vanishes, lim_{q→∞} ⟨n|e^{iqX̂_k}|n⟩ → 0 (39), and the elastic cross-section must also vanish. Therefore, elastic scattering has the properties of a "coherent" process in the terminology of Freedman. (ii) In the case of inelastic scattering, the form-factor vanishes at q = 0 according to the normalization ⟨m|n⟩ = 0 for n ≠ m (see Eq. (A23)). For a non-zero q the matrix element ⟨m|e^{iqX̂_k}|n⟩ ≠ 0 in general, and as we show in Section II E the cross-section is a linear function of the number of nucleons once all possible initial and final states are accounted for. Since this result can be obtained by summing up the absolute values of the amplitudes squared, one can refer to this case as incoherent scattering.

E. Cross-section

The corresponding differential cross-section reads where C_{2,mn} is a function of the order of unity given in Eq. (B23). As we show in Appendix C (see Eq. (C33)), the matrix element squared, |iM_mn|², is independent of the azimuthal angle ϕ; therefore we have integrated over this variable in Eq. (41). An observable cross-section can be obtained by averaging over all possible initial states |n⟩ and summing over all possible final states |m⟩, where ω_n is a statistical weight to find an initial nucleus in a quantum state |n⟩ at a given ambient temperature. In what follows, we do not need an explicit form of ω_n, normalized as Σ_n ω_n = 1. The matrix element squared, |iM_mn|², contains a summation Σ_{k,j} over two indices enumerating the scattered nucleons. In Appendix B 2 it is shown that the terms in Eq. (42) corresponding to elastic neutrino-nucleus scattering (n = m) keep both indices, k and j, giving rise to a quadratic dependence of the cross-section on the number of nucleons. In contrast, the terms in Eq.
(42), corresponding to inelastic neutrino-nucleus scattering ( n =m ), are to a good accuracy proportional to δ kj , which automatically yields a linear dependence on the cross-section as a function of the number of nucleons. Therefore, the observable cross-section can be written as where |λ p/n sr | 2 and g i/c are determined factorizing, respectively |λ mn sr | 2 given by Eq. (B18) and g mn defined by Eq. (B27) out of the double sum nm in Eq. (42). g i/c are kinematic functions of the order of unity. |F p/n | 2 are proton and neutron form-factors of the nucleus defined by Eq. (B14). The second and third lines of Eq. (44) correspond to inelastic and elastic neutrino-nucleus scattering, respectively. Their dependencies on the number of nucleons are linear and quadratic, respectively. Using the terminology of Freedman, one would refer to these terms as incoherent and coherent, correspondingly. This is the most general result of this work if terms with covariances defined in Eqs. (B31) and (B35) are neglected. The summation of amplitudes due to the scattering off of various targets is evident in the third line of Eq. (44). Each type of nucleon is weighted according to the appropriate averaged form-factor F p/n (q). Note, that the nucleus does not change its spin eigenstate in the coherent term. This is encoded in the summation r (l, h p/n rr ). The incoherent term depends on |λ p/n sr | 2 . The latter is a probability for a nucleon to change spin index r to s in transitions |n → |m , averaged over n and summed up over m. While one needs a model for the nucleus wave-functions to calculate |λ p/n sr | 2 , we approximate these coefficients by unity |λ p/n sr | 2 → 1, which implies that for any r, any value of s is possible with the same probability. Therefore, we can complete our calculations of the cross-section. The scalar products (l, h p/n ) are calculated in Appendix C using helicity and σ 3 bases. The latter corresponds to the basis with spin projection on a fixed axis chosen to be along the incoming neutrino momentum. While the results do not depend on the basis chosen, as demonstrated in Eq. (C39), it is more straightforward to use the helicity basis with Eq. (C12) and σ 3 basis with Eq. (C33) to calculate the incoherent and coherent cross-sections, respectively. As follows from Eq. (44), the observable neutrino-nucleus cross-section can be presented as a sum of incoherent and coherent cross-sections The incoherent cross-section reads If the target nuclei are unpolarized, then terms proportional to ∆A f in Eq. (46) vanish after averaging. Therefore, for an unpolarized target the incoherent cross-section reads The coherent cross-section reads where where It is straightforward to perform the spin averaging in Eq. (48), removing the terms linear in ∆A f . The final formula of the spin-averaged cross-section reads which is a well known result [1-5, 7-10, 20, 33, 43]. Corrections to this formula are discussed in Section III B. III. DISCUSSION In what follows we discuss in detail the calculated crosssection. It is convenient to refer to the cross-section integrated over the kinetic energy of the recoil nucleus This integral depends on the energy threshold T min A , unique for each detector. As an illustration we consider three experimental setups. We refer to the state-of-the-art energy thresholds of considered experimental setups, briefly described in what follows. (i) A germanium detector exposed to ν e flux from a nuclear reactor. 
Among all natural isotopes we select only one stable nucleus, 74 Ge, for our illustration. The expected energy threshold for electrons of germanium bolometers is 200 eV [75], which, accounting for the quenching in germanium detectors [76], roughly corresponds to 1 keV of the 74 Ge recoil kinetic energy. We refer to the νGEN experiment at the Kalinin Nuclear Power Plant [51] as an example. For illustration we calculate the differential cross-sections for two ν e energies, 5 MeV and 7 MeV, and total cross-section for E ν ∈ (1, 20) MeV. As an estimate for an excitation energy of the 74 Ge nucleus we take ∆ε = 900 keV. (ii) A CsI scintillator exposed to the neutrinos from the Spallation Neutron Source [62]. The differential and total cross-sections are calculated for E ν = 30 MeV and 50 MeV and for E ν ∈ (1, 150) MeV, respectively. We assumed ∆ε = 100 keV for the 133 Cs nucleus. The energy threshold was set to 5 keV of the 133 Cs recoil kinetic energy. (iii) A liquid argon detector with an unprecedented lowenergy threshold of 0.6 keV for the 40 Ar nucleus achieved by the DarkSide Collaboration [77]. The differential and total cross-sections are calculated for E ν = 15 MeV and for E ν ∈ (1, 50) MeV, respectively. To make a prediction for an experiment we use (i) two form-factors F p/n (q) for protons and neutrons, respectively, and (ii) data regarding the energy levels of the nucleus under consideration. We considered two models of the form-factors: symmetrized Fermi-distribution [78] and Helm form-factor [79]. Both models of the form-factors give very similar results numerically if the parameters of the models are selected to reproduce the same proton and neutron RMS radii. In what follows we present the results obtained assuming the same RMS radii for protons and neutrons, and using the Helm formfactors for definiteness. In A. Coherent and incoherent The most general feature of Eq. (45) consists of smooth transitions between coherent and incoherent regimes. Both terms of the cross-section are governed by the same F p/n (q) form-factors defined in Eq. (B28). In the limit q → 0, F p/n (q) → 1, and the contribution of the incoherent cross-section vanishes, while the coherent term totally dominates. In the opposite limit of large q, when F p/n (q) → 0, the coherent cross-section vanishes and the incoherent term dominates. In general, both coherent and incoherent scatterings contribute. In Fig. 6 the differential coherent and incoherent crosssections are displayed for three experimental setups discussed above. (i) At T A → 0 the coherent cross-section totally dominates since the incoherent contribution vanishes. For a given nucleus, the coherent differential cross-section in this limit does not depend on neutrino energy up to small corrections, in agreement with Eq. (48). (ii) At T A → T max A the coherent cross-section vanishes because of the factor 1−T A /T max A , while the incoherent crosssection rises. One might observe that the maximum kinetic energy of the nucleus experienced in an incoherent scattering is systematically smaller than that for the coherent interaction. This is because some of the neutrino energy is used for the excitation of the nucleus, as given by Eq. (29). (iii) For small neutrino energies the coherent cross-section dominates over the incoherent contribution for any T A . For larger E ν there is a value of T A above which the incoherent cross-section dominates over the coherent, as can be seen in the middle panel of Fig. 6 for E ν = 50 MeV. 
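For readers who want to reproduce the qualitative behaviour, the snippet below implements a Helm form-factor and the commonly quoted approximate coherent recoil spectrum (an Eq. (54)-type expression, cf. [64]). The parametrization (c = 1.23 A^{1/3} - 0.6 fm, s = 0.9 fm, a = 0.52 fm) and the equal proton and neutron radii are illustrative assumptions, not the values used in the paper's fits.

```python
import numpy as np

hbar_c = 197.327          # MeV*fm
G_F    = 1.16637e-11      # Fermi constant in MeV^-2
sin2w  = 0.2312           # weak mixing angle

def helm_form_factor(q_MeV, A):
    """Helm form-factor F(q) = 3 j1(q R0)/(q R0) * exp(-(q s)^2 / 2)."""
    s, a = 0.9, 0.52                                  # fm
    c = 1.23 * A**(1.0 / 3.0) - 0.6                   # fm
    R0 = np.sqrt(c**2 + (7.0 / 3.0) * (np.pi * a)**2 - 5.0 * s**2)
    x = q_MeV * R0 / hbar_c
    j1 = (np.sin(x) - x * np.cos(x)) / x**2           # spherical Bessel function j1
    return 3.0 * j1 / x * np.exp(-0.5 * (q_MeV * s / hbar_c)**2)

def dsigma_dT_coherent(E_nu, T, Z, N, m_A):
    """Commonly used approximate coherent dsigma/dT_A in MeV^-3 (Eq. (54)-type)."""
    Q_w = N - (1.0 - 4.0 * sin2w) * Z                 # weak nuclear charge
    q = np.sqrt(2.0 * m_A * T)                        # three-momentum transfer
    F = helm_form_factor(q, Z + N)
    return (G_F**2 / (4.0 * np.pi)) * Q_w**2 * m_A \
        * (1.0 - m_A * T / (2.0 * E_nu**2)) * F**2

# 133Cs, E_nu = 30 MeV, T_A = 5 keV (threshold-like recoil); all inputs in MeV.
m_Cs = 132.905 * 931.494
val = dsigma_dT_coherent(30.0, 5e-3, 55, 78, m_Cs)
print(f"dsigma/dT_A ~ {val * hbar_c**2 * 1e-26:.2e} cm^2/MeV")
```

The revised coherent cross-section of Eq. (48) differs from this baseline only at the few-percent level, as discussed in Section III B, so the snippet is meant purely as an order-of-magnitude reference.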
In particular, for E ν = 50 MeV with a 133 Cs nucleus this occurs at T A 33 keV. In Fig. 7 the corresponding integral cross-sections are displayed. (i) At low E ν the coherent integral cross-section is larger than the incoherent by orders of magnitude because the factors 1 − |F p/n (q)| 2 suppress the latter at small q. With increasing neutrino energy their interrelation changes to the exact opposite, the incoherent cross-section dominating above a certain E ν . As an example, for the 133 Cs nucleus this occurs at E ν 140 (120) MeV for T min A = 0 (5) keV. (ii) The experimental detection threshold reduces the integrated coherent cross-section and, to a lesser extent the incoherent, because the threshold removes the part of the differential cross-section which is the largest for the former and vanishing for the latter, as can be seen in Fig. 6. To quantify this statement the ratio of integrals given by Eq. (53), σ incoh /σ coh , is displayed in Fig. 8 B. Revising the coherent cross-section It is instructive to compare the coherent cross-section in Eqs. (48) and (51) to that used in the literature [64] dσ 0 The second approximate equality of Eq. (54) appeared as a result of a quite accurate approximation y = T A /E ν → 0. The last line is a result of further approximations: (i) g p V → 0 and (ii) spin-less nucleus. Let us briefly review Eq. (54). After a number of approximations, the third line of Eq. (54) is identical to an approximation of the coherent cross-section in Eq. (52), calculated in this work. However, conceptually, a derivation of Eq. (54) is at odds with the coherency. Indeed, as one can observe, the first line of Eq. (54) corresponds to a calculation of incoherent cross-section (compare to Eq. (47)), where the nucleus changes its spin eigenstate. As we advocate here, the coherent scattering corresponds to interactions of neutrino with the nucleon in which the latter remains in the same quantum state. How then are Eq. (54) and Eq. (46) consistent with a good accuracy? The reason is in the non-relativistic approximation. Two terms of the matrix element containing (l, h η +− ) and (l, h η −+ ) with a spin-flip should not contribute to the coherent cross-section (on the opposite, they do exist in Eq. (54)). In the non-relativistic approximation (l, h η +− ) vanishes, while (l, h η −+ ) is proportional to g A and vanishes for a spin-less nucleus, as can be seen in Eq. (C35). The last statement is accurate if the nucleons in the nucleus are at rest. To illustrate the effects of a moving target nucleon and constant spin of the nucleus in elastic neutrino-nucleus scattering, a ratio of differential coherent cross-section dσ/dT A in Eq. (48) to that in Eq. (54) is displayed in Fig. 9 for a 133 Cs nucleus, assuming three fixed values of neutrino energy. The cross-sections coincide at T A = 0 and show a difference at some percent with increasing T A . The maximal difference occurring at the end of the nucleus kinetic energy spectra, rises with neutrino energy from about 5% at E ν = 30 MeV to about 20% at E ν = 100 MeV. C. Proposal to observe higher energy excitation γs due to incoherent scattering After an interaction the nucleus may remain in the same quantum state, or the internal state of the nucleus could be changed. We refer to these cases as elastic and inelastic interactions. Experimentally, the scattered nucleus, being in the same or an excited state, are practically indistinguishable if one measures only the kinetic energy of the nucleus. 
Inelastic interactions must be accompanied by the emission of gammas corresponding to the difference of energy levels of the nucleus. The time scale of these emissions is in the range of picoseconds to nanoseconds for the 133 Cs nucleus, taken as an example. The energies of the γs are of the order of some hundred keV for the same nucleus. These γs should produce a very detectable signal in the scintillator correlated in time with the beam pulses for an accelerator based experiment. The rate of these γs is determined by the ratio N inc /N coh , where (55) in which ε(T A ) is the detection efficiency. Fig. 8 suggests that the number of γ events due to incoherent interactions should be detectable. It is remarkable, that a similar proposal was made back to 1975 in [73]. IV. SUMMARY A theoretical framework for neutrino-nucleus scattering νA → νA, in which the nucleus conserves its integrity, is developed. The main result of this work consists in the demonstration that coherent and incoherent regimes appear due to elastic and inelastic processes, when all possible initial and final states are taken into account. This conclusion is in agreement with corresponding theories of scattering of Xrays, electrons of an atom and of slow neutrons off matter constituents. The coherent and incoherent cross-sections were shown to be driven by |F p/n | 2 and (1 − |F p/n | 2 ) factors, thus providing a smooth transition between these regimes. We also revised a formula for the coherent cross-section. The obtained formula has some percent level corrections when compared to that known in the literature (see, for example in [64]). They differ at most at the end of kinetic energy spectrum of the target nucleus, reaching ≈ 5% at E ν = 30 MeV (≈ 20% at E ν = 100 MeV). There are two main sources for this difference. (i) Our consideration treats only those matrix elements which correspond to the same initial and final spin states of the nucleus in contrast to the conventional derivation which considers also the spin-flipped matrix elements. (ii) The target nucleon is not assumed to be at rest which develops corrections to the vector and axial form-factors of the nucleus. Three experimental setups considered in this work illustrate our results. In particular, for 133 Cs and neutrino energies of 30 − 50 MeV the incoherent cross-section is about 10-20% of the coherent contribution if experimental detection threshold is accounted for. The incoherent processes being a relatively small "background" to the coherent interactions provide an important clue if γs released by excited nucleus are detected. Detection of both signals due to nuclear recoil and excitations γs provides a more sensitive instrument in studies of nuclear structure and possible signs of new physics. An interested reader could checkout and run a Jupyter Notebook where equations from this manuscript are documented in terms of a python code [80]. In this section we shortly summarize some mathematical aspects of the representations of abstract quantum states for both single fermion and n-fermions. Single-particle states We begin by reminding the reader about the single-particle basis. A fermionic state with mass m, definite threemomentum p, energy E p = p 2 + m 2 and spin projection s is defined according to with Lorentz-invariant normalization A fermionic state with definite x can be defined as whereψ(x) is the free field operator in the Schroedinger representation. The state in Eq. (A3) is a Dirac spinor. 
These states are normalized as follows whereÎ 4×4 is the 4 × 4 unity matrix in the spinor space. The single-particle unity operators read The scalar product of states given by Eqs. (A1) and (A3) x|p, s = u(p, s)e ipx (A6) allows for the representation of |p, s and |x states via linear superpositions of each other The second line of Eq. (A7) allows us to see that x|, given by Eq. (A3), differs from a non-relativistic spin independent state where p| is defined similarly to Eq. (A1) but for a spin-less particle. n−particle states The unity operators defined in Eq. (A5), generalized for nparticle states, readŝ The symbols {p} and {x} are n-tuples, {p} = (p 1 . . . p n ) and {x} = (x 1 . . . x n ) are used for compaction here and in what follows. The bra-vector {x}| is given as where m i enumerates the spinor's rows of the fields ψ(x i ). The wave function of a nucleus The Fock state |P n of a nucleus, with four-momentum P n being in the n-th quantum state, can be written as a superposition of free nucleon states using their bound state wave-function in the momentum representation ψ n The wave-function ψ n ({p}) describes both the internal structure of the nucleus and its movement as a whole with three-momentum p = A i=1 p i and spin projection s. Since the quantum state of A interacting nucleons cannot depend on the motion of their center-of-mass, the wave-function ψ n ({p}) can be factorized into a product of the wavefunction ψ n ({p }), describing the internal structure of the nucleus in its center-of-mass (the corresponding momenta are labeled by the upper index ), and the wave-function Φ(p), describing the motion of the nucleus with momentum p and spin projection s, both encoded in the argument p of Φ ψ n ({p}) = ψ n ({p })Φ n (p). (A19) The factorization in Eq. (A19) makes sense for A > 1. The three-momentum of the i-th nucleon in the center-ofmass frame is given by p i . The i-th nucleon's momentum p i in the laboratory system is given by The state in Eq. (A18) can now be rewritten as We take the wave-function Φ(p) of the form which corresponds to a nucleus with a definite momentum P and energy P 0 n = E p + ε n , including excitation energy ε n . Then, the state in Eq. (A21) is normalized similarly to Eq. (A2) if the following normalization of the internal nucleus state |n is adopted (A23) The delta-function δ 3 A i=1 p i reduces the number of independent momenta in Eq. (A23) by one. The states |n and |P n describe the same realm at P = 0 yet still differ by normalization. We define the former as should be accurate enough for the scattering of a low energy neutrino off of a nucleus. In (B1) and are weak currents of neutrino and nucleons, respectively, written in the normal ordering represented by colons. The quantum fields ψ ν (x) and ψ n,p (x) correspond to the neutrino and nucleons, respectively. The S-matrix amplitude P m , k |S|P n , k , to the first order of G F , reads where and H µ mn (P n , P m ) = P m |H µ (0)|P n . Using (A2), (B2) and the anti-symmetric nature of the wavefunction, the hadronic current in (B5) can be found In the SM these couplings read The arguments of ψ * m ({p (k) }) and ψ n ({p }) are n-tuples defined as {p } = (p 1 . . . p A ), where its i-th element, p i = (p i , r i ) and {p (k) }, is identical to {p } except for its k-th element, which reads as (p k + q, s k ). The three momentum p k , used in the argument of the Dirac spinor u, is the k-th nucleon momentum in the laboratory frame given by (A20). 
The hadronic current, corresponding to neutrino-nucleus scattering, is a sum of currents u(p k + q, s k )O µ k u(p k , r k ) corresponding to the scattering of a neutrino off of the k-th nucleon with momentum in the laboratory p k and spin projection r k . The probability amplitude to find a nucleon in the |P n state of the nucleus with these quantum numbers is just the wave-function ψ n ({p }) in the momentum representation, which depends on momenta in the nucleus center-of-mass frame. The outgoing nucleon has a three-momentum in the laboratory of p k + q, and, in general, an arbitrary spin projection s k . The corresponding probability amplitude to find a nucleon with exactly these quantum numbers is given similarly by the wave-function ψ * m ({p (k) }). The denominator 2E p k 2E p k +q depends on the energies of the initial and final nucleons in the laboratory frame, and automatically accounts for the normalization of Dirac spinors u † (p, s)u(p, s) = 2E p . The equal momenta of the initial and final state spectator nucleons are integrated out with the weight given by a product of initial and final state wave-functions. To proceed further, let us make the following simplifications. The current u(p k + q, s k )O µ k u(p k , r k ) could be factorized out from the integral at an effective momentum p k which we approximate to be given by a solution of Eq. (32). Also, we assume that the spin and momenta structures of ψ n could be factorized into a product ψ n and χ n which are functions of two n-tuples {p } = (p 1 . . . p A ) and {r} = (r 1 . . . r A ), respectively. The spin-functions can be normalized as follows Thus, (B7) can be rewritten as where {r (k) } is an n-tuple identical to {r}, except its k-th element, which is equal to s k . A further insight could be gained by observing that one can rewrite the multidimensional integral in (B13) as the matrix element whereX is the three-coordinate operator of the k-th nucleon. Eq. (B14) provides a clue in understanding the appearance of coherent and incoherent regimes in neutrino-nucleus elastic and inelastic scattering. A derivation of Eq. (B14) is facilitated if the following equality is observed (B16) We introduce the following notation for economy of space: In general, the scattered nucleus may have a final spin state different with respect to the initial. We assume in what follows that initial and final states of the nucleus are eigenstates of the spin operator with quantum numbers (J, J 3 ). One might observe that if m = n, then the amplitude λ mn (s, r) = δ sr for appropriate normalization of the spin wave-function (see the normalization used in Eq. (B12)). We denote for m = n the corresponding amplitude as λ mn sr . Therefore, for any m, n λ mn (s, r) = δ mn δ sr + (1 − δ mn )λ mn sr . The multiplier in Eq. (B16) can be rewritten, factoring out the leading order term m A /m N and the factor C mn,1 of the order of unity defined as Using Eqs. (B14), (B17) and (B19), one can represent Eq. (B16) as in Eq. (37). Cross-sections The cross-section corresponding to the matrix element in Eq. (B3) reads where all kinematic variables are given in the laboratory frame in which the initial nucleus is assumed to be at rest, E ν is given by Eq. (19) and ∆ε mn = ε m − ε n . The kinetic energy T A of the scattered nucleus is given by Eqs. (26) and (27). Integration over E ν can be done with help of a Dirac δfunction, providing energy conservation, thus yielding One can obtain dσ mn /dT A using a very accurate approximation given in Eq. 
(27) is of the order of unity. Combining Eqs. (37), (B14) and (B22) one gets an observable differential cross-section defined in Eq. (42) where p is a solution of Eq. (32). In Eq. (B25) a superscript p or n appears when the index k in h k sr from Eq. (B24) points to a proton or to a neutron, respectively. When an index k or j in Eq. (B24) points to a proton/neutron, the form-factors f k mn should be read as f p/n mn , correspondingly. Each of the |(l, h p/n )| 2 terms given by Eqs. (C12) and (C33) yields the common factor 64(s − m 2 N ) 2 , where s = (p + k) 2 is the total energy squared in the neutrino-nucleon center-of-mass frame, and m N is the mass of the nucleon. In the leading non-relativistic approximation this factor can be approximated as 2 8 m 2 N E 2 ν . We denote a correction to this formula by a factor C 3,mn , accounting for the fact that the nucleon in the initial state has a non-zero three-momentum In what follows we denote by g mn the product of correction factors which is of the order of unity. Following our discussion of Eq. (37) we identify the second and third lines of Eq. (B24) as contributing to the coherent and incoherent cross-sections. The factor g mn is, in general, different for coherent and incoherent terms. We take out these factors from the double summation at their effective values denoted by g c and g i for coherent and incoherent terms, respectively. The summation over n in the second line of Eq. (B24) leads to the form-factors averaged over all initial states (B28) Therefore, the second line of Eq. (B24) can be re-written as (B29) Let us work out the incoherent scattering encoded in the third line of Eq. (B24). A summation over m, n cannot be done without a model for λ mn sr . If λ mn sr would not depend on m, n the corresponding summation could be performed as follows. Consider the case when k and j point to the same type of the nucleon, for example, to a proton. If k = j then following a consideration similar to Eq. (B30) one may find that where the right-hand-side of Eq. (B31) is a covariance of quantum operators e −iqXj and e iqX k on |n , whose state reads cov nn (e −iqXj , e iqX k ) = n|e −iqXj e iqX k |n − n|e iqX k |n n|e −iqXj |n . In the case of weak correlations of nucleons in a nucleus, the covariances, like in Eq. (B31) vanish. For example, in models like the nuclear shell model, where a multi-particle wavefunction is constructed in terms of a product of one-particle wave-functions, the covariance in Eq. (B31) is identically zero. The smallness of the covariance in Eq. (B31) is the reason why the inelastic cross-section is, to good accuracy, linearly dependent on the number of nucleons. In what follows the covariance terms are neglected. The same considerations apply to the scattering on a neutron. It is straightforward to show that in the case of mixing neutron and proton amplitudes one gets (let k point to a proton and j point to a neutron, and now automatically k = j) n ω n m =n f k mn f j * mn = cov(e iqX k , e −iqXj ) pn (B35) which can also be neglected. As mentioned above the exact summation should consider the spin amplitude λ mn sr . We approximate the summation by replacing λ mn sr by its average value λ p/n sr for protons and neutrons, respectively. Therefore, the third line of Eq. (B24) reads Z k=1 sr Combining Eqs. (B24), (B29) and (B36), one gets the differential cross-section in Eq. (44). Appendix C: Calculation of the scalar product (l, h) The third line of Eq. 
(44) prompts us to calculate the scalar product of two currents u(k )O µ u(k) · u(p )O µ u(p), where O µ , O µ are Dirac matrices. The use of a standard powerful technique, which consists of the calculation of traces of Dirac γ-matrices, is not helpful for this problem. This is because all four momenta k, k , p and p are different and one cannot use the well-known formula for Dirac spinors where s r is four-vector of the fermion spin. To simplify intermediate formulas, we calculate the scalar product of the neutrino and nucleon currents in their centerof-mass frame, where energies of incoming and outgoing fermions are equal. In what follows in this section all quantities depending on kinematic variables are given in the neutrino-nucleon center-of-mass frame. Energies E ν and E N , of the neutrino and nucleon, respectively, read where s = (p + k) 2 and m gives the mass of the nucleon. In the Dirac basis the spinor of a nucleon with threemomentum p and index r = ±1 reads where λ ± = √ E N ± m and α p = n p · σ, in which n p is a unit vector along p, and σ = (σ 1 , σ 2 , σ 3 ) is a threevector of Pauli matrices. The index r enumerates two linearly independent two-spinors χ r (p). (C5) It is convenient to specify a basis of two-component spinors χ r to perform the calculations in Eq. (C3). Summation over r, r in the incoherent term of Eq. (44) are simpler in the helicity basis in which r, r are helicity eigenvalues. The coherent term of Eq. (44) requires consideration of the nucleon current with conservation of spin projection on the given axis. For this purpose a basis of χ r two-spinors, which are eigenstates of the σ 3 = (n k · σ) matrix, is more appropriate. It is apparent that the physical observable does not depend on the basis chosen. Helicity basis In the helicity basis the two-spinor χ r (p) is an eigenvector of the helicity operator n p · σ n p · σχ r (n p ) = rχ r (n p ) with an eigenvalue r = ±1, known as the helicity or doubled spin projection on a particle's three-momentum. Two-component normalized to unity spinors χ corresponding to the incoming and outgoing neutrino and nucleon with definite helicities in their center-of-mass frame, can be read −+ = 2m 0, −c θ/2 e +iϕ , ic θ/2 e +iϕ , s θ/2 (C8) and neutrino where for the sake of compactness For a neutrino, assuming its vanishing mass, manifesting neutrino helicity conservation in weak interactions. In Eq. (C9) only the left-handed neutrino currents required to calculate the elastic neutrino-nucleus cross-section are shown. (C20) In this limit there is an exact cancellation of the sum of axial currents with opposite spins lim cos θ→1 This cancellation can be understood by recalling that the axial current of the fermion with the same initial and final momenta p and same spin projection r is proportional to the 4-spin vector s µ u(p, r)γ µ γ 5 u(p, r) = 2mrs µ . (ii) The vector currents conserving spin projection reduce to One might observe, that in this limit, similar to Eq. (C21), which implies that in the coherent term of Eq. (44) there is a cancellation of the axial currents for spin-less nuclei. A more accurate statement can be drawn considering the exact formula in Eq. (C17) A η ++ + A η −− = −2i sin θ(E N − m)(0, sin ϕ, cos ϕ, 0) −i k 2 0 m sin θ(0, sin ϕ, cos ϕ, 0) (C32) In general, this four-vector is non-zero unless the neutrino energy k 0 in the laboratory frame is not zero and the scattering angle θ = 0 or π. 
Once the vector and axial currents of the nucleon are calculated, it is straightforward to calculate the scalar product (ii) The following equalities hold true (C39) (iii) Eqs. (C37) and (C38) can also be used to cross-check the results of tedious calculations leading to Eq. (C17).
What’s Going On in Neural Constituency Parsers? An Analysis A number of differences have emerged between modern and classic approaches to constituency parsing in recent years, with structural components like grammars and feature-rich lexicons becoming less central while recurrent neural network representations rise in popularity. The goal of this work is to analyze the extent to which information provided directly by the model structure in classical systems is still being captured by neural methods. To this end, we propose a high-performance neural model (92.08 F1 on PTB) that is representative of recent work and perform a series of investigative experiments. We find that our model implicitly learns to encode much of the same information that was explicitly provided by grammars and lexicons in the past, indicating that this scaffolding can largely be subsumed by powerful general-purpose neural machinery. Introduction In the past several years, many aspects of constituency parsing and natural language processing in general have changed.Grammars, which were once the central component of many parsers, have played a continually decreasing role.Rich lexicons and handcrafted lexical features have become less common as well.On the other hand, recurrent neural networks have gained traction as a powerful and general purpose tool for representation.So far, not much has been shown about how neural networks are able to compensate for the removal of the structures used in past models.To gain insight, we introduce a parser that is representative of recent trends and analyze its learned representations to determine what information it captures and what is important for its strong performance. Our parser is a natural extension of recent work in constituency parsing.We combine a common span representation based on recurrent neural networks with a novel, simplified scoring model.In addition, we replace the externally predicted partof-speech tags used in some recent systems with character-level word representations.Our parser achieves a test F1 score of 92.08 on section 23 of the Penn Treebank, exceeding the performance of many other state-of-the-art models evaluated under comparable conditions.Section 2 describes our model in detail. 
The remainder of the paper is focused on analysis.In Section 3, we look at the decline of grammars and output correlations.Past work in constituency parsing used context-free grammars with production rules governing adjacent labels (or more generally production-factored scores) to propagate information and capture correlations between output decisions (Collins, 1997;Charniak and Johnson, 2005;Petrov and Klein, 2007;Hall et al., 2014).Many recent parsers no longer have explicit grammar production rules, but still use information about other predictions, allowing them to capture output correlations (Dyer et al., 2016;Choe and Charniak, 2016).Beyond this, there are some parsers that use no context for bracket scoring and only include mild output correlations in the form of tree constraints (Cross and Huang, 2016b;Stern et al., 2017).In our experiments, we find that we can accurately predict parents from the representation given to a child.Since a simple classifier can predict the information provided by parent-child relations, this explains why the information no longer needs to be specified explicitly.We also show that we can completely remove output correlations from our model with a variant of our parser that makes independent span label decisions without any tree constraints while maintaining high F1 scores and mostly producing trees. In Section 4, we look at lexical representations.In the past, parsers used a variety of cus-tom lexical representations, such as word shape features, prefixes, suffixes, and special tokens for categories like numerals (Klein and Manning, 2003;Petrov and Klein, 2007;Finkel et al., 2008).Character-level models have shown promise in parsing and other NLP tasks as a way to remove the complexity of these lexical features (Ballesteros et al., 2015;Ling et al., 2015b;Kim et al., 2016;Coavoux and Crabbé, 2017;Liu and Zhang, 2017).We compare the performance of characterlevel representations and externally predicted partof-speech tags and show that these two sources of information seem to fill a similar role.We also perform experiments showing that the representations learned with character-level models contain information that was hand-specified in some other models. Finally, in Section 5 we look at the surface context captured by recurrent neural networks.Many recent parsers use LSTMs, a popular type of recurrent neural network, to combine and summarize context for making decisions (Choe and Charniak, 2016;Cross and Huang, 2016a;Dyer et al., 2016;Stern et al., 2017).Before LSTMs became common in parsing, systems that included surface features used a fixed-size window around the fenceposts at each end of a span (Charniak and Johnson, 2005;Finkel et al., 2008;Hall et al., 2014;Durrett and Klein, 2015), and the inference procedure handled most of the propagation of information from the rest of the sentence.We perform experiments showing that LSTMs capture far-away surface context and that this information is important for our parser's performance.We also provide evidence that word order of the far-away context is important and that the amount of context alone does not account for all of the gains seen with LSTMs. Overall, we find that the same sources of information that were effective for grammar-driven parsers are also captured by parsers based on recurrent neural networks. 
Parsing Model In this section, we propose a span-based parsing model that combines components from several recent neural architectures for constituency parsing and other natural language tasks.While this system is primarily introduced for the purpose of our analysis, it also performs well as a parser in its own right, exhibiting some gains over comparable work.Our model is in many respects similar to the chart parser of Stern et al. (2017), but features a number of simplifications and improvements. Overview Abstractly, our model consists of a single scoring function s(i, j, ) that assigns a real-valued score to every label for each span (i, j) in an input sentence.We take the set of available labels to be the collection of all nonterminals and unary chains observed in the training data, treating the latter as atomic units.The score of a tree T is defined as a sum over internal nodes of labeled span scores: We note that, in contrast with many other chart parsers, our model can directly score n-ary trees without the need for binarization or other tree transformations.Under this setup, the parsing problem is to find the tree with the highest score: Our concrete implementation of s(i, j, ) can be broken down into three pieces: word representation, span representation, and label scoring.We discuss each of these in turn. Word Representation One popular way to represent words is the use of word embeddings.We have a separate embedding for each word type in the training vocabulary and map all unknown words at test time to a single <UNK> token.In addition to word embeddings, character-level representations have also been gaining traction in recent years, with common choices including recurrent, convolutional, or bag-of-n-gram representations.These alleviate the unknown word problem by working with smaller, more frequent units, and readily capture morphological information not directly accessible through word embeddings.Character LSTMs in particular have proved useful in constituency parsing (Coavoux and Crabbé, 2017), dependency parsing (Ballesteros et al., 2015), part-of-speech tagging (Ling et al., 2015a), named entity recognition (Lample et al., 2016), and machine translation (Ling et al., 2015b), making them a natural choice for our system.We obtain a character-level representation for a word by running it through a bidirectional character LSTM and concatenating the final forward and backward outputs. The complete representation of a given word is the concatenation of its word embedding and its character LSTM representation.While past work has also used sparse indicator features (Finkel et al., 2008) or part-of-speech tags predicted by an external system (Cross and Huang, 2016b) for additional word-level information, we find these to be unnecessary in the presence of a robust character-level representation. Span Representation To build up to spans, we first run a bidirectional LSTM over the sequence of word representations for an input sentence to obtain context-sensitive forward and backward representations f i and b i for each fencepost i.We then follow past work in dependency parsing (Wang and Chang, 2016) and constituency parsing (Cross and Huang, 2016b;Stern et al., 2017) in representing the span (i, j) by the concatenation of the corresponding forward and backward span differences: See Figure 1 for an illustration. 
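A minimal PyTorch sketch of the span representation just described is given below; the layer sizes, the zero-padding of the boundary fenceposts, and all names are our illustrative choices rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SpanEncoder(nn.Module):
    """Sketch of the fencepost span representation (sizes are illustrative)."""
    def __init__(self, d_word=100, d_lstm=250):
        super().__init__()
        self.lstm = nn.LSTM(d_word, d_lstm, bidirectional=True, batch_first=True)

    def forward(self, words):               # words: (1, n, d_word)
        out, _ = self.lstm(words)           # (1, n, 2*d_lstm)
        fwd = out[..., :out.size(-1) // 2]
        bwd = out[..., out.size(-1) // 2:]
        # Fencepost i sits between words i-1 and i; prepend/append a zero state so
        # that the boundary fenceposts have well-defined representations.
        zero = torch.zeros_like(fwd[:, :1])
        f = torch.cat([zero, fwd], dim=1)   # f[:, i]: forward state at fencepost i
        b = torch.cat([bwd, zero], dim=1)   # b[:, i]: backward state at fencepost i
        return f, b

    def span(self, f, b, i, j):
        """Representation of span (i, j): forward and backward differences."""
        return torch.cat([f[:, j] - f[:, i], b[:, i] - b[:, j]], dim=-1)

enc = SpanEncoder()
words = torch.randn(1, 7, 100)              # e.g. "She played soccer in the park ."
f, b = enc(words)
print(enc.span(f, b, 1, 4).shape)           # torch.Size([1, 500])
```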
Label Scoring Finally, we implement the label scoring function by feeding the span representation through a onelayer feedforward network whose output dimensionality equals the number of possible labels.The score of a specific label is the corresponding component of the output vector: where g is an elementwise ReLU nonlinearity. Inference Even though our model operates on n-ary trees, we can still employ a CKY-style algorithm for efficient globally optimal inference by introducing an auxiliary empty label ∅ with s(i, j, ∅) = 0 for all (i, j) to handle spans that are not constituents.Under this scheme, every binarization of a tree with empty labels at intermediate dummy nodes will have the same score, so an arbitrary binarization can be selected at training time with no effect on learning.We contrast this with the chart parser of Stern et al. (2017), which assigns different scores to different binarizations of the same underlying tree and in theory may exhibit varying performance depending on the method chosen for conversion. With this change in place, let s best (i, j) denote the score of the best subtree spanning (i, j).For spans of length one, we need only consider the choice of label: For general spans (i, j), we have the following recursion: That is, we can independently select the best label for the current span and the best split point, where the score of a split is the sum of the best scores for the corresponding subtrees. To parse the full sentence, we compute s best (0, n) using a bottom-up chart decoder, then traverse backpointers to recover the tree achieving that score.Nodes assigned the empty label are omitted during the reconstruction process to obtain the full n-ary tree.The overall complexity of this approach is O(n 3 + Ln 2 ), where n is the number of words and L is the total number of labels.We note that because our system does not use a grammar, there is no constant for the number of grammar rules multiplying the O(n 3 ) term as in traditional CKY parsing.In practice, the O(n 2 ) evaluations of the span scoring function corresponding to the O(Ln 2 ) term dominate runtime. Training As is common for structured prediction problems (Taskar et al., 2005), we use margin-based training to learn a model that satisfies the constraints for each training example, where T * denotes the gold output, T ranges over all valid trees, and ∆ is the Hamming loss on labeled spans.Our training objective is the hinge loss: This is equal to 0 when all constraints are satisfied, or the magnitude of the largest margin violation otherwise. <START> Figure 1: Span representations are computed by running a bidirectional LSTM over the input sentence and taking differences of the output vectors at the two endpoints.Here we illustrate the process for the span (1, 4) corresponding to "played soccer in" in the example sentence. Since ∆ decomposes over spans, the inner lossaugmented decode max T [s(T ) + ∆(T, T * )] can be performed efficiently using a slight modification of the dynamic program used for inference.In particular, we replace s(i, j, ) with s(i, j, where * ij is the label of span (i, j) in the gold tree T * . Results We use the Penn Treebank (Marcus et al., 1993) for our experiments with the standard splits of sections 2-21 for training, section 22 for development, and section 23 for testing.Details about our model hyperparameters and training prodecure can be found in Appendix A. 
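The recursion for s_best above translates directly into a small chart decoder. The following sketch (plain Python with NumPy; label index 0 plays the role of the empty label ∅, with its score pinned at zero) is a simplified illustration, not the authors' implementation.

```python
import numpy as np

def cky_decode(scores):
    """scores[i][j] is a vector of label scores for span (i, j), with
    scores[i][j][0] == 0.0 reserved for the empty label (non-constituent)."""
    n = len(scores) - 1                       # number of words (fenceposts 0..n)
    best = {}                                 # (i, j) -> best subtree score
    back = {}                                 # (i, j) -> (best label, best split)
    for length in range(1, n + 1):
        for i in range(0, n - length + 1):
            j = i + length
            label = int(np.argmax(scores[i][j]))
            label_score = scores[i][j][label]
            if length == 1:
                best[i, j] = label_score
                back[i, j] = (label, None)
            else:
                split = max(range(i + 1, j), key=lambda k: best[i, k] + best[k, j])
                best[i, j] = label_score + best[i, split] + best[split, j]
                back[i, j] = (label, split)
    return best[0, n], back

# Toy example: 3 words, 4 labels (0 = empty); random scores stand in for s(i, j, l).
rng = np.random.default_rng(0)
scores = {i: {j: np.concatenate([[0.0], rng.normal(size=3)])
              for j in range(i + 1, 4)} for i in range(4)}
total, back = cky_decode(scores)
print(total, back[(0, 3)])
```

The backpointers in back are then traversed top-down, omitting spans whose best label is 0, to recover the n-ary tree as described above.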
Across 10 trials, our model achieves an average development F1 score of 92.22 on section 22 of the Penn Treebank.We use this as our primary point of comparison in all subsequent analysis.The model with the best score on the development set achieves a test F1 score of 92.08 on section 23 of the Penn Treebank, exceeding the performance of other recent state-of-the-art discriminative models which do not use external data or ensembling. 1 Liang et al. (2008) provides theoretical results suggesting they may be useful for learning efficiently.In constituency parsing, there are two primary forms of output correlation typically captured by models.The first is correlations between label decisions, which often are captured by either production scores or the history in an incremental treecreation procedure.The second, more subtle correlation comes from the enforcement of tree constraints, since the inclusion of one bracket can affect whether or not another bracket can be present.We explore these two classes of output correlations in Sections 3.1 and 3.2 below. Parent Classification The base parser introduced in Section 2 scores labeled brackets independently then uses a dynamic program to select a set of brackets that forms the highest-scoring tree.This independent labeling is an interesting departure from classical parsing work where correlations between predicted labels played a central role.It is natural to wonder why modeling label correlations isn't as important as it once was.Is there something about the neural representation that allows us to function without it?One possible explanation is that the neural machinery, in particular the LSTM, is handling much of the reconciliation between labels that was previously handled by an inference procedure.In other words, instead of using local information to suggest several brackets and letting the grammar handle interactions between them, the LSTM may be making decisions about brackets already in its latent state, allowing it to use the result of these decisions to inform other bracketings. One way to explore this hypothesis would be to evaluate whether the parser's learned representations could be used to predict parent labels of nodes in the tree.If the label of a node's parent can be predicted with high accuracy from the representation of its span, then little of the information about parent-child relations provided explicitly by a grammar has been lost.For this experiment, we freeze the input and LSTM parameters of our base model and train a new label scoring network to predict the label of a span's parent rather than the label of the span itself.We only predict parent labels for spans that have a bracket in the gold tree, so that all but the top level spans will have nonempty labels.The new network is trained with a margin loss. After training on the standard training sections of the treebank, the network was able to correctly predict 92.3% of parent labels on the development set.This is fairly accurate, which supports the hypothesis that the representation knows a substantial amount about surrounding context in the output tree.For comparison, given only a span's label, the best you can do for predicting the parent is 43.3% with the majority class conditioned on the current label. 
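A probe of this kind is easy to set up once the frozen span vectors are exported; the file names, array shapes, and the use of a linear classifier below are hypothetical stand-ins for the one-layer network and margin loss used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Hypothetical dumps: one row per gold span, produced by the frozen parser encoder.
X_train = np.load("train_span_vectors.npy")    # (N_train, 500) fencepost differences
y_train = np.load("train_parent_labels.npy")   # (N_train,) parent-label ids
X_dev   = np.load("dev_span_vectors.npy")
y_dev   = np.load("dev_parent_labels.npy")

# A linear probe is the simplest stand-in for the one-layer scoring network: if it
# already recovers parent labels well, the information is present in the span vectors.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print("parent-label accuracy:", accuracy_score(y_dev, probe.predict(X_dev)))
```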
Independent Span Decisions Like other recent parsers that do not capture correlations between output labels (Cross and Huang, 2016b;Stern et al., 2017), our base parser still does have some output correlations captured by the enforcement of tree constraints.In this section, we set out to determine the importance of these output correlations by making a version of the parser where they are removed.Although parsers are typically designed to form trees, the bracketing F1 measure used to evaluate parsers is still defined on non-tree outputs.To remove all output correlations from our parser, we can simply remove the tree constraint and independently make decisions about whether to include a bracketed span.The architecture is identical to the one described in Section 2, producing a vector of label scores for each span.We choose the label with the maximum score as the label for a span.As before, we fix the score of the empty label at zero, so if all other label scores are negative, the span will be left out of the set of predicted brackets.We train with independent margin losses for each span. Ignoring tree well-formedness, the development F1 score of this independent span selection parser is 92.20, effectively matching the performance of the tree-constrained parser.In addition, we find that 94.5% of predicted bracketings for development set examples form valid trees, even though we did not explicitly encourage this.This high performance shows that our parser can function well even without modeling any output correlations. Lexical Representation In this section, we investigate several common choices for lexical representations of words and their role in neural parsing. Alternate Word Representations We compare the performance of our base model, which uses word embeddings and a character LSTM, with otherwise identical parsers that use other combinations of lexical representations.The results of these experiments are summarized in Table 1.First, we remove the character-level representations from our model, leaving only the word embeddings.We find that development performance drops from 92.22 F1 to 91.44 F1, showing that word embeddings alone do not capture sufficient information for state-of-the-art performance.Then, we replace the character-level representations with embeddings of part-of-speech tags predicted by the Stanford tagger (Toutanova et al., 2003).This model achieves a comparable development F1 score of 92.09, but unlike our base model relies on outputs from an external system.Next, we train a model which includes all three lexical representations: word embeddings, character LSTM representations, and part-of-speech tag embeddings.We find that development performance is nearly identical to the base model at 92.24 F1, suggesting that character representations and predicted part-of-speech tags provide much of the same information.Finally, we remove word embeddings and rely completely on character-level embeddings.After retuning the character LSTM size, we find that a slightly larger character LSTM can make up for the loss in wordlevel embeddings, giving a development F1 of 92.24. 
Predicting Word Features Past work in constituency parsing has demonstrated that indicator features on word shapes, suffixes, and similar attributes provide useful infor- mation beyond the identity of a word itself, especially for rare and unknown tokens (Finkel et al., 2008;Hall et al., 2014).We hypothesize that the character-level LSTM in our model learns similar information without the need for manual supervision.To test this, we take the word representations induced by the character LSTM in our parser as fixed word encodings, and train a small feedforward network to predict binary word features defined in the Berkeley Parser (Petrov and Klein, 2007).We randomly split the vocabulary of the Penn Treebank into two subsets, using 80% of the word types for training and 20% for testing.We find that the character LSTM representations allow for previously handcrafted indicator features to be predicted with accuracies of 99.7% or higher in all cases.The fact that this simple classifier performs so well indicates that the information contained in these features is readily available from our model's character-level encodings.A detailed breakdown of accuracy by feature can be found in Appendix B. Context in the Sentence LSTM In this section, we analyze where the information in the sentence-level LSTM hidden vectors comes from.Since the LSTM representations we use to make parsing decisions come from the fenceposts on each side of a span, we would like to understand whether they only capture information from the immediate vicinity of the fenceposts or if they also contain more distant information.Although an LSTM is theoretically capable of incorporating an arbitrarily large amount of context, it is unclear how much context it actually captures and whether this context is important for parsing accuracy. Derivative Analysis First, we would like to know if the LSTM features capture distant information.For this experiment, we use derivatives as a measure of sensitivity to changes in an input.If the derivative of a value with respect to a particular input is high, then that input has a large impact on the final value.For a particular component of an LSTM output vector, we compute its gradient with respect to each LSTM input vector, calculate the 2 -norms of the gradients, and bucket the results according to distance from the output position.This process is repeated for every output position of each sentence in the development set, and the results are averaged within each bucket.Due to the scale of the required computation, we only use a subset of the output vector components to compute the average, sampling one at random per output vector. Figure 2 illustrates how the average gradient norm is affected by the distance between the LSTM input and output.As would be expected, the closest input vectors have the largest effect on the hidden state.However, the tail of values is fairly heavy, with substantial gradient norms even for inputs 40 words away.This shows that faraway inputs do have an effect on the LSTM representation. 
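The derivative analysis just described can be sketched as follows. The code assumes a batch-first PyTorch LSTM and an input tensor with gradients enabled as stand-ins for the parser's sentence-level LSTM and its word representations; these names and the single-sentence loop are illustrative simplifications.

```python
import torch

def gradient_norms_by_distance(lstm, inputs):
    """Measure how strongly each LSTM output position depends on each input position:
    pick one random component per output vector, backpropagate it to the inputs, and
    record the L2 norm of the gradient, bucketed by input-output distance.

    inputs: tensor of shape (1, T, d) with requires_grad=True, fed to a batch_first LSTM.
    """
    outputs, _ = lstm(inputs)                  # shape: (1, T, hidden)
    T = outputs.size(1)
    buckets = {}
    for t in range(T):
        component = torch.randint(outputs.size(2), (1,)).item()
        grad, = torch.autograd.grad(outputs[0, t, component], inputs,
                                    retain_graph=True)
        norms = grad[0].norm(dim=-1)           # one gradient norm per input position
        for s in range(T):
            buckets.setdefault(abs(t - s), []).append(norms[s].item())
    return {dist: sum(v) / len(v) for dist, v in buckets.items()}
```

Averaging the bucketed norms over many sentences yields a distance-sensitivity curve of the kind shown in Figure 2.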
Truncation Analysis Next, we investigate whether information in the LSTM representation about far-away inputs is actually important for parsing performance.To do so, we remove distant context information from our span encoding, representing spans by features obtained from LSTMs that are run on fixed-sized windows of size k around each fencepost.Figure 3 illustrates this truncated representation.Since the truncated representation also removes information about the size and position of the span in addition to the context words, we learn a positiondependent cell state initialization for each of the two LSTM directions to give a more fair comparison to the full LSTM.The use of a fixed-sized context window is reminiscent of prior work by Hall et al. (2014) and Durrett and Klein (2015), but here we use an LSTM instead of sparse features.We train parsers with different values of k and observe how their performance varies.All other architecture details and hyperparameters are the same as for the original model. The blue points in Figure 4 show how the context size k affects parser performance for k ∈ {2, 3, 5, 10, 20, 30}.As with the derivative analysis, although most of the weight is carried by the nearby inputs, a nontrivial fraction of performance is due to context more than 10 words away. Word Order Now that we have established that long-distance information is important for parsing performance, we would like to know whether the order of the far-away words is important.Is the LSTM capturing far-away structure, or is the information more like a bag-of-words representation summarizing the words that appear? To test the importance of order, we train a parser where information about the order of far-away words is destroyed.As illustrated in Figure 5, we run a separate LSTM over the entire sentence for each fencepost, shuffling the input depending on the particular fencepost being represented.We randomly shuffle words outside a context window of size k around the fencepost of interest, keeping words on the left and the right separate so that directional information is preserved but exact positions are lost.The orange points in Figure 4 show the performance of this experiment with different context sizes k.We observe that including shuffled distant words is substantially better than truncating them completely.On the other hand, shuffling does cause performance to degrade relative to the base parser even when the unshuffled win- dow is moderately large, indicating that the LSTM is propagating information that depends on the order of words in far-away positions. LSTMs vs. Feedforward Finally, we investigate whether the LSTM architecture itself is important for reasons other than just the amount of context it can capture.Like any architecture, the LSTM introduces particular inductive biases that affect what gets learned, and these could be important for parser performance. We run a version of the truncation experiment from Section 5.2 where we use a feedforward network in place of a sentence-level LSTM to process the surrounding context of each fencepost.The input to the network is the concatenation of the word representations that would be used as inputs for the truncated LSTM, and the output is a vector of the same size as the LSTM-based representation. 
As in Section 5.2, we wish to give our representation information about span size and position, so we also include a learned fencepost position embedding in the concatenated inputs to the network.We focus on context window size k = 3 for this experiment.We search among networks with one, two, or three hidden layers that are one, two, or four times the size of the LSTM hidden state.Of all the feedforward networks tried, the maximum development performance was 83.39 F1, compared to 89.92 F1 for the LSTM-based truncation.This suggests that some property of the LSTM makes it better suited for the task of summarizing context than a flat feedforward network. Related Analysis Work Here we review other works that have performed similar analyses to ours in parsing and other areas of NLP.See Section 2 for a description of how our parser is related to other parsers.Similar to our independent span prediction in Section 3.2, several works have found that their models still produce valid outputs for the majority of inputs even after relaxing well-formedness constraints.In dependency parsing, Zhang et al. (2017) and Chorowski et al. (2016) found that selecting dependency heads independently often resulted in valid trees for their parsers (95% and 99.5% of outputs form trees, respectively).In constituency parsing, the parser of Vinyals et al. (2015), which produced linearized parses token by token, was able to output valid constituency trees for the majority of sentences (98.5%) even though it was not constrained to do so. Several other works have investigated what information is being captured within LSTM representations.Chawla et al. (2017) performed analysis of bidirectional LSTM representations in the context of named entity recognition.Although they were primarily interested in finding specific word types that were important for making decisions, they also analyzed how distance affected a word's impact.Shi et al. (2016) and Linzen et al. (2016) perform analysis of LSTM representations in machine translation and language modeling respectively to determine whether syntactic information is present.Some of their techniques involve classification of features from LSTM hidden states, similar to our analysis in Sections 3.1 and 4.2. In Section 5.4, we found that replacing an LSTM with a feedforward network hurt performance.Previously, Chelba et al. (2017) had similar findings in language modeling, where using LSTMs truncated to a particular distance improved performance over feedforward networks that were given the same context. 
Conclusion

In this paper, we investigated the extent to which information provided directly by model structure in classical constituency parsers is still being captured by neural methods. Because neural models function in a substantially different way than classical systems, it could be that they rely on different information when making their decisions. Our findings suggest that, to the contrary, the neural systems are learning to capture many of the same knowledge sources that were previously provided, including the parent-child relations encoded in grammars and the word features induced by lexicons.

Our model hyperparameters are summarized in Table 2. We train using the Adam optimizer (Kingma and Ba, 2014) with its default hyperparameters for 40 epochs. We evaluate on the development set 4 times per epoch, selecting the model with the highest overall development performance as our final model. When performing a word embedding lookup during training, we randomly replace words by the <UNK> token with probability 1/(1 + freq(w)), where freq(w) is the frequency of a word w in the training set. We apply dropout with probability 0.4 before and inside each layer of each LSTM. Our system is implemented in Python using DyNet (Neubig et al., 2017).

Figure 2: Average derivative of the LSTM output with respect to its input as a function of distance. The output is most sensitive to the closest words, but the tail of the distribution is fairly heavy, indicating that far-away words also have substantial impact.
Figure 3: An example of creating a truncated span representation for the span "played soccer in" with context size k = 2. This representation is used to investigate the importance of information far away from the fenceposts of a span.
Figure 4: Development F1 as the amount of context given to the sentence-level LSTM varies. The blue points represent parser performance when the LSTM is truncated to a window around the fenceposts, showing that far-away context is important. The orange points represent performance when the full context is available but words outside a window around the fenceposts are shuffled, showing that the order of far-away context is also important.
Figure 5: An example of creating a shuffled span representation for the span "played soccer in" with context size k = 2. The light blue words are outside the context window and are shuffled randomly. Shuffled representations are used to explore whether the order of far-away words is important.
Table 1: Development F1 scores on section 22 of the Penn Treebank for different lexical representations.
Table 2: The sizes of the components used in our model.
Table 3: Classification accuracy for various binary word features using the character LSTM representations for words induced by a pre-trained parser. Performance substantially exceeds that of a majority class classifier in all cases, reaching 99.7% or higher for all features. The majority class is True for the first four features in the left column and False for the rest.
6,694.8
2018-04-20T00:00:00.000
[ "Computer Science" ]
MONOTONICITY OF THE PERIOD AND POSITIVE PERIODIC SOLUTIONS OF A QUASILINEAR EQUATION . We investigate the monotonicity of the minimal period of the periodic solutions of some quasilinear differential equations and extend results for p = 2 due to Chow and Wang, and Chicone, to the case of the p -Laplace operator. Our main result is the monotonicity of the period of optimal functions for a minimization problem related with a fundamental interpolation inequality. In particular we generalize to p ≥ 2 a recent proof of monotonicity due to Benguria, Depassier and Loss for the same optimality issue and p = 2. Introduction In this paper we study monotonicity properties of the minimal period of positive periodic solutions of φ p (w ′ ) ′ + V ′ (w) = 0 , (1) where p ≥ 2, φ p (s) = |s| p−2 s, and V : R → R is smooth.The potential function V(w) is assumed to be non-negative for w ≥ 0, V(0) > 0, it has a minimum at w = A > 0 with V(A) = 0 = V ′ (A), and satisfies some additional conditions listed in Section 3, which guarantee that (1) has positive periodic solutions enclosing the critical point (A, 0) in the phase plane (w, w ′ ). The energy E = 1 p |w ′ | p + V(w) is conserved if w solves (1) and we are interested in the positive periodic solutions with energy less than E * := V(0) which are enclosed by the homoclinic orbit attached to (w, w ′ ) = (0, 0).We further assume that V is such that these solutions are uniquely determined, up to translations, by the energy level E, with minimal period T (E). The purpose of this paper is to study under which conditions T is an increasing function of E in the range 0 ≤ E ≤ E * where E * is the energy level of the homoclinic orbit.Furthermore we will consider the asymptotic behaviour of T (E) as E → 0 + and as E → (E * ) − .Surprisingly enough, the cases p = 2 and p > 2 differ as E → 0 + . Our first result is an extension to p > 2 of a result of Chow and Wang [8,Theorem 2.1]. Notice that w → |V ′ (w)| 2 − p ′ V(w) V ′′ (w) is a positive function if and only if w → V(w) |V ′ (w)| −p ′ is a monotone increasing function. Our second result is also an extension to p > 2 of the monotonicity result in [7, Theorem A] under Chicone's condition, which is also a growth condition, but of higher order in the derivatives. A central motivation for this paper arises from the study of the minimization problem where q > p is an arbitrary exponent and S 1 is the unit circle.The problem can also be seen as the search for the optimal constant in the interpolation inequality ) . 
Testing the inequality with constant functions shows that µ(λ) ≤ μ(λ it is well known from the carré du champ method [2,3] that equality holds if and only if λ ≤ d/(q −2).If λ > d/(q −2), we have µ(λ) < μ(λ) and optimal functions are non constant, so that symmetry breaking occurs.The minimization problem problem with p > 2 was studied in [18].There is an optimal function for (2) and the corresponding Euler-Lagrange equation turns out to be the nonlinear differential equation with nonlocal terms given by where we look for positive solutions on W 1,p (S 1 ) \ {0} or equivalently positive 2π-periodic solutions on R.So far, we do not know the precise value of λ for which there is symmetry breaking but according to [18] rigidity holds if 0 < λ < λ 1 for some explicit λ 1 > 0, where rigidity means that any positive solution of ( 3) is a constant.In that range, we have µ(λ) = μ(λ).On the contrary, one can prove that symmetry breaking occurs if λ > λ 2 for some λ 2 > λ 1 , so that µ(λ) < μ(λ) and (3) admits non-constant positive solutions for any λ > λ 2 .Using homogeneity, scalings and a suitable change of variables, the study of ( 3) is reduced in [18] to the study of positive periodic solutions on R of In this equation, there are no non-local terms but the minimal period of periodic solutions is no more given.Equation (⋆) enters in the framework of (1) with A = 1 and potential so that E * = 1/p − 1/q.Positive periodic solutions exist only if the energy level satisfies the condition E < E * .Again, let T (E) be the minimal period of such a solution.Theorems 1 and 2 do not apply easily and we shall prove directly the following result, which is the main contribution of this paper. Theorem 3. Let p and q be two exponents such that 2 < p < q and consider the positive periodic solutions of (⋆).Then the map E → T (E) is increasing on (0, E * ) with lim E→0 + T (E) = 0 and lim E→(E * ) − T (E) = +∞. The study of (3) is motivated by rigidity and symmetry breaking results associated with interpolation inequalities on the unit sphere S d in one and higher dimensions, that is, d ≥ 1.If p = 2, a precise description of the threshold value of λ is known in the framework of Markov processes if q is not too large (see [3] for an overview with historical references that go back to [2]) and from [5,11,14,15,16,17,13] using entropy methods applied to nonlinear elliptic and parabolic equations; also see [12] for an overview and extensions to various related variational problems. Almost nothing is known beyond [18] if p > 2, even for d = 1.Our results are a contribution to a better understanding of the fundamental properties of the solutions of (1) in the simplest of the cases when p > 2. Without the Assumption that V ′ (A) = 0 in Theorems 1 and 2 (which is also satisfied in Theorem 3), it is easy to give similar results so that E → T (E) is decreasing, but in phase plane the solutions are not described anymore by orbits enclosed by a homoclinic orbit.Some comments on this issue can be found in Section 2. 
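For reference, the reduced equation (⋆) discussed above does not survive in this excerpt. A form consistent with the potential derivative V'(w) = φ_q(w) − φ_p(w) stated later, and with the p = 2 case w'' + w^{q−1} − w = 0 quoted below, would presumably be:

```latex
\bigl(|w'|^{p-2}\,w'\bigr)' + w^{q-1} - w^{p-1} = 0, \qquad w > 0.
\tag{$\star$}
```

This is offered as a reconstruction under the stated assumptions, not as a quotation of the original equation.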
In dimension d = 1, the bifurcation problem (3) degenerates in the limit case p = 2, for which λ 1 = λ 2 = 1/(q − 2) according to [2].We refer to [4, Section 1] for an introduction to the minimization problem (2) with p = 2, the issue of the branches and the monotonicity of the period problem.Proving that symmetry breaking occurs if and only if λ > 1/(q − 2) can be reduced to a proof of the monotonicity of the minimal period using Chicone's criterion [7,Theorem A].The study of bifurcation problems using the period function goes back to [23] in case of equations with cubic non-linearities and was later extended to various classes of Hamiltonian systems in [22,21,10,9,19]. If p ′ = p/(p − 1) is the Hölder conjugate of the exponent p and ) can be rewritten as the Hamiltonian system of equations with w = u and w ′ = φ p ′ (v).Although this Hamiltonian structure may superficially look similar to the conditions of [22, Theorem 1], we have a definitely different set of assumptions.In [21], a much larger set of Hamiltonian systems is considered but again our assumptions differ, for instance for the simple reason that the function φ p ′ is not of class C 2 .Further references on the period function can be found in [24].There are various other extensions of Chicone's result [7], see for instance [6].Also notice that there is a computation in [6, Section 4] which turns out to be equivalent to an argument used in the proof of our Theorem 4 (see below in Section 2), although it is stated neither in that form nor as in Theorem 1.The Hamiltonian version of the method has interesting applications to Lotka-Volterra systems. The monotonicity of the minimal period as a function of the energy level is a question of interest by itself and particularly in the model case of the potential V as in (4), even in the case p = 2.We quote from [4] that: "It is somewhat surprising that, despite its ubiquity, the monotonicity of the period function for [this problem] in full generality was only established recently."In [20], Miyamoto and Yagasaki proved the monotonicity of the period function for p = 2 and for q an integer.In [24], Yagasaki generalized the result to all values of q > 2. Both papers, [20,24], rely on Chicone's criterion which is difficult to apply to non-integer values of q.The purpose of Benguria, Depassier and Loss in [4] was to give a simplified proof of the monotonicity of the period of the positive solutions of w ′′ + w q−1 − w = 0 (corresponding to p = 2 in our notations). We point out that in many situations in the paper we will consider the equation where V is a potential of class C 2 defined on R such that The potential V (w) achieves its minimum on (a, b) at x = 0.The relationship of V with V is given by V (w) = V(w + A), a = − A and b = B − A. The origin w = 0, w ′ = 0 is a stationary point of ( 5) giving rise to a center surrounded by closed periodic orbits with minimal period T (E), such that these periodic orbits are enclosed by a homoclinic orbit attached to (a, 0).This paper is organized as follows.Section 2 is devoted to the proof of the p-Laplacian version of results which are classical when p = 2 and are summarized in Theorems 1 and 2. 
We are not aware of such statements in the existing literature but they are natural extensions of the case p = 2 and might already be known, so we do not claim any deep originality.The result of Theorem 3 is by far more difficult.In Section 3 we start with problem (1) by making a change of variables and obtain an expression for the minimal period following Chicone's ideas.We also prove some properties of the minimal period when the energy goes to zero and when it goes to the homoclinic level E * .In Section 4 we prove the monotonicity of the minimal period extending, in particular, the results of [4] for p = 2 to the more general case of the one-dimensional p-Laplacian operator w → φ p (w ′ ) ′ , with p > 2. Our main result (Theorem 3) is proved in Section 5, the proof is highly non-trivial. A p-Laplacian version of some classical results This section is devoted to the proof of Theorems 1 and 2. We also provide a slightly more detailed statement of Theorem 1. We begin by extending [8, Theorem 2.1] by Chow and Wang to the p-Laplacian situation when p ≥ 2. We recall that p ′ = p/(p − 1) denotes the Hölder conjugate of p. Equation ( 5) has a first integral given by for any energy level E ∈ (0, E * ) and the minimal period is given in terms of the energy by where At this point, let us notice that the map E → T (E) is a continuous function if we assume that w V ′ (w) > 0 for any w ∈ (a, 0)∪(0, b), but that it is not the case if V admits another local minimum than w = 0 in the interval (a, b).Let us define The following result is a detailed version of Theorem 1. With the above notations, for any E ∈ (0, E * ), it holds that if the integral in the right-hand side is finite.Thus if R is positive on (a, 0) ∪ (0, b), then the minimal period is increasing. Notice that from Assumption (H1), we know that which is incompatible with R being a negative valued function in a neighbourhood of w = a + .If we remove the assumption that V ′ (a) = 0, then it makes sense to assume that R is a negative function on (a, 0) ∪ (0, b).In this case, the minimal period is decreasing. Proof.The proof relies on the same strategy as for [8, Theorem 2.1].We skip some details and emphasize only the changes needed to cover the case p > 2. Let us set By differentiating with respect to E, we obtain T (E) and Differentiating once more with respect to E, we get On the other hand, by integrating by parts in we obtain by definition of J and γ.See [8] for further details in the case p = 2.By differentiating twice this expression of J(E) with respect to E, we obtain Since T (E) = 2 dI dE (E), we learn from (10) that This concludes the proof of (8). Proof of Theorem 2. Let us consider again Equation ( 5) with a potential V which satisfies (H1).We adapt the proof of [7, Theorem A] to the case p > 2. Let us consider the function h(w) := w |w| V (w) (11) for any w ∈ (a, 0) ∪ (0, b) and extend it by h(0) = 0 at w = 0.With the notations of (7), we have h w 1 (E) = − √ E, h w 2 (E) = + √ E and we can reparametrize the interval With this change of variables, the minimal period can be written as Its derivative with respect to E is given by where we use the short-hand notation w = h −1 √ E sin θ .After an integration by parts, this expression becomes and one can show that ′′ is positive if and only if V /(V ′ ) 2 is a convex function.This completes the proof of Theorem 2. 
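The first integral (6) and the period formula (7) referred to in this section are not typeset in this excerpt. A sketch of their presumable form, assuming the standard computation for (5) (note the conjugate exponent p', which the garbled inline expression for the energy above appears to have dropped), is:

```latex
\frac{1}{p'}\,|w'|^{p} + V(w) = E
\qquad\Longrightarrow\qquad
T(E) = 2 \int_{w_1(E)}^{w_2(E)} \frac{\mathrm{d}w}{\bigl(p'\,(E - V(w))\bigr)^{1/p}},
```

where w_1(E) < 0 < w_2(E) denote the two roots of V(w) = E in (a, b), as in the notation used later in the text. This reconstruction follows from differentiating the candidate energy along solutions of (5).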
Asymptotic results As in Section 2, let V (w) = V(w + A) and recall that (5) has a first integral given by ( 6) where E ≥ 0 is the energy level.In this short section, we shall assume that (H1) holds with a = − A, define and make the additional hypothesis lim inf This assumption is satisfied in case of (4) as soon as q > p > 2 and in that case ω = V ′′ (1) = √ q − p, but the following result holds for a much larger class of potentials. Proof.In a neighbourhood of w = 0, we can write V (w) ∼ 1 2 ω 2 w 2 , use (7) and the change of variables w = √ 2 E y/ω to obtain We obtain the expression of the integral using the formulae [1, 6.2.1 & 6.2.2] for the Euler Beta function.Now let us consider the limit as E → (E * ) − .We learn from (H2) that for some ℓ > 0 if w − a > 0 is taken small enough.We deduce from (7) that T (E) diverges as E → (E * ) − . The monotonicity of the minimal period Applying the formulae of Section 2 to study the monotonicity of the minimal period for periodic solutions of (⋆) leads later to very complicated expressions for our problem with potential (4).For that reason, it is convenient to introduce a new change of variables as follows.Let A = − a > 0 and define for any w ∈ (a, 0) ∪ (0, b) and extend it by h(0) = 0 at w = 0.Here h is defined as in Section 2 (proof of Theorem 1, Eq. ( 12)) while h is such that Let us make the simplifying assumption By the above definition of h and ( 13), the minimal period can now be computed as Let us define dθ and notice that J is a function of E as a consequence of the change of variables (17): . By differentiating T (E) in ( 16) with respect to E, we find that y is given by ( 17) and Here is a sufficient condition on h, which is in fact an assumption on V . Lemma 6. Assume that (H1) and (H3) hold.With the above notations, if the function K is decreasing on [A, B], then J ′ > 0 on (0, E * ) and the minimal period T (E) is a monotone increasing function of E. Proof.With y(E, θ) defined by ( 17), the result is a consequence of We deduce from Lemma 6 a sufficient condition on h to obtain that the minimal period is monotone increasing. Corollary 7. Assume that (H1) and (H3) hold.If h and and 1/h ′2 are convex functions, then the minimal period T (E) is a monotone increasing function of E ∈ (0, E * ). Proof.By convexity of 1/h ′2 , we have that 0 < 1 2 and h ′′ h ′3 is a decreasing function.Next, from (18) written as we observe that all the factors on the right hand of this expression are positive decreasing functions, implying that K is a decreasing function on [A, B]. Proof of the main result By applying Lemma 6 and Corollary 7, we prove Theorem 3. The main difficulty is to establish that K is monotone decreasing if 1 < m < 2, which is done in Section 5.3.5.1.Notations.Let us consider (1) with V given by (4) and q > p ≥ 2, hence V ′ (w) = φ q (w) − φ p (w), and ( 1) is reduced to In particular w = 1 is a trivial solution of this equation.All conditions of Section 1 for V are satisfied, V (resp.V ) reaches a minimum at w = 1 (resp.w = 0) and In the discussion, we shall consider the three cases: m = 2, m > 2 and 1 < m < 2, where We have that V (w) = V(w + A) with A = 1, i.e., It is convenient to define With these notations, we have As a special case, note that W (y) = (y − 1) 2 and h(y) = (y − 1)/ √ q if m = 2.In that case, the result of Theorem 3 is straightforward. 
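The explicit expression of the potential (4) is likewise lost in this excerpt. A form consistent with V'(w) = φ_q(w) − φ_p(w) (Section 5.1), with V(1) = V'(1) = 0, with E* = V(0) = 1/p − 1/q, and with ω² = V''(1) = q − p as quoted above, would presumably be:

```latex
V(w) = \frac{|w|^{q}}{q} - \frac{|w|^{p}}{p} + \frac{1}{p} - \frac{1}{q},
\qquad E^{*} = V(0) = \frac{1}{p} - \frac{1}{q}.
```

Again, this is a hedged reconstruction from the surrounding conditions rather than the original formula.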
Lemma 8.If m = 2 and V given by (4), the minimal period Proof.The function K defined by ( 18) is explicitly given by K(y) = q 2 p ′ y −1/p hence monotone decreasing and Lemma 6 applies.5.2.The case m > 2. We obtain the following result.W (y)/q, we find that the expression has its sign given by and In both cases, we conclude that h ′′ ≥ 0. This proves Theorem 3 as a consequence of Corollary 7 and Lemma 9 if m > 2. 5.3.The case 1 < m < 2. We cannot apply Corollary 7 and we have to directly rely on Lemma 6.We recall that m = q/p.Let us start by computing K ′ . Lemma 10.The function y → − K ′ (y) has the sign of p 2 y 2 f (a, m, y, z) where z = y m−1 , the parameters (a, m) are admissible in the sense that Proof.We set y = x p so that x = y 1/p and dx dy = where W and h are as in Section 5.1, so that , that is, 4 m p 3 y 1/p ′ h ′ (y) 2 = (Φ ′ (x)) 2 /Φ(x) and K as in ( 18) can be rewritten as Φ ′′ and the detailed computation shows that ending the proof of the lemma. Proof.Keeping the notations of Lemma 10, our goal is to prove that y → f (a, m, y, y m−1 ) is nonnegative for any y ∈ (0, γ m ) whenever the parameters (a, m) are admissible. In the limit as m → 2, we have y = z and Hence f (a, 2, y, y m−1 ) is positive unless y = 1.We are now going to take a given a ∈ (0, 1/2) and consider m ∈ (1, 2) as a parameter.Let us prove that for some m * ∈ (1, 2), we have f (a, m, y, y m−1 ) ≥ 0 for any (m, y) such that m * < m < 2 and 0 ≤ y ≤ γ m .We assume by contradiction that there are two sequences (m k ) k∈N and (y k Up to the extraction of a subsequence, (y k ) k∈N converges to some limit y ∞ ∈ [0, 2] and by continuity of f we know that f (a, 2, y ∞ , y ∞ ) ≤ 0: the only possibility is y ∞ = 1 by (22).Since f a, m k , y k , y m k −1 k < 0 = f (a, m k , 1, 1), we learn that y k = 1.Since lim k→+∞ y k = 1, this contradicts (21) or, to be precise, |y k − 1| ≥ ε(a, m k ), as the reader is invited to check that lim inf k→+∞ ε(a, m k ) > 0 because f is a smooth function of all of its arguments.If we redefine then we know that for any a ∈ (0, 1/2), we have m * (a) < 2. We want to prove that m * (a) = 1.Again, let us argue by contradiction: if m * (a) > 1, and assume that there are two sequences (m k ) k∈N and (y k Up to the extraction of a subsequence, (y k ) k∈N converges to some limit y ∞ ∈ [0, 2] and by continuity of f we know that f (a, m * (a), y ∞ , y m−1 ∞ ) ≤ 0. For the same reasons as above, y ∞ = 0, y ∞ = 1 and y ∞ = γ m * (a) are excluded.Altogether, we have proved that for m = m * (a) , we have f (a, m, y ∞ , y m−1 ∞ ) = 0 for some y ∞ ∈ (0, 1) ∪ (1, γ m ) and we also have that f (a, m, y, y m−1 ) ≥ 0 for any y ∈ (0, 1) ∪ (1, γ m ), so that y ∞ is a local minimizer of y → f (a, m, y, y m−1 ).As a consequence, we have shown that for m = m * (a) > 1 and y = y ∞ = 1, we have f a, m, y, y m−1 = 0 and ∂ ∂y f a, m, y, y m−1 = 0 . As we shall see below, this contradicts Lemma 12. Hence y → f (a, m, y, y m−1 ) takes nonnegative values for any admissible parameters (a, m) with 1 < m < 2. By Lemma 10, K ′ (y) ≤ 0, thus completing the proof. ) Under this assumption, w i (E), i = 1, 2, are the two roots in (a, b) of V (w) = E, as in Theorem 4, V (w) = E admits no other root in (a, b) for any E ∈ (0, E * ) and the map E → T (E) is continuous.Also notice that h ′ (y) > 0 ∀ y ∈ y 1 (E), A p ∪ A p , y 2 (E) where y 1 (E) := A − |w 1 (E)| p and y 2 (E) := A + w 2 (E) p .
5,580.4
2023-01-05T00:00:00.000
[ "Mathematics" ]
Persistence Modeling in Marketing: Descriptive, Predictive, and Normative Uses There is general agreement that a firm’s scarce marketing resources should be managed for the purpose of long-term profitable growth. Putting that premise into practice is difficult, as only the short-term impact of marketing actions can be readily observed. Persistence modeling has become a well-accepted tool for long-run impact detection. Its consistent use across a broad range of settings has resulted in novel empirical generalizations on the long-run effectiveness of several marketing instruments and has contributed unique insights on, among others, (i) the marketing-finance interface, (ii) the role of new media, and (iii) the mediating role of a broad set of mindset metrics. Moreover, the recent addition of a more normative focus has added considerably to the actionability of these insights. Marketing Science, the marketing discipline started to have a growing appreciation for Empirical Generalizations obtained through a consistent application of data-driven techniques (including timeseries analysis) on multiple data sets covering scores of brands, categories, or industries (see, e.g.Nijs et al., 2001or Srinivasan et al., 2004 for some representative examples). 3 Persistence modeling The development of techniques specifically designed to disentangle short-from long-run movements-unit-root testing, cointegration and error-correction modeling, persistence estimation, and Forecast Error Variance Decomposition (FEVD)-offered a further push to TS models' growing acceptance, as they provided a natural match with one of marketing's long-lasting interest fields: quantifying the long-run impact of marketing's tactical and strategic decisions. Short-run effects are, by definition, temporary in nature (Hanssens & Dekimpe, 2012).After the effects are dissipated, performance (e.g, sales or market share) returns to the level enjoyed before the marketing action took place.Often, this will be a "return to the mean," but it could also be a return to an exogenously determined trend.By contrast, long-term effects are permanent (or persistent) in nature: after the marketing action is completed, the affected variable reaches a different (higher or lower) level and stays at that new level. Persistence modeling combines in one metric the total impact of a chain reaction of consumer response, firm feedback, and competitor response that emerges following an initial marketing action.This marketing action could be an unexpected increase in advertising support (Dekimpe & Hanssens, 1995), a price promotion (Pauwels et al., 2002) or a competitive activity (Steenkamp et al., 2005), and the performance metric can be category demand (Nijs et al., 2001), brand sales (Dekimpe & Hanssens, 1995), brand profitability (Dekimpe & Hanssens, 1999), or stock returns (Pauwels et al., 2004), among others. Using the advertising-sales relationship as an illustration, Dekimpe and Hanssens (1995) identified six components that make up the chain reaction from initial advertising campaign to persistent demand impact: (i) contemporaneous (or immediate), (ii) carry-over, (iii) purchase reinforcement, (iv) feedback effects, (v) firm-specific decision rules, and (vi) competitive reactions.The central idea is that in quantifying the total long-run impact of a marketing action, all channels of influence should be accounted for.A similar logic can be found in Bass and Clarke (1972, p. 
300) who stated that "credit for the second purchase should be assigned to the expenditures which induced trial" and Leeflang and Wittink (1992, 1996) who made a case for incorporating competitive reaction patterns when assessing the total effect of marketing activities.

Persistence calculations try to incorporate all channels of influence, enabling one to draw managerially relevant long-run inferences. They typically involve the estimation of a Vector Autoregressive Model (potentially augmented with some eXogenous control variables). The model is specified in the level of the variables, in the first difference, or in error-correction format, depending on the outcome of preliminary unit-root and cointegration tests. These VARX models allow for the complex feedback loops needed to incorporate the six effects mentioned before. From the VAR parameters, impulse-response functions (IRFs) can be derived. Technically speaking, an impulse-response function traces the incremental effect of a one-unit (or one-standard-deviation) shock in one of the variables on the future values of the other endogenous variables in the VAR system, taking into account each of the aforementioned factors. We refer to, among others, Dekimpe and Hanssens (2023), Pauwels (2018), Srinivasan (2021), or Wang and Yildirim (2021) for recent technical expositions. Table 1 provides both some econometric background studies and a set of marketing applications; the listed studies are given for illustrative purposes only, and the list is not meant to be exhaustive.

IRFs can be seen as the difference between two forecasts (each over multiple periods): one based on an information set that does not take the marketing shock into account, and another based on an extended information set that does take this action into account (Pauwels et al., 2002). As such, IRFs trace the incremental effect of the marketing action reflected in the original shock.

A stylized example is given in Figure 1.⁴ The top IRF traces the sales impact of a price-promotion shock in a mean-reverting market. The IRF shows various fluctuations over time: a positive immediate effect, followed by a typical stockpiling effect, after which some additional sales are observed that could be due to, for example, purchase reinforcement and/or firm-specific decision rules (where successful promotions lead to further price reductions). Eventually,
however, any incremental effect disappears, potentially due to competitive reactions. This does not imply that the product no longer realizes any sales, but rather that no additional sales can be attributed to the initial promotion. In contrast, in the bottom panel, we see that this incremental effect stabilizes at a non-zero, or persistent, level. In that case, we have identified a long-run effect, as the initial promotion keeps on generating extra sales. Behavioral explanations for this phenomenon could be that newly attracted customers make regular repeat purchases or that the existing customer base has increased its usage rate.

While impulse-response functions are useful summary devices, the multitude of numbers (one per post-shock period) involved still makes them somewhat cumbersome to report (unless presented in a graphical way as in Figure 1) and hard to compare across brands, markets, or marketing-mix instruments. To reduce this set of numbers to a more manageable size, one often (see, among others, Nijs et al., 2001; Nijs et al., 2007; Pauwels & Srinivasan, 2004; Srinivasan et al., 2004) derives various summary statistics from them, such as⁵: (i) the immediate (same-period) impact of the marketing shock; (ii) the long-run or permanent (persistent) impact, which is the value to which the IRF converges; (iii) the cumulative effect over the dust-settling period, where the time interval before convergence is obtained is referred to as the dust-settling period (Dekimpe & Hanssens, 1999; Nijs et al., 2001) and the cumulative effect over this time period as the total short-run effect; for mean- (or trend-) reverting series, this reflects the area under the curve, while in case of a persistent effect, the combined (cumulative) effect over the dust-settling period is computed as a comparable metric; and (iv) finally, the relative importance of (current and past fluctuations in) a given marketing instrument (or other shock component) in explaining the future evolution of the performance metric, which can be derived through an error-variance decomposition (see, e.g., Nijs et al., 2007).
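To make these mechanics concrete, the following sketch derives an impulse-response function and the summary statistics listed above from an estimated VAR. It is a minimal illustration, assuming log-transformed sales and promotion series in a pandas DataFrame; the variable names, lag selection, and the tolerance-based dust-settling rule are simplifying assumptions, not the procedure of any cited study.

```python
import pandas as pd
from statsmodels.tsa.api import VAR

def promotion_irf_summary(data, shock="promotion", response="sales",
                          periods=26, tol=1e-3):
    """Fit a VAR in (log) levels and summarize the impulse response of `response`
    to a one-standard-deviation shock in `shock`. In practice, the specification
    (levels, differences, or error-correction form) should follow preliminary
    unit-root and cointegration tests; this sketch skips that step.
    """
    results = VAR(data).fit(maxlags=8, ic="aic")
    irf = results.irf(periods)
    cols = list(data.columns)
    path = irf.orth_irfs[:, cols.index(response), cols.index(shock)]

    immediate = path[0]                        # same-period effect
    permanent = path[-1]                       # value the IRF converges to
    # dust-settling period: last period at which the IRF still differs
    # noticeably from its limiting value
    settled = max((t for t, v in enumerate(path) if abs(v - permanent) > tol),
                  default=0)
    cumulative = path[: settled + 1].sum()     # total effect over that window
    return {"immediate": immediate, "permanent": permanent,
            "dust_settling": settled, "cumulative": cumulative}
```

In applied work, the convergence check is usually based on the significance of the IRF estimates rather than on a fixed tolerance; the tolerance is used here only to keep the sketch self-contained.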
Marketing-mix effectiveness

Initial applications of the persistence-modeling approach in marketing focused on the quantification of short- and long-run elasticities of different marketing-mix instruments on a variety of performance metrics. The marketing-mix instruments included, among others, advertising support (Dekimpe & Hanssens, 1995; van Heerde et al., 2013), price promotions (Dekimpe et al., 1999; Slotegraaf & Pauwels, 2008; Srinivasan et al., 2004), product innovations (Pauwels et al., 2004), assortments (Bezawada & Pauwels, 2013), or competitive activities (Steenkamp et al., 2005), and the performance metrics have been primary (Dekimpe et al., 1999; Nijs et al., 2001) or secondary demand (Dekimpe & Hanssens, 1995), brand and retailer revenues (Srinivasan et al., 2004), profitability (Dekimpe & Hanssens, 1999), or stock prices (Pauwels et al., 2004), among others. While many studies have focused on aggregate performance metrics, others explored the heterogeneity in response across performance components such as category incidence, brand choice, and purchase quantity (Pauwels et al., 2002), or across consumer segments (Lim et al., 2005; Sismeiro et al., 2012). In combination, these studies have resulted in a rich set of empirical generalizations on marketing's short- and long-run effectiveness.

Following this initial wave of studies, persistence modeling has added substantial insights to multiple research streams, among which (i) the marketing-finance interface, (ii) research on the relative effectiveness of, and inter-relationships between, numerous new/social media, and (iii) the relevance of mindset metrics to better understand the sequencing of advertising's performance effects. Common features across these streams are (i) the potential presence of multiple complex feedback loops with little a priori knowledge on the direction of the relationships, and (ii) an interest in disentangling the short- and long-run effects of changes in one variable on other variables in the system. Srinivasan et al. (2015), for example, considered the effects of consumer activities on paid, owned, and earned online media on sales, as well as potential feedback loops with the more traditional marketing-mix elements of price, advertising and distribution, while Hewett et al. (2016) studied how social media sites create a reverberating "echoverse" for information dissemination, involving feedback loops ("echoes") among multiple information sources, such as corporate communications, news media, and user-generated social media. Valenti et al. (2023), in turn, considered how advertising influences how consumers think, feel, and experience a product, how different mindset metrics interact with one another and mediate advertising's impact on sales, and how not only the size but also the sequence of these influences varied across brands and categories.
Marketing-finance interface Time-series methods are well suited to analyze stock-price data and quantify their sensitivity to new marketing information.Not only can they be employed without having to resort to strong a priori assumptions about investor behavior such as full market efficiency, VARX models are also very flexible to accommodate feedforward and feedback loops between investor behavior and managerial behavior.Given the increasing interest in understanding the linkage between product markets (or Main Street) and financial markets (or Wall Street), it is not surprising that time-series models in general, and VAR models in particular, have been used extensively in that research domain.Chakravarty and Grewal (2011), for example, showed how managers, in response to investor expectations for short-term stock returns, may decide to modify their R&D expenditures, their marketing budgets, or both to avoid short-term earnings shortfalls, even when this results in a reduced long-run profitability.Joshi and Hanssens (2010), in turn, established that advertising can have a direct effect on firm value, beyond its indirect effect through market performance, while Luo et al. (2013) showed how variance in brand ratings across consumers (brand dispersion) hurts stock returns yet reduces firm risk.More extensive reviews on how persistence modeling has contributed to the marketing-finance interface discussion are available in Srinivasan and Hanssens (2009), Luo et al. (2012), and Edeling et al. (2021). Social media The emergence of new media has brought along a new set of marketing metrics, which can easily be tracked over time.Given the multitude of these new media (Twitter, Facebook, etc.), the large number of metrics that can be derived from them (like website visits, paid search clicks, Facebook likes, Facebook unlikes, etc.), and the large number of feedback loops that may exist (not only among these online metrics themselves, but also with more traditional offline metrics), many researchers have opted for the flexibility of VAR models, with their data-driven identification of relevant effects, to study these phenomena.Trusov et al. (2009), for example, studied the effect of word-of-mouth marketing on membership growth at an online social network, and compared it with more traditional media and marketing events.Word-of-mouth referrals were found to have higher elasticities than more conventional marketing tactics.Borah and Tellis (2016) identified asymmetric halo effects where negative online chatter for one product increases negative chatter about other (own and competing) products.These halo effects were shown to subsequently affect downstream performance metrics such as sales and stock performance.Luo and Zhang (2013), in turn, linked various buzz and online traffic measures to the subsequent performance of a firm's stock-market value.We refer to Dekimpe and Hanssens (2023) for further illustrations. Inclusion of mindset metrics While mind-set metrics such as awareness, liking and consideration have a long history in marketing (e.g. 
as building blocks in hierarchy-of-effects models), questions/doubts about their long-term sales effects through brand building have long prevailed.Not only were time-series data on these metrics often missing, prior evidence on the exact inter-relationships and sequence of these effects was mixed (Srinivasan et al., 2010).Indeed, marketing theory appears insufficiently developed to posit non-equivocally one specific sequence.A flexible modeling approach that does not impose an a priori sequence on the effects, yet which can capture multiple interactions among the various measures, was therefore called for.VAR models are ideally placed to do so, and were used in, among others, Srinivasan et al. (2010) and Pauwels and van Ewijk (2013).Srinivasan et al. (2010), for example, added, for more than 60 CPG brands, various mindset metrics to a VAR model that already accounted for the short-and long-run effects of advertising, price, distribution and promotions.Importantly, the mind-set metrics added considerable explanatory and forecasting power, and can therefore be used by managers as early performance indicators.Pauwels and Van Ewijk (2013), in turn, combine slower-moving attitudinal survey measures with rapidlychanging online behaviorial metrics to explain the sales evolution of over 30 brands across a diverse set of categories (CPG as well as services and durables). Integrating developments across all three research streams, Colicev et al. (2018) estimated a 13-equation VAR model through which they studied the impact of owned and earned social media (OSM and ESM) on brand awareness, purchase intent, and customer satisfaction, while also linking these consumer mindset metrics to shareholder value (abnormal returns and idiosyncratic risk).Other studies in this domain are reviewed in, among others, Srinivasan (2015) and Dekimpe and Hanssens (2023). Toward a more normative focus The initial persistence-modeling applications in marketing tended to focus on the introduction of a new technique (e.g.unit-root testing and Impulse Response Functions in Dekimpe & Hanssens, 1995) or Generalized Impulse Response Functions (GIRFs) in Dekimpe and Hanssens (1999) or aimed to establish a superior model fit and/or forecasting performance when adding a certain type of variables (e.g.mindset metrics in Srinivasan et al., 2010).However, subsequent applications gradually started to have a more substantive (descriptive and/or hypothesis-testing) focus. Because of a growing data availability, persistence models were increasingly estimated in a consistent way across a broad spectrum of brands and categories.This resulted in various empirical generalizations on the typical effect sizes for a variety of (short-and long-run) marketing-mix elasticities.For example, based on an analysis of 25 US categories and close to 600 Dutch CPG categories, Nijs et al. (2001) and Srinivasan et al. (2004) concluded that typical estimates of the total (long-run) price promotion elasticities are around 3.70 for brand sales, 2.30 for manufacturer revenue, 0.50 for category sales at the chain level, 1.40 for category sales at the national level, −0.05 of retailer revenue and −0.70 for retailer margins. 6Moreover, given the underlying multitude of elasticity estimates, studies started to add an additional modeling step to better understand the observed heterogeneity by linking them to a broad set of brand-and category-related contingency factors.Nijs et al. 
(2001), for example, examined to what extent price promotions' short- and long-run primary-demand elasticities were systematically and predictably linked to the category's promotional depth and frequency, advertising intensity, competitive reactivity and competitive structure. As another example, Kübler et al. (2020) studied when (i.e., for which brands and industries) various sentiment extraction tools had the most explanatory and predictive power on a variety of mindset metrics. Other studies have done so in an international setting and have investigated to what extent marketing elasticities differ, for example, between emerging and developed markets (see, e.g., Pauwels et al., 2013).

These Empirical Generalizations, along with their relevant contingency factors, are not only of clear academic interest, but also offer managers a benchmark against which to compare the elasticities of their own brand(s). Still, more actionable insights could be obtained when using the resulting elasticities to arrive at normative recommendations. Insights into the relative (short- and long-run) effectiveness of the different on- and off-line media that many brands use, for example, are essential when optimizing their media allocation decisions.

Building on a long tradition (see, e.g., Hanssens et al., 2001, Chapter 9, for a review, or Leeflang et al., 2000, pp. 154-155 for a formal derivation), several studies have started to infuse persistence-based response elasticities in the Dorfman and Steiner (1954) recommendation to set optimal allocation shares in accordance with the instruments' elasticity ratios. Kireyev et al. (2016), for example, found that a bank's online search elasticities were significantly higher than the corresponding display elasticities, and argued that the firm should (relative to its current allocation) spend 36% more on the former and 31% less on display advertising to optimize its customer acquisition. Joshi and Hanssens (2010), in turn, used estimated long-run advertising response elasticities to derive the profit-maximizing advertising levels for two personal-computer brands, while Pauwels et al. (2016) used persistence-based elasticity estimates in combination with the Dorfman and Steiner (1954) allocation rule to show how synergy effects between different media channels can substantially alter a brand's optimal media allocation. More recently, Datta et al. (2022) considered the cross-country budget allocation for a leading washing-machine brand, and compared its actual allocation with (i) the allocation that would be recommended on the basis of the Dorfman-Steiner elasticity-ratio rule, and (ii) the allocation implied by the improved optimal allocation rule of Fischer et al. (2011), which takes into account not only the market responsiveness (which they obtained from an error-correction specification) but also the size of each country's profit contribution (last year's sales × profit contribution) along with its growth potential over a given planning horizon.
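As a stylized illustration of the elasticity-ratio allocation logic referred to above, the sketch below splits a fixed budget in proportion to long-run response elasticities. The elasticity values are placeholders, not estimates from Kireyev et al. (2016) or any other cited study.

```python
def dorfman_steiner_shares(elasticities):
    """Allocate a fixed budget across instruments in proportion to their (long-run)
    response elasticities, i.e., set spending shares equal to elasticity ratios."""
    total = sum(elasticities.values())
    return {instrument: e / total for instrument, e in elasticities.items()}

# Hypothetical long-run elasticities for two online channels
shares = dorfman_steiner_shares({"search": 0.045, "display": 0.020})
# -> search gets roughly 69% of the budget, display roughly 31%
```

Extensions such as Fischer et al. (2011) additionally weight each unit (e.g., each country) by its profit contribution and growth potential, which the simple ratio rule above deliberately ignores.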
Conclusion

We see an increasing use of time-series models in the academic marketing literature, not only because more extensive data sets (in terms of both the included variables and the length of the time window covered) have become available, combined with a growing openness to data-driven Empirics-First research (Golder et al., 2023), but also because various research questions have come to the fore that (i) likely involve multiple feedback loops, and for which (ii) marketing theory is insufficiently developed to specify a priori all temporal precedence relationships. In those instances, the flexibility of VAR models to capture dynamic inter-relationships, and the ability of persistence modeling to quantify the short- and long-run effects of the various influences at hand, become very valuable.

Moreover, by combining the resulting short- and long-run effectiveness estimates with some well-known optimal allocation rules as developed by Dorfman and Steiner (1954) and extended in Fischer et al. (2011), the managerial insights and recommendations have become more actionable. In line with that evolution, persistence modeling has seen a growing acceptance in practice as well. A non-exhaustive list of companies/brands that have already made use of persistence modeling to support their decision making includes Amazon, Bank of America, Heineken, L'Occitane, Nissan Motor, Sony, Tetra Pak, Unilever, Vistaprint, and World Education Services.⁷ We hope to see a further diffusion of these techniques in the academic and business marketing communities in the years to come.

ORCID iD
Marnik G. Dekimpe https://orcid.org/0000-0001-5011-5269

Notes
1. A more in-depth discussion can be found in Hanssens et al. (2001, Chapters 6 and 7), Leeflang et al. (2000, Section 17.3), or Pauwels (2018).
2. This lackluster attention among marketing academics stood in sharp contrast with time-series models' popularity in business, where their superior forecasting performance was well recognized (Dalrymple, 1987; Winklhofer et al., 1996).
3. These studies can be seen as precursors of the Empirics-First approach advocated in Golder et al. (2023).
4. Other stylized examples are given in, among others, Dekimpe and Hanssens (1999) and Sismeiro et al. (2012). Nijs et al. (2001) provide IRFs for the over-time impact of a promotional shock on the category sales in two Dutch CPG markets, Dekimpe and Hanssens (1995) show the IRF for the over-time impact of an advertising shock on a home-improvement chain's sales, while Hewett et al. (2016) use IRFs to graphically depict how bad news in the financial services industry spreads through multiple communication channels.
5. VARX models are often estimated on log-transformed variables (e.g., Nijs et al., 2001), so that the (immediate, short-run, and long-run) impact measures can immediately be interpreted as unit-free elasticities. Other studies have worked (to avoid potential aggregation biases) with linear specifications, from which one subsequently derived the elasticities at the sample mean or median (e.g., Srinivasan et al., 2004; Trusov et al., 2009).
6. A more extensive set of time-series-based empirical generalizations can be found in Hanssens (2015).
7. We are indebted to Koen Pauwels for providing several of these examples.
4,720.8
2024-01-17T00:00:00.000
[ "Business", "Economics" ]
The results of studies on the extension of methodological capabilities of determining physico-chemical conditions of lead-based liquid-metal coolants The authors consider examples of using the thermal cycling method for heavy lead- and lead-bismuth-based coolants with simultaneous measurement of the thermodynamic activity (TDA) parameter of oxygen impurities. In these experiments, structural EI852 and EP823 ferritic-martensitic steels as well as EP302 austenitic steel were used as sources of iron impurities. As a result of the experiments, it was found that iron oxides previously formed in the coolant, as such or in the form of an oxide film on the structural steels, are not inert phases in heavy coolants, but can exchange iron and oxygen with the melt. The nature of this exchange depends both on the actual coolant impurity state and on external conditions, including, first of all, temperature. The experimental data obtained by thermal cycling of lead- and lead-bismuth coolants were analyzed both in the region of relatively high oxygen TDA values close to the oxygen saturation concentration and in the region of sufficiently low values of this parameter close to the saturation concentration of iron impurities. It is concluded that the proposed coolant thermal cycling method is informationally significant and can be recommended for further use, for example, to obtain quantitative data on the content of iron impurities in the coolant. Introduction In contrast to liquid-metal corrosion processes, where the steel components dissolve in the coolant, the direct entry of structural steel components into the coolant in the presence of protective passivation coatings on the surface of these steels is a complex multistage process of diffusion and interaction of steel and oxygen components. This process takes place simultaneously in the steel itself, on the inner boundary of the oxide film, in the volume occupied by the oxide coating, on the outer border between the oxide layer and the coolant, and directly in the coolant itself. This is a complex interconnected process (Gromov and Shmatko 1997, Martinelli and Balbaud-Célérier 2011, Mulier 2008), in which the physicochemical state of the liquid metal melt is of great importance. To determine it, it was common practice to take samples and subsequently analyze them for the content of oxygen and metallic impurities. However, this method was practically abandoned due to its relatively low sensitivity to dissolved oxygen and insufficient information content on the main products of the interaction between the coolant and structural steels. At the same time, taking into account the importance of monitoring the oxygen conditions of the coolant, oxygen thermodynamic activity sensors have come into common use. This is a significantly more sensitive means of monitoring the coolant condition for the oxygen impurity. However, unlike the sampling procedure, where the mass content of this impurity is determined, the relationship between the oxygen TDA and its mass content is uniquely determined only for a coolant refined from other impurities. This relationship is tabulated for both lead- and lead-bismuth coolants.
In the well-known E-T diagram, which relates the readings of the sensor E to the temperature of the coolant T, this tabulated dependence is represented, for each oxygen concentration, by a straight line, which at low temperatures can pass into another line characterizing the ultimate solubility of oxygen and the formation of the corresponding excess solid coolant oxide. However, with changes in the coolant physicochemical condition due to the emergence of other impurities in it, in particular iron as the main component of steels, the nature of the temperature dependence of the readings of the oxygen TDA sensors changes significantly, which is the basis for using this technique for a more detailed analysis of the coolant's real condition. Analysis of temperature dependencies of changes in oxygen thermodynamic activity In the region of sufficiently low oxygen TDA (low oxygen partial pressures), the problem of identifying a "ferruginized" coolant does not present any particular difficulties, since, in this region, the temperature-dependent changes in the sensor's EMF are qualitatively different from the "isoconcentration" distribution. As practice shows, at oxygen TDA levels of a ≤ 10^-3, a maintained "isoconcentration" dependence of the sensor readings is more likely an exception than a rule. It is much more difficult to distinguish between these two states in the region of partial oxygen pressures that are close to the release of excess solid oxides from the coolant. At present, it is generally accepted that the "isoconcentration" temperature dependence of the sensor readings in this region is fulfilled, and when the temperature decreases, the PbO phase is released (Chernov et al. 2003, Askhadullin et al. 2003). In other words, the iron impurity goes into the oxidized state and becomes inert with respect to the lead melt and the oxygen in it. At the same time, it often turns out that in the region of ~ 400-600 °C, during a temperature drop the sensor readings stabilize at E ~ 120-125 mV, followed by a slight increase with a further decrease in T. This contradicts thermodynamic calculations, according to which the potential of the {Pb}-<PbO> system has a small negative temperature slope and stabilizes in the considered temperature range at a level of E ~ 110-108 mV. These deviations may seem insignificant if one does not know that, in this region of changes in the sensor readings, each millivolt corresponds to a very large change in the mass of dissolved oxygen. In fact, this concerns the impact of iron impurities on the solubility of oxygen in the lead coolant. Thus, for a pure {Pb}-<PbO> system, in the range of oxygen activity sensor (OAS) readings E = 120-108 mV, the melt must absorb up to 90% of the oxygen mass of the saturation concentration (as calculated in Ivanov et al. 2005). At one time, attempts were made to explain the observed deviations of the sensor readings at the level of equilibrium of the coolant with the oxide phase by various factors (the influence of the thermoelectric power, pollution of the sensors, etc.). However, the data on the level of equilibrium in the "coolant-oxide phase" system (120-125 mV) were systematic and were recorded by various sensors. At the same time, steel was always present in the system.
Meanwhile, a previous analysis of the observed differences in the readings of two TDA oxygen sensors with different reference electrodes ({Pb}-<PbO> and {Bi}-<Bi₂O₃>) in the same medium showed a stable level of 108 mV, which corresponds to the thermodynamic calculations in (Ivanov et al. 2002). Indicative in this regard is also the nature of the EMF changes of the same sensor in the experiments with a rotating disk, carried out when EI852 steel samples were tested in lead- and lead-bismuth coolants. In the initial period (after the sample rotation was established), characterized by an increase in the interaction of the liquid metal melt with atmospheric oxygen, a regular decrease in the sensor readings was recorded: in lead, at 620 °C, from 124-122 mV to ~ 110 mV, and in lead-bismuth, at 620 °C, from ~ 125 mV to ~ 77 mV. As slag accumulated on the surface of the melt (which was observed visually), the OAS readings increased to E ~ 115 mV in lead and to E ~ 79 mV in lead-bismuth. The experiments were carried out in metal cups made of 0X18H10T steel, which did not exclude the release of metal components directly into the coolant. Therefore, these results were interpreted as a dynamic balance of deoxidation and oxidation processes with a constant supply of iron from the steel and oxygen from the atmosphere. Thus, the data presented indicate a qualitative change in the coolant condition in the presence of an iron source not only in the "deoxidized" region but also in the range of oxygen activity close to unity. To confirm this thesis, experiments were carried out to identify the nature of changes in the potential of the melt in the state of its equilibrium with the resulting solid phase. For this purpose, the lead melt was periodically cycled during the exposure of EP823 steel samples in it at a base temperature of 620 °C. Before the steel samples were immersed, the lead melt was previously refined in air, and the resulting slag was removed from its surface. The surface of the samples was relatively small (37 cm²), and a ceramic cup (Al₂O₃) was used as a lead container. All this allowed us to hope that, at the beginning of the exposure of the samples, the melt did not contain metallic impurities in appreciable amounts. Figure 1 shows the changes in the OAS readings during the melt cooling at certain time intervals (campaign with EP823 steel). The obtained dependences indicate that in the initial period of steel exposure, as expected, the state of the melt was close to refined. It should be noted that, in this case, the sensor EMF level corresponded to the calculated values (E ~ 108-110 mV). Subsequently, a gradual departure from the initial level was observed, associated with the entry of iron into the melt. Due to the small contact surface of the coolant with the steel as well as low diffusion fluxes of iron through the oxide film formed on the EP823 steel samples, the potential change process took quite a long time (~ 760 h). The level of the sensor readings reached at this point of time during the separation of the solid oxides in the melt cooling process was E ~ 120-124 mV. A much more rapid coolant "ferruginization" was observed in the experiments with EP302 steel, in which the contact surface of the steel with the coolant was 189 cm², and the amount of lead was approximately two times less (Fig. 2). The first thermal cycling of the coolant was carried out immediately following the passivation, after 17 hours of exposure of the EP302 steel samples in the lead melt.
The minimum values of the sensor readings were E ~ 110 mV, but during further cooling they increased to ~ 115-117 mV. After about 150 hours of exposure, the precipitation of the solid oxide was recorded already at the level of E ≈ 120-124 mV. The experiments show that, over time, the liquid metal is saturated with iron, and the solid phase formed during the cooling process is not pure lead oxide but, probably, a complex oxide with some iron content. It is logical to assume that the impact of iron impurities should, to a varying degree, also be manifested in the region of intermediate oxygen activities in the coolant (relatively far from the equilibrium line <Fe>-<Fe₃O₄> and the equilibrium line {Pb}-<PbO>) and, moreover, in the region of activities directly adjacent to the line <Fe>-<Fe₃O₄>. To analyze this effect, it is more convenient to use the (lg p_O2, 1/T) coordinates, with the lg p_O2 values plotted along the ordinate axis and the 1/T values along the abscissa axis. The partial pressure of molecular oxygen in equilibrium with the oxygen dissolved in the melt, as a function of the sensor readings and temperature, is calculated by the formula

lg p_O2 = -20609/T + 10.188 - N_e F E × 10^-3 / (2.303 R T),

where -20609/T + 10.188 is the partial pressure logarithm of molecular oxygen (lg p_O2) in the reference electrode Bi-Bi₂O₃; F = 96,485 C/mol is the Faraday constant; R is the universal gas constant; N_e = 4 is the number of electrons participating in the transfer; E denotes the OAS readings, mV; T is the temperature, K. Figure 3 shows the calculated temperature dependences for a pure {Pb}-<PbO> system with a constant oxygen content in the melt (from 10^-4 to 10^-6.5 atomic fractions). The approximating dependences presented on the graph show that, in the (lg p_O2, 1/T) coordinates, in contrast to the (E, T) coordinates, they have a practically constant temperature slope (from -13541 to -13545), which makes these coordinates convenient to use. The experimentally observed deviations from the temperature slope indicated for a pure system should characterize the degree of the impact of iron on the lead melt oxidation potential. The iron available in the coolant or coming from the source consumes part of the oxygen available in the melt. As a consequence, the temperature slope in such a system should increase. If dissociation or conversion of any oxides with the release of free oxygen into the melt proceeds in the system, then the temperature coefficient should decrease in absolute terms. Figure 4 shows the results of measuring the system oxygen potential in the coolant cooling process obtained during the EP302 steel campaign. Changes in the melt oxygen potential were observed only in transitional regimes of temperature changes to lower values at the end of the campaign when the experiments were completed. Of great interest to us are the results of the final melt cooling (during 923 hours after the campaign started). The graph shows that with the onset of cooling, the temperature dependence of the partial pressure first approaches the equilibrium line <Fe>-<Fe₃O₄>, then it practically coincides with it and, at a temperature of ~ 450 °C, a sharp evolution of oxygen into the coolant begins, accompanied by the corresponding increase in p_O2. The process of oxygen evolution continues at a slower pace until the melt freezes. The results obtained confirm that, under appropriate conditions, iron oxides in heavy lead-containing coolants can dissociate.
This is apparently due to the lack of iron in the coolant necessary for further oxide formation. The decomposition of iron oxides by this mechanism is accompanied by the release of oxygen in accordance with the stoichiometry of these oxides. Thus, it was found that iron oxides previously formed in the coolant, as such or in the form of an oxide film on the structural steels, are not inert phases in heavy coolants, but can exchange iron and oxygen with the melt. The nature of this exchange depends both on the actual coolant impurity state and on external conditions, including, first of all, temperature. Conclusion The authors considered examples of using the thermal cycling technique for heavy lead- and lead-bismuth-based coolants in order to obtain additional information about their actual physicochemical state based on the analysis of the oxygen thermodynamic activity behavior. It is shown that the proposed coolant thermal cycling method is informationally significant and can be recommended for further use, for example, to obtain quantitative data on the content of iron impurities in the coolant.
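For readers who want to experiment with the EMF-to-lg p_O2 conversion used in the analysis above, the snippet below implements the Nernst-type relation as reconstructed in the text (the exact constants of the original instrumentation may differ) and fits the temperature slope in the (lg p_O2, 1/T) coordinates. The EMF/temperature pairs are hypothetical illustration values, not measurements from the paper.

```python
# Sketch: convert oxygen-sensor EMF readings to lg p_O2 and estimate the
# temperature slope in (lg p_O2, 1/T) coordinates. The Nernst-type relation
# follows the reconstruction given in the text; the (E, T) data below are
# hypothetical illustration values.
import numpy as np

F = 96485.0        # Faraday constant, C/mol
R = 8.314          # universal gas constant, J/(mol K)
N_E = 4            # electrons transferred per O2 molecule

def lg_p_o2(emf_mv, temp_k):
    """lg p_O2 of the melt from OAS EMF (mV) and temperature (K)."""
    ref = -20609.0 / temp_k + 10.188          # Bi-Bi2O3 reference electrode term
    return ref - N_E * F * (emf_mv * 1e-3) / (2.303 * R * temp_k)

# hypothetical readings taken while cooling the melt from 620 C to 400 C
temps_c = np.array([620.0, 560.0, 500.0, 450.0, 400.0])
emfs_mv = np.array([110.0, 112.0, 115.0, 118.0, 121.0])

temps_k = temps_c + 273.15
lg_p = lg_p_o2(emfs_mv, temps_k)

# least-squares slope of lg p_O2 versus 1/T
slope, intercept = np.polyfit(1.0 / temps_k, lg_p, 1)
print("lg p_O2 values:", np.round(lg_p, 2))
print("temperature slope d(lg p_O2)/d(1/T):", round(slope, 0))
```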
3,374
2020-03-01T00:00:00.000
[ "Materials Science", "Engineering" ]
Investigating Silver Coordination to Mixed Chalcogen Ligands Six silver(I) coordination complexes have been prepared and structurally characterised. Mixed chalcogen-donor acenaphthene ligands L1-L3 [Acenap(EPh)(E'Ph)] (Acenap = acenaphthene-5,6-diyl; E/E' = S, Se, Te) were independently treated with silver(I) salts (AgBF4/AgOTf). In order to keep the number of variables to a minimum, all reactions were carried out using a 1:1 ratio of Ag/L and run in dichloromethane. The nature of the donor atoms, the coordinating ability of the respective counter-anion and the type of solvent used in recrystallisation all affect the structural architecture of the final silver(I) complex, generating monomeric silver(I) complexes {[AgBF4(L)2] (1 L = L1; 2 L = L2; 3 L = L3), [AgOTf(L)3] (4 L = L1; 5 L = L3), [AgBF4(L)3] (2a L = L1; 3a L = L3)} and a 1D polymeric chain {[AgOTf(L3)]n 6}. The organic acenaphthene ligands L1-L3 adopt a number of ligation modes (bis-monodentate μ2-η2-bridging, quasi-chelating combining monodentate and η6-E(phenyl)-Ag(I), and classical monodentate coordination), with the central silver atom at the centre of a tetrahedral or trigonal planar coordination geometry in each case. The importance of weak interactions in the formation of metal-organic structures is also highlighted by the number of short non-covalent contacts present within each complex. Crystal engineering utilises the metal-ligand coordination bond to construct coordination networks, generally through the self-assembly of tuneable building blocks [4-14]. Bridging organic ligands acting as rigid supports are linked in an ordered lattice, building extended and often multidimensional networks with central metal ions. Modification of the functional groups within the ligand shell can control the properties, topology and geometry of the extended network and lead to potential applications as new functional materials [10-14]. Nevertheless, the unpredictability of the polymeric architecture is a major challenge when designing supramolecular complexes. Self-assembly, which dictates the structural motif of the final complex, is controlled by experimental conditions [10-14]. Factors such as the central metal ion oxidation state, the coordination geometry, the metal-to-ligand ratio, the nature and spacer length of the bridging ligand, the presence of solvents and the type of counter-anions all play a significant role [10-14]. A subtle variation to any one of these parameters can influence the geometry of the final solid-state structure, generating for example extended three-dimensional networks, linear chain polymers or simple monomeric species [10-15]. Naphthalenes [18-21] and related 1,2-dihydroacenaphthylenes (acenaphthenes) [22] provide the perfect framework from which to design tunable donor ligands for the preparation of metal complexes [23,24]. The rigidity of the organic backbone and the geometric constraints unique to these compounds, imposed by a double substitution at the close peri-positions, ensure that metal coordination is favoured in order to achieve a relaxed geometry [24]. We have previously utilised the naphthalene backbone to prepare a variety of chalcogen and phosphorus compounds and associated metal complexes.
Results and Discussion The three mixed acenaphthene derivatives Acenap[EPh][E'Ph] (Acenap = acenaphthene-5,6-diyl; E/E' = L1 SeS, L2 TeS, L3 TeSe) [49-51] were each independently treated with silver tetrafluoroborate [AgBF₄] and silver trifluoromethanesulfonate [AgOTf]. In order to keep the number of variables to a minimum, the reactions were carried out using a 1:1 ratio of Ag/L and run in dichloromethane under an oxygen- and moisture-free nitrogen atmosphere. The complexes 1-6 obtained were characterised by multinuclear NMR and IR spectroscopy and mass spectrometry, and the homogeneity of the new compounds was, where possible, confirmed by microanalysis; ⁷⁷Se and ¹²⁵Te NMR data can be found in Table 1. Crystal structures were determined for 1-6 and for 2a and 3a (recrystallisation products of 2 and 3, respectively). A number of the silver(I) complexes were found to be unstable towards light whilst in solution. Selected interatomic distances, angles and torsion angles are listed in Tables 2 and 3. Hydrogen-bond and other non-conventional weak inter- and intra-molecular interaction data can be found in Table S1 in the Electronic Supporting Information (ESI). Further crystallographic information can be found in Tables 4-6 and in Figures S1-S4 and Tables S2 and S3 in the ESI.

Table 1. ⁷⁷Se and ¹²⁵Te NMR spectroscopic data. (Table footnotes: [a] van der Waals radii used for calculations: r_vdW(Br) 1.85 Å, r_vdW(S) 1.80 Å, r_vdW(Se) 1.90 Å, r_vdW(Te) 2.06 Å [55]; [b] splay angle: Σ of the three bay-region angles minus 360°.)

The asymmetric unit of 3 contains four silver(I) centres, eight L3 ligands, four non-coordinating counter-anions (BF₄⁻) and eight additional dichloromethane molecules. Within the structural architecture of complexes 1 and 2, two crystallographically unique molecules of the unsymmetrical mixed-chalcogen acenaphthene donor (L1/L2) act as monodentate ligands, binding in each case via the least electronegative chalcogen atom (Se/Te; Figure 3). The two-coordinate central silver atom adopts a distorted bent coordination geometry, with E(1)-Ag(1)-E(2) angles of approximately 135°. In all three structures the geometry around the silver centre is governed by the conformation of the rigid acenaphthene supports. The axial-equatorial conformation of the aromatic rings in both acenaphthene fragments of each complex (type AB) [56-68] positions the E-C_Ph bonds close to the acenaphthene plane, with the secondary (E'-C_Ph) bond aligned perpendicular to it [in each case χ(E) < χ(E'); E is the monodentate coordinating chalcogen donor]. The two facially bound axial E'(phenyl) rings are orientated parallel to their respective C_Acenap-E'-C_Ph plane and subsequently linked to the adjacent silver centre via an η⁶-E'(phenyl)•••silver type interaction to complete a quasi-chelate ring in each case (Figure 3). Coordination to silver has no significant effect on the conformation of the acenaphthene components or the degree of molecular distortion occurring within the organic frameworks of 1-3 compared with the parent ligands L1-L3 [49-51]. The degree of distortion is related to the size of the atoms residing in the bay region, with an expected lengthening of the peri-gap observed as the heavier congeners are located at the 5,6-positions along the series. Reactions of Silver(I) Trifluoromethanesulfonate In contrast to the reactions with AgBF₄,
treatment of L1 and L2 with one molar equivalent of AgOTf afforded two isomorphous three-coordinate, monomeric silver(I) complexes [Ag(OTf){Acenap(L)}₃] (4 (L1); 5 (L2); Figures 2 and 6). Crystals suitable for X-ray diffraction were obtained by slow diffusion of hexane into a saturated dichloromethane (4) or dichloromethane/methanol (5) solution of the respective product. Recrystallisations of both products were performed at room temperature, in the absence of light, to prevent the complexes from decomposing. The two nearly identical asymmetric units contain six silver(I) centres, eighteen mixed-donor ligands (L1/L2) and, interestingly, six non-coordinating triflate counter-anions. [AgBF₄(L2)₃] 2a & [AgBF₄(L3)₃] 3a Experimental conditions such as the central metal ion oxidation state, the metal-to-ligand ratio, the nature and spacer length of the bridging ligand, the presence of solvents and the type of counter-anions can have a profound influence on the structural architecture of the final complex and add unpredictability to the self-assembly process [10-14]. Techniques and solvents used in the recrystallisation process can also affect the outcome of the final product. A subtle adjustment to the recrystallisation solvent systems for complexes 2 and 3 afforded two nearly identical three-coordinate, mononuclear, monomeric silver(I) complexes [Ag(BF₄){Acenap(TePh)(EPh)}₃] (2a E = S, 3a E = Se) with structures analogous to complexes 4 and 5. Crystals suitable for X-ray diffraction were obtained by slow diffusion of hexane into saturated dichloromethane/methanol (2) and tetrahydrofuran (3) solutions of the respective product. Further information on the crystal structures of 2a and 3a can be found in the ESI. General All experiments were carried out under an oxygen- and moisture-free nitrogen atmosphere using standard Schlenk techniques and glassware. Reagents were obtained from commercial sources and used as received. Dry solvents were collected from an MBraun solvent system. Elemental analyses were performed by Stephen Boyer at the London Metropolitan University. Infra-red spectra were recorded as KBr discs in the range 4000-300 cm⁻¹ on a Perkin-Elmer System 2000 Fourier transform spectrometer. ¹H- and ¹³C-NMR spectra were recorded on a Jeol GSX 270 MHz spectrometer with δ(H) and δ(C) referenced to external tetramethylsilane. ⁷⁷Se and ¹²⁵Te-NMR spectra were recorded on a Jeol GSX 270 MHz spectrometer with δ(Se) and δ(Te) referenced to external Me₂Se and Me₂Te respectively, with a secondary reference for δ(Te) to diphenyl ditelluride [δ(Te) = 428 ppm]. ¹⁹F-NMR spectra were recorded on a Bruker Ultrashield 400 MHz spectrometer with δ(F) referenced to external trichlorofluoromethane. Assignments of ¹³C and ¹H-NMR spectra were made with the help of H-H COSY and HSQC experiments. All measurements were performed at 25 °C. All values reported for NMR spectroscopy are in parts per million (ppm). Coupling constants (J) are given in Hertz (Hz). Mass spectrometry was performed by the University of St. Andrews Mass Spectrometry Service. Electrospray Mass Spectrometry (ESMS) was carried out on a Micromass LCT orthogonal accelerator time-of-flight mass spectrometer. Crystal Structure Analyses X-ray crystal structures for 1-6 and 2a were collected at −180(1) °C by using a Rigaku MM007 high-brilliance RA generator (Mo Kα radiation, confocal optic) and a Mercury CCD system. At least a full hemisphere of data was collected using ω scans. Data were collected for 3a at −148(1) °C.
Figure 3.
Two crystallographically distinct L1 ligands bind to the silver(I) center via monodentate selenium coordination (left) to form complex 1 (right; H atoms and solvent molecules omitted for clarity). The structures of 2 and 3 (adopting similar conformations to 1) are omitted here but can be found in Figure S1 in the ESI.
Figure 4. The bent metallocene motif found at the center of complex 1, formed from two η⁶-S(phenyl)•••Ag interactions. Comparative fragments found in complexes 2 and 3 are displayed in Figure S1, ESI.
Figure 5. Complex 1 viewed down the z-axis; BF₄⁻ counter-anions and dichloromethane solvent molecules stack in channels between the acenaphthene fragments. The packing of complexes 2 and 3, viewed down the y-axis, is displayed in Figure S2, ESI.
Figure 6. The three-coordinate, mononuclear silver(I) complex 4 (H atoms omitted for clarity). The structure of 5 (adopting a similar conformation to 4) is omitted here but can be found in Figure S3 in the ESI.
Figure 7. Weak Ag1•••S1 contacts in the secondary coordination sphere afford a distorted quasi-trigonal prismatic geometry around the central silver atom in 4 and 5 (phenyl rings and H atoms omitted for clarity; complex 4 shown).
Figure 9. View of the 1D extended helical chain polymer 6 along the x-axis (H atoms and solvent molecules omitted for clarity).
Figure 10. The repeating unit of extended helical chain polymer 6 (top; H atoms and solvent molecules omitted for clarity) and the central core of the repeating unit showing the three-coordinate, trigonal planar silver(I) geometry (bottom).
Figure 12. View of the 1D extended helical chain polymer 6 along the z-axis; silver atoms align in two columns with the closest non-bonding Ag•••Ag distance of 5.929(1) Å.
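The table footnote quoted earlier defines the splay angle as the sum of the three bay-region angles minus 360° and lists the van der Waals radii used to assess peri-interactions; the short helper below reproduces those two checks. The input angles and distance are hypothetical, not values taken from Tables 2-6.

```python
# Sketch of the two geometric checks used in the discussion above:
# (i) the splay angle, defined in the table footnote as the sum of the three
#     bay-region angles minus 360 degrees, and
# (ii) whether a peri-distance lies below the sum of van der Waals radii
#      (values quoted in the footnote, ref. [55]).
# The input angles and distance below are hypothetical illustration values.

VDW_RADII = {"Br": 1.85, "S": 1.80, "Se": 1.90, "Te": 2.06}  # Angstrom

def splay_angle(bay_angles_deg):
    """Sum of the three bay-region angles minus 360 degrees."""
    if len(bay_angles_deg) != 3:
        raise ValueError("expected exactly three bay-region angles")
    return sum(bay_angles_deg) - 360.0

def sub_vdw_contact(distance, atom1, atom2):
    """True if the E...E' peri-distance lies below the sum of vdW radii."""
    return distance < VDW_RADII[atom1] + VDW_RADII[atom2]

if __name__ == "__main__":
    # hypothetical bay-region angles (deg) and Te...Se peri-distance (Angstrom)
    print("splay angle:", splay_angle([128.5, 127.9, 116.2]), "deg")
    print("sub-vdW Te...Se contact:", sub_vdw_contact(3.20, "Te", "Se"))
```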
2,532.6
2012-11-01T00:00:00.000
[ "Chemistry" ]
BOUNDS FOR DISCONNECTION EXPONENTS We improve the upper bounds of disconnection exponents for planar Brownian motion that we derived in an earlier paper. We also give a plain proof of the lower bound 1/(2π) for the disconnection exponent for one path. Introduction The first purpose of this paper is to improve the upper bounds of Brownian critical exponents derived in Werner [14]. The basic ideas and tools of the proof are similar to those used in [14]. We refer to this paper for a detailed introduction and definitions of disconnection exponents for planar Brownian motion and for more references. Recall that, if B_1, . . . , B_n denote n independent planar Brownian motions started from (1, 0), the disconnection exponent η_n (for n ≥ 1) describes the asymptotic decay of the probability IP{∪_{j=1}^{n} B_j[0, t] does not disconnect 0 from infinity} when t → ∞, which is logarithmically equivalent to t^{-η_n/2} (we say that a compact set K disconnects 0 from infinity if it contains a closed loop around 0). We are going to show the following. In particular, η_1 < .469 and η_2 < .985 (the upper bound in [14] was η_n < n/2 - .0243/n). Lawler [10] recently showed that the Hausdorff dimension h of the 'frontier' of planar Brownian motion is exactly 2 - η_2. Combined with our estimate, this implies that h > 1.0156 (see also Bishop et al. [2], Burdzy-Lawler [3]). Let us just recall that it has been conjectured that η_1 = 1/4 and η_2 = 2/3 (see e.g. Duplantier et al. [5], Puckette-Werner [12]). These conjectures have been confirmed by simulations [12]. One of the motivations of this paper is to understand why the upper bounds in [14] are so far from the conjectured values. The second result of this paper is the lower bound η_1 ≥ 1/(2π). This result has been announced by Burdzy and Lawler (see e.g. Lawler [6]) but (to our knowledge) it has never been written up. Proofs of the fact that η_1 ≥ 1/π² can be found in [3], [6]. We supply a short proof of Theorem 2 to fill in this gap in the literature. This result has consequences for the Hausdorff dimension of the frontier of planar Brownian motion, and also for the Hausdorff dimension of the set of cut-points of planar Brownian motion (see Burdzy-Lawler [3], Lawler [7], Lawler [10]). For random walk counterparts, see e.g. Puckette-Lawler [11], Lawler [8]; see also Lawler [9] and Werner [16] for some other related results on disconnection exponents and non-intersection exponents. The paper is structured as follows. We first derive Theorem 2 in Section 2; in Section 3, we derive some results concerning extremal distance, and we finally prove Theorem 1 in Section 4. Lower bound We will often identify IR² and IC. Let B denote a complex Brownian motion started from 1. If T_R denotes the hitting time of the circle {z : |z| = R} by B, then the disconnection exponent η_1 is defined via the asymptotic decay, as R → ∞, of the probability that B[0, T_R] does not disconnect 0 from infinity; see e.g. [14] and the references therein for more details. We want to derive a lower bound for η_1, i.e. upper bounds for these probabilities (R > 1). Using the exponential mapping and conformal invariance of planar Brownian motion, one can notice that this is the same as finding an upper bound for the corresponding probabilities (r = log R > 0), where Z = (X, Y) is a two-dimensional Brownian motion started from 0. We now put down some notation. For all r > 0, the h_r's are defined as the lengths of the excursions of X below its maxima. For u ∈ [0, h_r], we define the corresponding excursion quantities. Lévy's identity (see e.g.
Revuz-Yor [13], Chapter VI, Theorem (2.3)) shows that (r −E 1 r (.), r ≥ 0) is identical to the excursion process of reflected linear Brownian motion. Put also is a linear Brownian motion started from 0, which is independent from X, E 1 and also from F r , r = r. It is easy to check that: Proposition 2 in Werner [15] (which is in some sense a slightly improved version of Beurling's Theorem), readily implies that for all where IP Fv denotes the probability measure corresponding to For v = v , the strong Markov property shows that A v and A v are independent. Hence, It is well-known (see e.g. Chung [4], page 206) that: We define: where F is an independent linear Brownian motion started from 0 under the probability measure IP F . Fix ε > 0 and put r 0 = 1/ε. so that n(H > r 0 ) = ε (see Revuz-Yor [13], Chapter XII, Exercice (2.10)). For all r > r 0 , one has and as the Excursion process of Brownian motion is a Poisson point process, We now state the following technical lemma: This lemma yields immediately that and completes the proof of Theorem 2. Proof of Lemma 1: There are various ways of deriving this identity. Let c denote the integral on the left-hand side of the identity in Lemma 1. It is very easy, using a reflection argument, to check that for all n > 0, Hence, integrating by parts yields: Define g(x) = x exp(−x 2 /32). Note that: We put We can rewrite (2) as follows: It is an easy exercice that we safely leave to the reader to check that for some fixed constants c , c and for all k > 0, Hence, by dominated convergence, For all n > 0, Hence, for all k ≥ 1, and as g(0) = 0, which concludes the proof. Proof: For all x, we define j(x) ∈ IN such that |x| ∈ [(j(x) − 1)a, j(x)a). Consider the function ρ in S defined by: . We now show that the ρ-distance of any continuous path joing L to U in S is greater or equal to 2π. Take a C 1 path (x(s), y(s)) s∈[0,l] in S, joining L to U (s is the euclidean arclength parameter and l the euclidean length of the path), that is such that y(0) = f(x(0)) − π and y(l) = f(x(l)) + π. Define Y s = y(s) − f(x(s)). It is easy to notice that dY s cos ϕ j(x(s)) ≤ ds. Note that for a C 1 odd function f on (−r, r) and S,U , L defined as in (3), (4) and (5), the same method shows that (in fact, this can also be viewed as a corollary of Proposition 1, using approximations of f by piecewise linear functions), which generalizes (7) in [14] and Proposition 1. Let us now just recall the following observation from [14]. If B denotes a planar Brownian motion started from 0 and τ its exit time from the domain S, then: where the first equality is a consequence of the symmetry of S, the third follows from conformal invariance of B under φ, and the last inequality is a consequence of properties of hitting times by reflected linear Brownian motion. Hence, with the same notation than in (8), Upper bound We very briefly recall some notation and results from [14]. We want to derive an upper bound for η 1 . We define for r > 0, where (as in the previous section) X and Y are two independent linear Brownian motions started from 0 andT r = inf{t > 0; X t = r}. For all r > 0, it is easy to see that q r ≤ Q r (with Q r defined as at the beginning of the previous section). Combining this with (1) shows that As in [14], we are going to consider a family of functions f such that the events: are disjoint. We will use (10) to evaluate each probability IP{A r f } and then sum over all functions f in this family. 
We define the sets and index families used below; in particular, J consists of the families K = (k_{i,j}) such that, for all but finitely many (i, j) ∈ I, k_{i,j} = 0. For K ∈ J (we put i_0(0) = 0), we define the function f_K on [-r, r] as follows:
1. f_K is odd and continuous;
3. f_K(r) = 2 k_{0,1} π;
4. If i_0(K) ≠ 0: for all 1 ≤ i ≤ i_0 and 1 ≤ j ≤ 2^{i-1}, the corresponding condition holds.
Note that Condition 2 implies that Condition 4 holds for all i ≥ 1 and j ∈ {1, . . ., 2^{i-1}}. Also, if K = (k_{i,j}) ≠ K′ = (k′_{i,j}) in J, then the definition of f_K yields the required property. We now evaluate ∫_0^r (f_K(x))² dx. An easy induction (over i_0) gives the corresponding bound; hence, using (10) and combining with (12), we eventually obtain an estimate in terms of θ(x) = Σ_{k∈ℤ} exp(-k²x), the usual Theta function. We now put b = 8π²/r and define the function g. We rewrite (14) accordingly. It remains to study the behaviour of g(b) when b → 0+. It actually turns out that the maximum M of g is attained in the limit b → 0+, which is not surprising. Considering the sequence b_n = 2^{-n}, one can express M = g(0+) in closed form. Finally, numerically, M > .03125, which completes the proof of Theorem 1 for one walk. As in [14], exactly the same technique provides an upper bound for the disconnection exponent for n > 1 Brownian motions. One just has to consider the sum of the terms IP{A_f^r}^n. The upshot is Theorem 1. Exactly as in [14], this result has some consequences for non-intersection exponents that we leave to the reader. Remarks. In Werner [14], the estimates obtained have at least three reasons for being far from the conjectured values. We try to remove one in the present paper (allowing Brownian motion to wind quickly from time to time). One would expect to obtain better estimates, for instance, by considering a family of functions F such that for some f ≠ g in F, the events A_f^r and A_g^r are not disjoint, and then estimating the corresponding sum. But to do this, we would need more precise estimates of IP{A_f^r} and IP{A_f^r ∩ A_g^r} (the latter is more difficult) than those derived in this paper. The other estimation loss occurs while restricting ourselves to studying the asymptotics of q_r. It is in fact likely that q_r and Q_r do have different asymptotic behaviours. This gap seems even more difficult to close using our type of approach.
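The conjectured values η₁ = 1/4 and η₂ = 2/3 mentioned in the introduction were supported by simulations [12]; the sketch below shows one crude way such an estimate can be set up, approximating a planar Brownian path by a fine random walk and testing disconnection of the origin by rasterising the trace on a grid and flood-filling from the boundary. This is not the method of [12]; grid resolution, step count and sample size are illustrative choices and the resulting exponent estimate is only indicative.

```python
# Crude Monte Carlo sketch of the disconnection probability: approximate a
# planar Brownian path B[0, t] started at (1, 0) by a fine random walk,
# rasterise it on a grid, and test with a flood fill whether the occupied cells
# disconnect the origin from the grid boundary. The probability that no
# disconnection occurs decays roughly like t^(-eta_1/2).
import numpy as np
from collections import deque

def disconnects_origin(path_xy, extent=8.0, cells=120):
    """Return True if the rasterised path separates (0, 0) from the boundary."""
    h = 2.0 * extent / cells
    idx = np.clip(((path_xy + extent) / h).astype(int), 0, cells - 1)
    blocked = np.zeros((cells, cells), dtype=bool)
    blocked[idx[:, 0], idx[:, 1]] = True

    origin = (int(extent / h), int(extent / h))
    if blocked[origin]:
        return True        # path runs through the origin cell: count as disconnected

    # flood fill the free cells reachable from the boundary of the grid
    seen = np.zeros_like(blocked)
    queue = deque()
    for i in range(cells):
        for j in (0, cells - 1):
            for a, b in ((i, j), (j, i)):
                if not blocked[a, b] and not seen[a, b]:
                    seen[a, b] = True
                    queue.append((a, b))
    while queue:
        a, b = queue.popleft()
        for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            na, nb = a + da, b + db
            if 0 <= na < cells and 0 <= nb < cells and not blocked[na, nb] and not seen[na, nb]:
                seen[na, nb] = True
                queue.append((na, nb))
    return not seen[origin]

def estimate_eta1(t=4.0, n_steps=5000, n_samples=200, seed=0):
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    not_disconnected = 0
    for _ in range(n_samples):
        steps = rng.normal(0.0, np.sqrt(dt), size=(n_steps, 2))
        path = np.cumsum(steps, axis=0) + np.array([1.0, 0.0])
        if not disconnects_origin(path):
            not_disconnected += 1
    p = not_disconnected / n_samples
    # P(no disconnection) ~ t^(-eta_1/2), hence eta_1 ~ -2 log P / log t
    return p, -2.0 * np.log(max(p, 1e-9)) / np.log(t)

if __name__ == "__main__":
    p, eta = estimate_eta1()
    print(f"P(no disconnection) ~ {p:.3f}, crude eta_1 estimate ~ {eta:.2f}")
```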
2,524.2
1996-11-03T00:00:00.000
[ "Computer Science", "Mathematics" ]
Series solution of the time-dependent Schr\"{o}dinger-Newton equations in the presence of dark energy via the Adomian Decomposition Method The Schr\"{o}dinger-Newton model is a nonlinear system obtained by coupling the linear Schr\"{o}dinger equation of canonical quantum mechanics with the Poisson equation of Newtonian mechanics. In this paper we investigate the effects of dark energy on the time-dependent Schr\"{o}dinger-Newton equations by including a new source term with energy density $\rho_{\Lambda} = \Lambda c^2/(8\pi G)$, where $\Lambda$ is the cosmological constant, in addition to the particle-mass source term $\rho_m = m|\psi|^2$. The resulting Schr\"{o}dinger-Newton-$\Lambda$ (S-N-$\Lambda$) system cannot be solved exactly, in closed form, and one must resort to either numerical or semianalytical (i.e., series) solution methods. We apply the Adomian Decomposition Method, a very powerful method for solving a large class of nonlinear ordinary and partial differential equations, to obtain accurate series solutions of the S-N-$\Lambda$ system, for the first time. The dark energy dominated regime is also investigated in detail. We then compare our results to existing numerical solutions and analytical estimates, and show that they are consistent with previous findings. Finally, we outline the advantages of using the Adomian Decomposition Method, which allows accurate solutions of the S-N-$\Lambda$ system to be obtained quickly, even with minimal computational resources. Since the first attempts to create a quantum theory of the gravitational field in the 1930s [1,2], the search for a complete theory of quantum gravity has been one of the major fields of research in theoretical physics. In the pioneering work [1], Bronstein investigated the quantum mechanical measurement of the Γ^0_{01} component of the Christoffel symbols. This led to a fundamental limit on the temporal uncertainty intrinsic to any quantum measurement, Δt ≥ [ℏ/(c²Gρ²V)]^{1/3}, where V and ρ are the volume and the density of a self-gravitating massive body, respectively. The time uncertainty can then be related to the spatial uncertainty via Δx ≤ cΔt. By introducing the standard mass-density-volume relation, M = ρV, one also obtains the mass-time-density uncertainty relation, M ≥ ℏ/[c²Gρ(Δt)³]. This was one of the first generalised uncertainty relations (GURs) and, since then, many others have been proposed in the literature. (See [3,4] for reviews.) Of these, the most widely studied are the generalised uncertainty principle (GUP) and the extended uncertainty principle (EUP). The former aims to incorporate the effects of canonical gravitational attraction between quantum mechanical particles [5,6], while the latter accounts for the repulsive effects of dark energy in the form of a cosmological constant, Λ [7-9]. The extended generalised uncertainty principle (EGUP) accounts for both [10-12]. Over the years, many theoretical models and diverse approaches to the problem of quantum gravity have been developed. These include postulating the existence of the graviton, the hypothetical spin-2 boson that mediates quantum gravitational interactions, string theory, loop quantum gravity, and noncommutative geometry, to mention just a few of the directions investigated. (For detailed presentations of the different approaches, see [13-16]. For recent reviews of the present status of quantum gravity research, see [17-20].)
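For orientation, the snippet below evaluates the Bronstein-type bounds in the form reconstructed above (numerical prefactors of order unity are ignored, and the exact expressions in [1] may differ) for a hypothetical test body of water density and unit volume.

```python
# Numerical illustration of the Bronstein-type uncertainty bounds quoted above,
# dt >= (hbar / (c^2 G rho^2 V))^(1/3) and dx <= c dt, using the reconstructed
# form of the relations (order-unity prefactors ignored).
# The test body (a 1 m^3 volume of water density) is a hypothetical example.
HBAR = 1.054_571_817e-34   # J s
C = 2.997_924_58e8         # m / s
G = 6.674_30e-11           # m^3 / (kg s^2)

def bronstein_dt(rho, volume):
    """Lower bound on the temporal measurement uncertainty, in seconds."""
    return (HBAR / (C**2 * G * rho**2 * volume)) ** (1.0 / 3.0)

if __name__ == "__main__":
    rho, volume = 1.0e3, 1.0          # water density (kg/m^3), 1 m^3
    dt = bronstein_dt(rho, volume)
    print(f"dt >= {dt:.3e} s")
    print(f"dx <= {C * dt:.3e} m")
    # consistency check of the mass-time-density relation M >= hbar/(c^2 G rho dt^3)
    print(f"M  >= {HBAR / (C**2 * G * rho * dt**3):.3e} kg "
          f"(compare M = rho*V = {rho * volume:.1e} kg)")
```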
However, recently, important progress in experimental techniques has enabled researchers to cool, control, and measure physical systems in the weak gravity and quantum mechanical regimes, with far greater accuracy than ever before. For the first time, it may be possible to directly observe quantum gravity effects at scales accessible in terrestrial and near-Earth-orbit laboratories, in near-future experiments [21,22]. Nonetheless, the conceptual and technical challenges to the construction of complete theory are manifold. From a quantum field theory perspective, even the Newtonian theory of gravity is problematic. Comparing Newton's law of gravity, Φ(r) = −Gm/r, with Coulomb's law of electrostatics, V (r) = k e q/r, it follows that the gravitational constant has mass dimension −2. Hence, the theory of Newtonian quantum gravity is nonrenormalizable. This result follows from the calculation of the gravitongraviton scattering at energy E, in which the divergent series ∼ 1 + GE 2 + GE 2 2 + ... appears [23]. The nonrenormalizability of such naive quantum gravity theories therefore suggests that new physics should emerge at the Planck scale, M Pl = c/G ≃ 10 19 m proton . Because of these challenges, much research interest has been devoted to the study of semi-classical models. In these models matter fields are quantised while gravity remains classical, or is perturbatively quantised at nextto-leading order in the expansion of the metric. Such an approach to quantum gravity was proposed in [24], which is based on the decomposition of the quantum metric into a classical and a fluctuating part,ĝ µν = g µν + δĝ µν . Further assuming that δĝ µν = K µν = 0, where K µν is a classical tensor, one arrives at an effective gravitational Lagrangian of the form L = L g (ĝ µν ) + √ −gL m (ĝ µν ) ≃ L g + δLg δg µν δĝ µν + √ −gL m + δ( √ −gLm) δg µν δĝ µν , where κ 2 = 8πG/c 4 . The gravitational field equations obtained from this Lagrangian lead to theories that require geometrymatter coupling at the classical level. The coupling is of the kind that also appears in the f (R, T ) type modified gravity theories [25,26] and the cosmological implications of effective field theories with fluctuating metric components were investigated in [27]. However, because gravity is in many ways different from the other fundamental forces, and due to the intrinsic difficulties in its quantization, some researchers have suggested that the gravitational field may be essentially classical, and that it should not and cannot be quantized [28,29]. But, even gravity is not quantum, ordinary matter is. Hence, in order to describe the gravitational dynamics of quantum fields, one must still combine classical gravity with quantized matter. In this scenario quantized matter is coupled to the classical gravitational field by replacing the classical energy momentum tensor, T µν , with the expectation value of the energy-momentum operator, T µν , in Einstein's field equations. 
The expectation value is constructed by averaging with respect to an appropriately chosen quantum state, Ψ, yielding the semi-classical field equations [30], R_µν − (1/2) R g_µν = (8πG/c⁴) ⟨Ψ|T̂_µν|Ψ⟩ (1). Equations (1) can also be obtained from the variational principle δS = δ(S_g + S_ψ) = 0 [31], where S_g = (1/16πG) ∫ R √(−g) d⁴x is the standard Hilbert-Einstein action of general relativity, while the quantum part of the action, S_ψ, is introduced in the form of Eq. (2). By varying the quantum action (2) with respect to Ψ we obtain the normalization condition ⟨Ψ|Ψ⟩ = 1 and a Schrödinger equation for Ψ, in addition to the semi-classical Einstein equations. Note that the Bianchi identities still impose the conservation of the effective energy-momentum tensor, ∇^µ ⟨Ψ|T̂_µν|Ψ⟩ = 0. The difficulty of building a successful quantum theory of general relativity, as well as the intrinsic problems of treating quantum field theories in curved spacetimes, have also led to the hypothesis that a satisfactory description of quantum gravity could be achieved by unifying quantum mechanics with Newtonian gravity [32]. This corresponds to the weak-field limit of Eqs. (1), which reduce to the semi-classical Poisson equation ∇²Φ = 4πG m |ψ|² (4). Equation (4) is the basis of the Schrödinger-Newton approach [21] and, in this model, the equation of motion of a self-gravitating massive particle can be formulated as iℏ ∂ψ/∂t = −(ℏ²/2m) ∇²ψ + V ψ + m Φ ψ (5), where V is the canonical quantum potential and Φ is the gravitational self-interaction potential, obtained by solving (4). For a system of N non-relativistic free particles, V = 0, and the mass-density operator ρ̂ takes the corresponding N-particle form. This is the standard Schrödinger-Newton equation, whose static and time-dependent solutions have been intensively investigated in the literature. The average value of the self-interaction potential can also be estimated, and it turns out that the particle behavior is essentially quantum if the condition m³R ≪ ℏ²/G ≃ 10⁻⁴⁷ cm g³, where R is the average radius of the wave function, is satisfied [32]. Due to its extreme nonlinearity, the time-dependent Schrödinger-Newton system has mostly been investigated numerically. An exception is the variational approach considered in [42], where the system of equations (4)-(5) was investigated in the hydrodynamical representation of quantum mechanics. In this formalism the wave function is represented as ψ(r, t) = √ρ e^{iS}, and the canonical Schrödinger equation reduces to the equations of classical fluid mechanics in the presence of the quantum potential. The quantum fluid flows with a velocity u = ∇S, and the equations of motion can be obtained from the Lagrangian of Eq. (7). By adopting a spherical Gaussian profile for the density, ρ(r, t) = π^{−3/2} R^{−3}(t) e^{−r²/R²(t)}, one can then obtain the gravitational potential, and the Lagrangian of the system reduces to L(R, Ṙ) = Ṙ²/2 − 1/(2R²) + C/R, where C is an arbitrary constant. The corresponding equation of motion for R is R̈ = 1/R³ − C/R² (a short numerical sketch of this equation is given below). Using this formalism, one can obtain the energy eigenvalues, linear frequencies, and nonlinear late-time behavior of the S-N wave packet [42]. More recently, the Schrödinger-Newton system was generalized by considering the effects of dark energy in the form of a cosmological constant Λ [54]. This is consistent with the standard ΛCDM model of cosmology, in which it is assumed that the late-time acceleration of the Universe is driven by a constant vacuum energy density, ρ_Λ = Λc²/(8πG) ≃ 10⁻³⁰ g cm⁻³ [55]. (For alternative models of dark energy as modified gravity, see [56] and references therein.)
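The variational width equation quoted above, R̈ = 1/R³ − C/R², is simple enough to explore directly; the sketch below integrates it with SciPy for a hypothetical value of C and hypothetical initial data, illustrating the competition between the quantum-pressure term 1/R³ and the self-gravity term C/R². Neither the constant nor the initial conditions are taken from [42].

```python
# Sketch: integrate the variational width equation R'' = 1/R^3 - C/R^2 from the
# hydrodynamic (Gaussian-ansatz) formulation quoted above. The constant C and
# the initial conditions are hypothetical illustration values.
import numpy as np
from scipy.integrate import solve_ivp

C = 0.5   # relative strength of self-gravity (hypothetical)

def rhs(t, y):
    R, Rdot = y
    return [Rdot, 1.0 / R**3 - C / R**2]

# equilibrium width follows from R'' = 0, i.e. R_eq = 1/C
R_eq = 1.0 / C
sol = solve_ivp(rhs, (0.0, 50.0), [1.5 * R_eq, 0.0], dense_output=True, rtol=1e-8)

t = np.linspace(0.0, 50.0, 7)
for ti, Ri in zip(t, sol.sol(t)[0]):
    print(f"t = {ti:5.1f}   R = {Ri:6.3f}   (R_eq = {R_eq:.3f})")
```

With these (hypothetical) bound-state initial data the width oscillates around R_eq, the balance point between quantum pressure and self-gravity.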
The physically interesting regime in which dark energy dominates both gravitational self-attraction and canonical quantum diffusion was investigated numerically and using analytical estimates. It turns out that this takes place for objects with arbitrary mass that are sufficiently delocalized. An estimate of the minimum delocalization width required, of the order of 67 m, was determined, and this prediction was verified by the numerical results. However, the exact delocalisation radius required for dark energy domination can be much higher for very massive particles. In general, the wave function of a free particle in the S-N-Λ system was found to split into a core region that collapses due to gravitational self-attraction and an outer region that undergoes accelerated diffusion due to presence of dark energy [54]. While the former behaviour is present in the standard S-N model, the latter is unique to the S-N-Λ system. The order of magnitude of the critical radius separating collapse from expansion was found to match analytical estimates of the classical turnaround radius for a massive compact object in the presence of a cosmological constant [57]. The goal of this paper is to further investigate the mathematical and physical properties of the timedependent S-N-Λ system introduced in [54]. In order to obtain a better understanding of the dynamics of the model, we adopt a semi-analytical approach and construct series solutions using the Adomian Decomposition Method (ADM) [58][59][60][61]. This is a powerful method that can be used to obtain accurate series solutions of a large class of nonlinear differential equations, and systems of equations, with applications in diverse fields of science and engineering [62][63][64][65][66][67][68][69][70][71][72][73][74]. Here, we apply it to the S-N-Λ model for the first time. An essential advantage of this method is that it can be used to obtain analytical approximations to the full numerical solutions, without any need for perturbation theory, closure approximations, linearization, or discretization methods. For many highly nonlinear models, including the S-N and S-N-Λ systems of equations, the use of these methods leads to complicated and time-consuming numerical computations. On the other hand, to obtain even approximate closed-form analytical solutions of a nonlinear problem requires introducing restrictive and simplifying assumptions. The key advantage of the ADM is that it can be used to find the solution of a given equation or system of equations in the form of a rapidly converging power series. Successive terms in the series are obtained via a recursive relation, with the help of a special class of functions known as Adomian polynomials [58][59][60][61]. In most cases the series converges fast, so that the application of this method saves a lot of computational time. Although the ADM has been used extensively in many areas of engineering and physics, it has been used very little in the study of gravitation and quantum mechan-ics. (For some applications of the method in these fields, see [64,68,70].) In order to apply the method, we must first reformulate the time-dependent S-N-Λ system as a system of two integral equations. We then obtain the series solutions of the system by expanding the nonlinear terms using the Adomian polynomials [58][59][60][61]. To eliminate the unwanted oscillatory behavior of the solution, we represent the Adomian series in terms of their Padé approximants. 
After obtaining the recursive relation for the full S-N-Λ system, we test the efficiency of the ADM for a free Gaussian wave packet, in the limit G → 0, Λ → 0. In this case, the canonical Schrödinger equation can be solved exactly, and we show that the ADM recovers the exact solution in just a few simple steps. Next, series solutions are obtained for both the wave function and the gravitational potential, in the presence of gravitational self-interaction and dark energy. The associated probability density is computed with the help of the Padé approximants, and we pay special attention to the dark energy dominated regime. This paper is organized as follows. In Section II we present the basic structure and mathematical formalism of the Adomian Decomposition Method. The S-N-Λ system is reformulated as a system of integral equations in Section III, and the recurrence relations for the series solution of the system are obtained. The method is tested for the case of the canonical Schrödinger equation describing the free propagation of a Gaussian wave packet, and it is shown that the exact solution of this system can be re-obtained in a few simple steps. We obtain the semianalytical solution of the time-dependent S-N-Λ system, for Gaussian initial conditions, in Section IV. The dark energy dominated regime is also considered in detail, and a numerical analysis of the evolution of the probability density is presented. Our results are compared with previous analytical and numerical studies in Sec. V, and our discussion and final conclusions are presented in Section VI. II. THE ADOMIAN DECOMPOSITION METHOD Let us consider a partial differential equation written in the general form L̂_t u(x, t) + R̂[u(x, t)] + N̂[u(x, t)] = g(x, t), (8) where L̂_t = ∂/∂t, R̂[.] is the linear remainder operator that may contain partial derivatives with respect to x, N̂[.] is a nonlinear operator, which we assume is analytic, and g is a non-homogeneous term that is independent of u. Equation (8) must be solved with the initial condition u(x, 0) = f(x). We assume that L̂_t is invertible, so that we can apply L̂_t^{-1} to both sides, obtaining u(x, t) = f(x) + L̂_t^{-1}[g(x, t)] − L̂_t^{-1}[R̂[u(x, t)] + N̂[u(x, t)]]. The ADM posits the existence of a series solution in which u(x, t) is given by u(x, t) = Σ_{n=0}^∞ u_n(x, t), (10) while the nonlinear term N̂[u(x, t)] is decomposed as N̂[u(x, t)] = Σ_{n=0}^∞ A_n, (11) where {A_n}_{n=0}^∞ are the Adomian polynomials. These are generated according to the rule A_n = (1/n!) (dⁿ/dλⁿ) N̂[Σ_{k=0}^∞ λᵏ u_k]|_{λ=0}. Substituting the series expansions (10) and (11) into the inverted equation, we obtain the following recurrence relation, giving the series solution of Eq. (8): u_0 = f(x) + L̂_t^{-1}[g(x, t)], u_{n+1} = −L̂_t^{-1}[R̂[u_n] + A_n], n ≥ 0. Therefore, the approximate solution of Eq. (8) is obtained by truncating the series after a finite number of terms. For a given nonlinearity N̂[u], the first few Adomian polynomials are obtained as A_0 = N̂[u_0], A_1 = u_1 N̂′[u_0], A_2 = u_2 N̂′[u_0] + (u_1²/2) N̂″[u_0], and so on. The greater the number of terms, the higher the accuracy of the truncated series solution. III. THE ADOMIAN DECOMPOSITION METHOD FOR THE TIME-DEPENDENT SCHRÖDINGER-NEWTON-Λ SYSTEM For a single particle of mass m, the time-dependent S-N-Λ system is given by two coupled equations (a Schrödinger equation and a Poisson equation, Eqs. (21) and (22)), where the last term in the Poisson equation has been chosen so that the dark energy density is given by its standard form ρ_Λ = Λc²/(8πG). For spherically symmetric systems, Eqs. (21) and (22) take a reduced form, which must be solved with the initial conditions ψ(r, 0) = Ψ(r) and Φ(r, 0) = φ(r), respectively. Introducing the operators L̂_t = ∂/∂t and L̂_rr = ∂²/∂r², Eqs. (23) and (24) can be rewritten in operator form. These equations can be solved formally by inverting the operators; applying L̂_rr^{-1} introduces two arbitrary integration functions φ_1(t) and φ_2(t). Using the Cauchy formula for repeated integration,
Eq. (30) can then be rewritten in a more compact form. In order to make the gravitational potential finite at the origin r = 0, we choose φ_2(t) = 0, giving Eq. (32). We now determine the series solution of the reformulated system of equations, (29) and (32), by assuming expansions of the form ψ(r, t) = Σ_{n=0}^∞ ψ_n(r, t) and Φ(r, t) = Σ_{n=0}^∞ Φ_n(r, t). In addition, we decompose the terms Φ(r, t) ψ(r, t) and |ψ(ξ, t)|² in terms of the Adomian polynomials. Substituting the above decompositions into Eqs. (29) and (32), we obtain the recursive series solution for the time-dependent S-N-Λ system, and the first three Adomian polynomials in each series can be written down explicitly. For simplicity, we assume that the initial state of the wave function is a spherically symmetric Gaussian, with initial width σ_0 = 1/√(2α). The gravitational potential φ(r), corresponding to the time-independent Gaussian wave packet, then satisfies a Poisson equation whose general solution involves the error function erf(z) = (2/√π) ∫_0^z e^{-t²} dt and two arbitrary integration constants, c_1 and c_2. In order to avoid a singularity at the origin, we take c_1 = 0. The initial distribution of the gravitational potential is then finite everywhere, and satisfies the condition lim_{r→0} φ(r) = c_2 − Gm√(α/π). A. Testing the Adomian Decomposition Method To test the efficiency of the ADM, we consider the evolution of a Gaussian wave packet in the absence of the gravitational interaction and dark energy, by setting Φ(r, t) = 0. In this case, the evolution of the wave function is given by the canonical Schrödinger equation, which must be solved subject to the initial condition ψ(r, 0) = (α/π)^{3/4} e^{-αr²} (48). The general solution of Eq. (51), satisfying the required initial condition, is known in closed form. In order to simplify the mathematical formalism, we introduce a new set of dimensionless variables (τ, θ), in whose definitions m_p denotes the proton mass. Moreover, we rescale the wave function; the rescaled wave function ψ̃ then satisfies the dimensionless Schrödinger equation (57), whose general solution, satisfying the initial condition (48), may be expanded as a power series with respect to the dimensionless time τ. To solve Eq. (57) using the ADM, we apply the operator L̂_τ^{-1} to both sides. We then decompose ψ̃(θ, τ) into an infinite sum of components, ψ̃(θ, τ) = Σ_{n=0}^∞ ψ̃_n(θ, τ), where the components ψ̃_n(θ, τ) are determined recurrently. By substituting the series expansion into Eq. (60), we obtain Eq. (61) and thus the recursive relations, from which the first few iterations follow. Clearly, the Adomian series solution ψ̃(θ, τ) ≃ ψ̃_0(θ, 0) + ψ̃_1(θ, τ) + ψ̃_2(θ, τ) + ψ̃_3(θ, τ) + . . . exactly reproduces the series expansion of the exact solution (59) and, in the limit of an infinite number of iterations, fully recovers it. Hence, we have shown that the ADM gives the exact series representation of the solution of the spherically symmetric, three-dimensional Schrödinger equation describing the time-evolution of a free Gaussian wave packet. The probability distribution P̃(θ, τ) is then obtained from the truncated series. Here, we approximate the series representation of P̃(θ, τ) by its Padé approximant P̃[3/4](θ, τ). Generally, for a power series of the form f(z) = Σ_{k=0}^∞ f_k z^k, the Padé approximant of order (m, n) in the vicinity of the point z = 0 is the rational function Π_{m,n} ∈ R_{m,n} having the property that it takes the closest values to the given series near z = 0.
Here, by R m,n , we have denoted the set of rational functions of the form P/Q, where P and Q are polynomials in z of degree p ≤ m and q ≤ n, respectively [75]. The comparison between the exact probability density of the Gaussian quantum wave packet, and its approximation, given by Eq. (67), is represented in Fig. 1. As one can see from this figure, Eq. (67) gives an excellent description of the time-evolution of the wave packet for −1 ≤ τ ≤ 1, and a good approximation for τ outside this range. IV. SERIES SOLUTION OF THE TIME-DEPENDENT SCHRÖDINGER-NEWTON-Λ SYSTEM Using the mathematical formalism developed in the previous Section, we now construct explicit series solutions of the S-N-Λ system. In addition, we perform a detailed numerical study. Our main goal is to highlight the effects of self-gravity and dark energy on the timeevolution of a the quantum wave packet. In the first order of approximation, we obtain (96) and at the second order of approximation, To third order, the probability density can be approximated as Higher order approximations of the probability density of a Gaussian wave packet, evolving under self-gravity in the presence of dark energy, can also be calculated easily with the aid of computer algebra systems. The gravitational self-potential can be approximated as or, in terms of the Padé approximants of the power series, (100) B. The dark energy dominated regime We now consider the limiting case in which the dark energy density dominates the matter density, λ ≫ σ ψ (θ, τ ) . The Poisson equation then takes the simple form and can be immediately integrated to givẽ where we have assumed that the background gravitational potential is independent of time. Hence, for the dark energy dominated phase, the Schrödinger equation takes the form and can be formally solved to givẽ By decomposing the wave function asψ(θ, τ ) = ∞ n=0 ψ n (θ, τ ), we obtain the following recurrence relations for the determination of the componentsψ n (θ, τ ): and the first few approximations of the quantum wave packet in the dark energy dominated regime are obtained asψ The probability density of the Gaussian wave packet can be obtained, with the help of the Padé approximants, to different orders of approximation, as and and so on. The analytical expressions for the probability density can also be obtained easily to any desired order of approximation. C. Numerical analysis In the final part of this section, we consider the numerical results obtained from the Adomian series solutions of the S-N-Λ system. Our main goal is to highlight the effects of the self-gravitational potential and the dark energy density on the evolution of the probability density associated with the Gaussian quantum wave packet. In Fig. 2, we present the three-dimensional evolution of the rescaled gravitational potential in the absence of dark energy, i.e., with λ = 0, and with σ = 1. For convenience, we takeã = 1 as its initial value. In this case, the gravitational potential can be approximated bỹ and the full solution satisfies the condition lim τ →∞Φ (θ, τ ) = 0. Mathematically, a singularity develops inΦ(θ, τ ) for values of θ satisfying 4θ(4a + σ) − √ 2πσerf √ 2θ = 0. However, to at least third order in the approximation, this equation does not have any real roots, except at θ = 0. The variation of the self-gravity potential in the presence of dark energy is represented in Fig. 3, for two different values of λ; λ = 0.20 and λ = 0.35. 
In the large θ limit its behavior can be approximated as Thus, we see that the presence of a positive cosmological constant does have a significant effect on the distribution of the gravitational potential. To at least the considered order of approximation, the condition lim τ →∞Φ (θ, τ ) = 0 still holds. On the other hand, as expected, lim λ→∞Φ (θ, τ ) = −∞. In both cases, thẽ Φ(θ, τ ) has a sharp maximum at the origin of the coordinate system, θ = 0. The time variation of the probability density of the Gaussian wave packet is represented, for fixed values of the radial coordinate θ, in Figs. 4. There are two significant effects induced by the presence of the dark energy. As one can see from the left-hand panel, for (relatively) small values of the dimensionless radial coordinate θ, the probability density in the presence of Λ > 0 almost coincides with the function describing the evolution with Λ = 0, for −1 ≤ τ ≤ 1, and has the same maximum value. In the absence of Λ, the probability density tends to zero at a finite value of τ . However, dark energy significantly modifies the tail of the Gaussian distribution, which extends in time and induces much higher values of the probability density, as compared to the Λ = 0 case. From the right-hand panel we see that, for larger values of θ, the dark energy has two different effects on the probability density. The first is a significant increase in the amplitude of the probability density, with the maximum increased by a factor of at least two. This indicates the increased probability of finding the wave packet at larger distances from the center, the effect being a direct consequence of the presence of repulsive dark energy. Secondly, at large distances, the probability density tends to zero. However, the decrease is much slower for Λ > 0, and is directly correlated with the increase of the amplitude of the wave. Another interesting effect is related to the change in the shape of the wave function function, which evolves from a single-peaked into a double-peaked symmetric function. The three-dimensional evolution of the wave packet in the presence of the self-gravitational field and the dark energy density is depicted in Figs. 5. The same effects, as previously mentioned, are also apparent when considering the three-dimensional evolution of the wave packet. For large values of τ and θ, P (θ, τ ) → 0, but the dynamics of the transition to the asymptotic limit are strongly influenced by the presence of dark energy, whose effect becomes significant at late times and for large values of the radial coordinate. The behavior of the probability density in the dark energy dominated regime is presented in Fig. 6, for two distinct physical situations, corresponding to a fixed value of θ (left panel), and to a fixed value of τ (right panel). Even though, in this regime, there is a qualitative similarity with the Λ = 0 case, significant differences also appear. The double-peaked shape of the Gaussian distribution is extended in time for fixed θ, and the shape of the Gaussian tail is strongly modified, indicating an increase in the probability of finding the particle at higher values of τ . Moreover, the maximum value of the probability as a function of θ, at a given time, increases dramatically with increasing λ. However, at least at the considered order of approximation, and for the adopted values of λ in the large θ limit, the probability distribution of the initially Gaussian wave packet still tends to zero. 
Nonetheless, much larger values of λ, in the range λ ∈ [10², 10³], would greatly modify the dynamics of the wave packet at infinity. V. COMPARISON OF THE ADOMIAN METHOD WITH PREVIOUS ANALYTICAL AND NUMERICAL RESULTS A particle obeying the S-N-Λ equation of motion experiences three tendencies in its dynamics. Both canonical quantum diffusion and dark-energy-induced acceleration cause its wave function to spread, whereas Newtonian self-gravity, represented by the non-linear term, acts to localize the wave packet. In [54], it was argued that the relative strengths of these three tendencies can be estimated, at least approximately, by considering the motion of the peak radial probability density, r_p(t). This is the position of the spherical shell at which the radial probability density dP/dr = 4πr²|ψ|² reaches its maximum, that is, the radius at which the particle is most likely to be found at a given time t. It is determined by solving the corresponding stationarity condition, in dimensional or dimensionless form, which is equivalent to setting d²P(r, t)/dr² = 0 or d²P̃(θ, τ)/dθ² = 0, respectively. The contributions to the total acceleration experienced by r_p(t) due to canonical quantum diffusion, self-gravity, and dark energy are then estimated as in Eqs. (117)-(119). The subscript SE refers to the canonical Schrödinger equation, SN refers to the standard Schrödinger-Newton contribution, and Λ denotes the additional term induced by the dark energy density. In order to determine the regimes in which the different tendencies dominate the dynamics, we consider equality between the absolute magnitudes of the accelerations (117)-(119) in a pair-wise manner. The resulting critical scales involve λ_C(m) = ℏ/(mc), the reduced Compton wavelength of the particle, r_S(m) = 2Gm/c², its Schwarzschild radius, l_Pl = √(Gℏ/c³) ≃ 10⁻³³ cm, the Planck length, and l_dS = √(3/Λ) ≃ 10²⁸ cm, the de Sitter radius. Note that the latter is comparable to the present-day radius of the Universe [76] and that we have neglected numerical factors of order unity in all three equations. The critical value of r_p(t) in Eq. (124) is the classical turn-around radius for a spherical compact object in the Schwarzschild-de Sitter spacetime [57], which can be written equivalently in terms of m_Pl = √(ℏc/G) ≃ 10⁻⁵ g and m_dS = (ℏ/c)√(Λ/3) ≃ 10⁻⁶⁶ g, the Planck mass and the de Sitter mass, respectively. The approximate value of the peak radial probability follows from these balances; a more careful estimate, accounting accurately for numerical factors, gives r_p ≃ 67 m, as shown in [54].

TABLE I. Dynamical evolution of a Gaussian wave packet, with initial width σ₀ = 7.5 × 10² cm, under the S-N-Λ equation, according to the numerical solution obtained in [54]. The comparison is made to a free particle in canonical quantum mechanics, evolving under the canonical Schrödinger equation.

Mass                         | Behavior
Below 1 × 10⁻¹⁸ g            | Evolution indistinguishable from that of a free particle in canonical quantum mechanics
2 × 10⁻¹⁸ g to 3 × 10⁻¹⁷ g   | The whole wave packet spreads faster than that of a canonical free particle
4 × 10⁻¹⁷ g to 5 × 10⁻¹⁷ g   | The inner core of the wave function spreads slower than the wave function of the canonical free particle, while the outer shells spread faster
6 × 10⁻¹⁷ g to 1 × 10⁻¹⁶ g   | The inner core of the wave function collapses under self-gravity, while the outer shells spread faster than in canonical quantum mechanics
∼ 2 × 10⁻¹⁶ g                | Chaotic
Above 3 × 10⁻¹⁶ g            | Stationary
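The critical scales above can be reproduced, to order of magnitude, with a few lines of Python. The sketch below works in cgs units, uses the standard expression r_TA = (3Gm/(Λc²))^{1/3} for the turn-around radius, and takes a_SE ~ ℏ²/(m²σ³), a_SN ~ Gm/σ², a_Λ ~ Λc²σ/3 for the three accelerations; the O(1) prefactors are my own choices (the text itself neglects factors of order unity), and Λ ≈ 1.1 × 10⁻⁵⁶ cm⁻² is assumed so that l_dS = √(3/Λ) ≈ 10²⁸ cm.

```python
import numpy as np

# cgs constants
hbar = 1.055e-27      # erg s
G    = 6.674e-8       # cm^3 g^-1 s^-2
c    = 2.998e10       # cm s^-1
Lam  = 1.1e-56        # cm^-2, assumed so that l_dS = sqrt(3/Lam) ~ 1e28 cm

l_Pl = np.sqrt(hbar * G / c**3)          # Planck length   ~ 1.6e-33 cm
l_dS = np.sqrt(3.0 / Lam)                # de Sitter radius ~ 1.7e28 cm
m_Pl = np.sqrt(hbar * c / G)             # Planck mass     ~ 2e-5 g
m_dS = (hbar / c) * np.sqrt(Lam / 3.0)   # de Sitter mass  ~ 2e-66 g

def compton(m):        return hbar / (m * c)                  # lambda_C(m)
def schwarzschild(m):  return 2.0 * G * m / c**2              # r_S(m)
def turnaround(m):     return (3.0 * G * m / (Lam * c**2))**(1.0 / 3.0)

# Pairwise balances (order-of-magnitude prefactors only):
#   a_SE ~ hbar^2/(m^2 sigma^3),  a_SN ~ G m/sigma^2,  a_L ~ Lam c^2 sigma/3
def m_crit(sigma0):    return (hbar**2 / (G * sigma0))**(1.0 / 3.0)      # a_SE ~ a_SN
sigma_crit = (27.0 * G**2 * hbar**2 / (Lam**3 * c**6))**(1.0 / 10.0)     # all three comparable

m, sigma0 = 1.0e-17, 7.5e2   # grams, cm (values used in Table I)
print(f"lambda_C = {compton(m):.2e} cm,  r_S = {schwarzschild(m):.2e} cm")
print(f"turn-around radius r_TA(m) = {turnaround(m) / 100:.0f} m")
print(f"critical mass for sigma0 = 7.5e2 cm: {m_crit(sigma0):.1e} g  (~1e-17 g)")
print(f"critical width where all three balance: {sigma_crit / 100:.0f} m  (cf. r_p ~ 67 m)")
```

With these crude prefactors the script already returns a critical mass of a few times 10⁻¹⁷ g and a critical width of roughly 60-70 m, consistent with the values quoted in the text.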
For a Gaussian distribution, the initial peak radial probability is comparable to the initial width of the wave function, r p (0) ≃ σ 0 , and the two are equivalent up to a multiplicative constant of order one for a large class of physically reasonable wave functions [54]. Therefore, Eq. (129) also gives the order of magnitude value of the minimum initial width required, in order for the acceleration due to dark energy to dominate both canonical quantum diffusion and self-gravitation. This is a very clear and somewhat surprising prediction: in the S-N-Λ system, the spreading of any spherically symmetric wave packet with an initial width σ 0 67 m will be dominated by the accelerated expansion of the Universe, due to dark energy, regardless of its initial mass. For particles with masses m 10 −17 g, Eq. (122) implies that the dark energy term always dominates over canonical quantum diffusion, whenever the initial width of the wave packet exceeds this critical value. For heavier particles, we expect the outer shells of the wave packet to undergo accelerated expansion due to dark energy while the inner core region contracts under self-gravity. By Eq. (124), the critical radius marking the division between collapsing and expanding shells should be of the order of the classical turn-around radius (126). However, these very strong predictions were derived using rather crude analytical techniques and approximations. It is therefore reasonable to ask: can they be trusted? To answer this question, the numerical solution of the S-N-Λ system was presented in [54], for an initially Gaussian wave packet with a range of initial widths and particle masses. Remarkably, the existence of both a critical mass of order 10 −17 g, and of critical initial width of order σ 0 ≃ 6.7 × 10 2 cm, was verified by the numerical results. A summary of the numerical results obtained in [54], for a particle wave function of initial width σ 0 = 7.5 × 10 2 cm, and particle masses in the range 10 −18 kg ≤ m ≤ 10 −16 kg, is given in Table 1. We note that the chaotic and stationary regimes obtained for larger values of m are artifacts of the numerics, which were unable to probe masses above ∼ 2 × 10 −16 g due to limited computational resources. The critical radius marking the boundary between the collapsing inner core and the expanding outer shells of the wave packet was also verified to be within one order of magnitude of the classical turn-around radius (126), which isn't bad for such a crude analysis [54]. (131) Fig. 7 shows the evolution of θ p (τ ), obtained from the Adomian series solution, for a Gaussian wave packet with a = 1 (describing the effect of the background gravitational field), and for different values of the dimensionless dark energy parameter, λ. The presence of the gravitational field and of the dark energy significantly modifies the behavior of θ p . Although, in the absence of self-gravity, the peak probability density of the Gaussian wave packet satisfies lim τ →∞ θ (0) p = ∞ in the presence of an extremely high dark energy density, corresponding to very large values of λ, the presence of self-gravitational interaction significantly alters the behaviour of θ p , at least at the first order of approximation, which may now tend to zero for finite values of τ . This represents the regime in which the total collapse of the wave function occurs under the action of self-gravitational attraction, which successfully counteracts both dark energy repulsion and canonical quantum diffusion. 
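The numerical procedure for locating the peak of the radial probability density can be illustrated with the free Gaussian packet of Sec. III A. The sketch below uses an illustrative dimensionless density P̃(θ, τ) ∝ (1 + 16τ²)^{−3/2} exp[−2θ²/(1 + 16τ²)], whose normalization and numerical factors may differ from the paper's conventions, and maximizes θ²P̃ numerically; for this stand-in the peak position is known in closed form, θ_p = √((1 + 16τ²)/2), so the search can be checked directly.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def density(theta, tau):
    """Free Gaussian packet |psi|^2 in illustrative dimensionless units."""
    s = 1.0 + 16.0 * tau**2
    return s**(-1.5) * np.exp(-2.0 * theta**2 / s)

def peak_radius(tau):
    """theta_p(tau): maximum of the radial probability density ~ theta^2 |psi|^2."""
    res = minimize_scalar(lambda th: -th**2 * density(th, tau),
                          bounds=(0.0, 100.0), method="bounded")
    return res.x

for tau in (0.0, 0.5, 1.0, 2.0):
    exact = np.sqrt((1.0 + 16.0 * tau**2) / 2.0)
    print(f"tau = {tau:3.1f}:  theta_p = {peak_radius(tau):.4f}   (exact {exact:.4f})")

# For the free packet theta_p grows without bound with tau; the text describes how,
# once self-gravity is included, the same maximization applied to the ADM/Pade density
# can instead drive theta_p toward zero at finite tau.
```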
In the range −1 ≤ τ ≤ 1, the time evolution of θ_p^(1) closely follows, on a qualitative level, the dynamics of θ_p^(0), even though some quantitative differences do appear. In Fig. 8, the dimensionless radial probability density dP/dθ = 4πθ²|ψ|² is plotted for fixed θ, and for various values of τ. This clearly shows the formation of a collapsing inner core and an outer shell undergoing accelerated expansion. The critical value of θ that demarcates the two regions corresponds, to within an order of magnitude, to the classical turn-around radius of the particle mass, and is therefore consistent with the numerical results summarised in Table 1. Finally, before concluding this section, we note that, since the Compton wavelength of the proton is of order 10⁻¹⁵ m, lab-based experiments for which the dark energy dominated regime (128)-(129) is accessible require macromolecules of approximately 10⁸ amu. This is two orders of magnitude below the estimated mass required for tests of the standard Schrödinger-Newton equation using opto-mechanical traps [77], which corresponds to the generic estimate for the onset of the semi-classical gravity regime with Λ = 0 [30]. In other words, in terms of the mass parameter, current experiments are sufficiently precise to allow the effects of Λ on the quantum dynamics of a macromolecule to be observed and measured. The associated length scale is σ₀ ≃ 1-10 m, though, unfortunately, the associated time-scales may be astronomical [54]. However, for macromolecules with ∼ 10¹⁰ amu, the canonical quantum contribution to the peak acceleration is of the same order as the dark energy contribution for σ₀ ≃ 1 m. This raises the intriguing possibility that dark energy effects may be observable in near-future experiments on local quantum systems, though, to date, the preceding order-of-magnitude estimates seem to have been overlooked in the quantum gravity literature. Crucially, the present work shows that we may go beyond such crude estimates, to obtain detailed analytical predictions of the S-N-Λ model under realistic experimental conditions. As a proof-of-concept, our work also shows that we may fruitfully apply the ADM to any number of competing semi-classical gravity models [78][79][80]. This may be useful for a range of experimental tests, including tests of gravitationally-induced wave function collapse [81][82][83]. VI. DISCUSSIONS AND FINAL REMARKS In the present paper, we have investigated the semi-analytical series solutions of the time-dependent Schrödinger-Newton-Λ (S-N-Λ) system, which describes quantum matter in the presence of a nonlinear self-gravitational interaction and a background dark energy density. For the latter, we adopted the simple form of a positive cosmological constant, which enters into the mathematical formalism through the modified Poisson equation. In order to solve the coupled system of S-N-Λ equations, we used a powerful mathematical method called the Adomian Decomposition Method (ADM), which provides a fast and efficient way of obtaining series solutions of strongly nonlinear differential equations. The starting point of this method is the transformation of the given system of differential equations into an equivalent system of integral equations. Then, by positing the existence of series solutions of the integral system, one can obtain sets of recurrence relations for each unknown term in the power series expansion.
Usually, the ADM series converges fast, allowing detailed studies of the solutions of highly nonlinear differential equations using purely analytical methods. The main advantage of the method outlined in this paper is that it is based on a rigorous mathematical procedure, namely, the series expansions of the wave function, and of the nonlinear self-gravity term, while at the same time providing results that are mathematically simple and physically intuitive. This allows the in-depth investigation of the role dark energy may play in the microscopic dynamics of a quantum particle. In the cosmological context, the dark energy density can be inferred from the critical density of the Universe, given by ρ cr = 3H 2 0 /8πG = 1.88h 2 × 10 −29 g/cm 3 , where H 0 is the present day value of the Hubble constant, and h = H 0 /100 km s −1 Mpc −1 . Since the cosmological data indicates a dark energy density of the order of ρ vac ≃ 0.75ρ cr , it follows that ρ vac ≃ 10 −29 g/cm 3 . On the other hand, the cosmological dark energy can be obtained from physical considerations, once it is interpreted as a vacuum energy, as ρ vac = √ k Pl k dS k dS k 2 + (mc/ ) 2 dk, where k Pl = 2π/l Pl , and k dS = 2π/l dS , where l Pl = G/c 3 is the Planck length and l dS = 3/Λ is the de Sitter length [84][85][86]. This is consistent with the existence of the GUP and EUP [87] and with the recent tentative observational evidence for the granular nature of dark energy on scales of order (k Pl k dS ) −1/2 ≃ 0.1 mm [88][89][90][91][92]. In quantum physics, a quantum fluctuation (also called vacuum fluctuation), is the random variation of the energy at a point in space, due to the creation of virtual particle-antiparticle pairs. These pairs are continuously created in the space, according to the energy-time uncer-tainty principle, ∆E∆t ≥ /2. In our present approach, we describe the effects of these processes on the quantum dynamics of the particle via a constant term. Even though, on a cosmological scale, the vacuum energy may have a very low (but extremely important) numerical value, quantum fluctuations may still have a significant impact on the local particle dynamics, at a microscopic level, over sufficiently long time-scales [54]. However, as a future extension of our current work, it would be interesting to reanalyze the problem using an alternative dark energy ansatz, which captures the oscillating, or 'granular' nature of the dark energy density proposed in recent models [84][85][86][87][88][89][90][91][92]. With or without a dark energy term, the importance of the self-gravitational interaction essentially depends on the mass of the particle. For a particle with a mass of the order of m = 10 10 m p ≃ 10 −14 g, where m p is the proton mass, the dimensionless coefficient σ given by Eq. (71) is of order unity, σ ≃ 1. In this regime, the selfgravitational interaction has a significant effect on the evolution of the quantum wave packet. In the absence of dark energy, λ = 0, it follows from Eq. (72) that this phase corresponds to the standard gravity-dominated regime of the Schrödinger-Newton system. In summary, the consistency of the Adomian series solutions with the exact numerical solutions obtained in previous studies represents a huge step forward in the study of the S-N-Λ system. Up to now, only very crude and approximate analytical methods could be used to investigate its dynamics. 
Although useful for developing our physical intuition and providing order-of-magnitude estimates, these are no substitute for accurate quantitative solutions. Conversely, obtaining accurate numerical solutions is resource intensive, requiring long periods of time to develop and run the relevant codes, which are also computationally demanding [54]. By contrast, the same results can be obtained using Adomian decomposition in a fraction of the time, with the help of a relatively simple Mathematica or Maple worksheet. Indeed, in [54], it was stated that "we must deal with a complicated integro-differential equation, with little hope for analytical exploration". We have now shown that this is not the case and that the S-N-Λ system may be investigated analytically, to any degree of desired accuracy, using the right series solution techniques. By applying the Adomian decomposition method to PDEs, it should even be possible to obtain non-spherically symmetric solutions of the S-N-Λ equations. To the best of the author's knowledge, this has not yet been attempted in the existing literature, even numerically. The preliminary results presented here indicate that the Adomian Decomposition Method can be used to obtain accurate solutions of a wide variety of semi-classical gravity models, subject to a wide range of initial conditions. Ultimately, this should help us to test the predictions of these models in greater detail, under realistic experimental conditions [77].
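The free-packet test of Sec. III A and the Padé step can indeed be reproduced with a short symbolic/numeric script. The sketch below assumes an illustrative dimensionless form of the free equation, i ∂_τ ψ̃ = −∇²_θ ψ̃ with ψ̃(θ, 0) = e^{−θ²} (the paper's Eq. (57) may differ by numerical factors and normalization), applies the ADM recursion ψ̃_{n+1} = i L_τ^{−1}[∇²_θ ψ̃_n], and checks that the partial sums reproduce the τ-Taylor expansion of the exact solution; it then builds a [3/4] Padé approximant of the resulting on-axis density, which remains useful well outside the raw series' radius of convergence (|τ| = 1/4 for this normalization).

```python
import sympy as sp
import numpy as np

theta = sp.symbols('theta', positive=True)
tau = sp.symbols('tau', real=True)

def lap(f):
    """Radial part of the 3D Laplacian: (1/theta) d^2/dtheta^2 (theta * f)."""
    return sp.diff(theta * f, theta, 2) / theta

# --- ADM recursion for  i d(psi)/d(tau) = -lap(psi),  psi(theta, 0) = exp(-theta^2) ---
N = 4
terms = [sp.exp(-theta**2)]
for _ in range(N):
    terms.append(sp.I * sp.integrate(lap(terms[-1]), (tau, 0, tau)))
adm_sum = sp.expand(sum(terms))

# Exact free evolution of the Gaussian (heat-kernel result with diffusion constant i):
exact = (1 + 4*sp.I*tau)**sp.Rational(-3, 2) * sp.exp(-theta**2 / (1 + 4*sp.I*tau))
diff = sp.simplify(sp.expand(adm_sum - sp.series(exact, tau, 0, N + 1).removeO()))
print(diff)   # should reduce to 0: ADM reproduces the Taylor expansion term by term

# --- [3/4] Pade approximant of the on-axis density |psi(0, tau)|^2 = (1 + 16 tau^2)^(-3/2) ---
dens = (1 + 16*tau**2)**sp.Rational(-3, 2)
c = [float(sp.diff(dens, tau, k).subs(tau, 0) / sp.factorial(k)) for k in range(8)]

m, n = 3, 4
A = np.array([[c[m + k - j] if m + k - j >= 0 else 0.0 for j in range(1, n + 1)]
              for k in range(1, n + 1)])
q = np.concatenate(([1.0], np.linalg.solve(A, -np.array(c[m + 1:m + n + 1]))))
p = [sum(q[j] * c[i - j] for j in range(0, min(i, n) + 1)) for i in range(m + 1)]

def pade(t):
    return (sum(pi * t**i for i, pi in enumerate(p))
            / sum(qj * t**j for j, qj in enumerate(q)))

for t in (0.5, 1.0):
    print(t, pade(t), float(dens.subs(tau, t)))   # Pade vs exact density
```

At τ = 0.5 the truncated Taylor series already diverges badly, while the [3/4] Padé approximant tracks the exact density to within a few percent, illustrating why the Padé-resummed densities are used throughout the figures.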
Synthesizing Lattice Structure in Phase Space We consider a realistic model, i.e., ultracold atoms in a driven optical lattice, to realize phase space crystals [Phys. Rev. Lett. 111, 205303 (2013)]. The corresponding lattice structure in phase space is more complex and contains rich physics. A phase space lattice differs fundamentally from a lattice in real space, because its coordinate system, i.e., phase space, has a noncommutative geometry, which naturally provides an artificial gauge (magnetic) field. We study the behavior of the quasienergy band structure as a function of the artificial magnetic field and investigate the thermal properties. Synthesizing lattice structures in phase space is not only a new way to create artificial lattices in experiments but also provides a platform to study the intriguing phenomena of driven systems far away from equilibrium. (Dated: October 15, 2014) I. INTRODUCTION In a recent paper 1 , we introduced the idea of phase space crystals, i.e., a lattice structure in phase space created by breaking a continuous phase rotational symmetry via a driving field. In our previous work we used the model of ultracold atoms trapped in a time-dependent power-law potential, i.e., ∼ x^n cos(ω_d t), to illustrate our idea. However, this model is technically difficult to realize in experiments. Here, we present a realistic driven optical lattice model, i.e., the power-law driving is replaced by a cosine-type driving, ∼ cos(kx + ω_d t), to realize phase space crystals. Thus, the novel phenomena predicted by phase space crystals can be directly observed in current experiments of ultracold atoms in an optical lattice. The model proposed here synthesizes a more complex lattice structure in phase space and thus contains rich physics. We further develop the theory of phase space crystals and calculate the complex quantum tunnelling rates. We identify the artificial (magnetic) gauge field in phase space, which is a result of the noncommutative geometry of the phase space crystal. Compared to the artificial lattice structures in real space [2][3][4][5][6][7][8][9] , synthesizing a lattice structure in phase space has the key advantage of being conveniently tunable in experiments through changes in the driving field. Due to this possibility, phase space lattices may provide a new platform to simulate condensed matter phenomena. II. MODEL AND HAMILTONIAN The model we propose here can be realized by ultracold atoms trapped in a time-dependent optical lattice. The Hamiltonian is given by Here, the parabolic term is the harmonic confinement potential of the ultracold atoms, which can be created by a Gaussian beam profile of a laser 10 or introduced by another external field. As sketched in Fig.
1, the characteristic length of the FIG. 1: Ultracold atoms in driven optical lattice. Ultracold atoms (green dots) are confined in a harmonic potential (red parabolic curve). The ground sate of confinement potential is represented by a Gaussian wave packet (yellow wave packet) with width b = √ /(mω). The blue curve represents a propagating optical lattice with period d, amplitude 2A and velocity ω d /k. The potential for creating phase space lattice is the sum of them. ground state in the confinement potential is b = 2π √ /(mω). Experimentally the optical lattice is created by the interference of two counter-propagating laser beams, which form an optical standing wave with period d = 2π/k. The ultracold atoms are trapped by the interaction between the laser light field and the oscillating dipole moment of atoms induced by the laser light 11 . We can drive the optical lattice simply by tuning the phase difference of the two laser beams linearly as described by Hamiltonian (11). Effectively, this creates a propagating optical lattice with a velocity of ω d /k. An important parameter is λ ≡ (b/d) 2 = k 2 /(mω), which defines the "quantumness" of our system. It is large in the quantum regime and goes to zero in the semiclassical limit. We emphasize that the optical potential is time-dependent and the confinement potential also plays an important role. Thus, our system does not have spatial periodicity and the Bloch theory in real space does not apply directly for the Hamiltonian (11). We are interested in the regime near the high-order resonant condition ω d ≈ nω with a large integer n ≫ 1. For the duration of this paper with will use n = 30. The detuning δω ≡ ω − ω d /n is much smaller than the natural frequency ω. We perform a unitary transformation of the Hamiltonian H(t) via the operatorÛ = e i(ω d /n)â †â t , whereâ is the annihilation operator of the oscillator. In the spirit of the rotating wave approximation (RWA), we drop the fast oscillating terms and arrive at the time-independent Hamiltonian (see more details in section A of the Appendix) In the context of Floquet theory,ĝ is called quasienergy 14,20 , which has been scaled by the energy m(ω/k) 2 = ω/λ. The parameters ǫ ≡ δω/ω and µ ≡ λA/( ω) are the dimensionless detuning and driving strength respectively. Functions L (−n) a †â (•) are the generalized Laguerre polynomials, as a function of the photon numberâ †â |k = k|k , where |k are the Fock states. III. SYMMETRIES In the following, we are particularly interested in the resonant condition, i.e., the detuning is zero δω = 0. Without loss of generality, we set the scaled driving strength to unity, i.e., µ = 1. In this case, the RWA Hamiltonian (2) has two new symmetries which are not visible in the original Hamiltonian (11). To visualize them, we replace the operatorâ by a complex number in the semiclassical limit and plot the quasienergy g in the phase space spanned by Re[a] and Im [a]. As displayed in Fig. 2(a), we first see the discrete angular symmetry g(θ) = g(θ + 2π/n). Additionally we have the chiral symmetry g(θ) = −g(θ + π/n), which divides the whole lattice structure into two identical sublattices as indicated in Fig. 2(a) by the different colors. To describe the two symmetries in quantum mechanics, we define a unitary op-eratorT τ = e −iτâ †â with the propertiesT † τâTτ =âe −iτ and T † τâ nT τ =âe −inτ . 
Since the operatorâ †â keeps invariant under the transformation ofT τ , it is not difficult to check that the RWA Hamiltonian (2) is invariant under discrete transformation T † τĝ T τ =ĝ for τ = 2π/n. We call this symmetry discrete phase translation symmetry. The chiral symmetry follows from the fact T † τĝ T τ = −ĝ for τ = π/n. The chiral symmetry suggests that the two sublattices are symmetric with respect to g = 0, except a phase shift θ → θ + π/n. The angular symmetry indicates it is convenient to introduce the radial and angular operatorsr andθ viaâ = e −iθr / √ 2λ and a † =re iθ / √ 2λ. They obey the commutation relation where λ plays the role of a dimensionless Plank constant. IV. PHASE SPACE LATTICE In the semiclassical limit λ → 0, the quantum Hamiltonian g can be written in its classical form (see more details in section A of the Appendix) Here, we have used the asymptotic property of Laguerre polynomials, i.e., lim k→∞ L (n) k (x/k) = k n e x 2k x −n/2 J n (2 √ x), where J n (•) is the Bessel function of order n. The angular periodicity comes from the cosine function in Eq.(4) while the radial structure is created by the Bessel function J n (r). A similar situation has recently been studied in voltage biased Josephson junctions 12,13 . The zero lines of g form the " cells " of the phase space lattice as shown in Fig. 2(a). The center of each cell is a stable point corresponding to either a local minimum or a local maximum of g (see more details in section C of the Appendix). The area inside the cell represents the basin of attraction for the stable state in the center. In Fig. 2(b), we show the radial structure of the quasienergy g by plotting it along two angular directions θ = 0 and θ = π/n. We see the quasienergy oscillates as a function of the radius r in the form of Bessel functions J n (r). We divide the whole lattice structure into " loops ", which correspond to ring-like areas in Fig. 2(a) between two radii which satisfy J n (r) = 0. We label them from inside to outside by Roman numerals I, II, III and so on as indicated in Fig. 2(b). V. QUASINUMBER THEORY We diagonalize the quantum Hamiltonian (2) and study the properties of its quasienergy spectrum. With zero detuning δω = 0, and driving µ = 1, the spectrum is only determined by the effective Planck constant λ. In Fig. 3(a) we show the structure of the quasienergy spectrum as function of the parameter 1/λ. It is clear that the quasienergy spectrum is symmetric with respect to g = 0 because of the chiral symmetry. We also see that gaps in the spectrum are opened for small λ and disappear for sufficiently large λ. The transition happens around λ ≈ 5. We will calculate the gaps using WKB theory and discuss the physical mechanism of gap closing below. In Fig. 3(b) we show the gapless quasienergy spectrum for λ = 6 and the band structure of the spectrum for λ = 4. The band structure comes from the discrete phase translation symmetry. We introduce the quasinumber theory 1 according to Bloch's theorem. Due toT † τĝTτ =ĝ for τ = 2π/n, the eigenstates ψ m (θ) of the quasienergy Hamiltonian,ĝψ Here, the integer number m is called quasi-number, which is conjugate to the phase θ. It is an analogue of the quasi-momentum ⇀ k in a crystal. In Fig. 3(c), we plot the quasienergy band structure in the reduced Brillouin zone mτ ∈ [0, 2π). We count the bands from the bottom and relabel the eigenstates ψ m (θ) by ψ l,m (θ), where the subscript l = 1, 2, ... indicates the band that the eigenstate belongs to. 
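In the semiclassical limit these statements are easy to check numerically. The sketch below evaluates the classical quasienergy in the form given in the Appendix, g(r, θ) = εr²/2 + 2μ J_n(r) cos(nθ − nπ/2), for zero detuning ε = 0, μ = 1 and n = 30 as used in the text, and verifies the discrete phase translation symmetry g(θ + 2π/n) = g(θ) and the chiral symmetry g(θ + π/n) = −g(θ); the radii separating the lattice "loops" are the zeros of J_n(r).

```python
import numpy as np
from scipy.special import jv, jn_zeros

n, eps, mu = 30, 0.0, 1.0

def g(r, theta):
    """Semiclassical quasienergy of the phase space lattice (zero detuning)."""
    return eps * r**2 / 2.0 + 2.0 * mu * jv(n, r) * np.cos(n * theta - n * np.pi / 2.0)

r = np.linspace(0.0, 80.0, 400)[:, None]
th = np.linspace(0.0, 2.0 * np.pi, 400)[None, :]

# Discrete phase translation symmetry: g(theta + 2*pi/n) = g(theta)
print(np.max(np.abs(g(r, th + 2.0 * np.pi / n) - g(r, th))))   # ~0 (machine precision)
# Chiral symmetry: g(theta + pi/n) = -g(theta)
print(np.max(np.abs(g(r, th + np.pi / n) + g(r, th))))         # ~0 (machine precision)

# Radii bounding the "loops" of the lattice: zeros of J_n(r), all larger than n = 30
print(jn_zeros(n, 4))
```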
From the form of the Q-function we see that the eigenstates of the system ψ m (θ) are delocalized states in phase space, which are superposition of localized states corresponding to the discrete energy levels as indicated in Fig. 2(b). We label these levels in the first loop by Level I − 1, Level I − 2 and those in the second loop Level II − 1 etc. In the semiclassical limit, these quantum levels become classical orbits of iso-quasienergy contours represented by the boundaries of the colored elliptical areas inside each cell as shown in Fig. 2(a). The shapes of these orbits vary in different loops as displayed on the top of Fig. 4(d). VI. QUASIENERGY BAND STRUCTURE The formation of quasienergy bands near the bottom can be understood in the frame of the tight-binding model. If we ne- glect quantum tunnelling, the n localized states in each loop are n degenerate states. If we consider quantum tunnelling, they are broadened and form bands. We can label the bands by the labels of corresponding localized levels, e.g., the bottom band of the whole quasienergy spectrum is Band I − 1. We can describe the structure of the l-th tight-binding band approximately by Here, E l represents the center of the l-th band and the quasienergy of the corresponding localized level. The l-th bandwidth d l is determined by the tunnelling rate, i.e., d l = 4|J l |. From Fig. 3(c) we see that the bands are not symmetric with respect to the center of the Brillouin zone in general. We describe the asymmetry by an asymmetry factor δ l . The asymmetry factor comes from the fact that the two dimensions of phase space are not commutative. We will calculate the gaps, bandwidths and asymmetry factor by WKB theory below. A. Quantum tunnelling in phase space From the commutation relation (3), it can be shown that [r 2 /2,θ] ≈ iλ in the region of r ≫ 1 1 . We can view operatorsr 2 /2 andθ as "coordinate" and "momentum" respectively, i.e.,θ ≈ −iλr −1 ∂/∂r. In the semiclassical limit, the variables r 2 /2 and θ define the phase space for our WKB calculation. In Fig. 4(a), we plot the quasienergy g in the range of θ ∈ [−2π/n, 2π/n]. For a fixed g, all the branches of classical orbits are given by where k takes integers 0, 1, 2, · · ·, and n − 1. Two real solutions θ ± (ξ, g) together represent one closed classical orbit. There are n identical orbital branches with only a 2π/n-shift of θ. From the condition |(g − ǫr 2 /2)/[2µJ n (r)]| < 1, we can determine the boundaries of classical motion. In Fig. 4(a), we indicate the boundaries of classical motion by r 2 1 /2, r 2 2 /2 and r 2 3 /2 in the phase space spanned by r 2 /2 and θ. The region between r 2 2 /2 and r 2 3 /2 is the classically forbidden region for the fixed quasienergy g. In the quantum regime, however, the states can tunnel into each other. In Fig. 4(a), we show how the two neighboring Level I − 1 states tunnel into each other through phase space. The main tunnelling path with least action is indicated by the white arrows in the same plot. The optimal path is to tunnel first into the nearest region in Loop II across one saddle point (white dot) and then tunnel back to the neighboring Level I-1 across another saddle point. There also exist many other possible tunnelling paths in phase space, e.g., the path indicated by yellow arrows in Fig. 4(a). But the contributions from these paths are exponentially small compared to the main tunnelling path (see more details in section B of the Appendix). B. 
Quasienergy levels and bandwidths From the WKB theory, we know the phase space area enclosed by the classical orbit is quantized according to the so called Bohr-Sommerfeld quantization condition 16 where k takes nonnegative integers. From the above condition we can calculate the quasienergy levels. As shown in Fig.4(b), the left subfigure shows several lowest levels calculated using the quantization condition (7). We compare our WKB calculation to the numerical simulation. The agreement is very good. Noticeably, Level I-2 and Level II-1 cross each other near λ = 1.2. The level crossing has significant effect on the bandwidths as we discuss below. The width of the l-th band d l is given by the tunnelling rate J l , i.e., d l = 4|J l |. The amplitude of J l is given by the integral of the imaginary part of "momentum" θ in the classical forbidden region r 2 < r < r 3 Here, S (g) in the prefactor as function of g is given by the first equality of Eq. (7). In section B of the Appendix, we give a detailed description of the behavior of Im[θ] in the classical forbidden region. Here we just present our results. In Fig. 4(b) we show the bandwidths of Level I-1 and Level I-2 calculated by Eq.(34) and compare them to the numerical calculation. There is a cusp in the curve of Level I-2. This happens because of the crossing of Level I-2 and Level II-1 which significantly enhances the quantum tunnelling of Level I-2. In this case, we need to consider three interacting levels, i.e., two neighboring Level I-2 states and the medium state of Level II-1 as indicated by the closed orbits in Fig. 4(a). The Hamiltonian of three interacting levels (TIL) is described by the following 3 × 3 matrix Here g 1 , g 2 represent the quasienergies of Level I-2 and Level II-1 respectively. Parameter J 11 represents the tunnelling rate between the two neighboring Level I-2 states. Parameter J 12 represents the tunnelling rate between the state of Level I-2 and the state of Level II-1. The tunnelling rate J 11 is given by Eq.(34) by taking g = g 1 , while the tunnelling rate J 12 is given by We can get the modified quasienergy levels by diagonalizing the matrix H T IL . The level spacing ∆ 11 of the two modified Level I-2 states gives the effective tunnelling rate between them. Therefore, the correct bandwidth of Band I-2 is 2∆ 11 . C. Band asymmetry and artificial magnetic field From Fig. 3(c), we see that the quasienergy bands are not symmetric with respect to the center of the reduced Brillouin zone. The asymmetry is described by the asymmetry factor δ l . In the frame of tight-binding approximation, the Bloch eigenstate ψ lm (θ) is given by ψ lm (θ) = 1/ √ n n−1 q=0 e imqτT q τ φ l (θ), where φ l (θ) is the localized wave functions forming the band. The quantum tunnelling rate can be calculated by J l = − [T τ φ l (θ)] * ĝ φ l (θ). The corresponding quasienergy spectrum of the l-th band then is g l The band asymmetry comes from the fact that quantum tunnelling rate J l in driven systems is generally a complex number 1,17 , i.e., J l = |J l |e −iδ l τ , and the phase parameter δ l is exactly the asymmetry factor. We can calculate the phase δ l using the WKB theory we developed above. In fact, when r is approaching one of the roots r (0) with J n (r (0) ) = 0, from Eq.(31) we see the amplitude of "momentum" θ goes to infinity |θ(r (0) )| → ∞. This means the WKB approximation breaks down near the root of the Bessel function J n (r (0) ) = 0 and we need a connecting condition. 
Because r (0) ≫ 1, we can expand the phase translation operator T τ = e −iτâ †â by 1â †â ≈ λ −1 (r (0) ) 2 /2 + i∂/∂θ and the connecting condition, i.e., the neighboring localized state of φ l (θ), is given byT τ φ l (θ) ≈ e −iλ −1 (r (0) ) 2 τ/2 φ l (θ + τ). Thus we get the symmetry factor δ l = δ 0 l + λ −1 (r (0) ) 2 /2, where δ 0 l is the residual asymmetry beyond WKB calculation and can be removed by redefining the phase translation operatorT τ = e −iτ(â †â −δ 0 l ) . The asymmetry factor δ l is linearly dependent on the parameter 1/λ with the slope (r (0) ) 2 /2 differing between bands. If we count r (0) = 0 as the first root of J n (r), then the asymmetry factors of bands in the l-th (l ≥ 2) loop are all given by the l-th (l ≥ 2) root of the Bessel function. But the asymmetry factors of the bands in the first loop are determined by the second root of the Bessel function. The reason is that the localized states inside the first loop tunnel through its upper boundary while states in other loops tunnel through lower boundaries. In section B of the Appendix, we give more detailed discussion on tunnelling paths and show more results about the linear relationship between δ l versus 1/λ for different bands. The fact that the tunnelling amplitudes are complex means there is an artificial magnetic field B e f f in phase space. Imagine we have a loop of atoms forming a one dimensional lattice in real space with magnetic field B across the loop. The magnetic field induces an additional phase to the tunnelling amplitude between neighbored atoms J = |J|e −iδ , where δ ∝ B is called Peierls phase 18 . Comparing the Peierls phase to the asymmetry factor of the phase space lattice calculated above, we can identify there is an effective magnetic field B e f f ∝ 1/λ in phase space. The coordinate system of a phase space lattice has a noncommutative geometry 19 , which is fundamentally different from spatial lattices. It is this noncommutative phase space which creates an artificial magnetic field and is responsible for the asymmetry of the quasienergy band structure. VII. DISSIPATIVE DYNAMICS The above calculation of the quasienergy bandstructure does not consider the dissipative environment. In actual experiments, due to the quantum and thermal fluctuations, the dynamics in a phase space lattice is non-unitary. For a driven system, we can measure the non-equilibrium stationary state in experiments. We use the master equation method to describe the dissipative evolution caused by thermal and quantum fluctuations. Already previously it has been shown that a Lindblad type of master equation [20][21][22][23] is sufficient as description, where the time t is dimensionless and scaled by the natural frequency ω. The Lindblad superoperator is defined through the Bose distribution and κ is the dimensionless damping also scaled ω. Based on the master equation (43), we calculate the density matrix of the stationary distribution in the basis of the Fock states {|k , k = 0, 1, · · ·}. By the relationship of k = r 2 /(2λ), we can find the propbability density along a circle with radius r, i.e., ρ(r) = rλ −1 k|ρ|k . In Fig 4(c), we plot ρ(r) for different temperaturesn = 0 andn = 0.1. We see that ρ(r) oscillates with radius r. The zero nodes of ρ(r) actually correspond to the boundaries of phase space lattice loops. Because the quantum heating 24 of each loop is not the same, the probabilities over the loops are not equally distributed. 
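The dissipative calculation described above can be set up in a few lines with QuTiP. The sketch below keeps only the oscillator term of the Hamiltonian as a placeholder (the full quasienergy Hamiltonian of Eq. (2) is not reconstructed here) and uses illustrative values for κ, n̄ and λ; the two collapse operators implement the standard finite-temperature Lindblad dissipator, the stationary density matrix is obtained with qutip.steadystate, and the radial distribution ρ(r) is read off from the Fock populations through k = r²/(2λ).

```python
import numpy as np
import qutip as qt

N = 60                      # Fock-space truncation
lam = 6.0                   # effective Planck constant lambda (illustrative)
kappa, nbar = 0.05, 0.1     # dimensionless damping and bath occupation (illustrative)

a = qt.destroy(N)
H = a.dag() * a             # placeholder: replace with the full quasienergy Hamiltonian

c_ops = [np.sqrt(kappa * (nbar + 1.0)) * a,      # thermal emission
         np.sqrt(kappa * nbar) * a.dag()]        # thermal absorption

rho_ss = qt.steadystate(H, c_ops)

# For this placeholder H the stationary state is thermal, <a^dag a> -> nbar:
print(qt.expect(a.dag() * a, rho_ss))            # ~0.1

# Radial probability density along a circle of radius r, using k = r^2 / (2 lambda):
k = np.arange(N)
r = np.sqrt(2.0 * lam * k)
rho_r = r * np.real(rho_ss.diag()) / lam         # rho(r) = r <k|rho|k> / lambda
```

With the lattice Hamiltonian included in H, the same pipeline yields the oscillating ρ(r) of Fig. 4(c), whose nodes mark the loop boundaries.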
On the bottom of each loop, the stationary distribution can be described by an effective temperaturen e . The localized ground state of each loop can be approximately described by a squeezed state with the squeezing factor u and the corresponding effective temperature is given byn e = |u| 2 +n(2|u| 2 + 1) (see more details in section D of the Appendix). In our case, as we can see from Fig 4(c), the peak of ρ(r) is in the third loop. The reason is the effective temperature of the third loop is lower than other loops. In Fig 4(d), we calculate the squeezing factor u and the effective temperaturen e for the first ten loops and compare them to fully numerical simulations. The agreement is very good. Another interesting fact is the squeezing factor u changes from a negative value to a positive value. This means the shape of the squeezed state in each loop is different as displayed by the colored orbits on the top of Fig 4(d). The orbital shapes are taken from the plot in Fig. 2(a). The third orbit is very close to a round circle, which means the squeezing factor u ≈ 0 and the resulting effective temperaturē n e ≈n. The stationary distribution can be directly measured in the experiments 25 . VIII. DISCUSSION The phase space lattice can also realized in circuit-QED systems, i.e., a superconducting cavity coupled to Josephson junctions. The Hamiltonian is H cQED = ωa † a + 2E J cos(4πe −1 Φ) cos ϕ. The Josephson junction can be driven by either a dc voltage 12,13 , which creates ϕ = ϕ 0 + ω d t with ω d = 2eV/ , or a time-dependent magnetic flux 26 Φ = ω d t/(4πe −1 ) . The effective Planck constant in this case is λ = 8πωL/(h/e 2 ), where L is the inductance of the circuit and h/e 2 ≈ 25.8 kΩ is the von Klitzing constant. The typical impedance ωL of circuit-QED systems using only geometrical inductors and capacitors, can not exceed the characteristic impedance of vacuum µ 0 c ≈ 376.73 Ω 27 , which means that we have λ < 0.015 in circuit-QED systems. However, there are several proposals to realized a super-inductance based on the design of Josephson junction arrays 27,28 which can increase the impedance significantly up to 35 kΩ resulting λ > 1. Thus, it is possible to realize phase space lattices in circuit-QED systems combined with a proper design of Josephson junction arrays. In this section, we give detailed derivation from the timedependent Hamiltonian (1) to the RWA Hamiltonian (2) and the semiclassical Hamiltonian (4) in the main text. To be convenient, we write the original Hamiltonian of ultracold in the driven optical lattice atoms here again Now, we introduce a, a † via x = √ /(2mω)(a † + a) and p = i √ m ω/2(a † − a). By introducing parameter λ ≡ k 2 /(mω), we map the Hamiltonian (11) to the following We introduce the scaled coordinate and momentum operatorŝ Q = λ 2 (a † + a) andP = i λ 2 (a † − a) with the noncommutative relationship [Q,P] = iλ. We write Hamiltonian (12) in an alternative form Now, we employ an unitary operator U = e i ω d n a † at to transform Hamiltonian (13) into a rotating frame with frequency Here, we define M(Q,P) ≡ e i[Q cos(ω d t/n)+P sin(ω d t/n)] and the detuning δω ≡ ω 0 − ω d /n. To calculate the matrix element of M(Q,P), we define the displacement operator D(α, α * ) by Since the operator M(Q,P) can be written as we get the relationship between the parameter α of D(α, α * ) and parameters of M(Q,P) with ϕ = ω d t/n. We further define the following notations Here, L k−l l (•) is the Laguerre polynomials. 
Let β = 0, we have the exact form of matrix element of displacement operator D(α, α * ) l|α, k ≡ l|D(α, α * )|k Using the relationship (17) we get the explicit form of matrix elements of M(Q,P) Thus, quantum Hamiltonian (14) is Under RWA, we drop the fast oscillating terms (k − l n) and get RWA Hamiltonian (k − l = n) Here we have used the relationship 29 L n l (x)/L −n l+n (x) = (−x) −n (l + n)!/l! for x > 0. We now scale the RWA Hamiltonian by ω/λ and get the dimensionless Hamiltonianĝ where the parameters ǫ = δω/ω and µ = λA/( ω) are the dimensionless detuning and driving strength respectively. Using the following asymptotic form of Laguerre polynomials 30,31 we have the following relationship in the limit of k, l ≫ |k − l| for a fixed k − l Thus, in the semiclassical limit, i.e., k, l → ∞ and fixed k − l, Eq.(21) goes to the following Here, we have used the limit relationship l! k! k (k−l)/2 → 1. Therefore, we have the RWA Hamiltonian (24) We define the radial and angular operatorsr andθ by a = e −iθr / √ 2λ and a † =re iθ / √ 2λ. In the Fock representation, the operator e iθ is defined by |k k + 1|, and e −iθ = ∞ k=0 |k + 1 k|. (29) Using the above relationships, we have the following Hamiltonian in the semiclassical limit λ → 0 B. Quantum tunnelling in phase space In this section, we give a detailed description about the quantum tunnelling process in phase space and the analytical behavior of "momentum" θ in the complex plane. We also calculate the asymmetry factor δ and show its linear relationship with 1/λ for different bands. To be convenient, we define a new variableξ ≡r 2 /2 here. The semiclassical Hamiltonian (30) can be rewritten as g = ǫξ + 2µJ n ( 2ξ) cos(nθ − nπ 2 ) in new variables ξ and θ, which define the "ξ − θ " phase space for our WKB calculation. For a fixed g, the general solutions of classical orbits are where k = 0, 1, , 2 · ··, and n − 1 represent the n branches of solutions. Here, we choose the parameters ǫ = 0 and µ = −1. In Fig. 5, we show three classical orbits for a fixed g < 0. The two classical orbits in the first loop are indicated by red closed curves, which correspond to the following solutions , and The classical orbit in the second loop is indicated by yellow closed curve, which corresponds to the following solution In the regime of (g−ǫξ)/ 2µJ n ( 2ξ) < 1, two real solutions θ ± (ξ, g) together represent one closed classical orbit θ(ξ, g). In Fig. 5(left), the boundaries of classical motions are indicated by the white dashed lines, i.e., ξ 1 , ξ 2 and ξ 3 . Beyond the classical boundaries, the value of θ(ξ, g) has imaginary part. In Fig. 5(right), we show the analytical structures of solutions θ ± (ξ, g) in the complex plane. The closed curves on the real axis of θ represent classical orbits (we deviate the orbits slightly from the real axis to illustrate the shapes of orbits). There are n identical orbital branches with only a 2π/n-shift of Re[θ] for each type of solution. In the quantum regime, the classical orbits can tunnel into each other through the classical forbidden region. In Fig. 5(left), we show the quantum tunnelling process of the two states in the first loop in phase space. The corresponding behavior of Im[θ] is depicted in Fig. 5(right). Starting from the classical boundary ξ 2 to the zero point of Bessel function ξ (0) , the imaginary part Im[θ] increases from zero to infinite, where it jumps to another branch of solution. Then it goes back from infinite to zero as ξ changes from ξ (0) to another classical boundary ξ 3 . 
After that, Im[θ] increases again from zero to infinite as ξ goes from ξ 2 to ξ (0) , where it jumps again to another branch of solution. Finally, Im[θ] decreases from infinite to zero as ξ changes from ξ (0) to the classical boundary ξ 3 . As we have discussed in the main text, the amplitude of quantum tunnelling rate J l is given by the integral of the imaginary part of "momentum" θ in the classical forbidden region The tunnelling process can also happen through lower boundary ξ 1 as indicated by the white arrows in Fig. 5(left). However, the lower path is much longer than the upper path. Thus, the contribution to |J l | from the lower path is exponentially smaller than the contribution from upper path. The jumping processes between different branches of solutions give additional phases to the quantum tunnelling rate J l , which makes it a complex number J l = |J l |e −iδ l τ . As we have discussed in the main text, the connecting condition by jumping is given by the phase translation operatorT τ = e −iτâ †â . Since ξ (0) ≫ 1, we can expand operatorT τ by 1 a †â ≈ ξ (0) /λ + i∂/∂θ. As a result, the connecting condition iŝ T τ φ l (θ) ≈ e −iξ (0) τ/λ φ l (θ + τ). Thus we get the symmetry factor where δ 0 l is the residual asymmetry beyond WKB calculation. In Fig. 6(a), we compare the above linear relationships between δ l and 1/λ for different bands to our numerical simulations. In Fig. 6(b) and Fig. 6(c), we expand the asymmetry factor to the whole field of real number R and plot it as function of 1/λ for different bands. The bands in Fig. 6(b) are all in the first loop. We see that, since the states in the first loop tunnel through the upper boundary, they all have the same slope given by ξ (0) , which is the second zero point of Bessel function J n ( 2ξ). Here, we consider ξ (0) = 0 is the first zero point of Bessel function J n ( 2ξ) for n 0. In Fig. 6(c), we show the linear relationships between δ l and 1/λ for the bottom bands in different loops. We see their slopes are different. The reason is that the bands in different loops tunnel though different paths with different jumping points ξ (0) . Like the states in the first loop, the states in other loops can tunnel through both the upper boundary and lower boundary. However, we have checked the integral Im[θ]dξ of the upper path is always larger than that of the lower path. Therefore, the contribution to the tunnelling rate from the upper path is exponentially smaller than the contribution from the lower path. Therefore, the slope of all the bands in the l-th (l > 1) loop is given by the l-th zero point ξ (0) l of Bessel function J n ( 2ξ). In the flowing table, we compare the slopes extracted form numerical simulation to our theoretical calculation. Below, we label the stable points (maxima and minima) and unstable saddle points by (r m , θ m ) and (r s , θ s ) respectively. We expand the quasienergy g near the stable points (r m , θ m ) to the
Tumor necrosis factor-alpha and apoptosis signal-regulating kinase 1 control reactive oxygen species release, mitochondrial autophagy, and c-Jun N-terminal kinase/p38 phosphorylation during necrotizing enterocolitis. BACKGROUND Oxidative stress and inflammation may contribute to the disruption of the protective gut barrier through various mechanisms; mitochondrial dysfunction resulting from inflammatory and oxidative injury may potentially be a significant source of apoptosis during necrotizing enterocolitis (NEC). Tumor necrosis factor (TNF)-alpha is thought to generate reactive oxygen species (ROS) and activate the apoptosis signal-regulating kinase 1 (ASK1)-c-Jun N-terminal kinase (JNK)/p38 pathway. Hence, the focus of our study was to examine the effects of TNF-alpha/ROS on mitochondrial function, ASK1-JNK/p38 cascade activation in intestinal epithelial cells during NEC. RESULTS We found (a) abundant tissue TNF-alpha and ASK1 expression throughout all layers of the intestine in neonates with NEC, suggesting that TNF-alpha/ASK1 may be a potential source (indicators) of intestinal injury in neonates with NEC; (b) TNF-alpha-induced rapid and transient activation of JNK/p38 apoptotic signaling in all cell lines suggests that this may be an important molecular characteristic of NEC; (c) TNF-alpha-induced rapid and transient ROS production in RIE-1 cells indicates that mitochondria are the predominant source of ROS, demonstrated by significantly attenuated response in mitochondrial DNA-depleted (RIE-1-rho) intestinal epithelial cells; (d) further studies with mitochondria-targeted antioxidant PBN supported our hypothesis that effective mitochondrial ROS trapping is protective against TNF-alpha/ROS-induced intestinal epithelial cell injury; (e) TNF-alpha induces significant mitochondrial dysfunction in intestinal epithelial cells, resulting in increased production of mtROS, drop in mitochondrial membrane potential (MMP) and decreased oxygen consumption; (f) although the significance of mitochondrial autophagy in NEC has not been unequivocally shown, our studies provide a strong preliminary indication that TNF-alpha/ROS-induced mitochondrial autophagy may play a role in NEC, and this process is a late phenomenon. METHODS Paraffin-embedded intestinal sections from neonates with NEC and non-inflammatory condition of the gastrointestinal tract undergoing bowel resections were analyzed for TNF-alpha and ASK1 expression. Rat (RIE-1) and mitochondrial DNA-depleted (RIE-1-rho) intestinal epithelial cells were used to determine the effects of TNF-alpha on mitochondrial function. CONCLUSIONS Our findings suggest that TNF-alpha induces significant mitochondrial dysfunction and activation of mitochondrial apoptotic responses, leading to intestinal epithelial cell apoptosis during NEC. Therapies directed against mitochondria/ROS may provide important therapeutic options, as well as ameliorate intestinal epithelial cell apoptosis during NEC. Introduction Necrotizing enterocolitis (NEC) is the most common gastrointestinal surgical emergency in premature low birth-weight neonates, where prematurity is the single most important risk factor. Although several contributing factors for NEC have been identified, such as ischemia, bacteria, cytokines and enteral feeding, the exact mechanisms for its pathogenesis remain elusive. The clinical presentation of NEC is often nonspecific and unpredictable. NEC frequently involves diffuse areas of bowel necrosis and perforation, necessitating emergency operation. 
The presence of pneumatosis intestinalis detected on plain abdominal radiographs remains as a pathognomonic clinical feature. Despite extensive research during the past two decades, the exact pathogenesis of NEC for premature neonates remains ill-defined and largely unknown, hence, limiting development of novel preventive strategies. Reactive oxygen species (ROS), generated as a result of ischemia-reperfusion injury to the gut, have been linked to the development of NEC in premature infants. 1,2 However, cytokines are also thought to play a role in ROS generation, contributing to severe gut inflammation and injury during NEC hallmarked by the exaggerated inflammatory responses by the premature immune system. 3,4 Tumor necrosis factor (TNF)α, a pro-inflammatory cytokine implicated in various inflammatory diseases of the small intestine, 5,6 is thought to contribute to the pathogenesis of NEC. Recent in vivo NEC studies have demonstrated a significant decrease in the severity and incidence of intestinal injury with anti-TNFα therapy. 7,8 TNFα-induced oxidative stress via mitochondrial ROS (mtROS) generation has recently emerged as a new mechanism of inducing cellular injury. Several studies have shown that mtROS generated by TNFα can oxidize the reduced thioredoxin-apoptosis signal-regulating kinase 1 (Trx(SH) 2 -ASK1) complex, [9][10][11][12][13] thus initiating its dissociation and inducing activation of ASK1 and its downstream stress signaling targets such as c-Jun N-terminal kinase (JNK) and p38 mitogen-activated protein kinase (MAPK) pathways. [14][15][16] ASK1 is a member of the MAPK family, and is an upstream activator of JNK and p38-MAPK signaling cascades. 17 ASK1 can be activated in response to TNFα/ROS and triggers various biological responses such as apoptosis, inflammation, differentiation and survival in various cell types. 13,[18][19][20][21][22] TNFα/mtROS-induced Trx oxidation results in the dissociation of the Trx(SH) 2 -ASK1 complex, release of ASK1 and activation of p38 MAPK and JNK stress response pathways. 16,[23][24][25][26] Thus, ASK1 is important in the regulation of ROS and inflammation-induced apoptotic signaling in injured cells; 16,27,28 however, its role in intestinal epithelial cells during oxidative injury is unknown. Oxidative stress and inflammation may contribute to the disruption of the protective gut barrier through various mechanisms; however, mitochondrial dysfunction resulting from inflammatory and oxidative injury may potentially be a significant source of apoptosis during NEC. Hence, the focus of our study was to examine the effects of TNFα/ROS on mitochondrial function, and ASK1-JNK/p38 cascade activation in intestinal epithelial cells. evaluate mouse intestinal sections for evidence of autophagy. Initially, we treated RIE-1 cells with TNFα for various time points (15,30,60, 90 min and 24 hours), and labeled cells with organellespecific dyes, MitoTracker (mitochondria, red fluorescence) and LysoTracker (lysosomes, green fluorescence). Laser scanning confocal microscopy did not reveal significant mitochondrial autophagy at early time points (data not shown); however, when cells were treated with TNFα for 24 hours (h), and then labeled, we found significant co-localization of damaged mitochondria with lysosomes (yellow fluorescence) and typical morphologic changes (Fig. 2D) consistent with mitochondrial damage and autophagy of dysfunctional mitochondria. 
These findings suggest that TNFα-induced mitochondrial autophagy is a late phenomenon in contrast to more rapid and transient mitochondrial ROS production, MMPΔ, reduction in cellular respiration, activation of mitochondrial apoptotic signaling pathways and JNK/p38 stress pathways. Late autophagy may represent cellular inability to cope with overwhelming burden of damaged mitochondria. When taken together with RIE-1 cell death ELISA results (Fig. 3A) after TNFα treatment, these data suggest that mitochondrial autophagy may play a pro-apoptotic role during cytokineinduced injury in intestinal epithelial cells. Cross-sectional views of mouse NEC intestinal villi showed ubiquitous autophagic vacuolization in intestinal epithelial cells in contrast with healthy controls (Fig. 2E). The degree of vacuolization is significant and may represent an adaptive function in stressed intestinal epithelial cells in vivo. Histological evaluation and electron scanning microscopy of human NEC sections would be helpful in examining autophagic vacuolization in intestinal epithelial cells. TNFα-induced RIE-1 cell apoptosis is mitochondriadependent. Next, we sought to examine whether TNFα induces apoptosis in intestinal epithelial cells. We treated RIE-1 and RIE-1-ρ° cells with TNFα and DNA fragmentation was quantitated. TNFα-induced RIE-1 cell death was significantly transient drop in MMP, increased permeability and "leakiness" of the mitochondrial membrane which could result in the release of an apoptosis-activating molecule such as cytochrome c into the cytosol. MMP depolarization is an important early indicator of apoptotic signaling activation, and hence, transient and rapid MMPΔ in response to cytokine-induced injury demonstrates mitochondrial susceptibility in RIE-1 cells. The oxygen consumption level in TNFα-treated RIE-1 cells was measured using a Clark-type electrode. TNFα treatment induced a significant decrease in oxygen consumption level of RIE-1 cells within the first minute of treatment with relatively depressed levels; this effect persisted for 5 min after TNFα treatment (Fig. 2C). This finding demonstrates that mitochondrial functional changes occur rather rapidly in response to TNFα, and that the mitochondrial oxygen consumption is rapidly decreased within the first minute of TNFα exposure. Taken together, these results demonstrate that TNFα induces significant mitochondrial dysfunction in intestinal epithelial cells, resulting in functional derangements such as increased production of mtROS, significant alteration in MMP and decreased oxygen consumption. Organelle autophagy occurs as a result of cellular injury. Hence, we next examined the effects of TNFα treatment on mitochondrial autophagy in RIE-1 cells and sought to western blot analysis. The expression of mitochondrial apoptotic markers (apoptosis-inducing factor (AIF), APAF-1, cytochrome c) and ATP synthase-β, a marker for the activity of the electron transport chain, were increased in RIE-1 cells after TNFα treatment (Fig. 3C). This effect peaked at 15 min and returned to basal levels by 60 min. In contrast, the mtDNA-depleted RIE-1-ρ° cells displayed no significant alteration in expression levels of mitochondrial apoptotic markers, with the exception of a minimal increase in cytochrome c release at 15 min. This finding may represent either delayed protein degradation or altered nuclear encoding of mitochondrial proteins and enzymes that is unaffected by mtDNA silencing. decreased in TNFα-treated RIE-1-ρ° cells (Fig. 3A). 
These data imply that TNFα-exerted cellular injury mechanism(s) is predominantly mitochondria-dependent in RIE-1 cells. When baseline levels of mitochondrial apoptotic markers such as Apoptotic protease activating factor 1 (APAF-1) and cytochrome c in RIE-1 and RIE-1-ρ° cells are compared, the mitochondrial expression of these apoptotic molecules is significantly reduced in mtDNA-silenced RIE-1-ρ° cell line (Fig. 3B). Hence, the effect of cytokine-induced injury may be dependent or independent of mitochondrial apoptotic arsenal. To test this hypothesis, we examined the effects of TNFα on mitochondrial apoptotic pathway activation in intestinal epithelial cells by ROS trapping and ASK1 siRNA attenuate TNFα-induced apoptosis and JNK/p38 pathway activation in RIE-1 cells. To determine the effectiveness of mitochondria-targeted potential therapies during TNFα/ROSinduced cell injury, we used ASK1 silencing via short interfering RNA (siRNA) treatment and spin-trapping compound, α-phenyl-N-tbutylnitrone (PBN). The dissociation of the Trx(SH) 2 -ASK1 complex, both cytosolic and mitochondrial, is a crucial step in activating the JNK-mediated apoptotic signaling cascade. 22-26 ASK1 inhibition can be protective during TNFα-induced cell injury. Hence, we targeted ASK1 with siRNA silencing method and examined activation of JNK and p38 apoptotic pathways, measured by phosphorylation levels of proteins following TNFα treatment (Fig. 4A). Protein analysis revealed that ASK1 silencing resulted in significant reduction of JNK and p38 phosphorylation levels in TNFα-treated cells. The differential phosphorylation of JNK isoforms was observed in RIE-1 cells, indicating a complex isoformspecific activation process induced by TNFα treatment in intestinal epithelial cells. These findings warrant future studies focusing on mitochondrial ASK1 targeting specifically and examining JNK isoform-specific activation in TNFα-treated intestinal epithelial cells. Application of spin-trapping of RIE-1 cells showed significant attenuation in TNFαinduced RIE-1 cell death (Fig. 4B), thus demonstrating a protective effect of ROS-trapping by PBN in TNFα/ROS-damaged intestinal epithelial cells. Protein analysis of cell lysates revealed significantly attenuated levels of proapoptotic cytochrome c, and marked reduction Mitochondrial dysfunction is found in many disease processes, including fulminant hepatic failure in neonates. 34,35 Susceptibility of premature neonatal intestine to significant oxidant injury during NEC as a result of mitochondrial dysfunction has not been explored. We have attempted to elucidate some of the early mitochondrial functional derangements in RIE-1 cells with TNFα treatment, and have shown that mitochondrial integrity during inflammation is compromised, leading to early activation of pro-apoptotic and mitochondria-dependent signaling in vitro (Fig. 5). It is unclear whether cellular autophagy is a pro-survival or pro-apoptotic cellular mechanism. It serves an important purpose in disease processes requiring the maintenance of healthy population of mitochondria. 36,37 Our findings show that mitochondrial damage due to TNFα/ROS elicits late mitochondrial autophagy after TNFα exposure in vitro, suggesting either slow mitochondrial turnover or delayed mitochondrial biogenesis in injured RIE-1 cells. These results suggest that TNFα affects mitochondrial homeostasis and further studies are necessary to gain insight into intestinal mitochondrial dysfunction during NEC. 
Our previous findings of oxidative stress-induced intestinal cell death and MMP collapse 38,39 along with the rapid, transient inflammation-induced mtROS production, early mitochondrial pro-apoptotic response and ASK1-JNK/p38 stress pathway activation in intestinal epithelial cells in the current study, suggest a strong role for ROS-mediated mechanism(s) of cellular damage during NEC. The selective phosphorylation of JNK isoforms, p56 and p45, in response to TNFα suggests that there may be a differential pathway of activation of downstream signaling proteins in rat intestinal epithelial cells in vitro. For example, although both JNK isoforms are phosphorylated during TNFα stimulation, the predominantly phosphorylated JNK isoform is p45. In contrast, pretreatment with the PBN scavenger specifically decreases p54 phosphorylation and not phosphorylation of p45 JNK isoform. Furthermore, ASK1 silencing with siRNA leads to suppressed phosphorylation levels of both JNK isoforms. These findings suggest a complex and possibly selective (cytosolic or mitochondrial only) responses to the TNFα-mediated activation of ASK1-JNK/p38 signaling pathways in intestinal epithelial cells in vitro. Regulation of caspase-dependent apoptotic signaling via cytochrome c and APAF-1 release from the mitochondrial matrix appears to be a specific response to TNFα treatment as compared to caspase-independent apoptotic signaling. Though both apoptotic signaling pathways are activated by TNFα, these findings support our hypothesis that injured mitochondria are a significant source of apoptotic signaling in intestinal epithelial cells during TNFα/ROS-induced injury, and can lead to potentially detrimental compromise of the gut mucosal barrier integrity. Previous in vivo study by Halpern et al. had already demonstrated significant attenuation of NEC in neonatal rat, both in severity and incidence of intestinal injury, with anti-TNFα treatment. In our study, we examined the in vitro effect of TNFα on molecular signaling mechanisms and mitochondrial functional changes in intestinal epithelial cells. Discussion Cytokines are thought to play a central role in gut inflammation and injury during NEC by inducing exaggerated inflammatory responses, leading to significant intestinal injury in premature neonates. 3,4 Previous studies have demonstrated that activation of inflammatory mediators such as TNFα, IL-1, NFκB, toll-like receptors (TLR), IL-8 and inducible NO synthase (iNOS) may play a significant role in the pathogenesis of NEC. 7,29,30 Recently, we demonstrated an anti-inflammatory action of peroxisome proliferator-activated receptor (PPAR)γ using in vivo model of NEC, and inhibition of NFκB pathway, a critical transcription factor for the activation of inflammatory mediators and cytokines. 31 De Plaen et al. also have demonstrated that inhibition of NFκB pathway during NEC ameliorates bowel injury and improves survival in vivo. 29 Recent studies have also focused on molecular signaling mechanism(s) within the TLR signaling pathway 32 and cyclooxygenase-2 (COX-2) in intestinal homeostasis and inflammation associated with NEC. 33 Modulating early cellular inflammatory pathways during NEC may improve overall survival for neonates. Previously, we investigated the effects of ROS, generated as a result of ischemia-reperfusion injury to the gut, and activation of apoptotic and survival signaling during NEC. 
The aim of the present study was to examine the effects of pro-inflammatory TNFα on mitochondrial dysfunction in intestinal epithelial cells during NEC, since premature neonatal gut is thought to be more susceptible to inflammatory cascade activation and oxidative injury. In this study, we observe significant intestinal expression levels of pro-inflammatory TNFα and apoptotic ASK1 molecules throughout all layers of the intestine. This indicates that intestinal inflammatory process during NEC is in fact transmural. Tissue ASK1 levels also share similar pattern of apoptotic activation and injury in premature neonatal gut when compared with TNFα. One may argue that the human tissue expression levels may not fully reflect the exact extent of apoptotic signaling, and likely represent a late stage of the disease. Early NEC tissue analysis is a limiting factor in understanding early intestinal responses and expression levels of TNFα and ASK1. Complex relationship between inflammation, oxidative stress and activation of apoptotic signaling in intestinal epithelial cells may largely depend on early mitochondrial responses during cellular injury. We demonstrate significant rise in mtROS production, alteration of mitochondrial function, activation of ASK1-JNK/ p38 stress signaling and mitochondrial apoptotic cascade in rat intestinal epithelial cells with TNFα stimulation. Neonatal tissue staining together with our in vitro data demonstrate that inflammation and oxidative stress can induce significant mitochondrial deregulation and activation of mitochondria-selective apoptotic response, and may possibly lead to intestinal epithelial cell death in premature neonatal gut during NEC. are the main source of intracellular ROS during TNFα exposure in intestinal epithelial cells; and (iii) mitochondria are susceptible to TNFα injury; (iv) activation of ASK1-JNK/p38 and mitochondrial apoptotic pathways occurs during inflammation-mediated ROS injury in intestinal epithelial cells, suggesting a central role for mitochondrial dysfunction during TNFα-induced oxidative stress. Therapies directed against mitochondria/ROS may provide important therapeutic options, as well as ameliorate intestinal epithelial cell apoptosis during NEC. Human intestinal NEC sections. Paraffin-embedded intestinal sections from 20 neonates with NEC and 3 neonates with noninflammatory condition of the gastrointestinal (GI) tract (intestinal atresia; control) undergoing bowel resection were analyzed. Intestinal tissues were fixed and paraffin-embedded for further analysis. Control and NEC sections (5 μm) were prepared for immunohistochemical analysis. Sections were incubated with rabbit anti-TNFα (1:200) and anti-ASK1 (1:100) antibodies overnight at 4°C, then incubated with an anti-rabbit secondary antibody and stained with DAB chromogen (Dako Cytomation EnVision ® + System-HRP (DAB) kit, Carpinteria, CA). Slides were washed, counterstained with hematoxylin, dehydrated and Based on our findings, we propose that developing therapies specifically directed against mitochondria may be beneficial in reducing activation of apoptotic signaling cascades as a result of cytokine-mediated oxidative stress in intestinal epithelial cells during NEC. Increasing ROS production by the electron transport chain of dysfunctional mitochondria can be attenuated by various ROS scavenging compounds. 
Our successful use of the spin-trapping compound, PBN, which protects RIE-1 cells from TNFα/ROS-mediated death and attenuates activation of ASK1-JNK/p38 apoptotic pathway signaling, suggests that ROS scavengers may be beneficial. PBN is widely used for ROS scavenging, and most importantly, has been shown to reverse the age-related oxidative changes and to reduce oxidative damage from ischemia/reperfusion injury. [40][41][42][43][44] The antioxidant activity of PBN protects biologically important molecules from oxidative damage. Although this effect has not been demonstrated in NEC intestinal tissues, increasing scientific evidence of protective effects of free radical scavengers supports their possible use in clinical application, specifically in conditions requiring the targeting of diseases of mitochondrial dysfunction. In conclusion, we have demonstrated that: (i) the pro-inflammatory cytokine, TNFα, is abundant in neonatal NEC intestinal sections and can induce significant functional mitochondrial deregulation in intestinal epithelial cells in vitro; (ii) mitochondria Western blot analysis. Cell lysates were clarified with centrifugation (13,200 rpm, 20 min at 4°C) and stored at -80°C. Protein concentrations were determined using the method described by Bradford. 45 Equal amounts of total protein (20-30 μg) were loaded onto NUPAGE 4-12% Bis-Tris Gel and transferred to PVDF membranes, incubated in a blocking solution for 1 h (Tris-buffered saline containing 5% nonfat dried milk and 0.1% Tween 20), incubated with primary antibody overnight at 4°C, and then incubated with horseradish peroxidase-conjugated secondary antibody. Anti-β-actin antibody (1:5,000), total JNK (1:1,000) and p38 (1:1,000) were used for protein loading control. All primary antibodies were used in concentration of 1:1,000 to probe membranes. The immune complexes were visualized by ECL Plus (Amersham Biosciences, Piscataway, NJ). Quantitative densitometric analyses of all western blot bands (data not shown) were performed using ImageJ (Image Processing and Analysis in Java software, National Institutes of Health, MD). ROS trapping with α-phenyl-N-t-butylnitrone (PBN) in RIE-1 cells. PBN is one of the most widely used spin-trapping compounds that target mitochondrial ROS to reverse age-related oxidative changes and to alleviate oxidative damage from ischemia/reperfusion injury. 40-44 RIE-1 (2 x 10 4 ) cells were incubated with 0.5 mM PBN for 2 h at 37°C. To control for the effects of DMSO, a control group of cells were incubated in fresh media with DMSO for 2 h. After incubation, cells were treated with TNFα (10 ng/mL) for 15 min, and protein was harvested for western blot analysis. ASK1 siRNA transfection of RIE-1 cells. Rat SMART pool ASK1 and non-targeting control (NTC) siRNA duplexes (Dharmacon, Lafayette, CO) were used for transfection by electroporation (400 V/500 μF) in RIE-1 cells. Cells were maintained in medium for 48-72 h, then treated with TNFα (10 ng/ mL) for various time points. Extracted protein was analyzed by western blotting for phospho-JNK and phospho-p38 expression levels. JC-1 assay for detection of MMP changes. To determine the effects of TNFα on MMP, we used MitoProbe JC-1 Assay kit (Molecular Probes, Eugene, OR). The collapse in the electrochemical gradient across the mitochondrial membrane was measured using JC-1 fluorescent cationic dye. RIE-1 cells (1 x 10 6 ) were treated with TNFα (10 ng/mL), washed with PBS and incubated with 2 μM JC-1 for 15 min at 37°C in darkness. 
Cells were washed again and analyzed on a FACScan flow cytometer. cover slipped. For negative control, sections were stained with rabbit IgG (not shown). In vivo murine NEC model. Timed pregnant Swiss-Webster mice were purchased (Charles River Labs, Pontage, MI) and pup littermates were randomized to either control or NEC group. All mice in NEC groups were housed in a water bath at 37°C. They were hand-fed KMR liquid milk replacer formula (0.3 cc/g/day; q 3 h) using an animal feeding needle (24 g, ballpoint; Popper & Sons, New Hyde Park, NY). Control mice were maternally reared. To induce NEC, pups were stressed twice daily with hypoxia by placing them in a plexi-glass chamber, breathing 5% oxygen for 10 min. Each mouse was monitored daily for the clinical severity of NEC by assessing the level of activity, oral intake, weight change and abdominal exam findings. Mice that developed abdominal distention, respiratory distress and lethargy during the first 96 h of the experiment were sacrificed. After 96 h, all surviving mice were sacrificed and distal ileum was harvested for analysis. Segments of ileum were fixed in formalin and stored in 70% ethanol for paraffin embedding. Intestinal sections were stained with hematoxylin and eosin for analysis. Cell lines and culture techniques. RIE-1 cells (a gift from Dr. Kenneth D. Brown; Cambridge Research Station, Cambridge, UK) were maintained in Dulbecco's modified Eagle medium (DMEM) supplemented with 5% fetal bovine serum. Cells were maintained at 37°C under an atmosphere containing 5% CO 2 . Tissue culture media and reagents were obtained from Invitrogen (Carlsbad, CA). To determine whether mitochondria are the main source of TNFα-induced intracellular ROS production, and that activation of apoptotic pathways is mitochondrial ROS-dependent, we established a RIE-1-ρ° from the parent RIE-1 cell line. Silencing of mitochondrial function was achieved by maintaining RIE-1 cells in isolation medium supplemented with 5% fetal bovine serum, ethidium bromide, EtBr; 0.1 μg/mL, uridine (50 μg/ mL), pyruvate (100 μg/mL) and glucose (4.5 g/L). To confirm successful mtDNA depletion in RIE-1-ρ° cell population, cells were evaluated with FACS cell sorting method for EtBr binding after 14-16 passages and cell lysates were analyzed for decreased mitochondrial cytochrome oxidase subunit 1 protein level using western blot analysis (data not shown). Mitochondrial isolation. To determine the baseline expression of mitochondrial apoptotic markers in RIE-1 and RIE-1-ρ° (2 x 10 7 ) cells, mitochondria were isolated using a mitochondrial isolation kit according to a manufacturer's protocol (Pierce, Rockford, IL). Mitochondrial lysates were analyzed for mitochondrial apoptotic markers (APAF-1, cytochrome c) by western blot. Cell death detection ELISA. RIE-1 and RIE-1-ρ° cells were plated onto 24-well plates for 24 h prior to TNFα treatment. Cells were incubated with TNFα (10 ng/mL). To determine the effects of ROS scavenging on intestinal epithelial cell survival, RIE-1 cells were first pretreated with α-phenyl-N-t-butylnitrone (PBN; 0.5 mM) for 2 h, and then incubated with TNFα (10 ng/mL). DNA fragmentation was evaluated using a Cell Death Detection 1 ml of suspended cells was placed in the respirometry chamber, and oxygen consumption was monitored for 3 min to establish a basal consumption rate. TNFα (10 ng/mL) was added, and O 2 consumption was monitored for approximately 5 min. 
Potassium cyanide (KCN; 2 mM) was added to another aliquot of cells to determine the rate of non-mitochondrial respiration. Rates of O 2 consumption were calculated over a linear, one minute interval. All experiments were repeated three times. Statistical analysis. Minimum of three sets of experiments were performed to reproduce the results. The data were analyzed separately for each set using Kruskal-Wallis test. For oxygen consumption study, the effect of TNFα was assessed using one-sample t test against 100%. All effects and interactions were assessed at the 0.05 level of significance. Statistical computations were carried out using SAS 9.1 ® . Confocal microscopy for ASK1 expression and mitochondrial autophagy. To examine the effects of TNFα on ASK1 expression and mitochondrial autophagy, RIE-1 cells (2 x 10 4 ) were grown in glass chambers overnight, and then treated with TNFα. Cells were first incubated with PBS containing 1% bovine serum albumin, and then incubated with ASK1 antibody and AlexaFluor ® 647-labeled goat anti-rabbit IgG to determine ASK1 expression. For mitochondrial autophagy, treated RIE-1 cells were incubated with organelle-specific dyes from MitoTracker and LysoTracker labeling kits according to the manufacturer's instructions (Molecular Probes, Eugene, OR). All cells were counterstained with Hoechst 33342 nuclear stain. Zeiss LSM 510 UV Meta laser scanning confocal microscopy was used to visualize mitochondrial and lysosomal co-localization. Data was analyzed using Zeiss LSM 5 Image software. Mitochondrial oxygen consumption (QO 2 ). Mitochondrial oxygen consumption was measured using a Strathkelvin Mitocell S200 micro-respirometry system (Strathkelvin; UK), which utilizes a Clark-type oxygen electrode. Respirometry was adapted from Mozo et al. 46 Chamber temperature was maintained at 37°C. The respirometer was calibrated prior to experiments using O 2 -saturated water followed by sodium dithionite to provide 100% O 2 and 0% O 2 baselines, respectively. RIE-1 cells (1 x 10 7 /ml) were trypsinized, pelleted by centrifugation, and resuspended in culture media lacking fetal bovine serum. Briefly,
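The statistical workflow just described (per-set Kruskal-Wallis tests, and a one-sample t test of post-TNFα oxygen consumption against 100 %) was carried out in SAS 9.1. As a rough illustration only, a minimal Python equivalent might look as follows; the group labels and all numerical values are hypothetical placeholders, not the study's data.

```python
# Illustrative re-implementation (the paper used SAS 9.1); group labels and all
# numbers below are hypothetical placeholders, not the study's measurements.
import numpy as np
from scipy import stats

# Example readings for one experimental set, three treatment groups
control = np.array([1.00, 0.95, 1.08])
tnf     = np.array([2.10, 1.85, 2.30])
tnf_pbn = np.array([1.20, 1.35, 1.10])

# Kruskal-Wallis test within this set
h_stat, p_kw = stats.kruskal(control, tnf, tnf_pbn)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.3f}")

# Oxygen consumption after TNFalpha, expressed as percent of the basal rate,
# tested against 100 % (i.e., no change) with a one-sample t test
qo2_percent_of_basal = np.array([78.0, 83.5, 74.2])
t_stat, p_t = stats.ttest_1samp(qo2_percent_of_basal, popmean=100.0)
print(f"one-sample t vs 100 %: t = {t_stat:.2f}, p = {p_t:.3f}")
```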
5,857.6
2009-11-01T00:00:00.000
[ "Biology", "Medicine" ]
Nondestructive shape process monitoring of three-dimensional, high-aspect-ratio targets using through-focus scanning optical microscopy Low-cost, high-throughput and nondestructive metrology of truly three-dimensional (3D) targets for process control/monitoring is a critically needed enabling technology for high-volume manufacturing (HVM) of nano/micro technologies in multi-disciplinary areas. In particular, a survey of the typically used metrology tools indicates the lack of a tool that truly satisfies the HVM metrology needs of 3D targets, such as high-aspect-ratio (HAR) targets. Using HAR targets here we demonstrate that through-focus scanning optical microscopy (TSOM) is a strong contender to fill the gap for 3D shape metrology. Differential TSOM (D-TSOM) images are extremely sensitive to small and/or dissimilar types of 3D shape variations. Based on this, we here propose a TSOM method that involves creating a database of cross-sectional profiles of the HAR targets along with their respective D-TSOM signals. Using the database, we present a simple-to-use, low-cost, high-throughput and nondestructive process-monitoring method suitable for HVM of truly 3D targets, which also does not require optical simulations, making its use straightforward and automatable. Even though HAR targets are used for this demonstration, the similar process can be applied to any truly 3D targets with dimensions ranging from micro-scale to nano-scale. The TSOM method couples the advantage of analyzing truly isolated targets with the ability to simultaneously analyze many targets present in the large field-of-view of a conventional optical microscope. Several metrology tools are currently available or have been proposed [5,9,10,, for 3D shape measurements. Popular industrial metrology tools currently used are electron-based tools (e.g. scanning electron microscope (SEM)), probe-based tools (e.g. atomic force microscope (AFM)), and optics-based tools (e.g. scatterometers). The National Institute of Standards and Technology (NIST) pioneered the x-ray tool referred to as critical dimension small angle x-ray scattering (CD-SAXS) [31] that has attracted much attention from the semiconductor industry. Combination of the results of more than one measurement technique, referred to as either hybrid or holistic metrology, initially pioneered at NIST further improved nanometer-scale dimensional measurements [36,42]. To be used in high volume manufacturing, a metrology tool must-in addition to providing statistically significant results [12]-be fast (high-throughput), low-cost, inline capable, automated, robust, easy to use, non-contact and non-destructive. The requirements for satisfactory measurement sensitivity and resolution have been identified in the International Technology Roadmap for Semiconductors (ITRS) [43] and International Roadmap for Devices and Systems (IRDS) 2017 Edition: Metrology [44]. All currently available tools have certain advantages and disadvantages. It is difficult to find a metrology tool that satisfies all the abovementioned requirements, especially for metrology of 3D/HAR targets. It is very difficult to use top-down SEM imaging for 3D shape analysis of HAR targets at high-throughput with sufficient measurement resolution. Probe-based tools, such as AFM, have limitations in reaching the bottom of HAR targets due to probe length and width constraints and generally do not meet high-throughput requirements. 
CD-SAX tools currently are too expensive to be industrially relevant for high-volume manufacturing (HVM). Among non-destructive metrology tools traditionally used in nano/micro technologies, optical tools are usually better suited for inline metrology applications. Optical tools, such as spectral reflectometry [35] and interferometry [16] are available for high-throughput depth measurement of HAR targets but are not capable of determining 3D shape. The workhorse of the semiconductor industry, scatterometry, is another optics-based technique widely used for measurement of shallow repeated structures, but is limited in its ability for 3D shape analysis of deeper and/or isolated HAR targets [13]. Model-based infrared reflectometry (MBIR) technique is reported to measure depth, top and bottom critical dimensions of TSVs and HAR targets [45]. But the MBIR technique also relies on simulations similar to scatterometry and is limited in its ability for dimensional analysis of individual HAR targets. There seems to be a gap in HVM metrology tools for complete shape analysis of truly 3D or HAR targets. It would be advantageous if the tool also does not rely on optical simulations. We demonstrate here how a NIST-developed optics-based metrology tool, through-focus scanning optical microscopy (TSOM) [18,43,44,[46][47][48][49][50][51][52][53][54][55], could fill this gap. The TSOM image is generated from a set of images, each captured at a slightly different focus (i.e. through-focus), using a lowcost, conventional optical microscope. Thus, TSOM collects and preserves the entire through-focus optical intensity information in 3D space. The collected set of through-focus two-dimensional optical images are then stacked at their respective focus positions creating a 3D space filled with the optical intensities. From this 3D space, extracting and plotting the optical intensities in a vertical cross-sectional plane results in a TSOM image. In the TSOM image, the X and the Y axes represent the distance and the focus positions, respectively. The color represents the optical intensity. A TSOM image depicts variations in the optical intensities with focus position. The color pattern enables visualization of the variations in the optical intensities easily. The TSOM images were then normalized [52,56,57]. The normalization procedure nearly eliminates the effect of variations in the experimental conditions such as the illumination source intensity, camera exposure time, frame rate. However, it is advisable to optimize conditions to reduce noise [57,58]. D-TSOM images are generated by taking a pixel-by-pixel difference between two TSOM images obtained using two different targets. D-TSOM images expose small (down to sub-nanometer) differences embedded in nominally identical targets. The color patterns of D-TSOM images are usually distinct for different types of parameter changes and serve as a 'fingerprint' for different types of parameter variations. D-TSOM images are qualitatively similar for different magnitude changes in the same parameter. However, the optical content of D-TSOM images is proportional to the magnitude of the dimensional differences. We developed a metric we call the optical intensity range (OIR), which provides a quantitative estimate of the difference between two images. The OIR is the absolute optical range (i.e. the difference between the maximum and the minimum optical intensity) of the D-TSOM image, multiplied by 100 [52,56]. 
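A minimal sketch of the two quantities defined above, the pixel-by-pixel D-TSOM difference and the OIR, is given below. It assumes the TSOM images are already normalized two-dimensional arrays on a common distance/focus grid; the function names, array shapes and the synthetic example are illustrative and are not taken from the TSOM software referenced in the text.

```python
# Minimal sketch of the D-TSOM / OIR computation described above. TSOM images
# are assumed to be normalized 2-D arrays (focus x distance); names and shapes
# are illustrative assumptions, not the paper's software.
import numpy as np

def differential_tsom(tsom_ref: np.ndarray, tsom_test: np.ndarray) -> np.ndarray:
    """Pixel-by-pixel difference between two normalized TSOM images."""
    if tsom_ref.shape != tsom_test.shape:
        raise ValueError("TSOM images must share the same distance/focus grid")
    return tsom_test - tsom_ref

def optical_intensity_range(d_tsom: np.ndarray) -> float:
    """OIR: (max - min) optical intensity of the D-TSOM image, multiplied by 100."""
    return float((d_tsom.max() - d_tsom.min()) * 100.0)

# Example with synthetic data standing in for two measured TSOM images
rng = np.random.default_rng(0)
tsom_ref = rng.normal(1.0, 0.01, size=(84, 64))        # reference die
tsom_test = tsom_ref + rng.normal(0.0, 0.02, size=tsom_ref.shape)

d_img = differential_tsom(tsom_ref, tsom_test)
print(f"OIR = {optical_intensity_range(d_img):.1f}")
```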
The utility of D-TSOM [18,47,52,53,59] is that the color pattern of the D-TSOM image is an indicator of the difference in 3D shape, while the magnitude of the OIR scales with the dimensional difference between the two targets. Developments in image acquisition techniques have significantly reduced the acquisition time for a set of through-focus images to be as fast as a single conventional microscope image [50,54,[60][61][62] making TSOM suitable for HVM. Here we present a comprehensive study that demonstrates the applicability of TSOM for shape analysis of truly 3D trench targets with HAR. We show that the scattered light, which could be due to multiple scattering, contains the 3D shape information. TSOM facilitates extraction of this useful information and allows us to propose a simple, cost-effective, non-destructive, 3D shape process control method. HAR target fabrication The current work uses a 300 mm silicon wafer with HAR targets in a SiO 2 layer. First, a 1.1 μm thick SiO 2 film was deposited on the silicon wafer. HAR features covering an area of 150 μm × 150 μm were etched into the oxide film with a nominal CD, depth and pitch of 100 nm, 1100 nm and 1000 nm, respectively. Exposure and etch conditions were varied across the wafer, as shown in figure 1(a), to provide slight variations in the feature structural parameters. This provides a systematic dimensional variation in the HAR targets across the wafer. Typical SEM and optical images are shown in figures 1(b) and (c), respectively. FIB cross-section Cross-sectional analysis of the HAR targets was performed using a focused ion beam scanning electron microscope (FIB SEM) equipped with a gas injection system. First, the HAR target areas were filled with Pt using the primary electron beam at 10 keV landing energy and with 1.6 nA beam current at 0° sample stage tilt. Once the targets were filled in, additional 1 μm thick Pt layer was deposited using the 30 keV, 800 pA ion beam. This layer was deposited to protect the sample surface during the cross-sectioning steps. Rough milling to remove the bulk of the material was performed at 2.5 nA ion beam current. The exposed cross-sectional face was further cleaned with a lower ion beam current (0.79 nA) fine milling. The cross-sectional face was imaged using a 2 keV, 100 pA electron beam with a through-the-lens detector (TLD) in immersion mode. A typical cross-sectional image thus obtained is shown in figure 1(d). The following procedure was used to determine the precise cross-sectional profiles of the HAR targets. A typical large-area FIB cross-sectional image is shown in figure 2(a). A highly magnified cross-sectional profile of a single HAR trench is shown in figure 2(b). A line was first drawn along the SiO 2 -Si interface at the bottom ((a1) in figure 2(b)). A perpendicular line (blue wide bar with a yellow central line) was drawn to this boundary passing through the middle of the trench at one-third of the distance from the top ((a2) in figure 2(b)). Eleven horizontal lines were drawn at the predefined depths covering the entire depth of the trench ((a3) figure 2(b)). Distances of the left and right profiles from the central vertical line were then carefully measured at the 11 horizontal line locations, providing the cross-sectional profile of the trenches. A cubic spline fit was drawn through the 11 points on the left and the right separately resulting in the left and right profiles. For each die, a minimum of six such profile measurements were made on six different trenches. 
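As a rough illustration of the profile reconstruction just described, the sketch below fits a cubic spline through eleven per-depth half-width measurements of one sidewall and averages several such per-trench profiles; the averaging step itself is described in the text that follows. The depth grid and all measurement values are hypothetical placeholders.

```python
# Sketch of the FIB cross-section profile reconstruction: a cubic spline through
# the 11 per-depth half-widths of one sidewall, repeated per trench and averaged.
# The depth grid and the half-width numbers are hypothetical placeholders.
import numpy as np
from scipy.interpolate import CubicSpline

depths_nm = np.linspace(0.0, 1100.0, 11)     # 11 predefined depths over the trench

def sidewall_profile(half_widths_nm: np.ndarray, n_points: int = 200):
    """Cubic spline through the 11 measured half-widths of one sidewall."""
    spline = CubicSpline(depths_nm, half_widths_nm)
    z = np.linspace(depths_nm[0], depths_nm[-1], n_points)
    return z, spline(z)

# Hypothetical half-widths (distance from the central vertical line) for the
# left sidewall of six trenches in one die, one row per trench
left_measurements = 50.0 + np.random.default_rng(1).normal(0.0, 1.5, size=(6, 11))

profiles = np.array([sidewall_profile(m)[1] for m in left_measurements])
z, _ = sidewall_profile(left_measurements[0])
mean_profile = profiles.mean(axis=0)          # die-level mean profile
std_profile = profiles.std(axis=0, ddof=1)    # spread, as reported in figure 3
```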
The final cross-sectional profile is a mean of these six measurements. The mean cross-sectional profile obtained in such a manner is shown in figure 3 (blue profile) for the central reference die along with the standard deviation which is shown as red profiles. The lowest measured cross-sectional profile standard deviation was observed at about one-third of the depth from the top as highlighted by a red box in the graph with the expanded x-scale, on the right of figure 3. TSOM Experiments A commercially available conventional, bright-field optical microscope in the reflection mode was used to collect the TSOM images. The optical microscope was designed for Kohler illumination. A light-emitting diode (LED) was used as an illumination source. A narrow, band-pass filter was used to obtain an illumination wavelength of 520 nm (±5 nm). TSOM images were captured using a 40x magnification objective with 0.75 numerical aperture (NA) and 0.25 illumination NA. An image of approximately 55 μm × 40 μm was captured using a cooled, monochrome CCD camera (692 x 520 pixels). A width of 0.5 μm (along the trenches) of the image at the center of the field-of-view (FOV) was averaged to obtain a mean intensity profile. From this, 2 μm length (across the trenches) at the center of the extracted profile was used from all of the throughfocus images to construct the TSOM images. A through-focus step height of 300 nm, and a total through-focus scan range of 25 μm was used to collect the set of through-focus images. The experimental data were collected using 0° illumination polarization (E-field perpend icular to the trenches) which provided higher sensitivity. Other typical processing conditions used and the effect of optical parameters can be found in [56,57]. The through-focus optical images forming the 3D optical data set were analyzed using an in-house developed software program. TSOM data were collected at the center of the target. Three sets of TSOM data from the 80 usable dies across the wafer were collected. For this work, we considered the target in the center die (0,0) as the reference. The TSOM image processing and normalization procedure can be found in earlier publications [56,57]. An example of one TSOM image is shown in figure 1(e). D-TSOM images were evaluated by subtracting the TSOM image of the reference target from each of the TSOM images associated with the targets in the other 80 dies. A typical D-TSOM image is shown in figure 1(f); note the scale on the color bar as compared to that of figure 1(e). TSOM data Even though we expect to see variation between the targets due to the process variation shown in figure 1(a), the color patterns for the TSOM images appear to be nearly identical for all the targets on the different dies. In contrast with the TSOM images, the color patterns of the D-TSOM images vary substantially based on the die selected. Typically, a sub-section of the D-TSOM image as highlighted by a red rectangle in figure 1(f) contains the strongest color pattern. Hence, we selected this sub-section (maintaining the same focus and distance ranges) from all the D-TSOM images and created At a first glance, the D-TSOM image color patterns appear to be varying widely. However, careful observation shows that they are mostly variations of the four basic color patterns (T1, T2, T3 and T4) as identified in figure 4(b). Figure 4(c) converts the ranges of optical strength in the D-TSOM images into OIR values, which are set at their respective die locations. 
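Referring back to the acquisition settings described above, the following sketch assembles a TSOM image from a through-focus image stack by averaging a narrow strip along the trenches and keeping a short segment across them; the strip/segment indices, the stack size and the one-line normalization are assumptions for illustration, not the published processing procedure.

```python
# Sketch of assembling a TSOM image from a through-focus image stack: average a
# narrow strip along the trenches in every frame, keep a short segment across
# the trenches, and stack the resulting profiles against focus position.
import numpy as np

def tsom_image(stack: np.ndarray, strip_rows: slice, segment_cols: slice) -> np.ndarray:
    """
    stack: through-focus frames, shape (n_focus, n_rows, n_cols),
           e.g. one frame per 300 nm focus step over a 25 um scan range.
    strip_rows: rows covering ~0.5 um along the trenches at the FOV centre.
    segment_cols: columns covering ~2 um across the trenches at the centre.
    Returns an (n_focus, n_segment_cols) array: focus position vs. distance.
    """
    profiles = stack[:, strip_rows, :].mean(axis=1)   # mean intensity profiles
    return profiles[:, segment_cols]

# Synthetic stand-in for a measured stack (reduced size for the example)
stack = np.random.default_rng(2).normal(1.0, 0.05, size=(84, 120, 160))
img = tsom_image(stack, strip_rows=slice(55, 65), segment_cols=slice(64, 96))
img = img / img.mean()    # crude normalization; the published procedure differs
```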
One can see that the upper semicircle of figure 4(a) is mostly filled with T1 or T2 types of 3D shape differences and the lower semicircle is mostly filled with T3 or T4 types of differences. Further, the larger OIR of the D-TSOM images on the right side of the semicircle in figure 4(c) indicates larger dimensional difference targets compared to the left side of the semicircle. While these observations suggest how much the different HAR structures vary, one needs additional information to know what type of shape or dimensional differences the T1, T2, T3 and T4 type of D-TSOM images represent. 3D profile analysis with TSOM For the purposes of this analysis, we compared information collected using D-TSOM and FIB-SEM cross-section. After comparing the cross-sectional profiles of the selected reference targets, a correlation between the D-TSOM images and the geometry was identified. A summary of this correlation is presented in figure 5. The four typical types of D-TSOM images identified in figure 4(b) can be correlated with the following cross-sectional characteristics. • Type T1 D-TSOM image ( figure 5(a1)). It is associated with mostly symmetric profile differences from top to bottom, with nearly similar width at the top but narrower width of the production target at the bottom ( figure 5(a2)). Schematically we can represent this type of difference as shown in figure 5(a3). • Type T2 D-TSOM image (left part of figure 5(b1)). It is associated with mostly asymmetric profile differences Standard deviations of these OIR values varies between 2% to 10%, with the majority of them falling below 5%. Each die can be uniquely identified using the assigned co-ordinates in (a) and (c). from top to bottom, with nearly similar width at the top but narrower width of the production target at the bottom ( figure 5(b2)). Schematically we can represent this type of difference as shown in figure 5(b3). Since the profile differences are similar to figure 5(a2), except with asymmetry, we can propose that T2 type of D-TSOM image is a result of T1 type of profile differences with some asymmetry present. This is presented in figure 5(b1). • Type T3 D-TSOM image ( figure 5(c1)). It is associated with mostly symmetric profile differences from top to bottom with production target wider at the top but narrower at the bottom ( figure 5(c2)). Schematically we can represent this type of difference as shown in figure 5(c3). • Type T4 D-TSOM image (left part of figure 5(d1)). It is associated with mostly asymmetric profile differences from top to bottom, with the production target wider at the top but narrower at the bottom ( figure 5(d2)). Schematically we can represent this type of difference as shown in figure 5(d3). Since the profile differences are similar to figure 5(c2), except with asymmetry, we can propose that T4 type of D-TSOM image is a result of T3 type of profile differences with some asymmetry present. This is presented in figure 5(d1). Process monitoring with TSOM From these results, we identify two paths to the desired fast, low-cost, inline capable, automated, robust, easy to use, nondestructive and statistically significant metrology tool for a high-volume production environment. For the sake of argument, we can consider the target in the central die (the reference target (0,0)) as an ideal target with desirable dimensions. In a production environment, this information would be available from a 'golden' standard or reference. 
We consider the targets in the rest of the dies as exhibiting dimensional variations, as it would be typical in production. First approach. As the first approach, we propose using only the OIR values for the process monitoring. In earlier work, we showed that the magnitude of the OIR increases with the magnitude of the dimensional difference [18,47,52] between the reference and target under test. Thus, in production we can identify a maximum OIR value for an acceptable production target, for which, when exceeded, an unacceptable percentage of the related devices will fail. This first approach is an example of high-throughput and quick way of using TSOM as a simple 3D shape process monitoring method. However, it has the drawback of not identifying the type of dimensional difference the D-TSOM image represents. Since there are several types of D-TSOM image color patterns in figure 4, we can expect many types of 3D shape deviations from the reference target. From the OIR alone we are unable to single-out deviation in the dimension that is critical (usually called the critical dimension) for the application. A small deviation in the critical dimension may not be acceptable, whereas a larger deviation in a non-critical dimension may be acceptable. If the process monitoring is done purely based on the OIR values, there is a possibility that useable targets may be rejected if the high OIR value of a D-TSOM image is a result of a large non-critical dimensional difference, and vice versa. Second approach. The second course of action allows us to make intelligent process monitoring decisions with enhanced accuracy using the knowledge gained by determining the correlation between the color pattern in the D-TSOM image and the type of dimensional difference. This analysis needs determination of the complete 3D shape of targets corresponding to specific families of TSOM images. This process is similar to the library development in scatterometry, which is widely used in high-volume manufacturing. However, in the TSOM method we make use of experimentally generated library where as in the case of scatterometry optical simulations are used. For this work we developed such a library using the FIB SEM cross-sectional images described above and shown in figure 3, as well as those having similar patterns but exhibiting a range of OIR values. The D-TSOM images and the evaluated profile correlations become a library for the second approach. Equipped with the information and correlations identified in figure 5, we then propose the following steps for 3D shape process control of the HAR targets. • At first, we create accept/reject rules. These rules should be based on the process control requirement. For the sake of demonstration, we have created the following rules randomly (these rules are shown graphically in figure 6). If the OIR of the D-TSOM image is more than 12, reject the target as the dimensional differences are in excess of the tolerable limits. On the lower side, if the OIR of the D-TSOM image is less than 7, accept the target as the dimensional differences are within the acceptable level. If the OIR value is in between 7 and 12, reject the production target if the profiles are asymmetric and accept if the profiles are symmetric. • Now consider a random production target, for example the one located at position (−2,2) in figure 4. Initially, we have no knowledge of the type of target shape deviation from the reference target. 
Since its OIR value is 8.4, we then proceed to the next step where its D-TSOM image is compared with the reference library to determine which D-TSOM image from the library matches best with it. This target shows highest correlation with type T1 D-TSOM image; this can be determined by calculating the correlation coefficients by comparing the D-TSOM image with each reference image in the library. In this case, the best correlation of the target at (−2, 2) is with the type T1 ( figure 5(a)). Since this target has a mostly symmetric shape difference and its OIR value is less than 12, the final verdict is 'Acceptable'. Figure 7 illustrates this decision process as row number 1. We can unambiguously bin (accept/reject) the unknown targets at positions (1, −4), (−2, 1), and (4, 2), following the similar decision process as depicted in rows 2, 3, and 4, respectively, using only the OIR values and the correlation coefficients. Proposed TSOM-based automated 3D-shape process control method. The selected test production targets in column 1 are considered to have unknown 3D-shape profile (column 2). Comparing the D-TSOM images of the test targets with the library provides the best match (green boxes) from which the possible 3D shape difference type can be inferred (column 8). Based on the type of 3D shape difference and the magnitude of the dimensional difference (OIR), the process control decision of accept/reject status can be made. The unknown target in row 3, (−2,1), has the highest correlation coefficient with T2. However, its correlation coefficient is also close to type T1. Similar, but still-high correlation with T1 and T2 indicates that the unknown target is type T1 with asymmetry, i.e. type T2. Since correlation coefficients derived from D-TSOM images and OIR values are all numerical values, the decision process is simple and can easily be automated. The remaining acceptable dies after applying the selected process control rules to the wafer shown in figure 4 ARE presented in figure 8. Verification of the 3D shape analysis. Here we present a test case to verify the accuracy of the TSOM method of process control (figure 9). We chose two die locations (−2, 2) and (3, 0) that show type T1 D-TSOM images but with different magnitude of OIR values of 8.4 and 12.4, respectively. Since both the dies have type T1 profile differences, their cross-sectional profile differences should be similar when compared to the reference profile. However, the target in die (3,0) with a higher OIR value should have a larger profile difference compared to the target in die (−2, 2). Measured FIB cross-sectional profiles shown in figure 9 support this, i.e. a larger difference in the profile results in a larger OIR, validating the analysis made by the TSOM method. Large area and intra-die analysis. The TSOM method can also be used to identify anomalies in targets covering a large area. In figure 10, we present 50 μm long HAR sample SEM ( figure 10(a)) and D-TSOM ( figure 10(b)) images. While the D-TSOM image pattern typically matches with type T1, which is mostly symmetrical profile differences, some local variations can be identified. We highlighted two of the several local variations in the D-TSOM image by the red boxes. In figure 10(c), the red color area between the two blue color regions is less dominant in the highlighted blue box, whereas in figure 10(d), it is more dominant. This difference is a result of the underlying localized cross-sectional profile (i.e. 3D) shape differences. 
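Putting together the demonstration accept/reject rules and the library matching by correlation coefficient described above, an automated decision step could be sketched as follows. The OIR thresholds (7 and 12) and the symmetric/asymmetric type assignment follow the text; the library images and the test image are synthetic placeholders, not measured D-TSOM data.

```python
# Sketch of the automated accept/reject step: match a production D-TSOM image to
# the experimentally built library by correlation coefficient, then apply the
# demonstration rules (OIR < 7 accept, OIR > 12 reject, otherwise accept only
# symmetric-type differences). Library contents here are synthetic placeholders.
import numpy as np

SYMMETRIC_TYPES = {"T1", "T3"}          # per the correlations in figure 5

def best_library_match(d_tsom: np.ndarray, library: dict) -> tuple[str, float]:
    """Return the library type with the highest Pearson correlation."""
    scores = {
        name: np.corrcoef(d_tsom.ravel(), ref.ravel())[0, 1]
        for name, ref in library.items()
    }
    name = max(scores, key=scores.get)
    return name, scores[name]

def accept_target(d_tsom: np.ndarray, oir: float, library: dict) -> bool:
    if oir < 7.0:
        return True
    if oir > 12.0:
        return False
    match_type, _ = best_library_match(d_tsom, library)
    return match_type in SYMMETRIC_TYPES

# Usage with synthetic images standing in for measured D-TSOM data
rng = np.random.default_rng(3)
library = {t: rng.normal(size=(30, 40)) for t in ("T1", "T2", "T3", "T4")}
test_image = library["T1"] + rng.normal(0.0, 0.2, size=(30, 40))
print(accept_target(test_image, oir=8.4, library=library))   # e.g. die (-2, 2)
```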
Even though at present it is not known what type of profile differences result in this type of D-TSOM images, this demonstrates the ability of the TSOM method to highlight them easily. A further finer localized FIB-cross sectional analysis would reveal the underlying profile shape differences precisely. A similar process can also be applied to identify dimensional variations within a die. In this way, localized variations (or defects) within a large area that fits in the FOV of a microscope can be identified simultaneously using the TSOM method. TSOM has unique advantages over other metrologies such as scatterometry, as it can be used to independently analyze every 3D structure (such as the HAR target demonstrated here); and over metrologies such as SEM, as it can be used to simultaneously analyze a large number of targets present in a large field-of-view (FOV) of a microscope with measurement resolution comparable to SEMs and AFMs. TSOM can also be used to study dimensional variations within a die, not possible with either scatterometry or MBIR. It is also not possible to obtain 3D shape information using SEM non-destructively, and measure narrow spaces using AFM due to probe size limitations. In the case presented here we have demonstrated that TSOM does not have these two limitations. This paper demonstrates the basic 3D shape process control procedure using the TSOM method, wherein each step can be automated once the library is generated. In the case presented here, the library has only four major types of D-TSOM images. For other cases, there may be more or less major types of D-TSOM images in the library. However, we expect the same type of procedure to work for 3D shape process control of many types of shapes and target sizes, ranging from microscale to nano-scale targets. Conclusion In this paper, we have shown how TSOM could be applied in a high-volume manufacturing environment by using numerical signatures from measured D-TSOM images that have been developed into a reference library or database. Thus, we have proposed and demonstrated a low-cost, high-throughput, and nondestructive 3D shape process monitoring method for truly 3D or HAR type of targets using conventional optical microscopes. This tool can fill the gap by satisfying the HVM metrology needs of not only HAR but also other types of truly 3D targets for which 3D shape process control/monitoring is needed. This work indicates that targets with surfaces hidden from the direct illumination could also be analyzed using TSOM. We have also pointed out that TSOM has unique advantages over other metrologies such as scatterometry, and SEM.
6,008.8
2018-11-05T00:00:00.000
[ "Physics" ]
Iron and manganese removal from drinking water The purpose of the present study is to find a suitable method for removal of iron and manganese from ground water, considering both local economical and environmental aspects. Ground water is a highly important source of drinking water in Romania. Ground water is naturally pure from bacteria at a 25 m depth or more. However, solved metals may occur and if the levels are too high, the water is not drinkable. Different processes, such as electrochemical and combined electrochemical-adsorption methods have been applied to determine metals content in accordance to reports of National Water Agency from Romania (ANAR). Every water source contains dissolved or particulate compounds. The concentrations of these compounds can affect health, productivity, compliance requirements, or serviceability and cannot be economically removed by conventional filtration means. In this study, we made a comparison between the electrochemical and adsorption methods (using membranes). Both methods have been used to evaluate the efficiency of iron and manganese removal at various times and temperatures. We used two membrane types: composite and cellulose, respectively. Different approaches, including lowering the initial current density and increasing the initial pH were applied. Reaction kinetics was achieved using mathematical models: Jura and Temkin. Introduction Electrochemical treatment is an emerging technology used for the removal of organic and inorganic impurities from water and wastewater.Electrochemically processes involve redox reactions, where oxidation and reduction reactions are separated in space or time [1,2].Usually, the electrochemically treatment of water is concerned with electron transfer at the solution/electrode interface applying an external direct current in order to drive an electrochemical process [1,3].Electrocoagulation is an electrochemical result of destabilization agents (usually Mn or Fe ions) that neutralize the electrical charge of suspended pollutant. Electrochemically generated metallic ions from these electrodes could undergo hydrolysis near the anode to produce a series of activated intermediates that are able to destabilize finely dispersed particles present in the water/wastewater to be treated.Electrochemical treatment methods have a future as advanced technologies for additional treatment of potable water from domestically and remote areas. Filter media (type, size and area), hydraulic and solids loading rate and backwashing regimes are all important aspects of filter design.Autocatalytic removal of manganese can take place in a filter and could be critical for manganese removal.An investment in filter pilot testing could become significant. The sorption of metal ions from aqueous solution plays an important role in water pollution control and in recent years there has been considerable interest in the use of low cost adsorbents.Many researchers have tried to exploit naturally, occurring materials as low-cost adsorbents, for removing of heavy metals. Manganese and iron (especially the last) produce different problems that could be due to various causes [4].Many types of treatment are effective for the removal of iron and manganese from water, but not all methods are equally effective under any conditions. 
Oxidation of dissolved iron in water changes the iron to white, then yellow and finally to red-brown solid particles (precipitates) that settle out of the water. Iron that does not form particles large enough to settle out and that remains suspended (colloidal iron) leaves the water with a red tint. Manganese is usually dissolved in water, although some shallow wells contain colloidal manganese, leaving the water with a black tint. These sediments are responsible for the staining properties of water containing high concentrations of iron and manganese [4]. Iron and manganese are common in groundwater supplies used by many small water systems. Exceeding the suggested maximum contaminant levels (MCL) usually results in discolored water, laundry and plumbing fixtures. This, in turn, results in consumer complaints and a general dissatisfaction with the water utility. There are secondary standards set for iron and manganese, but these are not health related and are not enforceable. ANAR established the following limits (MCL): iron at 0.30 mg/L and manganese at 0.05 mg/L. The purpose of the present study is to find a suitable method for the removal of iron and manganese from drinking water. Materials Iron(III) nitrate, Fe(NO 3 ) 3 , supplied by Sigma Aldrich, was used as the source of iron in the form of Fe(III). Manganese(II) nitrate, Mn(NO 3 ) 2 , supplied by Sigma Aldrich, was used as the source of manganese in the form of Mn(II). Pure potassium chloride (KCl), purchased from Merck, was used as the electrolyte. Distilled water was used throughout. Analar-grade sulfuric acid 98 % was purchased from Chimexin. Method: AAS for iron and manganese. A laboratory-scale combined photo-electrochemical unit was used for the batch experiments. It consists of a cylindrical quartz photoreactor (1 L) with a coaxial, immersed medium-pressure UV mercury lamp used as the UV emitter and light source (Heraeus TQ150, input power of 150 W), emitting polychromatic radiation in the 100 to 280 nm wavelength range. The UV lamp was equipped with a cooling water jacket to keep the treated water at room temperature during the reaction. The reaction vessel was filled with a solution containing both iron and manganese. The electrochemical characterization of the solution was carried out using a GW 3030 DC power supply and two electrodes: a graphite cathode and a platinum anode. The measurements were performed in the temperature range 288 K to 303 K, and mixing was provided by a continuous magnetic stirrer. The photo-electrochemical method was combined with electrocoagulation in the same unit. Metal hydroxides generated during electrocoagulation were used to remove iron and manganese from aqueous solution, and the effects of varying the current density and solution temperature on the iron and manganese adsorption characteristics were evaluated. The findings indicated that complete iron and manganese removal could be achieved with reasonable removal efficiency and relatively low electrical energy consumption [5,6]. The experimental data have been fitted with the Harkins-Jura and Temkin adsorption isotherm models to describe the electrocoagulation process. The adsorption of iron was best fitted by the Harkins-Jura isotherm and that of manganese by the Temkin isotherm, suggesting monolayer coverage of the adsorbed molecules.
Definite amounts of KCl were added to improve the conductivity and the ionic mobility through the electrolyte. A low concentration of KCl (45 mg/L) was added to increase the conductivity and the electric current, besides its bactericidal effect after electrolysis to chlorine. The solution was acidified to pH 3 by adding drops of prepared dilute sulfuric acid (15 %). The effect of Fe 2+ and Mn 2+ concentrations revealed that the higher the concentration of dissolved iron and manganese ions, the higher the removal efficiency obtained. The efficiency of the process was evaluated by measuring the metal removal from samples at the end of each experiment. Samples were filtered with cellulose and composite membranes before the measurement of metals by atomic absorption spectrometry (Carl Zeiss Jena AAS). For all experiments, synthetic solutions of iron and manganese with concentrations between 1 and 12 ppm were used (Fig. 1). Figure 1. Iron and manganese concentration evolution in time for the photo-electrochemical method combined with electrocoagulation. Results and discussion The photo-electrochemical method combined with electrocoagulation gives good results for removing iron ions from 12 ppm prepared synthetic solutions. As can be seen from Figure 2, the removal efficiency was about 46 % for iron and 55 % for manganese. The removal efficiency varied with the applied electric current: at low applied current a low removal efficiency was obtained, but the removal efficiency improved over time. The removal equilibrium was reached 15 minutes after the start of the process. The optimum current density and temperature were established at 3.2 mA cm −2 and 288 K for iron, and 3.4 mA cm −2 and 298 K for manganese, respectively. Another method for removing iron and manganese from drinking water was adsorption using two membranes (cellulose and composite, respectively). Both methods have been used to evaluate the efficiency of removing iron and manganese from waste waters at different times and temperatures [7,8]. The experiments showed the feasibility of removing iron and manganese by adsorption and coprecipitation with aluminum hydroxides. The photo-electrochemical method was combined with electrocoagulation in the same unit and used for oxidation of the soluble forms Fe 2+ and Mn 2+ to the insoluble forms Fe 3+ and Mn 4+ . The combined method showed better efficiency than the electrochemical method alone. The presence of both dissolved iron and manganese has the advantage of lowering the resistivity of the waste water solution. A low concentration of KCl (45 mg/L) was added to increase the conductivity and the electric current. Higher removal efficiency was obtained when Fe 2+ and Mn 2+ were present at higher concentration (12 ppm). The study showed a more rapid oxidation of Fe 2+ than of Mn 2+ , due to the lower oxidation potential of the iron ion compared with the manganese ion [9,10]. Composite membranes showed better adsorption of the manganese ion (10.75 ppm) than the cellulose membranes (8.78 ppm). For the iron ion, the composite membranes' adsorption (9.57 ppm) was also better than that of the cellulose membranes (7.89 ppm), as presented in Figure 4.
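For reference, the removal efficiencies quoted above follow directly from the measured initial and final concentrations. The brief sketch below shows the calculation with illustrative numbers chosen only to reproduce values of the same order as the reported 46 % (iron) and 55 % (manganese); it is not the measured data.

```python
# Small sketch of the removal-efficiency calculation from AAS concentration
# readings; the final concentrations below are illustrative placeholders.
def removal_efficiency(c_initial_ppm: float, c_final_ppm: float) -> float:
    """Removal efficiency in percent: 100 * (C0 - C) / C0."""
    return 100.0 * (c_initial_ppm - c_final_ppm) / c_initial_ppm

print(removal_efficiency(12.0, 6.5))   # ~46 %, comparable to the iron result
print(removal_efficiency(12.0, 5.4))   # ~55 %, comparable to the manganese result
```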
The solution pH is an important parameter which controls the adsorption process. It influences the ionization of the adsorptive molecule and hence the adsorbent's surface charge. Therefore, investigating the pH effect on the adsorption is essential in adsorption experiments. In this particular case, the solution pH can change the surface charge of the adsorbent as well as the ionic forms in which iron and manganese are present. The Temkin model assumes that, due to adsorbate-adsorbent interactions, the adsorption energy of all molecules in the layer decreases linearly with surface coverage. This behaviour was examined by plotting q_e versus ln C_e. The Temkin isotherm considers the interaction between the aqueous solution and the solid (composite or cellulose membrane), with the free energy of adsorption treated as a function of the surface coverage of the adsorbent material. The Temkin equation relating the amount adsorbed to the heat of adsorption is q_e = (RT/b_T) ln(K_T C_e), and its linearized form is q_e = B ln K_T + B ln C_e, where b_T is the Temkin constant related to the heat of sorption (J/mg), K_T is the equilibrium binding constant corresponding to the maximum binding energy (L/g), and B = RT/b_T. The isotherms of this model are shown in Figure 5. B and K_T are the Temkin equation parameters, corresponding to the heat of adsorption and to the equilibrium binding constant, respectively. From Figure 5 it can be seen that the maximum adsorption (K_T) of the two ions, manganese and iron, corresponds to a much better uptake by the composite membranes. Taking into consideration the correlation coefficients for the curves presented in Figures 5 and 6, we observe that this model competes closely with the Harkins-Jura model. Harkins-Jura adsorption isotherm The Harkins-Jura adsorption isotherm can be expressed in linearized form as 1/q_e^2 = (B/A) - (1/A) log C_e [14,15], where q_e is the adsorbed amount of ions at equilibrium (mg/g) and C_e is the equilibrium concentration of the two ions (ppm). The Harkins-Jura model is presented in Figure 6; from the plot of 1/q_e^2 against log C_e, the parameter A is obtained from the slope and the parameter B from the intercept. In this paper, the experimental adsorption data were tested by applying the Temkin and Harkins-Jura equations, and the correlation coefficients showed good agreement with the experimental data. In Figure 6 the specific adsorption capacity is in mg/g and the equilibrium concentration in ppm. Given the validity of the Harkins-Jura solute adsorption isotherm for these systems, it can be used for the determination of the specific surface area of the solids (composite and cellulose membranes). All the plots contain two intersecting straight lines (for the two metals, iron and manganese), in accordance with the Harkins-Jura solute adsorption equation. The Harkins-Jura equation applies to these systems over the entire concentration range studied. The existence of two or more intersecting straight lines in the Harkins-Jura plot indicates that there are two or more isotherms, corresponding to each of these lines, with different values of the constants A and B. As can be seen from Figure 6, the Harkins-Jura representation contains two linear parts derived from the model equation. The difference between the surface tensions of the adsorbent material and the aqueous solution is a linear function of the adsorbed molecules, and therefore the specific surface area is indicated by the Harkins-Jura isotherm.
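As an illustration of how the two linearized isotherms above can be fitted to equilibrium data by least squares, consider the following sketch; the concentration and uptake values are hypothetical placeholders, and the Harkins-Jura expression used is the linearized form of 1/q_e^2 against log C_e quoted above.

```python
# Sketch of fitting the linearized Temkin and Harkins-Jura isotherms by least
# squares; q_e is the equilibrium uptake and C_e the equilibrium concentration.
# The sample data are placeholders, not the measured values.
import numpy as np

c_e = np.array([1.0, 2.0, 4.0, 6.0, 9.0, 12.0])     # ppm (hypothetical)
q_e = np.array([2.1, 3.4, 4.9, 5.8, 6.9, 7.9])      # mg/g (hypothetical)

# Temkin: q_e = B*ln(K_T) + B*ln(C_e)  ->  fit q_e against ln(C_e)
slope_t, intercept_t = np.polyfit(np.log(c_e), q_e, 1)
B_temkin = slope_t
K_T = np.exp(intercept_t / B_temkin)

# Harkins-Jura: 1/q_e**2 = B/A - (1/A)*log10(C_e)  ->  fit 1/q_e**2 against log10(C_e)
slope_hj, intercept_hj = np.polyfit(np.log10(c_e), 1.0 / q_e**2, 1)
A_hj = -1.0 / slope_hj
B_hj = intercept_hj * A_hj

print(f"Temkin: B = {B_temkin:.3f}, K_T = {K_T:.3f}")
print(f"Harkins-Jura: A = {A_hj:.3f}, B = {B_hj:.3f}")
```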
The presence of the two straight lines argues that they correspond to two different orientations of the adsorbed species on the adsorbent (composite or cellulose membrane) during the adsorption process: the higher slope corresponds to a flat orientation on the surface, and the lower slope corresponds to a vertical orientation, for the adsorption of the two metals. This point of view is supported by the research of Soriaga et al. [13], where, using thin-layer electrochemical techniques, it was shown that admolecules assume a parallel orientation to the solid surface when adsorbed from solution. Therefore, a straight line with a higher slope in the Harkins-Jura model corresponds to a flat-orientation adsorption process, which changes to a vertical orientation as the initial concentration becomes greater. As a consequence of this change in the orientation of the adsorption process, a new phase is obtained, which appears as a new point of intersection between the lines of the model representation for the adsorption of the two ions on the two membranes. In this way, very good values for the adsorption capacity of the two membranes are obtained. The Temkin and Harkins-Jura models are often used to describe equilibrium sorption isotherms.

Conclusion

pH is an important parameter influencing heavy metal adsorption from aqueous solutions. It influences the adsorbent surface charge, the degree of ionization of the material present in the solution, and also the dissociation of functional groups on the active sites of the adsorbent.

The method applying adsorption on the composite and cellulose membranes presented the best results, compared to the photo-electrochemical method, for removing iron and manganese ions from drinking water. The membrane adsorption method has the advantage of simplicity in terms of installation compared to the photo-electrochemical method. Its disadvantage is the high cost of the composite and cellulose membranes. Considering the high consumption of electricity required by the photo-electrochemical method, the membrane adsorption method also has the advantage of low power consumption. In addition, the adsorption kinetic studies showed that the electrocoagulation process was best described by the pseudo-second-order kinetic model [16] at the various current densities and temperatures.

Figure 2. Effect of different current densities on the removal efficiency of iron and manganese (C0 = 12 ppm, T = 298 K, C_KCl = 45 mg/L)

Figure 3. Effect of current density on iron and manganese removal.

Figure 4. Iron and manganese removal efficiency evolution in time when using the cellulose membrane (a) and the composite membrane (b)

Figure 5. Temkin adsorption isotherms of manganese and iron ions: cellulose membrane (a) and composite membrane (b)
3,472
2016-04-21T00:00:00.000
[ "Engineering" ]
Extraction and restoration of hippocampal spatial memories with non-linear dynamical modeling

To build a cognitive prosthesis that can replace the memory function of the hippocampus, it is essential to model the input-output function of the damaged hippocampal region, so the prosthetic device can stimulate the downstream hippocampal region, e.g., CA1, with the output signal, e.g., CA1 spike trains, predicted from the ongoing input signal, e.g., CA3 spike trains, and the identified input-output function, e.g., the CA3-CA1 model. In order for the downstream region to form appropriate long-term memories based on the restored output signal, furthermore, the output signal should contain sufficient information about the memories that the animal has formed. In this study, we verify this premise by applying regression and classification modeling of the spatio-temporal patterns of spike trains to the hippocampal CA3 and CA1 data recorded from rats performing a memory-dependent delayed non-match-to-sample (DNMS) task. The regression model is essentially the multiple-input, multiple-output (MIMO) non-linear dynamical model of spike train transformation. It predicts the output spike trains based on the input spike trains and thus restores the output signal. In addition, the classification model interprets the signal by relating the spatio-temporal patterns to the memory events. We have found that: (1) both hippocampal CA3 and CA1 spike trains contain sufficient information for predicting the locations of the sample responses (i.e., left and right memories) during the DNMS task; and more importantly (2) the CA1 spike trains predicted from the CA3 spike trains by the MIMO model also are sufficient for predicting the locations on a single-trial basis. These results show quantitatively that, with a moderate number of unitary recordings from the hippocampus, the MIMO non-linear dynamical model is able to extract and restore spatial memory information for the formation of long-term memories and thus can serve as the computational basis of the hippocampal memory prosthesis.

INTRODUCTION

Cortical prosthesis is an emerging technology seeking to restore cognitive functions lost in diseases or injuries (Berger et al., 2005, 2012). It is achieved by bi-directional, closed-loop communications between the prosthetic device and the brain regions. This is distinct from sensory or motor prostheses, where one side of the communication is an external entity such as the sensory input (Loeb, 1990; Humayun et al., 1999) or the motor output (Mauritz and Peckham, 1987; Taylor et al., 2002; Nicolelis, 2003; Shenoy et al., 2003; Wolpaw and McFarland, 2004; Hochberg et al., 2006). Therefore, a cortical prosthesis must deal exclusively with the internal brain signals, in which sensory or motor information is embedded, by re-encoding the upstream (input) brain signals into the downstream (output) signals (Figure 1A). For the past decade, we have been working on developing a hippocampal-cortical prosthesis for restoring the memory functions. The hippocampus is a brain region responsible for the creation of new long-term episodic memories (Milner, 1970; Squire and Zola-Morgan, 1991; Eichenbaum, 1999). Damage to the hippocampal areas can result in a permanent loss of such cognitive functions. In a normal hippocampus, short-term memories are encoded in the spatio-temporal patterns of spikes (i.e., spike trains) as the input from the entorhinal cortex.
Memory information is then processed by the hippocampal feedforward tri-synaptic pathway, which consists of the dentate gyrus, CA3, and CA1 regions, and eventually transformed into the output spike trains to the subiculum that are appropriate for the formation of long-term memories (Figure 1B). Although the exact nature of such a transformation or the underlying mechanisms is still largely unclear, it must be the neural signal (i.e., spike train) flow from entorhinal cortex to dentate gyrus, to CA3, to CA1, and to subiculum that enables the re-encoding of short-term memories into long-term memories. Maintaining the normal signal flow with a prosthetic device that bypasses a damaged or diseased hippocampal region provides a feasible way of restoring the lost long-term memory functions (Figure 1A). For example, in our first-generation hippocampal memory prosthesis applications, we (a) record input spike trains from the CA3 region, (b) process them with a multi-input, multi-output (MIMO) non-linear dynamical model to predict the desired CA1 output spike trains, and (c) electrically stimulate the CA1 region with the predicted CA1 output patterns. Previous results have shown that (a) the MIMO model can accurately predict the output spike trains in real time based on the ongoing input spike trains (Song et al., 2007, 2009a), and (b) the electrical stimulation can restore or even enhance the memory functions performed by the hippocampal CA3-CA1 system (Berger et al., 2012; Hampson et al., 2012a,b). However, despite the success of demonstrating such a prosthesis, how the external behavioral events (i.e., memory events) are encoded in the two hippocampal regions and, more importantly, re-encoded by the prosthesis has not been clearly revealed, precisely due to the internal nature of the cortical prosthesis. In this study, we propose a new framework of modeling and representing the re-encoding process performed by a brain region at the memory representation level, as opposed to the signal level in our previous studies. In addition to asking the question, "What should the output signal be?", we further ask the question, "What do the signals mean?" Specifically, we combine our previously developed MIMO signal model (Song et al., 2007, 2009a), which predicts the output signal based on the input signal, with an additional memory decoding model that relates the input and/or output signals to the behaviors (memories) of the animal (Figure 2). The MIMO signal model is essentially a time-series regression model non-linear dynamically mapping the multiple input (CA3) signals to the multiple output (CA1) signals. On the other hand, the memory decoding model is a multi-input, single-output (MISO) classification model identifying to which of a set of memory categories the spatio-temporal patterns of the input and/or output signals belong. The former model quantifies the input-output signal transformation, while the latter model decodes the memory by predicting the behavior. The paper is organized as follows. In section Materials and Methods, we formally formulate the modeling problem and provide the mathematical expressions. In section Results, we apply the methods to the modeling of the hippocampus during a memory-dependent task in rodents.
BEHAVIORAL TASK AND ELECTROPHYSIOLOGICAL PROCEDURES

All animal procedures are reviewed and approved by the Institutional Animal Care and Use Committee of Wake Forest University, in accordance with US Department of Agriculture, International Association for the Assessment and Accreditation of Laboratory Animal Care, and National Institutes of Health guidelines. Two male Long-Evans rats are trained to criterion on a two-lever, spatial delayed-non-match-to-sample (DNMS) task with random delay intervals (Deadwyler et al., 1996; Hampson et al., 1999). Animals perform the task by pressing a single lever presented in one of two positions (left or right) in the sample phase; this event is called the "sample response." The lever is then retracted and the delay phase initiates; for the duration of the delay phase, the animal is required to nose-poke into a lighted device on the opposite wall. When the delay ends, the nose-poke light is extinguished, both levers are extended, and the animal is required to press the lever opposite to the sample lever. This event is called the "non-match response." If the correct lever is pressed, the animal is rewarded (Figure 3, top). A session includes approximately 100 successful DNMS trials, each consisting of two of the four behavioral events, i.e., right sample (RS) and left non-match (LN), or left sample (LS) and right non-match (RN). Spike trains are obtained with multi-site recordings from different septo-temporal regions of the hippocampus of rats performing the DNMS task (Figure 3, bottom). For each hemisphere of the brain, a microwire multi-electrode array (MEA) is surgically implanted into the hippocampus, with 8 electrodes in the CA3 (input) region and 8 electrodes in the CA1 (output) region. Spike trains are pre-screened based on mean firing rate and perievent histogram. Perievent (−2 to +2 s) spike trains of the four behavioral events are extracted from each trial and then concatenated to form the datasets (Figure 3, bottom). The spike train data are discretized with a 2 ms bin size.

MIMO SIGNAL MODEL OF INPUT-OUTPUT SPIKE TRAIN TRANSFORMATION

The MIMO signal model of input-output spike train transformation takes the form of the sparse generalized Laguerre-Volterra model (SGLVM) we previously developed (Song et al., 2009a,b, 2013). In this approach, a MIMO model is a concatenation of a series of MISO models (not to be confused with the MISO classification model), each of which can be considered a spiking neuron model (Song et al., 2006, 2007) (Figure 2). In this study, each MISO model consists of (a) MISO second-order Volterra kernels k transforming the input spike trains x to the synaptic potential u, (b) a Gaussian noise term ε capturing the stochastic properties of spike generation, (c) a threshold θ for generating output spikes y, (d) an adder generating the pre-threshold membrane potential w, and (e) a single-input, single-output first-order Volterra kernel h transforming the preceding output spikes into the spike-triggered feedback after-potential a. The model can be expressed mathematically, as sketched below. The zeroth-order kernel, k_0, is the value of u when the input is absent. First-order kernels k_1^(n) describe the first-order linear relation between the n-th input x_n and u, as functions of the time intervals τ between the present time and the past time. Second-order self kernels k_2s^(n) describe the second-order nonlinear interaction between pairs of spikes in the n-th input x_n as they affect u. N is the number of inputs.
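A hedged reconstruction of the model equations, written from the kernel descriptions above and the standard generalized Laguerre-Volterra form reported in Song et al. (2007, 2009a); the exact notation in the original may differ:

w(t) = u(t) + a(t) + \varepsilon(t)

y(t) = \begin{cases} 1, & w(t) \ge \theta \\ 0, & w(t) < \theta \end{cases}

u(t) = k_0 + \sum_{n=1}^{N} \sum_{\tau=0}^{M_k} k_1^{(n)}(\tau)\, x_n(t-\tau)
      + \sum_{n=1}^{N} \sum_{\tau_1=0}^{M_k} \sum_{\tau_2=0}^{M_k} k_{2s}^{(n)}(\tau_1,\tau_2)\, x_n(t-\tau_1)\, x_n(t-\tau_2)

a(t) = \sum_{\tau=1}^{M_h} h(\tau)\, y(t-\tau)

Here ε is the Gaussian noise term and θ the spiking threshold described above, while M_k and M_h are the feedforward and feedback memory lengths discussed next.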
M_k and M_h denote the memory lengths of the feedforward process and the feedback process, respectively. They are chosen to be 2 s in this study. Second-order cross kernels and higher-order (e.g., third-order) kernels are not included in this study. To facilitate model estimation and avoid overfitting, the Volterra kernels are expanded with Laguerre basis functions b as in Song et al. (2009c,d), where c_1^(n), c_2s^(n), and c_h are the sought Laguerre expansion coefficients of k_1^(n), k_2s^(n), and h, respectively (c_0 is equal to k_0); J is the number of basis functions. To achieve model sparsity, the coefficients are estimated with a composite penalized likelihood estimation method, i.e., group LASSO. In maximum likelihood estimation (MLE), model coefficients are estimated by minimizing the negative log likelihood function −l(c). In group LASSO, a composite penalty term is added to −l(c), weighted by a tuning parameter λ ≥ 0 that controls the relative importance of the likelihood and the penalty term. When λ takes on a larger value, the estimation yields a sparser result for the coefficients. λ is optimized with a two-fold cross-validation method.

MISO MEMORY DECODING MODEL OF SPATIO-TEMPORAL PATTERN OF SPIKES

The MISO memory decoding model of spike spatio-temporal patterns takes the form of the sparse generalized B-spline linear classification model. In this approach, the feature space is defined as a set of B-spline basis functions for each neuron (input and/or output neurons, depending on the application). The classifier is essentially logistic regression (Figure 2). B-splines are piecewise polynomials with smooth transitions between adjacent pieces at a set of interior knot points. A polynomial spline of degree d ≥ 0 on [0, M] with m > 0 interior knot points and the knot sequence η_0 = 0 < η_1 < ... < η_m < η_{m+1} = M is a function that is a polynomial of degree d between each pair of adjacent knots and has d−1 continuous derivatives for d ≥ 1. B-spline basis functions of degree d can be defined in a recursive fashion (the Cox-de Boor recursion; a sketch is given below). For a given sequence of m knots and a fixed degree d, the total number of B-spline basis functions is J = m + d + 1. Spatio-temporal patterns of spikes are projected onto the B-spline feature space via inner products to yield the feature vectors, where M is the time window for the inner product. It is chosen to be from −2 to +2 s around the sample events (Figure 3, bottom). x_n is the n-th neuron of the total N neurons included in the analysis. Unlike in the regression model, x can be CA3 and/or CA1 neurons depending on the context. z^(n)(j) denotes the feature value of the n-th neuron using the j-th B-spline function. Therefore, z is a 1-by-JN vector. J is optimized in the range of 5-100 based on the out-of-sample prediction accuracy. In most of the cases, J = 20 is found to be optimal. Since there are two possible behavioral outcomes, i.e., left or right position, the model output can be represented as a binary variable β. The classification model assumed by logistic regression is P(β = 1 | z) = 1 / (1 + exp(−(w_0 + z w^T))), where w are the sought model coefficients; 1 and 0 represent the left and right positions, respectively. The linear classification rule is simply to predict 1 when this probability is at least 0.5 (equivalently, when w_0 + z w^T ≥ 0) and 0 otherwise. Compared with the MIMO regression model, the MISO classification model may suffer from an even more serious overfitting problem due to the high-dimensional input (typically with hundreds of features) and the relatively small number of data points (typically 100 trials in this study).
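A minimal sketch of the B-spline feature construction described above, assuming binned spike matrices and a clamped knot vector; the function and variable names are ours, not taken from the original code.

import numpy as np

def bspline_basis(t, knots, j, d):
    # Cox-de Boor recursion: j-th B-spline basis function of degree d evaluated at times t.
    if d == 0:
        return ((t >= knots[j]) & (t < knots[j + 1])).astype(float)
    left_den = knots[j + d] - knots[j]
    right_den = knots[j + d + 1] - knots[j + 1]
    left = 0.0 if left_den == 0 else (t - knots[j]) / left_den * bspline_basis(t, knots, j, d - 1)
    right = 0.0 if right_den == 0 else (knots[j + d + 1] - t) / right_den * bspline_basis(t, knots, j + 1, d - 1)
    return left + right

def spike_features(spikes, knots, d):
    # Project an N x T binned spike matrix onto the B-spline feature space:
    # each neuron's spike train is inner-producted with every basis function,
    # giving a feature vector of length J * N, with J = len(knots) - d - 1.
    n_neurons, T = spikes.shape
    t = np.arange(T, dtype=float)
    J = len(knots) - d - 1
    basis = np.stack([bspline_basis(t, knots, j, d) for j in range(J)])  # J x T
    return (spikes @ basis.T).reshape(-1)

# Example: 4 s perievent window at 2 ms bins (T = 2000), cubic splines, clamped knots.
m, d, T = 16, 3, 2000
interior = np.linspace(0, T, m + 2)[1:-1]
knots = np.concatenate([np.zeros(d + 1), interior, np.full(d + 1, float(T))])  # gives J = m + d + 1 = 20

With roughly 20 basis functions and a few tens of neurons, the feature vector z already has several hundred dimensions against about 100 trials per session, which is the overfitting risk addressed next.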
Therefore, L1 regularization (Lasso) is applied to achieve model sparsity and avoid overfitting, by adding an L1 penalty on the coefficients to the negative log likelihood function −l(c), weighted by the tuning parameter λ ≥ 0 of the classification model. In this study, λ is optimized with a four-fold cross-validation method. By minimizing the resulting criterion S, a sparse weight matrix w is estimated and further used, together with the B-spline basis functions, to reconstruct the classification feature matrix F. F can be directly used in the logistic regression along with the spatio-temporal pattern x.

HIPPOCAMPAL CA3 AND CA1 ACTIVITIES CONTAIN SUFFICIENT INFORMATION FOR DECODING SPATIAL MEMORIES DURING THE DNMS TASK

First, we apply the MISO memory decoding model to the CA3 spike trains recorded during the sample phase of the DNMS tasks. For each sample event (left or right), we take the perievent spikes 2 s before and after the event with a 2 ms bin size. The spatio-temporal patterns of spikes are then N-by-2000 matrices, where N is the number of neurons. A session typically consists of 80-100 trials, with roughly half being left sample trials and half being right sample trials. The spatio-temporal patterns are labeled with 1 for the left trials and 0 for the right trials. Figure 4 (case #1) and Figure 5 (case #2) show the spatio-temporal patterns from two animals with 26 and 43 CA3 neurons, respectively. For each position, four representative patterns and the overall patterns are shown. The overall patterns are obtained by smoothing the spike trains with B-spline functions and then summing across all trials for the specific position. It is evident that the two positions show different spatio-temporal patterns and that the differences exist in specific time ranges of specific neurons (Figures 4, 5). The task of the MISO memory decoding model is to identify these sparsely distributed differences from single trials of the spatio-temporal pattern and then predict the positions of the animal. Results show that the MISO memory decoding model can achieve a 100% out-of-sample prediction accuracy using the CA3 spatio-temporal patterns in both cases (Figure 8, top row).

FIGURE 4 | Spatio-temporal patterns of spikes in the hippocampal CA3 (input) region during left and right sample events of the DNMS (case #1).

Using the same method, we build MISO memory decoding models for the CA1 spike trains. Figure 6 (case #1) and Figure 7 (case #2) show the spatio-temporal patterns of CA1 during left and right trials (rows 1 and 3) from the same two animals. There are 19 and 17 CA1 neurons recorded from these two animals, respectively. Similar to CA3, CA1 also shows different spatio-temporal patterns during left and right trials. The prediction accuracy is 100% in one case and 91.3% in the other (Figure 8, middle row).

HIPPOCAMPAL CA1 ACTIVITIES CAN BE ACCURATELY PREDICTED BY THE MIMO SIGNAL MODEL BASED ON THE HIPPOCAMPAL CA3 ACTIVITIES

In the second step, we build MIMO signal models for the transformations from the CA3 spatio-temporal patterns to the CA1 spatio-temporal patterns. To build such a model, we concatenate CA3 perievent spike trains across all trials to form the input data and the corresponding CA1 spike trains to form the output data, and then apply our MIMO modeling method. The resulting SGLVM non-linear dynamically predicts the CA1 spikes based on the ongoing and past (within the memory window) CA3 spikes (Song et al., 2007, 2009a; Song and Berger, 2010).
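For concreteness, the penalized estimation of the decoding model described earlier in this section can be sketched as follows; this assumes scikit-learn and the feature construction sketched above, with placeholder array shapes and random data standing in for the recorded trials.

import numpy as np
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(0)
n_trials, n_neurons, J = 100, 26, 20
z = rng.random((n_trials, n_neurons * J))     # placeholder B-spline feature vectors, one row per trial
beta = rng.integers(0, 2, size=n_trials)      # placeholder labels: 1 = left, 0 = right

# L1 (Lasso) penalty with the regularization strength chosen by four-fold cross-validation,
# mirroring the criterion "negative log likelihood + lambda * L1 norm of the weights".
clf = LogisticRegressionCV(Cs=10, cv=4, penalty="l1", solver="liblinear", scoring="accuracy")
clf.fit(z, beta)
print("mean cross-validated accuracy:", next(iter(clf.scores_.values())).mean())

w = clf.coef_.reshape(n_neurons, J)           # sparse weights, one row of J coefficients per neuron

Reshaping w neuron-by-basis is what allows a weight matrix over neurons and time bins to be reconstructed with the same B-spline functions, as in the weight-matrix plots discussed later.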
Results show that in both cases (Figures 6, 7, rows 2 and 4), the MIMO signal model can accurately predict the CA1 spatio-temporal patterns at both the single-trial level (Figures 6, 7, columns 1-4) and the overall level (Figures 6, 7, column 5). Importantly, a single set of model coefficients is used for both the left and right trials. In other words, the estimated MIMO signal models are memory-invariant and can be used to predict the output signals without explicitly knowing what the events are.

HIPPOCAMPAL CA1 ACTIVITIES PREDICTED BY THE MIMO SIGNAL MODEL CAN BE USED TO ACCURATELY DECODE THE SPATIAL MEMORY

Lastly, we build MISO memory decoding models for the CA1 spatio-temporal patterns predicted by the MIMO signal model, as opposed to the actual CA1 spatio-temporal patterns. Results show that the MISO memory decoding models can accurately predict the spatial memory based on the predicted CA1 patterns (Figure 8, bottom). The prediction accuracies are 91% and 87.5% for the two cases, respectively. Importantly, the MISO memory decoding model coefficients remain the same for the actual CA1 patterns and the predicted CA1 patterns. This indicates that the MIMO signal model has successfully transmitted the spatial information from CA3 to CA1 in the same form as it is encoded in the actual CA1 patterns. The MIMO signal model has not only restored the signal, but also re-encoded the memory representations.

SPATIAL INFORMATION IS SPARSELY DISTRIBUTED IN THE HIPPOCAMPAL CA1 AND CA3 SPATIO-TEMPORAL PATTERNS OF SPIKES

In order to gain more insights into how hippocampal CA3 and CA1 spike trains encode spatial information, we calculate Equation (15) and plot the classification weight matrices. These matrices have the same dimensions as their corresponding spatio-temporal patterns. In order to perform classification, we can simply calculate the dot products of the weight matrices and the corresponding spatio-temporal patterns (strictly speaking, the dot products of the vectorized matrices), add the bias (i.e., w_0), and then use Equation (16) to predict the probability of the animal having left or right memories. Figure 9 shows the results for CA3 and CA1 from the two animals. The CA1 weight matrices are shown for both the actual and the MIMO-predicted CA1 spatio-temporal patterns. In both cases, non-zero values (warm and cold colors represent positive and negative values, respectively) are sparsely distributed in the weight matrices. These results indicate that the spatial information exists in a redundant fashion in multiple ranges of the perievent intervals of multiple neurons. The MIMO signal model and the MISO memory decoding model jointly describe the re-encoding of the memory representations from CA3 to CA1.

DISCUSSIONS

Brain regions process and transmit information with spatio-temporal patterns of spikes. In order to build a cortical prosthesis to bypass a damaged brain region, it is necessary to restore the output signals of the damaged region and send them to the downstream region, so that the information flow is maintained. We have shown extensively that non-linear dynamical MIMO models can predict accurately the output spatio-temporal patterns based on the ongoing input spatio-temporal patterns, and that electrical stimulation of the output region following the predicted patterns can effectively restore and even enhance the memory function (Berger et al., 2012; Hampson et al., 2012a,b, 2013).
The unique contribution of this paper is to combine the MIMO models with a new set of MISO memory decoding models so that the input and output signals can be related to the memory (behavioral) events, and thus to explain why it is possible for the downstream hippocampal region to correctly decode the MIMO-model-generated signals. In our previous publications on the MIMO signal model (Song et al., 2006, 2007, 2009a,b, 2011; Song and Berger, 2010), the model goodness-of-fit is validated with a Kolmogorov-Smirnov (KS) test based on the time-rescaling theorem (Brown et al., 2002; Haslinger et al., 2010). This KS test is a powerful tool that allows the firing probability intensity function predicted by the MIMO model to be directly validated against the actual output spike train, and the model goodness-of-fit to be quantified statistically with confidence bounds. However, the KS test does not necessarily indicate whether the model goodness-of-fit is sufficient for decoding the behavior or restoring the cognitive function, since it is developed only for quantifying the accuracy of the predicted point-process output signal. The typically used 95 or 99% confidence bounds will not guarantee a successful MIMO model for building the prosthesis. For example, a perfectly predicted output signal may contain no information about a specific memory of interest; on the other hand, a less accurately predicted output signal may still contain some or even sufficient information about the memory. The MISO memory decoding model described here directly quantifies the relations between output signals and memories, and provides a more functionally relevant measure of the model performance that is complementary to the KS test. In hippocampal prosthesis applications, MISO memory decoding models are estimated with input-output data during the sample phase (−2 to 2 s). The reason is that, in the DNMS task, animals form the spatial memory (i.e., left or right lever position) during the sample phase, retain the memory during the delay phase, and recall the memory during the non-match phase. Previous results have shown that MIMO model-based electrical stimulation restores and enhances the spatial memory during sample phases but not non-match phases (Berger et al., 2012; Hampson et al., 2012a), despite the fact that the MIMO signal model is able to predict accurately the output signal during both sample and non-match phases. The hippocampus is a mainly feedforward network consisting of a large number of neurons. There are approximately 1 million, 330 thousand, and 420 thousand principal neurons in the rodent dentate gyrus, CA3, and CA1 regions, respectively (Amaral et al., 1990). However, despite the small number (tens to a hundred) of recorded neurons allowed by the current MEA technology, our hippocampal prosthesis has shown impressive success in both rodents and non-human primates during the spatial memory tasks. The main reason is that, at least during the DNMS task or the delayed match-to-sample (DMS) task, spatial memories (e.g., locations of the levers) are encoded in a highly redundant and distributed fashion in a large portion of the hippocampal neurons. As shown in this study, sampling a small number of neurons from the whole population still allows accurate extraction of spatial information. The DNMS task is a highly restricted experimental paradigm that involves only two positions.
Under normal conditions, however, the animal needs to form much more complex memories to maintain its normal life (Eichenbaum, 1999). A practical hippocampal prosthesis should be able to extract and restore a large number of memories with the MIMO signal model and the MISO memory decoding model. This will likely require (1) recording a larger number of hippocampal neurons to obtain more information necessary for decoding the episodic memories, (2) stimulating with more electrodes to generate richer output patterns to the downstream hippocampal region, and (3) developing more powerful MIMO signal models and MISO memory decoding models to more accurately restore the output signal and decode the memories. For example, the current MISO memory decoding model has binary (left or right) output; in order to decode more memories, it needs to be extended to handle multiple-category output. A natural solution is to use multinomial logistic regression (McCullagh and Nelder, 1989) instead of the standard binary-output logistic regression used in this study (a brief sketch is given at the end of this section). Besides, other forms of discriminative models, e.g., the support vector machine, or generative models, e.g., the naive Bayes classifier, may be considered for their specific advantages. In addition, to collect input-output data for building multiple-memory models, new experimental paradigms involving multiple forms of behavioral events and sensory modalities need to be utilized (Hampson et al., 2012b). Nonetheless, the study described in this paper for the first time combines the regression model with the classification model to illustrate how memory-related information is encoded and re-encoded in the hippocampus, and has made a critical step toward building a hippocampal memory prosthesis. Interestingly, in both cases in this study, the MISO memory decoding model shows higher prediction accuracy in CA3 than in the actual CA1, and higher accuracy in the actual CA1 than in the predicted CA1.

FIGURE 8 | Predicting spatial memory events based on inputs (CA3), outputs (CA1), or outputs predicted by the MIMO models (case #1 and #2). The horizontal lines in the middle represent the decision boundaries (P = 0.5).

The latter is unsurprising, since the predicted CA1 patterns are calculated with the MIMO signal model using the actual CA1 patterns as target signals, although the real-time calculation is driven by the ongoing CA3 patterns. It is thus unlikely for the predicted CA1 patterns to contain more memory-related information than the actual CA1 patterns. The former observation can be caused by two factors. First, it is possible that CA3 neurons contain more spatial information than CA1 neurons, as suggested by previous studies (Lee et al., 2004). Second, it could simply be due to the fact that we have recorded more CA3 units than CA1 units in the two cases included in this study. A more systematic, comparative study of CA3 and CA1 patterns needs to be performed to draw further conclusions. In this study, the MISO memory decoding model takes the form of a B-spline, logistic regression model. The B-spline basis functions are utilized to reduce the model dimensionality and introduce a continuous metric for the similarities between spike trains. The optimal number of basis functions provides an estimate of the relevant temporal resolution of the spike trains. The logistic regression maps the spatio-temporal features to the probability of having a certain behavioral outcome.
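The multi-category extension mentioned above could, for instance, be prototyped with a multinomial logistic regression; this is only an illustrative sketch using scikit-learn, not the model used in the study, and the features and event labels are placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
z = rng.random((200, 400))              # placeholder B-spline feature vectors
events = rng.integers(0, 4, size=200)   # placeholder labels for four hypothetical memory categories

# Multinomial (softmax) logistic regression with an L1 penalty; the 'saga' solver supports both.
clf = LogisticRegression(penalty="l1", solver="saga", multi_class="multinomial", C=1.0, max_iter=5000)
clf.fit(z, events)
probs = clf.predict_proba(z[:1])        # probability over the four memory categories for one trial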
Despite the rather general model structure and the high prediction accuracy, however, this study does not necessarily suggest that the downstream hippocampal region decodes the CA1 spatio-temporal patterns in the same way. Instead, the main biological implications of this study are: first, the CA1 spatio-temporal patterns can be accurately predicted from the CA3 spatio-temporal patterns using a non-linear dynamical MIMO signal model; second, both CA3 and CA1 patterns contain sufficient information for decoding the memory events; third, the MIMO-model-predicted CA1 patterns also contain sufficient information about the memory, and it must be this fact that makes the successful implementation of the hippocampal memory prostheses possible.
6,253.8
2014-05-28T00:00:00.000
[ "Computer Science", "Engineering" ]
Planning and management in the Missional agenda of the 21st Century Church: A study of Lighthouse Chapel International

A few years ago it could be said that the church did not pay much attention to the role of effective planning and management as part of its core mandate of building the body of Christ. In situations where planning was done, ensuring the effective implementation and management of plans was always a challenge. Continuing this attitude into the 21st century might have several negative effects on the church. In view of this, it is required of the church that although the gospel itself will not change, the approach, techniques and strategies of planning, administering and managing the church of the living God ought to change to meet the demands of our time. In this view, the article discusses the role of planning and managing the church in the fulfilment of the missio Dei with special focus on Lighthouse Chapel International (LCI).

Introduction

A few years ago it could be said that the church did not pay much attention to the role of effective planning and management as part of its core mandate of building the body of Christ. In situations where planning was done, ensuring the effective implementation and management of plans was always a challenge. Continuing this attitude into the 21st century might have several negative effects on the church. In view of this, it is required of the church that although the gospel itself will not change, the approach, techniques and strategies of planning, administering and managing the church of the living God ought to change to meet the demands of our time. In this view, the article discusses the role of planning and managing the church in the fulfilment of the missio Dei with special focus on Lighthouse Chapel International (LCI).

Lighthouse Chapel International was founded in 1988 by Dag Heward-Mills 1 and is headquartered in Accra, Ghana. LCI has more than 1800 branches in 79 countries worldwide (Africa, Europe, Asia, the Caribbean, Australia, the Middle East and the Americas). Stewart and Nadia remarked that LCI is one of the largest of the Pentecostal churches that have appeared since the late 1970s in cities in Africa (Stewart & Nadia 2009:170). It is one of the largest of the charismatic churches in Ghana, which have steadily drawn membership away from traditional churches (Aboderin 2011:71). The word international in the church's name reflects the common emphasis on success in charismatic Christian churches in Ghana, as does the term 'the Megachurch' that it commonly uses to describe itself (Sanneh & Carpenter 2005:85). LCI is a member of the National Association of Charismatic and Christian Churches and the Pentecostal World Fellowship (Quampah 2014:144).

Lighthouse Chapel International is used for this study because of their consistent passion for mission. They are one of the few Neo-Pentecostal churches in Ghana that have successfully applied planning and management in their mission agenda.

1. Bishop Dag Heward-Mills is the founder and presiding bishop of Lighthouse Chapel International. He has served on the Church Growth International Board since 1996 and is the founder of the National Association for Charismatic and Christian Churches in Ghana. He was recently elected as an executive councilman of the Pentecostal World Fellowship. Before moving into full-time ministry in 1990, he was a medical doctor.
Planning and management is an important aspect of every successful organisation. In a similar way, as the church participates in the missio Dei, it is essential that we consider planning and management as part of the missional tools for the management of the various resources God has given the church. In doing this, church leadership must join in with the Father (God) and the Holy Spirit to know what he wants to accomplish in their context as they plan and manage God's resources for missional purposes. In the light of this understanding, the article discusses the role of 'planning and management in the missional agenda of the 21st century church' by using one of the fastest growing Neo-Pentecostal churches in Ghana, the Lighthouse Chapel International, as a case study. The central argument in this paper is that although leadership has a major role to play in missional planning, it is nevertheless a holistic and all-inclusive agenda. Missional planning includes the involvement of the Holy Spirit, congregational leadership, the entire congregation and the various resources the church has been endowed with by God.

Intradisciplinary and/or interdisciplinary implications: The study is an interdisciplinary study between Missiology, Planning, and Management. The results of the study will help the ecclesiastical community appreciate the importance of management and planning as they participate in the missio Dei.

LCI has also made good use of technological advancement in their discipleship programme, leadership development and management, stewardship and mission. Their tent missionary approach for church planting has also been well managed.

It should be noted at this point that the purpose of this paper is not to project planning and management of the church as a major missional tool for the church, but rather as one of the missional tools to be considered in the management of the various resources (spiritual, material, physical and financial) the church has been blessed with.

The concern for writing this article is that many of the non-mainline churches in West Africa, and more especially in Ghana, are not really doing well in this aspect (management and planning) of the missional life of their churches. We are hopeful that this article will encourage church leaders not only to look at the spiritual aspect of the church but also at the various resources they are endowed with.
The nature of the 21st century church and mission Though the concept of what it means to be a 'church' has remained unchanged, the 21st century church has found itself in a complex context of technological advancement, secularised and modernised society and issues of Human Rights.The world has now become a global village and what happens in one context affects the other in various ways.Kemper (2016) described missionary service in the 21st century as following the New Testament's mission mandate to move outwards from Judea into all parts of the world.In the 19th century, parts of Africa, Asia and Latin America were considered to be suitable areas for mission.However, he held that mission theology and practice today and in the future are being shaped primarily by a global, ecumenical impulse that respects cultural diversity, promotes mission partnerships and understands that mission and missionaries must come from everywhere and be sent everywhere.In the book, Missions now: This generation, Greenway and Kyle (1990) examined the trend in missions and forecasted that the youth will spearhead missionary activities in the 21st century. Greenway and Kyle argue that unlike in the centuries past, the coming youth are generally idealistic, optimistic, committed and flexible, which are characteristics required of a Christian who enters into a cross-cultural ministry (1990:17). The mass media also presents a golden opportunity for the 21st century church as another means of participating in the missio Dei.Furthermore, social media has also introduced a new reality in which each individual can create a threshold that works for him or her and move it as and when required. It gives one the opportunity to connect or communicate with hundreds of thousands of people (Nadella 2013:2).On the other hand, the 21st century church is also faced with issues of terrorism and a prosperity gospel. The concept of management and planning Rush defines management as a process of meeting the needs of people as they work at accomplishing their jobs (Rush 2003:3).It involves working constructively with resources to accomplish organisational goals.According to Means (2008:350), management relates more closely to the stewardship of human and capital resources.Management therefore requires effective and efficient coordination of all resources through the process of planning, organising, directing and controlling in order to attain a stated objective. From a missiological perspective, the Harvestime International Institute defines management as a process of accomplishing God's purposes and plans through the proper use of human, material and spiritual resources.According to them, management is another word for stewardship (Harvestime International Institute 2001:6).The concept of management has four major functions: planning, organising, leading and controlling (Dayton & Fraser 1990:19). 
Although the concepts of planning and management are not the same, they move together.The role of managers is to plan and implement those plans.Planning is viewed by many scholars and treated in many theories as one of the functions of management.Planning is viewed as the primary management function because it establishes the basics for all the other functions that managers perform (Robbins & Coulter 2002:177).Planning is a critical component of organisational success of which the church is not an exception.Planning is the systematic process of establishing a need and then working out the best way to meet the need, within a strategic framework that enables you to identify priorities and determines your operational principles (Shapiro 2003:4). Robbins and Coulter defined planning as defining the organisation's goals, establishing an overall strategy for achieving those goals and developing a comprehensive set of plans to integrate and coordinate organisational work (Robbins & Coulter 2002:176).Rush defined planning as consisting of identifying the overall purpose of a project, the activities to be performed, their sequence and the resources required to accomplish them (Rush 2003:17).A plan can therefore be defined as a scheme that specifies the future resources and action that an organisation needs in order to achieve its goal in an efficient and orderly way. From the above definitions, it can be said that it involves anticipating future requirements and challenges.It also involves sequencing future resources and actions to minimise the delay and waste, which could arise if events were allowed to take their natural place and chronological order.Even though planning cannot eliminate change, managers plan in order to anticipate changes and develop the most effective response to them. In summary, the following are some of the benefits of planning: • It helps us to focus our attention and efforts on the attainment of desired goals.• It also leads to the proper and rational allocation of limited resources to the attainment of defined targets.• Planning helps us to be proactive in life and not to be reactive.• It minimises waste of time and other resources and promotes proper coordination of activities.• It minimises delays and conflicts in decision-making as well as crisis management. Having discussed the importance of planning and management as one of the missional tools of the church, it is however essential that the church subject her missional planning and management to the leading and direction of God by joining with the Holy Spirit to know what God wants us to do in our context. This approach in other words is called missional discernment. According to Balia and Kim (2010:4), discernment is the first step in mission.In fact, there is no way the church can succeed in mission without engaging in missional discernment to know God's will for any mission venture they intend to embark upon (Ma & Ross 2010:223, 241).The World Council of Churches (2013:52, 56-57) submits that churches are called to discern the work of the life-giving Spirit sent into the world and to join with the Holy Spirit in bringing about God's reign. 
Discernment is one way we connect with God.It is a part of spirituality that opens us to God's movement in our lives.It flows out of a larger commitment to yield our attention, agenda and action towards God.Discernment is an ongoing attitude and practice of Christian spirituality that matters to mission spirituality (White 2016:253-254).The local congregation in this regard is the basic unit of Christian witness and the environment in which discernment takes place (Hendriks 2004:31).Unlike in the secular idea of planning and management, missional planning and management begin with the missional leadership and the entire congregation's commitment to the leading of the Holy Spirit in their participation in the missio Dei. Spirituality of planning and management Spirituality is the core of the Christian experience and encounter with God in real life and action.It gives the deepest meaning to our lives and motivates our actions (Shorter 1978:4).When it comes to planning, no one is comparable to God, no matter how high a position they may hold in a country or in an organisation.Wagner (1987) concisely touched on the spirituality of planning strategies in his book 'Strategies for church growth'.He addressed the popular misconception that planning leaves no room for the sovereignty of God and the spontaneity of the Spirit's work and also rules out dependency upon the work of the Holy Spirit.He laid in brief the biblical and theological foundation for strategic planning.He held that it is essential that we, as citizens and subjects, know what the king is, what his purposes are for the world in which we live and what our roles are in contributing towards these purposes (Wagner 1987:12).Despite God's sovereignty and omnipotence, he chooses to use humans to fulfil his divine purpose. Proverbs 16:1, 32 teaches us that we are not sufficient of ourselves to think, or speak, anything of ourselves that is wise and good, but all our sufficiency is of God.In this regard, because the church is participating in the missio Dei, it is important that we join in with the Father (God) to work according to his missional plan for our society, community and church.Mission starts from God.Therefore, no matter how good a plan a church may have in its participation of the missio Dei, it is required that the plan(s) of each congregation ends with the final decision and the leading of God. This echoes the view that the role of missional leadership is not in the first place in strategic planning or management, but in cultivating within the missional community the capacities needed for spiritual discernment and formation (Kim 2009:1, 40−66;Van Gelder 2007:107).According to the World Council of Churches (2013:56), pneumatological approach to mission exceeds and subverts our theological and ecclesiological boundaries and moves us into a new posture and practice of mission.In this regard, leaders stimulate the missional imagination of their congregation. Sweets (2009:178) asserts that the church does not pass through time and context in hermeneutically sealed containers but rather like yeast that takes new form and changes every culture.Therefore, this calls for an incarnational mission spirituality and discernment to know the context for missional planning and participation (Roxburgh 2011:55;Roxburgh & Romanuk 2006:24). 
Missional planning and management

Jesus' statement, 'For which of you, intending to build a tower, does not sit down first and count the cost, whether he has enough to finish it?' (Lk 14:28), has a strong connotation of the importance of planning in our personal lives. This idea could also be implemented in the missional planning and management of the church. Dayton and Fraser (1990) submit that: if we were Christian farmers, we would not ignore the laws of nature and simply hope for a good harvest. We would not think the more ignorant and backwards we are the more room God has to bring in the harvest. (p. 27) In the words of Mensah, the purpose of management in Christian organisations is to create a fertile climate for spiritual service - a sense of shared mission, wise stewardship of resources and mutual supportiveness (2005:1). He held that the purpose of a Christian ministry is not excellent management per se, for management is merely a means to the end of serving God. Thus, the vision of reaching out to humanity can only be realised with careful management. Dayton and Fraser (1990) in this light remarked that: Management for Christian mission aims at bringing about conditions on earth where God's kingdom comes, where people obey God's commands as they are obeyed in heaven. It aims at a success that happens only within the limits and the power of God's Spirit, in conformity with the truth and claims of the gospel, and through the agency of people gifted by the Spirit whose lives exhibit the beauty of holiness. (p. 27) In view of this, it can be said that effective management and planning is needed in the area of missions. As already explained, planning and management is primarily concerned with the most effective ways of reaching organisational goals. Therefore, it is imperative for the various churches to set goals and then plan to effectively manage both the human and capital resources available to them as they participate in the missio Dei. The unpredictable economic climate has created budgeting challenges for many organisations, and the church is no exception. This has created the need for churches to figure out the best way to manage their limited resources. Church leaders are often challenged with ensuring there is continued funding to support current programmes and fixed operational costs. This can be difficult because the financial needs of a church can be significant, and juggling limited resources can be stressful. To this end, Callahan concludes that 'when congregations utilise effective church finance practices, they invest less time, find more creativity, and have stronger results' (Callahan 1992:3). In doing this, leadership and congregation members who are assigned the responsibility of managing the capital resources of the church should bear in mind that they are stewards of God's property. The cornerstone of stewardship is the full acknowledgement and consistent practice of allowing God to direct what he wants done with what he has entrusted us to manage. Stewardship defines our practical obedience in the administration of everything under our control and everything entrusted to us. This sense of stewardship is what makes the church first of all accountable to God. Stewardship encompasses the way we live our lives and manage our time and the resources of God. It creates the awareness that everyone shall give account of himself or herself to God and be rewarded.
Management of human resources for mission Human beings are the most important assets of any organisation and the church is not an exception.According to Snell and Bohlander, the term 'human resources' implies that people have the capabilities that drive organisational performance (Snell & Bohlander 2007:4).Human resource involves the commitment, skills, knowledge, potential and availability of people.It extends to all their assets whether personal and financial.A church's human resource base falls into two categories: Volunteer workers and paid church staff.How effective a denomination is in the fulfilment of the missio Dei can therefore be determined by how people and resources are managed. The solution to faith communities' questions about how to participate in God's missional praxis is a critical, constructive dialogue or correlation between their interpretations of the realities of the global and local context and the faith resources at their disposal (Hendriks 2004:30).The Harvestime International Institute holds that as believers, each of us is a manager of spiritual resources with which God has entrusted us (Harvestime International Institute 2001:7).Willis remarked that not only does the church involve the laity, it is the laity (Willis 1988:49).It can therefore be established that every believer is called to perform a ministry.The mandate of the church in mission is not the mandate of a special class of people but every member of the body of Christ.Therefore, this makes each congregation member a missional resource for the advancement of the kingdom of God. The missional ecclesiological orientation of LCI is not a building or an institution but a community of witness, called into being and equipped by God and sent into the world to testify to and participate in Christ's work.The church focuses on the formation of a Christian community around God's mission in and for the world within the congregation's own context and as they are being led by the Holy Spirit.Local congregations are therefore impelled to step out of their comfort zones and cross boundaries for the sake of the mission of God (World Council of Churches 2013:67). Ephesians 4:1-16 says that Christ endowed the church with certain persons whose duty it is to equip the saints.The Greek word for equip or perfect in verse 12 means to mend. It also means to educate, to train, to guide or to enable a person fully to do a task.The equipper's task is to perfect the saints so that they can do the work of the ministry and build up the body of Christ.The primary purpose of the equippers is to enable the saints to minister and live worthy lives of humility, meekness, long suffering, forbearance and unity (Eph 1:18; 4:1-6). In this sense, it can be said that the equippers are the human resource managers in the economy of the church.However, the equippers' or leaders' role as managers is even more needed after people are ready for the service of ministry.Human resources managers plan, direct and coordinate the administrative functions of an organisation. Ministry team members are valued not only for their performance contributions to the ministry but also because they are members of God's family.Effective Christian management results in the ministry team being a family of people who care about and for one another.Management of Christian organisations is people-centred and participative. 
Ministry management places a high value on cooperation and teamwork.Team members are motivated by a shared sense of vision and mission, which is more important to them than personal gain.Goals are pursued selflessly and sacrificially.How Christians work with one another is just as important as what they are striving to accomplish.A key aim of ministry management is to help team members become more Christ like.Management in Christian organisations is ultimately a partnership with God, built on prayer, faith and obedience in the participation in the missio Dei. The role of missional leadership in the management in Lighthouse Chapel International Management is like health; it is often easier to describe it by its absence than by its presence.The complexity of the management task rises exponentially with the size of an organisation.Although many people hold the view that there are natural or born leaders, good management is however a learned skill.People are not born with it (Dayton & Fraser 1990:45). Most churches and mission organisations begin with a natural leader, often one who has a strong charismatic personality, thus ability to inspire others with his or her vision of what God wants to accomplish.This type of leadership is effective so long as the leader has the time and energy to oversee the entire operation of the organisation.As soon as the organisation multiplies, it becomes very difficult for the natural leader to give day-to-day guidance; management skills are therefore needed.Dag Heward-Mills remarked that 'while the membership of most churches has increased over the past few years; the pastor's ability for handling larger crowds has not been developed' (Heward-Mills 2007a:187).This observation from Bishop Dag is very true, especially among Ghanaian Pentecostal churches.Many of their leaders lack the skills to properly manage their churches. According to Hirsch (2006:152), if we really want missional church, then we must have a missional leadership system to drive it.This requires the reinterpreting of the denomination's foundational values in the light of the demands of its mission.In this regard, the missional leader needs to be a visionary, who can outlast significant opposition from within the denominational structures and can build alliances with those who desire change.In addition, the leader has to encourage signs of life within the existing structures and raise up a new generation of leaders and churches from the old (Addison 1995:90).Guder (1998:183) posits that the key to the formation of missional communities is their leadership.Leadership is a critical gift, provided by the Spirit because, as the Scripture demonstrates, fundamental change in any body of people requires leaders capable of transforming its life and being transformed themselves. The fact that LCI has arisen beyond the oversight of its founder and is still functioning effectively points to good missional leadership.The church currently has over 1800 branches in 79 countries.The leadership and the management system of the church epitomise a balance between spiritual and organisational leadership in LCI.It is in this light that Bishop Heward-Mills advised that a good study of administration and management will do the church of God a lot of good (Heward-Mills 2007a:187). 
In emphasising the responsibility of the church in planning strategies, Malphurs submits that without mission strategies, churches are only wasting their time. Hence, the strategy of the church is the vehicle that enables the church to accomplish the great commission (Malphurs 1996:30). Planning is a concept that is highly prioritised in LCI. Carefully written-down plans are well communicated in the church. The strategic plan of LCI is usually developed by the Bishop Council (the highest decision-making and planning body) as they are led by the Holy Spirit; it then moves down to the general council, regional council, lay preacher's council and the stallion's council, who develop strategic plans for the ministry. The church has been organised in such a way that every activity of their church leaders is recorded and monitored through a Web system called Pastoshep. Pastoshep is a computer programme used to keep accurate data and statistics on pastoral activities. It has afforded the church the opportunity to care for and monitor all ministers in LCI. Heward-Mills remarked that despite the limitations resulting from limited addresses and telephone numbers in Ghana, the Pastoshep has become a reliable and technological method of assessing the work of all pastors and shepherds alike. Thus, without being there to physically see what people are doing, this computer programme keeps accurate data on what everybody in LCI is doing (Heward-Mills 2007b:184). The Pastoshep has also helped the church in the management of their congregational membership at every location where they have established a church or cell group. The programme has been designed in such a way that every detail of their members is stored on the system. The only disadvantage of the Pastoshep is that it requires Internet access everywhere it is used. Therefore, ministers or pastors who have churches or cell groups at locations without Internet are required to travel to where they can have access to the Internet. In their mission approach, whenever they are to plant a church or hold an open-air crusade dubbed the 'Healing Jesus Evangelism Campaign', they usually send the Advance Team to the venue. The Advance Team is a special group of highly skilled and experienced individuals charged with the mandate to explore new territories and organise campaigns many weeks in advance. They make all the necessary preparations for the campaign to begin. This is done in conjunction with the Mission Team, the Protocol Team, the Praises and Worship Team, the Medical Mission Team and the Media Team.

Managing the resources of the church with a missiological focus

The church is endowed with several resources that should be properly managed for missional purposes. Resource management is essential to any organisational success.
Because every single soul in the church has a role to play, it is important to pay attention to all persons in the church as well as their gifts and talents. A balanced church is one that has people from all walks of life active in it. There should be the young and old, educated and uneducated, rich and poor, and men and women. It is therefore imperative for churches to incorporate all sorts of people to share the burden of mission. Heward-Mills holds that, more often than not, the category of people who are written off, as far as ministry is concerned, stands a chance of contributing greatly to the kingdom (Heward-Mills 2007b:3). The fact that the ministry cannot be borne by one person necessitated the training and management of the people for the effective accomplishment of the church's goals. According to Gibbs and Coffey (2001:121), the focus of missional leadership is not in the first place on strategic planning or management, but on cultivating within the missional community the capacities needed for spiritual discernment and formation.

In LCI, people from all walks of life are incorporated into the running of the church. The purpose of this approach is to equip them for the fulfilment of the mission of God. Bishop Heward-Mills remarked that he has many medical doctors, specialists, lecturers, architects and engineers, all of whom are serving as lay pastors (Heward-Mills 2007b:3). Thus, every individual in the church shares the burden of ministry and of carrying out the mission of God. In Van Gelder's view, when we talk of a congregation, it is the discovering together of the missional vocation of the community. It is beginning to redefine 'success' and 'vitality' in terms of faithfulness to God's calling and sending. It is seeking to discern God's specific missional vocation for the entire community and for all of its members (Van Gelder 2007:33). With this worldview, the missional church understands itself to be missionary by nature - called, equipped and sent into the world by the Holy Spirit to participate fully in God's mission (Van Gelder 2005:23). The missio Dei presupposes mission communities that are witnesses to the work of God being carried out. The witness to God's work is through the people God calls and sets apart for this mission (Guder 2000:146).

In order to develop and equip members of the LCI to participate in the missio Dei, the Anagkazo Bible Training School was established. Currently, Anagkazo has over 10 000 students worldwide who are acquiring ministerial formation education both residentially and through online or distance learning. Many of its international students come from Africa, Europe, Asia, Australia and the Caribbean. Graduates of Anagkazo are known for their soul-winning and church-planting passion and have a strong desire to see the gospel preached to every soul everywhere (Anagkazo Bible Seminary 2013). Pastors and ministers who graduate from Anagkazo are sent to start churches in communities all over the world.

According to the LCI church-planting approach, a group of three or more is always enough to start a church. This has caused the church to send missionaries to various places and countries to plant churches. Remarkably, many of these young men and women are professionals and are therefore sent as tent-making missionaries. This approach has also helped the church to save a lot of money, because these lay ministers are not paid by the church.
Lighthouse and tent-making ministry

We live in an important time with new challenges for the advancement of the gospel. One major challenge has to do with finances. It is, however, an acknowledged fact that every church has a limit to its resources (Heward-Mills 2007b:10). In Cook's observation, if the members of the church who go out as missionaries are more than the church can support, the problem becomes a serious one (Cook 1971:253). This is one of the reasons why 21st-century churches should place more emphasis on 'tent-making ministry' as a strategy to overcome financial barriers in the fulfilment of the missio Dei.

Tent-making ministry refers to the activities of a mature Christian who, while dedicating himself or herself to the ministry of the gospel, receives little or no pay for church work, but performs other jobs to provide support. Specifically, tent making can also be referred to as a method of international Christian evangelism in which missionaries support themselves by working full-time in the marketplace with their skills and education, instead of receiving financial support from a church (Siemens 1999:733-741). The Apostle Paul's missionary account says that he supported himself by making tents while living and preaching in Corinth. On another occasion, he reported that he frequently performed outside work in order not to be a financial burden to the young churches he founded (White 2014:226). Tent making is a time-proven and biblical tool, adopted by the Apostle Paul, the first missionary to integrate secular work and ministry (McNamara 2012:1).

In LCI, tent-making ministers are called lay ministers (Heward-Mills 2008:4). Bishop Heward-Mills defines a lay person or minister as someone who maintains his or her secular job and yet is active in the ministry of the Lord Jesus, and a full-time minister as someone who has abandoned his or her secular job to concentrate fully on the ministry (Heward-Mills 2007b:7). To him, one of the greatest keys to extensive ministry work is the tent ministry (Heward-Mills 2008:1). Churches in this world that have experienced phenomenal growth have all employed the principle of using lay people for the ministry. In emphasising the degree to which the church has adopted this strategy, he wrote that: Though I oversee a hundred churches and several thousand people at the Lighthouse Cathedral, we currently have only seven full-time pastors on staff worldwide. We have over two hundred pastors and trainee pastors within the ministry. Ninety-five percent of them are unpaid lay people…. It is not possible to pay salaries and rent an unlimited number of houses for the staff of the ministry. Full-time staff are limited in the number and the amount of work that can be done. The use of lay people is the secret to unlimited expansion of the church. (Heward-Mills 2007b:1)

Tent making is very important as it sometimes provides Christians the chance to serve in countries normally closed to mission work. Governments hostile to Christianity often accept well-qualified teachers, computer technicians, doctors and engineers into their countries to work, even if these men and women are Christians. These professionals are able to serve the country and support themselves while using that opportunity to perform mission work in those countries.
Missional copycat

Having discussed how the LCI has been able to apply the principles of missional planning and management in their participation in the missio Dei, an assessment of their missional leadership formation shows it to be more or less a kind of copycat approach. The term 'copycat' usually refers to a person or thing that copies, imitates or follows the lead of another. The Cambridge Dictionary (2017) defines it as 'someone who has few ideas of their own and does or says exactly the same as someone else'.

In LCI leadership and missional formation, members and aspiring leaders are made to read the books and listen to and watch the sermons of Bishop Heward-Mills. They are then assessed based on these books and sermons. They are required to preach prescribed sermons in their local churches, which should be in line with those of Bishop Heward-Mills.

This approach in their leadership formation leaves little or no room for aspiring leaders (church pastors) to be themselves and develop their personal capacities in sermon preparation.

In their understanding, this approach is to give the church a missional brand for uniformity and also to avoid any form of heresy.

The New Testament suggests that Spirit-empowered movements articulate the gospel for a particular context. If this is the case, then it means that what LCI needs is less imitation and more discernment through the Holy Spirit. My purpose for this critical reflection on the LCI missional formation is to point out that, in spite of the fact that the church wants to maintain its missional brand, it is equally important not to impose the sermons of Bishop Heward-Mills on ministers of the church.

Conclusion

Planning and management are part of our everyday life, and so they are in the life of the church. This article discusses the role of planning and management from a missional perspective by using the LCI as a case study. The article started with an overview of the nature of the 21st-century church and mission. It argues that mission theology and practice today and in the future are being shaped primarily by the dynamics of global and ecumenical impulses. The article further noted that in order for the church to be missionally relevant in the changing dynamics of the world, missional leaders and their congregations must see discernment as the first step in their involvement in the missio Dei.

The article unearthed how the LCI has used planning and management as one of the missional tools in their participation in the missio Dei. Although leadership has a major role to play in the missional planning and management of the church, it is, however, a holistic and all-inclusive agenda. Missional planning and management should include the involvement of the Holy Spirit, the congregational leadership, the entire congregation and the various resources the church has been endowed with by God.

The paper finally noted that despite the fact that LCI has been successful in applying planning and management in their participation in the missio Dei, their missional leadership formation, which takes the form of a missional copycat, should be looked at again to make room for deeper missional discernment.
The Anagkazo Bible Training School was founded in 1996 by Bishop Dag Heward-Mills. The establishment of the school was prompted by Bishop Dag's desire to see young men and women trained and equipped for the work of the ministry. Anagkazo started as a 1-year part-time bible school. The first graduation took place in November 1997 with 12 students. It later evolved into a 2-year full-time programme, and in the 2006 academic year the period of training was extended to 4 years.
8,458.8
2017-06-30T00:00:00.000
[ "Philosophy" ]
Metaphysical Foundations of Theistic Argumentation

© 2021 The Author. This article is licensed under a Creative Commons Attribution 4.0 License.

Abstract. The proofs of God's existence are the subject matter of this article. Four main types of proofs are analyzed: cosmological, teleological, ontological and moral. It is argued that there is a general scheme of theistic reasoning present in all four types of proving. The principal feature of this scheme lies in recognizing a ground of everything existing which goes beyond the material (or natural) world. Possible naturalistic arguments excluding a non-material, supernatural foundation of the world are also analyzed. Objections to the naturalistic arguments are formulated, making it possible to assert that the natural world cannot be explained from itself. Nor can it be explained from its physical (or natural) part. At the same time, the material world needs an explanation. To meet this need, extended direct theistic arguments are formulated in the article. They begin with the fact of there being something and include two aspects of theistic argumentation: one is to establish the existence of an immaterial foundation of the natural world; the other is to demonstrate that this immaterial foundation may be identified with the world subject – an omnipotent, omniscient, all-good, immaterial rational being which mainly corresponds to the God of theistic religions. The conclusion is drawn that the thinkability and rationality of the idea of God are provable. One can argue that the idea of God explains the world more rationally. Moreover, it is evident that theism is rational, while naturalism (as the principle of the general explanation of everything) is irrational. But the question remains how rational the world is.

INTRODUCTION

Proving God's existence is one of the oldest intellectual games played by civilized humankind. Many philosophers and theologians have tried their hand at this game. However, the practical results of their efforts are insignificant. Typically, the evidence found does not convince nonbelievers. The most apparent denials do not dissuade believers. The existence of God remains a matter of faith. It seems that any positive knowledge is impossible here. Nevertheless, the discussion is not fading and seems even to be intensifying. Why is this happening? It is unlikely that both proofs and refutations of the existence of God are just exercises for the mind. The issue under discussion is too serious. One can assume that people have an inescapable need for a holistic, understandable, systematic, complete, and comfortable mental model of the world. Such a model is to be, in a sense, independent of empirical data. And such a model must either include God (or something Divine) or exclude it. It seems that the philosophical problem of God is not a problem of trusting any religion; it is not a problem of faith. It is the problem of integrity in our understanding of the world. Without clarifying our attitude to the idea of God (accepting it or denying it), we leave the natural world as a whole as something otherworldly for ourselves; we mystify it to a large extent. If our complete understanding of the world is impossible, then is the concept of God the best possible explanation? If our understanding of the world is possible, then is the recognition of God's existence true? And is it possible, in this case, to explain the world without resorting to such a recognition?
In the history of philosophy (including philosophical theology), many ways have been developed to prove the existence of God. The very fact that there are many proofs invokes some doubts about their effectiveness and raises the question of their unification. The literature on the topic is boundless. Since our article is not bibliographic and does not relate to the history of ideas, careful analysis of this literature would have distracted us from our goal. (For a rather detailed summary of the main types of theistic argumentation in theology and philosophy, see [10]). As a basis for reasoning, four types of proofs are chosen: cosmological, teleological, ontological, and moral. A substantial general survey of the cosmological proof can be found in [7], teleological -in [6], ontological -in [5], moral -in [2]. Of the relatively recent studies in English on the first type of proofs, one should pay attention to [1], on the second -to [9], on the third -to [4], on the fourth -to [8]. In its general formulation, the cosmological argument comes down to the statement that the dependencies observed in the universe indicate the existence of "a first cause, sustaining cause, unmoved mover, necessary being, or personal being (God)" [7]. The teleological argument can be taken in this formulation: "Some phenomena within nature exhibit such exquisiteness of structure, function or interconnectedness that many people have found it natural to see a deliberative and directive mind behind those phenomena. …The resultant theistic arguments, in their various logical forms, share a focus on a plan, purpose, intention, and design, and are thus classified as teleological arguments (or, frequently, as arguments from or to design)" [6]. The ontological argument in its general form can be taken like this: it is an argument which makes an inference from the concept of God (its semantics or modality) to God's existence, in other words -"from premises which are supposed to derive from some source other than observation of the world" to "the conclusion that God exists" [5]. Finally, the moral argument is acceptable to understand as an argument "that reasons from some feature of morality or the moral life to the existence of God, usually understood as a morally good creator of the universe" [2]. The article aims to justify the thinkability of theistic argumentation and, on the contrary, the unthinkability of counter-arguments, those offering the alternatives to the theistic version of the world understanding. For this, a unified formulation of the theistic argument will be given, combining the four formulated basic types of God's existence proofs. All logically possible alternatives to this argument will be formulated. It will be demonstrated that these alternative arguments don't work. Finally, the metaphysically extended theistic argument will be presented, and its relation to the common traits of the idea of God in theistic religions will be defined. RESULTS AND DISCUSSION Any proof of God's existence includes two objectives, similar to those distinguished by William L. Rowe within the cosmological argument, "one to establish the existence of a first cause or necessary being, the other that this necessary being is God" [7]. In some (non-theistic) religions, the first objective is self-sufficient. Considering the overall scheme of proofs for the existence of God, one should have in mind the sequence of these objectives. All such proofs will be called theistic arguments. 
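One compact way to read these two objectives, before the detailed scheme is laid out, is symbolic; the notation here is editorial, not the article's. Write G(y, x) for 'y grounds x' and G* for the transitive closure of G. The two objectives then read:

$$\text{(i)}\quad \exists u\,\big[\neg\exists y\,G(y,u)\ \wedge\ \forall x\,(x\neq u \rightarrow G^{*}(u,x))\big] \qquad\qquad \text{(ii)}\quad u = \mathrm{God}$$

The cosmological, teleological, ontological and moral arguments differ in the evidence used to secure (i); step (ii) is what turns a "first cause or necessary being" result into a specifically theistic one. The article goes on to treat the grounding relation G as non-reflexive, asymmetric and transitive.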
The general scheme of theistic arguments is as follows: 1. Everything that exists (besides, perhaps, the ground of everything, if there is any) has an outside ground of its existence. Thus, everything that exists is grounded. (An object (that is, something that exists) should be called grounded if it exists only under the condition of another object's existence, the latter being called its ground. Note that the relationship of grounding is generally non-reflexive, asymmetric and transitive.) 2. There is clear evidence of existing objects of a particular type. 3. This evidence points to a ground that goes beyond the material (or natural) world. 4. The stated ground is ultimate; that is, it has no further ground. 5. Given its groundlessness and point 1, the ultimate ground is the ground of all that exists. 6. Consequently, the ultimate ground is the immaterial (by point 3) ground of the material (or natural) world. 7. Such a ground of everything in the world is God: an omnipotent (by being the foundation of everything), omniscient (for the same reason and by its immateriality) and all-good (by creating the human moral creature) being, the immaterial Creator of the world. The stated evidence of experience can be divided into external and internal. External evidence comprises the natural dependences between the phenomena of the material world (for example, causal ones) and the purposiveness of phenomena in the material world; this is the basis of the cosmological and teleological arguments. Internal evidence comprises the concept of God and values (primarily moral ones) in the human mind. The ontological and moral arguments are based on this. Point 2 of the scheme shown is not objectionable, since we deal here with the evidence of experience. Points 5, 6 and 7 are conclusions based on the previous points. Points 1, 3 and 4 are debatable. The provisions formulated here are not logically forbidden. But alternative provisions denying the immaterial basis of the world do not seem to be logically forbidden either. Such alternatives, which exclude a non-material, super-natural foundation of the world, should be called naturalistic arguments. The following naturalistic arguments are possible. Naturalistic Argument 1. The natural world has no fundamental, ultimate (i.e., groundless) ground. Nevertheless, there exists some ground for everything existing. The chain of grounds is endless. Thus, point 1 of the theistic argumentation scheme is accepted, but points 3 and 4 are not. Objection to Naturalistic Argument 1. If every chain of grounds is infinite and there is no first ground in it, nothing is grounded either. Anything grounded is infinitely far from an infinitely distant foundation - in this sense, the chain is symmetrical - and the absence of the first ground means the absence of the end of the chain. At least the final link in the chain, with which anything grounded must be identified, is no more explainable than the first link. Thus, the absence of the first ground is equivalent to the absence of anything grounded. There is yet another objection. Even accepting the existence of an infinite chain of grounds, one should admit that one infinite chain is different from another. For this difference, there must be a common ground - fundamental for both chains. Naturalistic Argument 2. The natural world has no foundation. The world as it exists is a "brute fact"; the world is what it is and, therefore, cannot be different. The world (in all its diversity) is the ground of itself.
Therefore, the question of the ultimate foundation is meaningless; the question of grounding does not apply to the world as a whole, only to separate existing objects. This argument denies points 1 and 3 of the theistic argumentation scheme, but in a sense it recognizes point 4, identifying the groundless ground with the world in all its diversity. Objection to Naturalistic Argument 2. It rests on the provision that the diversity of the world is concrete. Not all logically and naturally possible facts take place in our world. It is enough to ask oneself whether the existence of another world with a different variety of objects is permissible for the argument under consideration to look doubtful. If Naturalistic Argument 2 were to hold, the material world we live in would have to exist due to absolute contingency. But absolute contingency is impossible. A random fact is a way of realizing different possibilities. There can be no contingency when there is only one possibility, and one cannot even presuppose its non-realization. Contingency coexists with alternative possibilities, with the necessity of this (and not another) alternative, and so has some outer ground. Pure contingency, which is nothing more than contingency, does not exist. Such is the case with the absolutely contingent first ground of the material world. Absolute contingency is contingency without any further ground, and therefore it is entirely identical with absolute necessity. Being a concrete variety, the material world does not exist with absolute necessity or, equivalently, is not absolutely contingent. Naturalistic Argument 3. The natural world is everything that can exist. In other words, everything that can exist exists. If there are alternative possibilities, what exists is what is better adapted to existence (a favourite idea of evolutionary biologists, notably R. Dawkins). Such an argument is compatible with point 1 of the theistic argumentation scheme, even with point 4 (all logically and naturally admissible possibilities can be taken as the ultimate ground of what exists). Only point 3 of the above scheme is resolutely denied. Objection to Naturalistic Argument 3. Suppose the world is all that can exist, and this all is explained without the participation of an external factor. In that case, this totality of the possible necessarily contains mutually exclusive entities. This makes the indicated totality unrealizable. All (alternative) possibilities cannot be realized if we are talking about the world as a whole, that is, about everything that exists. There are mutually exclusive possibilities among them. Mutually exclusive possibilities cannot coexist as parallel (possible) worlds, since such coexistence is also only one of the possibilities (the unique, the only existing world and one of many worlds that cannot coexist). The choice between them is inevitable, but the choice must have a ground. Neither one of such contradictory entities can play the role of the world's foundation, nor all entities together, since such a primary ground must, in turn, be grounded. The world, therefore, is not all that can exist. Suppose, from another point of view, we imagine what really exists as something that has an advantage in existence over its alternatives. In that case, this advantage (as well as any disadvantage) must have an external ground. Objection to Naturalistic Argument 4.
When trying to find the fundamental natural parts in the material world or the ultimate physical elements, we must constantly ask why these parts are fundamental? How does their fundamental nature derive from their properties? Why are they like that and are not something different? And do they really not require further explanation and grounding? As a rule, some quite specific and rather complex physical objects or conditions are proposed as such an ultimate ground for everything. It is possible to prove their existence, but it is impossible to prove the absolute necessity of their existence, therefore, their fundamental nature. For the sake of completeness, agnostic considerations should be added to the anti-theistic arguments. From this perspective, the world is inexplicable because human knowledge is limited and will always be in this position. Perhaps so, although perhaps not. In any case, this does not exclude attempts to explain the world and accept the best explanation achieved. Let's summarize preliminary results. If the stated objections to naturalistic arguments are correct, then the natural world cannot be explained by itself. The world cannot be explained from its physical (or natural) part. At the same time, the material world cannot but be explained. It requires an explanation by something external to itself. Of course, this conclusion cannot yet be recognized as the final confirmation of the theistic picture of the world. Negative proof in the logical form of modus tollendo ponens is, in fact, not a direct proof of the existence of God, or at least of some transcendental reality, but is just the proof of the doubtfulness of denying it. Direct arguments seem to have to be formulated anyway. One should supplement the stated general scheme of theistic argumentation by some additional considerations to present such proof. This is how this extended scheme may look like: 1. Something exists. This means existence in any sense other than negative (the existence of absolute nothingness is excluded). The truth of this statement is evident from all points of view. 2. The existing differs. This statement is also undeniable, being an expression of a basic fact similar to Descartes' famous cogito. It is impossible to exist without being different (which indicates the identity of being and difference (more about this, in [3])). 3. All that exists does not exist without something different from it. It is true since the existing is different and, therefore, there cannot be one thing only within the world. 4. Everything that exists (except for what is in a sense identical to everything that exists) has a ground for its existence in something different from it (in the sense that is expressed in point 1 of the previous scheme). This statement follows from statement 3 -if something (object A) does not exist without something else (object B), then the existence of another (object B) is necessary for the existence of this something (object A). 5. The ground is not necessarily inexistent, provided the grounded does not exist. It is so since any existing requires the existence of something else outside itself, but not necessarily something concrete, that is, not necessarily what is called here the grounded. So the grounded is necessarily connected with the ground; the ground is not necessarily connected with the grounded. 6. The grounded generally (with some exclusions) is possible to its ground; that is, it may or may not exist if the ground exists. 
Since the grounded is also the ground of its own grounded, it is necessary in relation to the next link in the series of groundings. 7. The existence of something without a ground is necessary. There is a reality that exists with absolute necessity. Since a) something exists, and this is an absolute, undeniable fact, and b) the grounded is generally only possible, something without a ground cannot but exist. 8. There is a first (or ultimate) foundation of everything. This first foundation is that which exists independently of everything that may not exist. Thus it is that which exists entirely necessarily. In other words, it does not exist as a realized possibility, though it can exist in something else as a potential necessity. 9. The first ground includes all entities (no matter how many of them there may be, not excluding one entity) which mutually do not exist in the absence of their counterparts. In other words, the ultimate ground includes all necessary entities. 10. The world in which we find ourselves can be called the natural world. But it is better to define it as material. The material world is a world in which there exist (possibly along with others) material objects, that is, terminally individual and concrete objects. The concept of terminal individuality and concreteness must be clarified. An object is terminally individual if it (without its change) does not exist and cannot exist as different objects. An object is terminally concrete if it (without its change) does not exist as an object with properties additional to those that it has. Terminally individual and concrete objects differ, of course, from abstract objects (or formal ones, in other words). The following example nicely illustrates the difference. Imagine an abstract object, say a mathematical one (all mathematical objects are abstract) - a triangle. One can imagine many triangles, each of which is a triangle, that is, an instance of a general object exemplified in individuals. In this case, more triangles can be added to any finite set of triangles; that is, one can increase the generality of a given mathematical object. Further, one can imagine a triangle that is right-angled, isosceles, equilateral, relatively large, relatively small - more specific than the original representation. At the same time, one more property can be added to each specific triangle; in other words, it can be concretized even more. So, a triangle as a mathematical (abstract) object can exist as a general object, represented by single objects, and can, in principle, be infinitely concretized. No material object is capable of this. Imagine a material triangle, for example, a triangle made of blue wire, which I hold in my hands. It cannot exist as different triangles; it cannot be more specific than it is. In this sense, it is entirely terminally individual and concrete. The material or natural world consists (entirely or to a large extent) of such objects. 11. The material world is the world of possible objects. This follows from the fact that any abstract and general objects (as has been shown) can be instantiated into different terminally concrete and individual objects. Or they can remain uninstantiated. (We will not decide here whether all abstract objects are necessary entities only or, respectively, whether some abstract objects belong entirely to the material world.) 12.
Obviously, a) the material world has a foundation; b) the ground of the material world is outside this world, is transcendent to it; c) the ground of the material world is non-material. This statement is based on what has been said before. Note that, in speaking about finding a transcendent foundation outside the natural world, one should not in any way understand this "outside" in the spatial sense. We are talking about purely existential, metaphysical transcendence, in other words, about the simple difference between the foundation of the world and the world itself or any of its material parts. The difference here acquires the character of the highest metaphysical reality (in a certain sense, let us repeat, it is identical to being). 13. The relationship between the ground and the grounded is the relationship between the reality of possibilities and the possibilities that have been realized. In a sense, the ultimate foundation is the possibility of anything else (it appears to be so if the ultimate foundation is recognized as being identical to metaphysical difference). 14. A possibility that is to be realized cannot be the only one (otherwise, it would be necessary). The ground's necessary transition to the grounded destroys the very status of the ground and the grounded, since instead of the ground and the grounded one has a simple (necessary) coexistence of different realities (objects). This coexistence must itself be grounded and therefore is not fundamental. Consequently, the foundation of the world is some set of alternative possibilities (at least such is the ultimate foundation of the material world). 15. All alternative possibilities cannot be realized, since there are mutually exclusive possibilities among them. Mutually exclusive possibilities cannot coexist as parallel (possible) worlds, as was already mentioned when refuting Naturalistic Argument 3. 16. An absolute, ultimate foundation cannot include unequal possibilities either. Every specific probability distribution over different possibilities requires its own ground, so a ground containing unequal possibilities is not an ultimate foundation: it must have a further ground. 17. The actualization of what is possible in the grounding relation cannot be accidental. This is obvious if one talks about an ultimate (absolute) foundation (in non-fundamental relations of grounding, random results are quite possible). This was also mentioned in the refutation of Naturalistic Argument 2. 18. The described types of transition from the ultimate foundation of everything to the grounded as the realized possible (that is, to the material world) - namely the necessary transition, the realization of all possibilities, the realization of the preferred possibility, and randomness - are all characterized by the directness of the ground/grounded relation. In this type of grounding, the ground has what is sufficient for the actualization of the grounded, or (in the variant of randomness) there are factors (objects) external to the ground which also directly condition the actualization of the grounded. 19. The invalidity of all direct ways of transition from the ultimate foundation to the possible grounded objects makes the indirect one real. So the difference between the ultimate foundation and the possible grounded objects is not direct.
There is a mediator between the ultimate foundation and the possible grounded objects. 20. The mediator has two sides, which are identified as two differences -from the grounded and the ultimate ground. On the one side, the mediator differs from the grounded as its analogue separated from the grounded itself or as an image of a result of grounding (but not as a result itself). Such an entity can also be called a grounding criterion, an anticipation of a result, or a standard applied to alternative grounding possibilities. 21. The mediator differs, on the other side, from the ultimate foundation of everything as the ground of the transition to specific, grounded objects (or the ground of a specific transition). Such an entity that ensures this transition according to a grounding criterion can be called the source of the transition to the grounded, or it can be called the actor. 22. The difference of the grounding result (mediated by the actor and the anticipation of a result) from its ground should be called activity. Activity has its own ground, in which one can discern the whole ground of the grounded, the actor, the anticipation of a result and the possibilities of the grounded. 23. The ground of activity can be external if the elements of indicated difference are themselves grounded by entities not included in their circle. This is how, for example, the activity of intelligent technical devices is externally grounded. An outwardly conditioned activity is not in some respect the agent's activity; in other words, it is not in some respect an activity conditioned only by the agent (which, therefore, cannot yet be called a subject). 24. In the case of fundamental reality, the ground of activity is internal. Activity is due to the mutual grounding of the elements that form the ground of activity. There are simply no external entities that condition activity since we are talking about everything that exists. 25. The internally grounded actor differs from the result anticipation, like the anticipation of activity. Such an actor anticipating activity is called the subject of activity. The subject is identical to the actor with anticipation of activity and anticipation of a result, the activity of which is conditioned only by the actor itself and its anticipations. The subject, therefore, is an actor whose activity (at least in a certain respect) is not conditioned by external circumstances. (A human person is a subject since his or her activity in a certain respect is not conditioned by external circumstances). 26. In the subject, two spheres of activity are correlated -ideal (image) and natural (action). The image consists of the formal elements of activity, which constitute the form of consciousness in human beings. The image also includes distinguishing the form of the source of activity, which is analogous to human self-awareness or the human Self. 27. The subject's activity under these circumstances acquires the nature of choice. This choice is purposeful and free from immediate necessity. It includes the alternative selectable forms, the selection criterion, and the adequate ground for the choice (an actor). The selection criterion coexists with the selectable forms as an exemplary image of the latter. An exemplary image is the direct ground of choice. 28. In general, one can say that the ground of all that exists reproduces the function of choice. This function singles out the subject in the universal ground or, which is the same, singles out the ground of activity. 
The function of choice is a necessary mediation within the relationship between the ultimate ground and the possible grounded (the material world). In relation to the material world, this function acquires the nature of the world subject. 29. An analogy can be drawn between the world subject and a human being, since the latter is also a subject. In a person, the relation of the ideal and natural sides of activity is psycho-physical; it is the relationship between consciousness, cognition and morality on one side and physical action on the other. The world subject must be similar to a human person; this subject should rather be a rational and moral being. 30. In relation to the absolute foundation of everything, the world subject is the first difference of this foundation from itself, the difference between the foundation as the world possibility and its purely existential side. However, in relation to the material world, the world subject is already defined as its external foundation, that is, the acting (or active) side of the ultimate foundation of everything. 31. The function of choice empowers the world subject with the role of the Creator of the material world. It also leads to omniscience, since conscious creativity requires knowledge of everything that is being done. The presence of an exemplary image of the chosen, the criterion of choice (which posits the choice between good and bad), is expressed by the category of the Good and the quality of all-goodness. 32. Eventually, this omnipotent, omniscient, all-good, immaterial rational being mainly corresponds to the God of theistic religions. However, it does not cover all the specific features of the images of God drawn by religions.

CONCLUSION

What has been proven, and to what extent? Has God's existence been substantiated? It seems that the absolute thinkability and rationality of the idea of God have been proven. And vice versa - the absolute unthinkability of the world without God has been substantiated. Does this mean that God exists? The conclusion can be drawn if desired, but it will not be completely apodictic. To come to an apodictic conclusion, we need, for example, an inference such as this: The real existence of God is rationally demonstrable. There are no explanations other than rational ones of what really exists. Therefore, God is necessarily real (God really exists). Unfortunately, we lack the major premise of the syllogism. We cannot say with complete certainty that the world is rational - in other words, that it is entirely consistent with our way of thinking. Therefore, only a very significant plausibility of the idea of God and its logical admissibility can be deduced with sufficient logical rigour. One can argue that the idea of God explains the world more rationally. Naturalistic models of the world lead those who are trying to understand the world this way either to contradictions and discrepancies or to complete uncertainty and inexplicability. Theism, strange as it may sound to many, is more rational than naturalism. Naturalism is a kind of mysticism where the mysterious forces of nature operate.
So, it is evident that theism is rational, while naturalism (as the principle of the general explanation of everything) is irrational. But the question remains how rational the world is, how rational everything that exists might be. There is no logical reason not to believe in the existence of God. But there can be (and apparently often are) super-logical grounds for doubt. Doubt is incompatible with faith. Intuition, which finds support in the data of limited, incomplete life experience, speaks in this case in favour of the absence of God around us and in ourselves. Further, this intuitive sense of the absence of God compels the mind to seek rational excuses for such intuition. These justifications are based on the data of the particular sciences, which, like the intuition described, are rationalizations of incomplete, limited experience. In order for these scientific constructions to claim the status of a metaphysical generalization, a ban is introduced on recognizing that human thinking directly reflects the whole of experience. As for the lack of complete experience, it should be argued, however, that our experience is dual in this sense. We perceive simultaneously a part of the existing and all that exists. More precisely, we perceive something as part of the existing and as the existing in general. We perceive at once, in the object of perception, different levels of abstractness of its being. We find around us and in ourselves something that exists which, at the same time, is the existing as such. Therefore, metaphysics is not a purely speculative construction; like any science, it relies on experiential data. Besides, we have no reason to distrust our mind as much as Immanuel Kant did in his time.
7,364
2021-04-30T00:00:00.000
[ "Philosophy" ]
Super-Toughed PLA Blown Film with Enhanced Gas Barrier Property Available for Packaging and Agricultural Applications

Polylactic acid (PLA) holds enormous potential as an alternative to the ubiquitous petroleum-based plastics used in packaging film and agricultural film. However, PLA's poor viscoelastic behavior and extremely low melt strength mean that it fails to meet the requirements of film blowing, which is the most efficient film processing method with the lowest costs. PLA's brittleness and insufficient gas barrier properties also seriously limit its potential application as a common film material. Herein, special stereocomplex (SC) networks were introduced to improve the melt strength and film blowing stability of PLA, and polyethylene glycol (PEG) was introduced to improve PLA's toughness and gas barrier properties. Compared with neat poly(l-lactide) acid (PLLA), the modified PLA is stable in the film blowing process, its film elongation at break increases by more than 18 times, reaching over 250%, and its O2 permeability coefficient decreases by 61%. The resulting film material also has good light transmittance, giving it great potential for green packaging applications, such as disposable packaging and agricultural films.

Introduction

Currently, the most widely used plastic products are films, especially packaging and agricultural films. Waste plastic films have brought about almost irreversible environmental damage and have become a hot issue globally. Polylactic acid (PLA) has many advantages, such as biodegradability, good overall performance, and compostability [1,2]. However, PLA can barely be used for industrial film blowing because of its low melt strength, brittleness, and low gas barrier property [3][4][5]. If the melt strength, toughness, and barrier property can be improved effectively, it becomes possible to use PLA in the film blowing process, thereby providing the conditions for the industrial production and application of PLA film products. Stereocomplex (SC) crystallites can be formed by blending poly(l-lactide) acid (PLLA) with poly(d-lactide) acid (PDLA); they have a melting point about 50 °C higher than that of neat PLA. The SC structure has been proved effective in improving the viscosity and crystallization ability of PLA [6,7]. The PDLA-g-PEG-g-PDLA (DPD) triblock polymer even showed easier formation of the SC and a faster crystallization rate of the PLLA matrix because of the introduction of flexible PEG [8]. Despite much interesting research regarding PLLA/PDLA and PLLA/DPD polymer blends [9][10][11], there are few reports on their film blowing capabilities or on the films' mechanical and barrier properties. Polyethylene glycol (PEG) is a nontoxic additive with good biocompatibility. Normally, PEG can serve as a plasticizer, and the direct blending of PEG with the PLA matrix can increase its toughness.

$$X_{cc} = \frac{\Delta H_m - \Delta H_{cc}}{\Delta H_m^0} \times 100\%, \qquad \text{PLA: } \Delta H_m^0 = 93.6\ \mathrm{J/g} \tag{1}$$

$$X_{sc} = \frac{\Delta H_{sc}}{\Delta H_m^0} \times 100\%, \qquad \text{SC-PLA: } \Delta H_m^0 = 142\ \mathrm{J/g} \tag{2}$$

In the formulas, ΔH_m^0 represents the melting enthalpy of 100% crystalline PLA, ΔH_m is the melting enthalpy, and ΔH_cc is the cold crystallization enthalpy of the PLA homocrystallites measured via DSC. ΔH_sc is the SC crystallites' melting enthalpy measured via DSC.
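Equations (1) and (2) are simple ratios of DSC enthalpies. The sketch below, with made-up enthalpy values (the measured ones are reported in Table 1, which is not reproduced here), shows how the two crystallinities are evaluated:

```python
DH_M0_PLA = 93.6   # J/g, melting enthalpy of 100% crystalline PLA homocrystals
DH_M0_SC = 142.0   # J/g, melting enthalpy of 100% crystalline stereocomplex

def crystallinity_hc(dh_m: float, dh_cc: float) -> float:
    """Homocrystal crystallinity X_cc in %, Eq. (1)."""
    return (dh_m - dh_cc) / DH_M0_PLA * 100.0

def crystallinity_sc(dh_sc: float) -> float:
    """Stereocomplex crystallinity X_sc in %, Eq. (2)."""
    return dh_sc / DH_M0_SC * 100.0

# Hypothetical DSC readings (J/g), for illustration only:
print(f"X_cc = {crystallinity_hc(dh_m=35.0, dh_cc=20.0):.1f} %")  # 16.0 %
print(f"X_sc = {crystallinity_sc(dh_sc=8.0):.1f} %")              # 5.6 %
```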
Each sample was also compression molded into 25 mm diameter and 1.5 mm thickness disks at 10 MPa and 180 °C, and their dynamic rheological properties were tested on an AR2000ex rheometer (TA Instruments) with a parallel-plate geometry (25 mm diameter and 1100 µm gap) at 170 °C under a nitrogen atmosphere, with an angular frequency range from 0.0628 to 628 rad/s and an applied strain of 0.1%.

Characterization of the blown films. The stress-strain measurements of the blown films were performed on a universal testing machine (5967, Instron, Norwood, MA, USA) using a 500 N load cell with a stretch speed of 5 mm/min under ambient conditions. The tensile fracture surfaces of blown films were coated with a thin layer of gold and observed by SEM (JEOL JSM-5900LV, JEOL PTE Ltd., Tokyo, Japan) at 5 kV. The optical absorption spectra of blown films (thickness: 50 ± 5 µm) were measured using a UV-3600 spectrophotometer (Shimadzu, Kyoto, Japan) over 300-800 nm. The oxygen permeability (P_O2) of the blown films was tested on a VAC-V2 film permeability testing machine (Labthink Instruments, Jinan, China) at room temperature (23 ± 1 °C) with 50% relative humidity according to ISO 2556:1974.

Results and Discussion

4.1. Thermal Behaviors of PLLA, PLLA/DPD, and PLLA/DPD/PEG Blends

As a melt enhancer, the SC structure is very important in the later stages of the PLA film blowing process. To study the SC crystallinity after common melt blending, the non-isothermal crystallization and melting behavior of neat PLLA, PLLA/DPD, and PLLA/DPD/PEG blends were studied by DSC, and the curves are shown in Figure 1. The PLLA homocrystallites' cold crystallization temperature (T_cc), cold crystallization enthalpy (ΔH_cc), melting enthalpy (ΔH_m), melting temperature (T_m) and crystallinity (X_cc), and the SC crystallites' melting enthalpy (ΔH_sc), melting temperature (T_sc) and crystallinity (X_sc) are shown in Table 1. The T_m around 153 °C corresponding to α homocrystals can be observed in PLLA and PLLA/DPD. However, the higher T_m of the homocrystals in the PLLA/DPD/PEG blends, at about 155 °C, reflects an α polymorph of higher perfection [16]. Compared with neat PLLA, the T_m at about 200 °C that appeared only in samples with DPD addition belongs to the SC structure, which was formed between the DPD's PDLA segments and the PLLA matrix. Compared with PLLA, PLLA/DPD showed a suppression effect on homocrystallization because of the introduction of SC crystallites. When PEG was added to PLLA/DPD, the PEG chains accelerate the chain movement of the PLLA and DPD chains. The PLLA/DPD/PEG-5 sample showed an increase of X_cc and X_sc compared with PLLA/DPD, while PLLA/DPD/PEG-10 showed a slight decrease of X_sc because of the higher PEG content.
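The rheological comparison in the following paragraphs is made in terms of the storage modulus G', the loss modulus G'' and the complex viscosity. As a point of reference (this standard relation is not stated in the text itself), the magnitude of the complex viscosity obtained in an oscillatory frequency sweep follows directly from the two measured moduli:

$$|\eta^{*}(\omega)| = \frac{\sqrt{G'(\omega)^{2} + G''(\omega)^{2}}}{\omega}$$

so, other things being equal, a large low-frequency increase in G' translates into a correspondingly higher low-frequency complex viscosity.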
The interfacial interaction between polymers can be evaluated by the variation of rheological parameters [17], such as the storage modulus (G'), loss modulus (G''), complex viscosity, and relaxation time. The SC crystallites' melting temperature is almost 50 °C higher than that of PLLA, so they can serve as an efficient rheological modifier to improve the elastic response and viscosity of the PLLA melt, owing to the filler effect and crosslinking effect of the SC crystallite network [18,19].

The frequency sweep experiments on the PLLA/DPD blends were carried out at 170 °C to investigate the effects of DPD and PEG on the rheological behaviors of the PLLA matrix. Figure 2 shows the G' (Figure 2A), G'', complex viscosity, and weighted relaxation spectra (Figure 2D) of the blends. The G' and complex viscosity of a PLA melt can reflect the change of melt strength to some degree. As shown in Figure 2, the G', G'', and complex viscosity values increase significantly on adding DPD to the PLLA matrix; the G' even increases by about two orders of magnitude at low frequency, which can help improve the melt stability in film blowing. The SC networks remain unmelted at 170 °C and serve as fillers to improve the complex viscosity of the PLLA, which may be helpful for improving the melt strength in the film blowing of PLA. Adding the plasticizer PEG to PLLA/DPD directly reduces the melt strength compared with PLLA/DPD, but with the help of DPD the G', G'', and complex viscosity values still showed an increase at low frequencies compared with neat PLLA. The PDLA segments in the DPD chains can form an SC structure with the PLLA matrix, and the soft PEG chains in DPD may help the formation of the SC structure and bind two SC crystallites together by chemical bonds to form more complicated SC networks in the system, as assumed in Figure 3. This special SC network can be helpful for improving the melt strength in PLA's film blowing processing.

The relaxation behavior is also closely related to the polymer's melt strength and is very important in many processing methods, as it determines the effect of processing parameters on material properties [20].
The relaxation time can reflect the chain entanglements in the melt, which determine PLA's melt strength. The continuous weighted relaxation spectrum (τH(τ)) of the PLLA blends is calculated as below with the G' and G'' values obtained through frequency sweep testing.
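The equation referred to here does not survive in this copy of the text. The standard relations linking the moduli measured in a frequency sweep to the continuous relaxation spectrum H(τ), from which the weighted spectrum τH(τ) plotted in Figure 2D is obtained, are:

$$G'(\omega)=\int_{-\infty}^{+\infty} H(\tau)\,\frac{\omega^{2}\tau^{2}}{1+\omega^{2}\tau^{2}}\,\mathrm{d}\ln\tau, \qquad G''(\omega)=\int_{-\infty}^{+\infty} H(\tau)\,\frac{\omega\tau}{1+\omega^{2}\tau^{2}}\,\mathrm{d}\ln\tau$$

In practice, H(τ) is recovered by numerically inverting these relations (for example, with a regularized fit in the rheometer software); the specific inversion procedure used in this study is not indicated here.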
In the equations, τ is the relaxation time, ω is the angular frequency, and H(τ) is the relaxation time spectrum. As shown in Figure 2D, PLLA's longest relaxation time, caused by the movement of free PLLA molecular segments or chains, lies within the range of 0.02 s, which means that the PLLA chains lack entanglement and relax rapidly. After the addition of DPD, the intensity of the relaxation spectrum was enhanced and the longest relaxation time extended about 30-fold, to about 0.7 s. The special SC networks between the DPD and PLLA chains and the interaction between the SC particles and the matrix make the movement of the PLLA chains more difficult and thus result in a longer relaxation time. The longer relaxation time can help increase the melt stability when the melt is extruded out of the die and blown into a bubble during film blowing. Although PLLA/DPD/PEG-5 and PLLA/DPD/PEG-10 showed a decrease in intensity compared with PLLA/DPD, the characteristic relaxation peak of PLLA was still significantly improved and the relaxation peaks of the SC crystallites appeared around 0.2 s and even longer. The longer relaxation time and higher relaxation intensity can also improve the melt stability and lead to a larger processing window for PLLA.

Biodegradable PLA has great application potential in the field of packaging and agricultural films, which are mainly produced through film blowing. The stability of the bubble, controlled by the melt strength of the polymer, is essential for a continuous film blowing production process. In this study, the DPD products can effectively form special SC structure networks with the PLLA chains in the matrix, which enhance the melt viscosity and relaxation behavior of PLLA. The neat PLLA, PLLA/DPD, and PLLA/DPD/PEG-10 blends were chosen for film blowing processing, and their blown bubble shapes are displayed in Figure 4 and Video S1. As shown in Figure 4A, PLLA's low melt strength and quick relaxation make it hard to meet the requirements of a continuous film blowing process; unstable bubble bursts and bubble dancing appeared throughout the processing, which may disrupt production and increase production costs. With the melt-enhancing special SC networks, the film blowing bubbles of PLLA/DPD (Figure 4B) and PLLA/DPD/PEG-10 (Figure 4C) are very stable, which allows continuous production in the film blowing process and lowers the production costs of PLA film. The blow-up ratio can be maintained between 2.8 and 3.0, which is widely applied in industrial film blowing. The special SC structure (Figure S2) plays the main role in improving the melt stability during film blowing of PLLA, which helps achieve continuous production of PLLA blown film.

The Light Transmittance Properties of PLLA, PLLA/DPD, and PLLA/DPD/PEG-10 Films
Light transmittance is very important in agricultural film production.
It directly affects the growth of plants because the photosynthesis of plants mainly absorbs visible light in the wavelength range of 400-700 nm [21,22]. Figure 5 and Table 2 show that the obtained PLLA/DPD and PLLA/DPD/PEG-10 films are less transparent than the highly transparent PLLA film. However, the PLLA/DPD/PEG-10 blown film is still highly transparent, with a light transmittance similar to that of PE, and its T% is 75.84% at a wavelength of 700 nm. The PLLA/DPD/PEG-10 film can therefore potentially be used as an agricultural film that ensures sufficient solar transmittance through the film.

The Mechanical Properties of PLLA, PLLA/DPD, and PLLA/DPD/PEG-10 Films
PLA's application as a film material is also greatly limited by its brittleness [23,24]. In this work, PEG was added mainly to improve the toughness of the PLLA film. After the film blowing process, the PLLA, PLLA/DPD, and PLLA/DPD/PEG-10 blown films were tested in the radial and axial directions, and the results are listed in Figure 6 and Table 3. The results show that the PLLA film is a rigid material with good tensile strength and poor toughness, with elongation at break values of about 13.13% and 10.26% in the radial and axial directions. The PLLA/DPD film shows little change compared with the PLLA film, while the tensile strength of the PLLA/DPD/PEG-10 film decreased by about 21.39% in the radial direction and 32.20% in the axial direction compared with the PLLA film. The elongation at break of the PLLA/DPD/PEG-10 film reaches over 250% in both directions, an increase of 18.3 times in the radial direction and 25.4 times in the axial direction compared with the PLLA film.
It also shows a much higher effectiveness than the reported PLA/PBAT and PLA/PBS blown films at even higher loadings [25]. As shown in Figure 7, the PLLA film and the PLLA/DPD film exhibit smooth brittle fracture surfaces, which indicate their brittleness, while the rough surface of the PLLA/DPD/PEG-10 film demonstrates an obvious ductile fracture, indicating that the material has good toughness. The super-toughened PLLA/DPD/PEG-10 film can potentially be used for packaging and agricultural films.

O2 Barrier Properties of Blown Films
The gas barrier property is very important for a packaging material, as it is key to protecting perishable goods that are vulnerable to O2 degradation [26,27]. As shown in Figure 8, the O2 permeability (PO2) of the PLLA film is 12.9 × 10⁻¹⁴ cm³ cm⁻² s⁻¹ Pa⁻¹, which is a poor gas barrier property compared with the other films. With the addition of DPD, PLLA/DPD is stable in the film blowing process, but its O2 barrier ability is slightly impaired, and its PO2 increased by about 23% to 15.2 × 10⁻¹⁴ cm³ cm⁻² s⁻¹ Pa⁻¹. With the addition of 10 wt% PEG to PLLA/DPD, the PLLA/DPD/PEG-10 film can not only withstand the film blowing process but also achieves a 61% reduction in PO2, to 4.98 × 10⁻¹⁴ cm³ cm⁻² s⁻¹ Pa⁻¹. The O2 does not always diffuse along the direction perpendicular to the film; it may change its original permeation path from the vertical to the horizontal direction [28]. In this way, a decrease in oxygen permeability can be observed after the introduction of PEG to PLLA/DPD. The reasons can be inferred in two ways. Firstly, flexible PEG can fill up the interfacial defects between DPD and PLLA, which increases the path tortuosity and decreases the cross section available for O2 permeation. Secondly, the homocrystallites of higher perfection and the less permeable amorphous phase formed in the PLLA/DPD/PEG-10 film may also contribute to the decrease of gas permeability [29].
In this work, the remarkable improvement in the O2 barrier properties of the PLLA/DPD/PEG-10 film makes it a good candidate as a packaging material.

Conclusions
In this work, a comparative study of PLLA, PLLA/DPD, and PLLA/DPD/PEG blends was carried out to investigate their thermal and rheological properties. The special SC network proposed in this work was used for the first time to improve the film blowing stability of PLLA. The addition of DPD forms melt-enhancing SC crystallites within the PLLA matrix, which build special SC networks that increase the melt strength of PLA and therefore help to obtain a stable blown bubble and to run a continuous film blowing process. To further improve the mechanical and gas barrier performance, the addition of 10 wt% PEG to PLLA/DPD greatly improves the toughness and gas barrier ability of the PLA film without losing its film blowing stability. The PLLA/DPD/PEG-10 film displays super toughness, good light transmittance, as well as a better gas barrier property. The resulting biodegradable PLA film has great potential in environmentally friendly packaging and agricultural applications.

Conflicts of Interest: The authors declare no conflict of interest.
The BesMan Learning Platform for Automated Robot Skill Learning
We describe the BesMan learning platform, which allows learning robotic manipulation behavior. It is a stand-alone solution that can be combined with different robotic systems and applications. Behavior that is adaptive to task changes and different target platforms can be learned to solve unforeseen challenges and tasks, which can occur during deployment of a robot. The learning platform is composed of components that deal with preprocessing of human demonstrations, segmenting the demonstrated behavior into basic building blocks, imitation, refinement by means of reinforcement learning, and generalization to related tasks. The core components are evaluated in an empirical study with 10 participants with respect to automation level and time requirements. We show that most of the required steps for transferring skills from humans to robots can be automated and that all steps can be performed in reasonable time, allowing the learning platform to be applied on demand.

Introduction
Autonomous robotic systems can be deployed in unknown or unpredictable, dynamic environments, e.g., in space, search and rescue, and underwater scenarios. These robots require behaviors to manipulate their environment and will face tasks that have not been thought of during design. It is possible to learn solutions to these unforeseen tasks from humans during operation even though the human operators might be far away. Learning complex behaviors in these cases at once is time consuming or sometimes impossible. An alternative approach is to learn complex behaviors incrementally: small behavioral building blocks are learned separately and are later on combined during consolidation of the complex behavior by concatenation. This strategy has been observed in studies with rodents (Graybiel, 1998) and children (Adi-Japha et al., 2008), which indicates that it is efficient. Learning methods for behavioral blocks often combine various approaches to leverage intuitive knowledge from humans. Most of these fall into the categories imitation learning (IL) and reinforcement learning (RL). A standard approach to learn behaviors is to initialize with a demonstrated movement and then refine the skill with RL. See, for example, Argall et al. (2009) for a survey on imitation learning and Kober and Peters (2012) and Deisenroth et al. (2013) for a detailed overview of the state of the art in reinforcement learning. Complete descriptions of robot skill learning frameworks are hardly present in the literature. To the best of our knowledge, the only work that gives a complete overview of a learning architecture and is comparable to the work that we present here has been published by Peters et al. (2012). They do not provide a thorough evaluation in terms of automation level and time consumption to learn new skills. Their work includes IL and RL methods to learn so-called motor primitives as well as generalization methods for these motor primitives, and it even describes methods to learn operational space control. However, in this work, as well as in the majority of similar works, the relevant behaviors are directly presented by kinesthetic teaching, so that the correspondence problem (Nehaniv and Dautenhahn, 2002), which stems from the kinematic and dynamic differences of the demonstrator and the target system, is neglected.
In addition, only the relevant behavior is presented, or it is not discussed how the relevant part that should be transferred is extracted. In contrast to that, we would like to let a human demonstrate the behavior as naturally as possible. With this approach, the system can be situated in a faraway place or in an environment hostile to humans and could still learn from a human demonstration, although direct kinesthetic teaching is not possible or would only be possible indirectly in case a second identical system were available. To allow the demonstrator to act naturally, we use behavior segmentation methods and solve the correspondence problem as automatically as possible. In this paper, we focus on the question whether learning from human demonstrations can be used for realistic robotic manipulation tasks and whether the approach can be highly automated and is fast enough to be considered for solving unforeseen tasks during a system's application in real scenarios. The presented learning platform provides a robot with human demonstrations of behaviors suited for some specific situation. These demonstrations are autonomously decomposed into atomic building blocks which are learned by means of imitation learning. Based on this, RL and transfer learning are used within the learning platform to adapt and generalize the learned behavioral building blocks. While this is not the scope of the presented work, those building blocks can then form the basis for learning more complex behavior in a life-long learning scenario. The learning platform was developed based on learning from human demonstration since for kinematically complex robotic systems it is typically infeasible to generate behavior from scratch within a robot due to high-dimensional state and action spaces and the limited number of trials a robot can conduct. The presented learning platform will be described and evaluated with respect to its components' and its overall performance. For this purpose, we will solve the problem of target-directed ball-throwing with a robotic arm.

Learning Platform
This section describes the architecture of the learning platform (see Figure 1), which was developed within the project BesMan to transfer movement behavior from a human demonstrator to a robotic system.

Figure 1 | Dataflow diagram of the BesMan Learning Platform. A behavior is demonstrated and segmented into smaller behavioral building blocks. Motion plans corresponding to these building blocks are imitated and refined using reinforcement learning and/or transfer learning. Acquired motion plans are specific for a task but can be generalized to more generic behavior templates. Once a new specific task is encountered, the behavior template is instantiated and yields a task-specific motion plan. Numbers show the corresponding sections in this article that explain the module in detail.

During this process, so-called motion plans and behavior templates are generated. Motion plans represent solutions to generate specific behaviors, and behavior templates represent generic movements to generate a flexible behavior able to, e.g., reach different points in space and to be executed on different systems with different morphology. Demonstrations are recorded and preprocessed as described in Section 2.1. The "Behavior Segmentation" module decomposes demonstrations into simple behavioral building blocks. Segments that belong to the same type of movement are grouped together to obtain multiple demonstrations for the same motion plan. Details of the behavior segmentation and classification approach are described in Section 2.2. This part of the learning platform generates labeled demonstrated behavioral building blocks that are independent of the target system. For each relevant segment, IL methods are used to represent the recorded trajectory segments as motion plans (see Section 2.3). Motion plans describe trajectories that could be executed by the robot and mimic the trajectories presented by the human demonstrator during a single behavioral building block. However, human demonstrators usually have considerably different kinematic and dynamic properties in comparison to the robotic target systems; hence, these motion plans might not produce the same result on the target system. To account for this, the "Motion Plan Refinement" module can use RL (see Section 2.4) to adapt the motion plan. This requires interaction with the real or simulated target system and the specification of a reward function which tells the learning algorithm how well a motion plan solves the task. Alternatively, or in conjunction with RL, transfer learning as described in Section 2.5 can be used to adapt motion plans. Using this method, differences between learned behaviors in simulation and on the real robot are also considered during the motion plan refinement. Motion plans are solutions for very specific settings. It is often necessary to learn more generic behavior templates that can be applied to similar settings. This is achieved by the "Behavior Template Learning" module (see Section 2.4). Behavior templates are capable of generating motion plans for new but similar settings. Once a behavior template has been learned, it is added to the "Behavior Template Pool", which is accessible from the robotic system. The behavior templates in this pool can be used directly during online operation. In the following sections, detailed descriptions of the modules are given.

Behavior Acquisition and Preprocessing
The movements of the demonstrator that should be transferred to a robotic system are recorded using a Qualisys motion capture system. Visual markers are attached to the human demonstrator and to objects which are involved in a particular task. In this way, movements and important changes in the environment can be recorded at high accuracy. Markers are placed at the shoulder, elbow, and hand of the human demonstrator. Three markers at one position can be used to infer orientations. This is important in manipulation tasks which require the robot to imitate the orientation of the hand. By placing three markers at the back, all marker positions can be transformed into a coordinate system relative to the demonstrator to make the recordings independent from the global coordinate system of the camera setup. Additional markers can be placed on manipulated objects. An example of a recording setup for ball-throwing behaviors is shown in Figure 2. Because a passive marker-based motion-capture system is used, we implemented an automatic marker identification based on the relative positions of the markers to each other. Similar methods are provided by manufacturers of motion capture systems; for example, the Qualisys Track Manager (http://www.qualisys.com/software/qualisys-track-manager/) offers "automatic identification of markers", which is based on previously labeled motion capture data. Other works rely on manually defined or otherwise inferred skeletons. Refer to Meyer et al. (2014) and Schubert et al. (2015, 2016) for details about some of these approaches.
Behavior Segmentation
In the recorded demonstrations, the main movement segments have to be identified to transfer the movements to a robotic system. We segment the behavior into its main building blocks and classify each building block into a known movement class. By using an automatic approach, movement sequences needed to solve a certain task with a robotic system can easily be selected without manual user interference. To decompose demonstrated behaviors into simple building blocks, we have to identify the characteristics of the movements. Manipulation movements usually have bell-shaped velocity profiles (Morasso, 1981). Based on this knowledge, we developed the velocity-based Multiple Change-point Inference (vMCI) algorithm for the segmentation of manipulation behavior (Senger et al., 2014).

Figure 2 | Data acquisition setup. Motion capture cameras, markers, and goal positions for the throws in the setting are displayed. Written and informed consent for publication of the photo has been obtained from the depicted individual.

The vMCI algorithm is an extension of the Multiple Change-point Inference presented by Fearnhead and Liu (2007), in which Bayesian inference is used to determine segment borders in a time series. We extended this algorithm to detect building blocks in human manipulation movement (Senger et al., 2014). Each segment is represented with a linear regression model. In the vMCI algorithm, the velocity of the data is independently modeled with a bell-shaped basis function. The algorithm identifies segment borders at positions where the underlying model of the data changes. These borders can be determined using an online Viterbi algorithm in a fully unsupervised manner. By including the velocity in the inference, the vMCI algorithm can detect segments with a bell-shaped velocity profile automatically without the need for parameter tuning, as shown in several experiments (Senger et al., 2014; Gutzeit and Kirchner, 2016). Because in general there is no ground truth segmentation available for human movements, the vMCI algorithm has been compared to other segmentation algorithms on synthetic data generated from sequenced dynamical movement primitives [DMP; Ijspeert et al. (2013)] as well as on real human manipulation movements (Senger et al., 2014). It has been shown that the vMCI algorithm is very robust against noise and to the selected parameters. These are very important properties because noise in the data as well as differences in the movement execution can be handled. Furthermore, the manual user input needed can be kept low. Resulting segments need to be annotated in order to select movement sequences that should be learned and transferred to the robot. To minimize manual effort, this annotation should work with small training data sizes. As detailed in Gutzeit and Kirchner (2016), we use a k-Nearest Neighbor (k-NN) classifier with Euclidean distance as the distance metric on the normalized trajectories, transformed to a coordinate system relative to the human demonstrator, to assign predefined movement classes to the acquired segments. The k-NN algorithm is chosen because it has only one parameter, k, which needs to be selected. No further parameter tuning is needed. We choose k = 1 because we want to classify the segments with a low number of training examples. A higher k could result in a higher number of misclassifications with very small training set sizes. We showed that, using this simple Euclidean-distance-based classification algorithm, it is possible to classify human movements into different movement classes at a high accuracy.
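As an illustration only, a minimal sketch of this classification step with scikit-learn is given below. The feature construction (resampling each hand-trajectory segment to a fixed number of samples and flattening it after a crude normalization) is an assumption made for this sketch, not a detail taken from the paper, and train_segments, train_labels, and test_segments are assumed to be prepared elsewhere.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def segment_to_feature(segment, n_samples=20):
    """Resample a (T, 3) hand trajectory to a fixed length and flatten it.

    The trajectory is assumed to already be expressed in the demonstrator's
    coordinate system; resampling and normalization make segments of different
    durations comparable (assumed preprocessing, not the authors' exact recipe).
    """
    idx = np.linspace(0, len(segment) - 1, n_samples).astype(int)
    resampled = segment[idx]
    resampled = resampled - resampled[0]          # translate to a common origin
    scale = np.linalg.norm(resampled[-1]) or 1.0  # crude amplitude normalization
    return (resampled / scale).ravel()

# train_segments: list of (T_i, 3) arrays; train_labels: e.g. "throw", "idle", ...
X_train = np.array([segment_to_feature(s) for s in train_segments])
clf = KNeighborsClassifier(n_neighbors=1, metric="euclidean")  # k = 1
clf.fit(X_train, train_labels)

X_test = np.array([segment_to_feature(s) for s in test_segments])
predicted_classes = clf.predict(X_test)
```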
Imitation Learning
We use imitation learning [IL; Schaal (1997)] to obtain motion plans from the recorded trajectories. A workflow for learning from demonstrations must address the correspondence problem as well as the representation of the motion plan. In this section, we introduce the motion plan representations used within the BesMan learning platform and discuss the correspondence problem.

Motion Plan Representation
One of the currently most popular motion plan representations to imitate movements are so-called dynamical movement primitives (Ijspeert et al., 2013). The DMP representation has a unique closed-form solution for IL, which makes it very appealing for our purpose. After imitation, the parameters of the DMP can be adjusted easily, which makes it suitable for policy search (see Section 2.4.1). The standard DMP formulation allows setting the initial state, goal state, and execution time as meta-parameters. The DMP converges to a velocity of 0 at the end of a discrete movement. There are several variants of DMPs. An extension that allows setting a goal velocity has been developed by Mülling et al. (2011, 2013). We use this DMP formulation for trajectories in joint space. Representing three-dimensional orientations in a DMP is not straightforward. A solution has been proposed by Pastor et al. (2009) and improved by Ude et al. (2014) through correct handling of orientations represented by unit quaternions. To represent trajectories in Cartesian space, we use a combination of a position DMP as formulated by Mülling et al. (2011, 2013) and an orientation DMP by Ude et al. (2014).
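For orientation, a minimal sketch of the standard discrete DMP of Ijspeert et al. (2013) for a single degree of freedom y is given below; the goal-velocity extension of Mülling et al. and the quaternion-based orientation DMP of Ude et al. modify this basic form, which is shown here only to make the meta-parameters and the learned weights explicit.

```latex
\tau \dot{z} = \alpha_z \bigl( \beta_z (g - y) - z \bigr) + f(x), \qquad
\tau \dot{y} = z, \qquad
\tau \dot{x} = -\alpha_x x, \qquad
f(x) = \frac{\sum_i \psi_i(x)\, w_i}{\sum_i \psi_i(x)}\, x \,(g - y_0)
```

Here y_0 and g are the start and goal states and τ is the execution time (the meta-parameters mentioned above), ψ_i are fixed basis functions, and the weights w_i of the forcing term f are obtained in closed form from a single demonstration by linear regression. These weights and meta-parameters are exactly the quantities that the policy search described in Section 2.4 later adjusts.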
Correspondence Problem
Demonstrated actions must be executable by the target system to apply IL. It is not possible to directly transfer joint angles from a human arm to a robot arm because the robot has different joints, degrees of freedom, and link lengths. The reason is the correspondence problem (Nehaniv and Dautenhahn, 2002), which consists of two subproblems (Argall et al., 2009). The following mappings have to be defined: the record mapping, which maps marker trajectories to a sequence of actions or system states, and the embodiment mapping, which maps the recorded sequence to a trajectory that is executable on the target system. The record mapping g_R : A_T → D maps from some not directly observable space A_T in which the teacher performs the demonstration (e.g., joint angles of a human, muscle activity, applied torque, etc.) to a corresponding observation space D. We cannot directly observe the actions of the agent, but we can observe the marker positions, which means that a part of the record mapping is already given. Instead of using the observed marker positions directly to represent D, we reduce the marker positions to a representation that is more meaningful to describe manipulation behavior and is independent of the platform: we use the observed marker positions to extract end-effector poses in some reference frame. The calculation of end-effector poses from observed marker positions is specific for a marker setup and has to be defined. The reference frame depends on the application. For goal-directed manipulation behavior, Wirkus (2014) proposes to use the target as a reference frame, e.g., a box that we want to grasp. For behaviors like ball-throwing it is better to use a reference frame on the teacher, e.g., the back, because the target object (the ball) will be moved with the end-effector. Works by Niekum et al. (2015), Calinon (2016), and Manschitz et al. (2016) select the reference frame automatically, but we did not consider this here. The embodiment mapping g_E : D → A_L maps from the observations d ∈ D to a corresponding action a ∈ A_L that the learner has to perform to achieve a similar result. It is specific for the task and the target system. Although one might think that transferring end-effector poses to the target system is simply an inverse kinematics problem, it can actually be much more complicated: the target system might not have the same workspace, kinematic structure, and dynamic capabilities as the teacher. Hence, we propose to use multiple methods sequentially to determine the embodiment mapping automatically:
• black-box optimization to determine synchronization frames,
• spatial and temporal scaling to take the kinematic and dynamic capabilities of the target system into account,
• and refinement with policy search.
The first two methods will only guarantee that the actions are executable on the target system. That does not mean that the result of the actions actually produces the same effects that have been observed in the demonstration by the human and that the task is solved. Not all of these methods are required for all categories of tasks. We will now explain and discuss when to use these methods. The reference frame of our target system is a so-called synchronization frame, that is, we use it as the base frame in which the demonstration is performed [see Bongardt (2015) for a detailed introduction of this term]. Its pose is defined with respect to the base frame of the target system. For simplicity, we assume that the synchronization frame is constant over time. In some situations it is obvious how the corresponding reference frame in the target system should be selected to transfer demonstrated motion plans in Cartesian space. For example, if we define the teacher's reference frame to be the target of a grasping movement, we will not change this in the target system. In other cases it is not so obvious. Consider a ball-throwing movement where the reference system of the recorded movement is the human teacher's back (see Figure 3). When we want to transfer the observed motion plan to a robotic arm, it is not obvious where we would put the synchronization frame. We can choose it arbitrarily. We can use this freedom to account already for some of the problems that occur in the embodiment mapping. Without having any information about the structure of the task, we can optimize the synchronization frame such that a joint configuration can be found for each end-effector pose from the demonstration and such that the velocity, acceleration, and/or jerk of the trajectory is minimized. The first objective ensures that the trajectory is mapped into the robot's workspace, and the other objectives account for kinematic and dynamic problems that might occur. Otherwise, it happens quite often that inverse kinematics solvers run into local optima during the execution of the motion, so that big changes in the joint configuration occur during the execution.
Even if that is not the case, it might happen that a small displacement in Cartesian space that requires almost no effort by the teacher results in a large displacement in the joint space of the target system. The only information that we have to give at this point is a kinematic description of the target system. To find a locally optimal solution to our problem, we can use any black-box optimization algorithm like CMA-ES (Hansen and Ostermeier, 2001) or L-BFGS (Nocedal, 1980).

Figure 3 | Synchronization frames on the demonstrator and on the target system. Written and informed consent for publication of the photo has been obtained from the depicted individual.

For a specific task on the target system, we have to scale the demonstrated trajectory spatially by setting the start and goal state of the trajectory. From the inverse kinematics we can then compute the required velocities in joint space. To keep the velocities within the limits of what is achievable by the target system, we can do a temporal scaling of the movement, e.g., by interpolation. Temporal scaling is simple when we represent the demonstrated trajectory as a DMP because the execution time is a meta-parameter of the DMP.
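To make the synchronization-frame search described above concrete, here is a minimal sketch of how such a black-box optimization could be set up in Python. The helpers transform_poses and inverse_kinematics are hypothetical placeholders for the robot-specific machinery (they are not part of any published interface of the platform), demo_poses is assumed to hold the demonstrated end-effector poses, and penalizing unreachable poses with a large constant is just one simple way to encode the reachability objective.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical helpers, assumed to exist elsewhere:
#   transform_poses(poses, frame): express the demonstrated end-effector poses in a
#       candidate synchronization frame given, e.g., as (x, y, z, yaw)
#   inverse_kinematics(pose): return a joint configuration for one pose,
#       or None if the pose is outside the robot's workspace

def objective(frame_params, demo_poses):
    """Cost of placing the synchronization frame at frame_params.

    Penalizes unreachable poses and large joint-space velocities and
    accelerations, one plausible reading of the criteria described above.
    """
    poses = transform_poses(demo_poses, frame_params)
    joints = []
    for p in poses:
        q = inverse_kinematics(p)
        if q is None:            # pose not reachable from this frame placement
            return 1e6
        joints.append(q)
    joints = np.asarray(joints)
    vel = np.diff(joints, axis=0)
    acc = np.diff(joints, n=2, axis=0)
    return np.sum(vel ** 2) + np.sum(acc ** 2)

# Local search over the frame parameters; any black-box optimizer works here.
result = minimize(objective, x0=np.zeros(4), args=(demo_poses,),
                  method="Nelder-Mead")
best_frame = result.x
```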
Until now, we have only integrated knowledge about our target system into the embodiment mapping. To ensure that the imitated skill has the same effects as the demonstration, we must integrate knowledge about the task. This is done in the policy refinement step. We have to define a reward function that can be used by reinforcement learning methods to complete the embodiment mapping. Here we account for kinematic and dynamic differences that cannot be resolved easily, e.g., a human teacher might have a hand structure that is different from the target system. An example is displayed in Figure 3: the target system does not even have an active hand; it has a scoop mounted on the tip of the arm. In other cases, the robot might have a gripper that does not have all of the capabilities of a human hand, e.g., a parallel gripper. Another problem in the ball-throwing domain is the dynamic and kinematic difference between the human demonstrator and the target system. It might be possible for the robot to execute the throwing movement after temporal scaling, but this step can reduce the velocity in Cartesian space so drastically that the ball does not even leave the scoop any more. The methods that can be used to solve these problems are discussed in Sections 2.4.1 and 2.5.

Reinforcement Learning
The learning platform provides tools to adjust motion plans to specific target systems and to generalize motion plans over specified task parameters based on reinforcement learning [RL; Sutton and Barto (1998)]. Depending on the application, we have to decide whether learning will take place in simulation or in reality. Learning in reality would give the best results; however, this might not be feasible because it takes a lot more time and usually requires human assistance. Furthermore, the robot could potentially damage itself. The more complex a robot is, the more parts can break and, hence, the more fragile it becomes. It is often a good idea to model the relevant aspects of the task in simulation and start with the refinement in simulation. When a good motion plan is obtained, this can be used to start learning in reality directly, or approaches can be applied to handle the simulation-reality gap (see Section 2.5). We decided to focus on policy search and its extensions. A good overview is provided by Deisenroth et al. (2013). Policy search is very sample-efficient in domains where a good initial policy can be provided, the state space is high-dimensional and continuous, and the optimal policy can be represented easily with a pre-structured policy such as a DMP. It depends on the target system and the application which policy search methods should be used. Standard policy search (see Section 2.4.1) is always included in our learning platform to ensure that the embodiment mapping is completed. We can generalize the obtained motion plan to a behavior template that takes the current context as a parameter to modify its motion plan using contextual policy search (see Section 2.4.2). In an active learning setting, the generalization can be more effective (see Section 2.4.3). A more recent alternative to these policy search methods that is gaining more and more popularity in the reinforcement learning community is end-to-end learning with deep reinforcement learning (Arulkumaran et al., 2017). Deep RL uses complex policies or value functions, represented by neural networks. The benefit is that it allows a conceptually easy integration of sensors like cameras. Using a neural network, however, complicates the integration of prior knowledge from non-experts through imitation learning. Movement primitives and policy search are very appealing methods from a robotics perspective, since just one demonstration is enough to learn a motion plan that can be refined by policy search.

Policy Search
In policy search, we modify a parameter vector θ, which can include the parameters and/or meta-parameters of the motion plan π, so that the reward R_θ is maximized. Depending on whether we want to optimize only a few meta-parameters or the whole set of parameters, we can select the policy search method. We use several algorithms in the learning platform. The covariance matrix adaptation evolution strategy [CMA-ES; Hansen and Ostermeier (2001)] is a black-box optimization algorithm which can be used to optimize any parameterized policy (Heidrich-Meisner and Igel, 2008). Unlike CMA-ES, Relative Entropy Policy Search [REPS; Peters et al. (2010)] limits the information loss during exploration. Path Integral Policy Improvement [PI²; Theodorou et al. (2010)] is based on stochastic optimal control. It requires specifying an initial policy and a covariance matrix which governs exploration in weight space. All of these methods are local search approaches, which means they need a good initialization provided through imitation learning. Bayesian Optimization (Brochu et al., 2010) is a global optimization method. It has been shown to be applicable to policy search (Calandra et al., 2016). Bayesian optimization is usually limited to a few parameters because of the computational complexity; however, it is very sample-efficient.
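As an illustration of this episodic policy search loop, the sketch below optimizes the parameter vector of a DMP with CMA-ES using the cma package. The functions execute_dmp and reward, as well as initial_dmp_weights, are hypothetical stand-ins for rolling out the motion plan (in simulation or on the robot), for the task-specific reward function, and for the imitation-learning result; they are assumptions for this sketch, not part of a published interface.

```python
import numpy as np
import cma

# Initial DMP parameters obtained from imitation learning (assumed input)
theta_0 = np.asarray(initial_dmp_weights, dtype=float)

def cost(theta):
    """Negative reward of one episode; CMA-ES minimizes this cost."""
    trajectory = execute_dmp(theta)   # roll out the motion plan (hypothetical helper)
    return -reward(trajectory)        # task-specific reward (hypothetical helper)

# sigma0 controls the initial exploration radius around the demonstration
es = cma.CMAEvolutionStrategy(theta_0, sigma0=0.3)
while not es.stop():
    candidates = es.ask()             # sample a population of parameter vectors
    es.tell(candidates, [cost(c) for c in candidates])
best_theta = es.result.xbest          # refined motion plan parameters
```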
Contextual Policy Search
A disadvantage of using prestructured motion plans like DMPs is that they are designed for a specific situation and can only generalize over predefined meta-parameters. They are not able to generalize over arbitrary task parameters that often have nontrivial relations to the optimal motion plan. This is addressed by contextual policy search, which is an extension of policy search. Contextual policy search learns a so-called upper-level policy θ = π(s), which is a mapping from a context vector s that describes the task to the optimum parameter vector θ of an underlying motion plan. We implemented Contextual Relative Entropy Policy Search (C-REPS) and BO-CPS, a contextual policy search approach based on Bayesian optimization. BO-CPS is much more sample-efficient than the local search approach C-REPS, but it does not scale well to a large number of parameters.

Active Learning
Active contextual policy search extends contextual policy search for cases in which the learning agent is able to determine the context it wants to explore. This is the case, for example, in the ball-throwing domain. In this setting it is desirable to select the context s that maximizes the learning progress in each episode. We have shown that selecting the context gives an advantage in combination with C-REPS (Fabisch and Metzen, 2014), although we only used a discrete set of contexts and modeled context selection as a non-stationary multi-armed bandit problem. We developed an active context-selection approach for BO-CPS based on entropy search (Hennig and Schuler, 2012), which is called Active Contextual Entropy Search [ACES; Metzen (2015)]. ACES allows selecting from a continuous set of contexts. We also developed minimum regret search [MRS; Metzen (2016)], which is a novel exploration strategy for Bayesian optimization (more specifically, an acquisition function). In contrast to entropy search, which aims at maximizing the information gain about the optimum, MRS aims at minimizing the expected regret of its final recommendation. MRS explores more globally and is less likely to focus prematurely on a local optimum.

Simulation-Reality Transfer
Motion plans learned only in simulation often perform worse when they are executed in reality. In certain situations, this performance drop is small, for example, in open-loop control with a robot having accurate and precise actuators. In such a case, it can be sufficient to apply policy search methods in simulation and transfer the result directly to the real system. With closed-loop control and with sensor and actuator noise, deformation, fatigue, and other factors contributing to the disparity between simulation and reality, the reality gap becomes a problem. A wide range of approaches to this issue have been proposed, many of which aim to improve the physical correctness of the simulation (Jakobi et al., 1995, 1997; Bongard and Lipson, 2005) or the adaptation capabilities (Urzelai and Floreano, 2001; Hartland and Bredeche, 2006). More recently, Koos et al. (2013) and Cully et al. (2015) have applied behavioral descriptions of the controller to be optimized to build and update an internal model guiding the search towards controllers that work well in reality without adapting the original simulation model. To choose an appropriate algorithm, one has to consider several factors, including the task complexity, the consistency of the robot and the environment, the complexity of the environmental interactions and the required precision of the model, as well as the availability of the robot. The cost and robustness of the robot determine how often and what type of tests (potentially risky movements) can be performed in reality. For example, the ball-throwing task in the current work requires releasing the ball at a certain position and velocity. Both depend on the end-effector trajectory and the time of release. The time of release, speed, and direction are not easy to predict (Otto, 2015).
The development of an accurate model of the physical interactions of the ball and the ball mount would require a detailed analysis of the ball as well as the ball mount. We want to reduce the amount of expert knowledge or additional testing and solve the task with minimum user input. Hence, we choose to apply the Transferability Approach of Koos et al. (2013), which minimizes the number of tests on the real system by focusing on the task to be solved rather than on optimizing simulation accuracy. The Transferability Approach is based on the hypotheses that the performance of a specific behavior on the target system mainly depends on (1) its performance in simulation and (2) the correctness of the simulation for this behavior. Thus, the optimization problem can be reformulated to find actions maximizing the reward in simulation and the transferability. While the simulation model remains unaltered, the motion plans are adjusted to approximate the Pareto front (optimal trade-off solutions) for both criteria. Motion plans generated by imitation learning constitute the initial population for the optimization. Motion plans are evaluated in the simulation, while features are extracted to compute the objective functions. By iterative variation and selection through evolutionary operators, the motion plans are optimized. At regular intervals, an update heuristic is used to select motion plans to be transferred based on their behavioral diversity. The behavioral diversity is a quantity that describes how much an observable activity differs from a set of other activities. The data recorded in these transfer experiments are compared with the corresponding data in simulation, and their disparity is stored for each transferred motion plan as the simulation-to-reality disparity. Most motion plans are not transferred. Instead, a surrogate model is used to estimate the disparity by interpolating between the observations.

Evaluation of the Learning Platform
Most of the individual modules of the learning platform have already been evaluated in previous works (Fabisch and Metzen, 2014; Senger et al., 2014; Fabisch et al., 2015; Metzen, 2015; Metzen et al., 2015). In this section, we evaluate the learning platform as a whole in a ball-throwing scenario. We transfer the human movements to a robotic arm. Furthermore, we evaluate the learning platform with respect to time requirements and level of automation. We show that the learning platform can be run with minimal user interference by different non-expert subjects who demonstrated the ball throw.

Methods
In the following, we describe the applied methods for learning a ball-throwing behavior from humans and for transferring it to a robotic arm.

Robotic System
We transfer the movements to the robotic arm COMPI (Bargsten and de Gea Fernandez, 2015) displayed in Figure 3. A scoop that can hold a ball is mounted as COMPI's end-effector. The position where the ball hits the ground cannot always be identical because of varying positions of the ball in the scoop, the varying shape of the deformable and not perfectly round ball, inaccuracies in the execution of the desired trajectory, and measurement errors. How reproducible this position is depends on the throwing movement. For some throwing movements, the SD of the position can be more than a meter because the ball sometimes falls down before the throwing movement is finished and sometimes does not.
To estimate the maximum reachable accuracy, we designed 4 throws manually, for which we measured the SD of the touchdown position in 20 experiments per throw. The mean positions were 1.3 to 2.3 m away from COMPI. The standard deviations were about 4.5 to 7 cm. To measure the position where the ball hits the ground, we use the motion capture system and a ball that is recognized as a marker.

Data Acquisition
The setup used to record the demonstration of a throw can be seen in Figure 2. Seven cameras tracked eight visual markers attached to the human and the target area. Only five cameras were focused directly on the subject. The recorded marker positions were labeled according to their position on the human body (e.g., "shoulder"). The subjects had to throw a ball to a goal position on the ground, approximately 2 m away. To limit the range of possible throws, they were instructed to throw the ball from above, i.e., with the hand above the shoulder while throwing (see Figure 2). The subjects had to move their arm to a resting position, in which it loosely hangs down, between the throws. The movement was demonstrated by 10 subjects. All subjects were right-handed and had different throwing skills, ranging from non-experts to subjects performing ball sports in their free time, like basketball, volleyball, or handball. Each subject demonstrated 8 throws in each of 3 experiments, which results in a total of 24 throws per subject, 30 experiments, and 240 throws for all subjects.

Segmentation and Movement Classification
Based on the position and velocity of the hand, the recorded demonstrations were segmented using vMCI. The determined segments can be assigned to 4 different movement classes: strike_out, throw, swing_out, and idle. To classify the resulting segments into predefined movement categories, we use a 1-NN classifier as described in Section 2.2.

Imitation Learning
IL is based on end-effector poses. End-effector trajectories cannot be transferred directly to the robot's coordinate system, so we have to do a synchronization frame optimization to translate and rotate the original trajectory so that it fits into the workspace of COMPI. The end-effector trajectories are transformed into joint trajectories via inverse kinematics. In addition, we scale the joint trajectories so that the joint velocity limits of the target system are respected. In a last step, the throwing movement is represented as a joint space DMP via IL. Moreover, a minimum execution time of 0.95 s is set to reduce the velocities and accelerations, which are penalized during the following optimization.

Motion Plan Refinement
While the DMPs resulting from IL can be executed on the robot, they do not necessarily have the same effect on the ball as the movements of the human. The lack of actuated fingers as well as kinematic and dynamic differences to the human lead to the need for adaptation, which we do via policy search. The adapted policy parameters include the initial position, goal, weights, and execution time, consisting of 6, 6, 36, and 1 value(s), respectively. Following the concept of the Transferability Approach, we aim to minimize two objectives: (a) the target distance of the touchdown position in simulation and (b) the distance between the touchdown positions in simulation and in reality. The target distance in reality is not directly optimized, but it is evaluated during and at the end of the experiment. The optimization consists of several steps.
(1) Refinement in Simulation: Results from IL are optimized so that the throw lands at most 0.1 m from the target (see the gray line in Figure 4A) [objective (a)]. Per subject, we perform 6 optimization runs using all available motion plans in each run. After a maximum of 10,000 episodes (500 generations with a population size of 20), the optimization is stopped.
(2) Refinement in Simulation and Reality: The Transferability Approach is performed until 25 transfer experiments have been executed on the robot. The initial population consists of successful throwing movements from the optimization in simulation (previous step). They are optimized to minimize the target distance, position and velocity limit violations, as well as an acceleration penalty. After 50 episodes (corresponding to the population size) in simulation, we do one transfer. To infer the transferability of a motion plan that is not tested in reality, the surrogate model requires a low-dimensional description of the type of activity that is created by the motion plan in simulation. We use a subset of the simulation results to describe the activity by 17 features including (among others) the time of release, maximum joint accelerations, and the posture at half-time. After each transfer, the observed touchdown disparity and the activity description computed in simulation are appended to the observations stored in the surrogate model.

Required Time and Level of Automation
For each main module of the learning platform that is needed to learn a new behavior, the required time was measured. The time required to learn a new behavior is strongly influenced by the degree of automation, which is furthermore relevant to the ease of application. Therefore, we evaluated the degree of automation for each module of the learning platform.

Results and Discussion
We evaluate two parts of the learning platform separately. The first part is independent of the target system and includes data acquisition, preprocessing, and segmentation (see Section 3.2.1). The second part depends on the target system and includes imitation and refinement (see Section 3.2.2). The whole learning platform is evaluated with respect to required time (see Section 3.2.3) and level of automation (see Section 3.2.4).

Segmentation and Movement Classification
The automatic marker labeling could not always be used for the recordings. The reason for this is gaps in the recordings. Because we had to cover a large area (target area and subject's workspace), it was nearly impossible to record every marker all the time for every subject with only seven cameras. With a smaller volume to cover, more cameras, or active markers, this problem could be solved. The vMCI algorithm successfully detected movement segments with a bell-shaped velocity profile in the demonstrations. Exemplary segmentation results for this data can be found in Gutzeit and Kirchner (2016). No manual intervention was needed for this segmentation step because vMCI determines the movement segment borders unsupervised and its parameters can be computed directly from the data (Senger et al., 2014).
To classify the determined segments with 1-NN, a training set of 40 throwing demonstrations from subjects 1-5 (one experiment from each subject) is manually labeled. The throwing movements of these 5 subjects that have not been used for training (two experiments per subject) and all the throwing movements of subjects 6-10 (three experiments per subject) are used to evaluate the classifier. On the test set consisting of the remaining demonstrations of subjects 1-5, an accuracy of 93.2% could be achieved. The important movement class throw, which should be transferred to the robotic system, was detected with an accuracy of 98.6% on this test set. The second test set consisted of demonstrations of subjects 6-10, i.e., no throwing movements of these subjects were used to train the classifier. On this data, the accuracy was 89.1%, and the class throw was detected with 93.9% accuracy. In conclusion, different movement classes can be successfully recognized based on a simple Euclidean-distance-based 1-NN using only marker positions on the arm and hand as features. Even for movements with considerable differences in execution, as in the ball-throwing scenario, the 1-NN-based classification is successful.

Figure 4 | (A) Results of IL evaluated in simulation are shown by Tukey boxplots. N is the number of throw segments detected in the data of a specific subject. The solid gray line indicates the 0.1 m threshold. Please note the different scaling of the ordinate, compared to (B) and (C). (B) Closest distance to target achieved in transfer trials during robot-in-the-loop optimization. The median over 8 subjects of the best results so far is shown. For each subject, 50 transfer trials are executed on the robot during the optimization. (C) At the end of the optimization process, for each subject, six motion plans are automatically selected for evaluation. The result from the deterministic simulation is shown by squares. Each motion plan is evaluated three times in reality (circles). If the ball landed closer than 0.1 m to the target in all three repetitions, the circles are filled, otherwise not.

Imitation Learning and Motion Plan Refinement
In this section, we evaluate the target distance measured at several steps of the learning platform.
(0) Initial performance: At first, we evaluate the results of the IL in simulation. The IL does not consider suitability for holding and throwing the ball. Hence, several motion plans result in a simple ball drop near the initial position. This is reflected by target distances around 2.15 m in Figure 4A. None of the simulated ball throws is closer than 1 m to the target. One may also note the variation in the number of throw segments that are detected in the data containing 24 actual demonstrations per subject.
(1) Refinement in Simulation: For one subject, the goal (a distance below the 0.1 m threshold indicated by the gray line in Figure 4A) was not reached. For all remaining subjects, the goal was reached in less than 2,200 episodes in at least one of the 6 runs.
(2) Refinement in Simulation and Reality: For 2 out of 9 subjects, the transfer experiments reached target distances below 0.1 m in reality. The deviations of the touchdown positions in reality and in simulation seem to be systematic, i.e., for throwing behaviors resulting from the same subject, similar deviations are found.
Having a constant offset between simulated and real results contradicts the premise of the Transferability Approach, which aims to find a region in the parameter space that features transferable motion plans. Hence, we decide to adapt the simulation specifically to predict the touchdown positions for some of the movements more accurately. (3) Simulation refinement: To reduce the offset between real and simulated results, we minimize the median of the touchdown-disparities obtained for the 25 transfer experiments so far. This is done via a simple grid search on the values for the robot height and the ball-release angle. The medians could be reduced to a range of 0.11 to 0.23 m (depending on subject; compared to 0.23 to 0.73 m before). (4) Refinement in Simulation and Reality: For one subject, this optimization step was aborted after 10 critical transfer experiments, during which joint limits were exceeded and the robot was consequently deactivated. For all of the remaining subjects, target distances below 0.1 m as well as touchdown-disparities below 0.1 m occurred in the 50 transfer experiments. The best target distances so far are shown in Figure 4B. The curve indicates that 25 transfers are sufficient to minimize the target distance. (5) Evaluation of Candidates: Figure 4C shows the performance evaluation of 6 automatically selected final candidate solutions (2 with the lowest target distance in reality and 4 from the Pareto front). Up to 4 of these land reliably close to the target, i.e., the target distance is below 0.1 m in all three repetitions (marked by filled dots). For seven out of eight (remaining) subjects, at least one selected candidate solution hits the target reliably. Required Time An overview of the required time for each step can be found in Table 1. Note that labeling the dataset for movement classification usually has to be done only once. Successful automatic marker labeling is much faster than manual labeling, even though the automatic labeling is slowed down by the poor quality of the data (see 3.2.1). If the markers were always visible, the automatic labeling would have taken only a few seconds. The longest part of the whole process is the refinement for the target platform (imitation, policy search, transfer), which is a difficult problem that involves interaction with the real world. Level of Automation Although we have automated the process of acquiring new behaviors, some human intervention is still required, either through knowledge that has to be given to the system or by interacting physically with the system. An overview can be found in Table 1. Of course it is necessary that a human demonstrates the movement. The labeling of the markers can be completely automated with more cameras. However, we could not achieve the maximum possible level of automation in our experiments. Movement classification requires a dataset that is labeled manually, but we minimized the effort by using a classifier that achieves high accuracy with small training data sets. When we set up the learning platform for a new target system and type of manipulation behavior, we have to decide which components to combine for embodiment mapping; e.g., in the ball-throwing scenario the synchronization frame optimization was useful, which is not always the case. However, the IL is completely automated.
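The simulation-refinement step (3) above is a plain grid search over two simulation parameters that minimizes the median touchdown-disparity; a minimal sketch, with a hypothetical `simulate_touchdown` callable standing in for the physics simulation:

```python
import numpy as np
from itertools import product

def refine_simulation(simulate_touchdown, real_touchdowns, motion_plans,
                      heights, release_angles):
    """Grid search over robot height and ball-release angle that minimizes the
    median touchdown-disparity over the transfer experiments executed so far.
    `simulate_touchdown(plan, height, angle)` is assumed to return the 2D
    touchdown position predicted by the simulation."""
    best_params, best_median = None, np.inf
    for height, angle in product(heights, release_angles):
        disparities = [
            np.linalg.norm(np.asarray(simulate_touchdown(plan, height, angle)) - real_pos)
            for plan, real_pos in zip(motion_plans, real_touchdowns)
        ]
        median = float(np.median(disparities))
        if median < best_median:
            best_params, best_median = (height, angle), median
    return best_params, best_median
```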
The motion plan refinement with the transferability approach requires human assistance because the robot has to try throwing movements in the real world and is not able to get the ball back on its own. The process itself is automated so that no knowledge about the system or the task is required from the human at this step. In addition, we have to define a reward function that describes what a solution to the task should look like, and because we want to minimize the interaction with the real world we use a simulation, which has to be designed. This is a manual process at the moment. Evaluation of Behavior Template Learning In separate experiments, we evaluated the component "Behavior Template Learning" in the ball-throwing scenario. It is irrelevant how the initial throw has been generated and, hence, it is not required to evaluate the component for each subject. The behavior template is learned directly in reality without any simulation. Similar but not directly comparable work for ball-throwing has been published by da Silva et al. (2014). Behavior Template Learning In order to apply the most sample-efficient algorithm (BO-CPS), we had to reduce the number of parameters of the motion plan drastically. Otherwise the computational complexity of the problem would be too high. It is not possible to learn all weights of a typical joint space DMP; e.g., ten weights per joint would result in 60 parameters. For that reason we selected only two meta-parameters that were optimized: the execution time and the goal position of the first joint. The execution time lets us vary how far the ball is thrown, and the goal position of the first joint determines the angle of the throw. Note that there is no direct linear relation between the goal position of the first joint and the throwing angle because of a complex interaction between the deformable ball and the scoop; it depends on the execution time as well. The goal is to learn an upper-level policy that predicts a close-to-optimal pair of execution time and goal position of the first link for a given target so that a specific motion plan can be generated to hit this target. We conduct a thorough evaluation that compares the acquisition functions upper confidence bound and entropy search for BO-CPS. Results and discussion A learning curve is given in Figure 5. The average of the reward (negative squared distance to the target) and the distance to the target over the test contexts are displayed. We noticed that after about 80 episodes the performance does not increase much more. The learning curve shows that it is possible to generalize throwing movements to a large area in only 80 episodes so that the average deviation from the target is almost at the maximum achievable precision. In addition, the computationally more complex entropy search shows consistently better results than upper confidence bound. Application of the Learning Platform in Different Scenarios Besides the evaluation of the learning platform in the ball-throwing scenario, it has been applied in different scenarios to transfer movements to different robotic systems. In a pick-and-place scenario, a grasping movement was extracted from demonstrations of picking a box from a shelf, placing it on a table and putting it back afterwards. The movements were, similarly to the ball-throwing task, recorded with a marker-based tracking system as detailed in (Gutzeit and Kirchner, 2016).
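The BO-CPS refinement described above learns, for a given target context, a near-optimal pair of execution time and first-joint goal. A minimal contextual Bayesian-optimization sketch with an upper-confidence-bound acquisition follows; the Gaussian-process kernel and all hyper-parameters are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

class BOCPSSketch:
    """Contextual Bayesian optimization sketch (UCB acquisition).
    The GP maps (context, execution_time, joint_goal) -> reward,
    where the reward is the negative squared distance to the target."""

    def __init__(self, param_bounds, kappa=2.0, n_candidates=500):
        self.X, self.y = [], []
        self.bounds = np.asarray(param_bounds)   # shape (2, 2): time, joint goal
        self.kappa = kappa
        self.n_candidates = n_candidates
        self.gp = GaussianProcessRegressor(
            kernel=ConstantKernel() * RBF(), normalize_y=True)

    def suggest(self, context):
        # Sample candidate parameter pairs and pick the one maximizing UCB.
        cand = np.random.uniform(self.bounds[:, 0], self.bounds[:, 1],
                                 size=(self.n_candidates, 2))
        if not self.X:
            return cand[0]
        context = np.asarray(context, dtype=float)
        X_query = np.hstack([np.tile(context, (len(cand), 1)), cand])
        mu, sigma = self.gp.predict(X_query, return_std=True)
        return cand[np.argmax(mu + self.kappa * sigma)]

    def observe(self, context, params, reward):
        self.X.append(np.hstack([np.asarray(context, float), np.asarray(params, float)]))
        self.y.append(reward)
        self.gp.fit(np.asarray(self.X), np.asarray(self.y))
```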
After successful segmentation and classification of the grasping movement [see Gutzeit and Kirchner, 2016], it was imitated using a Cartesian space DMP and adapted to be executed on a Kuka iiwa lightweight robot equipped with a 3-finger gripper from Robotiq. Only one demonstration was required to learn the grasping movement. Cartesian DMPs were used for easy integration with the whole-body control and perception that were used. In this scenario the refinement was done using the CMA-ES algorithm in simulation. After 50-100 iterations, the movement could be successfully transferred to the robotic system. One demonstration of the initial trajectory is usually sufficient. In another scenario, the learning platform was used to teach the robotic system Mantis (Bartsch et al., 2016) to pull a lever. Again, the recorded movements could be successfully segmented. Due to the fixed lever position in this experiment, the movement execution of the human was strongly predetermined, leading to good classification results with only one training example per class (de Gea Fernández et al., 2015; Gutzeit et al., 2018). As in the pick-and-place scenario, RL techniques implemented in the learning platform were used to adapt the demonstrated movement to the robotic system. REPS and CMA-ES gave good results. After several hundred episodes in simulation, a successful movement could be generated. Learning could be done in parallel from multiple demonstrations, with each RL learning process being initialized with a single demonstration. As in the ball-throwing scenario, the transfer of the demonstrated movements was partially automated. To recognize the important movement segment, only a few manually labeled training examples are needed. To imitate and adapt the demonstrations to the system, the embodiment mapping and a reward function have to be selected with regard to the robotic system and the task goal. Besides this, the transfer of the movement to the system is completely automated. Videos of these two applications of the learning platform can be found online. Our results show that it is possible to learn new skills for robots without specifying the solution directly. The learning platform leverages intuitive knowledge from humans who do not know anything about the target system to automatically transfer skills to robots. The main impediments that can be overcome by the learning platform in this setting are the kinematic and dynamic differences between the demonstrator and the target system. In this work, we usually learn reference trajectories that can be used, for example, in a whole-body control framework (de Gea Fernández et al., 2015). The integration of more sensor data like camera images or force sensor measurements would be a next step. The presented approach still has some limitations: some prior knowledge has to be defined in the form of reward functions, simulations, and markers for motion capture. For complex problems with complex reward functions, learning the reward function would be better than defining it manually. Promising fields of research are active reward learning (Daniel et al., 2015) and inverse reinforcement learning (Ng and Russell, 2000). Simulations ideally would be created automatically from sensor data, experience, and active exploration in the real world. At the moment, however, this is still a manual step.
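The CMA-ES-based refinement in simulation mentioned for the pick-and-place scenario can be sketched with the `cma` package; `rollout_cost` is a hypothetical stand-in for the simulated evaluation of a DMP parameter vector.

```python
import numpy as np
import cma  # pip install cma

def refine_in_simulation(initial_weights, rollout_cost, sigma0=0.1, iterations=100):
    """Refine DMP weights in simulation with CMA-ES.
    `rollout_cost(weights)` is assumed to return a scalar cost, e.g. the
    distance of the simulated end effector to the task goal."""
    es = cma.CMAEvolutionStrategy(np.asarray(initial_weights, dtype=float), sigma0)
    for _ in range(iterations):
        candidates = es.ask()                      # sample a population of weight vectors
        costs = [rollout_cost(np.asarray(c)) for c in candidates]
        es.tell(candidates, costs)                 # update the search distribution
    return np.asarray(es.result.xbest)             # best weights found so far
```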
For automated behavior recording, marker-free approaches could be tested and compared with respect to accuracy and achievable automation level. Also, some prior knowledge is implicitly integrated in the design of the learning platform. There is not one combination of methods that works for all applications. For example, simulation-to-reality transfer is only required in challenging applications like ball-throwing. This should be addressed in future work. On the basis of the learning platform, our future goal is to build a library of movements that are represented independently of the target system. We could use methods for embodiment mapping to transfer those skills to several target systems. Ethics Approval Experimental protocols were approved by the ethics committee of the University of Bremen. Written informed consent was obtained from all participants who volunteered to perform the experiments. Written informed consent for publication of identifying information/images was also obtained from all participants. Author Contributions LG and AF were responsible for the concept of the paper and are main authors. LG, AF, and MO wrote Sections 2 and 3. LG organized the data acquisition, implemented and evaluated the behavior segmentation and annotation, and wrote Sections 2.1, 2.2 and 5. AF implemented imitation learning and reinforcement learning approaches, developed contextual policy search and active learning methods, and wrote Sections 2.3 and 2.4. MO implemented methods to transfer motion plans from simulation to reality, conducted experiments to evaluate these methods, and wrote Section 2.5. JHM implemented imitation learning methods, developed contextual policy search and active learning methods, and wrote the introduction and Sections 2.4.2 and 2.4.3. JH conducted the experiments to evaluate BO-CPS on the robot COMPI and wrote Section 6. EAK and FK wrote the introduction, conclusion, and outlook. Acknowledgments We thank Hendrik Wiese for the implementation of the automatic marker labeling.
12,703.6
2018-05-31T00:00:00.000
[ "Computer Science", "Engineering" ]
The Impact of Diversification of Production Activities by Major Public Oil Companies on the Value of Their Shares This study aims to identify a system of factors that influence the market value of the largest oil companies. In order to create the most robust analysis with real-world applicability, we test several hypotheses which aim to establish those patterns of behavior and composition within oil company structures that interact most predictably with market trends and processes.

To achieve this, we collected quarterly data on the 5 largest private oil companies (BP, Chevron, Exxon Mobil, Royal Dutch Shell, Total) for the period from Q1 2006 to Q3 2016. The financial indicators for these companies were calculated based on data from the Thomson Reuters Eikon database, as well as from the quarterly reports of the companies themselves. An econometric analysis using panel data was used to test the hypotheses.

The following results were revealed. First, the capital structure of the largest oil companies has a direct impact on the value of shares. Second, an increase in capital costs attributable to the downstream segment relative to total capital costs adversely affects the price of shares. Third, we discovered that with the growth of Tobin’s Q, the share price of the largest oil companies increases. It was also revealed that factors such as the conclusion of mergers and acquisitions, profitability in the downstream segment, and the dividend payout ratio were insignificant in the model.

These results confirm that any study assessing the value of companies in the oil industry ought to evaluate influential variables affecting the capitalization of companies operating in both upstream and downstream segments, while also considering companies engaged in production in only one such segment. It is also imperative to conduct a separate analysis of the influence of factors on the capitalization of companies with respect to the prevailing trends in the oil market.

The novelty of this study relates to the immediate applicability of the real-world data utilized. We focus on such fundamental econometric variables from market-leading companies as profit levels, returns on sales, debt burdens, capital costs, and the effects of mergers and acquisitions. These factors are relevant at every level of business and academic analysis in every commercial endeavor. As such, our conclusions may be implemented immediately into business strategies, research, and economic analyses related to the oil industry as well as other markets. Additionally, since there are no significant discrepancies between our results and the established academic consensus, our contributions require little further interpretation to be instrumental. Introduction Assessment of the prospects of a future rise in the value of investment projects is the basis for beneficiaries when making an investment decision. In order to define the current value of an asset and its capability to generate dividends within the chosen time horizon, it is necessary to carry out a comprehensive analysis of the factors which directly influence its value. The purpose of the present paper is to reveal this group of factors, using oil companies as an example.
A distinguishing feature of assessing the investment potential of resource-extracting companies is the need to analyze their dependence on raw materials prices and to consider their mechanisms for protecting financial receipts from decline, given the existing risks of high volatility in the raw materials markets. A macroeconomic analysis of the petroleum industry and its prospects gives us an opportunity to answer the crucial question of whether purchasing oil companies' shares is expedient. Due to the rapid growth of the global economy from 1965 to 2017, the demand for oil increased almost threefold, from 1,524 to 4,470 million tons. The biggest contribution to the growth of composite demand was made by the Asian region, where this indicator increased within the above period more than 9 times (from 163 to 1,598 million tons), while in North America oil demand increased less than twofold (from 620 to 1,056 million tons). This is due to the rapid development of the economies of the Asian region. A notable increase in the share of oil consumption in this region also confirms this fact: from 10% of the total world amount in 1965 it rose to 34.7% by 2015, and in 2017 it amounted to 35.7%. As the predicted values of global oil demand presented in the reports of the global analytical organizations (BP, IEA, OPEC, Institute for Energy Studies of the Russian Academy of Sciences) show, the average value of demand by 2040 will be 4,916 million tons, which exceeds the corresponding value of 2016 by 13%. It is important to note that, in spite of differences in the predicted values of demand for energy resources presented in the analytical reports of various agencies and organizations, an overall trend in oil demand over the coming decades can be seen. It means that the development of the oil industry will go on, and it will continue to generate profits for its shareholders. When making an optimal investment decision, stock market traders use methods of determining the fair value of public companies. In carrying out such analysis, it is necessary to take into consideration as many factors influencing the share prices as possible. At present, the issues related to the assessment of the capitalization of oil companies are of the greatest relevance due to the high price volatility in the oil market which emerged in 2014. One of the mechanisms which protect the capitalization of oil companies from decline is diversification of production activities into upstream and downstream segments. Figures 1 and 2 illustrate a collapse of operating income in the upstream segment for the largest oil companies Exxon Mobil, Chevron, BP and Total in 2014-2015, when oil prices fell significantly, while the same indicator in the downstream segment showed growth within the same period. The presented diagrams show that operating income in the downstream segment is unresponsive to changes in the oil market, which, in its turn, explains why the prices of the companies' shares were not reduced pro rata with the fall in the oil price. The mechanism of activity diversification of the largest vertically integrated oil companies, from the point of view of an analysis of the cost-effectiveness of financial flows in the upstream and downstream segments, has not been studied before. Apart from diversification, it is necessary to define and analyze other factors on which the capitalization of oil companies depends.
Review of Literature The majority of studies dedicated to assessing the influence of various factors on the capitalization of oil companies focus on detecting the influence of financial indicators which are external to the companies, for example, changes in the oil price [6; 7; 10; 12; 15], movements of stock indexes [14], inflation fluctuations and the industrial production index [17]. Another group of papers considers not just external factors which are independent of the company's operations, but also internal ones – the financial and production indicators of companies [8; 11; 13]. Conclusions on the existence of an asymmetric effect of oil price changes on the value of companies' shares are stated in the following econometric papers. Research [15] concludes that growth in oil prices influences the prices of oil companies' shares more than a fall in these prices. However, it should be noted that the final conclusion in the paper was made on the basis of a sample which comprised large vertically integrated companies (BP, Royal Dutch Shell), as well as companies which conducted business only in the upstream segment (Pharos Energy, Tullow Oil, Afren etc.). In this regard, it is reasonable to carry out a more thorough econometric analysis using a homogeneous sample which consists only of vertically integrated companies. Besides, among the independent variables applied by the authors of the research there are only financial indicators such as market risk calculated using the London Stock Exchange index, the expected daily profitability of shares, and the oil price. Operational and financial indicators of the companies themselves are not included in the research. In another paper dedicated to revealing the asymmetric effect of oil price changes on the share prices of oil companies, the authors conducted the econometric analysis separately for companies of the upstream segment and those from the downstream segment [17]. The research also uses only external factors such as oil prices (Brent, WTI and Dubai) and macroeconomic indicators (inflation, industrial production index). The main conclusion of the paper is that the share prices of oil companies react asymmetrically to changes in oil prices irrespective of the macroeconomic environment in the market, for which reason the authors think that investors should assess oil companies in more than one way when diversifying the risks of the portfolio they build up. In paper [12] the authors study external and internal factors and conclude that the different structure of the amounts which account for the upstream and downstream segments of large vertically integrated companies results in differently directed movements of these companies' share prices in case of oil price growth. However, just as in previous studies, the authors focus on the value of shares and its dependence on oil prices (the difference between the futures and spot prices for oil), not including operational and financial factors of companies. However, unlike previous papers, the authors study six of the largest vertically integrated companies (including Chevron, Exxon Mobil, Eni), but the econometric analysis is conducted for each company individually. The authors of research [13] found that, irrespective of the sector of a resource-extracting company, revenue, the mineral resource price and EBITDA are the underlying determinants which influence the value of securities.
Just as in previous studies, macroeconomic factors are not presented in this paper; as in article [12], an individual approach to companies is applied – four companies from various sectors, including the power industry – thus it does not give a full picture of the sector, because the obtained results may be explained by the considered companies' leadership or scale of activity (the capitalization of each company exceeds 25 billion US dollars). Revealing the diversification effect is not considered. Summing up, it should be noted that the majority of the considered studies dedicated to the analysis of the factors which influence the capitalization of oil companies have not considered operational and financial indicators, which are important indices for assessing a company's development potential. In this paper we will carry out an econometric analysis to find out which external and internal factors influence the capitalization of the largest vertically integrated oil companies, and we will use profitability ratios for the upstream and downstream segments for the first time in order to verify the hypothesis of a positive influence of diversification of activity across these segments. Research Methodology For the purpose of our research, the following model was used as a basis [13]: the ratio m_it of EV (enterprise value) to DACF (debt-adjusted cash flow) is regressed on A_i, a set of dummy variables specific to the company (fixed effects); P_t, the price of Brent oil; KPI_it, a vector of key performance indicators (production volume, costs, expenses for exploration and exploitation of deposits, the Reserves Replacement Ratio and others); and R_it, RoACE. This model was chosen because it meets the criteria necessary to conduct our research: it comprises the most essential indicators of oil companies' activity and of company value, and, besides, the model is intended for use with panel data. For the purpose of our research we specified the model as follows. In furtherance of our objective we will verify the following hypotheses: • An increase in profits in the downstream segment has a positive effect on securities value; • Growth of profitability of sales in the upstream and downstream segments has a positive effect on companies' share value; • An increase in debt load depreciates share value; • An increase in capital expenditures for the downstream segment has a positive effect on company capitalization; • M&A deals influence share value. Data Analysis Before drawing up the regression we preprocessed the obtained data; the results are presented in Table 1. On the basis of the analysis, one can conclude as follows: 1) the company Exxon Mobil has the maximum value of Tobin's Q of 2.14 and it is the only company which has mean and median values of Tobin's Q above 1. This means that this company has for a long time been assessed by investors as more attractive for investment, and this resulted in its overvaluation; 2) the average of S_ratio in the sample amounts to 0.59, i.e., on average a little over 50% of companies' assets consist of debt capital.
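A sketch of how the pooled regression (model 1) and the company fixed-effects regression (model 2) described in this section could be estimated in Python is given below; the data file and column names are hypothetical placeholders for the Thomson Reuters Eikon panel, not the authors' actual variable names.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical quarterly panel: one row per (company, quarter).
df = pd.read_csv("oil_panel.csv")  # columns: company, quarter, share_price, ...

# Pooled regression (model 1): share price on the financial indicators.
pooled = smf.ols(
    "share_price ~ debt_ratio + capex_downstream_share + tobin_q"
    " + prof_upstream + prof_downstream + dpo + roe", data=df).fit()

# Fixed-effects regression (model 2): company dummies absorb
# time-invariant, company-specific differences.
fixed = smf.ols(
    "share_price ~ debt_ratio + capex_downstream_share + tobin_q"
    " + prof_upstream + prof_downstream + dpo + roe + C(company)",
    data=df).fit()

print(pooled.summary())
print(fixed.summary())
```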
Such a significant share of borrowed funds is explained by the investment projects characteristic of the oil industry, which are distinguished by their capital intensity and long time horizons; 3) Exxon Mobil shows the highest mean and median values of return on share capital, which indicates the efficiency of its business activities. On the basis of the submitted data one may conclude that there are moderate, significant positive relations between the following factors: Prof_down and Down_income, TobinQ and ROE, Prof_up and TobinQ, Prof_up and ROE. There are no strong relations between the dependent variable and the independent ones, and there are no such relations between the independent variables themselves (the absolute values of the obtained correlations do not exceed 0.8), which is indicative of the absence of multicollinearity. Nevertheless, let us calculate the variance inflation factors (Table 2). Since the VIF of each explanatory variable is less than 10, this indicates the absence of multicollinearity between the variables [24, p. 39]. We conducted a Breusch-Pagan test for heteroscedasticity, where Prob = 0.1946, which exceeds 0.05. So the null hypothesis is not rejected, and hence we can conclude that there is no heteroscedasticity. The final results of the developed models are presented in Table 3. The developed model 1 of pooled regression is significant at any reasonable level of significance because Prob is less than 0.01. R-squared amounts to 0.69. As judged by the model, such independent variables as Prof_down, DPO, Down_income, and ROE turned out to be insignificant at the 10% level of significance. In order to take the time component into consideration, model 2 with fixed effects was developed, which is significant at any reasonable level of significance (Prob < 0.01); R-squared (within) amounts to 0.2895. On the basis of the obtained results one may conclude that inter-individual differences between companies manifest themselves more strongly than dynamic ones. Since all predictor variables vary with time, all coefficients could be estimated. Conclusion The results of the verification of the hypotheses in accordance with the regression analysis using the fixed effects model are presented in Table 4. Analyzing the influence of the diversification of production activities by the largest vertically integrated companies, applying the approach which divides factors into profitability in the upstream and downstream segments, it should be noted that growth of profitability in the upstream segment results in an increase in the share price, while profitability in the downstream segment turned out to be an insignificant factor which adversely affected the dependent variable. The obtained results indicate that investors pay more attention to the financial indicator related to the upstream segment, leaving aside the downstream segment, and this may cause underestimation of oil companies and a subsequent correction of share prices. This conclusion is confirmed by the behavior of oil companies' securities (Figure 3). Companies' capitalization follows the change in the oil price, but it does not decline as much as the price of this energy source. Between 2013 and 2015 the oil price slumped by 60%, while within the same period the share prices of the companies Chevron, ExxonMobil and Royal Dutch Shell, taken as an example, fell by 28%, 23% and 36% respectively.
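Continuing the hypothetical panel sketch above, the multicollinearity and heteroscedasticity diagnostics mentioned here (VIF below 10, Breusch-Pagan p-value above 0.05) could be computed as follows; `df` and `pooled` refer to the objects from the previous sketch, and the column names remain illustrative.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.diagnostic import het_breuschpagan

# Design matrix with the explanatory variables (hypothetical column names).
X = df[["debt_ratio", "capex_downstream_share", "tobin_q",
        "prof_upstream", "prof_downstream", "dpo", "roe"]].dropna()
X_const = sm.add_constant(X)

# Variance inflation factors: values below 10 suggest no harmful multicollinearity.
vif = pd.Series(
    [variance_inflation_factor(X_const.values, i) for i in range(1, X_const.shape[1])],
    index=X.columns)
print(vif)

# Breusch-Pagan test on the pooled-model residuals:
# a p-value above 0.05 means the null of homoscedasticity is not rejected.
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(pooled.resid, pooled.model.exog)
print("Breusch-Pagan p-value:", lm_pvalue)
```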
It should also be noted that, as part of the price recovery which started in 2015, the price of oil companies' shares bounced back almost to the level of 2013. (Table 4 notes: *** – the factor is significant at the 1% level of significance; ** – the factor is significant at the 5% level of significance; * – the factor is significant at the 10% level of significance. Source: compiled by the author.) Probably, the issue of the influence of M&A deals on share prices should be studied in more detail using monthly data instead of quarterly data. The following may be added to the results represented in Table 4: • changes in the capital structure of large oil companies influence the share price: debt growth leads to price decline; • growth of investment costs in the downstream sector as compared to the aggregate investment costs has an adverse effect on the companies' value. It stems from the fact that when the oil price declines, large oil companies cut investment costs in the upstream sector while simultaneously increasing the investment costs in the advanced petroleum refining sector; • when Tobin's Q increases, the price of shares of large oil companies grows. This suggests that investors are ready to invest their money in the shares of the companies which are overvalued from the market point of view as compared to the shares of other oil companies. In order to advance research on the assessment of oil companies' value, it is reasonable to analyze the influence of the considered factors not just on the capitalization of the largest vertically integrated oil companies but also on that of companies carrying out production separately in the upstream and downstream segments. This will allow us to describe and explain the obtained conclusions in more detail, as well as to conduct a comparative analysis of the factors which influence oil companies conducting production in various segments.
4,109
2019-12-30T00:00:00.000
[ "Business", "Economics" ]
Visualization of Salient Object With Saliency Maps Using Residual Neural Networks Visual saliency techniques based on Convolutional Neural Networks (CNNs) exhibit strong performance for saliency fixation in a scene, but such networks are harder to train in view of their complexity. We adopt the Residual Network Model (ResNet), which is better able to optimize features for predicting the salient area in the form of saliency maps within images. To obtain saliency maps, an amalgamated framework is presented that contains two streams of the Residual Network Model (ResNet-50). Each ResNet-50 stream is used to enhance low-level and high-level semantic features, and together they build a network of 99 layers operating at two different image scales for generating saliency attention. The model is initialized with transfer learning from weights pretrained on ImageNet for object detection, with some modifications to minimize prediction error. At the end, the two streams integrate the features by fusion at the low and high scale dimensions of the images. The model is fine-tuned on four commonly used datasets and assessed with both qualitative and quantitative evaluation metrics against state-of-the-art deep saliency models. I. INTRODUCTION The field of computer vision has taken a sensational turn with the rise of Convolutional Neural Networks (CNNs), one of the most impressive forms of Artificial Neural Network (ANN) architecture. Visualization of a salient object in an image using Convolutional Neural Network (CNN) models is therefore a highly active area and lies under the umbrella of supervised machine learning algorithms [5]. Typically, Convolutional Neural Networks (CNNs) learn hierarchically and extract highly discriminative information for classification from raw images [16]. In computer vision, visual saliency detection is one of the main challenges, and CNNs are the most powerful techniques widely used to integrate different layers to make saliency maps [38]. Saliency map processing has attracted a great deal of research interest and has proven beneficial in numerous applications [2]. Saliency maps, as depicted in Fig. 1, can be of great benefit for 2D image applications, object classification, action classification, video applications, video analysis, and quality assessment [2], [34]. In a common paradigm, fixation prediction yields pop-out, slim, blob-like salient areas, whereas salient object detection frequently creates clean connected regions [1]. Machine fixation prediction models require more effort to make a saliency map of the salient object within an image than humans do. The human visual system has the ability to fixate on useful objects and performs this task naturally and rapidly while viewing real-world visual images [12]. Hence, researchers typically aim at understanding and predicting visual saliency in a way that simulates the human visual process, which pinpoints the most prominent object within a scene effortlessly [1]. The aim of this paper is to extract the most salient informative objects with their respective semantic regions in an image for understanding the whole scene. This extraction simulates the functionality of biological visual attention systems [43].
Therefore, the main motivation is to provide new insights about human biological attentional processes and give new ways for: understanding visual attention, complex scene understanding, detecting salient objects in a low clutter context, making new artificial intelligence applications and these applications can be based on image or video saliency detection mechanisms [11], [43]. Commonly, visual saliency models use a multiscale configuration for improving accuracy, which integrates the information at low and high image scales [11]. This improves the saliency detection performance of our model, which finds out the tiny salient regions and the center of large salient regions in high and low scales, respectively [11]. In addition, there are various prediction models that make saliency maps based on: the probability distribution of the position of the eye fixation on the image [11], low-level features such as multiscale contrast, color spatial distribution to describe a salient object locally and regionally, high-level features such as ''objectness information'' [2]. Since then, models of saliency have emerged to fixate the most prominent regions by snubbing the less significant part, but still there are many opportunities to get better due to its complexity, having many different object types, having large dissimilarity of multiple objects in a scene [12], variations exist in images due to different viewpoints (camera viewpoint) illuminations, different object pose, partial occlusions and unrelated background as shown in Fig. 2. Although, convolutional neural networks (CNNs) have a sequence of breakthroughs to reformulate the layers as gaining knowledge for image detection but difficult to train due to its complexity. Normally, it takes so much time to train a desired model in CNN, so the saliency systems may have limited power while using CNN when known and obvious objects are not present within the image [2]. For the solution of this problem, the Residual Network Model (ResNet) [9], one of the deep CNN models, is used to carry strong semantic features within the image. In addition to this, a feature significantly describes the particular attribute of the object, some commonly used features are size, color, and shape. The primary objective is to process a saliency outline geographically to the level of saliency for visual consideration. Thus, we suggest a two-modality framework to get conceptual components from crude image pixels progressively, which has richer prior information for a better saliency prediction as this model has learned and how to identify images from ImageNet [33] dataset. It is a case of transfer learning where features are learned on one job and reused for another with or without fine-tuning. The transfer learning paradigm is considered important typically for smaller saliency datasets [21]. For image identification, Residual Network Model (ResNet) [9] used one stream with a short cut between its two blocks of layer, which reduced the computational complexity and then summed up the results at the end. However, we used two ResNet-50 [9] streams running parallel at two different image dimensions, at the end we produce results in the form of grey scale visual maps from one combined deep Residual Network [9] model up to 99 layers. Overall Contribution: our proposed framework that addresses the challenges. 
• Explore several CNN models that integrate feature maps after fusion, and design a two-stream framework that utilizes ResNet-50 [9], which is efficient at capturing global visual contrast information. • Investigate the effect of ResNet-50 [9] on different image dimensions. The key features are the use of input data diversity and high image dimensions for getting better saliency. The robustness of the saliency framework can be enhanced by using these key features. • Four challenging datasets are used for the analysis and evaluation of our saliency model. • Extensive analysis and fair comparison with state-of-the-art saliency prediction models with respect to qualitative and quantitative results. The rest of this paper is organized as follows: Section II mentions some of the related work. Section III describes the design of our visual saliency model. Section IV mentions the details of model training and proceeds with the investigation of our model evaluation. Section V discusses the final results. Finally, we end with conclusions in Section VI. II. RELATED WORK The goal here is to discuss the most recent research strategies for CNN saliency models that predict the probability distribution of eye fixations over the image. Saliency maps have a different intensity for each pixel, and each pixel has its place on the most salient object. These strategies achieve better performance than conventional visual saliency models. In [2], Jia et al. proposed an improved saliency method with multiple CNN layers, named EML-NET, which acquired encouraging results after merging prior information and was trained on a comprehensive saliency dataset. It can be extended further to improve scalability, which becomes more challenging when features are taken from several layers. In [3], the researchers proposed a framework built on two trained CNN models: one trained model was generated for top-down visual saliency, and the other trained model was exploited for classification. In addition, the authors collected an eye gaze map dataset by means of a Tobii T60 eye tracker and evaluated the performance in two forms: visual maps and enhanced classification accuracy. Furthermore, a comparison was shown between Inception, VGG-19, and SalClassNet classifiers. In [1], Feng et al. designed a CNN architecture that captured global and local contrast feature information at different scales, which could successfully spot the salient region within the images. Moreover, comparative results with ten state-of-the-art architectures were reported. In [4], the authors designed a network to extract multifarious semantic features and to study end-to-end pixel-wise visual saliency at different scales while considering only the global perspective by utilizing link layers with large receptive fields. In addition, key factors were included: large depth, kernels of dissimilar size working in parallel to pinpoint the saliency, greater receptive fields for global context, and a center bias for pattern outline identification dependent on location.
The proposed network in [8] contained two end-to-end CNN fixation streams, one stream was pretrained on human visual guesstimate on eye tracking data and the other was pretrained on an image identification dataset named as semantic stream which was figured out semantic signals from the input images. Furthermore, these two CNN streams merged to form a module like inception block with convolution and deconvolution layers to notice a complex prominent element. The authors in [13] presented a deep CNN based attentional push approach for saliency prediction. This model contained two pathways: a saliency way fed by the whole image to implant fixation method for computing the augmented maps, a push way fed by 2-D cropped actor head image to guess the gaze scene actors. Followed by a trivial convent that merged and generated the saliency. In [15], a ConvLSTM model built on LSTM that iteratively fixated different locations in the images to refine feature prediction and learnt the earlier saliency maps made by Gaussianfunction. In addition to this, ConvLSTM learnt center bias without mixing the prediction features manually. In [17], Cao et al. establish a simple and end-to-end CNN network that identifies input features with fewer parameters for the production of visual saliency maps. Moreover, the authors performed widespread experiments for the selection of high quality features at low layer, middle layer, and high layer. The major motivation of this design was to select input features that enabled the network to improve the results and showed major similarities with the contrast evidence which was presented in ground truth masks. In [23], Ji et al. suggest a new encoder decoder CNN framework by acquainted multidimensions spatial-wise and channel-wise devoted layers. These attention layers united the perspective information related to features at varying scales and then finally produced the saliency. In addition to this, the structure was designed to get visual saliency maps with accurate side way edge information. This structure showed effective results on various datasets. In [25], Monroy et al. extend CNN architecture by transfer learning to predict the 2D and omnidirectional images (ODIs) saliency in an accurate manner. In this pipeline, the generated visual maps were very close to the ground truth. However, in [34], the authors suggested a novel way to extend the 2D prediction method by applying on cube face images for 360degree images. This method used CNN based fusion approach which has been trained on CMP images and a new loss function. In [30], the researchers introduced two new datasets: one was odd one-out (O 3 ) images, the second was a psychophysical pattern (P 3 ). These two datasets were used to evaluate the capacity of visual saliency algorithms for finding single target. Furthermore, the effect of an architecture based on CNN saliency model training was investigated on these forms of datasets and did not find an odd oneout target ability for major improvement. A pyramid feature attention network (PFAN) was proposed in [38] that enhanced the contextual and spatial features by using a novel CNN. It contained four modules named as: a context-aware pyramid feature extraction (CPFE), channel-wise attention (CA), spatial attention (SA), and edge preservation (EP). CPFE was designed to acquire rich context features at the multiscale level, but other modules were utilized to generate feature maps and then applied fusion for the saliency regions. 
Moreover, in [28], the authors presented a novel deep spatial contextual computational saliency method named as deep spatial contextual long-term recurrent convolutional network (DSCLRCN) that inevitably learns local features from the input images in parallel. Then, it acquired long-term spatial interactions between global context and the overall scene context to conclude the saliency maps. In [29], the authors proposed a multiresolution convolutional neural network (Mr-CNN), which was a predicted eye fixation computational framework that learned two types of features simultaneously from input images. It was trained on fixation and non-fixation locations with multiresolution and utilized input images as raw pixels. At the end, Top-down and bottom-up features were integrated in the last layer to predict visual saliency. Gaze II in [21] used the initial deep features from the VGG-19 model, which was used for the image identification model. It trained some readout layers on top of the VGG-19 for saliency prediction. A strong test was performed after conservative cross validation, which achieved 87% top performance in area under the curve metrics. In [40], the authors introduced a two-level hierarchy by embedding deep CNNs named as hierarchical deep CNNs (HD-CNN). This model separated easy classes from difficult classes by using coarse and fine category classifiers. In training, coarse category classifiers are used multinomial logistic loss with global fine-tuning. In addition to this, the fine category classifiers and layer parameters built HD-CNNs are more scalable for visual prediction. In [7], the researchers provided a hard analysis of noisy saliency maps and presented a novel hypothesis about irrelevant features which passed through an activation function ''ReLu''. Then, we proposed a method during back propagation through layer-wise thresholding. A comparison summary of CNNs models used for visual saliency and quantitative comparison between evaluations metrics of different deep learning saliency models on the challenging MIT300 dataset are shown in Table. 1 and Table. 2 respectively. III. DESIGN OF VISUAL SALIENCY MODEL In this section, we will discuss the important factors of Residual Neural Network and the details of our two-stream visual saliency model architecture. A. RESIDUAL NEURAL NETWORK First, we introduce the basic structure of the Residual Network Model (ResNet) [9], that is, an excellent feed-forward deep intensely arrangement of interconnected convolutional layer blocks which has the power to learn and highlight features from input data. Then describe the proposed two-stream framework in Section 3.2. Most of the CNNs designs such as AlexNet [18], VGG-16 [35], and GoogLeNet [36] are comparatively 'Shallow' for generating saliency map, but the Residual Network builds its deep architecture based on the popular CNN model for saliency presentation [2]. The Residual Network Model (ResNet-50) [9] is the varietal way and the deepest ever presented for classification in vision. It won the 1st place on the tasks of ImageNet [33] detection, ImageNet localization, COCO detection, and COCO segmentation in ILSVRC & COCO 2015 competition [9]. A deeper network has demonstrated the degradation problem by loss of information which affects the training accuracy [2], [9]. 
Therefore, for faster training and to construct a really deep network, two approaches were initially introduced in ResNet [9]: one is stacking building blocks of similar connecting shapes, and the other is a new skip connection approach [9], [10]. These building blocks are known as ''Residual Units'' in [10], and they make the Residual Network Model (ResNet) [9] easier to optimize than plain deep learning models. This is due to the feed-forward identity mapping in the form of a shortcut connection, which skips some layers and adds its result to the output of the stacked layers. The skip connection is an information compensation strategy which intuitively collects prior-layer information at equal scale to compensate the current-layer features [2]. B. TWO STREAM VISUAL SALIENCY NETWORK Our design is inspired by ''Salicon'' presented in [11], which was a pioneering effort to train a model using DNNs for ''Visual Saliency Prediction''. The main concern is how to make visual saliency prediction usable in applications. The architecture computes the saliency prediction within images based on ResNet-50 [9], which is pretrained on ImageNet [33] for object classification. Therefore, a trainable end-to-end two-stream ResNet-50 [9] framework is proposed to address the fixation map problem; it permits learning the parameters of the pretrained ResNet [9] by backpropagation to optimize saliency. There are several ways to accomplish the integration of the two-stream data, ranging from early fusion to late fusion [39]. However, to achieve this, we design a very simple two-stream architecture with 49 convolutional layers in each stream, which are fused at the end to capture the extracted features and reduce the semantic gap in the saliency maps. Consequently, we have one ResNet-50 [9] to generate the R_H stream and another ResNet-50 [9] to generate the R_L stream. The detailed architecture is displayed in Fig. 3. These two streams are fed by two input images with dimensions ''1000 × 800 × 3'' and ''500 × 400 × 3''. The first two dimensions record the spatial location of the receptive field of the neuron, and the third one indexes the feature channels for which the neuron is tuned [11]. The neurons are tuned to detect the same patterns because the two streams share the same filters, but at a different scale. This model contains 99 ''Convolutional'' layers in total, two ''Max Pooling'' layers, and one ''Concatenate'' layer. Firstly, ResNet-50 is employed to obtain the initial features by initializing the first 30 layers from the ResNet-50 [9] pretrained on the ImageNet [33] dataset. Then, we modify some parameters in it to record the saliency measurements. The parameters of the model architecture are shown in Fig. 4. One ''Max Pooling'' layer is used after the first convolutional layer with pad = '0' and stride = '3'. When RGB images are resized to the high (1000 × 800 × 3) and low (500 × 400 × 3) scales, respectively, we take the neural responses of these two streams after the second-to-last convolutional layer of the ''Conv5'' block, with dimensions of ''35 × 48'' and ''17 × 24''; both streams have a third dimension of 2048 at this level. Note that R_L has half the spatial resolution of R_H at the second-to-last convolutional layer of the ''Conv5'' block. Next, the output of the low-scale residual network is resized by upsampling to ''35 × 48'' with linear interpolation to match the spatial resolution of the high-scale residual network. The responses of the two-scale residual networks are then combined to create the saliency maps.
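The paper implements the network in PyCaffe; purely as an illustration of the two-stream fusion just described (two ResNet-50 backbones, bilinear upsampling of the low-scale response to the high-scale grid, concatenation, and a 1×1 readout convolution), a rough PyTorch sketch might look like this. The layer split and the readout head are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

class TwoStreamSaliencyNet(nn.Module):
    """Rough two-stream sketch; not the authors' PyCaffe implementation."""

    def __init__(self):
        super().__init__()
        # Two ResNet-50 backbones with ImageNet-pretrained initialization;
        # the classification head (avgpool/fc) is dropped.
        backbone_high = resnet50(weights="IMAGENET1K_V1")
        backbone_low = resnet50(weights="IMAGENET1K_V1")
        self.high = nn.Sequential(*list(backbone_high.children())[:-2])
        self.low = nn.Sequential(*list(backbone_low.children())[:-2])
        # 1x1 convolution fusing the concatenated 2048+2048 feature maps
        # into a single-channel saliency map.
        self.readout = nn.Conv2d(4096, 1, kernel_size=1)

    def forward(self, image_high, image_low):
        # image_high: (B, 3, 800, 1000); image_low: (B, 3, 400, 500)
        feat_high = self.high(image_high)                 # roughly (B, 2048, 25, 32)
        feat_low = self.low(image_low)                    # coarser grid
        feat_low = F.interpolate(feat_low, size=feat_high.shape[-2:],
                                 mode="bilinear", align_corners=False)
        fused = torch.cat([feat_high, feat_low], dim=1)   # (B, 4096, H, W)
        return torch.sigmoid(self.readout(fused))         # saliency map in [0, 1]
```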
The last ''Max Pooling'' layer is used with stride = '1' and pad = '0' to capture the global features. Then, we introduce the last convolutional layer to learn the global visual contrast information. This convolutional layer uses a single filter with ''0'' padding and stride = ''0''; it identifies whether the responses in the last layer relate to the salient region, producing accurate saliency maps. This layer generates a resolution of ''37 × 52''. At the end, we resize the ground truth maps to match the size of our network output. IV. EXPERIMENTAL DETAILS Extensive experiments have demonstrated that typical saliency algorithms do not adequately detect singleton targets in natural images. Therefore, our framework can be simply extended to incorporate a variety of prior knowledge for visual saliency detection. All investigations are conducted on four commonly used datasets, namely ECSSD [31], HKU-IS [20], PASCAL-S [37], and DUT-OMRON [14]. A. MODEL TRAINING We implement the proposed model in PyCaffe by using ResNet-50 [9], pretrained on ImageNet [33], as a basic model to extract early features. The four most common datasets are employed for further training on the high and low scale dimensions of the input images. In training, we fine-tune by using training images to determine the learned weights with a momentum of 0.9 and a weight decay of 0.0005 on the four datasets separately until the training loss converges. Training runs for 80 epochs with real ground truth fixation masks for fine-tuning. Fine-tuning of the ResNet-50 [9] model for visual saliency up to 80 epochs is shown in Fig. 5. The learning rate of the first 30 convolutional layers is set to 0, and the learning rate of the rest of the convolutional layers is set to 0.0001. In addition, network parameters are optimized using the ''Adam optimizer'' with a batch size of 16. Visual saliency detection can be considered a binary prediction problem; thus we utilize binary cross entropy as the loss function. We trained the system at PUCIT (Punjab University College of Information Technology, Lahore, Pakistan) on an NVIDIA Titan GPU with 12 GB memory, and training times differed across the four datasets depending on the system utilized. B. DATASETS As more models have been proposed in the literature, more datasets have been introduced to support saliency detection models, but the reality is that more datasets are still required. There are some widely used datasets which play an important role for salient object visualization. Different benchmarks use various datasets for assessing visual saliency for salient objects and for performance evaluation of saliency generation models. In this work, we evaluate the proposed visual saliency model by using the most persuasive datasets, including ECSSD [31], HKU-IS [20], PASCAL-S [37], and DUT-OMRON [14], which are commonly used in many earlier works in the field of saliency fixation. The PASCAL-S [37] dataset contains 850 natural images from the validation set of the PASCAL VOC 2010 dataset with full segmentation ground truth, annotated by 8 free-viewing subjects exploring the images. For each input image, the individual was asked to identify a salient object by clicking, with no time limit and no constraints on the number of objects one could choose. The HKU-IS [20] dataset contains 4447 complex images, which consist of many disconnected salient objects with diverse spatial locations.
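Returning to the training setup of Section IV-A (first 30 convolutional layers frozen, learning rate 0.0001 for the rest, Adam, batch size 16, binary cross entropy, 80 epochs), a hedged sketch continuing the PyTorch illustration above could look as follows; the exact split point of the frozen layers and the data loader are assumptions.

```python
import torch
import torch.nn as nn

# Continuing the TwoStreamSaliencyNet sketch above; freezing each backbone's
# early stages approximates setting the learning rate of early layers to 0.
model = TwoStreamSaliencyNet()
for stream in (model.high, model.low):
    for layer in list(stream.children())[:6]:   # conv1 ... layer2 (approximation)
        for p in layer.parameters():
            p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-4, weight_decay=5e-4)
criterion = nn.BCELoss()                        # binary cross entropy on fixation masks

def train(loader, epochs=80):
    """`loader` is assumed to yield batches of 16 (img_high, img_low, mask),
    with masks resized to the network's output resolution."""
    model.train()
    for _ in range(epochs):
        for img_high, img_low, mask in loader:
            optimizer.zero_grad()
            pred = model(img_high, img_low)
            loss = criterion(pred, mask)
            loss.backward()
            optimizer.step()
```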
This dataset is challenging because of the similar appearance of background and foreground. The ECSSD [31] dataset contains 1000 very challenging images with diversified patterns in both foreground and background. It is a structurally complex new scene dataset, which contains challenging natural images for saliency detection and corresponding ground truth masks. Five annotators produced the ground truth masks. The DUT-OMRON [14] dataset contains 5168 complex images with pixel-wise ground truth masks of salient objects. It is a diverse dataset which consists of sample images with a side length of 400 pixels. C. EVALUATION METRICS In this section, we discuss three criteria which are used for performance evaluation of our proposed model, i.e., maximum F-measure (MaxF_β), mean absolute error (MAE), and the precision-recall (PR) curve. The PR curve is used to assess the estimated saliency map with the threshold ranging from 0 to 255. The visual saliency map is converted into a binary map, and its precision and recall are obtained by comparing the machine-generated saliency map with the ground truth masks. Doing these comparisons at each threshold value produces PR curves for the four mentioned datasets. The F-measure is the harmonic mean of average precision and recall. It is based on pixel-wise error and can evaluate the overall performance [12]. The mean absolute error represents the average absolute difference between the estimated saliency map and the ground truth saliency map. It often ignores structural similarities [1]. To validate the efficiency of our model, we perform several experiments, in which we find that our rich hierarchical model explores the representational potential at the pixel and semantic levels for learning visual saliency strategies which can be utilized for recovering local details. The results of the above experiments show that the generated visual saliency maps benefit from the saliency optimization process and have better quality [45]. The accuracy and performance of the model improved after multilevel feature fusion between the low and high scales, and the analysis was performed on the ECSSD [31], HKU-IS [20], PASCAL-S [37] and DUT-OMRON [14] datasets. DUT-OMRON [14] is the largest and most challenging of the four datasets, with a difficult scenario due to the large number of complex scenes, and it serves to identify the best performance of a model [8], [14]. The PR curve demonstrates a clear and comparatively small-range distribution of precision and recall points when using a binary cross entropy loss function [43]. As a result, the produced saliency maps are sensitive to binary thresholds, which produces smooth PR curves. Our model demonstrates higher performance and better PR curves, especially on the ECSSD dataset. However, SMD [41] drops faster compared to the nearest ELD [19] and MSI-CNN [26] methods on all used datasets. According to our observation, the multi-scale fusion strategy plays an important role in model performance, and the quantitative results can be further improved by different factors such as the number of layers, image dimensions, and hyper-parameter values. Fig. 6 shows the comparative results of our method in terms of PR curves on the four commonly used datasets, against MGCC [1], FSN [8], MSA-CNN [23], SMD [41], ELD [19], MSI-CNN [26], JLSD [27], CNET + PNET [24], and BENDer#1 [43]. We choose these methods because they are based on CNNs, are identified as benchmarks, and were developed recently.
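The MAE and maximum F-measure criteria of Section IV-C can be computed as in the following sketch; the β² value is the one conventionally used in the salient-object-detection literature and is an assumption, since the paper does not state its choice explicitly.

```python
import numpy as np

def mae(saliency_map, ground_truth):
    """Mean absolute error between a saliency map and the ground truth mask,
    both scaled to [0, 1]."""
    return float(np.mean(np.abs(saliency_map - ground_truth)))

def max_f_measure(saliency_map, ground_truth, beta2=0.3):
    """Maximum F-measure over 256 binarization thresholds.
    beta^2 = 0.3 is a conventional choice, assumed here."""
    gt = ground_truth > 0.5
    best = 0.0
    for t in np.linspace(0.0, 1.0, 256):
        pred = saliency_map >= t
        tp = np.logical_and(pred, gt).sum()
        precision = tp / (pred.sum() + 1e-8)
        recall = tp / (gt.sum() + 1e-8)
        f = (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)
        best = max(best, float(f))
    return best
```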
As illustrated in Table 3, our model achieves significantly better performance than the other seven methods after the two-scale fusion of residual global features from each stream, although JLSD [27] performs on par with our method. From Table 3 and Fig. 6, our model performs well on all four datasets, obtaining the lowest MAE and the highest maximum F-measure (MaxF β ) after the two-scale fusion of residual global features from each stream. We observe that our trained model produces stable values of the maximum F-measure but is less stable in terms of MAE. It has lower (better) MAE values on the PASCAL-S [37] and ECSSD [31] datasets and ranks second in terms of MaxF β on all four datasets compared with the other seven models, with JLSD [27] and BENDer#1 [43] very close to ours. Overall, our method shows encouraging results across the performance metrics.
TABLE 3. Performance of the proposed method and other 6 state-of-the-art approaches on four commonly used datasets. Red, blue, and green indicate the best, the second-best, and the third-best results in terms of maximum F-measure (''↑'' means larger is better) and MAE (''↓'' means smaller is better); ''-'' indicates no reported result.
Qualitative Comparison: We selected ResNet-50 [9] for its strong generalization ability, owing to the large number of operations applied to deep features, in order to obtain improved performance [44]. One stream produces pixel-level visual saliency maps and the other produces full-resolution semantic-level visual maps [42]. Using ResNet-50 [9], we obtain improved results in the form of visual maps. This improvement results from fusing the outputs of the two streams, which highlights both the fixation and the semantic level, although obtaining the best-quality saliency scores remains challenging. Fig. 7 compares our model's saliency maps with the results of six other saliency prediction models, as provided by their authors: Gaze-II [21], Salicon [11], LSTM (SAM) [28], Deep CNN [32], ELM [40], and Mr-CNN [29]. Our model predicts the correct salient object region even in complex scenes and for human figures and animals. It also detects significant regions when cluttered and unrelated background is present in the images. We observe that our model not only indicates the most prominent object more clearly but also achieves encouraging results for objects of different sizes.
VI. CONCLUSION
Visual saliency map generation has recently become a useful component of image and video applications. To build a visual saliency detection system, we fine-tune the ResNet-50 [9] model pretrained on ImageNet [33]. We perform various experiments on a re-architected, two-stream version of ResNet-50 [9]; the two streams are fed input images at low and high scales to drive saliency identification. In the future, this model could be tested with more layers to obtain further improvements.
CONFLICT OF INTEREST
None of the authors have a conflict of interest related to the research and results presented in this paper.
DATA AVAILABILITY STATEMENT
The datasets used in the experiments and discussed in the paper will be made available upon request.
Extended Cesaro Operator from A^∞_φ to Bloch Space
Mingzhu Yang, School of Tianjin University of Finance and Economics, 25 Zhu Jiang Street, Tianjin 300222, China
Abstract. Let g be a holomorphic function on the unit ball B in several complex variables, and denote by T_g the induced extended Cesaro operator. This paper discusses the boundedness and compactness of T_g acting from A^∞_φ to the Bloch space on the unit ball.
Introduction. Let B be the unit ball of C^n, and let H(B) denote the class of analytic functions on B. Let H^p be the standard Hardy space on the unit disc D. For f ∈ H(D), the classical Cesaro operator acting on f is given by the formula
The study of the Cesaro operator has become a major driving force in the development of modern complex analysis. The recent papers are good sources of information on much of the development of the theory of Cesaro operators up to the middle of the last decade. In recent years, the boundedness and compactness of the extended Cesaro operator between several spaces of holomorphic functions have been studied by many mathematicians. It is well known that the operator C is bounded on the usual Hardy spaces H^p and the Bergman space, as well as on the Dirichlet space. Basic facts on Hardy spaces can be found in Duren (1970). For 0 < p < ∞, Siskakis (1987) studied the spectrum of C and, as a by-product, obtained that C is bounded on H^p(D). For p = 1, the boundedness of C was also given by Siskakis (1990) by a particularly elegant method, independent of spectral theory; a different proof of the result can be found in Giang and Móricz (1995). After that, for 0 < p < 1, Miao (1992) proved that C is also bounded. For p = ∞, the boundedness of C was given by Danikas and Siskakis (1993).
A little calculation shows that it is natural to consider the extended Cesaro operator T_g defined below. It is easy to see that T_g takes H(B) into itself. In general, there is no easy way to determine when an extended Cesaro operator is bounded or compact.
The boundedness and compactness of this operator on weighted Bergman, mixed-norm, Bloch, and Dirichlet spaces in the unit ball have been studied by Xiao and Hu. In this paper, we continue this line of research. We first introduce some spaces. We define the Bloch space as the space of holomorphic functions on B with finite Bloch norm. Let φ denote a strictly decreasing continuous function; when φ ≡ 1, the space A^∞_φ becomes the classical space of bounded holomorphic functions.
Some Lemmas. In the following, we use the symbol C or M to denote a finite positive constant which does not depend on the variable z but may depend on some norms and on f; it need not be the same at each occurrence.
By Montel's theorem and the definition of a compact operator, the following lemma follows.
Lemma 2.1. Assume that g ∈ H(B). Then T_g : A^∞_φ → Bloch is compact if and only if T_g is bounded and, for any bounded sequence (f_k) in A^∞_φ that converges to 0 uniformly on compact subsets of B, T_g f_k converges to 0 in the Bloch norm.
Proof. Assume that T_g is compact and suppose that (f_k) is bounded in A^∞_φ and converges to 0 uniformly on compact sets of B. Since K is a compact subset of B, by the hypothesis and the definition of T_g, (T_g f_k)(z) converges to zero uniformly on K. It follows from the arbitrariness of K that the limit function h is equal to 0. Since this is true for an arbitrary subsequence of (f_k), we see that T_g f_k converges to 0. Conversely, let K be a bounded set in A^∞_φ; then there is a subsequence (f_{k_m}) which converges uniformly on compact subsets of B, and the corresponding differences converge to 0 on compact subsets of B. By the hypothesis of this lemma, T_g K is relatively compact, so T_g is compact, finishing the proof.
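The displayed formulas in the introduction above did not survive extraction. For reference, the following LaTeX sketch records the standard definitions used in this line of work (classical Cesàro operator on H(D), radial derivative, extended Cesàro operator on the ball, and one common form of the Bloch norm); these are assumptions, and the author's normalization may differ.

```latex
% Hedged sketch of the standard definitions (the paper's own normalization may differ).
\[
  Cf(z)=\sum_{n=0}^{\infty}\Bigl(\tfrac{1}{n+1}\sum_{k=0}^{n}a_k\Bigr)z^{n}
       =\frac{1}{z}\int_{0}^{z}\frac{f(\zeta)}{1-\zeta}\,d\zeta ,
  \qquad f(z)=\sum_{k\ge 0}a_k z^k\in H(D),
\]
\[
  \Re g(z)=\sum_{j=1}^{n} z_j\,\frac{\partial g}{\partial z_j}(z),\qquad
  T_g f(z)=\int_{0}^{1} f(tz)\,\Re g(tz)\,\frac{dt}{t},\qquad z\in B,
\]
\[
  \|f\|_{\mathcal{B}}=|f(0)|+\sup_{z\in B}\bigl(1-|z|^{2}\bigr)\bigl|\Re f(z)\bigr| .
\]
```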
Theorem 3.1. Suppose g ∈ H(B); then T_g : A^∞_φ → Bloch is bounded if and only if
Proof: We prove the sufficiency first. Since f(0) = 0,
Now we turn to the necessity. Setting the test function , for any
Since w ∈ B is arbitrary, we obtain the necessity.
Remark: note that by taking the test function f = 1, we get g ∈ Bloch.
Corollary 3.2. Suppose and ; then , that is, Rg belongs to the class of bounded holomorphic functions.
Proof: It is obvious from Theorem 3.1.
Theorem 3.3. Suppose g ∈ H(B); then T_g : A^∞_φ → Bloch is compact if and only if
Proof: We consider the sufficiency first. Assume the condition holds; then for any given ε > 0 there exists a δ with 0 < δ ≤ 1, and for any sequence (f_k) that converges to 0 uniformly on compact subsets of B . Notice that f_k(0) = 0
Now we turn to the necessity. For the necessity, we choose the test functions as follows. For any sequence {z_j} , the functions h_j converge to 0 uniformly on any compact subset of B. That is to say, the h_j satisfy the condition of Lemma 2.1, so we have
The conclusion follows from the arbitrariness of the sequence {z_j}; hence we must have that g is constant.
Resistance of Soil-Bound Prions to Rumen Digestion Before prion uptake and infection can occur in the lower gastrointestinal system, ingested prions are subjected to anaerobic digestion in the rumen of cervids and bovids. The susceptibility of soil-bound prions to rumen digestion has not been evaluated previously. In this study, prions from infectious brain homogenates as well as prions bound to a range of soils and soil minerals were subjected to in vitro rumen digestion, and changes in PrP levels were measured via western blot. Binding to clay appeared to protect noninfectious hamster PrPc from complete digestion, while both unbound and soil-bound infectious PrPSc proved highly resistant to rumen digestion. In addition, no change in intracerebral incubation period was observed following active rumen digestion of unbound hamster HY TME prions and HY TME prions bound to a silty clay loam soil. These results demonstrate that both unbound and soil-bound prions readily survive rumen digestion without a reduction in infectivity, further supporting the potential for soil-mediated transmission of chronic wasting disease (CWD) and scrapie in the environment. Introduction Prion diseases, or transmissible spongiform encephalopathies (TSEs), are fatal neurodegenerative diseases that afflict ruminants, including cattle (bovine spongiform encephalopathy, BSE, or 'mad cow' disease), sheep and goats (scrapie), and deer, elk, and moose (chronic wasting disease or CWD), as well as humans (Creutzfeld-Jakob disease or CJD) [1,2]. The infectious agent of prion diseases is PrP Sc , a misfolded isoform of a normal cellular prion protein (PrP c ) found in all susceptible species [1,3]. PrP Sc exhibits resistance to proteolysis and inactivation, increased hydrophobicity, and a propensity for aggregation [1,3]. Moreover, PrP Sc can seed conversion of PrP c to PrP Sc ('replicate') and thereby initiate prion propagation and, presumably, disease infection [3]. Natural transmission of CWD and scrapie occurs primarily or exclusively through ingestion or inhalation of prion-contaminated material shed from infected hosts or present in mortalities [2,4]. Infectious CWD and scrapie prions are shed in saliva, blood, urine, feces, antler velvet, milk, and birthing matter (reviewed by [5]) and are present in the tissue of diseased carcasses [6,7]. Once ingested by a ruminant (whether sheep, goat, cow, deer, elk, or moose), prions will be subjected to rumen digestion before entering the lower gastrointestinal tract, where agent uptake across the epithelium can initiate infection [8][9][10]. Prions are orally infectious [11,12] and can be detected in feces following oral inoculation [13,14] as well as in the feces of diseased animals [15][16][17]. Therefore, it can be assumed that a certain amount of PrP Sc survives the digestive processes in the rumen and lower gastrointestinal system. Results from previous in vitro studies of PrP Sc fate in rumen digestion have been varied. Scherbel and colleagues observed a near-complete loss of 263 K hamster PrP Sc , as detected by western blot, following rumen digestion [18]. However, no measurable loss of infectivity was seen in subsequent animal bioassay [19]. Jeffrey et al. observed complete loss of detectable PrP Sc in scrapie-infected sheep brain homogenates following exposure to rumen and other alimentary fluids [9]. However, PrP Sc was detected post-digestion when precipitation and proteinase-K digestion were used prior to western blotting. 
An additional limited study found no evidence of scrapie PrP Sc digestion in rumen fluids [20]. In sum, these studies demonstrate prions can survive rumen digestion, but it remains unclear whether rumen digestion degrades a significant portion of ingested PrP Sc . Ingestion of prion-contaminated soil has been implicated as a likely mechanism of natural CWD and scrapie transmission [21], but the effect of prion soil sorption on prion susceptibility to rumen digestion remains unknown. Prions bind to a wide range of soils and soil minerals, resist desorption and degradation, remain capable of replication, and retain infectivity [22][23][24][25][26][27]. Alteration of prion infectivity has been observed following soil adsorption [22,27], but the effect of soil adsorption on prion resistance to degradation remains poorly understood. Effective enzymatic digestion of soil-bound PrP Sc (both CWD-elk and hamster) has been shown previously [23,24], but this work used a specific subtilisin enzyme known to significantly reduce prion infectivity. Rumen digestion is a complex, highly heterogeneous, anaerobic process carried out by bacteria, protozoa, and fungi primarily targeted at degrading complex carbohydrates and proteins in the ruminant diet [28]. The fate of soil-bound prions may be markedly different in such an environment compared to unbound prions. Adsorption of prions to soil may alter prion resistance to host degradation, thus potentially altering their oral infectivity and transmissibility. The objective of this research was to evaluate and compare the ability of rumen digestion to degrade unbound prions as well as prions bound to a range of soils and soil minerals. In vitro anaerobic digestion assays seeded by bovine rumen fluid were conducted, and the resultant PrP Sc levels were measured by western blotting. Intracerebral hamster bioassay was also employed to measure changes in infectious titer. The results demonstrate the strong resistance of both unbound and soilbound prions to rumen digestion, which further supports the efficacy of soil-bound prion ingestion as a natural route of disease transmission in ruminants. Ethics Statement All procedures involved in animal bioassay were approved by the Creighton University Institutional Animal Care and Use Committee and complied with the Guide for the Care and Use of Laboratory Animals. Collection procedures for rumen fluid from cannulated dairy cows was approved by the University of Nebraska-Lincoln Institutional Animal Care and Use Committee. Prion Adsorption HY TME PrP Sc /PrP c and uninfected PrP c from brain homogenates were sorbed to a range of soils as described previously [23]. Gamma-irradiated fine white sand (Fisher Scientific, Pittsburgh, PA), Dickinson sandy loam soil (a Typic Hapludoll), Rinda silty clay loam soil (a Vertic Epiaqualf), sodium bentonite clay (CETCO, Arlington Heights, IL), and silicon dioxide powder (SiO 2 , Sigma Aldrich, St. Louis, MO) were used and have been described previously [23,26]. Briefly, to obtain soilbound PrP, 10% brain homogenate was combined with soil in 1X DPBS and gently rotated at 22uC, then centrifuged at 100 g for 5 min. The supernatant was removed, and the pellets were washed 2 times with DPBS. PrP adsorption to silty clay loam, bentonite clay, and SiO2 powder adsorption was conducted in 15 ml polypropylene tubes (Fisher Scientific). PrP adsorption to sandy loam and quartz sand was conducted in 0.2 ml polypropylene PCR tubes (Fisher Scientific). 
The final pellets were collected and stored at −80 °C. Incubation times, as well as soil, buffer, and homogenate:soil ratios, were as reported previously [27] (Table S1) and were selected to achieve maximum or near-maximum PrP adsorption based on previous results [25,26].
Rumen Digestion Assay
Standard in vitro rumen digestion assay methods were followed [18,30,31]. Active rumen matter was collected from healthy cannulated dairy cows on a single farm approximately 5 hours after feeding. Standard dairy cow diets were used, consisting of corn silage, sweet bran feed, and brome and alfalfa hays, but the diet was not constant for all samplings and multiple cows were used over the course of the study. Percent grain ranged from 23-60%. No difference in immunoblot results was observed across all diets used (data not shown), although an extensive study of this variable was not conducted. Immediately after collection, rumen matter was hand-pressed through two layers of cheesecloth to remove large feed particles and sealed in a warmed thermos bottle with minimal headspace. The fluid was transported (45 min) to the lab and placed in an anaerobic chamber with an atmosphere of 85% N2, 10% H2, and 5% CO2. Rumen fluid was diluted 1:10 or 1:5 in McDougall's buffer (simulating ruminant saliva; 10.5 mM KCl, 8 mM NaCl, 0.5 mM MgSO4, 0.4 mM CaCl2, 0.11 M NaHCO3, 27 mM Na2HPO4, pH 8.3) with soluble carbohydrates (6.7 g/L maltose, 3.3 g/L xylose, 3.3 g/L soluble starch, 2.1 g/L NaHCO3, 3.3 g/L citrus pectin). There was no difference in immunoblot results between the 1:10 and 1:5 rumen:buffer dilutions (data not shown). Resazurin dye (Acros Organic, New Jersey) was used as an indicator of redox state and does not affect in vitro digestion [32]. For all active digestions, resazurin dye added to active rumen solutions remained colorless throughout the incubation, indicating that highly reduced, anaerobic conditions prevailed. For inactive controls, rumen fluid was autoclaved at 121 °C for 15 min. Active or inactive rumen fluid or buffer (McDougall's with soluble carbohydrates) was combined with prion-infected brain or soil homogenates at a ratio of 5:1 (rumen buffer:prion homogenate) in 0.2 ml PCR tubes. Rumen-prion mixtures were vortexed and then incubated at 39 °C for 20 hr with occasional (<6 hr) cap venting. Following incubation, samples were stored at −80 °C until analyzed. The average pH of the in vitro digestions is shown in Table 1.
Immunoblot Analysis
Detection of PrP Sc in digested and undigested samples was accomplished using SDS-PAGE/western blotting as described previously without modification [23,33]. Briefly, for proteinase K (PK) treatment, sample aliquots were incubated at 37 °C under constant agitation for 1 hr with 25 µg PK per ml of sample (Roche Diagnostics Corporation, Indianapolis, IN). PK digestion was stopped by boiling in SDS-PAGE sample buffer. Soil sample amounts loaded into gels are reported in Table S1. Samples were separated on 12.5% acrylamide gels under reducing conditions and transferred to polyvinylidene difluoride (PVDF) membranes. All hamster samples were immunoblotted with mAb 3F4 (Millipore, Billerica, MA; 1:10,000 dilution). Blots were developed with Pierce Supersignal West Femto maximum-sensitivity substrate and imaged on a Kodak 2000R imaging station (Kodak, Rochester, NY). None of the soils used exhibited nonspecific binding to either the primary or secondary antibody [23]. Rumen content also did not exhibit nonspecific binding (see, e.g., Figure 1D, lane 4).
Blot images were quantified using Kodak 1D 4.0 software (Kodak, Rochester, NY), which output the net intensity of each blot (total darkness minus background). Net intensities of sample replicates (n = 3 to 6) were normalized as a percentage of the average of control HY BH replicates (n = 4) run on the same gel to control for inter-gel variance. Animal Bioassay Intracerebral inoculations of male golden Syrian hamsters (Harlan Sprauge-Dawley, Indianapolis, IN) were conducted as described previously [34] with five animals per group. Samples of rumen-digested and undigested HY TME bound to silty clay loam soil or unbound were gamma irradiated (8 kGy) and diluted 1:10 in DPBS and then 25 ml was inoculated per animal. The incubation period was calculated as the length of time in days between inoculation and the onset of clinical signs that include ataxia and hyperactivity to external stimuli. Statistical Analysis Two-tailed student's T-tests assuming unequal variances were performed using Microsoft Excel to determine statistical significance as noted. P values less than 0.05 were considered statistically significant. Resistance of Unbound PrP Sc to Rumen Digestion An in vitro digestion assay was employed to simulate rumen digestion of prion-contaminated material. Standard methods, including standard rumen fluid sampling procedures and substrate and buffer compositions, were used [30,31,35]. pH values for the in vitro digestion were within normal in vivo ranges (Table 1), resazurin dye indicated a reduced environment in active samples but not inactive and buffer controls, and gas was produced throughout the 20 hr incubation, indicating anaerobic digestion occurred. Unbound HY TME PrP Sc from brain homogenate was not significantly reduced in actively digested samples compared to inactive and buffer controls ( Figures 1A, lanes 1-5 and 1B). Incubation up to 48 hr did not yield significant degradation (data not shown). Immunoblots of actively digested samples not subjected to proteinase-K exhibited a shift in migration (Figure S1), indicative of PrP Sc N-terminal truncation and suggest limited proteolysis of PrP Sc did occur [33]. However, the PrP Sc Nterminus is not required for prion infectivity [36]. Preliminary results with hamster DY TME, elk CWD, sheep scrapie, bovine TME, and mink TME also showed no differences in unbound PrP Sc in digested samples and controls (data not shown). These results are consistent with the result of Nicholson and colleagues [20] showing no decrease in scrapie PrP Sc following in vitro rumen digestion, but somewhat inconsistent with Scherbel et al. [18], who observed significant (near-complete) 263 K hamster PrP Sc degradation during active digestion in the absence of detergents. Methodological differences such as rumen fluid seed or western blotting techniques may be responsible for the observed differences. For instance, we collected rumen fluid from a live, cannulated dairy cow while Scherbel et al. collected fluid from a slaughtered beef bull. Resistance of Soil-Bound PrP Sc to Rumen Digestion To determine the effect of soil on the susceptibility of prions to rumen digestion, PrP Sc was sorbed to a range of soils and soil minerals and exposed to in vitro rumen digestion. Consistent with the unbound results, HY PrP Sc bound to silty clay loam (SCL) soil was not reduced following active rumen digestion ( Figures 1A, lanes 6-10, and 1B). Preliminary results for CWD-elk PrP Sc bound to SCL soil demonstrated similar resistance to digestion (data not shown). 
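The blot quantification and statistics described above (net intensities normalized to the mean of the HY BH control replicates run on the same gel, then compared with two-tailed t-tests assuming unequal variances) were performed with Kodak 1D and Excel. The following Python sketch is only an illustrative re-implementation of that arithmetic, using made-up example values.

```python
import numpy as np
from scipy import stats

def normalize_to_control(sample_net, control_net):
    """Express each replicate's net intensity as a percent of the
    mean control (HY BH) net intensity from the same gel."""
    return 100.0 * np.asarray(sample_net, dtype=float) / np.mean(control_net)

# Hypothetical net-intensity readings from one gel (arbitrary units).
control_net    = [1050, 980, 1010, 1000]   # n = 4 HY BH controls
digested_net   = [940, 1020, 905]          # n = 3 actively digested replicates
undigested_net = [990, 1060, 1015]         # n = 3 inactive-control replicates

digested_pct   = normalize_to_control(digested_net, control_net)
undigested_pct = normalize_to_control(undigested_net, control_net)

# Two-tailed t-test assuming unequal variances (Welch's test), as described in the text.
t_stat, p_value = stats.ttest_ind(digested_pct, undigested_pct, equal_var=False)
print(f"digested: {digested_pct.round(1)} %, undigested: {undigested_pct.round(1)} %")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}  (significant if p < 0.05)")
```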
Increased detection of PrP Sc bound to bentonite clay ( Figures 1A, lanes 11-15, and 1B) and silicon dioxide powder (SiO 2 ) ( Figures 1A, lanes 16-20, and 1C) was observed in digested samples compared to controls. These bentonite and SiO 2 results were highly variable, especially SiO 2 , but suggest that active rumen digestion increased PrP Sc desorption and detectability. PrP Sc detection from SiO 2 in all samples, including buffer controls, was very low (1-4% recovery, Figure 1A, lanes [16][17][18][19][20]. This contrasts with previous results reporting SiO 2 PrP Sc recoveries equal to or greater than 100% in three other aqueous solutions [37]. Because PrP Sc recovery from other soils and unbound samples in buffer was not abnormal (Figure 1A), the low PrP Sc recoveries from SiO 2 may be due to a specific chemical effect on the mineral particles (that in turn alters PrP Sc desorption) and not a direct effect on PrP Sc . HY TME bound to sandy loam (SL) soil and sand was susceptible to rumen digestion ( Figures 1D and 1E), and PrP Sc was not detected on sand samples actively digested ( Figure 1D, lane 4). However, the SL soil and sand results were highly variable and not statistically-significant from undigested controls. Further study may yield more precise data on PrP Sc resistance to digestion when bound to these soils, but preliminary PMCA data indicates sandbound PrP Sc remains capable of replicating following active digestion (discussed below). Rumen Digestion of PrP c Rumen digestion was completely effective at degrading PrP c from uninfected hamster brain homogenate (Figure 2A, lane 5, and 2B), consistent with previous studies [18,20]. This result typifies the increased resistance to proteolysis of PrP Sc compared to PrP c [23] and illuminates a practical effect of this increase on disease transmission: PrP Sc is able to survive rumen digestion whereas PrP c is not. A 60% decrease in detectable PrP c was observed for samples incubated in buffer (Figure 2A, lane 2), and only a faint PrP c signal was detected in samples incubated in inactive rumen content (Figure 2A, lanes 4). Thus, noncellular physical or chemical mechanisms were most likely responsible for the decreases in PrP c observed in the actively digested samples. These mechanisms could include irreversible sorption to rumen particles, heat degradation, or enzymatic degradation (from enzymes introduced in the brain homogenate [33]). In contrast to unbound PrP c , PrP c bound to bentonite clay was still detected following inactive and active digestion (Figure 2A, lane 9). This suggests that PrP c sorption to bentonite may increase its resistance to rumen degradation, perhaps due to a decrease in access to cleavage sites. As with unbound PrP c samples, PrP c levels in inactive bentonite controls were reduced (Figure 2A, lane 8), again implicating physiochemical mechanisms of PrP c degradation. Infectivity of Unbound and Soil-Bound Prions Following Rumen Digestion Is Unchanged Hamsters were inoculated with unbound and SCL-bound HY TME prions subjected to either active or inactive (pre-autoclaved) in vitro rumen digestion. The incubation periods of hamsters inoculated with inactive or active samples were equal ( Table 2). The incubation periods for the inoculated dose were consistent with previous studies [27,34]. All animals inoculated exhibited classic HY TME clinical symptoms, and all clinical animals contained HY PrP Sc in the central nervous system (CNS) (data not shown). 
Since there is a well-established relationship between HY TME infectious titer and incubation period [34], including for soilbound HY TME [27], these results strongly suggest rumen digestion does not alter HY TME infectious titer for either unbound or soil-bound prions. These data are consistent with the results of Scherbel and colleagues, who also observed no difference in attack rate or incubation period between (unbound) 263 Kstrain hamster prions subjected or not subjected to in vitro rumen digestion [19]. Furthermore, these data also correlate with our western blot results, which demonstrated no difference in PrP Sc levels before and after digestion in unbound and SCL soil-bound PrP Sc (Figures 1A and 1B). Also consistent with previous results, the mean incubation time of SCL soil-bound HY TME was significantly longer (13 d) than unbound HY TME (Table 2) [27]. This increase in incubation period correlates with a 1.2-log decrease in infectious titer of HY TME upon binding to SCL soil and a similar decrease in HY TME PrP Sc replication efficiency [27]. Thus, the present results indicate SCL-bound prions remain less infectious than unbound prions following rumen digestion. Implications for Environmental Prion Transmission To initiate infection via absorption across the lower gastrointestinal epithelium, orally ingested prions must survive passage through the rumen [8][9][10]. Previous studies have observed varied PrP Sc resistance to in vitro rumen digestion [9,18,20]. We observed that active in vitro rumen digestion did not reduce PrP Sc abundance (Figure 1), and consistent with the previous work of Scherbel et al. [19], unbound prion infectivity was not reduced following rumen digestion (Table 2). Moreover, our results demonstrate that PrP Sc sorption to soil does not reduce prion resistance to rumen digestion. However, since both unbound and soil-bound prions were resistant to rumen digestion, we cannot conclude that soil sorption increases prion resistance to gut degradation, only that it does not decrease it. Nevertheless, the resistance of soil-bound prions to rumen digestion supports the efficacy of soil-mediated prion transmission (prion-soil sorption and subsequent ingestion or inhalation by a naïve host) [21] as a natural mechanism of CWD and scrapie transmission. We did observe variance in PrP Sc resistance to digestion with respect to soil type, where, in contrast to the other soils and minerals, PrP Sc levels bound to sand and sandy loam soil were reduced following digestion ( Figure 1D and 1E). Variance in prion-soil interactions of this kind could lead to spatial variance in prion disease incidence based on local soil-type [21]. However, preliminary protein misfolding cyclic amplification (PMCA) experiments [27] indicate the replication efficiency of prions subjected to active digestion while bound to sand or SiO 2 is not significantly different than unbound prions (data not shown). Based on the established relationship between PMCA replication efficiency and infectious titer [24,27], these results suggest the SCL soil bioassay results are typical of the other soils and soil minerals used. Still, bioassay of other soils is needed to definitively evaluate soil-type variance in digestion resistance. A number of factors must be considered in extending the present results. First, the results were obtained using in vitro digestion, which is a simulation of in vivo processes with limitations [30,35]. 
We used standard in vitro methods, consistent with previous prion digestion studies, although the limited amount of prion-infected brain homogenate available necessitated using small (0.2 ml) tubes, which may have contributed to the observed variance. Second, prion resistance to digestion may vary with prion strain and species [23,33]. As noted above, our preliminary work with other prion strains and species suggests broad prion resistance to rumen digestion, but these results would need to be confirmed with additional studies. Third, rumen digestion can vary with host species and diet, with the latter appearing more significant than the former [38]. Studies have reported similar in vitro digestion (as measured by parameters such as gas production and substrate utilization) when using rumen fluid contents from sheep and cows [39,40], sheep and goats [41], and sheep and red deer [42] when animals were fed the same diet. Variance in the diet of the cows used to collect rumen fluid (23-66% grain) did not observably affect the immunoblot results of this study (data not shown), suggesting that diet is not a significant factor and that our results are applicable across a wide range of diets and species (cervids, sheep, goats, and cattle). However, an extensive study of the effect of diet was not conducted. Moreover, dairy cow diets are notably different than free-ranging deer diets, and deer diets vary seasonally as well as geographically, which can affect rumen digestion [43]. Finally, unbound and soil-bound prions surviving rumen passage will be exposed to stomach and intestinal digestion before uptake. These two processes are less complex than rumen digestion, and previous results indicate unbound prions are resistant to both [8,9,44]. Still, the effect of soil sorption on prion resistance to lower gastrointestinal digestion has yet to be investigated. Moreover, while passage through the rumen and lower gastrointestinal tract may not digest PrP Sc , it may alter PrP Sc uptake efficiency, which would not be detected by immunoblot or intracerebral bioassay. Thus, study of soil-bound prions in, for example, the gut-loop system employed by Dagleish and Jeffery [8,9] would be of interest in further evaluating the efficacy of soilmediated prion transmission. Figure S1 Rumen digestion of unbound HYTME PrP Sc without proteinase-K treatment.
Purified Vitexin Compound 1 Inhibits UVA-Induced Cellular Senescence in Human Dermal Fibroblasts by Binding Mitogen-Activated Protein Kinase 1 Purified vitexin compound 1 (VB1), a novel lignanoid isolated from the seeds of the Chinese herb Vitex negundo, has strong antioxidant abilities and broad antitumor activities. However, little is known about its anti-photoaging effect on the skin and the underlying mechanism. Here, we demonstrated that VB1 significantly attenuates ultraviolet A (UVA)-induced senescence in human dermal fibroblasts (HDFs), as evidenced by senescence-associated β-gal staining, MTT assays, and western blot analysis of the expression of p16 and matrix metalloproteinase-1 (MMP-1). Furthermore, mass spectrometry revealed that VB1 could directly bind to Mitogen-Activated Protein Kinase 1 (MAPK1). Molecular docking and molecular dynamics simulation methods confirmed the mass spectroscopy results and predicted six possible binding amino acids of MAPK1 that most likely interacted with VB1. Subsequent immunoprecipitation analysis, including different MAPK1 mutants, revealed that VB1 directly interacted with the residues, glutamic acid 58 (E58) and arginine 65 (R65) of MAPK1, leading to the partial reversal of UVA-induced senescence in HEK293T cells. Finally, we demonstrated that the topical application of VB1 to the skin of mice significantly reduced photoaging phenotypes in vivo. Collectively, these data demonstrated that VB1 reduces UVA-induced senescence by targeting MAPK1 and alleviates skin photoaging in mice, suggesting that VB1 may be applicable for the prevention and treatment of skin photoaging. INTRODUCTION Chronic exposure to ultraviolet (UV) irradiation is the major cause of skin damage leading to premature aging of the skin, a condition called photoaging (Kruglikov and Scherer, 2016). Clinical changes in the course of skin photoaging include the formation of fine and coarse wrinkles, increased skin thickness, dryness, laxity, and pigmentation (Lei et al., 2018). Solar UV radiation is divided into three categories according to their wavelength. UV radiation can penetrate the skin to different extents and interact with skin cells (Bravo et al., 2017). UVA (320-400 nm) is more abundant in sunlight and penetrates the skin deeper than UVB (280-315 nm). Previous studies have revealed that UVA plays an important role in skin photoaging (Kammeyer and Luiten, 2015;Lei et al., 2018). To date, the mechanisms of skin photoaging is still unknown, however, it is mainly associated with oxidative stress, inflammatory responses, and DNA damage (Shin et al., 2019). Oxidative stress can increase the secretion of proteases and produce a large number of oxidative intermediates due to an imbalance between the production of oxidants and antioxidants. Accumulation of reactive oxygen species (ROS) induced by oxidative stress can affect skin cells in both epidermis and dermis, promoting cellular senescence (Stellavato et al., 2018). Oxidative stress, one of the most important mechanisms underlying skin photoaging, activates the Mitogen Activated Protein Kinase (MAPK) family, including extracellular signal-regulated protein kinases1/2 (ERK 1/2), c-Jun NH2-terminal kinase (JNK or SAPK), and p38 MAPK, and their downstream pathways to promote cellular senescence (Sun et al., 2015). MAPK1, also called ERK2, is one of the key molecules in signal transduction pathways associated with cellular senescence and only functional when phosphorylated. 
Recent studies revealed that MAPK1 plays a major role in the unbalanced growth of human cells (Kobayashi et al., 2012). Vitamin D protects endothelial cells from irradiation-induced senescence and apoptosis by modulating the MAPK/Sirtuin 1 (SirT1) axis (Marampon et al., 2016). Naringenin exerts potent anti-photoaging effects by suppressing UVB-induced phosphorylated MAPK1 activity in JB6 P + cells, indicating that MAPK1 plays an important role in the cellular senescence (Jung et al., 2016). In addition, some other well-studied genes act as aging markers that are often studied: Meis1 is a putative regulators of neurotransmission and neurogenesis during aging (Chang-Panesso et al., 2018). Rb1, another aging marker, induces senescence in human skin fibroblasts by regulating by DNA methyltransferase 1 (DNMT1) (Wang et al., 2017). Purified vitexin compound 1 (VB1), a novel lignanoid isolated from the seeds of the Chinese herb, Vitex negundo, has strong antioxidant abilities and broad antitumor activities in many cancer cell lines and xenograft models . VB1 suppresses the growth of melanoma cells and induces apoptosis in breast cancer cells by increasing the ROS level (Liu et al., 2014). However, VB1 failed to induce ROS generation in the immortalized non-cancerous breast cell line, indicating that it has different effects on oxidative stress processes depending on cellular conditions. In addition, VB-1 can exert hair growth-promoting effects by augmenting Wnt/β-catenin signaling in human dermal papilla cells and protect PC12 cells from hypoxia/reoxygenation-induced injury via NADPH oxidase inhibition (Yang et al., 2014;Luo et al., 2018). However, little is known about the role of VB1 in skin photoaging. Considering that oxidative stress is an important mechanism of skin photoaging, and VB1 an antioxidative agent, we speculated that VB1 may play an important role in skin photoaging. In this study, we found that VB1 significantly inhibited UVAinduced senescence in human dermal fibroblasts (HDFs). Using mass spectrometry, we also revealed that VB1 could directly bind to MAPK1. Computer-aided methods and immunoprecipitation demonstrated that VB1 binds to MAPK1 in 293T cells by interacting with the residues E58 and R65. We further verified that VB1 partially reverses UVA-induced senescence by the above-mentioned binding to MAPK1. Finally, topical VB1 gel remarkably reduced the phenotype of skin photoaging in mice. For the first time, we demonstrate that VB1 reduces UVAinduced senescence in HDFs by targeting the residues E58 and R65 in MAPK1 and alleviates skin photoaging in mice. Our results indicate that VB1 is a potential new drug for the prevention and treatment of skin photoaging in the future. Cell Culture Primary HDFs were isolated from circumcised foreskins of healthy human donors aged from 5 to 12 years. Primary HDFs were cultured at 37 • C and 5% CO 2 in a humidified incubator in Dulbecco's modified Eagle media (DMEM; Gibco, Grand Island, NY, United States), supplemented with penicillin (100 U/mL), streptomycin (100 ng/mL), and 10% fetal bovine serum (FBS; Gibco). Primary HDFs were obtained with written consent from voluntary, informed donors, following a protocol approved by the Clinical Research Ethics Committee at the Xiangya Hospital of Central South University in Changsha, China. UVA Irradiation Before UVA irradiation, HDFs cells were rinsed and submerged under a thin layer of PBS to prevent UVA absorption by components of the medium, such as VB1. 
Cells were then irradiated using a Philips UVA lamp with an emission spectrum between 320 and 400 nm. Mock-irradiated cells were manipulated identically, except that they were not exposed to UVA. The dose of UVA irradiation was 10 J/cm 2 per day, as verified with a UV light meter (Sigma, Shanghai, China) for 3 days. Following each UVA irradiation, cells were incubated in complete medium, supplemented with indicated compounds. Western Blotting Thirty micrograms of protein from each cell lysate was resolved by 10% SDS-PAGE, followed by electrotransfer to PVDF membranes (Millipore, MA, United States). Blots were probed with primary antibodies at 4 • C overnight, followed by incubation with an HRP-conjugated secondary antibody for 1 h at room temperature. Bands of interest in western blots were visualized with a western blot HRP substrate (Millipore, Billerica, MA, United States). SA β-Gal Staining Senescence-associated β-galactosidase (SA-β-gal) activity was measured with a β-galactosidase staining kit (Cell Signaling Technology Boston, MA, United States) according to the manufacturer's instructions. Briefly, cells were washed in PBS, fixed at room temperature for 15 min in fixing solution, and incubated overnight at 37 • C in staining solution. Relative SAβ-gal activities under each studied condition were determined by calculating the percentages of cells with SA-β-gal activity out of all cells counted in four continuous visual fields under a microscope (200x). MTT Assays Cell viabilities were determined by performing 3-(4,5dimethylthiazol-2-yl)-2,5 -diphenyltetrazoliumbromide (MTT) assays. Briefly, cells were seeded into 96-well plates at a density of 2,000 cells/well. After adhesion, cells were exposed to UVA irradiation and grown in complete medium containing VB1 (0.6 µM). At 1, 3, or 5 days post-irradiation, the medium was aspirated, and cells were incubated for 4 h in fresh medium containing 0.5 mg/mL of MTT (Sigma, St. Louis, MO, United States). Subsequently, the medium was removed and purple formazan crystals were dissolved in DMSO (150 µl/well) with a brief vortexing step. Absorbance at 570 nm was measured using a Synergy 2 Multi-Mode Microplate Reader (BioTek, Seattle, United States). All experiments were performed in triplicate, and the data presented represent the means of 3 independent experiments ± SD. Molecular Docking and Molecular Dynamics Simulation The crystal structure of the wild type MAPK1 protein was obtained from Protein Data Bank (PDB code: 5BVF) (Bagdanoff et al., 2015). The missing residues and atoms were repaired by software Discovery Studio 2.5 (BIOVIA, CA, United States). The MAPK1-VB1 binding site was predicted by Discovery Studio 2.5 and Autodock Vina (Trott and Olson, 2010). The molecular docking study was performed employing the program Autodock Vina. And the Molecular dynamics (MD) simulations were performed to explore the binding details base the docking results. The partial atomic charge of VB1 was assigned by AM1-BCC method (Wang et al., 2006) and the topology files of VB1 were generated by AMBER force field (GAFF) (Wang et al., 2004). The protonation states of ionizable residues were determined at pH = 7.0 using H++ server (Gordon et al., 2005). The MD simulations were carried out using the AMBER 16 software package (Alma Rosa Agorilla, University of California, San Francisco, United States). 
First, 10,000 steps minimization (4,000 steps of steepest decent followed by 6,000 steps of conjugate gradient) was carried out with protein and inhibitor constrained (100 kcal mol-1 Å-2). Subsequently, the minimization was repeated with no constrain. Then, the system was gradually heated from 0 to 310 K over a period of 300 ps with 5.0 kcal mol-1 Å-2 restrain on the solute. Thereafter, another 1 ns equilibrium simulation was followed at 310 K with 2.0 kcal mol-1 Å-2 restrain on the solute. Finally, 100 ns MD simulation was performed for each system under NPT condition to produce trajectory. The time step was set to 2 fs. Synthesis and Modification of Gold Nanoparticles Gold nanoparticles were synthesized according to following procedures. Briefly, 3 mL sodium citrate (w/v, 2%) was added to 100 mL boiling HAuCl4 (0.01%) solution and kept heated for 10 min. With continuous stirring until cooled to room temperature, the gold nanoparticles were synthesized. To conjugate lignin onto gold nanoparticles, cysteamine was first linked onto gold nanoparticles through the thiol group on the cysteamine to introduce amino group onto gold nanoparticles. Cysteamine was added into gold nanoparticles (final concentration of cysteamine is 10 M) and reacted for 2 h. After centrifuge, the gold nanoparticles were reacted with 1-(3-Dimethylaminopropyl)-3-ethylcarbodiimide hydrochloride (EDC, 100 mM) and N-hydroxysuccinimide (NHS, 100 mM) for 30 min. Then, lignin was added into the solution to reach a concentration of 10 g/mL. After another 2 h of reaction, the gold nanoparticles were centrifuged and re-dispersed into water for further use. HPLC -Mass Spectrometry Analysis Each sample of enriched nanogold-VB1 compound was reconstituted in 7 µl of HPLC buffer A (0.1%(v/v) formic acid in water), and 5 µl was injected into a Nano-LC system (EASY-nLC 1000, Thermo Fisher Scientific, Waltham, MA, United States). Each sample was separated by a C18 column (50 µm inner-diameter × 15 cm, 2 µm C18) with a 125 min HPLC-gradient. The mass spectrometric analysis was carried out in a data-dependent mode with an automatic switch between a full MS scan and an MS/MS scan in the orbitrap. The resulting MS/MS data were searched against UniProt P. mirabilis ATCC 29906 database using MaxQuant software (v1.5.2.8). Generation of MAPK1 Mutants The plasmid MAPK1-pENTER was purchased from vigenebio (Shandong, China). Mutants of MAPK1 were generated with a QuickChange II XL Site-Directed Mutagenesis Kit (Agilent Technologies, Palo Alto, CA, United States) according to the (200x). The SA-β-gal-positive rate was obviously enhanced in UVA-induced HDFs, while VB1 inhibited UVA-induced SA-β-gal activity in a dose-dependent manner. The analysis data was shown in right panel. Data are presented as mean HDFs ± SD (n = 3; + vs. ctrl, p < 0.05, * vs UVA, p < 0.05). (B) p16 levels were detected by western blot analysis. Cells were irradiated as described and harvested 24 h after the final UVA exposure. Blots were probed to detect p16, stripped, and then reprobed for β-actin. VB1 inhibited UVA-induced p16 expression in a dose-dependent manner. Images are representative of 3 independent experiments. (C) MMP1 levels were detected by western blot analysis. VB1 inhibited UVA-induced MMP1 expression in a dose-dependent manner. Images are representative of 3 independent experiments. (D) The HDFs growth rate was determined by performing MTT assays. 
The growth rate of UVA-exposed HDFs was significantly decreased compared with control HSFs, while VB1 could reverse the decrease (n = 3 for each time point). manufacturer's instructions. The primers used for the Mutants were used were shown in Supplementary Table 1. Immunoprecipitation HEK293 cells were transfected with MAPK1 or mut-MAPK1. The cells were collected and washed with ice-cold PBS and lysed in buffer, and the VB1 modified by gold nanoparticles was added and incubation continued overnight at 4 • C. Precipitates were washed three times with ice-cold lysis buffer at 400 g for 10 min. Bound proteins were separated on an SDSpolyacrylamide gel and analyzed by western blotting using the anti-MAPK1 antibodies. Animals and UVA Radiation Eight-week-old female FVB mice were obtained from the National Key Laboratory of Genetics (Changsha, Hunan, China). Animals were housed at 23 ± 1 • C and 50 ± 10% relative humidity in a specific pathogen-free environment. Animal experiments were approved by the Animal Research Committee at the Xiang Ya Hospital of Central South University. The dorsal skin area of mice was shaved before and during experiments. Mice were divided into control, UVA,vehicle gel and VB1 groups, with 10 mice in each group. All Mice except control group were irradiated 3 times/week for 12 weeks with 20 J/cm 2 doses under a Philips UVA lamp placed 20 cm away (emission spectrum: 320-400 nm). The dorsal skin of mice was washed with 75% ethanol before each irradiation exposure to avoid blocking or absorption of UVA rays by previous applications of the VB1 gel. UVA doses were verified with a UV light meter. A Carbomer substrate gel containing 2% VB1 or vehicle gel lacking VB1 was applied dorsally to the mice accordingly every day. No topical application or irradiation was performed in the control group. Histological Analysis Mice were sacrificed by cervical dislocation under chloral hydrate anesthesia at the end of experiments. For histological analyses, central dorsum skin specimens were fixed in 4% paraformaldehyde and sectioned after paraffin embedding. Hematoxylin-eosin staining and Masson-trichrome staining was then performed. Photographs of 5 randomly-chosen fields in each section were taken under a microscope (200x). Epidermal thicknesses were measured as the distance from the basement membrane to the bottom of the stratum corneum. Statistical Analysis All data presented are representative of at least 3 independent experiments and are expressed as means ± SD. Statistical significances were determined by a one-way analysis of variance, followed by further analysis by the LSD (least significant difference) test. P < 0.05 was considered statistically significant. VB1 Protects HDFs From UVA-Induced Senescence SA-β-gal activity was measured in HDFs to investigate the effects of VB1 on cellular senescence induced by UVA. Our results showed that the percentage of senescent cells (SA-β-galpositive cells) was significantly increased in UVA-irradiated HDFs compared with that in non-irradiated control cells. VB1 inhibited UVA-induced SA-β-gal activity in a dose-dependent manner ( Figure 1A). Additionally, we found that the expression of p16, a hallmark of cellular senescence, was significantly increased after UVA irradiation and that VB1 showed dosedependent inhibition of p16 expression ( Figure 1B). 
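The statistical workflow stated in the Methods (one-way ANOVA followed by LSD post-hoc comparisons, with P < 0.05 considered significant) can be sketched as follows. This is not the authors' original analysis: the group values are invented for illustration, and the pairwise t-tests are a simple approximation of Fisher's LSD (classical LSD pools the ANOVA mean-square error across all groups).

```python
import itertools
import numpy as np
from scipy import stats

# Hypothetical SA-beta-gal-positive rates (%) for three treatment groups, n = 3 each.
groups = {
    "control":   [8.1, 9.4, 7.6],
    "UVA":       [42.0, 39.5, 44.8],
    "UVA + VB1": [21.3, 24.1, 19.8],
}

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# LSD-style post-hoc comparisons: unadjusted pairwise t-tests,
# only interpreted when the overall ANOVA is significant.
if p_anova < 0.05:
    for (name_a, a), (name_b, b) in itertools.combinations(groups.items(), 2):
        t, p = stats.ttest_ind(a, b)   # equal-variance t-test, as in classical LSD
        print(f"{name_a} vs {name_b}: p = {p:.4f} {'*' if p < 0.05 else ''}")
```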
Previous studies have shown that UVA irradiation causes photoaging through MMP-1 induction, and MMP1 was also considered as an indicator of senescence-Associated Secretory Phenotype (Gorgoulis et al., 2019). Thus, MMP-1 expression was analyzed by western blotting to examine whether VB1 regulates its expression following UVA exposure. The results confirmed that UVA irradiation increased MMP-1 protein expression in HDFs, however, the UVA-induced induction of MMP-1 expression was inhibited by VB1 in a dose-dependent manner ( Figure 1C). By MTT assays, we showed that UVA irradiation decreased the cell proliferation of HDFs and that VB1 treatment partially reversed this UVA-induced effect ( Figure 1D). These data indicated that VB1 partially protected HDFs from UVA-induced senescence. Potential Target Proteins and Target Sites of VB1 for MAPK1 To screen the potential binding target proteins of VB1, we used mass spectrometry. For this purpose, VB1 was linked to modified gold nanoparticles (Figure 2). Table 1 shows the potential target proteins of VB1 identified by mass spectrometry. Based on previous research on senescence, we chose MAPK1 as the target protein of VB1 for further experiments. To accurately predict the possible target sites of VB1 for MAPK1, we employed computeraided methods, including molecular docking and molecular dynamics simulation were employed. The in silico results showed that VB1 could directly bind MAPK1 (Figure 3A), and we finally predicted that five likely interacting amino acid residues (G32, Y34, K46, E58, and R65) of MAPK1 (Figure 3B), and K338 was another possible key residue. VB1 Binds to MAPK1 by Interacting With the Residues E58 and R65 We performed endogenous immunoprecipitation assays to confirm whether VB1 directly binds to MAPK1. HEK293T cells transfected with unlinked (control) or VB1-linked nanogold particles were used for immunoprecipitation, and MAPK1 was then detected by western blotting. The results showed that MAPK1 was detectable in the nanogold-VB1immunoprecipitated complexes but not in the control nanogold immunocomplexes (Figure 4A), revealing that MAPK1 directly bound to VB1. To precisely identify the interaction sites of VB1 with MAPK1, we established wild-type and mutant MAPK1 vectors and used them to transfect HEK293T cells, together with nanogold-VB1 particles. Then, we determined the interactions between VB1 and wild-type/mutant MAPK1 using immunoprecipitation. MAPK1 was not detected in HEK293T cells transfected with the E58-and R65-mutants of MAPK1, while it was detectable in cells transfected with wild-type MAPK1 and the other four MAPK1 mutants ( Figure 4A). Then, we modeled the complex between VB1 and MAPK1, considering the interacting residues E58 and R65, by in silico method ( Figure 4B). These findings indicated that VB1 bound to MAPK1 by interacting with the residues E58 and R65. VB1 Can Partially Reverses UVA-Induced Phosphorylation of MAPK1 Only phosphorylated MAPK1 (p-MAPK1), which is the active form of MAPK1, can activate the activity of a series of downstream transcription factors, thereby regulating cell function. To clarify the effect of VB1 on the MAPK1 pathway, p-MAPK1 was detected in HDFs by western blotting in HDFs. The results revealed that p-MAPK1 expression was significantly increased after UVA irradiation, however, VB1 could significantly reverse this UVA-mediated effect ( Figure 4C). 
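The docking step described in the Methods (AutoDock Vina with a search box around the predicted MAPK1 pocket) could be driven from Python as below. This is only a hedged sketch: the file names and box coordinates are placeholders, and it assumes a local `vina` executable on the PATH with the receptor (e.g., prepared from PDB 5BVF) and the VB1 ligand already converted to PDBQT format.

```python
import subprocess

# Placeholder inputs: prepared PDBQT files and an arbitrary example search box.
receptor = "mapk1_5bvf.pdbqt"     # hypothetical file name
ligand   = "vb1.pdbqt"            # hypothetical file name
box = dict(center_x=10.0, center_y=12.5, center_z=-3.0,   # illustrative coordinates only
           size_x=22.0, size_y=22.0, size_z=22.0)

cmd = ["vina",
       "--receptor", receptor,
       "--ligand", ligand,
       "--out", "vb1_mapk1_poses.pdbqt",
       "--exhaustiveness", "8"]
for key, value in box.items():
    cmd += [f"--{key}", str(value)]

# Runs AutoDock Vina; the ranked poses and predicted affinities are written to the --out file.
subprocess.run(cmd, check=True)
```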
VB1 Protects HEK293T Cells From UVA-Induced Senescence via Binding to MAPK1 To examine whether VB1 reduces UVA-induced senescence through MAPK1 binding, we co-transfected HEK293T cells, with down-regulated endogenous MAPK1, with VB1 and wild-type or mutant MAPK1 (E58-or R65-mutants) and irradiated the cells with UVA rays. We found that the UVA-induced expression of p16 and MMP1 was significantly decreased in HEK293T cells transfected with wild-type MAPK1 and VB1. In contrast, the UVA-induced expression of these proteins was partially reversed in cells co-transfected with both MAPK1 mutants and VB1 (Figures 4D-F). These data revealed that VB1 could protect HEK293T cells from UVA-induced senescence by binding the E58 and R65 residues of MAPK1. Topical VB1 Gel Alleviates the Skin Photoaging Phenotype in Mice To further evaluate the anti-photoaging ability of VB1, we applied a gel containing 2% VB1 or a vehicle gel lacking VB1 topically on the UVA-irradiated dorsal skin of mice. In the vehicle group, the dorsal skin of the animals was rough and scaly, showing increased thickness and deep wrinkles after 12 weeks of UVA-irradiation compared with the corresponding parameters in the nonirradiated control group. In contrast, the skin conditions of VB1treated mice were visibly improved, and the formation of skin wrinkles was significantly reduced (Figure 5A). Mouse dorsal skin from each group was harvested for hematoxylin and eosin (HE) staining. The epidermal thickness was markedly higher in the vehicle group than in the non-irradiated control group ( Figure 5B). Daily topical application of VB1 gel significantly reduced the thickening of the epidermal layers. These data demonstrated that VB1 could alleviates UVA-induced skin photoaging in vivo. DISCUSSION Skin photoaging, an essential aspect of skin aging, is mainly characterized by skin relaxation, wrinkle formation, pigmentation, and telangiectasia, etc. Recent evidence has shown that UVA irradiation produces ROS and induces cell senescence, ultimately leading to skin photoaging (Wlaschek et al., 2001;Yi et al., 2018). Thus, finding the ideal antioxidants that can act as anti-aging drugs is very promising. Previous studies have demonstrated that VB1 acts as a novel antitumor agent by regulating the cell cycle arrest and apoptosis induction in various cancers. Some studies have shown that VB1 has a strong antioxidant effect and can inhibit multiple protein kinases and signal transduction pathways (Liu et al., 2014Yang et al., 2014;Luo et al., 2018). However, the role of VB1 in skin photoaging has never been reported. Here, we showed that VB1 protects HDFs from UVA-induced senescence. Thus, for the first time, the role of VB1 in the skin cellular senescence was explored. How does VB1 protect HDFs from UVA-induced senescence? To answer this question, we explored potential target proteins of VB1 via mass spectrometry and nanogoldbased immunoprecipitation. Nanogold particles, also called gold nanoparticles (AuNPs), have been widely used for the identification of both biological and chemical materials (Lee et al., 2018). When it is combined with recognition proteins, such as antibodies or receptors, this nanomaterial can act as a biosensor molecule (Egea et al., 2019). To date, nanogold particles have been used as a biological tool in many studies, especially in cancer-related studies (Shen et al., 2018). 
In the present study, nanogold particles were used to pull down VB1 micromolecules for subsequent mass spectrometry and immunoprecipitation analyses. Through mass spectrometry, we identified 26 proteins that potentially bind VB1. Among those 26 proteins, some proteins were tumor-related, such as YWHAQ and eEF1A1, and others were proteins were involved in various biological processes, such as DEDD and MAPK1. The MAPK pathway is one of the most important pathways in aging. It mainly triggers a series of downstream biological effects through MAPK family molecules, including ERK1, ERK2, ERK5, JNK, and p38 MAPK, thereby regulating cell proliferation, differentiation, and development. Some reports suggest that the activation of the MAPK pathway is the central event in UV-induced intracellular signaling, causing nuclear and DNA damage-originated cellular responses (Bode and Dong, 2003). MAPK1, also called ERK2, plays an indispensable role in the MAPK pathway. Only phosphorylated (active) MAPK1 can trigger the activation a series of downstream transcription factors, such as Sata1/3 and FoxO3 to regulate cellular processes, however, it also plays an important regulatory role in aging (Gkotinakou et al., 2019;Zhu et al., 2019). Due to its important role in photoaging, we chose MAPK1 for in silico experiments to identify potential VB1-binding sites. A new computer-aided method, including molecular docking and molecular dynamics simulation, which was widely used to find the "best" matching between two molecules and also can predict their "correct" binding (Akbarabadi et al., 2019;Sakr et al., 2019), was also applied to predict potential binding sites of VB1 in MAPK1. Based on the in silico results, we concluded that VB1 could directly target MAPK1 by interacting with several amino acid residues (G32, Y34, K46, E58, R65, and K338). Then, through transfection of HEK293T cells with FIGURE 4 | VB1 binds to MAPK1 through E58 and R65 residues of MAPK1 and VB1 protects 293T cells from UVA-induced senescence via binding the two residues. (A) VB1 linked with gold nanoparticles was used for the immunoprecipitation in 293T cells. In the gold nanoparticles (AuNPs) immunocomplexes, MAPK1 was not detectable by western blot analysis, while MAPK1 was detectable by western blot analysis in nanogold-VB1-immunoprecipitated complexes. In the 293T cells transfected with E58-mutant MAPK1 and R65-mutant MAPK1, MAPK1 was not detected, while MAPK1 was detectable in cells transfected with other four mutant MAPK1. (B) The combination form between VB1 and MAPK1 via E58 and R65 residues using computer-aided methods. (C) p-MAPK1 expression was detected by western blot analysis. p-MAPK1 was significantly increased after UVA irradiation and that VB1 could significantly decrease UVA-induced p-MAPK1 expression in a dose-dependent manner. (D,E) 293T cells was used in further co-transfected experiments because of difficulty in HDFs. Endogenous MAPK1 in 293T cells was knockdown by MAPK1 siRNA. Then UVA-irradiated 293T cells were co-transfected with VB1 and wild-type or mutant MAPK1 (E58-mutant or R65-mutant), and p16 and MMP1 expression was detected by western blot analysis. UVA-induced p16 and MMP1 were significantly decreased in 293T cells tranfected with wild-type MAPK1 and VB1, while p16 and MMP1 were partially reversed in 293T cells transfected with E58-mutant or R65-mutant MAPK1 and VB1. (F) MAPK1 siRNA could significantly knockdown the endogenous MAPK1 expression in 293T cells. 
Thus, we hypothesized that VB1 reduces cellular senescence by regulating phosphorylated MAPK1 levels through direct interaction with MAPK1. Previous studies have shown that TGF-β alone activates Ras-Raf-MEK1 signaling and phosphorylates MAPK1 to increase the expression of MMP1 (Amatangelo et al., 2012), so we speculate that VB1 decreases MMP1 and p16 by reducing p-MAPK1. The demand for products that diminish wrinkles and maintain a youthful appearance of the skin is increasing. Currently, all-trans-retinoic acid (ATRA) is the only topical drug approved by the Food and Drug Administration (FDA) for the treatment of photoaged skin (Behairi et al., 2016). However, the topical use of ATRA might induce local skin side effects, including irritation, erythema, burning, pruritus, and scaling. Therefore, there is a need for safe and efficacious agents for the prevention and treatment of photoaging. To analyze whether VB1-containing preparations can effectively delay skin photoaging, we prepared a VB1 gel and demonstrated that it exhibited a good permeation rate and a short lag time (Li et al., 2016). In addition, topical administration of the VB1 gel to mice remarkably reduced the skin photoaging phenotypes caused by long-term UVA irradiation. In summary, we demonstrated that VB1 significantly inhibits UVA-induced senescence in HDFs by targeting the E58 and R65 residues of MAPK1 and effectively reduces skin photoaging in UVA-irradiated mice, indicating that VB1 could serve as a novel agent for the prevention and potential treatment of photoaging. FIGURE 5 | Topical VB1 gel alleviates the skin photoaging phenotype induced by UVA irradiation in mice. Mice were irradiated with UVA 3 times per week at a dose of 20 J/cm2 for 12 weeks. A gel containing 2% VB1 or a vehicle gel lacking VB1 (negative control) was applied topically to the dorsal skin of mice 1 h after each irradiation. The control group consisted of untreated mice of the same age. (A) Representative photographs of the dorsal skin of mice from the control group, the UVA-irradiated group, the experimental group receiving VB1 gel, and the vehicle-only group. (B) Representative photographs of hematoxylin-eosin (HE)-stained dorsal skin sections obtained at the end of the experiment from (a) the control group, (b) the UVA-irradiated group, (c) the experimental group receiving VB1 gel, and (d) the vehicle-only group (left panel, magnification ×200). Epidermal thicknesses were estimated at 5 randomly selected sites per mouse from digital images of HE-stained sections and are depicted as means ± SEM (n = 10 per group). The right panel shows the quantitative analysis (+ vs. ctrl, p < 0.05; * vs. UVA, p < 0.05). DATA AVAILABILITY STATEMENT All datasets generated for this study are included in the article/Supplementary Material. ETHICS STATEMENT The animal study was reviewed and approved by the Clinical Research Ethics Committee at the Xiangya Hospital of Central South University. AUTHOR CONTRIBUTIONS BW and SY performed all the experiments and prepared the figures and tables. SY, QZ, ZD, YZ, and YY provided technical and statistical assistance. BW, HX, and YH analyzed and interpreted the data. JL provided the theoretical direction. BW, SY, HX, and JL wrote and edited the manuscript. All authors read and approved the final manuscript.
FUNDING This work was primarily supported by the National Natural Science Foundation of China (Grant Nos. 81502750, 81602784, 81773351, 81874251, and 81573314) and the Cultivation Project of National Natural Science Foundation Major Research Program (Grant No. 91749114).
6,605.2
2020-07-31T00:00:00.000
[ "Medicine", "Environmental Science", "Chemistry" ]
WNT Signaling Influences Neurological Function and Psychiatric Disorders Through Regulating Glia Phenotypes and Neuron Plasticity Haochen Wang Soochow University Mengyang Yan Soochow University Affiliated No 1 People's Hospital: First Affiliated Hospital of Soochow University Zhiqi Cheng Soochow University Tongyu Rui Soochow University Yanan Yan Soochow University Jie Chen Xi'an International Medical Center Zhiya Gu Soochow University Li Hui Soochow University Affiliated Guangji Hospital Qiufang Jia Soochow University Affiliated Guangji Hospital Xiping Chen Soochow University Luyang Tao ( <EMAIL_ADDRESS>) Soochow University https://orcid.org/0000-0002-2440-2683 Background Glial cells were long viewed as relatively inert cells; now, glia are recognized as dynamic cells that respond to neuronal activity and sense and regulate metabolic changes [1]. Astrocytes, the most abundant cells in the central nervous system (CNS), contribute to balancing brain function and help maintain the normal composition of the extracellular medium [2,3], and the perspective on microglia in disease development has evolved: they are now seen to play crucial roles in both promoting and limiting brain injury [4]. In traumatic brain injury (TBI) and other CNS diseases, the brain's innate response to injury is crucial; resident astrocytes and microglia are often the primary cell types to initiate an inflammatory cascade upon sensing danger, and for this reason, proteins associated with the activation of these cells are often used as biomarkers [5]. A CNS lesion induces a generic response of reactive gliosis. Previous studies generally agreed that microglia have functional plasticity, including M1 and M2 phenotypes. Recently, in analogy to microglia, A1 and A2 phenotypes of astrocytes have been demonstrated. A1 astrocytes increase many proinflammatory cytokines with negative functions, whereas A2 astrocytes upregulate neurotrophic factors with a neuroprotective effect [6,7]. Disturbances of normal neuron-astrocyte interactions lead to neurodegeneration and the progression of neurological diseases [8]. WNTs are a family of secreted lipid-modified signaling proteins acting as short- or long-term signaling molecules in the regulation of cellular processes [9,10]. The WNT/β-catenin signaling pathway is important for neurogenesis in the developing nervous system [11,12]. In the present study, TBI (in vivo) and acute neuroinflammation (in vivo and in vitro) models were induced by controlled cortical impact (CCI) and lipopolysaccharide (LPS), respectively, and the expression of WNT was detected in the injured cortex. Meanwhile, the regulation of glial cell phenotypes and of the fate of neurons by WNT/β-catenin signaling was investigated, and the effects on neurological function and anxiety behavior were also studied. Animals Eight-week-old male Sprague-Dawley rats were used for the animal experiments (SLAC Company, Shanghai, China). Animals were housed under a 12 h light/dark cycle at 22 °C and allowed free access to food and water. All animal procedures were approved by the Institutional Animal Use and Care Committee at Soochow University and carried out according to the guidelines of Animal Use and Care of the National Institutes of Health. Traumatic brain injury model TBI was performed using the CCI model as previously described [13].
Acute neuroinflammation model Rats were anesthetized with 10% chloral hydrate (0.35 ml/100 g), and an intracerebroventricular (ICV) injection of LPS was given using a stereotaxic apparatus at the following coordinates: 0.8 mm posterior to bregma; 1.5 mm lateral to the sagittal suture; 3.6 mm beneath the brain surface [14]. LiCl and Salinomycin treatment LiCl and salinomycin were dissolved in sterile saline at final concentrations of 8 mg/ml and 0.4 mg/ml, respectively. LiCl or salinomycin was injected intraperitoneally 30 min post-TBI, followed by injections once daily for 3 days, 7 days, or 14 days. Wnt3a and DKK-1 treatment Intranasal administration is a well-established non-invasive route for drug administration to the brain and allows the permeation of proteins and even cells across the blood-brain barrier (BBB) [15]. Recombinant Wnt3a and DKK-1 (R&D Systems) were reconstituted at 10 ng/µL in phosphate-buffered saline (PBS) containing 0.1% BSA. Tissue collection Rats were sacrificed at 1 h, 6 h, 24 h, 3 d, 7 d, and 14 d after TBI or LPS injection. Rats were anesthetized and sacrificed at the different time points, tissue samples from the injured cerebral cortex were rapidly removed, and all subgroup samples were snap-frozen in liquid nitrogen and stored at −80°C until use. Primary astrocyte and microglia culture Primary astrocyte and microglia cultures were prepared from SD rats as described previously [16][17][18]. Behavioral analysis The wire grip test (WGT) and Morris water maze (MWM) were chosen to assess motor function, learning, and memory after TBI. The open field test (OFT), light-dark box test (LDB), and marble burying test (MBT) were used to explore anxiety-like behavior, while the sucrose preference test (SPT) and forced swim test (FST) reflect depression-like behavior. The detailed methods and protocols are given in the supplementary information. Primary neuron, astrocyte, microglia and co-culture Primary neuron cultures were prepared from SD rats; details are given in the supplementary information. Purified astrocytes and microglia were plated in 6-well dishes at a ratio of approximately 3-5:1, and the mature mixed glial cultures were used after seven days. Primary astrocyte-neuron co-cultures were prepared from SD rats as described previously [19]. OGD and OGD/R establishment In the oxygen/glucose deprivation injury and reperfusion (OGD/R) model, 2 mM Na2S2O4 was employed to consume oxygen [20]. The Na2S2O4 was dissolved in glucose-free DMEM/F12 medium. Transwell analysis Inserts with 0.4 µm and 8 µm pore diameters were used in the experiments; the detailed procedure is given in the supplementary information. 5-Ethynyl-2′-deoxyuridine (EdU) assay The proliferation of all types of astrocytes was measured with the Cell-Light EdU Apollo567 In Vitro Kit (RiboBio, Guangzhou, China) following the manufacturer's instructions. Immunocytochemistry and Immunofluorescence staining Astrocytes were plated on coverslips coated with poly-L-lysine. For immunocytochemistry (ICC) [21], coverslips were prepared routinely, deparaffinized, and rehydrated. Images were acquired using a microscope (Nikco), and immunofluorescence staining was applied to primary astrocyte-microglia co-cultures, the astrocyte scratch-wound model, the different microglia treatment groups, primary neuron cultures, and rat brain tissue slices. Western blot For cortical tissues, rats were sacrificed at 1 h, 6 h, 1 d, 3 d, 7 d, and 14 d after TBI or LPS, and cell samples were scraped from the culture plates.
The proteins of injured brain tissues and of cells from the different treatment groups were extracted with RIPA lysis buffer. The proteins were separated by 8%, 10%, or 12% SDS-PAGE and transferred onto Hybond polyvinylidene difluoride (PVDF) membranes. Real-time PCR Total RNA was extracted from primary cultured cells using Trizol reagent (Thermo, USA). The RNA samples were reverse-transcribed in 20 µl at 42°C for 60 min, then incubated at 70°C for 5 min and at 4°C for 10 min according to the manufacturer's instructions (Thermo, USA). The sequences of the PCR primers are given in the supplementary information. Brain edema and Lesion volume assay Brain edema (water content) was measured as previously described. In brief, rats were anesthetized and sacrificed at 24 h and 3 d after TBI. The brain water content was measured with a drying method, and the percentage was calculated for each part using the Elliott formula: water content (%) = (wet weight − dry weight)/wet weight × 100. Lesion volume was measured at 7 d post-TBI. Cryosections (15 µm thick, 100 µm intervals) were stained with H&E and photographed with light microscopy and an imaging program. The area of each section was measured with ImageJ software. Statistics analysis All statistical analyses were performed using SPSS 24.0. The Shapiro-Wilk test was used to evaluate whether the data followed a normal distribution. Differences between two groups were assessed by t-test. Western blot data were analyzed with one-way or two-way ANOVA. For all comparisons, P < 0.05 was regarded as statistically significant.
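The analysis pipeline just described (Elliott water-content calculation, a normality check, then a two-group comparison) can be made concrete with a short script. The study itself used SPSS 24.0; the sketch below is only an illustrative Python/SciPy equivalent, and all wet/dry weights in it are hypothetical values, not data from this work.

```python
# Illustrative Python/SciPy sketch of the statistical workflow described above.
# The study used SPSS 24.0; this is only an equivalent demonstration, and the
# wet/dry weights below are hypothetical, not measured data.
import numpy as np
from scipy import stats

def brain_water_content(wet_mg, dry_mg):
    """Elliott formula: water content (%) = (wet - dry) / wet * 100."""
    return (wet_mg - dry_mg) / wet_mg * 100.0

# Hypothetical wet/dry weights (mg) for two groups at 24 h post-TBI.
vehicle = brain_water_content(np.array([412.0, 405.3, 398.7, 420.1, 409.5]),
                              np.array([ 85.2,  83.9,  82.1,  87.0,  84.8]))
licl    = brain_water_content(np.array([401.8, 395.2, 410.6, 399.9, 404.3]),
                              np.array([ 88.5,  86.7,  90.2,  87.9,  88.8]))

# Shapiro-Wilk normality check for each group.
for name, grp in (("vehicle", vehicle), ("LiCl", licl)):
    w, p = stats.shapiro(grp)
    print(f"Shapiro-Wilk {name}: W = {w:.3f}, p = {p:.3f}")

# Two-sample t-test between groups; P < 0.05 is taken as significant.
t, p = stats.ttest_ind(vehicle, licl)
print(f"t-test: t = {t:.3f}, p = {p:.3f}, significant: {p < 0.05}")
```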
Results Activation of WNT signaling inhibits deficits of learning/memory and psychiatric disorder In the behavioral analysis, motor function in all groups recovered by 6 days after TBI, and rats treated with LiCl and Wnt3a obtained higher scores at early stages post-TBI (Fig. 1B). In the MWM test, TBI (vehicle) induced an increased latency at 11 to 21 days compared with the sham group; however, treatment with LiCl and Wnt3a reduced the latency on days 13-21 after TBI (Fig. 1C). In the OFT, the movement distance in the TBI + salinomycin and TBI + DKK-1 groups was significantly decreased at 3 d to 14 d post-TBI (Fig. 1D1-D3); however, the difference in movement distance was no longer statistically significant at 28 days after TBI (Fig. 1D4). Compared with the sham group, the center distance was reduced at 3 d and 7 d after TBI in the TBI, agonist, and inhibitor groups (Fig. 1E1-2), with no statistically significant difference at 14 d to 28 d (Fig. 1E3-4). In the LDB, as shown in Fig. 1F and 1G, the time spent in the light compartment and the number of light-compartment entries decreased significantly in rats that had suffered TBI. LiCl and Wnt3a increased the light-compartment time and entries compared with vehicle (Fig. 1F3). The effects of TBI, agonist, and inhibitor treatments on the MBT, which represents anxiety-related sequelae, are shown in Fig. 2H. The WNT agonists significantly decreased marble-burying behavior compared with the vehicle group at 7 d and 14 d post-TBI. The SPT reflects depression-like behavior; a significant decrease in sucrose consumption was observed in the TBI, TBI + DKK, and TBI + salinomycin groups at 7 d post-TBI. Activation of WNT signaling benefits neuroprotection post-TBI We measured the level of β-catenin, a key protein regulating the WNT signaling pathway. In injured cortex tissue, WNT signaling was dysregulated in the TBI and acute neuroinflammation models (Supplementary Fig. 1A-B). In the cell-cell interaction experiments (Supplementary Fig. 1C), WNT signaling was inhibited by activated microglia-conditioned medium (MCM) in primary neurons, with only slight changes from 1 hour to 8 hours but down-regulation at 12 hours in astrocytes (Supplementary Fig. 1D-E). The neuroprotective effect was confirmed by lesion volume (LV) analysis, which showed a significantly smaller LV in the LiCl and Wnt3a groups at 14 d after TBI (Fig. 2A-B). In addition, brain edema was significantly decreased in the agonist treatment groups compared with the TBI (vehicle) group at both 24 h and 3 d post-TBI (Fig. 2C). In western blots (Fig. 2D-E), the agonists LiCl and Wnt3a reversed the β-catenin decrease at 3 d after TBI, and the expression of inflammatory factors such as iNOS and IL-1β was decreased in the LiCl- and Wnt3a-treated groups compared with the vehicle, salinomycin, and DKK-1 groups. The expression of Bcl-2 increased and that of Bax decreased in the agonist groups. Immunofluorescence staining was employed to analyze the distribution and polarization state of astrocytes and microglia (Fig. 2F). Salinomycin and DKK decreased the number and length of microglial branches and activated microglia; the level of microglial activation increased significantly compared with the vehicle group (Fig. 2G). For astrocytes, the neurite length and cell area also increased. Phenotype of astrocyte was regulated by microglia and the WNT signaling pathway Under astrocyte-microglia co-culture conditions, treatment with LPS induced A1 astrocytes, in which the neurite length increased and the cell area shrank significantly (Fig. 3A-B). To analyze the relationship between microglia and the WNT pathway, primary microglia were treated with LPS/IL-4, LiCl/DKK-1, or OGD/R; we found that activation of WNT signaling (LiCl treatment) in microglia promoted the generation of M2 microglia (Fig. 3C), promoted CD206 upregulation and M2 microglia formation, and reduced apoptosis (Fig. 3D-E). We hypothesized that different microglial phenotypes could modify the process of astrocyte reactivation. To test this hypothesis, transwell chambers were used to co-culture microglia and astrocytes. Microglia were plated in the upper chamber, treated with LPS, IL-4, DKK, or LiCl, and then placed into the bottom wells containing primary astrocytes (Fig. 3F). According to the western blot analysis of astrocytes (Fig. 3G), β-catenin was overexpressed in the groups induced by M2- and LiCl-treated microglia. Moreover, iNOS, IL-1β, and TNFα increased in the groups treated with M1- and DKK-microglia (especially the DKK-treated group), and AQP4 was significantly upregulated in the M1-microglia-treated group (Fig. 3H). In RT-qPCR, C3 and GBP-2, regarded as A1 astrocyte markers, were remarkably increased in the M1- and DKK-microglia treatment groups, while the A2 astrocyte markers S100a10, Ptx3, Tm4sf1, Arginase-1, and Nrf-2 were upregulated in the M2- and LiCl-microglia-treated groups compared with the M1- and DKK-microglia-treated groups (Fig. 3I). The WNT/β-catenin pathway regulated the morphology and phenotype of astrocytes After treatment with LPS, we found morphological changes in astrocytes (Fig. 4A). The astrocyte area was significantly decreased 12 hours after LPS treatment and the neurite length increased, especially at 48 hours after LPS induction. To determine whether reactive astrocyte function changes, we examined the levels of iNOS, AQP4, IL-1β, and TNFα by western blot. The results revealed that iNOS and AQP4 increased at 12 hours post-treatment with LPS, while IL-1β increased at 8 hours and TNFα at 1 hour (Fig. 4B-C).
The morphology of astrocytes was examined under different treatment conditions (activated-MCM treatment for 24 h, OGD, and OGD/R); astrocytes showed a polygonal shape after treatment with OGD, OGD/R, or LiCl compared with treatment with activated MCM or DKK (Fig. 4D). Astrocytes subjected to OGD, OGD/R, or LiCl activated the WNT/β-catenin pathway and reduced the expression of AQP4 (Fig. 4E-F). In recent research, double immunofluorescence labeling with complement component 3 and glial fibrillary acidic protein (GFAP) has been used to label A1 astrocytes, while double immunofluorescence with S100a10 and GFAP labels A2 astrocytes. In addition, PTX3, TM4SF1, Nrf2, and Arginase-1 are also overexpressed in A2 astrocytes, whereas GBP2 can be used as an A1 astrocyte marker. In the RT-qPCR analysis, S100a10 and Tm4sf1 were overexpressed in the OGD/R and LiCl treatment groups, and Arginase-1 and Ptx3 were downregulated in the LPS, activated-MCM, and DKK-1 groups. Nrf-2 was overexpressed in the OGD/R and LiCl groups, but the change in the LiCl group was not statistically significant. GBP2 and C3q increased significantly in the activated-MCM and DKK-1 groups (Fig. 4G). WNT signaling could modify the proliferation and migration of reactive astrocytes. To evaluate the relationship between the phenotype of reactive astrocytes and the WNT signaling pathway, a cell wound-healing (scratch) assay was used. Before the scratch, the groups were pre-treated with LPS, activated-MCM, LiCl, DKK-1, or OGD/R; the neurite length of the astrocytes was remarkably decreased in the LiCl- and OGD/R-pretreated groups (Supplementary Fig. 2A, 2D-E). In the meantime, the healed area was increased in the activated-MCM- and DKK-treated groups (Supplementary Fig. 2B and 2F). We hypothesized that the reduced closure of the scratch area was due to inhibition of astrocyte proliferation. According to the EdU assay, the number of EdU+ astrocytes was decreased in the LiCl and OGD/R groups (Supplementary Fig. 2C and 2G). To assess the interactions among microglia, astrocytes, and neurons, transwell co-cultures were employed. Astrocytes and microglia were plated in the upper chamber and neurons in the bottom chamber (Supplementary Fig. 2H). The results showed that the migration of microglia did not change in any group (Supplementary Fig. 2I-J), but the number of migrating astrocytes increased significantly in the LPS-treated neuron group. However, astrocytes lost the ability to migrate if pre-treated with the agonist LiCl, regardless of whether the neurons were injured. The migration of astrocytes was weakened in the OGD/R-treated group compared with the LPS and DKK groups (Supplementary Fig. 2K). Different phenotypes of astrocytes regulated neuron synapse formation and synapse elimination In primary neurons, activation or inhibition of WNT signaling had no effect on synapse length, but activation of WNT promoted synapse elimination and increased synaptic plasticity during neuron development (Fig. 5A-C). To investigate the effect of microglia and astrocytes on neuron synapse formation, the microglia were first treated with LPS/IL-4 or LiCl/DKK. All types of microglia were plated in the upper chamber, and co-culture was performed after neurons had been plated in 6-well culture dishes for 36-48 h (Fig. 5D). Inserts with 0.4 µm and 8.0 µm pore diameters were used for the transwell assays; in the 0.4 µm inserts cells cannot migrate, while in the 8.0 µm inserts they can migrate freely.
We observed that the synapse outgrowth of primary neurons treated with M1- and DKK-microglia was inhibited and the number of synapses increased compared with the M2- and LiCl-microglia groups, especially the LiCl treatment group (Fig. 5E-F). Then, each type of astrocyte was plated in the upper chamber, and co-culture was performed as described above (Fig. 5G). We found that when reactive astrocytes lost the ability to migrate (astrocytes plated in 0.4 µm inserts), the synapses grew longer than with reactive astrocytes plated in 8.0 µm chambers, especially for resting and A2 astrocytes (Fig. 5H); meanwhile, the number of synapses per neuron was significantly reduced (Fig. 5I). In primary neurons cultured without astrocytes, the synapses remained complex and short (data not shown). To explore the regulatory relationship between the microglia-astrocyte-neuron axis and the WNT pathway, primary astrocytes were plated in the upper chamber and co-cultured with microglia in the bottom chamber. The astrocytes were then transferred to co-culture with primary neurons (Fig. 5J). When astrocytes had been induced by M2- and LiCl-microglia, the neurons showed longer synapses, particularly under the no-migration condition (Fig. 5K-L). It is known that limiting the migration of astrocytes can promote neuronal maturation. Neuronal apoptosis was aggravated by A1 astrocytes (activated-MCM) and DKK treatment, as shown by a decreased Bcl-2/Bax ratio and increased caspase-3. Conversely, astrocytes induced by M2- and LiCl-microglia showed neuroprotective activity, whereas Bax and caspase-3 were upregulated and Bcl-2 was downregulated by the M1- and DKK-microglia-astrocyte axis (Fig. 5M-P). To further explore the effect of the WNT-astrocyte-neuron axis, direct astrocyte-neuron co-culture was used to monitor neuronal apoptosis. TUNEL-positive cells were more abundant after LPS treatment, while LiCl dramatically reduced apoptosis in the mixed cultures. In the OGD/R group, the number of apoptotic astrocytes was reduced, but there was no difference in TUNEL-positive primary neurons among the groups (Fig. 5Q-R). Discussion In our study, we elucidate a novel protective mechanism of WNT signaling activation against TBI-induced deficits of neurological function and psychiatric disorder. We also provide evidence that activation of WNT signaling promotes recovery from anxiety- and depression-like behaviors and relieves excessive polarization of glial cells. Meanwhile, WNT signaling modulates the migration of astrocytes rather than microglia to affect neuronal plasticity. Based on primary cell cultures, microglia with increased WNT signaling can accelerate the generation of A2 or resting-state astrocytes. Importantly, the results indicate that the astrocyte plays a key role in driving neuron stabilization downstream of WNT signaling after CNS injuries. WNT signaling affects a series of neurodevelopmental processes. Previous studies have also revealed that abnormal expression of WNT can lead to disordered cognitive function and deficits of learning and memory [12,22] and may be associated with psychiatric disorders such as schizophrenia [23], autism spectrum disorder [24], and bipolar disorder [25]. In our model, the behavioral tests showed that anxiety and depression significantly increased after TBI, which fits with recent reports implicating an increased risk of anxiety and depression after TBI [26]. With DKK-1/salinomycin treatment, rats exhibited anxiety-like, depression-like, and impaired social interaction behaviors that lasted up to 28 days.
Rodents exhibit cognitive dysfunction and activation of glial cells after TBI [27]. In our findings here, WNT agonists could inhibit the activation of astrocytes and microglia: LiCl and Wnt3a inhibited microglia activation and decreased the formation of reactive astrocytes. Interestingly, microglial branches decreased and the cells swelled significantly after WNT inhibitor treatment. To explain this phenomenon, primary cell experiments were performed in this study. The results showed that primary astrocytes responded to WNT signaling later than neurons after MCM-LPS treatment. From this, a mechanism for the injury-induced glia-neuron interaction emerges: we propose that WNT signaling acts as a key signal regulating neurons through astrocytes and/or glia, and we explored this proposition experimentally. Therefore, we first investigated the relationship between the astrocyte-microglia interaction and WNT signaling. A WNT-induced switch is important in microglial M1/M2 phenotype regulation [28,29]. In addition to the above findings, in this study we found that different microglial phenotypes stimulated astrocyte polarization toward A1 or A2 (resting-like) astrocytes, and that activation of WNT affected the phenotype of microglia and thereby further changed astrocyte polarization. It has been shown that reactive astrocytes are strongly induced by CNS injury, always occur together with brain tissue defects, and that A1 astrocytes are strongly associated with pathogenic progression and induce neuronal deficits [30,31]. A1 astrocytes highly increase complement component 3 (C3), histocompatibility 2, D region locus 1 (H2-D1), and serpin family G member 1 (Serping1) [32]. A2 astrocytes upregulate many neurotrophic factors, including pentraxin 3 (Ptx3), S100 calcium-binding protein A10 (S100a10), and sphingosine-1-phosphate receptor 3 (S1pr3) [31,32]. A recent study indicated that A1 reactive astrocytes can be induced only by activated microglia stimulated by LPS [6]; however, we obtained some new findings in this study. We detected the A1- and A2-specific markers C3, GBP2, S100a10, Arginase-1, Ptx3, Tm4sf1, and Nrf-2. For the A1 astrocyte markers, C3 and GBP2 increased in the injury groups (treatment with LPS or LPS-MCM); meanwhile, inhibition of WNT signaling resulted in A1 astrocyte formation. The A2 astrocyte markers (S100a10, Arginase-1, Ptx3, Tm4sf1, and Nrf-2) were expressed at high levels in the OGD/R and LiCl treatment groups compared with the injury groups. These findings suggest that activated WNT signaling is beneficial to A2 astrocyte generation and that the WNT pathway can directly regulate the polarization of astrocytes even in the absence of microglia. In vitro, astrocytes generally present polygonal, bipolar, or stellate morphologies. LPS can induce morphological changes in cultured astrocytes, and astrocyte reactivity is upregulated by this change; the percentage of cells with bipolar and stellate shapes is higher than in resting astrocytes [33]. However, the morphological specificity of A1 and A2 astrocytes is still unknown. In our in vitro study, we found that the morphology of A2 astrocytes was similar to that of resting astrocytes, and an activated WNT signaling pathway helped maintain an anti-inflammatory state. A2 reactive astrocytes and resting astrocytes tended to adopt a polygonal fibroblast-like shape, whereas A1 reactive astrocytes were always bipolar or stellate.
In vitro, astrocytes from neonatal rodent brain show few processes and present a polygonal fibroblast-like shape under physiological conditions, and changes in astrocyte morphology, such as the withdrawal or outgrowth of astrocyte neurites, are expected to modify the signal exchange between astrocytes and neurons. However, further study is needed. Another interesting finding of our research is that LiCl and Wnt analogues had no significant effect on the formation of synapses in primary neurons but could promote synapse elimination, whereas M1- and M2-like microglia could only regulate the formation of synapses. We therefore assumed that this strong effect of WNT signaling may relate to the secretory and metabolic functions of astrocytes, a hypothesis supported by recent papers revealing that the release by astrocytes of glucose, glutamate, and intermediaries including cytokines and polypeptides is modified by the WNT pathway [34,35]. It has become increasingly clear that not only the formation of synapses but also the selective elimination of synapses is essential for the development and maintenance of synaptic connectivity patterns [36][37][38]. We therefore compared the effects of the WNT-astrocyte/neuron and WNT-microglia/astrocyte/neuron loops on synaptic plasticity and found that WNT signaling regulates the phenotype of astrocytes and thereby further affects neuronal function. Activation of WNT signaling helped maintain an anti-inflammatory condition by transforming microglia from the M1 to the M2 phenotype; however, this indirect modulation had only a weak influence on the development and maturation of neurons. Early research revealed that astrocytes modulate neuronal synapse function and development in cultured primary neurons, and that the presence of astrocytes greatly enhances synaptic activity and promotes the response to neurotransmitters [37]. In this study, it is noteworthy that astrocytes in which WNT was activated contributed to neuronal plasticity and synaptic elimination; meanwhile, activation of WNT in astrocytes decreased LPS-induced neuronal apoptosis. Importantly, migration of astrocytes is a disadvantageous factor for neuronal plasticity (Fig. 5H), but WNT agonists can counteract this effect and contribute to neuronal plasticity. For this reason, we believe that the WNT-astrocyte/neuron loop is the most important mechanism in the regulation of neuronal plasticity and function. Astrocyte-neuron crosstalk may be a more essential mechanism for improving brain function and preventing anxiety and other mental disorders after brain injury. Moreover, many studies have shown that A1 astrocytes lose the ability to induce synapse formation and promote neuronal death, whereas A2 astrocytes acquire phagocytic capacity and promote synaptic and neuronal survival [6,39]. In our experiments, the A2 astrocytes induced by WNT agonists exerted the greatest effects on the formation, development, and recovery of damaged neurons. This should be an important mechanism for the protective effect in the brain exerted via the regulation of astrocytes by WNT signaling. We also found that activation of the WNT signaling pathway inhibited the proliferation of primary astrocytes and maintained the integrity of damaged neurons (Fig. 6). This phenomenon may be related to the reduced proliferation of astrocytes and the inhibition of glial scar formation.
A limitation is that A2 reactive astrocytes and resting-state astrocytes showed similar functions in this study, and it was difficult to distinguish A2 from resting-state astrocytes accurately and precisely at the morphological, protein, and gene levels. Therefore, further studies are needed. Conclusions Overall, using multiple approaches, our research has identified that activation of the WNT pathway is implicated in neurofunctional recovery and can relieve psychiatric deficits post-TBI. Our findings support that the WNT/β-catenin signaling pathway can affect the polarization of glial cells (astrocytes and microglia). Activation of the WNT/β-catenin signaling pathway can maintain the resting state of glial cells and promote their polarization toward anti-inflammatory phenotypes. Glial cells play an important role in maintaining neuronal plasticity, and regulation through the WNT/β-catenin signaling pathway-astrocyte-neuron loop is necessary for neuronal plasticity. Abbreviations TBI Traumatic brain injury CNS Central nervous system. All studies involving animals were in accordance with NIH guidelines and all procedures were approved by the Soochow University Animal Care and Use Committee. Consent for publication All authors have given their consent for publication. Availability of data and materials All data used and analyzed for the current study are available from the corresponding author on reasonable request. The mRNA expression levels of S100a10, Ptx3, Tm4sf1, Arginase-1, Nrf-2, C3q, and GBP2 under different treatment conditions. *p<0.05, **p<0.01, ***p<0.001 vs. control group, n = 3-6 independent cell preparations. Neuroprotective mechanism diagram of the WNT signaling pathway in CNS trauma and diseases. Neuronal function is impaired after TBI, brain inflammation, and degenerative disease. Injured neurons promote astrocyte migration and proliferation, which induce the formation of the glial scar, an essential protective mechanism. However, the injury can directly stimulate resting astrocytes and generate A1 reactive astrocytes, which are hazardous to the regulation of neurons; in the meantime, M1 microglia can also induce the generation of A1 astrocytes and aggravate damage. Activation of the WNT/β-catenin signaling pathway is beneficial to the formation of A2 astrocytes, which promote neuroprotective effects, and M2 microglia can also be induced by the WNT pathway, contributing to the maintenance of A2 or resting astrocytes and exerting a neuroprotective mechanism. Moreover, activation of WNT can accelerate synaptic elimination and maintain the functional integrity of neurons. Supplementary Files This is a list of supplementary files associated with this preprint. Supplementarymaterial.docx
6,328
2021-06-07T00:00:00.000
[ "Medicine", "Biology" ]
Antamanide Analogs as Potential Inhibitors of Tyrosinase The tyrosinase enzyme, which catalyzes the hydroxylation of monophenols and the oxidation of o-diphenols, is typically involved in the synthesis of the dark product melanin starting from the amino acid tyrosine. Because the enzyme contributes to the browning of plant and fruit tissues and to hyperpigmentation of the skin, leading to melasma or age spots, the search for possible tyrosinase inhibitors has attracted much interest in the agri-food, cosmetic, and medicinal industries. In this study, we analyzed the capability of antamanide, a bioactive cyclic decapeptide from mushrooms, and of some of its glycine derivatives, compared with that of pseudostellarin A, a known tyrosinase inhibitor, to hinder tyrosinase activity using a spectrophotometric method. Additionally, computational docking studies were performed in order to elucidate the interactions occurring with the tyrosinase catalytic site. Our results show that antamanide did not exert any inhibitory activity. On the contrary, the three glycine derivatives AG9, AG6, and AOG9, which differ from each other in the position of a glycine that substitutes a phenylalanine of the parent molecule, improving water solubility and flexibility, showed tyrosinase inhibition in spectrophotometric assays. The analytical data were confirmed by computational studies. Introduction Mushrooms represent a significant source of natural bioactive peptides [1]. Indeed, mushrooms contain not only bioactive peptides derived from protein fragmentation but also small peptides, linear or cyclic, synthesized by the mushrooms themselves [2]. In past years, we evaluated the chemical and biological activities of the cyclic decapeptide antamanide, first isolated from the lipophilic fraction of an extract of Amanita phalloides by T. Wieland's group in 1962 [3]. A characteristic of this peptide is its ability to form complexes of high stability with metal ions, including Na+, K+, Ca2+, Tl3+, and others [4][5][6][7][8]. A close correlation has been found between the ion-binding properties, particularly the selectivity for Na+ over K+, and its antitoxic activity against phallotoxins and the prevention of their accumulation in liver cells [9]. Subsequent studies showed that antamanide possesses other biological activities, including an immunosuppressive activity comparable to that of cyclosporine A [10,11]. Moreover, antamanide treatment is also effective in limiting lung and heart edema [12], as well as in preventing the development of neoplastic cells in mouse models of leukemia [13]. Subsequently, the activity of antamanide and some linear and cyclic derivatives was also tested on a metastatic melanoma cell line; antamanide caused a significant reduction of cell proliferation after 24 h of treatment, while its glycine derivatives were initially inactive and showed cytostatic activity only 48 h after treatment [14]. Melanoma is an aggressive tumor that over-expresses tyrosinase [15], an enzyme widely distributed throughout the phylogenetic scale from bacteria to mammals. This copper-containing enzyme catalyzes two distinct reactions: the hydroxylation of monophenols and the oxidation of o-diphenols to o-quinones. Although the typical substrate of tyrosinase is the amino acid L-tyrosine (L-Tyr), the enzyme appears to have broad substrate specificity.
In the hydroxylation reaction, also known as monophenolase or cresolase activity, the enzyme passes through four different states (Edeoxy, Eoxy, Eoxy-M, and Emet-D), while in the subsequent oxidation of o-diphenols, known as diphenolase or catecholase activity, five enzyme states (Edeoxy, Eoxy, Eoxy-D, Emet, and Emet-D) are involved (Scheme 1) [16]. The o-quinones are generally reactive and can undergo 1,4-addition to the benzene ring to provide, after several steps, eumelanin and pheomelanin, the prototypes of melanin [17]. In fungi and vertebrates, tyrosinase catalyzes the initial step in the formation of the pigment melanin from tyrosine. In plants, the physiological substrates are a variety of phenolics that are oxidized in the browning pathway observed when tissues are injured. Indeed, the first biochemical investigations on this enzyme were carried out in 1895 on the mushroom Russula nigricans, whose cut flesh turns red and then black on exposure to air [18]. Considering the important role that tyrosinase plays in the processing of fruit and vegetables and during the storage of processed foods, as well as in the hyperpigmentation of the skin with melasma and age spots, its inhibition is attractive to the cosmetic, medicinal, and food industries. For this purpose, many natural and synthetic inhibitors have been developed [19] with the aim of obtaining new safe and efficient anti-tyrosinase agents for the prevention of browning in plant-derived foods and seafood and for hyperpigmentation treatments. In this study, we analyzed the capability of the bioactive peptide antamanide and some of its Gly-derivatives (Table 1 and Figure 1) to inhibit tyrosinase activity. The substitution of a Phe residue with Gly improves the water solubility of antamanide as well as its flexibility. The activity of the antamanide derivatives was compared with that of pseudostellarin A (PS-A), a cyclic peptide isolated from Pseudostellaria heterophylla that is able to inhibit tyrosinase activity [20]. A computational approach, molecular docking, was used to understand at the molecular level the interactions occurring between the enzyme catalytic site and the peptide inhibitors. Peptide Design and Synthesis Antamanide is a cyclic decapeptide isolated from the lipophilic fraction of an extract of the green toadstool Amanita phalloides, able to form complexes of high stability with different metal ions [4][5][6][7][8][9]. In order to improve the hydrophilicity of this peptide, the Phe residue at either position six or nine was replaced with a Gly residue. Previous studies showed that the substitution of Gly for Phe at these positions did not induce a loss of the characteristic ion-binding properties, maintaining the ion selectivity determined for antamanide (Ca > Na >> K) [21], while the cytotoxic activity of the derivatives on B16F10 metastatic cells is reduced compared with the parent peptide. Indeed, only the AG9 peptide showed comparable activity after 48 h of treatment, while both AG6 and AOG9 proved non-cytotoxic [14]. Pseudostellarin A is a cyclic pentapeptide isolated from the roots of Pseudostellaria heterophylla [22] characterized by tyrosinase inhibitory activity; in this study, it was used as a reference standard against which to compare the inhibitory activity of the antamanide analogs. The linear peptides were synthesized by manual solid-phase peptide synthesis starting from a preloaded Fmoc-Ala-Wang resin using Fmoc/HBTU chemistry [23].
To prevent the formation of byproducts due to deletion reactions in either the Pro-Pro or the Xaa-Pro sequences, double coupling of the Fmoc-Pro-OH or Fmoc-Xaa-OH residue was performed using HATU as the coupling agent. After assembly, the peptide was detached from the resin by treatment with TFA, and the crude peptide was cyclized in dilute DMF solution (1 mM) by addition of DPPA (diphenylphosphoryl azide, 3 eq) as a coupling reagent in the presence of a poorly soluble inorganic base (K2HPO4, 5 eq). After RP-HPLC purification, the peptides were obtained in good yield with a purity of 95%. Tyrosinase Inhibition The capability of the peptides to inhibit the activity of tyrosinase was evaluated by a UV-Vis spectroscopic method monitoring the changes in the UV-Vis spectrum of its natural substrate, L-Tyr. As shown in Figure 2A, the near-UV spectrum of tyrosine in a buffer solution before the addition of the enzyme is characterized by the presence of a band at about 275 nm due to the Lb transition (in Platt's notation [24]) of the phenolic moiety of the tyrosine residue. The addition of the enzyme first hydroxylates L-Tyr to dihydroxyphenylalanine (DOPA) and then oxidizes DOPA to dopaquinone, which is subsequently converted to dopachrome (Scheme 2). The appearance of both DOPA and dopaquinone can be detected by changes in the UV-Vis spectrum. The hydroxylation of L-Tyr to DOPA is characterized by an increase in the intensity of the band at 275 nm and the appearance of an additional band at about 304 nm, while the oxidation to dopaquinone is detectable by the appearance of a band at 470-490 nm. The time course of tyrosine oxidation (Figure 2B), obtained by monitoring the absorbance at either 477 or 304 nm, showed the lag phase characteristic of reactions catalyzed by tyrosinase. This phase is related to the monophenolase activity of the enzyme and is essentially due to the low amount of the oxy form of tyrosinase in the commercially available preparation. Various other factors may also play an important role, including substrate and enzyme concentration, enzyme source, pH of the medium, and the presence of hydrogen donors such as DOPA or other catechols; the presence of transition metal ions can also influence the occurrence and length of this lag phase. Millimolar peptide stock solutions were prepared in DMSO and then diluted to µM concentrations with buffer in the UV cells. The absence of any scattering signal in the UV spectra indicated that the peptides and tyrosinase did not aggregate or precipitate [25]. The capability of the peptides to inhibit tyrosinase activity was evaluated at two different peptide concentrations, 2 and 0.2 µM. At both these concentrations, the parent peptide antamanide did not inhibit tyrosinase activity, and the time courses were superimposable on that obtained for tyrosine alone (Figure S1 in Supplementary Materials). The absence of any effect on tyrosinase activity persisted even when the antamanide concentration was increased to 20 µM (data not shown). In the presence of the antamanide derivatives as well as PS-A, a decrease in tyrosinase activity was detected and the lag phase observed for the Tyr substrate alone disappeared, while the band at 477 nm due to dopaquinone and dopachrome reached a maximum and then decreased in a time-dependent manner (Figures 3 and 4).
Under these conditions, among the investigated cyclopeptides, AG9 showed the highest tyrosinase inhibitory activity, whereas AOG9 was the least active (Figures 3 and 4). The inhibitory efficiency of the antamanide derivatives, calculated at 303 and 477 nm, revealed selectivity between the two reactions catalyzed by tyrosinase. All the Gly-peptides and pseudostellarin A efficiently inhibited the oxidation of DOPA to dopaquinone (by about 30%) while having low activity towards the hydroxylation of tyrosine. In particular, the AOG9 peptide showed the lowest inhibitory activity towards L-Tyr hydroxylation, while at 2 µM it displayed an inhibitory activity towards DOPA oxidation comparable to that of the other peptides. The AG9 peptide showed similar activity, comparable to that of PS-A, towards both reactions. Surprisingly, AG6 and PS-A showed an unusual inhibitory behavior towards tyrosinase; indeed, their inhibitory activity was lower when the concentration was higher (Figure 4). This unusual behavior will need to be the subject of further research. A possible explanation is the presence of additional low-affinity peptide-enzyme binding sites that may sequester part of the available peptide; since the affinity for such additional sites is low, these interactions occur only at high peptide concentrations. Docking Studies To elucidate the interactions occurring between the tyrosinase catalytic site and the investigated cyclopeptides, molecular docking studies were carried out using the structure of the tyrosinase isolated from Bacillus megaterium crystallized with or without a molecule of kojic acid (PDB files: 3NQ1 and 3NM8, respectively) [26]. Two Cu(II) ions, bridged by one molecule of water and surrounded by six histidine residues, are present in the active site of the enzyme (Figure 5). Each copper ion is coordinated by three histidines, His42, His60, and His69 for one ion and His204, His208, and His231 for the other. Recently, it has been proposed that both monophenols and o-diphenols bind to the same copper ion, since a reorganization of the phenolate around the two copper ions occurs in the diphenolase form. The presence of a water molecule bridging the Cu(II) ions and the absence of bonds between dioxygen and Cu(II) are representative of the met form of the protein, which is involved in the diphenolase activity [27]. The crystal structure solved with the inhibitor kojic acid (KA) showed that this molecule is positioned at the entrance of the catalytic site, 7 Å away from the copper center. The acid establishes strong interactions with the amino acid residues Phe197, Pro201, Asn205, and Arg209, which characterize the rim of a channel facing the catalytic site and stabilize the entrance of KA (Figure 5B) [26]. In this representation, KA does not establish strong interactions with the water molecule bound to Cu(II), because the acid is positioned near the entrance of the active site. On the contrary, the best molecular docking pose of KA (roughly 73% of poses) evidenced strong interactions (hydrophobic and H-bond) with the His204, Asn205, and His208 residues positioned in the catalytic site (Tables S1 and S2 of the Supplementary Materials). The mechanism of interaction with the active site of the enzyme is similar for all the analyzed cyclopeptides.
Due to the relatively large dimensions and the features of the molecules, they are positioned over the active site as a cap held in place by interactions such as hydrogen bonds and hydrophobic contacts, with some side chains of the cyclopeptide oriented towards the two copper atoms of the active site. Although antamanide and its analogs can form complexes with different metal ions, there is, however, no evidence of the formation of stable complexes with the Cu(II) ions present in the catalytic site of the enzyme. The conformations of the two best docking poses of antamanide are quite similar (Figure S3), evidencing a symmetry in the molecule, with a Ki ranging from 2.59 to 3.27 µM. During the catalytic process, which involves modulation of the molecular volume of the catalytic site by Arg209, the antamanide structure is likely less flexible than those of its derivatives, allowing the entrance of the substrate even at a high concentration of the compound. All the cyclopeptides interact with a set of amino acid residues (Met61, Phe197, Pro201, Asn205, Arg209, Gly216, Val217, and Val218) positioned at the entrance of the channel facing the catalytic center, with estimated Ki values in the µM range (Table S1). It has been suggested that Val218 plays a key role in the catalytic process. This residue, conformationally flexible and close to the first copper ion, is likely involved in the catalytic process by controlling the entrance of the substrate into the active site [28]. The different interactions that Val218 establishes with each cyclopeptide can compromise its main function of directing the substrate correctly. Analogously, Arg209, positioned near the entrance of the catalytic site, can expand or contract the molecular volume of the active site by virtue of its high conformational flexibility, which allows it to modulate the interactions with the surrounding amino acid residues [28,29]. As shown for KA, the set of amino acids Phe197, Pro201, Asn205, and Arg209 appears to stabilize the cyclopeptides positioned at the rim of the site; in this conformation, the cyclopeptide would hinder the entrance of the substrate. The affinity that each cyclopeptide exerts on the amino acids surrounding this part of the site would affect the role of Arg209 and Val218 in enzyme activity. Indeed, only AG9 and PS-A bind to Val218, Asn205, and Arg209 through strong hydrophobic and H-bond interactions, whereas AG6 establishes strong hydrophobic interactions only with Val218 and Arg209, consistent with the lower inhibitory activity observed in the spectrophotometric assay (Figure 6, Tables S1 and S2). AG6 and AG9 are antamanide derivatives that differ from each other in the position of the glycine that substitutes a phenylalanine of the parent molecule (Table 1). Molecular docking of AG6 and AG9 showed that the presence and position of the glycine residue modify the conformation of the cyclopeptide, influencing its ability to establish interactions with the amino acids near the catalytic center. Although the best docking pose of antamanide establishes strong binding between the phenyl moieties of the molecule and Asn205 and Arg209 of the protein, the cyclopeptide forms only one hydrogen bond, between the oxygen of the peptide bond of proline 7 and Val218, confirming the crucial role that this amino acid plays in the catalytic process of tyrosinase [28].
The cyclopeptide assumes a conformation that permits the entrance of the substrate and, in contrast with the other cyclopeptides, no interaction with any histidine residue of the catalytic center was predicted for antamanide (Figures S2 and S3 in the Supplementary Materials). Consistently, no inhibitory activity was detected for antamanide in the spectrophotometric tyrosinase assays. For AG9, two strong hydrophobic interactions (π-π stacking) were evidenced: proline 7 and glycine 9 of the cyclopeptide with Arg209, and proline 8 with Asn205. Leucine 4 of PS-A interacts strongly with the aliphatic chain and the carbonyl of Val218, and cross-bridge interactions between the phenolic OH group of tyrosine 3 and Asn205 were calculated. AOG9, the octapeptide lacking the Phe5 and Phe6 residues of AG9, possesses a conformation that generates 99% of its docking poses at the lowest estimated free energy of binding, −7.55 kcal/mol. The molecular conformation of AOG9 forces the molecule to assume only one energetically favored position at the entrance of the catalytic site, in such a way as to prevent the strong hydrophobic interactions with Val218 that AG9, AG6, and PS-A establish in their lowest-energy docking poses. This is in agreement with the spectrophotometric assay, which confirms that only weak tyrosinase inhibition occurs in the presence of AOG9. Figure 7 shows the lowest-energy docking poses of AG9, AG6, PS-A, and AOG9 inside the catalytic site of the protein. The cyclopeptides AG6, AG9, and, to a lesser extent, PS-A interact with a set of amino acids located at the rim of the channel and representative of a wide area of the protein: Gly46, Lys47, His49, Asp55, and Asn57. While the best docking poses of AG6, AOG9, and PS-A interact with at least one histidine of the catalytic center, suggesting a possible inhibition, no interaction with histidine residues of the catalytic center was predicted for the best pose of AG9. The conformational flexibility of AG9 permits the molecule to cover the entrance of the catalytic site through strong interactions with Gly46, Lys47, Asp55, Asn57, Met61, and Glu158, suggesting that the real role of these cyclopeptides is to hamper the entrance of the substrate. In particular, interactions with Glu158 and Pro201, positioned on the opposite side with respect to the set of amino acids Gly46, Lys47, Asp55, Asn57, and Met61, allow AG9 to cover the whole perimeter of the rim of the channel (Figure S4 in the Supplementary Materials). Peptide Synthesis Peptides were synthesized by manual solid-phase synthesis using Fmoc chemistry on a 0.06 mmol scale. HBTU/HOBt activation employed a three-fold molar excess (0.24 mmol) of Fmoc-amino acids in DMF for each coupling cycle. Coupling times were 40 min. Fmoc deprotection was performed with 20% piperidine in DMF. Coupling yields were monitored on aliquots of peptide-resin either by the Kaiser test for free amino groups or by evaluation of Fmoc displacement [31]. Cleavage from the resin was performed by treatment with TFA-anisole-triisopropylsilane-H2O (95:2.5:2.0:0.5 v/v) for 45 min. Peptides were cyclized in 1 mM DMF solution by addition of 2 eq. of DPPA in the presence of solid Na2HPO4 (5 eq) [32]. Peptides were purified by preparative reversed-phase HPLC using a Shimadzu LC-8 (Shimadzu, Kyoto, Japan) system with a Vydac 218TP1022, 10 µ, 250 × 22 mm column (Dionex, Sunnyvale, CA, USA).
The column was perfused at a flow rate of 12 mL/min with a mobile phase composed of solvent A (0.05% TFA in water) and solvent B (0.05% TFA in acetonitrile/water, 9:1 by vol.), and a linear binary gradient was used. The fractions containing the desired products were collected and lyophilized to constant weight. Analytical HPLC analyses were performed on a Shimadzu LC-10 instrument fitted with a Jupiter C18, 10 µ, 250 × 4.6 mm column (Phenomenex, Torrance, CA, USA) using the above solvent system (solvents A and B), a flow rate of 1 mL/min, and detection at 216 nm. All peptides showed less than 1% of impurities. The molecular weights of the compounds were determined by ESI-MS on a Mariner (PerSeptive Biosystems, Framingham, MA, USA) mass spectrometer. Masses were assigned using a mixture of neurotensin, angiotensin, and bradykinin at a concentration of 1 pmol/µL as an external standard. Tyrosinase Inhibition The capability of the cyclic peptides to inhibit the oxidation of Tyr by the tyrosinase/O2 oxidizing system was monitored by means of a UV-Vis spectroscopic method, carried out on a Shimadzu UV-2501 UV-Visible spectrophotometer using a Helma (Mülheim, Germany) dual-chamber UV cell (2 × 4.375 mm). Briefly, 880 µL of a Tyr solution (2 mM) and 900 µL of a mushroom tyrosinase solution (2000 U/mL, Sigma-Aldrich), both dissolved in 50 mM phosphate buffer (pH 6.8), were separately introduced into the two chambers of the dual UV cell. A total of 20 µL of an appropriate DMSO stock solution of the inhibitors (1.8, 0.18, or 0.018 mM) was added to the chamber containing the Tyr solution. The tyrosinase activity in the absence of inhibitors was determined by adding 20 µL of DMSO. A background UV-Vis spectrum was acquired in the 250-800 nm wavelength range with a 2.0 nm slit and fast scan speed before starting the reaction. Thereafter, the solutions were mixed, and UV-Vis spectra were acquired at different time points after mixing. Three replicates were performed for each experiment. Data processing was performed with UV Probe (Shimadzu, Kyoto, Japan) and OriginPro 2019 (v. 9.60) software (OriginLab Corporation, Northampton, MA, USA). The purchased mushroom tyrosinase was used without further purification. Both the L-tyrosine and tyrosinase solutions were filtered through a 0.45 µm syringe filter, aliquoted into Eppendorf tubes, and stored in a freezer. The concentrations of the two solutions were determined spectrophotometrically (εTyr at 274.6 nm = 1.420; εtyrosinase at 280 nm = 1.426) [33,34]. Tyrosinase-dependent diphenol and o-quinone formation was assessed as the appearance of characteristic absorption bands at about 304 and 477 nm, respectively. The inhibition activity was determined according to the following equation [33,34]: inhibition (%) = {[(B − A) − (D − C)]/(B − A)} × 100, where C and D are the absorbance values at either 304 or 477 nm in the presence of inhibitor before mixing and upon reaching the maximum absorbance, respectively, while A and B are the corresponding values in the absence of inhibitor.
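To make the relation between the four absorbance readings and the reported percent inhibition concrete, a small numerical sketch is given below. It assumes the inhibition formula reconstructed above from the variable definitions, and the absorbance values in it are invented purely for illustration, not measured data.

```python
# Minimal sketch of the percent-inhibition calculation described above, assuming
# the reconstructed relation: inhibition (%) = [(B - A) - (D - C)] / (B - A) * 100.
# The absorbance readings below are hypothetical, not measured data.

def inhibition_percent(A: float, B: float, C: float, D: float) -> float:
    """A, B: absorbance without inhibitor (before mixing, at maximum).
    C, D: absorbance with inhibitor (before mixing, at maximum)."""
    uninhibited_rise = B - A   # absorbance increase of the control reaction
    inhibited_rise = D - C     # absorbance increase with the peptide present
    return (uninhibited_rise - inhibited_rise) / uninhibited_rise * 100.0

# Hypothetical readings at 477 nm (dopaquinone/dopachrome band).
print(f"inhibition = {inhibition_percent(A=0.05, B=0.85, C=0.06, D=0.62):.1f} %")
```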
An extensive conformational search was carried out using the Monte Carlo/energy minimization method [35] (Ei − Emin < 5 kcal/mol, the energy difference between the generated conformation and the current minimum). The atomic charges were assigned using the Gasteiger-Marsili method [36]. Representative minimum-energy conformations of each compound were optimized using density functional theory (DFT) with the quantum chemistry program Gaussian 09W [37] and the B3LYP method with the 6-31G basis set [38]. Visual analysis was performed with GaussView version 5.0 [39]. For graphical display, CHIMERA version 1.8 was used [40]. All ligands were docked with all bonds free to rotate. Structure refinement and computational docking procedures. Binding of the compounds was analyzed using the AutoDockTools 1.5.7rc1 and AutoDock 4.2.5.1 docking programs [41]. The starting tyrosinase protein was prepared from the 2.00 Å resolution crystal structure deposited by Sendovski et al. [26] (PDB file: 3NM8). The binding site of 3NM8 was determined by comparing the position of the tyrosinase inhibitor (kojic acid) as present in the 3NQ1 X-ray structure [26]. Given the inhibitor-containing structure (KA, PDB 3NQ1), the docking was performed using a grid of 60 × 60 × 60 points, 0.375 Å spacing, and center (−10, 20, −5), in order to circumscribe the interaction area of the catalytic site and thereby reduce computation time. Then, considering that the crystal structures 3NQ1 and 3NM8 overlap, the docking of the compounds was performed using the 3NM8 crystal structure, which lacks the ligand. Additionally, some tests were run considering the whole enzyme in order to search for other interesting sites. The crystallographic water molecules and the Zn and Cl ions not involved in the catalytic process were stripped. Conversely, the two copper ions and the water molecule present in the catalytic site were conserved. Hydrogen atoms were added using the ADT module of the MGLTools 1.5.7rc1 suite. The Gasteiger charges [36] of AutoDock were used for the ligands and the protein. The structures were docked using the Lamarckian genetic algorithm (LGA), utilizing the default grid spacing, treating the active docking site as a rigid molecule and the ligands as flexible, i.e., all non-ring torsions were considered active. Up to 100 LGA runs were employed, with a population size of 150 individuals, a maximum number of generations of 27,000, and a maximum number of energy evaluations of 25,000,000. The estimated inhibition constant (Ki_ex) is modeled by the equation Ki_ex = exp(ΔG_ex/(RT)), where ΔG_ex is a semiempirical free-energy approximation (derived from molecular mechanics and experimental parameters), R is 1.98719 cal·K−1·mol−1, and T is 298.15 K. Notably, the result is not a true free energy but only an approximation, used to obtain an estimate of Ki and to simplify the comparison between the experimental and docking results. Conclusions The identification of inhibitors of tyrosinase activity represents an attractive goal not only in the food industry, to preserve fruits and vegetables during storage, but also in the cosmetic and medicinal industries. Here we investigated the capability of the natural bioactive peptide antamanide, isolated from mushrooms, and of its analogs to inhibit tyrosinase activity.
While antamanide did not inhibit tyrosinase activity, three glycine derivatives, AG9, AG6, and AOG9, were identified as tyrosinase inhibitors by spectrophotometric assays and computational studies. AG9 showed higher inhibitory activity in comparison to PS-A, a natural and well-known tyrosinase inhibitor. Molecular docking performed on the B. megaterium tyrosinase protein revealed the binding affinity of cyclopeptides AG9, AG6, AOG9, and PS-A for the catalytic site of the protein, in agreement with the catalytic process suggested in previous studies for conventional tyrosinase inhibitors. The structural features of cyclopeptides AG9, AG6, and AOG9 suggest an additional mode of action at the entrance of the catalytic site, whereby the cyclopeptides hamper the entry of the substrate. Conflicts of Interest: The authors declare no conflict of interest.
6,122.2
2022-06-01T00:00:00.000
[ "Chemistry", "Medicine", "Agricultural and Food Sciences" ]
Economic Deprivation and Its Effects on Childhood Conduct Problems: The Mediating Role of Family Stress and Investment Factors This study investigated the mechanisms by which experiences of poverty influence the trajectory of conduct problems among preschool children. Drawing on two theoretical perspectives, we focused on family stress (stress and harsh discipline) and investment variables (educational investment, nutrition, and cognitive ability) as key mediators. Structural equation modeling techniques with prospective longitudinal data from the Growing Up in Scotland survey (N = 3,375) were used. Economic deprivation measured around the first birthday of the sample children had both direct and indirect effects on conduct problems across time (ages 4, 5, and 6). In line with the family stress hypothesis, higher levels of childhood poverty predicted conduct problems across time through increased parental stress and punitive discipline. Consistent with the investment model, childhood deprivation was associated with higher levels of conduct problems via educational investment and cognitive ability. The study extends previous knowledge on the mechanisms of this effect by demonstrating that cognitive ability is a key mediator between poverty and the trajectory of childhood conduct problems. This suggests that interventions aimed at reducing child conduct problems should be expanded to address factors that compromise parenting as well as to improve child cognitive ability. INTRODUCTION Strong associations exist between poverty in early childhood and problem behavior in later life (e.g., Dearing et al., 2006; Sun et al., 2015; Mazza et al., 2016). While not all children living in economic hardship go on to display conduct problems, a disproportionately high number of children with conduct problems tend to come from families living in poverty (Boe et al., 2012). Evidence from longitudinal studies (e.g., Kiernan and Huerta, 2008; Rijlaarsdam et al., 2013) has identified poverty in early childhood as a risk antecedent to problem behavior across the lifespan. Additionally, experimental and longitudinal findings demonstrate that changes in family income directly lead to changes in child conduct problems (Costello et al., 2003; Morris and Gennetian, 2003; Votruba-Drzal, 2006). While these findings suggest a causal link between poverty and conduct problems, the mechanism by which economic deprivation leads to conduct problems remains unclear. Poverty and Conduct Problems: Theories on the Mechanisms of Effect Two theoretical perspectives that have been extensively deployed to explain this mechanism are the family stress model and the investment model (Mayer, 1997; Conger et al., 2010). Both theories posit an indirect effect of poverty on childhood conduct problems. Boss et al. (2017, p. 4) defined family stress as "a disturbance in the steady state of the family system." Such a disturbance may be due to external factors, such as unemployment, or internal factors, such as divorce. Others (e.g., McCubbin et al., 1980) have conceptualized family stress as the response of a family to distressing life events and the tensions generated by these events. According to the family stress model, economic deprivation induces psychological distress, such as depression, anxiety, and parental stress, due to the strain of having fewer resources available for day-to-day living.
Such stressors are associated with frustration and aggressive interactions (Berkowitz, 1989) which in turn lead parents to adopt punitive or unresponsive parenting styles with consequences for childhood conduct trajectories . Support for this model comes from studies demonstrating a link between poverty, parental psychological distress, punitive discipline, and conduct problems (Gershoff et al., 2007;Kiernan and Huerta, 2008;Rijlaarsdam et al., 2013). Family investment on the other hand is defined as the amount of money parents put into purchasing quality education, nutrition, health, good neighborhood, and other inputs that improves a child's future well-being (Mayer, 2002). This investment is determined by a family's income. The investment model proposes that poverty restricts parents' ability to provide enriching educational experiences and services, as well as sufficiently nutritious diets. This in turn leads to lower cognitive abilities with potential consequences for other developmental domains (Mayer, 1997;Conger et al., 2010). Economic deprivation has been found to longitudinally predict low educational investment and consequently cognitive abilities (Kiernan and Huerta, 2008;Sun et al., 2015). Additionally, changes in parental economic circumstances predict investment in nutritious diets (Skafida and Treanor, 2014), and childhood malnutrition has been linked to low cognitive ability and conduct problems in adolescence (Galler et al., 2012). Recent extensive reviews of the application of the family stress and investment models show that very few studies (e.g., Guo and Harris, 2000;Yeung et al., 2002) have simultaneously integrated elements from the two models in understanding a single developmental outcome such as, conduct problems Shaw and Shelleby, 2014). Most studies employing both models in a single study have used them to explain different outcomes, that is, the family stress model being used to explain behavioral outcomes and the investment model to explain cognitive outcomes (e.g., Gershoff et al., 2007;Kiernan and Huerta, 2008). Where both models have been used to explore pathways from poverty to conduct problems (e.g., Linver et al., 2002;Rijlaarsdam et al., 2013), these were not directly predicted from the main consequence of low investment, that is, cognitive ability. It is well established that poverty directly stunts the development of those cognitive competences (e.g., executive function, language, working memory, and decision making) that underpin children's emotional and self-regulatory responses (Noble et al., 2005;Farah et al., 2006), mechanisms that are directly linked to conduct problems or tendency to take on prosocial roles such as, standing up to bullies (Belacchi and Farina, 2010;Montroy et al., 2014). Concurrent association studies have also found that cognitive ability predicts conduct problems (e.g., Bellanti and Bierman, 2000). Further, Galler et al. (2012) found that the effect of childhood malnutrition on conduct problems in adolescence was mediated by cognitive ability. It is therefore no surprise that interventions aimed at improving cognitive ability and underpinning processes such as, emotional regulation also lead to improvements in child conduct problems or gains in prosocial behavior, and those aimed at improving behavior result in cognitive benefits (Lunkenheimer et al., 2008;Scott et al., 2010;Ornaghi et al., 2017). In other words, an investment pathway from poverty to conduct problems should include cognitive ability as a key mediator. 
Closely linked to the above are calls to explore other pathways between poverty and childhood outcomes within the context of these models. For instance, Shaw and Shelleby (2014) argued for the testing of a direct path between parental distress and childhood conduct problems, beyond the indirect effect through parenting because associations between parental distress and conduct problems may depend on factors other than compromised parenting. One argument is that maternal psychological distress can have direct effects on childhood conduct problems through heritability of negative traits linked to conduct problems during pregnancy (Goldsmith et al., 1997;Kim-Cohen et al., 2005). According to Shaw and Shelleby (2014), parental stress during pregnancy can induce neuroendocrine alterations which in turn lead to development of negative traits, such as, irritability, associated with conduct problems. Other researchers have documented direct effects between economic deprivation and conduct problems (Kiernan and Huerta, 2008), suggesting that the effect of poverty may not be completely mediated by family stress and investment variables. Further, researchers have critiqued the limited use of longitudinal data in testing these models among children Shaw and Shelleby, 2014). We came across only one study that used data matching the temporal ordering of predictors, mediators and outcome variables (i.e., Rijlaarsdam et al., 2013). Additionally, only one recent longitudinal study using the family stress model (e.g., Mazza et al., 2016) have examined the effect of deprivation on conduct problems over time, and we are not aware of any study combining both stress and investment mediators to examine conduct problems over time. Focus of the Current Study The current longitudinal prospective study was conceptualized to examine pathways by which experiences of economic deprivation in early childhood influence the trajectory of conduct problems during the preschool years. We focused on the preschool years because familial economic circumstances during the early years are crucial for development (Votruba-Drzal, 2006). At this age, children are highly dependent on their families and therefore more sensitive to contextual influences such as, poverty (e.g., Bronfenbrenner, 1977). To achieve our research goal, we integrated elements from both the family stress model and the investment model. We simultaneously examined the extent to which resultant family stress variables (stress and harsh parenting) and investment variables (educational investment, nutrition and cognitive ability) mediate the relationship between economic hardship measured when children were 10 months old, and trajectory of conduct problems from ages 4 to 6. We hypothesized the following (Figure 1): 1. Parental economic deprivation will have a direct effect on the trajectory of conduct problems (i.e., higher conduct problems across ages 4, 5, and 6). 2. Parental economic deprivation will have an indirect effect on the trajectory of conduct problems via increased parental psychological distress and punitive parenting. 3. Parental economic deprivation will have an indirect effect on the trajectory of conduct problems via low educational investment, poor nutrition, and low child cognitive ability. 
Considering that early experiences of poverty can lead to childhood conduct problems, and childhood conduct problems are risk antecedents for poverty in adulthood (Fergusson et al., 2005), we envisage the findings to offer information on effective strategies for prevention and intervention that might break this cycle. Data and Participants Data from the Growing Up in Scotland Survey (GUS), a national longitudinal survey, was used for this study. To ensure a nationally representative sample, a multi-stage stratified random sampling technique of all eligible children within a cohort year was employed. Data were obtained annually through face-toface interviews with the child's main caregiver (mostly the child's mother, 95.5% of respondents). A detailed description of the sampling procedure and method of data collection is available on the GUS webpage and in the official user guide (ScotCen Social Research, 2013). For the current study, data from wave 1 (obtained in 2005/06) to wave 6 (obtained in 2010/11) of the first Birth Cohort survey were used. Wave 1 data was collected when the children were 10.5 months old. Subsequent waves were obtained at 22, 34.5, 46, 58, and 70 months, respectively. A total of 5,217 children born between June 2004 and May 2005 were recruited for the initial survey in wave 1. Of these, 3,375 participants who responded to all six waves of data collection were retained for analysis. This represents 94.2% of all eligible respondents (those who completed all previous 5 waves) and 64.7% of all Wave 1 cases. To overcome limitations of sample attrition, birth cohort longitudinal weights were used in the analyses to help attenuate biases associated with non-random attrition (ScotCen Social Research, 2013). The sample children consisted of 51.3% male and 48.7% female. Ethnicity of the cohort children as designated in the GUS dataset was 96.5% "White" and 3.5% "Other ethnic background." Ethical Approval The GUS study was subject to a medical ethical review and received approval from the Scotland "A" MREC committee (application reference: 04/M RE 1 0/59). Approval for the use of the data for this study was obtained through the UK Data Service. Measures A strategy was adopted to select variables sequentially to reflect the hypothesized pathways (Figure 1). The dependent variable, conduct problems was measured at ages 4, 5, and 6. The predictor variables, parental economic deprivation was measured at age 1, psychological distress, educational investment, and nutrition at age 2, child cognitive ability at age 3, harsh discipline although measured at age 4 reflected parental behavior at age 3. Descriptive statistics are represented in Table 1. Conduct Problems Child conduct problems were measured using five items from Goodman's (1997) Strength and Difficulties Questionnaire (SDQ). The SDQ has good structural, concurrent, discriminant, convergent, and predictive validity (e.g., Kersten et al., 2016), and it is measurement invariant across time (Sosu and Schmidt, 2017). The instrument was administered to parents when their children were just under 4, 5, and 6 years of age. Parents were asked to indicate the extent to which the sample child engages in five specific behaviors (tantrums; fights; lies; steals; and obedient which was reverse coded). The five items were measured on a 3-point scale (0 = Not true; 1 = Somewhat true; 2 = Certainly). 
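The conduct-problems measure described above can be made concrete with a small sketch. This is not the authors' code: the dictionary keys and function names are placeholders, and it simply applies the scoring just described (five items rated 0-2, with the obedience item reverse-coded) together with Goodman's (1997) cut-offs reported later in the Results.

```python
# Hypothetical illustration of scoring the five-item SDQ conduct-problems subscale.
# Item responses: 0 = Not true, 1 = Somewhat true, 2 = Certainly true.

ITEMS = ["tantrums", "fights", "lies", "steals", "obedient"]  # "obedient" is reverse-coded

def conduct_score(responses: dict) -> int:
    """Sum the five items (possible range 0-10), reverse-coding the 'obedient' item."""
    total = 0
    for item in ITEMS:
        value = responses[item]
        total += (2 - value) if item == "obedient" else value
    return total

def goodman_band(score: int) -> str:
    """Goodman's (1997) cut-offs: 0-2 normal, 3 borderline, 4-10 abnormal."""
    if score <= 2:
        return "normal"
    return "borderline" if score == 3 else "abnormal"

example = {"tantrums": 2, "fights": 1, "lies": 0, "steals": 0, "obedient": 1}
print(conduct_score(example), goodman_band(conduct_score(example)))  # -> 4 abnormal
```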
Due to the polytomous nature of the conduct problems response scale, we explored reliability within a structural equation modeling framework (Brown, 2015). More specifically, we tested for longitudinal measurement invariance to enable us to judge whether the scale was configural, metric, or scalar invariant over time. Such information is crucial for longitudinal studies as it tells us whether respondents' understanding of the items and constructs measured by an instrument remains the same across time. Findings from this analysis suggest the conduct problems scale is reliable (see the results section reporting on item reliability and measurement invariance). Parental Economic Deprivation Two items, equivalised income and subjective poverty, were used to measure parental economic deprivation. These were obtained when children were about 10 months of age. To measure equivalised income, parents were first asked to select, from a range of 17 income bands (1 = less than £3,999 to 17 = £56,000 or more), the amount that best represented their family income before tax, including all state benefits and interest. All income bands between the minimum and maximum described above had a range of about £2,000 (i.e., £4,000-5,999; £6,000-7,999, etc.). The figures were then equivalised by adjusting for differences in household size and composition (see e.g., Scottish Government, 2009; Bradshaw et al., 2015) and converted into quintiles with a range from 1 (>£33,571) to 5 (<£8,410). Subjective poverty was measured through perceived economic pressure. Parents were asked to rate how they feel about managing on their present income. Responses were on a 5-point scale ranging from 1 (Living very comfortably on present income) to 5 (Finding it very difficult on present income). Higher scores on both items represent a higher level of deprivation. Nutrition Nutrition was measured using two items obtained from parents when the children were 2 years old. Parents were asked to indicate how many different types of fresh, frozen, or tinned fruit and vegetables their child eats on a typical day. Responses were on a 5-point scale (0 = more than five to 4 = none), with higher scores indicating poorer nutrition. These two items were chosen in line with previous studies indicating significant associations between income deprivation and consumption of fruits and vegetables (Skafida and Treanor, 2014). Educational Investment Educational investment was measured when children were 2 years old with three items. Parents were asked to respond to the question: "Can you tell me on how many days in the last week [child's name] has done each of the following things, either on his own or with someone else? By 'the last week', I mean the last 7 days." The items were: looking at books or reading stories; reciting nursery rhymes; and recognizing letters, words, numbers, or shapes. Responses were coded from 0 to 7 so that higher scores represent low educational investment.
These items represent proximal measures of educational investment and have been used in previous studies (e.g., Guo and Harris, 2000; Yeung et al., 2002). While it can be argued that the measure may be child-driven, it was obtained when the cohort children were just 2 years of age, a time when parents are more likely to be the ones shaping their children's interests. Parental Psychological Stress Parental psychological stress was measured when children were 2 years old using three selected items from the Depression, Anxiety, and Stress Scale (Lovibond and Lovibond, 1995). The complete scale has well-established psychometric properties (Henry and Crawford, 2005). Participants were asked to indicate how much the following statements applied to them over the past week: "I found myself getting upset rather easily," "I found it difficult to relax," and "I found that I was very irritable," measured on a 4-point scale (1 = did not apply to me at all to 4 = applied to me very much or most of the time). Child Cognitive Ability Cognitive ability was measured at age 3 using the naming vocabulary and picture similarities subtests of the British Ability Scales Second Edition (BAS II; Elliott et al., 1997). Studies indicate that the BAS has a sound theoretical underpinning, possesses good psychometric properties, and is age-appropriate compared with other available tests (Hill, 2005). Naming vocabulary assesses expressive language ability and development, while picture similarities assesses problem-solving and reasoning ability. For the current study, T-scores derived from normative scores (with a range from 20 to 80 and a mean of 50) for both the naming vocabulary and picture similarities scales were used. Scores were recoded so that higher scores indicate lower cognitive ability. Harsh Discipline Harsh discipline was measured using the parental response to one question asked when the children were 4 years old. Participants were asked to indicate whether they had ever used smacking with the named child in the previous year (corresponding to age 3), a period during which the question had not been asked. The response to this item was dummy coded (No = 0; Yes = 1). Analytic Procedure Analysis was undertaken using longitudinal structural equation modeling (SEM). First, longitudinal measurement invariance of the conduct problems scale was tested to ascertain whether the scale was measuring the same construct across time (Widaman et al., 2010). Measurement invariance was sequentially examined by testing for configural, metric, and scalar invariance over time (Davidov et al., 2011). Second, an unconditional latent growth model (LGM) was estimated to evaluate the trajectory of conduct problems over time. LGMs generally estimate an intercept mean (i.e., average conduct problems at age 4), an intercept variance (i.e., individual differences in conduct problems at age 4), a slope mean (i.e., the rate of change in conduct problems from ages 4 to 6 for the entire sample), and a slope variance (i.e., individual differences in the rate of change). Since our analysis was a multiple-indicator LGM with ordinal items, the mean of the intercept (i.e., the average conduct problems score at age 4) is not estimated due to model specification procedures (see Muthén and Muthén, 2012 for a detailed explanation). Third, following the outcomes of the unconditional LGM, models hypothesizing both direct and indirect effects of economic deprivation, via family stress and investment mediators, on the trajectory of conduct problems (i.e., across ages 4, 5, and 6) were tested.
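As a compact summary of the unconditional linear LGM sketched in the second step, the growth part of the model for child i at occasion t (ages 4, 5, and 6) can be written as below. The notation is assumed here for illustration; as noted above, the conduct-problems scores are in fact latent factors measured by the five ordinal items, and the intercept mean is not estimated under that specification.

```latex
% Minimal sketch of the unconditional linear latent growth model (time scores 0, 1, 2)
\begin{aligned}
  \eta_{ti} &= \eta_{0i} + \lambda_t\,\eta_{1i} + \varepsilon_{ti},
      \qquad \lambda_t \in \{0,\,1,\,2\} \text{ for ages } 4,\,5,\,6,\\
  \eta_{0i} &= \alpha_0 + \zeta_{0i} \quad \text{(intercept: level of conduct problems at age 4)},\\
  \eta_{1i} &= \alpha_1 + \zeta_{1i} \quad \text{(slope: yearly change from ages 4 to 6)}.
\end{aligned}
```

The intercept and slope variances reported later correspond to var(ζ_0i) and var(ζ_1i), and the slope mean to α_1.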
To determine evidence of indirect effects, we examined the statistical significance of direct paths linking parental economic deprivation, associated mediators and outcomes in each hypothesized mediation process, as well as confidence intervals of indirect paths (MacKinnon et al., 2007;Kenny, 2016). All variables (predictor, mediator, and outcome) except for harsh discipline were modeled as latent constructs. Model Estimation, Attrition, and Missing Data Since, items underpinning the conduct problems scale were measured on an ordinal (polytomous) scale, the weighted least squares means-variance (WLSMV) estimation procedure which yields more accurate parameter estimates, and standard errors when ordinal level data are modeled was used (Byrne, 2012). All analyses were undertaken using Mplus 7.4. A key problem with all longitudinal studies is attrition. Within the GUS data, attrition analysis showed that those who are unemployed, live in large urban areas, less likely to indicate their income at a previous sweep, and younger parents were more likely to drop out of the study (ScotCen Social Research, 2013). The GUS data includes longitudinal weights generated using sociodemographic characteristics associated with non-response (ScotCen Social Research, 2013). These weights were taken into account in the computation of model fit indices and parameter estimates in our analysis. With respect to missingness, there was negligible missing data on items used to measure conduct problems over time (average of 1.3%, 1.5%, and 0.98% across age 4, 5, and 6, respectively). Average missing data for covariates was equally small (2.3%), with a range from 0 (no items missing for nutrition) to 8.9% (income quintiles). According to Asparouhov and Muthén (2010), the WLSMV approach for treatment of missing data implemented in Mplus produces unbiased estimates when the amount of missing data is not substantial and the model includes covariates that predict missingness. Model Evaluation Goodness of fit was evaluated using the Tucker-Lewis index (TLI) and comparative fit index (CFI) with values >0.90 and 0.95 indicative of "adequate" and "good" fit respectively, and root mean square error of approximation (RMSEA) values lower than 0.05 as evidence of good fit (Hu and Bentler, 1999;Marsh et al., 2004). Nested models are tested when evaluating measurement invariance. Although the chi-square difference test is recommended for evaluating such models, it is sensitive to marginal differences and performs poorly against other indices such as, changes in CFI and RMSEA (Cheung and Rensvold, 2002;Chen, 2007;Little, 2013). Thus, we used changes in CFI of >0.01 and RMSEA of > 0.015, as well as overall fit of each model to determine measurement invariance (Chen, 2007;Little, 2013). Specifically, a model was invariant if at least one of the indices was within the cut-off benchmark and the overall model was a good fit. Finally, to determine the strength and the practical utility of our indirect and total effects, we evaluated the effect size of our standardized coefficients with values of 0.01, 0.09, and 0.25 representing small, medium and large effects respectively. These thresholds represent appropriate benchmarks for determining small, medium, and large effects when reporting completely standardized indirect effects (Cohen, 1988;Preacher and Kelley, 2011;Kenny, 2016). Descriptive Statistics and Item Reliability Detailed descriptive statistics for all variables are represented in Table 1. 
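The decision rules described under Model Evaluation can be made concrete with a minimal sketch before the results are presented. This is not the authors' code; it assumes the fit indices have already been obtained from the SEM software (Mplus here) and simply encodes the cut-offs cited above (Hu and Bentler, 1999; Chen, 2007; Little, 2013; Cohen, 1988).

```python
# Hedged sketch of the fit-evaluation and invariance rules described in the text.

def overall_fit(cfi: float, tli: float, rmsea: float) -> str:
    """CFI/TLI > 0.90 'adequate', > 0.95 'good'; RMSEA < 0.05 indicates good fit."""
    if cfi > 0.95 and tli > 0.95 and rmsea < 0.05:
        return "good"
    if cfi > 0.90 and tli > 0.90:
        return "adequate"
    return "poor"

def invariance_retained(cfi_less, cfi_more, rmsea_less, rmsea_more) -> bool:
    """Compare a less restricted model with a more restricted (nested) one.
    Invariance is retained if at least one index stays within its benchmark
    (change in CFI <= 0.01 or change in RMSEA <= 0.015) and the restricted
    model still fits well overall (checked separately with overall_fit)."""
    delta_cfi_ok = (cfi_less - cfi_more) <= 0.01
    delta_rmsea_ok = (rmsea_more - rmsea_less) <= 0.015
    return delta_cfi_ok or delta_rmsea_ok

def effect_size_label(beta: float) -> str:
    """Standardized indirect/total effects: 0.01 small, 0.09 medium, 0.25 large."""
    b = abs(beta)
    if b >= 0.25:
        return "large"
    if b >= 0.09:
        return "medium"
    return "small" if b >= 0.01 else "negligible"
```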
Goodman (1997) provided the following cut-off points for the composite scale of conduct problems: Normal (0 to 2), Borderline (3), and Abnormal (4 to 10). Consistent with previous studies (e.g., Goodman, 1997), the proportion of children in the current sample who fell into the abnormal score range were 14% at age four, 12% at age five, and 10% at age six. Borderline conduct problems were 17, 14, and 12% across ages four to six respectively. Using the 2005/06 income threshold, that is, at the time of data collection, for defining who is living in poverty in the United Kingdom (Scottish Government, 2009), all respondents in the bottom income quintile (22%) would have been living in poverty (i.e., had income below 60% of the UK median). Taking into account the proportion of respondents who reported finding it difficult or very difficult managing on their current income (18%), it can be concluded that about 18 to 22% of the sample children were living in households experiencing economic deprivation. This figure is similar to the proportion of children living in relative poverty (21%) in Scotland at the time of data collection (Scottish Government, 2009). Average item level standardized factor loadings based on outcome of structural equation models (Table 1) for conduct problems (0.56, 0.59, 0.64, at ages four, five and six respectively), economic deprivation (0.66), stress (0.72), poor nutrition (0.56), low educational investment (0.47), and low cognitive ability (0.63) suggests that, on the whole, the items used to measure these latent constructs were both valid and reliable (Brown, 2015). Preliminary analysis exploring gender differences on our predictor, mediator, and outcome variables were undertaken since it is well established that boys generally demonstrate higher conduct problems than girls (Rutter et al., 2003). Results (Table 2) indicate that boys had significantly higher conduct problem scores than girls across the three time points. Additionally, there was greater parental investment in the education of girls than boys, and girls obtained significantly higher cognitive ability scores. Significant associations were also observed between gender and use of harsh discipline, with parents reporting greater use of smacking with boys than girls. No significant gender differences were observed for economic deprivation, parental stress, or nutritional investment. Longitudinal Measurement Invariance of the Conduct Problems Scale Results from the first analysis revealed that the conduct problems scale was configural, metric, and scalar invariant over time ( Table 3). A comparison between the configural and metric invariance model using our stated criteria ( RMSEA and CFI) suggests that there was no significant deterioration in the model. With regards to metric and scalar models, the CFI suggests an absence of invariance, while the RMSEA indicate the scale was scalar invariant. Since our examination of modification indices and other parameter estimates did not show any significant form of local misfit, and the overall model had a good fit, we concluded that the conduct problems scale was scalar invariant in line with our stated criteria. Furthermore, it measured the same construct across the three measurement periods and it is legitimate to compare latent means over time. Unconditional Growth Model: Trajectory of Conduct Problems over Time In the second analysis we examined the trajectory of conduct problems without predictors. 
The findings indicate a good fit of the linear growth model ( Table 3) with significant growth parameters ( Table 4). The intercept variance (b = 0.33; SE = 0.033) suggested that children in this sample differed significantly on their initial level of conduct problems at age 4. The mean of the slope (b = −0.174; SE = 0.011) indicated that conduct problems decreased significantly during the preschool years. However, the slope variance was not significant (slope variance: b = 0.026; SE = 0.017), meaning that everyone declined at roughly the same rate over time. Additionally, covariance between intercept and slope was not significant (b = 0.031; SE = 0.018), suggesting that a child's initial level of conduct problems at age four was unrelated to the rate of change (between ages four and six). To ensure that the linear model provided the best description for the data, a nonlinear growth model was equally estimated. Since our data had only three data points, we modeled nonlinear growth ( * , 1, 2) by freely estimating the first slope factor (Kamata et al., 2012;Nese, 2013). Assumptions of nonlinear growth were not supported (b = −0.25, SE = 0.23, p = 0.278). Effect of Economic Deprivation on Conduct Problems Across Time-Ages 4, 5, and 6 Since the slope variance from the unconditional model was not significant, we proceeded to investigate the effect of economic deprivation on trajectory of conduct problems by specifying generalized structural equation models rather than a conditional LGM which aims to predict variance of the slope (i.e., individual differences in the rate of change). Specifically, we tested three separate models (Figure 1) exploring the direct and indirect effects of economic deprivation on conduct problems across the three measurement time points (ages 4, 5, and 6). The model also included covariances between the two investment mediators. The hypothesized models had a good fit to the data ( Table 3) and accounted for 27, 23, and 23% of the variance in conduct problems at ages 4, 5, and 6 respectively. The pattern of findings was similar across the three time points (Table 5). Direct, Indirect and Total Effects of Economic Hardship on Trajectory of Conduct Problems As shown in Table 5, economic deprivation when children were 10 months old had a significant direct effect on conduct problems at ages 4, 5, and 6 with higher levels of deprivation associated with higher conduct problems scores (p < 0.001). These results suggest that the effect of deprivation on conduct trajectory in the preschool years is not completely mediated by family stress and investment variables, supporting our first hypothesis. In line with our hypothesized family stress model (Figure 1), economic deprivation had a significant direct effect on parental stress (p < 0.001), parental stress had a significant direct effect on parental discipline (p < 0.01), and parental discipline had a significant effect on conduct problems at ages 4, 5 and 6 (p < 0.001). Additionally, there was a significant direct effect from parental stress to conduct problems across all time points (p < 0.001). A significance test revealed two indirect effects consistent with the family stress mediators (deprivation→stress→discipline→conduct problems, p < 0.01; deprivation→stress→conduct problems, p < 0.001). That is, higher levels of deprivation resulted in higher levels of parental stress, which in turn led to greater use of harsh discipline and subsequently higher conduct problems at ages 4, 5, and 6. 
Additionally, the path via parental stress separately accounted for higher levels of conduct problems. The total indirect effect through the family stress model across the three time points (β = 0.04) indicates a medium effect size. These findings partially support our second hypothesis. With respect to the hypothesis based on the investment model (Figure 1), economic deprivation was significantly associated with poorer nutrition (p < 0.001), lower educational investment (p < 0.001), and lower cognitive ability (p < 0.001). While lower educational investment had a significant direct effect on lower cognitive ability (p < 0.001), poorer nutrition was not significantly associated with cognitive ability (p > 0.10). Lower cognitive ability had a significant effect on conduct problems at ages 4, 5, and 6 (p < 0.001). Tests of indirect effects via the investment mediators revealed two significant findings (deprivation→investment→cognitive ability→conduct problems, p < 0.001; deprivation→cognitive ability→conduct problems, p < 0.001). Thus, consistent with our hypothesis, experiences of economic deprivation influenced conduct problems across the preschool years via low educational investment and low cognitive ability, partially supporting our third hypothesis. The total indirect effect from the investment pathway was β = 0.10 to 0.12 across the three time points, suggesting a medium effect. The total effect of all deprivation pathways on the trajectory of conduct problems can be considered a large effect across all three time points (β = 0.30 to 0.36). To check the robustness of our findings, we undertook follow-up analyses by controlling for the effect of gender on conduct problems, educational investment, cognitive ability, and harsh discipline, given the significant gender differences obtained in our preliminary analysis. The key model indices indicate good fit for the age 4 (CFI = 0.96; TLI = 0.95), age 5 (CFI = 0.95; TLI = 0.94), and age 6 (CFI = 0.96; TLI = 0.95) models. Crucially, there were no changes in the significance of parameter estimates or the direction of effects (full results available from the first author). DISCUSSION We used prospective longitudinal data to examine the mechanisms by which economic deprivation leads to conduct problems among preschool children. Consistent with the family stress model, economic deprivation measured around the first birthday of our sample children was indirectly associated with higher levels of conduct problems across the preschool years through effects on parental stress, which increased the use of harsh discipline. Punitive parenting in turn led to higher conduct problems. We also found that elevated parental stress associated with poverty predicted increased levels of conduct problems, beyond effects through harsh discipline. This additional pathway concurs with Shaw and Shelleby's (2014) hypothesis that associations between parental distress and conduct problems may depend on factors other than compromised parenting. One possible explanation is that there may have been maternal psychological distress during pregnancy, which may have induced endocrine alterations that led to the transmission of negative traits linked to conduct problems (Goldsmith et al., 1997; Kim-Cohen et al., 2005).
Another plausible explanation is that children living in poverty are themselves exposed to stress. Thus, the direct effect from parental stress to child conduct problems may simply reflect the mediating role of child-level stress in the pathway between poverty and conduct problems. The investment pathway provided an equally valuable explanation for the association between poverty and conduct problems. As in previous studies (Mayer, 1997; Gershoff et al., 2007), economic deprivation restricted parental ability to invest in enriching educational experiences, which in turn led to lower cognitive ability in early childhood. Consistent with our hypothesis, cognitive ability predicted differences in conduct problems across ages 4, 5, and 6. Although poverty predicted nutritional investment, in line with previous findings (Skafida and Treanor, 2014), the pathway from nutrition via cognitive ability was not significant, possibly due to the moderate association between the two investment variables (r = 0.44). Overall, the above findings extend our theoretical understanding in that an investment pathway from poverty to conduct problems should include cognitive ability as a key mediator, in line with previous findings on the mediating role of cognitive ability in the effects of parental investment (Galler et al., 2012). Additionally, they demonstrate how the effect of childhood poverty on cognitive ability and conduct problems can create a cycle of poverty in adulthood. Children living in poverty are more likely to begin school with significant disadvantages, including lower cognitive ability and higher levels of conduct problems, factors that may make them lose substantial ground in educational attainment relative to their peers (Masten et al., 2005; Montroy et al., 2014). The resultant poor educational outcomes and increased conduct problems over time mean fewer prospects and less success in the labor market, thereby creating a cycle of poverty (Fergusson et al., 2005). Breaking this cycle will therefore require attention to both raising educational attainment and reducing conduct problems. Further research is, however, needed to understand the directionality and nature of the relationship between cognitive ability and conduct problems. In contrast to the underlying assumption of both the family stress and investment models, the effect of poverty on childhood conduct problems was not completely mediated by stress and investment variables. Consistent with previous findings (Gershoff et al., 2007; Kiernan and Huerta, 2008), we found that economic deprivation directly influences conduct problems across the preschool years. However, not all studies have documented such direct effects, and there is a suggestion that direct effects tend to be common when conduct problems are reported by caregivers rather than by children or adolescents themselves (Sun et al., 2015). A more plausible explanation for our finding is that the effect of poverty may be mediated by other factors, such as childhood stress (Lupien et al., 2001; Evans and Kim, 2007). Future studies should therefore explore potential effects through child-related variables, such as stress, in addition to parental variables.
LIMITATIONS OF THE STUDY Our study is limited by the fact that we mainly focused on the effect of poverty through psychosocial and investment mechanisms. It is likely that other yet to be explored pathways may add to the explanatory power of the integrated model. Evidence from biological theories suggests that conduct problems may be a result of genetic (e.g., Rhee and Waldman, 2002) and brain structure defects (e.g., Fairchild et al., 2013). However, it has been argued that possible genetic risks of childhood conduct problems may remain latent until children are exposed to adversities such as, economic hardship (e.g., Rutter et al., 2001;Costello et al., 2003). In other words, economic disadvantage may serve as the catalyst for genetic predisposition to conduct problems to become a reality. Future studies examining these mechanisms would shed light on the interaction between poverty and neuropsychological processes. We are also aware that economic deprivation tends to vary over time rather than remain static. Thus, using economic deprivation when children were 10 months old may mask variability over time. However, compared to income in middle childhood, parental income during early childhood appears to be more influential on children's developmental trajectories (Votruba-Drzal, 2006). As evident in our findings, income measured when children were only 10 month old predicted a significant amount of variance in conduct trajectory. A subsequent analysis using cumulative measures of economic deprivation when the cohort children were about 1and 2 years of age did not alter our results. Finally, our study is correlational in nature and we were unable to model cross-lagged paths by adjusting for previous levels of predictor and mediator variables as these variables were not consistently available in our data set. Thus, our findings do not completely account for the directionality of effects and caution is needed when making causal attributions based solely on these findings. Although we used a sequential approach in selecting our predictor, mediator and outcome variables, experimental field studies and studies combining growth models with standard panel models offer future avenues for exploring causality of the underlying processes. IMPLICATIONS OF THE STUDY Despite the above limitations, our findings have significant implications with respect to identifying key areas of target for policy intervention. Firstly, helping families to overcome financial stress either through direct financial support or assistance to earn better income may help alleviate both parental stress and boost parental investment in education, key mediators of conduct problems. Approaches that increase family income do not only have positive effects on childhood behavior but also contribute to improvement in other outcomes including educational attainment (Costello et al., 2003;Morris and Gennetian, 2003;Votruba-Drzal, 2006). Although several countries including the UK provide social support for low income families, such support constitutes a minimal safety net and significant levels of poverty still exist (Scottish Government, 2016). Secondly, considering the mediating role of parental processes and cognitive ability, target domains for intervention need to be expanded to include factors that compromise parenting as well as improve cognitive ability for children. 
Evidence suggest that improvement in one domain can serve as a catalyst for changes in another (Lunkenheimer et al., 2008;Scott et al., 2010), and such multi-layered approaches may help break the cycle of poverty. Additionally, the significant direct effects observed between economic deprivation and conduct problems, as well as parental stress and conduct problems suggests that the effect of poverty on conduct problems is not exclusively a result of parental behaviors. As a result, interventions need to go beyond parenting programmes in a bid to reduce childhood conduct problems. Finally, it is clear that poverty is a significant early risk antecedent for childhood conduct problems. Thus, policies that prioritize support for children at the very early stages before they begin formal schooling may prevent their behavior from getting worse and subsequently falling further behind their peers in educational achievement. While the initial cost of early intervention might be an immediate concern, this needs to be balanced against the fact that future costs associated with supporting children whose problems deteriorate by adulthood is equally substantial (Scott et al., 2001). Early interventions at the preschool stage also have a greater efficacy for reducing conduct problems than those for older children because childhood conduct problems and their associated parenting practices are more malleable during the early years (Olds, 2002;Reid et al., 2004). Additionally, as found in the current and previous studies (e.g., Mesman et al., 2009;Fanti and Henrich, 2010) conduct problems decrease as children grow older. Thus, early intervention should help quicken the pace of change for those at risk. To conclude, the present study extended previous work on the exact mechanisms by which poverty leads to childhood conduct problems by demonstrating the role of cognitive ability as a key mediator between poverty and conduct problems. It is also only one of two studies using prospective longitudinal data matching temporal order of hypothesized variables, and the only one to examine trajectory of conduct problems across time. Interventions based on the integrated family stress and investment model may help improve conduct behavior for children from disadvantaged households, and by extension their future prospects. AUTHOR CONTRIBUTIONS ES and PS contributed to the conceptualization of the study. ES undertook literature review. ES and PS undertook data analysis and contributed to the writing of the manuscript.
9,230.2
2017-09-13T00:00:00.000
[ "Economics", "Sociology" ]
Hysteresis of wettability in porous media: a review The process of "hysteresis" has widely attracted the attention of researchers and investigators because it arises in many disciplines of science and engineering. Economics, physics, chemistry, and electrical, mechanical, and petroleum engineering are some examples of disciplines that encounter hysteresis. However, the meaning of hysteresis varies from one field to another, and therefore many definitions of this phenomenon exist, depending on the area of interest. The "hysteresis" phenomenon in petroleum engineering has lately gained the attention of researchers and investigators because of the role it plays in reservoir engineering and reservoir simulation. Hysteretic effects influence reservoir performance. Therefore, an accurate estimation of rock and fluid property curves plays an essential role in evaluating hydrocarbon recovery processes. In this paper, a comprehensive review of research on the hysteresis of wettability and its applications in petroleum engineering is reported. Also, theoretical and experimental investigations of hysteresis of wettability are compared and discussed in detail. The review highlights a range of concepts in existing models and experimental procedures for wettability hysteresis. Furthermore, this paper tracks the current development of hysteresis research and provides insight into future research trends. Finally, it presents an outlook on the research challenges and the weaknesses of current work on hysteresis of wettability. Introduction Wettability of rocks is a crucial property in many respects, such as controlling the location, flow, and distribution of fluids in the reservoir (Anderson 1986a). Moreover, studies have shown the effect of wettability on the electrical properties of porous media (Anderson 1986b; Elhaj et al. 2018a), capillary pressure (Anderson 1987a), waterflood behavior (Anderson 1987b), relative permeability (Anderson 1987c; Elhaj et al. 2018b), dispersion (Wang 1988), simulated tertiary recovery (Anderson 1986a), irreducible water saturation (Anderson 1987c), and residual oil saturation (Anderson 1987a; Hirasaki 1991). As is known, wettability can be measured by the contact angle (Yuan and Lee 2013; Zisman 1964), which gives the angle that the wetting phase makes with the solid. The contact angle is only an indication of rock wettability: an angle much smaller than 90° indicates high wettability, whereas an angle much larger than 90° indicates low wettability. There are two types of contact angles, (1) static and (2) dynamic, depending on whether the fluid and solid are moving or stationary while the measurement takes place (Johnson et al. 1977). Most studies express wettability in terms of the contact angle (Michaels and Lummis 1959; Cassie and Baxter 1944; Bartell and Cardwell 1942). Based on this fact, the term "contact angle" used in this paper shall refer to wettability. The hysteresis of wettability has a long history in the oil and gas industry (Haines 1930; Benner et al. 1942; Melrose 1965). An early study found that the hysteresis observed in the contact angle is akin to other hysteresis phenomena encountered in petroleum engineering, such as capillary pressure hysteresis and relative permeability hysteresis (Johnson et al. 1977).
Therefore, when the interface between oil and water, for instance, gives two different angles against the reservoir rock, one for advancing and one for receding water, the existence of two angles for a single system is known as hysteresis of the contact angle (Benner et al. 1942). Other authors use the term hysteresis in wettability to denote the difference between these two angles (advancing and receding) (Gao and McCarthy 2006; Extrand 2002, 2003, 2004). Three cases can occur for a reservoir rock (Benner et al. 1942), shown graphically in Fig. 1: 1. The two angles are both less than 90°; the reservoir rock is water-wet, and there is a continuous movement of water forcing the oil out of the rock. 2. The two angles are both greater than 90°; the reservoir rock is oil-wet, and there is a continuous movement of oil forcing the water out of the rock. 3. The two angles lie on opposite sides of 90° (one less than 90° and the other greater than 90°), and there is no movement of liquid in either direction. Despite the extensive studies devoted to investigating contact angle hysteresis, the fundamental reasons for this phenomenon are not entirely understood (Extrand and Kumagai 1997). It is often attributed to surface heterogeneity (Ruch and Bartell 1960; Good 1952; Pease 1945), roughness (Shuttleworth and Bailey 1948; Eick et al. 1975; Huh and Mason 1977), overturning of molecular segments at the surface (Langmuir 1938; Hansen and Miotto 1957; Ter-Minassian-Saraga 1964), adsorption and desorption (Vergelati et al. 1994), interdiffusion (Timmons and Zisman 1966; Good and Kotsidas 1979), and/or surface deformation (Bikerman 1950; Lester 1961). In the next two sections, essential experimental and theoretical techniques will be highlighted and discussed. Physical explanation of hysteresis in wettability To understand the physical cause of hysteresis in wettability, it is essential first to understand the physical explanation behind the occurrence of wettability itself. As is known, the contact angle is a thermodynamic property and is commonly used to measure the wetting properties of two immiscible fluids (Xie et al. 2001). From a physical point of view, the contact angle can be defined in terms of surface energies through Young's equation, reproduced below (Xie et al. 2001; Ryder and Demond 2008). The contact angle is a function of three interfacial tensions: (1) between the two fluids, (2) between the solid and the drop phase, and (3) between the solid and the immersion phase. Previous studies showed that contact angles measured macroscopically might differ from the intrinsic contact angle due to hysteresis phenomena (Eick et al. 1975; Ryder and Demond 2008; Dettre and Johnson 1965; Restagno et al. 2009). One justification for this phenomenon is that, for a larger drop, the advancing edge gives the contact angle against the low-energy areas of the surface, whereas, for a smaller drop, the receding edge gives the contact angle against the high-energy areas of the surface (Ryder and Demond 2008). Another physical justification for the occurrence of contact angle hysteresis is a droplet experiencing an external force, which adds extra energy to the system (Cheng et al. 2016). Moreover, the molecular size and properties of the liquid also affect the existence of contact angle hysteresis (Lam et al. 2001). Several parameters and properties influence wettability hysteresis, as reported in many previous studies.
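Young's equation, referred to above but not written out in the text, balances the three interfacial tensions just listed at the contact line; the symbols below are the conventional ones and are assumed here for illustration (s = solid, l = liquid drop, v = surrounding fluid or vapor):

```latex
% Young's equation: equilibrium of interfacial tensions at the three-phase contact line
\gamma_{sv} = \gamma_{sl} + \gamma_{lv}\cos\theta_Y
\qquad\Longleftrightarrow\qquad
\cos\theta_Y = \frac{\gamma_{sv} - \gamma_{sl}}{\gamma_{lv}}
```

The intrinsic (Young) angle θ_Y is the single equilibrium value that macroscopic advancing and receding measurements bracket when hysteresis is present.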
Parameters reported to influence wettability hysteresis include, but are not limited to, surface roughness (Xie et al. 2001), surface geometry (Cheng et al. 2016), drop size (Brandon et al. 2003), liquid and solid surface composition (Ryder and Demond 2008), molecular size and properties of the liquid (Lam et al. 2001), and solid-liquid contact time (Lam et al. 2002). Experimental observations of hysteresis in wettability Many experimental techniques and methods have been developed over the last decades to investigate and measure the hysteresis phenomenon in contact angles. These techniques can be divided into those applied to flat solid surfaces and those applied to other geometries (nonideal surfaces (Benner et al. 1942)), such as plates, fibers, and powders (Chau 2009). From another perspective, these techniques can be categorized as static or dynamic, depending on whether the liquid is stationary or moving during the measurement (Yuan and Lee 2013; Ralston and Newcombe 1992). In this section, both perspectives, movement type and surface type, will be discussed briefly. The most common method used to describe and measure the contact angle relies on observing the image of the drop with low-magnification optical devices (Chau 2009). It is quite challenging to determine the degree of wettability with such low-magnification devices. Additionally, keeping a surface clean in an open-air laboratory is almost an impossible task. An advantageous technique for keeping surfaces clean and uncontaminated is abrasion and polishing under water under scrupulously controlled conditions, a procedure that was proposed and whose efficiency was demonstrated by Wark and Cox (1932). A well-known technique for measuring the tangent angle of a sessile drop, the "telescope-goniometer", is used to determine contact angles on a flat solid surface (Bigelow et al. 1946), as shown in Fig. 2. The same method was later modified by Zisman (1968). The eyepiece is used to measure the tangent at the contact point between the drop and the surface. Over the years, enhancements were made to improve the accuracy of the angle measurements, such as magnifying the intersection profile (up to 50 times), which allows better assessment, and using a camera instead of the eyepiece (Smithwich 1988; Leja and Poling 1960). Another study showed that the sessile drop's angle can be measured to an accuracy of ±2° when the contact angle is higher than 20° (Hunter 2001). Another development of this technique employed a motor-driven syringe in the experimental setup to control the liquid flow rate when measuring the dynamic contact angle (Kwok et al. 1996). The advantages of this method are (1) its simplicity and (2) the fact that only a small surface and a small amount of liquid are required to conduct the experiment. On the other hand, the disadvantages of this method can be summarized as follows: 1. Because the liquid volume and surface are small, impurities that may affect the reading of the angle are likely to be present (Yuan and Lee 2013; Chau 2009). 2. The method depends entirely on the measurement of the tangent line's angle, so a minor error leads to significantly inaccurate measurements (Yuan and Lee 2013). 3. The camera focuses only on the largest drops (Yuan and Lee 2013; Chau 2009). 4. Measured contact angles vary when the flat surface is heterogeneous or rough (Chau 2009). 5. The small size of the droplet leads to difficulties in measuring the contact angle (Brandon et al. 2003; Letellier et al. 2007).
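As a concrete companion to the drop-image methods above, the contact angle of a sufficiently small sessile drop is often estimated from its height and base radius under a spherical-cap assumption. This relation is a standard textbook approximation rather than something taken from the review, and the function below is only an illustrative sketch.

```python
import math

def contact_angle_spherical_cap(height_mm: float, base_radius_mm: float) -> float:
    """Estimate the contact angle (in degrees) of a sessile drop from its height h and
    contact (base) radius r, assuming a spherical-cap shape: theta = 2 * atan(h / r).
    Valid only for drops small enough that gravitational flattening is negligible."""
    return math.degrees(2.0 * math.atan(height_mm / base_radius_mm))

# Example: a drop 0.5 mm high with a 1.5 mm base radius
print(f"{contact_angle_spherical_cap(0.5, 1.5):.1f} deg")  # ~36.9 deg
```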
Another popular method that is used for investigating hysteresis is the "tilted plate" or "inclined plate" method, introduced in the 1940s (Macdougall and Ockrent 1942). Figure 3 depicts a schematic of the inclined plate method. This technique is a modified version of the "telescope-goniometer" technique. The same method was used to study contact angle hysteresis on various types of polymer surfaces, such as silicon wafers and elastomeric surfaces (Extrand and Kumagai 1995, 1997). This technique uses a video camera to record the drop; when the drop starts moving, the tape is stopped and both angles are measured with a protractor. Measurements of these two angles must be taken carefully because, most of the time, they can be different (Pierce et al. 2008; Krasovitski and Marmur 2005). In the early history of contact angle measurement, a platinum wire was used to measure contact angle hysteresis by forming sessile drops on a solid surface (Zisman 1968). The drops were created by heating the wire and then putting it in a fluid to form the drops. The drop is then gently and slowly placed on the surface, building a sessile drop (Yuan and Lee 2013). Despite the claimed reproducibility of the sessile drop (± 2°) (Spelt et al. 1986), there are concerns that moving the drop from the wire to the surface may impart some kinetic energy combined with the flow, which may lead to metastable contact angles (Eick et al. 1975; Derjaguin 1946; Johnson and Dettre 1966; Neumann and Good 1972). The tangentometer method is also known for measuring contact angle hysteresis; it uses a mirror that is seated at the baseline of the droplet (Yuan and Lee 2013; Phillips and Riddiford 1972). The mirror is rotated until the full curve of the drop is formed and, with its reflected image, the protractor attached to the mirror can be used to measure the angle of the tangent line. This method suffers from measurement errors because of the inherent subjectivity of tangentometers (Fenrick 1964). (Fig. 2: sketch of the telescope-goniometer technique for contact angle measurement (Salim et al. 2008).) A reflection-based method can also be used to measure the hysteresis of the contact angle (Langmuir and Schaefer 1937). The light source is rotated around the droplet until the reflection from the drop disappears; afterward, the contact angle can be read from the degree of rotation. The accuracy of this method is ± 1°, and it can be used for both sessile drops and menisci (Johnson and Shah 1985). Flat solid surfaces, horizontal or vertical, were the focus of the previous discussion, and the general observations can be highlighted in these points: 1. Contact angle measurement relies mainly on two factors, which are the surface quality and its cleanliness (Chau 2009). 2. When the contact angle is under 20°, it is difficult to measure, and most of the techniques give inaccurate estimations (Gaudin 1957). 3. Heterogeneity of the surface appears to be the biggest problem for the flat-surface measurement techniques (Extrand 2004; Neumann and Good 1972). 4. Some techniques use a small droplet and surface, which may lead to inaccuracy in measuring the contact angle hysteresis (Bigelow et al. 1946). For the other type of surfaces, nonideal or with different geometries, Table 1 summarizes, discusses, and analyzes the essential techniques that are used to measure contact angle hysteresis.
In a general comparison between these techniques, the most widely used technique that can be applied in most cases is the Wilhelmy balance method (Wilhelmy 1863), because it can be used for both static and dynamic contact angle measurements and is simple. In addition, most of the other techniques primarily originated from its fundamentals. Other studies considered the temperature dependence of contact angle hysteresis measurements, such as the captive bubble method (Taggart et al. 1930) and the capillary rise at a vertical plate method (Shimokawa and Takamura 1973; Neumann 1964; Budziak and Neumann 1990; Kwok et al. 1995). Some methods give a very low error, while others produce large errors under exceptional circumstances, such as the capillary rise at a vertical plate method and the individual fiber method (Schwartz and Minor 1959), respectively. For more details about these experimental techniques for estimating the contact angle, see Table 1. Modeling of hysteresis in wettability As discussed in previous sections, hysteresis refers to the difference between the advancing angle θ_a and the receding angle θ_r, which can be mathematically formulated as Δθ_hys = θ_a − θ_r. The literature on contact angle hysteresis has highlighted several mathematical models. As Extrand (2003) reported, the first model was developed by Cassie and Baxter (1944) and Cassie (1948); it applies to heterogeneous surfaces and estimates the apparent advancing and receding angles as cos θ_i = σ_1 cos θ_i,1 + σ_2 cos θ_i,2, where i refers to either advancing or receding, θ_i,1 and θ_i,2 are the corresponding intrinsic angles on materials 1 and 2, and σ_1 and σ_2 are the fractional areas of material 1 and material 2. The model developed by Cassie is simple and very straightforward, and the primary assumption of this model is that the fluid will change the model surfaces. Still, this model and other models that originated from it failed to predict contact angles correctly (Dettre and Johnson 1965; Gaines 1960; Brockway and Jones 1964), because all these models assumed that the apparent contact angle is controlled by the interfacial contact area between liquid and solid. Several studies have instead suggested that contact angles can be estimated from the interactions that occur at the contact line (Extrand 2002, 2003). More advanced models have been developed by Good (1952), Neumann and Good (1972), Johnson and Dettre (1964), Öpik (2000) and Marmur (1994). Most of these models include geometric factors; moreover, surface roughness was also included. The effect of surface roughness and chemical nonuniformities on wettability hysteresis was investigated mathematically. In these mathematical models, the geometries were assumed to be regular, such as parallel stripes (Öpik 2000). A previously published study that dealt with this assumption can be found in Marmur's article, which contains a list of all previous references (Marmur 1994). The reader may also refer to the study by de Gennes for more details (De Gennes 1985).
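To make the Cassie relation and the hysteresis definition above concrete, here is a small Python sketch; the intrinsic advancing and receding angles and the fractional area below are invented for illustration only and do not come from the cited studies.

```python
import numpy as np

def cassie_angle(theta1_deg, theta2_deg, f1):
    """Apparent contact angle (deg) on a two-component surface via the
    Cassie relation: cos(theta) = f1*cos(theta1) + (1 - f1)*cos(theta2)."""
    cos_t = (f1 * np.cos(np.radians(theta1_deg))
             + (1.0 - f1) * np.cos(np.radians(theta2_deg)))
    return np.degrees(np.arccos(cos_t))

# Hypothetical intrinsic angles of materials 1 and 2 (advancing and receding)
theta_a = cassie_angle(110.0, 70.0, f1=0.6)   # apparent advancing angle
theta_r = cassie_angle(95.0, 50.0, f1=0.6)    # apparent receding angle
print(theta_a - theta_r)                      # apparent hysteresis, in degrees
```

Applying the same area-weighted average separately to the advancing and receding intrinsic angles, as Cassie's framework does, yields an apparent hysteresis even on a perfectly smooth composite surface, which is exactly the behaviour the contact-line-based criticisms above call into question.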
An interesting study conducted by Brandon et al. (1997) modeled and simulated the hysteresis phenomenon of three-dimensional sessile drops in equilibrium on a model chemically heterogeneous, smooth solid surface in which the energy is spatially periodic. The main assumptions of this model are: (1) the fluid and liquid are mutually immiscible, (2) gravity effects are neglected, and (3) the contact angle is assumed to vary along the surface. To assess stability, the dimensionless free energy of the system is given by G = S − ∬ cos θ_i(x, y) dx dy, where G is the free energy, x and y are the spatial coordinates, S is the liquid–fluid interfacial area, the double integral is taken over the wetted portion of the solid surface, and the local contact angle θ_i(x, y) is defined by Young's equation, cos θ_i(x, y) = [γ_sf(x, y) − γ_sl(x, y)] / γ_lf, where γ_sf(x, y) and γ_sl(x, y) are the solid–fluid and solid–liquid interfacial tensions, respectively, and γ_lf is the liquid–fluid interfacial tension. As a conclusion of this work, hysteresis was found to exist in both the average contact angle (as a function of volume) and the liquid–fluid interfacial curvature. Another conclusion of this study was the good agreement of the drop shapes calculated with the three-dimensional Young and Young–Laplace equations. Although this study gave good results as well as a better understanding from a three-dimensional point of view, it had limitations: the software that was used failed to handle large drop or bubble sizes, which is the same disadvantage as the study that dealt with the two-dimensional sessile drop (Brandon and Marmur 1996). Several studies also considered the surface free energy of wetting as a function in mathematical models (Extrand 1998, 2002, 2003, 2004; Extrand and Kumagai 1997; Cheng et al. 2016). Summary and conclusions Determination of solid surface tension is one of the main applications of wettability measurement, which has been the focus of several studies for decades (Lam et al. 2002; Neumann and Good 1972; Marmur 1994; Brandon and Marmur 1996; Dettre and Johnson 1969). However, most existing techniques rely on surface deformation, not surface tension, except for indirect methods that can deal with surface tension (Kwok and Neumann 2003). The first model that correlated the contact angle and interfacial tension was proposed by Young. To test liquids on a solid surface, the surfaces need to be rigid, homogeneous, smooth, and inert. The main focus of most researchers studying the hysteresis of wettability was to allow a quick indication of surface hydrophobicity (Chau 2009). Numerous methods that are widely applied in measuring contact angle hysteresis were discussed and analyzed, such as the conventional telescope-goniometer method, capillary penetration methods for particles, and the Wilhelmy balance method. The applications and setbacks of these techniques were highlighted. Each technique has its advantages and disadvantages, as can be seen in Table 1, but in general, the most widely used technique that can be applied in most cases is the Wilhelmy balance method (Wilhelmy 1863). On real mineral samples, researchers found that the most suitable method to estimate the contact angle is capillary penetration, because of its quickness and easiness compared to techniques on flat mineral surfaces (Chau 2009). Numerous studies investigated and attempted to explain the reason for the existence of contact angle hysteresis mathematically and theoretically, involving the drop volume (Marmur 1994), complex surface geometries (Cheng et al. 2016), and drop size (Brandon et al. 2003). The investigators concluded that the geometric characteristics of the patterned surface are one of the vital factors in measuring the hysteresis of wettability.
Despite all these studies, the hysteresis of the contact angle is still not fully understood. Hysteresis is a natural phenomenon that occurs in many disciplines, such as economics, biology, chemistry, physics, mathematics, civil engineering, electrical engineering, and petroleum engineering. Each discipline has its own definition, and the applications of hysteresis depend on the nature of the conditions (Elhaj et al. 2018a, b). The focus of this paper has been on investigating the hysteresis phenomenon in wettability experimentally and theoretically. The discussion and investigation of this property revealed gaps in the experimental, theoretical, and mathematical work, which can be highlighted as follows: 1. The limitations of the experimental studies, such as special conditions, make them inapplicable in other settings; 2. Most of the experiments are conducted under laboratory conditions, not reservoir conditions; 3. The mathematical models may contain double integrals, which makes it challenging to invert the process mathematically; and 4. The analytical solution of such a model is complicated, if not impossible, to obtain.
4,954.6
2020-04-02T00:00:00.000
[ "Engineering", "Environmental Science", "Physics" ]
Electromagnetic Wave Absorption Performance of Carbonized Rice Husk Obtained at Various Temperatures Abstract Agricultural wastes such as rice husks (RHs) are valuable due to their feasibility to be converted into carbon materials, low cost, and abundance in contrast to the conventional carbon material sources. In this study, RHs are carbonized at various temperatures from low to high, and their electromagnetic (EM) wave absorption properties are evaluated. Carbon materials, silicon carbide (SiC) whiskers, and SiC particles are obtained from RHs carbonized at 1500 °C (CRH1500) for 0.5 h in the presence of Ar gas at 1 atm. In order to evaluate their EM wave absorption performance, the complex permittivity and permeability are measured using a vector network analyzer, and the values are utilized in the reflection loss (R.L.) calculation according to the transmission line theory. CRH1500, 40 wt% with a thickness of 1.6 mm, exhibits a minimum R.L. of ≈−55.4 dB (>99.9997% absorption) at 11.37 GHz and a response bandwidth (R.L. < −10 dB, >90% absorption) of 4.21 GHz. Low-cost and abundant RHs, carbonized at various temperatures, show significant absorption performance. Their absorption performance and response bandwidth are highly dependent on the matching thickness, indicating that they can be easily modulated for promising electromagnetic wave absorber materials. DOI: 10.1002/gch2.201900045 The release of unnecessary or excessive electromagnetic (EM) wave emissions is expected due to the extensive utilization of devices/applications associated with EM waves, especially at GHz frequency ranges (G-, X-, Ku-band, etc.), in electronic fields, wireless communication, and radar. [1][2][3][4][5] Furthermore, this might lead to device malfunction, disturbance of electronic systems, harm to the environment, or effects on human health, which has created urgency to develop EM wave absorbing materials. [1][2][3][4][5] Effective EM wave absorber materials are able to dissipate EM wave energy into heat or other forms of energy so that the EM wave is neither reflected nor transmitted through the absorber materials, in addition to offering other advantageous properties such as light weight, tunable absorbing ranges, and multifunctionality. [1,5] The incident wave passing through the absorber materials undergoes penetration, reflection, and absorption in the energy loss process. [1] Recently, alternative functional absorber materials derived from biomass or agro-based waste materials such as rice husk, [4,5] walnut shell, [6] cotton, [7] spinach, [8] and loofah [9] have received considerable attention. To the best of our knowledge, the EM wave absorption performance of rice husk (RH) carbonized at high temperatures, which produces heterogeneous materials including carbon materials and ceramic materials (silicon carbide whiskers and particles), has not been reported before. RH is an agro-based lignocellulosic waste material that is abundant in various countries, mainly generated from rice milling activities. [10][11][12] In 2016, based on the report from the Food and Agriculture Organization of the United Nations (FAO), world rice production was ≈741 million tons. RH accounts for around 20% of the rice production and is ready to be utilized. [13] Commonly, RH is abandoned, used as a low-value energy resource, or simply combusted at the site, which hinders its effective utilization and is unfavorable to the environment. [14] RHs are lignocellulosic materials, which makes it beneficial to convert them into raw carbon materials together with their silicon content through a carbonization process using heat treatment. [15,16] For instance, Fang et al. reported that carbonized RHs combined with magnetic cobalt particles showed a reflection loss of −40.1 dB at a thickness of 1.8 mm. [5]
Moreover, Su et al. extracted silicon carbide (SiC) particles from RHs, and these also showed considerable EM wave absorption. [17] In this study, low-cost and abundantly available RHs are simply carbonized at various temperatures, which produces a mixture of carbon and ceramic materials. Accordingly, their complex permittivity and permeability were measured and utilized in the reflection loss calculation in order to evaluate the EM wave absorption performance. The combination of heterogeneous materials, carbon and SiC, can incorporate the properties of the components to form effective EM wave absorber materials. The dried RH samples went through carbonization at various temperatures of 800, 1500, and 2200 °C. Typically, the carbonization of raw RHs was performed at 800 °C for 1 h in a furnace under an argon gas atmosphere. Next, the RHs carbonized at 800 °C were loaded into a graphite box and further heat treated at 1500 °C for 0.5 h in a graphite resistance furnace operating with a supply of argon gas at 1 atm. A similar process was repeated for the carbonization of RHs at 2200 °C. Finally, the carbonized RHs were crushed into powder within the few-micrometer range; hereafter they will be denoted as CRH, with the 800 °C carbonized RHs denoted as CRH800, the 1500 °C carbonized RHs denoted as CRH1500, and the 2200 °C carbonized RHs denoted as CRH2200. Their structures and morphologies were observed using a field emission scanning electron microscope (FE-SEM; Hitachi SU-8000) and a transmission electron microscope (TEM; JEOL JEM-2100F) with accelerating voltages of 15 and 200 kV, respectively. Energy dispersive X-ray spectroscopy (EDS) equipped together with the TEM was utilized to check the elemental compositions. Samples for the complex permittivity and permeability measurement were fabricated by incorporating the CRH in paraffin wax with a weight fraction of 40 wt%.
Then, the powder-wax composites were compressed into a 1.0 mm thickness of toroidal shape using a mold designed with an outer diameter of 7.0 mm and inner diameter of 3.0 mm. The complex permittivity and permeability were measured by utilizing a vector network analyzer (37247D, Anritsu Co. Ltd.) in the frequency range of 0.5-13.9 GHz. The reflection loss was calculated by using the measured complex permittivity and permeability. FE-SEM image of the CRH1500 is depicted in Figure 1a. A mixture of various shapes graphitized carbon materials, SiC whiskers, and particles were observed. EDS results of CRH1500 further confirmed the existence of carbon and silicon elements, as presented in Figure 1b. The EDS mapping is provided in Figure S1 (Supporting Information). The copper (Cu) peaks are attributed to the copper grid utilized during TEM observation. The inset of Figure 1b shows the magnified FE-SEM image of CRH1500 carbon porous structure. TEM image of CRH1500 graphitized carbon layers with planar distance, d = 0.35 nm, which is slightly larger than (002) plane of single crystal graphite (d = 0.335 nm), is shown in Figure 1c. Larger d indicates the graphitized carbon is turbostratic carbon. [18] TEM images of CRH1500 SiC whiskers are presented in Figure 1d,e, and CRH1500 SiC particles in Figure 1f. The diameter of SiC whiskers is up to ≈100 nm with few µm lengths, and the diameter of SiC particle agglomerates is few hundred nm with irregular shapes. From Figure 1e, d = 0.25 nm is corresponds to SiC. [17] From our previous study and other reports, CRH800 are amorphous carbon with porous structure, and the contents included silica (SiO 2 ). [10,19] Meanwhile, CRH2200 contain graphitized carbon materials and mostly SiC particles. [19] TEM image of CRH800 and FE-SEM image of CRH2200 are provided in Figure S2 (Supporting Information). Overall, CRH1500 showed most heterogeneity when compared to other CRHs. The process to achieve the heterogeneity is relatively simple, without any further steps to combine different types of materials, in order to obtain excellent EM wave absorption. The complex permittivity (real: ε′, imaginary: ε″) and complex permeability (real: µ′, imaginary: µ″) of the CRHs, in the range of 0.5-13.9 GHz are presented in Figure 2a,b, respectively. As the frequency increased, the complex permittivity declined. This can be attributed to the dipoles in CRHs are not maintained in the phase orientation with the electric vector of the penetrating EM field. [3] Comparatively, CRH1500 exhibited highest complex permittivity in contrast to other CRHs. Heterogeneous materials including porous carbon, SiC whiskers, and particles exist in CRH1500, which permit strong polarization to take place. Furthermore, SiC might facilitated dielectric loss attributable to the semiconductivity. [20] Based on the free electron theory, higher ε″ values are proportional to higher conductivity values. [20] Thus, relatively higher electrical conductivity of CRH1500 permits strong polarization to take place, dissipation of electrostatic charges, Ohmic losses, or multiple scattering attributed to the heterogeneity and large specific area, which lead to improved complex permittivity. [3,21] It is worth noticing that certain conductive materials are suitable for electromagnetic interference (EMI) shielding material, where the core process of EMI shielding is to produce a blockage made of electrically conductive materials that attenuates radiated or conducted EM energy through reflections and absorption. 
[22] Moreover, multifunctional electrically conductive carbon-based composite materials in various forms have also exhibited competitive performances, such as enhanced electrical conductivity and excellent EMI shielding capability. [23,24] For the complex permeability, fluctuations can be traced throughout the measured frequency range. This might be attributed to the magnetic field (like the electric field associated with EM wave propagation) being scattered by the porous structure, which leads to partial energy attenuation. [8] For CRH1500 and CRH2200, the complex permeability slightly increased until a certain frequency, after which it started to decline. This behavior can be related to natural resonance or eddy current losses originating from the graphitized carbon. [3,25,26] Graphitized carbon in an alternating magnetic field builds up a closed induced current inside the sample, which scatters the energy; this is known as the eddy current loss. [25] The considerably high electrical conductivity of CRH1500 and CRH2200 might lead to the permeability declining swiftly at high frequency, which might be attributed to the eddy current loss. [3] Furthermore, the SiC, which possesses semiconductivity, might contribute a certain resistivity, which also acts to decrease the eddy currents when stimulated by the EM waves. [3,26] Based on the transmission line theory, the EM wave absorption performance was evaluated by calculating the reflection loss (R.L.) from the measured complex permittivity and permeability as follows: R.L. (dB) = 20 log10 |(Z_in − Z_0)/(Z_in + Z_0)| (1), with the input impedance of the absorber layer Z_in = Z_0 (µ_r/ε_r)^(1/2) tanh[j (2πfd/c) (µ_r ε_r)^(1/2)] (2), where Z_0 is the impedance of free space, ε_r = ε′ − jε″, µ_r = µ′ − jµ″, f is the EM wave frequency (Hz), d is the thickness of the absorber (m), and c is the velocity of light in free space (m s−1). The R.L. of CRH800, CRH1500, and CRH2200 with a weight fraction of 40 wt% and thickness t = 1.0 mm is illustrated in Figure 3a. Apparently, the CRH1500 showed the best EM wave absorption performance when compared with the other CRH samples at similar thickness and weight fraction, indicating the lowest minimum R.L. among them. The CRH1500 can be regarded as a heterogeneous material, where graphitized carbon, SiC whiskers, and particles stimulate additional dielectric relaxation through supplementary dielectric interfaces and higher polarization charges at the interfaces between those materials. [3] Moreover, the considerable EM wave absorption performance is also assisted by space charge and orientational polarization. The heterogeneity present at the interfaces between the graphitized carbon, SiC whiskers, and particles is strongly correlated with the space charge polarization, while the orientational polarization is linked to the bound charges (dipoles) present in the CRH1500. [3] Furthermore, the heterogeneity of CRH1500 boosts the connectivity between the materials and increases the complexity of the EM wave propagation paths through the CRH1500. These enable it to polarize repeatedly in the high-frequency EM wave field, so that the EM wave energy is transformed into other forms of energy such as heat. [3,5] At the same time, appropriate complex permittivity and permeability are also important for improving EM wave absorption; in this study, CRH1500 exhibited the highest complex permittivity compared to the other CRH samples. [3,21,25]
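As a numerical illustration of the reflection-loss calculation described above, the sketch below evaluates R.L. for a single absorber layer from an assumed complex permittivity and permeability; the material values, frequency, and thickness passed to the function are placeholders and are not the measured CRH data.

```python
import numpy as np

C0 = 2.998e8  # speed of light in free space (m/s)

def reflection_loss_db(eps_r, mu_r, f_hz, d_m):
    """Reflection loss (dB) of a single absorber layer, transmission-line model:
    R.L. = 20*log10(|(Z - 1)/(Z + 1)|), with the normalized input impedance
    Z = sqrt(mu_r/eps_r) * tanh(j * (2*pi*f*d/c) * sqrt(mu_r*eps_r))."""
    arg = 1j * (2.0 * np.pi * f_hz * d_m / C0) * np.sqrt(mu_r * eps_r)
    z_norm = np.sqrt(mu_r / eps_r) * np.tanh(arg)
    return 20.0 * np.log10(np.abs((z_norm - 1.0) / (z_norm + 1.0)))

# Placeholder inputs only (convention eps_r = eps' - j*eps'', mu_r = mu' - j*mu'')
print(reflection_loss_db(9.0 - 3.0j, 1.0 - 0.05j, 11.37e9, 1.6e-3))
```

Sweeping f_hz and d_m with such a routine reproduces the kind of thickness-dependent R.L. maps discussed next, where the absorption peak shifts in frequency as the layer thickness changes.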
The considerable absorption performance of CRH1500 can also be associated with another significant parameter related to R.L., i.e., the concept of matched impedance. This concept implies that the intrinsic impedance of the material should be almost identical to that of free space to attain zero reflection at the front surface of the absorber. [3] The porous structure of CRH1500 could enhance the impedance matching, allowing more of the EM wave to penetrate into the absorber and promoting the multiple reflections that are related to the decay of EM wave energy. [27] Undoubtedly, to produce an optimal absorber with a deep minimum R.L., suitable impedance matching is essential. The absorber's thickness is also a key factor that can influence the R.L. through Equation (2); thus, we explored the association between the thickness and the R.L. The R.L. of CRH800, CRH1500, and CRH2200 at their matching thicknesses is depicted in Figure 3b. R.L. values of −10 and −20 dB correspond to 90% and 99% absorption, respectively. [3] Obviously, CRH1500 with matching thickness t_m = 1.6 mm demonstrated the best minimum R.L. of ≈−55.4 dB (>99.9997% absorption) at 11.37 GHz, with a response bandwidth of 4.21 GHz. CRH1500 can be considered the best EM wave absorber among the CRH samples, although CRH2200 showed a wider response bandwidth of 5.12 GHz, considering that CRH2200 has t_m = 2.7 mm, which is almost double the t_m of CRH1500. Furthermore, the dependence of the CRH1500 R.L. on various thicknesses is shown in Figure 3c,d. With increasing thickness, the R.L. peak shifted to lower frequency. Noticeably, the minimum R.L. decreased gradually when the thickness increased beyond 1.6 mm. However, a wider response bandwidth can be obtained; for instance, CRH1500 with a thickness of 2.0 mm showed a response bandwidth of 4.74 GHz and at the same time covered almost the whole X-band. In part, when the thickness of CRH1500 is beyond 1.2 mm, some regions showed R.L. below −20 dB, i.e., >99% absorption. Through this particular thickness design, CRH1500 showed significant EM wave absorption performance and a considerably wide absorption frequency range. The CRH samples are prominent EM wave absorbers, since their absorption and response bandwidth can be exploited simply by adjusting the thickness to suit practical utilization in diverse frequency ranges. In brief, heterogeneous materials including graphitized carbon, SiC whiskers, and particles were obtained through carbonization of RHs at 1500 °C. Their heterogeneity results in high interfacial polarization, which enhanced the EM wave absorption performance. CRH1500, 40 wt% with a thickness of 1.6 mm, revealed a minimum R.L. of ≈−55.4 dB (>99.9997% absorption) at 11.37 GHz and a response bandwidth of 4.21 GHz. Evidently, numerous factors such as the type of filler, thickness, and heterogeneity play a significant role in the considerable EM wave absorption performance and response bandwidth. Supporting Information Supporting Information is available from the Wiley Online Library or from the author.
3,558.8
2019-08-27T00:00:00.000
[ "Materials Science", "Environmental Science", "Agricultural and Food Sciences" ]
Power Production and Biochemical Markers of Metabolic Stress and Muscle Damage Following a Single Bout of Short-Sprint and Heavy Strength Exercise in Well-Trained Cyclists Purpose: Although strength and sprint training are widely used methods in competitive cycling, no previous studies have compared the acute responses and recovery rates following such sessions among highly trained cyclists. The primary aim of the current study was to compare power production and biochemical markers of metabolic stress and muscle damage following a session of heavy strength (HS) and short-sprint training (SS). Methods: Eleven well-trained male cyclists (18 ± 2 years with maximal oxygen uptake of 67.2 ± 5.0 mL·kg−1·min−1) completed one HS session and one SS session in a randomized order, separated by 48 h. Power production and biochemical variables were measured at baseline and at different time points during the first 45 h post exercise. Results: Lactate and human growth hormone were higher 5 min, 30 min and 1 h post the SS compared to the HS session (all p ≤ 0.019). Myoglobin was higher following the HS than the SS session 5 min, 30 min and 1 h post exercise (all p ≤ 0.005), while creatine kinase (CK) was higher following the HS session 21 and 45 h post exercise (p ≤ 0.038). Counter movement jump and power production during 4 sec sprint returned to baseline levels at 23 and 47 h with no difference between the HS and SS session, whereas the delayed muscle soreness score was higher 45 h following the HS compared to the SS session (p = 0.010). Conclusion: Our findings indicate that SS training provides greater metabolic stress than HS training, whereas HS training leads to more muscle damage compared to that caused by SS training. The ability to produce power remained back to baseline already 23 h after both training sessions, indicating maintained performance levels although higher CK level and muscle soreness were present 45 h post the HS training session. INTRODUCTION Road cycling is an endurance sport with competitions typically lasting several hours (Coyle, 1999;Jeukendrup et al., 2000;Padilla et al., 2000;Faria et al., 2005). However, many races are decided in a sprint finish (Martin et al., 2007) where high power over a short period (∼10 s) of time is critical. To improve sprint power in cycling, many competitive cyclists regularly supplement their endurance training with heavy strength (HS) and/or short-sprint (SS) training. These strategies are supported by previous studies, showing positive effects of both HS training (Rønnestad et al., 2010(Rønnestad et al., , 2015(Rønnestad et al., , 2016Aagaard et al., 2011;Vikmoen et al., 2016) and SS training (Creer et al., 2004;Sloth et al., 2013;Hebisz et al., 2016) on overall cycling performance and aerobic endurance indices in well-trained cyclists. HS and SS exercise exert different loads on the neuromuscular and metabolic systems (Coffey et al., 2009). While HS is normally executed with high loads and slow concentric and eccentric muscle actions (Kraemer and Ratamess, 2004), SS in cycling is done one the bike and involves mainly concentric muscle work with lower loads and higher velocity contractions (Martin et al., 2007). Although this reasoning implies that both acute responses and recovery rates following HS and SS sessions should differ, the current studies investigating such responses have been conducted on athletes performing either high intensity or strength training, or on untrained and less trained subjects (Barnett, 2006). 
However, an athlete's training status would have a significant impact on both acute responses and recovery rates (Brancaccio et al., 2007(Brancaccio et al., , 2010Bishop et al., 2008) highlighting the importance of conducting such studies on highly-trained participants. Understanding training load and recovery in a given sport is imperative when designing training programs, because these variables determines long-term adaptations (Bishop et al., 2008). As a measure of external workload, power output is commonly used both in cycling and in strength training, whereas internal training load is typically estimated based on physiological and perceptual responses such as oxygen uptake, heart rate (HR), blood lactate concentration ([La − ]) (Borresen and Lambert, 2009), and rating of perceived exertion (RPE) (Borg, 1982;Wallace et al., 2009). The changes in these variables are also used to measure recovery status following HS training, in combination with variations in strength and power performance (Raastad and Hallen, 2000;Andersson et al., 2008;Haugvad et al., 2014) and muscle soreness (Armstrong, 1984;Nosaka et al., 2002). Based on such measurements, HS training programs containing 2-4 sets of 4-8 repetitions are shown to require 24-72 h of recovery in well-trained athletes (Paulsen et al., 2012). On the other hand, recovery following SS sessions has not yet been investigated, and possible differences in recovery rates following HS and SS training are therefore not clear. However, the acute responses of [La − ] and hGH seems to be higher after high intensity training and sprint training compared to HS training (Kraemer et al., 1990;Godfrey et al., 2003;Stokes et al., 2004), and Mb and CK levels are showed to be higher after HS training (Brancaccio et al., 2007(Brancaccio et al., , 2010Speranza et al., 2007). In addition to power production and perceptual measurements, various blood (biochemical) markers may provide a more detailed picture of how the various systems are loaded during a training session, as well as the subsequent rate of recovery (Brancaccio et al., 2010;Bessa et al., 2016). For example, creatine kinase (CK) (Koch et al., 2014) and myoglobin (Mb) (Speranza et al., 2007;Soares and Bozza, 2016) provide an indication of muscle damage, while human growth hormone (hGH) and [La − ] during and immediately after exercise (Smilios et al., 2003;Gladden, 2004;Stokes et al., 2004) are regarded markers of the metabolic disturbances following training sessions. The primary aim of the current study was to compare power production and biochemical markers of metabolic stress and muscle damage following a HS and a SS training session, as well as the 45-h recovery rates in well-trained cyclists. The secondary aim was to compare the changes in these values compared to their baseline levels. We hypothesized that biochemical indicators of muscle damage recover more slowly after HS compared to SS training, whereas acute metabolic responses are altered to a greater extent after SS training in well-trained cyclist. Participants Twelve well-trained male cyclists gave their written, informed consent to participate in the study. All cyclists had experience with HS and SS training from their daily training. 
To be included, the following criteria was fulfilled: (1) competitive cycling at national or international level, (2) maximal oxygen uptake (VO 2max ) of ≥60 mL·kg −1 ·min −1 , (3) implemented strength training including squat, hip flexion and leg press twice a week for a minimum of 4 weeks before testing, and (4) currently healthy and free from injury. One participant was excluded from the study due to illness. Baseline characteristics of the participants are presented in Table 1. The Regional Committee for Medical and Health Research Ethics in West Norway evaluated our study not to include any medical or health related ethical concerns, and the study was then approved by The Norwegian Data protection Authority. After the submaximal test, each participant cycled at low intensity for 10 min before a continuous, incremental cycle ergometer test to volitional exhaustion determinedVO 2max . The test began one stage below the workload that elicited [La − ] of 4 mmol·L −1 in the submaximal test, with increments of 25 W every minute. HR was measured continuously throughout the test, and the peak value recorded was defined as HR max . Expired gas was collected and analyzed continuously using a computerized metabolic system with mixing chamber (Oxycon Pro, Erich Jaeger GmbH, Hoechberg, Germany), calibrated before every test with certified calibration gases of known concentrations and a 3-L calibration syringe (CareFusion, Hoechberg, Germany). The determination of maximalVE,VO 2 , andVCO 2 and aerobic power (Watt) was defined as the highest average of two consecutive 30 s measurements. Familiarization Sessions Familiarization to the specific sessions used in the present study was performed 1 week before the first experimental session. The 6RM load for each exercise in the HS session was defined during the familiarization session for each participant. During the cycling familiarization session, the pedaling resistance applied for the sprints was individually adjusted using an air braked bicycle ergometer (WattBike, WattBike Ltd, Nottingham, UK). This bike was used for both familiarization and experimental trials. To ensure that the participant achieved the highest possible power output during the 8-s sprints at a cadence of 130-140 revolutions per min (RPM) (Hopker et al., 2010), each participant performed at least three sprint at different resistance level with 2 min recovery in between. Procedures All participants were instructed to abstain from strenuous exercise, to perform the same volume of low-intensity training and to have similar diet 48 h before both experimental training sessions, and until the final collection of recovery data 47 h post exercise were performed. All meals, daily activity and sleep were registered for all participants during the data collection period. Prior to both training sessions, participants arrived to the laboratory at the same time following an overnight fast of at least eight h for the baseline blood sample. A standardized breakfast was then served 1.5 h before each training session. Body Composition Before breakfast, a direct segmental multi-frequency bioelectrical impedance analysis (DSM-BIA) was performed using the In-Body720 body composition analyzer (Biospace, Tokyo, Japan) to determine body composition. Body mass (kg) and body fat percentage were used in the analyses. 
CMJ and Peak Cycling Power A continuous 15 min warm-up on a cycle ergometer (Tomahawke IC7, ICG, Germany) at an intensity of 70-80% of HR max was performed before both the CMJ and the peak cycling power test. The CMJ was performed using both legs on a three-dimensional force plate (Kistler 9286B, Kistler Instruments AG, Winterthur, Switzerland) immediately before and 23 and 47 h after each training session. The CMJ started from an upright position and the participants were instructed to descend to a self-chosen depth before jumping vertically with maximal effort. Throughout the CMJ, participants used a hand-on-hips position. Maximum jump height (cm) was calculated using Kistler Measurement, Analysis and Reporting Software (MARS, 2015, S2P, Lubljana, Slovenia). Participants performed at least three jumps, or continued until performance decreased. The best attempt was used in the final analyses. Peak power (P peak ) was measured during a 4-s all-out sprint test. The test was completed 23 h after each training session (Herbert et al., 2015;Wainwright et al., 2017). The 4-s all-out test was performed from a standstill start in a seated position, with maximal acceleration from the start, with similar settings as for the SS session. Baseline P peak was defined as the highest peak power during the 4 first seconds obtained in one of the 12 sprint intervals in the SS session. HS and SS Sessions HS and SS sessions were performed immediately after the CMJ test. Total duration of the SS and HS sessions, including warmup, were approximately 45 min. The HS session consisted of squats with both legs in a smith machine (TKO, Houston, USA), unilateral leg-press (Mobility, Norway), and unilateral hip flexor exercises in cable cross apparatus (Gym 80, Gelsenkirchen, Deutschland), organized as 3 sets of 6 repetition maximum (RM) per exercise, separated by 3 min recovery between sets and 5 min between each exercise. Participants were instructed to carry out the concentric phase with maximal effort, while the eccentric phase was completed as a controlled movement lasting 2 s. The SS session consisted of three sets of four 8-s intervals with maximal effort, performed from a standstill start in a seated position. Participants started with the individually chosen preferred leg and sitting position in all sprints. Each repetition was separated by 2 min active recovery and each set by 5 min active recovery, consisting of cycling at 70% HR max . All SS sessions were completed on a cycle ergometer (WattBike, WattBike Ltd, Nottingham, UK), which allowed measurement of power output (Hopker et al., 2010). The HS and SS sessions were carefully designed to mirror typical training sessions implemented by Norwegian world-class cyclists. HR was monitored for each set during both HS and SS sessions (Polar V800, Kempele, Finland). In the analyses, session peak HR (HR peak ) is defined as the mean of the highest obtained HR within each set. RPE was recorded using Borg scale 6-20 (Borg, 1982;Borg et al., 1985) and used as following: immediately after (0 min) the session, 30 min and 1 h post-exercise, the participants were asked the following question: "how exhausted are you in your legs now?" (Day et al., 2004;Wallace et al., 2009). Muscle soreness was measured on a 1-10 scale, 21 and 45 h post-exercise. 
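As a rough sketch of how a jump height can be derived from the force-plate recording described above (the actual MARS processing pipeline is not detailed in the text, so this is only an assumed impulse-momentum implementation with invented parameter names and threshold):

```python
import numpy as np

def cmj_jump_height(fz, mass_kg, fs_hz, g=9.81):
    """Estimate countermovement-jump height (m) from the vertical ground
    reaction force fz (N) sampled at fs_hz, via the impulse-momentum approach."""
    takeoff = int(np.argmax(fz < 0.05 * mass_kg * g))  # first sample with ~zero force
    a = fz[:takeoff] / mass_kg - g                     # net vertical acceleration
    v_takeoff = np.trapz(a, dx=1.0 / fs_hz)            # vertical velocity at takeoff
    return v_takeoff ** 2 / (2.0 * g)                  # height from projectile motion
```

Real traces additionally require filtering and a quiet-standing weighing phase to calibrate body weight; the simpler flight-time relation h = g·t_f²/8 gives a quick cross-check.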
Calculation of Power and Work In the HS session, work done (kJ) and power were calculated using the distance and speed of the lifted weights, respectively, by a linear encoder (Muscle Lab, Ergotest Technology, Langesund, Norway) (Bosco et al., 1995). Data were acquired and analyzed using Musclelab software (Musclelab version 8.26 Ergotest Technology). In the squat exercise, 90% of the body weight was added to the external load in the calculation. For the leg-press and hip-flexor exercises, only external load was used. Work done in the eccentric phase was calculated using 1/3 of concentric work (Knuttgen et al., 1982) in all exercises. Average velocity was calculated through the whole range of motion utilized to perform a complete repetition and multiplied by the resistance (in N) to obtain average power (in W) (Bosco et al., 1995). The average power (P avg ) in each set was calculated as the P avg in each repetition divided by the number of repetitions. In the SS session, RPM, P avg , and peak power (P peak ) were sampled using Expert software v2.6020 (WattBike Ltd, Nottingham, UK). Work done in each interval was calculated as the P avg during the interval multiplied by the interval duration. Work was calculated for the 12 × 8-s intervals, excluding the low intensity active recovery in-between. Due to missing data in the HS session, work done and power for both sessions are only calculated for 7 cyclists that were representative for the overall performance level of all cyclists ( Table 1). Blood Sampling and Processing Blood samples were collected pre-exercise (at baseline) and postexercise (5, 30 min, 1, 21, and 45 h). On each occasion, blood was sampled from an antecubital vein into three vacutainers containing K2-EDTA, lithium heparin and clot activator for serum separation (BD Life Sciences, New Jersey). Hematological analyses were performed within 2 h of collection on the K2-EDTA sample. The serum sample was kept at room temperature for approximately 30 min prior to centrifugation at 1,300 × g for 10 min. Serum was then frozen at −80 • C until analyses. The lithium heparin samples were stored on ice and centrifuged at 1,800 × g and 4 • C for 10 min within 90 min of collection. Plasma was stored at −20 • C until analyses. Serum CK was determined using coupled enzymatic reactions, while serum Mb was measured using a turbidimetric immunoassay, both using the ABX Pentra C400 (Bergman Diagnostica, Horiba Medical, France) according to the manufacturer's protocol. Serum hGH was determined using a solid-phase, two-site chemiluminescent immunometric assay by the IMMULITE 2000 (Siemens Diagnostics, Germany), while plasma lactate was analyzed by the ABX Pentra C400 according to the manufacturer's protocol. To eliminate inter-assay variance, all samples for a particular assay were thawed once and analyzed in the same assay run. Quality controls for the individual variables were within the acceptable ranges given by manufacturers. All data were corrected for change in plasma volume (Dill and Costill, 1974). Statistical Analyses Data are presented as mean ± standard deviation (SD). For serum level of CK, Mb, [La − ] and hHG, as well as for CMJ performance, DOMS, peak cycling power and maximal RPM obtained in the 4-s all-out sprint test, the repeated measures ANOVA analyses were employed to evaluate main effects of time, training sessions (HS and SS), and the interaction effects between time and sessions. 
Paired sample t-tests were performed for comparisons between sessions regarding work done, RPE, session HR peak and session [La − ], and for post hoc comparisons between and within training sessions for all variables. Statistical significance was determined at an alpha level of <0.05. SPSS version 24.0 (IBM Corporation, Armonk NY, USA) for Windows was used for all the statistical analyses. Training Load Total work done (kJ) was about two times larger in the SS session compared to the HS session (p < 0.001), with correspondingly higher HR peak (p < 0.001) and [La − ] (p < 0.001) (Table 2). Power Production and Perceptual Responses There were no differences in CMJ height (23 and 47 h post exercise), nor in P peak or RPM obtained in the 4-s all-out sprint test (23 h post exercise), between the HS and SS sessions, and no changes from baseline values for either of the sessions (Table 2). For RPE, there was a main effect of time (0 min, 30 min and 1 h post exercise) (p < 0.001), and an interaction between time and session (p = 0.022) with higher RPE (p = 0.010) immediately after the SS session compared to the HS session. There was a main effect of session for DOMS (p = 0.043), with a significantly higher DOMS score 45 h after the HS session compared to the SS session (p = 0.010), with similar levels after 21 h. Biochemical Markers There were main effects of time for CK (p = 0.001), Mb (p < 0.001), hGH (p < 0.001) and [La − ] (p < 0.001), and main effects of session for Mb (with higher levels for HS than SS; p = 0.004), for hGH (with higher levels for SS than HS; p = 0.003) and for [La − ] (with higher levels for SS than HS; p < 0.001). Interaction effects of time × session were found for the levels of CK (p = 0.002), Mb, hGH, and [La − ] (all p < 0.001; Figure 1). Higher levels of CK and Mb were reported following the HS session, whereas higher levels of hGH and [La − ] were detected following the SS session. Creatine Kinase CK was higher following the HS compared to the SS session both 21 h (p = 0.023) and 45 h (p = 0.038) post exercise (Figure 1A). Moreover, increased CK from baseline was seen 5 min, 30 min, 1, 21, and 45 h after completing the HS session (p < 0.001, p < 0.001, p < 0.001, p = 0.015, p = 0.034, respectively). An increase in CK from baseline was only seen 5 min and 30 min after completing the SS session (p = 0.002 and p = 0.007, respectively), while no difference was seen 1, 21, and 45 h post session (Figure 1A). Myoglobin Mb was higher following the HS compared to the SS session 5 min, 30 min, and 1 h post exercise (p = 0.002, p = 0.004, and p = 0.005, respectively; Figure 1B). Increased Mb from baseline was seen 5 min, 30 min and 1 h both after completing the HS (p = 0.002, p = 0.003, and p = 0.002, respectively) and the SS session (p = 0.009, p = 0.001, and p = 0.003, respectively). No difference from baseline was seen 21 and 45 h post exercise in either of the sessions (Figure 1B). Lactate [La − ] was higher following the SS compared to the HS session 5 min, 30 min, and 1 h post exercise (p < 0.001 for all comparisons) (Figure 1C). An increase in [La − ] from baseline was seen 5 min, 30 min and 1 h after completing both the HS and the SS session (p < 0.001 for all comparisons), while no differences were seen 21 and 45 h post sessions (Figure 1C). Human Growth Hormone The level of hGH was higher following the SS compared to the HS session 5 min, 30 min and 1 h post exercise (p < 0.001, p = 0.006, and p = 0.019, respectively; Figure 1D).
An increase in hGH from baseline was seen 5 min post exercise following both the HS (p = 0.016) and the SS (p = 0.002) (Figure 1D). DISCUSSION The primary aim of the current study was to compare power production and biochemical markers of metabolic stress and muscle damage following a HS and a SS training session designed to mirror typical training sessions implemented by world-class cyclists. A main result was higher levels of [La − ] and hGH 5 min, 30 min and 1 h following the SS session compared to the HS session, as well as higher levels of Mb following the HS session compared to the SS session at the same time points. However, no differences between sessions were found 21 h post exercise. As expected, the serum level of CK was higher 21 and 45 h following the HS session compared to the SS session. In addition, DOMS was higher 45 h after the HS session compared to the SS session. There was no difference in CMJ performances 23 or 47 h post exercise, or in P peak and RPM obtained in the 4-s all-out sprint test 23 h post exercise between the HS and SS sessions. The inherent differences in load exerted from the HS and SS sessions led to subsequent diversities in the acute responses, which is in line with previously published studies in this area (Kraemer et al., 1990;Godfrey et al., 2003;Mougios, 2007;Speranza et al., 2007;Bishop et al., 2008;Coffey et al., 2009;Brancaccio et al., 2010;Koch et al., 2014;Bessa et al., 2016;Soares and Bozza, 2016). However, while none of the previous studies compared the acute responses between sprint and strength training, the novelty of our approach were the paired samples design used to compare acute responses and recovery rates following typical training sessions. In addition, we used a complex battery of biochemical markers indicating metabolic stress and muscle damage among highly trained cyclists. The acute responses following these sessions seemed to be influenced both by the total work done that was larger for the SS session and the peak and average power/load that was higher during the HS session. Specifically, the work done during the twelve 8-s maximal cycling sprints was about 2 times larger compared to the HS training session containing three sets of 6RM using three different strength exercises. This was reflected in higher RPE scores, HR peak and levels of [La − ] following the SS compared to the HS session. In comparisons, the higher levels of [La − ] and hGH following the SS session indicate larger metabolic disturbances than for the HS session (Smilios et al., 2003;Gladden, 2004;Stokes et al., 2004). The increase of hGH after high-intensity exercises are well recognized (Godfrey et al., 2003), since greater demands of anaerobic glycolysis stimulates serum hGH elevations (Kraemer et al., 1990). Overall, the higher levels of [La − ] and hGH 5 min, 30 min and 1 h after the SS session compared to the HS session, coincide with the larger work done and most likely reflects higher metabolic stress than the HS session. These differences in recovery rates between sessions provide new insight that might help coaches and athletes to understand the load and recovery rates from such sessions. In this case, these markers remained back to baseline within 1 day after both sessions, indicating that well-trained cyclists are metabolically recovered and can train as normal on the subsequent day. 
Furthermore, the higher levels of Mb until 1 h post exercise and greater CK levels 21 and 45 h post exercise reported after the HS session compared to the SS session indicates larger muscle damage (Mougios, 2007;Speranza et al., 2007;Bishop et al., 2008;Brancaccio et al., 2010;Koch et al., 2014;Bessa et al., 2016;Soares and Bozza, 2016). It has previously been shown that weight bearing exercises, including eccentric muscle actions, cause the highest increase in serum level of CK (Brancaccio et al., 2007(Brancaccio et al., , 2010Koch et al., 2014) and Mb (Speranza et al., 2007;Soares and Bozza, 2016). In the present study, exercises in the HS session were performed with slow movements in the eccentric phase and with maximal effort and movement velocity in the concentric phase, while the SS session was performed with lower resistance and less eccentric action. This difference may have led to more muscle damage for the HS session, which subsequently requires longer time to be fully recovered-an important and novel finding to be aware of when implementing strength training in well-trained cyclists' training schedule. In the present study, a relatively low DOMS score was reported after both sessions, although there was a significantly higher score after the HS session compared to the SS session 45 h post session. Our findings are generally in line with several previous studies where DOMS was increased 24-72 h following a strength training session (Armstrong, 1984;Nosaka et al., 2002;Kraemer and Ratamess, 2004;Bishop et al., 2008). However, the relatively low DOMS scores in the present study might be due to the higher fitness levels of our participants, as well as their high level of familiarization to such sessions. Differences in both acute responses and recovery rates between athletes of different training levels or with various degrees of familiarization are important distinctions to be aware of when comparing studies. However, the use of CK levels and DOMS score as measures of recovery is controversial (Nosaka et al., 2002), and previous studies show no correlation between changes in CK levels, DOMS scores and performance tests measuring force production following fatiguing events (Byrne et al., 2004). This also applies to our data, where the development of CK, DOMS and power production in CMJ and 4-s all-out sprints follows different patterns. According to Paulsen et al. (2012), force production during performance tests are essential measures of recovery status. Here we measured CMJ height and P peak on a cycle ergometer, which reflects the ability to produce force and power, and should provide valid measures of recovery status in that context. However, we found no difference between the HS and SS session in either of these measures, which are likely explained by the higher fitness levels and familiarization of our cyclists compared to previous studies. Furthermore, no difference from baseline values was found neither after the HS nor after the SS session. This is in contrast to the decline in force production previously reported both after HS and SS training (Raastad and Hallen, 2000;Andersson et al., 2008;Haugvad et al., 2014;Gathercole et al., 2015). Although the relatively rapid recovery among our participants might be due to their high fitness level, we cannot exclude that endurance athletes may have limited ability to produce force and power, and thereby induce less muscle damage compared to studies done on power-trained athlete groups. 
Less muscle damage in our group may also be influenced by our inclusion criterion that that HS and SS training should have been part of the cyclists' weekly training before entering our study. CONCLUSION Our findings indicate that SS training provides greater metabolic stress than HS training, whereas HS training leads to more muscle damage compared to that caused by SS training. However, although higher CK level and muscle soreness were present 45 h post the HS training session the ability to produce power remained back to baseline already 23 h after both training sessions indicating a rapid rate of recovery in our well-trained cyclists. Practical Application Based on our findings, it appears that sprint training provides a different type of response than strength training among welltrained cyclists, with higher metabolic disturbances after SS training and greater muscular damage subsequent to HS training. This must be taken into account by coaches and athletes when including such sessions in the weekly training plans. However, it seems like both types of training require relatively short recovery times compared to previous studies on less trained participants. Such sessions will therefore not substantially influence sessions performed approximately 24 h later in well-trained cyclist who are already familiar with sprint and strength training. Still, there seems to be indications of muscle damage and perceptual feelings of muscle fatigue the first 45 h after HS training that athletes/coaches should be aware of. ETHICS STATEMENT This study was carried out in accordance with the recommendations of The Norwegian Data protection Authority with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the The Norwegian Data protection (45048). AUTHOR CONTRIBUTIONS MK: Planning, data collection, analyzing, and writing; ØS, ET, GP, and BR: Planning, analyzing, and writing; IS: Analyzing and writing; EE, IN, KR, and AR: Blood data collection, analyzing, writing; VI and KS: Data collection and analyzing; HG: Data collection, analyzing, and writing. Barnett, A. (2006)
6,743.8
2018-03-05T00:00:00.000
[ "Biology" ]
Coulomb engineering of the bandgap and excitons in two-dimensional materials The ability to control the size of the electronic bandgap is an integral part of solid-state technology. Atomically thin two-dimensional crystals offer a new approach for tuning the energies of the electronic states based on the unusual strength of the Coulomb interaction in these materials and its environmental sensitivity. Here, we show that by engineering the surrounding dielectric environment, one can tune the electronic bandgap and the exciton binding energy in monolayers of WS2 and WSe2 by hundreds of meV. We exploit this behaviour to present an in-plane dielectric heterostructure with a spatially dependent bandgap, as an initial step towards the creation of diverse lateral junctions with nanoscale resolution. The precise and efficient manipulation of electrons in solid-state devices has driven remarkable progress across fields from information processing and communication technology to sensing and renewable energy. The ability to engineer the electronic bandgap is crucial to these applications 1 . Several methods currently exist to tune a material's bandgap by altering, for example, its chemical composition, spatial extent (quantum confinement), background doping or lattice constant via mechanical strain 2 . Such methods are typically perturbative in nature and not suitable for making arbitrarily shaped, atomically sharp variations in the bandgap. Consequently, there is a motivation to approach this important problem from a fresh perspective. The emerging class of atomically thin two-dimensional (2D) materials derived from bulk van der Waals crystals offers an alternative route to bandgap engineering. Within the family of 2D materials, much recent research has focused on the semiconducting transition-metal dichalcogenides (TMDCs)-MX 2 with M = Mo, W and X = S, Se, Te 3 . In the monolayer limit, these TMDCs are direct-bandgap semiconductors with the optical gap in the visible and near-infrared spectral range 4,5 . They combine strong inter- and intraband light-matter coupling 6,7 with intriguing spin-valley physics [8][9][10] , high charge carrier mobilities 11,12 , ready modification of the in-plane material structure [13][14][15][16] and seamless integration into a variety of van der Waals heterostructures 17 . Importantly, the Coulomb interactions between charge carriers in atomically thin TMDCs are remarkably strong [18][19][20][21] . This leads to a significant renormalization of the electronic energy levels and an increase in the quasiparticle bandgap. The Coulomb interactions are also reflected in the binding energies of excitons, that is, bound electron-hole pairs 2 , that are more than an order of magnitude greater in TMDC monolayers than in typical inorganic semiconductors [22][23][24][25][26] . The strength of the Coulomb interaction in these materials originates from weak dielectric screening in the 2D limit 21,27,28 . For distances exceeding a few nanometres, the screening is determined by the immediate surroundings of the material, which can be vacuum or air in the ideal case of suspended samples. More generally, the interaction between charge carriers is highly sensitive to the local dielectric environment 23,24,[26][27][28][29][30][31][32] , as seen in measured changes of the exciton Bohr radius 33 and in theoretical analysis of the environmental screening 34,35 . 
Correspondingly, both the electronic bandgap and the exciton binding energy are expected to be tunable by means of a deliberate change of this environment, as illustrated in Fig. 1a, like the influence of a solvent on the properties of molecules, quantum dots, carbon nanotubes and other nanostructures suspended in solution [36][37][38] . In addition, passivated and chemically inert van der Waals surfaces allow atomically thin layers to be brought into close proximity while still retaining the intrinsic properties and functionality of the individual components 17 . These observations motivate a programme to explore the concept of Coulomb engineering of the bandgap by local changes in the dielectric environment. This strategy offers a means of locally tuning the energies of the electronic states in 2D materials, even allowing in-plane heterostructures down to nanometre length scales 34 . As a result, this approach not only effectively demonstrates the validity of fundamental physics with respect to the Coulomb interaction in atomically thin systems, but offers a viable opportunity to directly harness these many-body phenomena for future technology. In this report, we provide direct experimental demonstration of control of the bandgap and exciton binding energy in 2D materials using Coulomb engineering through the modification of the local dielectric environment. By placing layers of graphene and hexagonal boron nitride above and below monolayers (1L) of WS 2 and WSe 2 , we achieve tuning of the electronic quasiparticle bandgap, as well as of the exciton binding energy of the two TMDC monolayers by several 100's of meV. We note that graphene is particularly well-suited to demonstrate and explore the concept of dielectric heterostructures. It combines a high dielectric screening with the possibility of adding an arbitrary number of additional layers as thin as only 3 Å. Furthermore, the TMDC/graphene structures have been heavily studied recently in a variety of contexts with potential applications in optoelectronics and photovoltaics [39][40][41][42] . Screening is found to be maximized for just a few layers of graphene as the surrounding dielectric, suggesting that Coulomb-engineered bandgaps can be realized with a spatial resolution on the nanoscale. Moreover, an in-plane heterostructure with a spatially dependent electronic bandgap is shown to exhibit a potential well on the order of more than 100 meV. Our results are supported by calculations employing a quantum mechanical Wannier exciton model 21 . The dielectric screening leading to the bandgap renormalization can be treated in a semiclassical electrostatic framework that accounts for the underlying substrate and the nanostructured dielectric environment, which we computed using a recently developed quantum electrostatic heterostructure approach 30 . Results Coulomb engineering of monolayer WS 2 . An optical micrograph of a typical sample, 1L WS 2 partially covered with bilayer (2L) graphene, is presented in Fig. 1b. To monitor the quasiparticle bandgap of the material, we first identify the energies of the excitonic resonances in different dielectric environments using optical reflectance spectroscopy. The relationship between exciton Rydberg states and the electronic bandgap is shown in the schematic illustration of the optical response of a 2D semiconductor in Fig. 1c. 
The Coulomb attraction between electrons and holes leads to the emergence of bound exciton states below the quasiparticle bandgap 2,43,44 , which are labelled according to their principal quantum number n = 1, 2, 3, …, analogous to the states of the hydrogen atom. (Throughout the rest of the manuscript we omit the term quasiparticle for clarity of presentation.) The difference between the bandgap E gap and the exciton resonance energies defines the respective exciton binding energies. In particular, the energy Δ 12 between the exciton ground state (n = 1) and the first excited state (n = 2) scales with the ground state exciton binding energy E B . Along with the experimentally determined transition energy E 1 of the exciton ground state, we can determine the electronic bandgap via E gap = E 1 + E B . Typical linear reflectance contrast spectra, ΔR/R = (R sample − R substrate )/R substrate , of the bare 2L graphene, 1L WS 2 and the resulting heterostructure at T = 70 K are presented in Fig. 1d. For such ultra-thin layers with moderate reflectance contrast signals on transparent substrates, the quantity ΔR/R is predominantly determined by the imaginary part of the dielectric function, which is proportional to the optical absorption 45,46 . In the spectral region shown, the response of 1L WS 2 is dominated by the creation of so-called A excitons at the fundamental optical transition in the material, at the K and K′ points of the hexagonal Brillouin zone. In particular, the ground state (n = 1) excitonic resonance occurs at 2.089 eV. The first excited state (n = 2) appears as a smaller spectral feature at 2.245 eV, with an energy separation between the two states of Δ 12 = 156 meV. In addition, the first derivatives of ΔR/R are presented in Fig. 1e, where the spectral region in the range of the n = 1 state is scaled by a factor of 0.03 for better comparison. Here, the energies of the peaks correspond to the points of inflection of the asymmetric derivative features, as indicated by dashed lines for the n = 2 states. Finally, the shoulder on the low-energy side of the n = 1 peak at 2.045 eV arises from charged excitons, indicating slight residual doping in the WS 2 material 47,48 . Overall, the 1L WS 2 response matches our previous observations on uncapped samples supported on fused silica 24,47 , consistent with an exciton binding energy on the order of 300 meV. For bilayer graphene, we recover the characteristic flat reflectance contrast over the relevant spectral range 49 . In the case of WS 2 capped with graphene, the overall reflectance contrast is offset by the graphene reflectance, similar to findings in TMDC/TMDC heterostructures 50 . Most importantly, however, we observe pronounced shifts of the WS 2 exciton resonances to lower energies, where the n = 1 transition and the n = 2 states are now located at 2.060 eV and 2.167 eV, respectively (Fig. 1d,e). The corresponding decrease of Δ 12 from 156 to 107 meV is indicative of a strong reduction in the exciton binding energy and bandgap. In particular, the absolute shift of the n = 2 state by almost 70 meV defines the minimum expected decrease in the bandgap. More quantitatively, by assuming a non-hydrogenic scaling similar to that in ref. 24, that is, E B = 2Δ 12 , the reduction in exciton binding energy is estimated to be on the order of 100 meV, from 312 meV in bare WS 2 to 214 meV in WS 2 capped by 2L graphene. From E gap = E 1 + E B , we infer a bandgap for bare WS 2 of 2.40 eV, reducing to 2.27 eV in the WS 2 /graphene heterostructure. 
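As a quick numerical check of the inference just described, the following minimal sketch (Python) uses only the resonance energies quoted above together with the non-hydrogenic scaling E B = 2Δ 12 assumed in the text; it is an illustrative back-of-the-envelope calculation, not part of the original analysis code.

```python
# Sketch: infer exciton binding energy and bandgap from the measured resonances,
# using the non-hydrogenic scaling E_B = 2 * Delta_12 assumed in the text.
def bandgap_from_resonances(E1_eV, E2_eV, scaling=2.0):
    """E1, E2: n=1 and n=2 exciton energies (eV); returns (Delta_12, E_B, E_gap) in eV."""
    delta12 = E2_eV - E1_eV          # exciton peak separation
    E_B = scaling * delta12          # ground-state binding energy estimate
    E_gap = E1_eV + E_B              # quasiparticle bandgap: E_gap = E_1 + E_B
    return delta12, E_B, E_gap

for label, E1, E2 in [("bare 1L WS2", 2.089, 2.245),
                      ("2L-graphene-capped WS2", 2.060, 2.167)]:
    d12, EB, Eg = bandgap_from_resonances(E1, E2)
    print(f"{label}: Delta_12 = {1e3*d12:.0f} meV, E_B = {1e3*EB:.0f} meV, E_gap = {Eg:.2f} eV")

# Expected output: ~156 meV / 312 meV / 2.40 eV (bare) and ~107 meV / 214 meV / 2.27 eV
# (capped), i.e. a bandgap reduction of roughly 130 meV, as quoted in the text.
```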
We thus see a 130 meV decrease in the bandgap energy from the presence of the capping layer. To understand these experimental findings more intuitively, we recall that although the excitons are confined to the WS 2 layer, the electric field between the constituent electrons and holes permeates both the material and the local surroundings (Fig. 1a). In particular, the screening for larger electron-hole separations is increasingly dominated by the dielectric properties of the environment. Therefore, the strength of the Coulomb interaction is reduced by the addition of graphene layers on top of WS 2 , leading to a decrease in both the exciton binding energy and the bandgap. Nanoscale sensitivity of Coulomb engineering. The spatial extent of the modulation is an important aspect of our approach to dielectric bandgap engineering. We have been able to probe this issue spectroscopically with sub-nanometre precision. To do so, we track the change in the WS 2 bandgap due to dielectric screening when the semiconductor is capped by 1, 2 or 3 layers of graphene (Supplementary Note 3 and Supplementary Fig. 5). The extracted exciton peak separation energy Δ 12 and the corresponding evolution of the bandgap are presented in Fig. 2a,b, respectively. Remarkably, we observe the strongest change already from the first graphene layer, which is followed by rapid saturation with increasing thickness within experimental uncertainty. This result strongly suggests that the change in bandgap should also occur on a similar ultra-short length scale at the in-plane boundary of the uncapped and graphene-capped WS 2 , consistent with predictions from ref. 34. For a more precise analysis of our findings we turn to a Wannier-like exciton model 21 , where the non-local screening of the electron-hole Coulomb interaction leads to environmental sensitivity. To atomistically handle complex dielectric environments, we employ the recently introduced quantum electrostatic heterostructure (QEH) approach presented in ref. 30. Within this model, the electrostatic potential between electrons and holes confined to a 2D layer can be obtained for nearly arbitrary vertical heterostructures, taking into account the precise alignment of the individual materials and the resulting spatially dependent dielectric response. The exciton states are subsequently calculated by solving the Wannier equation in the effective mass approximation with an exciton reduced mass of 0.16 m 0 as obtained from ab initio calculations 21 . To account for the dielectric screening from the environment, mainly through the underlying fused silica substrate and potential adsorbates such as water, we adjust the effective dielectric screening below the 2D layer, resulting in Δ 12 = 188 meV and E B = 289 meV, roughly matching experimental observations. Then, additional graphene layers are added on top of the WS 2 monolayer in the calculation. The theoretically predicted energy separation Δ 12 is plotted in Fig. 2a as a function of the number of graphene layers and compared to experiment. The calculations reproduce both the abrupt change and the subsequent saturation of Δ 12 with graphene thickness. Furthermore, the absolute energy values are in semi-quantitative agreement with the measurements, supporting the attribution of the measured change of Δ 12 to the dielectric screening from adjacent graphene layers. 
The model also agrees with a classical electrostatic screening theory for the limiting cases of an uncapped WS 2 monolayer on fused silica and for a layer fully covered with bulk graphite on top, the results of which are indicated by dashed lines in Fig. 2a (see Supplementary Note 2 for details). The calculated exciton binding energy changes from 290 meV for uncapped WS 2 to 120 meV for the case of a trilayer graphene heterostructure. As previously discussed, the binding energies together with the absolute energies of the exciton ground state resonances can be used to infer the size of the bandgap. The evolution of the bandgap and the corresponding n = 1 and n = 2 exciton transition energies are presented in Fig. 2b. The binding energies obtained from the QEH model are compared with experimentally determined limits from the relation E B ∝ Δ 12 by assuming a non-hydrogenic scaling E B = 2Δ 12 as was observed for a single WS 2 layer on SiO 2 (ref. 24) or conventional 2D hydrogenic scaling with E B = (9/8)Δ 12 for a homogeneous dielectric. These two relations provide, respectively, boundaries for the scaling in generic heterostructures of 1L TMDCs embedded in a dielectric environment with higher dielectric screening than the SiO 2 support and lower dielectric screening than the corresponding bulk crystals. In general, the scaling of E B with Δ 12 converges towards the 2D-hydrogen model as the screening of the surroundings approaches that of the bulk TMDC. For the case of trilayer graphene, this simple estimate implies a bandgap reduction of at least 150 meV and at most 230 meV. Flexibility of material systems and configurations. In addition to the graphene-capped WS 2 samples, a variety of heterostructures were investigated in a similar manner. These include 1L WS 2 encapsulated between two graphene layers, graphene-capped 1L WSe 2 , graphene-supported 1L WSe 2 and 1L WSe 2 on an 8 nm layer of hexagonal boron nitride (hBN). In all cases, a decrease in Δ 12 separation was observed with increasing dielectric screening of the environment (see Supplementary Notes 4 and 5 including Supplementary Figs 6 and 7 as well as the Supplementary Table 1 for individual reflectance spectra and additional sample details). A summary of the results is presented in Fig. 3a, including experimentally obtained n = 1 and n = 2 transition energies, as well as the corresponding shifts of the bandgap, estimated as above (see also Supplementary Fig. 8 for the shift of the B exciton states in WSe 2 ). The bandgap of WSe 2 can be thus tuned by more than 100 meV and the largest shift of almost 300 meV is observed for graphene-encapsulated WS 2 , the structure with the highest dielectric screening. For comparison, the influence of an arbitrary dielectric environment is presented in Fig. 3b, which shows the calculated exciton binding energy of 1L WS 2 encapsulated between two thick layers of varying dielectric constants. As we have shown, the change in the bandgap is roughly the same as the change in the binding energy and thus can be as high as 500 meV (corresponding to the intrinsic value of the exciton binding energy for a sample suspended in vacuum). In-plane dielectric heterostructure. Finally, we demonstrate an in-plane 2D semiconductor heterostructure with a spatially dependent bandgap profile by constructing a spatially varying dielectric environment surrounding the semiconductor. We scan across the structure (cf. Fig. 1b) through regions of bare WS 2 and WS 2 covered by a bilayer of graphene. 
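Before following the spatial scan just introduced, here is a small worked example of how the two scaling relations above bracket a dielectrically induced bandgap shift. It uses the measured values for bare and 2L-graphene-capped WS 2 quoted earlier (Fig. 1); the 150-230 meV trilayer estimate in the text relies instead on the Fig. 2 data, which are not tabulated here, so the numbers below are only illustrative.

```python
# Sketch: bracket the bandgap reduction using the two scaling relations quoted above,
#   non-hydrogenic:  E_B = 2 * Delta_12        (upper bound on the shift)
#   2D hydrogenic:   E_B = (9/8) * Delta_12    (lower bound on the shift)
# Inputs are the measured values for bare vs 2L-graphene-capped WS2 (Fig. 1).
E1 = {"bare": 2.089, "capped": 2.060}          # n=1 exciton energy (eV)
delta12 = {"bare": 0.156, "capped": 0.107}     # n=2 minus n=1 separation (eV)

for name, s in [("non-hydrogenic (E_B = 2*D12)", 2.0),
                ("2D hydrogenic (E_B = 9/8*D12)", 9.0 / 8.0)]:
    E_gap = {k: E1[k] + s * delta12[k] for k in E1}
    shift_meV = 1e3 * (E_gap["bare"] - E_gap["capped"])
    print(f"{name}: bandgap reduction ~ {shift_meV:.0f} meV")

# The two scalings bracket the 2L-graphene-induced reduction between roughly 84 and 127 meV,
# illustrating how the quoted lower and upper bounds arise for other capping thicknesses.
```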
The corresponding path is illustrated schematically in Fig. 4c and in the inset. First-order derivatives of the reflectance contrast spectra are presented in Fig. 4a,b in the spectral range of the WS 2 exciton n = 1 and n = 2 resonances, respectively. Each spectral trace corresponds to a different spatial position x on the sample; the bilayer graphene flake covers the WS 2 monolayer between 5 and 12 µm on the x axis. Like the data shown in Fig. 1d,e, both the ground and excited state resonances of the WS 2 excitons shift to lower energies in the presence of graphene. The peak energies are extracted from the points of inflection of the derivative, indicated by circles in Fig. 4a,b. The appearance of multiple transitions in the same spectrum reflects the limited spatial resolution (1 µm) and a small amount of the WS 2 monolayer not being in close contact with graphene (see Supplementary Note 1 and Supplementary Fig. 2 for details). The spatial dependence is presented in Fig. 4c along the path marked in the optical micrograph (inset), which includes two WS 2 /graphene in-plane junctions. As previously discussed, the induced energy shifts result in an overall decrease of the relative energy separation Δ 12 from about 160 meV down to 105 meV. Here, the binding energy is extracted by multiplying Δ 12 by the scaling factor deduced from the QEH calculations presented in Fig. 2 (1.54 and 1.40 for the bare and 2L graphene-covered sample, respectively) to obtain the bandgap at each point. The resulting bandgap profile is representative of a potential well (graphene-covered area) surrounded by two adjacent barriers at higher energies (bare sample). Model self-energy calculations on monolayer TMDCs in structured dielectric environments 34 suggest that the interface between the uncapped and capped regions should yield an in-plane type-II heterostructure. In particular, the areas capped by graphene are expected to have a higher local valence band that acts as a potential well for holes. The dielectric effect on the conduction band is predicted to be weaker, with a slightly higher energy for the capped regions leading to a small barrier for electron flow from the bare to capped regions. Since the overall energy shifts of the bandgap are larger than the thermal energy at room temperature, our results render the observed phenomenon technologically promising for applications under ambient or even high-temperature conditions. Discussion We have demonstrated a new approach to the engineering of electronic properties through local dielectric screening of the Coulomb interaction in 2D heterostructures. We have shown tuning of the bandgap and exciton binding energy in monolayers of WS 2 and WSe 2 for a variety of combinations with graphene and hBN layers. The overall shift of the bandgap ranged from 100 to 300 meV, with an estimated theoretical limit of about 500 meV. In addition, the saturation of the screening effect with the thickness of the dielectric layer is found both in theory and experiment to occur on a nanometre length scale. We have demonstrated the flexibility of the technique by examining a variety of material combinations including WS 2 , WSe 2 , graphene and hBN in several distinct configurations, with top and bottom alignment as well as in a sandwich-type structure. We emphasize that the screening effect is not restricted to any particular choice of a capping material. 
Finally, we demonstrated Coulomb engineering of a prototypical in-plane dielectric heterostructure, illustrating the feasibility of our approach. As a consequence, we expect that patterning of dielectric layers on top of these ultrathin semiconductors or placing the latter on a prefabricated substrate will allow us to explore a variety of novel devices in the 2D plane. In addition to the impact for more conventional optoelectronic devices, such as transistors, light emitters and detectors, one can envision custom-made superstructures for 2D layers that permit integration with photonic cavities, plasmonic nanomaterials and quantum emitters for the creation of new hybrid technologies. The considerable strength of the Coulomb forces in atomically thin materials is thus not only of fundamental importance, but also offers a strategy towards deterministic engineering of bandgaps in the 2D plane. Methods Sample preparation. Monolayers of WS 2 and WSe 2 , mono- and few-layer graphene, and hBN samples were produced by mechanical exfoliation of bulk crystals (2Dsemiconductors and HQgraphene). The thickness of the layers was confirmed by optical contrast spectroscopy. The heterostructures were fabricated using well-established polymer-stamp transfer techniques described in refs 50,53 for the WS 2 based samples and ref. 54 for the WSe 2 samples (see Supplementary Note 2 and Supplementary Fig. 1 for additional details). Optical spectroscopy. To study the exciton states, we performed optical reflectance measurements using a tungsten-halogen white-light source. The light was focused to a 1-2 µm spot on the sample for the measurements on WS 2 , and to a 5-10 µm spot for the measurements on WSe 2 due to larger sample sizes. The samples were kept in an optical cryostat at temperatures around 70 K and 4 K for the WS 2 and WSe 2 samples, respectively. The reflected light was spectrally resolved in a grating spectrometer and subsequently detected by a CCD (see Supplementary Notes 1 and 3 and Supplementary Fig. 5 for additional details of the experimental measurements and analysis procedures). Theoretical methods. Exciton binding energies were calculated within the Wannier-Mott model, with an exciton reduced mass obtained from density functional theory (DFT) calculations 21 . The electron-hole-screened Coulomb interaction was obtained from the quantum electrostatic heterostructure approach 30 . Data availability. The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
5,128.2
2017-05-04T00:00:00.000
[ "Physics", "Materials Science", "Engineering" ]
Tamm resonances and minibands in the models of atomic chains and superlattices The spectrum of the modelled regular superlattice which includes contacts is investigated using the transfer-matrix formalism. It is shown that the separated pair resonances in the chain spectrum (which are distributed in the permitted and in the forbidden zones, respectively) appear due to the existence of a single contact. The eroding of the noted minizones after making correlations between the separate regular segments in the superlattice is demonstrated. Similar resonances and minizones which can be called the Tamm resonances and minizones are considered to be useful elements for describing the surface states and zone-band structure of real systems within the restricted geometry. Introduction The modelling of the low-dimensional atom-molecular complexes, adatomic clusters, chains, lattices and superlattices belongs to a number of traditionally and actively researched problems of the solid state theory [1].The urgency of the research is limited by the essentiality and the difficulties of a direct microscopic description of the recently synthesized low-dimensional atom-molecular complexes and other physical objects with the symmetry, for instance, different from the translational one (icosahedra or quasicrystallic) [2].This way, specifically, in [3], the one-dimensional Kronig-Penny superlattice with a regular intrinsic structure determined by the combination of zero-radius potentials [4] was studied.Then, the simple analytical model is suggested [1] permitting to explore some general properties of superlattices, separate contacts (interfaces) being the model elements for describing semiconductor and metallic films, quasicrystalls and other objects.Such a model can be used in constructing more general theories.Using the transfer-matrix formalism this paper considers the problem regarding the spectrum of a regular superlattice as well as suggests a simple method of describing the contact in the superlattice structure.It is shown that the existence of a single contact causes separate pair resonances in the chain spectrum (which are distributed in the permitted and in the forbidden zones, respectively).The eroding of the mentioned minizones after making a connection between the separate regular segments in the superlattice is demonstrated as well.Similar resonances and minizones can be called the Tamm's ones, being useful elements for describing the surface states and the zone structure of the real low-dimensional systems. Statement of the problem. 
Transfer-matrix formalism Consider the sequence of nonstructured particles forming a one-dimensional superlattice with the alternation of regular segments which are characterized by different periods as well as by different power constants. The potential energy of the system is modelled using point-like interactions: where a, b are the lattice periods and V a , V b are the power constants of the centrums of the different segments. We also assume that the mass (or the effective mass) of the scattered particle and the Planck constant are equal to 1. The description of the one-dimensionality (quasi-one-dimensionality) of the zond-particle motion is given, for instance, in [8]. The summation in (1) is carried out over all different indexes n and n′. Modelling of the superlattices using the potential (1) is demonstrated in figure 1. The contacts of segments with different lattice parameters are shown by means of the contrast horizontal lines. For simplicity, the symmetry of the positions of the superlattice segments is assumed to be translational (Na, Nb = const). Relaxing this assumption leads to the problem of a regular chain with defects, which was considered in [9]. In such a case with a superlattice, a quasi-period composed of two nearest segments (see figure 1) could be considered. The Schrödinger equation of the considered problem can be written in the following form: where E and Ψ(x) are, respectively, the energy eigenvalue and the wave function of the zond-particle in the field of the potential U(x). For the model potential (1), the zond-particle motion can be considered as a free motion and the wave function can be described by a superposition of two plane waves: where k 2 = 2E. The coefficients A and B can be determined from the boundary conditions. The natural boundary conditions at the point d of the n-th atomic centrum for the wave function and its first derivative can be chosen in the following way: The boundary conditions (4) follow from the original structure of equation (2) and also from the general properties of the zero-radius potential (see [4]). In matrix form, the boundary conditions (4) can be rewritten as: where T n is the transfer-matrix, a square unimodular matrix of size 2×2 in the Kelly form [6]: Within the framework of the transfer-matrix formalism, the problem regarding the spectrum of a one-dimensional system of point-like centrums can be solved exactly. A general approach to using the matrix form of the boundary conditions, their connection with the energy spectrum, as well as some characteristics of particle transmission in the field of different potentials is given in [6]. This formalism also makes it possible to determine some transport characteristics of the system studied (for example, the Landauer resistance coefficient [9], which is the ratio of the quantum-mechanical reflection and transmission coefficients). Following the general approach described in [6,9], the expressions for the spectrum E and the resistance coefficient ρ can be extracted in the form of the following relations (the conversion from the boundary conditions (4) to the following expressions has also been considered in [9]): where Q is the quasi-wave vector of the zond-particle, and K tran , K refl are the transmission and reflection quantum-mechanical coefficients, respectively. The expressions for α n , β n are given in [9]. 
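Since the explicit matrix elements of (6) and (10) are not reproduced above, the following minimal sketch (Python) uses the textbook transfer matrix of a single zero-radius centrum in the (Ψ, Ψ′) basis, together with the conventions stated above (m = ħ = 1, k² = 2E); this is an assumed, illustrative construction rather than the paper's exact formulas. The permitted zones of a regular chain are then the energies for which the quasi-wave vector Q is real, i.e. |Tr T|/2 = |cos Qa| ≤ 1.

```python
# Minimal sketch (assumed conventions: m = hbar = 1, k^2 = 2E, one centrum V*delta(x) per period)
# of the transfer matrix of one period of a regular chain in the (Psi, Psi') basis and of the
# band condition cos(Q a) = Tr(T)/2; energies with |Tr(T)/2| <= 1 lie in the permitted zones.
import numpy as np

def period_matrix(k, a, V):
    """Free flight over a period a followed by a zero-radius centrum of strength V."""
    free = np.array([[np.cos(k * a),        np.sin(k * a) / k],
                     [-k * np.sin(k * a),   np.cos(k * a)]])
    delta = np.array([[1.0, 0.0],
                      [2.0 * V, 1.0]])   # jump of Psi' across V*delta(x) when m = hbar = 1
    return delta @ free

a, V = 1.0, 0.4   # V = 0.4 is the value used in figure 2; a sets the length unit
for ka in np.linspace(0.1, 2 * np.pi, 13):
    k = ka / a
    half_trace = 0.5 * np.trace(period_matrix(k, a, V))   # = cos(ka) + (V/k) sin(ka)
    allowed = abs(half_trace) <= 1.0                       # real quasi-wave vector Q
    print(f"ka = {ka:4.2f}  Tr/2 = {half_trace:+6.3f}  {'allowed' if allowed else 'forbidden'}")
```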
Two types of a zond-particle motion in the field of the system considered would be analyzed below: the resonance and nonresonance tunnelling [8].Completely real and completely imaginary values of the quasi-wave vector Q correspond to these cases respectively.Now consider the pure resonance tunnelling (otherwise, we can put Q → iQ to obtain correlations).If N a , N b > 1, the diagonal representation of transfer-matrix T n could be written in the following form: where λ n± are the eigenvalues of the matrix (6), For instance, in the case of regular chain with the period a (V a =V b =V, d=na): The chain containing isotopic and (or) shift defects was considered in detail in [9].Within the next chapters, the suggested transfer-matrix formalism will be used for describing the complex superlattice (see figure 1). A model of contact within the superlattice. The spectrum and resistance of a single contact Consider the superlattice within the quasi-period which includes N a atoms in the segment with the period a and N b atoms in the segment with the period b.The respective transfer-matrix has the following form: where T a and T b are the transfer-matrix (6) with the elements (10) for the regular chains with periods a and b, respectively.We are interested in the diagonal representation of the matrix T c (only the numerical solution of the problem could be provided within all other representations).As it was shown in [6,9], the diagonal representation of the matrix (6) with different parameters doesn't exist.Using matrix S a (as a diagonalizing one), using ( 9) and (10) we obtain: where T a,bd are the transfer-matrixes within the diagonal representation, and In this representation, the contact of two regular segments could be considered as the last one, just like the regular chain that includes two defects in their structure could be described by the non-diagonal transfer-matrixes T q and T p (where matrix T p characterizes the next contact from the left to the right).Thus, the transfermatrix which corresponds to a single contact is T q , namely: Note that the expression for the matrix T p could be obtained from ( 14) by changing the periods a and b.Finally, for describing the zone structure of the segment which includes the contact we obtain: where Q c is the quasi-wave vector of zond-particle in the field of the segment with the contact.The results of the numerical calculations using the formulas (15) are presented in figure 2. 
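As a small numerical illustration of this construction, the sketch below composes the two regular-segment matrices into a quasi-period matrix and flags the permitted minizones from its half-trace. The ordering of the factors in (11) and the explicit elements of (14)-(15) are not reproduced above, so the product order, the segment lengths N a = N b = 6 and the single-centrum matrix are assumptions made purely for illustration.

```python
# Sketch of the quasi-period matrix of a superlattice segment containing one contact,
# T_c = T_b^(N_b) @ T_a^(N_a) (ordering assumed), with the same conventions as before:
# m = hbar = 1, k^2 = 2E, zero-radius potentials of strength V_a, V_b.
import numpy as np
from numpy.linalg import matrix_power

def period_matrix(k, period, V):
    """Transfer matrix of one period: free flight over `period`, then a centrum of strength V."""
    free = np.array([[np.cos(k * period),        np.sin(k * period) / k],
                     [-k * np.sin(k * period),   np.cos(k * period)]])
    delta = np.array([[1.0, 0.0], [2.0 * V, 1.0]])
    return delta @ free

a, b, Va, Vb, Na, Nb = 1.0, 4.0, 0.4, 0.4, 6, 6   # b/a = 4 and V_a = 0.4 as in figure 2
for ka in np.linspace(0.1, np.pi, 10):
    k = ka / a
    Tc = matrix_power(period_matrix(k, b, Vb), Nb) @ matrix_power(period_matrix(k, a, Va), Na)
    half_trace = 0.5 * np.trace(Tc)
    print(f"ka = {ka:4.2f}  Tr(T_c)/2 = {half_trace:+12.2f}  "
          f"{'mini-band' if abs(half_trace) <= 1 else 'gap'}")

# Isolated ka-windows where |Tr(T_c)/2| <= 1 inside a gap of the regular chain correspond to
# the Tamm-type resonances and minizones discussed below.
```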
We should take into account the deformation of the zone structure of the lattice consisting of two regular segments in comparison with the zone structures of the separate regular segments. The positions of the boundaries of the permitted and forbidden zones are shifted, and separate resonances (interface states) appear which correspond to forbidden levels in the permitted zones and (or) to permitted levels in the forbidden zones, respectively. Those resonances (which are of Tamm nature) are split into pairs near the zone boundary points q = πn/a (n is an integer). Note that the same effect of splitting of the separate resonances was also found in the spectrum of regular chains that include isotopic- and (or) shift-type defects [11]. Now consider the resistance coefficient (see chapter 2) due to the contact of two regular segments. Substituting the non-diagonal element of the transfer-matrix (14) into (8), one can obtain expression (16). The coefficient ρ c as a function of ka and 1 − b/a, calculated using formula (16), is shown in figures 3a,b. For simplicity of calculation we put V a = V b . The coordinate dependence of the resistance coefficient is not present in the homogeneous chain model. Asymptotically, the resistance coefficient increases near the edges of the zones as a function of the parameter 1 − b/a. Note that at certain values of V and b/a, a decrease of the resistance coefficient down to zero can be observed, which corresponds to the regime of pure zond-particle tunnelling (without dissipation). The mentioned properties of the contact can be considered as general properties of any one-dimensional structure having an arbitrary superlattice symmetry. Spectrum of the superlattice having a translational symmetry As was stressed in chapter 1, the problem of determining the spectrum of a superlattice having a translational symmetry reduces to the similar problem of a regular chain, with the chain period replaced by the superlattice quasi-period. Within the framework of the transfer-matrix formalism, we need to calculate the trace of the matrix (11). Finding the solutions of the equation where Λ is the eigenvalue of the matrix T c and I is the unity matrix, in accordance with (7) the expression for the superlattice spectrum can be written in the following form: where G ± = cos q(N a a ± N b b). The respective pictures of the spectrum (obtained by calculations using the expression (18)) are shown in figure 4. The complex character of (18) is demonstrated for particular values of the potential and lattice period parameters. Taking into account the correlation between separate interfaces, a multiplication of resonance factors of the kind (sin qa sin qb) −1 occurs. This effect can be interpreted as the erosion of the split Tamm resonances (interface states) into minizones. In the case of a disturbance of the translational symmetry of the superlattice by "point-like" defects, a change of the zone structure and of the Tamm minizones would result. Thus the considered model gives the exact solution to the problem of calculating the spectrum and the introduced transport characteristics of one-dimensional superlattices (including contacts). In spite of their model character, the obtained results should be taken into account for describing real physical objects having a structure similar to atomic chains and superlattices [10][11][12]. 
Figure 1.The schematic scene of the superlattice potential with the translational symmetry (a,b are the segments period, V a , V b are the power constants). Figure 2 . Figure 2. The dependence of Sp T q (vertical axis) from the wave vector ka (horizontal axis): V a = 0.4, b/a = 4. Figure 4 . Figure 4.The energy spectrum E s of the superlattice as a function of ka.
2,697.8
2000-01-01T00:00:00.000
[ "Physics" ]
The effect of the scalar unparticle on the production of Higgs - radion at high energy colliders An attempt is made to present the influence of the scalar unparticle on some scattering processes in the Randall - Sundum model. The contribution of the scalar unparticle on the production of Higgs - radion at high energy colliders is studied in detail. We evaluate the production cross-sections in the electron-positron ($e^{+}e^{-}$), photon-photon ($\gamma\gamma$) and gluon-gluon ($gg$) collisions, which depend strongly on the collision energy $\sqrt{s}$, the scaling dimension $d_{U}$ of the unparticle operator $\mathcal{O}_{U}$ and the energy scale $\Lambda_{U}$. Numerical evaluation shows that the cross - sections for the pair production of scalar particles are much larger than that of the associated production of the scalar particle with unparticle in the same condition I Introduction The Standard model (SM) is the successful model in describing the elementary particle physics. Recently, the 125 GeV Higgs is discovered by the ATLAS and CMS collaborations [1,2], which has completed the particle spectrum of the SM. Although the SM has been considered to be successful model, the model suffers from many theoretical drawbacks. In 1999, Lisa Randall and Raman Sundrum suggested the Randall-Sundrum (RS) model to extend the SM and solve the hierarchy problem naturally [3,4]. The RS setup involves two three-branes bounding a slice of 5D compact anti-de Sitter space. Gravity is localized at the UV brane, while the SM fields are supposed to be localized at the IR brane. The separation between the two 3-branes leads directly to the existence of an additional scalar called the radion (φ ), corresponding to the quantum fluctuations of the distance between the two 3-branes [5][6][7]. In the Lagrangian of the Standard model, the scale invariance is broken at or above the electroweak scale [8,9]. The scale invariant sector has been considered as an effective theory at TeV scale and that if it exists, it is made of unparticle suggested by Geogri [10,11]. Based on the Banks-Zaks theory [12], unparticle stuff with nontrivial scaling dimension is considered to exist in our world. The invariant Banks-Zaks field can be connected to the SM particles [13]. Recently, the possibility of the unparticle has been studied with CMS detector at the LHC [14,15]. The effects of unparticle on properties of high energy colliders have been intensively studied in Refs. [16][17][18][19][20][21][22][23][24][25][26]. However, the influence of scalar unparticle on the production of particles at the high energy colliders have not yet been concerned in the RS model. In this work, the contribution of the scalar unparticle on the production of Higgs -radion at the e + e − , γγ and gg colliders are studied in detail. The layout of this paper is as follows. In Section II, we give a review of the RS model and the mixing of Higgs-radion. The contribution of the scalar unparticle on the production of Higgsradion at high energy colliders are calculated in Section III. Finally, we summarize our results and make conclusions in Section IV. II A review of Randall-Sundrum model and the mixing of Higgsradion The RS model is based on a 5D spacetime with non -factorizable geometry. The single extra dimension is compactified on an S 1 /Z 2 orbifold of which two fixed points accommodate two three-branes (4D hyper-surfaces), the UV brane and the IR brane. The four dimensional effective action is obtained by integrating out the extra dimension. 
The classical action describing the above set-up is given by [3] where M is the five dimensional Planck scale, G = detG M N , Λ is a bulk cosmological constant, R is the 5D Ricci scalar. In the RS model, the values of the bare parameters are determined by the Planck scale and the appropriate value for the size of the extra dimension is given by kr c π ≈ 35 (r c is the compactification radius and k is the bulk curvature). Thus the weak and the gravity scales can be naturally generated. Consequently, the hierarchy problem is addressed. The gravity-scalar mixing is described by the following action [5] where ξ is the mixing parameter, R(g vis ) is the Ricci scalar for the metric g µν vis = Ω 2 b (x)(η µν + εh µν ) induced on the visible brane, Ω b (x) = e −kr c π (1 + φ 0 /Λ φ ) is the warp factor, φ 0 is the canonically normalized massless radion field, Ĥ is the Higgs field in the 5D context before rescaling to canonical normalization on the brane. With ξ ≠ 0, there is neither a pure Higgs boson nor a pure radion mass eigenstate. This ξ term mixes the h 0 and φ 0 into the mass eigenstates h and φ as given by where Z 2 = 1 + 6γ 2 ξ(1 − 6ξ) = β − 36ξ 2 γ 2 is the coefficient of the radion kinetic term after undoing the kinetic mixing, and m h 0 and m φ 0 are the Higgs and radion masses before mixing. The new physical fields h and φ in (4) are the Higgs-dominated state and the radion, respectively. Feynman rules for the couplings of the Higgs and radion are shown as follows, where b 3 = 7, b 2 = 19/6, b Y = −41/6 are the SU(3) c , SU(2) L and U(1) Y β-function coefficients in the SM. The auxiliary functions of the h and φ are given by, with m i the mass of the internal loop particle (including quarks, leptons and the W boson) and m s the mass of the scalar state (h or φ). Here, τ f = (2m f /m s ) 2 , τ W = (2m W /m s ) 2 denote the squares of the fermion and W gauge boson mass ratios, respectively. There are four independent parameters Λ φ , m h , m φ , ξ that must be specified to fix the state mixing parameters. We consider the case of Λ φ = 5 TeV and m 0 /M P = 0.1, which makes the radion stabilization model most natural [6,7]. III The contribution of the scalar unparticle on the production of Higgs -radion at high energy colliders The effects of unparticle on properties of high energy colliders have been intensively studied in Refs. [16][17][18][19][20][21][22][23][24][25][26]. In the rest of this work, we restrict ourselves to considering only the scalar unparticle. The scalar unparticle propagator is given by [9,11]. The effective interactions for the scalar unparticle operators are given by, where G αβ denotes the gauge field strength and f stands for a Standard Model fermion. Feynman rules for the couplings of the scalar unparticle in the RS model are shown as follows. Using the above formulas, we will study the effect of the scalar unparticle on some high energy scatterings in the RS model. We note here that in our previous works [27][28][29] we have shown that the detection of scalar particles in the RS model at high energy colliders would provide clear evidence of new physics beyond the SM. Now we will investigate the contribution of the scalar unparticle on the production of Higgs -radion in the RS model at high energy colliders, such as e + e − , γγ and gg collisions, for which the Feynman diagrams are considered in detail in Appendix A. 1. The e + e − → hh/φφ collisions Now we consider the collision process in which the initial state contains an electron and a positron, and the final state contains a pair of scalar particles (Higgs or radion). 
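The explicit propagator expression is elided above. As a rough, hedged illustration of why the cross-sections reported below fall steeply with d U , the following sketch evaluates the normalization factor A dU and the magnitude of the s-channel unparticle factor in the form commonly used in the unparticle literature; this form is an assumption borrowed from that literature, not a formula reproduced from this paper.

```python
# Sketch (assumed standard form of the scalar-unparticle propagator from the literature):
#   Delta(s) ~ A_dU / (2*sin(dU*pi)) * s**(dU - 2),
#   A_dU = 16*pi**(5/2) / (2*pi)**(2*dU) * Gamma(dU + 1/2) / (Gamma(dU - 1) * Gamma(2*dU)),
# with one factor Lambda_U**(1 - dU) per vertex folded in to expose the (s/Lambda_U^2) scaling.
import math

def A_dU(dU):
    return (16 * math.pi ** 2.5 / (2 * math.pi) ** (2 * dU)
            * math.gamma(dU + 0.5) / (math.gamma(dU - 1) * math.gamma(2 * dU)))

def schannel_magnitude(s_GeV2, dU, Lambda_U=1000.0):
    """|propagator| times two vertex suppression factors Lambda_U**(1 - dU)."""
    return (abs(A_dU(dU) / (2 * math.sin(dU * math.pi)))
            * s_GeV2 ** (dU - 2) * Lambda_U ** (2 * (1 - dU)))

s = 500.0 ** 2   # sqrt(s) = 500 GeV, Lambda_U = 1000 GeV as in the parameter choices below
for dU in [1.1, 1.3, 1.5, 1.7, 1.9]:
    print(f"dU = {dU}:  A_dU = {A_dU(dU):.3e},  |s-channel factor| ~ {schannel_magnitude(s, dU):.3e}")

# The factor drops by more than an order of magnitude between dU = 1.1 and dU = 1.9,
# qualitatively matching the rapid decrease of the cross-sections with dU described below.
```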
We note here that the contribution of the scalar unparticle enters through the propagator in the s-channel, where X is the Higgs or the radion. The transition amplitude is given by, where s is the square of the collision energy. From the expression for the differential cross-section [30], dσ/d(cos ψ) = (1/32πs)|M fi | 2 , where ψ is the scattering angle, the total cross-sections are obtained. The model parameters are chosen as: λ f f = λ hh = λ φφ = λ 0 = 1, Λ U = 1000 GeV, 1 < d U < 2 in the case of the scalar unparticle [22], m h = 125 GeV, m φ = 10 GeV [27,28]. We give estimates for the cross-sections, which depend on the collision energy √ s, the scaling dimension d U of the unparticle operator O U and the energy scale Λ U , as follows: i) In Fig.1 we plot the total cross-sections as a function of d U . The collision energy is chosen as √ s = 500 GeV and 1.1 ≤ d U ≤ 1.9. From Fig.1 we can see that in the case of the additional scalar unparticle propagator, the cross-sections decrease rapidly as d U increases and they are flat when d U > 1.6. ii) In Fig.2 we evaluate the dependence of the total cross-sections on the collision energy √ s. The collision energy is chosen in the range of 500 GeV ≤ √ s ≤ 1000 GeV (ILC), and d U is chosen as 1.1, 1.3, 1.5 and 1.7, respectively. The figure shows that the total cross-sections decrease when the collision energy √ s increases. It is worth noting that with the contribution of the scalar unparticle propagator, the cross-sections for pair production of scalar particles are much enhanced. iii) In Fig.3 we evaluate the dependence of the total cross-sections on Λ U at the fixed collision energy √ s = 500 GeV. In the case of the additional scalar unparticle propagator, the cross-sections increase rapidly in the region of 2 TeV ≤ Λ U ≤ 5 TeV. Note that here we only plot the maximum cross-sections based on Fig.1. 2. The e + e − → U h/U φ collisions In this section, we investigate the associated production of the scalar particle with the unparticle at high energy e + e − colliders, in which the scalar unparticle contribution to the scattering process is in the final state. The transition amplitude can be given as follows. With the parameters chosen as above, we give some estimates for the cross-sections with the contribution of the scalar unparticle as follows: i) In Fig.4 we plot the total cross-sections as a function of d U . We can see from the figure that the curve of the cross-sections is similar to Fig.1. That is, the cross-sections decrease rapidly as d U increases. ii) In Fig.5 we evaluate the dependence of the total cross-sections on the collision energy √ s with various d U . The result shows that the cross-sections decrease as the collision energy √ s increases. Note that the curve of the cross-sections is flat at very high energies. iii) The dependence of the total cross-sections on Λ U at the fixed collision energy √ s = 500 GeV is shown in Fig.6. The figures show that the total cross-section for the associated production in the e + e − → U h collision is about 10 3 times larger than that in the e + e − → U φ collision. Numerical values for the production cross-section with d U = 1.1 are given in detail in Table 1. We can see from Table 1 that the cross-sections for the pair production of scalar particles are much larger than those for the associated production of scalar particles with the unparticle under the same conditions. It is worth noting that, when the collision energy increases, the total cross-section in the e + e − → φφ collision is insignificantly larger than that in the e + e − → hh collision. 3.
The γγ → hh/φφ collisions In this section, we consider the collision process in which the initial state contains the couple of photons, the final state contains the couple of scalar particles. The Feynman diagram is given by We obtain the results in the s, u, t -channels Now we estimate the production cross-sections with the contribution of the scalar unparticle propagator as follows i) In Fig.7 we plot the total cross-sections in the γγ → hh/φφ collisions as the function of d U . The collision energy is chosen as √ s = 3000 GeV (CLIC) and 1.1 ≤ d U ≤ 1.9. We can see from the figure that, the curve goes through the minimum value at d U = 1.65 and then increases rapidly with d U . ii) In Fig.8 we plot the total cross-sections as a function of the collision energy √ s. The collision energy region is 1T eV ≤ √ s ≤ 5T eV . The total cross-sections decrease gradually as √ s increases with the fixed d U . iii) In Fig.9 we plot the dependence of the total cross-sections on the energy scale Λ U with the parameters chosen as above. The figure shows that the cross-sections decrease gradually as the Λ U increases. The γγ → U h/U φ collisions In this section, we investigate the unparticle contribution on γγ → U h/U φ collisions The transition amplitude can be written as follows We estimate the cross-sections for the associated production as follows i) In Fig.10 we plot the total cross-sections as the function of d U with the parameters chosen as in previous items. The figure shows that the curve of the cross section is similar to Fig.1. We can see that the cross section decreases rapidly as d U increases and it is flat with d U > 1.6. ii) In Fig.11 we evaluate the dependence of the total cross-sections on the collision energy √ s with the fixed d U . The figure shows that when the collision energy √ s increases then the total cross-sections increase gradually. iii) In Fig.12 we plot the dependence of the total cross-sections on the Λ U . The figure shows that in the region 1 TeV ≤ Λ U ≤ 5 TeV the cross-sections decrease gradually as Λ U increases. Some typical values for cross-sections are given in detail in Table 2. The result shows that the crosssections for pair production of scalar particles are much larger than that of the associated production. Moreover, the total cross-section in γγ → U h collision is larger than that in γγ → U φ collision under the same conditions. 5. The gg → hh/φφ collisions Now we consider the gg → hh/φφ process which is similar to the γγ → hh/φφ process. The reaction is given by The transition amplitude for this process can be written as We evaluate the cross-sections as follows i) In Fig.13 we plot the total cross-sections in the gg → hh/φφ collisions as the function of d U . We can see from the figure that the shape of the cross-section is similar to Fig.7. That is, the curve of the cross-sections goes through the minimum value and then increases rapidly with d U . ii) In Fig.14 we plot the total cross-sections as a function of the collision energy √ s. The figure shows that, the cross-sections decrease as √ s increases. The total cross-section in gg → φφ collision is insignificantly larger than that in gg → hh collision. iii) In Fig.15 we plot the dependence of the total cross-sections on the Λ U . We can see that in the region 1 TeV ≤ Λ U ≤ 5 TeV, the cross-sections decrease as Λ U increases. 6. 
The gg → U h/U φ collisions Finally, we study the contribution of the scalar unparticle on the associated production in gg → U h/U φ collisions We obtain the transition amplitude in the s, u, t -channels We estimate the cross-sections for associated production as follows i) In Fig.16 we plot the total cross-sections as the function of d U . From the figure we can see that the cross section decreases rapidly as d U increases and it is flat when d U > 1.45. ii) In Fig.17 we evaluate the dependence of the total cross-sections on the collision energy √ s with the fixed d U . The figure shows that when the collision energy √ s increases in the region 1T eV ≤ √ s ≤ 5T eV then the total cross-sections increase. The total cross-section in gg → U h collision is larger than that in gg → U φ collision. iii) In Fig.18 we plot the dependence of the total cross-sections on the Λ U . The figure shows that the cross-sections decrease as Λ U increases. Some numerical values for cross sections in case of d U = 1.1 are given in Table 3. IV Conclusion In this paper, we have evaluated the contribution of the scalar unparticle on the production crosssections of Higgs -radion in the Randall-Sundrum model at the (e + e − ), (γγ) and (gg) colliders, which depend strongly on the collision energy √ s, the scaling dimension d U of the unparticle operator O U and the energy scale Λ U . The results indicate that the cross -sections for the pair production of scalar particles are much larger than that of the associated production of scalar particle with the unparticle under the same conditions. In the e + e − → hh/φφ collisions, the production cross -section decreases as the collision energy √ s increases. With the contribution of the scalar unparticle propagator, the cross-sections for the pair production of scalar particles are much enhanced while the cross-sections for the associated production in e + e − → U h/U φ collisions are very small. Numerical evaluation has shown that the cross-sections for the pair production of scalar particles are about 10 15 times larger than that of the associated production under the same conditions. In the γγ → hh/φφ collisions, due to the main contribution of scalar unparticle on the propagator in the s-channel, the cross -sections decrease as √ s increases while the cross-sections for the associated production increase as √ s increases. This is because the unparticle couplings in u,t channels give the main contribution on the γγ → U h/U φ scattering process. However, the production cross-sections in γγ → hh/φφ collisions are much larger than that of γγ → U h/U φ collisions (about 10 5 times) under the same conditions. In the gg → hh/φφ collisions, the cross-sections for the pair production of the scalar particle decrease rapidly first and then increase as √ s increases, while the cross-sections for the associated production increase as √ s increases, which is similar to γγ → U h/U φ process. Numerical evaluation has shown that the cross-sections of the associated production in the gg collisions are much larger than that in the γγ collisions under the same conditions. This is because the scalar couplings with gluon are larger than that with the photon. Finally, we emphasize that in this work we have considered only on a theoretical basis, other problems concerning the scalar unparticle signals at LHC the readers can see in detail in Ref. [25] .
4,393.2
2018-07-02T00:00:00.000
[ "Physics" ]
Moral sensitivity, moral distress and moral functioning For this open issue of the Etikk i Praksis: Nordic Journal of Applied Ethics, we put together a broad mix of different articles tackling current important issues in the field. Introduction Moral sensitivity, moral distress and moral functioning Allen Alvarez, May Thorseth Moral beliefs and values motivate us to act in ways that align with these beliefs and values. We experience satisfaction when our actions align with our values and feel distressed when we cannot act according to them. Distress also occurs when circumstances prevent us from acting with integrity, that is, acting according to values we hold dear. A similar kind of distress is experienced when we are pushed to act in ways contrary to our values. Since Andrew Jameton first used the term in 1984, moral distress has been described in the empirical and conceptual literature as the experience of troubling emotions (frustration, anger, feeling powerless, hopelessness) due to constrained moral agency. Interventions have been studied and tested (Morley et al. 2021) because of the negative health impact of moral distress on those who experience it. Reducing moral distress is important in healthcare because healthcare professionals who suffer from chronic moral distress tend to leave their roles to protect their health and wellbeing (Karakachian & Colbert 2019). The effectiveness of interventions in managing or reducing moral distress has become a growing research interest in healthcare ethics (Musto, Rodney & Vanderheide 2015). While any form of suffering should be reduced, if not eliminated, we also need to consider the human function that gives rise to moral distress. We do not want to merely eliminate the symptom without understanding the cause. The attribute of moral sensitivity enables the moral agent to feel the alignment between actions and values. Misalignment would cause moral distress. Given this functional relationship between moral sensitivity and moral distress, it may not necessarily be bad to experience moral distress if it functions to signal that something is wrong with the moral environment that needs to be changed. Could moral distress be a sign of moral wellness (defined as having a well-functioning moral compass)? The distress felt could motivate moral action to address the cause of distress. Therefore, we need not merely aim to reduce moral distress beyond addressing the circumstances that gave rise to it. As De Villers and DeVon (2012) stated: Moral sensitivity fosters commitment to patients and the ability to use strategies in ethical decision-making. Nurses who have lost their ability to care may lack moral sensitivity and will not experience moral distress. Those who maintain high levels of sensitivity and competency are more likely to demonstrate moral courage and moral heroism and are able to take action resulting in moral comfort rather than moral distress. Nevertheless, the functional relationship requires additional conceptual and empirical investigation to inform further work on testing intervention with respect to the appropriate goal we should aim for (Souvandjiev 2021;McAuley-Gonzalez 2018). Applied ethicists can play a role in increasing our understanding of this relationship. For this open issue of the Etikk i Praksis: Nordic Journal of Applied Ethics, we put together a broad mix of different articles tackling current important issues in the field. 
The issue opens with the article by Arseniy Kumankov "Nazism, Genocide and the Threat of The Global West. Russian Moral Justification of War in Ukraine". The article critically examines how the Russian invasion of Ukraine was preceded by several public actions that aimed to frame the military operation as necessary and inevitable. Kumankov examines how, during these events, the Russian authorities used moral language to justify the war and the use of force against Ukraine. This article looks at why Russian officials used moral language to justify the war, what arguments they used, and whether these arguments would be effective in the long term. It examines speeches by the Russian President and materials from the Russian Federation Security Council meeting to answer these questions. Kumankov concludes that Putin's lack of legitimacy led him to justify the war in moral terms, which the nature of Russian moral discourse allowed him to do, but that this justification strategy may not be stable or sustainable in the long term. The author analysed speeches by Putin and other senior officials to show that the conflict was initially presented as a moral clash with the West rather than just a political rivalry. This strategy was intended to give legitimacy to the decision to attack Ukraine. The author also reproduced and classified the arguments used to support the war, showing that the Great Patriotic War was employed as a framework to justify this war and maintain Russia's image as a victorious and moral state. Other reasons for the war included the perceived threat of the West to Russia's values, and the Nazi character of the Ukrainian regime. The effectiveness of this strategy is discussed and uses some statistical information to conclude that although initial support in Russia for the war appeared high, the author questions the depth of the moral grounding and commitment for this war in the long term. A commentary by Jennifer Bailey accompanies this original article by Kumankov. Bailey uses a political science lens to examine the thesis and arguments presented to help readers broaden their thinking about the issue. In the second article, "Socratic dialogue on responsible innovation -A methodological experiment in empirical ethics" by Bjørn K. Myskja and Alexander Myklebust, the authors describe an experiment in which the Socratic dialogue method was used to promote Responsible Research and Innovation (RRI) in an interdisciplinary life sciences research project. The authors present an approach to avoiding the imposition of predetermined norms in interdisciplinary research projects by engaging researchers in group discussions. The method, which is based on Svend Brinkmann's epistemic interviewing, was used in two research group sessions to facilitate reflection on the issue of responsibility in research and innovation. This approach differs from other empirical ethics methodologies in that it aims to develop knowledge through dialogue, and the facilitators are active participants in the discussions rather than just observers. Myskja and Myklebust discuss the potential of this method as a supplement to other approaches to RRI and argue that it can contribute to both knowledge production and reflexivity. The main focus of their article is on the methodology used to produce knowledge. The effectiveness of this approach will be determined when the central arguments are developed and integrated into academic papers. 
The authors believe that researchers have valuable knowledge, based on their experiences, that can be used to contribute to academic or public debates. They are not concerned with whether the participants are representative of their group or whether the data generated in the sessions are valid. Instead, the validity of the approach will be tested by its contribution to knowledge when the arguments are presented to a competent audience.

In the third article, "Ethical challenges of social work in Spain during Covid-19", María-Jesús Úriz, Juan-Jesús Viscarret, and Alberto Ballestero recount the experience of social workers in Spain during the pandemic. In 2020, during the initial surge of COVID-19 in Spain, social work professionals faced significant ethical dilemmas. The article delves into the primary challenges encountered in the field, as the pandemic not only impacted healthcare but also had far-reaching effects on social work. Throughout this period, social workers grappled with profound ethical concerns encompassing breaches of confidentiality, equitable allocation of limited resources, the absence of personal and emotional connections with service users, the struggles of remote and isolated work, uncertainties regarding the reliability of the information handled, and the complexities of accurate diagnoses. To gain a comprehensive understanding, an international research team led by Dr. Sara Banks collaborated with the International Federation of Social Workers on a broader project; the study involved collecting data through an online questionnaire targeted at social workers from different countries. In this article, the authors focus on the analysis of results specifically related to the primary ethical challenges faced by social workers in Spain. The research group identified two distinct categories of ethical challenges, each explored in a separate section. The first section addresses direct interactions with users, highlighting concerns such as the absence of emotional support, the reliability and appropriate use of technology, adherence to professional standards, maintaining confidentiality, vulnerability, and fair resource distribution. The second section concentrates on ethical challenges encountered within social organizations on a daily basis, encompassing aspects such as e-social work and coordination difficulties, managing pressure within social bodies, and adapting to changes in intervention methodologies.

In the fourth article, "LGBTIQ+ Prioritization in Refugee Admissions – The Case of Norway", Annamari Vitikainen delves into the normative foundations behind Norway's recent (2020) policy that places emphasis on admitting LGBTIQ+ refugees. The aim is to examine the compatibility of this policy with the vulnerability selection criteria outlined by the United Nations High Commissioner for Refugees (UNHCR) and to evaluate its independent justifications. While the article argues that the Norwegian policy aligns with the UNHCR criteria when appropriately interpreted, Vitikainen also emphasizes that it does not derive exclusive support from these criteria alone. To form a comprehensive understanding, she considers a range of broader moral principles that shape refugee admissions, encompassing both state-based and refugee-centered rationales for resettlement.
By drawing on the specific challenges and dynamics associated with the resettlement and integration of LGBTIQ+ refugees, the article's analysis offers cautious endorsement for the Norwegian policy of prioritizing this vulnerable group. However, it also highlights certain limitations inherent in such an approach, particularly regarding the agency of the refugees themselves. Throughout the article, Vitikainen underscores the importance of amplifying the voices of refugees in the selection and resettlement processes. This entails recognizing cases where the default position of prioritizing LGBTIQ+ individuals may be superseded by their own interests in seeking resettlement elsewhere. The article aims to contribute to the ongoing dialogue surrounding the prioritization of LGBTIQ+ refugees, shedding light on the normative considerations that inform Norway's policy while advocating for a comprehensive and inclusive approach to refugee admissions. And finally, in the fifth article "Stakeholder Inclusion as the Research Council of Norway's Silver Bullet" by Matthias Solli, the author delves into an important concept known as responsible research and innovation (RRI) and its implications within a public funding system. Using a fascinating case study from Norway, the author uncovers how the Research Council of Norway has embraced the idea of stakeholder inclusion. They believe that by involving various stakeholders in a transdisciplinary project, they can ensure its success and secure further funding for its development. However, there are potential risks associated with this approach. Through careful analysis of this case, the author unveils a concept called "4E Waste" -waste that occurs when a project with great potential to benefit society and tackle significant challenges ultimately falls short. To understand this waste, the author breaks it down into four types: Economic Waste, Eidetic Waste, Ecological Waste and Ethical Waste. Through this exploration of responsible research and innovation, the author attempts to shed light on the importance of avoiding these different types of waste. By doing so, the author believes that we can maximize the value and impact of projects, ensuring they deliver tangible benefits to society while addressing the pressing challenges we face today. It is our wish that the new articles included in this issue will help stimulate deeper thinking in the various topics discussed by the authors. We encourage you to explore other complex ethical challenges. We seek articles that employ ethical theories and principles to analyze and evaluate different facets of society, ranging from politics and science to technology and the economy. We are particularly intrigued by the ethical ramifications of emerging issues like artificial intelligence, genetic engineering, climate change, and the politics of disinformation. We welcome submissions from diverse disciplines and perspectives, encompassing philosophy, sociology, law and public policy. Call for papers We would like to invite submissions for the Fall 2023 Special issue on environmental (food and water) ethics. The deadline for submission to this special issue is 1 August 2023.
Lower mantle geotherms, flux, and power from incorporating new experimental and theoretical constraints on heat transport properties in an inverse model An inverse method is devised to probe Earth’s thermal state without assuming its mineralogy. This constrains thermal conductivity (κ) in the lower mantle (LM) by combining seismologic models of bulk modulus (B) and pressure (P ) vs. depth (z) with a new result, ∂ln(κ) / ∂P ∼ 7.33/BT , and available high temperature (T ) data on κ for lengths exceeding millimeters. Considering large samples accounts for the recently revealed dependence of heat transport properties on length scale. Applying separation of variables to seismologic ∂B/∂P vs. depth isolates changes with T . The resulting LM dT / dz depends on ∂2B/∂P 2 and ∂B/∂T , which vary little among dense phases. Because seismic ∂B/∂P is discontinuous and model dependent ∼ 200 km above the core, unlike the LM, our results are extrapolated through this tiny layer (D). Flux and power are calculated from dT / dz for cases of high (oxide) and low (silicate) κ . Geotherm calculations are independent of κ , and thus of LM mineralogy, but require specifying a reference temperature at some depth: a wide range is considered. Limitations on deep melting are used to ascertain which of our geotherm, flux, and power curves best represent Earth’s interior. Except for an oxide composition with miniscule ∂2B/∂P 2, the LM heats the core, causing it to melt. Deep heating is attributed to cyclical stresses from > 1000 km daily and monthly fluctuations of the barycenter inside the LM. Introduction and background Heat moves when a temperature (T ) difference exists, where the net flow is from hotter to colder regions. This phenomenon is important to Earth because it is dynamic. But as a consequence, the outcome of a laboratory experiment is greatly influenced by the time dependence of the applied heat (e.g., Tye, 1969), which has led to overlooking the lengthscale dependence of heat transport and misunderstandings of experimental limitations and uncertainties as well as of microscopic mechanisms (Hofmeister, 2019(Hofmeister, , 2021. Thermal models in Earth science are particularly affected by these shortcomings, due to wide variations in relevant length scales, temperature, pressure, and material properties, such as transparency to thermal radiation. An improved under-standing, based on a new theory and accurate data on mineral heat transport, is described next. Recent findings on heat transport properties relevant to mantle studies One incorrect presumption is that the physical properties representing heat flow (thermal conductivity, κ, or its close relative thermal diffusivity, D) are independent of the distance along the thermal gradient. This static view is inconsistent with Fourier's heat equation, as follows. Its simplest onedimensional form is where t = time and z represents the direction of heat flow. Equation (1) holds for temperature changes being sufficiently small that the relevant properties vary negligibly. Dimension analysis of this effectively constant T condition provides where L is the distance over which heat travels, ζ is a time constant, and u is a characteristic speed. Thermal conductivity likewise depends directly on L because it is proportional to D, and because the multiplying parameters, density (ρ) and specific heat (c P ) at constant pressure (P ), are independent of L. 
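As a rough numerical illustration of the dimensional argument behind Eq. (2) and of Eq. (3), the sketch below assumes a representative diffusivity of 1 mm2 s-1 and round values for density and specific heat; all numbers are placeholders chosen for illustration, not measurements from this study.

```python
# Minimal sketch of the dimensional argument of Eq. (2), zeta ~ L**2 / D, and of
# Eq. (3), kappa = rho * c_P * D. All numerical values are illustrative
# assumptions, not data from this paper.

D   = 1.0e-6   # thermal diffusivity, m^2/s (typical order for dense silicates)
rho = 3.6e3    # density, kg/m^3 (assumed)
c_P = 1.0e3    # specific heat, J/(kg K) (assumed)

kappa = rho * c_P * D                       # Eq. (3)
print(f"kappa ~ {kappa:.1f} W/(m K)")

for L in (1e-3, 3e-3, 1.0e6):               # 1 mm, 3 mm, 1000 km
    zeta = L**2 / D                         # time for heat to diffuse across L
    print(f"L = {L:8.1e} m  ->  zeta ~ {zeta:.1e} s ({zeta/3.15e7:.1e} yr)")
```

The millimetre entries correspond to laboratory time scales of seconds, whereas the kilometre-scale entry corresponds to tens of billions of years, which is why heat transfer inside the Earth is effectively diffusive and slow.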
The required length-scale dependence has been masked by experimental limitations, including ubiquitous use of similar sample lengths of > 1 to < 5 mm. Many experiments are steady state, so time and ζ are irrelevant. Other common techniques are periodic, where these oscillations about quasiequilibrium involve another, different time constant (see, e.g., Tye, 1969;Zhao et al., 2016). The transient technique of laser-flash analysis (LFA), which avoids heat losses from physical contacts (see Vozár and Hohenauer, 2003) and monitors thermal evolution across L with time, has confirmed that D linearly depends on L below about 1 mm for electrical insulators, glasses (Hofmeister, 2019, chap. 7), semiconductors, metals, and alloys (Hofmeister, 2021). Results ( Fig. 1) are consistent with a linear response when L is small, as in Eq. (2). Misunderstandings also stem from reliance on the historic kinetic theory of gas (KTG) to depict heat transfer in solids. However, heat and matter move together across long expanses in a gas, which is unlike a solid where these motions are decoupled. Furthermore, KTG assumes elastic collisions, through which temperature cannot change. Neither non sequitur is addressed by morphing molecular collisions in a gas into elastic scattering of pseudo-particles denoted as phonons in a solid. Because gas data are collected under negligible T gradients to avoid convection, assuming random fluctuations in all three directions is reasonable and provides formulae mostly compatible with gas data. Yet, the ratios of the transport properties are not correctly described, while the ubiquitous emission of thermal radiation from all states of matter remains unexplained. Accounting for inelasticity in molecular collisions addresses both shortcomings in KTG (Hofmeister, 2019, chap. 5). Regarding condensed matter, Fourier assumed heat flows into, across, and out of the stationary solid, whereby part of the heat is stored in the elements along the path. The process is diffusion, which is underscored by Fick constructing his formulation after Fourier's. Fourier defined flux as heat per area per time and realized that = −κ(T , P ) ∂T ∂z P = κ(T , P ) × − ∂T ∂z P . (4) One dimension suffices for discussion since heat flows down the thermal gradient per the second law of thermodynamics. Equation (4) is fundamental: taking its spatial derivative, conserving energy, and simplifying using the definition of Eq. (3) leads to Eq. (1). Experiments and theory show that light is the diffusing entity in solids (Hofmeister et al., 2014;Criss and Hofmeister, 2017), which is real and pure energy. Light, unlike a phonon, crosses interfaces. Attenuation of light across the sample provides the length-scale dependence of Fig. 1. These recent discoveries led to new formulae for the dependence of κ on P and T and the absorption spectra of a material, which were verified against reliable data on κ below 2 GPa (Fig. 2) and on D and κ from a few kelvins to well above ambient T (Hofmeister, 2019(Hofmeister, , 2021. LFA measurements of D at high T ( Fig. 3a), combined with Eq. (3), show that κ above 1000 K is nearly constant for structures or chemical compositions more complex than Al 2 O 3 (Fig. 3b). These advances are used in the present paper to evaluate of thermal conductivity in the lower mantle (LM) while accounting for ambiguities in the temperature and mineralogy for this immense region of the Earth. 
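As a concrete check on Eq. (4), the fragment below evaluates the conductive flux for a lithosphere-like gradient; the conductivity of 3 W m-1 K-1 is an assumed representative crustal value, while the ~20 K km-1 gradient and the ~60 mW m-2 surface flux are the figures quoted elsewhere in this paper.

```python
# Sketch of Fourier's flux law, Eq. (4): Phi = -kappa * dT/dz.
# Here z is measured upward, so a temperature that decreases upward gives a
# positive (outward) flux. kappa is an assumed representative crustal value.

kappa = 3.0               # W/(m K), assumed
dT_dz = -20.0 / 1000.0    # K/m, ~20 K/km decrease upward

flux = -kappa * dT_dz     # W/m^2
print(f"Phi ~ {flux * 1e3:.0f} mW/m^2")   # ~60 mW/m^2, the order of the measured surface flux
```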
Reliability of available information on lower mantle heat transport Thermal models are based on transport properties. To achieve high pressures appropriate to the deep Earth, diamond anvil cell (DAC) experiments probe tiny samples. Accurately determining P near 1 atm in devices geared for extreme compression has not been achieved. Hence, results from DAC heat transport experiments are benchmarked against independent measurements of D or κ at ambient P (e.g., Hsieh et al., 2009). However, ambient data are collected from L > 1 mm, which are ∼ 100× larger than sample thicknesses used in DACs. Extrapolation from large to small L was not done and is non-linear ( Fig. 1). Very high P studies are difficult, leading to additional problems, as discussed in detail by Hofmeister (2009Hofmeister ( , 2010bHofmeister ( , 2019Hofmeister ( , 2021. To summarize, large thermal gradients preclude use of Eq. (1) while requiring knowledge of the T dependence of D (or κ) at P , which is the unknown sought. For tiny samples, heat flow is twodimensional but one-dimensional equations are used. At high T , cooling occurs by ballistic radiation to the surroundings (e.g., to the detector used to ascertain T ), which is not addressed in Fourier's description of heat diffusion (conduction). Thermal gradients changing direction during the experiments of McWilliams et al. (2015) and Konôpková et al. (2016) (see figure 13 in Hofmeister, 2021) were not addressed in their analysis. Thermoreflectance methods (e.g., Hsieh et al., 2009) assume the length scale over which heat diffuses, which dictates the results per Eqs. (1) to (4). Low P experiments using ∼ millimeter lengths and other techniques utilized in Fig. 2 lack these difficulties, as discussed in previous work and Sect. 3, and are utilized here. Importantly, thermal gradients inside Earth are low. Even in the lithosphere, ∂T /∂z only reaches ∼ 20 K km −1 . Thermal transport properties vary little over a few degrees (Fig. 3). Because T varies less than 4 K over L > 5 km inside Earth, its thermal length scale is immense, and so heat transfer therein is always diffusive and isothermal properties are relevant. But in the laboratory, L is ∼ 10 5 smaller, permitting ballistic (boundary-to-boundary) transport to augment diffusion, as recognized in minerals and rocks by Kanamori et al. (1968) and further documented by Pertermann and Hofmeister (2006), Branlund and Hofmeister (2007), and Merriman et al. (2018). Laser-flash experiments reduce and remove ballistic effects via sample coatings and via models (e.g., Blumm et al., 1997;Hahn et al., 1997). Our large and growing LFA database (e.g., Hofmeister, 2019) and the associated theoretical model are essential to ascertain heat transport at high mantle temperatures. Seismic models provide velocities and density inside the Earth. Pressure is well constrained, since Earth's mass and moment of inertia provide independent boundary conditions (e.g., Anderson, 2007). Mineralogy is based on comparing laboratory data on minerals to radial models, such as the preliminary earth reference model (PREM) of Dziewonski and Anderson (1981) since radial changes depict average values. Comparison with laboratory studies in a forward-(fitting) approach is used but leads to equivocal results because temperature is not known independently. For the Earth, temperatures are changing, heat is moving, and seismic waves contribute energy to the rocks during their attenuation. 
Hence, conditions are not adiabatic, as previously assumed in forward-(fitting) models. In addition, minerals vary greatly in possible chemical compositions and structures. Assessing the lower mantle is particularly uncertain because no rocks have been exhumed from below 670 km. Inclusions in diamonds only indicate P and T conditions when a single inclusion contains multiple phases because most, if not all inclusions, predate their diamond host (Nestola et al., 2017). The inference that lower mantle material is preserved in microdiamonds is based on separated inclusions of (Mg,Fe)O and enstatite (Stachel et al., 2000). The tetragonal garnet phase TAPP (now jeffbenite) once considered to form in the lower mantle is now known to be stable above 13 GPa, i.e., in the transition zone (Nestola et al., 2016). Evaluating temperatures from seismic models via forwardfitting requires knowledge of the mineralogy (e.g., Cammarono et al., 2003). A thermal model is needed to account for Earth's heat being lost to space (i.e., non-adiabatic gradients). For the lower mantle, a wide range of κ values is possible due to the ambiguities in mineralogical models, even if experimental uncertainties were small. An alternative approach to fitting seismic velocities is needed to better understand this immense region of the thermally evolving Earth. All data were collected from millimeter-sized samples below 2 GPa. The box lists parameters for a linear fit with no intercept. Metals and Si (red crosses) are included in the fit. KBr exemplifies hydroscopic alkali halides which are soft (bulk moduli < 16 GPa). Details on the 24 heat transport studies of this figure are given in Hofmeister (2021), Table 3. Bulk modulus has been measured many times for the samples and is listed in the compilation of Bass (1995) and several others. Purpose and thesis In view of limited knowledge of the LM, an analytical inverse approach is used here to decipher its thermal state from a seismic reference model with minimal assumptions. As in previous large-scale mineralogical or thermal studies (e.g., MacDonald, 1959;Anderson, 2007;Murikami et al., 2009;Criss and Hofmeister, 2016), average, radial temperatures are sought to describe Earth's structure, which is reasonably represented as spherical shells. Surface heat flux being remarkably similar for the continental and oceanic crusts, despite the great contrast in their heat-generating elements (e.g., Veiera and Hamza, 2018), points to the radial thermal gradients dominating Earth's thermal state and evolution. The low measured surface emissions of ∼ 60 mW m −2 , corresponding to ∼ 100 W km −3 of underlying rock, show that thermal evolution is now slow; i.e., conditions are quasi-steady state. Hence, angular (lateral) motions of heat are unimportant to describing Earth as a whole. Our mathematical analysis is based on decades of mineral physics efforts which show that (1) pressure derivatives of diverse physical properties vary far less than ambient values do and that, as P climbs, all properties increase more weakly with P . (2) Physical properties at high T behave similarly, as illustrated by the Dulong-Petit limit representing heat capacity at high T . (3) Second-order P or T derivatives of physical properties are small, which means that cross derivatives are small, and so separation of variables reasonably describes many physical properties. Tabulated data on diverse properties and phases (e.g., Anderson and Isaak, 1995;Bass, 1995;Fei, 1995;Knittle, 1995) illustrate these points. 
If variables are separable, a property of interest (ϒ) follows the form where f and g are independent, dimensionless functions. Equation (5) describes bulk and shear moduli from diverse elasticity experiments (e.g., Anderson and Isaak, 1995;Bass, 1995). This finding is important to heat transport, as bulk modulus is the prime descriptor of κ(P ) per dimensional analysis (e.g., Dugdale and MacDonald, 1955). For the lower mantle, variations in velocities from available seismic reference models differ negligibly except for the ∼ 200 km above the core (D ) where variations among studies are small, despite larger uncertainties for this region (see figures in Kennett et al., 1995). Utilizing PREM suffices (see Section 2.1 for further discussion). In the LM, excluding D , velocity changes are slow and smooth, leading to interpretation of invariant chemical composition. Since T changes far more slowly with distance in the Earth than in experiments, an isothermal bulk modulus represents the mantle values. The present paper assumes that changes in mineralogy of the LM are secondary, i.e., that the main changes in its seismic radial profile are from P and T , which permits use of Eq. (5). General behavior of bulk moduli for dense materials from both compression and elasticity studies supports this contention. It is most fortunate that the derivatives are simply described: where for dense and hard materials compatible with the LM, the constant B = ∂B/∂P is commonly near 4 and B = ∂ 2 B/∂P 2 is negative and sufficiently small to require very high pressure for its resolution (e.g., Sinogeikin and Bass, 1999;Zha et al., 2000). Results center on B = 4 because this value corresponds to a harmonic interatomic potential (e.g., Hofmeister, 1993) and anharmonicity links to T , not P , changes (e.g., Wallace, 1972). Most measurements provide ∂B/∂T as a constant. Although second-order T derivatives exist, these are small (if even resolvable) for hard oxide and silicate minerals, as shown in compilations and more recent work (e.g., Aizawa et al., 2004). 1.4 Synopsis of our novel, analytical inverse approach, and organization of the report Section 2 shows how to extract the LM temperature gradient (∂T /∂z) from pressure and depth derivatives of radial (2012). Forsterite and olivine are 001 plates (Pertermann and Hofmeister, 2006). Perovskites from Hofmeister (2010a). (b) Thermal conductivity calculated from data on D, c P , and density using Eq. (3). For the oxides, the dotted lines use constant, high-T values for c P and ρ, whereas their other curves use temperature-dependent data. Orthorhombic perovskite is an estimate for a wide range of compositions; see Sect. 3.1. seismic reference models by using Eqs. (5) and (6), in an inverse approach. Fitting is not used, which describes familiar, forward modeling. The extraction uses generic values for ∂B/∂T and ∂ 2 B/∂P 2 , which describes dense phases, including the rock salt and perovskite type structures thought to occur in the deep mantle, to explore the possible range of ∂T /∂z and its depth dependence. Thus, our thermal model is independent of what phases with what compositions might exist in the LM. The method is new, so details are provided. Section 3 sets upper and lower bounds on heat transport properties for the LM based on verifiably accurate methods. We derive a simple formula for ∂ ln(κ)/∂P from Fourier's heat equation, which confirms that the result of Hofmeister (2021) is an identity. 
The sole parameter in the identity (other than B T and P ) appears be a constant, as suggested in Fig. 2 and independently by previous work (e.g., Chopelas and Boehler, 1992). The resulting bounds on κ(T , z) for the LM, combined with ∂T /∂z derived from PREM (Sect. 2), provide flux and power across the LM, without assuming its mineralogy. Section 4 constructs geotherms using a reference temperature at a shallow level that avoids melting of a peridotite composition anywhere in the LM. These geotherms are independent of κ. Then, we ascertain which of our geotherms, fluxes, and powers are compatible with additional constraints, such as phase equilibria and latent heat of melting. Our inverse model, which is based on radial seismic changes and high T and P behavior common to dense phases, indicates that the LM has a heat source which is warming the outer core, while causing the inner core to melt. Possible heat sources are discussed in Sect. 5, along with implications of our results. 2 Extraction of geothermal gradients from radial seismological models in an inverse approach Features of PREM Seismic reference models represent Earth's average interior; i.e., they are radial. Aspherical images of the Earth's internal structure are represented as perturbations to a reference mode (e.g., Ritzwoller and Lavely, 1995). Reference model results are displayed as the fairly smooth functions of velocities, density, and pressure as a function of depth (z) or radius (s), or similarly as plots of bulk and shear moduli, the quantities of which are also fairly smooth, being derived from ρ and the two velocities. Seismic discontinuities are present as kinks, most of which are small in these typical representations. In contrast, large jumps dominate plots of derivatives of variables vs. depth (Fig. 4). Hence, this paper makes use of the derivatives. Taking derivatives accentuates differences, as this mathematical operation is the converse of integration, which averages and smooths. The pattern exhibited by velocity derivatives (not shown) is similar to moduli derivatives, whereas the density derivative (not shown) is relatively smooth, more like the pressure derivative, and so the depiction of Fig. 4 is inherent to PREM. Smooth and continuous ∂B/∂P describes the lower mantle but only between depths of 871 and 2741 km (Fig. 4). Im- Figure 4. Derivatives from the seismologic model PREM (Dziewonski and Anderson, 1981) as a function of depth. Variables as tabulated by Anderson (2007), from which we calculated the pressure derivatives of the bulk (black) and shear moduli (grey) as well as the depth derivative of pressure (dots). portantly, other reference models such as Ak135 differ negligibly from PREM velocities in this restricted region (see Fig. 12 in Kennett et al., 1995). Our approach (below) applies to continuous functions only, so PREM suffices to represent all reference models of this volumetrically immense region. However, D , where seismic models differ, cannot be quantitatively analyzed. We can only extrapolate into this tiny shell. For brevity, "lower mantle" or "LM" refers to its central region from 871 to 2741 km only, unless specified otherwise. Extrapolation of results for the LM into the underlying D layer and up to 671 km, which defines the transition zone (TZ), is discussed in Sects. 4 and 5. Separation of variables The geothermal gradient is defined by PREM provides the input quantity ∂P /∂z (Fig. 4). 
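The derivatives in Fig. 4 can be reproduced from any tabulated radial model by finite differences. The sketch below uses short placeholder arrays standing in for tabulated PREM depth, pressure and bulk-modulus values (e.g., as listed by Anderson, 2007); it illustrates how ∂P/∂z and ∂B/∂P are obtained and is not the processing used for the published figures.

```python
# Minimal sketch: depth derivatives of P and B from a tabulated radial model and
# their ratio dB/dP, the quantity plotted in Fig. 4. Inputs are placeholders.
import numpy as np

z = np.array([871., 1071., 1271., 1471., 1671., 1871.]) * 1e3   # depth, m (placeholder)
P = np.array([33.0, 41.0, 49.1, 57.4, 65.9, 74.6]) * 1e9        # pressure, Pa (placeholder)
B = np.array([315., 348., 380., 412., 443., 473.]) * 1e9        # bulk modulus, Pa (placeholder)

dP_dz = np.gradient(P, z)      # Pa/m; roughly rho*g, a few times 1e4 Pa/m in the LM
dB_dz = np.gradient(B, z)
dB_dP = dB_dz / dP_dz          # dimensionless

for zi, d in zip(z, dB_dP):
    print(f"z = {zi/1e3:6.0f} km   dB/dP ~ {d:4.2f}")
```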
PREM values for B and ∂B/∂P as a function of depth are affected by both compression and heating of the minerals inside the LM. Our goal is to utilize available data to distinguish the effects of P and T on B from PREM. This is possible through separation of variables and decades of data acquisition. Equation (6) provides the input for ∂B/∂P . Because an inverse approach is being used, we consider values compatible with many dense phases, oxides, and silicates. The changes (differences) below z = 871 km are of interest, so the value of ∂B/∂P at 871 km serves as a reference point. This approach links B and B , as shown graphically in Fig. 5. Hence, obtaining thermal gradients from PREM via Eq. (7) requires some estimates of ∂B/∂T and ∂ 2 B/∂P 2 but not of ∂B/∂P . This is understood by considering two end-member cases. The case B = 0 for the lower mantle sets an upper limit since B is not positive. If this commonly used limit (e.g., Knittle, 1995) applies to the LM, then ∂B/∂P with depth solely results from compression and thus is invariant (horizontal dashed-dotted line in Fig. 5). Consequently, rising temperature causes the linear decrease in PREM ∂B/∂P with z immediately below 871 km under separation of variables. For the second case, we consider B = −0.015 GPa −1 , which reproduces the decrease in PREM ∂B/∂P with z below 871 km. With this match, changes in B of PREM solely result from compression; i.e., T is constant for z slightly below 871 km. Hence, B between 0 and −0.015 GPa −1 depicts a LM that is both compressing and warming below 871 km, whereas B > −0.015 GPa −1 depicts a cooling and compressing LM just below 871 km. This case is not shown because T in the outermost layers of the Earth increases with depth, and so the top of the LM should behave likewise. Once z reaches 1300 km, PREM ∂B/∂P curves have become rather flat, but as z increases further, deeper than ∼ 2200 km, PREM ∂B/∂P curves take on a positive slope with z, which is linear just below 2741 km. The broad minimum near 2000 km suggests that a maximum temperature may exist in the LM, where its manifestation depends on mantle values of ∂B/∂T and ∂ 2 B/∂P 2 (discussed below). The positive slope in PREM ∂B/∂P curves at great depths, assuming that Eq. (6) represents the LM, shows that its deepest extent is shedding its heat downwards while compressing, discussed further in Sect. 4. The above findings are general. Here we emphasize that following the discovery of heat generation by radionuclides, it was recognized that Earth may be heating up rather than undergoing progressive cooling. This question was considered in some remarkable papers (e,g., MacDonald, 1959). Notably, Takeuechi et al. (1967) devoted an entire chapter of their book to this subject. Importantly, temperature differences from the starting point at 871 km are germane. Consequently, the input for ∂B/∂P in Eq. (7) is the PREM curve less a line for the P response of B, i.e., Eq. (6). Figure 5 shows the two endmember cases, discussed above, and one intermediate case. These three examples show that B is controlled by the choice of B , in order to match the starting point at 871 km. The range of values of 3.8 to 4.3 is compatible with mineral data (e.g., Knittle, 1995), with 4 being the harmonic value (Hofmeister, 1993). The edges of the transition zone and inner core define the x axis. Discontinuities shallower than 871 km and in D are not addressed in our approach. 
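Equation (7) itself is not reproduced in this excerpt, but the procedure described above (the PREM ∂B/∂P curve minus the compression line of Eq. (6), converted to a temperature gradient through ∂B/∂T and ∂P/∂z) corresponds to the chain rule dB/dz = (∂B/∂P)_T ∂P/∂z + (∂B/∂T)_P ∂T/∂z. The sketch below implements that assumed reading with placeholder inputs; the published Table 1 fits are not used here.

```python
# Sketch of the gradient extraction (assumed reading of Eq. 7): along the radial
# profile, (dB/dP)_PREM = (dB/dP)_T + (dB/dT) * (dT/dz) / (dP/dz), hence
#   dT/dz = [ (dB/dP)_PREM - (B0p + Bpp*(P - P_ref)) ] * (dP/dz) / (dB/dT).
# z is depth, so positive dT/dz means warming downward. Inputs are placeholders;
# dB/dT and the Bpp range follow the values discussed in the text.
import numpy as np

z         = np.array([871., 1271., 1671., 2071., 2471.])      # km (placeholder)
P         = np.array([33.0, 49.1, 65.9, 83.0, 100.4])         # GPa (placeholder)
dBdP_prem = np.array([4.00, 3.90, 3.82, 3.79, 3.81])          # placeholder PREM-like values
dP_dz     = 0.040                                             # GPa/km, nearly constant in the LM

B0p, Bpp = 4.00, -0.0025       # reference slope and assumed small curvature (1/GPa)
dB_dT    = -0.026              # GPa/K, the input value adopted in the text

dT_dz = (dBdP_prem - (B0p + Bpp * (P - P[0]))) * dP_dz / dB_dT   # K/km
for zi, g in zip(z, dT_dz):
    print(f"z = {zi:6.0f} km   dT/dz ~ {g:6.3f} K/km (illustrative only)")
```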
If ∂ 2 B/∂P 2 is small, near 0, then PREM indicates that warming first occurs as z increases but at greater depths temperatures decrease. This behavior describes the LM and outer core. Constraints on ∂B/∂T and ∂ 2 B/∂P 2 from elasticity and volumetric measurements of diverse phases Regarding the thermal input parameter, ∂B/∂T equals −0.023 GPa K −1 from the average of four experiments on MgO (compiled by Bass, 1995) and is only slightly larger in magnitude, −0.029 GPa K −1 , for MgSiO 3 with the perovskite structure (Aizawa et al., 2004), now known as bridgmanite. A large range of values is possible, per Aizawa et al. (2004) and work cited therein. More importantly, uncertainties are high while similar values are observed for corundum, spinel, five compositions of olivine, orthopyroxene, zircon, five garnets, seven other oxides, and four metals, whereas framework silicates, diamond, and alkali halides have ∂B/∂T near −0.01 GPa K −1 (Bass, 1995;Anderson and Isaak, 1995). An input value of −0.026 GPa K −1 is used, the average of which is within reported experimental uncertainty of measurements of LM candidate minerals and moreover describes dense silicates and oxides in general. Because ∂T /∂z is inversely proportional to ∂B/∂T , the effect of varying this parameter on the results is easily ascertained. In contrast, a non-linear response is associated with B , since a difference with PREM is involved and PREM curves are non-linear with depth (Figs. 4 and 5). Hence, possible B values are the focus. From the compilation of Bass (1995), B is positive for silica glass and near 0 for MgAl 2 O 4 spinel, which is also disordered. Otherwise, B for oxides and silicates ranges from −0.03 to −1.6 GPa −1 . The largest magnitude depicts orthopyroxene, which has unusually high B as well. For bridgmanite, B was not resolved even at compression to 155 GPa (Dorfman et al., 2013), consistent with incompressibility of high-pressure, dense phases. Aluminum being present makes no difference (Zhu et al., 2020). However, ubiquitous use of the Birch-Murnaghan equation of state, which involves trade-offs between B and B (see, e.g., Knittle, 1995), prevents resolution of B for these stiff structures. Polynomial fits are needed to ascertain B . Many studies exist of MgO, but some ambiguity exists because elasticity data are fit to a polynomial in pressure, whereas volumetric data are analyzed using an equation of state, which is a general formulation for the P and T dependence of volume (V ). The second-order polynomial coefficient for B(P ) is B /2. To make sure this convention (i.e., Eq. 6) was used, original studies were consulted. Elasticity measurements of MgO by Sinogeikin and Bass (1999) provide B = −0.04 ± 0.02 GPa −1 . The X-ray diffraction study of Yoneda (1990) is consistent with the range of 0 to −0.029 GPa −1 . Zha et al. (2000) reached the highest pressures and found that the null value reasonably represents elasticity and volumetric data combined. For the dense LM, B is small. Again, B = 4 and negligible B describes a harmonic solid (e.g., Hofmeister, 1993). We consider 0 to −0.02 GPa −1 , mostly in steps of 0.0025 GPa −1 , to calculate ∂T /∂z from Eq. (7). Thermal gradients from 871 to 2741 km Thermal gradients are shown in Fig. 6, with an example of a fit to B = −0.005 GPa −1 . The sign convention used here is based on flow from a central source (s = 0) moving outwards. All curves are well represented by third-order polynomials (Table 1) with similarly high residuals. 
Fits can be scaled to address variations from an input value of −0.026 GPa K −1 . Constrains on ∂T /∂z are covered in Sect. 4. 3 Lower mantle transport properties from theory and experiment Dependence of thermal diffusivity on temperature Three-parameter fits describe measurements of thermal diffusivity for diverse solids above room temperature up nearly to melting that are neither affected by physical contact losses nor by spurious radiative transfer gains: The polynomial for ∂T /∂z is T 0 + T 1 z + T 2 z 2 + T 3 z 3 . n/a: not applicable because large |B | leads to unsupportable melting that occurs near 670 km. n.d.: not determined. a These values satisfy the criteria that the LM is not melted and that the outer core melts above 2750 K as measured for the Fe-S system (see text). b Range at 2741 km from estimates of low and high κ. Power at the core mantle boundary (CMB) is lower, by about 1 TW. Positive sign indicates heat flow from the core into the LM, discussed below. c From the broad maximum in power which is nearly the same for low and high κ. d Most likely to represent the LM; see text. The fitting coefficient G is near unity and H is small (Hofmeister et al., 2014). When H = 0, the parameter F * = F (298) G on the right-hand side equals D at 298 K. The general applicability of these formulae has been established by additional measurements now encompassing over 200 substances (Merriman et al., 2018;Hofmeister, 2019). Equation (8) represents sample thickness L > 1 mm, i.e., bulk samples, and high temperatures and thus is appropriate to the mantle. Examples of D(T ) for dense phases are shown in Fig. 3. Generally, H is quite small, ∼ 0.0002 mm 2 s −1 K −1 , but is essential to represent high-temperature behavior (T > 1200 K) of structures involving unit cell formulae more complex than Al 2 O 3 . For simple materials such as MgO and alkali halides occupying the cubic B1 and B2 structures, H = 0 within uncertainty. However, Si with the diamond structure has non-negligible H when impurities are present, whereas graphite, which has a more complicated anisotropic structure and is generally impure, has a substantial H term (Hofmeister, 2019, chap. 7). LFA data on glasses show that large H is commonly associated with high Fe cation content (e.g., Sehlke et al., 2020). These findings point to absorption bands above ∼ 1000 cm −1 and into the visible region being associated with the HT term. To provide H when LFA data on (Mg x Fe 1−x )O are not available, we consider corundum, rather than MgO, to represent an oxide-rich lower mantle. For a silicate LM, systematic behavior of orthorhombic and cubic perovskites with various chemical compositions (Hofmeister, 2010a) is considered. The average D from the three orientations of NdGaO 3 is used to compute the T dependence of D for orthorhombic perovskite, since this agrees with D near 298 K for unoriented MgSiO 3 from Osako and Ito (1991). A periodic technique was used, and their sample was polycrystalline. Contact losses are more important than ballistic gains, because the latter is reduced by physical scattering between grains. Hence, D(T ) for NdGaO 3 in Fig. 3a represents a minimum for a complex silicate phase in the LM. Differences among dense silicates at high temperature are not large: this is the basis of D = 1 mm 2 s −1 being commonly used in geophysical models. Near independence of D from T for complex solids at high T considerably simplifies calculations (below). Figure 3b shows examples of κ(T ) calculated from Eq. (3). 
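Equation (8) is not reproduced in this excerpt; the three-parameter form used in this group's earlier laser-flash work, D(T) = F T^(-G) + H T, is assumed in the sketch below, which fits synthetic, perovskite-like values purely to illustrate the shape of the fit.

```python
# Sketch: fitting D(T) with the assumed three-parameter form F*T**(-G) + H*T
# (taken here as the form of Eq. 8). The "data" are synthetic and illustrative.
import numpy as np
from scipy.optimize import curve_fit

def D_model(T, F, G, H):
    return F * T**(-G) + H * T

T_data = np.array([300., 500., 800., 1200., 1600., 2000.])      # K
D_data = np.array([2.0, 1.3, 0.95, 0.80, 0.75, 0.74])           # mm^2/s, synthetic

(F, G, H), _ = curve_fit(D_model, T_data, D_data, p0=(500.0, 1.0, 2e-4))
print(f"F = {F:.0f}, G = {G:.2f}, H = {H:.1e} mm^2/(s K)")
print(f"D(1800 K) ~ {D_model(1800., F, G, H):.2f} mm^2/s")
```

The near-flat behaviour of the fitted curve above ~1200 K mirrors the near-constancy of D at mantle temperatures that is exploited below.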
For MgO and Al 2 O 3 , c P and ρ as a function of T are well constrained even at high T (e.g., Ditmars et al., 1982;Fiquet et al., 1997;Chase, 1998). Heat capacity data on MgSiO 3 perovskite (Akaogi and Ito, 1993) are limited to near-ambient T due to back-conversion problems. Using a model to extrapolate c P is unnecessary, due to uncertainties in D(T ) for silicate perovskite. Since D varies more strongly with T compared to c P and ρ, which moreover respond in opposite directions during heating, ambient values are used to estimate κ(T ) for a silicate perovskite mantle. To ascertain the uncertainty in this approach, analogous computations were made for κ(T ) of periclase and corundum. Accurate and estimated trends for MgO differ little (Fig. 3b), but the Al 2 O 3 estimate is significantly steeper with T than using exact values in Eq. (3). Corundum D(T ) is very flat and is better fit to a polynomial than to Eq. (8), so the discrepancy is likely due to extrapolation beyond the temperatures actually measured. Another factor is that the T variations of both D and κ with T are weak when the corresponding ambient values are low, as is indicated in Eq. (9) and evident in Fig. 3. Thermal conductivity of other silicates depends similarly on T as our estimate for perovskite (for examples, see Hofmeister et al., 2014, and references therein). Therefore, we estimate high-T mantle thermal conductivity in terms of constant, limiting values. For an insulating silicate LM, κ is taken as 2.7 W m −1 K −1 , whereas for a thermally conductive oxide LM, κ is taken as 7 W m −1 K −1 . The generic value of D used in geophysical models corresponds to ∼ 3.5 W m −1 K −1 . Dependence of thermal conductivity on pressure Many formulae have been proposed for P derivatives of transport properties, based on dimensional analysis. An exact thermodynamic relationship, was derived and confirmed using reliable available data on 20 different homogeneous solids at pressures up to 2 GPa (Hofmeister, 2021). Equation (9) excludes a typographic error in the earlier report. The physical properties, other than κ, are part of the equation of state (EOS). Thermal expansivity is defined by Compressibility (= 1/B T , the bulk modulus) is defined by The dimensionless Anderson-Grüneisen parameter (δ T ) describes the opposing effects of T and P on V , while clearly showing that the efficiency of expansion and compression for any given solid is related. Hence, the farright-hand side (RHS) of Eq. (9) describes how temperature components of the EOS, not just pressure components, regulate changes in heat conduction during compression. Derivation of ∂ ln(κ)/∂P from Fourier's equation Due to the importance of compression to mantle heat transfer, we explain why Eq. (9) is exact. The original derivation considered diffusion of thermal radiation. A simpler approach is covered here. Since is independent of pressure in experiments, taking the P derivative of its definition (Eq. 4) suffices to relate ∂ ln(κ)/∂P to EOS parameters. The algebra is simple and not specified here. However, one must recognize that a negative temperature gradient (∂T /∂z| P ) is associated with positive signs for and κ in Eq. (4). But Eq. (10) defining thermal expansivity is a scalar quantity, being based on volume, which has no direction. A positive sign for α, typical of most materials, requires the thermal gradient and its inverse, ∂z/∂T | P , to be positive in an isotropic solid. 
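The display equations (10)-(12) are not reproduced in this excerpt; for reference, the standard definitions of the quantities invoked above are presumably

```latex
\alpha \equiv \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_P, \qquad
\frac{1}{B_T} \equiv -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_T, \qquad
\delta_T \equiv -\frac{1}{\alpha B_T}\left(\frac{\partial B_T}{\partial T}\right)_P .
```

With these definitions, α is indeed a scalar quantity based on volume, as stated above, while δ_T collects the competing effects of T and P on V into a single dimensionless number.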
In contrast, heat flow has a well-defined direction from some origin and so is a vector quantity. Maintaining consistent signs for both κ and α during algebraic manipulations after differentiation of Fourier's equation leads to Eq. (9). Experimental validation The hot wire/hot strip and Angstrom techniques accurately measure thermal transport in metallic samples at low P (< 2 GPa) and low T (< 1000 K) since metal-metal contact losses are low and ballistic radiative transfer gains are negligible (e.g., Andersson and Bäckström, 1986;Jacobsson and Sundqvist, 1988). Moreover, use of standard L near millimeters in piston-cylinder and multi-anvil apparatuses permits direct comparison of results on diverse materials. Reliable data for insulators at high P over millimeter length scales have been obtained near 298 K using the hot strip/hot wire or Angstrom methods (e.g., Andersson, 1985;Osako et al., 2004). Samples are single crystals, glasses, and disks of finegrained soft powder (alkali halides) that were compacted prior to study. Unlike metals, systematic errors exist due to interface thermal resistance and ballistic transfer, but taking a logarithmic pressure derivative minimizes these problems. Low-pressure transport property measurements and EOS results of over 20 solids, mainly from elasticity data, are summarized in Table 3 of Hofmeister (2021). The response of thermal conductivity to compression (Fig. 2) points to δ = 7 at ambient conditions representing the average for silicates, oxides, metals, alloys, and alkali halides. Further verification is provided by a well-studied material with a special, negative sign of thermal expansivity. A negative sign for pressure response for thermal conductivity is expected and was indeed observed for silica glass by Andersson and Dzhavadov (1992) and Katsura (1993). Consistency with Eqs. (9) to (12) is demonstrated, since fused silica also has uncommonly positive ∂B/∂T (Spinner, 1956). Chopelas and Boehler (1992) constrained mantle values of δ from 5 to 6 for metals, oxides, and alkali halides by assessing thermal expansivity at high pressure and temperature. Anderson et al. (1992) argued for an ambient value δ 0 = 6.5 and a weak volume dependence. The focus of these studies, along with Helffrick (2017), who further modified the V dependence, is the effect of compression on α. Previous EOS evidence for nearly constant δ T Larger δ 0 = 7 was obtained from Fig. 2, which compares measurements of ∂κ/∂T to 1/B for the same types of solids. Hofmeister (2021) calculated δ 0 from temperature derivatives of bulk moduli, which are more accurate than P (or V ) derivatives of α for several reasons. (1) Elasticity measurements determine ∂B/∂T as a first derivative and so are more accurate than x-ray diffraction (XRD) studies which determine ∂B/∂T as well as ∂α/∂P as second derivatives. (2) Linear dependence of B on T exists over a wide range of temperatures (e.g., Anderson and Isaak, 1995), which simplifies establishing this parameter and reduces its uncertainty. (3) Because B is large in magnitude whereas α is small, derivatives of B are easier to determine accurately. Average δ 0 = 7 obtained in Fig. 2 better agrees with the EOS study of Anderson et al. (1992) because they included elasticity data in their assessment. (4) Data on thermal expansivity at pressure from XRD methods give α as an average over the temperature ranges explored, which makes this approach to P derivatives of α very uncertain. 
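Taking the abstract's relation ∂ln(κ)/∂P ≈ 7.33/B_T at face value, together with the high-temperature end-member conductivities adopted in Sect. 3.1, the growth of κ with depth can be sketched by integrating along a B_T(P) profile. The linear profile below is a placeholder, not PREM, and the printed values are illustrative only.

```python
# Sketch: kappa(P) across the lower mantle from d ln(kappa)/dP ~ 7.33 / B_T
# (the relation quoted in the abstract), integrated over a placeholder linear
# B_T(P) profile. End-member ambient values follow Sect. 3.1 of the text.
import numpy as np

P   = np.linspace(33.0, 136.0, 200)            # GPa, ~871 km depth to the CMB
B_T = 315.0 + 3.9 * (P - P[0])                 # GPa, placeholder profile

dlnk_dP  = 7.33 / B_T                          # 1/GPa
lnk_gain = np.concatenate(([0.0],
    np.cumsum(0.5 * (dlnk_dP[1:] + dlnk_dP[:-1]) * np.diff(P))))   # trapezoid rule

print(f"kappa grows by a factor of ~{np.exp(lnk_gain[-1]):.1f} across this pressure range")
for k0, label in ((2.7, "silicate end member"), (7.0, "oxide end member")):
    print(f"{label}: {k0:.1f} -> {k0 * np.exp(lnk_gain[-1]):.1f} W/(m K)")
```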
High uncertainty in α, let alone its P derivative, is evident in the compilation of Fei (1995). Pressure-independent δ is consistent with separation of variables describing the bulk modulus, i.e., if Then from the RHS of Eq. (12) follows Equation (14) equals 0 when δ 0 = B and is small otherwise. Equation (14) suggests that δ is a constant on the order of 4, the value of which for B constitutes the simple Murnaghan EOS. Similarly exploring the middle term of Eq. (12) indicates that δ also weakly depends on temperature when thermal expansivity is described by separation of variables. Constant δ is thus a reasonable first-order approximation. Exact evaluation is difficult due to the generally small sizes of all derivatives, limitations of a polynomial representation, and assumptions underlying the forms for the EOS. Compressible alkali halides are very important for this endeavor but are hydroscopic, easily deformed, and transparent in the infrared as single crystals. Lower and upper bounds for thermal conductivity in the LM The pressure (or depth) dependence of κ in the LM is much weaker for perovskites than oxides (Fig. 7), assuming δ 0 = 7. Corundum and perovskite have similar bulk moduli, so their different curves in Fig. 7 relate to relative efficiency of heat transfer at high T only. Pure MgO would have lower κ than Al 2 O 3 with depth due to T increasing beyond 1500 K (Fig. 3b) but would have higher κ with depth due to P increasing. For this reason, corundum, which has the same κ as periclase at 1500 K, is used to represent an oxide lower mantle (Fig. 7). Results Calculations of geotherms, flux, and power from 871 to 2471 km are presented here, which are extrapolated through D . Comparison is made with possible melting temperatures to eliminate cases unlikely to represent Earth's interior. The limiting case of B = 0 is unexpected but is useful for comparisons. Importantly, our geotherms do not utilize data on thermal conductivity and thus are essentially independent of mineralogy. In contrast, flux and power are independent of the reference temperature, but use κ and thus are affected by LM mineralogy. However, as κ varies little at high T (Fig. 3), only the proportion of complex silicates to simple oxides matters. Figure 7. Thermal conductivity in the lower mantle for oxide and silicate perovskite compositions assuming temperature independence and using Eq. (9) with δ = 7. Solid curves are computed for B = 0. Symbols utilize larger magnitude B than likely exists, given the results from PREM, but even this makes little difference. Calculation of temperatures across the lower mantle from PREM and a reference point Geotherms are calculated by integrating the thermal gradients (Fig. 6) which were obtained from PREM, from 871 km downwards, using various B values (Table 1). Results (Fig. 8) consider three values for the 871 km reference temperature (T ref ) representing the top of the lower mantle. The minimum T ref of 1500 K is based on temperatures for basalt extruding at the surface (e.g., Falloon et al., 2008), whereas the maximum T ref of 2500 K is based on dry melting of peridotite at 670 km (Zhang and Herzberg, 1994). The intermediate (T ref = 2000 K) corresponds to Takahashi's (1986) melting curve of peridotite at ∼ 410 km, which probably involved tiny amounts of moisture, based on sensitivity of melting to hydration. All geotherms (Fig. 8) are flat near 871 km, due to PREM derivatives being linear for the shallowest LM (Fig. 6). 
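The geotherm construction just described (integrate the dT/dz fits downward from 871 km for a chosen reference temperature) is straightforward to reproduce. The cubic coefficients below are placeholders standing in for a Table 1 entry, so only the shape of the calculation, not the numbers, should be read from this sketch.

```python
# Sketch: build a geotherm by integrating a Table-1-style cubic fit of dT/dz
# downward from 871 km for each reference temperature discussed in the text.
# The polynomial coefficients are placeholders, not the published fits.
import numpy as np

c0, c1, c2, c3 = -2.25, 4.0e-3, -1.75e-6, 2.25e-10      # K/km per km^n, placeholder

z     = np.linspace(871.0, 2741.0, 400)                  # km
dT_dz = c0 + c1*z + c2*z**2 + c3*z**3                    # K/km, flat near 871 km

dT = np.concatenate(([0.0],
     np.cumsum(0.5 * (dT_dz[1:] + dT_dz[:-1]) * np.diff(z))))

for T_ref in (1500.0, 2000.0, 2500.0):                   # K, the three anchors in the text
    print(f"T_ref = {T_ref:.0f} K  ->  T(2741 km) ~ {T_ref + dT[-1]:.0f} K")
# Compare the printed values with the ~2750 K Fe-S eutectic and ~4100 K peridotite
# solidus used in the text to screen acceptable (T_ref, B'') combinations.
```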
Thus, the inverse method suggests that temperatures change little from 871 km up to the shallower depth of the 671 km seismic discontinuity. Chemical composition could be changing near 670 km, as descending slabs are resolved from earthquakes down to these depths but not below (summary figures are in Hofmeister, 2020, chap. 7, andin Hofmeister et al., 2022). Regarding the base of the LM, the geotherms smoothly decrease with depth. Extrapolation of LM results from 2741 to 2891 km presumes similar bulk moduli derivatives for D and the LM. Although this seems unlikely, since reaction with core material is possible, discontinuities at these depths preclude robust analysis (Sect. 2), and modeled velocities in this region are less certain than in the LM (e.g., Kennett et al., 1995). But, as discussed in Sect. 2.3, as evidenced by compilations and recent work, derivatives of B vary little with structure, composition, and bond type. Melting curves of Fe-S and Fe-Ni-S systems (Chudinovskikh and Boehler, 2007;Morard et al., 2011;Mori et al., 2017) set a minimum of ∼ 2750 K at 2891 km, whereas peridotite melting (Fiquet et al., 2010) sets a maximum near 4100 K (Fig. 8). Only for the combination of the highest T ref and the limiting case of B = 0 is peridotite melting reached. Many, but not all, geotherms exceed the Fe-S eutectic. For example, if T ref = 2000 K, then |B | must be smaller than 0.0075 GPa −1 . Table 1 lists the range of reference temperatures consistent with the above-mentioned phase equilibria for each B value considered. Only for the hottest possible T ref = 2500 K can B reach −0.01 GPa −1 . Small second derivatives are consistent with difficulty in resolving these in experiments on dense materials, unless extreme pressures are reached (see Zha et al., 2000) and polynomial fits are used (Sect. 2.3). Irrespective of the temperature values, the shape of the geotherms require a thermal maximum inside the lower mantle. For large magnitudes of B , a maximum is indicated roughly near 670 km or slightly shallower. For the smallest |B |, the maximum T in the lower mantle is reached at its interface with the core. In all cases consistent with phase equilibria, maximum LM temperatures are reached for z below 2200 km. Thus, from analyzing PREM curves, LM temperatures climb inwards for most cases considered. A heat source located in the LM is supported by flux and power calculations, below. Altering our input value for ∂B/∂T from −0.026 GPa K −1 will either expand or contract the splayed patterns of Fig. 8, which rest on thermal gradients of Fig. 6 obtained from Eq. (7). Comparison of such revised curves with phase equilibria would then change the ranges of B and T ref that avoid melting in the LM while allowing melting in the core (Table 1, RHS), resulting in quite similar shapes. Thus, geotherms from PREM are robust, given the subsidiary information on melting relations, whereas the specific input values are interdependent. The shapes of the geotherms are compatible with the process of heat diffusion, i.e., thermal conduction, even though the calculations (depicted in Table 1 and Figs. 6 and 8) did not incorporate thermal conductivity values. Calculation of flux across the lower mantle from PREM and κ Flux, defined by Eq. (4), describes spherical geometry for radially changing T . Temperature values are not needed to compute the amount of heat being moved inside and across the LM, when κ only weakly depends on T . The gradients Aizawa et al. (2004). 
For the dry peridotite solidus, results of Zhang and Herzberg (1994) are merged with high P data or Fiquet (2010). The thick black line is the eutectic melting curve of the Fe-S system (e.g., Morard et al., 2017). of Fig. 6 and Table 1 lead to families of flux vs. depth curves which depend on B values for each of low and high κ. The sign convention used here portrays heat from the center of a sphere moving outwards. All cases (Fig. 9) provide a broad peak for across the lower mantle. For comparison, surface flux values are larger, averaging ∼ 60 mW m −2 for either crust (Veiera and Hamza, 2018). The height of the peak in increases as B approaches its null limit. Only for this limiting case (B = 0) is flux within D large and positive, the behavior of which signifies that heat emitted from the core contributes to the flux in D and at large z in the LM. For the next larger value of B , flux near D is positive but near zero. Thus, the vast majority of our calculations point to a lower mantle source whose heat is being shed to its adjacent layers. Section 4.3 provides further discussion. Note that the curvature and a maximum in are inherent to PREM (Figs. 4 to 6;Sect. 2.3). The increase in thermal conductivity with P serves to flatten these curves at great depths near the outer core and, importantly, means that the efficiency of heat conduction increases with depth. Consequently, heat from a source deep in the Earth is conveyed more readily inwards than outwards. How much heat is carried depends on the material, i.e., on the high-T value at ambient pressure. Our model (Fig. 7) rests on heat transport values that are established via measurements. The thermal response at modest temperatures (< 2000 K) of any given material is largely controlled by infrared fundamentals and near-IR overtones, whose frequencies overlap with the associated blackbody curve (Hofmeis-ter, 2019, chap. 11). We have not accounted for electronic transitions of Fe 2+ augmenting heat transfer significantly above ∼ 1000 K, as observed in various glasses (Sehlke et al., 2020, and references therein), since the chemical composition of the LM is not known. The purpose of this paper is to ascertain Earth's thermal state with minimal assumptions and input parameters. As accuracy is not possible given the scant definitive information on LM mineralogy, such as samples, salient features not affected by the details are pursued here. Nonetheless, enhancements in thermal conductivity with depth would raise the flux near 2471 km and straighten the curves, resulting in melting in the deepest lower mantle. We suggest that κ cannot be significantly larger than that considered in Fig. 7 or the boundary layer D would be significantly larger and mostly molten, which contradicts observations of shear waves in this region (cf. behavior of velocities in the molten outer core from PREM, Fig. 5, to those in the solid layers). Calculation of power across the lower mantle from PREM and κ In spherical coordinates, power is provided by For all parameters explored, ℘ has a broad peak in the lower mantle (Fig. 10). The depth where the maximum ℘ occurs (Table 1) points to the location of a heat source. This source is in the lower mantle, unless |B | is larger than considered, which suggests a location near or in the TZ. Power is supplied to the LM from the core only for the case of high κ with quite small |B |, both of which are unexpected. 
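Flux and power follow directly from a geotherm and a κ(z) profile: Φ = −κ ∂T/∂s with s the radius and, for a spherical shell, presumably the Eq. (15) form ℘ = 4πs²Φ (the equation body is not reproduced in this excerpt). The sketch below uses a placeholder geotherm with an interior maximum to show how heat from a mid-lower-mantle source is shed both outward and toward the core.

```python
# Sketch: conductive flux and shell power for a placeholder lower-mantle geotherm
# with an interior temperature maximum. Phi = -kappa * dT/ds with s = radius
# (positive = outward); power through a shell is taken as 4*pi*s**2*Phi, the
# presumed form of Eq. (15). All profiles are placeholders, not Table 1 results.
import numpy as np

R    = 6371e3
s    = np.linspace(R - 2741e3, R - 871e3, 300)        # radius, m (base to top of the LM)
s_pk = R - 1900e3                                     # placeholder source depth ~1900 km
T     = 3300.0 - 3.8e-10 * (s - s_pk)**2              # K, placeholder geotherm
kappa = 8.0 - 4.0 * (s - s[0]) / (s[-1] - s[0])       # W/(m K), placeholder, larger at depth

flux  = -kappa * np.gradient(T, s)                    # W/m^2
power = 4.0 * np.pi * s**2 * flux                     # W

print(f"power at the base of the LM ~ {power[0]/1e12:+.1f} TW (negative = toward the core)")
print(f"power at the top of the LM  ~ {power[-1]/1e12:+.1f} TW (positive = outward)")
```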
For this unlikely situation to exist, the core would also need to have a heat source with a sufficiently large associated outward flux to overtake the flux inwards from the LM source. Here, we pursue a simple explanation, consistent with realistic input parameters, that a source in the LM supplies heat to the core region. In detail, half of the four curves require B = 0, which is unexpected, and a third curve points to negligible power from the core. The fourth case of an oxide (high κ) with low B = −0.0025 GPa −1 is probably incompatible parameters. This inference is consistent with large |B | being associated with compressible solids like salts (e.g., Bass, 1995). Periclase, which has been viewed as a lower mantle phase, is much more compressible than silicates with perovskite-type structures, considered to dominate the LM (see Sect. 2.3). If the mantle is mostly composed of dense silicates, as is the current view, the LM is an inefficient heat transmitter and stays hot where the heat is produced. Heat conveyed from the lower mantle to the core could cause melting. The latent heat of ∼ 450 kJ kg −1 for iron melting at high P and T (Aitta, 2006) is close to latent heats for basalt and many other materials. A constant rate of melting over geologic time requires 5.8 TW to make the outer core. This value is most compatible with the case of low κ (silicate) and B = −0.010 GPa −1 . If the source is winding down with time, as is likely, then B should be more positive. The case of low κ (silicate) and B = −0.0075 GPa −1 is compatible with a wide range of temperatures for the top of the LM and a sulfide-rich core, which addresses the expected sulfur concentration from meteoritic models (see tables in Lodders and Fegley, 1999). These parameters compose our best estimate of the thermal gradient, geotherm, flux, and power, while suggesting that heating occurs near z = 1900 km (Table 1). Implications and conclusions Earth's largest zone by mass or volume is the lower mantle, which has only been accessed remotely, via seismic data acquisitions and processing. PREM, Ak135, and other reference models provide nearly identical velocities (Kennett et al., 1995). The models yield smoothly varying properties and their derivatives over most of the LM, the changes of which are taken in this report to represent combined effects of P and T varying with depth. However, a smooth variation in chemical composition is not precluded in utilizing separation of variables (Sect. 2). Gradual changes in chemical composition could be hidden in our choices for B if changes in mineralogy lead to a linear response of ∂B/∂P . Specifically, mineralogy depending linearly on density and thus on P would lead to an equation equivalent to Eq. (6). Indeed, Fig. 9 and Eq. (15). Positive power is associated with temperature increasing inwards. Arrows indicate the direction of flow across the coremantle boundary region and also into the TZ. systematic dependence of velocity (and thus of B) on density exists and has received much attention (see, e.g., the seminal paper of Shankland, 1972). But because ∂B/∂P varies little among dense phases, as underscored by commonly assuming harmonic values of 4 in analyzing data (see Sect. 2.3), geotherms calculated from PREM via the inverse approach developed here are largely independent of mineralogical details. Values considered for B depict all three different thermal situations that are possible below 670 km: LM temperatures can increase with depth below the TZ, or are constant, or may decrease. 
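The 5.8 TW figure can be checked with a one-line energy balance: the latent heat needed to melt the outer core, delivered at a constant rate over Earth's age. The outer-core mass below is an assumed standard literature value, not a number given in this excerpt.

```python
# Back-of-the-envelope check of the ~5.8 TW figure quoted above.
M_outer_core = 1.84e24            # kg, assumed standard value
L_melt       = 4.5e5              # J/kg, ~450 kJ/kg as quoted in the text
age          = 4.5e9 * 3.156e7    # s, ~4.5 Gyr

print(f"~{M_outer_core * L_melt / age / 1e12:.1f} TW")   # ~5.8 TW
```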
Only one case was considered for decreasing temperatures, as strongly decreasing T is incompatible with a molten outer core (Fig. 8). The thermal state and gradients inward from the top of the LM indicate that a power source exists in the lower mantle. Locations of the peak in ℘ or flux are little affected by the high-T values of thermal conductivity considered in the inverse model: instead, the location of the power source is largely controlled by B. As discussed in Sect. 2.3, B is not well constrained. This ambiguity effectively lumps compositional changes with those solely due to compression. Contrastingly, the heights of the peaks in Figs. 9 and 10 are affected by values for κ and for ∂B/∂T. Neither variable depends strongly on mineralogy. Note that the large contrast (in κ) explored considers vastly different simple oxide vs. complex silicate compositions. Notably, thermal models show that the scale length for cooling over geologic time is ∼1000 km (Criss and Hofmeister, 2016), so heat generated in the LM cannot reach the surface to any appreciable degree. A similar scale length is obtained from Eq. (2): the Earth cools slowly because it is large and spherical with a refractory outside. Flat geotherms near z = 1000 km with a source near 2000 km are compatible with the thermally insulating nature of rocks and our earlier cooling models. Criss and Hofmeister (2016) used a constant κ that is independent of pressure and so assumed a much more thermally insulating mantle. The flat and low κ case underestimates flux in the lower mantle. The consequence is a narrower thermal maximum with little heating of the core in the calculations of Criss and Hofmeister (2016). The broad geotherms inferred from PREM without knowing thermal conductivity are thus consistent with the strong dependence of thermal conductivity on pressure that is demonstrated by experiments (Fig. 2) and, importantly, is inherent to Fourier's definition of κ (Sect. 3.3). In short, results (Figs. 6 and 8 to 10) obtained by analyzing the seismologic representation of Earth's interior are consistent with studies of the equation of state, phase equilibria, and thermal transport properties. Table 1 lists the most likely thermal gradient for the parameters explored. Although other values are possible, the most likely gradient is unlikely to change significantly, as likelihood was deduced from melting temperatures for LM and core candidate materials. Heat is produced in the LM, but how? Commonly considered sources are discussed next. The paper concludes with a proposal. Heat produced in rocks by radioactive decay has been the focus of many studies. Yet, it is well known that U, Th, and K are concentrated in the continents, leaving very little for the mantle, if meteorites represent the bulk Earth. Flux from the oceans suggests mantle production of < 100 W km−3. Such a tiny source can only heat the interior if deeply buried, but for this case excessively high T results, due to the time evolution of the heat-generating isotopes (Criss and Hofmeister, 2016). This long-standing problem in geochemistry led to considering primordial heat as another source. This hypothesis is based on gravitational contraction producing heat and rests on Kelvin's discounted proposal for the generation of starlight. Changes in gravitational potential produce motions as per elementary physics textbooks.
Conversion of gravitational potential to spin and orbits quantitatively accounts for the high kinetic energy for Earth and sister planets today, as well as high spin observed for young stars (see Hofmeister and Criss, 2012). Some accretionary heating is expected in the final stages, but this source is a small fraction of the gravitational potential, non-renewable, shallow, and winding down. Likewise, core formation is not a source of heat: rather, the planet would need to be already melted for a homogeneously accreted object to sort, since self-compression itself provides a stable density stratification. Heat is a by-product of motions, when accompanied by deformation, non-elastic in particular, or friction. Motions are produced by forces which on large, planetary scales are gravitational in origin. On this basis, and because the previously explored sources of radiogenic and primordial heat are inadequate to describe the workings of Earth (e.g., the hypothesis of mantle convection: see Bercovici, 2007) as well as seismologic detection of a molten core, which is hot, our recent efforts have focused on forces and motions. Hofmeister et al. (2022) argue that the location of the barycenter, where the immense solar pull and orbital centrifugal forces balance, differing from that of the geocenter results in imbalanced stresses and forces that are cyclical with periods of both 1 d and 1 month, with plate tectonics being a consequence. Cyclic stresses promote failure (Schijve, 2009). Spin is important, as the force field is axially symmetric, which explains orientation of the mid-ocean ridge fracture system. The barycenter is a point in space that the Earth spins through. Its location relative to the geocenter is defined by masses of the Earth and Moon plus the lunar distance, which varies over the month. The depth range of ∼ 1450 to 2050 km, shown in Figs. 8 to 10, includes the position of the power source indicated by B more negative than −0.005 GPa −1 . Over the day, this point in space moves through the LM and rarely lies in the equatorial plane, due to Earth's tilted spin axis. Thus, our results for Earth's thermal state are compatible with cyclical stresses heating the LM. This region is strong and plastic (or elastic) rather than brittle, like the lithosphere, but both are underlain by fluid layers. Liquids flow under any stress, the lack of rigidity of which adds stress to the overlying layers. The amount of heat generated is small, ∼ 1 TW, and is conducted away from the source in both directions. Investigating this proposal further is beyond the scope of the present report. Data availability. No data sets were used in this article. Competing interests. The contact author has declared that there are no competing interests. Disclaimer. Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Special issue statement. This article is part of the special issue "Probing the Earth: experiments and mineral physics at mantle depths". It is a result of the 17th International Symposium on Experimental Mineralogy, Petrology and Geochemistry, Potsdam, Germany, 1-3 March 2021.
13,608
2022-02-28T00:00:00.000
[ "Geology", "Physics" ]
AN INVESTIGATION OF DEEP DRAWING OF LOW CARBON STEEL SHEETS AND APPLICATIONS IN ARTIFICIAL NEURAL NETWORKS
In this study, the deep drawability of SAE 6114, a low carbon steel, was investigated. Materials with thickness varying from 0.67 mm to 2 mm were subjected to tensile tests, and then R (average vertical anisotropy coefficient) and n (strain hardening exponent) values were determined. At the same time, h (the height of the cup) and F (the reaction force) values of the materials were found by subjecting them to the Erichsen test. A sheet with 2 mm thickness was cold rolled at 6 different deformation ratios and the tests were applied to it. Results obtained from the tests were compared with each other, and an ANN application was performed for these results. It was proved that there is an ANN solution to obtain new values for the % deformation rate and thickness properties of deep drawing of low carbon steel sheets which were found by experiment. The obtained values satisfied our estimation. The quantity of material scrapped due to tear-off and similar causes in the forming process sometimes greatly exceeds the acceptable level. This situation happens mostly because an unsuitable material is used, if die design and other causes are ruled out. Even the properties required of a material that is suitable for the process vary depending on the sort of process; basically, resistance to thinning of the sheet and rapid work hardening must be ensured. The most common sheet metal forming processes are deep drawing and stretch forming. In several sheet metal forming processes, both forming types are used together. Deep drawing might be defined as the metal shaping process used for shaping flat sheets into cup-shaped articles. The wall thickness of the produced cup is nearly the same as the thickness of the blank sheet. However, when the thickness of the metal sheet is compared with the thickness of the product, the thickness decreases markedly in stretch forming [1,2]. A sheet with good drawability should have high resistance to thinning, so that the desired cup shape can be formed without a change in sheet thickness. The R value is a measure of resistance to thinning; it may be determined by a tensile test and is the plastic strain ratio of width to thickness in a sheet [1,3,4]. Sheet material which is used for stretch forming should be ductile and deform uniformly without necking. The strain hardening exponent (n) is a measure of good stretch forming characteristics. A greater value of n means the desired higher ductility and more uniform plastic deformation. The Erichsen test is also a measure of stretchability: a greater cup height indicates better stretchability. Another important technique for controlling failure is the sheet-metal forming limit diagram (FLD). Deformation and strain rates (n) are the effective parameters for formability of sheets [1,3,4]. Mechanical properties of rolled sheets depend on the direction. Therefore the average vertical anisotropy coefficient (R) is defined as in equation (1), where the anisotropy values are those measured at 0°, 45° and 90° to the rolling direction of the tested sheet. Deep drawability increases with increasing average anisotropy values. R values are greater than 1.0 in most bcc crystal-structured metals; for low carbon alloys, average anisotropy values are in the range of 1.35–1.96.
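Equation (1), referenced above, is presumably the conventional average over the three tensile directions; as a hedged reconstruction (the factor-of-two weighting of the 45° value is the standard convention, not something stated explicitly here):

\[ \bar{R} \;=\; \frac{R_{0} + 2R_{45} + R_{90}}{4}, \qquad R \;=\; \frac{\varepsilon_{w}}{\varepsilon_{t}}, \]

where ε_w and ε_t are the true plastic strains in the width and thickness directions of the tensile specimen.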
It can be said that if the average anisotropy value is higher than 1.0, the material has good drawability, and if smaller than 1.0, the material has poor drawability [6,7]. The Hollomon equation (2) is valid in the uniform plastic deformation region for low carbon steel. The strain hardening exponent (n) is a central parameter for stretch forming. A region of material showing a higher strain hardening exponent will have a high resistance to necking; therefore, uniform plastic deformation of the shaped sheet occurs. Necking is seen at a particular region in which the strain hardening exponent is small, and deformation localizes at that region. As a result of rapid thinning, cracks may occur quickly [8,9]. The height of the cup measured in Erichsen's experiment is also another criterion for stretchability. In this experiment the material was forced until it was torn under standardized conditions by means of a punch having a spherical tip. The most formable sheets are required to exceed a certain cup height depending on the thickness of the material. The necessary minimum cup height obtained in plates and sheets, depending on the material's quality, is given in the standards [10,11]. The results of Erichsen's experiment are interpreted as values which provide a comparison with the values of any material in its standard, rather than being used to standardize the materials. They are commonly used for low carbon steels. A higher cup height indicates a more ductile material [10,11,12]. Artificial neural networks (ANN) are computing systems whose structures are inspired by a simplified model of the human brain. A typical multilayer (3-layer) feed-forward ANN is given in Table 2. It consists of an input layer, an output layer and a hidden layer. Sets of nodes are arranged in these layers. Activation signals of nodes in one layer are transmitted to the next layer through links which either attenuate or amplify the signal. For a non-linear relation or a complex pattern between input and output values, ANN is a very powerful estimation method. In most ANN applications, the "back-propagation technique" is usually used for constructing non-linear transfer functions. During the training stage, the output part of a training pattern is the same as the input part, and both parts consist of correct measurements of the system. When the neural network is being trained, the connection weights are corrected to minimize the error between the true and estimated values of the measurement variables. In an ANN, the weights are the distributed associative memory units and show the current state of the knowledge. In the training examples, system operation measurements are represented by all the weights and are distributed among the measurements taken from system operation states. Many training patterns for each neural network are formed by selecting the true values of its corresponding measurement subset from the training examples. Each training pattern represents a training example for its corresponding neural network [13]. An ANN is trained to emulate a function by presenting it with a representative set of input/output functional patterns. The back-propagation training technique adjusts the weights in all connecting links and the thresholds in the nodes so that the difference between the actual output and the target output is minimized for all given training patterns.
For the p-th training pattern (p = 1, 2, …, P), this is done by minimizing the energy function with respect to all the weights and thresholds. y_i corresponds to the activation of the i-th neuron in the output layer; d_i denotes the desired target. The corresponding updates for the weights are calculated by using the iterative gradient descent technique. The above algorithm is commonly known as error back-propagation. The constant ε is the learning step while the constant α is the momentum gain (in this study, α = ε = 0.75). ΔW_ij indicates the weight change in the previous iteration. Weights are iteratively updated for all P training patterns. The training process may require many such sweeps. Sufficient learning is achieved when the total error function becomes sufficiently small. In this study, we use 6114 quality steel produced by ERDEMIR. Materials with 0.67, 1, 1.2, 1.5 and 2.0 mm thickness were purchased from the market. The chemical compositions of the materials are given in Table 1 (wt%): C 0.03, Mn 0.003, P 0.008, S 0.025, Cr 0.01, Ni 0.02, Cu 0.03, Mo 0.004, Sn 0.001, Al 0.004, V 0.001. The sheets were cold-rolled at different ratios. Tensile test specimens prepared according to ASTM E5 were tested (with a load of 10 kN). During tensile tests, the grip velocity was kept constant at 1 mm/min. All tests were performed at room temperature. (Figure 1: deformation rate versus R and R(ANN); Figure 3: deformation rate versus h and h(ANN).)
Strain hardening exponent (n) values were determined by using true-strain and true-stress values derived from the uniform plastic deformation region of all the tensile tests. All the test results fit the expected results for these kinds of sheet materials and some published research results. (Figure legend: R and R(ANN) versus thickness (mm).) Materials may be compared with each other for a particular sheet metal forming type. The strain hardening exponent is important for stretch forming. The R value is accepted as an important forming parameter for deep drawing. However, high values for both R and n at the same time mean good formability whatever the shaping type is. It should be as high as possible for good deep-drawable materials.
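As a minimal illustration of how the strain hardening exponent described above is obtained from the Hollomon relation (Eq. 2), assuming the standard form σ = K εⁿ: n is the slope of log(true stress) versus log(true strain) over the uniform plastic region. The numbers below are illustrative, not measurements from this study.

    import numpy as np

    # True stress (MPa) and true plastic strain from the uniform deformation region (illustrative values).
    true_strain = np.array([0.02, 0.05, 0.10, 0.15, 0.20])
    true_stress = np.array([310., 355., 400., 430., 455.])

    # Hollomon: sigma = K * eps**n  ->  log(sigma) = log(K) + n*log(eps); n is the slope.
    n, logK = np.polyfit(np.log(true_strain), np.log(true_stress), 1)
    K = np.exp(logK)
    print(f"n = {n:.3f}, K = {K:.0f} MPa")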
When the n values of the tested materials given in Figure 6 are compared, it may easily be seen that the thickest specimen gives the highest value. The n values of the specimens show the reverse trend relative to the R values, as shown in Figure 5. In spite of this, when the thickness and the deformation ratio are increased, R values decrease but n values increase. It can be said that the low carbon steel specimens have good deep drawability. Strain hardening exponent values are found in the range of 1.35–1.96 for steel materials. If these values are compared to the n values for the current study, it may be said that the investigated materials are at a good level from the earing problem point of view. Samples were placed into the apparatus with a compressing force of 10 kN. The hole diameter of the die used is 27 mm. The samples were lubricated with a graphite-containing grease in order to produce a thin film before they were placed. The experiments were performed at room temperature. The speed of the spherical tip was kept constant during the experiments. For all samples, the height of the cup and the reaction force were determined. The results obtained are shown in Figure 3. In this study, deep drawing of 6114 quality low carbon steel and the ANN applicability of the obtained values were investigated. For this purpose, specimens with 0.67–2.0 mm thickness were subjected to tensile tests and the Erichsen test. Also, the effect of deformation on the material was investigated. As a result:
1- When the percentage deformation ratio was increased, R values decreased and n values increased (Figs. 1, 2).
2- When the thickness was decreased, R values decreased but n values increased (Figs. 5, 6).
3- When the percentage deformation ratio was increased, h values decreased and F values increased (Figs. 3, 4).
4- When the thickness was decreased, h values decreased but F values increased (Figs. 7, 8).
5- In spite of the limited amount of training data, the generalization capability of the NN is fairly acceptable. The test error was less than 3.4 %.
The data set consists of 6 data for training. In order to make use of the data available, the following training method was utilized. Out of 6, 4 patterns were held out as a test file each time. The NN configuration during the above process was 1-2-4; that is, 1 node in the input layer, 2 neurons in the hidden layer and 4 neurons in the output layer. After training was completed, another data set was prepared for testing. This data set includes 4 patterns. The results regarding the test file are given in Table 2.
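As a concrete illustration of the training scheme described earlier (error back-propagation with momentum, α = ε = 0.75, and a 1-2-4 network: 1 input, 2 hidden neurons, 4 outputs), a minimal NumPy sketch is given below. The energy function is assumed to be the usual half sum-of-squares E_p = ½ Σ_i (d_i − y_i)², and the toy data are illustrative, not the study's measurements.

    import numpy as np

    rng = np.random.default_rng(0)
    eps, alpha = 0.75, 0.75        # learning step and momentum gain quoted in the text

    # Toy data: 6 training patterns, 1 input (e.g. % deformation), 4 outputs (e.g. R, n, h, F), scaled to (0, 1).
    X = rng.random((6, 1))
    D = rng.random((6, 4))

    # 1-2-4 network with sigmoid activations.
    W1, b1 = rng.normal(0, 0.5, (1, 2)), np.zeros(2)
    W2, b2 = rng.normal(0, 0.5, (2, 4)), np.zeros(4)
    dW1 = np.zeros_like(W1); dW2 = np.zeros_like(W2)
    db1 = np.zeros_like(b1); db2 = np.zeros_like(b2)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))

    for epoch in range(2000):
        for x, d in zip(X, D):                     # pattern-by-pattern updates
            h = sig(x @ W1 + b1)                   # hidden layer
            y = sig(h @ W2 + b2)                   # output layer
            # Back-propagate E_p = 0.5 * sum((d - y)**2)
            delta_out = (y - d) * y * (1 - y)
            delta_hid = (delta_out @ W2.T) * h * (1 - h)
            # Gradient-descent step with momentum: dW(t) = -eps * grad + alpha * dW(t-1)
            dW2 = -eps * np.outer(h, delta_out) + alpha * dW2
            db2 = -eps * delta_out + alpha * db2
            dW1 = -eps * np.outer(x, delta_hid) + alpha * dW1
            db1 = -eps * delta_hid + alpha * db1
            W2 += dW2; b2 += db2; W1 += dW1; b1 += db1

    total_error = 0.5 * np.sum((D - sig(sig(X @ W1 + b1) @ W2 + b2))**2)
    print(f"total error after training: {total_error:.4f}")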
4,254.6
1997-12-01T00:00:00.000
[ "Materials Science" ]
Attention-Block Deep Learning Based Features Fusion in Wearable Social Sensor for Mental Wellbeing Evaluations
With the progressive increase of stress, anxiety and depression in working and living environments, mental health assessment has become an important social interaction research topic. Generally, clinicians evaluate the psychology of participants through an effective psychological evaluation and questionnaires. However, these methods suffer from subjectivity and memory effects. In this paper, a new multi-sensing wearable device has been developed and applied in self-designed psychological tests. Speech under different emotions as well as behavior signals are captured and analyzed. The mental state of the participants is objectively assessed through a group of psychological questionnaires. In particular, we propose an attention-based block deep learning architecture within the device for multi-feature classification and fusion analysis. This enables the deep learning architecture to autonomously train to obtain the optimum fusion weights of different domain features. The proposed attention-based architecture has led to improved performance compared with the direct connection fusion method. Experimental studies have been carried out in order to verify the effectiveness and robustness of the proposed architecture. The obtained results have shown that the wearable multi-sensing devices equipped with the attention-based block deep learning architecture can effectively classify mental state with better performance.
I. INTRODUCTION
Mental health evaluation is an important topic for human safety analysis. A wearable device, acquiring data on related social speech and behavioral activity, provides a new approach to understand mental health better by establishing the interrelationships of Social Signal Processing (SSP) and Physical Mental Health (PMH). Traditional methods have been proposed to measure and evaluate social behavior. However, they are of limited effectiveness for continuous monitoring of mental health. The key point is to use a more comprehensive analysis by combining multi-sensor features available from the wearable device. These features assist in determining the potential relationship between human activities and mental health. Moeslund et al. [1] summarized technologies in automatic visual analysis of human behavior including automatic initialization, tracking, pose estimation, and movement recognition. However, these technologies have many restrictions in daily life and the equipment is expensive. Thus, sensor-based social signal processing has become an active research topic [2]-[4], which has attracted researchers to relating multi-sensor data to healthcare. In [5], Pentland proposed wearable intelligent devices developed to objectively sense and gain an understanding of human wellbeing. In order to capture social signals with high quality, reliability, and validity, the first priority is to create an appropriate collection environment or experiment. Long-term wellbeing monitoring [5] is able to achieve high accuracy for analyzing long-term daily behaviors of humans. Long-term monitoring requires the expenditure of a long duration and this results in significant challenges in recruiting and retaining a sufficient number of participants. In addition, there is a need to protect the privacy of the participants [6].
The use of wearable devices in short-term for targeted psychological tests is a possible solution to offer an efficient and low-cost method to analyze social signals for mental wellbeing monitoring. The application of machine learning and deep learning algorithms in wearable devices is crucial. In most wearable devices, they extract 6-axis behavior data in the classification of complex movements such as gestures or dances [7], [8]. In addition, by fusing with speech and behavioral features, it is possible to design wearable devices with machine learning algorithms for monitoring mental health wellbeing. Efficient speech segmentation and classification methods help to analyze social audio. Audio features mainly include Mel-frequency cepstral coefficients (MFCCs) and spectral features. Log-mel spectrograms are used as audio features, which can be processed by using image classification and segmentation model [9], [10]. Speech classification methods can be divided into supervised and unsupervised models. Unsupervised models include Hidden Markov Models (HMMs) [11], Gaussian Mixture Models (GMMs) [12], and Nonnegative Matrix Factorization (NMF) [13], [14] which have advantages of fast computations and do not require human annotation for the data. In recent years, deep learning model significantly improves the classification performance despite the long-duration training process. For instance, convolution neural networks (CNNs) can extract high level speech features and achieve high classification accuracy by using spectrogram [15], [16]. Another network with high performance accuracy in audio classification is the Long Short-Term Memory (LSTMs) which is a variant of recurrent neural network with good results in analyzing time series signals. Chernykh et al. [17] achieved emotion classification by using LSTM, and Han et.al [18] built a LSTM network through the DenseNet structure to further improve the accuracy. Deep learning model often requires large datasets while the annotation is a complicated task [19]. Transfer learning [20], [21] enables the deep model to perform better in a small datasets. The model can first learn abundant information in a large public dataset and then fine-tuning in the small target dataset. Transfer learning achieves remarkable results in natural language processing [22] and image classification [23]. For multi-sensor wearable devices, speech pattern is one of the most effective cues for analyzing mental health. This is usually accomplished by speech segmentation from the wearable users. However, single speech segmentation has severe limitations. It does not comprehensively consider the relationship between speech signals under various emotions nor can it relates to behavioral data such as natural limb movements under stress. Thus, multi-sensor data is considereds as a way forward in assisting speech segmentation to further enhance the classification accuracy. Appropriate feature fusion method or model can effectively fuse different categories of features and learns the intrinsic association of different features. Chen et al. [24] constructed a deep feature fusion model for CTR(Click-Through Rate) prediction whereby they fused image features with one hot features and obtained good performance. Yu et al. [25] proposed a model to fuse deep learning and traditional image features which yielded better results than single CNNs. 
Janani and Ramanan [26] presented a feature fusion framework to connect traditional Bag-of-Features and CNN features in the object classification task. Feature fusion method achieves good performance in processing speech data. Hasan et al. [27] proposed an audio-visual feature fusion via deep neural networks and implemented speech recognition with low error rate. In addition, the audio-visual feature fusion was used to recognize lip language [28]. Xu et al. [29] constructed the deep model which fused MFCCs and spectrograms, and resulted in high score in the DCASE-2017 audio scene classification challenge. Therefore, the effective feature fusion method can help to utilize features to improve the classification performance of the classifier. In this paper, we propose an effective features fusion method that fuses multiple sensor features of the wearable device for mental health evaluation. The contributions can be summarized as follows: (i) Designing wearable devices with multiple sensors and developing an efficient collection process of the voice and behavioral data for wearer. In addition, we design an objective psychological test for depression/anxiety and recruit participants among the university students. The collected data generates the dataset for training and testing the proposed wearable device. (ii) Proposing attention-based features fusion block to fuse behavior features and speech features under various emotions. It improves the performance compared with direct connection fusion method. Based on the block, we construct a mental state classifier. (iii) Presenting and analyzing classification results for depression/anxiety level of participants and exploring the relationship between multi-sensor data and mental health. The paper is organized as follows, the framework of wearable device, classification and model fusion are presented in Section II. Results and analysis are shown in Section III. Section IV is the conclusion of the paper. II. IPROPOSED SYSTEM DESCRIPTION A. DESCRIPTION WEARABLE SOCIAL SENSING PLATFORM AND ANALYSIS FRAMEWORK The block diagram of wearable social-sensing and data analysis is presented in Figure 1. It indicates the various signals collected by the wearable device and describes how the feature fusion model can be used in the system. The proposed system is illustrated in four parts: (i) audio signal processing, (ii) activity signal processing, (iii) feature fusion system. (iv) prediction and analysis for social sensing results. The wearable device collects audio and activity data. The audio data consists of 5 speech fragments of different emotions for every participant. The system analyzes 5 speech features VOLUME 8, 2020 from different emotions as well as activity features and find their relationship with social sensing results. Finally, it makes a fusion on these features and predicts the level of depression. In addition, the wearable device collects data from participants, performs feature extractions and stores data. The training and prediction process of machine learning model runs on the local server. The proposed wearable device and its relevant hardware platform is shown in Figure 2. The microprocessor of the wearable device is an ARM-Cortex4 microcontroller with DSP function for audio feature calculation and the model is STM32F405. 
Besides, the sensors system consists of 6D acceleration and angular sensor (MPU6050), temperature and humidity sensor (SI7021) to collect multi-modal data from the environment, physiological signals and behavioral activity. The sampling frequencies of MPU6050 and SI7021 are 100hz and 0.1Hz, respectively. The audio collecting system contains MEMS microphones as well as audio code unit (WM8978), the audio signals collected and amplified by an inter-integrated-circuit (I2C) bus with 8 KHz sampling frequency. The display module is an OLED screen. In order to record large amounts of data, the wearable device contains power management unit with a 2200mAh lithium battery and a micro SD card. B. DESCRIPTION EXPERIMENT PROTOCOL AND SOCIAL DATA ACQUISITION The dataset is collected from an autobiographical memory test which involves the participation of 60 students (30 males and 30 females; age range = 18 − 26) at the University of Electronic Science and Technology. All students signed informed consent before the experiment and we have signed a confidentiality agreement with the participants on their speech content. Prior to the experiment, the level of depressive symptoms and state as well as trait anxiety of all participants were assessed by using the Beck Depression Inventory (BDI-II) [30], and State-Trait Anxiety Inventory (SAI, TAI) [31]. The scores of the questionnaires were used to calibrate the data. For the autobiographical memory test, the participants were initially asked to think of six specific events for each emotion (happy/angry/sad/fearful/neutral) that had happened to them rather than being told by others. Meanwhile, the participants can write them down for each event to give a clue for the following recording session. During the experiment, the participants were shown the prompt words for 30s during which they verbalized the events coupled with emotion as specifically as possible. During data collection, the wearable device is worn at the preferred wrist to collect the voice as well as the behavioral data of the participants. The behavior in the experiment is not a specific movement, it behaves as a hand swing of the participant during the experiment and this movement may be unconscious. In addition, in order to prevent the participants from being disturbed, each subject was tested alone in a quiet room. To prevent potential bias, all experimental procedures were guided by computer programs. The collection of the wearable device is synchronized to the clock of the computer, which allows us to effectively timestamp the collection data. After the experiment, we extracted the speech and behavioral data of the wearable device. In this case, we collected 30 pieces of speech and behavioral data for each subject (six speaking fragments for each emotion). The dataset of the paper is composed of speech and behavioral data of the 60 participants and the ground truth is the scores of the questionnaires. The specification of the dataset is shown on Table 1. Temperature and humidity sensors are embedded in wearable devices, whereas they are not used because the experimental environment for collecting data is fixed and the time is short. Thus, the environmental sensing data changes little. There are two main limitations with the experiment. Each emotion is generated by recalling a specific event instead of generation in the natural state. Besides, all participants are university students with age range from 18 to 26 and our experiments do not cover wider age groups. 
Thus, at the initial evaluation, our experiments only focus on these age group. Therefore, the impact should be drawn that our system can only test on data of persons on these age groups. C. DESCRIPTION AUDIO FEATURES AND BEHAVIOR FEATURES In the experiment, the speech data is collected by microphones of the wearable device, it is grouped into data segments of 30s with 8kHz sampling frequency. The behavioral data is obtained through 6-axis sensor in which it consists of three axes acceleration data and three axes angular velocity data. This 6-axis data is used to extract behavioral features and they are calculated from the sliding window. The time interval between the sliding windows is 3 seconds. In addition, the features are divided into time domain features and frequency domain features as listed in Table 1. The input audio features of the network are Mel Frequency Cepstral Coefficients (MFCCs) [32] which mimic the human auditory system. Firstly, the audio signal is divided into several frames with 512-points and take the Short-Time Fourier Transform (STFT) of each frame. It then maps the power of the spectrum onto the Mel scale and take the discrete cosine transform of the Mel log power. The MFCCs are the amplitudes of the resulting spectrum. Feature extraction operations are conducted by using Librosa [33] which is an open-source library for audio analysis. For speech classification tasks, the raw speech is used as input. However, for the training tasks, the dimensions of the raw speech signal are huge (30s speech segment has 240,000 data points), which cannot be directly used as the input of the network because of the excessive calculation. Furthermore, it is more complicated to learn effective speech features in the network from the raw speech signal since it requires large amount of training data. MFCC is th e commonly used effective speech features and it can be used as input for deep learning model. the features are extracted by framing the raw signal, which reduce the dimension of the input signal. D. PROPOSED FRAMEWORK OF FEATURE FUSION SYSTEM AND CLASSIFICATION METHOD 1) LSTM BASED NETWORK AND FINE-TUNING METHOD The basic network of the proposed framework is the LSTM (Long Short-Term Memory). The effect of LSTM on time series learning is profound. A significant attribute of LSTM is the ability to map from the entire history of inputs to each output [34]. Besides, LSTM solves the vanishing gradient and context access problems commonly plague the RNN [35], [36]. The basic unit of the LSTM architecture consists of a memory block with different types of memory cells and three adaptive multiplications named input gate, forget gate as well as output gate. LSTM contains information outside the normal flow of the RNN in a gated cell, which helps to avoid the vanishing gradient problem. The training loss of LSTM can be back-propagated through time and layers. The audio data can be divided into time series segments and each segment has 470-time steps. The proposed model makes use of a multi-layered LSTM structure to extract high level emotional features on time steps. The network with 3 LSTM blocks is used to process audio features. However, the size of collected data is small and hence this presents difficulties in sufficiently extracting the bottom layer features. Extracting rich features and generalizing bottom layer features are vital to learning more efficient high-level abstract features and improving the network performance as well as robustness. 
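Returning to the audio features of Sect. II-C: a minimal sketch of the 40-MFCC extraction described there (8 kHz audio, 512-point frames, computed with Librosa) is given below. The hop length of 512 samples is an assumption chosen so that a 30 s segment yields roughly the 470 time steps quoted in the text; the file name is hypothetical.

    import librosa
    import numpy as np

    def extract_mfcc(wav_path, sr=8000, n_mfcc=40, n_fft=512, hop_length=512):
        """Load a ~30 s speech segment and return an (n_frames, n_mfcc) MFCC matrix.
        With sr=8000 and hop_length=512, a 30 s clip yields roughly 470 time steps."""
        y, sr = librosa.load(wav_path, sr=sr)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc, n_fft=n_fft, hop_length=hop_length)
        return mfcc.T.astype(np.float32)   # time-major, suitable as an LSTM input sequence

    # features = extract_mfcc("subject01_happy_01.wav")  # hypothetical file name
    # print(features.shape)                              # ~ (470, 40)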
Figure 3 shows the audio classification model and the fine-tuning method. The EMO-DB audio dataset [37] is chosen as the source dataset, as it consists of audio segments with 6 labeled emotions and our experimental audio data also contains different emotions. Thus, the source task refers to emotion classification, and training on this task helps the network to learn more of the basic emotional low-level features. In the implementation, the weights of the first two LSTM layers are initialized by using the source-task training weights, while the further layers need to be retrained. For the main task, the input of the network is the audio features of a single emotion, i.e., 40-length MFCCs with 470 time steps. In the first step, the model is trained on the audio features of each emotion separately to analyze and compare the classification results of the speech under each emotion.
2) PROPOSED ATTENTION-BASED FEATURE FUSION
Commonly used feature fusion methods include the weighting method as well as the direct connection method. For the weighting method, finding the weight value for different features is the crucial part. However, determining the optimal weights combination is a difficult task. In order to effectively integrate the features of different emotions, we designed an attention block to produce better combination weights, and this block can make the model focus on relevant emotions. The attention mechanism was used in the transformer model [38] and the word encoder model [39], where this mechanism enables the model to attend to the more related word vectors in the translation task, while reducing the attention to unrelated word vectors. Thus, the attention mechanism can be used in feature fusion tasks as it enables the model to focus on important features. For our task, the attention layer enables the proposed deep learning network to concentrate on emotional features with different weights, and to construct a model to analyze the relationship between emotions and mental health. Specifically, as shown in Figure 4(a), the vectors E_1, E_2, …, E_n represent different groups of emotional features. The weighted feature fusion layer F is computed as a concatenated weighted fusion of these group features, where the weights (α_1, α_2, …, α_n) are computed in the manner illustrated in Figure 4(b). The weights vector A is calculated from the stacked feature matrix [E_1^T; E_2^T; …; E_n^T] with a shape of d × n, where W_1 is a vector of parameters with size d × 1 and the size of W_2 is n × n. S1 and S2 are the middle layers, which are composed of n neurons. The softmax() function ensures all the computed weights sum up to 1. After obtaining the fusion weights, the fusion layer F is formed as the weighted concatenation of the group features. The loss function of the model is the categorical cross-entropy function. The optimizer used for training is Adam (Adaptive Moment Estimation), the initial learning rate is 0.00001 and the batch size is 16. In order to avoid overfitting, we set early stopping with a patience of 15 epochs. This means that training will automatically stop if the accuracy of the validation set does not improve in 15 epochs. The model for single audio features is trained on a GTX 1070 and the fusion model is trained on a GTX 1080Ti. The total training time of 5-fold cross-validation is approximately 18 hours.
III. EVALUATION AND ANALYSIS
A. EXPERIMENTAL RESULTS
In this experiment, the F1-score is used to measure the prediction results. It considers Precision and Recall at the same time and can be regarded as a weighted average of the Precision and Recall.
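The corresponding standard expressions, written out for completeness, are:

\[ \mathrm{Precision} = \frac{tp}{tp+fp}, \qquad \mathrm{Recall} = \frac{tp}{tp+fn}, \qquad F_{1} = 2\,\frac{\mathrm{Precision}\cdot \mathrm{Recall}}{\mathrm{Precision}+\mathrm{Recall}}. \]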
Thus, F 1 − Score helps objectively analyze the performance of the classifier. Precision and Recall are defined as: where tp is true positive, fp is false positive, fn is false negative. Precision is also referred to as positive predictive value (PPV), and it is the fraction of correctly predicted positive samples to the total predicted positive samples. Recall is also referred to as the true positive rate or sensitivity, it can be represented as the fraction of correctly predicted positive samples to the total positive samples in actual label. The F 1 − score is calculated by using Precision and Recall with same weight: All test results are obtained using a 5-fold cross-validation strategy which balances the training accuracy of each round and the total training time. The overall performance is computed by averaging the results from all 5 iterations. 20% of the data is used as a test set for each iteration. Our cross-validation method is similar to the subject cross validation [40]. The method of dividing the training set and validation set is shown in Figure 1. The training and testing sets are split by subject. The dataset contains speech and behavior data for 60 subjects and the raw data are stored in different folders with subjects' numbers. In each iteration of cross-validation, the training set and validation set are divided by this number. For example, the data with numbers 01 to 48 is the training set, and the data with 49 to 60 is the test set. Thus, the training set and validation set are independent of each other and the data of one subject may only be in the training or test set. Overlapping windows are only used in feature extraction process. We performed feature extraction after dividing training set and validation. Thus, the training features and validation features are extracted separately. Therefore, there is no overlapping data between training set and validation set. The results of the source task are shown in Figure 6. The pre-train task is an emotion classification for EMO-DB audio dataset and training on task can initialize the weights of the LSTM based network. CNN-based networks are usually used for speech classification. The figure compares classification performance of the LSTM-based model and a powerful CNN-based network: VGG-net. The input of VGG-net is the same as the input of LSTM since they are both 40-length MFCCs. In addition, the time step is zero padded to 40. Thus, the input size of VGG-net is 40 × 40 (since the time step of each speech segment in EMO-DB is less than 40). It is illustrated that the LSTM-based model has relatively better performance and the average F 1 − Score of LSTM-based model is 8.7% higher than VGG-based model. The main reason is that LSTM block extracts features of time dimension more efficiently for short-term sequences whereas CNN network with deeper layer is difficult to train for small datasets. Our project dataset description and collection experiment has been covered in the Section II. The classification index of the dataset in our experiment is based on three questionnaires of BDI assessing depression, SAI assessing state anxiety, and TAI assessing trait anxiety, which avoid the contingency of individual indicator results. Besides, we have divided the data into three labels for every index, the 27% lowest scores are the low class of depression, the 27% highest scores are the high class and another 46% middle scores are the middle class. 
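Stepping back to the attention-based fusion block of Sect. II-D: since only the shapes of the quantities are given there (stacked features of shape d × n, W1 of size d × 1, W2 of size n × n, two middle layers of n neurons, softmax-normalized weights), the sketch below shows one plausible realization; the tanh activation and the exact layer ordering are assumptions, not the authors' published formula.

    import numpy as np

    def softmax(v):
        e = np.exp(v - v.max())
        return e / e.sum()

    def attention_fusion(E, W1, W2):
        """E: list of n feature vectors, each of length d (e.g. per-emotion embeddings).
        Returns fusion weights alpha (summing to 1) and the weighted, concatenated fusion layer F."""
        Emat = np.stack(E, axis=1)           # shape d x n (stacked group features)
        s1 = np.tanh(W1.T @ Emat).ravel()    # first middle layer of n scores (W1: d x 1)
        s2 = W2 @ s1                         # second middle layer of n neurons (W2: n x n)
        alpha = softmax(s2)                  # fusion weights, sum to 1
        F = np.concatenate([a * e for a, e in zip(alpha, E)])  # weighted concatenation
        return alpha, F

    # Illustrative use: n = 5 emotion feature groups, d = 8 (behavior features fused analogously).
    rng = np.random.default_rng(1)
    E = [rng.random(8) for _ in range(5)]
    alpha, F = attention_fusion(E, rng.normal(size=(8, 1)), rng.normal(size=(5, 5)))
    print(alpha.round(3), F.shape)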
27% is a common criterion for dividing the ratio of high and low in psychological experiments. This method is named as high-low-27-percent group method [41]. Tables 2 to 4 show the F1-score results of BDI, SAI and TAI class, respectively. The input is the audio features with single emotions and these tables compare the classification performances for audio input with different emotions. Figure 7 shows the average F1-scores for these three indices. The result reveals that the accuracy of depression and anxiety classification under the emotions of happiness, fear and anger are higher than that under the other two emotions. This rather interesting result shows that depression and anxiety are more easily detected through speech when the emotions of the participants are anger, fear and happy. In this paper, the attention block has been designed to produce the appropriate combination weights for the emotion features as well as behavior features and to enable the model focus on more relevant features. In addition, we compared the fusion method based on attention block with the direct connection method which uses same weights on features. Figure 8 compares the average F1-scores of these two fusion methods. It is seen that attention block can improve the classification accuracy because better dynamic combination weights are obtained by training the attention model. The detailed results of the two fusion methods will be compared in Table 5. recognition. For prediction of different indices, the generated fusion weights are different whereas several common phenomena can be found. This is clearly visible in Figure 9 that the attention model has highlighted the importance of speech features under emotions of anger and fear in each index prediction. On the separate hand, the speech features under neutral and sadness contribute less to the classification of mental states. Besides, the weight changes little for different mental states. High-level behavior features and emotional audio features are fused in attention block 2 and their fusion weights in each index are shown in Figure 10. It is seen that the weight of the emotional audio features is much greater than the behavioral features. Therefore, emotional audio features contribute more to the classification of mental states. The evaluation results of attention-based fusion model and direct connection fusion model are tabulated in Table 5. The accuracy of the fusion models is significantly higher than that of the model under single emotion features. This illustrates the multiple emotional audio features are useful for analyzing mental wellbeing. Besides, the accuracy of attention-based model has obvious improvement compared with direct connection fusion model. Furthermore, the fusion of behavioral features slightly improved the classification performance. BLSTM (Bi-directional Long Short-Term Memory) is another state-of-the-art learning algorithm for time series classification. Thus, two algorithms are conducted for comparison in both the single and the overall fusion model. Table 1 shows the classification results of LSTM-based model and BLSTM-based model. The input is the speech features with single emotion and the results are the average F1 scores. It is seen that BLSTM-based model slightly improves the classification performance compared with LSTM-based model. Besides, Table 2 compares the results of the overall fusion model with LSTM blocks and BLSTM blocks. 
It is seen that LSTM blocks and BLSTM blocks have overall similar results for the fusion model. One BLSTM layer is composed of two LSTM layers, the parameter and computational complexity of BLSTM is much greater. Therefore, LSTM is chosen in the overall fusion model. B. ANALYSIS AND DISCUSSION Through the prediction results of these three indices, the relationship between mental health and multiple-sensor features can be analyzed objectively. Besides, it is shown that the fusion of multiple emotional features and behavioral features contributes to improving the classification accuracy. We evaluated mental state through wearable devices and deep learning models. This is different from traditional method. The model we used is based on a supervised algorithm which requires automated features extractions and annotation of the data through the labels of the training set. Although the training set is implicitly derived from the questionnaires, these questionnaires are not used in a traditional sense to code up a system. In addition, the prediction process of the proposed system does not depend on the questionnaires. The LSTM-based network of the fusion model is initialized by using the method. The fine-tuning method is derived from the parameters/model-based transfer learning. The source dataset is the EMO-DB and the task is emotion classification. The target dataset is from our experiments and the target task is the classification of mental states. The two datasets are similar, they are both human speech fragments and the input features are MFCCs. The difference between source and target is that the speech language of the two datasets are different and the classification task is different. According to the theory of transfer learning, the lower layers of the neural network can extract general features, while the specific features are extracted in the higher layer. Therefore, the lower layers are transferable even if there are several differences between source and target. In details, the first two LSTM layers are initialized by the model trained on the EBO-DB, other layers have random initialization and all layers are trainable. In this case, general speech features can be shared and the model converges faster. IV. CONCLUSION A wearable device with multiple sensors has been proposed and designed to collect social signals and continuously monitor the mental health status of the wearer. In addition, psychological experiments have been designed to analyze the degree of depression and anxiety. The speech as well as behavioral data have been collected by the wearable devices. By analyzing data and building models from more than 60 participants, the relationship between audio and behavioral features and degree of depression has been established. In particular, three indices of depression and anxiety have validated the pro-posed detection approach to ensure the objectivity of the results. Attention-based features fusion model has successfully demonstrated to achieve high level of performance accuracy in classifying depression and anxiety levels.
6,657
2020-05-12T00:00:00.000
[ "Computer Science", "Psychology" ]
Hydrofluoric acid etching versus self-etching glass ceramic primer: consequences on the interface with resin cements
Introduction
The long-term survival of aesthetic restorations remains a challenge dependent on successful and reliable bonding of ceramics to the dental substrate. In order to improve resin cement bonding to ceramics, various surface treatments favoring micromechanical retention and chemical bonding have been recommended [1,2]. According to Cekic-Nagas, the composition of the ceramic determines the best surface treatment to be applied [3]. A vast number of previous works have investigated the effect of etching protocols on glass-matrix ceramics. Acid etching of the bonding surface of glass ceramic restorations is considered the most effective treatment method, selectively removing the glassy matrix of the ceramic. Hydrofluoric acid (HF) is the most frequently used acid [6][7][8], but acidulated phosphate fluoride (APF) and ammonium hydrogen difluoride (ADF) are also used. Ammonium hydrogen difluoride, in reaction with the silica matrix, creates silicon tetrafluoride and ammonium fluoride. This acid may be used as a glass etchant or as an intermediate for the production of hydrofluoric acid [5,9]. Hydrofluoric acid etching followed by silanization generates higher bond strengths than either treatment alone. Silanization is understood to create hydrogen bonding and covalent bonding between the resin and the ceramic and increased wettability of the ceramic surface, while etching provides the mechanical interlocking [4,9]. The chemical adhesion produced by silane promoted higher mean bond strength values than the micromechanical retention produced by any etchant [10]. Recently, a simplified acid ceramic primer has been introduced, claiming to perform a mild acid etching (very smooth etching pattern) and silanization using a single solution [11]. This one-bottle system, Monobond Etch&Prime (MEP), combining ammonium polyfluoride and a silane based on trimethoxypropyl methacrylate, leaves a chemically bonded thin layer. It was introduced to simplify the bonding procedure by etching and priming glass ceramics in a one-step process. Ammonium polyfluoride has milder acidity in comparison to hydrofluoric acid, which is expected to result in a weaker etching pattern. Several authors have published studies comparing the efficiency of the protocol using this new system with two-step surface treatments using hydrofluoric acid at set concentrations and application times, followed by silane [12][13][14][15][16][17]. Their studies gave comparative results on shear bond strengths, field-emission scanning electron microscope (FESEM) analyses, contact angle and micromorphological analyses, and tensile bond strength. The purpose of this in vitro study was to compare the effects of traditional HF + silane (two-step process) versus a self-etching glass ceramic primer (one-step process) on the wettability of 2 types of CAD/CAM glass ceramics and the chemical bonding between the ceramics and a composite cement.
The tested ceramics were a leucite-reinforced feldspathic glass ceramic (IPS Empress CAD Multi) and a lithium disilicate-based glass ceramic (IPS e.max CAD).
Methods and materials The ability for bonding to the ceramics with the different surface treatments proposed was investigated by comparing the resulting surface energies [18]. The interfaces between treated ceramic and resin cement were examined by Fourier Transform Infrared spectroscopy (FTIR).
Surface energy of two CAD-CAM glass ceramics after different surface treatments The surface treatments selected for the glass ceramics were acids and silanes [19][20][21][22]. Forty specimens (18 mm diameter x 4 mm height) were fabricated, sandblasted with 50 µm alumina under a reduced pressure of 1 bar, twenty from each glass ceramic (Table 1). Each group of 20 was thereafter randomly divided into two groups (n = 10) according to the surface treatments. The effect of the different surface treatments applied to the glass ceramics was compared by analysis of water contact angle measurements as well as spreading coefficients. Contact angles were determined with a Digidrop device (GBX) using a graduated micro syringe to place 10 μl drops on the surfaces to be analyzed. At equilibrium, the right and left contact angles and the average were calculated by the GBX software; each contact angle was measured five times for the liquid at room temperature (22°C). The software also calculates the spreading coefficient S for each measurement. Water was used for contact angle measurements on the glass ceramic specimens; its surface tension is 72.8 mJ/m². The measurement was made 60 seconds after contact of ceramic and water. Contact angles (θ in degrees) and spreading coefficients (S in mJ/m²) for ceramics with different surface treatments were compared by ANOVA. A second analysis of variance was made to compare the two glass ceramics with identical treatments. The Duncan post hoc test (p < 0.05) was used to find any statistically significant differences between groups. The pH values for HF (IPS Ceramic Etching Gel), MP and MEP were evaluated with a model 210 pH meter from Hanna Instruments (Woonsocket, Rhode Island, USA) and repeated three times. The pH value is a significant element of the aggressiveness of the acid.
Infrared spectroscopy of materials and interfaces The FTIR spectra of the materials used in this study (resin cement, MEP, MP) were recorded with a Mattson Genesis II spectrometer (Thermoelectron France) from 400 to 4000 cm⁻¹. Eight additional specimens (18 mm diameter x 4 mm height) were fabricated, sandblasted with 50 µm alumina under a reduced pressure of 1 bar, four from each glass ceramic (Groups 1b and 2b for IPS Empress CAD Multi, Groups 3b and 4b for IPS e.max CAD). The ceramic discs were cut longitudinally in two (Figure 1) using a low speed saw (IsoMet, Buehler Ltd, Evanston, USA) equipped with a diamond disc (102 mm x 0.3 mm, series 15 LC Diamond, Buehler Ltd, Evanston, USA). The specimens were treated as described previously: Groups 1b and 3b with MEP, Groups 2b and 4b with 5% HF and MP. After each surface treatment, a resin cement (Multilink Automix, Ivoclar Vivadent) was applied to the surface of the half-disks and then photopolymerized for 20 seconds with a lamp of intensity 1000 mW/cm² (SmartLite, Dentsply). The samples were examined on their cut face by spectrophotometric analysis (treated ceramic + resin cement); the flat slices of the half-discs were placed directly on the face of the diamond. FTIR, by peak recognition and spectral comparison, allowed the bonding patterns formed between the treated ceramic and the resin cement to be revealed. A differential analysis was carried out to compare the spectra obtained with the two treatments of each ceramic and to highlight any differences.
Results Surface energy of two CAD-CAM glass ceramics with different surface treatments Average values of the measured contact angles and spreading coefficients obtained between the ceramic surface and the test liquid (water) at 60 seconds are given in Table 2. For the leucite-reinforced feldspathic glass ceramic (IPS Empress CAD Multi), the angles obtained for Group 2 (treated with HF + MP) were lower than those obtained for Group 1 (treated with MEP), and it is known that a low contact angle means a better interaction between two phases and a more complete wetting of the liquid. We noted that the spreading coefficients obtained for Group 2 were closer to zero than those obtained for Group 1. This means that the treatment with HF followed by the application of the silane provides a better coefficient of spreading. For the lithium disilicate-reinforced glass ceramic (IPS e.max CAD), the angles obtained for Group 4 (treated with HF + MP) were lower than those obtained for Group 3 (treated with MEP). We noted that the spreading coefficients obtained for Group 4 were closer to zero than those obtained for Group 3. The values of the contact angles were more favorable with HF treatment followed by MP for the two glass ceramics, and the same applied to the values of the spreading coefficients (negative values). These results were even more pronounced for the IPS Empress CAD Multi. Regarding the surface treatments on IPS Empress CAD Multi, the results obtained with Groups 1 and 2 were statistically different for contact angles (F = 543.235, p < 0.05) as well as for the spreading coefficients (F = 189.690, p < 0.05). The results obtained on IPS e.max CAD with Groups 3 and 4 were statistically different for the contact angles (F = 456.96, p < 0.05) as well as for the spreading coefficients (F = 236.273, p < 0.05). If we compare the results of the two ceramics after the self-etching glass ceramic primer, it can be noted that Groups 1 (IPS Empress CAD Multi) and 3 (IPS e.max CAD) had a significant difference for the contact angles (F = 58.630, p < 0.05) as well as for the spreading coefficients (F = 24.467, p < 0.05). The comparison after HF treatment followed by silanization, Groups 2 (IPS Empress CAD Multi) and 4 (IPS e.max CAD), showed a significant difference, whether for contact angles (F = 88.84, p < 0.05) or for spreading coefficients (F = 33.671, p < 0.05). The different surface treatments performed on both ceramics gave significantly different results. The Duncan post hoc test showed that there were four distinct groups (a, b, c, d) for contact angles and spreading coefficients. The observed pH values were 2.0 for IPS Ceramic Etching Gel, 3.8 for MEP and 3.2 for MP. They showed that the most concentrated acid was found in IPS Ceramic Etching Gel.
Interaction between treated glass ceramics and resin cement Analysis of the FTIR spectrum of MEP showed mainly a large and strong band (3400-3330 cm⁻¹) corresponding to silanol groups, as well as sharp but weak bands (2950-2872 cm⁻¹) of the C-H bonds of Si-O-alkyl [23][24][25] (Figure 2a).
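As a small post-processing illustration of the contact-angle protocol described above: the text does not state how the GBX software computes the spreading coefficient, so the sketch below uses the common Young-Dupré form S = γ_LV(cos θ − 1) with γ_LV = 72.8 mJ/m² for water; the drop readings are invented placeholders.

```python
# Sketch of the contact-angle post-processing. The exact formula used by the
# GBX software is not given in the text; the spreading coefficient is taken
# here as S = gamma_LV * (cos(theta) - 1), an assumption.
import math
import statistics

GAMMA_WATER = 72.8  # mJ/m^2, surface tension of water (value stated in the text)

def mean_contact_angle(left_deg, right_deg):
    """Average of the right and left contact angles of one drop (degrees)."""
    return 0.5 * (left_deg + right_deg)

def spreading_coefficient(theta_deg, gamma_lv=GAMMA_WATER):
    """Spreading coefficient in mJ/m^2; values closer to zero mean better spreading."""
    return gamma_lv * (math.cos(math.radians(theta_deg)) - 1.0)

# Five repeated drops per specimen, as in the protocol (illustrative readings).
drops = [(34.1, 35.0), (33.7, 34.2), (35.5, 34.8), (34.0, 33.6), (34.9, 35.3)]
angles = [mean_contact_angle(l, r) for l, r in drops]
print("theta =", round(statistics.mean(angles), 1), "deg,",
      "S =", round(statistics.mean(spreading_coefficient(a) for a in angles), 1), "mJ/m^2")
```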
The resin cement ( Figure 3) had bands related to the acrylate group which will serve as "markers" in the analysis of the interfaces. We noted the presence of bands corresponding to symmetrical and asymmetric stretching of alkyl group located at 2872 cm -1 -2959 cm -1 ; characteristic bands of acrylate group (C = O) located around 1715 cm -1 ; bands located at 1296 cm -1 corresponding to C-H of the Si-R [25,26]. For Group 1b (IPS Empress CAD Multi treated with MEP), 1 step process, (Figure 4a), the strong and broad band around 3400-3300 cm -1 noted for MEP was no longer present, the O-H and/or NH signals were also absent. Broadening of C-H bands occurred and various multiple vibrations were observed in the « fingerprint » region between 1200 and 500 cm -1 . For Group 2b (IPS Empress CAD Multi treated with HF and MP), 2 steps process, (Figure 4b), dehydration was observed and low intensity bands appeared near 2900-2800 cm -1 (ethyl groups of resin cement and of MP). A neoformed matrix with large and broad absorption formed around 1000 cm -1 . For Group 3b (IPS e.max CAD treated with MEP), 1 step process, (Figure 4c), the large band in the zone 3400-3300 cm -1 noted for MEP was absent and three bands typical of C-H vibrations at 2950, 2923 et 2854 cm -1 were clearly seen. For Group 4b (IPS e.max CAD treated with HF and MP), 2 steps process, (Figure 4d), the characteristic bands of the methacrylate monomers and HEMA of resin cement, C=O located at 1721 and 1636 cm -1 , were sharp. Discussion The interaction between a resin cement and a ceramic is determined by the capacity the cement has to wet the ceramic surface, as a function of the surface chemistry and roughness of the ceramic, as well as by the viscosity and composition of the resin cement [27]. In this work we investigated the wettability obtained with two surface treatment protocols on two ceramics. The surface treatment provided the combined effects from MEP vs HF + MP. Hydrofluoric acid (HF) treatment is commonly used on silica-based ceramics to react with, and remove, the glassy matrix that contains silica. This leaves the crystalline phase exposed, generating surface roughness. This process also results in enhanced wettability and surface energy on the ceramic surface [28,29]. Hydrofluoric acid etching of feldspathic and lithium disilicate ceramics, followed by priming with a silane coupling agent has been considered as the gold standard for the treatment of the silica-based ceramics [30]. Etching with hydrofluoric acid leads to preferential dissolution of one of the glassy phases of porcelain to create an appropriate microstructure for bonding. Meanwhile, the application of a silane coupling agent to the pretreated ceramic surface provides a chemical bond that is a major factor in creating a sufficient resin bond to silica-based ceramics. This treatment protocol offers the opportunity of improved micromechanical retention and/or increased physical interactions and wettability with the luting resin material, which is generally hydrophobic in nature [19]. HF increases chemical binding on the ceramic by inducing polarity which promotes surface hydrophilicity [31]. Murillo-Gomez, et al. [16] in an experiment on the effect of acid etching of ceramics determined that MEP produced smoother etching patterns and lower roughness values (comparable to untreated specimens) than any other protocol employing HF. 
This may be attributed to the fact that this primer, instead of common HF, uses tetrabutylammonium dihydrogen trifluoride as the etching agent [32]. This ammonium polyfluoride salt is an acidic compound used in industry to etch silica-based surfaces and it has a softer etching potential than HF [13]. Prado and Lopes each concluded in their work that HF etching followed by a silane solution gave higher bond strengths than MEP, the self-etching ceramic primer [14,15]. Such results are consistent with our observations on the surface structures, with lower contact angles obtained for Groups 2 and 4 (treated with HF followed by silane) compared to the results for Groups 1 and 3 (treated with MEP). Smaller contact angles reflect a stronger interaction between two phases and a more complete wettability with liquids. We also find that the spreading coefficients for Groups 2 and 4 are closer to zero than the values obtained for Groups 1 and 3. This signifies that treatment with HF followed by the silane gives the optimum spreading coefficient, in agreement with the results of Stawarczyk and Sattabanasuk [27,33]. Together with ceramic wettability, microstructure and chemical composition, silane treatment influences the quality of the bonds to resin cements. Peumans showed that the different adhesion values found for the bonding of glass ceramics to adhesive cements are mostly due to the modifications of the structure by acid priming [34]. Infrared spectra of the interfaces between the two treated ceramics and the resin cement show bonding patterns. These interactions exist thanks to the silane intermediation that leads to stronger and more stable bonds between the bonding components [34][35][36]. Comparing the spectra in Figures 4a and 4b, a broad matrix-like strong absorption formed in 4b. These intra- and extramolecular absorptions reveal multiple bonding, whereas in Figure 4a more individual and better-defined peaks show independent molecules. For IPS e.max CAD, comparing Figures 4c and 4d shows that, whatever the silanization, there are bonds between the resin cement and the silanized ceramic, with independent and well-defined C-H vibrations. With both silanes, a strong siloxane bond remains. The previously hydrophilic surface becomes hydrophobic by formation of this surface complex [2]. For HF/MP, the bands related to the acrylate groups (C=O) and (C=C) of the cement are more intense than for the MEP treatment. For the two ceramics, after joining, the hydrolysed methoxy groups do not form hydrogen bonds with water, but insoluble associations with other silane components [37]. Such strong, hydrophobic interactions play an important role in the long-term durability of bonding in cement-ceramic associations [36]. As the treated materials are mainly composed of a glassy phase, using strong etching protocols may damage their internal microstructure, possibly affecting their mechanical performance, even more so in the case of thin restorations such as veneers. Future investigations must confirm the extent of these findings on the materials' mechanical properties in order to preserve their structural integrity [16]. Conclusion Within the limitations of this in vitro study, the following conclusions can be drawn: 1 the values of the contact angles were more favorable with HF treatment followed by silanization versus self-etching glass ceramic primer treatment for the two glass ceramics. These results were even more pronounced for the leucite-reinforced feldspathic glass ceramic.
2 infrared spectra of the interfaces between the two treated ceramics and the resin cement showed bonding types. The treatment with hydrofluoric acid followed by silanization increased multi-bonding more than treatment with a self-etching glass ceramic primer, particularly for the leucite-reinforced feldspathic glass ceramic.
3,852.8
2019-01-01T00:00:00.000
[ "Materials Science", "Medicine" ]
Prediction of the cementing potential of activated pond ash reinforced with glass powder for soft soil strengthening, by an artificial neural network model . The effect of Pond Ash (PA) activated with sodium chloride (NaCl) solution and reinforced with glass powder on the mechanical properties of soft clay soil, which comprise of the California bearing ratio (CBR) and the unconfined compressive strength (UCS) has been studied in this research work. The PA requires pozzolanic improvements to meet the ASTM C618 requirements for pozzolanas. In the present research paper, further emphasis has been on the machine learning prediction of CBR and UCS of the soft clay soil stabilized with a composite of PA. Generally, the studied soft clay soil properties, which were the microstructure, microspecter/micrograph, oxide composition, Atterberg limits, compaction behavior, free swell index (FSI), CBR and UCS significantly improved due to the enhanced cementitious ability of the activated and reinforced PA. The multiple data collected from this general stabilization result were used to predict the soil’s CBR and UCS by the artificial neural network (ANN) technique. The results showed high performance of the model in terms of the sum of squares error (SSE) of 1.5% and 2.0% and the coefficient of determination (R 2 ) of 0.9979 and 0.9973 for the CBR and UCS models, respectively. The models also outclassed the performances of other models from the literature. Introduction Soil is one of the most widely used natural resources which is the uppermost layer of the earth and is formed by the continuous breakdown of rocks in the presence of temperature, water, pressure, frost, etc.Although all soils have the same mineral particles, organic matter, water and air, their properties might vary from one location to another because of the parent material i.e., the rock from which the soil has come, temperature, precipitation, and human influence [1].The classification of soil can be viewed from the perspective of soil as a material and resource which can be in geology, agriculture, and engineering [1].In the civil engineering field, cohesive soil has been the major challenge in the construction of foundation design, underground and earth retaining structures, pavement design, excavation, embankment and dams because of its poor bearing capacity, high compressibility, and low permeability [2].Cohesive soil which is also known as black cotton soil is a soft soil that majorly contains iron, lime, magnesium, carbonate, phosphorus, and a few amounts of organic matter [3].Due to the presence of a problematic mineral called montmorillonite found in it, it can be hazardous to construct structural buildings and other civil engineering structures on it, which calls for the need to boost the preferable properties of the soil such as the porosity, loading carrying capacity and hardness [4].In order to improve its engineering characteristics to be suitable for construction several methods like drainage, surface compaction, vibration, grouting, consolidation, injection, soil reinforcement, thermal treatment, electro-osmosis, Geo-synthesis, chemical and mechanical stabilization are used.Some of these methods are costly and tedious to carry out but stabilization is low-cost construction and pollution controlling [2].The fast development of industrialization caused the production of waste materials in large quantities which are hazardous to health and the environment [5].A thermal power plant generates electric energy for industrial 
usage, which alongside produces some waste by-products that contain 90% fly ash, and it affects the environment by polluting soil, water, and air [6]. Fly ash is made up of tiny particles that rise with the flue gas, while that which does not rise is termed bottom ash. Pond ash is the term used to describe the leftover fly ash and bottom ash that are held in ash ponds. Pond ash consists of silica, alumina, and iron. It is known to be a weak pozzolanic material because of the presence of silica in it [6]. Pond ash has been used in a variety of geotechnical applications including developing lands, highway embankments, road construction, and low-lying areas for the development of commercial and residential sites, because it improves the soil strength and reduces the swell and shrink characteristics [7]. Glass powder is a by-product material from the process of glass treatment; it has been most often used as a soil stabilizer because it can create an impressive change in soil properties. Glass powder has been used in landfilling, road construction, highway pavement, and for drainage purposes. When used, it was discovered to be a good building material that reduces the load of landfilling. In drainage, it reduces the time of water accumulation behind the wall because of its high permeability. It has the physical properties of high permeability, small-strain stiffness, and high crushing resistance, which make it a good pozzolanic material [8]. This research work studies the effect of pond ash activated with sodium chloride (NaCl) solution and reinforced with glass powder on the mechanical properties of soft clay soil, which comprise the unconfined compressive strength (UCS) and the California bearing ratio (CBR). Lakshmisha and Manoj attempted to reduce the moisture movement capacity of black cotton soil by the addition of pond ash. Black cotton soil stabilized with pond ash shows an increased maximum dry density and a reduced specific gravity. When 30% pond ash (by weight of soil) was added to black cotton soil, it improved the strength carrying capacity to a maximum extent, and a long-term curing effect was recorded. Also, when the optimum pond ash content was added, it improved the California bearing ratio (CBR) value by 128% [9]. Bharat and Nirpinder investigated the results of California bearing ratio (CBR) tests on black cotton soil with varying percentages of pond ash. The soaked CBR value of the virgin soil increased by 624% on the addition of 20% pond ash for 7 days, 14 days, and 28 days of curing of the samples [2]. Kolay and Sii studied the stabilization potential of class F pond ash on tropical peat soil. According to the UCS tests, the amount of pond ash (i.e., 5%, 10%, 20%) applied to the original peat sample improved the compressive strength of the stabilized peat. With the addition of 20% pond ash to the original peat soil weight, the compressive strength of the peat-pond ash sample nearly doubled from that of the original peat soil. In comparison to the original tropical peat soil's compressive strength of 77.6 kPa, the UCS value for stabilized peat soil with the addition of 20% pond ash by weight after 28 days provided the greatest average compressive strength of 153.9 kPa [10].
Some of the other studies on soft soil were conducted by using glass powder.Syed and Sudipta examined the impact of using waste glass powder to stabilize the soil.Through the addition of glass powder, both the soaked and unsoaked CBR increased reaching maximum values of 22.5% and 10.4%, respectively.Once 8% of glass powder of dry-weight of soil was added, the UCS increased to 133.5 kN/m 2 , while decreased to 119.7 kN/m 2 when 10% of Glass powder was added [11]. Additionally, several scholars look into soil stabilization by applying machine learning for predictive modeling.Eyo and Samuel investigated the use of machine learning techniques called gradient boosting (GB) to model the unconfined compressive strength (UCS) of soils stabilized by cementitious addition.By an overall accuracy of 0.920, weighted scores for precision and recall rate of 0.938 and 0.920, respectively, and an overall lift of 5 in a multinomial Classification, GB Algorithms demonstrate a very high capacity to distinguish between positive and negative UCS categories (firm, very stiff, hard) [12]. Most of the studies in the literature affirm the use of pond ash with glass powder in soft soil strengthening as an environmentally responsible project as this totally replaced cement, which has been established as having high carbon emission.It has been realized that using the idea of artificial intelligence (AI) as applied in this work could save money, time, and resources during the design stages of soil improvement, choice of curing duration, laborious trial batching of binder type, quantities, optimal combinations, extensive laboratory analysis and the determination of other influencing factors were performed. Treated Soil Database and Statistical Analysis The soft soil's basic properties were tested according to BS 1377 [13] to characterize it and the treatment was carried out based on BS 1924-1 [14].A database of 25 soil samples tested to determine the physical and mechanical properties of pond ash (PA)-treated soft soil reinforced with glass powder (GP) was tabulated and utilized in this research project.Generally, the soil was found to be an A-7(5) group of soil based on ASHTO classification with a plasticity index of 17.07%, which translates to a highly expansive consistency.It also possessed a free swell index of 110, optimum moisture content of 21.15% with an associated maximum dry density of 1.34g/cm 3 .The SEM, XRF, and XRD were carried out on the soil and the GP-NACL-PA blend to study the microstructure and mineral composition of the test materials [15][16][17][18][19][20].The following were the mixture materials and the treated soil parameters; Glass Powder content (GP) %, Sodium Chloride content (NACL) %, Pond Ash content (PA) %, Liquid limit (LL) %, Plastic limit (PL) %, Free Swell Index (FSI) %, Optimum Moisture Content (OMC) %, Maximum Dry Density (MDD) g/cm 3 , California Bearing Ratio (CBR) %, and Unconfined Compressive Strength (UCS) MPa.The measured records were divided into a training set (20 records) and a validation set (5 records).In Tables 1 and 2, their statistical characteristics and the Pearson correlation matrix, are summarized [21][22][23][24][25][26]. Figure 1 presents the distribution histograms for both inputs and outputs. 
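A minimal sketch of the database handling described above (20/5 train-validation split, descriptive statistics, and Pearson correlation matrix); the CSV file name is an assumption, and the column names follow the parameter list given in the text.

```python
# Sketch (not the authors' code) of the database handling: a 20/5
# train-validation split and a Pearson correlation matrix for the
# treated-soil records. The CSV file is an assumed placeholder.
import pandas as pd

COLUMNS = ["GP", "NACL", "PA", "LL", "PL", "FSI", "OMC", "MDD", "CBR", "UCS"]

df = pd.read_csv("treated_soil_records.csv", usecols=COLUMNS)   # 25 records
train, valid = df.iloc[:20], df.iloc[20:]                        # 20 / 5 split

print(train.describe())            # statistical characteristics (cf. Table 1)
print(df.corr(method="pearson"))   # Pearson correlation matrix (cf. Table 2)
```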
Microstructure of the Test Materials Figure 2 shows the surface configuration of the soil and the mixture of glass powder, sodium chloride, and pond ash.The GP-NaCl-PA blend structure shows a crystal structure of strong pozzolanic configuration according to Bauluz Lazaro [15].The tetrahedral layer consists of a dominant structure of silica and quartzite minerals, which gives it the strengthened pozzolanic ability when utilized in soil stabilization.Fig. 3, which shows the micrograph of the mineral structure of the mixed blend of GP-NaCl-PA and the soft soil also confirms the binding ability of the GP-NaCl-PA blend with rich composition of quartz, calcite and other cementing strength-based minerals.This agrees with the surface configuration of the composite blend.Generally, the microstructural and mineralogical analyses conform with the results of UCS and CBR improvement of the treated soft soil, which shows a strength increase with the addition of the activated PA reinforced with GP. Prediction of the CBR and the UCS of the PA-GP-Treated Soft Soil A backpropagation ANN with one hidden layer of the 10:3:2 model network and Hyper Tan activation function [21][22][23][24][25][26] was used to predict both the CBR and UCS values of the PA-GP-Treated Soft Soil.The used network layout and its connection weights are illustrated in Figure 4 and Table 3.The average errors of these models were 1.5% and 2.0% and the corresponding R 2 values were 0.998 and 0.997 for the CBR and UCS, respectively.It also showed a near-perfect fit between the predicted and measured values.The relation between calculated and predicted values is shown in Figure 5. The absolute summation of the link weights at each node in the input layer presents the relative importance of each considered parameter as shown in Figure 6.It can further be observed from the relative importance of the studied parameters of the stabilization exercise and following a model that GP, PA and NACL showed a strong influence on the predicted model, which agrees with their role in the stabilization protocol as a strong hybrid binder in their blended constitution and agreeable to composite ash behaviors [16][17][18][19][20]. Overall, the ecofriendly materials (GP and PA), which improved the studied strength characteristics of the soft soil have provided a potential for their utilization in the stabilization of soft soil to meet the sustainability requirements for environmentally responsible soft soil reengineering.Also, the models are based on this PA and GP soil stabilization potential and their carbon neutrality pathway to save the environment from cement utilization emissions. 
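A minimal sketch of a 10:3:2 backpropagation network with hyperbolic-tangent activation, in the spirit of the model described above; the original network was built with different software, and the scaling, solver, and other hyperparameters below are assumptions.

```python
# Minimal sketch of a 10:3:2 network with tanh activation approximating the
# model described above. Scaling choices, solver and iteration counts are
# assumptions, not the settings used in the study.
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import r2_score

def fit_cbr_ucs_model(X_train, y_train):
    """X_train: (n, 10) inputs; y_train: (n, 2) targets [CBR, UCS]."""
    x_scaler, y_scaler = MinMaxScaler((-1, 1)), MinMaxScaler((-1, 1))
    Xs = x_scaler.fit_transform(X_train)
    ys = y_scaler.fit_transform(y_train)
    net = MLPRegressor(hidden_layer_sizes=(3,), activation="tanh",
                       solver="lbfgs", max_iter=5000, random_state=0)
    net.fit(Xs, ys)
    return net, x_scaler, y_scaler

def evaluate(net, x_scaler, y_scaler, X, y):
    """Return [R2_CBR, R2_UCS] on a held-out set."""
    pred = y_scaler.inverse_transform(net.predict(x_scaler.transform(X)))
    return [r2_score(y[:, k], pred[:, k]) for k in range(y.shape[1])]
```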
Conclusions This research presents the strength behavior of a GP-NACL-PA treated soil and an artificial neural network model to predict the values of both the California Bearing Ratio (CBR) and the Unconfined Compressive Strength (UCS) for the treated soil, using the measured Glass Powder content (GP), Sodium Chloride content (NACL), Pond Ash content (PA), Liquid Limit (LL), Plastic Limit (PL), Free Swell Index (FSI), Optimum Moisture Content (OMC) and Maximum Dry Density (MDD) as the independent parameters. The results can be summarized as follows: • The NACL-activated PA reinforced with GP showed a potential to be used as an alternative binder in soft soil stabilization, as it substantially improved the strength properties, in terms of the CBR and the UCS, of the treated soil. • The prediction accuracies of the ANN model were 98.5 and 98.0% with R² values of 0.998 and 0.997 for the CBR and the UCS, in that order. • The absolute summation of weights in the ANN model showed that OMC carries about 25% of the total importance, OMC and GP together about 45%, and the four parameters OMC, GP, PA and NACL about 75% of the total importance, while the other parameters carry the remaining 25%. This indicates that the mixture contents have a major impact on both CBR and UCS. • Generally, the utilization of the GP and the PA in the stabilization as a total replacement for cement provides a pathway to carbon neutrality for a healthier construction environment and the elimination of cement's carbon footprint. • As with other regression techniques, the formulas generated here are valid only within the considered range of parameter values; beyond this range, the prediction accuracy should be verified. Fig. 4. Architecture layout of the developed ANN model and its connection weights. Table 1. Statistical analysis of the collected database. Table 3. Connection weights for the developed ANN. Fig. 5. Relation between predicted and calculated (CBR) values using the developed models.
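One common way to obtain the relative importance quoted in the conclusions is to sum the absolute first-layer connection weights attached to each input node and normalize to percentages; the sketch below uses placeholder weights rather than the values of Table 3, and the input list follows the parameters named in the conclusions.

```python
# Sketch of the importance measure referred to above: the absolute sum of the
# input-to-hidden connection weights at each input node, normalized to
# percentages. The weight matrix is a placeholder, not Table 3.
import numpy as np

INPUTS = ["GP", "NACL", "PA", "LL", "PL", "FSI", "OMC", "MDD"]  # assumed order

def relative_importance(w_input_hidden):
    """w_input_hidden: array of shape (n_inputs, n_hidden)."""
    score = np.abs(w_input_hidden).sum(axis=1)
    return 100.0 * score / score.sum()

w = np.random.default_rng(0).normal(size=(len(INPUTS), 3))  # placeholder weights
for name, pct in zip(INPUTS, relative_importance(w)):
    print(f"{name:>4s}: {pct:5.1f} %")
```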
3,210.2
2023-01-01T00:00:00.000
[ "Materials Science" ]
The Higgsino-Singlino World at the Large Hadron Collider We consider light higgsinos and singlinos in the next-to-minimal supersymmetric Standard Model at the Large Hadron Collider. We assume that the singlino is the lightest supersymmetric particle and that the higgsino is the next-to-lightest supersymmetric particle with the remaining supersymmetric particles in the multi-TeV range. This scenario, which is motivated by the flavor and CP issues, provides a phenomenologically viable dark matter candidate and improved electroweak fit consistent with the measured Higgs mass. Here, the higgsinos decay into on (off)-shell gauge boson and the singlino. We consider the leptonic decay modes and the resulting signature is three isolated leptons and missing transverse energy which is known as the trilepton signal. We simulate the signal and the Standard Model backgrounds and present the exclusion region in the higgsino-singlino mass plane at the Large Hadron Collider at $\sqrt{s}=14$ TeV for an integrated luminosity of 300 fb$^{-1}$. Introduction Supersymmetric (SUSY) models are very popular among the numerous TeV extensions of the Standard Model (SM) [1,2]. One of the main tasks of the LHC is the direct search for SUSY particles. After three years of running, both LHC experiments ATLAS and CMS have not revealed any new particles beyond the SM, but the absence of any excesses above the SM expectation can be translated into strict limits on the parameter space of low-energy SUSY. In particular, the first two generation squarks and gluinos with masses below 1.7 TeV are excluded if the squarks and gluinos are mass degenerate [3]. This result together with a relatively heavy Higgs [4,5] somewhat undermines the rationale for TeV scale SUSY, since heavy SUSY particles seem to reintroduce finetuning. However, electroweak finetuning arises from the minimization of the scalar potential. The matching condition for electroweak symmetry breaking is 1/2 M 2 Z ≈ −µ 2 − M 2 Hu [6] 1 , where M Z is the mass of the Z boson and M Hu is the mass of the Higgs boson that couples to the top. A very large µ term is unnatural due to the required precise cancellation between the soft breaking terms and µ. Thus, a supersymmetric model with heavy multi-TeV scalars, but sub-TeV µ values can still avoid large electroweak finetuning. SUSY models with heavy multi-TeV matter scalars have the advantage that loop induced flavor changing neutral current and CP violating processes are suppressed [7][8][9] and help to ameliorate the late time gravitino decay problem [10,11]. Moreover, potential baryon number violating dimension five operators are suppressed and the resulting proton decay rate becomes very small [12]. Ref. [13] considers such a split scenario with light and degenerate higgsinos and decoupled gauginos and matter scalars (higgsino world scenario). However, in split scenarios with a light higgsino LSP the annihilation cross section is too large. Assuming standard cosmology, the thermal relic density is too small compared to the WMAP and Planck measurement [14,15] Ωh 2 ≈ 0.1187. The simplest extension of the MSSM is the next-to-MSSM (NMSSM) with a scale invariant superpotential [16]. The supersymmetric Higgs mass term µ is dynamically generated by the vacuum expectation value (vev) of a gauge singlet chiral superfield S and thus the NMSSM provides a weak scale solution of the µ problem in the MSSM. 
The singlet superfield leads to additional singlet-like CP-even and CP-odd Higgs states as well as a singlino-like neutralino state and thus the additional degrees of freedom can provide a solution to the dark matter issue of the higgsino world scenario. However, the resulting relic density is either too large or to small in large region of parameter space in the singlet extended higgsino world scenario. One solution is to demand a singlino-like neutralino LSP whose annihilation cross section is resonantly increased via Higgs bosons in the s-channel. Another solution is co-annihilation with a slightly heavier higgsino-like next-to-LSP (NLSP). Both mechanism lead to the desired relic density and hence the singlet extension of the higgsino world scenario is a phenomenologically viable model [17,18]. In this paper, we consider a higgsino-singlino world scenario with multi-TeV matter scalars and decoupled gauginos, but with a small µ term and a singlino-like LSP i.e. m singlino < m higgsino m scalar , m gaugino . We want to explore the discovery reach of our scenario at the LHC at √ s = 14 TeV in the production of a neutralino-chargino pair, The hadronic decays of the higgsinos lead to a final state signature with large QCD background and thus is not a viable signal at the LHC. However, the leptonic decay mode has particularly small QCD and SM background. The signature is three isolated leptons and missing transverse energy. This process is known as the trilepton signal and the corresponding searches has been performed by ATLAS and CMS [19][20][21]. Studies of the discovery potential at 14 TeV has been studied in [22,23] and in Ref. [24] the discovery potential of CP violation in the trilepton channel has been investigated. In this paper, we want to re-analyze the trilepton study. We simulate the signal and background at hadron level and we take into account the most important detector effect by performing a fast detector simulation. In particular, we derive limits for higgsino-like charginos and neutralinos with a singlino-like LSP at the LHC at 14 TeV which has not been considered in previous works. The remainder of the paper is organized as follows. In Sect. 2, we discuss our scenario in more detail. In Sect. 3, we briefly review the main phenomenological features of the scenario. In Sect. 4, we first discuss the constraints from LEP2 and the LHC8 results and then the selection cuts before showing the numerical results for two benchmark points. Finally, we show the discovery reach in the higgsino-singlino mass plane at the LHC at √ s = 14 TeV for an integrated luminosity of 300 fb −1 . We conclude in Sect. 5. The Spectrum We consider the scale invariant NMSSM [16]. Assuming that the gauginos and the sfermions with masses in the multi-TeV scale are essentially decoupled from the low energy scale theory, we are left with the following particle spectrum beyond the SM fields : (i) neutralinos: a singlino-like LSP (χ 0 1 ), two higgsino-like neutralinos (χ 0 2 ,χ 0 3 ); (ii) charginos: higgsino-like stateχ ± 1 ; (iii) CP-even Higgs fields: the singlet like field (H 1 ), the SM like Higgs field (h) and the heavy doublet-like CP-even scalar field (H 2 ) and (iv) the CP-odd scalars: a singlet-like scalar (a) and a heavy CP-odd scalar (A). 
In this limit the entire effective theory of the higgsino-singlino world scenario essentially reduces to the following superpotential and the corresponding soft breaking part of the Lagrangian assuming a Z 3 symmetry where S, H u and H d denote the singlet, SU(2) doublet up-type and the doublet down-type Higgs superfields, respectively. S, H u and H d are the respective scalar fields. λ and κ are dimensionless Yukawa couplings, whereas the soft breaking terms for the scalar fields are given by m 2 Hu , m 2 H d and m 2 S . A λ and A κ are the trilinear soft breaking terms. Once the singlet gets a vev, the Higgs mixing term µ ≡ λ s is generated. This can easily be at the weak scale, solving the usual µ problem of the MSSM. In the remainder of this section, we briefly sketch the masses of the relevant sub-TeV particles in the theory. In the limit where |µ| M gauginos , the neutralino mixing matrix block diagonalizes into the predominantly heavy gaugino sector and the light higgsino-singlino sector. The mass matrix of the light higgsino-singlino sector can be written as, (2) In this paper we only consider the parameter region with 2κ < λ 1. This choice ensures that the lightest neutralino is predominantly singlino-like and the chargino and neutralino masses are approximately given by, The Higgs sector is composed of the usual CP-even scalar state that will be identified with the Higgs state observed at the LHC with a mass around ∼ 125 GeV. The mass of this state can be written as, with δ quantifying the radiative contributions from the sparticles, mainly from the stops. It will be assumed that the masses of the heavier sfermions will be set by fixing the Higgs mass at 125 GeV. 2 The masses of the other light singlet-like CPeven and CP-odd Higgs is given by, The sub-TeV spectrum will also include the usual doublet type CP-even (M H2 ) and CP-odd Higgs (M A ) which have nearly degenerate mass given by, and the charged Higgs with mass M 2 In the following, we choose λ 1, so that the singlino and the singlet-like Higgs bosons only couple weakly to the other particles and thus possible light singlet-like Higgs bosons are not excluded by the LEP constraints. As we discuss below this is also the parameter range that is consistent with the Dark Matter constraints. This framework is conceptually different from the split [25][26][27] or mini-split [28] models due to the existence of additional light scalar states. This necessarily includes additional sources of fine-tuning in the classical sense. However, in the paradigm where this notion of naturalness is disregarded [29,30] or reformulated [31][32][33], such proliferation of sources of fine-tuning may not be considered as conceptually inconsistent. Indeed, in models where the Higgs-Singlet sector is sequestered from the rest of the supermultiplets, this kind of spectrum can naturally arise. E.g., in 5d SUSY models where the Higgs-Singlet multiplets are usually confined to the brane and the rest of the multiplet can access the bulk [34]. Phenomenology of the Higgsino-Singlino World Scenario In this section we briefly comment on some phenomenological aspects of the higgsino-singlino world scenario: 1. With this split spectrum one can achieve a slightly better fit to the electroweak precision observables. Assuming λ 1 one can neglect the effect of the singlet-doublet mixing. With this assumption there is negligible contribution to the S and T parameters 3 [35]. 
The non-zero contributions can be parametrized using the three observables M W , s 2 l and Γ(Z → l + l − ). The latest experimental values are given in [37]. We utilize the SM prediction including all computed higher order corrections for M W [38], the leptonic weak mixing angle (s 2 l ) [39] and Γ(Z → l + l − ) [40]. The fit to the experimentally measured values is slightly improved in a large region of the parameter space as shown in Figure 1. 2. In the limit 2κ < λ 1, that we explore in this paper, the singlino-like neutralino is the lightest supersymmetric particle, see Equation (3). With conserved R-parity this can be the dark matter candidate in this class of models [18,41]. The higgsino-like neutralinos and charginos are the NLSPs. They are essentially degenerate with mass ∼ µ, except the electroweak corrections that lifts the degeneracy making the charginos slightly heavier by ∼ O(10) MeV. We have performed a systematic scan to obtain the region of parameter space that is consistent with the dark matter relic density. Considering that the sfermions and the gauginos are decoupled, one finds that the entire parameter space of the theory, as expressed in Eq. 1, can be defined in terms of the following parameters, We utilize NMSSMTools [42,43]and micrOmegas [44] to perform a scan over in the range: A scatter plot for points consistent with observed relic density is shown in Figure 2 for M A = 300 GeV. The allowed parameter space can be divided into two distinct regions. In the region of the parameter space where κ ∼ λ/2, the right dark matter relic density for LSP is achieved through co-annihilation with a relatively degenerate higgsino-like neutralino. The LSP can have a significant higgsino component in this scenario. In this case for effective reduction of the number density a mass difference of ∆M = Mχ0 1 − Mχ0 2 < 20 GeV is required. The relative degeneracy of the chargino and the neutralino makes it relatively difficult to probe at the LHC. The collider phenomenology of this region of the parameter space closely resembles the higgsino world scenario and have better prospects of being probed at future colliders like the ILC [13]. For the possibility of probing this region at LHC with mono-jets + E T , see [45][46][47]. A phenomenologically more promising region is obtained when Mχ0 where M A is the mass of the heavy CP-odd Higgs. In this case the LSP can have efficient resonant annihilation with the heavy Higgs scalars in the s-channel. Actually we observe that the relative degeneracy of the CP-even and CP-odd heavy doublet-like Higgs implies a double resonance through the processχ 0 1χ 0 1 → on-shell H 2 /A → bb, aa * . In this case the LSP is predominantly singlino-like. In this case a relative separation between the higgsinos and the singlinos of the order of 100 GeV is possible. In the rest of this paper we will concentrate on the collider signal of this region of the parameter space. In this section, we want to discuss the phenomenological consequences of our higgsino-singlino world scenario at the LHC with a relatively simple collider study. The higgsinos and singlinos are the only kinematically accessible supersymmetric states at the hadron collider. Here, we consider associated chargino-neutralino pair production, The cross section for chargino-neutralino pair production is determined by tan β, λ, the higgsino mass parameter and the singlino mass. 
Motivated by the LEP constraints on light singlet like scalars and the Dark Matter constraints discussed above we set tan β = 10 and λ = 0.01 in our study. However the results presented here is relatively insensitive to the specific choice of these parameters. In particular, a different value of lambda only modify the branching ratio of the higgsino slightly as long as lambda is small. In Table 1, we show the total chargino-neutralino pair production cross section in picobarn at the LHC for √ s = 14 TeV [48]. We assume that the gauginos and sfermions are decoupled and we set M H± > Mχ± 1 − Mχ0 1 , thus theχ ± 1 decays into W ( * )χ0 1 with a branching ratio of 100%, where the asterisk denotes off-shell W bosons. The heavier neutralino eigenstatesχ 0 2 andχ 0 3 generally decay into Z ( * )χ0 1 . However,χ 0 2 andχ 0 3 can also decay into the CP-even and CP-odd Higgs bosons. The explicit decay properties depend on the details of the Higgs sector. We set M A > Mχ0 2 − Mχ0 1 , thus kinematically disallowing theχ 0 2 andχ 0 3 to decay into the heavy doublet like Higgs. The branching ratios of theχ 0 2 andχ 0 3 into singlet-like Higgs states are negligible, since we consider λ 1. The branching ratio of the neutral higgsino states into the SM-like Higgs h with a mass of 125 ± 3 GeV cannot be neglected, if the decay is kinematically possible. However, the branching ratio ofχ 0 2 andχ 0 3 into Z is still sizable. We focus on the leptonic decay modes of the gauge bosons which results in the trilepton and missing transverse energy final state configuration. The trilepton and missing transverse energy ( E T ) signal at the LHC was first investigated in [22,23]. The ATLAS [19,20] and CMS [21] searches for trilepton and large missing transverse momentum at the LHC at √ s = 8 TeV with an integrated luminosity of 20 fb −1 put already strict constraints on gaugino pair production in a simplified MSSM model. They consider wino-like lightest charginoχ ± 1 , heavier wino-like neutralinoχ 0 2 and a bino-like LSPχ 0 1 with decoupled sfermions and higgsinos.χ ± 1 andχ 0 2 masses up to 345 GeV are excluded. However, the mass limits on charginos and neutralinos are much weaker for the higgsino-singlino world scenarios, since the production cross section for higgsino-like charginos and neutralinos are much smaller than for the winos. Ref. [49] published a study with a light higgsino-singlino scenario. They derived constraints in the Mχ± 1 − Mχ0 1 mass plane from the ATLAS trilepton and E T search [19]. They found that chargino masses up to 250 GeV are excluded. For small mass differences between the higgsino and the singlino, searches from LEP for e + e − →χ 0 2χ 0 1 are relevant [51,52]. As demonstrated in [49] the LEP bounds are stricter than the current LHC bounds for Mχ± 1 ≤ 140 GeV. In the following, we derive the exclusion limits in the higgsino-singlino mass plane at the LHC for an integrated luminosity of 300 fb −1 at √ s = 14 TeV. The mass spectrum, couplings and decay widths are obtained with NMSSMTools 4.1.0 [42][43][44]. The signal events are generated with Herwig 2.7.0 [53]. The signal cross sections are normalized with the next-to-leading order (NLO) calculation from Prospino2.1 [48]. The dominant SM backgrounds W Z, ZZ and tt are generated with Herwig2.7.0. The NLO cross sections for vector boson pair production and tt are taken from MCFM 6.7 [54] and [55], respectively. We have generated 5 × 10 5 leptonic W Z, 5 × 10 5 leptonic ZZ events and 10 6 leptonic tt events. 
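A minimal sketch of the luminosity normalization implied above, weighting each generated sample by σ_NLO · L_int / N_generated; the cross-section values in the snippet are placeholders, not the Prospino/MCFM numbers used in the study.

```python
# Sketch of the luminosity normalization: each generated sample is weighted by
# sigma_NLO * L_int / N_generated. Cross sections below are placeholders.
LUMI = 300.0e3  # pb^-1  (300 fb^-1)

samples = {
    #  name        sigma_NLO [pb]   N_generated
    "WZ_lep":     (1.0,               500_000),
    "ZZ_lep":     (0.2,               500_000),
    "ttbar_lep":  (25.0,            1_000_000),
}

def event_weight(sigma_pb, n_gen, lumi=LUMI):
    """Per-event weight so that the sample integrates to sigma * L_int."""
    return sigma_pb * lumi / n_gen

for name, (sigma, n_gen) in samples.items():
    print(f"{name:10s} weight = {event_weight(sigma, n_gen):.3f}")
```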
The detector effects are estimated with the fast detector simulation Delphes 3.0.12 [56]. We replaced the ATLAS detector card of Delphes 3.0.12 with the CheckMATE 1.1.4 card [57]. The detector tuning of CheckMATE 1.1.4 has been validated with several ATLAS studies (in particular with [19]) and hence should be more accurate. Our event samples are then analyzed with the program package ROOT [58]. Jets are defined using the anti-k T algorithm [59] with ∆R = (∆Φ) 2 + (∆η) 2 = 0.4. Here, ∆Φ and ∆η are the difference in azimuthal angle and rapidity, respectively. We demand that all jets have p T > 20 GeV and |η| < 2.5. The btagging efficiency is 85%. ATLAS distinguishes between different kinds of electrons which have different reconstruction and identification efficiencies as a function of η and p T . We require "tight" electrons in our study [57]. All electrons must have p T > 7 GeV and |η| < 2.5. The electrons must be isolated, i.e., the scalar sum of the transverse momenta of the tracks within ∆R = 0.3 of the electron must be less than 16% of the electron p T [19]. As for the electrons, ATLAS also have different types of muons with different efficiencies. We require "combined+standalone" muons in the following [57]. We also demand that all muons have to have p T > 7 GeV and |η| < 2.7. The isolation requirements for the muons are similar to the electron case, but with a ratio of 12% [19]. For the overlap removal we use the following procedure [19]. Any jet within ∆R ≤ 0.2 of an electron will be removed. This cut prevents double counting, since electrons are usually reconstructed as jets as well. Since we do not want to consider electrons and muons from heavy flavor decays within jets, all electrons and muons within 0.2 ≤ ∆R ≤ 0.4 of a jet will be removed. We have implemented the lepton triggers from [19]. The single electron or single muon triggers require at least one electron or one muon with p T ≥ 25 GeV. The symmetric di-muon trigger demands at least two muons with each p T ≥ 14 GeV, while the asymmetric trigger requires p T ≥ 18 GeV and p T ≥ 10 GeV. For the symmetric di-electron trigger, at least two signal electrons are required to have p T ≥ 14 GeV, while for the asymmetric electron trigger, we demand p T ≥ 25 GeV and p T ≥ 10 GeV. Finally, the mixed electron-muon (muon-electron) trigger requires one electron with p T > 14 GeV (10 GeV) and one muon with p T ≥ 10 GeV (18 GeV). In the following, we assume an overall trigger efficiency of 100 %. All events in the signal regions must contain three isolated leptons (electrons and muons). We demand at least one same flavour opposite sign (SFOS) lepton pair with an invariant mass above 20 GeV to suppress low mass resonances. We have defined three signal regions with one Z depleted region and two Z enriched regions. For the Z depleted signal region SRnoZ, we demand that the SFOS pair closest to the Z mass satisfies m SFOS ≤ 81.2 GeV or m SFOS ≥ 101.2 GeV. Events with jets with p T ≥ 20 GeV are vetoed. Finally, we require E miss T ≥ 30 GeV. This signal region is very similar to the trilepton study presented in [60]. Both Z enriched regions are defined as follows. We require for the invariant mass m SFOS closest to the Z mass: 81.2 GeV ≤ m SFOS ≤ 101.2 GeV. We veto all events with b-jets with p T ≥ 20 GeV. We demand large missing transverse energy with E miss T > 75 GeV and 150 GeV corresponding to signal regions SRZ1 and SRZ2, respectively. 
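A minimal sketch of the lepton selection, isolation, and overlap-removal logic described above (not the actual analysis code); objects are plain dictionaries, and the track-isolation sums are assumed to be provided by the detector simulation.

```python
# Sketch of the object-selection logic described above. Objects are dicts with
# pt, eta, phi and a precomputed track-isolation sum (iso_sum_pt).
import math

def delta_r(a, b):
    """Angular distance dR = sqrt(dphi^2 + deta^2)."""
    dphi = math.remainder(a["phi"] - b["phi"], 2 * math.pi)
    deta = a["eta"] - b["eta"]
    return math.hypot(dphi, deta)

def select_objects(electrons, muons, jets):
    """Apply pT/eta cuts, track isolation, and the overlap-removal rules."""
    ele = [e for e in electrons if e["pt"] > 7 and abs(e["eta"]) < 2.5
           and e["iso_sum_pt"] < 0.16 * e["pt"]]
    mu = [m for m in muons if m["pt"] > 7 and abs(m["eta"]) < 2.7
          and m["iso_sum_pt"] < 0.12 * m["pt"]]
    # Remove jets within dR <= 0.2 of an electron (avoids double counting).
    jets = [j for j in jets if all(delta_r(j, e) > 0.2 for e in ele)]

    # Remove leptons within 0.2 <= dR <= 0.4 of a remaining jet (heavy flavor).
    def far_from_jets(lep):
        return all(not (0.2 <= delta_r(lep, j) <= 0.4) for j in jets)

    return ([e for e in ele if far_from_jets(e)],
            [m for m in mu if far_from_jets(m)],
            jets)
```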
The transverse mass is given by m_T = √(2 p_T E_T^miss (1 − cos ∆φ_l,E_T^miss)), where ∆φ_l,E_T^miss corresponds to the azimuthal angle between the lepton and the missing transverse momentum vector. The lepton in the m_T calculation is the one which is not part of the SFOS pair, and p_T is the transverse momentum of this lepton. We demand m_T ≥ 110 GeV in order to suppress the WZ background. SRnoZ considers scenarios with a small mass splitting between the singlino and the higgsino, which is generally smaller than the Z mass. SRZ1 and SRZ2 target scenarios with larger mass differences between the singlino and the higgsino. The difference between SRZ1 and SRZ2 is the missing transverse energy cut, which is larger for SRZ2 and thus more sensitive to heavy higgsinos and large mass differences between the higgsino and the singlino. We present the cutflows for the SM backgrounds as well as for two benchmark points for an integrated luminosity of 300 fb−1 at the LHC with √s = 14 TeV in Tables 2 and 3. Table 2: Number of background and signal events for benchmark point BP1 with Mχ02 = 160 GeV and Mχ01 = 100 GeV after each cut for signal region SRnoZ. In the last three columns, we present the ratio between the number of signal and background events, the statistical significance and the significance including systematic errors. All numbers are normalized to 300 fb−1 at √s = 14 TeV. The statistical significance is estimated with Equation (9), where S and B correspond to the number of signal events and background events after each cut. We also show the significance taking into account the systematic errors. We assume an overall systematic uncertainty of 10% for all SM backgrounds; our estimate of the significance is then given by Equation (10). First, we choose a light chargino with Mχ±1 = 160 GeV and a singlino with Mχ01 = 100 GeV for benchmark point BP1. Here both χ02 and χ03 decay via off-shell Z bosons. The first cut already provides a statistical significance of 8.8 due to the large production cross section. The dominant backgrounds are WZ and tt. After the SFOS cut, we veto Z bosons and thus heavily suppress the WZ and ZZ backgrounds. We apply a mild missing transverse energy cut which reduces SM backgrounds containing a Z. However, nearly 20% of the signal events do not pass the cut due to the small mass splitting between the higgsino and the singlino. We keep this cut, since it removes the Zb background which we did not simulate [60]. The jet veto heavily suppresses the tt background and we obtain a good statistical significance of 13.6. Finally, if we account for systematic errors, the significance reduces to 1.6 owing to the large systematic uncertainty of the WZ and tt backgrounds. tt remains one of the dominant backgrounds in SRnoZ. Note that our tt background is larger than in [60], partly because they did not normalise their tt sample to the NLO cross section. We rescaled their cross section to NLO, but their tt background is still smaller than our estimate, because they further reduced the tt background by imposing different isolation criteria for the leptons. In addition, they demand a larger minimal transverse momentum of 10 GeV on the leptons. However, we keep our isolation requirements, since we validated our analysis with [19]. In any case, we believe that our background estimate for tt is sufficiently conservative. In scenario BP2, the neutralino and chargino masses are set to Mχ±1 = 400 GeV and Mχ01 = 20 GeV. On-shell decays of χ02,3 into Z are still dominant even though decays into the SM Higgs are kinematically allowed.
The branching ratios are BR(χ 0 2 →χ 0 1 + Z) = 65% and BR(χ 0 3 →χ 0 1 + Z) = 43%. The first two cuts are identical as for BP1. The SFOS and the Z requirement suppress the tt background. We apply a b-jet veto which further reduces the tt background while the other backgrounds are still sizable. However, the strict cut on the missing transverse energy heavily suppresses the di-gauge boson backgrounds. The final cut on m T further reduces the W Z background and we obtain a statistical significance of about 4.9. Taking into account the systematic uncertainty, we still obtain a significance of 4. In Figure 3, we present the exclusion limits in theχ 0 2 -χ 0 1 mass plane at the LHC at √ s = 14 TeV with 300 fb −1 . The statistical significance is estimated with Equation (9). The best signal region is chosen for each point in the mass plane. Table 3: Number of background and signal events for benchmark point BP2 with Mχ0 2 = 400 GeV and Mχ0 1 = 20 GeV after each cut for signal region SRZ2. In the last three columns, we present the ratio between the number of signal and background events, the statistical significance and the significance including systematic errors. All numbers are normalized to 300 fb −1 at √ s = 14 TeV. The red, black dashed and black solid curve correspond to 2σ, 3σ and 5σ, respectively. Above the blue solid line, the decay of the higgsino into a singlino + X is not allowed. The blue dashed line corresponds to Mχ0 2 − Mχ0 1 = m Z . Below the blue dashed line, theχ 0 2 decays in a 2-body decay withχ 0 2 →χ 0 1 Z. Here, the selection cuts of the Z enriched signal regions provide the best sensitivity for our signal. Above the blue dashed curve, theχ 0 2 decays via off-shell Z * in a three body final state which is sensitive to the Z depleted signal region. We are sensitive for higgsino masses up to 540 GeV for massless singlinos. With decreasing mass difference between the higgsino and the singlino, the significance drops sharply. Decreasing the mass splitting reduces the average p T of the leptons. Thus, the final state leptons becomes too soft which does not allow to separate our signal from the SM background. However, these region can be probed in higgsino pair production in association with a hard jet [45][46][47] or a trilepton search with a relatively hard initial state radiation jet [61]. In Figure 4, we included the systematical errors for the calculation of the significance, see Equation (10). Higgsino masses up to 500 GeV can be excluded for massless singlinos. We are not sensitive to small mass differences between the higgsinos and singlinos due to the large systematic errors of the W Z and tt backgrounds. However, our estimate of 10% is quite conservative and hence Figure 4 is a rather pessimistic estimate of the exclusion limit for small mass differences. As more data is collected, the systematic uncertainties will be much smaller and one can expect to cover a substantial portion of the DM allowed region through the trilepton channel. Conclusion In this paper, we considered a light higgsino-singlino world scenario with decoupled matter scalars and gauginos in the NMSSM. There are phenomenological reasons to consider such a split scenario. The non-observation of supersymmetric particles with a relatively heavy Higgs provides strict limits on the soft breaking scale of supersymmetry. However, finetuning arguments favor relatively light higgsinos. 
But a light higgsino LSP with multi-TeV scalars and gauginos typically results in a too small relic density in standard cosmology. On the other hand, a supersymmetric model with a light higgsino-singlino sector can provide a viable DM candidate. If the higgsino is the NLSP with a small splitting to the singlino LSP, co-annihilation between both sparticles can lead to the correct relic density. However, for a relative degeneracy between the higgsino and the singlino, the production of higgsinos is difficult to detect at the LHC since the decay products of the higgsinos are very soft. On the other hand, the right amount of relic density can be obtained via resonant annihilation with heavy Higgs scalars while allowing for a large mass splitting between the higgsino and the singlino. Another advantage of considering a higgsino-singlino world scenario is that flavor changing neutral current and CP violating processes are suppressed and that the gravitino problem is solved. Thus motivated, we focused on the production of a higgsino-like chargino-neutralino pair at the LHC. In particular, we considered the leptonic decay modes which result in the trilepton and missing transverse energy final state. In this work, we presented a collider study of the higgsino-singlino world scenario at the LHC at √s = 14 TeV for an integrated luminosity of 300 fb−1. We simulated the signal and the most important SM backgrounds with recent MC simulations and we also estimated the detector response with a fast detector simulation. We considered three signal regions corresponding to a Z depleted region (for small mass differences between the higgsino and the singlino) and two Z enriched signal regions. We discussed in detail the cuts for two benchmark scenarios. We examined the discovery reach in the higgsino-singlino mass plane. For massless singlinos, higgsinos with masses up to 500 GeV can be excluded for an integrated luminosity of 300 fb−1 at √s = 14 TeV. However, the discovery reach is severely constrained in the small splitting region due to the low efficiency of the selection cuts and the assumptions on the systematic errors. Higgsinos with a mass splitting of the order of the Z boson mass can be excluded up to 200 (300) GeV assuming a systematic error of 10% (0%). However, the region of small splitting would require more involved search strategies [45][46][47][61] to be accessible at the LHC. Finally, our results for the discovery reach also hold if we allow for non-split scenarios, e.g. if matter scalars are kinematically accessible at the LHC, but do not alter our assumptions on the decay chain.
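The display equations for the significance estimates (Eqs. (9) and (10)) did not survive text extraction. The sketch below uses the standard transverse-mass definition together with two common significance choices, S/√(S+B) and S/√(S+B+(0.1B)²); these should be read as plausible assumptions consistent with the surrounding text, not as the paper's exact formulas.

```python
# Standard transverse mass and two common significance estimates; the latter
# are assumptions standing in for the Eqs. (9)-(10) lost in extraction.
import math

def transverse_mass(lep_pt, met, dphi_lep_met):
    """m_T of the lepton not in the SFOS pair and the missing transverse momentum."""
    return math.sqrt(2.0 * lep_pt * met * (1.0 - math.cos(dphi_lep_met)))

def significance(s, b):
    """Purely statistical estimate (assumed form S / sqrt(S + B))."""
    return s / math.sqrt(s + b)

def significance_sys(s, b, rel_sys=0.10):
    """Adds an overall relative background systematic in quadrature (assumed form)."""
    return s / math.sqrt(s + b + (rel_sys * b) ** 2)

# Example: a cut level with 90 expected signal and 300 background events.
print(round(significance(90, 300), 2), round(significance_sys(90, 300), 2))
```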
7,294
2014-05-14T00:00:00.000
[ "Physics" ]
FUNDAMENTAL AND APPLIED NANOIONICS IN IMT RAS The term and concept of a new branch of science and technology, namely "nanoionics", were formulated in IMT RAS (1991-1992), in the article "A step towards nanoionics". This new R&D area is devoted to the nanoscale fundamentals of fast ionic transport (FIT) in solid-state materials, as well as to methods for the design of FIT nanomaterials, for the description of local FIT space-time processes, for the creation of devices with FIT on a nanoscale ("nanoionic devices"), etc. The main achievements of IMT RAS in nanoionics are: (1) new optically active nano-physical-chemical systems Ag(Cu)Hal–M, created in high vacuum (M are rare-earth/transition metals); (2) a new classification of solid-state ionic conductors (it distinguishes for the first time a new class of solid-state conductors, "advanced superionic conductors", i.e., materials whose crystal structures are close to optimal for FIT); (3) the new scientific direction "nanoionics of advanced superionic conductors"; (4) crystal engineering of heteroboundaries in FIT materials and the invention of supercapacitors with a coherent polarized heterojunction and record-high frequency-capacitance characteristics ("nanoionic supercapacitors"); (5) substantiation of the possibility of using nanoionic supercapacitors in deep-sub-voltage nanoelectronics; (6) definition of ways for heterointegration, in supercapacitors, of advanced superionic conductors and carbon nanostructures with a high quantum capacitance; (7) a theory of the dynamic response of layered nanostructures with ionic hopping transport in a non-uniform potential landscape ("structure-dynamic approach of nanoionics"); (8) new fundamentals of electrostatics related to materials with FIT; (9) the proposition of a nonlinear non-local dynamics for FIT materials. Future nanoionic research is analyzed in terms of the dynamic theory of information. INTRODUCTION The term and concept of "nanoionics" were proposed in the Institute of Microelectronics Technology and High Purity Materials of the Russian Academy of Sciences (IMT RAS), in the article "A step towards nanoionics" [1], in which nano-objects with fast ionic transport (FIT) were characterized by a dimensionless parameter relating L, an object size, to λ, a characteristic length of localization of FIT processes, for example, the localization of ionic space charge. Now, nanoionics is regarded as an interdisciplinary branch of science and technology, for example, as a division of solid-state ionics [2] or as a section of nanoelectronics [3]. Under conditions of very limited available resources, our strategy for long-term R&D in IMT RAS consisted in attempts to expand the borders of nanoionics in new directions. The main results obtained in nanoionics are listed in this report. A new theoretical system, i.e., the structure-dynamic approach (SDA) of nanoionics, and new optically active nano-physical-chemical systems are considered in some detail. The graph of R&D items (which were initiated in IMT RAS on nanoionics) is presented with some comments related to the basics of the dynamic theory of information. METHODOLOGY In this section, some challenges (in condensed matter/solid-state ionics) and methods of their solution in nanoionics are discussed. A critical view on the interpretations of impedance spectroscopic data.
The method of impedance spectroscopy is used everywhere in solid-state ionics; however, the authors of this report call into question the standard interpretations of frequency dependences of the impedance Z(ω) by means of constant phase elements, as well as the interpretations of the universal dynamic response (UDR) by means of RC-grids. The UDR, i.e., the power law of conductivity, was discovered by A. K. Jonscher in 1977 [14]: σ*(ω, T) ∝ ω^n (2), where σ*(ω, T) is the real part of the complex (thermally activated) conductivity and n ≲ 1. The law (2) holds in a wide frequency range. There has been no consensus until now on a standard theoretical explanation of the reasons and mechanisms of the physical averaging leading to the emergence of UDR in macroscopic solid ionic conductors. To date (the end of 2019), about 9,800 citations have been made to the two works by A. K. Jonscher devoted to UDR. In the literature, there are many ideas (interpretations) related to the law (2). According to [15], "Structurally disordered solid electrolytes, both crystalline and glassy, as well as ionic melts, exhibit a set of spectroscopic peculiarities for ionic conductivity that is at variance with the predictions of simple random-hopping models". The negation of simple hopping models means that the macroscopic behavior is defined by the existence of unknown complex transient states of mobile ions. However, the remarkable 'universality' of (2) refers to the independence of UDR from physical and chemical structures, as well as from the details of ion-ion interactions. According to [16], "The universal properties found suggest they originate from some fundamental physics governing the motion of the ions." This statement can be understood as the existence of an unknown fundamentality in condensed matter physics. However, the proposition of [17] is more applicable for the analysis of UDR in ionic conductors: "both conductive and dielectric dispersions are simultaneously important in the frequency region of interest." This proposition implies the existence of such a well-known fundamentality as the Maxwell displacement currents in experimental samples. In our opinion, the law (2) is a result of a space-time averaging of interconnected currents, i.e., ionic hopping currents and Maxwell displacement currents on a nanoscale. The mainstream way to realize the idea of interconnected "conductive and dielectric dispersions" is the presentation of processes by the method of the complex impedance (Z) of an appropriate equivalent electric circuit. In this approach, results are presented in a manner like "The Z′ and Z″ versus frequency plots are well fitted to an equivalent circuit model. The circuits consist of the parallel combination of resistance (R), constant phase element (CPE), and capacitance (C). Furthermore, the frequency-dependent AC conductivity obeys Jonscher's universal power law" [18]. However, for a macroscopic sample, the physical sense of an equivalent electric circuit appears only if each unit of the circuit corresponds to an elementary process. In this case, the whole circuit mimics the result of physical averaging for a set of elementary processes. Note that constant phase elements (CPEs) are phenomenological macroscopic objects without a standard physical interpretation. A CPE itself needs a definition through elementary physical processes and mechanisms, just as UDR does. Solid-state impedance spectroscopy gives roughly averaged experimental data, i.e., a large amount of information (which reflects interconnected local processes) gets lost.
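To illustrate how the power law (2) is used in practice, here is a minimal sketch that fits a conductivity spectrum of the assumed form σ'(ω) = σ_dc + A·ω^n to synthetic data; the dc plateau, the numbers, and the noise level are illustrative assumptions, not data from the works cited above.

import numpy as np
from scipy.optimize import curve_fit

def jonscher(omega, sigma_dc, amp, n):
    # Universal dynamic response: dc plateau plus power-law dispersion.
    return sigma_dc + amp * omega ** n

omega = np.logspace(1, 7, 200)                       # angular frequency, rad/s
rng = np.random.default_rng(0)
sigma = jonscher(omega, 1e-6, 1e-10, 0.8) * (1 + 0.05 * rng.standard_normal(omega.size))

popt, _ = curve_fit(jonscher, omega, sigma, p0=(1e-6, 1e-10, 0.7))
print("sigma_dc = %.2e, A = %.2e, n = %.2f" % tuple(popt))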
For clear interpretations of such data, we need theoretical approaches that allow calculating the impedance Z through state variables directly connected with local ionic hopping transport and local dielectric polarization. Structure-dynamic approach of nanoionics A new theoretical approach [9] addresses the logic of elementary ion-transport processes on a nanoscale and emphasizes that solid-state ionic conductors are dynamical non-local non-linear systems. In such systems, key parameters (the heights of barriers in the potential landscape) depend on external influences. The dynamic behavior of an ionic space charge in parametrically dependent systems with long-range Coulomb interactions cannot be presented correctly by equivalent electric circuits whose elements have constant parameters. These findings are obtained within the frame of SDA by computer experiments. The calculated data are in good agreement with the results of impedance spectroscopy. SDA takes into account the main nanoscale feature of the crystal structures of all solid-state ionic conductors, namely, a non-uniform potential landscape in which mobile ions hop. SDA does not use derivatives with respect to spatial coordinates in its set of differential equations, because differentiation with respect to space coordinates is a doubtful operation on a nanoscale. This provides more correct interpretations of the dynamic response in ionics. Within SDA, several theoretical innovations were proposed: the method of an effective uniform electrostatic field; a new dimensional ri-factor; the new notion of a "Maxwell displacement current on a potential barrier"; and laws of spatial averaging of potential differences in solid-state ionic conductors [9-12]. New optically-active nano-physical-chemical systems The pathways and final products of chemical reactions can depend on the conditions in which the initial chemical components are located. Dimensional factors must manifest themselves in solid-state chemistry. Therefore, a search for new solid-state optically active physical-chemical nanosystems based on halides of Ag and Cu was carried out in [4,5]. Nanosystems of the Ag(Cu)I–M type (where M = La, Ce, Nd, Sm, Tb, Tm, Yb, Lu, Sc, Y, Al, etc.) were created (at 300 K) in high vacuum by deposition of M-films (5-10 nm thickness, by the method of laser ablation) onto β-Ag(Cu)I with the hexagonal wurtzite structure and γ-Ag(Cu)I with the cubic zinc-blende structure (50-100 nm thickness). It was revealed that such nanosystems have state parameters, kinetics of ionic transport, unusual ways of synthesis, and properties that depend on dimensional factors. Works [4,5] show the possibility of synthesizing, in physical-chemical nanosystems of the Ag(Cu)I–M type, a set of new nonstoichiometric compounds (with variable composition and structure) and materials. These new objects of nanoionics are distinguished by high concentrations (~10^21 cm^-3) of rare-earth (RE) and transition (TE) elements and, presumably, F-centers (an electron in a halogen vacancy). SOME PROSPECTS The beginning era of nanoionic devices demands a non-local non-linear theory of the dynamic response of solid-state ionic conductors. Nanoionics tries to describe, for example, diffusion and reactions in terms that have sense only on a nanoscale, e.g., in terms of a non-uniform potential landscape. Therefore, the search for fundamental properties (which can be included in a future theory of ionic transport on a nanoscale) is very important. The theoretical system, i.e., the structure-dynamic approach (SDA) of nanoionics, is a step on this way.
The problem of the high mobility of ions in ordered nanostructures is fundamental for various membranes and the hetero-systems of living organisms. Therefore, results of nanoionics should be in demand in the new multidisciplinary areas of BioElectronic Medicine and Semiconductor Synthetic Biology [19,20]. Information carriers with large masses (ions) are necessary for the suppression of tunneling leakage currents in logic and memory nanodevices of extremely small sizes. Therefore, research and development in the field of deep-sub-voltage nanoelectronics and the design of nanostructures of AdSICs [13] can lead to the creation of hybrid, highly functional electronic and ionic devices combining the quantum transport of electrons and the classical movement of ions. By the insertion of various chemical elements into simple Ag- and Cu-halides and their numerous derivatives, for example advanced superionic conductors of the RbAg4I5 family, a set of new materials and chemical compounds with high concentrations of optically/magneto-active elements and F-centers can be synthesized. One can expect the discovery of unusual combinations of properties: electronic conductivity, FIT, optical and magnetic activity, etc. The authors analyze the results of the development of nanoionics in IMT RAS and the prospects for further R&D from the perspective of the fundamentals of "The dynamic theory of information" (the book of D. S. Chernavskii [21]) and from the point of view of the influence of strategic innovation management on the achievements of applied science, see Figure 2 (R&D on nanoionics in IMT RAS). The purpose of the analysis is to establish correspondence between the methodological bases of scientific and technical searches and the decisions on the choice of new research directions under specific conditions. In terms of the dynamic theory of information [21,22], nanoionics can be defined as a developing information system. The main objectives of such systems (capable of perceiving, remembering and generating information) are the preservation and increase of their own valuable information, the forecasting of the behavior of the environment, and the forecasting of their own behavior. CONCLUSION The analysis shows that nanoionics grows as a self-developing information system. The long-term stability of the process of generation of information in this system is provided by the interplay of the nodes of the "thesaurus-purpose-result" triad. For a further development of a FIT theory on the nanoscale and a deeper understanding of processes in the high-frequency response of nanostructures, the inclusion of the local magnetic field in a FIT theory is necessary. The obtained results, i.e., an expansion of the ideas and approaches of nonlinear dynamics (a section of the modern theory of oscillations and waves) onto the field of intersection of solid-state ionics and nanotechnologies, can be considered as the initiation of the new scientific direction "dynamical non-local nonlinear ionics".
2,697.8
2020-01-01T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
Deep Transfer Learning for Biology Cross-Domain Image Classification Automatic biology image classification is essential for biodiversity conservation and ecological study. Recently, due to their record-shattering performance, deep convolutional neural networks (DCNNs) have been used more often in biology image classification. However, training DCNNs requires a large amount of labeled data, which may be difficult to collect for some organisms. This study was carried out to exploit cross-domain transfer learning for DCNNs with limited data. According to the literature, previous studies mainly focus on transferring from ImageNet to a specific domain or transferring between two closely related domains, while this study explores deep transfer learning between species from different domains and analyzes the situation when there is a huge difference between the source domain and the target domain. Inspired by the analysis of previous studies, the effect of cross-domain transfer learning on biology image classification is investigated. In this work, a multiple transfer learning scheme is designed to exploit deep transfer learning on several biology image datasets from different domains. There may be a huge difference between the source domain and the target domain, causing poor transfer learning performance. To address this problem, multistage transfer learning is proposed by introducing an intermediate domain. The experimental results show the effectiveness of cross-domain transfer learning and the importance of data amount, and validate the potential of multistage transfer learning. Introduction Building accurate knowledge of the identity, taxonomy, geographic distribution, and evolution of living species is essential for a sustainable development of humanity as well as for biodiversity conservation. In terrestrial ecosystems, plants are extremely complex and diverse, and there are millions of different plant species [1,2]. For us, plants must be classified into identifiable groups in order to have a clear, organized way of identifying the diverse array of plants and to enable some specific applications such as weed control [3,4]. Besides, the study of marine ecosystems is vital for global climate and environment protection [5][6][7][8]. There are many kinds of organisms in the ocean worth studying, such as fish and plankton, which play an important role in the ecosystem [9] and the marine food chain [10]. At the very beginning, species classification was usually implemented on morphological diagnoses provided by taxonomic studies [11] in a manual identification process. However, for some species like weed plants and plankton, only experts such as taxonomists and trained technicians can identify taxa accurately. Furthermore, one expert may only identify a limited number of species in a specific domain (such as only species of weeds or phytoplankton) because it requires special skills acquired through extensive experience [3,12]. At the same time, there is an increasing shortage of skilled taxonomists [13]. The declining and partly nonexistent taxonomic knowledge within the general public has been termed the "taxonomic crisis" [14], creating great challenges for the future of biological study and conservation [11]. Using computer-based multimedia identification tools with computer vision and machine learning techniques has been considered a promising solution to classify organisms, and a lot of work has been done on this topic [15,16]. Traditional Image Classification.
The traditional image classification process can be generally divided into three steps: image preprocessing, feature extraction/description, and classification [17]. Some preprocessing techniques are often used in the image classification system to produce a suitably enhanced image for the following feature extraction step, such as image denoising, image enhancement, image segmentation, and so on [18]. Feature extraction refers to taking measurements, geometric or otherwise, of possibly segmented, meaningful regions in the image [19]. To characterize and describe properties of the organism image by a set of values, computer vision experts have handcrafted many features. In previous studies, some general features like size [20], color, shape context [21][22][23][24], invariant moments, granulometric features, co-occurrence matrices, Fourier descriptors, Gabor filters, local binary patterns (LBP) [25], histograms of oriented gradients (HOG), scale-invariant feature transform (SIFT), etc., have been used commonly. There are also some features that have been designed for specific species [26][27][28]. However, handcrafted features usually lack robustness and cannot represent the complex biomorphic characteristics of some organisms [12]. Besides, some features are elaborately handcrafted for specific organisms [33], and often perform poorly after being extended to other organisms. These traditional classifiers usually do not achieve high prediction accuracy on different datasets [12]. Especially when the datasets are big or contain more than 20 categories, these classifiers may be limited by the "curse of dimensionality" [34], so that they are hard to apply directly to ecological studies. Deep Convolutional Neural Networks. In recent years, DCNNs [35][36][37][38][39][40][41][42] have become a mainstay of the computer vision community due to their record-shattering performance in the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) [43]. ImageNet is a large-scale image dataset with 1000 classes, containing 1.3 million training images, 50,000 validation images, and 100,000 testing images. DCNNs consist of a stack of learned convolution filters that extract hierarchical contextual image features, and thus are high-capacity classifiers. With this high capacity, DCNNs can find the relevant contextual image features in classification problems automatically and are less likely to be restricted by the "curse of dimensionality". Moreover, unlike traditional methods, DCNNs do not need to divide the training process into several steps but use an end-to-end learning mechanism, which is more suitable for real applications. The outstanding performance of DCNNs in image classification and other problems has received unprecedented attention, prompting scholars to apply them to various practical problems including biology image classification [3,[44][45][46][47][48]. Nevertheless, the very large number of parameters in DCNNs requires large-scale annotated training data. For some organisms inhabiting complex environments, such as some marine and even microscopic organisms, it is very difficult to collect their images. Moreover, the collected data can only be used after being precisely classified by experienced experts.
While experienced experts are often scarce and one expert can only identify a limited number of species in a specific domain (such as only species of weeds or phytoplankton) [12], the data available in practical studies may be insufficient to fully exploit the potential of DCNNs. Transfer Learning with DCNNs. Transfer learning aims to transfer knowledge between the source domain and the target domain [49]. In biology image classification and some other scenarios, obtaining training data might be difficult and expensive. However, transfer learning can overcome the deficit of training examples in some domains by adapting classifiers trained on another domain [50]. There are two ways to apply transfer learning with DCNNs. One is treating the DCNN as a big feature extractor and utilizing the pretrained network with its learned weights to extract features that are subsequently used in a new domain; the outputs of the DCNN are considered high-level features and are then fed into the following classifier. The other is to fine-tune the network weights by training the network with data from the new domain; in this case, the dimension of the output layer must be changed to match the number of classes in the new domain dataset. There are some studies about biology image classification using transfer learning. Ge et al. [51] learned a domain-generic DCNN for the task of plant classification, by applying transfer learning on the parameters of the GoogLeNet [37] model (pretrained on the large-scale ImageNet dataset) using all of the training data for the plant classification task. Lee et al. [52] incorporated transfer learning by pretraining a DCNN with class-normalized data and fine-tuning with the original data. Orenstein and Beijbom [53] built on the insights from Kaggle's National Data Science Bowl (NDSB) and investigated how DCNNs perform on several datasets of in situ plankton images; their study suggests that weights from a highly tuned network for one planktonic image set can be used effectively in another plankton domain. Ge and Yu [54] introduced a source-target selective joint fine-tuning scheme for improving the performance of deep learning tasks with insufficient training data. Their idea is to identify and use a subset of training images from the original source learning task whose low-level characteristics are similar to those from the target learning task, and to jointly fine-tune shared convolutional layers for both tasks. Previous studies about transfer learning with DCNNs mainly focused on tasks that transfer from ImageNet to a specific domain or transfer between two closely related domains [53]. Only a few studies exploited transfer learning between two domains that are not directly related. When applying transfer learning to biology image classification, different distances between species in the source domain and the target domain may have different effects on the performance. Although there is a certain biological distance between the two domains, they may share some common patterns in the view of DCNNs. In this paper, inspired by the analysis of the literature and practical applications, deep transfer learning for biology cross-domain image classification is explored. By analyzing the experimental results on image datasets from different biology domains, including flowers, plant seedlings, plankton, and fish, some interesting conclusions are drawn.
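As a concrete illustration of the two ways of applying transfer learning with DCNNs described above, the following PyTorch sketch loads an ImageNet-pretrained backbone and rebuilds its last fully connected layer; ResNet-18, the number of classes, and the freezing switch are illustrative assumptions rather than the exact configuration used in this paper.

import torch.nn as nn
from torchvision import models

def build_transfer_model(num_classes, feature_extract=False):
    # Load an ImageNet-pretrained backbone (newer torchvision versions use the
    # weights= argument instead of pretrained=True).
    model = models.resnet18(pretrained=True)
    if feature_extract:
        # First way: use the network as a fixed feature extractor.
        for param in model.parameters():
            param.requires_grad = False
    # Replace the last fully connected layer to match the target-domain classes;
    # when feature_extract is False this corresponds to full fine-tuning.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

model = build_transfer_model(num_classes=17)   # e.g. 17 classes as in Flowers17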
The main contributions of this paper can be listed as follows: (…). Methods To exploit deep transfer learning for biology cross-domain image classification, a multiple transfer learning scheme and multistage transfer learning are designed to train DCNNs with several datasets from different domains. AlexNet [35] consists of five convolutional layers and three fully connected layers. There are three 3 × 3 max-pooling layers after layers 1, 2, and 5. In the first layer, the 3 channels in the filters correspond to the red, green, and blue components of the input image. The local response normalization (LRN) [35] was dropped in our implementation; it was introduced in AlexNet but was no longer used in subsequent DCNNs, as it was replaced with batch normalization [38]. VGG-16 [36] consists of 13 convolutional layers and 3 fully connected layers. In order to increase the depth of the network, small (3 × 3) convolution filters are used in all convolutional layers. GoogLeNet [37] has 22 layers, which consist of three convolutional layers, nine inception layers (each of which is two convolutional layers deep), and one fully connected layer. The inception layer is composed of parallel connections with filters of different sizes (1 × 1, 3 × 3, and 5 × 5), along with 3 × 3 max-pooling, one for each parallel connection. The outputs of the connections in the inception module are concatenated together as the inception output. Using multiple filter sizes has the effect of processing the input at multiple scales. In order to reduce the number of weights, 1 × 1 filters are applied as a "bottleneck" to reduce the number of channels for each filter. GoogLeNet has multiple versions; batch normalization was introduced in the second version, and the most popular version, also known as GoogLeNet v3, is used in this paper. GoogLeNet v3 decomposes the convolutions by using smaller 1-D filters to reduce the number of weights and go deeper. As the error back-propagates through the network, the gradient shrinks, which affects the ability to update the parameters in the earlier layers of very deep networks. To deal with this vanishing gradient problem, ResNet uses residual connections. ResNet introduces a "shortcut" module which contains an identity connection so that the "weight" layers (the layers that contain parameters) can be skipped. Rather than learning the target function directly in the weight layers, the shortcut module learns the residual mapping. The "bottleneck" approach used in GoogLeNet, which uses 1 × 1 convolutions to reduce the number of weight parameters, is also used in ResNet. ResNet can be implemented with different numbers of layers; in this paper, ResNets with 18, 34, 50, 101, and 152 layers are built. Rectified Linear Unit. The Rectified Linear Unit (ReLU) activation function is applied to the output of every convolutional layer in all DCNNs used in this paper. The ReLU activation function can be described by the equation f(z) = max(0, z), where z indicates the input of the ReLU activation function. The ReLU activation function can make DCNNs more sparse: for example, in a randomly initialized network, only about 50% of hidden units (those with z > 0) are activated (have nonzero output) simultaneously. Another benefit of ReLU is that it reduces the likelihood of vanishing gradients; when z > 0, the gradient has a constant value, which results in faster learning of the DCNNs.
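A minimal sketch of the ResNet-style shortcut module described above, with batch normalization (introduced in the next subsection) and ReLU; the channel count is illustrative.

import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    # The stacked layers learn a residual mapping F(x); the identity shortcut
    # adds the input back, so the block outputs ReLU(F(x) + x).
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # identity shortcut connection

block = BasicResidualBlock(64)
y = block(torch.randn(1, 64, 56, 56))   # output has the same shape as the input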
Dropout. Dropout is a technique to reduce overfitting, which sets the output of each hidden neuron to zero with some probability. The neurons which are "dropped out" in this way do not contribute to the forward pass and do not participate in back-propagation during training. Every time an input is presented, the neural network samples a different architecture, but all these architectures share weights. Dropout is employed in the fully connected layers of AlexNet and VGG. Batch Normalization. Batch normalization [38] speeds up the training process and improves accuracy by controlling the input distribution across layers. To this end, the distribution of the layer input activations is normalized such that it has a zero mean and a unit standard deviation, which can be described as y = γ (x − μ)/√(σ² + ϵ) + β, where μ and σ indicate the mean and standard deviation of the distribution of layer input activations, γ and β are parameters that can be learned from training, and ϵ is a small constant to avoid numerical problems. Softmax. The softmax function is employed after the output layer, which is a fully connected layer with K units. Here K indicates the number of classes in the image classification task, with the same meaning as in equation (3). The output of the softmax represents a probability distribution over all the predicted classes, computed by p_i = exp(x_i) / Σ_j exp(x_j), where x_i represents the output of the i-th unit in the last fully connected layer and i (and j) range from 0 to K − 1. 2.6. Data Augmentation. By artificially enlarging the dataset using label-preserving transformations [35,39], data augmentation is the easiest way to reduce overfitting on image data. There are three forms of data augmentation in our classification system: feature normalization, image resizing/cropping, and image horizontal flipping. It has been proved that feature normalization can make gradient descent converge faster [38]. During both the training phase and the test phase, when image data are fed into the system, the system performs feature normalization for each channel of the image: x_c′ = (x_c − μ_c)/σ_c, where x_c indicates the c-th channel of the input image; μ_c and σ_c indicate the mean and standard deviation of the c-th channel among all the images in the training set, respectively; and x_c′ indicates the c-th channel of the normalized input image. Pipeline and Experiment Details. All the DCNNs in this paper are implemented with the PyTorch deep learning framework. For the GoogLeNet v3 network, the input image is first resized to 342 × 342 and then cropped to 299 × 299; for the other networks, the input image is resized to 256 × 256 and then cropped to 224 × 224. To prevent substantial overfitting [35], different methods of cropping are employed during the training phase and the test phase. During the training phase, random cropping is employed by extracting random 224 × 224 patches (299 × 299 for the GoogLeNet v3 network) from the 256 × 256 images (342 × 342 for the GoogLeNet v3 network); these patches are then randomly horizontally flipped and fed into the network for training. During the test phase, each image in the test set only needs to be predicted once, and the foreground organisms are more likely to appear in the center of the image, so only center cropping is employed. All the first convolutional layers of the DCNNs in this paper have three channels, corresponding to the three channels of an RGB image. Except for GoogLeNet v3, all the inputs to the DCNNs are fixed-size 224 × 224 × 3 images; for GoogLeNet v3, the input image size is fixed to 299 × 299 × 3.
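The resizing, cropping, flipping, and per-channel normalization described above can be expressed with torchvision transforms; in this minimal sketch the 256→224 sizes follow the text (342→299 for GoogLeNet v3), while the mean and standard deviation values are placeholders standing in for statistics computed from the training set.

from torchvision import transforms

mean = [0.5, 0.5, 0.5]       # placeholder per-channel training-set mean
std = [0.25, 0.25, 0.25]     # placeholder per-channel training-set standard deviation

train_transform = transforms.Compose([
    transforms.Resize(256),              # 342 for GoogLeNet v3
    transforms.RandomCrop(224),          # 299 for GoogLeNet v3
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean, std),     # per-channel feature normalization
])

test_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),          # only center cropping at test time
    transforms.ToTensor(),
    transforms.Normalize(mean, std),
])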
If a single-channel gray image is input to a DCNN, it is converted to an RGB image with three identical channels whose values are copied from the single-channel image. For more details of our experiments, please visit our open-sourced repository BioTL [58] on GitHub. Training from Scratch. The DCNN training procedure generally follows Krizhevsky et al. [35]. The initialization of the network weights is important because bad initialization can stall learning due to the instability of gradients in deep networks [36]. The biases are initialized with zero, and the weights in all the convolutional layers are initialized with N(0, 2/n), where n is the product of the size and the number of channels of the filters in the layer. A weight decay of 10^−4 and a minibatch size of 16 are used. The learning rates of AlexNet and VGG-16 are both initialized to 10^−3, while the learning rates of all other DCNNs are initialized to 10^−2. Starting from the initial learning rate, all DCNNs are trained for up to 300 epochs, with the learning rate divided by 10 every 100 epochs. Cross-Domain Transfer Learning 2.9.1. Fine-Tuning on ImageNet. To fully utilize the potential of DCNNs with small amounts of data, we use ImageNet as the source domain and apply transfer learning to transfer the knowledge learned from ImageNet to the target domain. The data augmentation operations are the same as for training from scratch. Instead of initializing all the weights randomly, the weights (except those of the last fully connected layer) are initialized with the weights learned from the ImageNet dataset. Because the number of classes in the target task may differ from ImageNet's 1000 classes, which corresponds to the output dimension of the last fully connected layer, the weights of the last fully connected layer in the pretrained model are dropped. Multiple Transfer Learning. To exploit deep transfer learning for biology image classification, a multiple transfer learning scheme is designed. The multiple transfer learning scheme applies transfer learning several times on multiple source domains to observe the effect of the cross-domain setting. For example, at first a DCNN model is trained on the Flowers17 dataset, which is considered the source domain. Secondly, all the weights of the trained model except the last fully connected layer are used to initialize a new model with the same architecture. This is because the dimension of the output of the last fully connected layer corresponds to the number of classes in the classification task; since the number of classes in the source domain is often different from that in the target domain, the last fully connected layer needs to be rebuilt to fit the new task. Finally, the new model with initialized weights is trained on the target domain dataset, such as QUT Fish. In practice, first of all, the ImageNet dataset is used as the source domain and the pretrained models are fine-tuned on the five target domain datasets (Flowers17, Flowers102, Plant Seedlings, PlanktonSet 1.0, and QUT Fish). After that, to exploit the effects of different distances between species from the source domain and the target domain, different combinations of the five datasets are chosen for transfer learning. Multistage Transfer Learning. There may be a huge difference between the source domain dataset and the target domain dataset, so that the knowledge learned from the source domain cannot be well transferred.
If the data in the intermediate domain can adapt the learned features to fit the target domain, this hindering effect will not be particularly noticeable, or the performance may even be improved. To make the knowledge learned from the source domain more transferable, multistage transfer learning is proposed. To perform multistage transfer learning, an intermediate domain needs to be added between the source domain and the target domain. In Figure 1, a diagram demonstrates the multistage transfer learning framework: the "CONV 1" to "CONV N" blocks indicate the N convolutional layers in the DCNN model, and the "FC" block indicates the fully connected layer of the DCNN model. As shown in Figure 1, the proposed multistage transfer learning consists of three stages: pre-pretrain the models on ImageNet, which is considered the source domain; pretrain the models in an intermediate domain; and fine-tune the models in the target domain. We do not know how to find the best intermediate domain dataset, so we followed the multiple transfer learning scheme with a grid search to try different datasets as the intermediate domain. Considering the computational cost, multistage transfer learning is explored only on three models, ResNet-18, ResNet-34, and ResNet-50, which have similar structures but different depths. Datasets In this paper, to exploit cross-domain transfer learning, several datasets that come from different domains are chosen, including Oxford Flowers, Plant Seedlings, PlanktonSet 1.0, and QUT Fish. Oxford Flowers. There are two versions of the Oxford Flowers dataset, Oxford Flowers 17 (Flowers17) and Oxford Flowers 102 (Flowers102). Flowers17 contains 17 classes of flowers, with 80 images in each class; the classes were chosen such that they cannot be distinguished solely by color. The Flowers102 dataset consists of 102 classes represented by 40 to 258 images per class and 8189 images in total. About 45% of the Flowers17 images are also part of Flowers102, so Flowers17 is not simply a subset of Flowers102. Image examples of these two datasets are shown in Figures 2 and 3, in which the images in the same row come from the same class and images from different rows come from different classes. Following the recommendations in the official dataset documents, the datasets are split into training, validation, and test sets. Image segments extracted from the raw data contain 60,736 images in total, sorted into 121 plankton classes and split into a training dataset and a test dataset with a ratio of 1:1. The images obtained with the camera were already processed by a segmentation algorithm to classify and isolate individual organisms and then cropped accordingly, as can be seen in Figure 5. The image samples demonstrate that there is high intraclass variance and small interclass variance among some plankton species. QUT Fish. The QUT Fish dataset [60] consists of 3960 images collected from 482 fish species. The data contain real-world images of fish captured in conditions defined as "controlled", "in situ", and "out-of-the-water", shown in Figure 6. Since "controlled" images are captured with a controlled background and high quality, when splitting the dataset, the "controlled" images tend to be placed in the training set while the "in situ" and "out-of-the-water" images, with low quality and pose variations, tend to be placed in the test set. Finally, the QUT Fish dataset is split into a training set and a test set with a ratio of 1:1.
Because some classes in this dataset contain only two image examples, only 2-fold cross-validation can be applied to it. Because the amount of training data plays a crucial role in training DCNNs, the total number of training examples and the average number of training samples per class for the above datasets are listed in Table 2. From Table 2, the scales of all the datasets are small compared to ImageNet, which contains more than one million training image examples. For QUT Fish, the data are extremely scarce, since on average there are only 4 training samples in each class. In Table 1, it is obvious that with the increase of the number of layers (depth), the performance of the DCNN on ImageNet gets better and better; at the same time, the number of parameters also increases along with the layers. Evaluation In this paper, accuracy and F-measure are used as the evaluation metrics. Accuracy is the most intuitive and frequently used performance measure of a classification task: it is simply the ratio of correctly predicted samples to the total number of samples, so it can be easily calculated. Accuracy is a good measure if the datasets are balanced; however, for imbalanced datasets, accuracy may not reflect the real performance of the classifier. Most of the datasets used in this paper are imbalanced, such as Flowers102, Plant Seedlings, PlanktonSet 1.0, and QUT Fish. The distributions of these datasets can be seen in Figure 7. To evaluate the classification performance on imbalanced datasets, the F-measure is used as another metric. Both accuracy and F-measure can be calculated from the confusion matrix, which is a table containing information about actual and predicted classifications. As shown in Table 3 (refer to Table 1 in Ref. [12]), each row of the confusion matrix represents the instances in a predicted class while each column represents the instances in an actual class. For a binary classifier, according to the true condition and predicted condition, the confusion matrix consists of four parts: true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). In this way, several measures can be derived from the confusion matrix: Precision = TP/(TP + FP) and Recall = TP/(TP + FN). The F-measure is the harmonic mean of precision and recall, F = 2 · Precision · Recall/(Precision + Recall); therefore, the F-measure takes both FP and FN into account and is more useful than accuracy when we have an uneven class distribution. Results The multiple transfer learning scheme is designed to exploit deep transfer learning on the Flowers17, Flowers102, Plant Seedlings, PlanktonSet 1.0, and QUT Fish datasets. When performing transfer learning, to make a comparison, DCNN models are also pretrained on ImageNet and then fine-tuned on the five datasets. For multiple transfer learning, one dataset from the above five datasets is chosen as the source domain and another as the target domain. There are relatively more data in PlanktonSet 1.0, so the DCNNs that achieve better results on PlanktonSet 1.0 tend to be deeper. On PlanktonSet 1.0, the best accuracy of 77.40% is achieved by ResNet-152 and the best F-measure of 0.6593 is achieved by ResNet-101. Cross-Domain Transfer Learning. To make a comparison, experiments of fine-tuning models pretrained on ImageNet are performed. The multiple transfer learning scheme is designed to apply transfer learning on several cross-domain datasets.
Similar to fine-tuning on ImageNet, the weights from the model pretrained on the source domain are used to initialize a new DCNN, which is then fine-tuned on the target domain dataset. To adapt the features of models pretrained on ImageNet to fit the target domain well, multistage transfer learning is proposed by adding an intermediate domain. In Table 8, the gains of the cross-domain transfer learning methods compared with training from scratch are listed. In the "Transfer process" column, the entries in boldface indicate the source domain dataset or intermediate dataset with the best classification results; in the "Accuracy (%)" column, the entries in boldface indicate the best performance with the highest accuracy; in the "F-measure" column, the entries in boldface indicate the best performance with the highest F-measure. Fine-Tuning on ImageNet. The results of fine-tuning on ImageNet are shown in Tables 9 and 10. Compared with the training-from-scratch results in Tables 4 and 5, for the Flowers17, Flowers102, and QUT Fish datasets, every single model achieves a better performance after fine-tuning on ImageNet. For Flowers17, after fine-tuning on ImageNet, 9.52% accuracy and 0.0911 F-measure are gained on average among all models; for Flowers102, there is a much better result, with 35.65% accuracy and 0.3695 F-measure gained on average after fine-tuning on ImageNet; for QUT Fish, there is also a better result, with 17.96% accuracy and 0.1746 F-measure gained on average over training from scratch. For the Flowers17 dataset, using Flowers102 as the source domain dataset yields a gain of 0.66% accuracy and 0.0070 F-measure on average, which is much poorer than using PlanktonSet 1.0 as the source domain dataset; for Flowers102, using Flowers17 as the source domain yields a gain of 8.44% accuracy and 0.0871 F-measure on average, which is also poorer than using PlanktonSet 1.0 as the source domain dataset. For Plant Seedlings, Table 12 shows that using Flowers17 as the source domain dataset yields a gain of 0.21% accuracy and 0.0017 F-measure on average, while using Flowers102, PlanktonSet 1.0, or QUT Fish as the source domain dataset decreases the results. For PlanktonSet 1.0, Table 13 shows that there is no clear evidence that using multiple transfer learning improves the results all the time.
On average, using Flowers17 as the source domain dataset gives a decrease of 0.06% accuracy and 0.0006 F-measure; using Flowers102 as the source domain dataset gives a decrease of 0.09% accuracy and 0.0023 F-measure; and using Plant Seedlings as the source domain dataset gives a gain of 0.08% accuracy and 0.0002 F-measure. In fact, even using ImageNet as the source domain dataset decreases the results, by 0.17% accuracy and 0.0005 F-measure on average. Discussion In this paper, the multiple transfer learning scheme and the multistage transfer learning method are introduced to exploit cross-domain transfer learning for biology image classification. Our aim is to address the problem that limited labeled data may not fully utilize the feature representation power of DCNNs. In order to achieve this, the multiple transfer learning scheme is designed to explore cross-domain transfer learning, and multistage transfer learning is proposed to learn high-level patterns from different domains so that the learned features fit the target domain. Table 1 shows that, with the increase of the DCNN's depth, the performance on ImageNet gets better and better; but meanwhile, the parameters of the network also increase dramatically, which makes training the network more difficult, especially when the amount of data is scarce. In order to compare the performances of different models on different datasets and observe their trends intuitively, the performances of the different models in Figure 8 are normalized and translated. The depth, the number of parameters, and the ImageNet performance of each model are also added to Figure 8, likewise normalized and translated. In Table 2, it can be seen that the scales of the datasets in this paper are very small compared to ImageNet. It can also be seen that after the depth of the network reaches a certain level, its performance no longer improves as the depth of the network increases; most of the best results on the datasets are achieved with ResNet-18 or ResNet-34. DCNNs can learn some high-level patterns that are general, so transfer learning can be used to transfer these learned high-level patterns to a target domain with limited data. When the data amount in the target domain is small, the data amount in the source domain plays an important role in the transfer learning performance. For example, there are more data in PlanktonSet 1.0, so when PlanktonSet 1.0 is used as the source domain dataset, the multiple transfer learning results tend to be better (Tables 6, 11 and 7). For example, in Table 11, although there is a closer biological distance between Flowers17 and Flowers102, the performance of using Flowers17 as the source domain dataset is worse than using PlanktonSet 1.0. When the data amount in the target domain is large, the effect of different biological distances between the species in the source domain and the target domain becomes apparent (Table 12).
In Table 12, although PlanktonSet 1.0 contains more data than all the other datasets, using PlanktonSet 1.0 as the source domain dataset did not give the best result. Multistage transfer learning is proposed to address the problem caused by a big gap between the source domain and the target domain. From Table 8, it can be seen that, since there is a huge difference between ImageNet and PlanktonSet 1.0, multistage transfer learning with cross-domain datasets can improve the performance of fine-tuning on ImageNet. But when performing multistage transfer learning (…). Conclusions In this paper, the multiple transfer learning scheme is designed to exploit deep transfer learning for biology cross-domain image classification. By pretraining the DCNN model in different source domains, the results on the target domain dataset can be improved significantly. The experimental results show that even out-of-domain data are effective when the target domain data are insufficient. A multistage transfer learning method is also proposed, which can improve the performance of DCNNs when there is a huge difference between the source domain and the target domain. A limitation of multistage transfer learning is that the datasets in the intermediate domain should be carefully selected; otherwise, the final performance may be hindered. However, it is difficult to find the best way to search for the optimal dataset as the intermediate domain, and this needs further study. In our view, searching for datasets that have low-level characteristics similar to those of the target domain may be a good choice. Since DCNNs can learn some high-level domain-independent features, the ideas of multiple transfer learning and multistage transfer learning can be widely applied to biology image classification and other fields.
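As a closing illustration of the multistage scheme (source domain → intermediate domain → target domain), here is a minimal PyTorch-style sketch; train_on() is only a placeholder for an ordinary supervised training loop, and the dataset and class-count choices are examples rather than the configurations evaluated above.

import torch.nn as nn
from torchvision import models

def rebuild_classifier(model, num_classes):
    # The last fully connected layer is rebuilt whenever the number of classes changes.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

def train_on(model, dataset_name):
    # Placeholder for a supervised training loop on the named dataset.
    print("training on", dataset_name)
    return model

# Stage 1: weights pre-pretrained on the source domain (ImageNet).
model = models.resnet18(pretrained=True)

# Stage 2: pretrain in an intermediate domain (e.g. PlanktonSet 1.0, 121 classes).
model = rebuild_classifier(model, num_classes=121)
model = train_on(model, "PlanktonSet 1.0")

# Stage 3: fine-tune in the target domain (e.g. QUT Fish, 482 classes).
model = rebuild_classifier(model, num_classes=482)
model = train_on(model, "QUT Fish")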
7,992.6
2021-12-15T00:00:00.000
[ "Computer Science" ]
Assessment of MERIS reflectance data as processed with SeaDAS over the European seas The uncertainties associated with MERIS remote sensing reflectance (RRS) data derived from the SeaWiFS Data Analysis System (SeaDAS) are assessed with field observations. In agreement with the strategy applied for other sensors, a vicarious calibration is conducted using in situ data from the Marine Optical BuoY offshore Hawaii, and leads to vicarious adjustment factors departing from 1 by 0.2% to 1.6%. The three field data sets used for validation have been collected at fixed stations in the northern Adriatic Sea and the Baltic Sea, and in a variety of European waters in the Baltic, Black, Mediterranean and North Seas. Excluding Baltic waters, the mean absolute relative difference |ψ| between satellite and field data is 10-14% for the spectral interval 490-560 nm, 16-18% at 443 nm, and 24-26% at 413 nm. In the Baltic Sea, the |ψ| values are much higher for the blue bands characterized by low RRS amplitudes, but similar or lower at 560 and 665 nm. For the three validation sets, the root-mean-square differences decrease from approximately 0.0013 sr−1 at 413 nm to 0.0002 sr−1 at 665 nm, and are found similar to or lower than those obtained for SeaWiFS or MODIS-Aqua. As derived from SeaDAS, the RRS records associated with these three missions thus provide a multi-mission data stream of consistent (…). Introduction Considering the potential offered by ocean color remote sensing to investigate marine ecosystems, several space agencies have placed optical sensors in space during the last 15 years. The Medium Resolution Imaging Spectrometer (MERIS, [1]) was launched on board the Envisat platform by the European Space Agency (ESA) on 1 March 2002, and has since provided global coverage of the biosphere with observations acquired at 15 spectral bands in the visible and near-infrared (NIR). Even though the time series derived from the various ocean color missions are in general consistent, these satellite products show varying levels of differences (e.g., [2,3,4,5]) that result from a complex set of factors, including different instrument designs, calibration strategies, atmospheric correction schemes, or bio-optical algorithms. An approach to minimize these differences is to adopt as many common elements as possible for the processing of the data streams. The SeaWiFS Data Analysis System (SeaDAS, [6]) developed by the National Aeronautics and Space Administration (NASA) offers the possibility of processing satellite ocean color imagery from various missions in a common framework, and has been extensively applied to imagery collected by the Sea-viewing Wide Field-of-View Sensor (SeaWiFS, [7]) and the Moderate Resolution Imaging Spectroradiometer (MODIS, on board the Terra and Aqua platforms [8]). This work investigates the application of SeaDAS to process MERIS data with a focus on European seas. After introducing the data used for the analysis, the vicarious calibration of MERIS is presented. Then, the uncertainties associated with MERIS-derived reflectance spectra are documented through comparison with field data collected in European waters. MOBY data The site used for vicarious calibration is the Marine Optical BuoY (MOBY) operating in deep oligotrophic waters offshore Lanai (Hawaii, [9]), which offers suitable environmental conditions [10] and accurate measurements [11] for that task.
Water-leaving radiance (or alternatively reflectance) values are derived from underwater hyper-spectral radiometric measurements (340-955 nm) at fixed depths along with above-water solar incident irradiance, and are convolved with the spectral response of the satellite sensor to yield MERIS-like radiance values. Bidirectional effects (dependence on illumination conditions and seawater optical anisotropy) can be accounted for with look-up tables dependent on the chlorophyll a concentration [12], in turn computed from the remote sensing reflectance with the standard empirical algorithm OC4v6 [13,14]. The measurement series up to deployment 238 (September 2007) constitutes a consolidated set of radiometric data also used for the vicarious calibration of SeaWiFS and MODIS, and is adopted here. Field data in European seas Radiometric field measurements for validation have been collected through two different sources. Autonomous observations are obtained at sites included in the Ocean Color component of the Aerosol Robotic Network (AERONET-OC, [15]): the Acqua Alta Oceanographic Tower (AAOT) in the northern Adriatic Sea, the Gustav Dalén Lighthouse Tower (GDLT) in the Baltic Proper, and the Helsinki Lighthouse Tower (HLT) in the Gulf of Finland (see Fig. 1 for positions). These sites operate the SeaWiFS Photometer Revision for Incident Surface Measurements (SeaPRISM [16]), a CE-318 sunphotometer (CIMEL Electronique, Paris, France) that performs sea-viewing radiance measurements following a common protocol [17]. At AAOT, continuous data collection started in April 2002, while at GDLT and HLT data have been collected in the summer season (approximately April-May to September-October) since 2005 and 2006, respectively. The derived product of interest for the current work is the spectrum of remote sensing reflectance R RS (λ) at center wavelengths close to those of the satellite ocean color sensors (see also Section 2.3 for a more complete definition of R RS). This quantity is proportional to the normalized water-leaving radiance, with the normalization entailing a correction for bidirectional effects [12]. All data are so-called Level-2 records, for which final calibration and quality checks have been applied. An uncertainty budget conducted at the AAOT site led to uncertainties of ∼5% in the blue-to-green spectral domain and ∼8% in the red [17]. As with regular AERONET sites [18], direct solar irradiance measurements are used to derive the aerosol spectral optical thickness τ a (λ), from which the Ångström exponent α is computed. The procedure to derive τ a data is independent of that followed for R RS, and on average there are more τ a data that fulfill the related AERONET quality checks. Validation results will be illustrated separately for the AAOT site, representative of a coastal environment with a moderate influence of sediments and dissolved organic matter, and for the two Baltic sites, which are more characteristic of highly absorbing waters. These autonomous measurement systems are optimal for gathering validation data with regular frequency, but they are tied to fixed locations. They are ideally complemented by observations collected during ship campaigns, which provide a more extensive view of various water bodies. The Bio-Optical mapping of Marine Properties (BiOMaP) program [19] has constructed a highly consistent data set of apparent and inherent optical properties (AOPs and IOPs, respectively) since 2004 (not counting the first proof-of-concept campaign in 2000).
The measurement stations cover a significant part of the variability in optical conditions found in European seas, from oligotrophic to very turbid waters ([20], Fig. 1), and the data set has proved useful for the validation of satellite products [19]. In this case, the values of R RS are derived from radiometric in-water profiles at center wavelengths of 412, 443, 490, 510, 555 and 665 nm. Their uncertainties are estimated as ∼5% in the blue-to-green and ∼7% in the red [19,21]. These data are accompanied by the determination of the chlorophyll a concentration (Chla) and a comprehensive set of IOP measurements, including the absorption coefficients of pigmented particles, a ph, non-pigmented particles, a npp, and chromophoric dissolved organic matter (CDOM), a cdom, and the backscattering coefficient of particles, b bp. MERIS data and processing The MERIS data used in the present work are Level-1b top-of-atmosphere (TOA) radiance measurements resulting from the MERIS 3rd reprocessing [22], for which the calibration history has been revisited. These data have then been processed with SeaDAS (version 6.2) with an atmospheric correction scheme originally devised by Gordon and Wang [23]. The scheme has undergone numerous evolutions [24,25], including an updated set of aerosol models [26] and bio-optical modelling in the NIR [27]. The output of the atmospheric correction is the spectrum R RS (λ) at 413, 443, 490, 510, 560, 620, 665 and 681 nm, which is directly comparable to the field data. Two other channels, at 754 and 865 nm, are used for the selection of aerosol models and the determination of the aerosol optical thickness. The τ a spectrum serves to compute the Ångström exponent α. Following Franz et al. [25], the top-of-atmosphere radiance at wavelength λ, L t (λ), is written as (with dependencies other than λ omitted) L t = [L r + L a + t d v (L f + L w)] t g v t g s f p (1), where L r, L a, and L f are the radiances contributed by air molecules in the absence of aerosols (Rayleigh scattering), by aerosols (including their interactions with air molecules), and by sea foam, respectively. The term t d v is the diffuse transmittance for the atmospheric path from the sea surface to the sensor; t g v and t g s represent the gaseous transmittance from the sea surface to the sensor and from the sun to the surface, respectively. Finally, f p is a term correcting for polarization effects. The main output of the atmospheric correction scheme is the water-leaving radiance L w. Contributions from sun glint and the correction of the so-called smile effect [28] are excluded from Equation (1). The terms L w and R RS are related by R RS = f b f λ L w / (μ s F s f s t d s) (2), where μ s is the cosine of the solar zenith angle θ s, F s is the extra-terrestrial solar irradiance, f s is a correction for the variations in the sun-Earth distance, and t d s (λ) is the diffuse transmittance for the path from the sun to the sea surface, with these terms jointly operating a normalization by the solar illumination. The factor f b (λ) accounts for bidirectional effects [12], and f λ corrects out-of-band contributions to L w [29]. The field data are acquired at center wavelengths which are slightly different from those of MERIS. For a direct comparison between field and satellite values of R RS, a band-shift correction is performed on the field value when their center wavelengths differ by a few nm. The field value R RS (λ 0) is expressed at λ as R RS (λ) = R RS (λ 0) · [f(λ)/Q(λ) · b b (λ)/a(λ)] / [f(λ 0)/Q(λ 0) · b b (λ 0)/a(λ 0)] (3), with the terms defined below. This approach has already been described in various studies (e.g., [30,31]), and is only briefly introduced here.
R RS is considered a function of b b /a, ratio of total backscattering and absorption, f , that relates the underwater irradiance reflectance to the ratio b b /a, and Q, the ratio of underwater irradiance and radiance. The bidirectional effects having been corrected, f and Q are expressed with null solar and viewing zenith angles [12]. The absorption coefficient a is written as the sum a w +a ph +a npp +a cdom , and b b as b bw +b bp , with a w and b bw absorption and backscattering associated with pure water, respectively. The values of Chla (input to f /Q tables [12]) and IOPs at specific wavelengths are derived from regional empirical algorithms in the case of the AERONET-OC sites [32], whereas they are determined from field data in the case of the BiOMaP stations [19]. The IOPs of the various components are expressed at other wavelengths using assumed spectral shapes [32,19]. For the AERONET-OC sites, a w is a fixed spectrum [33], and b bw has been computed with a salinity of 35 and 7 psu for AAOT and the Baltic sites, respectively [34]. For the BiOMaP data, a w and b bw are varied as a function of the salinity and temperature measured in situ [33,35,34]. Considering the wavelengths associated with field measurements, validation will be presented for R RS at wavelengths in the interval 413 to 665 nm. If a SeaPRISM record lacks the band at 500 nm (a channel only included in the early SeaPRISM systems), the associated synthetic R RS at 510 nm is not computed, leading to fewer match-ups at AAOT with respect to the other wavelengths, and to few match-ups for GDLT and HLT (so that the results for 510 nm are not presented for these 2 sites). As opposed to the data from AERONET-OC and BiOMaP utilized for validation activities, MOBY data employed for vicarious calibration do not require band-shift corrections because they are derived from hyper-spectral measurements combined with the spectral response of each MERIS band (see Section 2.1). Match-up selection protocol for validation The MERIS scenes at the locations of field observations are processed and a square of 3x3 pixels is extracted for analysis from the Level-2 files. The average R RS computed over this macro-pixel is deemed the most representative value for comparison with the field observation. A match-up (i.e., concurrent field and satellite data) is retained for validation if it satisfies the following selection criteria: i) the time difference between satellite over-pass and field measurement is within an interval ±∆t, ii) none of the 9 pixels is affected by the standard flags of the processing code which mostly exclude an atmospheric correction code failure, cloud, Sun glint or stray light conditions and high solar or viewing zenith angles [36], iii) R RS averaged over the macropixel is higher than 0 at all channels, and iv) the coefficient of variation (CV, ratio of standard deviation and average) of the MERIS R RS at selected wavelengths is lower than a threshold arbitrarily set to 20% [37,38]. The bands selected for the CV test are those between 490 and 560 nm, wavelengths associated with a significant R RS signal across most natural waters (whereas R RS can be near 0 in the blue and red domains for CDOM-dominated and oligotrophic waters, respectively). 
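A minimal sketch of the macro-pixel part of the match-up selection just described might look as follows. The data layout (a 3×3 extraction per band, a boolean flag mask) and the function name are assumptions made for illustration; the time-difference criterion (i) is handled outside this routine.

```python
import numpy as np

def select_matchup(rrs_box, flags_box, cv_bands, cv_threshold=0.20):
    """Apply the macro-pixel criteria of the match-up selection to a 3x3 extraction.

    rrs_box   : array of shape (n_bands, 3, 3) with the MERIS R_RS values
    flags_box : boolean array (3, 3), True where a standard exclusion flag is raised
    cv_bands  : indices of the bands used for the coefficient-of-variation test
    Returns the macro-pixel mean spectrum and whether the match-up is retained.
    """
    if flags_box.any():                          # criterion ii: no flagged pixel in the 3x3 box
        return None, False
    mean_spectrum = rrs_box.reshape(rrs_box.shape[0], -1).mean(axis=1)
    if not np.all(mean_spectrum > 0.0):          # criterion iii: positive averaged R_RS at all channels
        return None, False
    for b in cv_bands:                           # criterion iv: CV of the 9 pixels below the threshold
        vals = rrs_box[b].ravel()
        if vals.std() / vals.mean() > cv_threshold:
            return None, False
    return mean_spectrum, True
```

The same routine would be applied separately to the satellite τ a extraction when aerosol match-ups are screened, with the CV test computed on τ a instead of R RS .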
The value of ∆t for the different data sets is a compromise between the requirement of obtaining a significant number of match-ups, the number of measurements available during the day at a given location (there are potentially multiple SeaPRISM observations at the AERONET-OC sites), and the general conditions found at the measurement sites. It is set to 2-h for the AERONET-OC sites (if several observations are available within the interval ∆t, the closest record is selected for validation), and to 6-h for the BiOMaP set. In the latter case, ∆t is large, particularly for coastal waters, but this is justified to encompass field measurements collected in the afternoon (MERIS overpass time is early in the morning) and thus obtain a fairly large match-up set; moreover, tests with ∆t of 3 or 4-h do not significantly affect the average validation statistics (see Section 4.3). The match-up selection protocol is slightly modified when comparing satellite and field aerosol products. First ∆t is set to ±1-h (as used in some previous works [38,39,40]) and there needs to be at least 2 AERONET measurements within that time window. The test on the CV is conducted on the aerosol optical thickness for the satellite τ a (865) in space (i.e., applied to the 3x3-pixels of the satellite τ a ) and for the AERONET τ a (870) in time (i.e., applied to the field data collected in the ±∆t 2-h interval). In comparing satellite and field τ a data, differences in center wavelengths are corrected using a 2 nd -order polynomial approximation for the τ a spectrum [41]. Once the ensemble of N match-ups is selected, the differences between MERIS, x M , and field values x f are quantified by: with |ψ| and ψ respectively the mean absolute relative difference and mean relative difference (or bias) given in percent. The root-mean-square difference (rmsd) is also computed: providing a measure of the uncertainty of R RS in units of remote sensing reflectance (sr −1 ). Vicarious calibration Considering R RS accuracy goals (as low as 5% in clear waters [42]) and the generally small contribution of the water leaving radiance to the TOA radiance budget (usually smaller than 10%), vicarious calibration aims to adjust the system sensor+atmospheric correction scheme, by removing residual uncertainties associated with the calibration of the sensor in space and the modeling of the radiative transfer processes in the atmosphere [43,44]. As a corollary, the resulting vicarious calibration coefficients are valid only for the considered sensor and code. Vicarious calibration for MERIS is conducted, as for SeaWiFS and MODIS [25], by forcing Eq. (1) to reproduce the field value of the water leaving radiance L w for each match-up i selected for the task. This is done for each band in the visible by introducing a multiplicative factor g i for the TOA radiance L t , called vicarious gain or adjustment factor (this implicitly treats the sensor as an integrated system combining its various constituents). The final vicarious adjustment factor is taken as the average g computed over the semi-interquartile range (SIQR) of the population (g i ) i=1,N in order to exclude the possible influence of outliers. Table 1 shows the spectrum of g as well as the standard deviation calculated with the entire set of coefficients. In this exercise, the vicarious gains have been kept at unity for the NIR bands. 
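The averaging of the per-match-up gains over the semi-interquartile range can be sketched as below. Because the exact filtering rule is not reproduced above, keeping only the gains within ± SIQR of the median before averaging is one plausible reading and should be treated as an assumption.

```python
import numpy as np

def final_vicarious_gain(gains):
    """Average the per-match-up gains g_i over the semi-interquartile range (SIQR),
    i.e., keep only values within +/- SIQR of the median before averaging (an assumed
    reading of the procedure), to limit the influence of outliers."""
    g = np.asarray(gains, dtype=float)
    q1, q3 = np.percentile(g, [25, 75])
    siqr = 0.5 * (q3 - q1)                      # semi-interquartile range
    med = np.median(g)
    kept = g[np.abs(g - med) <= siqr]
    return kept.mean(), kept.size, g.std()      # final gain, retained scenes, s.d. of full set
```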
For SeaWiFS and MODIS, the gain is assumed unity at the longer NIR band and computed at the shorter NIR band so that a maritime aerosol model is selected by the atmospheric correction over targets located in the South Pacific and Indian Ocean subtropical gyres. In this first attempt at presenting a vicarious calibration gain set for the system MERIS+SeaDAS, there is no strong reason to consider g significantly different from 1 at 754 nm. When computing L t in forward mode with measured L w , g is found equal to 0.9995 and 1.0003 at 754 and 865 nm, respectively. Moreover, the average Ångström exponent α found by MERIS at the MOBY site (average of 1.05, standard deviation, s.d., of 0.43, N=221) is very consistent with that derived from SeaWiFS (1.07, s.d. 0.48) and MODIS onboard Aqua (0.92, s.d. 0.41). A set of N=102 MERIS scenes fulfills the stringent requirements [25] for computing a vicarious calibration adjustment factor (with 30 in the SIQR). The vicarious calibration adjustment factors depart little from 1, from less than -0.2% in the red to the largest correction of -1.6% at 413 nm (Table 1). The standard deviation of the gain population decreases from 0.011 to 0.006 from the blue to the red domains, a variability comparable to that found for SeaWiFS (0.009 to 0.007, [25]). As verification, the R RS spectra from MERIS and MOBY have been compared for all available match-ups. Between 413 and 510 nm, the mean absolute relative difference |ψ| varies between 6% and 8%. It is slightly higher (11%) at 560 nm (where R RS is lower), and reaches 62% at 665 nm, a wavelength for which the signal is very near zero.

Validation of MERIS reflectance

The locations of the 3 AERONET-OC sites used for validation, as well as the BiOMaP measurement stations selected as match-ups, are displayed in Fig. 1. All statistics are reported in Table 2. Match-ups at 510 nm are fewer or absent because the SeaPRISM 500 nm channel is not always available (see Section 2.3).

AAOT

The R RS spectra found at AAOT cover a fairly large range of optical conditions [17] and AAOT is thus an informative test site for atmospheric correction schemes. The 194 match-ups found at this site display a relatively good agreement with field data (Table 2, Fig. 2). A few outliers can be seen at the low end of R RS in the blue. Between 490 and 560 nm, |ψ| amounts to 10% with a negligible bias, and r 2 is as high as 0.93-0.95. There are only 90 match-ups at 510 nm but the statistics are consistent with the neighboring bands, except for a slight increase in rmsd (0.00089 sr −1 ) that might be due to the smaller statistical basis and to uncertainties associated with the band shift correction used to derive a synthetic value of R RS (510) from the SeaPRISM data. The |ψ| value reaches 25% at 413 nm, and 35% at 665 nm. At all wavelengths, the bias does not exceed 6% (except -10% at 665 nm). The availability of aerosol field measurements at AAOT also allows an assessment of the aerosol model selection by the atmospheric correction, and indirectly of the vicarious calibration in the NIR bands (see Section 3). Comparing the MERIS and field values of α (N=353), |ψ| is equal to 23%, with a negligible bias of 0% (or -0.049 in units of α). In terms of τ a , |ψ| varies from 28% at 443 nm to 45% at 865 nm, and rmsd (in units of τ a ) from 0.054 to 0.040 for the same bands.

Baltic sites

The two Baltic sites are associated with 85 match-ups, 43 at GDLT and 42 at HLT (Table 2, Fig. 3). It should be noted that the axes of Fig. 3
cover a range of R RS values that is considerably reduced with respect to Fig. 2. Indeed, the R RS values found at GDLT and HLT are usually small compared to those of other water bodies (except in the red) [17]. A consequence is the high values of relative differences in the blue, a spectral domain where the R RS values found for the match-ups are mostly below 0.002 sr −1 . Actually, for these sites the atmospheric correction occasionally returns negative values for R RS (413), and the exclusion of these records by the match-up selection process contributes to shifting the distribution of biases at 413 nm towards large positive values. Other elements that might contribute to higher uncertainties in the Baltic Sea include the large atmospheric masses associated with high latitudes and the specific bio-optical properties found in the basin. The |ψ| values are lower for longer wavelengths (11% at 560 nm). Still related to the low R RS amplitudes found at the Baltic sites, the spectrum of rmsd shows significantly smaller values than those observed at AAOT (as low as 0.00039 sr −1 at 490 nm, Table 2). As for AAOT, the α distribution found at the Baltic sites is almost unbiased with respect to field surface measurements. The relative bias is -1% for the GDLT site (N=85, |ψ| equal to 26%), and +2% for HLT (N=107, |ψ| equal to 30%). The |ψ| values are higher than at AAOT for τ a , approximately 50% and 75% at 443 and 865 nm, respectively (τ a for the validation set is significantly lower at the Baltic sites than at AAOT). Based on the analysis conducted with α, the atmospheric correction appears to select a proper representation of the Ångström exponent in different types of atmospheres.

BiOMaP

The BiOMaP data set yields 100 match-ups distributed in various basins (Fig. 1 and 4), with 45 match-ups in the Baltic Sea (24 and 21 in the northern and southern Baltic Sea, respectively), 22 in the western Black Sea, 15 in the Ligurian Sea, 12 in the eastern Mediterranean and 6 in the English Channel. If ∆t is reduced to 3-h, the number of match-ups becomes 64, with |ψ| decreasing by 6% at 413 nm, and 1.0% to 1.6% at the other wavelengths. From 490 to 665 nm, |ψ| varies between 11% and 19% (Table 2), and is higher in the blue. Large relative overestimates associated with Baltic stations are found in the lower range of R RS . Some of the related stations are located in the Gulf of Bothnia, which is characterized by extremely absorbing waters [45]. Considering that almost half the match-ups are found in the Baltic Sea, the comparison statistics are also presented for this subset as well as without the Baltic samples (Table 2). The statistics |ψ| without the Baltic data are very similar to those found at AAOT, from 12% at 560 nm to 23-24% at 413 and 665 nm, but, in contrast, the bias is negative across all wavelengths. The 27 match-ups found in the Mediterranean Sea (Ligurian Sea and eastern Mediterranean) display |ψ| of 11-14% between 443 and 560 nm, while |ψ| for the 22 match-ups in the Black Sea is 29% at 412 nm, 18% at 443 nm, approximately 12% at 490 and 510 nm, as low as 8% at 560 nm, and 17% at 665 nm. Both regional subsets show underestimates of R RS at almost all wavelengths (except at 665 nm for the Mediterranean stations, ψ equal to +2%), but they are less pronounced for the Black Sea stations (from -2% to -10% in the domain 413-560 nm).
The statistics obtained with the BiOMaP Baltic data share common elements with those found at GDLT and HLT: |ψ| is lowest in the green-to-red spectral domain (as low as 9% at 560 nm) and strongly increases in the blue in relation to large overestimates. The values of rmsd are very low for wavelengths longer than 490 nm, decreasing from 0.00044 sr −1 at 490 nm to 0.00011 sr −1 at 665 nm, while they are comparable to the rmsd found in the other European basins at 413 and 443 nm (which, combined with low R RS amplitudes, leads to high relative differences |ψ|).

Discussion

Similar validation statistics have been derived for the SeaWiFS and MODIS missions in previous studies [32,15,19,46]. The RMS difference (rmsd) is used here as a basis for comparison since it is less affected by the variations that might be found for relative differences (like |ψ|) when R RS amplitudes cover different ranges (particularly when they are low). Fig. 5 shows the rmsd found for the three match-up subsets described here and for the three satellite missions as obtained with consistent selection criteria over similar time periods. For the AAOT set, there is a local maximum observed in rmsd for MODIS at 531 nm, which might be at least partly due to the lower number of match-up points at that wavelength (201 versus 486 for the other wavelengths) and to the band shift correction that relies on SeaPRISM records at approximately 500 and 550 nm [47]. The results for that band are thus to be taken with more caution. Generally, the rmsd curves obtained for the three missions appear relatively consistent, even though some differences can be noticed and at least partly explained by the differences in the match-up sets as well as the various elements that are specific to each mission in terms of sensor design, observation geometry or processing code. The lowest values are usually shown for the Baltic sites GDLT and HLT, and the highest for AAOT. The rmsd spectra are broadly contained in an envelope decreasing from 0.0008-0.0015 sr −1 at 412-413 nm to 0.0002-0.0004 sr −1 in the red. The results obtained at corresponding bands can be compared for the 3 sensors using the letters M, A and S as superscripts for MERIS, MODIS and SeaWiFS, respectively. The ratio of rmsd associated with MERIS and MODIS (i.e., rmsd M /rmsd A ) is in the interval 0.69-0.83 for the Baltic sites (i.e., rmsd lower for MERIS), and in the interval 0.81-1.25 for the 2 other data sets, being noticeably larger than 1 only at 412 nm (1.25) in the case of BiOMaP. If the Baltic stations are excluded from the BiOMaP data set, this ratio is between 0.96 and 1.17. The rmsd found for SeaWiFS tends to be higher than for MERIS or MODIS, which might be explained by a lower signal-to-noise ratio for that mission. A similar comparison between MERIS and SeaWiFS can be made through ratios such as rmsd M (665)/rmsd S (670) for the Baltic sites, and rmsd M (413)/rmsd S (412) for the BiOMaP validation set. The ratio associated with BiOMaP considered without its Baltic stations is in the interval 0.73-1.01. Overall, using the rmsd metric, the uncertainties associated with MERIS are generally comparable with those of MODIS and lower than those of SeaWiFS.

Conclusion

This work is an early assessment of the use of SeaDAS to process MERIS imagery. A set of vicarious calibration coefficients has been derived at the MOBY site, which was used for the same purpose for SeaWiFS and MODIS. Using this set of coefficients is recommended for processing MERIS imagery with SeaDAS (version 6.2).
The validation data are associated with measurements collected in the European seas and cover a large gradient of optical properties, from oligotrophic areas to coastal sediment-dominated or CDOM-dominated waters. While recognizing that the accuracy of the atmospheric correction should still be improved, the encouraging results documented here provide solid ground for future developments aiming at fine-tuning the MERIS+SeaDAS system. Excluding the Baltic Sea, the mean absolute relative difference |ψ| is between 10% and 14% for the spectral interval 490-560 nm, 16-18% at 443 nm, and 24-26% at 413 nm. The |ψ| values are much higher for Baltic waters in the blue bands, but similar or lower at 560 and 665 nm. The validation statistics presented here show differences lower than those documented for MERIS R RS derived from the MEGS version 7.4 processor [30,48,49,50]. These differences are likely to be affected by a recent update (MEGS version 8 [51]) that includes vicarious calibration performed with in situ data from two target sites (MOBY and the BOUSSOLE system in the Ligurian Sea [52]). Importantly, the present validation results document uncertainties that appear at least as good as those associated with SeaWiFS and MODIS. The rmsd values given here for MERIS, as well as those documented for SeaWiFS and MODIS, are required information to generate merged records of R RS [53,2,54]. With a view to creating a multi-sensor data stream for the European seas, this work opens the way to processing imagery collected by the major ocean color satellite missions within a common processing environment, producing ocean reflectance spectra of very comparable accuracy.
6,792.8
2011-12-05T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Image Registration Algorithm Based on Convolutional Neural Network and Local Homography Transformation: To overcome the poor robustness of traditional image registration algorithms to illumination changes, and to address the low accuracy of learning-based image homography matrix estimation algorithms, an image registration algorithm based on a convolutional neural network (CNN) and local homography transformation is proposed. Firstly, to ensure the diversity of samples, a sample and label generation method based on moving direct linear transformation (MDLT) is designed. The generated samples and labels can effectively reflect the local characteristics of images and are suitable for training the CNN model, with which multiple pairs of local matching points between two images to be registered can be calculated. Then, the local homography matrices between the two images are estimated by using the MDLT, and finally the image registration can be realized. The experimental results show that the proposed image registration algorithm achieves higher accuracy than other commonly used algorithms such as the SIFT, ORB, ECC, and APAP algorithms, as well as two other learning-based algorithms, and it has good robustness to different illumination conditions.

Introduction

Image registration is the process of matching and transforming two or more different images. It is widely used in such fields as panoramic image stitching [1,2], high dynamic range imaging [3], simultaneous localization and mapping (SLAM) [4], and so on. Traditional image registration algorithms are mainly classified into pixel-based algorithms and feature-based algorithms [5,6]. In pixel-based image registration algorithms, the original pixel values are directly used to estimate the transformation relationship between images [7,8]. Firstly, the homography matrix between a pair of images is initialized. Then, the homography matrix is used to transform the image, and the errors of the pixel values of the transformed image are calculated. Finally, an optimization technique is used to minimize the error function to achieve image registration. Pixel-based algorithms usually run slowly and are effective for low-texture scenes, but have poor robustness to scale, rotation and brightness. In feature-based image registration algorithms [9,10] such as SIFT [11] and ORB [12], feature points of the images are generally extracted first, the corresponding relationship between feature points of the two images is established by feature matching, and the optimal homography matrix is estimated by algorithms such as RANSAC [13]. Feature-based image registration algorithms are generally more accurate and faster than pixel-based ones, but they require enough matching points between the two images, and the matching points must be accurate and uniformly distributed; otherwise, the registration accuracy will be greatly reduced. Feature-based image registration algorithms generally have good robustness to scale and rotation, and some robustness to brightness, but are not suitable for low-texture images. Recently, some deep learning-based image registration algorithms have been proposed. DeTone et al. [14] proposed a homography matrix estimation algorithm with supervised learning.
A 128 × 128 image I A was generated by randomly clipping from an image I, and then random perturbation values were added to the coordinates of the four corners of the image I A to generate four perturbation points, so that four pairs of matching points were obtained. The homography matrix corresponding to the four pairs of points was calculated by using the coordinates of the four corners of image I A and their corresponding perturbation points. The homography matrix was used to transform image I A into image I B . Then, the images I A and I B were converted into grayscale images as samples, and the coordinate differences between the four corner points of I A and their corresponding perturbation points in I B were used as labels, with which a 10-layer VGG (Visual Geometry Group) network was trained, and finally a homography matrix estimation model that could be used for image registration was obtained. The algorithm has better robustness to brightness, scale, rotation, and texture. On the basis of DeTone's work, Nguyen et al. [15] proposed a homography matrix estimation algorithm with unsupervised learning to solve the shortcoming of artificially generated labels in supervised learning, but this algorithm had weak robustness to illumination. The samples used in these two algorithms were mainly artificially generated samples. The artificial samples ensured that the accuracy of the samples and labels was high enough, which was a beneficial exploration for deep learning to solve the actual image registration problem. However, the artificial samples adopted by these two works default to no parallax between the images to be registered, so only four pairs of corresponding points are used to represent the registration relationship between the two images. However, in practice, there is parallax between the images to be registered, and the relationship between such kinds of images is often not exact homography transformation. In image registration, it is necessary to estimate the homography matrix between the target image and the reference image. The homography matrix is used to transform the target image to achieve the alignment of the target image and the reference image in spatial coordinates. The transformation process is called image mapping or image transformation. According to the application scope of the homography matrix, image transformation can be divided into global homography transformation and local homography transformation. Global homography transformation [7,11,12,14,16] uses the same homography matrix to transform the whole image. It requires that the target image and the reference image contain basically the same image information in the overlapping region. It is only suitable for images with small or no parallax. When this condition is not satisfied, the accuracy of image registration will be reduced significantly. Local homography transformation algorithm [17][18][19] maps different regions of an image using different transformation matrices, which can better overcome the shortcomings of the global homography transformation algorithm. As-Projective-As-Possible (APAP) algorithm [19] is a representative local homography transformation algorithm. It first extracts the feature matching points between the images and then divides the images into a uniform grid. Moving direct linear transform (MDLT) is used to estimate the homography matrix of each grid. Finally, the homography matrix of each grid is used to implement local homography transformation on the image to be registered. 
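As a point of comparison with the local formulation developed later, the global four-point sample generation of [14] summarized above can be sketched as follows. The patch size, the perturbation range rho, and the direction of the warp are illustrative choices made here, and the exact procedure of [14] may differ in details.

```python
import numpy as np
import cv2

def make_training_pair(image, patch=128, rho=32):
    """Generate one (I_A, I_B) patch pair and its 4-point offset label.
    Assumes the input image is larger than patch + 2*rho in both dimensions."""
    h, w = image.shape[:2]
    # random top-left corner of the patch, leaving room for the perturbation
    x = np.random.randint(rho, w - patch - rho)
    y = np.random.randint(rho, h - patch - rho)
    corners_a = np.float32([[x, y], [x + patch, y],
                            [x + patch, y + patch], [x, y + patch]])
    # perturb each corner independently to obtain four pairs of corresponding points
    offsets = np.random.randint(-rho, rho + 1, size=(4, 2)).astype(np.float32)
    corners_b = corners_a + offsets
    # homography mapping the original corners to the perturbed ones
    h_ab = cv2.getPerspectiveTransform(corners_a, corners_b)
    # warping with the inverse places the perturbed quadrilateral onto the original square
    warped = cv2.warpPerspective(image, np.linalg.inv(h_ab), (w, h))
    patch_a = image[y:y + patch, x:x + patch]
    patch_b = warped[y:y + patch, x:x + patch]
    return patch_a, patch_b, offsets.flatten()   # the corner offsets serve as the label
```

Converting both patches to grayscale and stacking them channel-wise yields the network input used by the supervised scheme.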
For images that do not satisfy the condition of global homography transformation, the image registration accuracy achieved by APAP algorithm is higher than that achieved by the global homography transformation algorithm [20]. APAP algorithm is also a feature-based image registration algorithm in essence. It also has the characteristics of a feature-based image registration algorithm and has higher accuracy than the general feature-based image registration algorithm. The general image registration algorithm based on global homography transformation only uses one homography matrix estimation and one homography transformation, while APAP algorithm needs multiple homography matrix estimations and homography transformations, so the speed of the APAP algorithm is slower than that of the general feature-based image registration algorithm. The above two deep learning-based image registration algorithms are both for global homography transformation, and the used samples cannot be adopted to estimate the local homography matrix. Therefore, based on the above researches, an image registration algorithm based on deep learning and local homography transformation is proposed in this paper. An image sample and label generation method suitable for local homography transformation is designed so as to train the image registration model with convolutional neural network (CNN) effectively. The resulted image registration model can effectively reduce the error of image registration and overcome the defects of poor robustness of traditional image registration algorithms and low accuracy of existing deep learning-based image registration algorithms. The main contributions of this paper are as follows: (1) A CNN and local homography transformation-based algorithm are proposed to solve the problem of image registration, which is a useful exploration for deep learning to solve the problem of image registration; (2) an image sample and label generation method suitable for local homography transformation is proposed, and the generated samples have good diversity and can simulate the actual image registration situation. The rest of this paper is organized as follows. Section 2 mainly introduces the basic theory of the proposed algorithm, focusing on the image sample, label generation, CNN model, and loss function. Section 3 shows the experimental results, which verify the effectiveness of the proposed algorithm. The conclusion is given in Section 4, which summarizes the main work of this paper and analyses the shortcomings of the algorithm and possible improvement aspects. Image Registration Algorithm Based on Deep Learning and Local Homography Transformation In supervised learning-based image registration, sample labeling is required first. However, the cost of labeling samples manually is too high, and it is usually difficult to ensure the labeling accuracy, as well as to collect enough diverse images for registration. To solve this problem, an image registration algorithm based on deep learning and local homography transformation is proposed in this paper. Firstly, a sample and label generation method for deep learning is designed. In this method, direct linear transformation (DLT) and moving direct linear transformation (MDLT) are used to automatically generate more reasonable and effective samples and labels for deep learning, and then supervised learning is used to train CNN so as to obtain the image registration model, with which the local homography transformation-based image registration can be achieved. 
Direct Linear Transformation (DLT)

If there is no parallax between the reference and target images, the mapping relationship between the two images is a simple homography, which can be described by the homography matrix. Suppose that two points with coordinates x' = [x', y'] T and x = [x, y] T are the corresponding matching points on the reference image I' and the target image I, respectively; the corresponding relationship between these two points can be expressed in homogeneous coordinates as Equation (1). In non-homogeneous coordinates, the corresponding relationship between the matching points x' and x can be expressed as Equation (2). Transforming Equation (1) into the form 0 3×1 = x' × H x gives Equation (3). When estimating H, more matching point information can be used to reduce the estimation error. In Equation (3), only two rows of the 3 × 9 coefficient matrix on the right side of the equation are independent. By selecting the first two rows to form an independent coefficient matrix A i , and taking all matching points into account, a 2N × 9 coefficient matrix A can be formed. By using the least squares method, the solution of h can be expressed as a constrained minimization, where ĥ is an estimation of h, ||Ah|| denotes the two-norm of the vector Ah, h is constrained to be a unit vector, N denotes the total number of pairs of matching points, and A i denotes the independent coefficient matrix corresponding to the ith pair of matching points. Singular value decomposition (SVD) can be used to calculate ĥ: the right singular vector corresponding to the minimum singular value of A is the result. The estimation of the homography matrix H is obtained by arranging the elements of the vector ĥ in a certain order. Considering that SVD is time-consuming, which will affect the training speed of the neural network, Equation (3) is transformed into a non-homogeneous linear least squares form. Letting h 33 = 1, two independent non-homogeneous linear equations can be obtained for each pair of matching points. If all N matching points are included, then Equation (4) can be represented in stacked least-squares form, where ĥ is the estimation of h, A is the 2N × 8 coefficient matrix obtained by arranging all coefficient matrices A i in the vertical direction, and b is a 2N × 1 constant column matrix obtained by arranging all the constant column matrices b i in the vertical direction.

Moving Direct Linear Transformation (MDLT)

For an image with a certain parallax, the relationship between the reference and target images is no longer a simple homography transformation. In this case, a global homography transformation cannot ensure the accuracy of image registration, and a simple local homography transformation will cause a blocking effect, which destroys the visual quality of the image. It is a good choice to use the MDLT algorithm for local homography transformation. The MDLT algorithm not only achieves high image registration accuracy, but can also smooth different image blocks, taking into account both the accuracy of image registration and the overall visual quality of the image. Firstly, the image to be transformed is divided into several image blocks, and then all matching points of the two images are taken into account. For each of the image blocks, according to the central position of the image block, weights are assigned to all matching points so as to estimate the homography matrix corresponding to this image block.
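A compact sketch of the non-homogeneous DLT estimate (with h 33 fixed to 1) and of its per-block, weighted MDLT variant is given below. The row ordering of the linear system, the weight function (a heavy-tailed form standing in for the Student-t weight, whose exact formula is not reproduced above) and its parameters sigma and gamma are assumptions made for illustration.

```python
import numpy as np

def _dlt_system(pts_src, pts_dst):
    """Build the 2N x 8 coefficient matrix A and right-hand side b of the
    non-homogeneous DLT formulation (h33 fixed to 1)."""
    rows, rhs = [], []
    for (x, y), (u, v) in zip(pts_src, pts_dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); rhs.append(u)
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y]); rhs.append(v)
    return np.asarray(rows, float), np.asarray(rhs, float)

def dlt_homography(pts_src, pts_dst):
    """Global homography from N >= 4 correspondences via linear least squares."""
    A, b = _dlt_system(pts_src, pts_dst)
    h, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def mdlt_homographies(pts_src, pts_dst, block_centers, sigma=8.0, gamma=0.01):
    """Moving DLT: the same system re-weighted for each image block according to the
    distance between every matching point and the block center (assumed weight form)."""
    pts_src = np.asarray(pts_src, float)
    A, b = _dlt_system(pts_src, pts_dst)
    out = []
    for c in np.asarray(block_centers, float):
        d2 = np.sum((pts_src - c) ** 2, axis=1)
        w = np.maximum(gamma, 1.0 / (1.0 + d2 / sigma ** 2))  # heavy-tailed weight, floored at gamma
        W = np.repeat(w, 2)                                   # two equations per matching point
        h, *_ = np.linalg.lstsq(A * W[:, None], b * W, rcond=None)
        out.append(np.append(h, 1.0).reshape(3, 3))
    return out
```

Points far from a block center are thus down-weighted but never ignored, which is what smooths neighbouring block homographies and limits the blocking effect described next.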
Accordingly, Equation (4) can be rewritten in a weighted form, where ĥ j represents an estimation of the homography matrix of the jth image block, ω ij is a weight that changes with the coordinate of the center point of the current image block, and W j is a diagonal matrix that represents the weights of all matching points. The weight ω ij is determined by the distance between the ith matching point and the center point of the jth image block: the smaller the distance, the larger the weight. Zaragoza et al. [19] used a Gaussian function to calculate the weight, where x * j represents the coordinate of the center point of the jth image block, x i represents the coordinate of the ith matching point of the image to be transformed, σ is a scale factor, and γ is the minimum weight value, which prevents the weights of matching points far from the current image block from becoming too small. Lin et al. [21] proposed another method of calculating the weights, using a Student-t distribution function instead of the Gaussian distribution function. Because the Student-t distribution function is smoother than the Gaussian distribution function, the block effect caused by local homography transformation is less likely to appear, so the Student-t distribution function is adopted in this paper. By using the same analysis method as for the DLT algorithm, the estimation of the local homography matrix is finally obtained as a weighted least-squares solution.

Sample and Label Generation Method Based on Local Homography Transformation

In the homography matrix, the rotational and shear components are often much smaller than the translation components, so it is difficult for a model to converge if the homography matrix is used as a label directly. Therefore, DeTone et al. proposed a method of substituting four pairs of corresponding points for the homography matrix [14]. That algorithm uses a global homography transformation and is only suitable for the registration of images without parallax. However, actual images usually have parallax. To overcome the shortcomings of DeTone's method, an improved sample generation method based on local homography transformation is proposed to generate sample images with parallax, as illustrated in Figure 1. The sample and label generation process is described in detail as follows:

Step 1: Firstly, add random perturbation values to the coordinates of the four corners {P 1 , P 2 , P 3 , P 4 } of the original image I A to obtain four new points {P' 1 , P' 2 , P' 3 , P' 4 }, where the ranges of the random perturbation values in the horizontal and vertical directions are [−ρ x , ρ x ] and [−ρ y , ρ y ], respectively. The two points before and after the perturbation form a pair of corresponding points; therefore, a total of four pairs of corresponding points are obtained, as shown in Figure 1a. Then, calculate the homography matrix H AB 4pt corresponding to the four pairs of corresponding points.
Step 2: Randomly select a point p in the original image I A , cut out a block I A with fixed size using pas the upper left corner of the block, and divide the block into a uniform grid to get M × N grid points G A , as illustrated in Figure 1b. Step 3: According to Equations (1) and (2), transform the M × N grid points G A into new corresponding M × N points G A by using the homography matrix H AB 4pt , as illustrated in Figure 1c. Step 4: Add random perturbation values to each of the new corresponding M × N points G A to get M × N perturbation points G A , as illustrated in Figure 1d. The ranges of random perturbation values in horizontal and vertical directions are −ρ x , ρ x and −ρ y , ρ y , respectively, and ρ x < ρ x /2, ρ y < ρ y /2, so as to ensure the global consistency of these random perturbation points. Step 5: Through the M × N uniform grid points, G A generated in Step 2 and M × N corresponding perturbation points G A generated in Step 4, the corresponding global homography matrix H AB g is calculated by the DLT algorithm. Then transform the M × N uniform grid points G A into new points G A by using H AB g and calculate the root mean square error (RMSE) between G A and G A . After that, divide the original image I A into an m × n uniform grid according to the RMSE, as shown in Figure 1e. If the RMSE is large, which means that there is a strong locality between G A and G A , the grid of the original image should be partitioned smaller to improve the local accuracy; conversely, if the RMSE is small, it means that the local homography matrixes have strong global character, therefore, the grid of the original image can be partitioned larger so as to speed up sample generation. The number of rows and columns of the uniform grid can be determined by where m and n are the number of rows and columns of the uniform grid, W and H are the width and height of the image I A , x rmse and y rmse represent the RMSE between G A and G A in horizontal and vertical directions, and w min and h min represent the minimum width and minimum height of each image block, respectively. w min and h min should not be too small, otherwise, it will cause too many blocks of some samples, which will affect the speed of sample generation; however, it also should not be too large, so as to avoid too few blocks of samples, which will result in an unnatural block effect in the transformed image. Step 6: Calculate the local homography matrix H AB j (j = 1, 2, · · · , m × n) corresponding to each block of the m × n uniform grid with the MDLT algorithm, in which the M × N pairs of corresponding points between G A and G A are used as the pairs of matching points, so that the m × n local homography matrixes H AB L = H AB j j = 1, 2, · · · , m × n are obtained. Then transform the original image I A into a new image I B with H AB L and calculate the coordinate of the points G B in image I B corresponding to G A in I A with H AB L . Figure 1f shows the image I B generated from the original image I A shown in Figure 1a after local homography transformation, and the grid points in Figure 1f represent the new grid points generated by local homography transformation corresponding to the M × N uniform grid points G A in Figure 1b. Step 7: For image I B , an image block with the same size and coordinates as that of I A in image I A is cropped as I B . Image I A and image I B constitute the alternative sample of the neural network. 
The coordinate difference G AB between the points G B in image I B and its corresponding points G A in image I A forms the alternative label of the neural network. Figure 1g gives a pair of alternative samples cropped from the images in Figure 1b,f. Step 8: In the process of generation of image I B , if the overlap degree of two sample images is too low because of the extreme distribution of perturbation point G A , the samples are regarded to be invalid and will be discarded. The calculation of the overlap degree of two sample images is illustrated in Figure 1h. Let I A be the corresponding binary mask of sample image I A in the original image I A . Transform the mask image I A through the local homography matrix H AB L so as to obtain the corresponding binary mask I B in the image I B . Then the binary mask images I A and I B are intersected to get the binary mask image I AB , in which the non-zero-pixel region indicates the overlap region of the two sample images, as shown in Figure 1h. Thus, the overlap degree of two sample images is calculated as where ∂ denotes the overlap degree, S A denotes the number of non-zero pixels in I A , and S AB denotes the number of non-zero pixels in I AB . If ∂ of two sample images is lower than a threshold, the two sample images will be discarded. RMSE can be used as a loss function of CNN, which is defined by where x i is the label value of the ith pair of matching points,x i is the corresponding output value of the CNN, and k is the total number of pairs of matching points. General CNN can be used to obtain the image registration model. In this paper, three network architectures including VGG [22], Googlenet [23] and Xception [24] are compared. The structure of the VGG network is simple and the depth of the network is easily expanded, but its training speed is slow and it requires a lot of hardware resources. For simplicity, we adopted a 10-layer VGG network [14] in the experiments. Googlenet can deepen the depth and width of the neural network, speed up the training speed, and reduce the hardware resources needed by the network. The convergence speed of the Xception network is fast, and the hardware resources required are also less. Additionally, the convergence performance of the Xception network is generally better than that of VGG and Googlenet networks. Experimental Results and Analysis To test the performance of the proposed algorithm, it is compared with Scale-Invariant Feature Transform (SIFT) algorithm [11], Oriented FAST and Rotated BRIEF(ORB) algorithm [12], Error Checking and Correction (ECC) algorithm [7], APAP [19], the DeTone's algorithm [14], and the Nguyen's algorithm [15]. The experiments are implemented on a computer with Intel i7-6700 CPU, 32 GB memory, one NVIDIA GTX 1080 Ti GPU, and the operating system used is Ubuntu 16.04 LTS. The performances of different image registration algorithms are compared in terms of accuracy, running time and robustness. The three algorithms of SIFT, ORB and ECC are implemented by using Python OpenCV. The RANdom SAmple Consensus (RANSAC) threshold of SIFT and ORB algorithms is 5. The maximum number of iterations of the ECC algorithm is 1000. The adopted framework of deep learning is TensorFlow [25]. The APAP, DeTone's algorithm and Nguyen's algorithm are implemented with Python programming language on the same platform. To facilitate comparison with the DeTone's and Nguyen's algorithms, the size of sample images used in this paper is the same as that of DeTone's and Nguyen's algorithms. 
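Before turning to the experiments, the overlap check of Step 8 can be sketched as follows. Warping the binary mask block by block with the local homographies, and describing each block by its (x, y, w, h) extent, are assumptions about the data layout rather than the exact implementation; the threshold is left as a parameter.

```python
import numpy as np
import cv2

def overlap_degree(mask_a, local_homographies, blocks, threshold=0.3):
    """Overlap degree between a sample patch and its locally warped counterpart.

    mask_a             : binary mask (uint8) of sample image I_A' within the original image
    local_homographies : per-block 3x3 matrices H_j
    blocks             : per-block extents (x, y, w, h) matching the homographies
    Returns the overlap degree and whether the sample should be kept.
    """
    h, w = mask_a.shape
    mask_b = np.zeros_like(mask_a)
    for H, (bx, by, bw, bh) in zip(local_homographies, blocks):
        block = np.zeros_like(mask_a)
        block[by:by + bh, bx:bx + bw] = mask_a[by:by + bh, bx:bx + bw]
        warped = cv2.warpPerspective(block, np.asarray(H, float), (w, h))
        mask_b = np.maximum(mask_b, warped)               # union of the warped blocks
    inter = np.logical_and(mask_a > 0, mask_b > 0)        # intersection mask I_AB
    degree = inter.sum() / max(1, (mask_a > 0).sum())     # S_AB / S_A
    return degree, degree >= threshold                    # discard the sample if below threshold
```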
The used perturbation values consist of components in horizontal and vertical directions, the range of which should not be too small or too large. If the perturbation range is too small, the generated perturbation value will be small, which will reduce the diversity of the samples and weaken the generalization ability of the model. However, if the perturbation range is too large, it may easily generate some samples with extreme deformation, which will make the training of the model more difficult and lead to the reduction of prediction accuracy of the model. The maximum perturbation values ρ x or ρ y of corner points in Step 1 of the proposed image sample and label generation method should not exceed half of the width or height of the original image respectively. Generally, taking 1/3~1/10 of the image width or height can ensure that the generated samples have better diversity and visual quality. Similarly, in Step 4, taking 1/3~1/10 of ρ x for ρ x , 1/3~1/10 of ρ y for ρ y can achieve better results. The original data sets used in the experiments are MS-COCOCO2014 and MS-COCOCO2017 data sets [26]. Firstly, all images in these two data sets are scaled to 320 × 240, on which the proposed sample and label generation method is performed to obtain the gray-scale sample images with the size of 128 × 128. The maximum perturbation values ρ x and ρ y in horizontal and vertical directions of the corner points in Step 1 are set to 45, and the number of matching points for each pair of images in Step 2 is set to 5 × 5. The maximum perturbation values ρ x and ρ y in Step 4 are set to 11. In Step 5, the values of w min and h min are both 5. In Step 8, the threshold of overlap degree is 0.3, that is, when the overlap degree is lower than 0.3, the sample will be discarded. To increase the robustness of the model and reduce the possibility of over-fitting, image augmentation technology [27] is also used in the generation of training samples. The color and brightness of some of the sample images are randomly changed, and some of the sample images are processed with Gamma transformation. Finally, a total of 500,000 pairs of images are generated as a training set, 10,000 pairs of images as a validation set, and 5000 pairs of images as a test set. In order to prove the generality of the proposed algorithm, three CNNs, including VGG, Googlenet and Xception, are used to train and test each of the learning-based image registration algorithms. The used optimization algorithm is Adam [28], where β 1 = 0.9, β 2 = 0.999, ε = 10 −8 . The batch size is 128. The initial learning rate of the proposed algorithm and supervised learning of DeTone's algorithm is 0.0005, and that of unsupervised learning of Nguyen's algorithm is 0.0001. To prevent over-fitting, dropout [29] is used before the output layer of all neural networks. In the process of training, the test error of the validation set can be observed. When the test error of the validation set is no longer reduced, the training is stopped to prevent under-fitting or over-fitting. When training the network models of the DeTone's algorithm and Nguyen's algorithm, the perturbation values of their samples are also set to 45, the same optimization techniques and image augmentation techniques as well as the same CNN are adopted. The number of training samples generated is the same as that of the proposed algorithm, and the training methods and observation methods are also the same. 
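A possible tf.keras rendering of the training configuration described above (Adam with β1 = 0.9, β2 = 0.999, ε = 10⁻⁸, batch size 128, dropout before the output layer) is sketched below. The network body, the pooling head, the dropout rate and the plain MSE loss are placeholders and assumptions, not the exact architectures or loss used in the paper.

```python
import tensorflow as tf

def build_regression_head(backbone, num_points=25):
    """Attach a dropout layer and a regression output (2 offsets per matching point)
    to an arbitrary convolutional backbone (VGG, Googlenet/Inception or Xception)."""
    x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
    x = tf.keras.layers.Dropout(0.5)(x)                # dropout before the output layer
    out = tf.keras.layers.Dense(2 * num_points)(x)     # x/y offsets of the 5x5 grid points
    return tf.keras.Model(backbone.input, out)

# Optimizer settings as reported for the supervised models
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-4,
                                     beta_1=0.9, beta_2=0.999, epsilon=1e-8)

# Illustrative usage (arrays of stacked grayscale patch pairs and offset labels):
# model = build_regression_head(some_backbone)
# model.compile(optimizer=optimizer, loss="mse")       # stands in for the RMSE loss of the paper
# model.fit(x_train, y_train, batch_size=128,
#           validation_data=(x_val, y_val))            # stop when validation error plateaus
```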
All algorithms are tested on the test set generated by the proposed method to ensure the objectivity of the comparison. Accuracy of Image Registration The accuracy of image registration can be measured by RMSE of registration points, which is defined by where x i denotes the coordinates of grid points G A in image I A , and x i denotes the coordinates corresponding to x i in image I B ; f represents different image registration models, and the proposed algorithm and APAP algorithm use the local homography matrix, while the other algorithms use the global homography matrix as their image registration model; f (x i ) denotes the coordinates transformed from x i by using the image registration model f, which is the estimation of x i ; k is the total number of matching points in the pair of images, and it is set to 25 in the experiments. Table 1 shows the average RMSE of registration points achieved by several different image registration algorithms when implemented on the test set generated by the proposed method. To better present the performance of learning-based image registration algorithms, Table 1 gives in detail the registration accuracy of several deep learning-based image registration algorithms using VGG, Googlenet and Xception neural networks, respectively. From Table 1, it can be seen that the accuracy of the pixel-based ECC image registration algorithm is the lowest, and that of the feature-based SIFT image registration algorithm is higher. The APAP algorithm takes into account the locality of image registration, so it achieves the best result among the pixel-based and feature-based algorithms. The performance of the learning-based image registration algorithms is related to the used CNN models, and more advanced CNN models have higher image registration accuracy. The samples used by the DeTone's algorithm and Nguyen's algorithm are relatively simple, so there is little difference in the accuracy of image registration under different neural networks. These two algorithms do not fully consider the locality of image registration, resulting in low accuracy of image registration. Compared with other algorithms, the proposed algorithm achieves the highest image registration accuracy by using the Xception network model. In addition, from Table 1, it is seen that the effect of the proposed algorithm under Xception network is better than that under Googlenet and VGG networks. This is because the samples and labels used in the proposed algorithm are more complex, and there are obvious differences under different neural networks. When combined with more advanced CNN models, the proposed algorithm can achieve higher accuracy of image registration. Running Time To compare the calculation complexity of different image registration algorithms, Table 2 shows the average running time of each algorithm running for 10 times, where all algorithms are implemented under a computer with Intel i7-6700 CPU, 32 GB memory and one NVIDIA GTX 1080 Ti GPU. It is seen that APAP algorithm runs slowest due to the use of the local homography matrix and ORB algorithm runs fastest among the traditional image registration algorithms. For learning-based image registration algorithms, Table 2 gives the running time when the algorithms are accelerated with one GPU, as well as the running time achieved without the GPU. It is seen that GPU can significantly speed up the learning-based algorithms. 
The running speed of GPU is much faster than that of CPU, and different neural network models achieve different running speeds, among which Xception runs the slowest and Googlenet runs the fastest. Because the DeTone's and Nguyen's algorithms are only different in loss function and the neural network model is basically the same, the running time of the two algorithms are the same under the same conditions. The proposed algorithm involves the estimation of local homography matrices, so it runs slower than DeTone's and Nguyen's algorithms under the same neural network. Robustness to Illumination, Color and Brightness In order to compare the robustness of different image registration algorithms to illumination, color, and brightness, the test set in the experiments is augmented, and the used image augmentation method is the same as that of the training set. After image augmentation, the registration accuracy and failure rate of each algorithm are compared. We only randomly augmented some of the images in the test set, but not all of them. The higher the number of augmented images is, the higher the image augmentation degree of the test set is, and the test set has more diversity in illumination, color and brightness. The image augmentation degree can be represented by the probability of an image being augmented in the test set. The test set used in this experiment contains 5000 pairs of test images. Each algorithm runs 10 times repeatedly, during which the image augmentation is randomly implemented at a pre-specified image augmentation degree, and the average result of the 10 runs is taken as the final result of this algorithm with respect to the pre-specified image augmentation degree. Therefore, the image augmentation degree also represents the degree that the test set is affected by image augmentation. The accuracy and failure rate of image registration can be used to measure the robustness of different image registration algorithms. Since the maximum perturbation values of each grid point in the sample image in the horizontal and vertical directions are ρ x and ρ y respectively, when the accuracy of image registration of a pair of images is greater than ρ 2 x + ρ 2 y , the pair can be considered as a registration failure, and the failure rate of image registration on the test set can further be calculated. Considering that the RMSE values of test samples failed to be registered may be too large, and these extreme data may affect the RMSE values of the whole test set greatly, therefore, the RMSE of the whole test set is defined as where RMSE i represents the RMSE value of the ith pair of images, and K denotes the total number of image pairs in the test set. Figures 2-5 show the failure rate and RMSE achieved by different algorithms under different image augmentation degrees. The abscissa is the image augmentation degree of the test set, which changes from 0.0 to 1.0 with a step size of 0.1; the ordinate represents the registration failure rate or RMSE. Figure 2 shows the robustness comparison of seven image registration algorithms, in which the CNN model used by DeTone's and Nguyen's algorithms is VGG, while the model used by the proposed algorithm is Xception. As can be seen from Figure 2, the robustness of the traditional image registration algorithms to illumination, color, and brightness is very poor, and the robustness of the learning-based algorithms, especially the supervised learning-based algorithm, is better than that of the traditional ones. 
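The failure-rate criterion and the aggregated test-set RMSE used in Figures 2-5 can be sketched as below. Since the aggregation formula is not reproduced in the text, treating a pair as failed when its RMSE exceeds the magnitude of the maximum perturbation and averaging only the remaining pairs are assumptions made for illustration.

```python
import numpy as np

def robustness_metrics(rmse_per_pair, rho_x, rho_y):
    """Failure rate and robust test-set RMSE over a set of image pairs.
    A pair counts as a registration failure when its RMSE exceeds sqrt(rho_x^2 + rho_y^2);
    failed pairs are excluded from the aggregate to limit the effect of extreme values."""
    rmse = np.asarray(rmse_per_pair, dtype=float)
    limit = np.hypot(rho_x, rho_y)
    failed = rmse > limit
    failure_rate = failed.mean()
    kept = rmse[~failed]
    test_set_rmse = kept.mean() if kept.size else float("nan")
    return failure_rate, test_set_rmse
```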
Figures 3-5 further give robustness analysis of the three learning-based image registration algorithms under three different CNN models. The used three CNN models are VGG, Googlenet and Xception, respectively. It can be seen that under the same neural network model, the robustness of Nguyen's algorithm is inferior to the other two algorithms. Nguyen's algorithm uses L1 norm as a loss function in the unsupervised learning algorithm, requiring the same image augmentation parameters for I A and I B in each pair of samples during the training, otherwise, the model will not converge normally, which results in the poor robustness of the unsupervised learning image registration algorithm. In contrast, DeTone's algorithm and the proposed algorithm do not have this problem, because both of them adopt supervised learning; the label value can supervise the training of the neural network very well, so the model has better robustness. In order to further analyze the influence of different perturbation values on the accuracy of the proposed algorithm, four maximum perturbation values in Step 1 including 24, 28, 32, and 36 are tested on test sets with different image augmentation degrees, respectively. The experimental results are shown in Figure 6, in which the abscissa and ordinate are the image augmentation degree of the test set and RMSE achieved by different image registration algorithms, respectively. It can be seen that as the maximum perturbation value ρ decreases, the RMSE of image registration also decreases, that is, the higher the accuracy of image registration. Figure 7 gives the visualized homography estimation results. The red boxes in the left images are mapped to the red boxes in the right images. These red boxes are labels, which are generated by the proposed method described in Section 2.3. The yellow boxes in the right images indicate the results of homography estimation. The more the red and yellow boxes in the right images coincide, the higher the accuracy of feature point matching is. From Figure 7, it is also noticed that the proposed algorithm with Xception model is superior to the proposed algorithms with Googlenet and VGG neural network models. (a) (b) In order to further analyze the influence of different perturbation values on the accuracy of the proposed algorithm, four maximum perturbation values in Step 1 including 24, 28, 32, and 36 are tested on test sets with different image augmentation degrees, respectively. The experimental results are shown in Figure 6, in which the abscissa and ordinate are the image augmentation degree of the test set and RMSE achieved by different image registration algorithms, respectively. It can be seen that as the maximum perturbation value ρ decreases, the RMSE of image registration also decreases, that is, the higher the accuracy of image registration. Figure 7 gives the visualized homography estimation results. The red boxes in the left images are mapped to the red boxes in the right images. These red boxes are labels, which are generated by the proposed method described in Section 2.3. The yellow boxes in the right images indicate the results of homography estimation. The more the red and yellow boxes in the right images coincide, the higher the accuracy of feature point matching is. From Figure 7, it is also noticed that the proposed algorithm with Xception model is superior to the proposed algorithms with Googlenet and VGG neural network models. Figure 7 gives the visualized homography estimation results. 
Conclusions

Aiming at the problem of image registration with parallax, an image registration algorithm based on deep learning and local homography transformation is proposed. A sample and label generation method suitable for local homography matrix estimation is designed by using DLT and MDLT, so as to obtain an effective image registration model through supervised learning. The proposed algorithm overcomes the defect that existing learning-based image registration algorithms cannot be used for local homography matrix estimation, and it improves on the weak robustness of traditional image registration algorithms. Experimental results show that the proposed algorithm achieves high image registration accuracy, low time complexity, and good robustness to illumination, color, and brightness. In particular, the combination of the proposed algorithm and a better CNN architecture can significantly improve the accuracy of image registration. In this paper, the MDLT algorithm is adopted to generate samples with local matching points. The perturbation value cannot be set very large, otherwise it causes unnatural deformation and dislocation of the image. Therefore, the proposed algorithm is more suitable for samples with weak locality. In addition, compared with the traditional algorithms, the proposed algorithm has higher requirements on hardware and takes a longer time to generate samples and train neural networks; this will be improved in future work.
9,445.8
2020-01-21T00:00:00.000
[ "Computer Science" ]
Quantifying Macro Logistics Cost of India

Globalisation has opened up economic opportunities for developing countries in the form of outflow of value-added services, low-cost raw materials, human resource skills, improved market access for their exports, efficiency gains in their economies through technology transfer and spill-over, and resource re-allocations. Consequently, various developing countries, including India, have increasingly begun to position themselves for greater participation in regional and global markets. It goes without saying that India needs to build its capacity for establishing linkages with global and regional markets in order to derive the optimal benefits of engaging with the globalised world. This, in turn, depends on the creation of an efficient logistics system. For this purpose, most developed and emerging countries estimate logistics costs on a regular basis and use performance indicators to measure the efficiency of logistics activities. Till now, no attempt has been made by the official statistical organisation to estimate the logistics cost of India. Two estimates of logistics cost computed by private bodies are usually quoted when one refers to the Indian estimate. However, the methodology of both needs serious introspection. In this context, this paper makes an attempt to estimate the logistics cost of India.

Introduction

Unlike China, the manufacturing sector has never been the driving force behind India's high growth trajectory (Mohan, 2017). It is often argued that the high logistics cost in India is a significant bottleneck for the manufacturing sector's growth. Several reasons are cited for the high logistics cost in India. These include an unfavourable policy regime, the lack of a multimodal transport system and the consequent heavy reliance on road transport, fragmented storage infrastructure, the presence of multiple stakeholders in the entire transport and storage value chain, poor quality of road and port infrastructure, and the absence of technology intervention in storage, transportation and distribution activities. The high logistics cost inevitably has an adverse effect on the country's competitiveness in the globalised world. While everybody understands the need for keeping logistics cost low, no serious effort has been made in India to quantify it. In this context, an effort has been made here to quantify the logistics cost of India. However, before we attempt to quantify the cost, several issues need to be sorted out. Firstly, what are the elements that should be considered in quantifying the logistics cost? Secondly, what approaches are adopted by different researchers world-wide to estimate logistics cost in general, and can the same be replicated in the Indian context? Thirdly, what approach has been adopted by other studies in estimating the logistics cost of India? In practice, the logistics cost is measured in terms of a country's currency unit, as a share of the country's gross domestic product (GDP), or as a share of sales or turnover of an industry. It is customary to report it as a percentage of GDP for cross-country comparison. The plan of the rest of the paper is as follows. The next section provides a brief review of the literature on the estimation of logistics cost and the approaches adopted by different researchers; the lacunae are also discussed in this section. The emphasis of the literature survey is primarily on studies that address the logistics cost of a nation (country), as this is the focal theme of this paper.
The subsequent section describes the methodology adopted by the authors to estimate the logistics cost of India. Finally, Sect. 4 describes the results.

Review of Literature: Estimating Logistics Cost

It must be pointed out that there is no standard nomenclature on what elements should be considered for quantifying logistics costs. In the literature, the following functions (order processing, inventory management, warehousing, transportation, material handling and storage, logistical packaging, and information) are generally considered the core components of the logistics process and thereby should be taken into account while measuring logistics costs (Sopple, 2007). Table 1 provides a summary of the logistics cost components based on a literature review of some important publications. The list encompasses two dominant approaches, namely questionnaire-based surveys and statistics-based studies. As Table 1 indicates, the five most common logistics cost components are: transportation costs, warehousing costs, inventory carrying costs, administration costs, and packaging costs. With regard to the approaches to measuring logistics cost at the macro level, the literature proposes two principal ways (Rantasila, 2013): 1. collate empirical data on the elements of logistics cost from respondents through a survey (survey method); 2. use secondary data to derive the logistics cost. In the first approach, information on logistics costs is collated from key stakeholders of the industries using structured or semi-structured questionnaires. Generally, questionnaires are canvassed to key persons (chief operating officers) in industries. The macro logistics cost of a country is subsequently derived by aggregating these costs with a suitable weighting scheme reflecting sectoral contributions in the economy. In the second approach, attempts are made to use published macroeconomic data from national accounts statistics or other sources to quantify the logistics cost. This is usually complemented with primary surveys to quantify those costs which are usually not reflected as separate entries in the macroeconomic data of a country. Some of the authors using this approach have also taken recourse to economic models to strengthen their estimates. The following caveats, however, apply to both methods. By and large, the logistics process involves multiple agents, so collating information from multiple agents is always a difficult proposition (Farahani et al., 2009). Of late, companies are increasingly outsourcing their logistics operations to third parties along with other complementary service activities. In such situations, identifying individual logistics functions or activities is not an easy task (Rantasila, 2013). The second methodology can be applied in two ways: the top-down or the bottom-up approach. In the top-down approach, data published in the national accounts are disaggregated to a level that reflects transport, storage and other major components of logistics cost as defined earlier. In the bottom-up approach, detailed cost data on transport, warehousing and other components of logistics activities are aggregated across products to arrive at the logistics cost. The latter approach is usually applied in most developed countries. It is a more data-intensive process. The organized logistics sector in the developed countries generally collates these data for its own use, and the governments of the respective countries also maintain such databases.
On the other hand, the logistics sector is by and large unorganized in a developing country, and such detailed data are not maintained by the government or by the logistics sector. Hence, developing countries use the former approach in combination with surveys to arrive at the logistics cost (Pohit et al., 2019). The first notable attempt to measure logistics cost considered only four activities: transportation, inventory, warehousing, and order processing (Heskett, Glaskowsky, and Nicholas, 1973). This is a model-based approach, which has seen several transformations in its methodology; currently it adopts an Artificial Neural Network (ANN) modelling framework for logistics cost assessment (Bowersox, 1998). Armstrong & Associates Inc. has followed a similar approach to provide estimates of the logistics costs of all the major and emerging economies of the world. Though these estimates may act as a yardstick for the respective countries to focus on their logistics inefficiencies, the drawbacks of this framework need to be borne in mind. The estimation of the parameters of their neural model rests on observed data on input variables (economy- and infrastructure-related variables for countries, which are readily available from the World Bank database) and output variables (here, logistics cost as a percentage of GDP) of selected developed countries derived from alternative methods (for instance, the bottom-up approach). Having estimated the parameters of the model using developed countries' data, one estimates the logistics cost as a percentage of GDP of any country by feeding in the values of the input variables for the corresponding country. The reliability of estimates of logistics cost derived in this fashion for a developing economy like India may be questioned on several counts. The prevalence of high transaction costs in terms of bribes or speed money at each stage of logistics operations in a country like India is a fact (Pohit, 2016). If one uses data of developed countries, where transaction cost is absent or negligible, to estimate the model, the parameters may not reflect a developing country's perspective. The unpredictability of delivery schedules in India, due to the poor quality of physical infrastructure, is also a fact. This is absent in developed countries, where predictability of delivery schedules is the hallmark of the logistics system. The Central Statistical Organization, the official statistical body of the Government of India, has made no attempt to measure logistics cost. This does not preclude private industrial bodies from computing logistics costs, which are found to be high by global standards. In the absence of official estimates, these are widely quoted by logistics players to stress the point that India is a nation of high logistics cost. Implicitly, these estimates are used by stakeholders to extract higher tax incentives for the sector. In the light of the above discussion, we have laid out below our approach to quantifying the logistics cost of India.

Methodological Framework for Estimating India's Logistics Cost

As noted earlier, there is no standard world-wide nomenclature that defines logistics cost. Broadly speaking, the following are considered to be the core elements of logistics costs: handling and loading/unloading costs, packaging cost, insurance cost, transportation cost, and management and administration costs. However, surveying the literature on the measurement of logistics cost, we have included a few additional logistics cost components for measuring the total logistics cost of India, as indicated in Table 2.
Note that we have incorporated speed money (i.e., bribes) as part of the logistics cost, as it is very much embedded in India's transportation system. At the outset, India's National Accounts Statistics (NAS), published by the Central Statistical Organisation (CSO), Government of India, provide the national (macroeconomic) estimates of GDP, the balance of payments, national production, input costs, consumption, investment and other fundamental attributes of the national economy. To be specific, the cost estimates are more explicitly depicted in the supply and use table (SUT) published by the CSO, which is also consistent with the national accounts statistics. Nonetheless, the logistics cost estimates cannot be derived directly from the SUT because (a) the unit of analysis in the SUT or NAS is an establishment, and (b) logistics operations transcend multiple industries and sectors, and their costs are embedded in the SUT/NAS but are not readily apparent, as they are not shown there as an independent entity. The SUT has been the fulcrum of our estimation of logistics costs, as it is consistent with the GDP estimates derived from the NAS. The SUTs provide the statistical framework to include the three major approaches to measuring GDP, namely production, income and expenditure, and hence enable a balanced estimate of GDP at both current and constant prices. In fact, these tables detail the circular flows of goods and services in the economy. The SUTs depict, in matrix form, where products come from and how they are used. Their main use is to act as an integration framework for balancing the national accounts, by recording how the supplies of different kinds of goods and services originate from domestic industries and imports, and how those supplies are distributed between various intermediate or final uses (consumption, investment, exports). The Supply Table and the Use Table of India are product-by-industry matrices, but their entries are different. In the Supply Table, each column presents the values of the products (kept in rows) produced by an industry, or the products supplied by industries to the economy, distinguishing domestic supply from foreign supply (imports); these are at basic prices. The total supply of each product at the purchaser's price is obtained by adding taxes less subsidies on products and trade and transport margins. On the other hand, a Use Table shows the use of each product (a good or service, kept in rows) by the type of use (kept in columns), that is, as intermediate consumption, gross capital formation, and exports; these are all at purchaser's prices. For balancing the two tables and finally bringing them into an analysis, it is essential that both tables are brought to the same valuation prices. The SUT for the year 2012-13 prepared by the CSO is the starting point of our computation exercise. These tables consider 140 products and 66 industries. We made an effort to aggregate the 140 products to 64 industries (see Appendix Table A1). The criterion for any product to be the characteristic product of an industry, or in other words, for any industry to be a producer of one or more products, is based on specialisation and coverage ratios derived from the supply matrix. The specialisation and coverage ratios are share matrices with respect to column totals and row totals, respectively. This provides us with a mapping between the 140 products and the 64 industries.
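A minimal numpy sketch of the specialisation and coverage ratios described above and of a resulting product-to-industry mapping. Assigning each product to the industry with the largest coverage ratio is an illustrative rule; the text only states that the mapping is based on these two ratios.

```python
import numpy as np

def specialisation_and_coverage(supply):
    """supply: products x industries matrix of supply values at basic prices.

    Specialisation ratio: each entry as a share of its column total
    (how specialised an industry is in a product).
    Coverage ratio: each entry as a share of its row total
    (how much of a product's total supply an industry covers).
    """
    col_tot = supply.sum(axis=0, keepdims=True)
    row_tot = supply.sum(axis=1, keepdims=True)
    specialisation = np.divide(supply, col_tot, out=np.zeros_like(supply), where=col_tot > 0)
    coverage = np.divide(supply, row_tot, out=np.zeros_like(supply), where=row_tot > 0)
    return specialisation, coverage

def map_products_to_industries(supply):
    """Assign each product to one 'characteristic' industry (illustrative argmax rule)."""
    _, coverage = specialisation_and_coverage(supply)
    return coverage.argmax(axis=1)

# Toy example: 4 products x 3 industries
supply = np.array([[90.,  5.,  5.],
                   [10., 80.,  0.],
                   [ 0., 20., 60.],
                   [ 5.,  0., 95.]])
print(map_products_to_industries(supply))  # e.g., [0 1 2 2]
```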
The square SUT of size 64 × 64 is further converted into a uniform input-output (IO) table of 64 × 64 sectors by making use of the industry-technology assumption and the standard methodology suggested in the handbook on input-output compilation published by the United Nations, 1999 (see the flow chart in Fig. 1). The input-output table of a country implicitly estimates the cost structure of each sector of the economy through the principal inputs (goods and services), value added (returns to factors of production) and indirect taxes paid to the Government. IO tables generally tabulate the transportation cost, as it is a principal input in the production process. However, other cost elements are not usually shown as separate entries in an IO table, but rather are subsumed under the service sector. Our purpose is to cull out these costs using supplementary information from survey data. The next step in the construction of the IO table is the construction of the trade and transport margin (TTM) matrix. The TTM matrix needs to distinguish between margin and non-margin manufacturing sectors; the services sectors have no margin in the TTM matrix. The margin sectors are the following: railway transport, land transport, air transport, water transport, and supporting and auxiliary transport activities. Each of the margin sectors comprises both a passenger and a freight segment. The values for non-margin sectors in the TTM matrix are apportioned directly by using row shares of total use at purchaser's prices from the TTM vector as provided in the supply table. However, the entries for the margin sectors are not direct; they are based on the sum of the values of all non-margin entries in a column. This column total is further apportioned using the share of each individual margin commodity in the row sum of all margin commodities together. The cell entries for the margin sectors in the TTM are entered as negative values. We netted the TTM matrix out of the use table and thus arrived at the use table at producer prices. Similarly, we constructed a matrix for net product taxes including tariffs by allocating the net-product-tax (including tariffs) column in the same way. A point worth noting here is that the computation of the net product tax matrix is straightforward, unlike that of the TTM matrix. We then took the net product tax matrix out of the use table (Table 3). The 17-sector uniform IO table is further scaled up to the year 2017-18 using the latest national accounts statistics. The consistency of the IO table thus prepared for 2017-18 is further checked on both the income and the expenditure side. Freight transport services, which comprise 72.45 per cent of total (passenger and freight) transport services, are used to construct a sector for freight transport services; the remaining passenger services are clubbed into other services. Finally, the share of total intermediate use by freight transport services in GVA at basic prices as well as in GVA at market prices is used to enumerate the freight cost. As mentioned earlier, we have computed sector-wise norms for the other logistics cost elements given in Table 2, relative to the transportation cost, from survey data. These norms have been applied to cull out the other logistics cost elements from each sector's service input cost. The other logistics cost elements are principally service activities which any industry or sector needs for its production activities, so they are accounted for in the IO transaction table under the service input cost element.
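A minimal numpy sketch of the trade-and-transport-margin apportionment described above: non-margin rows are split across uses by their row shares, margin rows receive negative balancing entries split by the margin commodities' shares, and netting the TTM out of the use table moves it from purchaser's to producer prices. This is an illustrative reconstruction, not the CSO procedure.

```python
import numpy as np

def build_ttm(use_purch, ttm_vector, margin_rows):
    """Sketch of a trade-and-transport-margin (TTM) matrix.

    use_purch : products x uses matrix at purchaser's prices.
    ttm_vector: per-product total trade and transport margin (from the supply table).
    margin_rows: indices of the margin commodities (rail, land, air, water transport, ...).
    """
    n_prod, n_use = use_purch.shape
    ttm = np.zeros_like(use_purch, dtype=float)

    non_margin = [i for i in range(n_prod) if i not in set(margin_rows)]
    for i in non_margin:
        row_total = use_purch[i].sum()
        share = use_purch[i] / row_total if row_total else np.zeros(n_use)
        ttm[i] = ttm_vector[i] * share            # apportion by row shares of use

    # Margin rows: negative balancing entries so each use column sums to zero.
    col_margin = ttm[non_margin].sum(axis=0)      # margin embedded in each use column
    margin_share = ttm_vector[margin_rows] / ttm_vector[margin_rows].sum()
    for k, i in enumerate(margin_rows):
        ttm[i] = -margin_share[k] * col_margin

    return ttm

# use_producer = use_purch - ttm   (netting the margins out of the use table)
```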
Summing up the sector-wise logistics costs (freight transportation cost, freight material handling cost, and the other logistics cost elements) across sectors gives India's total logistics cost. Table 4 shows our estimates of the logistics cost for the year 2017-18. As this table shows, the total logistics cost turns out to be 8.87 per cent of GVA at basic prices. As Table 4 also shows, there are variations across sectors depending on the nature of the products (low-value or high-value products, bulk or non-bulk commodities). It is generally observed that low-value or bulk items have higher logistics costs.

Results

It must be noted that the logistics cost displayed above is in no way an indicator of the contribution of the logistics sector to the economy. The number estimated above only reflects the logistics cost to the economy in the year 2017-18.
3,953.6
2020-06-04T00:00:00.000
[ "Business", "Economics" ]
Modelling of turbidity distribution along channels

The purpose of this article is to establish the conditions under which numerical methods can be used for engineering calculations and for scientific research of hydrodynamic processes when solving practical problems related to the study of pollutant diffusion in water flows. The studies consisted in finding out the conditions under which mathematical modelling based on the hydrodynamic equations allows engineering problems of channel hydrodynamics to be solved and, in particular, the transport of suspended particles in channels to be simulated numerically. In addition to approximation and stability, a number of further properties of the numerical models were studied, such as averaging over probability and averaging over time. It was noted that only stationary processes can be described by the equations if they are obtained from the Reynolds equations; that is, when using the Reynolds equations, an important class of problems with a pulsating flow under constant boundary conditions is excluded from consideration. If, instead, the equations are obtained directly from the conservation laws, then all the desired variables have the meaning of actual quantities averaged over the scale; even in the case of statistically stationary flows, such equations can then be used to solve nonstationary problems on large time scales.

Introduction

Turbulence is an inherent property of the flow of liquid media. Turbulent flow regimes are inherent in the currents of natural and artificial channels. Therefore, in the mathematical modelling of flows, it is necessary to take into account dissipative processes related to viscosity, thermal conductivity, and diffusion of components, and the corresponding processes of turbulent heat and mass transfer. Otherwise, inadequate characteristics of hydrodynamic flows can be obtained. More than a century of research experience shows that the problem of turbulence is extremely complex, and so far it has not been possible to obtain any simple analytical solutions describing the processes occurring in turbulent flows. Turbulence has a stochastic nature, is fundamentally three-dimensional and unsteady, and includes a wide and continuous spectrum of spatial and temporal scales [1-3]. In some cases, turbulence is the decisive factor determining the speed and nature of processes such as mixing and transfer of suspensions. In such cases, particular care must be taken when introducing simplifications into the systems of equations, since this can make the model inapplicable to real flows and mass transfer and can significantly change, or even prevent obtaining, the picture of the flows.

Hydrodynamic equations

The solution of the general equations of hydrodynamics is a very complex problem, which is traditionally approached by introducing certain hypotheses. As practice shows [5-12], for reservoirs whose horizontal dimensions are much greater than the depth, this can be done by introducing a large scale at which the phenomenon is considered. In this case, depending on the degree of simplification, three-dimensional equations of a baroclinic liquid, two-dimensional Saint-Venant equations, one-dimensional equations, or zero-dimensional (balance) equations can be obtained.
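For reference, a standard form of the equations of motion of an incompressible fluid carrying a transported substance, using the notation defined in the next paragraph, can be sketched as follows. This is a generic sketch only; the specific form used in [13] may differ in detail (for example, in the treatment of compressibility or of the stress tensor).

```latex
% Sketch of standard incompressible governing equations (continuity, momentum,
% scalar transport); the exact form used in [13] may differ in detail.
\begin{aligned}
&\frac{\partial u_i}{\partial x_i} = 0, \\
&\frac{\partial u_i}{\partial t} + u_j \frac{\partial u_i}{\partial x_j}
  = -\frac{1}{\rho}\,\frac{\partial p}{\partial x_i}
  + \frac{1}{\rho}\,\frac{\partial \tau_{ij}}{\partial x_j} + g_i,
  \qquad
  \tau_{ij} = \rho\,\nu\!\left(\frac{\partial u_i}{\partial x_j}
  + \frac{\partial u_j}{\partial x_i}\right), \\
&\frac{\partial S_r}{\partial t} + u_j \frac{\partial S_r}{\partial x_j}
  = \frac{\partial}{\partial x_j}\!\left(D\,\frac{\partial S_r}{\partial x_j}\right) + q_{S_r}.
\end{aligned}
```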
The general equations of hydrodynamics are written in [13] using the following notation: u_i is the projection of the velocity vector onto the axis x_i, p the hydrodynamic pressure, τ_i a component of the shear stress tensor, ρ the density, g_i a component of the gravitational acceleration vector, and q_Sr the internal sources of the substance; for a Newtonian fluid, ν is the kinematic coefficient of viscosity and S_r is a substance that determines the density (temperature, salinity). In [4], it was shown that introducing the scale (3), where L_n is the linear scale in plan (provided that L_n ≫ h, where h is the flow depth), T = L_n/U, and U is a representative velocity, and considering this case together with the turbulent-viscosity hypothesis for the moments, where ν_T is determined using either the von Kármán turbulence model or the k-ε model [5-11], and with the assumption of small changes of all characteristics along the horizontal coordinates compared with changes along the vertical one, leads to a hydrostatic pressure distribution at the given scale; thus, the system of equations (6) can be obtained, in which D is the vertical diffusion coefficient (similar to ν_T; it is usually assumed that D ≈ ν_T) and S_r is the average substance concentration on the scale (3). The same equations can be obtained from the Reynolds equations with the additional assumption that turbulent interactions between liquid jets in the plane can be neglected. Thus, two different approaches lead to the same form of the equations. However, in terms of the expected results, these approaches are not equivalent. Indeed, all dependent variables u_i, q_i and h in (6) have different meanings depending on whether these equations are derived from the Reynolds equations or directly from the conservation laws. In the first case, these values are averaged first over probability and then over a large scale; in the second, they are actual values averaged over the same scale. When considering a statistically stationary flow in the case of an ergodic process, averaging over probability is equivalent to averaging over infinite time. Therefore, if equations (6) are derived from the Reynolds equations, only stationary processes can be described by them (here U_СВ denotes the disturbance drift rate). Thus, when using the Reynolds equations, an important class of problems with pulsating flow under constant boundary conditions is excluded from consideration. All of the above, strictly speaking, is true for flows free from boundaries. If the flow is considered near a rough vertical wall, it is impossible to consider the hydrodynamic quantities as actual. If we consider them as averaged over probability, then in the case of separated flows we arrive, in the same way as before, at the same dead-end result (concerning the shear stress between jets). Therefore, it is proposed to consider the hydrodynamic quantities in (6) as averaged over a scale M, much larger than the roughness scale but much smaller than the scale (3). Then τ_ij has a meaning similar to the Reynolds stress, with the averaging carried out not over probability but over the scale M. These stresses are determined by the roughness peaks; the stresses at the bottom are also determined by the roughness of the bottom. The modified Prandtl model [12] or the two-parameter k-ε model [5] is usually used to close the system of equations (6).

One-dimensional equations

Engineering practice [13-17] shows that for a certain class of flows there is another scale within which phenomena can be neglected. Let us consider this class.
Let there be a reservoir whose geometry satisfies the ratio L ≫ B, where L is the length of the water body along the direction of the prevailing flow and B is a representative transverse size of the water body. In what follows, such a water body will be called a waterway. Let us take the direction of the predominant flow as the axis of the waterway and consider this axis straight if r ≫ B, where r is the radius of curvature of the axis of the waterway. According to [15], the one-dimensional equations (7) describing flows in such waterways are written using the following notation: y is the transverse coordinate; the flow cross-sectional area; Q the water flow rate through the entire cross section; U the cross-section-averaged flow velocity (the ratio of Q to the cross-sectional area); the friction slope, expressed through the hydraulic radius R and the wetted perimeter; i the average bottom slope; J the specific impulse supplied to the reach together with the lateral inflow; q the lateral inflow rate; F the force associated with the deviation of the waterway from a prismatic channel; y_b the transverse coordinate of the bottom surface; and the angle between the tangent to the bottom line in the plane x = const and the axis OY in the plane YZ, normal to the plane of the averaged bottom surface. These equations are widely used in practice [13-20].

Consideration of deformations

Equations (7) are written for conditions in which the bottom, banks, and slopes cannot be deformed. For the case of a wide rectangular channel with smooth changes in the width of the cross sections, problems can be solved taking deformations into account; the equations are modified accordingly. Having the turbidity distribution along the length of the channel (for example, from experimental data), numerical values of the coefficient K can be found. Experimental studies [21] showed that, under certain conditions of formation of the original moving bottom, the first term of the equation can be neglected. Moreover, under conditions of erosion by water without sediments, it is an order of magnitude smaller than the other terms; a simplified relation then holds for a constant flow rate. Data from [4,22] were processed to identify the degree of influence of suspended solids in streams on the numerical value of K. According to [4], the settling velocity (hydraulic size) does not have a significant effect on the value of K normalized by the shear velocity U_*. On the contrary, the presence of suspended solids reduces this value, and the more suspended solids are transferred by the stream, the stronger the reduction. In [4], it was shown that for the zone of flow descent from a berm, the value can be determined in the same way. In the conditions of a turbidity-free water flow, a maximum value was obtained as a result of processing experimental and field study data, and a corresponding value was obtained under the conditions of the formed equilibrium flow. Thus, by dividing flows into two zones, a zone of local erosion and a zone of common deformations, and using the results of the data processing, it is possible to calculate the parameters of flows in easily deformed sandy channels.
Long-term studies have made it possible to formulate certain hypotheses regarding the scale at which the phenomena are considered; their application permits one to obtain three-dimensional equations of a baroclinic fluid, two-dimensional Saint-Venant equations, one-dimensional equations, and zero-dimensional equations, each of which has its own specific field of application. Averaging over probability is equivalent to averaging over infinite time when considering statistically stationary flows in the case of an ergodic process. Therefore, if the hydrodynamic equations are obtained from the Reynolds equations, then only stationary processes can be described by these equations. The use of the Reynolds equations thus leads to the exclusion from consideration of an important class of problems, namely problems of pulsating flows under constant boundary conditions. If the equations are obtained directly from the conservation laws, then all the required variables have the meaning of actual quantities averaged over the considered scale. That is, even in the case of statistically stationary flows, such equations can be used to solve nonstationary problems on large time scales. The results of processing the turbidity distribution along the length of the channel, carried out to identify the degree of its influence on the deformation processes, allow the parameters of flows in easily deformed sandy channels to be calculated.
2,532.6
2019-01-01T00:00:00.000
[ "Environmental Science", "Engineering" ]
Bulk MgB2 Superconducting Materials: Technology, Properties, and Applications

The intensive development of hydrogen technologies has made applications of bulk MgB2-based superconductors, among the cheapest and most easily produced superconducting materials, very promising. These materials are capable of operating effectively at liquid hydrogen temperatures (around 20 K) and are used as elements in various devices, such as magnets, magnetic bearings, fault current limiters, electrical motors, and generators. These applications require mechanically and chemically stable materials with high superconducting characteristics. This review considers the results of superconducting and structural property studies of MgB2-based bulk materials prepared under different pressure-temperature conditions using different promising methods: hot pressing (30 MPa), spark plasma sintering (16-96 MPa), and high quasi-hydrostatic pressures (2 GPa). Much attention has been paid to the study of the correlation between the manufacturing pressure-temperature conditions and the superconducting characteristics. The influence of the amount and distribution of oxygen impurity and of an excess of boron on the superconducting characteristics is analyzed. The dependence of the superconducting characteristics on various additions, and the changes in material structure caused by these additions, are discussed. It is shown that different production conditions and additions improve the superconducting properties of bulk MgB2 for various ranges of temperature and magnetic field, and the optimal technology may be selected according to the application requirements. We briefly discuss the possible applications of MgB2 superconductors in devices such as fault current limiters and electric machines.

Introduction

Modern progress in the development of new superconducting materials has brought the manufacturing industry to the stage of real applications. The most promising for wide application in various fields are MgB2 superconductors and high-temperature superconductors (HTS) based on rare-earth barium copper oxides and bismuth strontium calcium copper oxides [1][2][3][4][5]. This group may soon be supplemented by a class of iron-based superconducting compounds (FeSC) [1], for which the production technologies are being intensively developed. Of all the mentioned materials, MgB2-based superconductors are the cheapest and most easily prepared for magnetic applications. The high level of superconducting characteristics of MgB2 that are very important for applications, such as the critical current density and the upper critical and trapped magnetic fields, can be achieved in a polycrystalline structure due to the absence of the weak-link problem at grain boundaries [6]. The latter represents the main drawback of HTS. This distinguishes magnesium diboride from HTS, which must be texturized or epitaxially grown to achieve high superconducting properties. In addition, a deviation of the stoichiometry from MgB2, even to a fairly high degree, is not an obstacle to achieving a high level of superconducting characteristics [7][8][9][10][11]. The temperature of the superconducting transition of the MgB2 compound is about 39 K, depending on the isotope composition [12]. The critical temperature is lower than that of HTS, but it is high enough for application in cryogenic devices in which liquid hydrogen (boiling temperature 20 K) and cryocoolers can be used for cooling.
Liquid hydrogen, when it is produced using renewable sources, is a promising green fuel with zero carbon emissions. Its high energy density makes it an ideal fuel source for transport and an industry feedstock [13][14][15]. Since liquid hydrogen is more compact than hydrogen gas, its efficient storage and transportation are of great interest. The properties of magnesium diboride compounds differ somewhat from those of other superconductors. Some of these differences stem from the MgB2 structure. The compound possesses a hexagonal crystal structure, hP3, with the space group P6/mmm. The lattice parameters are a = b = 3.084 ± 0.001 Å and c = 3.522 ± 0.002 Å [25]. The layered stacking consists of alternating Mg and B layers [26]. The bulk density according to Wikipedia is 2.57 g/cm3 and according to [25] it is 2.63 g/cm3; the melting point is 830 °C. The material has a bulk modulus of about 172 GPa. The unit cell of MgB2 crystals demonstrates an anisotropic compressibility: the compressibility along the c axis is higher than that along the a and b axes [27]. Bulk MgB2 materials demonstrate isotropic characteristics, e.g., the critical current density. Many publications have been devoted to the investigation of the various properties of MgB2 superconductors and their theoretical considerations (e.g., [19] and the references therein). MgB2's properties are considered more similar to those of a metal than to those of HTS [28]. In this review, we limit ourselves to the analysis of the dependences of the superconducting properties on the technology conditions and additions; only some theoretical results are noted here. The theoretical understanding of the properties of MgB2 superconductors has largely been achieved by the consideration of two energy gaps. The measured and estimated gaps of the π- and σ-bands of the electrons of MgB2 are typically around 2 meV and 6.5 meV, respectively [29,34,35,38,39]. In [39], it was noted that these gaps can vary in the ranges of 1-4 meV and 5.5-10 meV. Recently, the electron localization functions and their isosurfaces were studied in [11]. Although the unit cell structure of MgB2 is simple and the compound nominally contains only two elements, Mg and B, the structure of MgB2-based materials can be complicated due to the presence of an admixture of oxygen, carbon, and even hydrogen, and due to an inhomogeneous boron distribution. An oxygen impurity is usually present in a large amount (compared to carbon) even in materials prepared under 'clean' conditions in protective atmospheres. This is a result of the high affinity of magnesium toward oxygen. The carbon and hydrogen admixtures in MgB2 materials can appear due to their presence in the initial boron powder or due to absorption from the atmosphere. Among the dozens of studied additions to MgB2, the ones that are most effective from the point of view of an increase in the critical current density are carbon, carbon-containing compounds, silicon carbide, titanium, tantalum, zirconium, and compounds containing these metals. Relatively recently, in the literature [78,[90][91][92][93][94][95][96][97][98][99][100], there has been information about the positive effects on the superconducting characteristics of MgB2-based materials of Si3N4, hexagonal and cubic BN (boron nitride), NbB2, NbTi, Ni-Co-B, Rb2CO3 and Cs2CO3 additions, and conflicting results have been presented about the effects of the following oxygen-containing additions: Dy2O3, SnO2, Sn-O, Ti-O.
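As a quick consistency check on the structural data quoted above, the theoretical density of MgB2 can be computed from the hexagonal cell volume, assuming one formula unit per hP3 cell; the following minimal sketch reproduces the 2.63 g/cm3 value attributed to [25].

```python
import math

# Density of MgB2 from the hexagonal lattice parameters a = b = 3.084 A, c = 3.522 A,
# one formula unit per cell; molar masses are standard values.
A_ANGSTROM = 3.084
C_ANGSTROM = 3.522
MOLAR_MASS_MGB2 = 24.305 + 2 * 10.811          # g/mol
AVOGADRO = 6.02214076e23                        # 1/mol

cell_volume_A3 = (math.sqrt(3) / 2) * A_ANGSTROM**2 * C_ANGSTROM   # hexagonal cell volume
cell_volume_cm3 = cell_volume_A3 * 1e-24
density = MOLAR_MASS_MGB2 / (AVOGADRO * cell_volume_cm3)

print(f"cell volume = {cell_volume_A3:.2f} A^3, density = {density:.2f} g/cm^3")
# -> about 29.0 A^3 and 2.63 g/cm^3, consistent with the value from [25]
```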
The present overview is related to the preparation of MgB2-based bulk superconductors and to an analysis of the dependence of their properties on the technological processes and additions. It is focused on the effects of manufacturing technology parameters, such as pressure, temperature, holding time, impurities, and additions, on the materials' structure and superconducting characteristics. Below, we present the best-achieved superconducting properties of MgB2 bulk materials, such as the critical current density and the upper critical and irreversibility magnetic fields. Some aspects of the practical application of MgB2-based materials are also considered briefly.

Effect of Manufacturing Pressure-Temperature-Time Conditions on Bulk MgB2 Superconducting Characteristics and Structural Features

The superconducting characteristics of MgB2 materials depend on many factors and their combinations. Very deep and comprehensive studies of the synthesis process of MgB2-based materials, of the correlation between material structure and superconducting characteristics, and of the manufacturing technology have been performed by the authors of [7-11,16,19,73,76,82,84,85,98,99]. These correlations were comprehensively studied for materials prepared using initial powders of MgB2 and stoichiometric Mg:2B mixtures (typical characteristics are given in Table 1) at manufacturing temperatures in the range of 600-1100 °C under different pressure conditions, using the methods noted above.

Table 1. Typical characteristics of initial boron and magnesium diboride powders and the admixtures found in them. The data presented in the table were collected from [20,115,128]. Notes: (1) The amounts of C, H, and N in the initial boron marked by asterisks (*) were obtained by using the Universal Micro Analyzer "vario MICRO cube" of the ELEMENTAR vario-analyzer family. (2) The manufacturing company provided the information about the amount of oxygen, the grain size, and the carbon and nitrogen contents (not marked by asterisks). (3) The higher amounts of C and N determined by the "vario MICRO cube" compared to the producer's estimates may be explained by chemical reactions during storage. (4) All "in-situ" materials were prepared from different types of amorphous boron using Mg(I) chips, and only the samples from Type II boron with C addition were prepared using Mg(II) powder.

To provide the required MgB2 stoichiometry, boron powders can be mixed and milled, for example, in a high-speed planetary activator for 3 min with magnesium turnings (denoted below as Mg(I)) or with magnesium powder < 1 µm (denoted below as Mg(II)) [20]. MgB2-based materials can also be prepared using previously synthesized MgB2 powder. If a superconducting material is prepared from an Mg and B mixture, the process is called synthesis, or in-situ; if the material is prepared from MgB2 powder, it is called sintering, or ex-situ. The critical current density, Jc, of MgB2 bulk samples is usually estimated from magnetization measurements using, e.g., a vibrating sample magnetometer (VSM) or a Physical Property Measurement System (PPMS), and the Bean model [102]. The superconducting transition temperature (critical temperature) is estimated using a SQUID magnetometer or the four-point method.
For the VSM measurements on samples with typical sizes of a few mm, the value of Jc is calculated by using Equation (1), where Δm is the hysteresis of the magnetic moment, V is the sample volume, and a_s and b_s are the sample dimensions perpendicular to the applied field, with a_s > b_s. The connectivity, A_F, is estimated according to Equation (2) from the difference in resistivity at 40 K and 300 K, ρ300 − ρ40, measured by the four-point method, where 9 µΩ·cm is assumed to be the electrical resistivity of polycrystalline MgB2 [6]. The volume pinning force was determined as Jc × B [131]. Below we present the upper critical magnetic field, Bc2, and the irreversibility field, Birr, which were determined using the four-point method, performing measurements in a 0-15 T field and applying a 10-100 mA current [20,85]. The SC shielding fraction can be calculated from the ac susceptibility, with a numerical correction accounting for the demagnetization of the actual sample geometry [109]. The typical dependences of the critical current density, Jc, on an external magnetic field at 20 K and 30 K are presented in Figure 1. Figure 1 presents the highest values found in the literature for bulk MgB2-based materials prepared by different methods. These samples were prepared using different initial types of amorphous B and MgB2 powders, both without and with the addition of SiC, Ti, and Ta in the amount of 10 wt%, and using boron into which some carbon was specially added during preparation, B(II). The improvement of the critical current density was achieved by the application of a higher manufacturing pressure or of a higher pressure of cold compaction (in the case of the pressureless-synthesized samples discussed below). The various technologies and initial materials provided the highest critical current density for different ranges of magnetic field and temperature. For example, at 20 K, sample 1 HP possessed the highest critical current density in relatively low fields, <5 T, while sample 4 HP had the highest in higher fields, >5.5 T (Figure 1a). The typical characteristics of MgB2-based samples prepared without additions from Mg:2B and MgB2 under different conditions were summarized from [98,103,108,119] and are presented in Table 2. Figure 2 allows for a comparison of the microstructures of the sintered (ex-situ) and synthesized (in-situ) MgB2. One can see that "black" inclusions, which correspond to higher magnesium borides, are present in both materials [109]. The brighter areas in the photos correlate with a higher amount of impurity oxygen, and the darker-looking areas with a higher concentration of boron in the MgB2-based materials.
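The following minimal sketch illustrates how Jc is typically obtained from the magnetization hysteresis via a Bean-model formula and how the connectivity A_F is obtained from the resistivity difference. The rectangular-sample prefactor assumed here for Equation (1) and the example numbers are assumptions for illustration, not values taken from the cited works.

```python
import numpy as np

DELTA_RHO_IDEAL = 9.0e-8   # ohm*m (9 microohm*cm), the value assumed in the text

def jc_bean(delta_m, volume, a_s, b_s):
    """Critical current density from magnetization hysteresis (Bean model).

    delta_m: hysteresis of the magnetic moment (A*m^2); volume in m^3;
    a_s > b_s: sample dimensions (m) perpendicular to the applied field.
    A commonly used rectangular-sample Bean formula is assumed here; the exact
    prefactor of Equation (1) in the source may differ.
    """
    return 2.0 * delta_m / (volume * b_s * (1.0 - b_s / (3.0 * a_s)))

def connectivity(rho_300, rho_40, delta_rho_ideal=DELTA_RHO_IDEAL):
    """Rowell-type connectivity estimate A_F = delta_rho_ideal / (rho_300 - rho_40)."""
    return delta_rho_ideal / (rho_300 - rho_40)

# Example with illustrative numbers only (2 x 2 x 2 mm sample)
jc = jc_bean(delta_m=1.0e-3, volume=8.0e-9, a_s=2.0e-3, b_s=1.0e-3)   # A/m^2
af = connectivity(rho_300=30e-8, rho_40=20e-8)
print(f"Jc = {jc:.3e} A/m^2, A_F = {af:.2f}")
```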
MgB12 inclusions, with sizes up to 10 µm and appearing as the darkest areas in the materials, are randomly distributed. These inclusions are large enough to allow for an estimation of their nano-hardness. Using a Berkovich indenter, the nano-hardness of the MgB2 matrix and of the inclusions with stoichiometry near MgB12 was studied [20,109]. The inclusions' nano-hardness of 32.2 ± 1.7 GPa and Young's modulus of 385 ± 14 GPa, estimated under a 10-60 mN load, turned out to be about twice as high as those of the material matrix.
Figure 3 shows the dependences of the critical current density on the magnetic field at 10-35 K for the samples demonstrating the highest Jc. The samples were prepared from boron of Type III by SPS under an optimal pressure of 50 MPa and by HotP under 30 MPa. The highest critical current densities in low magnetic fields were attained in the SPS materials prepared under 50 MPa at 1050 °C, and in the HotP materials under 30 MPa at 1000-1100 °C [20,119]. The materials sintered at 1050 °C by the SPS method from preliminarily prepared MgB2 powder (Type VII), i.e., ex-situ, demonstrated high critical current densities as well, but these were somewhat lower than those of the materials prepared from Mg:2B, i.e., in-situ (Table 2). The connectivity between the superconducting grains, A_F, and the shielding fraction, S (Table 2), were as follows: A_F = 80% and S = 100% for the ex-situ and A_F = 98% and S = 91% for the in-situ SPS-prepared materials at 50 MPa (at 600 °C for 0.3 h and then at 1050 °C for 0.5 h). The critical current density increased with the synthesis temperature. The explanation for this could be as follows. The material SPS-synthesized from Mg(II):2B(III) at 800 °C demonstrates a low density (74% of the theoretical one) and Jc = 0.4-0.36 MA/cm² in a 0-1 T field at 20 K (Table 2). The density of the material synthesized by SPS from Mg(II):2B(III) at 1050 °C was 94% of the theoretical value, and Jc = 0.5-0.45 MA/cm² in a 0-1 T field at 20 K. The typical structure of the SPS material is shown in Figure 4. One can observe big porous areas of MgB4-6 (Figure 4a,b). Note for all the images: the darkest spots match MgBx (x > 6) inclusions; the matrix with near-MgB2 stoichiometry appears as gray; the brightest spots are Mg-B-O nano-areas; and the dark-gray areas indicate near-MgB4-6 stoichiometry. Figure 5 shows the temperature dependences of the real part of the ac susceptibility for some materials HP-synthesized under 2 GPa for 1 h from Mg:2B. These dependences allow for the determination of the temperature of the superconducting transition, Tc, of the materials [108]. The measurements were carried out in an ac magnetic field with an amplitude of 30 µT, which varied with a frequency of 33 Hz. The critical temperatures of the tested samples were from 34.5 to 38 K. Figure 6 presents one of the important characteristics of superconductors, which determines the field of their application, the upper critical magnetic field, Bc2: it shows the temperature dependences of the highest upper critical magnetic fields for the HP, SPS, and HotP materials [120,132].
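A minimal sketch of extracting Tc from χ'(T) data such as those in Figure 5. The onset criterion used here (the temperature at which χ' reaches a small fraction of the full diamagnetic signal on cooling) is an illustrative choice; onset, midpoint, or other definitions are used in practice.

```python
import numpy as np

def tc_onset(temperature, chi_prime, fraction=0.1):
    """Estimate the superconducting transition temperature from ac susceptibility.

    temperature, chi_prime: arrays of T (K) and the real part of the ac
    susceptibility (normalized so that full shielding gives -1).
    Returns the highest temperature at which chi' is already below -fraction.
    """
    order = np.argsort(temperature)
    t, chi = np.asarray(temperature)[order], np.asarray(chi_prime)[order]
    below = np.where(chi <= -fraction)[0]          # points already diamagnetic
    return t[below[-1]] if below.size else float("nan")

# Example with synthetic data for a transition near 38 K
T = np.linspace(20, 45, 251)
chi = -0.5 * (1 - np.tanh((T - 38.0) / 0.4))       # -1 well below Tc, 0 above
print(tc_onset(T, chi))                            # approx. 38 K
```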
Let us consider, as an example, the structure of the sample prepared from Mg(II):2B(II) (boron with C addition) at 600 °C under 2 GPa (Figure 1, curve 4 HP). The sample demonstrates a low critical temperature, Tc, of about 34.5 K (Figure 5, curve 8) and possesses a low connectivity, A_F = 18%, and density (Table 2, line 5). Despite these low values, the sample demonstrates the highest critical current density in the magnetic field range of 6-10 T at 20 K (Figure 1, curve 4 HP) and the highest upper critical magnetic field, Bc2, of 15 T at 22 K (Figure 6, curve 1) presented in the literature. An extrapolation gives a Bc2 of 42 T at 0 K. Figure 7 shows the structure of this material under different magnifications.

Effect of Manufacturing Pressure

Usually, a higher manufacturing pressure allows a higher critical current density to be achieved for materials both without and with additions, due to an increase in the material's density and in the connectivity between superconducting grains (Table 2) [20,98,103,108,109,119]. Figure 8 presents the dependences of the critical current density vs.
external magnetic field for the MgB2-based materials prepared from the same Mg(I):2B(III) mixture by different methods at 800 and 1050 °C and under different pressures: 0.1 MPa (PL), 2 GPa (HP), 50 MPa (SPS), and 30 MPa (HotP). A comparison of curves 1 and 2, as well as of curves 3, 4, 5, and 6, demonstrates the positive effect of a pressure increase. During synthesis in a flow of Ar at 1050 °C and under a pressure of 0.1 MPa, some amount of Mg evaporated after 15 min of heating at 1050 °C. X-ray diffraction studies have revealed that the matrix of the synthesized material acquires the structure of MgB4 [109]. The sample prepared under such conditions was non-superconducting. Previously, it has been shown that cold densification at 2 GPa does not improve the results. However, materials synthesized under a high pressure of 2 GPa at 800 and 1050 °C have MgB2 matrices and demonstrate high critical currents. After a 15 min holding time at 1050 °C in flowing Ar under 0.1 MPa, some amount of Mg evaporates and non-superconducting MgB4 is formed (instead of MgB2). An increase in the holding time up to 2 h at 1050 °C results in more intensive Mg evaporation and in the formation of the MgB7 matrix phase, which is non-superconducting as well [109]. In the materials synthesized in flowing Ar under 0.1 MPa, by SPS under 50 MPa, and by HP under 2 GPa, one can observe grains of higher magnesium borides MgBx (x = 4-20), which look the blackest in photos of the microstructures. The MgBx (x = 4-20) phase inclusions are larger, and their amount is higher, in materials produced at low temperatures compared to materials produced at high temperatures.

Effect of Manufacturing Temperature

One important factor influencing the superconducting properties of bulk MgB2 material is the manufacturing temperature. The dependences of the superconducting properties on the manufacturing temperature are associated with variations in the MgB2 structures [85,109,110,117]. The typical structures of MgB2 materials synthesized at low (800 °C) and high (1050 °C) temperatures under 2 GPa are shown in Figure 9a,b [132]. The X-ray analysis of both MgB2-based materials shows that they contain MgB2 and MgO phases. However, SEM and EDX analyses and an Auger spectroscopy study indicate the presence of three main phases in the materials, the first of which is a matrix with near-MgB2 stoichiometry containing a small amount of an oxygen impurity (grey areas in the images).
In the materials synthesized in flowing Ar under 0.1 MPa, by SPS under 50 MPa, and by HP under 2 GPa, one can observe grains of higher magnesium borides, MgBx (x = 4-20), which look the blackest in the photos of the microstructures. The MgBx (x = 4-20) phase inclusions are larger, and their amount is higher, in the materials produced at low temperatures compared to the materials produced at high temperatures.

Effect of Manufacturing Temperature

One important factor influencing the superconducting properties of MgB2 bulk material is the manufacturing temperature. The dependences of the superconducting properties on the manufacturing temperature are associated with variations in the MgB2 structures [85,109,110,117]. The typical structures of MgB2 materials synthesized at low (800 °C) and high (1050 °C) temperatures under 2 GPa are shown in Figure 9a,b [132].

The X-ray analysis of both MgB2-based materials shows that they contain MgB2 and MgO phases. However, SEM and EDX analyses and an Auger spectroscopy study indicate the presence of three main phases in the materials: (1) a matrix with near-MgB2 stoichiometry, which contains a small amount of an impurity of oxygen (grey areas in the photos); (2) inclusions (grains) of higher magnesium borides, MgBx, with x >> 2, which look the blackest; and (3) nanolayers (at a low synthesis temperature) or separate oxygen-enriched inclusions (at a higher temperature) with a stoichiometry close to MgBO (the oxygen-enriched places look the brightest) [108].

The forms of the Mg-B-O inclusions depend on the manufacturing temperature and are principally different. In the MgB2 material synthesized at the low (800 °C) temperature, they form nanolayers, marked "L" in Figure 9a, while at the high (1050 °C) temperature they are separate inclusions, marked "I" in Figure 9b [109]. The difference is schematically shown in Figure 9c,d. The MgBO inclusions can play the role of pinning centers, and the difference in their structures is reflected in the different dependences of the critical current densities on the magnetic field. Moreover, the effect of oxygen aggregation increases with the manufacturing temperature. Besides, a reduction with temperature in the amount and sizes of the higher magnesium boride inclusions (which appear the blackest) has been observed.

The manufacturing temperature of MgB2 superconductors can be varied in a rather wide range of 600-1200 °C. The application of a higher pressure allows for an increase in the manufacturing temperature of MgB2 superconductors, because higher pressures prevent the evaporation of magnesium at higher temperatures and the resulting changes in the material's stoichiometry.
As an example of the influence of the manufacturing temperature, Figure 10 presents the critical current densities of the materials synthesized from different types of initial boron, without and with Ti and SiC additions, at the low (800 °C) and high (1050 °C) temperatures. One can see that synthesis at the low temperature allows for the achievement of higher critical currents in higher magnetic fields. However, synthesis at the high temperature leads to higher critical currents in low magnetic fields. This is observed for a temperature range from 10 to 35 K and in external magnetic fields up to 10 T [20,103,109].

Pressure-Temperature Effect on Pinning in MgB2

The pinning force was estimated and the types of dominant pinning were determined for the MgB2-based superconductors in [7,69,71,91,128,131].
Table 3 and Figure 11 summarize the results of these studies, which were presented in [7,128]. The materials tested in these works were prepared under different pressure-temperature conditions. The dominant pinning mechanism was determined using the method proposed in [131]. This mechanism was determined from the volume pinning force, Jc × B, according to the following procedure: "The field Bpeak, where the maximum of the volume pinning force Fp takes place, is normalized by the field Bn, at which the volume pinning force drops to half its maximum (on the high external field side). The position of the peak, k = Bpeak/Bn, is expected to be at 0.34 and 0.47 for grain boundary pinning (GBP) and point pinning (PP), respectively".

Figure 11a shows the typical dependences of the maximal pinning force and of the field Bn at 20 K on the manufacturing pressure and temperature. At the low temperature (800 °C), the volume pinning force has a maximum at a manufacturing pressure of 50 MPa. At the high temperature (1050 °C), this force increases monotonically with the pressure [128]. An increase in pressure (up to 2 GPa) usually leads to a reduction in porosity (from 47% to 1%) and, as noted above, to an enhancement of the critical current density. Fp(max) is also increased by the addition of Ti or SiC, both in the low- and the high-temperature-synthesized materials (Table 3). The pinning forces in the in-situ prepared samples are higher than those in the ex-situ ones. The position of Fp(max) shifts to higher magnetic fields with the manufacturing pressure and due to the addition of Ti or SiC. A shift has also been observed in the case of in-situ preparation (compared to ex-situ) [98]. The pinning type GBP dominates in the materials prepared at low temperatures (600-800 °C), while high-temperature preparation results mainly in PP or intermediate behavior, so-called mixed pinning (MP). Exceptions have been found for the materials produced by SPS (the k values were too high for the PP mechanism). These materials contain a wide range of higher magnesium borides, MgBx (x = 4-20), within their structure [20,109,119,128].

The studies of the samples prepared under pressures in the range of 16-96 MPa have shown that a manufacturing pressure of about 50 MPa turns out to be optimal for the SPS synthesis method.

The samples with different magnetic fields, Bpeak, corresponding to the maximum pinning force, Fp, demonstrate different behaviors of the critical current density. An increase in the magnetic field Bpeak usually leads to a decrease in the critical currents in low fields, and to a significantly slower reduction with increasing field (compare, e.g., curves 1 and 4 in Figure 11b,c).
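As an illustration of the pinning-type criterion quoted above, the sketch below computes the volume pinning force Fp = Jc × B from a Jc(B) curve, locates Bpeak and the field Bn at which Fp falls to half of its maximum on the high-field side, and classifies the dominant pinning from k = Bpeak/Bn. The numerical Jc(B) values are placeholders, not data from the cited works.

```python
import numpy as np

def pinning_type(b_tesla, jc):
    """Classify the dominant pinning from k = B_peak / B_n (criterion quoted in the text)."""
    b = np.asarray(b_tesla, dtype=float)
    fp = np.asarray(jc, dtype=float) * b              # volume pinning force Fp = Jc * B
    i_peak = int(np.argmax(fp))
    b_peak, fp_max = b[i_peak], fp[i_peak]
    # B_n: field on the high-field side where Fp drops to half of its maximum
    fp_high, b_high = fp[i_peak:][::-1], b[i_peak:][::-1]   # reversed so Fp is increasing for interp
    b_n = float(np.interp(0.5 * fp_max, fp_high, b_high))
    k = b_peak / b_n
    label = "GBP (k ~ 0.34)" if abs(k - 0.34) < abs(k - 0.47) else "PP (k ~ 0.47)"
    return b_peak, b_n, k, label

# Placeholder curve, for illustration only: Jc decaying roughly exponentially with field.
b = np.linspace(0.5, 10.0, 40)
jc = 1e9 * np.exp(-b / 2.0)
print(pinning_type(b, jc))
```

In practice, k values intermediate between 0.34 and 0.47, or outside this range (as reported above for the SPS materials), point to mixed behavior rather than to a single dominant mechanism.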
Characteristics of Initial Compounds and Critical Current Densities

The grain boundaries and the amount of impurity oxygen can influence the pinning and the critical current density of the synthesized (in-situ) and sintered (ex-situ) magnesium diboride-based materials [103].

In previous publications, the following correlations have been assumed to be important for changing the superconducting characteristics of materials based on magnesium diboride:
- the amount of oxygen in the initial boron and magnesium diboride powders and the oxygen concentration in the superconducting matrices of the MgB2 bulk materials;
- the average grain sizes of the initial boron and magnesium diboride and the average sizes of the grains in the superconducting phase;
- the amount of oxygen in and the grain sizes of the initial components and the critical current density;
- the oxygen amount and the grain sizes in the prepared superconducting materials and the critical current densities.

The authors of [103] demonstrated that no correlation could be found between the average grain size (in the range of 0.8-9 µm) or the impurity oxygen content (0.66-1.9 wt%) of the different initial B or MgB2 powders and the amount of oxygen in the superconducting bulk MgB2 prepared using HP. The oxygen content (estimated by SEM EDX) in the in-situ prepared MgB2 was 7-24 wt%, and in the ex-situ one it was 4-12 wt%.

The grain boundaries in MgB2 can be considered as pinning centers for Abrikosov vortices. A higher density of pinning centers leads to a higher critical current density, Jc. Smaller grains and, thus, a higher total surface of grain boundaries in MgB2 should provide stronger pinning and a higher Jc.

The critical current density, the average crystallite sizes calculated from the line broadening of the MgB2 phase in the X-ray diffraction patterns (Equation (3)), and the lattice parameters of the MgB2 phase for the ex-situ and in-situ materials prepared under 2 GPa are presented in Table 4.

Table 4. The critical current density, Jc, and the lattice parameters of the MgB2 phase vs. the average size of the crystallites (grains) in the superconductor high-pressure sintered from MgB2(VI) and synthesized from Mg(I):2B(III) [103].

The average crystallite sizes of the bulk MgB2-based superconductors were calculated from the line broadening of the MgB2 phase in the X-ray diffraction patterns by the standard program, using Scherrer's equation:

D = Kλ/(Wsize cos θ), with Wsize = (Wb² − Ws²)^1/2, (3)

where Wsize is the broadening caused by small crystallites; Wb is the broadened profile width; Ws is the standard profile width of 0.08°; K is the shape factor; λ is the X-ray wavelength; and θ is the Bragg angle. The value of the K factor in Scherrer's equation was set by default to 0.9 [103].
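A minimal sketch of this crystallite-size estimate, assuming the instrumental broadening is removed in quadrature before applying Scherrer's equation with K = 0.9. The reflection position, the wavelength (Cu Kα is assumed), and the profile widths below are placeholders; the exact peak and fitting procedure used in [103] may differ.

```python
import math

def scherrer_size_nm(two_theta_deg, w_b_deg, w_s_deg=0.08, k=0.9, wavelength_nm=0.15406):
    """Crystallite size D = K*lambda / (W_size * cos(theta)), with W_size = sqrt(Wb^2 - Ws^2).

    two_theta_deg -- position of the reflection, in degrees 2-theta
    w_b_deg       -- measured width of the broadened profile, in degrees
    w_s_deg       -- standard (instrumental) profile width, 0.08 deg as quoted in the text
    wavelength_nm -- assumed Cu K-alpha radiation; not stated in the review
    """
    w_size_rad = math.radians(math.sqrt(w_b_deg**2 - w_s_deg**2))
    theta_rad = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (w_size_rad * math.cos(theta_rad))

# Placeholder numbers: a broadening of ~0.45 deg for the MgB2 (101) reflection (~42.4 deg 2-theta)
# gives a crystallite size of roughly 20 nm, the scale of the values reported in Table 4.
print(f"D ~ {scherrer_size_nm(42.4, 0.45):.0f} nm")
```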
There were no correlations between the sizes of the crystallites (grains) in the manufactured bulk MgB2 and the critical current density, Jc (at 10 and 20 K in a 1 T field), for either the in-situ or the ex-situ superconductors manufactured under a pressure of 2 GPa (Table 4) [20,103].

The average crystallite (or grain) sizes of the MgB2 obtained using the HP method increased slightly with the preparation temperature (for example, in the range of 700-1000 °C, Table 4), more noticeably for the in-situ MgB2 (from 15 to 37 nm) than for the ex-situ one (from 18.5 to 25 nm) [103]. The in-situ MgB2, with somewhat bigger crystallites, demonstrated a higher Jc, which looks contradictory. The explanation may be that Jc is influenced in parallel by other factors. The critical current density can also be strongly influenced by the distribution of impurity oxygen in the MgB2 structure and by the formation of inclusions of higher magnesium borides, which are also affected by the production temperature. This is discussed below in this review.

Up to now, it has not been entirely clear which set of characteristics of the initial boron or MgB2 could guarantee the achievement of a high critical current density in bulk MgB2 superconductors. Of course, a high level of purity is very important, but it does not give a hundred-percent guarantee of high quality from the point of view of the superconducting characteristics of the synthesized superconductors. The authors of [103,108,125] have studied the effect of the boron concentration in the initial mixtures on the structure and superconducting properties of the HP-synthesized materials.

The concentration of boron in the MgBx inclusions, which are present in the MgB2 matrix, varies in a wide range. Along with the superconducting MgB2, there exist several stable, non-superconducting, higher magnesium borides (MgB4, MgB7, MgB12, MgB17, MgB20, and Mg2B25). The higher magnesium borides can crystallize in the MgB2 matrix and can affect pinning. By changing the pressure-temperature-time conditions, one can change the stoichiometry of the higher boride inclusions and the areas they occupy in the MgB2 matrix. The higher magnesium borides MgBx in the high-pressure (2 GPa) manufactured materials demonstrate x = 9-14, mostly around 12.
In the spark-plasma-manufactured materials, the MgBx phases with x = 4-6 occupy rather porous and rather large areas, which appear as the gray areas in Figure 4a,b. Small inclusions, with x = 8-16, are also present in the material and are shown as the black areas in Figure 4. The MgBx inclusions with x = 6-8 are found in the materials synthesized by the hot-pressing method. This allows for the assumption that pressure plays an essential role in the stoichiometry of the MgBx inclusions of higher magnesium borides. The inclusions with x = 18-25, or even of pure B, appear in the structure randomly and, thus, cannot influence the material characteristics as a whole [109].

MgBx inclusions are practically "invisible" to a traditional X-ray diffraction analysis, despite the essentially different amounts of boron, the crystallographic structures of the higher magnesium borides, and their properties (e.g., nano-hardness). The reason could be their fine dispersion in the material structure and the large number of atoms in unit cells of low symmetry, which results in a large number of "reflecting planes". This essentially reduces the intensities of the X-ray reflections from the higher magnesium boride grains randomly distributed in the MgB2 matrix, so they cannot be seen against the background of the very strong reflections from MgB2 [109].

The study of the influence of the boron concentration on the superconducting material properties has been performed using initial mixtures of Mg(I) and B(III) [103,108,125]. The components were mixed and milled in a high-speed planetary activator for 3 min with steel balls, and then the materials were synthesized under 2 GPa at 800 and 1050 °C for 1 h. The following mixtures were investigated: Mg(I):4B(III), Mg(I):6B(III), Mg(I):8B(III), Mg(I):10B(III), Mg(I):12B(III), and Mg(I):20B(III). The results for the critical current, Jc, and temperature, Tc, obtained by a vibrating sample magnetometer and PPMS are shown in Figure 12. Rather high critical current densities (Figure 12c,d), as well as a superconducting transition temperature of about 35 K (Figure 12b), were estimated from the magnetization loops of the materials prepared from the Mg(I) and B(II) mixtures taken in Mg:8B and even Mg:20B proportions. For example, an X-ray analysis showed that a high amount of the MgB2 phase was present in the materials prepared from the Mg:8B (Figure 12b,e) and Mg:12B (Figure 12a-c) mixtures. However, a study using the four-probe method allowed for the conclusion that there was no transport current flowing through the samples [103,108,125]. Figure 12d demonstrates the microstructure, obtained by TEM, of a MgB12 grain, the stoichiometry of which was estimated by TEM EDX.
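The review states that the critical current densities above were estimated from magnetization loops, but it does not reproduce the formula. A common choice for a rectangular sample is the Bean critical-state expression, sketched below; the formula and the numerical inputs are illustrative assumptions, not necessarily the exact procedure of [103,108,125].

```python
def bean_jc_rect(delta_m_emu_per_cm3, a_cm, b_cm):
    """Bean critical-state estimate of Jc for a rectangular cross-section a x b (a <= b).

    Jc [A/cm^2] = 20 * dM / (a * (1 - a / (3 * b))), with dM (the width of the
    magnetization loop) in emu/cm^3 and the sample dimensions a, b in cm.
    """
    if a_cm > b_cm:
        a_cm, b_cm = b_cm, a_cm                    # enforce a <= b
    return 20.0 * delta_m_emu_per_cm3 / (a_cm * (1.0 - a_cm / (3.0 * b_cm)))

# Placeholder inputs: dM = 500 emu/cm^3 for a 0.1 cm x 0.2 cm cross-section
print(f"Jc ~ {bean_jc_rect(500.0, 0.1, 0.2):.2e} A/cm^2")    # ~1.2e5 A/cm^2
```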
Effect of Additions on Structure and Superconductive Characteristics of MgB2

As mentioned in the Introduction, for more than 20 years since the discovery of superconductivity in MgB2, scientists have been exploring the possibility of increasing the pinning and, hence, the critical current density using various additives. The positive effects of C, C-containing compounds, Ti, Ta, Zr, compounds (borides and carbides) containing these metals, SiC, BN, Si3N4, NbB2, Dy2O3, SnO2, Sn-O, Ti-O, Rb2CO3, Cs2CO3, etc., have been reported. However, the reported effects of some additives, such as SnO2, Sn-O, and Dy2O3 [94-98,121], have appeared contradictory due to a combination of factors acting in parallel. In some cases, a significant improvement has been achieved by increasing the density of the materials without additives, or the effect of the additives has been negligible and lies within the range of measurement error. The authors of [94-97] have claimed that additions of SnO2 and Dy2O3 can lead to an increase in the critical current density, but the authors of [98,121] have demonstrated that these oxygen-containing additions reduce the critical current density or do not lead to a notable change in it. Here, we give a more detailed description of the effects of C, Ti, TiH2, Ta, Zr, SiC, and Ti-O, since, in our opinion, their effects have received more confirmation in the literature. In earlier publications [81,133,134], the positive effect of Ti and Zr additions on the critical current density was explained by the formation of TiB2 and ZrB2 inclusions in thin (atomic-size) layers, which improve pinning. However, the mechanism of the influence of the Ti and Zr additions has not been proven experimentally. The positive effect of SiC additions has been explained in [69-72,74] as follows: carbon enters the MgB2 structure after the decomposition of SiC into C and Si, the latter forming Mg2Si. The SiC additive thus acts as a source of carbon. Carbon, in small amounts, can form a solid solution in the superconducting MgB2 phase, somewhat decreasing the transition temperature but essentially increasing the upper critical and irreversibility magnetic fields, i.e., increasing the critical current density in high magnetic fields.
A review of the publications [84,103,108,110,111,125,135-138] in which the influence of Ti, Zr, and Ta additions is studied has shown that the effects of these additions differ from that of SiC additions. No diffusion of Ti, Ta, or Zr into the MgB2 was found in the samples prepared under 2-3 GPa at 700-1100 °C [83,125] and, as a result, the inclusions of the phases containing Ti or Ta are too big and too randomly distributed to be efficient pinning centers (Figure 13a,b). However, the presence of Ti or Ta causes an increase in the amount of inclusions with a stoichiometry near that of MgBx (x ≈ 12) in the HP-prepared materials (Table 5) [53,55,95]. At low synthesis temperatures (700-850 °C under 2 GPa), Ta and Ti transform into hydrides by adsorbing impurity hydrogen (Figure 13e), which may come from the atmosphere or from materials that were in contact with the Mg-B mixture during mixing or synthesis. Therefore, these additions prevent the formation of MgH2 (Figure 9f), the presence of which decreases the critical current [53,55]. The X-ray diffraction patterns shown in Figure 13e,f indicate that, when Ti is added to a mixture of magnesium and boron, TiH2 is formed along with magnesium diboride and an admixture of magnesium oxide; MgH2 is not formed. The formation of only one titanium-containing phase, TiH1.924, in the materials prepared under 2 GPa at 800 °C has been confirmed by TEM and NanoSIMS ion mapping [139]. This fact looks unusual from the point of view of thermodynamics, because the formation enthalpy of titanium hydride (TiH2), −15.0 kJ mol−1, is higher than that of titanium boride (TiB2: from −150 to −314 kJ mol−1) or of the titanium oxides (TiO2: −944.057 kJ mol−1; Ti2O3: −1520.9 kJ mol−1; Ti3O5: −2459.4 kJ mol−1) [123]. There is a lot of impurity oxygen in the material, and it contains boron, but only TiH2 is formed at a low synthesis temperature [139]. At higher synthesis temperatures, TiH1.924 and TiB2 form (Figure 13f).

The absorption of hydrogen and, thus, the prevention of the formation of MgH2 by Ta and Zr additions has been observed, as in the case of Ti additions [103]. However, Ti is the most powerful absorbent of these three metals. Note also that the addition of Ti to the MgB2 mixture, or even the synthesis of a big MgB2 block wrapped in a Ti foil, prevents an MgB2 sample from cracking, due to the absorption of impurity hydrogen by Ti.
When Ti and Ta were added to the initial Mg:2B mixture, in addition to hydrogen absorption, another effect was also observed: additions of Ti and Ta promote the formation of higher magnesium boride inclusions [103]. Within the structure of the MgB2 materials synthesized using the HP method with Ti and Ta additives (Table 5), a larger amount (N) of the magnesium boride phase with a stoichiometry close to MgB12 was observed, compared to the material without additives. Here, N is the ratio of the area occupied by the MgB12 inclusions in the COMPO image obtained at 1600× magnification to the total area of this image. A higher amount of the higher magnesium boride phase correlates with higher critical currents in the 1 T field. So, the addition of Ti can affect the boron distribution in the MgB2-based material. This can be seen in Figure 15b, for example, where the density of the black inclusions (higher magnesium borides) is much higher around the Ti inclusions.

At a low synthesis temperature (800 °C), in the MgB2-based materials synthesized using the HP method, Ti promotes the aggregation of oxygen into individual oxygen-enriched Mg-B-O inclusions, in contrast to the material without additives, which contains Mg-B-O nanolayers (Figure 13c). The average amount of oxygen in the matrix of the sample with the Ti addition is about 5 wt% (as SEM EDX showed), while in the matrix of the material without Ti additions, with Mg-B-O nanolayers, it is about 8 wt%.
Although there is not yet a complete understanding of the mechanism of the influence of titanium on the characteristics of MgB2, a material based on MgB2 with titanium additives with large (about 60 µm, Figure 15) grains has provided some insight into the processes occurring during synthesis. An analysis of the interaction zones around the titanium grains (Figure 15, Table 6) allows us to come closer to an explanation of the observed oxygen and boron redistributions caused by the Ti addition. As mentioned above, the density of the higher magnesium boride inclusions, MgBx, is higher around the Ti grains than in the MgB2 matrix (Figure 15b). Inclusions enriched by magnesium and oxygen (which look the brightest in Figure 15) are observed inside the Ti grain near its boundary; they were formed as a result of Mg and O diffusion. Mg-B-O inclusions with a somewhat smaller amount of oxygen (points 1 and 2 in Figure 15c) than in those inclusions (points 5 and 6 in Figure 15c) are observed near the grain boundary, inside the Ti-containing grain. Magnesium diffuses into titanium more intensively than boron (compare points 3 and 4 with points 5 and 6 in Figure 15c and Table 6) [113]. Magnesium and oxygen diffuse deeper into the Ti grain (Figure 15) than boron, and this could explain the redistributions of boron and oxygen in MgB2, and possibly the formation of the higher magnesium boride grains. A layer containing boron is located nearest to the boundary inside the Ti grain (points 3 and 4 in Figure 15c and Table 6). In Figure 15, "I" denotes Mg-B-O inclusions and MgBx denotes higher magnesium borides; in Figure 15c, the points marked No. 1-6 are those for which the quantitative Auger analyses were made, the results of which are summarized in Table 6 [113].

Table 6. Results of the quantitative Auger analysis [atomic %] made for the points marked No. 1-6 in Figure 15c, located at the boundary between the MgB2 and big (about 60 µm) Ti grains in the sample prepared under 2 GPa at 800 °C for 1 h. The sample was etched in Ar in a JAMP-9500F chamber before the study [113].

To summarize the influence of the Ti addition on the structure and characteristics of the MgB2-based materials, we conclude the following. (1) The hydrogen impurity is adsorbed by Ti. (2) A redistribution of the oxygen impurity is caused, i.e., the effect of the titanium additive is similar to that of an increase in the preparation temperature; note that, if titanium is added, the oxygen aggregation occurs even at a low synthesis temperature. (3) The Ti addition increases the number of inclusions of higher magnesium borides, MgBx (x > 4).
The TiH2 phase is present in both the low- and the high-temperature-synthesized materials, as detected by X-ray diffraction. TiH2 coexists with TiB2 in the high-temperature-synthesized samples. In the case where TiH2, in an amount of 10 wt%, was specially added to the Mg:2B mixture [84], a high porosity was observed after synthesis (Figure 16a). The high porosity results in an essential reduction (by more than two orders of magnitude) in the critical current density in comparison with the materials without this addition. Figure 16b-d shows the EDX maps of the boron, oxygen, and magnesium distributions over the area of the image shown in Figure 16e (the brighter the area looks, the higher the amount of the element under study) [103].
Effect of SiC Additions

The structures of the magnesium diboride synthesized with additions of SiC (200-800 nm grain sizes) under 2 GPa at 800 and 1050 °C for 1 h from Mg(I):2B(I) are shown in Figure 17a-h [20,125,126]. The sample synthesized at 1050 °C has the highest critical current density reported in the literature (Figure 10c). The X-ray study did not reveal a visible interaction between MgB2 and SiC, but did reveal the formation of Mg2Si (Figure 17). The addition of SiC, as in the case of Ti, promotes the aggregation of the oxygen impurity into separate inclusions, even at 800 °C (the brightest small inclusions in Figure 17c). The superconducting characteristics of the HP-synthesized MgB2 samples in which Mg2Si is detected by X-ray are not so high, and are sometimes even lower than those of the materials without additions, which indicates that overdoping with carbon is not useful. An interesting fact is that SiC additions improve Jc if the initial boron contains the smallest amount of the oxygen admixture (Figure 10c), but are not effective when the boron contains a higher amount of the oxygen admixture. In the case of Ti additions, it is vice versa. It has been assumed that nanosized grains of SiC can act as pinning centers in the MgB2 matrix [41-44]. The oxygen-enriched Mg-B-O inclusions are invisible in the image obtained by SEM in the COMPO regime (Figure 17h), but are seen very well in SEI mode, as the brightest small inclusions in Figure 17g. And, vice versa, the SiC inclusions are seen very well in the COMPO regime and are not so bright in SEI mode. Thus, using the SEM SEI and COMPO modes, the inclusions of SiC and Mg-B-O can be revealed in the MgB2 matrix. Some SiC grains are agglomerated, but some of them are rather small. The boundaries of the SiC grains can play the role of additional pinning centers. The SiC additions also promote the agglomeration of the oxygen admixture into separate inclusions, even at low synthesis temperatures. As in the case of the Ti addition (Figure 10d), the mechanism of the positive effect of SiC additions on Jc (Figure 10c) is not yet fully understood.
Effect of Ti-O and TiC Additions

The effect of Ti-O and TiC additions on the superconducting properties of MgB2 superconductors prepared under HP conditions has been studied by the authors of [85]. Figure 18 presents the magnetic field dependences of the critical current density, Jc, and the temperature dependences of the irreversibility, Birr, and upper critical, BC2, magnetic fields of the MgB2 materials, both without and with additions of TiC and Ti-O. For comparison, the characteristics of the material prepared from Mg(I):2B(III) with Ti additions are also presented. In Figure 18g,h, the temperature dependences of BC2 and Birr of the superconductors prepared using HyperTech-produced boron (B(II)) and fine Mg(I), with specially added carbon (3.5 wt%), are also shown. The sample with a 10% Ti addition prepared under 2 GPa at 1050 °C has the highest critical current density in magnetic fields of 1-5 T (Figure 18b). Although the critical current density, Jc, of the MgB2-Ti-O synthesized at 800 °C is lower than those of the MgB2, MgB2-Ti, and MgB2-Ti-O samples synthesized at 1050 °C (Figure 18a-f), its magnetic fields Birr and BC2 are higher (Figure 18g,h). The MgB2-TiC sample synthesized at 800 °C has an upper critical magnetic field about equal to that of the samples without additions prepared at 800 and 1050 °C.
The irreversibility field, Birr, of the MgB2-TiC is lower than that of the MgB2 prepared at 800 °C.

Table 7 presents the results of the study of the connectivity, AF, the shielding fraction, S, and the transition temperature, Tc [85]. All the materials have a shielding fraction of 86-100%, but their connectivities are rather different. Thus, a connectivity near 80% is demonstrated by the materials without additions prepared at 800 °C and 1050 °C. The materials with Ti additions have the highest critical current density, Jc, in fields up to 4 T (Figure 18b,e), but their connectivity is lower than that of the materials without additions synthesized at the same temperatures (Table 7). The MgB2-TiC sample has a somewhat lower connectivity than that of the materials with Ti additions. The MgB2-TiC critical temperature, Tc, is the highest (Table 7), but its critical current density, Jc, at 1-5 T is the lowest (Figure 18). The lowest connectivity, but the highest magnetic fields BC2 and Birr, are demonstrated by the MgB2-Ti-O sample synthesized at 800 °C.

All the materials studied in [85] were prepared from the same initial B(III) and Mg(I). The variations in the compositions of the material structures are shown in Table 8. The matrices of MgB2 contain less impurity oxygen than the Mg-B-O inclusions, and no carbon in the case of the Ti-O addition, in contrast to the case of the TiC addition (Table 8). The Ti-O inclusions absorb (or react with) a rather high amount of Mg and a small amount of carbon (Table 8).
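The connectivity values, AF, quoted above are typically obtained from Rowell's resistivity analysis; the review does not reproduce the formula, so the sketch below is only an illustration of that general approach. The ideal resistivity change of about 7.3 µΩ·cm is a commonly used literature value, and the sample resistivities are placeholders; the exact procedure of [85] may differ.

```python
def rowell_connectivity(rho_300k, rho_40k, delta_rho_ideal=7.3):
    """Active cross-section (connectivity) A_F = delta_rho_ideal / delta_rho_measured.

    Rowell's analysis compares the measured resistivity change between 300 K and ~40 K
    (just above Tc) with that of fully connected MgB2 (~7.3 uOhm*cm). All resistivities
    are in uOhm*cm. Illustrative only; not necessarily the procedure used in [85].
    """
    return delta_rho_ideal / (rho_300k - rho_40k)

# Placeholder resistivities: rho(300 K) = 20 uOhm*cm, rho(40 K) = 10.9 uOhm*cm
print(f"A_F ~ {rowell_connectivity(20.0, 10.9):.0%}")   # ~80%, the level quoted above for the addition-free samples
```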
Structure of Superconducting Magnesium Diboride and Substitution of Boron Atoms by Oxygen and Carbon

The typical structures of MgB2 materials synthesized at low (800 °C) and high (1050 °C) temperatures under 2 GPa are shown in Figure 9. As established in [20,56,78,79,87], the structure changes caused by an increase in the synthesis temperature are schematically shown in Figure 9c,d. An X-ray analysis of the MgB2-based materials synthesized at 1050 °C shows that they contain MgB2 and MgO phases (Figure 9e,f). However, SEM and EDX analyses and an Auger spectroscopy study indicate the presence of three main phases in the materials (Figure 9): (1) a matrix with near-MgB2 stoichiometry, which contains a small amount of impurity oxygen (grey areas in the photo); (2) inclusions (grains) of higher magnesium borides, MgBx, with x >> 2, which look the blackest; and (3) nanolayers (if the synthesis temperature was low) or separate oxygen-enriched inclusions (if the temperature was higher) with a stoichiometry close to MgBO (the oxygen-enriched places look the brightest, or white) [108].

The possibility of impurity or specially added carbon atoms replacing boron atoms in MgB2 is well known. The results of an Auger study and of the Rietveld refinement of the X-ray patterns of the materials with high critical current densities show that a small amount of oxygen, 0.2-0.32 atoms per unit cell of MgB2, is present in all the studied materials.

To analyze the presence of the Mg, B, and O elements, a quantitative Auger analysis over the depth of the MgB2 material matrix (a so-called "depth profile") was used in [85]. The quantity of the elements was estimated at the same place of the structure (marked by a white cross in Figure 9b) after each of multiple etchings by Ar ions in the chamber of the microscope. The Auger analysis shows that the MgB2 matrix phase contains some amount of oxygen, and the stoichiometry of the oxygen-containing phase is about MgB2.2-1.7O0.4-0.6. The set of quantitative Auger tests was performed down to a depth of 200-300 nm. The Auger spectra indicate the presence of a constant amount of oxygen in the MgB2 matrix, which, in turn, can testify to the formation of solid solutions of oxygen in MgB2.

These facts stimulated the authors of [10,11,117,130,132] to perform detailed structural studies of MgB2 and modeling of the electron density in MgB2−xOx structures, the binding energy, the structure variations, and the enthalpy of solid solution formation.

Rietveld refinements of the MgB2 phases in the X-ray patterns of 10 samples with high critical current densities have demonstrated that they contained some dissolved oxygen, the amount of which was very similar in all the materials, within the MgB1.68-1.8O0.2-0.32 stoichiometry [10,11,117,130].
The results of ab-initio modeling have shown that the replacement of boron atoms with oxygen is energetically favorable if oxygen is substituted for boron up to the composition MgB1.75O0.25 (the formation enthalpies of MgB2 and MgB1.75O0.25 were estimated as ΔHf = −150.6 meV/atom and ΔHf = −191.4 meV/atom, respectively).
In the case of carbon substitution, even very small levels of doping can essentially affect the superconducting characteristics of a material, due to the change in its electron density. However, if oxygen substitutes for boron (especially in nearby positions of the same boron layer of the MgB2 unit cell), the substitution changes the superconductive properties of MgB2 only slowly. The formation of vacancies at the Mg site in both the MgB2 and MgB1.75O0.25 phases has also been modeled. However, it was found that this vacancy formation is energetically disadvantageous. It was estimated by the authors of [87] that the ΔHf values of Mg0.875B2 and Mg0.75B1.75O0.25 are equal to −45.5 and −93.5 meV/atom, respectively.

The X-ray study of the MgB2 prepared from Mg(I):2B(I) under 2 GPa at 1050 °C for 1 h demonstrates that MgB1.71O0.29 and MgO constitute the structure of the main matrix phase (Figure 19). The dependence of the critical current density of this sample on temperature and magnetic field is shown in Figure 19b.

Various theoretical aspects of MgB2 have been considered in many publications (e.g., [70,72,81,85,140] and the references therein). Here, we briefly discuss the recently obtained results of calculations of the electronic states in MgB2.

Calculations of the density of electronic states N(E) (DOS) for different concentrations of oxygen substituting for boron were performed in [132], under the assumption that the oxygen atoms occupy the same positions as the substituted boron ones. The authors of [132] found changes in the positions of the N(E) peaks, marked I, II, and III in Figure 20. The calculated DOS N(E) for the Mg-B-O supercells revealed a significant hybridization of the s and p states of Mg, B, and O. With an increase in the oxygen content, x, in MgB2−xOx, the hybridization of the Mg, B, and O states ensures an increase in the DOS N(E) near the Fermi level, EF (Figure 20d). An increase in N(EF) with the oxygen concentration (x→1) leads to an increase in the total energy, and the minimum of the free energy cannot be realized. This may explain the appearance of separate oxygen-enriched inclusions, such as MgO and Mg-B-O, with increasing oxygen concentration [132].
The calculations of the DOS for the MgB2−xOx and MgB2−xCx compounds for 0 < x ≤ 1 demonstrate that all the compounds have a metal-like behavior near the Fermi level [130]. In the case of the substitution of boron by oxygen, the lowest DOS, of about 0.46 states/eV/f.u., is found for MgB1.75O0.25, if the oxygen atoms are in neighboring positions [130]. The calculations show that the MgB2 structure is destroyed if the concentration of oxygen is higher than that in MgB1.5O0.5. The lowest DOS of about 0.3 states/eV/f.u. is found for MgB1.5C0.5.

The modeling of the electron localization function (ELF) for MgB2 and MgB1.75O0.25 allowed the authors of [117] to conclude that the higher electron concentration in MgB2 is between the boron atoms and corresponds to strong covalent bonding within the boron network. In the places where boron atoms are substituted by oxygen ones, the electrons localize around the oxygen atoms and, thus, bonding polarization appears. The variation in the ELF occurs because the oxygen atoms affect the nearby B-B and B-O bonds.

Figure 21a shows the dependence of the binding energies, Eb, calculated using WIEN2k, on the boron/oxygen/carbon concentration, x, in MgB2−xOx/Cx, when oxygen and/or carbon substitute for boron in MgB2 randomly (homogeneously) and in ordered (nearby) positions [117,132]. The lowest binding energy, Eb, for each concentration of oxygen atoms distributed in a certain order is shown in Figure 21a, curve 2, and, for oxygen atoms distributed homogeneously, in Figure 21a, curve 1.
The maps of the electronic density distributions of the MgB2, MgB2−xOx, and MgB1.5C0.5 structures are shown in Figure 22.
Figure 22b shows the boron plane with the embedded oxygen atoms in nearby positions, when oxygen atoms are absent in the second (alternate) boron plane of the same unit cell (Figure 22c). Figure 22d displays a cut of the unit cell inclined to the basal boron planes, displaying two boron planes. The top plane contains only boron atoms; some boron atoms are substituted by oxygen in the bottom plane (Figure 22d). The lowest Eb is obtained if oxygen moves into nearby boron positions or even forms zigzag chains. This can explain the following effects: the tendency of oxygen to aggregate in the MgB2 structure, the formation of oxygen-enriched layers or inclusions, and the fact that a rather high amount of oxygen can be present in superconducting MgB2 with a higher transition temperature.

The Z-contrast image of the coherent oxygen-containing inclusions in the MgB2 [010] bulk material is shown in Figure 21b. This image was obtained experimentally by the authors of [141] and shows that oxygen (if its amount is small) prefers to substitute for boron atoms in the second boron plane of each MgB2 unit cell, leaving the first boron plane pristine. In this image, the contrast increases in every second row, due to the presence of oxygen in every second boron plane; the white arrows show the columns of atoms in which oxygen is present [117].

Figure 22e presents the boron plane of the MgB1.5C0.5 compound with the embedded carbon atoms, the binding energy of which is the lowest according to the ab-initio calculations. Figure 22f shows cuts of the MgB1.5C0.5 unit cell, made in such a way as to show the boron plane with the Mg and C atoms.

If carbon is substituted for boron, the binding energy, Eb, is about the same for the ordered (Figure 21a, curve 4) and homogeneous (Figure 21a, curve 3) distributions. Despite there being no difference, from the energetic point of view, as to whether carbon atoms substitute for boron ones in a special order or homogeneously, the embedding of carbon into the MgB2 structure can essentially decrease the critical temperature and the critical current density, especially in low magnetic fields at relatively high temperatures.
Application of Bulk MgB2 Superconductors

Since the discovery of HTS and MgB2 bulk superconductors, they have competed with long wires and tapes for possible and real applications, such as small and middle power motors, shields, and the creation of DC magnetic fields [142,143]. For example, bulk superconductors can trap magnetic fields of an order higher than those trapped by permanent magnets (e.g., a trapped magnetic field can be 5.4 T in bulk MgB2 at 12 K and 5.6 T at 11 K [144]). In addition, for the manufacturing of wires/tapes and thin films, a complex multi-step processing technique is required. Bulk MgB2 can be fabricated using an essentially simpler process. Unlike conventional magnets, a bulk superconductor magnet may be safely and conveniently demagnetized by simply heating it above the critical temperature. The HTS-bulk prototypes of various devices have been designed and described in [143,145-147]. The operation principles of superconducting devices are independent of the superconductor type, and the choice of the type depends on the required superconducting properties, operation temperature, etc. The MgB2 superconductors, with a bulk density of about 2.63 g/cm³, are the lightest materials among practical superconductors. This makes MgB2 attractive for portable applications, especially for aviation and space technology [26,146,171].

Here, we briefly consider some applications of MgB2 bulk materials. The MgB2 bulk samples were fabricated in the form of cylinders, cylinders with a bottom (cap), discs, and parallelepipeds (Figure 23) by different methods (hot pressing, high pressing, and spark plasma sintering). From these samples, rings and hollow cylinders were cut out by electro-erosion in oil [142] or in deionized water for the design of fault current limiter models, magnetic shields, etc.
(Figure 23, caption fragment: ... [120]; (c) obtained using HP, with the rings then cut mechanically; and (d) obtained by machining a bulk cylinder manufactured using SPS [26].)

Figures 24-26 show the typical equipment for the manufacturing of bulk MgB2 materials by different methods. The high-pressing (Figure 24), hot-pressing (Figure 25), and spark plasma sintering (Figure 26) equipment allow for manufacturing rather big blocks, the sizes of which are suitable for practical applications (up to 100-250 mm in diameter), with high critical currents; such blocks are highly dense and mechanically stable. During the synthesis or sintering of magnesium diboride using these methods, MgB2 can be in contact with hexagonal boron nitride or with graphite stripe. The method of high isostatic pressing (HIP) at a high temperature allows for the manufacturing of bulk materials with high superconducting characteristics as well, but the material needs encapsulation to be densified. The capsule should be hermetically sealed and soft enough under a high temperature to transmit the gas pressure toward the green body of the sample or block, and it should be inert toward magnesium diboride. HIP equipment for a big volume is rather unique and complicated.

Magnetized MgB2 and HTS bulks can be used as quasi-permanent magnets providing magnetic fields of several Tesla or even more than ten. These values are much higher (by up to an order of magnitude) than the magnetic fields that the best traditional permanent magnets can provide. This opens a way to apply these superconductors as permanent magnets in various devices, such as flywheel energy storage systems.

MT-YBCO bulks have demonstrated the possibility of trapping magnetic fields of 17.24 T at 29 K in the center of two 26 mm diameter samples impregnated with Wood's metal and resin and reinforced with carbon fiber [148]. However, around 26 K [149], these reinforced samples have cracked. A trapped field of 5.4 T was measured in bulk MgB2 at 12 K on the surface of a single cylinder (20 mm diameter), fabricated by hot pressing of ball-milled Mg and B powders [144]. A uniaxial stack of two hot-pressed MgB2 disc-shaped bulk superconductors with a diameter of 25 mm and a thickness of 5.4 mm can trap 3.14 T at 17.5 K [150].
The trapped field of REBCO magnets is limited by the mechanical properties of the superconductors: the Lorentz force can be so high that samples can be destroyed. MgB2 bulk materials have demonstrated trapped fields higher than 3 T, although the trapped fields of MgB2 are less than those of MT-YBCO at 20 K. The advantage of MgB2 superconductors is that their preparation methods are much easier, cheaper, and quicker.

For many applications, several rings can be stacked to form the required experimental structure. For example, a three-ring stack can trap a field of 2.04 T at 20 K [159]. A block (D30 × h7.5 mm) synthesized from Mg(I):2B(V) with 10% Ti under 2 GPa, at 900 °C for 1 h, traps a field of 1.8 T at 20 K [20].

All the methods noted above open a way to use bulk MgB2 superconductors as elements of setups for physical experiments, medical devices, flywheel energy storage systems, levitation systems, electrical machines, etc.

Fault Current Limiters

The application of fast-operating nonlinear fault current limiters (FCLs), which allow for the limiting of high fault currents due to their capability of increasing their impedance rapidly, could be a promising solution to the fault current problem in power systems. Two properties of superconducting materials are the bases of SFCLs: an ideal conductivity in the superconducting state and a fast phase transition from this state into the normal conducting state with an increase in the current, magnetic field, or temperature above their critical values. SFCLs are one of the most attractive applications of superconductors in power systems, and they have had no classical equivalents up to now [120,136,145,146]. These devices meet all the power system requirements; this has been confirmed experimentally by testing models, prototypes, and experimental power devices of various types of SFCLs, based on different superconductors.

Bulk MgB2 rings and hollow cylinders can be applied as active superconducting elements of inductive SFCLs. The principal inductive SFCL design and the experimental setup for SFCL model testing are presented in Figure 27a. Under the nominal regime of a protected AC circuit, the impedance of the SFCL, the primary coil of which is connected in series, is low. During a fault event, the current in the circuit increases, causing a phase transition in the secondary superconducting coil, accompanied by an increase in the device impedance and, following that, a fault current limitation [145,146,151]. An inductive SFCL can also be used for the protection of high-voltage direct-current (HVDC) systems [152]. The secondary coil can be formed using a superconducting ring or a set of rings (hollow cylinders) to increase the SFCL power [145,146,151].
The character of the oscilloscope traces of the current in the circuit and the voltage drop across the primary coil of the inductive SFCL models is independent of the synthesis conditions and ring sizes. A low, long-continued current in the protected circuit (nominal regime) does not cause the transition of the superconducting ring into the resistive state. At a high current (simulating a fault event), deviations of the voltage and current curves appear before the first current maximum (Figure 27b). These deviations are associated with the transition of the ring from the superconducting to the resistive state, and with the quenching (critical) current of the ring. A set of FCL models with MgB2 rings prepared using various techniques, initial materials, and additions has been built and successfully tested [91,120].

The sizes and synthesis conditions of the rings that have been tested as elements of an inductive SFCL are presented in Table 9. Note that the experimental set-up for SFCL model testing (Figure 27a) can be used for measuring a "transport" critical current, AC losses, and voltage-current characteristics [120,151]. The "transport" critical current of the various rings was estimated as the quenching current causing the transition. The highest value of 63,200 A/cm² was obtained for Ring 3 (Table 9), with an outer diameter of 45 mm, a height of 11.6 mm, and a wall thickness of 3.3 mm. The ring was prepared under a pressure of 30 MPa at 800 °C for 2 h. From the magnetization experiments, the critical temperature of these rings was estimated to be about 38 K. The large difference between the critical current measurement results obtained by the two methods (Table 9) can be explained by:
- the granular MgB2 structure - the critical values are different for currents inside and between the granules;
- micro-cracks, which can play the role of centers of normal zone nucleation;
- dynamic magnetic and thermal instabilities of the superconducting state.

Electrical Machines

The application of superconductors in electrical machines is mainly connected with replacing the traditional normal metal wires in the design with superconducting ones. Progress in the electromagnetic properties of bulk superconductors has opened a way to design other types of electrical machines with bulk-superconducting rotor elements (see, e.g., [145,154-156] and the references therein). It has been shown that these machines are effective in low and medium power ranges. Series prototypes of various types of machines (trapped field, hysteresis, reluctance, etc.) have been designed using bulk YBCO superconducting elements and successfully tested in a wide temperature range. The authors of [104] presented the world's first motor (1.3 kW) built with a bulk high-pressure-high-temperature-synthesized MgB2 superconductor. The superconducting elements of the reluctance motor rotor were made of MgB2-10 wt% Ti and synthesized under 2 GPa at 800 °C for 1 h.
Figure 28 demonstrates the general view of the zebra-type rotor (superconducting layers alternate with ferromagnetic ones) of an MgB2-10%Ti motor of 1300 W at 210-215 V. The comparative tests of the motor with MT-YBCO elements, performed at the temperature used for testing the MgB2 motor (20 K), have shown that the efficiency of these motors is of the same level [19,20].

An integral part of hydrogen energetics would be systems for the production, storage, and transportation of liquid hydrogen [157]. Liquid hydrogen systems could be one of the first fields of application of MgB2 motors and submersible liquid hydrogen (LH) pumps. The small- and middle-power electrical motors based on MgB2 bulk superconductors have demonstrated efficiency higher than that of traditional motors and are cheaper than HTS motors. These pumps require superconducting magnets with trapped fields of around 500-600 mT. A bulk MgB2 superconductor is suitable for such applications at liquid hydrogen temperature [142].

Magnetic Field Shields

Bulk MgB2 superconductors have shown excellent magnetic shielding properties [26,158-160] that can be useful for the passive shielding of various devices (measurement and medical devices, physical setups, etc.) and even for the protection of orbital stations in space from cosmic radiation. Also, the raw materials are largely available and do not contain rare earths, noble, or toxic elements, as in the case of other high- or low-temperature superconductors. In the literature, the results of the study of various designs of bulk MgB2 shields have been presented (e.g., [158,159] and the references therein).
As an example, the results of the magnetic shielding properties of MgB2 bulk materials in the shape of a cup are considered. The experimental shielding factors (dots in Figure 29c) are practically independent of the applied field, up to ~0.8 T [26,159]. The factor strongly depends on the Hall probe position and reaches its maximum value, of the order of 10^5, near the bottom of the cup. In the middle point, z3, the factor is ~250; this is sufficient in some cases.

Conclusions

This review examines the impact of technological parameters (pressure, temperature, etc.), additives, and impurities on the superconducting characteristics of MgB2-based bulk materials. The main attention is paid to the role of impurity oxygen in MgB2-based materials in the formation of their structures and in achieving the best superconducting characteristics (critical temperature and current density at 10-35 K in fields up to 10 T, and the temperature dependences of the upper critical, irreversibility, and trapped magnetic fields). The influence of additions of Ti, Ta, Zr, SiC, C, Dy2O3, Sn-O, Ti-O, TiC, and TiH2 under various production conditions on the structure (formation of higher magnesium borides, oxygen and boron distributions, etc.) and superconducting properties is considered. The analysis of publications dedicated to studying the dependence of MgB2 bulk material properties on manufacturing pressure shows the positive effect of an increase in manufacturing pressure on the superconducting characteristics. One of the main reasons for this improvement is the suppression of magnesium evaporation during the production process. This leads to an increase in the material's density and in the connectivity between the superconducting grains.
The manufacturing temperature influences the dependence of the critical current density on magnetic fields: a higher manufacturing temperature results in higher critical currents in low magnetic fields, while a lower manufacturing temperature leads to higher critical currents in high magnetic fields. This effect is closely related to the oxygen admixture distribution: at higher manufacturing temperatures, separate oxygen-enriched inclusions appear, while oxygen-enriched nanolayers (or nanochains) form at lower manufacturing temperatures.

Additionally, the variation of the critical current density can be connected with the formation and distribution of inclusions of higher magnesium borides (MgBx, x > 2), observed in both in-situ (prepared from Mg and B) and ex-situ (prepared from MgB2 powder) materials. In the materials prepared at higher temperatures, the amount and size of inclusions of higher magnesium borides are smaller than in materials obtained at lower temperatures. These effects are more pronounced for materials produced at high pressures (2 GPa).

It was shown that superconducting materials with high magnetic properties can be obtained even with a large deviation from the MgB2 composition (initial Mg:4B-Mg:20B mixtures).

In MgB2 superconducting materials exhibiting extremely high critical current densities, the dissolution of a small amount of oxygen and the formation of a superconducting matrix phase MgB1.8-1.68O0.2-0.32 have been detected using X-ray analysis. Similar results were obtained using quantitative Auger analysis: the matrix phases of MgB2 samples with high superconducting characteristics contain a small amount of impurity oxygen.

Modeling the structure of MgB2-xOx solutions showed that the AlB2 structure type can be maintained even at x of about 0.5. It was also shown that the enthalpy of MgB1.75O0.25 formation is lower than that of MgB2 when oxygen replaces boron in nearby positions and penetrates only into one boron layer of the MgB2 cell. At the same time, the second boron layer of the same cell remains intact, i.e., every second boron layer of the cell contains only boron atoms. This structure was observed in MgB2-based material using a high-resolution transmission microscope.
Ti, Zr, Ta, Ti-O, and SiC additions can lead to impurity oxygen aggregation into separate inclusions at low manufacturing temperatures; thus, the MgB2 matrix is "cleaned" of impurity oxygen, or the volume occupied by the Mg-B-O phase containing a high amount of oxygen is reduced. Ti, Zr, and Ta additions are absorbers of gases (e.g., hydrogen), with Ti being the most powerful one. They absorb the hydrogen admixture, transforming into hydrides, and thus prevent the formation of the MgH2 phase, which is harmful for critical currents. The absorption of hydrogen can prevent big blocks of MgB2-based superconductors from cracking. The presence of Ti and Ta "provokes" the appearance of inclusions of higher magnesium borides in higher amounts, which increases the critical currents in high magnetic fields. The effect of SiC on oxygen aggregation in MgB2 is not clear yet. The added nanosized SiC inclusions can act as pinning centers in MgB2. However, SiC can partly decompose and react with the synthesized material, forming Mg2Si and liberating C, which may be introduced into the MgB2 structure, forming a solid solution. The addition of SiC (10 wt%) with micrometer-sized grains, which practically do not react with MgB2 (at least in an amount detectable by X-ray), essentially increases the critical current density of the materials prepared from boron with a low concentration of impurity oxygen. The optimal level of carbon doping, without an essential reduction in the critical temperature of MgB2, is much lower than that for oxygen doping, regardless of whether carbon is homogeneously distributed or concentrated in nearby positions.

Figure 2. (a,b) Sample structures obtained by SEM in COMPO (compositional) contrast: (a) sample sintered from MgB2 (Type VI) under 2 GPa at 1000 °C for 1 h; bright small zones in (a) seem to be inclusions (containing O, Zr, Nb, and possibly ZrO2) appearing due to milling of the initial MgB2. (b) Structure of a sample synthesized from Mg(I):2B(I) under 2 GPa at 800 °C. (c,d) X-ray patterns of these samples, respectively [109].

Figure 8. The dependences of critical current density, Jc, at 20 K on a magnetic field. The MgB2 samples were prepared from Mg(I):2B(I) and Mg(I):2B(III). The graph was composed using the data presented in [20,98,103,119].
Figure 9. (a,b) SEM images in SEI mode of MgB2 materials synthesized from Mg(I):2B(III) mixtures under 2 GPa, for 1 h at 800 and 1050 °C, respectively [109]. (c,d) Schemes of MgB2-based material structures synthesized at the low temperature of 800 °C (c) and the high temperature of 1050 °C (d) [85]. (e,f) X-ray patterns of the samples shown in (a,b) [113].

The X-ray analysis of both MgB2-based materials shows that they contain MgB2 and MgO phases. However, SEM and EDX analyses and an Auger spectroscopy study indicate the presence of three main phases in the materials: (1) a matrix with near-MgB2 stoichiometry, which contains a small amount of impurity oxygen (grey areas in the photos, Figure 9a,b); (2) inclusions (grains) of higher magnesium borides, MgBx, x >> 2, which look the darkest; and (3) oxygen-enriched places, which look the brightest or white, indicating Mg-B-O inclusions. The forms of the Mg-B-O inclusions depend on the manufacturing temperature and are principally different. In the MgB2 material synthesized at a low (800 °C) temperature, they take the form of nanolayers, marked by "L" in Figure 9a, while at a high (1050 °C) temperature they are separate inclusions, marked by "I" in Figure 9b [109]. The difference is schematically shown in Figure 9c,d. The Mg-B-O inclusions can play the role of pinning centers, and the difference in their structures is reflected in the different dependences of the critical current densities on the magnetic field.

Effect of Mg:xB (x = 4-20) Ratio of Powdered Mixture on Microstructure and Characteristics of HP-Synthesized Materials

Figure 13. Microstructures of the materials synthesized from Mg(I):B(III) with a 10 wt% Ti (3-10 µm) addition under 2 GPa for 1 h at 800 (a,c) and 1050 °C (b,d) [108]. X-ray patterns of these materials (e,f). (c,d) show the places where Ti is absent [103,113].

The typical distribution of Mg, B, and O in the structure of MgB2-based materials prepared from Mg(I):2B(III) with 10 wt% of Ti (3-10 µm), in the phase where Ti grains are absent, is shown in Figure 14. The absorption of hydrogen and, thus, the prevention of the formation of MgH2 by Ta and Zr additions has been observed, as in the case of Ti additions [103]. However, Ti is the most powerful absorbent of these three metals. Note also that the addition of Ti to the MgB2 mixture, or even the synthesis of a big MgB2 block wrapped in a Ti foil, prevents an MgB2 sample from cracking due to the absorption of impurity hydrogen by Ti. When Ti and Ta were added to the initial Mg:2B mixture, in addition to hydrogen absorption, another effect was also observed: additions of Ti and Ta promote the formation of higher magnesium boride inclusions [103]. Within the structure of MgB2 materials synthesized using the HP method with Ti and Ta additives (Table 5), a larger amount (N) of the magnesium boride phase with a stoichiometry close to MgB12 was observed, compared to the material without additives. A higher amount of the higher magnesium boride phase correlates with higher critical currents in the 1 T field. So, the addition of Ti can affect the boron distribution in MgB2-based material. This can be seen in Figure 15b, for example.
Figure 14. (a) Image of the microstructure of an MgB2 sample with 10 wt% of Ti (3-10 µm); image 16a was taken in the place where the Ti grains are absent. (b-d) EDX maps of boron, oxygen, and magnesium distributions over the area of the image shown in 16e (the brighter the area looks, the higher the amount of the element under study) [103].

Figure 15. (a-c) SEM images of an MgB2 sample with 10 wt% of Ti powder (about 60 µm) synthesized under 2 GPa at 800 °C for 1 h: SEI (a-c) [113]. Notations: "I" - Mg-B-O inclusions, MgBx - higher magnesium borides. In (c), the points marked No. 1-6 are the points for which quantitative Auger analyses were made, the results of which are summarized in Table 6 [113].

Figure 20. Calculated density of electronic states, N(E), for MgB2 (a), MgB1.75O0.25 (b), MgB1.5O0.5 (c) per formula unit; (d) calculated DOS at the Fermi level. N(EF) depends on the oxygen concentration, x, in MgB2-xOx compounds (hollow squares). The total DOS and partial contributions of Mg, B, and O atoms are indicated by solid squares, solid triangles, and solid circles, respectively [132].

Figure 21. (a) Dependence of the binding energy, Eb, on the oxygen concentration, x, in MgB2-xOx/Cx: 1, 3 - homogeneous oxygen and carbon substitutions of boron atoms, respectively; 2, 4 - the lowest binding energy vs. x for the ordered oxygen and carbon substitutions (for example, in nearby positions or in pairs), respectively. (b) Z-contrast image of coherent oxygen-containing inclusions in [010] of MgB2 obtained using HRTEM (high-resolution transmission electron microscopy). Bright atoms - Mg. The contrast increases in each second row and is due to the presence of oxygen in each second boron plane. The white arrows show the columns of atoms in which oxygen is present [117].

Figure 24. High quasi-hydrostatic pressing (HP) in ISM NASU. Hydraulic 140 MN-effort press from the ASEA company (a), hydraulic 25 MN-effort press (b), cylinder-piston high-pressure apparatus (HPA) (c), recessed-anvil type HPA for the 25 MN press (d), and scheme of the high-pressure cell of the recessed-anvil HPA (before and after loading) (e).

Figure 25. Hydraulic press DO 630 for hot pressing with generator and inductor (a,b); general view of the inductor of the hot press during heating (the shining window is an opening for temperature estimation by a pyrometer) (c); scheme of the assembled inductor (d).
Figure 27. (a) The schemes of an SFCL model and a testing circuit for the simulation of a fault event. (b) Typical oscilloscope traces of the current in a protected circuit (black, solid curve) and the voltage drop across the primary coil of the SFCL model (red, dashed curve) at 50 Hz and about 4 K (from [90]). The experiment details are described in [120]. "A" is an ammeter.

Figure 29. (a) Magnetic shield of MgB2 in the shape of a cup (outer radius, Ro = 10.15 mm; inner radius, Ri = 7.0 mm; external height, he = 22.5 mm; internal depth, di = 18.3 mm). The material is machinable by chipping. The shielding factors (i.e., the ratio between an outer applied magnetic field, Happl, and the inner magnetic field measured by a Hall sensor at the different positions z1-z5 (b)) at T = 30 K are shown in (c). The dashed lines represent the shielding factors computed in correspondence with the Hall probe positions, assuming the magnetic field dependence of Jc(B) at 30 K. (Figure 2 in [26] adapts the results obtained in [159].)

Table 2. Characteristics (Jc; concentrations of MgB2, MgO, and MgB4; mass density, ρ; connectivity, AF; and amount of shielding fraction, S) of MgB2-based materials prepared under different p-T conditions from Mg:2B mixtures (in-situ) or MgB2 powder (ex-situ). The data presented in the table were collected from [98,103,108,119].
Note: All "in-situ" materials were prepared from Mg(I) chips; only C was added to the initial boron and Mg(II) powder. PP, GBP, and MP denote point, grain-boundary, and mixed types of pinning, respectively. * The type of pinning is impossible to characterize exactly due to the high k ratio.

Table 9. The critical (quenching) current and current density of the rings tested using the SFCL model at 4.2-6 K and a primary current frequency of 50 Hz. The data presented in the table were collected from [120,153]. * The mixture of Mg(I) chips and amorphous B(III) powders was taken in the Mg(I):2B(III) stoichiometry; then, 200-800 nm SiC or 30 µm Ti granules of 95% purity were added.
26,777.8
2024-06-01T00:00:00.000
[ "Materials Science", "Physics" ]
Information Bottleneck Analysis by a Conditional Mutual Information Bound

Task-nuisance decomposition describes why the information bottleneck loss I(z; x) − βI(z; y) is a suitable objective for supervised learning. The true category y is predicted for input x using latent variables z. When n is a nuisance independent from y, I(z; n) can be decreased by reducing I(z; x), since the latter upper bounds the former. We extend this framework by demonstrating that the conditional mutual information I(z; x|y) provides an alternative upper bound for I(z; n). This bound is applicable even if z is not a sufficient representation of x, that is, I(z; y) ≠ I(x; y). We used mutual information neural estimation (MINE) to estimate I(z; x|y). Experiments demonstrated that I(z; x|y) is smaller than I(z; x) for layers closer to the input, matching the claim that the former is a tighter bound than the latter. Because of this difference, the information plane differs when I(z; x|y) is used instead of I(z; x).

Introduction

Mutual information is now widely used to investigate the process of machine learning [1-6]. One notable example is information bottleneck theory [7]; when x is the input, y is the desired output, and z denotes the latent variables, the theory proposes using the mutual information values I(z; x) and I(z; y) to analyze the dynamics of learning. The authors postulated that supervised learning aims to reduce the information bottleneck loss I(z; x) − βI(z; y). Recently, Achille and Soatto provided a fundamental analysis of information bottleneck theory using task-nuisance decomposition [8]. They proved that the I(z; x) term in the information bottleneck loss bounds the mutual information I(z; n) between the hidden layer activity and the nuisance. In this paper, we propose to use conditional mutual information as an alternative criterion for bounding I(z; n) and suggest its use in the analysis of neural networks by information bottleneck theory. Note that the variables x, y, z, and n can be vectors, but we do not represent them using a bold font since the difference between scalar and vector is irrelevant to our analysis.

Information Bottleneck Theory

Information bottleneck theory provides a unified view towards understanding machine learning models that have latent variables [7,9-12]. According to the theory, supervised learning aims to minimize the loss objective L = I(z; x) − βI(z; y), where β is a parameter that determines the preference over the tradeoff between the two terms. Since the latent variable z usually has a dimension lower than that of the observed variable x (as in convolutional neural networks), reducing I(z; x) while maintaining I(z; y) implies that the information about y contained in x is compressed into z. An effective compression of x should keep most information about y but reduce information about x. A learning algorithm can realize that by reducing I(z; x) while maintaining I(z; y). A predictor having such a representation removes frivolous transformations present in x while keeping information regarding y. Note that y is the ground-truth class and is different from the output ŷ of a predictor. After learning, p(ŷ|x) will be similar to p(y|x). Information bottleneck theory has been applied to analyze the behavior of deep neural networks [13-23]. In this case, the latent variables z correspond to the hidden layer activities zℓ for each layer ℓ.
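As a concrete illustration of these quantities (this example is not from the paper), the following minimal sketch computes I(z; x), I(z; y), and the loss I(z; x) − βI(z; y) for small, fully specified discrete variables; the joint probability tables and the value of β are illustrative assumptions.

```python
import numpy as np

def mutual_information(pab):
    """I(A;B) in bits from a joint probability table pab[i, j] = p(a=i, b=j)."""
    pa = pab.sum(axis=1, keepdims=True)          # marginal p(a)
    pb = pab.sum(axis=0, keepdims=True)          # marginal p(b)
    mask = pab > 0                               # skip zero-probability cells
    return float(np.sum(pab[mask] * np.log2(pab[mask] / (pa @ pb)[mask])))

# Illustrative joint tables (assumptions, not data from the paper):
# z is a 2-valued representation of a 4-valued input x; y is a binary label.
p_zx = np.array([[0.20, 0.20, 0.05, 0.05],
                 [0.05, 0.05, 0.20, 0.20]])      # p(z, x)
p_zy = np.array([[0.40, 0.10],
                 [0.10, 0.40]])                  # p(z, y)

beta = 2.0                                       # illustrative trade-off weight
loss = mutual_information(p_zx) - beta * mutual_information(p_zy)
print(f"I(z;x) = {mutual_information(p_zx):.3f} bits, "
      f"I(z;y) = {mutual_information(p_zy):.3f} bits, loss = {loss:.3f}")
```

For discrete toy variables such tables can be written down directly; for the continuous hidden activities of a real network, the non-parametric estimators discussed later in the paper are needed instead.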
It has been suggested in [14] that the training process of deep learning may consist of fitting and compression phases, as represented in the schematic diagram in Figure 1. One possible use of our proposed bound is to conduct such an analysis in a more precise manner. Fischer proposed a conditional entropy bottleneck defined by −H(z|x) + H(z|y) + γH(y|z), which is derived from I(z; x|y) − γI(z; y), where γ is a hyperparameter similar to β in an information bottleneck [24]. The use of conditional mutual information I(z; x|y) comes from the minimum necessary information (MNI) criterion, I(x; y) = I(x; z) = I(y; z). When this criterion is met, I(x; y|z) = I(x; z|y) = I(y; z|x) = 0 is also true. In contrast, we derive the use of conditional mutual information by showing that I(z; x|y) forms an upper bound on I(z; n), where n is a nuisance variable. While Fischer claims that learning a compressed representation Z of X is equivalent to minimizing I(z; x|y), we show that reducing I(z; x|y) is even better than reducing I(z; x). We thereby provide solid ground to the conditional mutual information approach introduced by Fischer. Geiger and Fischer introduced conditional mutual information I(z; x|y) by reformulating the information bottleneck functional I(z; x) − βI(z; y) as I(z; x|y) − (β − 1)I(z; y) [25]. They defined a variational bound to the reformulated functional and analyzed its tightness. Our work sees I(z; x|y) from a different viewpoint, namely as a bound to I(z; n), where n is a nuisance variable. Most recently, Yu et al. proposed the deterministic information bottleneck (DIB) [26] based on matrix-based Rényi's α-order entropy functionals on positive definite matrices [27,28]. From these functionals, they defined Rényi's α-order mutual information Iα(A; B). Standard deep learning frameworks, such as PyTorch, can conduct automatic differentiation on Iα(A; B), enabling it to be trained using gradient descent. They also showed that the mutual information term acts as a regularization term.

Figure 1. A schematic diagram visualizing training dynamics using the information plane. Each dot represents a specific time point during the process of learning. In this example, the trajectory consists of two parts: fitting and compression phases. The fitting phase is where I(z; x) increases, and the compression phase is where I(z; x) decreases.

Task-Nuisance Decomposition

Achille and Soatto [8] provided a new theoretical justification for information bottleneck theory. They introduced a nuisance variable representing stochastic fluctuations present in x that are unnecessary for conducting the classification task. For example, in image classification, a nuisance can represent a frivolous transformation such as rotation or translation. In terms of probability, n is a nuisance if it is independent from y and a Markov chain (y, n) → x → z → ŷ holds. The first part, (y, n) → x, is due to the generative process of x. The true category y and the nuisance n together affect x. For example, in the CIFAR-10 image dataset, the distribution of intensity for each pixel is determined by the image class y and sample-specific transformations. The latter part of the Markov chain, x → z → ŷ, comes from the predictor's structure having latent variables z. In neural networks, z corresponds to a hidden layer. ŷ is the output of the network, which is the predicted category for x. It can be shown that, when z is a sufficient representation of x, that is, I(z; y) = I(x; y), then I(z; x) is an upper bound of I(z; n) [8].
Hence, reducing I(z; x) results in decreasing I(z; n). Because the effects from frivolous transformations are removed from z, the predictor generalizes better.

Non-Parametric Estimation of Mutual Information

One obstacle to putting information bottleneck theory into practice is the difficulty of estimating mutual information. When random variables are discrete or when distribution families are known, mutual information can be estimated straightforwardly. On the other hand, if the random variables' distribution families are unknown, mutual information must be estimated non-parametrically. It is known to be a notoriously tricky task. Kraskov et al. have shown that k-nearest neighbor estimation works well when random variables are low-dimensional. However, the error increases as the dimension of the random variables becomes higher [29]. Kandasamy et al. used the Von Mises expansion and influence functionals to estimate entropy and mutual information [30]. Belghazi et al. recently proposed mutual information neural estimation (MINE), which uses a neural network to approximate a lower bound of mutual information [31]. Exploiting the fact that neural networks are universal approximators of functions, the lower bound is obtained by

I(x; z) ≥ sup_{f∈F} [ (1/m) Σ_{i=1}^{m} f(x^(i), z^(i)) − log( (1/m) Σ_{i=1}^{m} exp(f(x^(i), ž^(i))) ) ],

where F is a set of functions achievable by a neural network. Pairs {(x^(i), z^(i))} come from the joint distribution p(x, z), while samples {ž^(i)} come from the marginal distribution p(z). It has been used for analyzing mutual information between layers of neural networks [32,33].

Method

We first describe the notations used in this section. We then describe the mathematical properties of our proposed use of conditional mutual information. Finally, we provide a way to estimate conditional mutual information.

Notations

Let a, b, c be scalars or vectors of random variables. We use a semicolon to separate random variables that are subject to computing mutual information, as in I(a; b). A vector of random variables can be expressed explicitly by separating its components by a comma. Conditioning both the joint and product distributions defines conditional mutual information I(a; b|c). In some articles, conditional mutual information is defined without integrating out c, as in

Ĩ(a; b|c) = E_{p(a,b|c)} [ log p(a, b|c) / (p(a|c) p(b|c)) ].

Our definition corresponds to taking the expectation of Ĩ(a; b|c) over p(c), that is, I(a; b|c) = E_{p(c)} [ Ĩ(a; b|c) ].

When applying our proposed framework to analyzing a neural network, z represents the hidden layer activities, x is the input, and y is a one-hot vector representing the ground-truth class label. In a feed-forward neural network, z can represent the activities of any of the layers. When indicating the activity of layer ℓ, we use zℓ. Figure 2 illustrates an example of a feed-forward neural network. y is the true category of a sample. The observed signal x is generated from a distribution parametrized by y and contains fluctuations represented by a nuisance variable n. A neural network transforms the input to latent representations zℓ. The output of the network is ŷ, which is an estimate of y. A random variable n is a nuisance for x in performing task y if it affects x but is independent of y. For example, in image recognition, nuisances include translation, rotation, and small occlusions, which do not affect the object's identity in the image. z is a representation of x if there is a (possibly non-deterministic) function that defines z by x. z is sufficient for the task y only if I(x; y) = I(z; y). It means that all information required to predict y that is present in x is also present in z.
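Before turning to the mathematical properties, the MINE estimator sketched above can be made concrete as follows. This is an illustrative implementation only (assuming PyTorch, which the paper also uses, a small fully-connected critic, and arbitrary training hyperparameters), not the authors' code; the returned value is in nats and can be divided by ln 2 to obtain bits.

```python
import math
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Scalar statistics network f(x, z) used by MINE."""
    def __init__(self, dim_x, dim_z, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_x + dim_z, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=1)).squeeze(1)

def dv_bound(critic, x, z):
    """Donsker-Varadhan bound: E_p(x,z)[f] - log E_p(x)p(z)[exp f]."""
    z_marginal = z[torch.randperm(z.shape[0])]   # shuffle z to break the pairing
    joint = critic(x, z).mean()
    marginal = torch.logsumexp(critic(x, z_marginal), dim=0) - math.log(z.shape[0])
    return joint - marginal

def mine_estimate(x, z, epochs=200, lr=1e-3):
    """Lower-bound estimate of I(x; z) in nats from paired samples of shape (m, dim)."""
    critic = Critic(x.shape[1], z.shape[1])
    opt = torch.optim.Adam(critic.parameters(), lr=lr)
    for _ in range(epochs):
        loss = -dv_bound(critic, x, z)           # maximize the bound
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return dv_bound(critic, x, z).item()
```

A separate critic would be trained for each pair of variables (for example, one layer's activity against the input), mirroring how per-layer estimates are reported in the experiments below.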
Mathematical Property

To bound I(z; n), we propose using conditional mutual information I(z; x|y) instead of I(z; x), which is commonly used in information bottleneck theory. We prove that I(z; x|y) provides a tighter upper bound for I(z; n) than I(z; x). To do so, we use the following lemma, called the functional representation lemma, whose proof is given in [34]. It is also presented as Lemma C.1 in [8].

Lemma 1. Given a joint distribution p(x, y), where y is a discrete random variable, we can always find a random variable n independent of y such that x = f(y, n) for a deterministic function f.

We now show that I(z; x|y) bounds I(z; n).

Theorem 1. Let n be a nuisance for the task y and let z be a representation of the input x. Suppose that z depends on y and n only through x. In other words, let the random variables follow a Markov chain (y, n) → x → z. Then, I(z; n) ≤ I(z; x|y).

Proof. Let H(a|b) be either entropy or differential entropy, depending on the cardinality of the domain of a. Then the bound follows from a chain of (in)equalities. The first line is from the chain rule for mutual information (Theorem 2.5.2 in [35]). The second line is from the data processing inequality. The third line is because y is independent from n, and also because conditioning decreases entropy, that is, H(y|z, n) ≤ H(y|z). The fourth and the fifth lines are from the Markov chain.

The theorem shows that conditional mutual information I(z; x|y) can bound I(z; n) even when z is not sufficient, in contrast to Achille and Soatto's Proposition 3.1, which requires z to be sufficient [8]. This makes our theorem appealing, since the sufficiency condition may not be fulfilled in general. Even in that case, our theorem makes task-nuisance decomposition applicable. One question is what the difference is between I(z; x), used in [8], and I(z; x|y), used by us. The following proposition answers this question.

Proposition 1. When random variables y, n, x, and z follow a Markov chain (y, n) → x → z, then I(z; x) − I(z; x|y) = I(z; y).

Proof. The first and second lines are from the chain rule for mutual information, and the third line is from the Markov chain.

When z is sufficient, I(z; y) = I(x; y) by definition. The proposition shows that, instead of I(z; x|y), one can use I(z; x) − I(z; y) for bounding I(z; n). If z is sufficient, I(z; n) can also be bounded by I(z; x) − I(x; y). However, estimated mutual information often contains some errors. Estimating two values of mutual information may double that error. Let us note that lowering the upper bound does not necessarily reduce the objective function. However, in practice, upper bounds are commonly used as a surrogate objective. This may be because, if a learning algorithm reduces an upper bound indefinitely, it will eventually reduce the objective function. Much of the existing work in machine learning relies on the assumption that reducing or raising bounds also reduces or raises the objective function, respectively. Furthermore, many approximators in machine learning are formulated either as an upper or a lower bound. Since I(z; x) − I(x; y) in [8] is the difference of two terms, neither an upper bound alone nor a lower bound alone can bound it. To bound I(z; x) − I(x; y), a combination of an upper bound and a lower bound is necessary. For example, if f(a, b) is an upper bound to I(a; b), f(z, x) − f(x, y) does not necessarily upper bound I(z; x) − I(x; y), due to the negation of I(x; y). By the same token, if g(a, b) is a lower bound to I(a; b), g(z, x) − g(x, y) does not necessarily lower bound I(z; x) − I(x; y).
In contrast, I(z; x|y) does not contain a term with negation and avoids such a limitation. There are many bounds on mutual information now, and there will be more in the future. However, each bound has different strengths and weaknesses, such as asymptotic behavior, robustness, and computational efficiency. If two bounds are used, the resulting approximation will carry the weaknesses of both. It is often better to rely on only one approximator.

Estimation

Estimating mutual information for random variables with unknown distributions is a challenging task. It is even more so for high-dimensional random variables. Consequently, estimating conditional mutual information is also difficult. In this paper, we used MINE [31] to tackle this problem.

Conditional MINE (CMINE)

To estimate conditional mutual information I(z; x|y), we group samples by the class label y, compute an estimate by MINE for each group, and take the weighted average of the estimates. In other words, we use

Î(z; x|y) = Σ_c (m_c / m) Î(z; x|y = c),

where Î(z; x|y = c) is the estimated value obtained by MINE using only samples in class c (i.e., y = c), m_c is the number of samples in class c, and m = Σ_c m_c is the total number of samples. We will call this estimation method conditional MINE (CMINE). Currently, the method can only be used when y takes discrete values. CMINE estimates mutual information multiple times, but all in the form of I(z; x|y = c), where each term is not affected by the dimension of y. When the output variable y is high-dimensional, for example, in natural language processing, estimating I(x; y) likely results in a significant amount of error. Using I(z; x) − I(x; y) to compute I(z; x|y) is vulnerable to such errors, but CMINE can avoid such a limitation.

Averaged MINE (AMINE)

In Section 5, we compare I(z; x|y) and I(z; x) using these estimates. We need to confirm that the number of samples used in estimation will not affect the comparison. When there are m samples and h possible values of y in I(z; x|y), CMINE applies MINE to roughly m/h samples for each possible value of y. Using fewer samples might lower the estimated mutual information since they may fail to capture the stochastic dependency between the variables x and z. To avoid such unfairness, we used an estimator for I(z; x) that enforces the same restriction regarding the number of samples. Specifically, we randomly split the dataset into groups with the same sizes as the grouping by class labels. We then run MINE for each group and compute the weighted average of the resulting estimates. We named this method averaged MINE (AMINE). Specifically, let c_i be the class label (i.e., the value of y) for the i-th sample. Define ρ as a random permutation of 1, . . . , n, where n is the number of samples in the dataset. We give a new label c_ρ(i) to the i-th sample. In other words, we shuffle the values of y across samples in the whole dataset. We then group samples following the new labels, compute MINE for each group, and average the results using the number of samples in each group as weights.

Dataset, Architecture, and Parameters

We used the MNIST, Fashion MNIST, and CIFAR-10 datasets for evaluation. Samples are images labeled by one of ten classes. Accordingly, y is a 10-dimensional one-hot vector. x is a vector obtained by flattening an image. To observe the mutual information between layers of a trained target neural network, we implemented a system that uses CMINE and AMINE. Table 1 indicates the architecture of the target network. One characteristic of the target network is that almost all layers have the same number of nodes.
When the numbers of nodes are different between layers, the dimensions of z will differ, and it can affect the amount of error when estimating mutual information. Such variations would make a comparison between layers difficult. The structure of the MINE network used in this paper is also shown in Table 1. Conv(a, b, c; d) is a convolution layer using a kernel of size a × b, with c channels and stride d. FC(a) is a fully-connected network with a nodes. We used ReLU as the activation function for each layer. We implemented the networks using PyTorch and trained them using an NVIDIA Quadro RTX 8000 with 48 GB memory. Table 2 shows the hyper-parameters used for optimizing the networks. After training, the target network achieved 96.3% test accuracy for classifying images in MNIST, 87.1% for Fashion MNIST, and 46.1% for CIFAR-10.

Preprocessing before Estimation by MINE

We used singular value decomposition (SVD) to reduce the dimension of the hidden layer activity z. It decreases computation time and can also reduce the estimation error resulting from the high dimensionality of the random variables. Since the task was classification into 10 classes, we chose 4, 8, and 12 as the reduced dimensions. Without dimension reduction, the learning curves fluctuated rapidly and, upon observation, did not converge.

Cluttering

To observe the effect of a nuisance on the mutual information between layers, we conducted artificial occlusion experiments [8,36]. We generated cluttered images by superposing randomly allocated squares on top of images in the datasets. The squares can overlap. We used them as inputs to already-trained target neural networks. Then, we observed the activities of the layers and estimated the mutual information between them. Each square has zero intensity on a randomly selected channel, and its size was 4 × 4 pixels. We tested by adding 64 squares to each image. They were added only when estimating mutual information and not during training of the target network.

Experiments

We conducted experiments to see how CMINE estimates conditional mutual information between layers in a neural network. In this section, Î(a; b) and Î(a; b|c) indicate estimates obtained by AMINE and CMINE, respectively, for the mutual information I(a; b) and I(a; b|c). We used 10,000 samples to train the target network and 50,000 samples to estimate mutual information. When estimating mutual information, we recorded inputs x, desired outputs y, and hidden layer activities zℓ from each layer of the target network.

5.1. Comparison of Î(zℓ; x) and Î(zℓ; x|y) across Layers

Figure 3 shows a comparison of Î(zℓ; x) and Î(zℓ; x|y), obtained by AMINE and CMINE, respectively. A smaller ℓ (Layer ID) means the layer is closer to the input. The results using different datasets and the dimensions after SVD are compared. When the dimension increased, Î(zℓ; x) and Î(zℓ; x|y) both increased, indicating information loss due to SVD. When squares are added, the estimated mutual information decreased both for Î(zℓ; x) and Î(zℓ; x|y). The graphs show that, in general, both Î(zℓ; x) and Î(zℓ; x|y) decrease as they get farther away from the input. This is consistent with the data processing inequality. The graphs also indicate that, for layers closer to the input, Î(zℓ; x|y) is smaller than Î(zℓ; x), especially for MNIST and Fashion MNIST. For some layers closer to the output, the inequality did not hold. We assume this is due to SVD and MINE being unable to find the stochastic dependency between layers due to how information is represented in these layers.
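For reference, the per-class grouping and weighted averaging behind the CMINE estimates compared above, and the label-shuffled AMINE baseline, can be sketched as follows; this outline assumes a mine_estimate(x, z) function such as the earlier sketch and is not the authors' implementation.

```python
import torch

def cmine_estimate(x, z, y):
    """Estimate I(z; x | y) by running MINE within each class and averaging.

    x, z: (m, dim) tensors of paired samples; y: (m,) tensor of integer class labels.
    """
    m = y.shape[0]
    total = 0.0
    for c in torch.unique(y):
        idx = (y == c)
        m_c = int(idx.sum())
        total += (m_c / m) * mine_estimate(x[idx], z[idx])   # weight by class size
    return total

def amine_estimate(x, z, y):
    """Estimate I(z; x) with the same per-group sample sizes as CMINE (AMINE).

    Class labels are shuffled across samples, so each group is a random subset
    of the same size as the corresponding class.
    """
    y_shuffled = y[torch.randperm(y.shape[0])]
    return cmine_estimate(x, z, y_shuffled)
```

With discrete labels, this mirrors the weighted average Î(z; x|y) = Σ_c (m_c/m) Î(z; x|y = c) from the Method section, and AMINE keeps the per-group sample sizes identical so that the comparison with Î(z; x) is not biased by the number of samples.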
The results in which Î(z_h; y) < Î(z_ℓ; y) for h < ℓ contradict the data processing inequality. A possible cause is that it is easier for MINE to capture the stochastic dependency with y from a z that has been transformed by more layers on the way to the output estimate ŷ. For such transformed representations, the functional relationship between y and z is simpler, and MINE may more easily reach the supremum pursued during optimization [31]. It can also stem from differences in how much mutual information is preserved when preprocessing by SVD. If the functional relationship between y and z is highly non-linear, SVD fails to preserve that relationship.

Information Planes

Information planes are used in information bottleneck theory to visualize the dynamics of mutual information during training of the target network [14]. The dynamics are visualized as a trajectory on a plane whose axes are I(z; y) and I(z; x). Achille and Soatto pointed out that I(z; n), rather than I(z; x), is more fundamental [8]. From our analysis, I(z; x|y) is closer to I(z; n) than I(z; x). Therefore, we suggest using I(z; y) and I(z; x|y) as the axes of the information plane. To see the learning dynamics, we stopped training after every ten batches and estimated the mutual information. Each batch contains 64 samples. Figure 4 shows the resulting dynamics for images without cluttering squares. Each line represents a layer. On the other hand, in Figure 5, each line represents a batch. Note that the starting points are indicated by larger dots. The ranges of the horizontal axes are different between Î(zℓ; x|y) and Î(zℓ; x) since their values differ largely for some layers. (x is the input, y is the output, and zℓ is the activity of the ℓ-th layer. The horizontal axis represents different layers, with smaller numbers closer to the input. The vertical axis represents the value of the estimated mutual information in bits. The boxes extend from the lower to the upper quartile values for ten trials, with a line at the median. The whiskers extend from the boxes to show the ranges of the values across trials. The dimensions after SVD and the numbers of squares added for cluttering were compared.) Figure 4 shows that some learning curves, for example, layers 5 to 8 for CIFAR-10, have the two-phased shape indicated in Figure 1. The shapes seem to be a little different between the (Î(z; x), Î(z; y))-coordinates and the (Î(z; x|y), Î(z; y))-coordinates for MNIST.

Conclusions

As a more precise way of conducting information bottleneck analysis, we proposed using conditional mutual information I(z; x|y) as an upper bound of I(z; n). We estimated values of conditional mutual information for a trained neural network using CMINE. The results showed that I(zℓ; x|y) can be used to observe the information compression behavior of the neural network, similarly to using I(zℓ; x) but with a tighter bound. Our result suggests a new approach that uses I(z; x|y) instead of I(z; x) for information bottleneck theory. From Proposition 1, the information bottleneck loss I(z; x|y) − β̃I(z; y) is equal to the original information bottleneck loss I(z; x) − βI(z; y) when β̃ = β − 1. However, the shapes of the trajectories in the (I(z; x|y), I(z; y))-coordinates would differ from those in the (I(z; x), I(z; y))-coordinates, and they can possibly provide more insights into the dynamics of compression and fitting in the process of learning. The experiments showed some deviation from the data processing inequality.
This is possibly due to the limitation of SVD and MINE in recovering stochastic dependency between layers. We believe more sophisticated dimension reduction and estimation methods may reduce errors. One approach would be to use a non-linear parametric dimension reduction method, such as a convolutional neural network (CNN), but it may require designing the network architecture appropriately. In addition to SVD, we also tried dimension reduction by CNN or global average pooling (GAP). Currently, however, the results are not as robust as those obtained by SVD. Future work includes extending our scheme to tasks other than classification, for example, regression where y is a continuous variable. To do so, we must develop an estimation method of conditional mutual information I(z; x|y) other than CMINE. One possible way would be to combine CMINE with a nonparametric estimation method of p(y). Since information bottleneck analysis by conditional mutual information is independent of how the mutual information is estimated, newly proposed estimators may improve the results. For example, the ensemble KDE-plugin estimator by Moon et al. [37] and the dependency graphs by Noshad et al. [38] could be used. Methods that directly estimate conditional mutual information, such as those by Singh and Póczos, are especially promising [39]. A variational bound to conditional mutual information proposed by Geiger and Fischer is another possible approach [25]. It is preferable to use an estimator that upper bounds mutual information since the purpose of using I(z; x|y) is to upper bound I(z; n). In the future, we expect there will be more methods that directly estimate conditional mutual information. Such a method will provide a further advantage to our formulation.
6,070.4
2021-07-29T00:00:00.000
[ "Computer Science" ]
A Novel Expert System for Building House Cost Estimation: Design, Implementation, and Evaluation This paper introduces an expert system which demonstrates a new method for accurate estimation of building house cost. The system is simple and reduces the time, effort, and money spent by its beneficiaries. In addition, the design and implementation of the proposed expert system are introduced. CLIPS 6.0 and C# are used in the implementation phase. The expert system is also packaged as a standalone, platform-independent application. Furthermore, the developed expert system is tested on several real cases. Finally, an initial evaluation of the expert system is carried out, and positive feedback is received from a sample of users, indicating that it is robust and efficient. I. INTRODUCTION An expert system is a computer program designed to simulate the problem-solving behavior of a human who is an expert in a narrow domain or discipline. Expert Systems (ES), also called Knowledge Based Systems (KBS), are computer application programs that take the knowledge of one or more human experts in a field and computerize it so that it is readily available for use. An expert system makes it easier for a user to identify and describe symptoms, for example as image-based or text-based information, when they are difficult to describe in words. It can also be integrated with a textual database, which can be used to explain basic terms and operations and to confirm or reach a conclusion in some situations [1]. As a branch of artificial intelligence, expert systems have been widely used, and an expert system shell greatly improves the efficiency of constructing an expert system [2]. Computer systems have a profound impact on our daily lives, and every day new research and projects use computers to make life easier, capture expertise, and ease the pressure on people. This paper combines expert system and decision support techniques, two related areas of computer science, and provides material that is accessible to users in the field of architecture. The idea is to build an expert system that acts as an alternative to the architect and helps users calculate the cost of construction from the data they enter (the land area, the site of the land, etc.), providing immediate support to customers so that they can make decisions based on the information available within the scope of the system's existing knowledge.
An expert system's knowledge base is traditionally encoded as a set of domain-specific rules. These rules are generally implications of the form IF a1 AND a2 AND ... AND an THEN c, where the ai's are logical statements that are relevant to the system's problem domain. For example, in the context of soil science, the rule: IF a soil is sandy and the level of humus is high THEN the soil is compact. The expert system is developed in the CLIPS programming environment (C Language Integrated Production System) [3,4,5]. This programming tool is designed to facilitate the development of software that models human knowledge or expertise. CLIPS was chosen for its flexibility, expandability and low cost. The outline of the paper is as follows. Section 2 states the problem recognition. Section 3 presents the basics of building a house. Section 4 covers knowledge representation, Section 5 the knowledge tree, and Section 6 the diagnosis process. Finally, a working example is given and the paper is summarized. II. PROBLEM RECOGNITION We need to build an expert system, Account the Cost of Building House (ACBH), and to present its design and development, in order to distribute human expertise in this field. IV. KNOWLEDGE REPRESENTATION The key problem is to find a KR (and a supporting reasoning system) that can make the inferences your application needs in time, that is, within the resource constraints appropriate to the problem at hand. This tension between the kinds of inferences an application "needs", what counts as "in time", and the cost of generating the representation itself is what makes knowledge representation engineering interesting. There are representation techniques such as frames, rules, tagging, and semantic networks, which have originated from theories of human information processing. Since knowledge is used to achieve intelligent behavior, the fundamental goal of knowledge representation is to represent knowledge in a manner that facilitates inference (i.e., drawing conclusions) from knowledge [6,7]. Knowledge bases can be represented by production rules. These rules consist of a condition or premise followed by an action or conclusion (IF condition...THEN action).
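The production-rule representation just described can be illustrated with a small forward-chaining sketch. The following Python snippet is a minimal illustration only, not the paper's CLIPS/C# implementation; the fact format and helper name (forward_chain) are our own, while the two example rules mirror rules f1 and f2 shown later in the working example.

```python
# Minimal forward-chaining production-rule engine (illustrative only).
# Facts are (attribute, value) pairs; a rule fires when all of its
# conditions are present in working memory, then asserts its conclusion.

RULES = [
    ({("work_type", "foundation"), ("area", 480), ("floors", 1),
      ("rooms", 5), ("tank", "small")},
     ("cost_SR", 122342)),
    ({("work_type", "foundation"), ("area", 480), ("floors", 1),
      ("rooms", 5), ("tank", "medium")},
     ("cost_SR", 128342)),
]

def forward_chain(facts):
    """Repeatedly fire rules whose conditions are satisfied until
    no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({("work_type", "foundation"), ("area", 480),
                         ("floors", 1), ("rooms", 5), ("tank", "small")})
print(dict(f for f in derived if f[0] == "cost_SR"))   # {'cost_SR': 122342}
```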
For example: IF the land area is 400 and the work type is foundation and the number of floors is 1 and the number of rooms is 4 and the water tank is small THEN the cost is 1,11000 SR. To prove a conclusion such as "the overall cost is 744,279 SR", the inference engine must prove every condition leading to that conclusion. A condition can be established by asking the user or by another rule, because the condition may itself be the conclusion of that rule. V. TREE KNOWLEDGE A decision tree (or tree diagram) is a decision support tool that uses a tree-like graph or model of decisions and their possible consequences, including chance event outcomes and resource costs. Decision trees are commonly used in operations research, specifically in decision analysis, to help identify the strategy most likely to reach a goal. Another use of decision trees is as a descriptive means for calculating conditional probabilities. We give the example for land areas of 480 and 400 m². A. Foundational There are two options for the user in the foundational stage: one or two floors. If one floor is chosen, there are two cases for the internal planning of the house, 4 or 5 rooms with a kitchen and two bathrooms; the size of the water tank is then determined. If the user chooses two floors, the same options as for one floor apply, but with an additional driver's room with a bathroom. This is illustrated in figure (1). B. Finishing If the user chooses to calculate the cost of finishing, the choice between 1 or 2 floors is also given, followed by the internal planning of 4 or 5 rooms with a kitchen and 2 bathrooms. There are two types of finishing, Normal and Excellent; each type has a different cost and is calculated per m². This is illustrated in figure (2). C. ALL If the user selects the choice All, meaning foundational and finishing together, the cost is calculated in the same way as before. This is illustrated in figures (3) and (4). [Fig. 3: Tree knowledge, All, two floors.] VI. DIAGNOSIS PROCESS The expert system software adopts C# for the preparatory work, including maintenance and management. All the status parameters, status values and solutions are obtained from the database through C#, then passed to CLIPS through interface functions between C# and CLIPS, and diagnosed by the program built in CLIPS; meanwhile, the inference information and results are passed back to and displayed on the interface module, which is programmed in C#. The expert system shell CLIPS keeps in memory a fact list, a rule list, and an agenda with activations of rules. Facts in CLIPS are simple expressions consisting of fields in parentheses. Groups of facts in CLIPS usually follow a fact template, so that it is easy to organize them and design simple rules that apply to them. Our expert system contains 100 CLIPS rules. Below, we present some of the rules for (ACBH). Working Example We can represent these rules in CLIPS as follows:

(defrule work_type_process
  =>
  (printout t "foundation, finish, all" crlf)
  (bind ?answer (read idata))
  (assert (work_type ?answer))
  (printout t ?answer crlf))

(defrule area_process
  =>
  (printout t "land area 400 or 480?" crlf)
  (bind ?answer (read idata))
  (assert (area ?answer)))

(defrule f1
  (work_type foundation) (area 480) (floor one) (rooms five) (small tank)
  =>
  (printout odata "122342" crlf))

(defrule f2
  (work_type foundation) (area 480) (floor one) (rooms five) (medium tank)
  =>
  (printout odata "128342" crlf))

Figures 5, 6, 7, 8, 9 and 10 present some samples of the proposed expert system forms and menus. In this paper, the design of an expert system for estimating the cost of building a house is introduced. The expert system is implemented using CLIPS to build the knowledge base and C# to design the foreground interface. The developed expert system interface receives information from users and handles it under several cases; accordingly, it returns an accurate estimation to the user. The proposed expert system is delivered as one executable standalone package. In addition, testing of the proposed expert system shows that it is simple, accurate, powerful, and flexible. After execution, the system covers for A HOUSE: choosing a place for building a house - settlement of the land - soil quality - construction area - foundations and pillars - types of foundations - finished construction - types of buildings - types of fossils - internal planning for the home - determining the labor - the numbers and types of housing required during the next twenty years in the Kingdom.
2,503.2
2013-01-01T00:00:00.000
[ "Computer Science" ]
Long non-coding RNA LINC01234 regulates proliferation, migration and invasion via HIF-2α pathways in clear cell renal cell carcinoma cells Long non-coding RNAs (lncRNAs) have been proved to have an important role in different malignancies including clear cell renal cell carcinoma (ccRCC). However, their role in disease progression is still not clear. The objective of the study was to identify lncRNA-based prognostic biomarkers and further to investigate the role of one lncRNA, LINC01234, in the progression of ccRCC cells. Six adverse prognostic lncRNA biomarkers including LINC01234 were identified in ccRCC patients by bioinformatic analysis using The Cancer Genome Atlas database. LINC01234 knockdown impaired cell proliferation, migration and invasion in vitro as compared to the negative control. Furthermore, the epithelial-mesenchymal transition was inhibited after LINC01234 knockdown. Additionally, LINC01234 knockdown impaired hypoxia-inducible factor-2α (HIF-2α) pathways, including a suppression of the expression of HIF-2α, vascular endothelial growth factor A, epidermal growth factor receptor, c-Myc, Cyclin D1 and MET. Together, these data showed that LINC01234 is likely to regulate the progression of ccRCC through HIF-2α pathways, and that LINC01234 is both a promising prognostic biomarker and a potential therapeutic target for ccRCC. INTRODUCTION In 2018, it was predicted that 403,262 new cases of kidney cancer would be diagnosed in 185 countries and that 175,098 patients would die of the disease (Bray et al., 2018). Among 36 kinds of cancers, the morbidity and mortality of kidney cancer were 2.2% and 1.8%, respectively (Bray et al., 2018). Clear cell renal cell carcinoma (ccRCC) is the most common subtype of RCC and accounts for 75% of RCC cases. Although surgery is still the preferred therapeutic option for localized and locally advanced ccRCC, the long-term prognosis remains unsatisfactory and unpredictable. The current evaluation approach for the prognosis of ccRCC is mainly based on clinicopathologic data, such as TNM staging. However, it does not reflect the biological heterogeneity of cancer (Cheng, 2018). Therefore, there is an urgent need for discovering a new prognostic model and biomarkers. LncRNA expression and clinical datasets of ccRCC cases The TCGA Research Network is available at http://cancergenome.nih.gov/ (Deng et al., 2016). The datasets for ccRCC cases within the TCGA database were downloaded using the GDC Data Portal. The version of the dataset was Data Release 14.0, December 18, 2018. Differential expression analysis to identify differentially expressed lncRNAs Differential expression analysis was performed as previously described (Yang et al., 2019). A volcano plot was plotted for the differentially expressed lncRNAs. Univariate cox regression and least absolute shrinkage and selection operator regression to identify key prognostic lncRNAs The univariate cox regression was performed for the differentially expressed lncRNAs. Then, the statistically significant lncRNAs (p < 0.05) were used for least absolute shrinkage and selection operator (LASSO) regression to identify key prognostic lncRNAs. The univariate cox regression and LASSO regression were performed as previously described (Yang et al., 2019). Multivariate cox regression to establish the prognostic model The multivariate cox regression was performed for the key prognostic lncRNAs as previously described (Yang et al., 2019). A risk score was calculated for each patient.
Based on the median of the risk score, all patients were divided into the high-risk group and low-risk group. A heatmap was plotted to present the expression levels of the key prognostic lncRNAs in the two groups. And a forest plot was plotted to present the hazard ratio (HR) and 95% confidence interval (CI) for the key prognostic lncRNAs. ROC curve and C-index to evaluate the prognostic model The 3-year and 5-year time-dependent receiver operating characteristic (ROC) curves, the area under the ROC curves (AUCs) and the C-index were performed as previously described (Yang et al., 2019). Kaplan-Meier (K-M) survival analysis to identify independent prognostic biomarkers The R package "survival" (cran.r-project.org/web/packages/survival/index.html) was used for K-M survival analysis. Firstly, The K-M survival analysis was performed for the high-risk group and the low-risk group. Then K-M survival curves were plotted individually for each statistically significant lncRNA from the result of the multivariate cox regression. Validation of the expression and prognostic significance of the independent prognostic biomarkers Gene Expression Profiling Interactive Analysis (GEPIA) server (Tang et al., 2017) is a newly developed interactive web server and has been running for 3 years. It was used for analyzing the RNA sequencing expression data computed by a standard processing pipeline. Therefore, we validated the expression levels and prognostic significance of the independent prognostic biomarkers in patients with ccRCC via GEPIA server according to their Ensembl ID. RNA extraction, reverse transcription and real-time quantitative PCR (qPCR) RNA extraction and reverse transcription were performed as previously described (Wang et al., 2020). QPCR was performed using SYBR Green Realtime PCR Master Mix (TOYOBO, Osaka, Japan) in the QuantStudio 5 Real-Time PCR System (Thermo Fisher Scientific, Waltham, MA, USA). The PCR primers are shown in Table S1. The relative expression levels of genes were calculated using the 2 −ΔΔCt method relative to GAPDH. CCK-8 cell proliferation assay Cells stably expressing LINC01234 shRNA or control vector were plated into 96-well plates (2,000 cells per well) and incubated at 37 C under 5% CO2 for 1, 2, 3 or 4 days respectively. Then CCK-8 solution (Dojindo, Kumamoto, Japan) was added into the culture medium, and the optical density (OD) at 450 nm was measured with a Microplate Reader (Bio-Rad Laboratories Inc, Hercules, CA, USA) after incubation for 1.5 h. Each group had five duplicates and the experiment was performed in triplicate. Cell colony formation assay Cells stably expressing LINC01234 shRNA or control vector were plated into 10 cm culture dish (1,500 cells per dish) and incubated for 14 days. Wells were fixed with 4% paraformaldehyde and stained with 0.1% crystal violet. The cell colonies with >50 cells were counted. Each group had three duplicates and the experiment was performed in triplicate. Transwell assays Transwell assays including migration assays and invasion assays were performed as previously described (Wang et al., 2020). Each group had three duplicates and the experiment was performed in triplicate. Western blots Western blots were performed as described (Liu et al., 2013;Yang et al., 2018). Total cellular protein was extracted using RIPA buffer (Beyotime, Shanghai, China) with 1% of 100 mM PMSF (Solarbio, Beijing, China). Protein concentration was quantified using a BCA Protein Quantitative Kit (Beyotime, Shanghai, China). 
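The relative-expression calculation mentioned in the qPCR paragraph above (the 2^(−ΔΔCt) method, normalized to GAPDH) can be written as a one-line formula. The following snippet is an illustration only; the Ct values in the example are hypothetical and not taken from the study.

```python
def relative_expression_ddct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^(-ΔΔCt) method.
    ct_target / ct_ref: Ct of the gene of interest and the reference gene
    (e.g. GAPDH) in the treated sample; *_ctrl: the same in the control."""
    d_ct_sample = ct_target - ct_ref              # ΔCt, treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # ΔCt, control sample
    dd_ct = d_ct_sample - d_ct_control            # ΔΔCt
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: knockdown vs control, normalized to GAPDH
print(relative_expression_ddct(26.0, 18.0, 24.0, 18.0))  # 0.25, i.e. ~75% knockdown
```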
Briefly, 30 mg of protein was resolved by 10% SDS-PAGE, and transferred to a PVDF membrane (Millipore, Billerica, MA, USA). The membrane was blocked with 5% skim milk and then probed with rabbit or mouse anti-human primary antibodies respectively. Next, the membranes were incubated with corresponding HRP-conjugated goat anti-rabbit or anti-mouse IgG (1:1,000 dilution) (CST, Boston, USA) and detected with Western Blotting Luminol Reagent Statistical analysis For the datasets from TCGA database, the software Perl, R (version 3.4.4), RStudio 1.2.1335-Windows 7+ (64-bit) and R packages were used for data integration, extraction, analysis and visualization. Briefly, the R package "edgeR" was utilized to screen differentially expressed genes (FDR < 0.05 and |log 2 FC| > 2). The univariate cox regression and the Lasso regression were performed to identify key prognostic factors. The multivariate cox regression and K-M survival curve were performed to establish the risk score model and identify independent prognostic factors. ROC curve and C-index were performed to estimate the prognostic power of the risk score model. For the data about the function of LINC01234, SPSS 22.0 (IBM, Endicott, NY, USA) and GraphPad Prism 5.01 (GraphPad Software, San Diego, CA, USA) were used for statistical analyses. The data was expressed as mean ± SD from at least three independent experiments. Cell proliferation abilities of CCK-8 assay were compared with two-way ANOVA. Cell colony, migration and invasion levels, as well as qPCR data were compared using the Student's t-test. A p < 0.05 was considered statistically significant. Identification of differentially expressed lncRNAs and key prognostic lncRNAs in patients with ccRCC A total of 70 normal tissue samples and 541 cancer tissue samples from patients with ccRCC were collected. A total of 11,368 lncRNAs were extracted from the transcriptome profiling. Compared with the normal tissues, a total of 1541 lncRNAs were identified as differentially expressed lncRNAs in tumor tissues (FDR < 0.05 and |log 2 FC| > 2), including 1075 upregulated (log 2 FC > 2) and 466 downregulated (log 2 FC < −2) lncRNAs ( Fig. 1A) ( Table S2). Preliminarily, a total of 323 statistically significant lncRNAs were considered to be related to the prognosis by the univariate cox regression (Table S3). Next, through the LASSO regression, 13 lncRNAs were identified as key prognostic lncRNAs (Figs. 1B and 1C), which were used for the further establishment of the risk score model by multivariate cox regression. Establishment and evaluation of the prognostic model in ccRCC The median cutoff point of the risk scores calculated by multivariate cox regression was 0.842. All patients were divided into the high-risk group and low-risk group. It was Figure 1 Identification of differentially expressed lncRNAs and key prognostic lncRNAs in patients with ccRCC. (A) Identification of differentially expressed lncRNAs. A total of 11,368 lncRNAs were extracted from the transcriptome profiling and 1,541 lncRNAs were identified as differentially expressed lncRNAs in tumor tissues, including 1,075 upregulated (log 2 FC > 2) and 466 downregulated (log 2 FC < −2) lncRNAs. (B and C) Tuning parameter and variable selection by LASSO regression to identify key prognostic lncRNAs. A total of 323 significant lncRNAs were preliminarily associated with prognosis by the univariate cox regression, and finally 13 key prognostic lncRNAs were identified by LASSO regression. 
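The risk-score construction summarized above (a multivariate cox model on the selected lncRNAs, followed by a median split into high- and low-risk groups) can be sketched as follows. This is a minimal illustration using the Python lifelines package rather than the authors' R pipeline; the column names and the use of the log partial hazard as the risk score are our assumptions.

```python
import pandas as pd
from lifelines import CoxPHFitter

# df: one row per patient, with overall-survival time ("time"), event flag
# ("event"), and expression of the selected lncRNAs in the remaining columns.
def risk_groups(df, time_col="time", event_col="event"):
    cph = CoxPHFitter()
    cph.fit(df, duration_col=time_col, event_col=event_col)
    # The linear predictor beta^T x serves as the per-patient risk score.
    risk = cph.predict_log_partial_hazard(df)
    groups = pd.Series(risk > risk.median(), name="high_risk")  # median split
    return cph, risk, groups
```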
The numbers on the top of the figures indicated the number of the candidate lncRNAs for the corresponding lambda (λ) value in LASSO regression. lncRNA, long non-coding RNA; FC, fold change; LASSO, least absolute shrinkage and selection operator. revealed that the patients in the high-risk group had a significantly worse overall survival (OS) rate than those in the low-risk group (p < 0.001) ( Fig. 2A). The AUC was 0.753 (3-year ROC curve) and 0.784 (5-year ROC curve) respectively, and the C-index was 0.753 ( Fig. 2B). In addition, it also presented the relationship between the survival time and the risk score for patients (the death and the alive) (Fig. 2C). Moreover, a heatmap was plotted to illustrate the expression levels of the key prognostic lncRNAs in the high-risk group and low-risk group (Fig. 2D). Identification of independent prognostic biomarkers The multivariate cox regression revealed HR and 95% CI for the 13 key prognostic lncRNAs with a forest plot (Fig. 3). It indicated six statistically significant lncRNAs as the independent prognostic biomarkers, including lncRNAs AC009654.1, AC012615.3, AC092490.2, AL357507.1, LINC01234 and LINC01956. Moreover, K-M survival analysis was performed for the six lncRNAs. It revealed that all the six lncRNAs with high expression levels predicted a significantly worse OS rate than the low expressed one (Figs. 4A-4F). Therefore, they could serve as the adverse independent prognostic factors. Furthermore, we validated the expression levels and prognostic significance of the six lncRNAs in patients with ccRCC via GEPIA server. It suggested that AL357507.1, LINC01234, and LINC01956 were highly expressed at higher pathological stage of the disease, while LINC01234 exhibited the most significance in terms of expression at different pathological stage of the disease (Figs. 5A-5F). Moreover, GEPIA server revealed the significance of LINC01234 in terms of survival time. The high expression level of LINC01234 predicted a significantly worse disease-free survival rare or OS rate than the low expressed one (Figs. 5G and 5H). Unfortunately, GEPIA server could not provide the prognostic significance of the other five lncRNAs because the server showed the sample size was insufficient. LINC01234 knockdown suppressed the proliferation and clone formation of ccRCC cells Knockdown of LINC01234 was performed in Caki-2 and A498 cells by the lentivirus-mediated shRNA transfection. It suggested that the expression of LINC01234 was reduced in Caki-2 and A498 cells, which was validated by qPCR (Fig. 6A). Next, the CCK-8 assay revealed the proliferations of Caki-2 and A498 cells were significantly suppressed ( Fig. 6B and 6C). Moreover, cell colony formation assay was performed to analyze the role of LINC01234 in the colony formation of Caki-2 and A498 cells. As shown in Figs. 6D-6I, the clonogenic capacities of Caki-2 and A498 cells were dramatically inhibited. Obviously, it indicated that LINC01234 played an important role in the proliferation and colony formation of Caki-2 and A498 cells. LINC01234 depletion inhibited the migration and invasion of ccRCC cells The migration capabilities of Caki-2 and A498 cells were assessed by Transwell migration assay, while the invasion capabilities of these cells were assessed by Transwell Matrigel invasion assay. The results of the Transwell assay indicated that LINC01234 knockdown significantly inhibited the migration capabilities of Caki-2 and A498 cells (Figs. 7A-7F). 
Similarly, the invasion capabilities of Caki-2 and A498 cells were also suppressed following LINC01234 depletion (Figs. 7G-7L). These findings demonstrated that LINC01234 played an important role in the migration and invasion capacities of ccRCC cells. LINC01234 knockdown suppressed EMT process in ccRCC cells EMT process was closely related to the migration and invasion of cancer cells. Therefore, the mRNA levels of EMT-associated genes and the levels of EMT-associated proteins were detected by RT-PCR and western blots respectively. As shown in Figs. 8A and 8B, we found that the mRNA level of epithelial marker E-cadherin was increased, while the mRNA level of mesenchymal marker N-cadherin was decreased in Caki-2 and A498 cells following LINC01234 knockdown. Similarly, as shown in Fig. 8C, the protein expression levels of mesenchymal markers Vimentin and N-cadherin were significantly decreased in Caki-2 and A498 cells with LINC01234 knockdown, while the protein expression level of epithelial marker E-cadherin was upregulated. Moreover, the protein expression level of the transcription factor Snail was decreased in Caki-2 and A498 cells with LINC01234 knockdown. In addition, the protein expression level of β-catenin was also inhibited in Caki-2 and A498 cells following LINC01234 depletion. LINC01234 suppression suppressed HIF-2a pathways in ccRCC cells As shown in Figs. 8A and 8B, we found that the mRNA levels of HIF-2a and VEGFA were decreased in Caki-2 and A498 cells following LINC01234 knockdown. Similarly, as shown in Fig. 8D, we found that the protein expression levels of HIF-1a and HIF-2a were significantly decreased in Caki-2 and A498 cells with LINC01234 knockdown. Additionally, we found the protein expression levels of several target genes of HIF-2a, including VEGFA, EGFR, c-myc, Cyclin D1 and MET, were also inhibited in Caki-2 and A498 cells following LINC01234 depletion. DISCUSSION LncRNA was a kind of long RNA transcripts (>200 nucleotides) and it had no apparent protein-coding potentials (Quinn & Chang, 2016). Even so, lncRNA possessed a wide range of biological functions involved in multiple vital cellular activities (Shen et al., 2015). Generally, lncRNA achieved its function by regulating gene expression in the levels of epigenetics, transcription and post-transcription (Lee, 2012;Wang & Chang, 2011). It could serve as a molecular signal, a molecular decoy, a molecular guide, or a molecular scaffold to achieve its functions (Wang & Chang, 2011). The function of lncRNA was associated with its subcellular localization (Wang & Chang, 2011). More specifically, lncRNA might be involved in chromatin regulation, gene transcription and alternative splicing of transcripts when it was in nucleus, while if it was in cytoplasm, it might serve as a competing endogenous RNA (ceRNA), and regulated the stability or translation of mRNA (Yang et al., 2019). Recently, more and more evidences indicated that aberrations of lncRNA, such as overexpression, deficiency or mutation, played an important role in malignant phenotypes of cancers (Schmitt & Chang, 2016), including tumor formation, progression, metastasis and poor prognosis (Esteller, 2011;Gupta et al., 2010;Martens-Uzunova et al., 2014;Yu et al., 2017;Yue et al., 2016). Some aberrant lncRNAs were also associated with lots of malignant biological behaviors of cancer cells, such as proliferation, apoptosis, migration and invasion (Ellinger et al., 2016;Huang et al., 2017;Yue et al., 2016). 
It was also reported that some aberrant lncRNAs could serve as prognostic indicators in ccRCC, such as lncRNA Fer1L4 (Cox et al., 2020). With the development of molecular biological techniques and bioinformatics, more and more lncRNAs have been marked as novel biomarkers and prognostic signatures for ccRCC utilizing the TCGA database. For example, lncRNA Fer1L4 was overexpressed in ccRCC tissues, and its high expression levels were found in higher grade, higher stage, and metastatic tumors (Cox et al., 2020). LncRNA Fer1L4 overexpression was also an independent prognostic factor for patients with ccRCC (Cox et al., 2020). It was also reported that an 11-lncRNA signature (AC245100. In the present study, utilizing the TCGA database, we identified 1,541 differentially expressed lncRNAs. More importantly, we not only constructed a 13-lncRNA-based risk score model with moderate accuracy, but also identified six independent adverse prognostic lncRNAs for patients with ccRCC, including lncRNAs AC009654.1, AC012615.3, AC092490.2, AL357507.1, LINC01234 and LINC01956. This was similar to a recent study which suggested that lncRNAs AC009654.1, AC092490.2, LINC00524, LINC01234 and LINC01885 were significantly associated with ccRCC prognosis (Zhang et al., 2020). The expression levels of the six lncRNAs above were upregulated in ccRCC tissues, and their high expression levels predicted a worse OS in ccRCC patients. Further, we investigated their expression levels at different pathological stages and validated their prognostic significance in ccRCC patients via the GEPIA server. It revealed that AL357507.1, LINC01234, and LINC01956 were highly expressed at higher pathological stages of the disease, while LINC01234 exhibited the highest significance in terms of expression at different pathological stages of the disease. This was a very interesting finding, because the pathological stage is closely associated with the prognosis of ccRCC patients. Moreover, the GEPIA server revealed the significance of LINC01234 in terms of survival time. Unfortunately, the GEPIA server could not provide the prognostic significance of the other five lncRNAs because the server showed the sample size was insufficient. Besides, we also reviewed the recent studies and references about these six lncRNAs. Nevertheless, apart from limited research on LINC01234, there are currently no investigations of them, and they deserve further research. Therefore, we mainly focused on lncRNA LINC01234 for the subsequent experiments. Recently, some functions and mechanisms of LINC01234 (also known as LCAL84) were reported in cancers, such as gastric cancer, esophageal cancer (Ghaffar et al., 2018), and colorectal adenocarcinoma (He et al., 2018). LINC01234 was upregulated and had oncogenic potential in esophageal carcinoma cells in vitro (Ghaffar et al., 2018; He et al., 2018). LINC01234 was significantly associated with the prognosis of colorectal adenocarcinoma and with the malignant biological behaviors of esophageal carcinoma cells, including proliferation, migration, invasion and apoptosis (Ghaffar et al., 2018; He et al., 2018). Besides, LINC01234 expression was significantly upregulated in gastric cancer tissues and was associated with larger tumor size, advanced TNM stage, lymph node metastasis, and shorter survival time. Moreover, LINC01234 could serve as a ceRNA to regulate core-binding factor β (CBFB) expression by sponging miR-204-5p, thereby regulating apoptosis, growth arrest and tumorigenesis in gastric cancer.
In our study, we also explored the role of LINC01234 in ccRCC. It indicated that LINC01234 expression was upregulated in ccRCC tissues. LINC01234 was expressed increasingly as the stage increased. The high expression level of LINC01234 predicted a significantly worse disease-free survival rate or OS rate than the low expression level for patients with ccRCC. Besides, LINC01234 knockdown inhibited the proliferation, migration, invasion and EMT process of ccRCC cells. More importantly, LINC01234 knockdown impaired the expression of HIF-1α, HIF-2α, VEGFA, EGFR, c-Myc, Cyclin D1 and MET in Caki-2 and A498 cells. EMT is considered an essential process during development whereby epithelial cells acquire mesenchymal, fibroblast-like characteristics and display reduced intercellular adhesion and increased motility (Aigner et al., 2007; Moreno-Bueno, Portillo & Cano, 2008). EMT plays a critical role in the progression of primary tumors towards spread and metastasis, as well as in the migration and invasion of malignant tumor cells (Gloushankova, Zhitnyak & Rubtsova, 2018; Peinado, Olmeda & Cano, 2007; Yang et al., 2018). Recently, an increasing number of studies have supported the role of lncRNAs in the regulation of tumor progression and metastasis through the regulation of EMT (Gugnoni & Ciarrocchi, 2019). In carcinogenic progression, downregulation of cell-adhesion molecules like epithelial cadherins, occludins, claudins, certain cytokeratins, and ZO-1, together with the coordinated upregulation of mesenchymal cadherins, vimentin, fibronectin and β1 and β3 integrins, promotes loss of cell-cell adhesion and apico-basal polarity and acquisition of invasive and migratory capacity (Gugnoni & Ciarrocchi, 2019; Lu & Kang, 2019). A group of transcription factors including Snail, Slug, Twist, and zinc finger E-box-binding homeobox 1 and 2 (ZEB1, ZEB2) is well known to regulate the EMT process partially or completely (Gugnoni & Ciarrocchi, 2019; Yang et al., 2018). Therefore, we detected the mRNA levels of EMT-associated genes and the expression levels of EMT-associated proteins in ccRCC cells by qPCR and western blots, respectively, following LINC01234 knockdown. It revealed that the mRNA level of the epithelial marker E-cadherin was increased, while the mRNA level of the mesenchymal marker N-cadherin was decreased in Caki-2 and A498 cells following LINC01234 knockdown. The protein expression levels of the transcription factor Snail and the mesenchymal markers N-cadherin and Vimentin were reduced, while the protein expression level of the epithelial marker E-cadherin was up-regulated in A498 and Caki-2 cells with LINC01234 knockdown. These findings indicated that the function of LINC01234 was associated with the EMT process. EMT was impaired after LINC01234 knockdown. In addition, we also found that inhibition of the β-catenin pathway contributed to the EMT impairment after LINC01234 depletion. All this evidence suggested that LINC01234 knockdown could inhibit cell proliferation, migration and invasion, as well as the EMT process, in ccRCC. During the EMT process, LINC01234 knockdown might suppress the expression of the transcription factor Snail, thereby stimulating the expression of E-cadherin and inhibiting the expression of Vimentin and N-cadherin, which might result in an inhibition of malignant biological behaviors of ccRCC cells, such as cell proliferation, migration and invasion.
Hypoxia can induce ccRCC cells to undergo EMT, angiogenesis and metastasis (Meléndez-Rodríguez et al., 2018; Zhang et al., 2017). Adaptation to a hypoxic environment plays an important role in the progression of ccRCC (Garje et al., 2018). Hypoxia is mediated via HIFs (Semenza, 2012). Previously, HIF-1α was supposed to be a key oncogenic factor, but recent evidence showed that HIF-2α is a predominant driver in renal cancer progression (Keith, Johnson & Simon, 2011). Currently, HIF-1α is considered a ccRCC tumor suppressor, and its activity is commonly diminished by chromosomal deletion in ccRCC (Schödel et al., 2016). Conversely, HIF-2α has emerged as an oncogene that is essential for ccRCC tumor progression (Meléndez-Rodríguez et al., 2018; Schödel et al., 2016). Polymorphisms at the HIF-2α gene locus predispose to the development of ccRCC, and HIF-2α can promote tumor growth (Schödel et al., 2016). Indeed, preclinical and clinical data have shown that pharmacological inhibitors of HIF-2α can efficiently inhibit ccRCC growth (Meléndez-Rodríguez et al., 2018). HIF-2α was found to be more sensitive to moderate hypoxia and showed more enduring expression in hypoxic conditions. HIF-2α can translocate to the nucleus and bind to the hypoxia response elements (Garje et al., 2018). This binding results in the expression of several target genes involved in angiogenesis, proliferation, migration and invasion of cancer cells, such as VEGFA, EGFR, c-Myc, Cyclin D1 and MET (Garje et al., 2018). VEGFA plays an important role in the formation of blood vessels, which is closely associated with carcinogenesis (Shi et al., 2019). In ccRCC, as a well-known target of HIF-2α, VEGFA also plays a vital role in angiogenesis and is a key target of anti-cancer therapeutic agents (Garje et al., 2018; Meléndez-Rodríguez et al., 2018). Besides, EGFR, c-Myc and Cyclin D1 are associated with the ccRCC cell cycle and proliferation (Meléndez-Rodríguez et al., 2018). EGFR signaling can also influence ccRCC patient survival (Meléndez-Rodríguez et al., 2018). Moreover, MET is related to ccRCC metastasis (Meléndez-Rodríguez et al., 2018). Based on the above, we examined HIF-2α pathways after LINC01234 depletion. It revealed that the mRNA levels of HIF-2α and VEGFA were decreased in A498 and Caki-2 cells with LINC01234 knockdown. Similarly, the expression levels of the proteins HIF-2α, VEGFA, EGFR, c-Myc, Cyclin D1 and MET were reduced in A498 and Caki-2 cells with LINC01234 knockdown. In our study, LINC01234 was expressed increasingly as the stage increased, and its high expression level predicted a significantly worse disease-free survival rate or OS rate for patients with ccRCC. LINC01234 knockdown suppressed cell proliferation, migration and invasion of ccRCC cells. Combining all these findings, it is suggested that LINC01234 knockdown might suppress the expression of HIF-2α, and then inhibit the expression of VEGFA, EGFR, c-Myc, Cyclin D1 and MET, which might further inhibit proliferation and metastasis and thus influence the survival of ccRCC patients. Unfortunately, there were several limitations in our study. Firstly, the function of a lncRNA is associated with its subcellular localization, but we did not identify the subcellular localization of LINC01234 in ccRCC cell lines.
Secondly, although LINC01234 functions as a ceRNA to regulate CBFB expression by sponging miR-204-5p in gastric cancer, we did not identify any miRNAs as direct targets of LINC01234 to investigate whether LINC01234 is a ceRNA for miRNAs in ccRCC. This deserves further investigation. CONCLUSIONS In summary, we constructed a lncRNA-based prognostic model with moderate accuracy and identified LINC01234 as an independent prognostic biomarker in ccRCC. Moreover, LINC01234 knockdown might inhibit the proliferation and metastasis of ccRCC cells by suppressing HIF-2α pathways. Therefore, LINC01234 might serve as a promising prognostic biomarker and a potential therapeutic target for patients with ccRCC.
5,874.4
2020-10-14T00:00:00.000
[ "Medicine", "Biology" ]
Bionic women and men ‐ Part 2: Arterial stiffness in heart failure patients implanted with left ventricular assist devices What is the topic of this review? This review discusses how implantation of continuous flow left ventricular assist devices impacts arterial stiffness and outcome. What advances does it highlight? Not all patients implanted with continuous flow left ventricular assist devices show an increase in arterial stiffness. However, in those patients in whom arterial stiffness increases, the level of the composite outcome (stroke, gastrointestinal bleeding, pump thrombosis and death) is significantly higher than in those whose arterial stiffness does not increase. INTRODUCTION Major advancements in mechanical circulatory support mean that patients suffering severe heart failure are now living longer as a result of continuous-flow (CF) left ventricular assist device (LVAD) therapy (Colombo et al., 2019; Mehra et al., 2018, 2019). However, in parallel with these important improvements in outcome, patients implanted with CF-LVADs continue to be at increased risk of peripheral organ damage, including stroke and gastrointestinal (GI) bleeding (Colombo et al., 2019). As already detailed in 'Bionic women and men. Part 1' (Stohr, Cornwell, Kanwar, Cockcroft, & McDonnell, 2020), this increased risk of stroke and GI bleeding in CF-LVAD patients might be associated with the nature of continuous flow and its impact on blood flow dynamics, blood pressure regulation and overall organ health (Stöhr, McDonnell, Colombo, & Willey, 2019a, b). In non-LVAD patients, flow dynamics, blood pressure regulation and organ health are closely linked to arterial stiffness (Ben-Shlomo et al., 2014). The purpose of this report is to highlight and discuss the variable impact of CF-LVAD therapy on arterial stiffness and to highlight some potential mechanisms linking these associations in this unique population. CONTINUOUS FLOW AND ARTERIAL STIFFNESS Increased large artery stiffness, as measured by aortic pulse wave velocity, is independently associated with an increased risk of stroke and cardiovascular disease (Ben-Shlomo et al., 2014). However, these data are derived from circulatory systems with dynamic oscillations, whereby one can measure the influence of blood pressure and the relative deformation of the artery and pulse wave velocities to measure stiffness. To date, there are no studies describing the assessment of artery stiffness during LVAD therapy, owing to the inability to measure artery deformation and pulse wave velocities in these non-pulsatile and CF systems. However, in a number of studies, an attempt has been made to understand the impact of CF-LVAD therapy on arterial stiffness by assessing aortic stiffness before patients are implanted with a CF-LVAD and subsequently, after the patient has been taken off the CF-LVAD and has received a heart transplant. The first study to show the changes in aortic stiffness was conducted by Ambardekar et al. (2015), who showed a significant increase in the aortic stiffness index of tissue samples harvested before LVAD implantation and after orthotopic heart transplant.
Interestingly, their study provided an important insight into the structural changes of the arterial morphology (significant reductions in elastin and significant increases in collagen) in the aortic tissue of those implanted with an LVAD compared with heart failure patients and healthy control subjects. Furthermore, in 2017, the same group showed that non-invasive echocardiographic measures of the aortic stiffness index confirmed an increased stiffness in LVAD patients in vivo and that the change in stiffness was determined by whether the LVAD had a pulse or not (Patel et al., 2017). However, during these observations, the authors did not determine whether the pulse during LVAD support was directly related to the device itself, to the ability of the heart to contribute to the pulse produced, or to the speed settings of the devices implanted. More recently, our own data showed that, on average, aortic stiffness did increase during CF-LVAD therapy. However, aortic stiffness did not increase in all patients, and those patients with increased aortic stiffness had the highest risk of the composite outcome of stroke, GI bleeding and pump thrombosis. Interestingly, those individuals with increased aortic stiffness were on CF-LVAD therapy for a longer duration and were on lower numbers of angiotensin converting enzyme (ACE) inhibitors or angiotensin receptor blockers (ARBs) compared with those who had an unaltered or decreased aortic stiffness (Rosenblum et al., 2018). Paradoxically, patients who had an increased stiffness with LVAD therapy had a significantly lower baseline stiffness. This suggests that prior stiffness might reduce the risk during subsequent LVAD therapy or, alternatively, that the relative increase in stiffness, even from a lower baseline, might also play a role in increasing risk. Further work is required to understand the mechanisms associated with the progression of aortic stiffness in various pulsatile and non-pulsatile CF-LVAD patient groups, especially with the introduction of third-generation LVAD devices that have an added 'artificial pulse' (i.e. the HeartMate 3 LVAD and the HeartWare VAD). POTENTIAL MECHANISMS As already alluded to in the symposium report 'Bionic women and men. Part 1' (Stohr et al., 2020), the role of pulsatility has been a topic of great debate in terms of outcome in CF-LVAD patients. Importantly, it is proposed that the role of endothelium-derived NO in a dynamic system is associated with endothelial function (Moncada, Radomski, & Palmer, 1988). Endothelial function, as measured by flow-mediated dilatation, is a technique predominantly associated with the flow-mediated release of NO or, to a lesser extent, the release of prostacyclin and endothelium-derived hyperpolarizing factor (Stoner et al., 2012). Moreover, flow-mediated dilatation has been shown to be impaired in CF-LVAD patients (Witman et al., 2015) but significantly higher in patients implanted with pulsatile LVADs (Amir et al., 2006). Given that endothelial dysfunction, reduced NO and endothelium-derived hyperpolarizing factor are considered mechanisms related to increased large artery stiffening (McEniery et al., 2006; Bellien et al., 2010), it is reasonable to propose that this link might be one that plays a fundamental role in the LVAD and arterial stiffness story.
Whether the settings and the degree of pulsatile flow and pressure outputs from different LVADs, in different patients, have a role to play in endothelial production of NO, endothelial dysfunction and development of arterial stiffness remains to be seen. Better description of macro-and microvascular flow and pressure profiling (haemodynamic profiling) in LVAD patients are needed to inform and lead to a better understanding of individual CV risk in the future. In addition to the relationship between CF associated with LVAD therapy and endothelial function, it has been proposed that the lack of pressure and flow oscillations in CF-LVAD therapy significantly affects baroreceptor sensitivity and function in regulating blood pressure in LVAD patients. It has been shown that the degree of pulsatility or having some pulsatility in the output from the LVAD impacts sympathetic activity (SA) of the patient (Cornwell et al., 2015). In addition, authors of that paper have previously investigated the impact of acute alterations in the LVAD settings and subsequent flow outputs from LVAD therapy in relationship to SA. Their data demonstrated that an acute increase in LVAD speed is associated with increased SA. By increasing the speed, the pulsatile nature of the flow and output is diminished enough to unload the baroreceptors and, in turn, increase SA. Importantly, an increase in SA has been shown to be related to arterial remodelling and increased arterial wall thickness (Dinenno, Jones, Seals, & Tanaka, 2000) in dynamic systems. Therefore, the impact of the speed and settings of CF-LVAD therapy on individual flow and pressure outputs might have a direct impact on the SA of patients and have long-term structural and functional implications for the vasculature that might influence future CV risk. Similar to the impact of reducing endothelial function when on CF-LVAD therapy, increased arterial wall thickness and stiffness will consequently disable the buffering capabilities of the large arteries (when needed) and potentially expose the microcirculation to detrimental pulsatile energy in a system ill prepared to deal with such oscillations . The cumulative effect of LVAD therapy on functional and structural mechanisms of arterial stiffness might have a significant impact on the CV risk of individual patients. Importantly, our group has previously shown that systolic blood pressure in CF-LVAD patients is relatively low in comparison to healthy people when measured in the office setting; however, for the first time, use of 24 h monitoring has shown that these patients can present with multiple hypertensive crises during 24 h. Therefore, in LVAD patients sensitive to minimal increases in blood pressure, increased large artery stiffness and an inability to buffer pressure and flow when needed, presentation with multiple hypertensive episodes during 24 h might significantly increase the risk of stroke and GI bleeds. FUTURE PERSPECTIVES Moving forwards, it is crucial that clinicians and scientists work
2,072
2020-03-07T00:00:00.000
[ "Medicine", "Engineering", "Biology" ]
Sensing Magnetic Fields with Magnetosensitive Ion Channels Magnetic nanoparticles are met across many biological species ranging from magnetosensitive bacteria, fishes, bees, bats, rats, birds, to humans. They can be both of biogenetic origin and due to environmental contamination, being either in paramagnetic or ferromagnetic state. The energy of such naturally occurring single-domain magnetic nanoparticles can reach up to 10–20 room kBT in the magnetic field of the Earth, which naturally led to supposition that they can serve as sensory elements in various animals. This work explores within a stochastic modeling framework a fascinating hypothesis of magnetosensitive ion channels with magnetic nanoparticles serving as sensory elements, especially, how realistic it is given a highly dissipative viscoelastic interior of living cells and typical sizes of nanoparticles possibly involved. Introduction Influence of weak electromagnetic fields on living species is perceived by many scientists as a controversial subject matter. Nevertheless, there is a huge body of evidence of a substantial impact ( [1][2][3][4], see, especially, the book by Binhi [5] and the references therein). One of such manifestations is given by the microwave auditory effect or Allan Frey hearing effect [6][7][8][9][10][11][12][13][14][15][16], an auditory perception of microwave pulses by humans and animals, which earlier has been considered mysterious. Now, the mystery of this effect is completely resolved within a thermoelastic theory [7][8][9][10][11][12][13][14][15][16] of acoustic wave production in closed resonators (e.g., human or animal head) filled with microwave absorbing tissues having a very large water content (think about heating of food in microwave oven, to realize a possible physical reason). Good reviews are available [13][14][15] and the theory and experiment agree convincingly well. The energy absorption per pulse of 16 µJ/kg sufficient to produce the microwave hearing effect [9,13] in humans is 36,000 times lower that the maximal limit of 576 J/kg permitted in the IEEE C95.1 radiation safety standard [13], and a corresponding pulse-like elevation of temperature is really tiny, about 10 −6 • C per pulse [13,15], however, rapid (about µs). This is currently probably the only one of known profound effects of weak electromagnetic fields on living systems which is explained completely. However, a direct influence of GHz and THz waves on neuronal tissues, which is also evidenced experimentally, is still not convincingly explained and this is the subject of ongoing research [17,18]. Epidemiological evidence for follow-up health effects including a spectrum of neuropsychiatric disorders is extensive, see e.g., in [19], and the references therein. Sensing and navigation of various living species such as magnetosensitive bacteria, fishes, turtles, bees, bats, rats, birds, etc. in the weak magnetic field of the Earth (about 50 µT) presents another well established effect [5,20,21]. Differently from electric fields, quasi-static magnetic fields are practically not screened by moving ions and counter-ions, and can deeply penetrate into biological tissues [5]. Currently, such a high sensitivity to magnetic fields presents a puzzle with two basic hypothetical magneto-sensitive ion channels or complex nanomagnetic biostructures involving ion channels. In addition, in Ref. [48], a membrane pore forming activity of magnetic nanoparticles has been shown. 
Hence, such man-made biological structures were already in fact demonstrated, however, for sufficiently strong magnetic fields, much stronger than biological species normally experience. The idea that a magnetic nanorod can play a role in biological sensing of weak magnetic fields by birds has first been proposed by Yorke [49]. Kirschvink et al. [27,28,50] suggested that it can be a magnetosensitive ion channel that involves a magnetic nanoparticle as sensory element. Several variants of such channels were further proposed and discussed [51,52]. Whether such theoretical proposals can be feasible or not requires, however, a serious detailed investigation of the dynamics of such models [40], which is necessarily stochastic. The sensory element unavoidably experiences friction and random thermal noise caused by the environment, which are related by the fluctuation-dissipation theorem [53][54][55], at a local thermal equilibrium. Cytosol is a viscoelastic liquid, when it is functional, to a first rough approximation, with the main water component which accounts for up to 80% of the cytosol's mass content. However, it is densely stuffed with different protein polymers, which dramatically enhances the cytosol viscosity for particles of the linear size of 100 nm range and even smaller. Thus, in Refs. [28,29,50], the effective viscosity felt by magnetic nanoparticles is assumed to be 100 times larger than the one of water. Notice that a starving cell can do a transition to an anabiosis state, where cytosol behaves more like a superviscous solid with virtually infinite viscosity [56]. However, we are more interested in its functionally active liquid-like state. The characteristic time scale of sensor was estimated it Refs. [28,50] assuming that the sensor is monostable and it fluctuates around a fixed point, which is not affected by the magnetic field. In that original model, one assumes that the ion channel opens when a critical angle fluctuation occurs, and the amplitude of this fluctuation is affected by the magnetic field. However, most biological ion channels do exhibit a characteristic bistable dynamics while fluctuating between open and closed states [57]. Binhi and Chernavskii considered an orientational bistable dynamics of a magnetosome tethered to cytoskeleton [58,59], however, not in the context of ion channels, but rather stipulating that the above quantum mechanism can be mediated by a fluctuating magnetic field of a magnetosome. Indeed, it can largely exceed one of the Earth [60,61]. Anisotropic field of a spherical ferromagnetic magnetosome is estimated to be up to 402 mT strong near to its surface [61]. The magnetic sensor dynamics of hypothetical ion channels should also be bistable, rather than monostable. Such a model was proposed recently in Ref. [61]. The bistability therein is induced by a gating spring type instability as earlier suggested in the context of hair cell ion channels [62,63]. The analysis of this model for realistic parameters showed, however, that for a viscous friction that is 100 times larger than one in water the time scale of switchings would be so large that such a channel would not be functional. Moreover, the effective friction caused by cytosol for the particles of the size of 100 nm can be even larger, e.g., 1000 fold larger than one in water [64][65][66][67][68][69]. 
Cytosol as a complex fluid [70,71] is, however, not a Newtonian but rather viscoelastic liquid [72][73][74][75][76][77][78][79][80][81][82][83][84][85], and on the appropriate time scales (probably up to hours in some cases) it is characterized by a slowly, power law decaying memory kernel with a memory cutoff at large times [70,71,86,87]. Integration of this memory kernel yields an effective friction at very large times, when the memory effects can be neglected. The discussed memory friction yields subdiffusion on the relevant time scales, which has been experimentally measured for various nanoparticles in cytosol [56,73,74,[77][78][79]82,[88][89][90][91][92][93][94][95][96][97][98][99] including magnetosomes [81]. Even smaller particles of the size of only several nanometers can subdiffuse on the time scale up to one hour [74,83]. The non-Markovian dynamics, which includes such effects, has also been studied in Ref. [61] using a Markovian embedding approach of Refs. [87,100]. Contrary to naive reasoning involving a largely enhanced normal viscous friction, however, in accord with the results of non-Markovian rate theory [101][102][103][104][105], it has been shown that such bistable sensors can be functional and operate on a millisecond to second time scale. This is in line with some earlier studies showing that viscoelastic subdiffusion largely accelerates (and not hinders, contrary to a common but misleading interpretation [84]) transport processes in living cells over the naive macroscopic Markovian treatment with a largely enhanced normal viscosity. Viscoelastic power-law memory friction yields non-exponential distribution of the waiting times such as stretched exponential distribution, which elegantly explains [61,87,100] the physical origin of such distributions in ionic channels [106][107][108] and 1/ f noise [109], although other approaches also exist [109][110][111][112][113][114][115][116]. In [61], nanosensor rod consists of several nanoparticles. It is coupled by peptide elastic springs to the gating structural elements of several ion channels forming a cluster and can do a large-amplitude orientational motion (about 150 angle degree change), while moving to a metastable state corresponding to the open state of the channels in cluster. In this paper, I will explore the possibility of sensor consisting of the only one sufficiently large nanoparticle and doing a relatively small orientational motion (about 30 degree change) while creating an opening torque on the gates of the channels within a very similar model. It will be shown that such a magnetic sensor is more realistic and it would operate much faster, on biologically relevant time scales, despite a largely enhanced effective friction. Viscoelastic properties of cytosol are very important for this and cannot be disregarded. Model We consider the model sketched in Figure 1, where a single magnetosome consisting of an elongated nanoparticle of magnetite in single-domain ferrimagnetic state of length L and width d < L, dressed in a protein-lipid membrane, can rotate around one edge fixed on a cytoskeleton element arming the cell membrane inside the cell. For L = 150 nm and d = 107 nm its magnetic energy in the magnetic field of the Earth reaches E M = µB e = 4.12 · 10 −20 J or about 10 k B T r for T r = 297 K depending on its orientation, which is characterized by the angle φ, and the orientation of the magnetic field given by the angle ψ in plane geometry. 
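As a quick check of the quoted magnetic energy, one can estimate the magnetic moment as µ = M_s V, with the particle volume approximated by V ≈ L d². The short sketch below is only such an order-of-magnitude estimate; the saturation magnetization M_s ≈ 4.8 × 10⁵ A/m of magnetite and the geomagnetic field B_e ≈ 50 µT are assumed values not stated explicitly in the text, but with them one indeed recovers E_M ≈ 4.1 × 10⁻²⁰ J ≈ 10 k_B T_r.

```python
# Order-of-magnitude check of the magnetosome's magnetic energy (assumed M_s and B_e).
kB = 1.380649e-23        # Boltzmann constant, J/K
Tr = 297.0               # room temperature used in the text, K
Ms = 4.8e5               # assumed saturation magnetization of magnetite, A/m
Be = 50e-6               # assumed geomagnetic field strength, T

L = 150e-9               # particle length, m
d = 107e-9               # particle width, m
V = L * d**2             # crude volume estimate of the elongated particle, m^3

mu = Ms * V              # magnetic moment, A*m^2
EM = mu * Be             # maximal orientational (Zeeman) energy, J

print(f"mu  = {mu:.3e} A*m^2")
print(f"E_M = {EM:.3e} J = {EM/(kB*Tr):.1f} kB*Tr")   # ~4.1e-20 J, i.e. ~10 kB*Tr
```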
Biomagnetite nanoparticles of such a size are commonly found both in some bacteria and in the human brain. This "nanocompass" is coupled by m elastic peptide linkers (depicted in red) to the gates (depicted in blue) of m ionic channels forming a cluster in the membrane (one is depicted). One end of the linker is attached to the magnetosome at a distance l from the rotation axis, and the other end to a molecular latch, which forms a gate opening and closing an ion-conducting pore formed by a membrane channel protein. In the absence of a magnetic field or for its unfavorable orientation (e.g., ψ = 0), the linkers are relaxed (folded) and the channels are closed (right part of Figure 1). For a properly oriented field, the linkers are fully stretched (unfolded) and the channels are open (left part of Figure 1). Even in the predominantly closed state, the channels open from time to time due to thermally activated transitions of the sensor between its two metastable states depicted in Figure 1. Likewise, in the predominantly open state the channels close stochastically in time. The mean time intervals spent in the states and the mean opening probability of the ion channel complex, which determines the ion current controlled by it, strongly depend on the magnetic field. The sensor experiences friction and thermal noise caused by the environment, which crucially determine its stochastic dynamics. The linker, which is an entropic spring provided by a disordered peptide, is modeled by a finite extensible nonlinear elastic (FENE) chain [117]. Its elastic energy as a function of the elongation x is given by U_FENE(x) = −(1/2) k_L l_max² ln(1 − x²/l_max²), where k_L is the elastic spring constant and l_max is the maximal extension length of the linker, when it is fully stretched. The rotation of the sensor is thus bounded to some angular interval [0, φ_max], where φ_max depends on l and l_max. We will choose it sufficiently small, as in Figure 1. The channel gate can be in one of two states. The closed state is characterized by the energy ε_1, and the open one has the energy ε_2 − f_0 x, which depends on the linker elongation x, where f_0 is a force constant characterizing the strength of coupling (the force exerted by the linker on the gate). The gate fluctuates very fast and its dynamics is slaved to the much slower sensor. The statistical mean force exerted by the channel gate on the linker is f_0 p(x), where p(x) = 1/{1 + exp[f_0(l_0 − x)/(k_B T)]} is the probability of the gate to be open and l_0 = (ε_2 − ε_1)/f_0. To define some x_0 as an equilibrium point, we, following [63], redefine the mean force by a shift, f_0 p(x) → f_0[p(x) − p(x_0)]. The resulting potential of mean force, or rather of the torque acting on the rod in our model, is the one specified in [61]; in it, p(φ_0) = p(x = 2l sin(φ_0/2)) and k = m k_L, the linker elongation at the rotation angle φ being x(φ) = 2l sin(φ/2). We shall scale the energy in units of U_0 = k l_max², temperature in units of U_0/k_B, distances in units of l_max, and forces in units of f_u = U_0/l_max. U_0 will be fixed to U_0 = 10 k_B T_r ≈ 41 pN·nm ≈ 0.25 eV. For m = 7 and a linker with stiffness k_L = 0.0429 pN/nm [95], k ≈ 0.3 pN/nm, this U_0 corresponds to l_max ≈ 11.69 nm and a force unit f_u ≈ 3.51 pN. In this paper, we choose l = 2, l_0 = 0.91, f_0 = 3, φ_0 = 0.1 rad ≈ 5.73°. The corresponding U(φ) and p(φ) are plotted in Figure 2. U(φ) is bistable due to a gating-spring instability characteristic of such models [62]. Namely, if the sensor pulls the linker sufficiently strongly, another metastable state emerges. Notice that the point where the channel is half-open, p = 0.5, belongs in this model to the domain of attraction of the U(φ) minimum corresponding to the open state. 
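To make the ingredients of the model concrete, the following sketch evaluates, in the dimensionless units introduced above (energies in U_0, lengths in l_max, forces in f_u, so that k_B T_r = 0.1), the FENE linker energy, the geometric elongation x(φ) = 2 l sin(φ/2), the gate-opening probability p(x), and the shifted mean gate force. It is only an illustration of the definitions: the Boltzmann two-state form of p(x) written here is an assumption consistent with the gate energies given above, and the full potential of mean force U(φ) used in the paper is the one of Ref. [61].

```python
# Building blocks of the gating-spring sensor model in dimensionless units
# (energies in U_0, lengths in l_max, forces in f_u = U_0/l_max, k_B*T_r = 0.1).
# Assumption: p(x) is the Boltzmann two-state open probability of the fast gate.
import numpy as np

kT   = 0.1      # k_B*T_r in units of U_0 (since U_0 = 10 k_B*T_r)
l    = 2.0      # linker attachment distance from the rotation axis, in units of l_max
l0   = 0.91     # (eps_2 - eps_1)/f_0, in units of l_max
f0   = 3.0      # gate coupling force, in units of f_u
phi0 = 0.1      # reference angle defining the equilibrium shift, rad

def x_of_phi(phi):
    """Linker elongation (chord length) for a rotation by phi of a point at distance l."""
    return 2.0 * l * np.sin(phi / 2.0)

def U_fene(x, k=1.0):
    """FENE elastic energy of the combined linkers; k = m*k_L = 1 in units of U_0/l_max^2."""
    return -0.5 * k * np.log(1.0 - np.minimum(x, 0.999999) ** 2)

def p_open(x):
    """Two-state gate open probability (assumed Boltzmann form)."""
    return 1.0 / (1.0 + np.exp(f0 * (l0 - x) / kT))

def f_gate(x):
    """Mean gate force on the linker, shifted so that x_0 = x(phi_0) is an equilibrium point."""
    return f0 * (p_open(x) - p_open(x_of_phi(phi0)))

phi_max = 2.0 * np.arcsin(1.0 / (2.0 * l))     # angle at which the linker is fully stretched
print(f"phi_max = {phi_max:.3f} rad = {np.degrees(phi_max):.1f} deg")
for phi in (phi0, 0.3, 0.5):
    x = x_of_phi(phi)
    print(f"phi={phi:4.2f} rad: x={x:5.3f}, U_FENE={U_fene(x):6.3f}, "
          f"p_open={p_open(x):8.2e}, f_gate={f_gate(x):6.3f}")
```

Note that with l = 2 the geometry alone gives φ_max = 2 arcsin(1/(2l)) ≈ 0.51 rad ≈ 29°, consistent with the roughly 30° orientational motion quoted for this sensor variant.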
When this open-state minimum becomes lower in energy than the one corresponding to the relaxed linker (closed channel), the channel becomes predominantly open. This occurs, e.g., when the magnetic field is applied at the angle ψ ≈ 114.59°. Theory and Results To obtain the averaged probability ⟨p(ψ, B)⟩ of the ion channel cluster to be in the open state depending on the strength and orientation of the magnetic field, one has to average p(x(φ)) in Equation (1) over the thermal equilibrium distribution of the sensor angle, ⟨p(ψ, B)⟩ = Z⁻¹ ∫₀^φmax p(x(φ)) exp[−U_B(φ, ψ)/(k_B T)] dφ, where U_B(φ, ψ) is the total potential including the magnetic energy and Z = ∫₀^φmax exp[−U_B(φ, ψ)/(k_B T)] dφ is the corresponding statistical sum. The result is shown in Figure 3, which plots ⟨p⟩ versus the field orientation ψ (in degrees) for two values of the magnetic energy. One corresponds to µB_e = U_0 = k l_max², i.e., a characteristic energy of the stretched gating springs, and the other one is double of it. Notice that sensor operation is possible already for µB_e = U_0, with the maximal averaged opening probability over 0.5. For a larger sensor with the linear sizes increased by the factor 2^{1/3} ≈ 1.26, i.e., 189 × 134.8 × 134.8 nm³ (also found in living species), the maximal averaged probability increases to about 0.8. Such a sensor would be, however, less sensitive to the variations of ψ near the maximum. The corresponding averaged current through a cluster of ion channels is ⟨I⟩ = m i_0 ⟨p(ψ, B)⟩, where i_0 is the unitary current of a single channel in the cluster. Already a single such sensory complex, consisting of m = 7 large-conductance ion channels with i_0 ∼ 50 pA, can be sufficient to depolarize the membrane above a sensitivity threshold and evoke spiking activity in hypothetical magneto-sensitive neurons [61]. For the magneto-sensor complex to be functional, its dynamics is, however, also very important. For example, if its characteristic times were in the range of minutes or hours, it would surely not be of any relevance for animals. Stochastic Dynamics without Memory First, we consider the stochastic orientational dynamics of the sensor in a viscous medium under the mean torque f(φ) = −∂U(φ)/∂φ, a viscous friction torque F_v = −η_0 dφ/dt with orientational friction coefficient η_0, and the corresponding white Gaussian thermal noise ξ_0(t) of the environment. The last two are related by the fluctuation-dissipation relation (FDR), named also the second fluctuation-dissipation theorem (FDT) by Kubo [53–55], ⟨ξ_0(t′)ξ_0(t)⟩ = 2 k_B T η_0 δ(t − t′), at the medium's temperature T. Here δ(t) is the Dirac delta function, signaling that this noise has an infinite root-mean-square amplitude, a common singular idealization in statistical physics. The overdamped stochastic dynamics reads [54,55,118] η_0 dφ/dt = f(φ) + ξ_0(t), and a characteristic time scale entering it is τ_sc = η_0/U_0. A precise estimation of the rotational friction coefficient for a particle of the shape considered is not easy [119]. The simplest estimate can be obtained by replacing it with a sphere of equal volume V = L d². Then, η_0 ∼ 6 ζ_0 V, where ζ_0 is the medium's viscosity. For water at T = 20 °C with ζ_0 ∼ 1 mPa·s, we obtain τ_sc ∼ 0.17 ms, which is much smaller than for the rod-like sensor in [61]. This is a first reason why it is faster. τ_sc is the time unit used in our simulations, which were done using the second-order stochastic Runge-Kutta, or stochastic Heun, method [120]; see Methods. Sample trajectories are shown in Figure 4, both for predominantly closed channels (part a, ψ = 0) and for predominantly open channels (part b, ψ = 2 rad). From very long single trajectories we extract residence time distributions in the open and closed states using a two-threshold procedure described below. 
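To illustrate how such trajectories are generated, the sketch below integrates an overdamped Langevin equation of the above form with the stochastic Heun (second-order stochastic Runge-Kutta) scheme, in the dimensionless units of the paper (time in τ_sc, energy in U_0, k_B T = 0.1). The potential used here is a generic bistable placeholder rather than the actual U(φ) of Ref. [61], and the time step is chosen only for illustration; the point is the predictor-corrector structure of the method.

```python
# Minimal stochastic Heun integrator for an overdamped Langevin equation
#   eta0 * dphi/dt = -dU/dphi + xi0(t),   <xi0(t) xi0(t')> = 2 kT eta0 delta(t-t').
# Dimensionless units: eta0 = 1, energies in U_0, k_B T = 0.1.
# NOTE: U(phi) below is a placeholder double well, not the model potential of Ref. [61].
import numpy as np

kT, eta0 = 0.1, 1.0
dt, nsteps = 2e-4, 200_000
rng = np.random.default_rng(1)

def force(phi):
    # -dU/dphi for the placeholder bistable potential U(phi) = (phi^2 - 0.05)^2 / 0.01
    return -4.0 * phi * (phi**2 - 0.05) / 0.01

phi = -0.22                                      # start in one well of the placeholder potential
sigma = np.sqrt(2.0 * kT * eta0 / dt)            # discretized white-noise amplitude
traj = np.empty(nsteps)
for n in range(nsteps):
    xi = sigma * rng.standard_normal()           # one noise realization per step
    f1 = force(phi)
    phi_pred = phi + dt * (f1 + xi) / eta0       # Euler predictor
    f2 = force(phi_pred)
    phi += dt * (0.5 * (f1 + f2) + xi) / eta0    # Heun corrector (same noise)
    traj[n] = phi

print("mean:", traj.mean(), "  std:", traj.std())
```

In the simulations of the paper, the same scheme is applied to the model torque f(φ) = −∂U(φ)/∂φ with the time step δt = 2·10⁻⁶ quoted in the caption of Figure 5.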
In this procedure, two thresholds are placed at the potential minima of U(φ) corresponding to the sensor's metastable states or, equivalently, at the two maxima of the probability distribution of the ionic current. The residence time interval in the closed state starts after a first downward crossing of the lower threshold and continues until the first upward crossing of the higher threshold. Likewise, the residence time interval in the open state starts after a first upward crossing of the upper threshold and continues until the first downward crossing of the lower threshold. This allows one to map the continuous fluctuation processes in Figure 4 (central part) onto the corresponding two-state, on-off processes, characterized by the survival probabilities of the residence time intervals in the corresponding states, P_c(τ) and P_o(τ). In the case of sufficiently large potential barriers, the Markovian continuous-state dynamics also yields a two-state Markovian dynamics, completely characterized by exponential survival probabilities. More generally, the numerical survival probabilities are fitted by stretched-exponential dependencies, P_{c,o}(τ) = c_{c,o} exp[−(τ/τ_{c,o})^{β_{c,o}}], either for the whole survival probability derived from numerics (then, c_{c,o} = 1) or for some parts of it. The corresponding probability density has a decaying power-law part, 1/τ^{1−β_{c,o}}, for 0 < β_{c,o} < 1, which is the reason why this distribution can be confused with a truly power-law dependence on the corresponding plots at large τ. To avoid this pitfall of interpretation, plots of the survival probability P(τ) can be preferred. [Fragment of the Figure 5 caption: panels (a,b) use the two thresholds placed at the maxima of the current distribution (right panel of Figure 4); panel (c) uses only one threshold, at the minimum of the current distribution corresponding to p ≈ 0.2 and to the top of the U(φ) barrier separating the two metastable states, together with a finite detection time resolution Δt_res = 100 δt, where δt = 2·10⁻⁶ is the time step in the simulations; many re-crossings of this single threshold occurring while the sensor dwells on the barrier top are then missed, which leads to spurious power-law and stretched-exponential distributions and to far too small mean residence times ⟨τ_c⟩, ⟨τ_o⟩ as compared with the correct values in part (b); this one-threshold procedure is hence very subjective and cannot be trusted.] The just-described two-threshold procedure of extracting a two-state process is well known (see, e.g., [61,100]). In [61], the exponential distributions derived in this way agree very well with the results of the Kramers theory for the transition rates [105], which confirms that this procedure is essentially correct. In the present case, the potential barriers are smaller, and the curvatures of the potential wells (at the bottom) and of the barrier (at the top) are very different. In such a situation, good agreement with the Kramers theory is not expected. In this respect, notice the large fluctuations of the sensor orientation in Figure 4 in the case of the relaxed linker and the closed state of the channel. In sharp contrast, these orientational fluctuations are much smaller when the linker is in its tense state. This does not mean, however, that the conductance fluctuations are small when the linker is tense; on the contrary, they are much larger when the linker is tense than when it is relaxed. 
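A minimal sketch of this two-threshold (hysteretic) mapping of a continuous trajectory onto an on-off process, together with a stretched-exponential fit of the resulting survival probability, is given below. The threshold values and the toy signal are placeholders, not the ones of the paper; the fitting function is the P(τ) = c exp[−(τ/τ_0)^β] form used above.

```python
# Two-threshold extraction of residence times from a continuous trajectory, plus a
# stretched-exponential fit of the survival probability P(tau) = c*exp[-(tau/tau0)^beta].
# The thresholds and the toy signal below are placeholders for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def residence_times(x, thr_lo, thr_hi):
    """Map x onto an on/off process with hysteresis; return closed- and open-state
    residence times, in units of the sampling step."""
    closed, opened = [], []
    state = 0 if x[0] < 0.5 * (thr_lo + thr_hi) else 1      # 0 = closed, 1 = open
    t_enter = 0
    for i, xi in enumerate(x):
        if state == 0 and xi > thr_hi:        # closed -> open only above the upper threshold
            closed.append(i - t_enter); state, t_enter = 1, i
        elif state == 1 and xi < thr_lo:      # open -> closed only below the lower threshold
            opened.append(i - t_enter); state, t_enter = 0, i
    return np.array(closed, float), np.array(opened, float)

def survival(times):
    """Empirical survival probability P(tau) = Prob(residence time > tau)."""
    t = np.sort(times)
    return t, 1.0 - np.arange(1, t.size + 1) / t.size

def stretched_exp(tau, c, tau0, beta):
    return c * np.exp(-(tau / tau0) ** beta)

# toy two-state signal with additive noise, only to exercise the functions
rng = np.random.default_rng(0)
telegraph = (np.where(rng.random(200_000) < 0.002, 1, 0).cumsum() % 2).astype(float)
x = telegraph + 0.15 * rng.standard_normal(telegraph.size)
tc, to = residence_times(x, thr_lo=0.25, thr_hi=0.75)

tau, P = survival(tc)
(c, tau0, beta), _ = curve_fit(stretched_exp, tau, P, p0=(1.0, tau.mean(), 1.0), maxfev=10_000)
print(f"closed state: <tau> = {tc.mean():.0f} steps, fit: c={c:.2f}, tau0={tau0:.0f}, beta={beta:.2f}")
```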
The reason the conductance fluctuations are larger in the tense-linker state is that the midpoint of the p(φ) dependence belongs to the attraction basin of the metastable state of U(φ) that corresponds to the open state, whereas the barrier corresponds to p(φ) ≈ 0.2 (see Figure 2). The amplitude of the current fluctuations that correspond to the large-amplitude orientational fluctuations of the sensor with the relaxed linker is quite small because of the exponential dependence in Equation (1). The numerics for the case of predominantly closed channels in Figure 5a show that the closed-time residence distribution is nearly exponential, c_c = 1, and the fitted τ_c ≈ ⟨τ_c⟩. However, the open-time distribution is not quite exponential. Initially, it deviates from an exponential, see the inset of Figure 5a, which is the reason why c_o ≈ 1.214 > 1 and the fitted τ_o does not agree well with ⟨τ_o⟩. The corresponding mean opening probability is calculated as ⟨p⟩ = ⟨τ_o⟩/(⟨τ_o⟩ + ⟨τ_c⟩), and it is quite small in Figure 5a. For predominantly open channels in Figure 5b, both the open- and closed-time distributions are nearly exponential. The corresponding mean times ⟨τ_o⟩ = 4.254 × 0.17 ≈ 0.723 ms and ⟨τ_c⟩ = 2.338 × 0.17 ≈ 0.397 ms are quite small, being typical for very fast ion channels such as sodium channels, which are crucial for neuronal excitability. Notice also that the corresponding ⟨p⟩ ≈ 0.645 is somewhat larger than the ⟨p⟩ calculated for the continuous process in Figure 2. Separation of Closed and Open States with a Single Threshold The distribution of current values in the right panel of Figure 4 implies, however, that it might be difficult to implement the two-threshold detection procedure in practice, because the lower threshold must be set at a very tiny value of the current, 10⁻⁸–10⁻⁶. What a theorist can easily do in a Gedankenexperiment, an experimentalist may find difficult to realize in practice. Setting the lower threshold at some other, higher value, say 0.2, can very essentially modify the thus-derived statistics [61], and, hence, it is a rather arbitrary procedure. Another common procedure, which some experimentalists implement both in ion channel research [57] and, especially, in deriving the statistics of blinking quantum dots [121–124], is to operate with only one detection threshold. It is "naturally" placed at the minimum of the current distribution separating the two maxima. An immediate objection of a theorist experienced in rate theory is that this procedure corresponds to placing the separation threshold on the top of the potential barrier (of the sensor, in our case) separating two metastable basins of attraction. It might look natural at first. However, the physical picture of rate transitions between two metastable basins of attraction says that it is not. In fact, the particle dwells mostly in a potential well and only seldom, with the rate r_0 = (ω_0/2π) exp[−ΔU/(k_B T)], where ω_0 is the angular frequency of oscillations near the bottom of the potential well and ΔU is the height of the potential barrier, arrives at the barrier top. However, it generally can dwell for a while on the top of this potential barrier and re-cross the single threshold many times before finally making the transition to the other potential well. These multiple crossings yield the so-called transmission coefficient, which, multiplying r_0, renders the resulting rate much smaller. The whole rate theory is, roughly speaking, about how to calculate this transmission coefficient, which depends on friction, etc. Its maximal value is one (single crossing). 
This is why, for the overdamped dynamics considered, this second procedure yields totally different residence time distributions, which severely distort the correct distributions characterizing the two-state dynamics. In particular, the mean residence times derived in such a way will be much smaller than the correct ones. Experimentally, the problem is softened and can be masked by the finite time resolution Δt_res of a measurement device, which is why not every fast recrossing will be measured. With Δt_res becoming large, many such recrossing events will be missed. However, this does not mean that the statistics will become closer to the correct ones. Not at all. To model this second procedure we use Δt_res = 100 δt, where δt is the time step used in the simulations, and the corresponding results are depicted in Figure 5c. First, the mean residence times are indeed much smaller (they naturally become even smaller for Δt_res = δt). Second, the residence time statistics is severely distorted. In particular, about 90% of the closed time intervals now follow a spurious power law, P_c(τ) ∝ 1/τ^γ, with γ ≈ 0.437, and hence ψ_c(τ) ∝ 1/τ^{1+γ} = 1/τ^{1.437}. Similar power laws are indeed measured for quantum dots using a similar approach with one threshold (however, the physics there is very different and our reasoning cannot be directly applied). Moreover, a spurious stretched-exponential tail appears with β_c = 0.762 and weight c_c = 0.0834. The open-time distribution has, however, an exponential tail, β_o = 1. Our example shows explicitly how dangerous this second detection method can be and why it cannot be trusted. Interestingly, this second procedure barely affects ⟨p⟩, cf. ⟨p⟩ = 0.642 in Figure 5b vs. ⟨p⟩ = 0.626 in Figure 5c. Notice that with a naive replacement τ_sc → 100 τ_sc = 17 ms in a medium with an effective viscosity 100× larger than that of water, our sensor dynamics would become 100× slower, which would essentially deteriorate its functionality. The much slower model channel in [61] would even cease to be of any interest in a biological context if this Markovian reasoning with η_eff = 100 η_0 were applied. This, however, does not actually happen in viscoelastic media such as cytosol, where a more careful treatment is required, since viscoelastic memory effects become very essential. Stochastic Dynamics in Viscoelastic Environment In viscoelastic crowded environments such as cytosol, apart from the viscous friction caused by its main water component, a viscoelastic memory friction is also present, F_v−el(t) = −∫₀ᵗ η_mem(t − t′) [dφ(t′)/dt′] dt′, where η_mem(t) is a memory kernel. It is necessarily complemented by a corresponding correlated, unbiased, thermal Gaussian random force of the environment, ξ_mem(t). In accordance with the (second) FDT, ⟨ξ_mem(t) ξ_mem(t′)⟩ = k_B T η_mem(|t − t′|). The sensor dynamics in this case obeys a generalized Langevin equation (GLE) reading η_0 dφ/dt = f(φ) − ∫₀ᵗ η_mem(t − t′) [dφ(t′)/dt′] dt′ + ξ_0(t) + ξ_mem(t). The simplest Maxwellian model of viscoelasticity corresponds to an exponentially decaying memory kernel, η_mem(t) = k_1 exp(−ν_1 t), where k_1 is a spring constant and ν_1 is the relaxation rate of the stress. For ν_1 → 0, F_v−el(t) behaves as an elastic force, while in the limit k_1 → ∞, ν_1 → ∞ with η_1 = k_1/ν_1 = const, it corresponds to viscous Stokes friction with the friction coefficient η_1. This is how Maxwell derived the phenomenon of viscosity from the phenomenon of elasticity, i.e., by letting the elastic stress relax in time [87]. 
Complex viscoelastic liquids and gels are, however, characterized by a power-law decaying memory kernel, η_mem(t) = η_α t^{−α}/Γ(1 − α), with 0 < α < 1, as first established by Gemant [87,100,125]; this now presents a common model. In this particular case, F_v−el(t) can be abbreviated as F_v−el(t) = −η_α d^α φ(t)/dt^α, which just involves the fractional Caputo derivative of order α. Hence, η_α is customarily named the fractional friction coefficient, and the GLE in this particular case is named the fractional Langevin equation, or FLE. This is, of course, an idealization. In reality, there are always two memory cutoffs present. A large-time memory cutoff τ_h = 1/ν_l defines the slowest Maxwellian relaxation mode of the environment, and with η_mem(t) → η_mem(t) exp(−ν_l t) an effective friction η_eff can be introduced. Notice that η_eff characterizes diffusion on the time scale t ≫ τ_h. However, τ_h can well be in the range of minutes and even hours; it depends on the system considered. We assume it to be in the range of seconds for our nanosensor. As long as t < τ_h, e.g., for the duration of the sojourn times of our sensor in the metastable states, it is η_α that determines the stochastic dynamics and not η_eff, which can be effectively infinite for τ_h → ∞. This is the reason why thinking in terms of some η_eff can be very misleading for viscoelastic media. It is a macroscopic-type approximation, which can fail completely on micro- and nanoscales. With this reservation we use it, because far too many researchers continue to think in terms of some η_eff. In the simulations presented below we fixed α = 0.5 (one of the common experimental values for cytosol [74]) and τ_h = 10⁴ (or about 1.7 s, when τ_sc = 0.17 ms). η_α will be fixed to two values by choosing η_eff = 100 η_0 and η_eff = 1000 η_0. In the dimensionless units used in the numerics, the former (intermediate) fractional friction is about η_α ≈ η_0, whereas the latter one (strong) is η_α ≈ 10 η_0. For the intermediate η_α, the relaxation within a potential well is mostly exponential with a heavy power-law tail, while for the large η_α it is initially stretched-exponential and then changes into a power-law decay (Mittag-Leffler relaxation function) [61]. The latter corresponds to the dielectric Cole-Cole response [126], which is typical for biological media [4]. Furthermore, on physical grounds a short-time cutoff τ_l = 1/ν_h is also necessarily present. It ensures that the spectral density of the noise ξ_mem(t) does not contain frequencies much above ν_h. This physically corrects the continuum-medium approximation, in which such a cutoff is absent because the atomistic nature of any real condensed medium is neglected. In our numerics, we take ν_h = ν_0 = 10⁴, and the numerical method is based on approximating the memory kernel by a sum of exponentials and using a Markovian embedding (9) of the GLE dynamics in Equation (6), see Methods. This allows for a numerically highly accurate approach, with a well-controlled accuracy [87,100]. Typical trajectories for strong fractional friction are shown in Figure 6. For a weak or intermediate fractional friction, they look more like the ones in Figure 4. One striking feature is immediately seen in Figure 6. This is the highly bursting character of the sojourns in the open state, where a huge number of very short excursions to the closed state happen during a long sojourn in the open state. It visually signals truly non-exponential kinetics. 
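Before turning to the results, the following sketch illustrates the sum-of-exponentials approximation of the power-law kernel that underlies the Markovian embedding of the Methods section, with the fractal scaling ν_i = ν_0/b^{i−1} and weights k_i ∝ ν_i^α. The overall prefactor of k_i written here follows from a simple integral estimate and is an assumption; the calibrated prescription, and the quoted ≈4% accuracy for b = 10, are those of Refs. [87,100].

```python
# Approximating the power-law kernel eta_mem(t) = eta_alpha * t^(-alpha)/Gamma(1-alpha)
# by a sum of exponentials with fractally scaled rates nu_i = nu_0 / b^(i-1).
# The prefactor C below comes from a simple integral estimate and is an assumption;
# the precise, calibrated prescription is the one of Refs. [87,100].
import numpy as np
from scipy.special import gamma

alpha, b, nu0, N = 0.5, 10.0, 1.0e4, 9
eta_alpha = 1.0
nu = nu0 / b**np.arange(N)                          # nu_i = nu_0 / b^(i-1)
C = np.log(b) / (gamma(alpha) * gamma(1 - alpha))   # assumed normalization
k = C * eta_alpha * nu**alpha                       # weights k_i ∝ nu_i^alpha

def kernel_exact(t):
    return eta_alpha * t**(-alpha) / gamma(1 - alpha)

def kernel_sum(t):
    return np.sum(k[:, None] * np.exp(-nu[:, None] * t[None, :]), axis=0)

# compare well inside the two cutoffs, 1/nu_0 << t << b^(N-1)/nu_0
t = np.logspace(-3, 3, 13)
rel_dev = np.abs(kernel_sum(t) / kernel_exact(t) - 1.0)
print("max relative deviation inside the cutoffs:", rel_dev.max())
```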
Intermediate fractional friction, η_α ≈ η_0 The first profound influence of the fractional viscoelastic friction on the statistics of the residence time distributions is revealed in Figure 7. Namely, the distribution of closed times becomes stretched-exponential, with β_c ≈ 0.92 in part a and β_c ≈ 0.82 in part b. However, the distribution of open times remains almost exponential, and the mean times in the states and the mean opening probability remain only weakly affected; compare with Figure 5a,b. This is especially striking because we can further arbitrarily increase η_eff while keeping the same η_α. This is easy to do in our numerics just by further increasing the number N of exponentials in Equation (7) and, correspondingly, the Markovian embedding dimension in Equation (9). The results will not change, because the essential kinetics in Figure 7 occurs on a time scale that is already smaller than the cutoff time τ_h, which further increases with N. This feature must be shocking for all those who continue thinking in terms of η_eff rather than the fractional friction characteristic of such complex media as cytosol. All in all, for such an intermediate η_α, our sensor operates as fast as in water. This is good news. The case of strong fractional friction, η_α ≈ 10 η_0, is presented in Figure 8. Here, the influence of viscoelastic effects is very strong. Initially, at small times, β can exceed one; see the closed times (red fitting curve) in part b. However, the mean opening probabilities are also almost unaffected as compared with the Markovian case, even though the mean residence times increase. This increase is not very strong, by a factor of less than three only. Our sensor remains very fast. Discussion The design of a magnetosensitive ion channel complex using a single biomagnetite nanoparticle as the magnetic field sensor makes it more realistic. In essence, the model studied in this paper is a variant of the model in Ref. [61]. The differences are in the details. However, these details do matter. First, a single elongated nanoparticle is used instead of a rod made of 5–7 such nanoparticles, and, second, a shorter linker is used to restrict the orientational motion of the sensor when it fluctuates between two metastable states in response to a change of the magnetic field orientation. In our present work, this motion is just about 30°, whereas in [61] it is about 150°. This makes the present variant much faster and more realistic in view of possible steric restrictions within the cell. This comes, however, at a price: the magnetic moment of the sensor must be larger, to ensure a magnetic energy of about 10 k_B T in the magnetic field of the Earth vs. 3–4 k_B T in Ref. [61]. This is, however, not a problem, because such sufficiently large nanoparticles are found both in certain bacteria and in the human brain, even if they are certainly less common than the ones assumed in [61]. Furthermore, our analysis shows that such a sensor would operate very fast even in viscoelastic cytosol with an effective viscosity 1000× larger than that of water. More precisely, this effective viscosity can even be formally infinite, as in a solid, because it is not the effective macroscopic friction, defined on a very long time scale, that determines the stochastic switching dynamics. In fact, the microscopic fractional friction does matter here, and it is crucially relevant. If it is large and dominates the dynamics, as in the studied case of η_eff = 1000 η_0, the "off-on" two-state dynamics becomes very bursting. 
This bursting dynamics is no longer characterized by nearly exponential distributions of the residence times in the two states, but rather by profoundly non-exponential, stretched-exponential distributions. Such distributions are indeed measured in several biological ion channels, and our theory tentatively explains their principal origin as one rooted in the viscoelasticity of the environment. All this clearly correlates with the dielectric Cole-Cole response in such media (Mittag-Leffler viscoelastic relaxation). It is important that the mean residence times (MRTs), as well as all higher moments, remain finite; moreover, the MRTs were increased in our model study by a factor of less than three only, as compared to the ones in water. Arguably, this can seem unbelievable and embarrassing to those who continue to think in terms of an η_0 → η_eff renormalization within a Markovian dynamics. However, it is the result of a proper treatment of non-Markovian effects, and it presents very good news with respect to the feasibility of such magnetosensitive complexes in living systems. If such magnetosensitive ion channels do exist, why then have they not been found until now? The situation here can be similar to that of the ion channels associated with the cilia of hair cells. The existence of those channels is widely assumed and is taken by many nowadays almost as an established fact. However, unlike many other well-known ion channels, they have not been identified until now as concrete biological protein structures, despite numerous efforts. It is very difficult to identify them because each cilium is assumed to be connected by elastic protein linkers to the gates of many such channels, and it is difficult to confirm this hypothesis experimentally. The ion channels of cilia in hair cells remain elusive, even if practically nobody doubts their existence. The hypothesis of magnetosensitive ion channel complexes is much less studied. However, it is a reasonable one, and it should attract more attention in the future. Methods The power-law memory kernel is approximated between the two cutoffs by a sum of exponentials with a fractal scaling of the relaxation rates, ν_i = ν_0/b^{i−1}, and weights k_i ∝ ν_i^α. Namely, η_mem(t) ≈ Σ_{i=1}^{N} k_i exp(−ν_i t). Here, ν_0 = ν_h = 1/τ_l is the largest viscoelastic relaxation rate of the environment. Using the scaling or dilation parameter b = 10 allows one to achieve a 4% accuracy of the power-law approximation between the two cutoffs for α = 0.5 and a sufficiently large N [87,100]. In our numerics, we use ν_0 = 10⁴ and N = 9, so that τ_h = b^{N−1}/ν_0 = 10⁴. The fractional friction coefficient is η_α = η_eff τ_h^{α−1}/g_α, with a proportionality coefficient g_α that slightly differs from unity; for α = 0.5, b = 10, and N > 5, g_α ≈ 1.07. This allows for a Markovian embedding of the GLE dynamics (6) in a space of dimension N + 1: η_0 dφ/dt = f(φ) − Σ_{i=1}^{N} k_i (φ − y_i) + ξ_0(t), η_i dy_i/dt = k_i (φ − y_i) + ξ_i(t). Here, y_i are non-dimensional linear auxiliary variables, η_i = k_i/ν_i, and ξ_i(t) are mutually independent auxiliary uncorrelated white Gaussian noises, ⟨ξ_i(t) ξ_j(t′)⟩ = 2 δ_ij k_B T η_j δ(t − t′). Conclusions To conclude, in this paper we studied a model of magnetosensitive ion channel complexes in more realistic detail. Our study underpins theoretically the possible existence of such complexes, which should attract more attention from researchers trying to resolve the puzzle of the magnetosensitivity of many animals to environmental magnetic fields. The author is convinced that ion channels are involved in magnetosensing, even if the concrete structures involving them can be different. More research in this direction is required and welcome.
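To make the Methods recipe above concrete, the following sketch sets up the (N + 1)-dimensional Markovian embedding written there and integrates it with the stochastic Heun scheme. The weight prefactor for k_i and the placeholder torque are assumptions made only for illustration; the calibrated parameters (including g_α) and the actual model torque are those of the paper and of Refs. [87,100].

```python
# Markovian embedding of the fractional GLE: phi is coupled to N auxiliary overdamped
# variables y_i whose elimination reproduces the sum-of-exponentials memory kernel.
# Dimensionless units as in the text: eta_0 = 1, k_B T = 0.1, time in tau_sc.
import numpy as np
from scipy.special import gamma

kT, eta0 = 0.1, 1.0
alpha, b, nu0, N = 0.5, 10.0, 1.0e4, 9
eta_alpha = 1.0                                   # "intermediate" fractional friction
nu = nu0 / b**np.arange(N)
k = (np.log(b) / (gamma(alpha) * gamma(1 - alpha))) * eta_alpha * nu**alpha  # assumed prefactor
eta_aux = k / nu                                  # eta_i = k_i / nu_i

def force(phi):
    # placeholder torque -dU/dphi (NOT the model potential of Ref. [61])
    return -4.0 * phi * (phi**2 - 0.05) / 0.01

dt, nsteps = 2e-6, 50_000
rng = np.random.default_rng(2)
phi, y = -0.22, np.full(N, -0.22)
sig0 = np.sqrt(2.0 * kT * eta0 / dt)
sig_aux = np.sqrt(2.0 * kT * eta_aux / dt)

def drift(phi, y):
    dphi = (force(phi) - np.sum(k * (phi - y))) / eta0
    dy = k * (phi - y) / eta_aux
    return dphi, dy

traj = np.empty(nsteps)
for n in range(nsteps):
    xi0 = sig0 * rng.standard_normal()
    xi = sig_aux * rng.standard_normal(N)
    d1, dy1 = drift(phi, y)
    phi_p, y_p = phi + dt * (d1 + xi0 / eta0), y + dt * (dy1 + xi / eta_aux)   # predictor
    d2, dy2 = drift(phi_p, y_p)
    phi += dt * (0.5 * (d1 + d2) + xi0 / eta0)                                  # Heun corrector
    y += dt * (0.5 * (dy1 + dy2) + xi / eta_aux)
    traj[n] = phi

print("mean phi:", traj.mean(), "  std phi:", traj.std())
```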
9,074.8
2018-02-28T00:00:00.000
[ "Materials Science", "Physics" ]
Mental Health in the Digital Age: Grave Dangers, Great Promise Edited by Elias Aboujaoude and Vladan Starcevic Oxford University Press, 2015, £29.58, pb, 302 pp. ISBN: 9780199380183 Technology has always been a double-edged sword: there are associated risks and benefits. As a practising psychiatrist I increasingly rely on technology at work, using next-generation electronic medical records and at times recommending appropriate smartphone-based applications as additional therapy for my patients. In contrast to numerous other titles about technology and its impact on healthcare - which have emerged as a result of the massive technical advances in the past decade - Mental Health in the Digital Age does not focus only on the benefits of the use of technology in mental healthcare. It offers a timely, balanced perspective by also providing an in-depth analysis of the risks. The risks highlighted in the book are not limited to addictive behaviours such as internet or gaming addiction, but also include cyberbullying and the increased risk of suicide due to pro-suicide websites and suicide pacts. Cyberbullying is perhaps one of the most common problems linked with the use of technology to date, and it is not unusual for me and my team to see children and adolescents who refuse to go to school as a result of cyberbullying. Unlike conventional forms of bullying, cyberbullying implies the use of social networks and internet-based messaging services to harass an individual. This work examines not only the prevalence of the problem, but also the various prevention strategies available, such as having a specific academic curriculum to deal with the issue. The authors review the existing literature comprehensively - referring also to current evidence - and look at the potential of technology across several areas of mental healthcare, including the provision of psychotherapy and the integration of patients' health records. They also discuss how recent advances - such as virtual reality - could in principle be a powerful tool in exposure therapy. As a team with an interest in e-health, my colleagues and I have been developing smartphone applications for various mental health disorders. The introduction of virtual reality technology means that we could perhaps tap into games and various other sensors and headset devices to create an interactive environment not just for psychotherapy but for other forms of interventions too. This is a good guide for novices in e-health but equally a useful tool for the more experienced in this area. It would be helpful if a future edition included more detailed coverage of smartphone applications and their inherent risks and benefits - a topic of concern not only for clinicians, but for patients at large. Love them or loathe them, most medical student written examinations now take the form of multiple choice questions (MCQs). Some medical educators dislike this assessment style, suggesting it encourages students to learn isolated facts in a superficial way. Yet, undeniably, MCQs provide an objective, time-efficient manner of evaluation. MCQs in Psychiatry for Medical Students is a valuable resource for medical students undertaking their psychiatry rotations. 
It includes MCQs and extended matching items grouped into chapters concerned either with a type of disorder -for example, psychotic disorders and alcohol and substance misuse disorders -or another important aspect of psychiatry, such as physical health, pharmacological treatments, psychology and psychotherapy. Each MCQ is accompanied by a paragraph or two explaining the correct answer. More information is provided than is strictly necessary to understand the answer, but this is illuminating rather than turgid. The 400-plus contemporary references encourage the reader to consider issues in more depth than the superficial learning style many associate with MCQs, making the scope of this book potentially greater than is obvious from its title. In contrast, the three extended matching item questions in each chapter are not followed by explanations, making them far less informative. Writing good MCQ distractor items is a challenge, and in a few places -especially questions on risk factors and protective factors -it is possible to guess the answer by eliminating answers simply based on whether they describe something positive or negative. This is a must-have title for all medical students; it will pique the interest of many students and may even assist in recruiting future psychiatrists to the profession.
1,009.4
2017-06-01T00:00:00.000
[ "Psychology", "Computer Science" ]
The action of DNA ligase at abasic sites in DNA. Apurinic/apyrimidinic (AP) sites occur frequently in DNA as a result of spontaneous base loss or following removal of a damaged base by a DNA glycosylase. The action of many AP endonuclease enzymes at abasic sites in DNA leaves a 5'-deoxyribose phosphate (dRP) residue that must be removed during the base excision repair process. This 5'-dRP group may be removed by AP lyase enzymes that employ a beta-elimination mechanism. This beta-elimination reaction typically involves a transient Schiff base intermediate that can react with sodium borohydride to trap the DNA-enzyme complex. With the use of this assay as well as direct 5'-dRP group release assays, we show that T4 DNA ligase, a representative ATP-dependent DNA ligase, contains AP lyase activity. The AP lyase activity of T4 DNA ligase is inhibited in the presence of ATP, suggesting that the adenylated lysine residue is part of the active site for both the ligase and lyase activities. A model is proposed whereby the AP lyase activity of DNA ligase may contribute to the repair of abasic sites in DNA. DNA repair pathways have evolved to process a wide range of chemically distinct lesions in DNA (1). One of the most common types of damage is spontaneous or enzymatic hydrolysis of the N-glycosidic bond between a DNA base and the sugar phosphate backbone generating an abasic site (AP site). 1 AP sites are highly mutagenic and require rapid and efficient repair. AP sites are processed by a base excision repair pathway that is frequently initiated by the action of a class II AP endonuclease that cleaves the DNA backbone adjacent to the lesion to produce 3Ј-OH and 2Ј-deoxyribose 5Ј-phosphate termini (2). The latter residue, referred to as a 5Ј-dRP moiety, is relatively alkali labile and can be removed by AP lyases that facilitate ␤-elimination. Most enzymes demonstrated to have dRPase activity operate through this lyase mechanism, although some, such as the Escherichia coli recJ protein, catalyze hydrolysis (3). The two classes of dRPase enzymes can be distinguished by the fact that the lyase mechanism frequently involves formation of a transient Schiff base intermediate in which an amino group on the enzyme is covalently bound to the DNA. This lyase mechanism, first proposed for E. coli endonuclease III (4), provides a simple method to identify a polypeptide with AP lyase activity because the Schiff base intermediate can be trapped in a stable form by reaction with a strong reducing agent, such as NaBH 4 or NaBH 3 CN. Thus, with an appropriate radioactively labeled substrate, the label can be transferred to the AP lyase enzyme. This borohydride trapping has been documented for several repair enzymes (5)(6)(7)(8)(9) and, more recently, for DNA pol ␤ (10,11). The combined action of AP endonuclease and AP lyase leaves a one-nucleotide gap that is filled by a DNA polymerase. The final step in repair involves DNA strand sealing by DNA ligase. The mechanism of DNA ligase involves covalent modification of the enzyme by adenylation, transfer of the AMP residue in a phosphoanhydride linkage to the 5Ј-phosphate of nicked DNA, followed by resealing of the DNA strand driven by the energy of AMP hydrolysis. We recently characterized a mtDNA ligase as part of an effort to reconstitute repair of AP sites using mitochondrial enzymes (12). The size, template specificity, and immunological properties of the mtDNA ligase suggested that this was a form of DNA ligase III. 
In the course of this work we found that mtDNA ligase is active on a DNA substrate containing an AP site incised on the 5Ј side by a class II AP endonuclease. Our observations prompted a detailed investigation of the action of T4 DNA ligase as a prototype for ATP-dependent DNA ligases at AP sites in DNA. In this paper we show that in the presence of ATP, T4 DNA ligase is able to reseal an incised AP site. In the absence of ATP, T4 DNA ligase acts as an AP lyase to facilitate a ␤-elimination reaction that leads to removal of the 5Ј-dRP residue. A model is presented whereby an intrinsic AP lyase activity in DNA ligase may facilitate repair of AP sites. EXPERIMENTAL PROCEDURES Materials-mtDNA ligase, mitochondrial AP endonuclease, and DNA ligase I were purified from Xenopus ovary tissue as described (12). T4 DNA ligase was obtained from Boehringer Mannheim. T7 DNA ligase was a gift from Dr. J. Dunn (Brookhaven National Laboratory). Variable quantities of both preparations of bacteriophage DNA ligase were subjected to SDS-PAGE analysis (13) in parallel with standard proteins of known concentration. The gels were stained with Coomassie Blue to confirm that both preparations were essentially homogeneous and to permit estimation of protein concentrations using densitometry of the stained gels. Radiochemicals were purchased from ICN Radiochemicals. Uracil DNA glycosylase (UDG) was obtained from Epicentre Technologies (HK-UNG). FPG protein was a gift from J. Tchou and A. P. Grollman (SUNY-Stony Brook). Sodium borohydride and sodium thioglycolate were obtained from Sigma-Aldrich. Other reagent grade chemicals were obtained from Sigma-Aldrich or Fisher. The Poros Q 4.6 ϫ 50-mm column used for anion exchange HPLC was obtained from Perceptive Biosystems. Oligonucleotides were either synthesized by the phosphoramidite method at the SUNY-Stony Brook Oligonucleotide Synthesis Facility or were obtained from Operon. A continuous duplex oligonucleotide was prepared by annealing a 5Ј-32 P kinase-labeled 32mer (5Ј-CATGGGCCGACATGAUCAAGCTTGAGGCCAAG) to a complementary oligonucleotide (5Ј-TCTTGGCCTCAAGCTTGATCAT-GTCGGCCCATG). Two nicked duplex oligonucleotides were prepared by annealing a 5Ј-32 P kinase-labeled 17-mer (5Ј-UCAAGCTTGAGGCC-AAG; referred to as U17) and either a nonradioactive 15-mer (5Ј-CAT-GGGCCGACATGA) or 12-mer (5Ј-GGGCCGACATGA) to the same complementary strand described above. The 12-mer was used in the experiment in Fig. 7C to permit better resolution of the reaction product from the initial labeled substrate. Methods-Oligonucleotides were phosphorylated using standard procedures (14) and annealed by heating to 80°C in 0.1 M NaCl, 10 mM Tris, pH 8, 1 mM EDTA and slow cooling to 4°C. Oligonucleotides were treated immediately before use with UDG in 20 mM Hepes, pH 7.5, and diluted into an assay mixture with DNA ligase in 10 mM Hepes, pH 7.5, 1 mM MgCl 2 , unless otherwise indicated. Borohydride trapping was performed by a modification of published procedures (9, 11) by including 20 or 50 mM NaBH 4 in the binding reaction. After 30 min at 25°C, the solution was adjusted to contain 6 mM CaCl 2 and 10 g/ml micrococcal nuclease. After 20 min of incubation at 37°C, proteins were precipitated with trichloroacetic acid, analyzed by SDS-polyacrylamide gel electrophoresis (13), and detected by autoradiography or PhosphorImager analysis (Molecular Dynamics). 
dRPase activity was measured as described (3), and HPLC analysis of products generated in the presence of sodium thioglycolate was performed as described (5,15), except that a Poros Q anion exchange column was used. RESULTS AND DISCUSSION DNA Ligases Are Able to Reseal DNA Strands Nicked on the 5Ј Side of an AP Site-Our laboratory has studied the base excision repair pathway with nuclear and mitochondrial protein fractions using templates containing precisely positioned single AP sites embedded in covalently closed circular DNA (12,15). These sites are readily cleaved on the 5Ј side by class II AP endonuclease to yield a 3Ј-OH terminus and a 5Ј-deoxyribose phosphate (dRP) residue. When templates bearing incised AP sites are incubated with DNA ligase in the presence of ATP, the DNA ligases are able to reseal the nicked strand ( Fig. 1). This reaction has been reported for T4 DNA ligase (16) but not for eukaryotic DNA ligases. Because ligation of a 5Ј-dRP moiety directly reverses the action of AP endonuclease, it is counterproductive for repair. It is also a potential confounding factor in efforts to reconstitute repair reactions in vitro, because religation of an abasic site might not be differentiated from actual repair. In the experiment in Fig. 1, we used a substrate with a synthetic analogue of an abasic site, a tetrahydrofuran analogue that has been used extensively in repair studies (15,17). This is an analogue of a reduced deoxyribose moiety and is not subject to ␤-elimination. Experiments presented below show that T4 DNA ligase can also reseal an authentic (nonreduced) AP site. T4 DNA Ligase Has AP Lyase Activity-We previously showed Xenopus laevis mtDNA ligase can be labeled using a borohydride trapping procedure that is specific for AP lyase activities (12). To determine whether this is a general property of ATP-dependent DNA ligases, we tested T4 and T7 DNA ligases for the ability to react with AP sites in a borohydride trapping assay. This assay employed an oligonucleotide substrate designed to contain a specific U residue adjacent to a nick, referred to as 15-*U17:33 to denote a 15-mer annealed adjacent to a kinase-labeled (*) 17-mer to a 33-mer complementary strand. Treatment of the duplex oligonucleotide with UDG results in a 5Ј-phosphoryl abasic site. This is an exact model of the substrate that would be generated by action of a class II AP endonuclease. Fig. 2 shows that both T4 DNA ligase and T7 DNA ligase react in this assay. Controls in lanes 3 and 4 of Fig. 2 demonstrate that cross-linking was dependent on the presence of NaBH 4 and on the AP site, because the reaction was not observed with a control oligonucleotide containing thymine in place of uracil. We performed a variety of control experiments to characterize the putative AP lyase activity in T4 DNA ligase. The extent of cross-linking with a nicked substrate varies with solution conditions. Cross-linking is reduced at 5 mM MgCl 2 in comparison with the standard reaction conducted at 1 mM MgCl 2 and is inhibited more than 90% by 1 mM ATP (Fig. 3). The standard reaction includes post-treatment with micrococcal nuclease to degrade the oligonucleotide cross-linked to protein by NaBH 4 . When this nuclease treatment was omitted, the electrophoretic mobility of the major cross-linked protein species was reduced, as expected for a DNA-protein complex (Fig. 3B). 
In the absence of micrococcal nuclease treatment a minor cross-linked labeled species with essentially the same gel mobility as unmodified T4 DNA ligase was also observed. This faster migrating species is expected to be formed as an intermediate in the action of an AP lyase, as shown in Fig. 4. The initial product of an attack by AP lyase on an AP site is a Schiff base intermediate in which the enzyme is joined to DNA. The AP lyase reaction proceeds with elimination of the DNA from the C3Ј position of deoxyribose, producing an enzyme-dRP intermediate with a Schiff base linkage. Both the enzyme-DNA and enzyme-dRP species can be reduced by NaBH 4 to generate the doublet of cross-linked species in Fig. 3B. These borohydride FIG. 1. DNA ligases can reseal a nick adjacent to a 5-phosphorylated abasic site. A 5Ј-end labeled oligonucleotide containing a single synthetic AP site (3-hydroxy-2-hydroxymethyltetrahydrofuran, designated F for furan) was ligated into a gapped heteroduplex, and the covalently closed circular DNA product was purified as described (15). PAGE-urea gel analysis of a HinfI digest of this substrate confirmed that all substrate molecules were ligated with the tetrahydrofuran residue embedded in a 46-mer fragment, denoted as 46(F) (lane 1). Treatment with mitochondrial AP endonuclease led to cleavage on the 5Ј side of the tetrahydrofuran residue, providing a 26-mer fragment following diagnostic HinfI cleavage (lane 2). This fragment with a 5Ј-tetrahydrofuran residue is identified as F26. Samples of this mitochondrial AP endonuclease-incised substrate were incubated with X. laevis DNA ligase I, mtDNA ligase, T4 DNA ligase, or T7 DNA ligase (lanes 3-6). The products were deproteinized by organic extraction and cleaved with HinfI endonuclease prior to electrophoresis. The radioactive species moving slightly slower than the F26 fragment, which is most apparent in lane 6, is an intermediate in the ligase reaction produced by transfer of AMP to the 5Ј terminus at the nick. cross-linking results are consistent with the hypothesis that T4 DNA ligase contains AP lyase activity. In other experiments, we found that it was not necessary to present the AP site in the context of a nick in DNA, although this is the preferred substrate. Borohydride trapping was observed when the 15-mer oligonucleotide was omitted from the standard nicked substrate, leaving a 5Ј-AP site adjacent to single-stranded DNA. We also observed cross-linking to a free oligonucleotide with a 5Ј-dRP site generated by the action of UDG on the 5ЈU-17-mer oligonucleotide (5Ј-32 P-UCAAGCTT-GAGGCCAAG). The efficiency of borohydride trapping with a single-stranded 5Ј-AP oligonucleotide was about 50% of that observed with the nicked oligonucleotide substrate. The singlestranded oligonucleotide substrate was used in some experi-ments described below to simplify preparation of large quantities of substrate for AP lyase assays. The borohydride trapping reaction is a very sensitive probe of AP lyase activity but is not always efficient because it acts on a transient intermediate in the overall AP lyase reaction. For DNA glycosylases that remove a base and attack the AP site in a sequential dual mechanism, such as FPG protein, this borohydride trapping procedure is relatively easy to control. 
It is more difficult to trap a large fraction of a protein that does not contain an intrinsic glycosylase activity because the NaBH 4 required to cross-link the enzyme-DNA Schiff base intermediate can also react with the ring open form of the sugar residue to inactivate the substrate. As an independent assay for AP lyase activity, we monitored the release of free 5Ј-dRP from DNA as an acid or ethanol-soluble species, as shown in Fig. 5. When reactions were performed with a high concentration of the single stranded 5Ј-dRP-17-mer oligonucleotide, we found that dRP release was linear with time, showed a clear temperature optimum, and was inhibited by ATP as observed for the borohydride trapping assay. The ethanol-soluble species released by T4 DNA ligase in the presence of thioglycolic acid was observed to have the same chromatographic properties as the product released by FPG protein, which has a well characterized AP lyase activity ( Fig. 6 and Refs. 5, 7, and 18). The borohydride trapping and dRP release assays show that T4 DNA ligase is an authentic AP lyase by the same criteria used to show that DNA pol ␤ possesses AP lyase activity (10,11). Although T4 DNA ligase is capable of processing multiple AP site substrates (Fig. 5A), the turnover number is less than 10% of the vigorous rate observed for FPG protein (5). It should be noted that the glycosylase activity of FPG protein is reported to be significantly slower than the AP lyase activity (5). A lower turnover number for the AP lyase activity in T4 DNA ligase may be expected because the enzyme is likely to bind persistently to the gapped substrate generated by AP lyase action on a site previously cleaved by AP endonuclease. We have not yet identified the active site for the DNA ligaseassociated AP lyase. The borohydride trapping reaction requires attack on the C1Ј residue of deoxyribose by an N-terminal amino group or by an internal lysine (19). The fact that the AP lyase activity is suppressed by ATP suggests that the active site lysine residue that is adenylated in DNA ligase (20) may be involved directly in the nucleophilic attack that promotes ␤-elimination. This residue is normally in close proximity to the nick in a DNA substrate in the course of a DNA ligation reaction. However, we have not ruled out the possibility that another lysine residue may be involved in this attack, because the deadenylated enzyme may have an altered conformation that interacts differently with DNA substrates containing 5Ј-dRP residues. It will be particularly interesting to map the active site residue in T7 DNA ligase that reacts in the borohydride trapping reaction because the structure of this enzyme has been determined (21). This enzyme has the added advantage that it is relatively small, with only 359 amino acid residues. A Model for the Role of AP Lyase Associated with DNA Ligase-We have observed AP lyase activity using the borohydride trapping assay for the following four different ATP-dependent DNA ligases: T4 and T7 DNA ligase (Fig. 2), mtDNA ligase (12), and DNA ligase I (data not shown). Because ATPdependent DNA ligases as a class share structural and functional features (22), it is likely that the presence of AP lyase activity will be conserved in this family. To date we have not been able to document AP lyase activity in bacterial DNA ligases from either E. coli or T. aquaticus either in the presence or the absence of their cofactor, NAD (data not shown). 
The critical question raised by our observations is whether the AP lyase activity associated with T4 DNA ligase plays a significant physiological role. A model for the action of T4 DNA ligase at AP sites is shown in Fig. 7. In living cells, AP sites are very rapidly incised by AP endonuclease to generate the sort of nicked AP substrate we have used in our reactions. The experiments reported here show that T4 DNA ligase can act as an AP lyase at these sites in the absence of ATP. Under these conditions, the deadenylated enzyme cannot seal the nick and instead facilitates ␤-elimination, leading to loss of the 5Ј-dRP residue. This produces the single nucleotide gap structure diagramed as species 5 in Fig. 7. This single base gap may be repaired by the action of DNA polymerase and the conventional strand sealing action of DNA ligase. It is also important to consider the action of T4 DNA ligase at incised AP sites in the presence of ATP, because a large fraction of DNA ligase may exist in the adenylated state in vivo. Our results suggest that the adenylated T4 DNA ligase FIG. 4. The chemistry of AP lyase action accounts for DNA-enzyme and dRP-enzyme complexes following NaBH 4 treatment. This scheme is based on those presented for other AP lyase enzymes (7,19). The Shiff base reaction scheme requires attack by a free amino group of the AP lyase on the C1Ј residue of the deoxyribose. This reactive nitrogen is referred to as N-enz. Other functional groups within the enzyme may assist in the ␤-elimination reaction as indicated. Covalent intermediates in the Schiff base reaction scheme labeled 1 and 2 may be reduced by borohydride to yield stable species with the protein cross-linked to an oligonucleotide or to a dRP moiety, respectively. FIG. 5. Release of an acid soluble product from a 5-32 P-labeled abasic site by T4 DNA ligase. 15 pmol of 5Ј-32 P U17 oligonucleotide pretreated with UDG was incubated with 100 ng of T4 DNA ligase in 40 mM Hepes buffer, pH 7.5, for varied periods of time at 30°C (A), for 30 min at varied temperature (B), or in the presence of increasing concentrations of ATP (C). dRP release was measured as the generation of a radioactive product soluble in the presence of cold TCA and activated charcoal (3). The percentage of dRP removed was determined relative to the total alkali-labile cpm. The maximal amount of label solubilized in parallel reactions without enzyme represented 4% of the total available substrate. FIG. 6. The putative 5-dRP product released by DNA ligase has the same chromatographic properties as that released by FPG protein, a well characterized AP lyase. The 5Ј-32 P U17 oligo pretreated with UDG was incubated with either FPG protein as a positive control (A) or with T4 DNA ligase (B) in the presence of 50 mM sodium thioglycolate. The product released by the lyase activity of FPG protein has been shown to react with thioglycolate to generate an anionic species (5,18). Reaction of the ␤-elimination product with thioglycolate leads to reduction of the aldehyde, blocking the subsequent ␦-elimination reaction for FPG protein. The ethanol-soluble reaction products were analyzed by chromatography on a Poros Q HPLC column using the indicated NaCl gradient. The intact oligonucleotide elutes from this column with the 1 M NaCl step. has a reduced ability to promote ␤-elimination. Instead, when a T4 DNA ligase molecule that is activated by adenylation binds this nicked substrate, it seals the nick to regenerate an internal AP site, as shown in Fig. 
1 and diagramed as species 3 in Fig. 7A. The second product of the ligation reaction is a "disarmed" DNA ligase molecule that is no longer adenylated but is still in contact with the AP site. To test whether T4 DNA ligase is able to incise DNA on the 3Ј side of an internal AP site (i.e., without prior action of an AP endonuclease), we performed the experiment in Fig. 7B. This experiment shows that T4 DNA ligase is clearly capable of strand incision to yield a product with a slightly slower gel mobility than that produced by the well characterized AP lyase of FPG protein. This suggests that T4 DNA ligase is able to promote ␤-elimination but unlike FPG protein does not efficiently promote ␦-elimination. This sort of incision reaction was not observed in Fig. 1 because that experiment employed a reduced AP site analogue. These results suggest that when T4 DNA ligase seals a nick generated by class II AP endonuclease, it may recognize the product as a mistake and employ its lyase activity to reopen the DNA. To test this prediction, we performed the experiment shown in Fig. 7C. In this experiment, T4 DNA ligase was incubated with a 17-mer oligonucleotide containing a 5Ј-32 P-dRP residue adjacent to a nonradioactive 12-mer. DNA ligase was able to ligate the 12-mer to the 5Ј-dRP-17 mer to generate a 29-mer with an internal 32 P-dRP residue (lane 3 of Fig. 7C). A limited extent of ligation was observed without the addition of exogenous ATP (lane 2), presumably because a fraction of the T4 DNA ligase is puri-fied in an adenylated form. The more efficient ligation in the presence of ATP was followed by incision on the 3Ј side of the AP site to produce a labeled 12-mer with a 3Ј-dRP residue. Thus, the label transfer experiment in Fig. 7C confirms the model for the action of T4 DNA ligase at an AP site in the presence of ATP. The ring open 3Ј-dRP residue produced by AP lyase cannot be rejoined by DNA ligase due to the 2Ј-3Јdouble bond, but the 3Ј-dRP group would be susceptible to release by class II AP endonuclease. Taken together, these experiments suggest that the role of T4 DNA ligase in base excision repair may not be limited to the final step of strand closure.
5,210.8
1998-04-03T00:00:00.000
[ "Biology", "Chemistry" ]
Evaporation protons from the low-energy fusion of 6Li + 58Ni Very recently, fusion cross sections for the weakly-bound proton-rich systems (8B,7Be) + 58Ni were obtained by using a technique where the protons evaporated after the respective fusion reaction were measured. In the present work, the same technique is applied to get fusion cross sections for the 6 Li + 58Ni system at sub-barrier energies. Comparison with data reported previously for 6Li projectiles on similar targets gives consistent results, even though the previous data were obtained with quite different techniques. Introduction In the present work, the protons that are emitted in the 6 Li + 58 Ni fusion reaction are measured. The fact that 6 Li is a weakly-bound nucleus that can easily break into an alpha particle and a deuteron, makes this system an interesting case to investigate. The study of the interplay between the breakup and the fusion channels in weakly-bound systems is a subject that has attracted much interest lately. This work intends to make a contribution to this general subject, by providing fusion data for this particular system. Similar measurements for the ( 8 B, 7 Be) + 58 Ni systems were recently performed [1,2]. In these experiments, the protons evaporated after the respective fusion reaction were measured and the calculated proton multiplicities were used to deduce the respective fusion cross sections. Somehow aided by the proton excess of the projectiles, the corresponding compound nuclei are close to the proton drip-line, so proton emission is naturally a good signature for fusion in these systems. The two projectiles mentioned above, along with 6 Li, are the main components of a mixed beam produced at the radioactive beam facility TwinSol at the University of Notre Dame. The 6 Li projectile does not have a proton excess but, when fused with 58 Ni, it also produces a compound nucleus which is in the proton-rich region. Because of the high Q value for fusion, it is highly excited and proton emission is still the dominant evaporation channel. Similar to the previous two cases, more than one proton (∼ 1.5) is typically produced for each fusion reaction in this system. In the present work, an experiment using the same technique [1,2] was performed to measure the protons emitted during the 6 Li + 58 Ni fusion reaction at sub-barrier energies. Only part of the data have been analyzed so far and the respective results are presented in this preliminary report. In next section, the main experimental details are given while the results are presented and discussed in section 3. Section 4 shows a comparison with fusion data for other two systems having the same projectile and similar targets. Finally, a summary and the conclusions are given in section 5. Experimental details The experiment was performed using the radioactive-ion-beam facility TwinSol at the University of Notre Dame [3], where a primary beam of 6 Li was used to impinge a primary gas-target of 3 He. This beam was bunched so that the respective Time-of-Flight (TOF) could be measured. The reaction products of interest are selected by a first superconducting solenoid which focuses them in a Mid chamber. A second superconducting solenoid further transports the secondary beam, focusing it at the secondary chamber, where the Ni target is placed. Figure 1 shows an upper view of the experimental set up at the secondary chamber. 
In order to get an image of the beam, a detector was placed at the target position, with the beam rate lowered just enough for the detector to handle it. The three main reaction products are generated by the 2p and 1p pickup reactions (producing ⁸B and ⁷Be, respectively) and by elastic scattering (secondary beam of ⁶Li). The respective secondary beams, which correspond to the same magnetic rigidity, are obtained at the secondary-target position. The ⁸B, ⁷Be and ⁶Li beams arrive at the target with different times of flight, so they can be selected by placing software gates on the corresponding TOF values. However, each of these beams has its own contaminants that have to be dealt with separately. In the case of ⁶Li, satellite beams of alphas and deuterons fall in the same time window, so they cannot be separated. To measure the evaporated protons after the fusion reaction, four telescope detectors were placed at backward angles, and two additional telescope detectors at forward angles served to monitor the beam for normalization purposes (see Fig. 1). During the experiment, the Ni target was placed at the center of the chamber. The satellite beams of alphas and deuterons were of some concern because the protons produced by them in reactions with the Ni target could not be separated from those produced by ⁶Li. However, from the respective cross sections reported earlier for these reactions [4,5], the corresponding contributions could be estimated as less than 0.3% in both cases, so they could be safely neglected. The experiment was performed in three stages covering a total of six energies of ⁶Li, between 10 and 14.1 MeV in the laboratory frame of reference. In the present preliminary report, only the results corresponding to one of the stages, including three of the six energies (E_lab = 10.9, 12.2, and 13.2 MeV), are presented.

Results and discussion

Figure 2 shows the proton angular distribution obtained for each of the three energies. The respective PACE2 [6] predictions are indicated with the solid curves, which were integrated over the whole solid angle to get the total proton cross sections σ_p. These cross sections can be mapped into σ_fus by using the respective proton multiplicities, M_p, also calculated with the code PACE2. The possible contribution of incomplete fusion to the data was estimated to be small. The resulting fusion excitation function is shown in Fig. 3 with solid squares and, for comparison purposes, the total reaction cross sections that were obtained previously by our group [7] are also shown, with empty squares. It can be observed that the fusion cross section practically saturates the total reaction cross section, especially at the lowest energies. The dashed curve corresponds to a barrier-penetration-model (BPM) calculation using Wong's formula [8]: σ_fus(E) = (ℏω₀ R_B² / 2E) ln{1 + exp[2π(E − V_B)/ℏω₀]}. The respective barrier parameters V_B, R_B, and ℏω₀ were obtained from the São Paulo potential (SPP) [9], which is a double-folding potential that has been shown to provide a realistic bare potential for many systems. In the SPP calculation, default values were used for the respective nuclear densities. The potential is shown with the solid line in figure 4, and the respective barrier leads to the values V_B = 12.4 MeV, R_B = 9.02 fm, and ℏω₀ = 3.63 MeV for the height, the radius, and the barrier curvature parameter, respectively.
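As a quick illustration of the BPM estimate described above, the short sketch below evaluates Wong's formula with the barrier parameters quoted in the text (V_B = 12.4 MeV, R_B = 9.02 fm, ℏω₀ = 3.63 MeV). It is only a minimal numerical sketch, not the analysis code used for this work; the non-relativistic centre-of-mass conversion and the printed energies are illustrative assumptions.

import numpy as np

# Barrier parameters quoted above for 6Li + 58Ni (from the Sao Paulo potential).
V_B = 12.4   # barrier height, MeV
R_B = 9.02   # barrier radius, fm
HW0 = 3.63   # barrier curvature parameter (hbar*omega_0), MeV

def wong_sigma_mb(E_cm):
    """Wong's formula: sigma(E) = (hbar*w0 * R_B^2 / 2E) * ln(1 + exp[2*pi*(E - V_B)/(hbar*w0)]).
    With R_B in fm the result is in fm^2; multiply by 10 to convert to mb."""
    E_cm = np.asarray(E_cm, dtype=float)
    sigma_fm2 = (HW0 * R_B**2) / (2.0 * E_cm) * np.log1p(np.exp(2.0 * np.pi * (E_cm - V_B) / HW0))
    return 10.0 * sigma_fm2

# The three laboratory energies of this report, converted (non-relativistically) to E_cm.
E_lab = np.array([10.9, 12.2, 13.2])
E_cm = E_lab * 58.0 / (58.0 + 6.0)
for el, ec, s in zip(E_lab, E_cm, wong_sigma_mb(E_cm)):
    print(f"E_lab = {el:5.1f} MeV   E_cm = {ec:5.2f} MeV   sigma_Wong = {s:7.2f} mb")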
With respect to Wong's predictions, the fusion data show a considerable enhancement, but the actual enhancement is in fact even larger, as explained in the following paragraph. In the sub-barrier region, Wong's formula is inaccurate for systems as light as the one considered in the present work, actually overpredicting the results of more accurate BPM calculations. To account for this, an optical-potential-model (OPM) calculation was done for this system by using the respective SPP for the real part and an interior imaginary potential of Woods-Saxon form, with parameters W_0 = 50 MeV, r_W = 1.06 fm, a_W = 0.2 fm (dashed line in figure 4). The absorption in this potential effectively simulates an incoming-wave boundary condition, thus providing a good estimate of fusion. The obtained result, σ_OPM, is also shown for comparison.

Figure 5. Comparison of reduced fusion cross sections for ⁶Li on targets of ⁵⁸Ni (present work), ⁵⁹Co (Ref. [12]) and ⁶⁴Zn (Ref. [13]). The curve is to guide the eye.

Comparison with similar systems

A comparison of our results was made with data for other systems having the same projectile but targets of ⁵⁹Co [12] and ⁶⁴Zn [13]. To make the comparison, the data must be properly scaled, so the cross sections and energies were reduced according to the expressions σ_red = σ/(A_P^(1/3) + A_T^(1/3))² and E_red = E_c.m.(A_P^(1/3) + A_T^(1/3))/(Z_P Z_T); a short numerical sketch of this reduction is given after the conclusions. This reduction of the data is expected to eliminate trivial effects of size and charge without washing out other important effects [10,11], thus making data for different systems directly comparable to each other. Intuitively, one would expect this reduction to work better for similar systems, in particular for systems whose respective barrier curvatures have close values. Such curvatures are estimated to differ by less than 2% for the three systems compared, thus justifying the method. Compared to other existing prescriptions for data reduction, this method has the great advantage of being completely model independent. Figure 5 shows the reduced fusion data for the ⁶Li + (⁵⁸Ni, ⁵⁹Co, ⁶⁴Zn) systems. It can be seen that, in the overlapping region, our data are quite consistent with the data for the other two systems. This is despite the fact that the experimental techniques were completely different in the three cases. In the work of Beck et al. (⁶Li + ⁵⁹Co) [12], the gamma rays emitted by the evaporation residues were measured. For the ⁶Li + ⁶⁴Zn system [13], the cross sections for heavy-residue production were measured using an activation technique, detecting off-line the characteristic X-rays emitted in the electron-capture decay of the reaction products. It is thus very encouraging to obtain such good consistency with our data, for which the usual experimental difficulties are much increased by the fact that they were obtained as part of a multi-beam experiment, using only secondary beams. In a way, we could say that the observed consistency further supports the reliability of our technique.

Conclusions

Evaporated protons were measured for the ⁶Li + ⁵⁸Ni system at sub-barrier energies, and the respective fusion excitation function was deduced by using calculated proton multiplicities. It was observed that the fusion cross sections nearly saturate the previously measured total reaction cross sections. The data show a large fusion enhancement with respect to the predictions of the one-dimensional barrier-penetration model using a realistic bare potential.
Good agreement was observed with previous fusion data for the 6 Li + ( 59 Co, 64 Zn) systems even though the experimental techniques used in the three cases were quite different.
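The following minimal sketch illustrates the data-reduction step used for the comparison in Fig. 5, assuming the standard prescription of Refs. [10,11] in which σ is divided by (A_P^(1/3) + A_T^(1/3))² and E_c.m. is scaled by (A_P^(1/3) + A_T^(1/3))/(Z_P Z_T); the exact form of the expressions is an assumption here, and the cross-section values in the example are hypothetical, serving only to show how points from different systems land on common reduced axes.

def reduce_point(sigma_mb, E_cm, A_p, Z_p, A_t, Z_t):
    """Reduce a fusion cross section and c.m. energy (assumed prescription, see lead-in):
    sigma_red = sigma / (A_p^(1/3) + A_t^(1/3))^2,
    E_red     = E_cm * (A_p^(1/3) + A_t^(1/3)) / (Z_p * Z_t)."""
    g = A_p ** (1.0 / 3.0) + A_t ** (1.0 / 3.0)
    return E_cm * g / (Z_p * Z_t), sigma_mb / g ** 2

# Hypothetical points for two of the compared systems, mapped onto the reduced axes.
print(reduce_point(sigma_mb=120.0, E_cm=11.5, A_p=6, Z_p=3, A_t=58, Z_t=28))  # 6Li + 58Ni
print(reduce_point(sigma_mb=150.0, E_cm=12.5, A_p=6, Z_p=3, A_t=64, Z_t=30))  # 6Li + 64Zn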
2,420.6
2015-09-14T00:00:00.000
[ "Physics" ]
Benchmarking and Improving Text-to-SQL Generation under Ambiguity Research in Text-to-SQL conversion has been largely benchmarked against datasets where each text query corresponds to one correct SQL. However, natural language queries over real-life databases frequently involve significant ambiguity about the intended SQL due to overlapping schema names and multiple confusing relationship paths. To bridge this gap, we develop a novel benchmark called AmbiQT with over 3000 examples where each text is interpretable as two plausible SQLs due to lexical and/or structural ambiguity. When faced with ambiguity, an ideal top-$k$ decoder should generate all valid interpretations for possible disambiguation by the user. We evaluate several Text-to-SQL systems and decoding algorithms, including those employing state-of-the-art LLMs, and find them to be far from this ideal. The primary reason is that the prevalent beam search algorithm and its variants, treat SQL queries as a string and produce unhelpful token-level diversity in the top-$k$. We propose LogicalBeam, a new decoding algorithm that navigates the SQL logic space using a blend of plan-based template generation and constrained infilling. Counterfactually generated plans diversify templates while in-filling with a beam-search that branches solely on schema names provides value diversity. LogicalBeam is up to $2.5$ times more effective than state-of-the-art models at generating all candidate SQLs in the top-$k$ ranked outputs. It also enhances the top-$5$ Exact and Execution Match Accuracies on SPIDER and Kaggle DBQA. Introduction Research on Text-to-SQL generation has focused on scenarios where each natural language question is associated with one correct SQL (Zelle and Mooney, 1996;Tang and Mooney, 2000;Scholak et al., 2021a;Wang et al., 2020;Rubin and Berant, 2021;Xie et al., 2022;Arcadinho et al., 2022;Zeng et al., 2022;Scholak et al., 2021b;Pourreza and Rafiei, 2023).Popular benchmarks driving such research, including WikiSQL (Zhong et al., 2018), SPIDER (Yu et al., 2018), its robust perturbations (Chang et al., 2023), and even "in-thewild" benchmarks such as KaggleDBQA (Lee et al., 2021) and SEDE (Hazoom et al., 2021) all associate one correct SQL with text.Meanwhile, ambiguity is prevalent in real-life databases -particularly the ones obtained by integrating several data sources for data analysis, where a natural language interface is most in demand.The sources of ambiguity are several -inherent ambiguity of natural language, the user's ignorance of table/column names, overlapping strings in column names, underspecified clauses, and confusion about whether aggregates are pre-computed, or if a join is required.Hazoom et al. (2021) observe that up to 87% of queries on the stack exchange database are underspecified, and Wang et al. (2022) mention that 11% of queries exhibited ambiguity in column names.Although prior work has brought up ambiguity, there is no publicly available benchmark with ambiguous queries, nor a comprehensive evaluation of systems under ambiguity. 
Our first contribution is to bridge this gulf by developing a benchmark, AmbiQT, that tests performance under ambiguity in the context of current models.AmbiQT includes over 3000 examples, each associating a natural question on a database with two valid SQLs.Inspired by our experience with several real-world datasets, we target four types of ambiguity spanning both lexical (ambiguous column and table names) and structural (whether a join is necessary, and an aggregate is pre-computed) ambiguity.The benchmark is generated via a combination of ChatGPT (OpenAI, 2022) based synonym generation and perturbation, and standard rule-based perturbation. When faced with ambiguity, an ideal Text-to-SQL system should incorporate all valid alternatives in their top-k SQL outputs, for user resolution.We show that present approaches, ranging from T5-3B (Raffel et al., 2019) to SOTA models, fail to generate all ambiguous outputs with any decoding strategy, including beam search and diversity-promoting sampling methods such as Nucleus (Holtzman et al., 2020) and Typical sampling (Meister et al., 2023).Most outputs are small lexical tweaks of the top choice, bringing about little meaningful diversity in SQL structures or schema alternatives.Even SOTA LLMs like Chat-GPT (OpenAI, 2022) suffer from this issue. To remedy the lack of diversity, we propose a new decoding algorithm, LogicalBeam, that allocates branching to explore underlying logical variants of the SQL rather than the string form.We catalog the errors of T5-3B (Raffel et al., 2019) on the SPIDER dev split and use our insights to encourage targeted types of diversity -the number of JOINs and selections, and table/column names. Our main contributions are: • We develop AmbiQT, the first benchmark that tests performance under four types of ambiguity over 3000+ examples.• We show that SOTA methods, including a finetuned T5-3B, RESDSQL (Li et al., 2023), Ope-nAI Codex, and ChatGPT, provide a poor representation of ambiguity despite their high accuracy on conventional benchmarks.• We present LogicalBeam, a two-step algorithm that generates plan-based templates with counterfactually controlled plan diversity and fills them via a beam search that branches only on schema names.• We show that LogicalBeam consistently increases the fraction of time when all gold SQLs get generated in the Top-5 choices by 1.5 − 2.5× over the baselines across the board on AmbiQT. Background and Related Work A Text-to-SQL model takes as input a question expressed as a natural language text x, and a database schema s comprising of table and column names, and outputs an SQL program y which can be executed against the database to answer the user's question.Figure 1 shows an example.The training data for the task comprises (text, schema, SQL) triplets spanning multiple distinct databases.Benchmarks.Popular benchmarks for the Textto-SQL task are WikiSQL (Zhong et al., 2018) and SPIDER (Yu et al., 2018).A few others have been proposed recently to capture real-world scenarios, such as KaggleDBQA (Lee et al., 2021), SEDE (Hazoom et al., 2021), and EHRSQL (Lee et al., 2022).They all attach one SQL per text, though some of them mention the problem of ambiguity in real-world datasets.Dr. SPIDER (Chang et al., 2023), designed to test the robustness of existing models, perturbs either the text or schema of SPIDER but still assigns one SQL per text. 
Ambiguity in SQL Although ambiguity has been studied in other fields of NLP (Pilault et al., 2023;Li et al., 2022;Futeral et al., 2022), it has been unexplored in the context of semantic parsing.Ambiguity in SQL arising from related column names is discussed in (Wang et al., 2022), but they only consider column ambiguity.Their method of recognizing ambiguous queries depends on labeling words of the text and does not generalize to other kinds of ambiguity.To the best of our discernment, AmbiQT represents the first open benchmark for testing coverage of ambiguous alternatives. Diverse Decoding.Prior work has critiqued the lack of meaningful diversity in beam-search outputs (Finkel et al., 2006;Gimpel et al., 2013;Li et al., 2016;Li and Jurafsky, 2016).In response, many fixes have been proposed.Some proposals attempt to restrict the tokens sampled, using strategies like Nucleus sampling (Holtzman et al., 2020), Truncated Sampling (Hewitt et al., 2022), and Typical Sampling (Meister et al., 2023), while some rely on Template-Based decoding (Wiseman et al., 2018;Zhang et al., 2022;Fu et al., 2023;Elgohary et al., 2020;Awasthi et al., 2022).A third approach is to generate a prefix with high diversity first, then generate the rest of the sentence with lower diversity.Narayan et al. (2022) follow this recipe but focus on incorporating diverse entity orders in text summarization. 3 AmbiQT: A Benchmark of Ambiguous Text-to-SQL Conversion AmbiQT is constructed so that each text query has two distinct valid SQL interpretations.Motivated by our experience working with real-life databases, we designed AmbiQT to encompass four types of ambiguity.Each entry is designed so that both alternatives have a similar relevance to the question, and a well-calibrated decoding method is expected to rank them close by in their outputs.We create AmbiQT by modifying the SPIDER (Yu et al., 2018) dataset, anduse ChatGPT (Ope-nAI, 2022) to aid with the creation.In each case, we modify the schema instead of the text as that provides greater control over the modification process.We explain the kinds of ambiguity in AmbiQT below and portray examples of each in Table 1. Column Ambiguity (C).Unlike the SPIDER benchmark where column names usually appear verbatim in the question text (like born state for the column born_state), when users unaware of the schema pose a natural question, they introduce column ambiguity (Wang et al., 2022).For example, "What is the capacity of O2 Arena?" could be ambiguous if the schema has separate columns for standing and seating capacity.Likewise, a query on the number of under-nourished children is ambiguous if we have different columns for "under-weight children" and "stunted growth in children". 
To simulate column ambiguity, for each text x, schema s, and SQL y in SPIDER, we prompt ChatGPT to generate two synonyms for each column name c of s in a one-shot manner. Appendix A furnishes more details of the prompt. We then modify s by replacing c with two columns c1, c2, and we use y to generate two queries y1, y2 in which all mentions of c are replaced with c1 in y1 and with c2 in y2 (a small illustrative sketch of this perturbation appears below). An example appears in the first row of Table 1. We do not reuse c because the SPIDER dataset often contains column names verbatim in the question, and that would violate our attempt at keeping the two options at similar relevance levels. We modify one column at a time and generate up to 3 examples from each original entry.

Table Ambiguity (T). Analogous ambiguity arises for table names (Cafarella et al., 2008; Pimplikar and Sarawagi, 2012). Here again, we prompt ChatGPT to generate two alternate names for each table. We then modify one SQL y to generate two candidates y1, y2 as shown in Table 1.

Join Ambiguity (J). In production databases, a logical table is often vertically partitioned across several tables for efficient clustered access (Stonebraker et al., 2019). Overlapping column names across tables lead to join ambiguity. Suppose we have two tables: (1) person with columns id, name, email_address, and (2) person_details with columns id, postal_address, photo. A question asking for a person's name and address is ambiguous as to whether a JOIN with person_details is necessary. We expose such ambiguity by modifying the schema as follows. Consider an (x, s, y) triplet. Suppose y involves selecting two or more columns c1, c2, ..., not necessarily in the same order, from a table t. Suppose further that c1 is not a primary key of t. We create a table called t_c1 that includes just the primary key of t together with c1, so that c1 can be retrieved either from t directly or through a join with t_c1; the two gold queries correspond to these two readings.

Precomputed Aggregates (P). This ambiguity is particularly common in data warehouses such as Data Commons, which pre-aggregate certain variables. For instance, the "total rice production" of a state might refer to the column rice_production of state rather than a sum over it. Text-to-SQL models have a bias toward introducing a sum()...group-by clause every time total appears in the text. The non-aggregated alternative is usually missing in the top-k options. We incorporate this ambiguity as follows. For each (x, s, y), where y has at least one aggregate, we construct a new table t′. For each aggregate A over column c in y, we add to t′ the columns A′_c for all A′ ∈ {avg, sum, min, max}, and the columns grouped by in y. For count(*) we add a column called number. We get two gold queries, the original y and a second one with the group-by replaced by a direct SELECT on t′, as shown in the example in Table 1. We also support aggregates across multiple tables but skip the details here.

4 Are Existing Text-to-SQL systems resilient to ambiguity?

We evaluate several SOTA Text-to-SQL models and decoding algorithms on their ability to generate the alternatives of AmbiQT in their top-k outputs. Descriptions of the systems compared and evaluation metrics appear in Subsection 6.2. Table 3 features the results we obtained. For all systems, the top-5 outputs contain both alternatives only for a small percentage of the instances. To investigate the reasons for this poor coverage, we manually inspected several outputs of T5-3B and ChatGPT. A few anecdotes for each kind of ambiguity are shown in Appendix F.
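To make the column-ambiguity perturbation concrete, here is a minimal sketch. It assumes a flat list of column names and plain string SQL; the actual benchmark construction works on SPIDER's structured schema files and obtains the two synonyms from ChatGPT, so the helper name and the example values below are illustrative only.

import re

def split_column(schema_columns, sql, column, syn1, syn2):
    """One column-ambiguity entry: replace `column` by two synonym columns in the schema
    and derive the two gold SQLs y1, y2 (whole-word, case-insensitive replacement)."""
    new_columns = [c for c in schema_columns if c != column] + [syn1, syn2]
    pattern = re.compile(rf"\b{re.escape(column)}\b", flags=re.IGNORECASE)
    y1 = pattern.sub(syn1, sql)
    y2 = pattern.sub(syn2, sql)
    return new_columns, y1, y2

cols = ["name", "capacity", "city"]
sql = "SELECT capacity FROM stadium WHERE name = 'O2 Arena'"
print(split_column(cols, sql, "capacity", "seating_capacity", "standing_capacity"))
# (['name', 'city', 'seating_capacity', 'standing_capacity'],
#  "SELECT seating_capacity FROM stadium WHERE name = 'O2 Arena'",
#  "SELECT standing_capacity FROM stadium WHERE name = 'O2 Arena'")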
The reason for the failure is that Beam Search tends to produce outputs that are minor tweaks of the best hypothesis, as also corroborated by prior work (Finkel et al., 2006;Gimpel et al., 2013;Li et al., 2016;Li and Jurafsky, 2016).One example from the 'C' split of AmbiQT that illustrates this is displayed in Figure 2. Recent diversity-promoting decoding strategies like Nucleus (Holtzman et al., 2020) and Typical (Meister et al., 2023) sampling are designed for natural language and are ineffective for capturing the structural diversity that SQL variants require.These observations motivated the design of our inference algorithm, LogicalBeam. Our method: LogicalBeam LogicalBeam attempts to induce meaningful diversity, while steering clear of vacuous forms of diversity in the formatting of the SQL.We first attempt to understand the type of logical diversity required by analyzing the errors of the top-1 output of T5-3B on the SPIDER benchmark. The mistakes of the top-1 output are cataloged in Table 2. Apart from the column selection order, which is arguably not a serious error, the top four errors are a wrong number of joins, columns, and incorrect column and table names.A large fraction of the errors involves the "skeletal structure" of the SQL, whereas vanilla Beam Search exhibits little diversity in the SQL structure.Most of its diversity is around generating alternate forms of string literals, tweaking comparison orders, or swapping the names of temporary variables (like t1 with t2).These observations drove us to design a twostage approach.In the first stage, we generate diverse SQL skeletons (templates) to capture structural diversity, and in the second we fill in the template with schema-diverse alternatives.We illustrate our approach in Figure 3. Plan-based Template Generation A template of an SQL query abstracts away the names of the tables and columns of the SQL query, string literals, and constants, so that only the structural components (SELECTs, GROUP BYs, JOINs, comparisons and so on) remain.On the train split of SPIDER, we convert the gold SQL to a template by a simple rule-based replacement of schema names (details in Appendix E) and use it to train a Text-to-Template model.However, the top-k templates found via beam search on this model again lacked logical diversity.One example is shown by Figure 6 in Appendix D. We thus explored a more deliberate mechanism to induce diversity following these three steps: First, we preface a template with a plan declaring the structural properties of the SQL where diversity is desired.Based on our error analysis in Table 2, we chose to induce diversity on the number of JOINs and final SELECTions.Thus, for a given input question, we output a plan followed by a template as: <J> joins | <S> selects | <TEMPLATE> The left yellow box in Figure 3 shows one such plan prefixed template. Second, we counterfactually perturb the counts in the plan as follows.We generate the topchoice template t without any constraints (say, with j joins and s selections).We then generate four diverse plans by searching in the neighborhood of the most likely predicted structure as (j − 1, s), (j + 1, s), (j, s − 1), (j, s + 1).We skip invalid combinations (j < 0, j > 3, or s ≤ 0).We also explored sampling j, s based on predicted probabilities, but these were extremely skewed. Finally, for each plan (enforced as a prefix), we use greedy decoding to generate the template.The decoding algorithm was good at generating templates as per the specified plan. 
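A minimal sketch of the two mechanisms just described is given below: a rough rule-based template abstraction (the real rules, per Appendix E, also handle aliases, qualified columns and more) and the counterfactual plan neighbourhood (j ± 1, s) and (j, s ± 1) with invalid combinations skipped. The cap of three joins follows the description above; the function names and example query are illustrative assumptions.

import re

def to_template(sql, tables, columns):
    """Very rough template abstraction: mask string literals, integer constants and
    schema names, leaving only the structural components of the SQL."""
    t = re.sub(r"'[^']*'", "literal", sql)          # string literals
    t = re.sub(r"\b\d+\b", "number", t)             # integer constants
    for name in sorted(columns, key=len, reverse=True):
        t = re.sub(rf"\b{re.escape(name)}\b", "column", t, flags=re.IGNORECASE)
    for name in sorted(tables, key=len, reverse=True):
        t = re.sub(rf"\b{re.escape(name)}\b", "table", t, flags=re.IGNORECASE)
    return t

def neighbour_plans(j, s, max_joins=3):
    """Counterfactual plans around the top-choice structure with j joins and s selections;
    combinations with j < 0, j > max_joins or s <= 0 are skipped, as in the text."""
    candidates = [(j - 1, s), (j + 1, s), (j, s - 1), (j, s + 1)]
    return [(jj, ss) for jj, ss in candidates if 0 <= jj <= max_joins and ss > 0]

print(to_template("SELECT name FROM singer WHERE age > 30",
                  tables=["singer"], columns=["name", "age"]))
# -> SELECT column FROM table WHERE column > number
print(neighbour_plans(j=1, s=2))
# -> [(0, 2), (2, 2), (1, 1), (1, 3)]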
Thus, at the end of the template generation phase, we have at most five templates.

Template filling with Diverse Schema

Here we fill diverse column names and table names into the generated templates. We use beam search to this end but enforce adherence to the template. We track our position in the template during infilling. If the next token is not expected to be part of a table or column name, we disallow the model from exploring anything apart from the highest-scoring next token. Otherwise, we allow it to branch in the next decoding step. However, we restrict its options to a whitelist of tokens computed beforehand by enumerating the columns/tables from the schema. The pseudocode of our Restricted Infilling method is presented in Algorithm 1.

Algorithm 1: Pseudocode for one Beam Extension step of the Restricted Fill-In Algorithm. Data: beam width k, current hypotheses and scores (y1, s1), (y2, s2), ..., (yk, sk), template t, set of all column names C and table names T. Result: the next set of hypotheses with scores.

The next challenge is how to rank the SQLs from the diverse templates and select the top-5. We initially attempted to rank based on the product of probabilities of the template and in-filling steps. However, the probability distribution of the models we worked with was extremely skewed; for example, top-p sampling with p = 0.9 produced the same template in all infillings over 70% of the time. Combined with the well-known lack of calibration of neural sequence models, we found it better to simply choose the top-2 SQLs from each template, along with the top-2 from vanilla beam search without any templates. After filtering out duplicates, the top-5 queries in the list are returned.

Experiments

We present extensive comparisons of several state-of-the-art Text-to-SQL models and decoding methods on AmbiQT in the following sections. We then show that LogicalBeam can be helpful even in the absence of ambiguity. We also present a detailed ablation study of LogicalBeam, and a discussion of its use-cases and shortcomings.

Implementation Details of LogicalBeam

Both stages of LogicalBeam are fine-tuned versions of Flan T5-3B (max length = 512), with an Adafactor (Shazeer and Stern, 2018) optimizer (learning rate 1e-4, and no decay). The models were trained for roughly 300 epochs each, with checkpoint selection based on the highest Template Match and Exact Match, respectively (on the validation set, with greedy decoding). Our datasets for the models consist of one-to-one maps of each example from SPIDER, with, e.g., the SQL query replaced by the corresponding template for the Text-to-Template model. We use the HuggingFace LogitsProcessor (https://huggingface.co/docs/transformers/internal/generation_utils#logitsprocessor) for the Template-Infilling model, which allows us to modify logits at each decoding step. We set all the disallowed tokens' logits to −∞ to implement the restricted beam search.

Methods Compared

We compare with the following models. All use Beam Search with a beam width of 10 unless otherwise specified. For T5-3B (one of the best-performing baselines), alternate decoding algorithms are also included in the comparison.

ChatGPT (CGPT). We prompt ChatGPT for its top five choices given the question and schema in a one-shot manner using an example outside of AmbiQT. One-shot prompting was required to get ChatGPT to adhere to the output format. More details can be found in Appendix A. We also show in Appendix B that alternate prompts tried by prior works (such as Liu et al., 2023) are inefficient in getting ChatGPT to cover all possibilities.
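The restricted beam search described in the implementation details above can be sketched with a custom logits processor that sets disallowed tokens' logits to −∞ at every step. This is only an illustration of the idea, not the paper's exact Algorithm 1: the callable that decides which token ids are currently allowed (the forced template token outside a schema slot, or the whitelist of schema-name tokens inside one) is assumed to exist and is not shown.

import torch
from transformers import LogitsProcessor, LogitsProcessorList

class RestrictedInfillProcessor(LogitsProcessor):
    """Enforces template adherence during infilling by masking the next-token
    distribution: only the caller-supplied set of token ids keeps its score,
    everything else is set to -inf."""

    def __init__(self, allowed_ids_fn):
        # allowed_ids_fn(generated_ids: list[int]) -> list[int] of permitted next-token ids
        self.allowed_ids_fn = allowed_ids_fn

    def __call__(self, input_ids, scores):
        mask = torch.full_like(scores, float("-inf"))
        for row, seq in enumerate(input_ids):
            allowed = self.allowed_ids_fn(seq.tolist())
            mask[row, allowed] = 0.0
        return scores + mask

# Usage sketch (model, inputs and the template tracker are assumed to exist):
# processor = LogitsProcessorList([RestrictedInfillProcessor(template_tracker.allowed_ids)])
# outputs = model.generate(**inputs, num_beams=10, num_return_sequences=2,
#                          logits_processor=processor)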
OpenAI Codex (Codex). We use few-shot prompting with the code-davinci-002 version of OpenAI Codex (Chen et al., 2021). This is the most capable Codex version at the time of writing. More details are provided in Appendix A.

RESDSQL (RSQL). Among approaches that do not use ChatGPT/GPT-4, RESDSQL (Li et al., 2023) is the best-performing method on SPIDER at the time of writing. We use its 3B variant (the most potent one) for comparison but turn off the NatSQL (Gan et al., 2021) representation, as it is orthogonal to our approach and can be used with it.

T5-3B (T5-3B). We use the T5-3B checkpoint from the PICARD (Scholak et al., 2021b) repository that fine-tunes T5-3B on SPIDER. By default, we use Beam Search for T5-3B.

T5-3B with Top-k Sampling (T5-3B-k). At each step of decoding, we sample from the top-50 tokens, i.e., using top-k sampling with k = 50.

T5-3B with Nucleus/Top-p Sampling (T5-3B-p). At each step of decoding, we sample from the top-p tokens that account for 90% of the probability mass, as proposed in (Holtzman et al., 2020).

T5-3B with Typical Sampling (T5-3B-T). Typical Sampling (Meister et al., 2023) is another recent diverse decoding algorithm for enforcing natural diversity. This algorithm uses a parameter, typical_p, similar to the top_p of Nucleus Sampling. Following (Meister et al., 2023), we set typical_p to 0.9.

LogicalBeam. For both stages we fine-tuned separate Flan T5-3B (Chung et al., 2022) models. We use a learning rate of 1 × 10⁻⁴ and an effective batch size of 810 via gradient accumulation in both cases.

Evaluation Metrics. We present two types of accuracies: (i) EitherInTopK, which checks if either of the gold queries features in the top-5 outputs, and (ii) BothInTopK, which checks if both gold queries feature in the top-5. We only report the Execution Match (EXM) accuracies for each. The numbers for Exact Set Match are given in Appendix C.
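The baseline decoding settings listed above map directly onto HuggingFace generation arguments. The snippet below is a hedged illustration of those configurations rather than the authors' evaluation harness: the checkpoint name is a placeholder for the PICARD-style fine-tuned T5-3B, and the input string is schematic.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "t5-3b"   # placeholder for the fine-tuned T5-3B checkpoint
tok = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

inputs = tok("question: ... | schema: ...", return_tensors="pt")  # schematic input

# The baseline decoding configurations described above, expressed as generate() arguments.
configs = {
    "T5-3B (beam)":      dict(num_beams=10, num_return_sequences=5),
    "T5-3B-k (top-k)":   dict(do_sample=True, top_k=50, num_return_sequences=5),
    "T5-3B-p (nucleus)": dict(do_sample=True, top_p=0.9, num_return_sequences=5),
    "T5-3B-T (typical)": dict(do_sample=True, typical_p=0.9, num_return_sequences=5),
}

for name, kwargs in configs.items():
    out = model.generate(**inputs, max_length=512, **kwargs)
    print(name, tok.batch_decode(out, skip_special_tokens=True))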
Overall comparison on AmbiQT

We present the results of the system comparison in Table 3. We make the following observations:

• State-of-the-art Text-to-SQL models cannot handle ambiguity: Existing approaches, including T5-3B, ChatGPT, and RESDSQL among others, fail to cover both alternatives in the top-5 even when they perform reasonably under the EitherInTopK heading. Surprisingly, despite being SOTA on the SPIDER dataset, RESDSQL sees its coverage plummet under ambiguity. We observed that it often produced outputs that corresponded to neither of the alternatives. This behavior was also exhibited by T5-3B, for example by using aggregates such as max(avg_age). Though outputs produced this way are syntactically correct, they do not correspond to any meaningful question.

• Beam search gives unhelpful token-level diversity: Although it may seem like increasing the beam width allows greater exploration and thus greater diversity, this is not the case. As Figure 4 shows, coverage increases only slightly with more outputs and actually decreases with increasing beam width.

Performance on Unambiguous Queries

Although our main focus is coverage under ambiguity, we also evaluate our proposal against the baseline T5-3B model on the dev split of SPIDER. We find that LogicalBeam does not just help on the AmbiQT benchmark but also provides gains on conventional Text-to-SQL benchmarks like SPIDER where ambiguity is limited. Table 5 shows that LogicalBeam improves the top-5 Exact-Set and Execution Match accuracies on SPIDER by 2.3% and 3.1% over the baseline, respectively. As another example, we evaluate our method on the dev split of the challenging Kaggle DBQA (Lee et al., 2021) benchmark. We observe a drastic increase in the top-5 Exact-Set and Execution Match accuracies, the latter by 35.4%. We conclude that LogicalBeam is useful across a wide range of Semantic Parsing tasks. Unlike earlier grammar-based generators like SmBoP (Rubin and Berant, 2021) that require special decoder models, our approach can work within existing LM-based models.

Ablation study

LogicalBeam has three design decisions: (1) use of a two-step approach, (2) counterfactual structural directives via plans, and (3) template-guided schema diversity. We present an ablation study where we incrementally add these changes in Table 4. The first column ("Single Stage") generates an SQL directly with a prefix for structural diversity, differing from LogicalBeam only in using a single stage. It still uses plan enforcement and branching control. We find that its coverage lags behind LogicalBeam, and by a large margin for T and P. The primary reason could be that template-guided decoding allows us to discard erroneous extensions at each decoding step. The second column ("Two Stages") shows a simple two-stage method where we generate a template without any counterfactual control and use Beam Search to fill it in. This method decouples template and schema diversity, but cannot encourage either by itself. Forcing counterfactual diversity ("+Template Diversity") boosts the coverage under Join Ambiguity and Precomputed Aggregates. Finally, encouraging Schema Diversity via our Restricted Fill-In Algorithm (LogicalBeam, the last column) significantly improves coverage for Column and Table Ambiguity.
Discussion LogicalBeam is general and need not be confined to the world of Semantic Parsing.For instance, the plan (prefix) could be replaced with any aspect of a code snippet that we wish to control.More generally, since the underlying mechanism only involves the model being faithful to the prefix and has no manual components, we could do the same with almost any Sequence-to-Sequence task (for example, political alignment in news summarization). LogicalBeam consistently improves performance both under ambiguity and in the absence of it, often by drastic margins.However, we would also like to highlight one failure mode we observed, that was also exhibited by other approaches.Consider a query "... table1 as t1 JOIN table2 as t2".On rare occasions, we observed that an identical query with t2 replaced by t3 (and t2 skipped) was also present in the choices.We believe this indicates a strong bias of the underlying model towards a particular template -so much so that it prefers this weird (t1, t3) combination to introducing template diversity.The problem of debiasing the model makes for exciting future work.It is not unique to Semantic Parsing, and, we believe, deserves attention in its own right. Conclusion In this work, we highlighted the lack of evaluation of Text-to-SQL models under ambiguity in contemporary literature.To address this, we developed AmbiQT, a novel benchmark with 3000+ challenging examples that evaluates Text-to-SQL models on four kinds of ambiguity.We demonstrated that current methods fall short of acceptable performance under ambiguity.Motivated by analyzing the errors of a T5-3B model on the SPIDER dataset, we developed a two-step approach of generating and then filling in a template.To this end, we trained a model to predict the number of JOINs and selections as a plan before the template, and controlled template diversity by enforcing appropriate plans.Beam Search was modified to enforce template adherence during in-filling.Our method aligns well with intuition and greatly improves a model's coverage under ambiguity, as measured on AmbiQT.It also delivers improvements in the absence of ambiguity, on the SPIDER and Kaggle DBQA datasets.We hope our efforts inspire future work to study generation under ambiguity in more detail, both in the domain of Text-to-SQL conversion and beyond. Limitations In this work, we curated a benchmark of ambiguous queries by perturbing SPIDER, an existing dataset.While we believe that our benchmark is a good measure of performance under ambiguity, real-life databases may exhibit more numerous as well as varied forms of ambiguity.In addition, AmbiQT only consists of examples with questions in English.Ambiguity may manifest differently based on the choice of natural language, and a corresponding study should make for interesting future work. Due to the two-step approach, LogicalBeam incurs a higher number of decoding steps as compared to an end-to-end model.However, due to using a lightweight Greedy Search for the first stage, the number of decoding steps of LogicalBeam falls not much beyond the baseline.Nevertheless, finding an optimal trade-off between decoding steps and coverage remains an intriguing challenge. At the time of writing, ChatGPT and OpenAI Codex represent the most powerful publicly available LLMs suitable for Text-to-SQL conversion and are unable to exhibit sufficient diversity under ambiguity.Future versions or models may overcome this barrier. 
A Computational Resources and Prompting Details We highlight in this section the prompts we used for prompting ChatGPT, both for the synonyms of table/column names and for the Text-to-SQL conversion on AmbiQT.We also provide the prompts we used with OpenAI Codex, and furnish details of the computational resources we used.All details provided below are specified as of June 20, 2023. A.1 Computational Resources All of our experiments were run on a single NVIDIA A100 GPU with 80GB of memory.We estimate the total GPU usage to have been roughly 500 GPU hours across training and inference.We You are a helpful assistant that assists the user in deciding alternate names for → their tables in an SQL database. Listing 1: The directive we use for asking ChatGPT to produce table synonyms. The database with database ID "[DB_ID]" currently has tables with the names [ → TABLES_STRING].Give me two alternate names for the table "[TABLE_NAME]".→ Print your output as a python list.Do not print any additional information, → formatting, explanation, or notes. Listing 2: The prompt we use for asking ChatGPT to produce table synonyms. further estimate the cost of utilizing ChatGPT and OpenAI Codex to be under 100$ in total. A.2 Synonyms (ChatGPT) For column and table synonyms, we use one-shot prompting to indicate to ChatGPT the kind of transformation we desire. For column synonyms, the overall directive and prompt are shown in Listings 1 and 2 respectively.The demonstrated example also follows the format of the prompt.[DB_ID] is the database ID of the database having the column, and [TABLE_NAME] is the name of the table containing it.A commaseparated list of all database table names in quotes is filled into [TABLES_STRING]. Similarly, for table synonyms, the directive and prompt are shown in Listings 3 and 4 respectively.In particular, [DB_ID] and [TABLE_NAME] are replaced with the database ID and table name. [COLUMN_NAMES] is a comma-separated list of columns of the specified table.The demonstrated example also follows this format. We found that asking ChatGPT to structure its output as a JSON snippet saved us the trouble of sanitizing its outputs and separating it from any decoration (comments or explanation) it produced.It also made it easier to detect invalid outputs and retry. A.3 Text-to-SQL (ChatGPT) We prompted ChatGPT in a one-shot manner for evaluation on our benchmark.This was necessary as our benchmark is built by modifying SPIDER (Yu et al., 2018).The queries are expected to be in a specific format in the spider dataset.In particular, the table aliases are always t1, t2, . . . .Further, columns are never aliased, and only unqualified JOIN is used and INNER JOINs, OUTER JOINs not used.Therefore, rather than do some ad-hoc post-correction, we showed ChatGPT one example from the original SPIDER dev set.In addition, we asked ChatGPT to structure its output as a JSON snippet, a departure from the conventional prompt as in (Liu et al., 2023).This was motivated by our observation that ChatGPT would occasionally sneak comments or notes into its queries despite our best efforts.By asking it to produce the output in a structured (JSON) format, it was much easier to detect errors and retry. 
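The structured-output-and-retry loop described above can be sketched as follows. This is only an illustration under several assumptions: it uses the openai Python package's ChatCompletion interface as it existed for gpt-3.5-turbo, the directive and prompt strings merely stand in for Listings 5 and 6, the one-shot demonstration messages are omitted, and the retry count is arbitrary.

import json
import openai

openai.api_key = "sk-..."  # placeholder

DIRECTIVE = ("You are a helpful assistant that converts provided English questions "
             "to SQL queries with respect to a provided schema.")
PROMPT = ("The schema for a database with Database ID {db_id} is: {schema} "
          "Convert the following English question to the five most plausible SQL queries ... "
          "Question: {question}")

def top5_sqls(db_id, schema, question, retries=3):
    """Ask ChatGPT for five candidate SQLs as a JSON snippet and retry on malformed output."""
    user_msg = PROMPT.format(db_id=db_id, schema=schema, question=question)
    for _ in range(retries):
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": DIRECTIVE},
                # the one-shot demonstration would be inserted here as extra messages
                {"role": "user", "content": user_msg},
            ],
        )
        text = resp["choices"][0]["message"]["content"]
        try:
            queries = json.loads(text)["queries"]
            if isinstance(queries, list) and queries:
                return queries[:5]
        except (json.JSONDecodeError, KeyError, TypeError):
            continue  # not valid JSON with a "queries" key: retry
    return []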
We use the directive and prompt shown in Listings 5 and 6 respectively.The database ID and its schema go into [DB_ID] and [SCHEMA], respectively.The question is passed at the end in the placeholder [QUESTION].Our demonstration for ChatGPT consists of using the question "Show the stadium name and the number of concerts in each stadium", and the output used for demonstration is shown in Listing 7. A.4 Text-to-SQL (OpenAI Codex) We found that asking Codex to produce multiple SQLs in the same output did not have the desired effect, as it did not usually conform to the number of outputs or the format.Therefore, we instead prompt Codex multiple times with a temperature of 0.6 (as recommended by OpenAI to elicit creativity) and a top-p of 0.7 to get its outputs.To this end, we found both zero and one-shot prompting ineffective in conveying to Codex the specific format of the output (unlike ChatGPT).In contrast, we found that few (specifically, two) shot prompting to work much better, and therefore proceeded with that alternative.Our two demonstrations as well as the query prompt follow the format of Listing 8.The output formatting is simply the SQL query string inside curly braces.The two demonstrated examples are replicated in Listing 9. B Alternate Prompts with ChatGPT Before settling on our choice, we also experimented with existing prompts used by prior work (zero-shot, as opposed to our one-shot method).In You are a helpful assistant that assists the user in deciding alternate names for → their tables' columns in an SQL database. Listing 3: The directive we use for asking ChatGPT to produce column synonyms. The database with database ID "[DB_ID]" has a table called "[TABLE_NAME]".This → table has columns with the following names: [COLUMN_NAMES] Give me two alternate names for each column.Format your output as a json snippet → with keys corresponding to column names.Do not print any additional → information, formatting, explanation, or notes. Listing 4: The prompt we use for asking ChatGPT to produce column synonyms. You are a helpful assistant that converts provided English questions to SQL queries → with respect to a provided schema. Listing 5: The directive we use while prompting ChatGPT on our benchmark. The schema for a database with Database ID [DB_ID] is: [SCHEMA] Convert the following English question to the five most plausible SQL queries → compatible with the above schema.Use simply the column name for selections in simple queries.For queries with joins → , use t1, t2, and so on as aliases for the tables, and use t1.column, t2. → column, and so on for the column selections.Structure your output as a JSON snippet with a single key "queries", mapping to a → list of alternatives.Do not print any additional information, explanation, → formatting, or notes.Question: [QUESTION] Listing 6: The prompt we use while prompting ChatGPT on our benchmark. → stadium_id = t2.stadium_id group by t1.stadium_id" ] } Listing 7: The demonstrated outputs for the one-shot example with the query "Show the stadium name and the number of concerts in each stadium". # Use the schema links to generate the SQL query for the question [SCHEMA] Convert the following English question to SQL queries compatible with the above → schema. Use simply the column name for selections in simple queries.For queries with joins → , use t1, t2 and so on as aliases for the tables, and use t1.column, t2.→ column and so on for the column selections. 
Question: [QUESTION] particular, we tried the prompt used by (Liu et al., 2023) to evaluate ChatGPT on our benchmark with minor modifications (asking for five outputs instead of one).We showcase it in Listing 10. However, as shown in Table 8, the results with this prompting method always lag behind those obtained with our main choice. Therefore, we decided to stick with our choice for the comparison in Subsection 6.3. C Exact Set Match Accuracies for the System Comparison and Ablation Study Here we report the Exact Match (EM) accuracies of our System Comparison and Ablation Study for both the EitherInTopK and BothInTopK modes of evaluation. The System Comparison on AmbiQT in terms of EM, and of the various decoding algorithms, when applied to T5-3B, are shown in Table 6.We observe that Exact Set Match (EM) follows the same trend as Execution Match (EXM) under both headings, once again demonstrating the superior coverage of LogicalBeam. The results of our Ablation Study, in turn, are shown in Table 7.The trend of EM also matches that of EXM here. D Inadequacy of Conventional Decoding Algorithms In this section, we give some anecdotes to highlight the shortcomings of conventional decoding algorithms for our purposes.The example for the case of Beam Search when used with a Text-to-SQL model was given in the main material as Figure 2. We also give here an anecdote of Nucleus Sampling in Figure 5. Strikingly, all the outputs of Nucleus Sampling are the same.This was the case for many of the examples we manually appraised.Upon further investigation, we discovered that the model produced extremely skewed probability distributions for its tokens -it was not uncommon for certain tokens to be assigned greater than 0.99 probability.This renders conventional decoding algorithms, including sampling-based methods, ineffective.Similarly, we found Beam Search (as well as sampling approaches) to be suboptimal for the case of Text-to-Template conversion, as Figure 6 exemplifies. E Examples of Templates A template is generated by abstracting away column names, table names, integer constants, and string literals from an SQL query.While these are only a small fraction of the various features of the SQL query, they represent a disproportionately large percentage of viable alternatives -for instance, a column name may be replaced by any of the numerous other ones to generate an (otherwise useless) alternative.By abstracting away these details, we avoid generating spurious alternatives by swapping these features with other ones at the template generation stage.In addition, by generating, e.g., column instead of t1.column for t1.name, we avoid trivial alias swaps.Some examples of templates for a few SQL queries are shown in Table 9, and the replacements carried out for each kind of abstraction are outlined in Table 10. F Example Outputs From the Systems We showcase example outputs from three chosen systems -our method, ChatGPT, and T5-3B on the various kinds of ambiguities of AmbiQTin Figures 7 through 10.Note that the first two outputs of our approach are from T5-3B.We observe that our approach is more consistent than the other two in incorporating all the possible queries. Figure 1 : Figure 1: A Text-to-SQL system converts a user question to an SQL query, conditioned on the database schema and/or content. Figure 4 : Figure 4: The coverage only increases slightly with more outputs, and decreases with increasing beam width.The x-axis varies the controlled hyperparameter, while the y-axis reports coverage. 
Figure 6: Vanilla Beam Search is inadequate to elicit meaningful template diversity. In particular, diversity in the number of JOINs or selections is lacking.

Table 1: The AmbiQT benchmark. For each question, we list two valid SQL queries as per the schema. The schema is not shown here, but the ambiguity in it can be inferred from the two SQL queries.

Table 2: A catalog of errors on the SPIDER dev split, based on Exact Match (EM), corresponding to the top-1 output from a Beam Search with a beam width of 25. Most errors stem from an incorrect number of JOINs or SELECTions, with incorrect schema names being a concern as well.

Figure 3: Our approach in its entirety. A counterfactual template generation step provides template diversity via Prefix Enforcement. Constrained infilling generates content diversity by restricting branching and enforcing template adherence.

Table 3: The results of all compared systems on AmbiQT as portrayed by Execution Match (EXM) accuracy in the top-5 outputs. LogicalBeam usually performs the best under the EitherInTopK heading, except for Precomputed Aggregates. More importantly, LogicalBeam consistently outperforms all other systems under the BothInTopK heading. This shows the capacity of LogicalBeam to capture greater meaningful diversity in its outputs.

Table 5: The Exact-Set and Execution Match accuracies of LogicalBeam on two popular Text-to-SQL datasets, SPIDER and Kaggle DBQA. Despite the datasets not exhibiting ambiguity, LogicalBeam delivers significant improvements over the T5-3B baseline.

Table 6: The Exact Set Match (EM) Accuracy of the compared systems.

Listing 8: The format of both the demonstrated and query examples.

Listing 9: The two demonstrated examples. Question 1: List the official name and status of the city with the largest population. Query 1: SELECT official_name, status FROM city ORDER BY population DESC LIMIT 1. Question 2: Show the stadium name and the number of concerts in each stadium.

Listing 10: An alternate prompt used by prior work that we tried.

Figure 5: Nucleus Sampling shows virtually no diversity in top-5 outputs due to a highly skewed probability distribution leading to the same tokens being sampled each time; all five outputs were SELECT name, nationality, age FROM singer ORDER BY age DESC.

Table 9: Examples of templates.

Table 10: The abstractions in a template.
8,986.6
2023-10-20T00:00:00.000
[ "Computer Science" ]
Passivation mechanism of thermal atomic layer-deposited Al2O3 films on silicon at different annealing temperatures Thermal atomic layer-deposited (ALD) aluminum oxide (Al2O3) acquires high negative fixed charge density (Qf) and sufficiently low interface trap density after annealing, which enables excellent surface passivation for crystalline silicon. Qf can be controlled by varying the annealing temperatures. In this study, the effect of the annealing temperature of thermal ALD Al2O3 films on p-type Czochralski silicon wafers was investigated. Corona charging measurements revealed that the Qf obtained at 300°C did not significantly affect passivation. The interface-trapping density markedly increased at high annealing temperature (>600°C) and degraded the surface passivation even at a high Qf. Negatively charged or neutral vacancies were found in the samples annealed at 300°C, 500°C, and 750°C using positron annihilation techniques. The Al defect density in the bulk film and the vacancy density near the SiOx/Si interface region decreased with increased temperature. Measurement results of Qf proved that the Al vacancy of the bulk film may not be related to Qf. The defect density in the SiOx region affected the chemical passivation, but other factors may dominantly influence chemical passivation at 750°C. Background Excellent surface passivation is required to realize the next-generation industrial silicon solar cells with high efficiencies (>20%). Silicon oxide films thermally grown at very high temperatures (>900°C) are generally used to suppress the surface recombination velocities (SRVs) to as low as 10 cm/s and applied in front-and rear-passivated solar cells. In recent years, atomic layer-deposited (ALD) aluminum oxide (Al 2 O 3 ) thin films have been investigated as candidate surface passivation materials [1][2][3]. ALD Al 2 O 3 thin films enable perfect passivation similar to high-quality thermally grown silicon oxide and can be prepared at low temperatures (<300°C). Given that the silicon bulk lifetime is sensitive to high temperatures, ALD Al 2 O 3 has a natural advantage over thermal SiO 2 in terms of integration into industrial cell processes. Extensive experiments on Al 2 O 3 film applications in photovoltaics have demonstrated that Al 2 O 3 can passivate both low-doped n-and p-type silicons. ALD Al 2 O 3 also exerts a better passivation effect on p + -type emitters than other dielectric layers. Very recently, Hoex et al. [4] found that Al 2 O 3 can also enable high-surface passivation for n + -type emitters within the range of 10 to 100 Ω/sq. Low SRVs for dielectric passivation are attributed to two passivation mechanisms: chemical passivation and fieldeffect passivation [5,6]. Chemical passivation (e.g., thermal SiO 2 films) decreases the interface defect density (D it ). In dielectric layers such as SiN x and Al 2 O 3 , a high fixed charge density (Q f ) near the silicon surface generates an electric field, repelling electrons or holes to reduce carrier recombination on the surface. Thermal ALD Al 2 O 3 reportedly acquires a negative Q f as high as 10 13 cm −2 with sufficiently low D it (about 10 11 eV −1 cm −2 ) after annealing [7,8]. Experiments have shown that the fixed charge located near the Al 2 O 3 /Si interface is related to some types of defect proposed as Al vacancies, interstitial O, and interstitial H in Al 2 O 3 film or at the interface [5]. Positron annihilation is a useful technique for vacancy-type defect investigation. Edwardson et al. 
[9] performed Doppler broadening of annihilation radiation (DBAR) studies and found an interface that traps positrons in an ALD Al 2 O 3 sample, which significantly differed from the S-W result of DBAR in the current work. The discrepancy can be attributed to the different annealing conditions. In the present study, the effect of annealing temperature on the surface passivation characteristics of Al 2 O 3 films was investigated. Corona charging experiments were performed to distinguish between chemical and field-effect passivation mechanisms. Slow positron beam DBAR measurements were performed to probe the defects in Al 2 O 3 films annealed at 300°C, 500°C, and 750°C. Experimental Aluminum oxide films were deposited onto a 1 to 10 Ωcm p-type Czochralski Si (100) substrate using the thermal ALD method. The 420-μm-thick double-sided polished wafers were cleaned using the RCA standard method and dipped in 1% hydrofluoric acid for 1 min before deposition to remove the native oxide layer on the surface. Thermal ALD Al 2 O 3 films about 23 nm thick were prepared with Al (CH 3 ) 3 and H 2 O as reactants at 250°C. The optimum deposition temperature that led to the highest as-deposited effective lifetime was determined to be 250°C. Double faces were deposited to prepare symmetrical Al 2 O 3 /Si/Al 2 O 3 . After deposition, the samples were annealed at different temperatures (300°C to 750°C) for 10 min in air. Annealing in air was performed because it closely resembles the firing condition in the manufacturing process of solar cells. The effective lifetimes of these samples were measured before and after annealing, and a negative Q f of the Al 2 O 3 films was obtained using corona charging measurements using Semilab WT2000 (Semilab Semiconductor Physics Laboratory Co. Ltd., Budapest, Hungary). DBAR measurements of the three annealed samples (300°C, 500°C, and 750°C) were performed to investigate the defects in the films. A slow beam of positrons that had variable energies (<10 keV) was used to obtain information from the thin films. Corona charging measurement The effective lifetime of the annealed samples was measured using the microwave photoconductive decay method. Corona charging experiments were performed to determine Q f [10]. As a positive charge was added stepwise to the film surface using a corona, the effective lifetime decreased until the positive charge was totally balanced with the negative fixed charge and then increased because the positive charge also enabled field-effect passivation. Thus, the negative Q f was equal to the amount of added corona charge density (Q c ) at the minimum point of the τ eff -Q c curve. The surface passivation mechanism comprises chemical passivation and field-effect passivation. Thus, the minimum effective lifetime was also obtained to determine the role of chemical passivation because the effective lifetime is mainly controlled by chemical passivation when the negative charge is neutralized. Figure 1 shows the typical corona charging measurement for the as-deposited Al 2 O 3 /Si sample. Q f before annealing was determined as −3.5 × 10 11 cm −2 from the curve, and the lowest lifetime was recorded as 42.8 μs to characterize the chemical passivation of the sample. DBAR measurement Positron annihilation is used to analyze defects in oxides and semiconductors [11][12][13]. When a positron is implanted into a matter, it annihilates an electron and emits two γ rays. 
The energy of γ rays varies around 511 keV because of the energy and momentum conservation of the positronelectron system given by the relation E γ = 511 ± ΔE γ keV, where ΔE γ is the Doppler shift. Even a slight change in momentum can lead to a large shift of energy. The S and W parameters were calculated to characterize Doppler broadening. The S parameter is defined as the ratio of the mid-portion area to the entire spectrum area. The W parameter is the ratio of the wing portion to the entire area. With increased concentration of vacancy in solid, the positron is mostly trapped and annihilates low-momentum electrons, leading to a narrow Doppler peak with a high S parameter. W parameters are higher and S parameters are lower when annihilation of the core electrons of atoms occurs. Given that the momentum distribution of electrons varies in different types of defect, changes in S-W plots can also characterize the types and distributions of defects in the films [14]. Influence of annealing temperature on surface passivation The effective lifetimes of the samples annealed at different temperatures in air are shown in Figure 2. The effective lifetime change is the ratio of the effective lifetime after annealing to that of the effective lifetime before annealing. The ratio was used instead of the actual value because the effective lifetimes of the six as-deposited samples (before annealing) were not strictly identical, which rendered meaningless the observation of the absolute value of the effective lifetime after annealing. The effective lifetime change initially increased with increased annealing temperature and then rapidly decreased below unity. This result indicated that passivation collapsed at annealing temperatures higher than 700°C. The optimum annealing temperature was around 500°C in air, which was higher than the reported 400°C to 450°C when annealed in N 2 [15]. Corona charging measurement was performed to observe the field-effect and chemical passivation mechanisms. Q f and the lowest lifetime can be extracted from the resulting measurement curve, as described in the section 'Corona charging measurement.' Figure 3a shows the measured data, and Figure 3b shows the Q f and the minimum effective lifetime change (lowest lifetime after annealing vs. as-deposited value) as a function of the annealing temperature. Q f significantly increased to 10 12 cm −2 after annealing at 400°C compared with Q f of about 10 11 cm −2 before annealing ( Figure 1). Q f increases from 2.5 × 10 11 cm −2 at 300°C, reaches the highest point of about 2.5 × 10 12 cm −2 at 500°C, and thereafter decreases to 8 × 10 11 cm −2 . Q f did not significantly change when the annealing temperature was higher than 600°C. Meanwhile, the effective lifetime of the sample annealed at 300°C was slightly enhanced (Figure 2), i.e., 1.2 times greater than that of the as-deposited sample. This result indicated that Q f of 2.5 × 10 11 cm −2 did not significantly affect surface passivation. The chemical passivation variation at 300°C to 500°C was similar to Q f based on the minimum lifetime in the corona charging measurement. The chemical passivation effect increased with increased annealing temperature before 500°C and quickly decreased thereafter. This variation was related to the hydrogen release from the film found by Dingemans [16]. Notably, Q f reached 10 12 cm −2 after annealing at 750°C, and this value was almost one magnitude higher than that of the as-deposited sample. 
However, the effective lifetime was low (Figure 2) because of the poor chemical passivation at 750°C, as shown by the minimum lifetime change value in Figure 3b. Therefore, chemical passivation was a prerequisite for achieving excellent surface passivation. The approximate effective lifetime τeff of a symmetrically passivated silicon wafer can be expressed as 1/τeff = 1/τb + 2Seff/W, where τb is the bulk lifetime, W is the crystalline silicon (c-Si) wafer thickness, and Seff is the effective SRV. The bulk lifetime was estimated at about 1 ms using the I2 passivation method to determine Seff. Figure 4 shows that Seff was linear with 1/Qf² for negative Qf values >6.8 × 10¹¹ cm⁻², except for the sample annealed at 750°C. The linear relationship of the samples annealed between 400°C and 700°C indicated that passivation was dominated by field-effect passivation (Qf). Thus, the sample annealed at 300°C (the point that deviates from the fitted line) indicated that a Qf of 2.5 × 10¹¹ cm⁻² was too low to dominate surface passivation, which confirmed the conclusion drawn from Figure 3. This result also agreed with the simulation of Hoex et al. for p-type c-Si [5]. Based on the deviation of the sample annealed at 750°C, a high interface trap density was inferred to destroy the field-effect passivation and increase Seff. DBAR analysis at different annealing temperatures DBAR analysis was performed at the Beijing Slow Positron Beam (Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China). A positron beam generated from a 22Na radioactive source was used, and the energy of the positrons was modulated between 0 and 10 keV to obtain the incident energy profile of positron annihilation. The energy region of the S parameter ranged from 510.24 to 511.76 keV, whereas the W parameter ranged from 504.2 to 508.4 and from 513.6 to 517.8 keV. Thus, the total energy region of the peak ranged from 504.2 to 517.8 keV. The vacancy defects in the alumina films were mainly Al vacancies, O vacancies, and clusters of vacancies (voids) [13,17,18]. O vacancies with a positive charge (F+- and F2+-type defects) have difficulty trapping positrons because of their identical charge. Takahashi et al. [19] calculated the defect energetics using first-principles calculations and found that the oxygen vacancy has a much higher formation energy than the aluminum vacancy, further supporting the view that few positrons are trapped in charged O vacancies. Therefore, Al and neutral O vacancies (F centers) are crucial to the annihilation results in the present study. Figure 5a,b shows the measured S and W parameters as a function of the incident positron energy for samples annealed at different temperatures for 10 min. In Figure 5a, the shapes of the three curves are similar because the deposition conditions of the three films were identical, and the substrates on which these films grew were also the same. The first one or two points were recorded at low positron incident energy (<0.5 keV), which can be ascribed to the trap states near the film surface. The S parameter at injection energies approximately between 0.5 and 2 keV mainly represented the annihilation events occurring in the aluminum oxide film. Figure 5a shows that the S parameter initially increased rapidly, which indicated a higher vacancy defect density in the inner oxide film than at the surface. A decrease was observed beyond 1 keV, demonstrating that the S parameter of the Al2O3/Si interface was lower than that of the Al2O3 films.
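For reference, the S and W parameters described above can be computed directly from a binned annihilation spectrum using the energy windows quoted in this section (510.24–511.76 keV for S; 504.2–508.4 keV and 513.6–517.8 keV for W; 504.2–517.8 keV for the whole peak). The sketch below is illustrative only, with synthetic counts standing in for measured data:

```python
import numpy as np

def s_w_parameters(energy_kev, counts):
    """Compute Doppler-broadening S and W parameters from a binned
    annihilation-peak spectrum, using the energy windows quoted in the text."""
    energy_kev = np.asarray(energy_kev, dtype=float)
    counts = np.asarray(counts, dtype=float)

    total = counts[(energy_kev >= 504.2) & (energy_kev <= 517.8)].sum()
    s_area = counts[(energy_kev >= 510.24) & (energy_kev <= 511.76)].sum()
    w_area = counts[((energy_kev >= 504.2) & (energy_kev <= 508.4)) |
                    ((energy_kev >= 513.6) & (energy_kev <= 517.8))].sum()

    return s_area / total, w_area / total

# Example with synthetic data: a Gaussian-broadened 511 keV peak on an illustrative channel grid.
energy = np.arange(500.0, 522.0, 0.05)
counts = np.exp(-0.5 * ((energy - 511.0) / 1.2) ** 2) * 1e5
S, W = s_w_parameters(energy, counts)
print(f"S = {S:.3f}, W = {W:.3f}")
```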
The lower S parameter at the interface can be attributed to positron annihilation with the high-momentum electrons of oxygen at the interface. This result was probably due to the SiOx layer grown between the aluminum oxide and the Si substrate, which reportedly has an important function in excellent surface passivation [6,20,21]. The S parameter continued to increase beyond 2 keV with increased incident energy because larger portions of positrons were injected into the silicon substrate. The S parameter in the substrate was much higher than that in the oxide film because of the different chemical environments of annihilation. The S parameter did not reach a constant value before 10 keV, which implied that positrons with 10 keV energy cannot reach regions of the Si substrate far from the oxide layer. The S-E plot in Figure 5a also shows that the S parameter in the Al2O3 films (at about 1 keV) evidently decreased with increased annealing temperature because of the decreased density of trap vacancies in the Al2O3 films. The W parameter was more sensitive to the chemical environment of the annihilation site. Larger W and smaller S parameters indicated more positrons annihilating with core electrons. Thus, the smallest S and largest W parameters of the sample annealed at 750°C (Figure 5a,b) implied that the Al2O3 films had been compressed at this temperature with the lowest vacancy defect density and that the film structure probably did not change. The S and W parameters at the same incident energy were plotted in one graph, as shown in Figure 5c. The S vs. W diagrams of monolithic materials present clusters of points because all S or W parameters are almost the same [14]. For example, for one type of defect, the S and W parameters may vary with the positron incident energy, and the S-W plot extends along the line passing through the data point of the defect-free bulk region [13,14]. The slope of the line changes with layers of different compositions and defect types. Thus, each curve consisting of three extended line segments (Figure 5c) indicated that the annealed sample had a three-layered structure. This finding corresponded with the S-E curve analysis result, which also suggested that the film contained a layer different from Al2O3 and Si near the interface. No significant interface response in the S-W result has been previously observed [9], and the discrepancy may be a result of the different annealing environments (air vs. N2). Annealing in air may lead to a thicker interface oxide (SiOx), resulting in more evident responses in the DBAR result. The different slopes of the Al2O3 segments of the three samples indicated that the defect types or chemical environments of these samples were different. The three lines crossed one another rather than passing through the single point of the defect-free bulk sample, indicating that each of the samples had more than two types of defect. As mentioned in the section 'DBAR analysis at different annealing temperatures,' the S parameter was mainly influenced by Al and neutral O vacancies. Thus, residual C from deposition and the O-H bond content also possibly influenced the S-W line slope. Residual C varied with the annealing temperature and may have thus influenced the environment of Al vacancies, although further investigations are needed. A thinner sample was prepared to understand the microstructure of the Al2O3/Si samples, which showed a three-layered structure in DBAR analysis.
The 6-nm -thick sample was obtained using thermal ALD and observed by transmission electron microscopy (TEM). The fitted S parameter can be clearly analyzed in different parts of a film to gain accurate information from DBAR spectroscopy. In this study, the energy of injected positrons had a different distribution at the positron incident energy of the X-axis in the S-E plot. The positrons also reached different layers of the film. Thus, the S parameter of each point in the S-E plot contained integrated information on multiple layers. The S parameter was separated in different layers, and the density/type of vacancies was analyzed at different positions in the film. The S-E plot was fitted using the VEPFIT program to calculate the S parameter from different layers using a four-layered mode, which corresponded to the surface/Al 2 O 3 /SiO x /Si structure observed by TEM. The obtained S parameter is shown in Figure 7. The S parameter in the Al 2 O 3 films decreased with increased temperature, indicating that the vacancy density in the Al 2 O 3 film decreased with increased annealing temperature. The S parameter was much lower in the SiO x layer than that in Al 2 O 3 and the Si substrate. The S parameter also decreased with increased annealing temperature, which probably corresponded with the dominant P b defect that decreased with increased annealing temperature [22]. Al vacancies, O interstitials, and H interstitials are proposed as the reasons for the negative Q f of Al 2 O 3 [23,24]. The measured Q f in Figure 3 and information on Al vacancies in Figure 7 were considered in analyzing the effect of Al vacancy density on the negative fixed charge Q f . With increased annealing temperature from 300°C to 500°C, the increase in Q f was opposite to the decrease in Al vacancy in the bulk film. Thus, Q f may not be related with Al vacancies in the Al 2 O 3 films. The measured minimum effective lifetime in Figure 3 and S parameters of SiO x interface in Figure 7 were correlated, and the decrease in vacancy of SiO x was coincident with the enhanced chemical passivation at annealing temperatures lower than 500°C. However, the chemical passivation breakdown at 750°C cannot be explained: among the samples annealed at 300°C and 750°C, the chemical passivation at 750°C was the poorest, but the defect density at the interface region still decreased. The functions of interstitial atoms (O or H) near the interface require further investigation. Conclusions Q f did not significantly affect the passivation at a low annealing temperature (300°C). The interface trap density markedly increased at a high annealing temperature (750°C) and failed at surface passivation even at a high Q f . Positron annihilation techniques were used to probe the vacancy-type defects. A three-layered microstructure of thermal ALD Al 2 O 3 films on Si substrate was found. The Al defect density in the bulk film and the vacancy density near the interface decreased with increased temperature based on the fitted S parameter at different positions in the Al 2 O 3 films. The Al vacancy of the bulk film was not related to Q f based on the Q f measurement results. The effects of interstitial atoms on Q f need further investigation. The defect density in the SiO x region may affect chemical passivation, but other factors may also influence chemical passivation particularly beyond 500°C.
4,835.6
2013-03-02T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
Predictive Modeling of Uniaxial Compressive Strength of Rocks for Protecting Environment Using Artificial Neural Network : Sedimentary rocks provide information on previous environments on the surface of the earth. As a result, they are the principal narrators of former climate, life, and important events on the surface of the earth. The complexity and expense of direct destructive laboratory tests aggravate the data scarcity problem, making the development of intelligent indirect methods an integral step in attempts to address the problems faced by rock engineering projects. This study established an artificial neural network (ANN) approach to predict the uniaxial compressive strength (UCS) in MPa of soft sedimentary rocks using different input parameters, i.e., dry density (ρd) in g/cm3, Brazilian tensile strength (BTS) in MPa, and point load index (Is(50)) in MPa. The developed ANN models M1, M2 and M3 were based on the overall dataset; a 70% training and 30% testing split; and a 60% training and 40% testing split, respectively. In addition, multiple linear regression (MLR) was performed to compare with the proposed ANN models to verify the accuracy of the predicted values. The performance indices were also calculated for the established models. The predictive performance of the M3 ANN model, with the highest coefficient of correlation (R2), the smallest root mean squared error (RMSE), the highest variance accounted for (VAF) and a reliable a10-index of 0.99, 0.00060, 0.99 and 0.99, respectively, at the testing dataset, revealed ideal results, and it is proposed as the best-fit prediction model for UCS of soft sedimentary rocks at the Thar Coalfield, Pakistan, among the other models developed in this study. Moreover, by performing sensitivity analysis, it was determined that the BTS and Is(50) were the most influential parameters in predicting UCS. Introduction Sedimentary rocks provide information about the previous environment of the Earth's surface. As such, they are the primary narrators of climate, life, and important events that occurred on the Earth's surface in the past. Uniaxial compressive strength (UCS) is an essential rock strength parameter widely used in the design of rock structures (Madhubabu et al. 2016; Asheghi et al. 2019). UCS is an integral parameter in rock characterization, tunnel construction, slope stability analysis, construction, bridges, and other rock-related complications (Abdi et al. 2019; Abdi et al. 2018; Shahri et al. 2020; Barzegar et al. 2019; Gockceoglu et al. 2004; Baykasoğlu et al. 2008). Direct estimation of UCS based on the principles of the ISRM (International Society for Rock Mechanics) and ASTM (American Society for Testing and Materials) is a complex, time-consuming, and expensive procedure. It makes testing infeasible for engineering projects where a large amount of data is needed. To overcome these shortcomings, this study establishes artificial neural network predictive models for the estimation of UCS. Many research scholars have established predictive methods to deal with such complex problems using various statistical methods such as the artificial neural network (ANN) and the adaptive neuro-fuzzy inference system (ANFIS) (Tiryaki 2008; Ozcelik et al. 2013; Rajesh-Kumar et al. 2013; Kong et al. 2018; Teymen et al. 2020; Kamani et al. 2020; Cabalar et al. 2012; Bashari et al. 2011; Umrao et al.
2018).Currently, intelligent methods like ANN, ANFIS, PSO (particle swarm optimization), and GA (Genetic Algorithm) are frequently applied to solve problems related to rock structure design (Asheghi et al. 2019), and these methods are considered to be fast, economical, and have achieved good agreement between the measured and predicted values of rock mechanical properties, i.e., UCS and E (modulus of elasticity in MPa), etc. (Teymen et al. 2020).(Torabi-Kaveh et al. 2015) employed ANN and multiple regression methods to estimate UCS and their findings indicated that the ANN method performed better.Yagiz et al (Yagiz et al. 2012) analyzed ANN and multiple regression for predicting UCS of carbonate rocks and found that the ANN method is in good agreement with traditional multiple regression.(Ceryan et al. 2013) also employed the ANN and regression methods to predict the UCS of carbonate rocks and proposed that the ANN results were significantly accurate.(Mohamad et al. 2020) used a PSO-based ANN method to estimate the UCS of soft rocks with input parameters of Brazilian tensile strength (BTS) in MPa, point load index (Is(50)) in MPa, and ultrasonic (Vp) in m/s, and demonstrated the high performance of the proposed model.ANN method has proved to be a key method among all intelligent methods and is therefore mostly used to solve challenging problems that are reliant on laboratory experimental data for the reason of their high efficiency and ability to learn from inputs (Aboutaleb et al. 2018).Based on the reliable predictions of ANN methods, some researchers have estimated various mechanical properties of rocks by analyzing the correlation among various physical parameters (Bejarbaneh et al. 2018;Fakir et al. 2017). (Yin et al. 2020) employed ANN back-propagation algorithm, which has been considered as the best prediction method based on the previous studies.Table 1 shows previous studies using intelligent methods to predict UCS.This study applied the ANN approach to estimate UCS with different input parameters such as dry density (ρd) in g/cm 3 ; Brazilian tensile strength (BTS) in MPa; and point load index (Is(50)) in MPa.A total of 37 soft sedimentary rock samples of each type of core rock were randomly selected from Block IX of the Thar coalfield.For the developed ANN models, the dataset is distributed as follows: model 1 (M1) is the overall dataset, model 2 (M2) consists of 70% of the training dataset and 30% of the testing dataset, and model 3 (M3) consists of 60% of the training dataset and 40% of the testing data set.Similarly, simple regression and multiple regression analyses were performed for comparison with the proposed ANN model to check the accuracy of the predicted values.The performance indices are also calculated by estimating the established models.Besides, to determine the effect of each variable on the estimated values of UCS, a sensitivity analysis was performed.Complexity and expensiveness of direct destructive laboratory tests is adversely affects the data scarcity problem, making development of intelligent indirect methods an integral step in attempts to address the problem faced by rock engineering projects. Building dataset In this study, soft sedimentary rock samples were collected from Block IX of the Thar Coalfield, Pakistan.Fig. 
1 represents the geological site of the collected rock samples. Initially, a total of 37 randomly selected rock cores of each type were prepared and subdivided into standardized samples according to the ISRM and ASTM standards to maintain the same rock core dimensions and geological and geotechnical features. Next, these rock samples were tested in the laboratory at the Department of Mining Engineering, Mehran University of Engineering and Technology to determine the physical and mechanical parameters, namely ρd in g/cm3, BTS in MPa, Is(50) in MPa and UCS in MPa, using a universal testing machine (UTM) and a Point Load Testing Device (TS-706), as shown in Fig. 2. Table 2 presents the dataset of physical and mechanical parameters. Table 3 shows the minimum, maximum, average, and standard deviation of the parameters of the rock samples determined in the laboratory. Methods The ANN approach was employed to predict UCS with three corresponding inputs: ρd (g/cm3), BTS (MPa), and Is(50) (MPa). Fig. 5 demonstrates the flow chart of the predictive modeling process for UCS. The dataset of 37 samples was divided for the established models (M1, M2 and M3) as presented in Table 4. Moreover, a cosine amplitude method-based sensitivity analysis was carried out in order to estimate the influence of each variable on the output. Artificial Neural Network The concept of the artificial neural network (ANN) was originally introduced by Frank Rosenblatt in 1958 (Alexx 2001). ANN is considered to be the most common and effective soft computing technique (Alizadeh et al. 2018; Asteris et al. 2019), based on the functioning of the human brain's nervous system (Ly et al. 2020; Pham et al. 2020; Le et al. 2020a; Le et al. 2020b). This technique is mainly used to solve complex rock structure design problems, e.g., in mining, civil, geotechnical, and geological engineering. The ANN structure is an essential factor in designing the ultimate prediction model, as the structure affects the learning capability and performance when estimating the network data. The ANN is structured with three layers (i.e., input layer, hidden layer, and output layer) with a number of interrelated units, called neurons, and the method is used to identify the appropriate correlation between the specified input and output parameters (Asteris et al. 2019; Pham et al. 2020). Fig. 6 shows the structure of the ANN used to estimate UCS in this research. Because of the complexity of the problem, each neuron must have sufficient capacity, and each neuron is related to the weights of the next layer (Rashidian et al. 2014; Fidan et al. 2019; Gowida et al. 2019). Eq. 1 is used to evaluate the approximate number of neurons in the hidden layer, since the improper selection of the number of neurons in the hidden layer often leads to "under-fitting" or "over-fitting" and must be avoided. The ANN toolbox in the MATLAB 2018a package was used in this study to develop the feed-forward back propagation (FFBP) ANN model with a 3-7-1 architecture. BP is one of the most commonly applied and powerful learning algorithms in multilayer networks (Hajihassani et al. 2014; Ekemen Keskin et al. 2020). The predictive input parameters ρd, BTS, and Is(50) were allocated to an input layer composed of three neurons to predict UCS at the output layer. The ANN models M1, M2 and M3 were trained, tested and validated. One hundred epochs were used to train the models, and the minimum validation error was used as the stopping point to prevent overfitting. Fig. 7 represents the validation curves for the training performance of the ANN models of UCS.
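As a rough illustration of the network just described, a 3-7-1 feed-forward model can be sketched with scikit-learn standing in for the MATLAB ANN toolbox. The data below are synthetic placeholders (not the 37-sample Thar Coalfield dataset), and the solver choice is an assumption made here for small-sample stability rather than the study's exact back-propagation settings:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

# Illustrative placeholder data: columns are rho_d (g/cm3), BTS (MPa), Is(50) (MPa).
rng = np.random.default_rng(0)
X = rng.uniform([1.8, 0.5, 0.2], [2.6, 6.0, 3.0], size=(37, 3))
y = 2.5 * X[:, 1] + 4.0 * X[:, 2] + rng.normal(0, 0.5, 37)   # synthetic "UCS"

# M3-style split: 60% training, 40% testing.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=1)

scaler = StandardScaler().fit(X_tr)
model = MLPRegressor(hidden_layer_sizes=(7,),   # 3-7-1 architecture: 3 inputs, 7 hidden neurons, 1 output
                     activation="tanh",
                     solver="lbfgs",            # assumption for a tiny dataset, not the study's setting
                     max_iter=100,              # cf. the 100 training epochs used in the study
                     random_state=1)
model.fit(scaler.transform(X_tr), y_tr)
print("Test R^2:", model.score(scaler.transform(X_te), y_te))
```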
As shown in Fig. 7, the M3 model demonstrates the best performance curve for UCS, with a validation error equal to 0.014, reached at epoch 0. Fig. 8 illustrates the scatter plots of the predicted UCS against the measured UCS, as M1 for the overall dataset, and M2 and M3 for the training and testing datasets, respectively. Multiple Linear Regression SPSS (version 23) was used to conduct a multiple linear regression (MLR) analysis to determine the existence of a linear relationship between the dependent variable and the independent variables. Regression analysis is used to determine the independent variables' significance in determining the dependent variable's values (Sajid 2020a; Sajid et al. 2017). More precisely, the purpose of the regression analysis in this study is to compare the performance of the ANN analysis to that of conventional linear regression. This approach is also used in several recent studies on the application of artificial neural networks and linear regression analysis (Sajid 2020b). The basic linear regression equation (Eq. 2), modified to include our dependent and independent variables, is y = β0 + β1x1 + β2x2 + β3x3, where y represents the dependent variable, β0 represents the regression constant, βi represents the regression coefficient, and xi represents the value of the independent variable. Model Evaluation This study used the ANN and MLR methods. To verify the prediction results of the models M1, M2 and M3, the performance indices were calculated. The outcomes of all established models are illustrated as measured and predicted values. Eqs. 3, 4, 5 and 6 were used to find the R2, RMSE, VAF and a10-index of each model, respectively; for instance, VAF = [1 − var(ym − yp)/var(ym)] × 100 (5). In addition, to further assess the reliability of the models, a new engineering index, the a10-index = m10/n (6), was applied to the studied models, where ym is the measured value, yp is the predicted value, ȳm and ȳp are the means of the measured and predicted values, respectively, n is the number of samples in the dataset, and m10 denotes the number of samples with a ratio of measured to predicted UCS between 0.90 and 1.10. The first step is to determine whether the data under consideration are appropriate for linear regression analysis. Numerous tests are suggested in the literature for this purpose. Apart from R2, another frequently used test is the ANOVA test. In the first case, linear regression was used to determine the relationship between the dependent variable, measured UCS, and the three independent variables ρd, BTS, and Is(50). In Table 6, the R2 values of UCS are estimated using the different equations of the MLR models M1, M2 and M3 on the overall dataset and the training and testing data, i.e., 0.65 for M1, 0.62 and 0.83 for M2, and 0.65 and 0.84 for M3, respectively. Therefore, the R2 values of UCS are quite satisfactory for the M1, M2 and M3 models. Furthermore, the ANOVA test also rejected the null hypothesis at a significance value of P < 0.001. Taylor Diagram Taylor's diagram provides a concise numerical and graphical summary of how well the fitted patterns match the observations in terms of their correlation and standard deviation. The expression of the Taylor diagram can be written as in Eq. 7, R = [(1/Z) Σ (ln − l̄)(mn − m̄)] / (σl σm), where R denotes the correlation, Z denotes the number of discrete points, l and m represent the two variables, σl and σm are the standard deviations of l and m, and l̄ and m̄ denote the averages of l and m.
Fig. 15 indicates the Taylor diagrammatic correlation between the R2, RMSE and standard deviation of the original and predicted UCS for the M2 and M3 ANN and MLR models at the testing stage, respectively. The prediction of the M3 ANN model is highly correlated with the original values and, compared to the other developed models, its standard deviation is similar to that of the original values. Thus, the M3 ANN model with R2 = 0.99 is the most suitable for predicting the UCS of soft sedimentary rocks in the Thar Coalfield, Pakistan, among the other developed models. In an ideal scenario, the best-fit prediction model is the one for which the R2 value is highest, the RMSE is lowest, the VAF is highest and the a10-index is reliable. Therefore, according to Fig. 15, the M3 (ANN) model at the testing dataset revealed the optimal results and is proposed as the best-fit prediction model for UCS in this study. Sensitivity Analysis It is crucial to accurately identify the most important parameters that have a great influence on rock UCS, since overlooking them can be problematic in the design of the structure. Therefore, the cosine amplitude method (Momeni et al. 2014; Ji et al. 2017) is used in this study to assess the relative influence of the input parameters on the output. The general formula of the adopted method is given in Eq. 8, rij = Σk(xik·yk) / √(Σk xik² · Σk yk²), where xik and yk are the input and output values and n denotes the number of data points during the testing stage. Finally, rij ranges between 0 and 1, providing evidence of the strength of the relationship between each variable and the target. According to Eq. 8, if rij of any parameter is 0, this indicates that there is no significant relationship between this parameter and the target. On the contrary, when rij is equal to 1 or approximately 1, a significant relationship can be considered that can greatly influence the UCS of the rocks. Fig. 16 The effect of input variables on the result of the established model. Fig. 16 shows the relationship between each input parameter (ρd, BTS, and Is(50)) of the developed model and the output (UCS). Therefore, it can be seen from the figure that the Brazilian tensile strength and the point load index are the most influential parameters in predicting UCS.
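The cosine amplitude calculation of Eq. 8 can be reproduced in a few lines; the arrays below are placeholders rather than the study's testing-stage data:

```python
import numpy as np

def cosine_amplitude(x, y):
    """Strength of relation r_ij between an input series x and the target y (Eq. 8)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sum(x * y) / np.sqrt(np.sum(x ** 2) * np.sum(y ** 2)))

# Placeholder arrays standing in for the testing-stage data (not the paper's values).
rho_d = np.array([2.1, 2.3, 2.2, 2.4, 2.0])
bts   = np.array([1.8, 2.6, 2.1, 3.0, 1.5])
is50  = np.array([0.9, 1.4, 1.1, 1.6, 0.8])
ucs   = np.array([11.0, 16.5, 13.2, 19.1, 9.4])

for name, series in [("rho_d", rho_d), ("BTS", bts), ("Is(50)", is50)]:
    print(name, round(cosine_amplitude(series, ucs), 3))
```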
Conclusions In this study, an intelligent method was used to predict the UCS of soft sedimentary rocks collected from Block IX of the Thar coalfield, using ρd, BTS and Is(50) as input parameters. The physical and mechanical properties of the rock samples were determined in the laboratory in accordance with ISRM and ASTM standards at the Department of Mining Engineering, Mehran University of Engineering and Technology. This study assessed the predictive performance of the ANN and MLR models by determining the highest R2, the smallest RMSE, the highest VAF and a reliable a10-index. For the ANN models, R2, RMSE, VAF and a10-index were 0.98, 0.02568, 0.98 and 0.98 respectively at M1; 0.87 and 0.91, 0.02932 and 0.00030, 0.99 and 0.99, and 1.03 and 1.10 respectively at the training and testing datasets of M2; and 0.91 and 0.99, 0.02232 and 0.00060, 0.97 and 0.99, and 1.04 and 0.99 respectively at the training and testing datasets of M3. In comparison, the MLR models' R2, RMSE, VAF and a10-index were 0.68, 0.00764, 0.99 and 1.07 respectively at M1; 0.62 and 0.86, 0.00001 and 0.36488, 0.81 and 0.97, and 1.06 and 0.92 at the training and testing datasets respectively for M2; and 0.65 and 0.84, 1.10895 and 0.01245, 0.82 and 0.99, and 1.01 and 1.10 at the training and testing datasets respectively for M3. Thus, the proposed M3 (ANN) model at the testing dataset yielded the optimum results and is proposed as the best-fit prediction model for UCS in this study. Finally, by performing sensitivity analysis, it was concluded that BTS and Is(50) were the most influential parameters in predicting UCS. Future Work The current study used only the artificial neural network to predict UCS in comparison with multiple linear regression; other techniques could possibly produce more suitable results. Future work could therefore expand the dataset used in this study and employ techniques such as support vector machine (SVM), random forest (RF), extreme gradient boosting (XGBoost), boosted decision tree regression (BDTR), etc. to further understand the nature of the study. Fig. 2 (a) Universal testing machine (UTM), (b) deformed rock core specimen for the Brazilian tensile strength test, (c) deformed rock core specimen for the UCS test, (d) point load testing device (TS-706), and (e) deformed rock core specimen for the point load index test. (Source for Fig. 2 (e): (Geology 2017)) Fig. 3 represents histogram plots of the original dataset in this study: (a) dry density (g/cm3), (b) BTS (MPa), (c) Is(50) (MPa), and (d) UCS (MPa). Fig. 4 shows the pairwise plot of the original dataset of the different parameters and UCS in this study. Notably, none of the parameters is strongly correlated with UCS, thus all the parameters are analyzed for UCS prediction. In addition, Fig. 4 shows a moderate positive correlation of BTS and Is(50) with UCS, whereas dry density shows a negative correlation with UCS. Fig. 9 ANN model M1 results for UCS plotted against the measured data. Fig. 11 ANN model M2 results for UCS plotted against the measured data at the (a) training and (b) testing data. Fig. 11 shows the predicted outputs of the ANN M2 model for UCS versus the measured data at the training and testing data. So, at the training and testing data, the predicted R2 values of the M2 model are 0.87 and 0.91, respectively. According to the M2 estimated results at the training data, Fig. 12a displays the aggregated comparison of predicted against measured values for UCS.
Fig. 12b shows the change in relative error between the measured and predicted values. The MSE value of M2 is 0.00086. Fig. 12c denotes the error histogram of the performed model M2. As a result, it can be seen that the errors are distributed close to zero, which indicates that the performance of the proposed model M2 is satisfactory and reliable. Similarly, for the M3 estimated outputs at the testing data, Fig. 12d exhibits the aggregated comparison of predicted against measured values for UCS. Fig. 12e denotes the change in relative error between the measured and predicted values. The MSE value achieved here is approximately 0. Fig. 12f represents the error histogram of the M3 model. Consequently, it can be seen that the errors are distributed close to zero, which indicates that the performance of the proposed model is acceptable. Fig. 13 ANN model M3 results for UCS plotted against the measured data at the (a) training and (b) testing data. Fig. 15 Demonstration of the Taylor diagram at the testing data based on the ANN and MLR. Table 1. Previous studies using intelligent methods to predict UCS. Fig. 1 Geological site of the collected rock samples. Table 2. Physical and mechanical parameters of the dataset. Table 3. The minimum, maximum, average, and standard deviation of the dataset. Table 4. The dataset distribution for the ANN and MLR models. Table 7. Performance indices of the ANN and MLR models at the overall dataset, training dataset and testing dataset for UCS.
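For reference, the performance indices reported in Table 7 can be computed from measured and predicted UCS values. The sketch below uses definitions that are standard in this literature (the coefficient-of-determination form of R2 is assumed here, although the paper calls it the coefficient of correlation), with the a10-index following the description given in the Model Evaluation section; the numerical values are placeholders:

```python
import numpy as np

def performance_indices(measured, predicted):
    """R^2, RMSE, VAF (%) and a10-index, using definitions common in the
    UCS-prediction literature; the a10-index is the share of samples whose
    measured/predicted ratio lies in [0.90, 1.10]."""
    m, p = np.asarray(measured, float), np.asarray(predicted, float)
    r2 = 1.0 - np.sum((m - p) ** 2) / np.sum((m - m.mean()) ** 2)
    rmse = np.sqrt(np.mean((m - p) ** 2))
    vaf = (1.0 - np.var(m - p) / np.var(m)) * 100.0
    ratio = m / p
    a10 = np.mean((ratio >= 0.90) & (ratio <= 1.10))
    return r2, rmse, vaf, a10

# Illustrative usage with placeholder values (not the study's data).
measured  = np.array([12.0, 15.5, 9.8, 18.2, 14.1])
predicted = np.array([11.6, 15.9, 10.4, 17.5, 14.0])
print(performance_indices(measured, predicted))
```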
4,576.6
2021-12-03T00:00:00.000
[ "Computer Science" ]
Peak hour evaluation – a methodology based on Brazilian airports This paper aims to establish a new methodology to calculate the design peak-hour passenger based on Brazilian Airport data. First, a cluster analysis using the Ward Hierarchical Method is applied grouping similar airports in terms of annual passenger throughput and EPH (Equivalent Peak-Hour). Then, we proceed with the calculation of the coefficients of variation of the aggregated hourly passenger throughput of the last seven years of the airports in each cluster. We propose the peak-hour for each airport cluster to be determined at the point where the stability of these coefficients is reached. We conclude by estimating a relationship of our proposed design peak-hour passenger as a function of the variables used to determine the clusters. Introduction The estimation of the design peak-hour passenger is critical for the design of airport terminals and their accessibility as well as for the evaluation of the level-of-service provided by an airport existing infrastructure (Piper, 1990;McKelvey, 1988;Yen, 1995).The concept of the design peak-hour passenger is associated with an hourly passenger throughput although below the absolute peak-hour recorded in a year, but still sufficient to ensure adequate level-of-service at the great majority of the operation time of the airport. The level-of-service provided to passengers is directly related to the critical moments in the airport operations and how these critical moments are perceived.There are service-standards developed and published, for example, by IATA, BAA and Aéroports de Paris (Ashford, 1988) to be used as benchmark; obviously, in off-peak hours, it is simple to assess the level-ofservice, but generally at those moments the quality-of-service is very high, therefore the airport authority should consider to define critical periods of the year to assess the quality-of-service in order to evaluate if such quality is reasonable or if it is necessary to plan for expansions. This paper establishes a new methodology for determining the design peak-hour passenger for airports based on cluster analysis by Ward Hierarchical Method, by grouping similar airports regarding annual passenger throughput and their EPH (Equivalent Peak Hour), a number derived from the division of the typical day throughput by its highest hourly throughput (Wang, 2012). The next step is to calculate the coefficient of variation of the hourly passenger throughput during the period from 2005 to 2011 in order to identify at what point stability in these movements is reached.Such level of stability is the key point for the methodology to estimate the design peak-hour passenger by means of a regression using as explanatory variables the same ones that were used to determine the clusters. Literature review In 1976, ICAO (International Civil Aviation Organization) established a study group (GE / TRAP) to investigate the traffic peak on an international basis and define approaches to improve the situation.In 1978, AACC (Airports Associations Coordinating Council) and IATA (International Air Transport Association) decided to collaborate on a study of peak-hour and airport capacity use, producing a first edition with guidelines for airport management and an updated version in 1990: Guidelines for Airport Capacity / Demand Management. 
Peak-hour passenger movement, rather than annual passenger movement, is the basis for the design of the passenger terminal and its facilities. Furthermore, the operational costs involved in running these facilities are also determined by the peaks. The most important role of the operational management of an airport is to maximize the use of existing facilities and minimize the problem of congestion. Passenger peak-hours should not be considered a problem; in reality they are a phenomenon, i.e., no matter what policy or charging structure is present at an airport, there will always be a tendency to concentrate movements in certain periods of the day due to many factors such as the interests of passengers, airline planning, fleet usage maximization, etc. According to Ashford (1997), some factors are crucial in establishing the movement of passengers in peak hours: Domestic/International; Traffic Characteristics; Geographical Location; Hub; Catchment Area; and Terminal Capacity. According to Brunetta (1999), the design peak-hour passenger is usually defined from historical data, and it may be taken as the 30th busiest hour of the year. Ashford (1997) outlined the three most important methodologies for determining the design peak-hour passenger: the Standard Busy Rate (SBR), the Busy Hour Rate (BHR) and the Typical Peak-Hour Passenger (TPHP). The problem presented by this latter methodology is the discontinuity of the curve. For instance, an airport with an annual throughput of 29,999,999 passengers derives a design peak-hour passenger of 12,000 passengers, while an airport with an annual throughput of just one more passenger (30 million/yr) would derive a design peak-hour passenger of 10,500 passengers, a quite different figure for just one more passenger a year. Wang (1999) developed for Brazilian airports a criterion based on the assumption that the highest peak-hours at airports are random at certain levels, i.e., the highest absolute peak-hours in a given year behave in a random way from year to year. To forecast peak-hour demand to a statistically significant level, it is necessary to achieve a certain stability. By means of the calculation of the coefficient of variation applied to 48 Brazilian airports, Wang concluded that stability would be achieved at 96.5% of the annual passenger throughput. So far, Wang's methodology is the only one with statistical criteria for choosing and calculating the design peak-hour; therefore, this paper uses the same principle to measure the stability of the data. A great variety of methods is used to determine the design peak-hour at airports, but none of them aims to provide unrestricted service at the absolute peak-hour, as this may result in wasted resources. There is a consensus that planning should aim to meet the demand at some level below this absolute peak, so that most passengers receive adequate service levels and only a small percentage experience the impact of congestion during very short periods of time. Data Analysis This section is divided into three subsections: 1) presentation of the variables used; 2) cluster analysis of 56 INFRAERO airports based on the 2011 annual passenger throughput; and 3) coefficient of variation of passenger movements for each cluster to establish the passenger peak-hour. The preparation of the data leads to the model to calculate the passenger peak-hour using annual passenger throughput and EPH as explanatory variables.
Description of the database The data used for this study were obtained from INFRAERO's strategic database; therefore, this paper will not present detailed information but only the results of the analysis. Basically, there are two variables derived from this database of 56 INFRAERO airports: i) the 2011 annual passenger throughput and ii) the EPH (Equivalent Peak Hour). As defined by Wang (2012), the EPH is a measure of the infrastructure usage throughout a typical day, which can be used for the evaluation of capacity and of the need for future investments. The EPH can be calculated by adding up the 24 medians (50th percentile) of the hourly passenger throughput, each taken over every day of a given year, and then dividing this sum by the maximum value of these medians. High EPH values mean high usage of the infrastructure along the day and/or high homogeneity of the data. On the other hand, a low EPH is derived from low usage of the infrastructure and/or high hourly throughput concentration. A low EPH is not desired by infrastructure providers. Cluster Analysis The cluster analysis aims to divide the elements of a group of interest into sub-groups of elements sharing similar characteristics and presenting heterogeneity when compared to the elements of another sub-group (e.g., see Al-Sultan and Marrof Khan, 1996; Kaufman and Rousseeuw, 1990; Koskosidis and Powell, 1992; Laporte et al. 1989). Techniques for building the clusters are classified, according to Hair (2009), into two types: hierarchical and non-hierarchical techniques. Hierarchical techniques are used in exploratory analyses of the data in order to identify possible clusters and the probable value of the number of groups. As for the use of non-hierarchical techniques, it is necessary that the number of groups already be pre-specified by the researcher. In this paper, we use the hierarchical method of Ward (1963), which is an agglomerative hierarchical technique. Ward's method assumes that at the beginning of the clustering process there are n groups, since each element is considered to be an isolated conglomerate. At each step of the algorithm, sample elements are grouped to form new clusters in which all elements share similar characteristics. For example, in the first step two elements are merged, leaving n-1 groups; in the second step another merge occurs, leaving n-2 groups. In this algorithm, the full hierarchy of clusters is formed after n-1 steps. At each stage of the clustering algorithm, the two most "similar" clusters are combined to form a new conglomerate. In Ward's method, cluster similarity is measured by the distance between clusters Ci and Cj, defined as d(Ci, Cj) = [ni·nj/(ni + nj)] ‖x̄i − x̄j‖², where ni and nj are the sizes of clusters Ci and Cj when the grouping process is at stage K, and x̄i and x̄j are the centroids (vectors of the variable means) of Ci and Cj, that is, x̄il = (1/ni) Σr xirl is the mean of variable number "l" of cluster Ci. At each step of the algorithm, the two clusters that minimize the distance defined above are combined. Only one new cluster may be formed in each step. Hierarchy Property: At each step, each new conglomerate formed is a grouping of clusters formed in the earlier stages. If two sample elements are grouped in the same cluster at some stage of the clustering process, they will remain grouped in all subsequent stages, that is, once united these elements cannot be separated. Due to the hierarchy property, it is possible to construct a graph called a dendrogram, which is the "tree" or the history of the grouping.
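A minimal sketch of the two steps described so far — computing the EPH from hourly throughput data and applying Ward's agglomerative method to the standardized variables — is given below. The airport figures are placeholders, and SciPy's linkage routine is used as a stand-in for whatever implementation the study used:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def equivalent_peak_hour(hourly):
    """EPH as described above: hourly is a (days, 24) array of passenger
    throughput; take the median of each hour-of-day over the year, sum the
    24 medians and divide by the largest of them."""
    medians = np.median(hourly, axis=0)
    return medians.sum() / medians.max()
# Usage: eph_value = equivalent_peak_hour(hourly_counts) with hourly_counts shaped (365, 24).

# Placeholder airport data: annual throughput and EPH (illustrative values only).
annual_pax = np.array([30e6, 8e6, 1.2e6, 15e6, 0.4e6, 5e6])
eph        = np.array([14.2, 11.5, 7.8, 12.9, 5.1, 10.3])

# Standardize both variables so they contribute on comparable scales (cf. the Table 3 discussion).
X = np.column_stack([annual_pax, eph])
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# Ward agglomerative clustering; the merge distances correspond to the fusion levels analyzed later.
link = linkage(Z, method="ward")
labels = fcluster(link, t=3, criterion="maxclust")   # e.g. cut the dendrogram into 3 groups
print(labels)
```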
The final choice of the number of groups "g" into which the data set should be allocated is subjective. There are some statistical measures that may be used to assist in determining "g". The criteria adopted in this work are the distance behavior analysis and the similarity level, detailed below. Distance: In each step of the clustering algorithm, compute the Euclidean distance between the centroids of the clusters that are being formed. As the algorithm progresses, the distance between the centroids increases, which means that the combined groups become less similar. Thus, if one plots the distances at every stage of the process, one is able to identify "jump points", i.e., increases that are relatively large compared to the other distance values. These points indicate the ideal time to stop the algorithm, i.e., the number of clusters g and the final composition of the groups. Similarity: This test is analogous to the previous one, but this time the behavior of the similarity level is monitored instead of the distance at each stage. If the groups Ci and Cj are united at a certain stage, the level of similarity between them is defined by sij = 100 [1 − d(Ci, Cj)/dmax] (3), where dmax is the largest distance between sample elements in the distance matrix of the first stage of the clustering process. In this case, the key is to detect points at which there is a sharp decrease in the similarity of the clusters, indicating that the algorithm should be stopped. One way to assess whether the partition achieves satisfactory requirements of cohesion (or similarity) and isolation (or separation) of the formed clusters is to calculate the total, within-group and between-group sums of squares, defined as follows. Sum of Total Squares (SQTotal): SQTotal = Σi Σr ‖xir − x̄‖², where xir is the vector of p measurements observed for element number "r" of sample group "i", and x̄ is the vector of global means, regardless of any partition, whose component x̄l is the global average of variable number l. The Sum of Squares within the partition groups (Sum of the Residual Squares): SQR = Σi Σr ‖xir − x̄i‖². The Sum of Squares between the "g" partition groups: SQE = Σi ni ‖x̄i − x̄‖². The Sum of Total Squares is the sum of the residual sum of squares and the sum of squares between groups, yielding the total variation. If a good partition is performed, it is expected that the formed groups have internal cohesion but are heterogeneous in comparison to the others. Thus, it is expected that the variations within groups (the sum of squares within each group and their total, SQR) are small in relation to the total sum of squares or, equivalently, that the variation between groups (the sum of squares between groups, SQE) represents the majority of the data variation. Coefficient of variation - Stability in peak-hour passenger Considering the clusters established in the previous section, for each cluster the data used will be the passenger hourly throughput for the last seven years, i.e., from 2005 to 2011.
Using the methodologies for determining the peak-hour cited in the literature review, it was found that, for the 56 airports in this study, no airport peak-hour was below the hundredth highest hourly throughput. Thus, as a safety margin to achieve a stabilization of the passenger throughput, as well as to save processing time, this study only takes into account the 300 highest hourly throughputs of each year to compose the peak-hour database. For each ranked hourly throughput, the coefficient of variation, which is the standard deviation divided by the mean, is calculated. By definition, stability is achieved when this coefficient does not vary by more than 0.01 units over the entire sample, and thus the hourly throughput at which stability is achieved is considered as representative for the respective cluster. The maximum variation of the coefficient was chosen as 0.01 units because, with a smaller tolerance, stability could not be achieved within fewer than 300 hours for the last two clusters. Groupings According to the method of Ward (1963), we now consider as the agglomeration criterion the variables annual passenger throughput and EPH. In the search for a small number of groups, we analyze the measures of Similarity and Distance for the last 15 steps. Table 1 shows the values of these measures and their percentage change from the previous step to the current one. From Table 1, observing the behavior of the distances or, equivalently, of the similarities in the various steps of grouping, one realizes that there is a large loss of similarity (44.32%, a two-digit figure contrasting with the one-digit figures observed from steps 41 to 49) and a sharp increase in distance or fusion level (116.40%) from step 49 to 50. Another way to check the heterogeneity of the groups that were formed is to analyze the centroids (means) of the variables under study for each group (Table 3). From Table 3, it can be noticed that standardization is necessary to transform the variables to compatible scales, since, in numerical terms, "Passengers" is millions of times larger than "EPH". After standardization, both variables were similarly important for the definition of the groups. There are clear differences between the group mean values both for passengers and for EPH, although groups 2 and 3 have close values of EPH. Stability in passenger peak hour. With the coefficients in hand, the peak hour for Cluster 1 is obtained at the 6th highest hourly throughput, corresponding to 99.88% of annual passenger movement; for Cluster 2 it is the 8th highest hourly throughput, corresponding to 99.06% of annual passenger movement; for Cluster 3 it is the 14th highest hourly throughput, corresponding to 99.47% of annual passenger movement; for Cluster 4 it is the 25th highest hourly throughput, corresponding to 99.04% of annual passenger movement; for Cluster 5 it is the 30th highest hourly throughput, corresponding to 97.94% of annual passenger movement; for Cluster 6 it is the 37th highest hourly throughput, corresponding to 96.42% of annual passenger movement; and for Cluster 7 it is the 57th highest hourly throughput, corresponding to 88.57% of annual passenger movement.
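The stability criterion described above can be implemented as a short scan over the ranked hourly throughputs. The sketch below uses synthetic data and one plausible reading of the 0.01-unit rule (consecutive changes in the coefficient of variation no larger than 0.01 from a given rank onward); it is illustrative, not the paper's exact procedure:

```python
import numpy as np

def stability_rank(hourly_by_year, n_ranks=300, tol=0.01):
    """Find the ranked hour at which the coefficient of variation stabilizes.
    hourly_by_year: (years, hours_per_year) array of passenger throughput.
    For each rank r, the r-th highest hourly throughput of every year is taken
    and its coefficient of variation (std/mean) across the years is computed;
    the function returns the first rank from which consecutive changes in the
    coefficient stay within `tol`."""
    ranked = -np.sort(-np.asarray(hourly_by_year, float), axis=1)[:, :n_ranks]
    cv = ranked.std(axis=0) / ranked.mean(axis=0)      # one coefficient per rank
    changes = np.abs(np.diff(cv))
    for k in range(len(changes)):
        if np.all(changes[k:] <= tol):
            return k + 1, cv                           # 1-based rank where stability starts
    return None, cv

# Illustrative call with synthetic data for a seven-year cluster sample.
rng = np.random.default_rng(42)
hours = rng.gamma(shape=2.0, scale=400.0, size=(7, 8760))
rank, cv = stability_rank(hours)
print("stability reached at rank:", rank)
```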
From these cluster results it can be inferred that additional clusters would imply greater difficulty in achieving sufficient stability. In other words, a theoretical cluster 8 in this case would require something around the 100th hour to reach stability, showing mathematically that sufficient levels of stability are not achievable for every airport. Least Square Regression. Using the variables presented in this study, annual movement of passengers (PAX) and Equivalent Peak Hour (EPH), the estimated Equation (10) relates the design peak-hour passenger (hp) to PAX and EPH in log-linear form (the estimated coefficients are reported in Table 4). It is possible to verify that the variables PAX and EPH are statistically significant at 1%, i.e., there is strong evidence of the causal effect of these variables on the passenger peak-hour. Regarding the elasticities (P-values: p < 0.01; adj. R-sq: 0.964), it is possible to conclude that annual passenger throughput (PAX) has a larger impact than EPH, i.e., for every additional 1% of PAX there is an increment of 0.6368% (t-test 0.034) in peak-hour demand, while every additional 1% in EPH results in a 0.3170% (t-test 0.084) increment in peak-hour demand. Conclusion The methodology proposed by this paper is very useful for defining, mathematically, a design peak hour. A clustering process was chosen because, according to Hair (2009), the main advantages of the hierarchical method are its simplicity, the development of measures of similarity and the speed with which results are achieved. The clustering method of Ward, which grouped the airports into seven clusters, relied on very significant variables: the annual passenger throughput and the Equivalent Peak Hour (EPH). Other variables were tested, such as declared airport capacity, connecting passenger throughput and international passenger throughput, but the two that were chosen for this study (annual passenger throughput and EPH) were the ones that achieved the best results. Another interesting finding was the use of the coefficient of variation to determine the stability of the peak-hour demand for the seven clusters. The independent variables were chosen to explain objectively the phenomenon of airport passenger concentration; other important variables, such as the level-of-service perceived by the users, are very subjective and cannot be mathematically modeled at the statistical level of significance this study required. After defining the independent variables, it was possible to fit a linear regression that proved to be statistically very significant. The intended contribution of this model is to have a robust and consistent methodology, taking into account the impacts on peak-hour demand of the annual passenger throughput and the EPH, two variables that are easy to obtain, either at the present time or by means of econometric scenarios for the future. None of the models from the literature review takes into account the impact of restrictions derived from airport capacity; the present methodology is the first one to evaluate it and to present a simple model considering EPH as a very good airport capacity proxy. Table 4 - Results of the Linear Regression Model.
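For completeness, the log-linear fit summarized in Table 4 corresponds to an ordinary least-squares regression on log-transformed variables, whose slope coefficients are the elasticities discussed above. The data below are placeholders, so the fitted constant and coefficients will not reproduce the paper's values:

```python
import numpy as np

# Placeholder cluster-level data: annual passengers, EPH and observed design peak hour.
pax = np.array([3.0e7, 1.5e7, 8.0e6, 5.0e6, 1.2e6, 4.0e5, 1.0e5])
eph = np.array([14.2, 12.9, 11.5, 10.3, 7.8, 5.1, 4.0])
hp  = np.array([9800., 6100., 3900., 2700., 900., 380., 120.])

# Log-log OLS: ln(hp) = b0 + b1*ln(PAX) + b2*ln(EPH); b1 and b2 are the elasticities.
X = np.column_stack([np.ones_like(pax), np.log(pax), np.log(eph)])
beta, *_ = np.linalg.lstsq(X, np.log(hp), rcond=None)
print("constant, PAX elasticity, EPH elasticity:", np.round(beta, 4))
```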
4,279.8
2016-12-01T00:00:00.000
[ "Business", "Economics", "Engineering" ]
Analysis of Different College Music Education Management Modes Using Big Data Platform and Grey Theoretical Model Colleges and universities are crucial hubs for talent development, carrying out the vital task of supplying top-notch talent for all spheres of society. The growth of society and the economy is significantly influenced by the quality of education and teaching; the caliber of the instruction is a significant factor. Therefore, it is essential to innovate the educational management model in order to promote the holistic development of college students as well as the orderly development of educational and teaching activities in colleges and universities. The college and university music education model has made progress after years of development. In spite of its successes in reform, there are still a lot of issues that require close attention. The education system is constantly being updated and improved in order to meet modern development needs. It is essential to develop innovative strategies with strong operability from multiple perspectives in response to the actual issues that arise, in order to carry out the reform work smoothly and steadily advance the innovation of the management model of music education in colleges and universities. This paper analyzes the teaching impact of the single-line education management model and the credit system education management model and predicts the students' musical performance based on the gray theoretical model and the big data platform. The experiment revealed that as learning time increases, the teaching effects of the two learning modes increase, but it is evident that the credit system education management mode's teaching effect is superior. Among them, the single-line education management model has an average score of 83.3 points, and the credit system education management model has an evaluation score of 86.1 points. Introduction To ensure that teaching activities proceed without a hitch, it is helpful to analyze the various management styles used in college music education. College education management's primary responsibility is to ensure the systematic growth of teaching activities in colleges and universities. The cutting-edge education management model aids in streamlining the education management procedure, developing more realistic learning objectives in line with contemporary development needs, and modifying the lesson plan in a timely manner in response to student feedback to support the orderly progression of various teaching activities and contribute to the modernization of higher education [1,2]. The innovative education management model can help colleges and universities develop long-term plans for the future by synthesizing past experience and lessons and basing them on the current development environment. This will help colleges and universities develop in a distinctive, contemporary, and innovative way. Colleges and universities should review the state of educational development as well as their own actual situation when practicing innovative education management. This cutting-edge instructional approach is beneficial for developing compound talents that address social development needs [3]. This objective directs colleges and universities to continuously identify, examine, and resolve issues that arise during the innovation process while also keeping an eye on all teaching activities.
e educational evaluation methods used by many colleges and universities today are generally straightforward, focusing only on the students' theoretical fundamental knowledge and professional quality examination, ignoring the practical ability and innovation ability of college students, leading to low practical ability of college students. College students' overall development is obviously not facilitated by this type of assessment method that uses test scores as the primary reference [4]. As a result, if colleges and universities are serious about raising the caliber of instruction, they must innovate the evaluation model, broaden the scope of the evaluation's subject matter, and examine college students from a variety of perspectives. College students' practical aptitude, moral conviction, interests, and other factors should also be taken into consideration [5] in addition to their professional accomplishments. At the same time, it is important to integrate the process and result evaluations in a natural manner, paying attention to students' unique circumstances in daily study and life in addition to their overall performance at the end of each semester or school year. is is the only way to further improve education. Fairness and objectivity when judging: the integration of teacher evaluation, student evaluation, student-student evaluation mutually, and student evaluation independently should also be considered. Increase the thoroughness of teaching evaluations, and provide teachers and students with timely feedback on the evaluation results so they can make any necessary adjustments [6]. In this essay, the representative models of institutions that manage music education in regular colleges and universities are compared and studied, and the current state of various college music education management institutions is intuitively analyzed. Institutions in charge of managing art clubs and the teaching of music vary among colleges and universities. We draw conclusions about the management organization model for music education in typical colleges and universities based on the differences. is model is more scientific and standardized. e performance of the students in the music class is predicted using the gray theoretical model and the big data platform, and the impact of the single-line and credit system education management models on student learning is examined. Given that music education in regular colleges and universities is still in its infancy, the analysis and research on its management style aim to analyze a more scientific and uniform management style for the music education managers of regular colleges and universities and to provide more in-depth information for the reform of music education management in regular colleges and universities and improve the research techniques and management of music education in regular colleges and universities by using references. is paper's novel idea: the correlation between students' behaviors and grades under the single-line education management mode and credit system education management mode of various colleges and universities is examined using the evaluation model of music teaching in colleges and universities that has been established based on the gray theoretical model and the big data platform. Big data makes predictions about student performance to examine how the single-line and credit system education management models have an impact on instruction. 
Related Work In regular colleges and universities, music education plays a significant role in aesthetic and musical education and is an integral part of the college experience, particularly for nonmusic majors. Relevant academics have different perspectives on the study of the management model for music education institutions in colleges and universities, and they debate how to set up and enhance the management system for music education in regular colleges and universities, strengthen the development of art courses in regular colleges and universities, and continuously enhance teaching and learning, starting with standardization, institutionalization, and science, promote and develop music education in regular colleges and universities, improve and strengthen music education management, and strengthen teacher preparation programs [7]. Lei and Hao research pointed out that behavior quantification refers to the use of appropriate methods and strategies to obtain typical behavior characteristics that can systematically and comprehensively reflect the learner's life and learning status based on the learner's original behavior data in a data environment. At the same time, through feature analysis, the interaction relationship between behaviors is clarified. Because there are many behavioral factors that affect students' academic performance, and the motivations that affect students' behavior are more complex and diverse. How to make accurate attribution of its behavior and try to quantify its characteristics with campus data is one of the key points of academic performance prediction [8]. Wang web-based educational system yields valuable insights into student behavior, and to demonstrate the broader utility of these data, this study proposes a basic classification system for early detection of underperforming students [9]. Kang H invented a method that uses datadriven techniques to identify high-risk students at an early stage in online courses, and they found that temporal characteristics are the key features for predicting students' academic performance [10]. Yan believes that education management should not only act according to the laws of education, but also follow the laws of management. If only emphasis is placed on running education in accordance with the laws of education and ignore the laws of management, it is easy to fall into the misunderstanding of education for education's sake. If we only emphasize the laws of management, ignore the particularity, laws, and characteristics of education, and confuse military management, enterprise management, government management, and education management, the order and quality of education will be destroyed [11]. Shi et al. believe that educational activities are always inseparable from educational materials, and there is a minimum requirement for educational materials. e specific standards vary with the times and the development level of the country. If the minimum requirements are not met, it is impossible to set up education. At this time, the elements of things play a decisive role in the development of education [12]. Liang research found that because most college leaders do not pay attention to public music education, they have a temporary view and only regard it as a superficial and formal decoration, so they are reluctant to transfer full-time music teachers to deal with it. 
It is a severe form, but only adopts the method of external teachers for temporary guidance and completes the tasks of college students' cultural performances arranged by the superiors every year [13]. Based on educational data mining technology, Xia et al. use the deterministic factor method and sequential pattern mining in association rule mining to mine the minimum association rules for students' course selection and students' temporary interest learning patterns, so as to analyze students' behavior [14]. Chen and University completes the analysis of student behavior through data mining of students' behavior characteristic data. e data mining method mainly adopts the cluster analysis method, where the students are divided into the best categories, and the characteristics shared by the students are determined through their behavior characteristics to realize the classification of the students' characteristics, so as to provide special classification management for the counselors [15]. Yan studied the functional architecture and key algorithms of a technology-based college student behavior analysis system, completed the distributed storage and processing of campus heterogeneous data, and realized the in-depth mining and analysis of campus data, analyzed scientific research data, assisted timely attention to the school's scientific research trends, and better guided the direction of scientific research development. Analyze teacher data, assist in grasping the current situation of teaching in a timely manner, and reasonably guide the formulation of teaching plans. en, analyze the student data to assist in grasping the behavioral dynamics of students in a timely manner and predict the behavioral development of key students [16]. Cheng-Wu et al. conducted a multifaceted statistical analysis on the behavioral data of course learners, trying to use a classification model to effectively determine whether they successfully completed the learning task and obtained the certificate from the characteristics and laws of the learner's behavior [17]. Based on the student loan data and reading behavior of university library, Wang proposed a heuristic course setting decision-making algorithm in line with the learning progress of university students through the Apriori correlation algorithm and realized the preliminary practice of combining decision-making system construction and data mining through course setting examples. However, these systems or methods have a low degree of personalization, cannot dynamically correlate the relationship between behavior and performance, and cannot timely intervene in students' daily behavior and performance problems. Due to technical conditions, many colleges and universities have not been able to apply it universally [18]. e topic of music education in general colleges and universities, which is extremely practical and has a wealth of literature and resources, is studied in the context of national college education management. Each regular college has a different approach to music education, as do the institution's particular management structures. eoretical understanding where meet and combine is challenging. 
The discussion of music education management institutions in regular colleges and universities, which helps analyze the music education management system and explore how such institutions should be constructed, is where the significance of this research lies.

Design of the Evaluation System for the Management Model of Music Education in Colleges and Universities. The standard method of evaluation is to choose several experts to rate the item being evaluated and then compute a weighted average of their ratings. However, relying directly on a panel of experts causes the amount of data to grow over time. The scores awarded also differ because of the experts' divergent viewpoints, which introduces interference. This data interference makes the conventional weighted average method inapplicable. This work therefore integrates the gray theoretical model and the big data platform to obtain objective and scientific evaluation results, and then analyzes those results with a focus on the aforementioned issues.

Grey Theory Model. Grey theory is a relatively new theoretical discipline. It includes basic theories such as gray algebraic systems, gray equations, and gray matrices, and uses these as its bottom layer; further theoretical systems are needed to construct a complete framework. The method system of gray theory is based on the generation of gray sequences. Based on [19], the analysis system uses twelve correlation spaces, and the model system is derived from the original gray model GM(1,1). The technical system involves system evaluation, analysis, modeling, decision-making, control, prediction, optimization, and many other aspects. Where a school has not established a special management organization for art education such as music, there are no relevant policy regulations or institutional guarantees for music education, and the school's music education operation system cannot be implemented smoothly. Even where colleges and universities have established art education management institutions, these are often perfunctory, set up under policy pressure. In a gray system, the degree of correlation of data is evaluated according to the similarity or dissimilarity of the dynamic change trends of the data, and the correlation law in the process of information change is described. Since it is based on the development trend, this method does not impose strict requirements on sample size or distribution law. The concept of gray in gray theory is shown in Table 1 [20]. The GM(1,1) model processes the irregular initial data, obtains a new, more regular sequence through successive accumulation, and establishes a model; the data obtained from the established model can then be de-accumulated to recover the original data. Regarding the predicted value, the original sequence is given in equation (1); the first-order accumulation generates a new sequence; and the accumulated data exhibit an approximate regularity. After the initial data are superimposed in this way, the stability of the initial data can be effectively improved.
The original sequence and the sequence obtained after the first-order accumulation satisfy quasi-smoothness, and a quasi-exponential law test is applied. The development gray number is used to represent the data change trend of the original sequence, which reflects the law of data change. Assuming a given value of the original data, the solution after discretization follows; by solving the discrete equation, the predicted value of the sequence can be obtained from the formula after discretization. Through the solving process of the gray theoretical model, we can see that so-called new information must differ from the inherent known information and old information; the difference carried by this new information increases our cognition, and grey modeling, gray prediction, gray analysis, gray evaluation, gray decision-making, and so on will play an active role in future applications. The new information that is added will also add uncertainty to the system. In an ever-changing world, differences in information will always exist, and people's continuous exploration, cognition, and discovery of new information mean that uncertain information always remains, so grayness is, in this sense, immortal [21].

There is no fixed classification standard in the gray classification process, and the process does not rely on any prior knowledge. The entire classification implementation requires data preparation, data feature extraction, proximity calculation, classification, and result evaluation. The gray theory classification process is shown in Figure 1. To improve the qualitative and quantitative analysis of the optimized data, data preparation and processing are required: operations such as attribute quantification, feature standardization, and dimensionality reduction are applied to the original data items. In this way, the common vector can be better characterized to obtain the processed vector set. Each vector is converted and its features extracted so that they become distinct and prominent, and then an appropriate measurement method is selected to calculate the proximity between the vectors. In classification analysis, whether fine or coarse classification results are chosen depends on their use, and in general the boundaries can be blurred or precise. The proximity of vectors to each other and the distance between vectors can be measured by a specific calculation method [22]. Finally, classification is carried out according to the requirements of the objective function, and the obtained results also need to be evaluated to determine whether the performance of the classification algorithm meets the objective function.

Big Data Platform. In terms of teaching formats and methods, music education in colleges and universities differs from that in professional music colleges. The professionalism of music instruction in ordinary colleges and universities cannot be compared to that of professional colleges; its educational objective is to raise students' practical and musical literacy. When choosing their teaching methods, teachers should give careful thought to the characteristics of music education in colleges and universities and use effective teaching strategies.
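Returning to the gray model described earlier in this section: since the GM(1,1) equations referenced there are not reproduced in the text, the following is a minimal sketch of the standard GM(1,1) formulation (first-order accumulation, least-squares estimation of the development coefficient and grey input, and restoration by differencing). The function name and the example series are illustrative only, not the paper's data.

# Minimal GM(1,1) grey prediction sketch, following the standard formulation
# referenced in the text; variable names and the example series are illustrative.
import numpy as np

def gm11_predict(x0, steps=1):
    """Fit GM(1,1) to the original sequence x0 and forecast `steps` values ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                           # first-order accumulated sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])                # mean (background) generating sequence
    B = np.column_stack((-z1, np.ones_like(z1)))
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]  # development coefficient, grey input
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate(([x1_hat[0]], np.diff(x1_hat)))  # restore by differencing
    return x0_hat                                # fitted values plus forecasts

# Example: forecast the next value of a short score series (illustrative data).
print(gm11_predict([78, 80, 81, 83, 84], steps=1))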
Prior to performing data cleaning and desensitization, data storage, analysis, and processing for management business, and finally using visual display to make the analysis results more approachable, it is necessary to analyze and integrate college data. After data integration, renormalization and cleaning of the data are referred to as data quality. Data redundancy and other phenomena will unavoidably result from the integration of data from multiple sources. e validity of the analysis results can be effectively ensured by cleaning and preprocessing the integrated data prior to data analysis. e main purpose of the data standard is to describe and explain the data in the information system. For data relationships and data quality between systems, there are unified requirements. It can successfully realize the management of data across businesses and departments to ensure the standardization of college data. Figure 2 displays the big data platform's classification procedure for various students. A variety of information about students' lives and studies can be gathered and integrated against the backdrop of big data. Forming a thorough and extremely accurate student portrait is simpler than that in the traditional data environment. A group of tags can be used to describe the portraits of business system students. Depending on the type of data, tags in this paper are divided into static tags and dynamic tags that are extracted from different behavior trajectories. By using data gathered from various business systems over a range of time periods, the student information is described. e student portrait is represented by the student trajectory label, which also serves as a guide for the analysis of the student behavior trajectory. e two's combined knowledge can strengthen one another's expertise and help meet the ever-changing demands of their respective industries. e key components of a behavior's time, place, and specific events make up the behavioral trajectory unit. In behavioral trajectory analysis, creating a behavioral trajectory model is the first step. Studying behavioral trajectories is another motivation behind creating student portraits. Certain time regularity and location periodicity can be seen in user behavior, according to some analyses of their trajectory. It is easier to categorize student groups, identify the behaviors that make up those groups, and then establish behavior characteristics when similar behavior trajectories can be found to serve as a model and lay the groundwork for the subsequent student portrait work. Data desensitization mainly uses data bleaching technology to ensure the security of college data and avoid the Journal of Environmental and Public Health leakage of student private data. Different desensitization methods are used for different data, and some digital pieces of information such as student ID cards are used. Regarding the method of masking or partial replacement, the reason for using partial replacement is to retain the data on the source of students, the year of enrollment, and other data in these data for subsequent practical operations. Due to the wide range of data sources and the diversification of access methods, the quality of the integrated data will be affected in many ways, such as incomplete data, noisy data, inconsistent data logic, and other issues, mining results, and then, form high-quality decisions. So, the implementation of data preprocessing will affect a series of operations of data mining analysis. 
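As a small illustration of the partial-replacement desensitization described above, the sketch below masks the middle digits of a student ID while keeping the enrollment-year prefix; the ID format and the amount kept are hypothetical assumptions, not the platform's actual rules.

# Sketch of partial-replacement desensitization: keep the enrollment-year prefix,
# mask the personally identifying middle digits. The ID format is hypothetical.
def desensitize_student_id(student_id, keep_prefix=4, keep_suffix=2):
    """Replace the middle characters of an ID with '*' characters."""
    middle = len(student_id) - keep_prefix - keep_suffix
    if middle <= 0:
        return student_id
    return student_id[:keep_prefix] + "*" * middle + student_id[-keep_suffix:]

print(desensitize_student_id("2019031245"))   # -> '2019****45'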
In addition, different data mining methods require different data transformations, including data normalization and data discretization. The system integrates the university academic platform, the digital orientation system, and other systems, which can be viewed as a data life cycle management process: the start of the data life is recorded at student registration through the orientation system, the student's crucial time points are recorded continuously throughout each academic year, and any changes to the student's status are recorded up until the student graduates and leaves the school. The student data is then archived, in part, until it is needed again. This stage concludes the administration and support of the entire student life cycle.

Data Processing. The implementation of music education in a college is directly affected by how the administrators of typical institutions of higher learning understand music education. According to the characteristics and learning goals of music education students, many colleges and universities are continually implementing research and innovation reforms, striving to combine music education and management effectively, open up new teaching concepts, and cultivate more exceptional musical talent. The music education model in colleges and universities has made progress after years of development; despite its successes in reform, there are still many issues that need to be seriously addressed. The education system is continually being improved in accordance with contemporary development needs. We must develop creative, highly operable strategies from multiple perspectives, in response to the actual issues that arise, in order to promote the steady reform of the management model of music education in colleges and universities; this will enable the reform work to be carried out efficiently. The subject of music needs inspiration, continuous development and innovation, and a combination of sensibility and rationality.

This paper analyzes the music education management models of two different colleges and universities: the single-line education management mode and the credit system education management mode. Based on the big data platform, the student behavior of the two schools is analyzed, and correlation analysis and prediction of student achievement are based on the gray theory model. Student behavior covers daily life behavior in 12 different locations, namely the laundry room, bathroom, teaching building, printing room, office building, library, restaurant, school bus, supermarket, school hospital, card office, and dormitory. In the implementation, 80% of the student samples are randomly selected from each major for training, 10% are used for validation, and 10% are used for testing. These records reflect consumption and access behaviors of students on campus; statistics are shown in Table 2. This paper also employs factor analysis and principal component analysis to examine the relationship between student behavior characteristics and academic performance. The first step is to extract the common factors using factor analysis.
Then, by rotating the component matrix, the factors affecting academic performance are explained logically, and the proportion of each factor and of the common factor affecting performance is discussed. The goal of factor analysis is to minimize information loss by condensing the complex relationships between the variables into a small number of comprehensive factors through analysis of the correlation coefficient matrix; it is part of the dimensionality reduction process. Principal component analysis, which finds independent, comprehensive indicators that reflect multiple variables, is one way to expose the internal relationships among variables through a small number of principal components. The influencing factors of 20 performance rankings were examined using principal component analysis to investigate the correlation between the variables and the prediction target, as shown in Figure 3. Figure 3 shows that good, regular living habits have measurable benefits for academic performance. These habits are closely related to students' self-control and self-restraint. Whether it is the use of books, participation in social activities, or food consumption, the results show the importance of self-discipline for a college student, which is also determined by the characteristics of college students themselves. When the ranking factors are analyzed, the top 10 explain 72.63% of the overall variance, effectively reflect the overall information, and have a significant relationship with academic performance.

Student Data Feature Processing Based on Gray Theory. The gray theory analysis method clusters according to an objective classification standard, taking the characteristics of multidimensional vectors as the research object, and can be widely applied to problems with multidimensional variables. The purpose of educational management is to achieve certain expected goals: to develop and rationally allocate limited educational resources, improve school-running conditions, stabilize teaching order, increase school-running efficiency, improve education quality, promote educational development, provide better opportunities and conditions for human development, and provide more and better services for social development. Since most of the dimensional indicators of the evaluated objects have different meanings in practical applications, this affects the role of some indicators in clustering when the gray theory analysis method is applied. To address this limitation, two methods are generally adopted: one is to assign weights to all indicators in advance; the other is to convert each index into a dimensionless index, in which case an initial value operator or an average operator is usually used. The duplication of evaluation information caused by correlation has not been well solved, and insufficient consideration of the amount of information carried by individual indicators may affect the evaluation results or even make evaluation fail; in this case, a layered fuzzy evaluation method can be used for optimization. In this paper, combinations of experimental features are used for feature selection to design diligence-related indicators. The diligence-related indicators mainly include the frequency of entering the library, the frequency of borrowing books, the frequency of breakfast behavior, and the length of stay in the rest area.
Behavioral regularity indicators mainly include behavioral variability measures, such as the behavioral slope and mean value, and the information entropy of student behavior, which indicates behavioral complexity. Ablation experiments were carried out with different combinations of the quantified features to explore the performance of the different algorithms: the contribution of each feature group to the prediction model is assessed, new features are then added to the model step by step, and the parameters are retuned and the model re-evaluated. This is equivalent to conducting five different experiments using the control-variable method. Each experiment uses three different regression algorithms, SVR, GBDT, and RF, to construct three prediction models. The prediction effects are shown in Figures 4-6. Figure 6 illustrates that the indicators of the prediction models built by the three algorithms are largely consistent regarding the influencing factors of students' behavior.

The foundation of school management is educational administration, which also serves as the basis for ensuring that teaching develops normally. The entire teaching management system, including the management of teaching plans, the course selection system, and the school system, as well as traditional practices, has had a significant impact. The credit system is highly flexible in managing students' learning process, moving from the static, single management mode of the past to a dynamic, diversified, and information-based management mode. The credit system teaching management mode imposes higher standards on educational administrators' ideological framework, knowledge base, level of expertise, and computer proficiency. As the analysis of the current state of music education management in ordinary colleges and universities above shows, the management organization of music education is imperfect and its management hierarchy is unreasonable, and addressing this is key to the further development of music education in ordinary colleges and universities. The causes of these phenomena are nuanced, and some issues are clearly difficult to resolve in the short term; they can only be fundamentally improved by relying on sound management policies from government departments and by developing a scientific management model. In addition to helping educators understand student behavior and investigate the information present in the data, this algorithm is useful for modeling students' behavior.

Based on the above prediction model, this paper analyzes the predicted teaching effects of the single-line education management model and the credit system education management model over a period of 21 weeks. The prediction results for the two teaching modes are shown in Figure 7. As learning time increases, the teaching effect of both modes rises, but the credit system education management mode clearly performs better: the average score of the single-line education management model is 83.3 points, while the evaluation score of the credit system education management model is 86.1 points.
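As concrete illustrations of the two analysis steps described above, the following sketches show (i) extracting principal components from standardized behaviour features and (ii) an ablation-style comparison of the SVR, GBDT and RF regressors on an 80/10/10 split. The feature matrices, their dimensions, and the hyperparameters are placeholders (random data and scikit-learn defaults), not the study's settings or data.

# Sketch (i): principal components of standardized behaviour features.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

X = np.random.rand(200, 20)                  # 200 students x 20 indicators (placeholder)
pca = PCA(n_components=10)
pca.fit(StandardScaler().fit_transform(X))   # standardize before extracting components
print("variance explained by the top 10 components:",
      pca.explained_variance_ratio_.sum())   # the paper reports about 72.63% for its data

# Sketch (ii): compare SVR, GBDT and RF on an 80/10/10 train/validation/test split.
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_squared_error

y = np.random.rand(200)                      # placeholder grades
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.2, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

for name, model in [("SVR", SVR()),
                    ("GBDT", GradientBoostingRegressor(random_state=0)),
                    ("RF", RandomForestRegressor(random_state=0))]:
    model.fit(X_train, y_train)
    print(name, "validation MSE:", mean_squared_error(y_val, model.predict(X_val)))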
Music teaching has its own regularity and particularity, different majors have different teaching characteristics, different majors have different teaching objectives and requirements, and different majors have different teaching forms, such as one-to-one main course for music majors, major theoretical courses for music singing, group lessons for instrumental ensemble performance, and group lessons for band ensemble. e traditional school-year course scheduling management model is that the educational affairs department uniformly arranges the class time, and the three major courses of public courses, professional basic courses, and professional courses are arranged according to time periods. Students in each college attend classes in a fixed classroom and time. After the implementation of the credit system, students' class time and classrooms are no longer uniform. With the increase in the types of courses offered, schools are required to have a large number of classrooms, piano rooms, performance classrooms, multimedia, audio equipment, and other infrastructure to meet the needs of students for course selection. Under the background of implementing the credit system teaching management mode, educational administrators should change their ideas, establish a "student-oriented" teaching management concept, strengthen the research on the credit system, and adapt to the requirements of credit system management. Teaching managers should strengthen the training of business skills and management level, fully understand the goals of art colleges and universities, the construction of music majors, curriculum settings, etc., follow the teaching rules, standardize teaching management, and explore innovative management mechanisms. At the specific operational level, staff must be proficient in the management of the credit system, course selection system, student status management, teaching evaluation, performance management, and a series of operating modes of the credit system management platform. Conclusions By creating a performance prediction model that predicts the effectiveness of various teaching methods in colleges and universities, this paper aims to advance the innovation of the management style of music education in those institutions. e educational management system is the primary guarantee for the growth and development of colleges and universities. Without a logical and efficient management system, development is impossible. Colleges and universities ought to use a humanized management model for music education. roughout the management process, they must be sensitive and understand the reasons. A clear attitude, clear communication, criticism, and instruction should all be used to address the phenomenon that violates the system. Humane management is substituted for coercion. An effective management model for music education provides the assurance that colleges and universities will continuously raise the caliber of their instruction. e primary area of research in music teaching management and inquiry is how to improve and innovate the educational management model. e field of educational management theory is currently a brand-new area of study for colleges and universities. e study of music education management in colleges and universities, which is a clear requirement for the advancement of contemporary education, is a crucial part of the talent support strategy for my nation's cultural and artistic endeavors. e project is also crucial to the study of music. 
Objectively speaking, a high-level and high-quality management model for music education is a requirement for a sustainable and incredibly successful professional music Journal of Environmental and Public Health education in colleges and universities. erefore, research on the organizational structure of music education in colleges and universities complies with the universal laws of modern development and has broad strategic implications. e music education model in colleges and universities has made progress after years of development. Despite its innovative successes, there are still a lot of issues that need to be carefully addressed. e education model's ongoing innovation is in line with the contemporary development requirements. With the goal of advancing the management model of music education in colleges and universities steadily and steadily, it is essential to develop innovative strategies with strong operability from multiple angles in order to carry out the innovation work smoothly. Due to the difficulty of data acquisition, some important factors related to students' academic performance were not explored in this study, such as students' historical test scores and classroom performance. In future research, we plan to collect more student-related data and combine them with students' daily behavior records to construct a more comprehensive feature representation and further improve the robustness of the student achievement prediction model. Data Availability e data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest e authors do not have any possible conflicts of interest.
8,076.6
2022-08-16T00:00:00.000
[ "Education", "Computer Science" ]
Optimal Covert Communication Technology. With the advancement of hacking and reverse-engineering tools, the transfer of sensitive data or highly classified information is always at risk of being intercepted by an attacker. Covert communication counters this breach of privacy better than cryptography, because it camouflages secret information inside other innocent-looking information, whereas cryptography exposes scrambled information that may arouse an attacker's attention. However, the challenges in steganography are that modification of the carrier causes detectable abnormalities and that the methods are often not optimized. This paper presents an approach to covert communication channels that uses the mathematical concept of combination to optimize transmission time: sets of multiple transmitter and receiver addresses are used, where each address abstractly represents a combination of bits or characters, without the address itself being modified. To minimize the number of physical addresses needed, a combinatorial and permutation concept for generating virtual addresses from physical addresses is introduced. The paper additionally presents techniques such as address relationships and their application both to reinforcing resistance against steganalysis and to generating combinations. Furthermore, a concept of dynamic clockwise and anti-clockwise rotation of combinations over the addresses after every transmission is introduced to further improve resistance against steganalysis. A simple test was performed to demonstrate the relay-address, combination and permutation concepts. Based on the test results and analysis, the method is effective as expected and is easy to use, as it can be implemented on different platforms without much difficulty.

Related work has detected hidden content across domains by using inter-word and inter-domain correlation with semantic analysis [16], taking into account word embeddings and matching parts of speech, in addition to detection based on the frequency distribution of words and parts of speech. Furthermore, the paper [17] presented a method based on time intervals or delays, which takes the time interval ∆t = t_i − t_{i−1} and compares ∆t with chosen key sequences to encode or decode hidden binary. That paper also discusses the drawback of most steganography methods, namely that modification of the carrier leaves a loophole for sophisticated algorithms or statistical methods to detect the steganographic flow. This problem is avoided in the proposed novel method, which does not modify anything. In addition, because most methods hide a single bit at a time, it is extremely difficult to attain time-optimal transmission of these floods of bits; the proposed method introduces bit/string and address combination techniques to address this problem.

METHODOLOGY. This paper presents a theoretical mathematical approach to security (network steganography) that uses the concepts of combinatorics and binary string concatenation, combining bits to optimize transmission time and using multiple transmitter and receiver addresses, for example email addresses, postal addresses, phone numbers, or network ports or addresses, where each is assigned a bit combination.
In addition, the paper presents the concept of dynamic clockwise and anti-clockwise rotation of bit combinations over the given transmitter-receiver addresses after every transmission, to reinforce resistance against any form of security analysis such as steganalysis or cryptanalysis.

Combination. This section presents the concept of combining bits or characters so that the maximum number of bits or characters can be sent at once, by making a given address represent such a bit combination. Examples of bit combinations are given below, together with a formula for the total number of possible bit or character combinations C when the number of bits or characters combined is n. Let W be the set of strings of possible combinations resulting from a given combination, W = {w_0, w_1, w_2, w_3, ..., w_{C−2}, w_{C−1}}, where C is the total number of elements of W excluding the empty set w = {∅}. For one-bit combinations, W = {0, 1}, so the total number of combinations is C = 2 with n = 1. For two-bit combinations, the possibilities are W = {00, 01, 10, 11} and the total is C = 2^2 = 4, where n = 2. For three-bit combinations, W = {000, 001, 010, 100, 011, 101, 110, 111} and the total is C = 2^3 = 8, where n = 3. For four-bit combinations, W = {0000, 0001, 0010, 0100, 1000, 0011, 0101, 1111, ...} and the total is C = 16, where n = 4. The pattern continues, so the number of possible binary combinations C for n combined bits can be expressed as C = 2^n, given that W = {w_0, w_1, w_2, ..., w_{2^n−2}, w_{2^n−1}}. The total number of elements (cardinality) of a set can be expressed as n(·); for example, for the binary symbol set B = {0, 1}, n(B) = 2, meaning a base-two number system, so the base is 2 and C can be expressed as C = 2^n. (2.1)

Relationship. Given two sets of addresses A and B such that {(a, b): a ∈ A, b ∈ B}, where each element of one set relates maximally to all elements of the other set and the inverse relationship also holds (A = {a_0, a_1, a_2, ...}, B = {b_0, b_1, b_2, ...}), the relationship between the two sets A and B can be described as R ⊆ A × B = {(a, b): a ∈ A, b ∈ B}. For the inverse case, where the receiver wants to reply to the sender, the inverse relationship can be expressed as R^{−1} = {(b, a): (a, b) ∈ R}; n(A) and n(B) are the total numbers of elements of sets A and B respectively. Therefore, since the transmission is related, i.e. {a, b} ∈ R, a R b, the maximum crossing among addresses L for sending information can be expressed as L = n(A) · n(B), with 0 < L and L ∈ ℕ, and L gives the total number of cross transmissions. Please see Figure 1 below for the address relationship involving only transmitter and receiver addresses.

Transmitter's to Receiver's Address Relationship. Please see Figure 1 for the relationship without a relay address, i.e. directly from the sender address to the recipient address without an intermediate address.

Maximization of Addresses. Here we present an idea for maximizing the total number of addresses, based on the concepts of combination and permutation, by producing additional virtual addresses. The idea is that, given a set of addresses with more than one distinct element, i.e. n(A) ≥ 2, a given combination of virtual addresses can be generated. For example:
Given an address set such as A = {a, b, c}, virtual addresses A′ can be generated combinatorially as A′ = {ab, ac, bc, abc}, and by permutation as the set of all ordered arrangements of two or more of these elements. These are all distinct elements, although some are virtual and others are real addresses, so the total number of addresses available for use increases to n(A) + n(A′). Three real addresses generate four virtual addresses, so in total seven addresses are available for use under the combinatorial concept; under permutation, twelve virtual addresses are generated from three real physical addresses, giving 15 addresses available for use.

Combination of Addresses. For a combination nCr of addresses used to generate virtual addresses, the order of the addresses forming a virtual address does not matter much, because transmission is simultaneous; indexing the addresses would be difficult if the order of the combination had to be taken into account, as in permutation. For an address set with two real elements, A = {a, b}, the virtual set is A′ = {ab}: only one distinct virtual address element can be generated. The formula for the total number of virtual addresses A′, i.e. n(A′), that can be obtained from a given set of real addresses is given in (2.5), where the total number of real addresses is n(A), xCr is the combination of x addresses taken r at a time, and r is the number of addresses selected in a combination. For relationships involving relay addresses, please see equation (2.31); there, the first index represents address A, the last index represents address B, and the intermediate indices represent the relay addresses.

Relay Address (R). For addresses without relays the relationship can be expressed as in (2.32); for addresses involving relay addresses, please see (2.33).

Permutation of Addresses. In permutation, the order of the physical addresses forming a virtual address matters very much, because transmissions are sequential rather than simultaneous and are time-indexed, i.e. ab is different from ba. So two or more addresses representing one virtual address can be rearranged in such a way that the order of those addresses distinctively represents different virtual addresses. For example, a transmission from physical addresses a and b can come from the virtual address ab when the transmissions are received at times t_0 and t_1 respectively, given that t_0 < t_1, and from the virtual address ba when they are received at t_2 and t_3 respectively, given that t_2 < t_3. It should be noted that, to differentiate between sequential transmissions belonging to the same virtual address, the transmission times from the same combination should lie within a defined range ∆T; see (2.34), with values set such that ∆t ≤ ∆T, where ∆t = t_i − t_{i−1}. (2.34) From equation (2.5), the permutation nPr of such a combination, i.e. the permutation of the entire virtual address set plus the physical addresses generated from the combination, can be written as in (2.35), where x is the total number of addresses and r the number of addresses chosen. As in (2.30~2.33), the total relationships and addresses involving relay addresses can be written as in (2.36) and (2.37). Note on handling the zero factorial: the zero factorial is defined as the number of ways to arrange a data set with no values in it, which equals one by definition (0! = 1). For these addresses, since the total number of addresses is greater than zero, r ≥ 0 and n(A) must be greater than zero.

Concatenation. The concept of concatenation is mainly used in formal language theory, for example in programming languages and pattern matching. Concatenation of two strings a and b is often denoted ab, a||b, or, in the Wolfram Language, a <> b [19]. Throughout this text, it is denoted a||b.
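The counting rules above can be reproduced with a short sketch: C = 2^n bit patterns for n combined bits, L = n(A)·n(B) sender/receiver crossings, and the 4 combinatorial and 12 permutational virtual addresses obtained from 3 physical addresses. The address labels are placeholders.

# Sketch of the counting rules above, with illustrative address labels.
from itertools import product, combinations, permutations

def bit_patterns(n):
    """All C = 2**n binary strings of length n (the set W in the text)."""
    return ["".join(bits) for bits in product("01", repeat=n)]

senders, receivers = ["a0", "a1", "a2"], ["b0", "b1"]
print(len(bit_patterns(3)))                          # C = 2**3 = 8 patterns
print(len(senders) * len(receivers))                 # max crossings L = 3 * 2 = 6

physical = ["a", "b", "c"]
virtual_comb = ["".join(c) for r in range(2, 4) for c in combinations(physical, r)]
virtual_perm = ["".join(p) for r in range(2, 4) for p in permutations(physical, r)]
print(virtual_comb, len(physical) + len(virtual_comb))       # 4 virtual -> 7 total
print(len(virtual_perm), len(physical) + len(virtual_perm))  # 12 virtual -> 15 total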
From the two sets of binary strings assigned to addresses A and B, the concatenation A||B consists of all strings of the form a||b, where a is a binary string from A and b is a string from B; formally, A||B = {a||b: a ∈ A, b ∈ B}. For concatenation of a string set with a single string, and vice versa, A||b = {a||b: a ∈ A} and a||B = {a||b: b ∈ B}. As given in [19], the concatenation of two or more numbers is the number formed by concatenating their numerals; for example, the concatenation of 1, 234, and 5678 is 12345678, and the value of the result depends on the numeric base. That work also presents a formula for the concatenation of numbers in a given base, as in (2.38). Throughout this paper, binaries, or streams of bits, are treated as strings, and string concatenation formulae and rules are applied as below.

2.5. Associative Law. Rules of the binary operation applicable to string concatenation are given here. The binary operation || is associative, and repeated application of the operation produces the same result regardless of how valid pairs of parentheses are inserted in the expression. A product of two elements (addresses or bit combinations), ((a, b): a ∈ A, b ∈ B), may be written in five possible ways, as shown below.

Order of Concatenation. Below we discuss the order of concatenation that must be followed so as not to lose track of the transmitted combination code; see Table 1 for a sample bit combination. Given a time series of transmissions T = {t_0, t_1, t_2, ..., t_J} such that t_J > t_{J−1} > t_{J−2} > ⋯ > t_1 > t_0, and defining the sender-to-receiver order and the receiver-to-sender order, the sender-to-receiver order is (a||b)_{t_0} || (a||b)_{t_1} || (a||b)_{t_2} || ... || (a||b)_{t_J}, and the receiver-to-sender order is (b||a)_{t_0} || (b||a)_{t_1} || (b||a)_{t_2} || ... || (b||a)_{t_J}. An example of bit combinations in table form is shown in Table 1. Based on the combination concept, to send the letter 'H' = 01101001, the byte can be separated into two 4-bit combinations and sent at once in a single transmission, so each transmission carries one 8-bit character (1 byte).

Application of the Relationship for Generating Combinations. This follows from the rule of string concatenation given above. ALGORITHM 1: Homogeneous Relationship (an example; see Table 1). Function: Combination(x, y). Note that in Algorithm 1 the function Array_size(x, y) is used for pre-allocating an array of the required size; in some programming languages arrays are allocated dynamically and this function is not needed. See Figure 3 for an example implementation of Algorithm 1.

Rotation over Addresses. Let W represent the set of bit combinations over the addresses A or B; the subscript I of W is the current position of a bit combination over a given address of index I, and J represents the total number of transmissions, with I ∈ ℕ and J ∈ ℤ. In addition, Q is the maximum number of addresses (Q ∈ ℕ), in other words the total number of elements of set A or B. A given bit combination C can rotate over the elements of either set A or B after every transmission, and the rotation is either clockwise or anti-clockwise; the positions can be written as W = {W_{Q−1}, W_{Q−2}, ..., W_2, W_1, W_0}. Below, pseudo-code functions are described that use two rotation functions to rotate an array of strings: since streams of bits (binary) are treated as strings, a stream can be converted into an array and the array indices manipulated so that it is rotated as prescribed above.
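The following is not the paper's own pseudo-code (those functions are described in the next paragraph) but a simplified modular sketch of the same rotation idea: after J transmissions the combination assigned to an address shifts position, clockwise for positive J and anti-clockwise for negative J. It reproduces the index arithmetic of the worked example in the Sample Example section; names such as I, J, Q follow the text.

# Simplified modular sketch of clockwise / anti-clockwise rotation over Q addresses.
def rotate_index(I, J, Q):
    """New position of a combination starting at index I after J transmissions
    over Q addresses (negative J means anti-clockwise rotation)."""
    return (I + J) % Q

def rotate_assignment(combos, J):
    """Rotate the whole assignment of bit combinations to addresses by J steps."""
    Q = len(combos)
    return [combos[rotate_index(i, -J, Q)] for i in range(Q)]

combos = ["00", "01", "10", "11"]       # combinations assigned to 4 addresses
print(rotate_assignment(combos, 1))      # each combination moves one address forward
print(rotate_index(3, 11, 8))            # clockwise case of Example 1a: index 6
print(rotate_index(6, -19, 8))           # anti-clockwise case of Example 1b: index 3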
The idea creates two functions where each is input initial index , , and the function returns a numeric index after rotation for each element of the array. Furthermore, another function which call the two rotation function is created which determine the value of Where if it is negative, it calls anti-clockwise rotation, else it calls clockwise rotation, and the function returns array of all string after its rotation see Fig 7 for the output of the algorithm being tested. EXPERIMENTAL RESULT In this section, three experimental results are presented where the first one is done in a very simple environment to make it easily understandable by non-specialist is covert channel communication and easy to perform the experiment. It is based on the idea of using many phone numbers from two different locations where the confidential information is to be transmitted from and to a given location with those numbers representing the address. The second experiment was done using multiple client machines located in given location where information is to be send from to another location where recipient is located, in the recipient location are locate multiple server machine through this use of email forwarding functionality and use of intermediate proxy server is used. Experimental Result Based on Rotation In this, at the sender side, six phone numbers was set up as sender addresses and receiver side six phone numbers also was set up as receiver addresses. At sender side, only two bit combination were used and the same four at the receiver side. See Table 2 To withhold the identity of the user's phone number from exposing, only six digits are shown without their location of calls only just labeled as sender and receiver's location. Furthermore an extra phone number is used in addition to the one assigned bit combination, those phone number are label as null which means it does not carries any bit combination and any phone called from or to such carries no bit combination whether it is directed to the one with assigned bit combination see Table 2 for more details. Experimental Result 2 Based on Relay Address and Permutation Without rotation Electronic Mail Forwarding functionality: Here from the email address sender, send an open message to recipient via relay email by forwarding the email. Please see table 5 below contains sample email for transmission through relay address. To withhold the identity of email address involve in this practically, notation is use to represent email address such as<EMAIL_ADDRESS>In Table 4 10 11 The following message were recorded from the above emails address in the order (A2-R2~R1-B1)+ (A1-R2-B1~B2) + (A2-R1-B2~B1) + (A1~A2-R1-B2) decoding this into binary (011100000110010011010001) 4. ANALYSIS 4.1. Optimal Performance Since this combine bits such that two or more bit can be send at once, so time of transmission reduces significantly, as many bits can be send at once. Let " " be time required for sending total of " " bit or character combination for both relay and or without relay address at once. And "k" is a constant unit time per one bit transmission such that: ∝ ⟹ = and = = −1 (4.1) Therefore lim →∞ ( = ) ≈ 0 ; giventhat = 1 From the above, it is evidence that when number of bits combination increases so does the time required for transmission reduces. 4.2. Secure Analysis Using Probability This section shows how resistive the method is from someone or a program that need to detect it by applying probability theory. 
Probability without a Relay Address. Let the sample space S be the total number of addresses on either the sender's or the receiver's side that a bit combination can occupy at a given time. The probability that a chosen address carries the right bit combination is then P = 1/S, and the probability that it does not is P′ = (1 − 1/S), with lim_{S→∞} (1 − 1/S) ≈ 1. However, the probability that a chosen transmission line carries the right bit combination is different, because the sample space in that case is the maximum crossing L between sender and receiver addresses, i.e. the combinations of bits between sender and receiver addresses sent at once. Taking the sample space as this maximum crossing, the probability that a chosen line does not carry the right combination is (1 − 1/L) ≈ 1. This indicates that, as the number of bit combinations grows (L → ∞), the probability P′ ≈ 1.

Probability with Relay Addresses. Here we give the probability that a chosen line of transmission from sender to receiver through relay addresses carries the hidden information; this extends the probability for a chosen transmission line above by taking the probability within each relay from the sender up to the receiver. Consider the chain A − R_1 − R_2 − R_3 − ⋯ − R_m − B. From the transmitter address to relay address R_1, the probability that the line carries hidden information is P_A, and the probability that it does not is P′_A; for the next relay node the probabilities are P_{R_1} and P′_{R_1} respectively, and so on up to the receiver. The overall probability that the entire transmission carries a hidden message through the relay addresses can be expressed as an independent probability, the product of all the individual probabilities [20]:

P = (P_A)(P_{R_1})(P_{R_2})⋯(P_{R_m})(P_B). (4.2)

The probability that it does not carry the message is

P′ = (P′_A)(P′_{R_1})(P′_{R_2})⋯(P′_{R_m})(P′_B). (4.3)

From the above it can be noted that, as more relay addresses are added, the probability that a chosen line of transmission carries the hidden information tends toward zero, showing theoretically that this approach is sound.

5. SAMPLE EXAMPLE
Example 1: Given a bit combination with n = 3, after J = 11 transmissions in clockwise rotation, and an initial position of the bits I = 3.
a) What is the new position of the bit combination W_3?
Solution: Q = 2^3, I = 3 and J = 11, so the new index is (J mod 2^n) + I = (11 mod 2^3) + 3 = 3 + 3 = 6. The new position or index is six (6), so on the sender side the combination moves from a_3 to a_6, and on the receiver side from b_3 to b_6.
b) Continuing from part (a), the combination is rotated anti-clockwise for 19 transmissions. What is the new position of the bits?
Solution: n = 3, and the position I = 6 is the position reached after the clockwise rotation. Since the rotation is anti-clockwise, J = −19, and the new index is (J mod −2^3) + I = (−19 mod −2^3) + 6 = −3 + 6 = 3. The new position or index is three (3), so on the sender side the combination moves from a_6 to a_3, and on the receiver side from b_6 to b_3.

Example 2: A person wants to send a hidden message by post-office mail from country "A" to country "B" using an 8-bit binary system, such that the mail carries a normal message without being modified:
a) How many mail addresses are required in the sender and receiver countries so that each mail sent carries at least one 8-bit character?
Solution:
5,017.6
2022-01-31T00:00:00.000
[ "Computer Science" ]
Predicting temperature drop rate of mass concrete during an initial cooling period using genetic programming. Thermal cracking in concrete dams depends on the rate at which the concrete is cooled (temperature drop rate per day) within an initial cooling period during the construction phase. Thus, to control the thermal cracking of such structures, the temperature that develops due to the heat of hydration of cement should be dropped at a suitable rate. In this study, an attempt has been made to formulate the relation between the cooling rate of mass concrete, the passage of time (age of concrete), and the water cooling parameters: flow rate and inlet temperature of the cooling water. Data measured in the summer season (April-August, 2009 to 2012) from a recently constructed high concrete dam were used to derive a prediction model with the help of the Genetic Programming (GP) software "Eureqa". The Coefficient of Determination (R) and Mean Square Error (MSE) were used to evaluate the performance of the model; the values of R and MSE are 0.8855 and 0.002961 respectively. A sensitivity analysis was performed to evaluate the relative impact of the input parameters on the target parameter. Further, when the proposed model was tested with an independent dataset not included in the analysis, the results obtained from the GP model were close to the real field data.

Introduction. Mass concrete plays an important role in modern construction, especially in hydraulic and hydroelectric construction. For example, in China more than 10 million m³ of mass concrete are poured every year in hydraulic and hydroelectric engineering. Harbor structures and the foundations of heavy machines are also often built with mass concrete. In such structures, the heat of hydration of cement raises the internal temperature of the concrete through its exothermic chemical reaction and induces a thermal gradient between the inside and outside of the structure, leading to tensile stress that ultimately results in thermal cracking at an early age [1,2]. To reduce or control the maximum internal temperature of the concrete and to speed up the cooling process, chilled water is circulated through interconnected cooling pipes embedded in the concrete during construction [3,4]. This technique was first studied in the early 1930s by the U.S. Bureau of Reclamation in the design of the Hoover Dam [5]. Thermal cracking in mass concrete structures at an early age depends on the rate at which the concrete is cooled (temperature drop rate per day, ∆t °C/day) in the initial cooling period. Thus, the temperature of mass concrete should be dropped at a suitable rate to control cracking during the construction phase. Many factors must be considered to prevent thermally induced cracking of mass concrete at an early age; reducing the maximum internal temperature of the concrete, adjusting a suitable combination of the water cooling parameters, flow rate (q_w) and inlet temperature of the cooling water (T_w), and controlling the rate at which the concrete cools (temperature drop rate), are some of them. Further, ∆t depends on many parameters, such as the thermal properties of the concrete, the pipe spacing, q_w, T_w, the age of the concrete (A_c), and the construction season (summer or winter). The relationships between these parameters are non-linear, complicated, and not yet well understood.
Due to the lack of simple and practical formula, in recent construction of concrete dams, T w and q w are adjusted based on numerical simulation and engineering experience to control the temperature of concrete. A numbers of researches emphases on controlling temperature of mass concrete using numerical simulation [6][7][8][9]. Further, most recent researches conducted for determining the temperature field of concrete dam at construction phase are based on Finite Element Methods (FEM) are [10,11], composite element method [12] and heat fluid coupling method [13]. Thermal crack analysis during pipe cooling was simulated using Particle Flow Code method [14]. However, very few researches have investigated the cooling rate of mass concrete during an initial cooling period at the construction phase of massive concrete structures. EM 1110-2-2201(1994) suggests not to drop the temperature of mass concrete during more than ½ to 1 0 F per day during an initial cooling period [15]. ACI 207.4R (1993) reported the cooling rates, in degrees per day, for latter period should be lower than the permitted during initial periods because of the higher modulus of the elasticity at later ages [16]. Also this report suggests, after the concrete reached its peak temperature, cooling should be continued for a period of 1 to 2 weeks at a rate such that the concrete temperature drop generally not exceed 1 0 F (0.6 0 C) per day (the maximum rate that does not exceed the early age tensile strain and creep). For some condition, cooling rate of 2 0 F (1 0 C) per day can be accepted for a short period of time [16]. Nannan Shi et al. (2014) investigated; lower cooling rates can reduce the probability of concrete cracking [17]. Further, temperature control of mass concrete during the construction phase of concrete dams seems more complicated due to the varying properties of concrete with the passage of time (A c ). Moreover, construction time plays an important role in any project; thus, in order to complete the project within the time frame, it is often needed to pour the concrete in a high temperature summer seasons. If an effective temperature control measure is not taken for concrete poured at higher summer season undesirable and unavoidable cracks will be the result [18]. Therefore, in order to prevent the cracks during the construction phase of concrete dams, it is very important to understand the relationship between the parameters involved in the process. In the present study, attempts have been made to develop a robust prediction model to predict ∆t at an initial cooling period during the construction phase of concrete dams. Data measured at summer construction season (April-August from 2009 to 2012) are used to formulate the relationship. For this, T w , coefficient of pipe cooling (p 1 ) which is dependent q w and A c are taken as the input variables whereas values of ∆t is taken as the target variable. Data Source In order to formulate the relationship to determine ∆t, data were taken from the project named "Xiluodu high concrete arch dam (285.5 m high) which was recently constructed and located in the lower reach of the Jingsha River, Yunnan Province, in southwest China [19]. During the construction of the project, an optical fiber (shown in Figure 1) was embedded in concrete to monitor the temperature of concrete. Data's from monolith 15 and monolith 16 during the construction at summer season (April-August from 2009 to 2012) were chosen. 
Three types of concrete, namely Concrete Type A, Concrete Type B and Concrete Type C, were used while constructing the dam. In this study, Concrete Type A was used for formulating and verifying the developed model. The thermal properties of the concrete (Concrete Type A), namely thermal conductivity, diffusivity, specific heat and density, and the water cooling data, namely diameter of the cooling pipe, conductivity of the pipe, length of the cooling pipe, spacing of the cooling pipes (H:V) of 1.5 m * 1.5 m, and lift height (1.5 m and 3 m), were taken from the real situation of the research project. The thermal properties of the concrete are listed in Table 1. Data Analysis and Preparation Concrete gains its peak temperature within a few days to a week of placement. After the concrete reaches this maximum/initial value, dropping the temperature from this value to the target (design) value at an early age (around 30 days) is fully controlled by circulating water through the cooling pipes embedded in the concrete during construction. The time period taken to drop the maximum/initial temperature to the target temperature at this early stage is known as the initial cooling stage/period. The overall cooling process used during the construction phase is shown in Figure 2 (Overall Cooling Process of Xiluodu High Concrete Dam). To develop the prediction model for the temperature drop rate, concrete temperature data (recorded from the optical fiber at different time intervals within a day for each lift considered in this study) and water cooling data (measured at almost the same time as the concrete temperature measurements) within the initial cooling stage are utilized. Concrete temperature data are taken with a time lag of roughly 12-24 hours between two consecutive days. In some cases, the concrete temperature of a later day is greater than that of the previous day; in such cases the value of ∆t is negative, and those negative values of ∆t were removed while developing the model. The set of data for ∆t used to build the model is prepared as follows: ∆t = ∆t_i = t_(i-1) − t_i (1), where ∆t_i is the temperature difference of the mass concrete between the (i−1)th and the ith day in °C/day, t_(i-1) is the temperature of the concrete on the (i−1)th day in °C, t_i is the temperature of the concrete on the ith day in °C, and i = 2, …, n, with n the total number of days in the initial cooling period for an individual lift. The relation given in equation (1) provides the ∆t values. Further, the coefficient of pipe cooling, p_1 (which depends upon q_w), can be derived from the relationship given by Zhu [20]. The rate of flow of the cooling water is taken at almost the same time as ∆t is calculated.
In which, g is a coefficient to consider the influence of b/c and the material of pipe b= 0.5836* √ (S1*S2) Where, a is the thermal diffusivity of concrete (m 2 /day); b is the outer radius of concrete cylinder (m); c is the inner radius of concrete cylinder (m); c w is the specific heat of cooling water (kJ/Kg 0 c); CD is the Construction Date ; D is the diameter of concrete cylinder (m); L is the length of pipe (m); r 0 is the inner radius of non-metal cooling pipe (m); S 1 is the horizontal spacing between cooling pipes (m); S 2 is the vertical spacing between cooling pipes; λ is the coefficient of thermal conductivity of concrete (kJ/m h 0 c); λ1 is the coefficient of thermal conductivity of non-metal cooling pipe (kJ/m h 0 c); ρ w is the density of cooling water (kg/m 3 ); and η = λ/λ1 Genetic Programming, Eureqa and Automated Solution Seeking Recently, soft computing technique (like artificial neural networks (ANNs) and fuzzy neural network systems),which are considered as strong machine learning techniques have been gaining popularity for solving complex problems in the field of civil engineering [21][22][23]. Genetic Programming (GP), which was first introduced by John Koza in (1992) [24] is another branch of machine learning method, which automatically generates computer programs based on the rule Darwinian natural selection and biologically inspired operations to solve the user-defined task. Genetic programming (as an extension of genetic algorithm) evolves a series of computer programs (semi complex mathematical equations) instead of data to solve a complex non-linear problem and can be suitable for real problem [25]. GP uses an evolutionary algorithm in order to optimize the computer programs through expression tree according to fitness function. Eureqa® software package sometimes can be called as robot scientist, developed by Dr. Hod Lipson [26] is fairly new, publically available product from Cornell Creative Machines Lab [27] is a symbolic regression tool for automated numerical regression methods, optimization, detecting equations and hidden mathematical relationships in raw data and is based on GP. Eureqa has been applied for solving some problems in civil engineering field [25,[28][29]. Correlation coefficient (R) and Mean Square Error (MSE) were used to evaluate the performance of each model. In Eureqa, each variable values can be assigned to single rows and searches are specified by writing a search function. A solution fit plot against predicted and actual data, list of candidate function ranked by fitness (error/complexity), a plot of solution respective to their error size; residual error plot and a plot of different fitting statistics of the generated solutions can be obtained as output in Eureqa [28]. Development of model using GP To get the suitable GP model, basic arithmetic operators (+, -, *, /), trigonometric operator (sin, cos) and some basic exponential functions (exponential, natural logarithm, square root, factorial and power) were utilized in this study. Un-normalized data of the individual variables has been normalized during building the model. The GP software identified un-normalized data of the individual variables. And those identified un-normalized data's from GP software for each variables is normalized by algorithm "subtracted by mean and divided by standard deviation". 145 numbers of rows of each input and an output variable were gathered. 
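As a small illustration of the data preparation described above, the sketch below computes the daily temperature drop rate of one lift according to equation (1), discards the negative values, and applies the "subtract the mean, divide by the standard deviation" normalization stated in the text. The function names and the use of NumPy are assumptions made for illustration only.

```python
import numpy as np

def prepare_delta_t(temps):
    """Daily drop rate Delta t_i = t_(i-1) - t_i (eq. 1), negative values removed."""
    temps = np.asarray(temps, dtype=float)
    dt = temps[:-1] - temps[1:]      # t_(i-1) - t_i for i = 2..n
    return dt[dt >= 0]               # drop days where the temperature rose

def zscore(x):
    """Normalization used in the study: subtract the mean, divide by the std."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

# Example with a hypothetical 6-day temperature record of one lift (degrees C):
temps = [27.0, 26.2, 25.5, 25.9, 25.1, 24.6]
print(prepare_delta_t(temps))        # [0.8, 0.7, 0.8, 0.5]; the negative day is dropped
print(zscore(prepare_delta_t(temps)))
```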
51 numbers of data were chosen for training, 24 numbers of data were chosen for validation and 70 numbers of data were chosen for testing of the proposed model (chosen randomly). Following function is used to obtain the hidden relationship between ∆t and the influencing variables: GP-Based Formulation for ∆t The GP approach was employed in this study to predict ∆t of mass concrete at an initial cooling period during the construction phase of concrete dam. Final model is established by comparing the output from the developed model with the experimental data. The best two models evaluated from GP are shown in Table 3. The value of R close to 1 and low value of MSE indicates the data were more fitted. The value of R given in Table 3 is for validation dataset. According to the performance evaluation criteria, model no 1 is the comprising model among others. Due to the lack of previously developed rational models to predict ∆t at an initial cooling period during the construction phase of concrete dam, it is not possible to conduct a comparative study of the results obtained from this study to those of previous studies. According to Smith [30], when the model gives |R|>0.8, a strong correlation exists between the predicted and measured values. As can be seen from Table 3, the entire model has |R|>0.8, which reveals that the proposed model has a good predictive ability. Performance of the Model In order to determine the prediction capability of the proposed model, comparisons were made between the predicted values of ∆t from GP model with real ∆t that were not included during an analysis by plotting the graph as shown in Figure 3. It is obvious from Figure 3, during testing phase the predicted and real ∆t were strongly correlated with a linear relationship with R 2 /R of 0.7202/0.8486 from the proposed model. Beside validation, performance of the derived model is verified by comparing the prediction output from the derived model with the real field data those were not included during the analysis as shown in Figure 4. Comparison (shown in Figure 4) was made between the predicted ∆t and real ∆t (data available from the same research project having different value of qw and T w per day). The calculated results from proposed model shows a pretty good agreement with the real field data for summer month from April to August for different year which indicates that the proposed model is obvious. Sensitivity Analysis Sensitivity analysis was performed to evaluate the relative impact on the target variables due to input variables. For a given model in the form z=f(x, y,…..), sensitivity is expressed as follows: Where: = partial derivate operator, σ(x) = standard deviation of x in the input data, σ(z) = standard deviation of z [27]. The term percent positive is defined as the percent of data in which the partial derivative of the target value with respect to the i th input is greater than zero. This number shows the possibility that increasing the specified input parameter would increase the target value in the model and the same concept applies for a negative value of the aforementioned derivative term known as percent negative. Further, positive magnitude is the number that denotes generally how big the positive impact is, when increases in this variable lead to increases in the target variable and same concept applies for a negative magnitude [27]. A summary of sensitivity, percent positive, positive magnitude, percent negative and negative magnitude values for the GP-based model is shown in Table 4. 
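The sensitivity expression itself is not reproduced above, so the sketch below implements one common reading that is consistent with the symbols defined in the text: the mean absolute partial derivative of the target z with respect to an input, scaled by σ(x)/σ(z), together with the "percent positive" share of data points where the partial derivative is positive. The finite-difference estimate, the function name, and this particular scaling are assumptions, not the paper's exact formula.

```python
import numpy as np

def sensitivity_report(f, X, z, eps=1e-4):
    """Finite-difference sensitivity of a fitted model f to each input column of X.

    f: callable mapping an (n_samples, n_features) array to predictions
    X: input data, z: target values. Returns, per input, a scaled sensitivity
    mean|dz/dx| * sigma(x)/sigma(z) and the 'percent positive' named in the text.
    """
    X = np.asarray(X, dtype=float)
    z = np.asarray(z, dtype=float)
    report = {}
    for j in range(X.shape[1]):
        Xp, Xm = X.copy(), X.copy()
        Xp[:, j] += eps
        Xm[:, j] -= eps
        dz_dx = (f(Xp) - f(Xm)) / (2 * eps)   # central-difference partial derivative
        report[j] = {
            "sensitivity": np.mean(np.abs(dz_dx)) * X[:, j].std() / z.std(),
            "percent_positive": float(np.mean(dz_dx > 0)),
        }
    return report
```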
Table 4 indicates that the input parameters p_1 and T_w possess a high positive and a high negative impact on the target parameter, respectively, whereas parameter A_c has a percent negative of 59% ≈ 60% and is therefore expected to have a negative impact on the target parameter. The proposed GP model developed in this study was formulated from data available from a single dam constructed in the summer season from April to August. Therefore, the model derived from this study is better suited to preliminary design stages than to final decision making. Although the derived model holds good agreement within the range of data given in Table 2, it should be used carefully for prediction outside the range of the parameters considered in this study. The proposed model could be improved to give precise predictions over a wider range, considering different pipe spacings and initial cooling durations, if data could be made available from other similar projects. This research is expected to be helpful where high concrete dams are anticipated to be built in the near future. Conclusion The GP approach was employed in this study for predicting ∆t of mass concrete during the initial cooling period in the construction phase of a high concrete dam. The model helps resolve the complication of adjusting the water cooling parameters (q_w and T_w) during the construction phase of a concrete dam in order to bring down the maximum temperature of the concrete. The developed model has an R value of 0.8855 (greater than the suggested good-fit threshold |R| > 0.8) and a significantly low MSE (0.002961), which indicates that the proposed model has good predictive ability. Besides validation, testing the model's applicability with an independent dataset (not included during the analysis) shows that the proposed model is capable of generalizing the input and output variables with reasonably good predictions. The sensitivity analysis results clarify that input parameter p_1 has a positive impact and T_w is sensitive in terms of a negative impact on the target variable, whereas parameter A_c has a percent negative of 59% ≈ 60% and is therefore expected to have a negative impact on the target variable. Using the derived model, the required ∆t during the initial cooling period in the construction phase of a concrete dam can be easily calculated from the variables considered in this study, which saves time compared with sophisticated laboratory experiments. Conflicts of Interest The authors declare no conflict of interest regarding the publication of this paper.
4,334.4
2018-02-01T00:00:00.000
[ "Engineering", "Materials Science" ]
Patterns of intents, decisions, and tendencies based on Behavioral Styles Human behavior is the expression of held values and beliefs. While it is correct to believe that behavior can adjust over time due to the shift in personality that’s being expressed, after a certain age, the personality development becomes considerably more adaptive, which leads to a reasonably consistent behavioral style that makes for a good tangible measure of someone’s character. The study aimed to find patterns between someone’s intents, inclinations, and tendencies to act in the future and their behavioral style based on the DiSC model of behavior. It encompassed 187 undergraduate and 72 postgraduate students of Pharmaceutical Sciences from both public and private universities in Jordan from over eight nationalities. The researcher used a survey made up of two sections; the first section focuses on the student's intents, beliefs, and general information. The second section’s questions are used to determine which of the four behavioral styles the student belongs to. The study showed that, in terms of numbers, the “S” style of behavior makes up (66.8%) of total respondents, followed by “C” style (17%), “I” style (12%), and “D” style (4.2%). The analysis showed both the undergraduate and postgraduate student’s perspectives and revealed several links between each style and academic potential, self-belief, thoughts about higher education, and future career plans. Introduction It is difficult to narrow what causes someone to act a certain way or do a specific action down to a solitary motive, but a substantial contributor to the decision-making process is that person's personality traits (Hurtz & Donovan, 2000) As, for example, they significantly influence informationseeking behavior (Halder et al., 2017). They can be defined as the collection of intrinsic and extrinsic factors that may affect the behavior of an individual (Abdullah et al., 2016). Which, in turn, are plentiful in numbers and are of varying complexity. The vast amount of variables makes it easy to link the causes misguidedly. However, the development of personality and achievement-based variables is not alike (Haan et al., 1986) but since behavior is the expression of personality (Funder, 2013), and there appears to be substantial stability of individual differences in personality, even over several decades (Roberts and DelVecchio, 2000;Hampson and Goldberg, 2006;Caspi and Roberts, 2001) if a correlation is found between behavior and a set of actions, past and current experiences, intents and tendencies, then there could be a link between the traits that make up someone's personality and shape his behavioral style, in a large sample size, and the manifestation of patterned tendencies. And bearing in mind that behavior is a mechanism within psychosocial processes that reveals the importance and nature of personality constructs and measures (Furr, 2009), a better understanding of those links could be used to enhance how academic programs are currently structured. Methodology A survey was sent out to both undergraduate and postgraduate students majoring in pharmaceutical sciences at several universities in Jordan. 
This specific field was chosen not only due to the program being offered by most Jordanian [2] universities, both public and private, but also due to it having a diversified student base with students from different nationalities and academic backgrounds, while also including students who should be cognitively at a level to where the desired survey can be sent out without having to lessen the level of questions to be assured of total comprehension. The survey was separated into two sections containing 47 questions in total; the first one focused first on amassing general information about the student with typical questions about age, sex, nationality, GPA, monthly income, etc. However, the other questions from the first section used two methods; contextually retrospective behavioral selfreports and hypothetical behavioral self-reports to collect data about the student's intents, previous decisions, and current beliefs. It involved questions that were asked to discover their future career plans, belief in the value of a postgraduate degree, whether or not they intend or intended to pursue higher education, their willingness to take continuous negative feedback even if they knew it was for their benefit, measure their level of ambition, and other questions that will be discussed later on. The second section of the survey was made up of multiplechoice questions, which were comprised of 12 questions with both a most likely and a least likely option to determine the student's behavioral style based on the DiSC model of behavior. Depending on their answers, each respondent gets grouped into either a D, I, S, or a C behavioral style. The aim of this study, as previously mentioned, was to find a connection or a correlation between each style and the tendencies and intents of the student based on the data collected from the first section. Results The overall number of respondents to the survey was 259. Of those, 61% were female, and 39% were male. About 27.8% of them were postgraduate students, while the others were undergraduate students. However, only 21.23% were 25 years of age or older. In terms of nationality, the majority of the students were, as to be expected, Jordanian, followed by large numbers of Iraqi and Palestinian students. Minority groups of Syrian, Kuwaiti, Saudi, Lebanese, and Emirati students rounded up the total. When it comes to the financial status of the students, The majority of them reported a monthly income that would categorize them in either the middle or upper-middle class. As for the behavioral styles, the Dominant (D) style was found in only 11 students (4.2%), 7 of them were undergraduates, and 4 were postgraduate students. The Influence (I) style was found in 31 students (12%), which contained only 3 postgraduate students while the Compliance (C) style was found in 44 students (17%) divided into 18 undergraduate and 26 postgraduate students. Finally, the Steadiness (S) style was found in 173 students (66.8%) that included 39 postgraduate students. The remaining results are detailed in the tables shown at the the end of the paper. Discussion When evaluating the previously shown tables one by one, you can start to see the deviation in tendencies for each style of behavior. Keeping in mind that behavior and Personality measures are promising predictors of academic outcomes (Conrad, 2006). Table 1 showed that C style students performed the best academically in terms of GPA, followed by D, S, and I, respectively. 
Table 2 showed that students with the S style of behavior were the most unsure/undecided when it comes to having career plans before graduating, while also having the least intention of all four styles to pursue higher education. However, they were the keenest on working in a hospital or a retail pharmacy. D style students were the most intent on working for a corporation. They were also the least unsure/undecided, along with C style students who, for their part, were the most willing to pursue higher education. Lastly, I style students were the least willing to work for a corporation. In Table 3, a pattern became apparent when looking at the responses to the question "Would you be comfortable getting constant negative feedback from a professor or a supervisor?", as the postgraduate students from all four styles of behavior reported higher willingness than their undergraduate counterparts, which indicates that strength of character plays a noteworthy role in the students' career choices. Looking at each style individually, beginning with D style students: they were the most positive about finding a job after graduating and the most willing to take constant negative feedback from a supervisor or a professor, along with being the least satisfied with working in the same position with only increased pay over time. They also conveyed a high regard for higher education based on their answers to the first four questions of Table 3. No noteworthy differences were found between the responses of the two segments of students. Nevertheless, students with this style are limited by their impatience. [3] Secondly, I style students reported the second-highest percentage of certainty about finding a job post-graduation, in spite of having the most academic regrets among all behavioral styles. Furthermore, they sought extracurricular sources of information the least. It is noticeable that postgraduate students with this behavioral style were less likely to advise someone to get a postgraduate degree than undergraduate students of the same style. Also remarkable is the drop-off in self-belief between undergraduate and postgraduate students in this style; this, along with the decrease in advisement, might indicate an overestimation, on the students' part, of their capabilities, or a lack of understanding and awareness of what their field entails. Students with this style of behavior are limited by being impulsive, disorganized, and lacking follow-through. As for S style students, it is clear from their answers to the first four questions of Table 3 that they hold higher education in the lowest regard relative to the three other behavioral styles. They had the lowest overall percentage of self-belief and the highest in terms of being satisfied with working in the same position with increased pay over time. They ranked in the middle of the pack for all other questions. They are predictable and consistent but also indecisive, and they fear the loss of stability. These tendencies, along with them being mostly average academically, their job preferences, and the fact that they make up 66.8% of all students, lead us to believe that they are the most average or typical of all four behavioral styles; yet they are the most content and satisfied style, as subjective well-being is known to be related to personality traits (Weiss et al., 2008). Students with this style are limited by their fear of change.
Finally, C style students had the highest regard for higher education while also reporting the highest percentages for academic performance and extracurricular scientific interests. They did, however, rank the lowest of all four styles when it came to the willingness to take negative feedback. Most of the time, student's information-seeking behavior is a result of the need to complete course assignments (Fister, 1992), but students with this style appear to seek it more frequently than others. All of that put together makes them the most academic centric style and the most likely to enter that field, as in terms of numbers, more postgraduate students reported having this style than undergraduate students. Students with this style are, however, limited by their fear of criticism. It's important to mention that the sex of the students had no noticeable effect on their style of behavior, which is to be expected (Costa et al., 2001). Neither did their economic status based on their monthly income. Conclusion Several patterns of behavior pertaining to each style were found throughout the study, which, if proven further after more research on this subject, could be used to widen our understanding of the strengths and weaknesses of each person's character based on their behavioral style and what areas should be focused on to be improved. Tables Table 1:-
2,541
2019-09-30T00:00:00.000
[ "Materials Science" ]
Ferroelectret-based Hydrophone Employed in Oil Identification—A Machine Learning Approach This work focuses on acoustic analysis as a way of discriminating mineral oil, providing a robust technique, immune to electromagnetic noise, and in some cases, depending on the applied sensor, a low-cost technique. Thus, we propose a new method for the diagnosis of the quality of mineral oil used in electrical transformers, integrating a ferroelectric-based hydrophone and an acoustic transducer. Our classification solution is based on a supervised machine learning technique applied to the signals generated by an in-home built hydrophone. A total of three statistical datasets entries were collected during the acoustic experiments on four types of oils. The first, the second, and third datasets contain 180, 240, and 420 entries, respectively. Eighty-four features were considered from each dataset to apply to two classification approaches. The first classification approach is able to distinguish the oils from the four possible classes with a classification error less than 2%, while the second approach is able to successfully classify the oils without errors (e.g., with a score of 100%). Introduction Transformers are fundamental equipment in electrical power systems; their main function is the adjustment of voltage levels, serving as a link between generation, transmission, and distribution of electrical energy. Due to their importance, it is common sense that these equipment are directly related to the continuity and quality of the electricity supply system. Therefore, periodical or online monitoring of their operating condition has become increasingly important [1,2]. A key component of power transformers that requires constant supervision is the mineral oil placed inside them. This oil, responsible for insulating the internal coils and provide a cooling agent, undergo deterioration due to electrical and thermal efforts, which in turn generates decomposition products that cause the occurrence of equipment failures. Over time, some of its physical and chemical properties, such as color, viscosity, and water content, vary due to environmental conditions and exposure to electric fields [3]. A significant number of studies have presented effective solutions for monitoring transformers, aiming predictive maintenance [1,4,5], and many of them focus on oil analysis [6][7][8][9]. Among the diagnostic techniques available to assess the isolation condition of the transformers, it is possible to mention the physical-chemical analysis, the analysis of dissolved gases, the optical analysis, the measurement of the degree of polymerization (DP), the measurement and analysis of furans by High Performance Liquid Chromatography (HPLC) and acoustic analysis. Two of these techniques are widely used in preventive programs, and they include the analysis of dissolved gas (DGA) in oil and the physical-chemical analysis. Some studies address acoustic and ultrasonic analysis as a predictive technique in the transformers monitoring for the detection of moisture in transformers oil [10][11][12]. Normally, water is hardly dissolved in oil due to the polarity of the molecules and the hydrophobic characteristic of the oil. However, new or regenerated oil contains minimal amounts of water, which are measured in ppm (parts per million). In addition, over time, during operation, the humidity level may increase due to the degradation of polymeric materials and the absorption of external moisture [13]. 
Therefore, the detection of moisture in insulating oil is important and is necessary due to its harmful characteristics to oil and other insulating components. Among the damages caused by water, it is possible to mention the decrease of the dielectric resistance of the oil, acceleration of the cellulose aging (used as insulating coating internally in the transformers), and formation of bubbles when the equipment is exposed to high temperatures [14]. Such features have been explored to acquire the quality of the oil inside the transformers. Therefore, in order to detect types of oils by using only acoustic analysis, we developed two machine learning classification approaches. These techniques are in widespread use in research and industrial communities. Thus, even though this framework was developed to classify types of oils by using supervised learning, the same approaches and techniques can be used in similar problems or in a totally new classification problem. However, our solution is mainly based on an in-home built thermoformed ferroelectret as an acoustic transducer, providing a very low-cost solution. We organize the paper as follows. Section 2 discusses some relevant literature related to acoustic and ultrasonic analysis, and ways of discriminating liquids. Section 3 details and contextualizes the problem addressed by this work. Section 4 presents our experimental setup and details the main materials used. In Section 5, we discuss the proposed framework utilized, detailing the two classification approaches. The results and discussions are presented in Section 6. Finally, our final remarks and futures works are presented in Section 7. Related Works As previously mentioned, some research involving acoustic and ultrasonic analysis in the detection of oil in transformers has been done in the last few decades [10][11][12]15,16]. In the article by Tokitou and Shida [10], a detection system to discriminate water in oil is presented. The detection method adopts the difference in the propagation time of an ultrasonic wave in water and oil, for each characteristic temperature. When water and oil are heated, these propagation times change in reverse. The authors claim that, for this reason, the proposed system does not need to provide any previous absolute values database as a reference. Additionally, the authors suggest the possibility of detecting the water present in the oil of a transformer using the proposed method. Chang-ping and collaborators [11] developed a method for detecting moisture in transformer oil based on the difference in ultrasonic transit time. The proposed methodology assumes that the ultrasonic speed of oil and water are similar, but there is a relatively large difference with ice. In this way, the oil-water samples are frozen for measurements. In the experiments, different samples (300 ml) of mixtures, prepared by the authors, of oil and water are used. The method consists of using two identical measuring cells, where two transducers are attached, one operating as a transmitter and the other as a receiver, separated by a fixed distance. A standard oil sample is measured in a cell, without adding water. In the other cell, different oil-water mixtures are tested. A sinusoidal signal pulse is emitted, which passes through the medium and is then detected by the receiver. The two signals are compared and the analysis is made based on the difference in the ultrasonic propagation time of the signals. 
The results show that the higher the water content in the oil, the lower the propagation speed in the medium, and the greater the difference in propagation time between a standard oil and an oil-water mixture. In the article, the authors do not reveal the transducer model or the frequency used in the experiments. Tyuryumina, Batrak and Sekackiy [12] use the acoustic emission method (AE) as an online diagnostic technique to identify failures in power transformers. The proposed method is used to measure acoustic signals caused by impurities (water, cellulose, gas) in the transformer oil. The methodology consists of a signal generator, two piezoelectric transducers and a computer where the signals are processed via software. Initially, a comparison test is carried out between two oil samples, a new and an aged one, present in transformer tanks, to verify the sensitivity of the proposed method. To determine the influence of water, cellulose and gas on transformer oil, the authors add different concentrations of these to different samples of new oil and then measure the signal. The authors analyze the frequency spectrum of the signal (1 to 10 kHz) to obtain information about the condition of the transformer. According to the results obtained, water and cellulose influenced the quality of the transformer oil. However, the AE method is not sensitive to the determination of the gas phase in the transformer oil in the selected frequency range. The article by Palitó et al. [15] suggests an ultrasonic system developed to investigate the presence of moisture in transformers oil. This system consists of a function generator, two ultrasonic sensors, acting as an emitter and the other as a receiver, an acoustic chamber and an oscilloscope. In the experiments, samples of 600 ml of water and different oil samples from transformers with different levels of degradation are used. The experiments are carried out at frequencies of 2.25, 3.5, 5, and 10 MHz. Measurements of the amplitude of the sinusoidal burst are made for each of the liquids tested and for each frequency. In this study, it was observed that the higher the water content in the samples, the greater the amplitude of the signal. The authors conclude that of the four frequencies studied, the one that best suits the study is the frequency of 5 MHz because it was this that presented the greatest discrepancy between the amplitudes of the samples of transformer oils. In addition to signal amplitude measurements, measurements of ultrasonic propagation speed were also performed. The authors declare, based on the results obtained, that as the oil has higher water content, the ultrasonic propagation speed is lower, as reported in the literature. Recently, the authors of Reference [16] provide solutions for power transformer problems using machine learning. Kunicki and Wotzka [16] use energy patterns based on the discrete wavelet transform to detect partial discharges (PD), and in a second step, eight classes of various faults or anomalies. The proposed two-step classification method is tested with real-life measurements, providing results that exceeded 98% of classification accuracy. Although its high accuracy, it depends on the PD occurrence. Problem Definition and Paper Contributions As already mentioned, the condition of the oil considerably affects the performance and service life of the transformers. 
A combination of electrical, physical and chemical tests can be performed to measure the change in electrical properties, the extent of contamination and the degree of deterioration in the insulating oil. The results of these tests are used to establish preventive maintenance procedures, to avoid unscheduled stops, early failures, and to prolong the life of the equipment [17]. Traditionally, the physical-chemical analysis and the dissolved gas analysis (DGA) in oil are the two techniques widely used by the maintenance sectors of electricity systems as predictive solutions for monitoring the conditions of the power transformers immersed in insulating mineral oil (IMO) [7,8,[18][19][20][21]. The physical-chemical analysis allows inferring the state of the IMO by laboratory analysis of IMO samples from a transformer in service. The main physical-chemical characteristics, or tests, used as parameters for the classification of IMO, are color, appearance, dielectric strength, water content, acidity index, interfacial tension, dielectric losses, and density [8]. Table 1 presents the reference values for starting the control of IMO in a new equipment. This table contains test limits for ensuring that the IMO, in equipment after the processing and standing time before energization, is dry, contains no excess particulate matter, and contains a minimum amount of dissolved gas [22]. The moisture or water content in the transformer oil is highly undesirable, as it negatively affects the oil's dielectric properties, increasing the electrical conductivity and dissipation factor, and reducing its electrical resistance [23]. In addition, the water content of the insulation system accelerates the degradation of the insulation, decreases the cooling efficiency of the transformer, and causes the emission of bubbles at high temperatures. In transformer oil, water can be originated from the atmosphere or be produced by the deterioration of insulating materials (cellulose and oil) [4,14]. Usually, the water content in the oil, given in milligrams per kilogram (mg/kg) or part per million (ppm), is measured by the Karl Fischer coulometric titration method, in which reagents are used [22,24]. In new or regenerated oil, the amounts of water must be minimal. As a rule, the acceptable unit content for a new oil in new equipment is 20 ppm, whereas for oil in equipment (transformers and reactors) in operation, the limit value for corrective action is 35 ppm (both values for the equipment category up to 69 kV). As the equipment category increases, the moisture content allowed in the oil decreases [22]. Dissolved Gas Analysis (DGA) is the most used technique to monitor the performance of power transformers [18,20,25,26], and other electrical equipment containing oil. Using the DGA, it is possible to assess the operating condition of the equipment's insulation since this technique is able to identify various types of gases, thus allowing the diagnosis of different types of failures. The formation of gases can occur due to the natural aging process and/or as a result of equipment failure, even if it is still in its incipient phase [8]. Generally, DGA is performed using gas chromatography, a traditional method, which provides acceptable results. The chromatographic analysis of the gases dissolved in the oil is done in three stages. First, the oil samples are collected from operating transformers and transported to the laboratory. 
Then, the extraction of the gases from the oil sample can be done by vacuum extraction, stripper extraction, and headspace sampling. After the extraction, the gases are analyzed using techniques for interpretation of DGA, for example, the key gas, Doernenburg ratio, Rogers ratio, IEC ratio, and Duval triangle methods [20,27]. Physical-chemical analysis and DGA are consolidated techniques and are widely used by concessionaires, however for these analyses to be carried out, it is necessary to collect the oil sample in the field so that it can be taken to the laboratory for further analysis. In some cases, for example, in substations located far from the laboratory, the time required to transport this sample for analysis can cause economic losses for utilities. Another technique that can be applied in diagnosing the quality of IMO is acoustic analysis. It is a method based on the propagation of acoustic waves, commonly known as the acoustic emission method (AE). This sensitive technique employs acoustic sensors to detect acoustic waves (continuous or pulsed) that are transferred to the environment [28,29]. The acoustic waves are then monitored according to acoustic parameters such as the speed of propagation, attenuation, acoustic impedance, and these parameters can be related to some physical properties of the environment, such as density, viscosity and elasticity. One of the advantages of this technique is that it can be used to indirectly evaluate variables of industrial and research processes in a non-destructive way, with the possibility of non-invasive and online applications [30]. The present paper provides a low-cost solution to detect the quality of IMO by using machine learning and acoustic analysis from signals collected by an in-home built acoustic transducer. Our key contributions are: • a non-invasive acoustic method technique for diagnosing the IMO quality without dependence on partial discharges events; • a preventive, local and fast-diagnosing technique complementary to a high-cost and offline Physical-chemical IMO quality analysis; • a low-cost solution for IMO quality evaluation based on an in-home built acoustic transducer; • an analysis of real-life transformer's IMO measurements, and the proposal of two classification approaches using machine learning to recognize IMO quality; • an improved experimental setup compared to Reference [31]. Experimental Setup The experimental setup of our proof-of-concept evaluation is presented in Figure 1. The experimental setup comprises a function generator (a), an ultrasonic emitter (b), an acoustic chamber (c), an acoustic transducer (d) and an oscilloscope (e). The function generator (a) is programmed to generate a sinusoidal signal in the SWEEP mode, feeding the ultrasonic emitter (b), which transmits the signal inside the acoustic chamber (c). Then, the acoustic transducer (d) receives the signal and feeds the digital oscilloscope. Finally, the signal is stored for further processing. Ultrasonic Emitter Piezoelectric materials are increasingly popular and, due to their reduced cost, can be used as sensors and actuators in several scientific studies with a wide range of applications [2,[31][32][33]. The ultrasonic emitter, shown in Figure 2, is a piezoelectric ceramic with a 50 mm in diameter, 2.6 mm thick, resonant frequency of 40 kHz, and power of 50 Watts. This ultrasonic ceramic is responsible for emitting the acoustic signal that will propagate in the liquid. 
To provide the electrical contact of this ceramic and connect it to the acoustic camera, an acrylic piece was developed, as also shown in Figure 2. The electrical connection was made through a female BNC connector, and the ultrasonic ceramic was fixed to the acrylic support using a high temperature sealing silicone. Acoustic Chamber An acoustic chamber was developed to accommodate the ultrasonic emitter and the acoustic transducer. The prototype of the acoustic chamber was made in 10 mm crystal acrylic and had internal dimensions 80 x 100 x 120 mm in width, height and length, respectively. This prototype is illustrated in Figure 3 with the ultrasonic emitter and the acoustic transducer already attached. Acoustic Transducer The main elements that make up the acoustic transducer consist of a metallic enclosure, an electronic pre-amplification circuit and a ferroelectret, which are presented in Figure 4. This proposal consists of an improvement of Prototype 1 presented in Palitó et al. [31] for liquid application purposes. The metallic enclosure is responsible for the electrical shielding of the device and for the packaging of the amplifier and the ferroelectret sensor, the metallic electrodes and the pre-amplifier circuit board. The rear material, made of nylon, comprises the layer underlying the piezoelectric element and is responsible for dampening the vibration of the electromechanical film, which prevents reflections on the back of the active element and, consequently, avoids generating interference in the reception signal of the transducer. Inside the metallic enclosure, an electronic circuit composed of a preamplifier, a high-pass filter, and a differential amplifier is mounted. These three stages are presented in detail in Reference [31]. In order to increase the gain of the differential amplifier, we exchange the resistor R G = 5.6 kΩ of the circuit presented in Reference [31], by a 1 kΩ resistor. With this modification, the differential amplifier started to present a gain of 50.4 times (34.05 dB) and response in flat frequency up to 100 kHz. The complete amplification circuit ( Figure 5) of this acoustic transducer has a final gain of 38.13 dB. Ferroelectret The ferroelectret that makes up the acoustic transducer is the result of the thermoformed piezoelectric technology, and it was produced with fluorinated ethylene-propylene (FEP) films and with open tubular channels. This technology was chosen due to previous results that showed stable piezoelectricity of 160 pC/N, at 80 • C [34]. The operating principles of the ferroelectret and its manufacturing process are described in detail in [34]. It consisted of laminating two 50 µm FEP films at 300 • C with a 100 µm polytetrafluoroethylene (PTFE) mold between them. The mold was designed to create a ferroelectret with equally spaced open tubular channels (1.5 mm wide and 100 µm high). The films were thoroughly cleaned with acetone before lamination to avoid grease or dust particles. After lamination, the PTFE mold was removed from the fused FEP layers, forming a two-layer polymeric structure with ten open channels. The polymeric structure was transformed into ferroelectret after circular aluminum electrodes were deposited on both sides of the structure, and a DC voltage of 3 kV was applied directly over the electrodes for 10 seconds. Figure 6 shows the acoustic transducer with the piezoelectric sensor used in experiments. 
Measurement Methodology To drive the ultrasonic emitter a function generator (Agilent 33210A, 10 MHz) was used in SWEEP mode, with a sine wave of 10 V from peak to peak, covering a frequency range from 10 kHz to 100 kHz, linearly, over a 1 second time interval (in total 100.000 samples are generated). The function generator drives the acoustic transducer, causing it to transmit the low frequency ultrasound wave through the oils (800 ml) and producing an acoustic signal pattern due to multiple reverberations within the acoustic chamber. Subsequently, the acoustic transducer receives the signal and performs its function of converting the sound signal into an electrical signal and amplifying it. Finally, the signal is captured by the oscilloscope (Agilent Keysight DSOX2002A, 70 MHz, 2 GSa/s) where the signal is visualized and the data is saved with 100.000 samples. Figure 7 presents a photo of the experiment setup. These oil samples were collected from transformers under maintenance by the company Potencial, which provided the samples with the technical report of the physical-chemical analysis. The results of the physical-chemical analysis of the oil samples are presented in Tables 2 and 3. The oils with suffixes 1 and 2 correspond to Databases 1 and 2, respectively. Figure 8 shows the SWEEP signal for the four different types of oils. As one can note, the behavior is similar, so that there is no way to classify the types of oils by a simple visual inspection. However, there are a number of differences regarding the amplitude when a specific time is set. In Section 5, we discuss how to use these responses as a way of classification the oils analyzed. The experiments were repeated nine and ten times for each sample of transformer oils, then five and six measurements were made for different SWEEPs in each medium of Databases 1 and 2, respectively. Therefore, 45 and 60 measurements were taken for each transformer oil sample (new_1, processed_1, contaminated_1, out of service_1, new_2, processed _2, contaminated_2, and out of service_2), respectively. Proposed Classification Framework This section presents our proposed machine learning framework. The target problem focuses on classifying a SWEEP signal from the experiment to detect which class oil it comes from. The creation of the statistical dataset, as well as the classifiers used in this work, are detailed as follows. Statistical Dataset One of the greatest challenges in statistical learning is how to feed the learning machine with features and values that really can be used as part of the learning strategy [35]. On the other hand, we cannot use the SWEEP signal because it is a temporal series, instead, we calculated some general and moving statistical parameters that are important to our problem. For the former, we use the complete signal to get the measures, while the latter, we have to define a window, which is used to get the measures from that specific moving interval. For this paper, the statistical parameters calculated from the SWEEP signals include mean, moving mean, variance, moving variance, the difference between peaks and bottoms (Vpb), moving Vpb, correlation, and moving correlation. To calculate the moving statistical values, a window of 5.000 samples was defined. Thus, we created a dataset with these statistical values for all oil samples and established labels from which oil class the signals come from. This is an important step, once the classification problem is a type of supervised learning. 
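The statistical features named in the next section (mean, variance, peak-to-bottom difference, correlation, and their moving counterparts over a 5,000-sample window) can be illustrated with the sketch below. It is only an example of how such a feature vector could be assembled with NumPy/pandas: the text does not specify what the correlation is computed against (a second reference record is assumed here), Vpb is taken as maximum minus minimum, and the reduction of each moving series to a few summary scalars is an illustrative choice, not the exact layout of the 84 features.

```python
import numpy as np
import pandas as pd

WINDOW = 5_000  # moving-statistics window stated in the text

def sweep_features(signal, reference):
    """Statistical feature vector from one 100,000-sample SWEEP record (a sketch)."""
    s = pd.Series(np.asarray(signal, dtype=float))
    r = pd.Series(np.asarray(reference, dtype=float))

    feats = {
        "mean": s.mean(),
        "variance": s.var(),
        "vpb": s.max() - s.min(),          # difference between peaks and bottoms
        "correlation": s.corr(r),
    }
    moving = {
        "moving_mean": s.rolling(WINDOW).mean(),
        "moving_variance": s.rolling(WINDOW).var(),
        "moving_vpb": s.rolling(WINDOW).max() - s.rolling(WINDOW).min(),
        "moving_correlation": s.rolling(WINDOW).corr(r),
    }
    # Summarize each moving series with a couple of scalars (illustrative choice).
    for name, series in moving.items():
        series = series.dropna()
        feats[f"{name}_mean"] = series.mean()
        feats[f"{name}_std"] = series.std()
    return feats
```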
In total, we have three datasets: one from the experiments made in 2016 (Database 1), one from 2018 (Database 2), and another with both databases together. Each dataset has 84 features, with 180, 240 and 420 entries, respectively. Table 4 summarizes the main configuration of the pre-processing phase of the proposed classification framework. In addition, Figure 9 shows the correlation heatmaps of the features of the three datasets. Note that Figure 9a,b are very similar; in other words, the features behave in a very similar way, which means that in the classification phase the same features (statistical parameters) can be used by different machines on both datasets, due to the high correlation. On the other hand, it is possible to note that Database 2 is more complex than Database 1, due to the low correlation among features (near 0). Figure 9c shows the correlation for both datasets together and, as expected, presents the correlation behavior of both; in other words, it inherits characteristics from Datasets 1 and 2, including both low- and high-correlation complexity. Machine Learning Classifiers After creating the statistical dataset, we applied it to the machine learning framework. This is necessary to find the best classification technique, that is, the one with the highest score. In order to increase the classification score, we use the techniques of Feature Selection [36] and Model Tuning [37], with which the framework is able to select the best features (statistical parameters) for each classifier and to select the best configuration of that classifier from a previously defined set. In addition, we developed two classification approaches according to the types of oils we had: there are four types of oils (new, processed, contaminated and out of service), the first two being good oils and the last two not. Our first approach classifies the statistical values from one SWEEP signal into one of the four types, while the second uses a machine committee, breaking the classification into two steps. In that approach, the first machine is responsible for classifying the signal into good and bad oils, and the second machine then makes the final classification; feature selection and model tuning are performed again, and the output of the first machine is used as a new feature by the second machine. Figures 10 and 11 show, in the form of a block diagram, how the proposed framework works. Model Tuning [37] and Cross-Validation [38] are used in conjunction to find the best classifier (with the highest score); this step helps to increase the overall system score and to reduce the number of features. There is a wide range of machine learning classifiers; the ones employed in this work, chosen for simplicity and because their behavior is well known in the research community, are Random Forest, ExtraTree Classifier, Logistic Regression, Support Vector Machines, k-Nearest Neighbors and Stochastic Gradient Descent. In addition, we use 70% and 30% of the original dataset as the training and test set, respectively. A brief description of each classifier is presented as follows: • Random Forest: an estimator that fits a number of Decision Tree classifiers on several sub-samples of the dataset and uses averaging to improve the predictive accuracy and to control over-fitting [39]. • ExtraTree Classifier: a learning technique very similar to Random Forest, in which several decision trees are aggregated.
However, it differentiates by using multiple de-correlated decision trees collected in a "forest" to output its classification result [40]; • Logistic Regression: it is a statistical model that uses a logistic function to model a binary variable. In other words, a binary logistic function or variable has only two possible values, such as yes/no. It has low implementation complexity, is suitable for linearly separable data, and is less prone to over-fitting [41]; • Support Vector Machines (SVM): it is a discriminative classifier formally used to separate data between hyperplanes [42]. Results and Analysis This section presents and discusses our results regarding the classification of the oils, the results of Feature Selection and Model Tuning. As already presented in Figures 10 and 11, we developed two classification approaches, and three datasets (also presented in Section 5.1) were applied in each of these approaches. Those results are discussed separately in Sections 6.2 and 6.3, respectively. Firstly, the analyses of feature selection and model tuning are presented in Section 6.1. Feature Selection and Model Tuning Even though feature selection and model tuning are part of a pre-processing phase, it is also very important for the classification phase. Firstly, we perform feature selection to reduce the number of features, according to the chosen classifier. In other words, we have six different outputs of feature selection, which are applied for each classifier. In total, to decide which of the estimators have the best score to our problem, we need to perform 36 and 72 classifications in the pre-processing step, considering the classification approaches 1 and 2, respectively. Table 5 shows the average score achieved in the step of Feature Selection and Model Tuning. Note that KNN, Random Forest, and Extra Tree Classifiers have the best scores. Therefore, we have decided to use only KNN, Random Forest, and Extra Tree Classifiers as the machines at the final classification. Classification Approach 1 This section presents and discusses the results regarding the first classification approach. Table 6 shows the total number of machines/classifiers with scores (ratio between correct predicted values and true labels) greater or equal to 0.98 or 0.99 or 1. Note that by using this approach, a total of 7 classifiers with a score greater or equal to 0.98 is achieved. Also, only the Datasets 1 and 3 have classifiers with 100% of the score, which means those results can achieve a perfect classification. Figure 12 illustrates this behavior by showing the confusion matrices for each dataset. As one can note, the machines used in Datasets 1 and 3 were able to classify all types of oils correctly. However, the machine used in Dataset 2 has only one error (an Out of service class is selected as a Contaminated one). This is not a serious error regarding the original problem, once the two entries are the types of bad oils. In addition, Figure 13 shows the score for each type of oils. Note that with the exception of 1 misprediction of contaminated and out of service oils from database 2, the rest are able to be recognized by the machines, and classified correctly. Classification Approach 2 This section presents and discusses the results achieved by using the second classification approach. Similarly to Tables 6 and 7 shows the total number of machines/classifiers with scores greater or equal to 0.98, 0.99 and 1. 
It is easy to notice an increase in the number of machines with very high scores by only using the machine committee. Thus, by selecting only one machine committee with 100% accuracy for each dataset, we generate the results shown in Figures 14 and 15. The former shows the confusion matrices for all datasets analyzed, while the latter shows the recognition rate (score) per oil and dataset. Both figures show the behavior expected for these results, where it is achieved a perfect classification with no errors. Therefore, it is important to point out that a series of steps and decisions were taken in the pre-processing phase to avoid low scores and overfitting as feature selection, model tuning with cross-validation, as already mentioned. Table 8 shows the main classification parameters for one possible configuration of machines for perfect classification, considering the approaches and the datasets involved. Note that the ExtraTree and Random Forest classifiers are the most common techniques for this problem. Moreover, considering that we were able to classify all oils available correctly, thus it is possible to integrate both classification approaches with the acoustic signal analysis from our in-home built hydrophone to create a new predictive solution for monitoring transformers. Conclusions In the present study, a new acoustic technique applied to the diagnosis of mineral oil used in transformers was presented. The technique integrates an in-home built thermoformed ferroelectret as an acoustic transducer and a machine learning method to support variations in the oil classification. First, we demonstrated how the transducer was built followed by the ferroelectret sensor, further some experimental results were showed and finally we integrated the collect signals into machine learning classification frameworks. Two approaches of classification were developed; in both, we were able to classify all oils available correctly. From the employed methods, we were able to obtain scores of 100% and 99% using the Extra Tree classifier from classification approach 1, and 100% with Extra Tree, the Random Forest and the k-Nearest Neighbors classifiers by using classification approach 2. The proposed method is fast compared to the physical-chemical analysis that needs measurements at energy substation followed by laboratory evaluation often located kilometers away. Our method is based on the acquisition of the signal and the classification of the oil can be made on-site in a few seconds. As future studies, we plan to feed our framework with more experimental results from oils, water with different concentrations of pollutants as well as seawater aiming to build a fast and reliable system to detect oil leakages.
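To make the two-step machine committee of the second classification approach more concrete, the following is a minimal scikit-learn-style sketch: a first classifier separates good from bad oils, its output is appended as a new feature, and a second classifier produces the four-class decision. The use of ExtraTrees in both stages, the 70/30 split, and all names below are illustrative assumptions, not the exact configuration reported in Table 8.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

def committee_fit_predict(X, y_fourclass, good_labels=("new", "processed"), seed=0):
    """Two-step committee: stage 1 = good/bad oil, stage 2 = final four-class label."""
    X = np.asarray(X, dtype=float)
    y_fourclass = np.asarray(y_fourclass)
    y_binary = np.isin(y_fourclass, good_labels).astype(int)   # good vs bad oil

    X_tr, X_te, y4_tr, y4_te, yb_tr, yb_te = train_test_split(
        X, y_fourclass, y_binary, test_size=0.30, random_state=seed)

    # Stage 1: good/bad classifier.
    stage1 = ExtraTreesClassifier(n_estimators=200, random_state=seed).fit(X_tr, yb_tr)

    # Stage 2: four-class classifier, fed the stage-1 output as an extra feature
    # (in-sample stage-1 predictions are used here for brevity).
    X_tr2 = np.column_stack([X_tr, stage1.predict(X_tr)])
    X_te2 = np.column_stack([X_te, stage1.predict(X_te)])
    stage2 = ExtraTreesClassifier(n_estimators=200, random_state=seed).fit(X_tr2, y4_tr)

    y_pred = stage2.predict(X_te2)
    return y_pred, float((y_pred == y4_te).mean())   # predictions and test score
```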
7,394.8
2020-05-01T00:00:00.000
[ "Physics" ]